Posted to user@flume.apache.org by Kris Ogirri <ka...@gmail.com> on 2014/02/17 14:51:05 UTC

Fwd: Issue with HBase Sink in Flume ( 1.3.0)

Dear Mailing Group,

I am currently having issues with the HBase sink. I have developed an agent
with a fan-out channel setup (single source, multiple channels, multiple
sinks) sinking to an HDFS cluster and an HBase deployment.

The issue is that although the HDFS flow is working well, the HBase flow is
simply not working. Flume reports no errors for the HBase channel, but no
records are ever written to the HBase store; the HBase table stipulated in
the config always remains empty. Studying the Flume startup logs, I observe
that the session connection to ZooKeeper is always successfully established.

Are there any special configurations I am missing?

I am using the Async Event Serializer to persist the transactions.

Any assistance will be greatly appreciated.


Please see below for the flume configuration:

[biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
agent.sources=exec-source
agent.sinks=hdfs-sink hbase-sink
agent.channels=ch1 ch2

agent.sources.exec-source.type=exec
agent.sources.exec-source.command=tail -F /home/biadmin/bigdemo/data/rec_telco.cdr

agent.sinks.hdfs-sink.type=hdfs
agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
# File size to trigger roll, in bytes (0: never roll based on file size)
agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
agent.sinks.hdfs-sink.hdfs.rollCount = 0
# Number of events written to the file before it is flushed to HDFS
agent.sinks.hdfs-sink.hdfs.batchSize = 10000
agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000


agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
agent.sinks.hbase-sink.table=telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
agent.sinks.hbase-sink.channels = ch2
#agent.sinks.hbase-sink.hdfs.batchSize = 10000
#agent.sinks.hbase-sink.hdfs.txnEventMax = 40000


agent.channels.ch1.type=file
agent.channels.ch1.checkpointInterval=3000
agent.channels.ch1.transactionCapacity=10000
agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
agent.channels.ch1.write-timeout=30
agent.channels.ch1.keep-alive=30
#agent.channels.ch1.capacity=1000

agent.channels.ch2.type=file
agent.channels.ch2.checkpointInterval=300
agent.channels.ch2.transactionCapacity=10000
agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
agent.channels.ch2.write-timeout=30
agent.channels.ch2.keep-alive=30
#agent.channels.ch2.capacity=1000


agent.sources.exec-source.channels=ch1 ch2
agent.sinks.hdfs-sink.channel=ch1
agent.sinks.hbase-sink.channel=ch2
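
For comparison, a sketch of how an AsyncHBaseSink stanza is usually written (property names per the Flume 1.x user guide's AsyncHBaseSink section; the table and column family names here just mirror the ones above). Note the sink-level `batchSize` with no `hdfs.` prefix, and `channel` in the singular for sinks:

```
agent.sinks.hbase-sink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
agent.sinks.hbase-sink.table = telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
agent.sinks.hbase-sink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
# Number of events taken from the channel per transaction
agent.sinks.hbase-sink.batchSize = 100
agent.sinks.hbase-sink.channel = ch2
```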

Flume and encryption

Posted by Richard Ross <ri...@gmail.com>.
Hello:

I am just getting started with Flume, and my use case requires moving encrypted messages from a JMS queue into HDFS. I am wondering whether there is an "out of the box" configuration to handle this, or whether I will need to write a custom sink or source (the filesystem that the agent will be running on will be encrypted, so it doesn't matter when the decryption occurs). For the most part I understand everything except the decryption; I have read some documentation and didn't see anything addressing this particular use case.

Thanks for any advice,
Richard.
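
One place the decryption could live is a custom interceptor that rewrites each event body before it reaches the channel. The sketch below is only the crypto core of such an interceptor, using the JDK's `javax.crypto`; the class name `BodyDecryptor`, the AES/CBC choice, and the hard-coded demo key are all assumptions for illustration, not anything Flume ships:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Crypto core of a hypothetical decrypting interceptor: a real version
// would implement org.apache.flume.interceptor.Interceptor and call
// decrypt() on each event's body before passing the event on.
public class BodyDecryptor {
    private final SecretKeySpec key;

    public BodyDecryptor(byte[] rawKey) {
        this.key = new SecretKeySpec(rawKey, "AES"); // 16 bytes -> AES-128
    }

    // Expects the 16-byte IV prepended to the ciphertext.
    public byte[] decrypt(byte[] ivAndCiphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key,
               new IvParameterSpec(ivAndCiphertext, 0, 16));
        return c.doFinal(ivAndCiphertext, 16, ivAndCiphertext.length - 16);
    }

    // Encrypt helper used only to demonstrate the round trip.
    public byte[] encrypt(byte[] plaintext, byte[] ivBytes) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(ivBytes));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[16 + ct.length];
        System.arraycopy(ivBytes, 0, out, 0, 16);
        System.arraycopy(ct, 0, out, 16, ct.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        BodyDecryptor d =
            new BodyDecryptor("0123456789abcdef".getBytes(StandardCharsets.UTF_8));
        byte[] body = "encrypted JMS message".getBytes(StandardCharsets.UTF_8);
        byte[] wire = d.encrypt(body, "ivivivivivivIVIV".getBytes(StandardCharsets.UTF_8));
        System.out.println(Arrays.equals(body, d.decrypt(wire))); // round trip
    }
}
```

In a real deployment the key would come from a keystore rather than a literal, and an authenticated mode (AES/GCM) would be preferable to plain CBC.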

Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hello Guys,

Just to report back to the team: the upgrade to Flume version 1.4.0
resolved this problem, and items are now being written from the file
channel to the pCol CF in my specified HBase table.

I hope there is a way to mark this mail trail as 'FIXED' so that people
who hit this problem in the future can refer back to it.

Thanks for all the support.




On 18 February 2014 20:53, Hari Shreedharan <hs...@cloudera.com> wrote:

> The performance issues in that version were mainly due to the file
> channel, which could essentially end up not working in some situations. I
> am not sure whether that is the case here, but there is nothing very
> obvious in your logs that suggests any other issue. Since this is a
> virtual appliance, you might want to contact the vendor's support
> channels, who would have more info about what exactly is packaged in your
> specific version.
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 11:47 AM, Kris Ogirri wrote:
>
> Hello Hari,
>
> No, I have no reason NOT to use a newer version, but since I am working
> with a pre-packaged virtual appliance, I would ideally not want to perform
> any updates on the Hadoop components (as I would then need to update the
> entire virtual appliance to reflect the changes) without being sure that
> the update would solve this problem.
>
> What are your thoughts? Were the performance problems you mention related
> to HBase sinking? I am thinking this could be an issue between my
> ZooKeeper deployment and my HBase setup, but I am open to suggestions.
>
> Thanks again for all the help.
>
>
>
> On 18 February 2014 20:41, Hari Shreedharan <hs...@cloudera.com> wrote:
>
> Looks like you are using Flume 1.3.0. Is there a reason for not using a
> newer version? Flume 1.4.0 is now almost 6 months old. 1.3.0 did have a
> known performance issue, which was the reason 1.3.1 was released almost
> immediately after.
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 11:23 AM, Kris Ogirri wrote:
>
> Hello Hari,
>
> I didn't know it was a holiday in the US.
>
> Please see version information below:
>
> Hbase:
> HBase Shell; enter 'help<RETURN>' for list of supported commands.
> Type "exit<RETURN>" to leave the HBase Shell
> Version 0.94.3, rab548827f0c52211c1d67437484fcba635072767, Wed Jul 31
> 18:13:25 PDT 2013
>
>
> Flume:
> [biadmin@bivm bin]$ ./flume-ng version
> Flume 1.3.0
> Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
> Revision: abbccbd2ff14dd6fed2a8a3891eb51aff985e9f5
> Compiled by jenkins on Wed Jun 12 19:16:33 PDT 2013
> From source with checksum dce204011600e67e1455971266d3da07
>
>
> Thanks for all the assistance.
>
> BR,
>
>
>
> On 18 February 2014 20:14, Hari Shreedharan <hs...@cloudera.com> wrote:
>
>  Hi Kris,
>
> Please realize that people usually work on their own time on these
> mailing lists, and since your first message was sent early on a Monday
> morning during a long weekend in the US, others may not have seen your
> message either.
>
> Are you running Apache Flume and Apache HBase? If yes, what versions
> (output of flume-ng version and hbase version)?
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:
>
> Hi,
>
> Can't anybody help with this? I think it's a small issue, because
> everything seems to work fine but the data from the channel never gets
> persisted into HBase.
>
> I have added the description of the Hbase tables:
>
> hbase(main):005:0> describe 'telco_cdr_rec'
> DESCRIPTION                                                      ENABLED
>  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'colfam',       true
>  REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS => 'false',
>  COMPRESSION => 'NONE', ENCODE_ON_DISK => 'true',
>  BLOCKCACHE => 'true', MIN_VERSIONS => '0',
>  DATA_BLOCK_ENCODING => 'NONE', IN_MEMORY => 'false',
>  BLOOMFILTER => 'NONE', TTL => '2147483647', VERSIONS => '3',
>  BLOCKSIZE => '65536'}]}
> 1 row(s) in 0.1600 seconds
>
>
> If no one can help with the problem, can anyone provide a link to the
> Flume -> ZooKeeper -> HBase internals documentation so I can trace where
> the error lies.
>
> Are there ZooKeeper log files where I can analyse whether Flume actually
> sends the transactions to HBase via ZooKeeper?
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find below the requested logs. The initialisation part of the logs
> was unfortunately not included; I can run these again if necessary, but
> the ZooKeeper connection is included in the logs.
>
>
> 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider:
> created channel ch2
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hdfs-sink, type: hdfs
> 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new
> configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{} } }, hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch1
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch2
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from
> [/home/biadmin/.flume/file-channel/data/log-7,
> /home/biadmin/.flume/file-channel/data/log-6]
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from
> [/home/biadmin/.flume/file-channel2/data/log-6,
> /home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5]
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel/data/log-6,
> /home/biadmin/.flume/file-channel/data/log-7]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5,
> /home/biadmin/.flume/file-channel2/data/log-6]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get
> maxDirectMemory from VM: NoSuchMethodException:
> sun.misc.VM.maxDirectMemory(null)
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
> Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520,
> Remaining = 20971520
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 32040
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 2496
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 22843
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 0, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 1, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0,
> queueHead: 10516
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0,
> queueHead: 223682
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 0, logWriteOrderID = 1392650774387
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 0, logWriteOrderID = 1392650774388
> 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID:
> 1392650774387
> 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch2]
> 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID:
> 1392650774388
> 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch1]
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch2, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 started
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch1, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 started
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hbase-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hdfs-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source exec-source
> 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: SINK, name: hdfs-sink, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink started
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name
> =bivm
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=IBM Corporation
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/opt/ibm/biginsights/jdk/jre
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1
.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../li
b/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsigh
ts/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-ne
t-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-
1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexe
c/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=j9jit24
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name
> =Linux
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.18-194.17.4.el5
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/ibm/biginsights/flume/bin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of
> this process is 20984@bivm
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server bivm/192.168.37.128:2181. Will not attempt to authenticate using
> SASL (Unable to locate a login configuration)
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established
> to bivm/192.168.37.128:2181, initiating session
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment
> complete on server bivm/192.168.37.128:2181, sessionid =
> 0x144401355b4001d, negotiated timeout = 60000
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 60
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60,
> queueHead: 10514
> 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 32036, logWriteOrderID = 1392650774536
> 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 32036
> logWriteOrderID: 1392650774536
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 460
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520,
> queueHead: 10514
> 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 277565, logWriteOrderID = 1392650775504
> 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 277565
> logWriteOrderID: 1392650775504
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 540
> 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 423
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137,
> queueHead: 10917
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539,
> queueHead: 223681
> 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 304892, logWriteOrderID = 1392650775933
> 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 304892
> logWriteOrderID: 1392650775933
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 137
> 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 288266, logWriteOrderID = 1392650775934
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0,
> queueHead: 11054
> 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 288266
> logWriteOrderID: 1392650775934
> 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:30:04 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 29
> 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 310581, logWriteOrderID = 1392650776074
> 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550,
> queueHead: 223690
> 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 310581
> logWriteOrderID: 1392650776074
> 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20,
> queueHead: 11052
> 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 299362, logWriteOrderID = 1392650776105
> 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 321308, logWriteOrderID = 1392650776127
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 299362
> logWriteOrderID: 1392650776105
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 321308
> logWriteOrderID: 1392650776127
> 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 21
> 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 38
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569,
> queueHead: 223691
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20,
> queueHead: 11070
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 310040, logWriteOrderID = 1392650776192
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 332801, logWriteOrderID = 1392650776193
> 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 310040
> logWriteOrderID: 1392650776192
> 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 332801
> logWriteOrderID: 1392650776193
> 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0,
> queueHead: 11090
> 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589,
> queueHead: 223691
> 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 333657, logWriteOrderID = 1392650776236
> 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 320738, logWriteOrderID = 1392650776237
> 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 333657
> logWriteOrderID: 1392650776236
> 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 320738
> logWriteOrderID: 1392650776237
> 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 125
> 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464,
> queueHead: 223816
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20,
> queueHead: 11088
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 344355, logWriteOrderID = 1392650776385
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 325863, logWriteOrderID = 1392650776384
> 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 325863
> logWriteOrderID: 1392650776384
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 344355
> logWriteOrderID: 1392650776385
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0,
> queueHead: 11108
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463,
> queueHead: 223817
> 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 335946, logWriteOrderID = 1392650776428
> 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 345211, logWriteOrderID = 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 345211
> logWriteOrderID: 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 335946
> logWriteOrderID: 1392650776428
> 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 70
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473,
> queueHead: 223847
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40,
> queueHead: 11106
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 356818, logWriteOrderID = 1392650776540
> 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 356818
> logWriteOrderID: 1392650776540
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 366536, logWriteOrderID = 1392650776541
> 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 366536
> logWriteOrderID: 1392650776541
> 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 493
> 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0,
> queueHead: 11146
> 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0,
> queueHead: 224340
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 368733, logWriteOrderID = 1392650777082
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 379163, logWriteOrderID = 1392650777083
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 368733
> logWriteOrderID: 1392650777082
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 379163
> logWriteOrderID: 1392650777083
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 900
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900,
> queueHead: 224338
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920,
> queueHead: 11144
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 859009, logWriteOrderID = 1392650778996
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 859505, logWriteOrderID = 1392650778995
> 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 859009
> logWriteOrderID: 1392650778996
> 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 859505
> logWriteOrderID: 1392650778995
> 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0,
> queueHead: 12064
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 22
> 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918,
> queueHead: 224340
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 897089, logWriteOrderID = 1392650779929
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 870220, logWriteOrderID = 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 870220
> logWriteOrderID: 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 897089
> logWriteOrderID: 1392650779929
> 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300,
> queueHead: 12062
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1198
> 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0,
> queueHead: 225538
> 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1057180, logWriteOrderID = 1392650781760
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180
> logWriteOrderID: 1392650781760
> 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1068832, logWriteOrderID = 1392650781761
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1068832
> logWriteOrderID: 1392650781761
> 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 798
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500,
> queueHead: 12360
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 520
> 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519,
> queueHead: 225537
> 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1336479, logWriteOrderID = 1392650783137
> 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1346456, logWriteOrderID = 1392650783138
> 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479
> logWriteOrderID: 1392650783137
> 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1346456
> logWriteOrderID: 1392650783138
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400,
> queueHead: 12460
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 519
> 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0,
> queueHead: 226056
> 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1341143, logWriteOrderID = 1392650783761
> 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1367771, logWriteOrderID = 1392650783762
> 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143
> logWriteOrderID: 1392650783761
> 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1367771
> logWriteOrderID: 1392650783762
> 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300,
> queueHead: 12660
> 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100,
> queueHead: 226054
> 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1402287, logWriteOrderID = 1392650784174
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287
> logWriteOrderID: 1392650784174
> 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1421128, logWriteOrderID = 1392650784175
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1421128
> logWriteOrderID: 1392650784175
> 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 480
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 278
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98,
> queueHead: 13042
> 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0,
> queueHead: 226332
> 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1514767, logWriteOrderID = 1392650785222
> 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767
> logWriteOrderID: 1392650785222
> 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 118
> 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1528845, logWriteOrderID = 1392650785223
> 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0,
> queueHead: 13160
> 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1528845
> logWriteOrderID: 1392650785223
> 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1529781, logWriteOrderID = 1392650785364
> 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781
> logWriteOrderID: 1392650785364
> 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500,
> queueHead: 13158
> 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500,
> queueHead: 226330
> 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
> 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager
> stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting
> down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } },
> hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }}
> channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Source exec-source
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component:
> EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795949, logWriteOrderID = 1392650786416
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1796885, logWriteOrderID = 1392650786415
> 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795949
> logWriteOrderID: 1392650786416
> 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command:
> tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> java.io.IOException: Pipe closed
>         at java.io.PipedInputStream.read(PipedInputStream.java:302)
>         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
>         at java.io.PipedInputStream.read(PipedInputStream.java:372)
>         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
>         at java.io.InputStreamReader.read(InputStreamReader.java:188)
>         at java.io.BufferedReader.fill(BufferedReader.java:147)
>         at java.io.BufferedReader.readLine(BufferedReader.java:310)
>         at java.io.BufferedReader.readLine(BufferedReader.java:373)
>         at
> org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F
> /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hbase-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=2,
> runner.backoffs=59} } }
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has
> already been stopped EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch2]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
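
The ChannelException above only shows up while the agent is shutting down, but in case it also matters during normal operation, the message suggests raising the file channel's write timeout. A sketch against my own config (the property name `write-timeout` is my assumption from the Flume file channel docs and may differ by version; ch2 is the channel feeding the HBase sink):

```properties
# Hypothetical tuning only -- ch2 is the file channel in front of hbase-sink.
# write-timeout is in seconds; raising it gives the sink longer to obtain
# the shared log lock before the take() fails with ChannelException.
agent.channels.ch2.type = file
agent.channels.ch2.write-timeout = 30
agent.channels.ch2.keep-alive = 10
```
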
> 14/02/17 10:32:58 INFO
> client.HConnectionManager$HConnectionImplementation: Closed zookeeper
> sessionid=0x144401355b4001d
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885
> logWriteOrderID: 1392650786415
> 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to
> HDFSWriter (Filesystem closed). Closing file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp)
> and rethrowing exception.
> 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing
> file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499,
> queueHead: 226331
> 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d
> closed
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hdfs-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3,
> runner.backoffs=53} } }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch1]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795990, logWriteOrderID = 1392650786418
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch1
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795990
> logWriteOrderID: 1392650786418
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch2
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 stopped
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find below the requested logs. The initial part of the logs was
> unfortunately not included. I can run these again if necessary, but the
> Zookeeper connection is included in the logs.
>
>
>
> On 17 February 2014 16:05, Jeff Lord <jl...@cloudera.com> wrote:
>
> Logs ?
>
> On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <ka...@gmail.com> wrote:
> > Dear Mailing Group,
> >
> > I am currently having issues with the Hbase sink function. I have
> developed
> > an agent with a fanout channel setup ( single source, multiple channels,
> > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> >
> >  The issue is that although the HDFS is working well, the Hbase flow is
> > simply not working. There are no errors being reported by Flume for the
> > Hbase channel but there are never any records being written to the HBase
> > store. The Hbase table as stipulated in the config always remains empty.
> > Studying the Flume startup logs I observe that the session connection to
> > Zookeeper is always successfully established
> >
> > Are there any special configurations I am missing out?
> >
> > I am using the Async Event Serializer to persist the txns.
> >
> > Any assistance will be greatly appreciated.
> >
> >
> > Please see below for the flume configuration:
> >
> > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > agent.sources=exec-source
> > agent.sinks=hdfs-sink hbase-sink
> > agent.channels=ch1 ch2
> >
> > agent.sources.exec-source.type=exec
> > agent.sources.exec-source.command=tail -F
> > /home/biadmin/bigdemo/data/rec_telco.cdr
> >
> > agent.sinks.hdfs-sink.type=hdfs
> > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > # File size to trigger roll, in bytes (0: never roll based on file size)
> > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > # number of events written to file before it flushed to HDFS
> > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> >
> agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > agent.sinks.hbase-sink.table=telco_cdr_rec
> > agent.sinks.hbase-sink.columnFamily = colfam
> > agent.sinks.hbase-sink.channels = ch2
> > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.channels.ch1.type=file
> > agent.channels.ch1.checkpointInterval=3000
> > agent.channels.ch1.transactionCapacity=10000
> >
> agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > agent.channels.ch1.write-timeout=30
> > agent.channels.ch1.keep-alive=30
> > #agent.channels.ch1.capacity=1000
> >
> > agent.channels.ch2.type=file
> > agent.channels.ch2.checkpointInterval=300
> > agent.channels.ch2.transactionCapacity=10000
> >
> agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > agent.channels.ch2.write-timeout=30
> > agent.channels.ch2.keep-alive=30
> > #agent.channels.ch2.capacity=1000
> >
> >
> > agent.sources.exec-source.channels=ch1 ch2
> > agent.sinks.hdfs-sink.channel=ch1
> > agent.sinks.hbase-sink.channel=ch2
> >
>
>
>
>
>
>
>
>
>
>

Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Hari Shreedharan <hs...@cloudera.com>.
The performance issues in that version were mainly due to the file channel, which could essentially end up not working in some situations. I am not sure if that is the case here, but there is nothing obvious in your logs that suggests any other issue. Since this is a virtual appliance, you might want to contact the vendor's support channels, who would have more info about what exactly is packaged in your specific version.


Thanks,
Hari


On Tuesday, February 18, 2014 at 11:47 AM, Kris Ogirri wrote:

> Hello Hari,
>  
> No, I have no reason NOT to use a newer version, but since I am working with a pre-packaged virtual appliance, I would ideally not want to update any of the Hadoop components (as I would then need to update the entire virtual appliance to reflect the changes) without being sure that the update would solve this problem.
>  
> What are your thoughts? Were the performance problems you mention related to the HBase sinks? I am thinking this could be an issue between my ZooKeeper deployment and my HBase setup, but I am open to suggestions.
>  
> Thanks again for all the help.
>  
>  
>  
> On 18 February 2014 20:41, Hari Shreedharan <hshreedharan@cloudera.com> wrote:
> > Looks like you are using Flume 1.3.0. Is there a reason for not using a newer version? Flume 1.4.0 is now almost 6 months old. 1.3.0 did have a known performance issue, which was the reason 1.3.1 was released almost immediately after.
> >  
> >  
> > Thanks,
> > Hari
> >  
> >  
> > On Tuesday, February 18, 2014 at 11:23 AM, Kris Ogirri wrote:
> >  
> > > Hello Hari,
> > >  
> > > I didn't know it was a holiday in the US.  
> > >  
> > > Please see version information below:
> > >  
> > > Hbase:
> > > HBase Shell; enter 'help<RETURN>' for list of supported commands.
> > > Type "exit<RETURN>" to leave the HBase Shell
> > > Version 0.94.3, rab548827f0c52211c1d67437484fcba635072767, Wed Jul 31 18:13:25 PDT 2013
> > >  
> > >  
> > > Flume:  
> > > [biadmin@bivm bin]$ ./flume-ng version
> > > Flume 1.3.0
> > > Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
> > > Revision: abbccbd2ff14dd6fed2a8a3891eb51aff985e9f5
> > > Compiled by jenkins on Wed Jun 12 19:16:33 PDT 2013
> > > From source with checksum dce204011600e67e1455971266d3da07
> > >  
> > >  
> > > Thanks for all the assistance.
> > >  
> > > BR,
> > >  
> > >  
> > >  
> > > On 18 February 2014 20:14, Hari Shreedharan <hshreedharan@cloudera.com> wrote:
> > > > Hi Kris,  
> > > >  
> > > > Please realize that people usually work on their own time on these mailing lists, and since your first message was sent early on a Monday morning on a long weekend in the US, others may not have seen your message yet.
> > > >  
> > > > Are you running Apache Flume and Apache HBase? If yes, what versions (output of flume-ng version and hbase version)?  
> > > >  
> > > >  
> > > > Thanks,
> > > > Hari
> > > >  
> > > >  
> > > > On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:
> > > >  
> > > > > Hi,
> > > > >  
> > > > > Can't anybody help with this? I am thinking it's a small issue, because everything seems to work fine but the data from the channel never gets persisted into HBase.
> > > > >  
> > > > > I have added the description of the Hbase tables:
> > > > >  
> > > > > hbase(main):005:0> describe 'telco_cdr_rec'
> > > > > DESCRIPTION                                          ENABLED                     
> > > > >  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'co true                        
> > > > >  lfam', REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS                             
> > > > >   => 'false', COMPRESSION => 'NONE', ENCODE_ON_DISK                              
> > > > >  => 'true', BLOCKCACHE => 'true', MIN_VERSIONS => '0                             
> > > > >  ', DATA_BLOCK_ENCODING => 'NONE', IN_MEMORY => 'fal                             
> > > > >  se', BLOOMFILTER => 'NONE', TTL => '2147483647', VE                             
> > > > >  RSIONS => '3', BLOCKSIZE => '65536'}]}                                          
> > > > > 1 row(s) in 0.1600 seconds
> > > > >  
> > > > >  
> > > > > If no one can help with the problem, can anyone provide a link to internal documentation on the Flume -> ZooKeeper -> HBase flow so I can trace where the error lies?
> > > > >  
> > > > > Are there ZooKeeper log files where I can check whether Flume actually sends the transactions to HBase via ZooKeeper?
> > > > >  
> > > > >  
> > > > >  
> > > > > On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com> wrote:
> > > > > > Hello Jeff,
> > > > > >  
> > > > > > Please find below the requested logs. The initial part of the logs was unfortunately not included. I can run these again if necessary, but the Zookeeper connection is included in the logs.
> > > > > >  
> > > > > >  
> > > > > > 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider: created channel ch2
> > > > > > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> > > > > > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink, type: hdfs
> > > > > > 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> > > > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > > > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch1
> > > > > > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > > > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch2
> > > > > > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > > > > > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > > > > > 14/02/17 10:26:14 INFO file.Log: Replay started
> > > > > > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > > > > > 14/02/17 10:26:14 INFO file.Log: Replay started
> > > > > > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from [/home/biadmin/.flume/file-channel/data/log-7, /home/biadmin/.flume/file-channel/data/log-6]
> > > > > > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from [/home/biadmin/.flume/file-channel2/data/log-6, /home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5]
> > > > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > > > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > > > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel/checkpoint/checkpoint and /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > > > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > > > > > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST 2014, queue depth = 0
> > > > > > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST 2014, queue depth = 0
> > > > > > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > > > > > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > > > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel/data/log-6, /home/biadmin/.flume/file-channel/data/log-7]
> > > > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5, /home/biadmin/.flume/file-channel2/data/log-6]
> > > > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-6
> > > > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-4
> > > > > > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
> > > > > > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520, Remaining = 20971520
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 32040
> > > > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-7
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 2496
> > > > > > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > > > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-5
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 22843
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in /home/biadmin/.flume/file-channel2/data/log-5
> > > > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-6
> > > > > > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > > > > > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
> > > > > > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel2/data
> > > > > > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel2/data
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel2/data/log-7
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in /home/biadmin/.flume/file-channel/data/log-7
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in /home/biadmin/.flume/file-channel/data/log-6
> > > > > > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0, rollback: 0, commit: 0, skip: 1, eventCount:0
> > > > > > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel/data
> > > > > > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel/data
> > > > > > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel/data/log-8
> > > > > > 14/02/17 10:26:16 INFO file.Log: Roll end
> > > > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 0
> > > > > > 14/02/17 10:26:16 INFO file.Log: Roll end
> > > > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 0
> > > > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0, queueHead: 10516
> > > > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0, queueHead: 223682
> > > > > > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition = 0, logWriteOrderID = 1392650774387
> > > > > > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition = 0, logWriteOrderID = 1392650774388
> > > > > > 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID: 1392650774387
> > > > > > 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch2]
> > > > > > 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID: 1392650774388
> > > > > > 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch1]
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch2, registered successfully.
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 started
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch1, registered successfully.
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
> > > > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hbase-sink
> > > > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hdfs-sink
> > > > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Source exec-source
> > > > > > 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: hdfs-sink, registered successfully.
> > > > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink started
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name=bivm
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.vendor=IBM Corporation
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/ibm/biginsights/jdk/jre
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commo
ns-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons
-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsig
hts/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/c
ommons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-net-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-a
pi-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libe
xec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.compiler=j9jit24
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-194.17.4.el5
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/biadmin
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/ibm/biginsights/flume/bin
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> > > > > > 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 20984@bivm
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to server bivm/192.168.37.128:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established to bivm/192.168.37.128:2181, initiating session
> > > > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment complete on server bivm/192.168.37.128:2181, sessionid = 0x144401355b4001d, negotiated timeout = 60000
> > > > > > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 60
> > > > > > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60, queueHead: 10514
> > > > > > 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 32036, logWriteOrderID = 1392650774536
> > > > > > 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 32036 logWriteOrderID: 1392650774536
> > > > > > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-4
> > > > > > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> > > > > > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-5
> > > > > > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> > > > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 460
> > > > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520, queueHead: 10514
> > > > > > 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 277565, logWriteOrderID = 1392650775504
> > > > > > 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 277565 logWriteOrderID: 1392650775504
> > > > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 540
> > > > > > 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> > > > > > 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 423
> > > > > > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137, queueHead: 10917
> > > > > > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539, queueHead: 223681
> > > > > > 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition = 304892, logWriteOrderID = 1392650775933
> > > > > > 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 304892 logWriteOrderID: 1392650775933
> > > > > > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 137
> > > > > > 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition = 288266, logWriteOrderID = 1392650775934
> > > > > > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0, queueHead: 11054
> > > > > > 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 288266 logWriteOrderID: 1392650775934
> > > > > > 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-6
> > > > > > 14/02/17 10:30:04 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> > > > > > 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 29
> > > > > > 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition = 310581, logWriteOrderID = 1392650776074
> > > > > > 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550, queueHead: 223690
> > > > > > 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 310581 logWriteOrderID: 1392650776074
> > > > > > 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > > > 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20, queueHead: 11052
> > > > > > 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition = 299362, logWriteOrderID = 1392650776105
> > > > > > 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition = 321308, logWriteOrderID = 1392650776127
> > > > > > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 299362 logWriteOrderID: 1392650776105
> > > > > > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 321308 logWriteOrderID: 1392650776127
> > > > > > 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 21
> > > > > > 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 38
> > > > > > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569, queueHead: 223691
> > > > > > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20, queueHead: 11070
> > > > > > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition = 310040, logWriteOrderID = 1392650776192
> > > > > > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition = 332801, logWriteOrderID = 1392650776193
> > > > > > 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 310040 logWriteOrderID: 1392650776192
> > > > > > 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 332801 logWriteOrderID: 1392650776193
> > > > > > 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > > > 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 20
> > > > > > 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> > > > > > 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0, queueHead: 11090
> > > > > > 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> > > > > > 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589, queueHead: 223691
> > > > > > 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 333657, logWriteOrderID = 1392650776236
> > > > > > 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition = 320738, logWriteOrderID = 1392650776237
> > > > > > 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 333657 logWriteOrderID: 1392650776236
> > > > > > 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 320738 logWriteOrderID: 1392650776237
> > > > > > 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 125
> > > > > > 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > > > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464, queueHead: 223816
> > > > > > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20, queueHead: 11088
> > > > > > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition = 344355, logWriteOrderID = 1392650776385
> > > > > > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition = 325863, logWriteOrderID = 1392650776384
> > > > > > 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> > > > > > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 325863 logWriteOrderID: 1392650776384
> > > > > > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 344355 logWriteOrderID: 1392650776385
> > > > > > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > > > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > > > > > 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> > > > > > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0, queueHead: 11108
> > > > > > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463, queueHead: 223817
> > > > > > 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition = 335946, logWriteOrderID = 1392650776428
> > > > > > 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition = 345211, logWriteOrderID = 1392650776427
> > > > > > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 345211 logWriteOrderID: 1392650776427
> > > > > > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 335946 logWriteOrderID: 1392650776428
> > > > > > 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > > > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 70
> > > > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473, queueHead: 223847
> > > > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40, queueHead: 11106
> > > > > > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition = 356818, logWriteOrderID = 1392650776540
> > > > > > 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 356818 logWriteOrderID: 1392650776540
> > > > > > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition = 366536, logWriteOrderID = 1392650776541
> > > > > > 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 366536 logWriteOrderID: 1392650776541
> > > > > > 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 493
> > > > > > 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > > > > > 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0, queueHead: 11146
> > > > > > 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0, queueHead: 224340
> > > > > > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition = 368733, logWriteOrderID = 1392650777082
> > > > > > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 379163, logWriteOrderID = 1392650777083
> > > > > > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 368733 logWriteOrderID: 1392650777082
> > > > > > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 379163 logWriteOrderID: 1392650777083
> > > > > > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > > > > > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 900
> > > > > > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900, queueHead: 224338
> > > > > > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920, queueHead: 11144
> > > > > > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition = 859009, logWriteOrderID = 1392650778996
> > > > > > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition = 859505, logWriteOrderID = 1392650778995
> > > > > > 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 859009 logWriteOrderID: 1392650778996
> > > > > > 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > > > > > 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 859505 logWriteOrderID: 1392650778995
> > > > > > 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0, queueHead: 12064
> > > > > > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> > > > > > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> > > > > > 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
> > > > > > 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918, queueHead: 224340
> > > > > > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 897089, logWriteOrderID = 1392650779929
> > > > > > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition = 870220, logWriteOrderID = 1392650779951
> > > > > > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 870220 logWriteOrderID: 1392650779951
> > > > > > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 897089 logWriteOrderID: 1392650779929
> > > > > > 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > > > > > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300, queueHead: 12062
> > > > > > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1198
> > > > > > 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0, queueHead: 225538
> > > > > > 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1057180, logWriteOrderID = 1392650781760
> > > > > > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180 logWriteOrderID: 1392650781760
> > > > > > 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1068832, logWriteOrderID = 1392650781761
> > > > > > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1068832 logWriteOrderID: 1392650781761
> > > > > > 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 798
> > > > > > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500, queueHead: 12360
> > > > > > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 520
> > > > > > 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519, queueHead: 225537
> > > > > > 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1336479, logWriteOrderID = 1392650783137
> > > > > > 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1346456, logWriteOrderID = 1392650783138
> > > > > > 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479 logWriteOrderID: 1392650783137
> > > > > > 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 100
> > > > > > 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1346456 logWriteOrderID: 1392650783138
> > > > > > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400, queueHead: 12460
> > > > > > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 519
> > > > > > 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0, queueHead: 226056
> > > > > > 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1341143, logWriteOrderID = 1392650783761
> > > > > > 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1367771, logWriteOrderID = 1392650783762
> > > > > > 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143 logWriteOrderID: 1392650783761
> > > > > > 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1367771 logWriteOrderID: 1392650783762
> > > > > > 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > > > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 100
> > > > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300, queueHead: 12660
> > > > > > 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> > > > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100, queueHead: 226054
> > > > > > 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1402287, logWriteOrderID = 1392650784174
> > > > > > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287 logWriteOrderID: 1392650784174
> > > > > > 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1421128, logWriteOrderID = 1392650784175
> > > > > > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1421128 logWriteOrderID: 1392650784175
> > > > > > 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > > > 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 480
> > > > > > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 278
> > > > > > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98, queueHead: 13042
> > > > > > 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0, queueHead: 226332
> > > > > > 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1514767, logWriteOrderID = 1392650785222
> > > > > > 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767 logWriteOrderID: 1392650785222
> > > > > > 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 118
> > > > > > 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1528845, logWriteOrderID = 1392650785223
> > > > > > 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0, queueHead: 13160
> > > > > > 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1528845 logWriteOrderID: 1392650785223
> > > > > > 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1529781, logWriteOrderID = 1392650785364
> > > > > > 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781 logWriteOrderID: 1392650785364
> > > > > > 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 500
> > > > > > 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 500
> > > > > > 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500, queueHead: 13158
> > > > > > 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500, queueHead: 226330
> > > > > > 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> > > > > > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > > > > > 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider: Configuration provider stopping
> > > > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager stopping
> > > > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > > > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping Source exec-source
> > > > > > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> > > > > > 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > > > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795949, logWriteOrderID = 1392650786416
> > > > > > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1796885, logWriteOrderID = 1392650786415
> > > > > > 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795949 logWriteOrderID: 1392650786416
> > > > > > 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command: tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > > > java.io.IOException: Pipe closed
> > > > > >         at java.io.PipedInputStream.read(PipedInputStream.java:302)
> > > > > >         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
> > > > > >         at java.io.PipedInputStream.read(PipedInputStream.java:372)
> > > > > >         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
> > > > > >         at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
> > > > > >         at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
> > > > > >         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
> > > > > >         at java.io.InputStreamReader.read(InputStreamReader.java:188)
> > > > > >         at java.io.BufferedReader.fill(BufferedReader.java:147)
> > > > > >         at java.io.BufferedReader.readLine(BufferedReader.java:310)
> > > > > >         at java.io.BufferedReader.readLine(BufferedReader.java:373)
> > > > > >         at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
> > > > > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
> > > > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> > > > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hbase-sink
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has already been stopped EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> > > > > > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > > > > > java.lang.InterruptedException
> > > > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> > > > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> > > > > >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> > > > > >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> > > > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
> > > > > >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> > > > > >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> > > > > >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> > > > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > > > > > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch2]
> > > > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
> > > > > >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> > > > > >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> > > > > >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> > > > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x144401355b4001d
> > > > > > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885 logWriteOrderID: 1392650786415
> > > > > > 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to HDFSWriter (Filesystem closed). Closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp) and rethrowing exception.
> > > > > > 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp). Exception follows.
> > > > > > java.io.IOException: Filesystem closed
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
> > > > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > > > > > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > > > 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> > > > > > java.io.IOException: Filesystem closed
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
> > > > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > > > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
> > > > > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499, queueHead: 226331
> > > > > > 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d closed
> > > > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hdfs-sink
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }
> > > > > > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > > > > > java.lang.InterruptedException
> > > > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> > > > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> > > > > >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> > > > > >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> > > > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
> > > > > >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> > > > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > > > > > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch1]
> > > > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
> > > > > >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> > > > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> > > > > > 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> > > > > > 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795990, logWriteOrderID = 1392650786418
> > > > > > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > > > 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> > > > > > java.io.IOException: Filesystem closed
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
> > > > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
> > > > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > > > >         at java.lang.Thread.run(Thread.java:738)
> > > > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink stopped
> > > > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch1
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> > > > > > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > > > > > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795990 logWriteOrderID: 1392650786418
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel/data/log-8
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-7
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-8
> > > > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 stopped
> > > > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch2
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> > > > > > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel2/data/log-7
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-6
> > > > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-7
> > > > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 stopped
> > > > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > > > > >  
> > > > > >  
> > > > > >  
> > > > > > On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com (mailto:kanirip@gmail.com)> wrote:
> > > > > > > Hello Jeff,
> > > > > > >  
> > > > > > > Please find the requested logs below. The initiation part of the logs was unfortunately not included. I can run these again if necessary, but the Zookeeper connection is included in the logs.
> > > > > > >  
> > > > > > >  
> > > > > > >  
> > > > > > > On 17 February 2014 16:05, Jeff Lord <jlord@cloudera.com (mailto:jlord@cloudera.com)> wrote:
> > > > > > > > Logs ?
> > > > > > > >  
> > > > > > > > On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <kanirip@gmail.com (mailto:kanirip@gmail.com)> wrote:
> > > > > > > > > Dear Mailing Group,
> > > > > > > > >
> > > > > > > > > I am currently having issues with the Hbase sink function. I have developed
> > > > > > > > > an agent with a fanout channel setup ( single source, multiple channels,
> > > > > > > > > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> > > > > > > > >
> > > > > > > > >  The issue is that although the HDFS is working well, the Hbase flow is
> > > > > > > > > simply not working. There are no errors being reported by Flume for the
> > > > > > > > > Hbase channel but there are never any records being written to the HBase
> > > > > > > > > store. The Hbase table as stipulated in the config always remains empty.
> > > > > > > > > Studying the Flume startup logs I observe that the session connection to
> > > > > > > > > Zookeeper is always successfully established
> > > > > > > > >
> > > > > > > > > Are there any special configurations I am missing out?
> > > > > > > > >
> > > > > > > > > I am using the Async Event Serializer to persist the txns.
> > > > > > > > >
> > > > > > > > > Any assistance will be greatly appreciated.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Please see below for the flume configuration:
> > > > > > > > >
> > > > > > > > > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > > > > > > > > agent.sources=exec-source
> > > > > > > > > agent.sinks=hdfs-sink hbase-sink
> > > > > > > > > agent.channels=ch1 ch2
> > > > > > > > >
> > > > > > > > > agent.sources.exec-source.type=exec
> > > > > > > > > agent.sources.exec-source.command=tail -F
> > > > > > > > > /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > > > > > >
> > > > > > > > > agent.sinks.hdfs-sink.type=hdfs
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > > > > > > > > # File size to trigger roll, in bytes (0: never roll based on file size)
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > > > > > > > > # number of events written to file before it flushed to HDFS
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > > > > > > > > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> > > > > > > > > agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > > > > > > > > agent.sinks.hbase-sink.table=telco_cdr_rec
> > > > > > > > > agent.sinks.hbase-sink.columnFamily = colfam
> > > > > > > > > agent.sinks.hbase-sink.channels = ch2
> > > > > > > > > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > > > > > > > > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > agent.channels.ch1.type=file
> > > > > > > > > agent.channels.ch1.checkpointInterval=3000
> > > > > > > > > agent.channels.ch1.transactionCapacity=10000
> > > > > > > > > agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > > > > > > > > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > > > > > > > > agent.channels.ch1.write-timeout=30
> > > > > > > > > agent.channels.ch1.keep-alive=30
> > > > > > > > > #agent.channels.ch1.capacity=1000
> > > > > > > > >
> > > > > > > > > agent.channels.ch2.type=file
> > > > > > > > > agent.channels.ch2.checkpointInterval=300
> > > > > > > > > agent.channels.ch2.transactionCapacity=10000
> > > > > > > > > agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > > > > > > > > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > > > > > > > > agent.channels.ch2.write-timeout=30
> > > > > > > > > agent.channels.ch2.keep-alive=30
> > > > > > > > > #agent.channels.ch2.capacity=1000
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > agent.sources.exec-source.channels=ch1 ch2
> > > > > > > > > agent.sinks.hdfs-sink.channel=ch1
> > > > > > > > > agent.sinks.hbase-sink.channel=ch2
> > > > > > > > >
> > > > > > >  
> > > > > >  
> > > > >  
> > > >  
> > >  
> >  
>  


Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hello Hari,

No, I have no reason NOT to use a newer version, but since I am working with
a pre-packaged virtual appliance, I would ideally not want to update any of
the Hadoop components (I would then need to update the entire virtual
appliance to reflect the changes) without being sure that the update would
solve this problem.

What are your thoughts? Were the reported performance problems related to
the HBase sink? I suspect this could be an issue between my Zookeeper
deployment and my HBase setup, but I am open to suggestions.

Thanks again for all the help.



On 18 February 2014 20:41, Hari Shreedharan <hs...@cloudera.com> wrote:

>  Looks like you are using Flume 1.3.0. Is there a reason for not using a
> newer version? Flume 1.4.0 is now almost 6 months old. 1.3.0 did have a
> known performance issue, which was the reason 1.3.1 was released almost
> immediately after.
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 11:23 AM, Kris Ogirri wrote:
>
> Hello Hari,
>
> I didn't know it was a holiday in the US.
>
> Please see version information below:
>
> Hbase:
> HBase Shell; enter 'help<RETURN>' for list of supported commands.
> Type "exit<RETURN>" to leave the HBase Shell
> Version 0.94.3, rab548827f0c52211c1d67437484fcba635072767, Wed Jul 31
> 18:13:25 PDT 2013
>
>
> Flume:
> [biadmin@bivm bin]$ ./flume-ng version
> Flume 1.3.0
> Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
> Revision: abbccbd2ff14dd6fed2a8a3891eb51aff985e9f5
> Compiled by jenkins on Wed Jun 12 19:16:33 PDT 2013
> From source with checksum dce204011600e67e1455971266d3da07
>
>
> Thanks for all the assistance.
>
> BR,
>
>
>
> On 18 February 2014 20:14, Hari Shreedharan <hs...@cloudera.com> wrote:
>
>  Hi Kris,
>
> Please realize that people usually work on their own time on these mailing
> lists and since your first message was sent on a Monday early morning on a
> long weekend in the US, others may not have seen your message either.
>
> Are you running Apache Flume and Apache HBase? If yes, what versions
> (output of flume-ng version and hbase version)?
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:
>
> Hi,
>
> Can't anybody help with this? I think it is a small issue, because
> everything seems to work fine, but the data from the channel never gets
> persisted into HBase.
>
> I have added the description of the Hbase tables:
>
> hbase(main):005:0> describe 'telco_cdr_rec'
> DESCRIPTION                                                          ENABLED
>  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'colfam',           true
>  REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS => 'false',
>  COMPRESSION => 'NONE', ENCODE_ON_DISK => 'true', BLOCKCACHE =>
>  'true', MIN_VERSIONS => '0', DATA_BLOCK_ENCODING => 'NONE',
>  IN_MEMORY => 'false', BLOOMFILTER => 'NONE', TTL => '2147483647',
>  VERSIONS => '3', BLOCKSIZE => '65536'}]}
> 1 row(s) in 0.1600 seconds
>
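The table and the 'colfam' family clearly exist, so the writes themselves are going missing rather than failing on a missing table. One detail worth double-checking (this is a suggestion based on the Flume user guide, not something confirmed from the logs above): the simple HBase serializers only generate puts when a payload column is configured for them, so a sink section along the lines of the sketch below may be needed. The column qualifier `pCol` is an illustrative name, not a value taken from the original config:

```properties
# Hedged sketch of the hbase-sink section; 'pCol' is illustrative.
agent.sinks.hbase-sink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
agent.sinks.hbase-sink.table = telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
agent.sinks.hbase-sink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
# Without a payloadColumn the simple serializer may emit no puts at all:
agent.sinks.hbase-sink.serializer.payloadColumn = pCol
agent.sinks.hbase-sink.channel = ch2
```

If rows start appearing after setting the payload column, the empty table was a serializer configuration issue rather than a connectivity problem. It is also worth confirming which config the agent actually loaded, since the startup log below reports `type: org.apache.flume.sink.hbase.HBaseSink` while the posted config specifies AsyncHBaseSink.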
>
> If no one can help with the problem, can anyone provide a link to
> internal documentation on the Flume -> Zookeeper -> HBase path so I can
> trace where the error lies?
>
> Are there Zookeeper log files where I can analyse whether Flume actually
> sends the transactions to HBase via Zookeeper?
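Rather than Zookeeper logs, Flume's own counters are usually the quickest way to see whether the sink is draining events. A hedged sketch of enabling the built-in JSON metrics reporter via conf/flume-env.sh (flag names per the Flume 1.x user guide; port 34545 is an arbitrary choice):

```shell
# Config fragment for conf/flume-env.sh -- enables Flume's HTTP JSON
# metrics reporter so per-component counters can be polled over HTTP.
JAVA_OPTS="$JAVA_OPTS -Dflume.monitoring.type=http -Dflume.monitoring.port=34545"
```

Once the agent is restarted, fetching http://<agent-host>:34545/metrics should return per-component counters; if the hbase-sink's EventDrainSuccessCount stays at zero while ch2's put counters keep growing, the problem is in the sink/serializer rather than upstream of the channel.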
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find the requested logs below. The initiation part of the logs was
> unfortunately not included. I can run these again if necessary, but the
> Zookeeper connection is included in the logs.
>
>
> 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider:
> created channel ch2
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hdfs-sink, type: hdfs
> 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new
> configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{} } }, hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch1
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch2
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from
> [/home/biadmin/.flume/file-channel/data/log-7,
> /home/biadmin/.flume/file-channel/data/log-6]
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from
> [/home/biadmin/.flume/file-channel2/data/log-6,
> /home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5]
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel/data/log-6,
> /home/biadmin/.flume/file-channel/data/log-7]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5,
> /home/biadmin/.flume/file-channel2/data/log-6]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get
> maxDirectMemory from VM: NoSuchMethodException:
> sun.misc.VM.maxDirectMemory(null)
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
> Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520,
> Remaining = 20971520
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 32040
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 2496
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 22843
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 0, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 1, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0,
> queueHead: 10516
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0,
> queueHead: 223682
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 0, logWriteOrderID = 1392650774387
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 0, logWriteOrderID = 1392650774388
> 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID:
> 1392650774387
> 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch2]
> 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID:
> 1392650774388
> 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch1]
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch2, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 started
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch1, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 started
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hbase-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hdfs-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source exec-source
> 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: SINK, name: hdfs-sink, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink started
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name=bivm
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=IBM Corporation
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/opt/ibm/biginsights/jdk/jre
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1
.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../li
b/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsigh
ts/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-ne
t-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-
1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexe
c/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=j9jit24
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.18-194.17.4.el5
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/ibm/biginsights/flume/bin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of
> this process is 20984@bivm
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server bivm/192.168.37.128:2181. Will not attempt to authenticate using
> SASL (Unable to locate a login configuration)
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established
> to bivm/192.168.37.128:2181, initiating session
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment
> complete on server bivm/192.168.37.128:2181, sessionid =
> 0x144401355b4001d, negotiated timeout = 60000
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 60
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60,
> queueHead: 10514
> 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 32036, logWriteOrderID = 1392650774536
> 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 32036
> logWriteOrderID: 1392650774536
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 460
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520,
> queueHead: 10514
> 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 277565, logWriteOrderID = 1392650775504
> 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 277565
> logWriteOrderID: 1392650775504
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 540
> 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 423
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137,
> queueHead: 10917
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539,
> queueHead: 223681
> 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 304892, logWriteOrderID = 1392650775933
> 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 304892
> logWriteOrderID: 1392650775933
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 137
> 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 288266, logWriteOrderID = 1392650775934
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0,
> queueHead: 11054
> 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 288266
> logWriteOrderID: 1392650775934
> 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:30:04 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 29
> 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 310581, logWriteOrderID = 1392650776074
> 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550,
> queueHead: 223690
> 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 310581
> logWriteOrderID: 1392650776074
> 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20,
> queueHead: 11052
> 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 299362, logWriteOrderID = 1392650776105
> 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 321308, logWriteOrderID = 1392650776127
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 299362
> logWriteOrderID: 1392650776105
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 321308
> logWriteOrderID: 1392650776127
> 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 21
> 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 38
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569,
> queueHead: 223691
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20,
> queueHead: 11070
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 310040, logWriteOrderID = 1392650776192
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 332801, logWriteOrderID = 1392650776193
> 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 310040
> logWriteOrderID: 1392650776192
> 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 332801
> logWriteOrderID: 1392650776193
> 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0,
> queueHead: 11090
> 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589,
> queueHead: 223691
> 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 333657, logWriteOrderID = 1392650776236
> 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 320738, logWriteOrderID = 1392650776237
> 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 333657
> logWriteOrderID: 1392650776236
> 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 320738
> logWriteOrderID: 1392650776237
> 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 125
> 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464,
> queueHead: 223816
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20,
> queueHead: 11088
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 344355, logWriteOrderID = 1392650776385
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 325863, logWriteOrderID = 1392650776384
> 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 325863
> logWriteOrderID: 1392650776384
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 344355
> logWriteOrderID: 1392650776385
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0,
> queueHead: 11108
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463,
> queueHead: 223817
> 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 335946, logWriteOrderID = 1392650776428
> 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 345211, logWriteOrderID = 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 345211
> logWriteOrderID: 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 335946
> logWriteOrderID: 1392650776428
> 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 70
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473,
> queueHead: 223847
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40,
> queueHead: 11106
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 356818, logWriteOrderID = 1392650776540
> 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 356818
> logWriteOrderID: 1392650776540
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 366536, logWriteOrderID = 1392650776541
> 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 366536
> logWriteOrderID: 1392650776541
> 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 493
> 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0,
> queueHead: 11146
> 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0,
> queueHead: 224340
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 368733, logWriteOrderID = 1392650777082
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 379163, logWriteOrderID = 1392650777083
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 368733
> logWriteOrderID: 1392650777082
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 379163
> logWriteOrderID: 1392650777083
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 900
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900,
> queueHead: 224338
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920,
> queueHead: 11144
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 859009, logWriteOrderID = 1392650778996
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 859505, logWriteOrderID = 1392650778995
> 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 859009
> logWriteOrderID: 1392650778996
> 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 859505
> logWriteOrderID: 1392650778995
> 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0,
> queueHead: 12064
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 22
> 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918,
> queueHead: 224340
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 897089, logWriteOrderID = 1392650779929
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 870220, logWriteOrderID = 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 870220
> logWriteOrderID: 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 897089
> logWriteOrderID: 1392650779929
> 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300,
> queueHead: 12062
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1198
> 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0,
> queueHead: 225538
> 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1057180, logWriteOrderID = 1392650781760
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180
> logWriteOrderID: 1392650781760
> 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1068832, logWriteOrderID = 1392650781761
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1068832
> logWriteOrderID: 1392650781761
> 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 798
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500,
> queueHead: 12360
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 520
> 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519,
> queueHead: 225537
> 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1336479, logWriteOrderID = 1392650783137
> 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1346456, logWriteOrderID = 1392650783138
> 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479
> logWriteOrderID: 1392650783137
> 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1346456
> logWriteOrderID: 1392650783138
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400,
> queueHead: 12460
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 519
> 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0,
> queueHead: 226056
> 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1341143, logWriteOrderID = 1392650783761
> 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1367771, logWriteOrderID = 1392650783762
> 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143
> logWriteOrderID: 1392650783761
> 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1367771
> logWriteOrderID: 1392650783762
> 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300,
> queueHead: 12660
> 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100,
> queueHead: 226054
> 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1402287, logWriteOrderID = 1392650784174
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287
> logWriteOrderID: 1392650784174
> 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1421128, logWriteOrderID = 1392650784175
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1421128
> logWriteOrderID: 1392650784175
> 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 480
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 278
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98,
> queueHead: 13042
> 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0,
> queueHead: 226332
> 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1514767, logWriteOrderID = 1392650785222
> 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767
> logWriteOrderID: 1392650785222
> 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 118
> 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1528845, logWriteOrderID = 1392650785223
> 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0,
> queueHead: 13160
> 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1528845
> logWriteOrderID: 1392650785223
> 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1529781, logWriteOrderID = 1392650785364
> 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781
> logWriteOrderID: 1392650785364
> 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500,
> queueHead: 13158
> 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500,
> queueHead: 226330
> 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
> 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager
> stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting
> down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } },
> hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }}
> channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Source exec-source
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component:
> EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795949, logWriteOrderID = 1392650786416
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1796885, logWriteOrderID = 1392650786415
> 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795949
> logWriteOrderID: 1392650786416
> 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command:
> tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> java.io.IOException: Pipe closed
>         at java.io.PipedInputStream.read(PipedInputStream.java:302)
>         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
>         at java.io.PipedInputStream.read(PipedInputStream.java:372)
>         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
>         at java.io.InputStreamReader.read(InputStreamReader.java:188)
>         at java.io.BufferedReader.fill(BufferedReader.java:147)
>         at java.io.BufferedReader.readLine(BufferedReader.java:310)
>         at java.io.BufferedReader.readLine(BufferedReader.java:373)
>         at
> org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F
> /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hbase-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=2,
> runner.backoffs=59} } }
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has
> already been stopped EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch2]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
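
[Editor's note: the ChannelException above explicitly suggests raising the file channel's write timeout. As a hedged sketch only (the property names match those used in the configuration quoted later in this thread, which sets 30; the value below is illustrative, not taken from the original setup), the timeout is raised in the channel definition:]

```properties
# Raise the file channel's log write timeout (seconds) for ch2.
# The thread's original config uses write-timeout=30.
agent.channels.ch2.write-timeout=60
# keep-alive should stay at or below the write timeout.
agent.channels.ch2.keep-alive=30
```
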
> 14/02/17 10:32:58 INFO
> client.HConnectionManager$HConnectionImplementation: Closed zookeeper
> sessionid=0x144401355b4001d
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885
> logWriteOrderID: 1392650786415
> 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to
> HDFSWriter (Filesystem closed). Closing file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp)
> and rethrowing exception.
> 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing
> file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499,
> queueHead: 226331
> 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d
> closed
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hdfs-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=3,
> runner.backoffs=53} } }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch1]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795990, logWriteOrderID = 1392650786418
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch1
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795990
> logWriteOrderID: 1392650786418
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch2
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 stopped
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find below requested logs.. Initiation part of the logs were
> unfortunately not included. I can run these again if necessary but the
> Zookeeper connection is included in the logs.
>
>
>
> On 17 February 2014 16:05, Jeff Lord <jl...@cloudera.com> wrote:
>
> Logs ?
>
> On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <ka...@gmail.com> wrote:
> > Dear Mailing Group,
> >
> > I am currently having issues with the Hbase sink function. I have
> developed
> > an agent with a fanout channel setup ( single source, multiple channels,
> > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> >
> >  The issue is that although the HDFS is working well, the Hbase flow is
> > simply not working. There are no errors being reported by Flume for the
> > Hbase channel but there are never any records being written to the HBase
> > store. The Hbase table as stipulated in the config always remains empty.
> > Studying the Flume startup logs I observe that the session connection to
> > Zookeeper is always successfully established
> >
> > Are there any special configurations I am missing out?
> >
> > I am using the Async Event Serializer to persist the txns.
> >
> > Any assistance will be greatly appreciated.
> >
> >
> > Please see below for the flume configuration:
> >
> > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > agent.sources=exec-source
> > agent.sinks=hdfs-sink hbase-sink
> > agent.channels=ch1 ch2
> >
> > agent.sources.exec-source.type=exec
> > agent.sources.exec-source.command=tail -F
> > /home/biadmin/bigdemo/data/rec_telco.cdr
> >
> > agent.sinks.hdfs-sink.type=hdfs
> > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > # File size to trigger roll, in bytes (0: never roll based on file size)
> > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > # number of events written to file before it flushed to HDFS
> > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> >
> agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > agent.sinks.hbase-sink.table=telco_cdr_rec
> > agent.sinks.hbase-sink.columnFamily = colfam
> > agent.sinks.hbase-sink.channels = ch2
> > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.channels.ch1.type=file
> > agent.channels.ch1.checkpointInterval=3000
> > agent.channels.ch1.transactionCapacity=10000
> >
> agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > agent.channels.ch1.write-timeout=30
> > agent.channels.ch1.keep-alive=30
> > #agent.channels.ch1.capacity=1000
> >
> > agent.channels.ch2.type=file
> > agent.channels.ch2.checkpointInterval=300
> > agent.channels.ch2.transactionCapacity=10000
> >
> agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > agent.channels.ch2.write-timeout=30
> > agent.channels.ch2.keep-alive=30
> > #agent.channels.ch2.capacity=1000
> >
> >
> > agent.sources.exec-source.channels=ch1 ch2
> > agent.sinks.hdfs-sink.channel=ch1
> > agent.sinks.hbase-sink.channel=ch2
> >
>
>
>
>
>
>
>
>
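Worth noting about the configuration quoted above: it wires the HBase sink to its channel twice, once as `agent.sinks.hbase-sink.channels = ch2` (plural) and once as `agent.sinks.hbase-sink.channel=ch2` (singular). Per the Flume User Guide, sinks take the singular `channel` property (only sources take the plural `channels`), and a mistyped property key may pass without an obvious error. A small sanity check for this class of typo is sketched below (the parsing is deliberately simplified; the sample lines are taken from this thread):

```python
# Sketch: flag Flume sink definitions that use "channels" (plural) instead
# of "channel". Sinks read from exactly one channel, and a mistyped
# property key may not produce an obvious error at startup.

def check_sink_channels(lines):
    """Return warnings for sink properties that use 'channels'."""
    warnings = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        parts = key.strip().split(".")
        # agent.sinks.<name>.channels -> suspicious (should be .channel)
        if len(parts) == 4 and parts[1] == "sinks" and parts[3] == "channels":
            warnings.append(
                f"sink '{parts[2]}' uses 'channels = {value.strip()}'; "
                "sinks take the singular 'channel'"
            )
    return warnings

# Sample lines from the configuration in this thread
conf = [
    "agent.sinks.hbase-sink.channels = ch2",
    "agent.sinks.hdfs-sink.channel = ch1",
]
for warning in check_sink_channels(conf):
    print(warning)
```

In this particular config the singular form is also present further down, so the sink is still bound to ch2, but checking for the plural form is a cheap way to rule out one common source of silent misconfiguration.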

Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Hari Shreedharan <hs...@cloudera.com>.
Looks like you are using Flume 1.3.0. Is there a reason for not using a newer version? Flume 1.4.0 is now almost 6 months old, and 1.3.0 had a known performance issue, which is why 1.3.1 was released almost immediately after.


Thanks,
Hari


On Tuesday, February 18, 2014 at 11:23 AM, Kris Ogirri wrote:

> Hello Hari,
> 
> I didn't know it was a holiday in the US. 
> 
> Please see version information below:
> 
> Hbase:
> HBase Shell; enter 'help<RETURN>' for list of supported commands.
> Type "exit<RETURN>" to leave the HBase Shell
> Version 0.94.3, rab548827f0c52211c1d67437484fcba635072767, Wed Jul 31 18:13:25 PDT 2013
> 
> 
> Flume: 
> [biadmin@bivm bin]$ ./flume-ng version
> Flume 1.3.0
> Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
> Revision: abbccbd2ff14dd6fed2a8a3891eb51aff985e9f5
> Compiled by jenkins on Wed Jun 12 19:16:33 PDT 2013
> From source with checksum dce204011600e67e1455971266d3da07
> 
> 
> Thanks for all the assistance.
> 
> BR,
> 
> 
> 
> On 18 February 2014 20:14, Hari Shreedharan <hshreedharan@cloudera.com> wrote:
> > Hi Kris, 
> > 
> > Please realize that people usually work on their own time on these mailing lists, and since your first message was sent early on a Monday morning during a long weekend in the US, others may not have seen your message either.  
> > 
> > Are you running Apache Flume and Apache HBase? If yes, what versions (output of flume-ng version and hbase version)? 
> > 
> > 
> > Thanks,
> > Hari
> > 
> > 
> > On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:
> > 
> > > Hi,
> > > 
> > > Can't anybody help with this? I suspect it's a small issue, because everything seems to work fine, but the data from the channel never gets persisted into HBase.
> > > 
> > > I have added the description of the HBase table:
> > > 
> > > hbase(main):005:0> describe 'telco_cdr_rec'
> > > DESCRIPTION                                                ENABLED
> > >  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'colfam', true
> > >  REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS => 'false',
> > >  COMPRESSION => 'NONE', ENCODE_ON_DISK => 'true',
> > >  BLOCKCACHE => 'true', MIN_VERSIONS => '0',
> > >  DATA_BLOCK_ENCODING => 'NONE', IN_MEMORY => 'false',
> > >  BLOOMFILTER => 'NONE', TTL => '2147483647',
> > >  VERSIONS => '3', BLOCKSIZE => '65536'}]}
> > > 1 row(s) in 0.1600 seconds
> > > 
> > > 
> > > If no one can help with the problem, can anyone provide a link to the Flume -> ZooKeeper -> HBase internals documentation, so I can trace where the error lies?
> > > 
> > > Are there ZooKeeper log files where I can analyse whether Flume actually sends the transactions to HBase via ZooKeeper?
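One way to answer the question above without ZooKeeper logs is to check the table directly from the HBase shell while Flume is running. A sketch, assuming the table name from this thread and a working `hbase` CLI on the path:

```shell
# Row count in the sink's target table; 0 means nothing has been written yet
echo "count 'telco_cdr_rec'" | hbase shell

# Inspect the first few rows, if any exist
echo "scan 'telco_cdr_rec', {LIMIT => 5}" | hbase shell
```

Note that an HBase client only uses ZooKeeper to locate regions; the actual puts go directly to the region servers, so ZooKeeper logs would not show the writes in any case.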
> > > 
> > > 
> > > 
> > > On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com> wrote:
> > > > Hello Jeff,
> > > > 
> > > > Please find the requested logs below. The initialisation part of the logs was unfortunately not included. I can run these again if necessary, but the ZooKeeper connection is included in the logs.
> > > > 
> > > > 
> > > > 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider: created channel ch2
> > > > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> > > > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink, type: hdfs
> > > > 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch1
> > > > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > > > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch2
> > > > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > > > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > > > 14/02/17 10:26:14 INFO file.Log: Replay started
> > > > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > > > 14/02/17 10:26:14 INFO file.Log: Replay started
> > > > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from [/home/biadmin/.flume/file-channel/data/log-7, /home/biadmin/.flume/file-channel/data/log-6]
> > > > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from [/home/biadmin/.flume/file-channel2/data/log-6, /home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5]
> > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel/checkpoint/checkpoint and /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > > > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > > > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST 2014, queue depth = 0
> > > > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST 2014, queue depth = 0
> > > > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > > > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel/data/log-6, /home/biadmin/.flume/file-channel/data/log-7]
> > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5, /home/biadmin/.flume/file-channel2/data/log-6]
> > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-6
> > > > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-4
> > > > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
> > > > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520, Remaining = 20971520
> > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 32040
> > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-7
> > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 2496
> > > > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-5
> > > > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 22843
> > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in /home/biadmin/.flume/file-channel2/data/log-5
> > > > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-6
> > > > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > > > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
> > > > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel2/data
> > > > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel2/data
> > > > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel2/data/log-7
> > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in /home/biadmin/.flume/file-channel/data/log-7
> > > > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in /home/biadmin/.flume/file-channel/data/log-6
> > > > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0, rollback: 0, commit: 0, skip: 1, eventCount:0
> > > > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel/data
> > > > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel/data
> > > > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel/data/log-8
> > > > 14/02/17 10:26:16 INFO file.Log: Roll end
> > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 0
> > > > 14/02/17 10:26:16 INFO file.Log: Roll end
> > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 0
> > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0, queueHead: 10516
> > > > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0, queueHead: 223682
> > > > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition = 0, logWriteOrderID = 1392650774387
> > > > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition = 0, logWriteOrderID = 1392650774388
> > > > 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID: 1392650774387
> > > > 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch2]
> > > > 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID: 1392650774388
> > > > 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch1]
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch2, registered successfully.
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 started
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch1, registered successfully.
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
> > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hbase-sink
> > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hdfs-sink
> > > > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Source exec-source
> > > > 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: hdfs-sink, registered successfully.
> > > > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink started
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name=bivm
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.vendor=IBM Corporation
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/ibm/biginsights/jdk/jre
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsight
s/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commo
ns-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/bigi
nsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/com
mons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm
/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/
biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools
.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights
/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-net-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsi
ghts/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hba
se/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libe
xec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/
biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.j
ar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.compiler=j9jit24
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-194.17.4.el5
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/biadmin
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/ibm/biginsights/flume/bin
> > > > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> > > > 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 20984@bivm
> > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to server bivm/192.168.37.128:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established to bivm/192.168.37.128:2181, initiating session
> > > > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment complete on server bivm/192.168.37.128:2181, sessionid = 0x144401355b4001d, negotiated timeout = 60000
> > > > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 60
> > > > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60, queueHead: 10514
> > > > 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 32036, logWriteOrderID = 1392650774536
> > > > 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 32036 logWriteOrderID: 1392650774536
> > > > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-4
> > > > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> > > > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-5
> > > > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 460
> > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520, queueHead: 10514
> > > > 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 277565, logWriteOrderID = 1392650775504
> > > > 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 277565 logWriteOrderID: 1392650775504
> > > > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 540
> > > > 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> > > > 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 423
> > > > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137, queueHead: 10917
> > > > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539, queueHead: 223681
> > > > 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition = 304892, logWriteOrderID = 1392650775933
> > > > 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 304892 logWriteOrderID: 1392650775933
> > > > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 137
> > > > 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition = 288266, logWriteOrderID = 1392650775934
> > > > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0, queueHead: 11054
> > > > 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 288266 logWriteOrderID: 1392650775934
> > > > 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-6
> > > > 14/02/17 10:30:04 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> > > > 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 29
> > > > 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition = 310581, logWriteOrderID = 1392650776074
> > > > 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550, queueHead: 223690
> > > > 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 310581 logWriteOrderID: 1392650776074
> > > > 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20, queueHead: 11052
> > > > 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition = 299362, logWriteOrderID = 1392650776105
> > > > 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition = 321308, logWriteOrderID = 1392650776127
> > > > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 299362 logWriteOrderID: 1392650776105
> > > > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 321308 logWriteOrderID: 1392650776127
> > > > 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 21
> > > > 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 38
> > > > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569, queueHead: 223691
> > > > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20, queueHead: 11070
> > > > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition = 310040, logWriteOrderID = 1392650776192
> > > > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition = 332801, logWriteOrderID = 1392650776193
> > > > 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 310040 logWriteOrderID: 1392650776192
> > > > 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 332801 logWriteOrderID: 1392650776193
> > > > 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 20
> > > > 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> > > > 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0, queueHead: 11090
> > > > 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> > > > 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589, queueHead: 223691
> > > > 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 333657, logWriteOrderID = 1392650776236
> > > > 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition = 320738, logWriteOrderID = 1392650776237
> > > > 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 333657 logWriteOrderID: 1392650776236
> > > > 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 320738 logWriteOrderID: 1392650776237
> > > > 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 125
> > > > 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464, queueHead: 223816
> > > > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20, queueHead: 11088
> > > > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition = 344355, logWriteOrderID = 1392650776385
> > > > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition = 325863, logWriteOrderID = 1392650776384
> > > > 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> > > > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 325863 logWriteOrderID: 1392650776384
> > > > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 344355 logWriteOrderID: 1392650776385
> > > > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > > > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > > > 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> > > > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0, queueHead: 11108
> > > > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463, queueHead: 223817
> > > > 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition = 335946, logWriteOrderID = 1392650776428
> > > > 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition = 345211, logWriteOrderID = 1392650776427
> > > > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 345211 logWriteOrderID: 1392650776427
> > > > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 335946 logWriteOrderID: 1392650776428
> > > > 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 70
> > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473, queueHead: 223847
> > > > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40, queueHead: 11106
> > > > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition = 356818, logWriteOrderID = 1392650776540
> > > > 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 356818 logWriteOrderID: 1392650776540
> > > > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition = 366536, logWriteOrderID = 1392650776541
> > > > 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 366536 logWriteOrderID: 1392650776541
> > > > 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 493
> > > > 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > > > 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0, queueHead: 11146
> > > > 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0, queueHead: 224340
> > > > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition = 368733, logWriteOrderID = 1392650777082
> > > > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 379163, logWriteOrderID = 1392650777083
> > > > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 368733 logWriteOrderID: 1392650777082
> > > > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 379163 logWriteOrderID: 1392650777083
> > > > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > > > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 900
> > > > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900, queueHead: 224338
> > > > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920, queueHead: 11144
> > > > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition = 859009, logWriteOrderID = 1392650778996
> > > > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition = 859505, logWriteOrderID = 1392650778995
> > > > 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 859009 logWriteOrderID: 1392650778996
> > > > 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > > > 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 859505 logWriteOrderID: 1392650778995
> > > > 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0, queueHead: 12064
> > > > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> > > > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> > > > 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
> > > > 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918, queueHead: 224340
> > > > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 897089, logWriteOrderID = 1392650779929
> > > > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition = 870220, logWriteOrderID = 1392650779951
> > > > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 870220 logWriteOrderID: 1392650779951
> > > > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 897089 logWriteOrderID: 1392650779929
> > > > 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > > > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300, queueHead: 12062
> > > > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1198
> > > > 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0, queueHead: 225538
> > > > 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1057180, logWriteOrderID = 1392650781760
> > > > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180 logWriteOrderID: 1392650781760
> > > > 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1068832, logWriteOrderID = 1392650781761
> > > > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1068832 logWriteOrderID: 1392650781761
> > > > 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 798
> > > > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500, queueHead: 12360
> > > > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 520
> > > > 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519, queueHead: 225537
> > > > 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1336479, logWriteOrderID = 1392650783137
> > > > 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1346456, logWriteOrderID = 1392650783138
> > > > 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479 logWriteOrderID: 1392650783137
> > > > 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 100
> > > > 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1346456 logWriteOrderID: 1392650783138
> > > > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400, queueHead: 12460
> > > > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 519
> > > > 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0, queueHead: 226056
> > > > 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1341143, logWriteOrderID = 1392650783761
> > > > 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1367771, logWriteOrderID = 1392650783762
> > > > 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143 logWriteOrderID: 1392650783761
> > > > 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1367771 logWriteOrderID: 1392650783762
> > > > 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 100
> > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300, queueHead: 12660
> > > > 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> > > > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100, queueHead: 226054
> > > > 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1402287, logWriteOrderID = 1392650784174
> > > > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287 logWriteOrderID: 1392650784174
> > > > 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1421128, logWriteOrderID = 1392650784175
> > > > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1421128 logWriteOrderID: 1392650784175
> > > > 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 480
> > > > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 278
> > > > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98, queueHead: 13042
> > > > 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0, queueHead: 226332
> > > > 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1514767, logWriteOrderID = 1392650785222
> > > > 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767 logWriteOrderID: 1392650785222
> > > > 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 118
> > > > 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1528845, logWriteOrderID = 1392650785223
> > > > 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0, queueHead: 13160
> > > > 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1528845 logWriteOrderID: 1392650785223
> > > > 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1529781, logWriteOrderID = 1392650785364
> > > > 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781 logWriteOrderID: 1392650785364
> > > > 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 500
> > > > 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 500
> > > > 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500, queueHead: 13158
> > > > 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500, queueHead: 226330
> > > > 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> > > > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > > > 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider: Configuration provider stopping
> > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager stopping
> > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > > > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping Source exec-source
> > > > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> > > > 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795949, logWriteOrderID = 1392650786416
> > > > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1796885, logWriteOrderID = 1392650786415
> > > > 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795949 logWriteOrderID: 1392650786416
> > > > 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command: tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > java.io.IOException: Pipe closed
> > > >         at java.io.PipedInputStream.read(PipedInputStream.java:302)
> > > >         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
> > > >         at java.io.PipedInputStream.read(PipedInputStream.java:372)
> > > >         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
> > > >         at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
> > > >         at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
> > > >         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
> > > >         at java.io.InputStreamReader.read(InputStreamReader.java:188)
> > > >         at java.io.BufferedReader.fill(BufferedReader.java:147)
> > > >         at java.io.BufferedReader.readLine(BufferedReader.java:310)
> > > >         at java.io.BufferedReader.readLine(BufferedReader.java:373)
> > > >         at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
> > > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
> > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hbase-sink
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has already been stopped EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> > > > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > > > java.lang.InterruptedException
> > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> > > >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> > > >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
> > > >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> > > >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> > > >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > > > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch2]
> > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
> > > >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> > > >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> > > >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x144401355b4001d
> > > > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885 logWriteOrderID: 1392650786415
> > > > 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to HDFSWriter (Filesystem closed). Closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp) and rethrowing exception.
> > > > 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp). Exception follows.
> > > > java.io.IOException: Filesystem closed
> > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
> > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > > > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> > > > java.io.IOException: Filesystem closed
> > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
> > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
> > > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499, queueHead: 226331
> > > > 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d closed
> > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hdfs-sink
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }
> > > > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > > > java.lang.InterruptedException
> > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> > > >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> > > >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> > > >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
> > > >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > > > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch1]
> > > >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
> > > >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> > > >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> > > >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> > > > 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> > > > 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795990, logWriteOrderID = 1392650786418
> > > > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > > > 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> > > > java.io.IOException: Filesystem closed
> > > >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> > > >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> > > >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> > > >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
> > > >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
> > > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> > > >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> > > >         at java.lang.Thread.run(Thread.java:738)
> > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink stopped
> > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch1
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> > > > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > > > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795990 logWriteOrderID: 1392650786418
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel/data/log-8
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-7
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-8
> > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 stopped
> > > > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch2
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> > > > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel2/data/log-7
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-6
> > > > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-7
> > > > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 stopped
> > > > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > > > 
> > > > 
> > > > 
> > > > On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com> wrote:
> > > > > Hello Jeff,
> > > > > 
> > > > > Please find the requested logs below. The initial part of the logs was unfortunately not included. I can run these again if necessary, but the Zookeeper connection is included in the logs.
> > > > > 
> > > > > 
> > > > > 
> > > > > On 17 February 2014 16:05, Jeff Lord <jlord@cloudera.com> wrote:
> > > > > > Logs ?
> > > > > > 
> > > > > > On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <kanirip@gmail.com> wrote:
> > > > > > > Dear Mailing Group,
> > > > > > >
> > > > > > > I am currently having issues with the Hbase sink function. I have developed
> > > > > > > an agent with a fanout channel setup ( single source, multiple channels,
> > > > > > > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> > > > > > >
> > > > > > >  The issue is that although the HDFS is working well, the Hbase flow is
> > > > > > > simply not working. There are no errors being reported by Flume for the
> > > > > > > Hbase channel but there are never any records being written to the HBase
> > > > > > > store. The Hbase table as stipulated in the config always remains empty.
> > > > > > > Studying the Flume startup logs I observe that the session connection to
> > > > > > > Zookeeper is always successfully established
> > > > > > >
> > > > > > > Are there any special configurations I am missing out?
> > > > > > >
> > > > > > > I am using the Async Event Serializer to persist the txns.
> > > > > > >
> > > > > > > Any assistance will be greatly appreciated.
> > > > > > >
> > > > > > >
> > > > > > > Please see below for the flume configuration:
> > > > > > >
> > > > > > > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > > > > > > agent.sources=exec-source
> > > > > > > agent.sinks=hdfs-sink hbase-sink
> > > > > > > agent.channels=ch1 ch2
> > > > > > >
> > > > > > > agent.sources.exec-source.type=exec
> > > > > > > agent.sources.exec-source.command=tail -F
> > > > > > > /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > > > >
> > > > > > > agent.sinks.hdfs-sink.type=hdfs
> > > > > > > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > > > > > > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > > > > > > # File size to trigger roll, in bytes (0: never roll based on file size)
> > > > > > > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > > > > > > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > > > > > > # number of events written to file before it is flushed to HDFS
> > > > > > > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > > > > > > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> > > > > > >
> > > > > > >
> > > > > > > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> > > > > > > agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > > > > > > agent.sinks.hbase-sink.table=telco_cdr_rec
> > > > > > > agent.sinks.hbase-sink.columnFamily = colfam
> > > > > > > agent.sinks.hbase-sink.channels = ch2
> > > > > > > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > > > > > > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> > > > > > >
> > > > > > >
> > > > > > > agent.channels.ch1.type=file
> > > > > > > agent.channels.ch1.checkpointInterval=3000
> > > > > > > agent.channels.ch1.transactionCapacity=10000
> > > > > > > agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > > > > > > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > > > > > > agent.channels.ch1.write-timeout=30
> > > > > > > agent.channels.ch1.keep-alive=30
> > > > > > > #agent.channels.ch1.capacity=1000
> > > > > > >
> > > > > > > agent.channels.ch2.type=file
> > > > > > > agent.channels.ch2.checkpointInterval=300
> > > > > > > agent.channels.ch2.transactionCapacity=10000
> > > > > > > agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > > > > > > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > > > > > > agent.channels.ch2.write-timeout=30
> > > > > > > agent.channels.ch2.keep-alive=30
> > > > > > > #agent.channels.ch2.capacity=1000
> > > > > > >
> > > > > > >
> > > > > > > agent.sources.exec-source.channels=ch1 ch2
> > > > > > > agent.sinks.hdfs-sink.channel=ch1
> > > > > > > agent.sinks.hbase-sink.channel=ch2
> > > > > > >
> > > > > 
> > > > 
> > > 
> > 
> 
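[Editor's note] Two details in the quoted configuration are worth checking. The sink property for binding a channel is `channel` (singular), but the quoted file also sets `agent.sinks.hbase-sink.channels = ch2`; and the startup log later in this thread shows the sink being created as `org.apache.flume.sink.hbase.HBaseSink` rather than the configured `AsyncHBaseSink`, which suggests the agent may be reading a different file than the one posted. A minimal sketch for the async sink, assuming the table and column family from this thread (the batchSize value is illustrative):

```properties
agent.sinks.hbase-sink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
# sinks take 'channel' (singular); 'channels' is a source property
agent.sinks.hbase-sink.channel = ch2
agent.sinks.hbase-sink.table = telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
agent.sinks.hbase-sink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
# events taken from the channel per transaction (illustrative value)
agent.sinks.hbase-sink.batchSize = 100
```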


Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hello Hari,

I didn't know it was a holiday in the US.

Please see version information below:

Hbase:
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.3, rab548827f0c52211c1d67437484fcba635072767, Wed Jul 31
18:13:25 PDT 2013


Flume:
[biadmin@bivm bin]$ ./flume-ng version
Flume 1.3.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: abbccbd2ff14dd6fed2a8a3891eb51aff985e9f5
Compiled by jenkins on Wed Jun 12 19:16:33 PDT 2013
From source with checksum dce204011600e67e1455971266d3da07


Thanks for all the assistance.

BR,



On 18 February 2014 20:14, Hari Shreedharan <hs...@cloudera.com> wrote:

>  Hi Kris,
>
> Please realize that people usually work on their own time on these mailing
> lists and since your first message was sent on a Monday early morning on a
> long weekend in the US, others may not have seen your message either.
>
> Are you running Apache Flume and Apache HBase? If yes, what versions
> (output of flume-ng version and hbase version)?
>
>
> Thanks,
> Hari
>
> On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:
>
> Hi,
>
> Can't anybody help with this? I am thinking it's a small issue, because
> everything seems to work fine but the data from the channel never gets
> persisted into HBase.
>
> I have added the description of the HBase table:
>
> hbase(main):005:0> describe 'telco_cdr_rec'
> DESCRIPTION                                                   ENABLED
>  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'colfam',    true
>  REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS => 'false',
>  COMPRESSION => 'NONE', ENCODE_ON_DISK => 'true',
>  BLOCKCACHE => 'true', MIN_VERSIONS => '0',
>  DATA_BLOCK_ENCODING => 'NONE', IN_MEMORY => 'false',
>  BLOOMFILTER => 'NONE', TTL => '2147483647',
>  VERSIONS => '3', BLOCKSIZE => '65536'}]}
> 1 row(s) in 0.1600 seconds
>
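[Editor's note] The family name in the descriptor above ('colfam') does match the sink's `columnFamily` setting, so a mismatch there is not the cause. A quick way to watch whether any cells arrive at all is to run the standard HBase shell commands while the agent is tailing the file (commands only; output depends on the deployment):

```
hbase shell
count 'telco_cdr_rec'
scan 'telco_cdr_rec', {LIMIT => 5}
```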
>
> If no one can help with the problem, can anyone provide a link to the
> Flume -> Zookeeper -> HBase internals documentation so I can trace where
> the error lies?
>
> Are there Zookeeper log files where I can analyse whether Flume actually
> sends the transactions to HBase via Zookeeper?
>
>
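[Editor's note] One way to see where events stall is to restart the agent with Flume's built-in HTTP metrics reporting (`-Dflume.monitoring.type=http -Dflume.monitoring.port=34545`) and fetch `http://<host>:34545/metrics`. A small sketch of reading that JSON; the counter names follow Flume's channel/sink counter groups, and the sample values are hypothetical:

```python
import json

# Hypothetical sample of the JSON served by Flume's HTTP metrics endpoint;
# counter names follow Flume's CHANNEL/SINK counter groups, values invented.
sample = """{
  "CHANNEL.ch2": {"Type": "CHANNEL", "EventPutSuccessCount": "120",
                  "EventTakeSuccessCount": "0", "ChannelSize": "120"},
  "SINK.hbase-sink": {"Type": "SINK", "EventDrainAttemptCount": "0",
                      "EventDrainSuccessCount": "0"}
}"""

metrics = json.loads(sample)

def diagnose(m):
    # Compare what the source put into ch2 with what the HBase sink drained.
    put = int(m.get("CHANNEL.ch2", {}).get("EventPutSuccessCount", 0))
    drained = int(m.get("SINK.hbase-sink", {}).get("EventDrainSuccessCount", 0))
    if put == 0:
        return "no events are reaching ch2 at all"
    if drained == 0:
        return "events reach ch2 but the HBase sink never drains them"
    return "the HBase sink is draining events"

print(diagnose(metrics))
```

A high put count on ch2 with a zero drain count on the sink would point at the sink (or its ZooKeeper/HBase connection) rather than at the source or channel.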
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find the requested logs below. The initial part of the logs was
> unfortunately not included. I can run these again if necessary, but the
> Zookeeper connection is included in the logs.
>
>
> 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider:
> created channel ch2
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hdfs-sink, type: hdfs
> 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new
> configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{} } }, hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch1
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch2
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from
> [/home/biadmin/.flume/file-channel/data/log-7,
> /home/biadmin/.flume/file-channel/data/log-6]
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from
> [/home/biadmin/.flume/file-channel2/data/log-6,
> /home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5]
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel/data/log-6,
> /home/biadmin/.flume/file-channel/data/log-7]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5,
> /home/biadmin/.flume/file-channel2/data/log-6]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get
> maxDirectMemory from VM: NoSuchMethodException:
> sun.misc.VM.maxDirectMemory(null)
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
> Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520,
> Remaining = 20971520
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 32040
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 2496
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 22843
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 0, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 1, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0,
> queueHead: 10516
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0,
> queueHead: 223682
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 0, logWriteOrderID = 1392650774387
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 0, logWriteOrderID = 1392650774388
> 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID:
> 1392650774387
> 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch2]
> 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID:
> 1392650774388
> 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch1]
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch2, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 started
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch1, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 started
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hbase-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hdfs-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source exec-source
> 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: SINK, name: hdfs-sink, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink started
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name
> =bivm
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=IBM Corporation
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/opt/ibm/biginsights/jdk/jre
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1
.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../li
b/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsigh
ts/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-ne
t-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-
1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexe
c/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=j9jit24
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.name=Linux
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.18-194.17.4.el5
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.name=biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/ibm/biginsights/flume/bin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of
> this process is 20984@bivm
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server bivm/192.168.37.128:2181. Will not attempt to authenticate using
> SASL (Unable to locate a login configuration)
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established
> to bivm/192.168.37.128:2181, initiating session
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment
> complete on server bivm/192.168.37.128:2181, sessionid =
> 0x144401355b4001d, negotiated timeout = 60000
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 60
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60,
> queueHead: 10514
> 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 32036, logWriteOrderID = 1392650774536
> 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 32036
> logWriteOrderID: 1392650774536
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 460
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520,
> queueHead: 10514
> 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 277565, logWriteOrderID = 1392650775504
> 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 277565
> logWriteOrderID: 1392650775504
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 540
> 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 423
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137,
> queueHead: 10917
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539,
> queueHead: 223681
> 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 304892, logWriteOrderID = 1392650775933
> 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 304892
> logWriteOrderID: 1392650775933
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 137
> 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 288266, logWriteOrderID = 1392650775934
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0,
> queueHead: 11054
> 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 288266
> logWriteOrderID: 1392650775934
> 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:30:04 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 29
> 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 310581, logWriteOrderID = 1392650776074
> 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550,
> queueHead: 223690
> 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 310581
> logWriteOrderID: 1392650776074
> 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20,
> queueHead: 11052
> 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 299362, logWriteOrderID = 1392650776105
> 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 321308, logWriteOrderID = 1392650776127
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 299362
> logWriteOrderID: 1392650776105
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 321308
> logWriteOrderID: 1392650776127
> 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 21
> 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 38
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569,
> queueHead: 223691
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20,
> queueHead: 11070
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 310040, logWriteOrderID = 1392650776192
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 332801, logWriteOrderID = 1392650776193
> 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 310040
> logWriteOrderID: 1392650776192
> 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 332801
> logWriteOrderID: 1392650776193
> 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0,
> queueHead: 11090
> 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589,
> queueHead: 223691
> 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 333657, logWriteOrderID = 1392650776236
> 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 320738, logWriteOrderID = 1392650776237
> 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 333657
> logWriteOrderID: 1392650776236
> 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 320738
> logWriteOrderID: 1392650776237
> 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 125
> 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464,
> queueHead: 223816
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20,
> queueHead: 11088
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 344355, logWriteOrderID = 1392650776385
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 325863, logWriteOrderID = 1392650776384
> 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 325863
> logWriteOrderID: 1392650776384
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 344355
> logWriteOrderID: 1392650776385
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0,
> queueHead: 11108
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463,
> queueHead: 223817
> 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 335946, logWriteOrderID = 1392650776428
> 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 345211, logWriteOrderID = 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 345211
> logWriteOrderID: 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 335946
> logWriteOrderID: 1392650776428
> 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 70
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473,
> queueHead: 223847
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40,
> queueHead: 11106
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 356818, logWriteOrderID = 1392650776540
> 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 356818
> logWriteOrderID: 1392650776540
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 366536, logWriteOrderID = 1392650776541
> 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 366536
> logWriteOrderID: 1392650776541
> 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 493
> 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0,
> queueHead: 11146
> 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0,
> queueHead: 224340
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 368733, logWriteOrderID = 1392650777082
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 379163, logWriteOrderID = 1392650777083
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 368733
> logWriteOrderID: 1392650777082
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 379163
> logWriteOrderID: 1392650777083
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 900
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900,
> queueHead: 224338
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920,
> queueHead: 11144
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 859009, logWriteOrderID = 1392650778996
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 859505, logWriteOrderID = 1392650778995
> 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 859009
> logWriteOrderID: 1392650778996
> 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 859505
> logWriteOrderID: 1392650778995
> 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0,
> queueHead: 12064
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 22
> 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918,
> queueHead: 224340
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 897089, logWriteOrderID = 1392650779929
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 870220, logWriteOrderID = 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 870220
> logWriteOrderID: 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 897089
> logWriteOrderID: 1392650779929
> 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300,
> queueHead: 12062
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1198
> 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0,
> queueHead: 225538
> 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1057180, logWriteOrderID = 1392650781760
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180
> logWriteOrderID: 1392650781760
> 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1068832, logWriteOrderID = 1392650781761
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1068832
> logWriteOrderID: 1392650781761
> 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 798
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500,
> queueHead: 12360
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 520
> 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519,
> queueHead: 225537
> 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1336479, logWriteOrderID = 1392650783137
> 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1346456, logWriteOrderID = 1392650783138
> 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479
> logWriteOrderID: 1392650783137
> 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1346456
> logWriteOrderID: 1392650783138
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400,
> queueHead: 12460
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 519
> 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0,
> queueHead: 226056
> 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1341143, logWriteOrderID = 1392650783761
> 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1367771, logWriteOrderID = 1392650783762
> 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143
> logWriteOrderID: 1392650783761
> 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1367771
> logWriteOrderID: 1392650783762
> 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300,
> queueHead: 12660
> 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100,
> queueHead: 226054
> 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1402287, logWriteOrderID = 1392650784174
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287
> logWriteOrderID: 1392650784174
> 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1421128, logWriteOrderID = 1392650784175
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1421128
> logWriteOrderID: 1392650784175
> 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 480
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 278
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98,
> queueHead: 13042
> 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0,
> queueHead: 226332
> 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1514767, logWriteOrderID = 1392650785222
> 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767
> logWriteOrderID: 1392650785222
> 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 118
> 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1528845, logWriteOrderID = 1392650785223
> 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0,
> queueHead: 13160
> 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1528845
> logWriteOrderID: 1392650785223
> 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1529781, logWriteOrderID = 1392650785364
> 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781
> logWriteOrderID: 1392650785364
> 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500,
> queueHead: 13158
> 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500,
> queueHead: 226330
> 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
> 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager
> stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting
> down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } },
> hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }}
> channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Source exec-source
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component:
> EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795949, logWriteOrderID = 1392650786416
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1796885, logWriteOrderID = 1392650786415
> 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795949
> logWriteOrderID: 1392650786416
> 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command:
> tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> java.io.IOException: Pipe closed
>         at java.io.PipedInputStream.read(PipedInputStream.java:302)
>         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
>         at java.io.PipedInputStream.read(PipedInputStream.java:372)
>         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
>         at java.io.InputStreamReader.read(InputStreamReader.java:188)
>         at java.io.BufferedReader.fill(BufferedReader.java:147)
>         at java.io.BufferedReader.readLine(BufferedReader.java:310)
>         at java.io.BufferedReader.readLine(BufferedReader.java:373)
>         at
> org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F
> /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hbase-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=2,
> runner.backoffs=59} } }
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has
> already been stopped EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch2]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO
> client.HConnectionManager$HConnectionImplementation: Closed zookeeper
> sessionid=0x144401355b4001d
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885
> logWriteOrderID: 1392650786415
> 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to
> HDFSWriter (Filesystem closed). Closing file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp)
> and rethrowing exception.
> 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing
> file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499,
> queueHead: 226331
> 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d
> closed
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hdfs-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=3,
> runner.backoffs=53} } }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch1]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795990, logWriteOrderID = 1392650786418
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch1
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795990
> logWriteOrderID: 1392650786418
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch2
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 stopped
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
> Hello Jeff,
>
> Please find below requested logs.. Initiation part of the logs were
> unfortunately not included. I can run these again if necessary but the
> Zookeeper connection is included in the logs.
>
>
>
> On 17 February 2014 16:05, Jeff Lord <jl...@cloudera.com> wrote:
>
> Logs ?
>
> On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <ka...@gmail.com> wrote:
> > Dear Mailing Group,
> >
> > I am currently having issues with the Hbase sink function. I have
> developed
> > an agent with a fanout channel setup ( single source, multiple channels,
> > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> >
> >  The issue is that although the HDFS is working well, the Hbase flow is
> > simply not working. There are no errors being reported by Flume for the
> > Hbase channel but there are never any records being written to the HBase
> > store. The Hbase table as stipulated in the config always remains empty.
> > Studying the Flume startup logs I observe that the session connection to
> > Zookeeper is always successfully established
> >
> > Are there any special configurations I am missing out?
> >
> > I am using the Async Event Serializer to persist the txns.
> >
> > Any assistance will be greatly appreciated.
> >
> >
> > Please see below for the flume configuration:
> >
> > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > agent.sources=exec-source
> > agent.sinks=hdfs-sink hbase-sink
> > agent.channels=ch1 ch2
> >
> > agent.sources.exec-source.type=exec
> > agent.sources.exec-source.command=tail -F
> > /home/biadmin/bigdemo/data/rec_telco.cdr
> >
> > agent.sinks.hdfs-sink.type=hdfs
> > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > # File size to trigger roll, in bytes (0: never roll based on file size)
> > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > # number of events written to file before it flushed to HDFS
> > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> >
> agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > agent.sinks.hbase-sink.table=telco_cdr_rec
> > agent.sinks.hbase-sink.columnFamily = colfam
> > agent.sinks.hbase-sink.channels = ch2
> > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> >
> >
> > agent.channels.ch1.type=file
> > agent.channels.ch1.checkpointInterval=3000
> > agent.channels.ch1.transactionCapacity=10000
> >
> agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > agent.channels.ch1.write-timeout=30
> > agent.channels.ch1.keep-alive=30
> > #agent.channels.ch1.capacity=1000
> >
> > agent.channels.ch2.type=file
> > agent.channels.ch2.checkpointInterval=300
> > agent.channels.ch2.transactionCapacity=10000
> >
> agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > agent.channels.ch2.write-timeout=30
> > agent.channels.ch2.keep-alive=30
> > #agent.channels.ch2.capacity=1000
> >
> >
> > agent.sources.exec-source.channels=ch1 ch2
> > agent.sinks.hdfs-sink.channel=ch1
> > agent.sinks.hbase-sink.channel=ch2
> >
>
>
>
>
>
>
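
The fan-out wiring in the configuration above (one source feeding two channels, each channel drained by one sink) can be sanity-checked offline before starting the agent. Below is a minimal sketch in plain Python — not part of Flume, and using only hypothetical helper names — that parses a Flume-style properties file and flags sinks bound to a missing or undeclared channel, which is a common cause of a sink silently receiving no events:

```python
# Minimal offline sanity check for a Flume-style properties file.
# Verifies that every declared sink has a "channel" property pointing at a
# declared channel, and that every channel is fed by at least one source.
# This is an illustrative sketch, not a replacement for Flume's own
# configuration validation.

def parse_props(text):
    """Parse "key=value" lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def check_wiring(props, agent="agent"):
    """Return a list of human-readable wiring problems (empty if none)."""
    problems = []
    channels = set(props.get(f"{agent}.channels", "").split())
    sinks = set(props.get(f"{agent}.sinks", "").split())
    for sink in sorted(sinks):
        ch = props.get(f"{agent}.sinks.{sink}.channel")
        if ch is None:
            problems.append(f"sink {sink} has no channel bound")
        elif ch not in channels:
            problems.append(f"sink {sink} bound to undeclared channel {ch}")
    fed = set()
    for src in props.get(f"{agent}.sources", "").split():
        fed.update(props.get(f"{agent}.sources.{src}.channels", "").split())
    for ch in sorted(channels - fed):
        problems.append(f"channel {ch} is not fed by any source")
    return problems

# Wiring equivalent to the configuration posted in this thread.
conf = """
agent.sources=exec-source
agent.sinks=hdfs-sink hbase-sink
agent.channels=ch1 ch2
agent.sources.exec-source.channels=ch1 ch2
agent.sinks.hdfs-sink.channel=ch1
agent.sinks.hbase-sink.channel=ch2
"""
print(check_wiring(parse_props(conf)))  # → [] (no wiring problems)
```

Note that per-sink binding uses the singular key `channel`, while a source uses the plural `channels`; mixing the two up passes unnoticed at startup but leaves the sink idle, so a check like this catches it early.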

Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Hari Shreedharan <hs...@cloudera.com>.
Hi Kris, 

Please realize that people usually work on their own time on these mailing lists, and since your first message was sent early on a Monday morning during a long weekend in the US, others may not have seen your message either. 

Are you running Apache Flume and Apache HBase? If yes, what versions (output of flume-ng version and hbase version)? 


Thanks,
Hari


On Tuesday, February 18, 2014 at 10:22 AM, Kris Ogirri wrote:

> Hi,
> 
> Can't anybody help with this? I think it's a small issue, because everything seems to work fine, but the data from the channel never gets persisted into HBase.
> 
> I have added the description of the Hbase tables:
> 
> hbase(main):005:0> describe 'telco_cdr_rec'
> DESCRIPTION                                          ENABLED                    
>  {NAME => 'telco_cdr_rec', FAMILIES => [{NAME => 'co true                       
>  lfam', REPLICATION_SCOPE => '0', KEEP_DELETED_CELLS                            
>   => 'false', COMPRESSION => 'NONE', ENCODE_ON_DISK                             
>  => 'true', BLOCKCACHE => 'true', MIN_VERSIONS => '0                            
>  ', DATA_BLOCK_ENCODING => 'NONE', IN_MEMORY => 'fal                            
>  se', BLOOMFILTER => 'NONE', TTL => '2147483647', VE                            
>  RSIONS => '3', BLOCKSIZE => '65536'}]}                                         
> 1 row(s) in 0.1600 seconds
> 
> 
> If no one can help with the problem, can anyone provide a link to the Flume -> Zookeeper -> Hbase Internal documentation so I can trace where the error lies.
> 
> Are there Zookeeper log files where I can analyse whether Flume actually sends the Txns to Hbase via Zookeeper?
> 
> 
> 
> On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com> wrote:
> > Hello Jeff,
> > 
> > Please find below requested logs.. Initiation part of the logs were unfortunately not included. I can run these again if necessary but the Zookeeper connection is included in the logs.
> > 
> > 
> > 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider: created channel ch2
> > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> > 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink, type: hdfs
> > 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch1
> > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel ch2
> > 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > 14/02/17 10:26:14 INFO file.Log: Replay started
> > 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> > 14/02/17 10:26:14 INFO file.Log: Replay started
> > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from [/home/biadmin/.flume/file-channel/data/log-7, /home/biadmin/.flume/file-channel/data/log-6]
> > 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from [/home/biadmin/.flume/file-channel2/data/log-6, /home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5]
> > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with /home/biadmin/.flume/file-channel/checkpoint/checkpoint and /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST 2014, queue depth = 0
> > 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST 2014, queue depth = 0
> > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel/data/log-6, /home/biadmin/.flume/file-channel/data/log-7]
> > 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of [/home/biadmin/.flume/file-channel2/data/log-4, /home/biadmin/.flume/file-channel2/data/log-5, /home/biadmin/.flume/file-channel2/data/log-6]
> > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-6
> > 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-4
> > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
> > 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520, Remaining = 20971520
> > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 32040
> > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel/data/log-7
> > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 2496
> > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-5
> > 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position: 22843
> > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in /home/biadmin/.flume/file-channel2/data/log-5
> > 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying /home/biadmin/.flume/file-channel2/data/log-6
> > 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155, which is beyond the requested checkpoint time: 1392650490155 and position 0
> > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
> > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel2/data
> > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel2/data
> > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel2/data/log-7
> > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in /home/biadmin/.flume/file-channel/data/log-7
> > 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in /home/biadmin/.flume/file-channel/data/log-6
> > 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0, rollback: 0, commit: 0, skip: 1, eventCount:0
> > 14/02/17 10:26:16 INFO file.Log: Rolling /home/biadmin/.flume/file-channel/data
> > 14/02/17 10:26:16 INFO file.Log: Roll start /home/biadmin/.flume/file-channel/data
> > 14/02/17 10:26:16 INFO file.LogFile: Opened /home/biadmin/.flume/file-channel/data/log-8
> > 14/02/17 10:26:16 INFO file.Log: Roll end
> > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 0
> > 14/02/17 10:26:16 INFO file.Log: Roll end
> > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 0
> > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0, queueHead: 10516
> > 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0, queueHead: 223682
> > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition = 0, logWriteOrderID = 1392650774387
> > 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition = 0, logWriteOrderID = 1392650774388
> > 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID: 1392650774387
> > 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch2]
> > 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID: 1392650774388
> > 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0 [channel=ch1]
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch2, registered successfully.
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 started
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: ch1, registered successfully.
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
> > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hbase-sink
> > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hdfs-sink
> > 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Source exec-source
> > 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: hdfs-sink, registered successfully.
> > 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink started
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name=bivm
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.vendor=IBM Corporation
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.home=/opt/ibm/biginsights/jdk/jre
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/fl
ume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-l
ang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsig
hts/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons
-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/big
insights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/bigi
nsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar
:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hba
se/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-net-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights
/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/l
ib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/
../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/bigi
nsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/
opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:java.compiler=j9jit24
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-194.17.4.el5
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/biadmin
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.dir=/opt/ibm/biginsights/flume/bin
> > 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> > 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 20984@bivm
> > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to server bivm/192.168.37.128:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
> > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established to bivm/192.168.37.128:2181, initiating session
> > 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment complete on server bivm/192.168.37.128:2181, sessionid = 0x144401355b4001d, negotiated timeout = 60000
> > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 60
> > 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60, queueHead: 10514
> > 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 32036, logWriteOrderID = 1392650774536
> > 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 32036 logWriteOrderID: 1392650774536
> > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-4
> > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> > 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-5
> > 14/02/17 10:29:57 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 460
> > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520, queueHead: 10514
> > 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 277565, logWriteOrderID = 1392650775504
> > 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 277565 logWriteOrderID: 1392650775504
> > 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 540
> > 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> > 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 423
> > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137, queueHead: 10917
> > 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539, queueHead: 223681
> > 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition = 304892, logWriteOrderID = 1392650775933
> > 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 304892 logWriteOrderID: 1392650775933
> > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 137
> > 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition = 288266, logWriteOrderID = 1392650775934
> > 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0, queueHead: 11054
> > 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 288266 logWriteOrderID: 1392650775934
> > 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-6
> > 14/02/17 10:30:04 INFO file.Log: Removing old log /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> > 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 29
> > 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition = 310581, logWriteOrderID = 1392650776074
> > 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550, queueHead: 223690
> > 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 310581 logWriteOrderID: 1392650776074
> > 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20, queueHead: 11052
> > 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition = 299362, logWriteOrderID = 1392650776105
> > 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition = 321308, logWriteOrderID = 1392650776127
> > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 299362 logWriteOrderID: 1392650776105
> > 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 321308 logWriteOrderID: 1392650776127
> > 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 21
> > 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 38
> > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569, queueHead: 223691
> > 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20, queueHead: 11070
> > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition = 310040, logWriteOrderID = 1392650776192
> > 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition = 332801, logWriteOrderID = 1392650776193
> > 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 310040 logWriteOrderID: 1392650776192
> > 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 332801 logWriteOrderID: 1392650776193
> > 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 20
> > 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> > 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0, queueHead: 11090
> > 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> > 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589, queueHead: 223691
> > 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition = 333657, logWriteOrderID = 1392650776236
> > 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition = 320738, logWriteOrderID = 1392650776237
> > 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 333657 logWriteOrderID: 1392650776236
> > 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 320738 logWriteOrderID: 1392650776237
> > 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 125
> > 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464, queueHead: 223816
> > 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20, queueHead: 11088
> > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition = 344355, logWriteOrderID = 1392650776385
> > 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition = 325863, logWriteOrderID = 1392650776384
> > 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 325863 logWriteOrderID: 1392650776384
> > 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 344355 logWriteOrderID: 1392650776385
> > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 20
> > 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0, queueHead: 11108
> > 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463, queueHead: 223817
> > 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition = 335946, logWriteOrderID = 1392650776428
> > 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition = 345211, logWriteOrderID = 1392650776427
> > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 345211 logWriteOrderID: 1392650776427
> > 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 335946 logWriteOrderID: 1392650776428
> > 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 70
> > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473, queueHead: 223847
> > 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40, queueHead: 11106
> > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition = 356818, logWriteOrderID = 1392650776540
> > 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 356818 logWriteOrderID: 1392650776540
> > 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition = 366536, logWriteOrderID = 1392650776541
> > 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 366536 logWriteOrderID: 1392650776541
> > 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 493
> > 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 40
> > 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0, queueHead: 11146
> > 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0, queueHead: 224340
> > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition = 368733, logWriteOrderID = 1392650777082
> > 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 379163, logWriteOrderID = 1392650777083
> > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 368733 logWriteOrderID: 1392650777082
> > 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 379163 logWriteOrderID: 1392650777083
> > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 900
> > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900, queueHead: 224338
> > 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920, queueHead: 11144
> > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition = 859009, logWriteOrderID = 1392650778996
> > 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition = 859505, logWriteOrderID = 1392650778995
> > 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 859009 logWriteOrderID: 1392650778996
> > 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 920
> > 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 859505 logWriteOrderID: 1392650778995
> > 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0, queueHead: 12064
> > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> > 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> > 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 22
> > 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918, queueHead: 224340
> > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition = 897089, logWriteOrderID = 1392650779929
> > 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition = 870220, logWriteOrderID = 1392650779951
> > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 870220 logWriteOrderID: 1392650779951
> > 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 897089 logWriteOrderID: 1392650779929
> > 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300, queueHead: 12062
> > 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1198
> > 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0, queueHead: 225538
> > 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1057180, logWriteOrderID = 1392650781760
> > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180 logWriteOrderID: 1392650781760
> > 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1068832, logWriteOrderID = 1392650781761
> > 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1068832 logWriteOrderID: 1392650781761
> > 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 798
> > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500, queueHead: 12360
> > 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 520
> > 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519, queueHead: 225537
> > 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1336479, logWriteOrderID = 1392650783137
> > 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1346456, logWriteOrderID = 1392650783138
> > 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479 logWriteOrderID: 1392650783137
> > 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 100
> > 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1346456 logWriteOrderID: 1392650783138
> > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400, queueHead: 12460
> > 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 519
> > 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0, queueHead: 226056
> > 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1341143, logWriteOrderID = 1392650783761
> > 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1367771, logWriteOrderID = 1392650783762
> > 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143 logWriteOrderID: 1392650783761
> > 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1367771 logWriteOrderID: 1392650783762
> > 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 300
> > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 100
> > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300, queueHead: 12660
> > 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> > 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100, queueHead: 226054
> > 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1402287, logWriteOrderID = 1392650784174
> > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287 logWriteOrderID: 1392650784174
> > 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1421128, logWriteOrderID = 1392650784175
> > 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1421128 logWriteOrderID: 1392650784175
> > 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 480
> > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 278
> > 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98, queueHead: 13042
> > 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0, queueHead: 226332
> > 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1514767, logWriteOrderID = 1392650785222
> > 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767 logWriteOrderID: 1392650785222
> > 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 118
> > 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1528845, logWriteOrderID = 1392650785223
> > 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0, queueHead: 13160
> > 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1528845 logWriteOrderID: 1392650785223
> > 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1529781, logWriteOrderID = 1392650785364
> > 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781 logWriteOrderID: 1392650785364
> > 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to sync = 500
> > 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 500
> > 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500, queueHead: 13158
> > 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500, queueHead: 226330
> > 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider: Configuration provider stopping
> > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager stopping
> > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }} sinkRunners:{hbase-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }, hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }} channels:{ch1=FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }} }
> > 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping Source exec-source
> > 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> > 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795949, logWriteOrderID = 1392650786416
> > 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition = 1796885, logWriteOrderID = 1392650786415
> > 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795949 logWriteOrderID: 1392650786416
> > 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command: tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> > java.io.IOException: Pipe closed
> >         at java.io.PipedInputStream.read(PipedInputStream.java:302)
> >         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
> >         at java.io.PipedInputStream.read(PipedInputStream.java:372)
> >         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
> >         at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
> >         at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
> >         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
> >         at java.io.InputStreamReader.read(InputStreamReader.java:188)
> >         at java.io.BufferedReader.fill(BufferedReader.java:147)
> >         at java.io.BufferedReader.readLine(BufferedReader.java:310)
> >         at java.io.BufferedReader.readLine(BufferedReader.java:373)
> >         at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hbase-sink
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{ name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } }
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has already been stopped EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > java.lang.InterruptedException
> >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
> >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch2]
> >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
> >         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
> >         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
> >         at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
> >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x144401355b4001d
> > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885 logWriteOrderID: 1392650786415
> > 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to HDFSWriter (Filesystem closed). Closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp) and rethrowing exception.
> > 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing file (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp). Exception follows.
> > java.io.IOException: Filesystem closed
> >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to sync = 1
> > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> > java.io.IOException: Filesystem closed
> >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
> >         at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499, queueHead: 226331
> > 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d closed
> > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink hdfs-sink
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{ name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }
> > 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared lock
> > java.lang.InterruptedException
> >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
> >         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
> >         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
> >         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
> >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
> >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> > org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=ch1]
> >         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
> >         at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
> >         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> >         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> > 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> > 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition = 1795990, logWriteOrderID = 1392650786418
> > 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed: hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> > 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> > java.io.IOException: Filesystem closed
> >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
> >         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
> >         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
> >         at org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
> >         at org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
> >         at org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
> >         at org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
> >         at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
> >         at org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
> >         at java.lang.Thread.run(Thread.java:738)
> > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink stopped
> > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch1
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> > 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file: /home/biadmin/.flume/file-channel/data/log-8 position: 1795990 logWriteOrderID: 1392650786418
> > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel/data/log-8
> > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-7
> > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel/data/log-8
> > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 stopped
> > 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Channel ch2
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component: FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> > 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> > 14/02/17 10:32:58 INFO file.LogFile: Closing /home/biadmin/.flume/file-channel2/data/log-7
> > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-6
> > 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader /home/biadmin/.flume/file-channel2/data/log-7
> > 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch2 stopped
> > 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 9
> > 
> > 
> > 
> > On 17 February 2014 16:38, Kris Ogirri <kanirip@gmail.com (mailto:kanirip@gmail.com)> wrote:
> > > Hello Jeff,
> > > 
> > > Please find the requested logs below. The initial part of the logs was unfortunately not included. I can run these again if necessary, but the Zookeeper connection is included in the logs.
> > > 
> > > 
> > > 
> > > On 17 February 2014 16:05, Jeff Lord <jlord@cloudera.com (mailto:jlord@cloudera.com)> wrote:
> > > > Logs ?
> > > > 
> > > > On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <kanirip@gmail.com (mailto:kanirip@gmail.com)> wrote:
> > > > > Dear Mailing Group,
> > > > >
> > > > > I am currently having issues with the Hbase sink function. I have developed
> > > > > an agent with a fanout channel setup ( single source, multiple channels,
> > > > > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
> > > > >
> > > > >  The issue is that although the HDFS is working well, the Hbase flow is
> > > > > simply not working. There are no errors being reported by Flume for the
> > > > > Hbase channel but there are never any records being written to the HBase
> > > > > store. The Hbase table as stipulated in the config always remains empty.
> > > > > Studying the Flume startup logs I observe that the session connection to
> > > > > Zookeeper is always successfully established
> > > > >
> > > > > Are there any special configurations I am missing out?
> > > > >
> > > > > I am using the Async Event Serializer to persist the txns.
> > > > >
> > > > > Any assistance will be greatly appreciated.
> > > > >
> > > > >
> > > > > Please see below for the flume configuration:
> > > > >
> > > > > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
> > > > > agent.sources=exec-source
> > > > > agent.sinks=hdfs-sink hbase-sink
> > > > > agent.channels=ch1 ch2
> > > > >
> > > > > agent.sources.exec-source.type=exec
> > > > > agent.sources.exec-source.command=tail -F
> > > > > /home/biadmin/bigdemo/data/rec_telco.cdr
> > > > >
> > > > > agent.sinks.hdfs-sink.type=hdfs
> > > > > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
> > > > > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
> > > > > # File size to trigger roll, in bytes (0: never roll based on file size)
> > > > > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
> > > > > agent.sinks.hdfs-sink.hdfs.rollCount = 0
> > > > > # number of events written to file before it flushed to HDFS
> > > > > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
> > > > > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
> > > > >
> > > > >
> > > > > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
> > > > > agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
> > > > > agent.sinks.hbase-sink.table=telco_cdr_rec
> > > > > agent.sinks.hbase-sink.columnFamily = colfam
> > > > > agent.sinks.hbase-sink.channels = ch2
> > > > > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
> > > > > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
> > > > >
> > > > >
> > > > > agent.channels.ch1.type=file
> > > > > agent.channels.ch1.checkpointInterval=3000
> > > > > agent.channels.ch1.transactionCapacity=10000
> > > > > agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
> > > > > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
> > > > > agent.channels.ch1.write-timeout=30
> > > > > agent.channels.ch1.keep-alive=30
> > > > > #agent.channels.ch1.capacity=1000
> > > > >
> > > > > agent.channels.ch2.type=file
> > > > > agent.channels.ch2.checkpointInterval=300
> > > > > agent.channels.ch2.transactionCapacity=10000
> > > > > agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
> > > > > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
> > > > > agent.channels.ch2.write-timeout=30
> > > > > agent.channels.ch2.keep-alive=30
> > > > > #agent.channels.ch2.capacity=1000
> > > > >
> > > > >
> > > > > agent.sources.exec-source.channels=ch1 ch2
> > > > > agent.sinks.hdfs-sink.channel=ch1
> > > > > agent.sinks.hbase-sink.channel=ch2
> > > > >
> > > 
> > 
> 


Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hi,

Can't anybody help with this? I suspect it is a small issue, because
everything appears to work fine, yet the data from the channel never gets
persisted into HBase.

I have added the description of the HBase table below:

hbase(main):005:0> describe 'telco_cdr_rec'
DESCRIPTION                                          ENABLED
 {NAME => 'telco_cdr_rec', FAMILIES => [{NAME =>     true
 'colfam', REPLICATION_SCOPE => '0',
 KEEP_DELETED_CELLS => 'false', COMPRESSION =>
 'NONE', ENCODE_ON_DISK => 'true', BLOCKCACHE =>
 'true', MIN_VERSIONS => '0', DATA_BLOCK_ENCODING
 => 'NONE', IN_MEMORY => 'false', BLOOMFILTER =>
 'NONE', TTL => '2147483647', VERSIONS => '3',
 BLOCKSIZE => '65536'}]}
1 row(s) in 0.1600 seconds


If no one can help with the problem, can anyone provide a link to the Flume
-> ZooKeeper -> HBase internals documentation so that I can trace where the
error lies?

Are there ZooKeeper log files where I can check whether Flume actually
sends the transactions to HBase via ZooKeeper?
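For anyone reproducing this, two quick checks from the HBase shell can confirm whether the sink ever writes anything. This is a sketch only: it assumes the table name `telco_cdr_rec` from the configuration above and a working `hbase shell` on the same host, and it needs a live HBase cluster to run.

```shell
# Count the rows in the target table; a count of 0 means the sink
# has never successfully written an event:
echo "count 'telco_cdr_rec'" | hbase shell

# Scan a handful of rows to inspect what, if anything, was written
# (column family and payload/increment columns from the serializer):
echo "scan 'telco_cdr_rec', {LIMIT => 5}" | hbase shell
```

If the count stays at 0 while the Flume logs show no sink errors, the problem is most likely between the sink configuration and the table (wrong sink type, table, or column family) rather than in ZooKeeper itself.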



On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:

> Hello Jeff,
>
> Please find the requested logs below. The initial part of the logs was
> unfortunately not included. I can run these again if necessary, but the
> Zookeeper connection is included in the logs.
>
>
> 14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider:
> created channel ch2
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
> 14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hdfs-sink, type: hdfs
> 14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new
> configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{} } }, hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch1
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel ch2
> 14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
> 14/02/17 10:26:14 INFO file.Log: Replay started
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from
> [/home/biadmin/.flume/file-channel/data/log-7,
> /home/biadmin/.flume/file-channel/data/log-6]
> 14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from
> [/home/biadmin/.flume/file-channel2/data/log-6,
> /home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5]
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint and
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
> checkpoint metadata from
> /home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST
> 2014, queue depth = 0
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel/data/log-6,
> /home/biadmin/.flume/file-channel/data/log-7]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
> [/home/biadmin/.flume/file-channel2/data/log-4,
> /home/biadmin/.flume/file-channel2/data/log-5,
> /home/biadmin/.flume/file-channel2/data/log-6]
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get
> maxDirectMemory from VM: NoSuchMethodException:
> sun.misc.VM.maxDirectMemory(null)
> 14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
> Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520,
> Remaining = 20971520
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 32040
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 2496
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
> 22843
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
> file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155,
> which is beyond the requested checkpoint time: 1392650490155 and position 0
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 0, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel2/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 1, eventCount:0
> 14/02/17 10:26:16 INFO file.Log: Rolling
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.Log: Roll start
> /home/biadmin/.flume/file-channel/data
> 14/02/17 10:26:16 INFO file.LogFile: Opened
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.Log: Roll end
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 0
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774387, queueSize: 0,
> queueHead: 10516
> 14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774388, queueSize: 0,
> queueHead: 223682
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 0, logWriteOrderID = 1392650774387
> 14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 0, logWriteOrderID = 1392650774388
> 14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID:
> 1392650774387
> 14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch2]
> 14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID:
> 1392650774388
> 14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0
> [channel=ch1]
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch2, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 started
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: ch1, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 started
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hbase-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink hdfs-sink
> 14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source exec-source
> 14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: SINK, name: hdfs-sink, registered successfully.
> 14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink started
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name=bivm
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=IBM Corporation
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/opt/ibm/biginsights/jdk/jre
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1
.jar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../li
b/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsigh
ts/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-ne
t-3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-
1.0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexe
c/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=j9jit24
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.18-194.17.4.el5
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name=biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/home/biadmin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/opt/ibm/biginsights/flume/bin
> 14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
> 14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of
> this process is 20984@bivm
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server bivm/192.168.37.128:2181. Will not attempt to authenticate using
> SASL (Unable to locate a login configuration)
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established
> to bivm/192.168.37.128:2181, initiating session
> 14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment
> complete on server bivm/192.168.37.128:2181, sessionid =
> 0x144401355b4001d, negotiated timeout = 60000
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 60
> 14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650774536, queueSize: 60,
> queueHead: 10514
> 14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 32036, logWriteOrderID = 1392650774536
> 14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 32036
> logWriteOrderID: 1392650774536
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-4
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
> 14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-5
> 14/02/17 10:29:57 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 460
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775504, queueSize: 520,
> queueHead: 10514
> 14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 277565, logWriteOrderID = 1392650775504
> 14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 277565
> logWriteOrderID: 1392650775504
> 14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 540
> 14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
> 14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 423
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775933, queueSize: 137,
> queueHead: 10917
> 14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650775934, queueSize: 539,
> queueHead: 223681
> 14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 304892, logWriteOrderID = 1392650775933
> 14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 304892
> logWriteOrderID: 1392650775933
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 137
> 14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 288266, logWriteOrderID = 1392650775934
> 14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776074, queueSize: 0,
> queueHead: 11054
> 14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 288266
> logWriteOrderID: 1392650775934
> 14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-6
> 14/02/17 10:30:04 INFO file.Log: Removing old log
> /home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
> 14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 29
> 14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 310581, logWriteOrderID = 1392650776074
> 14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776105, queueSize: 550,
> queueHead: 223690
> 14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 310581
> logWriteOrderID: 1392650776074
> 14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776127, queueSize: 20,
> queueHead: 11052
> 14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 299362, logWriteOrderID = 1392650776105
> 14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 321308, logWriteOrderID = 1392650776127
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 299362
> logWriteOrderID: 1392650776105
> 14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 321308
> logWriteOrderID: 1392650776127
> 14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 21
> 14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 38
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776192, queueSize: 569,
> queueHead: 223691
> 14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776193, queueSize: 20,
> queueHead: 11070
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 310040, logWriteOrderID = 1392650776192
> 14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 332801, logWriteOrderID = 1392650776193
> 14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 310040
> logWriteOrderID: 1392650776192
> 14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 332801
> logWriteOrderID: 1392650776193
> 14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
> 14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776236, queueSize: 0,
> queueHead: 11090
> 14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
> 14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776237, queueSize: 589,
> queueHead: 223691
> 14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 333657, logWriteOrderID = 1392650776236
> 14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 320738, logWriteOrderID = 1392650776237
> 14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 333657
> logWriteOrderID: 1392650776236
> 14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 320738
> logWriteOrderID: 1392650776237
> 14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 125
> 14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776384, queueSize: 464,
> queueHead: 223816
> 14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776385, queueSize: 20,
> queueHead: 11088
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 344355, logWriteOrderID = 1392650776385
> 14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 325863, logWriteOrderID = 1392650776384
> 14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 325863
> logWriteOrderID: 1392650776384
> 14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 344355
> logWriteOrderID: 1392650776385
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 20
> 14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776427, queueSize: 0,
> queueHead: 11108
> 14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776428, queueSize: 463,
> queueHead: 223817
> 14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 335946, logWriteOrderID = 1392650776428
> 14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 345211, logWriteOrderID = 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 345211
> logWriteOrderID: 1392650776427
> 14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 335946
> logWriteOrderID: 1392650776428
> 14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 70
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776540, queueSize: 473,
> queueHead: 223847
> 14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650776541, queueSize: 40,
> queueHead: 11106
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 356818, logWriteOrderID = 1392650776540
> 14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 356818
> logWriteOrderID: 1392650776540
> 14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 366536, logWriteOrderID = 1392650776541
> 14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 366536
> logWriteOrderID: 1392650776541
> 14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 493
> 14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 40
> 14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777082, queueSize: 0,
> queueHead: 11146
> 14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650777083, queueSize: 0,
> queueHead: 224340
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 368733, logWriteOrderID = 1392650777082
> 14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 379163, logWriteOrderID = 1392650777083
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 368733
> logWriteOrderID: 1392650777082
> 14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 379163
> logWriteOrderID: 1392650777083
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 900
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778995, queueSize: 900,
> queueHead: 224338
> 14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650778996, queueSize: 920,
> queueHead: 11144
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 859009, logWriteOrderID = 1392650778996
> 14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 859505, logWriteOrderID = 1392650778995
> 14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 859009
> logWriteOrderID: 1392650778996
> 14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 920
> 14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 859505
> logWriteOrderID: 1392650778995
> 14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779929, queueSize: 0,
> queueHead: 12064
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
> 14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
> 14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 22
> 14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650779951, queueSize: 918,
> queueHead: 224340
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 897089, logWriteOrderID = 1392650779929
> 14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 870220, logWriteOrderID = 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 870220
> logWriteOrderID: 1392650779951
> 14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 897089
> logWriteOrderID: 1392650779929
> 14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781760, queueSize: 300,
> queueHead: 12062
> 14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1198
> 14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650781761, queueSize: 0,
> queueHead: 225538
> 14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1057180, logWriteOrderID = 1392650781760
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1057180
> logWriteOrderID: 1392650781760
> 14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1068832, logWriteOrderID = 1392650781761
> 14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1068832
> logWriteOrderID: 1392650781761
> 14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 798
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783137, queueSize: 500,
> queueHead: 12360
> 14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 520
> 14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783138, queueSize: 519,
> queueHead: 225537
> 14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1336479, logWriteOrderID = 1392650783137
> 14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1346456, logWriteOrderID = 1392650783138
> 14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1336479
> logWriteOrderID: 1392650783137
> 14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1346456
> logWriteOrderID: 1392650783138
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783761, queueSize: 400,
> queueHead: 12460
> 14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 519
> 14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650783762, queueSize: 0,
> queueHead: 226056
> 14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1341143, logWriteOrderID = 1392650783761
> 14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1367771, logWriteOrderID = 1392650783762
> 14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1341143
> logWriteOrderID: 1392650783761
> 14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1367771
> logWriteOrderID: 1392650783762
> 14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 300
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 100
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784174, queueSize: 300,
> queueHead: 12660
> 14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to
> hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
> 14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650784175, queueSize: 100,
> queueHead: 226054
> 14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1402287, logWriteOrderID = 1392650784174
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1402287
> logWriteOrderID: 1392650784174
> 14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1421128, logWriteOrderID = 1392650784175
> 14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1421128
> logWriteOrderID: 1392650784175
> 14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 480
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 278
> 14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785222, queueSize: 98,
> queueHead: 13042
> 14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785223, queueSize: 0,
> queueHead: 226332
> 14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1514767, logWriteOrderID = 1392650785222
> 14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1514767
> logWriteOrderID: 1392650785222
> 14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 118
> 14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1528845, logWriteOrderID = 1392650785223
> 14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650785364, queueSize: 0,
> queueHead: 13160
> 14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1528845
> logWriteOrderID: 1392650785223
> 14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1529781, logWriteOrderID = 1392650785364
> 14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1529781
> logWriteOrderID: 1392650785364
> 14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 500
> 14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786415, queueSize: 500,
> queueHead: 13158
> 14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786416, queueSize: 500,
> queueHead: 226330
> 14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
> 14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager
> stopping
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting
> down configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }}
> sinkRunners:{hbase-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
> name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } },
> hdfs-sink=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
> name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }}
> channels:{ch1=FileChannel ch1 { dataDirs:
> [/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
> [/home/biadmin/.flume/file-channel2/data] }} }
> 14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Source exec-source
> 14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component:
> EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
> 14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with
> command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795949, logWriteOrderID = 1392650786416
> 14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition
> = 1796885, logWriteOrderID = 1392650786415
> 14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795949
> logWriteOrderID: 1392650786416
> 14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command:
> tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
> java.io.IOException: Pipe closed
>         at java.io.PipedInputStream.read(PipedInputStream.java:302)
>         at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
>         at java.io.PipedInputStream.read(PipedInputStream.java:372)
>         at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
>         at
> sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
>         at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
>         at java.io.InputStreamReader.read(InputStreamReader.java:188)
>         at java.io.BufferedReader.fill(BufferedReader.java:147)
>         at java.io.BufferedReader.readLine(BufferedReader.java:310)
>         at java.io.BufferedReader.readLine(BufferedReader.java:373)
>         at
> org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F
> /home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hbase-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=2,
> runner.backoffs=59} } }
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has
> already been stopped EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch2]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at
> org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at
> org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO
> client.HConnectionManager$HConnectionImplementation: Closed zookeeper
> sessionid=0x144401355b4001d
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel2/data/log-7 position: 1796885
> logWriteOrderID: 1392650786415
> 14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to
> HDFSWriter (Filesystem closed). Closing file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp)
> and rethrowing exception.
> 14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing
> file
> (hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp).
> Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
> sync = 1
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating
> checkpoint metadata: logWriteOrderID: 1392650786418, queueSize: 499,
> queueHead: 226331
> 14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d
> closed
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Sink hdfs-sink
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01
> counterGroup:{ name:null counters:{runner.backoffs.consecutive=3,
> runner.backoffs=53} } }
> 14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
> lock
> java.lang.InterruptedException
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
>         at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
>         at
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
>         at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
> Exception follows.
> org.apache.flume.ChannelException: Failed to obtain lock for writing to
> the log. Try increasing the log write timeout value. [channel=ch1]
>         at
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
>         at
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
>         at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>         at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
> 14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
> 14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition
> = 1795990, logWriteOrderID = 1392650786418
> 14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
> 14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing
> hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
> java.io.IOException: Filesystem closed
>         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
>         at
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
>         at
> org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
>         at
> org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
>         at
> org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
>         at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
>         at java.lang.Thread.run(Thread.java:738)
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs-sink stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch1
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 {
> dataDirs: [/home/biadmin/.flume/file-channel/data] }...
> 14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
> /home/biadmin/.flume/file-channel/data/log-8 position: 1795990
> logWriteOrderID: 1392650786418
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel/data/log-8
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch1 stopped
> 14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
> Channel ch2
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
> FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
> 14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 {
> dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
> 14/02/17 10:32:58 INFO file.LogFile: Closing
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-6
> 14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
> /home/biadmin/.flume/file-channel2/data/log-7
> 14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: ch2 stopped
> 14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
> supervisor 9
>
>
>
> On 17 February 2014 16:38, Kris Ogirri <ka...@gmail.com> wrote:
>
>> Hello Jeff,
>>
>> Please find the requested logs below. The initial part of the logs was
>> unfortunately not included. I can run these again if necessary, but the
>> Zookeeper connection is included in the logs.
>>
>>
>>
>> On 17 February 2014 16:05, Jeff Lord <jl...@cloudera.com> wrote:
>>
>>> Logs ?
>>>
>>> On Mon, Feb 17, 2014 at 5:51 AM, Kris Ogirri <ka...@gmail.com> wrote:
>>> > Dear Mailing Group,
>>> >
>>> > I am currently having issues with the Hbase sink function. I have
>>> developed
>>> > an agent with a fanout channel setup ( single source, multiple
>>> channels,
>>> > multiple sinks) sinking to a HDFS cluster and Hbase deployment.
>>> >
>>> >  The issue is that although the HDFS is working well, the Hbase flow is
>>> > simply not working. There are no errors being reported by Flume for the
>>> > Hbase channel but there are never any records being written to the
>>> HBase
>>> > store. The Hbase table as stipulated in the config always remains
>>> empty.
>>> > Studying the Flume startup logs I observe that the session connection
>>> to
>>> > Zookeeper is always successfully established
>>> >
>>> > Are there any special configurations I am missing out?
>>> >
>>> > I am using the Async Event Serializer to persist the txns.
>>> >
>>> > Any assistance will be greatly appreciated.
>>> >
>>> >
>>> > Please see below for the flume configuration:
>>> >
>>> > [biadmin@bivm bin]$ cat flume-conf.properties.bigdemo
>>> > agent.sources=exec-source
>>> > agent.sinks=hdfs-sink hbase-sink
>>> > agent.channels=ch1 ch2
>>> >
>>> > agent.sources.exec-source.type=exec
>>> > agent.sources.exec-source.command=tail -F
>>> > /home/biadmin/bigdemo/data/rec_telco.cdr
>>> >
>>> > agent.sinks.hdfs-sink.type=hdfs
>>> > agent.sinks.hdfs-sink.hdfs.path=hdfs://XXXX:9000/user/biadmin/bigdemo/
>>> > agent.sinks.hdfs-sink.hdfs.filePrefix=telco_cdr_rec
>>> > # File size to trigger roll, in bytes (0: never roll based on file
>>> size)
>>> > agent.sinks.hdfs-sink.hdfs.rollSize = 134217728
>>> > agent.sinks.hdfs-sink.hdfs.rollCount = 0
>>> > # number of events written to file before it flushed to HDFS
>>> > agent.sinks.hdfs-sink.hdfs.batchSize = 10000
>>> > agent.sinks.hdfs-sink.hdfs.txnEventMax = 40000
>>> >
>>> >
>>> > agent.sinks.hbase-sink.type=org.apache.flume.sink.hbase.AsyncHBaseSink
>>> >
>>> agent.sinks.hbase-sink.serializer=org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
>>> > agent.sinks.hbase-sink.table=telco_cdr_rec
>>> > agent.sinks.hbase-sink.columnFamily = colfam
>>> > agent.sinks.hbase-sink.channels = ch2
>>> > #agent.sinks.hbase-sink.hdfs.batchSize = 10000
>>> > #agent.sinks.hbase-sink.hdfs.txnEventMax = 40000
>>> >
>>> >
>>> > agent.channels.ch1.type=file
>>> > agent.channels.ch1.checkpointInterval=3000
>>> > agent.channels.ch1.transactionCapacity=10000
>>> >
>>> agent.channels.ch1.checkpointDir=/home/BDadmin/.flume/file-channel/checkpoint
>>> > agent.channels.ch1.dataDirs=/home/BDadmin/.flume/file-channel/data
>>> > agent.channels.ch1.write-timeout=30
>>> > agent.channels.ch1.keep-alive=30
>>> > #agent.channels.ch1.capacity=1000
>>> >
>>> > agent.channels.ch2.type=file
>>> > agent.channels.ch2.checkpointInterval=300
>>> > agent.channels.ch2.transactionCapacity=10000
>>> >
>>> agent.channels.ch2.checkpointDir=/home/BDadmin/.flume/file-channel2/checkpoint
>>> > agent.channels.ch2.dataDirs=/home/BDadmin/.flume/file-channel2/data
>>> > agent.channels.ch2.write-timeout=30
>>> > agent.channels.ch2.keep-alive=30
>>> > #agent.channels.ch2.capacity=1000
>>> >
>>> >
>>> > agent.sources.exec-source.channels=ch1 ch2
>>> > agent.sinks.hdfs-sink.channel=ch1
>>> > agent.sinks.hbase-sink.channel=ch2
>>> >

Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hello Jeff,

Please find the requested logs below. The initialization part of the logs was
unfortunately not captured; I can run this again if necessary, but the
ZooKeeper connection is included in the logs.
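One thing I am also double-checking against the Flume 1.3.0 user guide is the
spelling of the sink and serializer options. My working notes for the HBase
sink look like this (a sketch only -- the un-prefixed batchSize and the
serializer.payloadColumn key are taken from the AsyncHBaseSink documentation,
not from a setup I have verified; pCol is a placeholder qualifier):

# AsyncHBaseSink options are not prefixed with "hdfs."
agent.sinks.hbase-sink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
agent.sinks.hbase-sink.table = telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
agent.sinks.hbase-sink.batchSize = 100
agent.sinks.hbase-sink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
# Column qualifier the serializer writes each event body into
agent.sinks.hbase-sink.serializer.payloadColumn = pCol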


14/02/17 10:26:12 INFO properties.PropertiesFileConfigurationProvider:
created channel ch2
14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
hbase-sink, type: org.apache.flume.sink.hbase.HBaseSink
14/02/17 10:26:13 INFO sink.DefaultSinkFactory: Creating instance of sink:
hdfs-sink, type: hdfs
14/02/17 10:26:14 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting new
configuration:{ sourceRunners:{exec-source=EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource{name:exec-source,state:IDLE} }}
sinkRunners:{hbase-sink=SinkRunner: {
policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
name:null counters:{} } }, hdfs-sink=SinkRunner: {
policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
name:null counters:{} } }} channels:{ch1=FileChannel ch1 { dataDirs:
[/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
[/home/biadmin/.flume/file-channel2/data] }} }
14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
Channel ch1
14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch1 {
dataDirs: [/home/biadmin/.flume/file-channel/data] }...
14/02/17 10:26:14 INFO nodemanager.DefaultLogicalNodeManager: Starting
Channel ch2
14/02/17 10:26:14 INFO file.FileChannel: Starting FileChannel ch2 {
dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
14/02/17 10:26:14 INFO file.Log: Replay started
14/02/17 10:26:14 INFO file.Log: Encryption is not enabled
14/02/17 10:26:14 INFO file.Log: Replay started
14/02/17 10:26:14 INFO file.Log: Found NextFileID 7, from
[/home/biadmin/.flume/file-channel/data/log-7,
/home/biadmin/.flume/file-channel/data/log-6]
14/02/17 10:26:14 INFO file.Log: Found NextFileID 6, from
[/home/biadmin/.flume/file-channel2/data/log-6,
/home/biadmin/.flume/file-channel2/data/log-4,
/home/biadmin/.flume/file-channel2/data/log-5]
14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
/home/biadmin/.flume/file-channel2/checkpoint/checkpoint and
/home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
checkpoint metadata from
/home/biadmin/.flume/file-channel2/checkpoint/checkpoint.meta
14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Starting up with
/home/biadmin/.flume/file-channel/checkpoint/checkpoint and
/home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
14/02/17 10:26:14 INFO file.EventQueueBackingStoreFileV3: Reading
checkpoint metadata from
/home/biadmin/.flume/file-channel/checkpoint/checkpoint.meta
14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:35 EST
2014, queue depth = 0
14/02/17 10:26:14 INFO file.Log: Last Checkpoint Mon Feb 17 10:21:31 EST
2014, queue depth = 0
14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
14/02/17 10:26:14 INFO file.Log: Replaying logs with v2 replay logic
14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
[/home/biadmin/.flume/file-channel/data/log-6,
/home/biadmin/.flume/file-channel/data/log-7]
14/02/17 10:26:14 INFO file.ReplayHandler: Starting replay of
[/home/biadmin/.flume/file-channel2/data/log-4,
/home/biadmin/.flume/file-channel2/data/log-5,
/home/biadmin/.flume/file-channel2/data/log-6]
14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
/home/biadmin/.flume/file-channel/data/log-6
14/02/17 10:26:14 INFO file.ReplayHandler: Replaying
/home/biadmin/.flume/file-channel2/data/log-4
14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Unable to get
maxDirectMemory from VM: NoSuchMethodException:
sun.misc.VM.maxDirectMemory(null)
14/02/17 10:26:14 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20971520,
Remaining = 20971520
14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
32040
14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
/home/biadmin/.flume/file-channel/data/log-7
14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
2496
14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
file(/home/biadmin/.flume/file-channel2/data/log-4) is: 1392407375821,
which is beyond the requested checkpoint time: 1392650490155 and position 0
14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
/home/biadmin/.flume/file-channel2/data/log-5
14/02/17 10:26:16 INFO file.LogFile: fast-forward to checkpoint position:
22843
14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 22843 in
/home/biadmin/.flume/file-channel2/data/log-5
14/02/17 10:26:16 INFO file.ReplayHandler: Replaying
/home/biadmin/.flume/file-channel2/data/log-6
14/02/17 10:26:16 WARN file.LogFile: Checkpoint for
file(/home/biadmin/.flume/file-channel2/data/log-6) is: 1392650490155,
which is beyond the requested checkpoint time: 1392650490155 and position 0
14/02/17 10:26:16 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
rollback: 0, commit: 0, skip: 0, eventCount:0
14/02/17 10:26:16 INFO file.Log: Rolling
/home/biadmin/.flume/file-channel2/data
14/02/17 10:26:16 INFO file.Log: Roll start
/home/biadmin/.flume/file-channel2/data
14/02/17 10:26:16 INFO file.LogFile: Opened
/home/biadmin/.flume/file-channel2/data/log-7
14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 2496 in
/home/biadmin/.flume/file-channel/data/log-7
14/02/17 10:26:16 INFO file.LogFile: Encountered EOF at 32071 in
/home/biadmin/.flume/file-channel/data/log-6
14/02/17 10:26:16 INFO file.ReplayHandler: read: 1, put: 0, take: 0,
rollback: 0, commit: 0, skip: 1, eventCount:0
14/02/17 10:26:16 INFO file.Log: Rolling
/home/biadmin/.flume/file-channel/data
14/02/17 10:26:16 INFO file.Log: Roll start
/home/biadmin/.flume/file-channel/data
14/02/17 10:26:16 INFO file.LogFile: Opened
/home/biadmin/.flume/file-channel/data/log-8
14/02/17 10:26:16 INFO file.Log: Roll end
14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 0
14/02/17 10:26:16 INFO file.Log: Roll end
14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 0
14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650774387, queueSize: 0, queueHead: 10516
14/02/17 10:26:16 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650774388, queueSize: 0, queueHead: 223682
14/02/17 10:26:16 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 0, logWriteOrderID = 1392650774387
14/02/17 10:26:16 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 0, logWriteOrderID = 1392650774388
14/02/17 10:26:16 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 0 logWriteOrderID:
1392650774387
14/02/17 10:26:16 INFO file.FileChannel: Queue Size after replay: 0
[channel=ch2]
14/02/17 10:26:17 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 0 logWriteOrderID:
1392650774388
14/02/17 10:26:17 INFO file.FileChannel: Queue Size after replay: 0
[channel=ch1]
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
counter group for type: CHANNEL, name: ch2, registered successfully.
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
type: CHANNEL, name: ch2 started
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
counter group for type: CHANNEL, name: ch1, registered successfully.
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
type: CHANNEL, name: ch1 started
14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink
hbase-sink
14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink
hdfs-sink
14/02/17 10:26:17 INFO nodemanager.DefaultLogicalNodeManager: Starting
Source exec-source
14/02/17 10:26:17 INFO source.ExecSource: Exec source starting with
command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Monitoried
counter group for type: SINK, name: hdfs-sink, registered successfully.
14/02/17 10:26:17 INFO instrumentation.MonitoredCounterGroup: Component
type: SINK, name: hdfs-sink started
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.5--1, built on 01/23/2013 14:29 GMT
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:host.name
=bivm
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.version=1.6.0
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.vendor=IBM Corporation
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.home=/opt/ibm/biginsights/jdk/jre
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.class.path=conf:/opt/ibm/biginsights/flume/lib/snappy-java-1.0.4.1.jar:/opt/ibm/biginsights/flume/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-mapper-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/flume-avro-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-jdbc-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/velocity-1.7.jar:/opt/ibm/biginsights/flume/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/flume/lib/flume-ng-node-1.3.0.jar:/opt/ibm/biginsights/flume/lib/commons-dbcp-1.4.jar:/opt/ibm/biginsights/flume/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/flume/lib/flume-hdfs-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/asynchbase-1.2.0.jar:/opt/ibm/biginsights/flume/lib/flume-recoverable-memory-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/async-1.3.1.jar:/opt/ibm/biginsights/flume/lib/slf4j-log4j12-1.6.1.jar:/opt/ibm/biginsights/flume/lib/flume-thrift-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-file-channel-1.3.0.jar:/opt/ibm/biginsights/flume/lib/libthrift-0.6.1.jar:/opt/ibm/biginsights/flume/lib/avro-1.7.2.jar:/opt/ibm/biginsights/flume/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/flume/lib/jackson-core-asl-1.9.3.jar:/opt/ibm/biginsights/flume/lib/servlet-api-2.5-20110124.jar:/opt/ibm/biginsights/flume/lib/flume-ng-elasticsearch-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-configuration-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/flume/lib/irclib-1.10.jar:/opt/ibm/biginsights/flume/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/flume/lib/derby-10.8.3.1.jar:/opt/ibm/biginsights/flume/lib/flume-ng-log4jappender-1.3.0.jar:/opt/ibm/biginsights/flume/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/flume/lib/flume-irc-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/jcl-over-slf4j-1.7.2.jar:/opt/ibm/biginsights/flume/lib/slf4j-api-1.6.1.jar:/opt/ibm/biginsights/flume/lib/joda-time-2.1.jar:/opt/ibm/biginsights/flume/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/flume/lib/commons-io-2.1.j
ar:/opt/ibm/biginsights/flume/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/flume/lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/flume/lib/commons-pool-1.5.4.jar:/opt/ibm/biginsights/flume/lib/flume-ng-hbase-sink-1.3.0.jar:/opt/ibm/biginsights/flume/lib/protobuf-java-2.4.1.jar:/opt/ibm/biginsights/flume/lib/flume-scribe-source-1.3.0.jar:/opt/ibm/biginsights/flume/lib/flume-ng-core-1.3.0.jar:/opt/ibm/biginsights/flume/lib/gson-2.2.2.jar:/opt/ibm/biginsights/flume/lib/flume-ng-sdk-1.3.0.jar:/opt/ibm/biginsights/flume/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/flume/lib/guava-10.0.1.jar:/opt/ibm/biginsights/flume/lib/paranamer-2.3.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/
commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights
/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/IHC/:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/opt/ibm/biginsights/IHC/lib/biginsights-gpfs-1.1.1.jar:/opt/ibm/biginsights/IHC/hadoop-core.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-media-support-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-core-3.0.3.jar:home/biadmin/twitter4j/lib/twitter4j-async-3.0.3.jar:/home/biadmin/twitter4j/lib/twitter4j-stream-3.0.3.jar:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/hbase:/opt/ibm/biginsights/hbase/hbase-0.94.3-security.jar:/opt/ibm/biginsights/hbase/hbase-0.94.3-security-tests.jar:/opt/ibm/biginsights/hbase/hbase.jar:/opt/ibm/biginsights/hbase/lib/activation-1.1.jar:/opt/ibm/biginsights/hbase/lib/asm-3.1.jar:/opt/ibm/biginsights/hbase/lib/avro-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/avro-ipc-1.7.2.jar:/opt/ibm/biginsights/hbase/lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/hbase/lib/commons-cli-1.2.jar:/opt/ibm/biginsights/hbase/lib/commons-codec-1.4.jar:/opt/ibm/biginsights/hbase/lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/hbase/lib/commons-digester-1.8.jar:/opt/ibm/biginsights/hbase/lib/commons-el-1.0.jar:/opt/ibm/biginsights/hbase/lib/commons-httpclient-3.1.jar:/opt/ibm/biginsights/hbase/lib/commons-io-2.1.jar:/opt/ibm/biginsights/hbase/lib/commons-lang-2.5.jar:/opt/ibm/biginsights/hbase/lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/commons-math-2.2.jar:/opt/ibm/biginsights/hbase/lib/commons-net-
3.1.jar:/opt/ibm/biginsights/hbase/lib/core-3.1.1.jar:/opt/ibm/biginsights/hbase/lib/guardium-proxy.jar:/opt/ibm/biginsights/hbase/lib/guava-11.0.2.jar:/opt/ibm/biginsights/hbase/lib/hadoop-core.jar:/opt/ibm/biginsights/hbase/lib/hadoop-tools-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/high-scale-lib-1.1.1.jar:/opt/ibm/biginsights/hbase/lib/httpclient-4.1.2.jar:/opt/ibm/biginsights/hbase/lib/httpcore-4.1.3.jar:/opt/ibm/biginsights/hbase/lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-jaxrs-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jackson-xc-1.8.8.jar:/opt/ibm/biginsights/hbase/lib/jamon-runtime-2.3.1.jar:/opt/ibm/biginsights/hbase/lib/jasper-compiler-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jasper-runtime-5.5.23.jar:/opt/ibm/biginsights/hbase/lib/jaxb-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/ibm/biginsights/hbase/lib/jersey-core-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-json-1.8.jar:/opt/ibm/biginsights/hbase/lib/jersey-server-1.8.jar:/opt/ibm/biginsights/hbase/lib/jettison-1.1.jar:/opt/ibm/biginsights/hbase/lib/jetty-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/hbase/lib/jruby-complete-1.6.5.1.jar:/opt/ibm/biginsights/hbase/lib/jsp-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/lib/jsr305-1.3.9.jar:/opt/ibm/biginsights/hbase/lib/junit-4.10-HBASE-1.jar:/opt/ibm/biginsights/hbase/lib/libthrift-0.8.0.jar:/opt/ibm/biginsights/hbase/lib/log4j-1.2.16.jar:/opt/ibm/biginsights/hbase/lib/metrics-core-2.1.2.jar:/opt/ibm/biginsights/hbase/lib/netty-3.2.4.Final.jar:/opt/ibm/biginsights/hbase/lib/netty-3.4.0.Final.jar:/opt/ibm/biginsights/hbase/lib/protobuf-java-2.4.0a.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/ibm/biginsights/hbase/lib/snappy-java-1.
0.4.1.jar:/opt/ibm/biginsights/hbase/lib/stax-api-1.0.1.jar:/opt/ibm/biginsights/hbase/lib/velocity-1.7.jar:/opt/ibm/biginsights/hbase/lib/xmlenc-0.52.jar:/opt/ibm/biginsights/hbase/lib/xml-ibm.jar:/opt/ibm/biginsights/hbase/lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/hbase/lib/zookeeper.jar:/opt/ibm/biginsights/hadoop-conf:/opt/ibm/biginsights/jdk/lib/tools.jar:/opt/ibm/biginsights/IHC/libexec/..:/opt/ibm/biginsights/IHC/libexec/../hadoop-core-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/adaptive-mr.jar:/opt/ibm/biginsights/IHC/libexec/../lib/asm-3.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjrt-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/aspectjtools-1.6.11.jar:/opt/ibm/biginsights/IHC/libexec/../lib/biginsights-sftpfs-1.0.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-beanutils-1.8.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-cli-1.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-codec-1.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-collections-3.2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-configuration-1.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-daemon-1.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-digester-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-el-1.0.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-httpclient-3.0.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-io-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-lang-2.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-logging-api-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-math-2.2.jar:/opt/ibm/biginsights/IHC/libexec/../lib/commons-net-3.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/core-3.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftplet-api-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ftpserver-core-1.0.6.jar:/opt/ibm/biginsights/IHC/libexec/../lib/guardium-proxy.jar:/opt/ibm/biginsights/IHC/libexec/
../lib/hadoop-capacity-scheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-fairscheduler-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hadoop-thriftfs-1.1.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/hsqldb-1.8.0.10.jar:/opt/ibm/biginsights/IHC/libexec/../lib/ibm-compression.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-core-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-compiler-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jasper-runtime-5.5.12.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jdeb-0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-core-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-json-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jersey-server-1.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jets3t-0.6.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jetty-util-6.1.26.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.42.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsch-0.1.43.jar:/opt/ibm/biginsights/IHC/libexec/../lib/junit-4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/log4j-1.2.16.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mina-core-2.0.4.jar:/opt/ibm/biginsights/IHC/libexec/../lib/mockito-all-1.8.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/oro-2.0.8.jar:/opt/ibm/biginsights/IHC/libexec/../lib/servlet-api-2.5-20081211.jar:/opt/ibm/biginsights/IHC/libexec/../lib/workflowScheduler.jar:/opt/ibm/biginsights/IHC/libexec/../lib/xmlenc-0.52.jar:/opt/ibm/biginsights/IHC/libexec/../lib/zookeeper-3.4.5.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-2.1.jar:/opt/ibm/biginsights/IHC/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/opt/ibm/biginsights/hbase/conf:/opt/ibm/biginsights/hbase/conf
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.library.path=:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64:/opt/ibm/biginsights/IHC/libexec/../lib/native/Linux-amd64-64
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.io.tmpdir=/tmp
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:java.compiler=j9jit24
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:os.version=2.6.18-194.17.4.el5
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client environment:user.name
=biadmin
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:user.home=/home/biadmin
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Client
environment:user.dir=/opt/ibm/biginsights/flume/bin
14/02/17 10:26:17 INFO zookeeper.ZooKeeper: Initiating client connection,
connectString=bivm:2181 sessionTimeout=180000 watcher=hconnection
14/02/17 10:26:17 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 20984@bivm
14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Opening socket connection to
server bivm/192.168.37.128:2181. Will not attempt to authenticate using
SASL (Unable to locate a login configuration)
14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Socket connection established
to bivm/192.168.37.128:2181, initiating session
14/02/17 10:26:17 INFO zookeeper.ClientCnxn: Session establishment complete
on server bivm/192.168.37.128:2181, sessionid = 0x144401355b4001d,
negotiated timeout = 60000
14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 60
14/02/17 10:29:56 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650774536, queueSize: 60, queueHead: 10514
14/02/17 10:29:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 32036, logWriteOrderID = 1392650774536
14/02/17 10:29:57 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 32036
logWriteOrderID: 1392650774536
14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel2/data/log-4
14/02/17 10:29:57 INFO file.Log: Removing old log
/home/biadmin/.flume/file-channel2/data/log-4, result = true, minFileID 7
14/02/17 10:29:57 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel2/data/log-5
14/02/17 10:29:57 INFO file.Log: Removing old log
/home/biadmin/.flume/file-channel2/data/log-5, result = true, minFileID 7
14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 460
14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650775504, queueSize: 520, queueHead: 10514
14/02/17 10:29:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 277565, logWriteOrderID = 1392650775504
14/02/17 10:29:58 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 277565
logWriteOrderID: 1392650775504
14/02/17 10:29:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 540
14/02/17 10:29:59 INFO hdfs.BucketWriter: Creating
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998182.tmp
14/02/17 10:29:59 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 423
14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650775933, queueSize: 137, queueHead: 10917
14/02/17 10:30:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650775934, queueSize: 539, queueHead: 223681
14/02/17 10:30:01 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 304892, logWriteOrderID = 1392650775933
14/02/17 10:30:01 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 304892
logWriteOrderID: 1392650775933
14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 137
14/02/17 10:30:02 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 288266, logWriteOrderID = 1392650775934
14/02/17 10:30:02 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776074, queueSize: 0, queueHead: 11054
14/02/17 10:30:04 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 288266
logWriteOrderID: 1392650775934
14/02/17 10:30:04 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel/data/log-6
14/02/17 10:30:04 INFO file.Log: Removing old log
/home/biadmin/.flume/file-channel/data/log-6, result = true, minFileID 8
14/02/17 10:30:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 29
14/02/17 10:30:06 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 310581, logWriteOrderID = 1392650776074
14/02/17 10:30:13 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776105, queueSize: 550, queueHead: 223690
14/02/17 10:30:19 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 310581
logWriteOrderID: 1392650776074
14/02/17 10:30:21 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 20
14/02/17 10:30:29 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776127, queueSize: 20, queueHead: 11052
14/02/17 10:30:29 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 299362, logWriteOrderID = 1392650776105
14/02/17 10:30:30 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 321308, logWriteOrderID = 1392650776127
14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 299362
logWriteOrderID: 1392650776105
14/02/17 10:30:30 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 321308
logWriteOrderID: 1392650776127
14/02/17 10:30:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 21
14/02/17 10:30:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 38
14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776192, queueSize: 569, queueHead: 223691
14/02/17 10:30:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776193, queueSize: 20, queueHead: 11070
14/02/17 10:30:34 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 310040, logWriteOrderID = 1392650776192
14/02/17 10:30:34 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 332801, logWriteOrderID = 1392650776193
14/02/17 10:30:34 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 310040
logWriteOrderID: 1392650776192
14/02/17 10:30:35 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 332801
logWriteOrderID: 1392650776193
14/02/17 10:30:37 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 20
14/02/17 10:30:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 20
14/02/17 10:30:39 INFO hdfs.BucketWriter: Renaming
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182.tmp to
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998182
14/02/17 10:30:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776236, queueSize: 0, queueHead: 11090
14/02/17 10:30:40 INFO hdfs.BucketWriter: Creating
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998183.tmp
14/02/17 10:30:42 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776237, queueSize: 589, queueHead: 223691
14/02/17 10:30:58 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 333657, logWriteOrderID = 1392650776236
14/02/17 10:30:59 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 320738, logWriteOrderID = 1392650776237
14/02/17 10:31:01 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 333657
logWriteOrderID: 1392650776236
14/02/17 10:31:03 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 320738
logWriteOrderID: 1392650776237
14/02/17 10:31:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 125
14/02/17 10:31:05 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 20
14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776384, queueSize: 464, queueHead: 223816
14/02/17 10:31:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776385, queueSize: 20, queueHead: 11088
14/02/17 10:31:19 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 344355, logWriteOrderID = 1392650776385
14/02/17 10:31:19 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 325863, logWriteOrderID = 1392650776384
14/02/17 10:31:20 INFO hdfs.BucketWriter: Renaming
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183.tmp to
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998183
14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 325863
logWriteOrderID: 1392650776384
14/02/17 10:31:22 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 344355
logWriteOrderID: 1392650776385
14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 20
14/02/17 10:31:23 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 1
14/02/17 10:31:23 INFO hdfs.BucketWriter: Creating
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998184.tmp
14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776427, queueSize: 0, queueHead: 11108
14/02/17 10:31:24 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776428, queueSize: 463, queueHead: 223817
14/02/17 10:31:25 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 335946, logWriteOrderID = 1392650776428
14/02/17 10:31:26 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 345211, logWriteOrderID = 1392650776427
14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 345211
logWriteOrderID: 1392650776427
14/02/17 10:31:26 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 335946
logWriteOrderID: 1392650776428
14/02/17 10:31:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 40
14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 70
14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776540, queueSize: 473, queueHead: 223847
14/02/17 10:31:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650776541, queueSize: 40, queueHead: 11106
14/02/17 10:31:28 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 356818, logWriteOrderID = 1392650776540
14/02/17 10:31:28 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 356818
logWriteOrderID: 1392650776540
14/02/17 10:31:28 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 366536, logWriteOrderID = 1392650776541
14/02/17 10:31:30 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 366536
logWriteOrderID: 1392650776541
14/02/17 10:31:31 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 493
14/02/17 10:31:32 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 40
14/02/17 10:31:34 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650777082, queueSize: 0, queueHead: 11146
14/02/17 10:31:35 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650777083, queueSize: 0, queueHead: 224340
14/02/17 10:31:38 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 368733, logWriteOrderID = 1392650777082
14/02/17 10:31:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 379163, logWriteOrderID = 1392650777083
14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 368733
logWriteOrderID: 1392650777082
14/02/17 10:31:38 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 379163
logWriteOrderID: 1392650777083
14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 920
14/02/17 10:31:39 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 900
14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650778995, queueSize: 900, queueHead: 224338
14/02/17 10:31:40 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650778996, queueSize: 920, queueHead: 11144
14/02/17 10:31:49 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 859009, logWriteOrderID = 1392650778996
14/02/17 10:31:49 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 859505, logWriteOrderID = 1392650778995
14/02/17 10:31:49 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 859009
logWriteOrderID: 1392650778996
14/02/17 10:31:50 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 920
14/02/17 10:31:53 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 859505
logWriteOrderID: 1392650778995
14/02/17 10:31:53 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650779929, queueSize: 0, queueHead: 12064
14/02/17 10:31:54 INFO hdfs.BucketWriter: Renaming
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184.tmp to
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998184
14/02/17 10:31:54 INFO hdfs.BucketWriter: Creating
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998185.tmp
14/02/17 10:31:54 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 22
14/02/17 10:31:55 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650779951, queueSize: 918, queueHead: 224340
14/02/17 10:31:56 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 897089, logWriteOrderID = 1392650779929
14/02/17 10:31:56 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 870220, logWriteOrderID = 1392650779951
14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 870220
logWriteOrderID: 1392650779951
14/02/17 10:31:56 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 897089
logWriteOrderID: 1392650779929
14/02/17 10:31:57 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 300
14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650781760, queueSize: 300, queueHead: 12062
14/02/17 10:32:00 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 1198
14/02/17 10:32:01 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650781761, queueSize: 0, queueHead: 225538
14/02/17 10:32:02 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1057180, logWriteOrderID = 1392650781760
14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1057180
logWriteOrderID: 1392650781760
14/02/17 10:32:03 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1068832, logWriteOrderID = 1392650781761
14/02/17 10:32:03 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1068832
logWriteOrderID: 1392650781761
14/02/17 10:32:04 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 798
14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650783137, queueSize: 500, queueHead: 12360
14/02/17 10:32:07 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 520
14/02/17 10:32:08 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650783138, queueSize: 519, queueHead: 225537
14/02/17 10:32:12 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1336479, logWriteOrderID = 1392650783137
14/02/17 10:32:14 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1346456, logWriteOrderID = 1392650783138
14/02/17 10:32:14 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1336479
logWriteOrderID: 1392650783137
14/02/17 10:32:15 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 100
14/02/17 10:32:16 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1346456
logWriteOrderID: 1392650783138
14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650783761, queueSize: 400, queueHead: 12460
14/02/17 10:32:17 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 519
14/02/17 10:32:20 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650783762, queueSize: 0, queueHead: 226056
14/02/17 10:32:21 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1341143, logWriteOrderID = 1392650783761
14/02/17 10:32:23 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1367771, logWriteOrderID = 1392650783762
14/02/17 10:32:23 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1341143
logWriteOrderID: 1392650783761
14/02/17 10:32:24 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1367771
logWriteOrderID: 1392650783762
14/02/17 10:32:24 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 300
14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 100
14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650784174, queueSize: 300, queueHead: 12660
14/02/17 10:32:25 INFO hdfs.BucketWriter: Renaming
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185.tmp to
hdfs://bivm:9000/user/biadmin/bigdemo/telco_cdr_rec.1392650998185
14/02/17 10:32:25 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650784175, queueSize: 100, queueHead: 226054
14/02/17 10:32:25 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1402287, logWriteOrderID = 1392650784174
14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1402287
logWriteOrderID: 1392650784174
14/02/17 10:32:26 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1421128, logWriteOrderID = 1392650784175
14/02/17 10:32:26 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1421128
logWriteOrderID: 1392650784175
14/02/17 10:32:27 INFO hdfs.BucketWriter: Creating
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
14/02/17 10:32:27 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 480
14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 278
14/02/17 10:32:28 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650785222, queueSize: 98, queueHead: 13042
14/02/17 10:32:32 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650785223, queueSize: 0, queueHead: 226332
14/02/17 10:32:33 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1514767, logWriteOrderID = 1392650785222
14/02/17 10:32:34 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1514767
logWriteOrderID: 1392650785222
14/02/17 10:32:35 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 118
14/02/17 10:32:38 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1528845, logWriteOrderID = 1392650785223
14/02/17 10:32:38 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650785364, queueSize: 0, queueHead: 13160
14/02/17 10:32:40 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1528845
logWriteOrderID: 1392650785223
14/02/17 10:32:41 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1529781, logWriteOrderID = 1392650785364
14/02/17 10:32:42 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1529781
logWriteOrderID: 1392650785364
14/02/17 10:32:43 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel2/checkpoint/checkpoint, elements to
sync = 500
14/02/17 10:32:44 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 500
14/02/17 10:32:45 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650786415, queueSize: 500, queueHead: 13158
14/02/17 10:32:47 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650786416, queueSize: 500, queueHead: 226330
14/02/17 10:32:53 INFO node.FlumeNode: Flume node stopping - agent
14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
supervisor 9
14/02/17 10:32:53 INFO properties.PropertiesFileConfigurationProvider:
Configuration provider stopping
14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Node manager
stopping
14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Shutting down
configuration: { sourceRunners:{exec-source=EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }}
sinkRunners:{hbase-sink=SinkRunner: {
policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
name:null counters:{runner.backoffs.consecutive=2, runner.backoffs=59} } },
hdfs-sink=SinkRunner: {
policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
name:null counters:{runner.backoffs.consecutive=3, runner.backoffs=53} } }}
channels:{ch1=FileChannel ch1 { dataDirs:
[/home/biadmin/.flume/file-channel/data] }, ch2=FileChannel ch2 { dataDirs:
[/home/biadmin/.flume/file-channel2/data] }} }
14/02/17 10:32:53 INFO nodemanager.DefaultLogicalNodeManager: Stopping
Source exec-source
14/02/17 10:32:53 INFO lifecycle.LifecycleSupervisor: Stopping component:
EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource{name:exec-source,state:START} }
14/02/17 10:32:53 INFO source.ExecSource: Stopping exec source with
command:tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
14/02/17 10:32:54 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1795949, logWriteOrderID = 1392650786416
14/02/17 10:32:54 INFO file.LogFileV3: Updating log-7.meta currentPosition
= 1796885, logWriteOrderID = 1392650786415
14/02/17 10:32:57 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1795949
logWriteOrderID: 1392650786416
14/02/17 10:32:57 ERROR source.ExecSource: Failed while running command:
tail -F /home/biadmin/bigdemo/data/rec_telco.cdr
java.io.IOException: Pipe closed
        at java.io.PipedInputStream.read(PipedInputStream.java:302)
        at java.lang.ProcessPipedInputStream.read(UNIXProcess.java:412)
        at java.io.PipedInputStream.read(PipedInputStream.java:372)
        at java.lang.ProcessInputStream.read(UNIXProcess.java:471)
        at
sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:464)
        at
sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:506)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:234)
        at java.io.InputStreamReader.read(InputStreamReader.java:188)
        at java.io.BufferedReader.fill(BufferedReader.java:147)
        at java.io.BufferedReader.readLine(BufferedReader.java:310)
        at java.io.BufferedReader.readLine(BufferedReader.java:373)
        at
org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:272)
        at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:452)
        at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
        at java.util.concurrent.FutureTask.run(FutureTask.java:149)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO source.ExecSource: Command [tail -F
/home/biadmin/bigdemo/data/rec_telco.cdr] exited with 130
14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink
hbase-sink
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4c004c counterGroup:{
name:null counters:{runner.backoffs.consecutive=2,
runner.backoffs=59} } }
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Component has already
been stopped EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource{name:exec-source,state:STOP} }
14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
lock
java.lang.InterruptedException
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
        at
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
        at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
        at
org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:446)
        at
org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
        at
org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
        at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
        at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
Exception follows.
org.apache.flume.ChannelException: Failed to obtain lock for writing to the
log. Try increasing the log write timeout value. [channel=ch2]
        at
org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:447)
        at
org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
        at
org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
        at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:190)
        at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO client.HConnectionManager$HConnectionImplementation:
Closed zookeeper sessionid=0x144401355b4001d
14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel2/data/log-7 position: 1796885
logWriteOrderID: 1392650786415
14/02/17 10:32:57 WARN hdfs.BucketWriter: Caught IOException writing to
HDFSWriter (Filesystem closed). Closing file
(hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp)
and rethrowing exception.
14/02/17 10:32:58 WARN hdfs.BucketWriter: Caught IOException while closing
file
(hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp).
Exception follows.
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
        at
org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
        at
org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
        at
org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
        at
org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:382)
        at
org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
        at
org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
        at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
        at java.util.concurrent.FutureTask.run(FutureTask.java:149)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Start checkpoint
for /home/biadmin/.flume/file-channel/checkpoint/checkpoint, elements to
sync = 1
14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
14/02/17 10:32:58 ERROR hdfs.BucketWriter: Unexpected error
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
        at
org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
        at
org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
        at
org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
        at
org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:237)
        at
org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:232)
        at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
        at java.util.concurrent.FutureTask.run(FutureTask.java:149)
        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:109)
        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:217)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO file.EventQueueBackingStoreFile: Updating checkpoint
metadata: logWriteOrderID: 1392650786418, queueSize: 499, queueHead: 226331
14/02/17 10:32:58 INFO zookeeper.ZooKeeper: Session: 0x144401355b4001d
closed
14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping Sink
hdfs-sink
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7b017b01 counterGroup:{
name:null counters:{runner.backoffs.consecutive=3,
runner.backoffs=53} } }
14/02/17 10:32:58 WARN file.Log: Interrupted while waiting for log shared
lock
java.lang.InterruptedException
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1035)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1314)
        at
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(ReentrantReadWriteLock.java:839)
        at org.apache.flume.channel.file.Log.tryLockShared(Log.java:599)
        at
org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:536)
        at
org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
        at
org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
        at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 ERROR flume.SinkRunner: Unable to deliver event.
Exception follows.
org.apache.flume.ChannelException: Failed to obtain lock for writing to the
log. Try increasing the log write timeout value. [channel=ch1]
        at
org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:539)
        at
org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
        at
org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)
        at
org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at
org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO hdfs.HDFSEventSink: Closing
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec
14/02/17 10:32:58 INFO zookeeper.ClientCnxn: EventThread shut down
14/02/17 10:32:58 INFO file.LogFileV3: Updating log-8.meta currentPosition
= 1795990, logWriteOrderID = 1392650786418
14/02/17 10:32:58 INFO hdfs.BucketWriter: HDFSWriter is already closed:
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec.1392650998186.tmp
14/02/17 10:32:58 WARN hdfs.HDFSEventSink: Exception while closing
hdfs://bivm:9000/user/biadmin/bigdemo//telco_cdr_rec. Exception follows.
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:319)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1026)
        at
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:524)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
        at
org.apache.flume.sink.hdfs.BucketWriter.renameBucket(BucketWriter.java:426)
        at
org.apache.flume.sink.hdfs.BucketWriter.doClose(BucketWriter.java:298)
        at
org.apache.flume.sink.hdfs.BucketWriter.access$400(BucketWriter.java:53)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:260)
        at
org.apache.flume.sink.hdfs.BucketWriter$3.run(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
        at
org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:258)
        at
org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:757)
        at
org.apache.flume.sink.hdfs.HDFSEventSink$4.call(HDFSEventSink.java:755)
        at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:314)
        at java.util.concurrent.FutureTask.run(FutureTask.java:149)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
        at java.lang.Thread.run(Thread.java:738)
14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
type: SINK, name: hdfs-sink stopped
14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
Channel ch1
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
FileChannel ch1 { dataDirs: [/home/biadmin/.flume/file-channel/data] }
14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch1 {
dataDirs: [/home/biadmin/.flume/file-channel/data] }...
14/02/17 10:32:58 INFO file.Log: Updated checkpoint for file:
/home/biadmin/.flume/file-channel/data/log-8 position: 1795990
logWriteOrderID: 1392650786418
14/02/17 10:32:58 INFO file.LogFile: Closing
/home/biadmin/.flume/file-channel/data/log-8
14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel/data/log-7
14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel/data/log-8
14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
type: CHANNEL, name: ch1 stopped
14/02/17 10:32:58 INFO nodemanager.DefaultLogicalNodeManager: Stopping
Channel ch2
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping component:
FileChannel ch2 { dataDirs: [/home/biadmin/.flume/file-channel2/data] }
14/02/17 10:32:58 INFO file.FileChannel: Stopping FileChannel ch2 {
dataDirs: [/home/biadmin/.flume/file-channel2/data] }...
14/02/17 10:32:58 INFO file.LogFile: Closing
/home/biadmin/.flume/file-channel2/data/log-7
14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel2/data/log-6
14/02/17 10:32:58 INFO file.LogFile: Closing RandomReader
/home/biadmin/.flume/file-channel2/data/log-7
14/02/17 10:32:58 INFO instrumentation.MonitoredCounterGroup: Component
type: CHANNEL, name: ch2 stopped
14/02/17 10:32:58 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle
supervisor 9




Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Kris Ogirri <ka...@gmail.com>.
Hello Jeff,

Please find the requested logs below. The initial part of the logs was
unfortunately not included; I can run these again if necessary, but the
ZooKeeper connection is included in the logs.
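
For readers landing on this thread later, a minimal AsyncHBaseSink block for
the Flume 1.x line might look like the sketch below. This is a hedged sketch
based on the Flume user guide, not a verified fix for this thread; the table
and column family names are simply the ones from the posted config. Two
details worth noting: the sink's batch size is a plain batchSize property
(the hdfs.batchSize form applies only to the HDFS sink, so the commented-out
hbase-sink.hdfs.* lines in the posted config would have no effect), and sinks
are bound with the singular channel key, while only sources take channels.

```properties
agent.sinks.hbase-sink.type = org.apache.flume.sink.hbase.AsyncHBaseSink
agent.sinks.hbase-sink.table = telco_cdr_rec
agent.sinks.hbase-sink.columnFamily = colfam
# Events per transaction; AsyncHBaseSink uses a plain batchSize key,
# not the hdfs.batchSize form used by the HDFS sink.
agent.sinks.hbase-sink.batchSize = 100
# Fail the transaction if HBase does not ack within this many milliseconds.
agent.sinks.hbase-sink.timeout = 60000
agent.sinks.hbase-sink.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer
# Sinks take the singular "channel" key ("channels" is only for sources).
agent.sinks.hbase-sink.channel = ch2
```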




Re: Issue with HBase Sink in Flume ( 1.3.0)

Posted by Jeff Lord <jl...@cloudera.com>.
Logs ?

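
One side note on the posted config: it sets both channels = ch2 and
channel=ch2 on the hbase-sink. To my understanding, Flume sinks read only the
singular channel key (the plural channels applies to sources), so the binding
here happens to be fine, but a throwaway script can flag configs where only
the plural form is present. A minimal sketch, with the two binding lines
copied from the config above standing in for the real file:

```shell
#!/bin/sh
# Stand-in for flume-conf.properties.bigdemo: just the two binding lines
# from the posted config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
agent.sinks.hbase-sink.channels = ch2
agent.sinks.hbase-sink.channel=ch2
EOF
# Count lines that use the singular "channel" key for the hbase-sink.
# "channels" does not match because [ ]*= must follow "channel" directly.
bindings=$(grep -c '^agent\.sinks\.hbase-sink\.channel[ ]*=' "$cfg")
echo "singular channel bindings: $bindings"
rm -f "$cfg"
```

To run this against a real agent config, replace the heredoc with the actual
file path; a count of 0 would mean the sink has no channel bound at all.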