Posted to user@flume.apache.org by Bean Edwards <ed...@gmail.com> on 2014/03/21 10:03:06 UTC

fileChannel always reaches its capacity

I use hdfsSink and fileChannel, and it seems the hdfsSink consumes too slowly. Here is
my configuration:
...
agent1.channels.thrift_ch2.capacity = 200000000
agent1.channels.thrift_ch2.transactionCapacity = 100000
...
agent1.sinks.hdfsSink.hdfs.batchSize = 100000
agent1.sinks.hdfsSink.hdfs.threadsPoolSize = 40

Any guidance would be greatly appreciated!

Re: fileChannel always reaches its capacity

Posted by Christopher Shannon <cs...@gmail.com>.
You can attach more sinks to the same channel so they drain it in parallel; keep adding
sinks until the channel stops filling up. Just make sure you assign a unique file name
prefix to each sink so their output files don't collide.
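
As a rough sketch of what that could look like (the sink names hdfsSink1/hdfsSink2 and
the HDFS path are placeholders; the channel name thrift_ch2 comes from your config),
two HDFS sinks pulling from the same file channel, each with its own filePrefix:

agent1.sinks = hdfsSink1 hdfsSink2

# Both sinks drain thrift_ch2 in parallel
agent1.sinks.hdfsSink1.type = hdfs
agent1.sinks.hdfsSink1.channel = thrift_ch2
agent1.sinks.hdfsSink1.hdfs.path = hdfs://namenode/flume/events
agent1.sinks.hdfsSink1.hdfs.filePrefix = events-sink1
agent1.sinks.hdfsSink1.hdfs.batchSize = 100000

agent1.sinks.hdfsSink2.type = hdfs
agent1.sinks.hdfsSink2.channel = thrift_ch2
agent1.sinks.hdfsSink2.hdfs.path = hdfs://namenode/flume/events
agent1.sinks.hdfsSink2.hdfs.filePrefix = events-sink2
agent1.sinks.hdfsSink2.hdfs.batchSize = 100000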
On Mar 21, 2014 4:03 AM, "Bean Edwards" <ed...@gmail.com> wrote:

> I use hdfsSink and fileChannel, and it seems the hdfsSink consumes too slowly. Here
> is my configuration:
> ...
> agent1.channels.thrift_ch2.capacity = 200000000
> agent1.channels.thrift_ch2.transactionCapacity = 100000
> ...
> agent1.sinks.hdfsSink.hdfs.batchSize = 100000
> agent1.sinks.hdfsSink.hdfs.threadsPoolSize = 40
>
> Any guidance would be greatly appreciated!
>