Posted to user@flume.apache.org by Raymond Ng <ra...@gmail.com> on 2012/09/19 10:19:13 UTC

ChannelException: Cannot acquire capacity

Hi all

I'm getting the following exception and, as a result, I lost records even
when using the File channel:
root@test:~/flume-ng/deployment/script$ org.apache.flume.ChannelException: Unable to put batch on required channel: FileChannel probeFileChannel2 { dataDirs: [/home/local/flume-ng/filechannel2/data] }
 at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:195)
 at test.source.KafkaSource$PartitionManager.next(KafkaSource.java:360)
 at test.source.KafkaSource.process(KafkaSource.java:228)
 at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:137)
 at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.flume.ChannelException: Cannot acquire capacity. [channel=probeFileChannel2]
 at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:343)
 at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
 at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:76)
 at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:184)
 ... 4 more

My file channel config is below:

agent.channels.probeFileChannel2.type = FILE
agent.channels.probeFileChannel2.checkpointDir = /home/local/flume-ng/filechannel2/checkpoint
agent.channels.probeFileChannel2.dataDirs = /home/local/flume-ng/filechannel2/data
agent.channels.probeFileChannel2.transactionCapacity = 200000
agent.channels.probeFileChannel2.checkpointInterval = 10
agent.channels.probeFileChannel2.maxFileSize = 1073741824
agent.channels.probeFileChannel2.capacity = 1000000
agent.channels.probeFileChannel2.keep-alive = 3
agent.channels.probeFileChannel2.write-timeout = 3

advice appreciated
-- 
Rgds
Ray

Re: ChannelException: Cannot acquire capacity

Posted by Brock Noland <br...@cloudera.com>.
Checkpoints are to speed up restarts.

I think your sink is not keeping up or is not connected, so the channel fills up.
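
To put rough numbers on that, here is a hypothetical back-of-the-envelope sketch against the config Ray posted. The event rates are invented for illustration (not from this thread), and the keep-alive behaviour described is my reading of the FileChannel docs, not something stated in the thread:

# Illustrative arithmetic only (assumed rates, not measured ones):
# if the Kafka source puts ~10,000 events/s while the sink drains nothing
# (stalled or disconnected), a channel with capacity = 1000000 fills in
# roughly 1000000 / 10000 = 100 seconds. Once it is full, each put waits
# briefly for free space (bounded by keep-alive, 3 s in this config) and
# then fails with "Cannot acquire capacity", matching the stack trace.
agent.channels.probeFileChannel2.capacity = 1000000
agent.channels.probeFileChannel2.keep-alive = 3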

Brock

On Thu, Sep 20, 2012 at 6:36 AM, Raymond Ng <ra...@gmail.com> wrote:
> Hi Brock
>
> I have left the checkpoint interval at the default but am still getting the
> same exception. How does the checkpoint work, by the way?
>
> Ray
> On Wed, Sep 19, 2012 at 1:36 PM, Brock Noland <br...@cloudera.com> wrote:
>>
>> Hi,
>>
>> 1) That checkpoint interval is way too low. I would leave it at the
>> default.
>>
>> 2) The exception is saying the file channel is full. This could be a result
>> of the low checkpoint interval, or because the sink draining the channel is
>> not keeping up.
>>
>> --
>> Brock Noland
>> Sent with Sparrow
>>
>> On Wednesday, September 19, 2012 at 3:19 AM, Raymond Ng wrote:
>>
>> Hi all
>>
>> I'm getting the following exception and, as a result, I lost records even
>> when using the File channel:
>> root@test:~/flume-ng/deployment/script$ org.apache.flume.ChannelException: Unable to put batch on required channel: FileChannel probeFileChannel2 { dataDirs: [/home/local/flume-ng/filechannel2/data] }
>>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:195)
>>  at test.source.KafkaSource$PartitionManager.next(KafkaSource.java:360)
>>  at test.source.KafkaSource.process(KafkaSource.java:228)
>>  at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:137)
>>  at java.lang.Thread.run(Thread.java:679)
>> Caused by: org.apache.flume.ChannelException: Cannot acquire capacity. [channel=probeFileChannel2]
>>  at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:343)
>>  at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>>  at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:76)
>>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:184)
>>  ... 4 more
>>
>> My file channel config is below:
>>
>> agent.channels.probeFileChannel2.type = FILE
>> agent.channels.probeFileChannel2.checkpointDir = /home/local/flume-ng/filechannel2/checkpoint
>> agent.channels.probeFileChannel2.dataDirs = /home/local/flume-ng/filechannel2/data
>> agent.channels.probeFileChannel2.transactionCapacity = 200000
>> agent.channels.probeFileChannel2.checkpointInterval = 10
>> agent.channels.probeFileChannel2.maxFileSize = 1073741824
>> agent.channels.probeFileChannel2.capacity = 1000000
>> agent.channels.probeFileChannel2.keep-alive = 3
>> agent.channels.probeFileChannel2.write-timeout = 3
>>
>> advice appreciated
>> --
>> Rgds
>> Ray
>>
>>
>
>
>
> --
> Rgds
> Ray



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/

Re: ChannelException: Cannot acquire capacity

Posted by Raymond Ng <ra...@gmail.com>.
Hi Brock

I have left the checkpoint interval at the default but am still getting the
same exception. How does the checkpoint work, by the way?

Ray
On Wed, Sep 19, 2012 at 1:36 PM, Brock Noland <br...@cloudera.com> wrote:

> Hi,
>
> 1) That checkpoint interval is way too low. I would leave it at the
> default.
>
> 2) The exception is saying the file channel is full. This could be a result
> of the low checkpoint interval, or because the sink draining the channel is
> not keeping up.
>
> --
> Brock Noland
> Sent with Sparrow <http://www.sparrowmailapp.com/?sig>
>
>  On Wednesday, September 19, 2012 at 3:19 AM, Raymond Ng wrote:
>
>  Hi all
>
> I'm getting the following exception and, as a result, I lost records even
> when using the File channel:
> root@test:~/flume-ng/deployment/script$ org.apache.flume.ChannelException: Unable to put batch on required channel: FileChannel probeFileChannel2 { dataDirs: [/home/local/flume-ng/filechannel2/data] }
>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:195)
>  at test.source.KafkaSource$PartitionManager.next(KafkaSource.java:360)
>  at test.source.KafkaSource.process(KafkaSource.java:228)
>  at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:137)
>  at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.flume.ChannelException: Cannot acquire capacity. [channel=probeFileChannel2]
>  at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:343)
>  at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>  at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:76)
>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:184)
>  ... 4 more
>
> My file channel config is below:
>
> agent.channels.probeFileChannel2.type = FILE
> agent.channels.probeFileChannel2.checkpointDir = /home/local/flume-ng/filechannel2/checkpoint
> agent.channels.probeFileChannel2.dataDirs = /home/local/flume-ng/filechannel2/data
> agent.channels.probeFileChannel2.transactionCapacity = 200000
> agent.channels.probeFileChannel2.checkpointInterval = 10
> agent.channels.probeFileChannel2.maxFileSize = 1073741824
> agent.channels.probeFileChannel2.capacity = 1000000
> agent.channels.probeFileChannel2.keep-alive = 3
> agent.channels.probeFileChannel2.write-timeout = 3
>
> advice appreciated
> --
> Rgds
> Ray
>
>
>


-- 
Rgds
Ray

Re: ChannelException: Cannot acquire capacity

Posted by Brock Noland <br...@cloudera.com>.
Hi, 

1) That checkpoint interval is way too low. I would leave it at the default.

2) The exception is saying the file channel is full. This could be a result of the low checkpoint interval, or because the sink draining the channel is not keeping up.
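
As a rough sketch of what that advice could look like against the posted config: either drop the checkpointInterval override so the channel uses its default, or set it near the documented default. The 30000 ms value below comes from the Flume user guide's stated default, not from this thread, so treat it as an assumption:

# Hedged sketch: checkpointInterval is in milliseconds; 30000 is the
# documented default (assumed here), versus the original setting of 10.
agent.channels.probeFileChannel2.type = FILE
agent.channels.probeFileChannel2.checkpointDir = /home/local/flume-ng/filechannel2/checkpoint
agent.channels.probeFileChannel2.dataDirs = /home/local/flume-ng/filechannel2/data
agent.channels.probeFileChannel2.capacity = 1000000
agent.channels.probeFileChannel2.transactionCapacity = 200000
agent.channels.probeFileChannel2.maxFileSize = 1073741824
agent.channels.probeFileChannel2.checkpointInterval = 30000
agent.channels.probeFileChannel2.keep-alive = 3
agent.channels.probeFileChannel2.write-timeout = 3

If the channel still fills up with this in place, the remaining suspect per point 2 above is the sink's throughput rather than the checkpoint setting.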

-- 
Brock Noland
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Wednesday, September 19, 2012 at 3:19 AM, Raymond Ng wrote:

> Hi all
>  
> I'm getting the following exception and, as a result, I lost records even when using the File channel:
> root@test:~/flume-ng/deployment/script$ org.apache.flume.ChannelException: Unable to put batch on required channel: FileChannel probeFileChannel2 { dataDirs: [/home/local/flume-ng/filechannel2/data] }
>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:195)
>  at test.source.KafkaSource$PartitionManager.next(KafkaSource.java:360)
>  at test.source.KafkaSource.process(KafkaSource.java:228)
>  at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:137)
>  at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.flume.ChannelException: Cannot acquire capacity. [channel=probeFileChannel2]
>  at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:343)
>  at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>  at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:76)
>  at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:184)
>  ... 4 more 
>  
> My file channel config is below:
>  
> agent.channels.probeFileChannel2.type = FILE
> agent.channels.probeFileChannel2.checkpointDir = /home/local/flume-ng/filechannel2/checkpoint
> agent.channels.probeFileChannel2.dataDirs = /home/local/flume-ng/filechannel2/data
> agent.channels.probeFileChannel2.transactionCapacity = 200000
> agent.channels.probeFileChannel2.checkpointInterval = 10
> agent.channels.probeFileChannel2.maxFileSize = 1073741824
> agent.channels.probeFileChannel2.capacity = 1000000
> agent.channels.probeFileChannel2.keep-alive = 3
> agent.channels.probeFileChannel2.write-timeout = 3 
>  
> advice appreciated
> -- 
> Rgds
> Ray