Posted to dev@flume.apache.org by "Ferenc Szabo (JIRA)" <ji...@apache.org> on 2017/09/12 13:36:05 UTC

[jira] [Updated] (FLUME-3107) When batchSize of sink greater than transactionCapacity of File Channel, Flume can produce endless data

     [ https://issues.apache.org/jira/browse/FLUME-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferenc Szabo updated FLUME-3107:
--------------------------------
    Fix Version/s:     (was: 1.8.0)
                   1.9.0

> When batchSize of sink greater than transactionCapacity of File Channel, Flume can produce endless data
> -------------------------------------------------------------------------------------------------------
>
>                 Key: FLUME-3107
>                 URL: https://issues.apache.org/jira/browse/FLUME-3107
>             Project: Flume
>          Issue Type: Bug
>          Components: File Channel
>    Affects Versions: 1.7.0
>            Reporter: Yongxi Zhang
>             Fix For: 1.9.0
>
>         Attachments: FLUME-3107-0.patch
>
>
> This problem is similar to the one in FLUME-3106. Flume can produce endless data when the batchSize of a sink is greater than the transactionCapacity of the File Channel; you can reproduce it with the following config:
> {code}
> agent.sources = src1
> agent.sinks = sink1
> agent.channels = ch2
> agent.sources.src1.type = spooldir
> agent.sources.src1.channels = ch2
> agent.sources.src1.spoolDir = /home/kafka/flumeSpooldir
> agent.sources.src1.fileHeader = false
> agent.sources.src1.batchSize = 5
> agent.channels.ch2.type=file
> agent.channels.ch2.capacity=100
> agent.channels.ch2.checkpointDir=/home/kafka/flumefilechannel/checkpointDir
> agent.channels.ch2.dataDirs=/home/kafka/flumefilechannel/dataDirs
> agent.channels.ch2.transactionCapacity=5
> agent.sinks.sink1.type = hdfs
> agent.sinks.sink1.channel = ch2
> agent.sinks.sink1.hdfs.path = hdfs://kafka1:9000/flume/
> agent.sinks.sink1.hdfs.rollInterval=1
> agent.sinks.sink1.hdfs.fileType = DataStream
> agent.sinks.sink1.hdfs.writeFormat = Text
> agent.sinks.sink1.hdfs.batchSize = 10
> {code}
> The sink then fails with an exception like this:
> {code}
> 17/06/09 17:16:18 ERROR flume.SinkRunner: Unable to deliver event. Exception follows.
> org.apache.flume.EventDeliveryException: org.apache.flume.ChannelException: Take list for FileBackedTransaction, capacity 5 full, consider
> committing more frequently, increasing capacity, or increasing thread count. [channel=ch2]
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
>         at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
>         at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.flume.ChannelException: Take list for FileBackedTransaction, capacity 5 full, consider committing more frequently, increasing capacity, or increasing thread count. [channel=ch2]
>         at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doTake(FileChannel.java:531)
>         at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
>         at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
>         at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:362)
>         ... 3 more
> {code}
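
To make the arithmetic concrete: with hdfs.batchSize = 10 and transactionCapacity = 5, the sink's transaction attempts to take 10 events, but the channel's take list rejects the 6th. The following is a minimal sketch of that interaction (hypothetical class and method names; this is an illustration, not Flume source code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical model, not Flume source: shows how a file channel's
// per-transaction take list fills up when a sink requests a batch
// larger than transactionCapacity.
public class TakeListDemo {

    // Simulates a sink draining up to batchSize events inside one
    // transaction. Returns how many events were taken before the
    // take list reached its capacity.
    static int fillTakeList(int transactionCapacity, int batchSize) {
        Deque<String> takeList = new ArrayDeque<>();
        for (int i = 0; i < batchSize; i++) {
            if (takeList.size() >= transactionCapacity) {
                // Corresponds to the point where
                // FileChannel$FileBackedTransaction.doTake throws
                // the ChannelException shown above.
                return takeList.size();
            }
            takeList.add("event-" + i);
        }
        return takeList.size();
    }

    public static void main(String[] args) {
        // hdfs.batchSize = 10 vs. transactionCapacity = 5:
        // only 5 events fit; the 6th take would fail.
        System.out.println(fillTakeList(5, 10));
    }
}
```

Because the take list can never hold a full batch, the sink's transaction rolls back every time, the same events are re-taken on the next attempt, and the cycle repeats indefinitely — hence the "endless data".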



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)