Posted to dev@flume.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/04/20 14:06:04 UTC
[jira] [Commented] (FLUME-3085) HDFS Sink can skip flushing some BucketWriters, might lead to data loss
[ https://issues.apache.org/jira/browse/FLUME-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976743#comment-15976743 ]
ASF GitHub Bot commented on FLUME-3085:
---------------------------------------
GitHub user adenes opened a pull request:
https://github.com/apache/flume/pull/129
FLUME-3085: HDFS Sink can skip flushing some BucketWriters, might lead to data loss
This commit fixes the issue where, if a `BucketWriter.append()` call in `HDFSEventSink.process()` threw a `BucketClosedException`, the newly created `BucketWriter` wasn't flushed after the processing loop.
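The fix pattern can be sketched with a simplified stand-alone model. The class and field names below (`Writer`, `closed`, `flushed`) are illustrative stand-ins, not Flume's actual `BucketWriter` API; the point is that the replacement writer must be re-registered in the `writers` collection so the post-loop flush reaches it:

```java
import java.util.HashMap;
import java.util.Map;

public class BucketFlushSketch {
    // Illustrative stand-in for Flume's BucketClosedException
    static class BucketClosedException extends RuntimeException {}

    // Illustrative stand-in for Flume's BucketWriter
    static class Writer {
        boolean closed;
        boolean flushed;
        void append(String event) {
            if (closed) throw new BucketClosedException();
        }
        void flush() { flushed = true; }
    }

    public static void main(String[] args) {
        Map<String, Writer> writers = new HashMap<>();

        Writer stale = new Writer();
        stale.closed = true;          // closed by another thread in the meantime
        writers.put("bucketPath", stale);

        // Simplified process() loop: append, recover from a closed bucket
        Writer w = writers.get("bucketPath");
        try {
            w.append("event");
        } catch (BucketClosedException e) {
            w = new Writer();               // recreate the writer...
            writers.put("bucketPath", w);   // ...and (the fix) re-register it
            w.append("event");
        }

        // Flush phase after the processing loop sees the new writer too
        for (Writer writer : writers.values()) {
            writer.flush();
        }

        System.out.println(w.flushed); // prints "true"
    }
}
```

Without the `writers.put("bucketPath", w)` line, the flush loop would only iterate over the stale writer and the freshly appended data would stay unflushed.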
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/adenes/flume FLUME-3085
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flume/pull/129.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #129
----
commit f775a629c40bf8373cf3c0a991ea8738e2989c39
Author: Denes Arvay <de...@cloudera.com>
Date: 2017-04-20T13:58:47Z
FLUME-3085: HDFS Sink can skip flushing some BucketWriters, might lead to data loss
This commit fixes the issue where, if a BucketWriter.append() call in
HDFSEventSink.process() threw a BucketClosedException, the newly created
BucketWriter wasn't flushed after the processing loop.
----
> HDFS Sink can skip flushing some BucketWriters, might lead to data loss
> -----------------------------------------------------------------------
>
> Key: FLUME-3085
> URL: https://issues.apache.org/jira/browse/FLUME-3085
> Project: Flume
> Issue Type: Bug
> Components: Sinks+Sources
> Affects Versions: 1.7.0
> Reporter: Denes Arvay
> Assignee: Denes Arvay
> Priority: Critical
>
> The {{HDFSEventSink.process()}} method is already prepared for a rare race condition, namely when the {{BucketWriter}} acquired in [line 389|https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSEventSink.java#L389] gets closed by another thread (e.g. because of the {{idleTimeout}} or the {{rollInterval}}) before {{append()}} is called in [line 406|https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSEventSink.java#L406].
> If this is the case, the {{BucketWriter.append()}} call throws a {{BucketClosedException}}, and the sink creates a new {{BucketWriter}} instance and appends to it.
> However, this newly created instance won't be added to the {{writers}} list, which means it won't be flushed after the processing loop finishes: https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/HDFSEventSink.java#L429
> This has multiple consequences:
> - unflushed data might get lost
> - the {{BucketWriter}}'s {{idleAction}} won't be scheduled (https://github.com/apache/flume/blob/trunk/flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java#L450), which means it won't be closed or renamed if the idle timeout is the only trigger for closing the file.
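The data-loss scenario described in the issue can be reproduced with a minimal stand-alone sketch. The class names here are illustrative, not Flume's real API; the sketch shows how the recreated writer, never re-added to the {{writers}} collection, is skipped by the flush loop:

```java
import java.util.HashMap;
import java.util.Map;

public class MissedFlushSketch {
    // Illustrative stand-in for Flume's BucketClosedException
    static class BucketClosedException extends RuntimeException {}

    // Illustrative stand-in for Flume's BucketWriter
    static class Writer {
        boolean closed;
        boolean flushed;
        void append(String event) {
            if (closed) throw new BucketClosedException();
        }
        void flush() { flushed = true; }
    }

    public static void main(String[] args) {
        Map<String, Writer> writers = new HashMap<>();

        Writer stale = new Writer();
        stale.closed = true;              // closed by another thread (idleTimeout/rollInterval)
        writers.put("bucketPath", stale);

        // Buggy process() loop: the replacement writer is never re-registered
        Writer current = writers.get("bucketPath");
        try {
            current.append("event");
        } catch (BucketClosedException e) {
            current = new Writer();       // recreated, but NOT put back into `writers`
            current.append("event");
        }

        // The flush loop only sees the stale writer
        for (Writer w : writers.values()) {
            w.flush();
        }

        System.out.println(current.flushed); // prints "false": appended data was never flushed
    }
}
```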
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)