Posted to dev@flume.apache.org by "Denes Arvay (JIRA)" <ji...@apache.org> on 2016/08/17 13:38:20 UTC

[jira] [Created] (FLUME-2973) Deadlock in hdfs sink

Denes Arvay created FLUME-2973:
----------------------------------

             Summary: Deadlock in hdfs sink
                 Key: FLUME-2973
                 URL: https://issues.apache.org/jira/browse/FLUME-2973
             Project: Flume
          Issue Type: Bug
          Components: Sinks+Sources
    Affects Versions: v1.7.0
            Reporter: Denes Arvay
            Assignee: Denes Arvay
            Priority: Critical


Automatic closing of {{BucketWriter}}s (when the number of open files reaches {{hdfs.maxOpenFiles}}) and the file rolling thread can end up in a deadlock.

When creating a new {{BucketWriter}} in {{HDFSEventSink}}, the sink locks {{HDFSEventSink.sfWritersLock}}, and the {{close()}} called from {{HDFSEventSink.sfWriters.removeEldestEntry}} then tries to lock the {{BucketWriter}} instance.
On the other hand, if the file is being rolled, {{BucketWriter.close(boolean)}} locks the {{BucketWriter}} instance first, and the close callback then tries to lock {{sfWritersLock}}.
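
For illustration, here is a minimal, self-contained sketch of that lock-order inversion (the class, the thread names and the two plain monitor objects are made up for the example, it is not Flume's actual code): two threads taking the same pair of locks in opposite order block each other exactly as described.
{code:java}
// Minimal sketch of the inverted lock acquisition order described above.
public class DeadlockSketch {
    private static final Object sfWritersLock = new Object(); // held while creating/evicting writers
    private static final Object bucketWriter  = new Object(); // stands in for the BucketWriter monitor

    public static void main(String[] args) {
        // Models the sink thread: takes sfWritersLock, then needs the
        // BucketWriter monitor to close the evicted (eldest) writer.
        Thread sinkThread = new Thread(() -> {
            synchronized (sfWritersLock) {
                sleep(100); // widen the race window
                synchronized (bucketWriter) {
                    System.out.println("sink: closed eldest writer");
                }
            }
        });

        // Models the roll thread: holds the BucketWriter monitor inside
        // close(boolean), then the close callback needs sfWritersLock.
        Thread rollThread = new Thread(() -> {
            synchronized (bucketWriter) {
                sleep(100);
                synchronized (sfWritersLock) {
                    System.out.println("roll: removed writer from sfWriters");
                }
            }
        });

        sinkThread.start();
        rollThread.start();
        // With the opposite acquisition order in each thread, both typically block forever here.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
{code}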

The chance of hitting this deadlock is higher when {{hdfs.maxOpenFiles}} is set to a low value (e.g. 1).
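
As an illustration (the agent/sink names and the path are made up; only the {{hdfs.*}} keys are real HDFS sink settings), a configuration along these lines keeps at most one writer open and rolls files frequently, so evictions and rolls race constantly:
{code}
# hypothetical agent "a1" with a single HDFS sink "k1"
a1.sinks.k1.type = hdfs
# bucket by an event header so more than one path is in play
a1.sinks.k1.hdfs.path = hdfs://namenode/flume/events/%{bucket}
a1.sinks.k1.hdfs.maxOpenFiles = 1
a1.sinks.k1.hdfs.rollInterval = 1
{code}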

Script to reproduce: https://gist.github.com/adenes/96503a6e737f9604ab3ee9397a5809ff
(place it under {{flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs}})
The deadlock usually occurs within ~30 iterations.


