Posted to dev@flume.apache.org by "Brock Noland (JIRA)" <ji...@apache.org> on 2014/05/14 06:58:16 UTC

[jira] [Updated] (FLUME-2245) HDFS files with errors unable to close

     [ https://issues.apache.org/jira/browse/FLUME-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brock Noland updated FLUME-2245:
--------------------------------

    Attachment: FLUME-2245.patch

Attached is the patch which fixed the issue for me.

> HDFS files with errors unable to close
> --------------------------------------
>
>                 Key: FLUME-2245
>                 URL: https://issues.apache.org/jira/browse/FLUME-2245
>             Project: Flume
>          Issue Type: Bug
>            Reporter: Juhani Connolly
>         Attachments: FLUME-2245.patch, flume.log.1133, flume.log.file
>
>
> This is running on a snapshot of Flume 1.5 with the git hash 99db32ccd163daf9d7685f0e8485941701e1133d.
> When a datanode goes unresponsive for a significant amount of time (for example, during a long GC pause), an append failure occurs, followed by repeated timeouts in the log and a failure to close the stream. The relevant section of the logs is attached (starting from where the errors first appear).
> The same log output repeats periodically, consistently running into a TimeoutException.
> Restarting Flume (or presumably just the HDFSSink) resolves the issue.
> Probable cause is in the comments.
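
The failure mode described above, where close() keeps timing out against an unresponsive DataNode pipeline, can be illustrated with a bounded-close pattern. The sketch below is only an illustration of that general idea and is not the attached FLUME-2245.patch; the class and parameter names (BoundedCloser, closeTimeoutMs, maxCloseAttempts) are hypothetical. It wraps the close() call in a timed Future and gives up after a fixed number of attempts instead of retrying indefinitely.

import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.*;

/**
 * Illustrative sketch only: bound the time and the number of attempts spent
 * closing an HDFS output stream so a hung DataNode cannot block the writer
 * forever. Names here are hypothetical and not Flume configuration keys.
 */
public class BoundedCloser {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private final long closeTimeoutMs;
  private final int maxCloseAttempts;

  public BoundedCloser(long closeTimeoutMs, int maxCloseAttempts) {
    this.closeTimeoutMs = closeTimeoutMs;
    this.maxCloseAttempts = maxCloseAttempts;
  }

  /** Returns true if the stream closed cleanly, false if we gave up. */
  public boolean close(final Closeable stream) {
    for (int attempt = 1; attempt <= maxCloseAttempts; attempt++) {
      Future<Void> f = executor.submit(new Callable<Void>() {
        @Override
        public Void call() throws IOException {
          stream.close();   // may hang if the pipeline DataNode is unresponsive
          return null;
        }
      });
      try {
        f.get(closeTimeoutMs, TimeUnit.MILLISECONDS);
        return true;                       // closed successfully
      } catch (TimeoutException e) {
        f.cancel(true);                    // interrupt the hung close and retry
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      } catch (ExecutionException e) {
        // close() itself threw (e.g. an IOException); retry up to the limit
      }
    }
    return false;                          // give up instead of retrying forever
  }

  /** Call once the closer is no longer needed. */
  public void shutdown() {
    executor.shutdownNow();
  }
}

With a bound like this in place, a hung DataNode can no longer pin the writer indefinitely, which is consistent with the observation above that restarting Flume (or just the HDFS sink) clears the condition.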



--
This message was sent by Atlassian JIRA
(v6.2#6252)