Posted to dev@flume.apache.org by "Juhani Connolly (JIRA)" <ji...@apache.org> on 2013/11/22 07:21:41 UTC

[jira] [Created] (FLUME-2245) HDFS files with errors unable to close

Juhani Connolly created FLUME-2245:
--------------------------------------

             Summary: HDFS files with errors unable to close
                 Key: FLUME-2245
                 URL: https://issues.apache.org/jira/browse/FLUME-2245
             Project: Flume
          Issue Type: Bug
            Reporter: Juhani Connolly


This is running on a snapshot of Flume-1.5 at git hash 99db32ccd163daf9d7685f0e8485941701e1133d.

When a datanode goes unresponsive for a significant amount of time (for example, during a long GC pause), an append failure occurs, followed by repeated timeouts in the log and a failure to close the stream. The relevant section of the logs is attached (starting from where the errors first appear).

The same log output repeats periodically, each attempt consistently running into a TimeoutException.
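For illustration, here is a minimal sketch of the timeout-bounded close pattern that matches the symptom in the logs. The helper name and structure are hypothetical, not the actual HDFSSink code: the close call blocks on the unresponsive pipeline, the timeout fires, and every subsequent retry blocks on the same stuck stream.

{code:java}
import java.io.Closeable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical helper illustrating the symptom; not the actual Flume code.
public class TimedClose {
    private static final ExecutorService POOL = Executors.newSingleThreadExecutor();

    static void closeWithTimeout(final Closeable stream, long timeoutMs) throws Exception {
        Future<Void> future = POOL.submit(new Callable<Void>() {
            public Void call() throws Exception {
                // Blocks indefinitely while the datanode pipeline is unresponsive.
                stream.close();
                return null;
            }
        });
        try {
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Interrupt the attempt; the underlying stream is still open,
            // so the next retry hits the same stuck close and times out again.
            future.cancel(true);
            throw e;
        }
    }
}
{code}

Under this pattern, no amount of retrying makes progress once the pipeline is wedged, which would explain why the failure persists until the process (or the sink) is restarted.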

Restarting Flume (or presumably just the HDFSSink) resolves the issue.

The probable cause is discussed in the comments.



--
This message was sent by Atlassian JIRA
(v6.1#6144)