Posted to hdfs-dev@hadoop.apache.org by "Daryn Sharp (JIRA)" <ji...@apache.org> on 2017/07/14 15:41:00 UTC

[jira] [Created] (HDFS-12142) Files may be closed before streamer is done

Daryn Sharp created HDFS-12142:
----------------------------------

             Summary: Files may be closed before streamer is done
                 Key: HDFS-12142
                 URL: https://issues.apache.org/jira/browse/HDFS-12142
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
    Affects Versions: 2.8.0
            Reporter: Daryn Sharp


We're encountering multiple cases of clients calling updateBlockForPipeline on completed blocks.  Initial analysis shows the client closing a file, completeFile succeeding, and the client then immediately attempting pipeline recovery on the block.  The resulting exception is swallowed on the client and only logged on the NN by checkUCBlock.

The problem "appears" to be benign (no data loss), but it is unproven whether the issue always occurs for successfully closed files.  There appears to be very poor coordination between the dfs output stream's threads, leading to races that confuse the streamer thread – which probably should be joined before close returns.
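A minimal sketch of the coordination issue, stripped down to plain threads: if close returns after completeFile succeeds but while the streamer thread is still running, the streamer can later attempt recovery (updateBlockForPipeline) against an already-completed block.  Joining the streamer before completing the file removes the race.  The class and method names below are illustrative stand-ins, not the actual DFSOutputStream/DataStreamer internals.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only -- the real fix would live in
// DFSOutputStream#close / DataStreamer, not in standalone classes.
class Streamer extends Thread {
    final AtomicBoolean finished = new AtomicBoolean(false);

    @Override
    public void run() {
        try {
            // Simulate flushing the last packets and waiting for datanode acks.
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        finished.set(true);
    }
}

public class CloseRace {
    public static void main(String[] args) throws Exception {
        Streamer streamer = new Streamer();
        streamer.start();

        // Buggy ordering: completing the file while the streamer may still be
        // running lets it attempt recovery on a completed block afterwards.
        //
        // Suggested ordering: join the streamer first, so all pipeline work
        // (including any recovery) has finished before the file is completed
        // and close returns.
        streamer.join();
        boolean safeToComplete = streamer.finished.get();
        System.out.println("streamer done before completeFile: " + safeToComplete);
    }
}
```

Thread.join establishes a happens-before edge with the streamer's termination, so after the join the close path is guaranteed to observe that no further pipeline activity is pending.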



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org