Posted to hdfs-dev@hadoop.apache.org by "Erik Krogen (JIRA)" <ji...@apache.org> on 2018/11/02 14:32:00 UTC

[jira] [Created] (HDFS-14048) DFSOutputStream close() throws exception on subsequent call after DataNode restart

Erik Krogen created HDFS-14048:
----------------------------------

             Summary: DFSOutputStream close() throws exception on subsequent call after DataNode restart
                 Key: HDFS-14048
                 URL: https://issues.apache.org/jira/browse/HDFS-14048
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs-client
    Affects Versions: 2.3.0
            Reporter: Erik Krogen
            Assignee: Erik Krogen


We recently discovered an issue in which, during a rolling upgrade, some jobs were failing with exceptions like the following (sadly, this is the whole stack trace):
{code}
java.io.IOException: A datanode is restarting: DatanodeInfoWithStorage[1.1.1.1:71,BP-XXXX,DISK]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:877)
{code}
This was preceded earlier in the log by a statement like:
{code}
INFO [main] org.apache.hadoop.hdfs.DFSClient: A datanode is restarting: DatanodeInfoWithStorage[1.1.1.1:71,BP-XXXX,DISK]
{code}

Strangely, we did not see any other logs indicating that the {{DFSOutputStream}} failed after waiting for the DataNode restart. We eventually realized that in some cases {{DFSOutputStream#close()}} may be called more than once, and that if so, the {{IOException}} above is thrown on the _second_ call to {{close()}} (this is even with HDFS-5335; prior to that fix, it would have been thrown on every call to {{close()}} after the first).
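
For illustration, here is a minimal sketch of hypothetical client code (not taken from the failing jobs) showing how a second {{close()}} commonly sneaks in: an explicit {{close()}} combined with try-with-resources.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DoubleCloseExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // try-with-resources calls close() again when the block exits
    try (FSDataOutputStream out = fs.create(new Path("/tmp/double-close-example"))) {
      out.write("hello".getBytes("UTF-8"));
      out.close(); // first close(): succeeds, even if a DataNode restarted mid-write
    }              // implicit second close(): isClosed() is true, so the stale
                   // "A datanode is restarting" lastException is rethrown
  }
}
{code}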

The problem is that in {{DataStreamer#createBlockOutputStream()}}, after the new output stream is created, the streamer resets its error state:
{code}
errorState.resetInternalError();
// remove all restarting nodes from failed nodes list
failed.removeAll(restartingNodes);
restartingNodes.clear();
{code}
But it forgets to clear {{lastException}}. When {{DFSOutputStream#closeImpl()}} is called a second time, this block is triggered:
{code}
if (isClosed()) {
  LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
      closed, getStreamer().streamerClosed());
  try {
    getStreamer().getLastException().check(true);
{code}
On the second call, {{isClosed()}} is true, so the exception check runs and the stale "A datanode is restarting" exception is thrown, even though the stream has already been closed successfully.
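
A plausible fix (a sketch only; it assumes {{LastExceptionInStreamer}} exposes a {{clear()}} method symmetric with the {{set()}}/{{check()}} calls visible in {{DataStreamer}}) is to drop the stale exception at the same point where the rest of the error state is reset:
{code}
// In DataStreamer#createBlockOutputStream(), after restart recovery succeeds
// (sketch; assumes LastExceptionInStreamer#clear() is available):
errorState.resetInternalError();
// remove all restarting nodes from failed nodes list
failed.removeAll(restartingNodes);
restartingNodes.clear();
// also clear the stale "A datanode is restarting" exception so that a
// redundant close() on the already-closed stream does not rethrow it
lastException.clear();
{code}
With that in place, a second {{close()}} would take the {{isClosed()}} branch, find no stored exception, and return quietly.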


