Posted to hdfs-dev@hadoop.apache.org by "He Yongqiang (JIRA)" <ji...@apache.org> on 2010/02/05 03:48:28 UTC

[jira] Created: (HDFS-951) DFSClient should handle the case where all nodes in a pipeline have failed.

DFSClient should handle the case where all nodes in a pipeline have failed.
---------------------------------------------------------------------------

                 Key: HDFS-951
                 URL: https://issues.apache.org/jira/browse/HDFS-951
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: He Yongqiang


processDatanodeError -> setupPipelineForAppendOrRecovery will set streamerClosed to true if all nodes in the pipeline have failed, and then simply return.
Back in run() in DataStreamer, the check
 if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
                continue;
  }
short-circuits, so the loop falls through and closeInternal() merely sets closed=true.

As a result, the DataOutputStream never gets a chance to clean up: subsequent write/close calls will throw an exception or return null.
This leaves the file that was being written in an incomplete state.
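The interaction described above can be illustrated with a minimal, self-contained Java sketch. The class and field names here (PipelineFailureSketch, fileCompleted, dataQueueSize) are simplified stand-ins invented for illustration, not the real DFSClient internals; only streamerClosed, hasError, and the quoted guard mirror the report:

```java
// Minimal sketch of the reported failure path: when every datanode in the
// pipeline has failed, the streamer marks itself closed and exits without
// ever completing the file or notifying the user-facing output stream.
public class PipelineFailureSketch {
    static boolean streamerClosed = false; // DataStreamer's shutdown flag
    static boolean hasError = true;        // a pipeline error is pending
    static boolean fileCompleted = false;  // would be set by a clean close
    static int dataQueueSize = 1;          // unsent packets still queued

    // Stand-in for setupPipelineForAppendOrRecovery(): every node in the
    // pipeline has failed, so it just flips the flag and returns.
    static void setupPipelineForAppendOrRecovery() {
        streamerClosed = true; // no datanodes left to rebuild the pipeline
    }

    // Stand-in for DataStreamer.run(): once streamerClosed is true the
    // loop exits immediately and closeInternal() runs, even though queued
    // data was never sent.
    static void run() {
        setupPipelineForAppendOrRecovery();
        while (!streamerClosed) {
            if (streamerClosed || hasError || dataQueueSize == 0) {
                continue; // the guard quoted in the report
            }
            // ...send packets (never reached in this scenario)...
        }
        closeInternal();
    }

    static void closeInternal() {
        // Only tears down the streamer; fileCompleted stays false, so any
        // later write()/close() on the stream fails and the file is left
        // in an incomplete state.
    }

    public static void main(String[] args) {
        run();
        System.out.println("streamerClosed=" + streamerClosed
                + ", fileCompleted=" + fileCompleted);
    }
}
```

Running the sketch shows the streamer shut down while the file was never completed, which is exactly the state the report says the client is left to discover only on its next write or close.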

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.