Posted to hdfs-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2009/12/10 21:55:18 UTC

[jira] Created: (HDFS-826) Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline

Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline
-----------------------------------------------------------------------------------------------

                 Key: HDFS-826
                 URL: https://issues.apache.org/jira/browse/HDFS-826
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs client
            Reporter: dhruba borthakur
            Assignee: dhruba borthakur


HDFS does not replicate the last block of a file that is currently being written to by an application. Every datanode death in the write pipeline therefore decreases the reliability of the last block of the file being written. This situation can be improved if the application is notified of a datanode death in the write pipeline; the application can then decide on the right course of action for this event.

In our use-case, the application can close the file on the first datanode death and start writing to a newly created file. This ensures that the replication guarantee of a block stays close to 3 at all times.
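
For illustration, a minimal sketch of this rollover pattern. The method currentPipelineSize() is hypothetical: it stands in for whatever mechanism HDFS would expose to report the number of live datanodes in the write pipeline; the path scheme and class name are likewise made up for the example.

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RollingWriter {
      private final FileSystem fs;
      private final int minReplicas;   // e.g. 3, matching the block replication factor
      private FSDataOutputStream out;
      private int part = 0;

      public RollingWriter(FileSystem fs, int minReplicas) throws IOException {
        this.fs = fs;
        this.minReplicas = minReplicas;
        this.out = fs.create(nextPath());
      }

      private Path nextPath() {
        return new Path(String.format("/logs/part-%04d", part++));
      }

      public void write(byte[] record) throws IOException {
        // Hypothetical query; not part of the current client API.
        // If the pipeline has shrunk, seal the under-replicated file
        // and continue in a new file, which gets a fresh full pipeline.
        if (currentPipelineSize(out) < minReplicas) {
          out.close();
          out = fs.create(nextPath());
        }
        out.write(record);
      }

      public void close() throws IOException {
        out.close();
      }

      // Placeholder for the proposed notification/query mechanism.
      private int currentPipelineSize(FSDataOutputStream s) {
        return minReplicas; // assume a healthy pipeline in this sketch
      }
    }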

One idea is to make DFSOutputStream.write() throw an exception if the number of datanodes in the write pipeline falls below the minimum.replication.factor that is set on the client (this is backward compatible).
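
Under that proposal, the rollover above collapses into ordinary exception handling. A hedged sketch, reusing the fields from the RollingWriter example; the proposal does not specify an exception type, so a plain IOException is assumed here:

    public void write(byte[] record) throws IOException {
      try {
        out.write(record);
      } catch (IOException e) {
        // Proposed behavior: write() throws once the pipeline falls
        // below minimum.replication.factor configured on the client.
        out.close();                  // seal the under-replicated file
        out = fs.create(nextPath());  // new file, fresh full pipeline
        out.write(record);            // retry the record
      }
    }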

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.