Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2009/04/16 04:02:15 UTC

[jira] Updated: (HADOOP-2757) Should DFS outputstream's close wait forever?

     [ https://issues.apache.org/jira/browse/HADOOP-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-2757:
-------------------------------------

    Attachment: softMount1.patch

This patch introduces a configuration parameter called dfs.leaserenewal.timeout. If the dfsclient is unable to contact the dfs servers for longer than this timeout, it marks all open streams as closed and releases all resources associated with those streams. Any subsequent operation on one of these streams results in an exception to the application.
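
To illustrate, here is a rough sketch of how an application might set the timeout and cope with a stream the client has force-closed. The property name comes from the patch description above; the value chosen, the file path, and the exact exception surfaced are assumptions for illustration only, not confirmed by the patch.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SoftMountExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Give up on lease renewal (and force-close open streams) after 10 minutes
    // without contact with the dfs servers. The value and units are
    // illustrative assumptions, not defaults taken from the patch.
    conf.setLong("dfs.leaserenewal.timeout", 10 * 60 * 1000L);

    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/soft-mount-demo"));
    try {
      out.write("hello".getBytes());
      out.close();
    } catch (IOException e) {
      // If the client could not reach the servers for longer than the timeout,
      // the stream has already been marked closed and any further operation on
      // it surfaces to the application as an exception.
      System.err.println("Stream was force-closed by the client: " + e);
    }
  }
}
{code}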

> Should DFS outputstream's close wait forever?
> ---------------------------------------------
>
>                 Key: HADOOP-2757
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2757
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>         Attachments: softMount1.patch
>
>
> Currently {{DFSOutputStream.close()}} waits forever if the Namenode keeps throwing a {{NotYetReplicated}} exception, for whatever reason. It's pretty annoying for a user. Should the loop inside close have a timeout? If so, how much? It could probably be something like 10 minutes.
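
A minimal sketch of what a bounded wait inside close could look like; the 10-minute figure is taken from the description above, while the class and method names (Completer, closeWithTimeout) are hypothetical and do not mirror the real DFSOutputStream internals.

{code}
import java.io.IOException;

/**
 * Hypothetical sketch of a close() loop that gives up after a deadline instead
 * of retrying forever. Names here are illustrative, not DFSOutputStream code.
 */
public class BoundedClose {
  interface Completer {
    /** Returns true once the namenode reports the file as complete. */
    boolean tryComplete() throws IOException;
  }

  static final long DEFAULT_CLOSE_TIMEOUT = 10 * 60 * 1000L; // the 10 minutes suggested above

  static void closeWithTimeout(Completer completer) throws IOException {
    long deadline = System.currentTimeMillis() + DEFAULT_CLOSE_TIMEOUT;
    while (!completer.tryComplete()) {            // e.g. blocks still not yet replicated
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("Timed out waiting for the namenode to complete the file");
      }
      try {
        Thread.sleep(400);                        // back off before retrying
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted while waiting to close");
      }
    }
  }
}
{code}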

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.