Posted to common-dev@hadoop.apache.org by "Robert Chansler (JIRA)" <ji...@apache.org> on 2008/09/11 18:47:46 UTC

[jira] Commented: (HADOOP-3989) Secondary Namenode: Limit number of retries when fsimage/edits transfer fails

    [ https://issues.apache.org/jira/browse/HADOOP-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12630245#action_12630245 ] 

Robert Chansler commented on HADOOP-3989:
-----------------------------------------

And notice of each failure must be shown in the server logs.

(0.18 hides some IOExceptions at the "debug" level of reporting.)
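
A minimal sketch of what surfacing those failures might look like (illustrative only; the class and helper method names below are assumptions, not the actual SecondaryNameNode code): report the IOException at WARN so every failed transfer appears in the server logs, instead of hiding it at DEBUG as 0.18 does.

import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch, not the real SecondaryNameNode implementation.
class CheckpointTransfer {
  private static final Log LOG = LogFactory.getLog(CheckpointTransfer.class);

  void downloadCheckpointFiles() throws IOException {
    try {
      transferFsImageAndEdits();   // hypothetical helper for the HTTP pull/push
    } catch (IOException e) {
      // Log every failure at WARN so it shows up in the server logs,
      // rather than only at the "debug" level of reporting.
      LOG.warn("Transfer of fsimage/edits failed", e);
      throw e;
    }
  }

  private void transferFsImageAndEdits() throws IOException {
    // placeholder for the actual fsimage/edits transfer
  }
}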

> Secondary Namenode: Limit number of retries when fsimage/edits transfer fails
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-3989
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3989
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Koji Noguchi
>            Priority: Minor
>
> When hitting HADOOP-3980, the secondary namenode kept on pulling gigs of fsimage/edits every 10 minutes, which slowed down the namenode significantly.  When the namenode is down, I'd like the secondary namenode to keep retrying to connect.  However, when the pull/push of large files keeps failing, I'd like an upper limit on the number of retries.  Either shut down or sleep for _fs.checkpoint.period_ seconds.
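
One possible shape for the bounded-retry behavior described above, sketched under stated assumptions: the retry limit, class, and helper names are made up for illustration; only _fs.checkpoint.period_ comes from the issue, and the sleep-then-reset path is one of the two options requested (the other being shutdown).

import java.io.IOException;

// Sketch of a bounded retry loop for the checkpoint transfer; not actual Hadoop code.
class BoundedCheckpointRetry {
  private static final int MAX_TRANSFER_RETRIES = 5;   // assumed limit, not from the issue

  void runCheckpointWithRetries(long checkpointPeriodSeconds) throws InterruptedException {
    int failures = 0;
    while (true) {
      try {
        transferFsImageAndEdits();       // hypothetical helper for the pull/push
        return;                          // success: stop retrying
      } catch (IOException e) {
        failures++;
        if (failures >= MAX_TRANSFER_RETRIES) {
          // Upper limit reached: back off for fs.checkpoint.period seconds
          // before trying again (shutting down would be the alternative).
          Thread.sleep(checkpointPeriodSeconds * 1000L);
          failures = 0;
        }
      }
    }
  }

  private void transferFsImageAndEdits() throws IOException {
    // placeholder for the actual fsimage/edits transfer
  }
}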

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.