Posted to common-dev@hadoop.apache.org by "Boris Shkolnik (JIRA)" <ji...@apache.org> on 2009/04/02 00:54:13 UTC

[jira] Commented: (HADOOP-4045) Increment checkpoint if we see failures in rollEdits

    [ https://issues.apache.org/jira/browse/HADOOP-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12694814#action_12694814 ] 

Boris Shkolnik commented on HADOOP-4045:
----------------------------------------


   1. In FSImage.setCheckpointTime(), the variable al is not used.
bq. fixed
   2. processIOError(ArrayList<StorageDirectory> sds) may be eliminated.
bq. This would force using the two-argument version of the function everywhere, in most cases with "true" as the second argument.
   3. I would also get rid of processIOError(ArrayList<EditLogOutputStream> errorStreams). The point is that it is better to have only one processIOError in each class; otherwise it can get as bad as it is now, with all the different variants of it. If you think that is too many changes, then let's at least make both of them private.
bq. see 2.
   4. Do we want to make removedStorageDirs a map in order to avoid adding the same directory to it twice, or can that never happen?
bq. Good idea. It will need a separate JIRA.
   5. Same with Storage.storageDirs. If we search in a collection, then we might want to use a searchable collection. This may be done in a separate issue.
bq. same as 4.
   6. It's somewhat confusing: FSImage.processIOError() calls editLog.processIOError() and then FSEditLog.processIOError() calls fsimage.processIOError(). Is it going to converge at some point?
bq. It should. Every time processIOError calls its counterpart in the other class, it passes _false_ as the second (propagate) argument to make sure the counterpart does not call back into the original function (see the sketch after this list).
   7. setCheckpointTime() ignores IO errors. Just mentioning this; I don't see how to avoid it. Failed streams/directories will be removed the next time flushAndSync() is called.
bq. Yes, it should be caught elsewhere.
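
Regarding items 2, 3 and 6, here is a minimal sketch of the propagate-flag pattern discussed above. It is only an illustration: the class bodies, the streamsFor() helper and the fields shown are simplified stand-ins, not the actual FSImage/FSEditLog code from the patch.

{code}
import java.util.ArrayList;

class StorageDirectory {
  String root;
  StorageDirectory(String root) { this.root = root; }
}

class EditLogOutputStream {
  StorageDirectory dir;
  EditLogOutputStream(StorageDirectory dir) { this.dir = dir; }
}

class FSImage {
  FSEditLog editLog;
  ArrayList<StorageDirectory> storageDirs = new ArrayList<StorageDirectory>();
  ArrayList<StorageDirectory> removedStorageDirs = new ArrayList<StorageDirectory>();

  // Remove failed directories; 'propagate' says whether to notify the edit log side.
  void processIOError(ArrayList<StorageDirectory> bad, boolean propagate) {
    for (StorageDirectory sd : bad) {
      storageDirs.remove(sd);
      removedStorageDirs.add(sd);   // item 4: a Set/Map would avoid duplicates here
    }
    if (propagate) {
      // Pass 'false' so FSEditLog does not call back into this method (item 6).
      editLog.processIOError(editLog.streamsFor(bad), false);
    }
  }
}

class FSEditLog {
  FSImage fsimage;
  ArrayList<EditLogOutputStream> editStreams = new ArrayList<EditLogOutputStream>();

  // Illustrative helper: find the streams that live in the given directories.
  ArrayList<EditLogOutputStream> streamsFor(ArrayList<StorageDirectory> dirs) {
    ArrayList<EditLogOutputStream> result = new ArrayList<EditLogOutputStream>();
    for (EditLogOutputStream es : editStreams) {
      if (dirs.contains(es.dir)) {
        result.add(es);
      }
    }
    return result;
  }

  // Remove failed streams; again 'propagate' controls whether to notify FSImage.
  void processIOError(ArrayList<EditLogOutputStream> errorStreams, boolean propagate) {
    ArrayList<StorageDirectory> badDirs = new ArrayList<StorageDirectory>();
    for (EditLogOutputStream es : errorStreams) {
      editStreams.remove(es);
      badDirs.add(es.dir);
    }
    if (propagate) {
      // 'false' on the way back as well, so the two methods converge instead of looping.
      fsimage.processIOError(badDirs, false);
    }
  }
}
{code}

The key point is that a call with propagate=true fans out exactly once to the counterpart class with propagate=false, so the two methods cannot keep calling each other.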



> Increment checkpoint if we see failures in rollEdits
> ----------------------------------------------------
>
>                 Key: HADOOP-4045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4045
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Lohit Vijayarenu
>            Assignee: Boris Shkolnik
>            Priority: Critical
>             Fix For: 0.19.2
>
>         Attachments: HADOOP-4045-1.patch, HADOOP-4045.patch
>
>
> In _FSEditLog::rollEdits_, if we encounter an error while opening edits.new, we remove the storage directory associated with it. At this point we should also increment the checkpoint time on all other directories.
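
A minimal sketch of the behaviour the issue asks for, assuming simplified names (editsDirs, checkpointTime and the file handling below are illustrative, not the actual FSEditLog code):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;

class RollEditsSketch {
  ArrayList<File> editsDirs = new ArrayList<File>();  // configured edits directories
  long checkpointTime = 0;

  void rollEdits() {
    ArrayList<File> failed = new ArrayList<File>();
    for (File dir : editsDirs) {
      try {
        // Try to open edits.new in this directory.
        new FileOutputStream(new File(dir, "edits.new")).close();
      } catch (IOException e) {
        failed.add(dir);               // this directory is now out of sync
      }
    }
    if (!failed.isEmpty()) {
      editsDirs.removeAll(failed);     // drop the broken directories ...
      checkpointTime++;                // ... and advance the checkpoint on the rest,
                                       // so a stale copy in a failed directory is not
                                       // mistaken for the latest state after a restart
    }
  }
}
{code}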

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.