Posted to dev@bookkeeper.apache.org by "Sijie Guo (JIRA)" <ji...@apache.org> on 2012/05/07 07:50:11 UTC

[jira] [Commented] (BOOKKEEPER-229) Deleted entry log files would be garbage collected again and again.

    [ https://issues.apache.org/jira/browse/BOOKKEEPER-229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13269384#comment-13269384 ] 

Sijie Guo commented on BOOKKEEPER-229:
--------------------------------------

The test change is not related to the code change. Because the patch changed extractMetaFromEntryLog to throw IOException, EntryLogTest could no longer obtain the EntryLogMetadata the old way, so I changed the method used in EntryLogTest so that the test does not break.
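
For clarity, here is a rough sketch (not the attached BK-229.diff itself) of the direction described above: extractMetaFromEntryLog rethrows the IOException, and the caller decides whether to record anything for that entry log. Class, field, and method names are taken from the snippets quoted below; everything else is illustrative only.
{code}
// Sketch only: extractMetaFromEntryLog no longer swallows the IOException,
// so a deleted or unreadable entry log can be told apart from a scanned one.
static EntryLogMetadata extractMetaFromEntryLog(EntryLogger entryLogger, long entryLogId)
        throws IOException {
    EntryLogMetadata entryLogMeta = new EntryLogMetadata(entryLogId);
    ExtractionScanner scanner = new ExtractionScanner(entryLogMeta);
    // Any IOException (e.g. the entry log file was already garbage collected) now bubbles up.
    entryLogger.scanEntryLog(entryLogId, scanner);
    return entryLogMeta;
}

protected Map<Long, EntryLogMetadata> extractMetaFromEntryLogs(Map<Long, EntryLogMetadata> entryLogMetaMap) {
    long curLogId = entryLogger.logId;
    for (long entryLogId = 0; entryLogId < curLogId; entryLogId++) {
        if (entryLogMetaMap.containsKey(entryLogId)) {
            continue;
        }
        try {
            entryLogMetaMap.put(entryLogId,
                                extractMetaFromEntryLog(entryLogger, entryLogId));
        } catch (IOException e) {
            // Skip this log instead of recording an empty metadata entry, so the
            // garbage collector will not keep trying to collect an already-deleted file.
            LOG.warn("Premature exception when processing " + entryLogId
                     + ", recovery will take care of the problem", e);
        }
    }
    return entryLogMetaMap;
}
{code}
With this shape, a test like EntryLogTest that calls extractMetaFromEntryLog directly has to handle (or expect) the IOException, which is why the test needed adjusting.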
                
> Deleted entry log files would be garbage collected again and again.
> -------------------------------------------------------------------
>
>                 Key: BOOKKEEPER-229
>                 URL: https://issues.apache.org/jira/browse/BOOKKEEPER-229
>             Project: Bookkeeper
>          Issue Type: Bug
>          Components: bookkeeper-server
>    Affects Versions: 4.1.0
>            Reporter: Sijie Guo
>            Assignee: Sijie Guo
>             Fix For: 4.1.0
>
>         Attachments: BK-229.diff, BK-229.diff_v2
>
>
> After BOOKKEEPER-188 was applied, extractMetaFromEntryLogs was moved from EntryLogger to GarbageCollectorThread with some changes.
> Before BOOKKEEPER-188, we added the entryLogMeta to entryLogMetaMap only when we could successfully scan the entry log file. If a log file had already been garbage collected, its entryLogMeta was not put into the map.
> {code}
> -    protected Map<Long, EntryLogMetadata> extractMetaFromEntryLogs(Map<Long, EntryLogMetadata> entryLogMetaMap) throws IOException {
> -        // Extract it for every entry log except for the current one.
> -        // Entry Log ID's are just a long value that starts at 0 and increments
> -        // by 1 when the log fills up and we roll to a new one.
> -        long curLogId = logId;
> -        for (long entryLogId = 0; entryLogId < curLogId; entryLogId++) {
> -            // Comb the current entry log file if it has not already been extracted.
> -            if (entryLogMetaMap.containsKey(entryLogId)) {
> -                continue;
> -            }
> -            LOG.info("Extracting entry log meta from entryLogId: " + entryLogId);
> -            EntryLogMetadata entryLogMeta = new EntryLogMetadata(entryLogId);
> -            ExtractionScanner scanner = new ExtractionScanner(entryLogMeta);
> -            // Read through the entry log file and extract the entry log meta
> -            try {
> -                scanEntryLog(entryLogId, scanner);
> -                LOG.info("Retrieved entry log meta data entryLogId: " + entryLogId + ", meta: " + entryLogMeta);
> -                entryLogMetaMap.put(entryLogId, entryLogMeta);
> -            } catch(IOException e) {
> -              LOG.warn("Premature exception when processing " + entryLogId +
> -                       "recovery will take care of the problem", e);
> -            }
> -
> -        }
> -        return entryLogMetaMap;
> -    }
> {code}
> But after BOOKKEEPER-188 was applied, an empty entryLogMeta is put into entryLogMetaMap for those deleted entry log files, so GarbageCollectorThread garbage collects those deleted entry log files again and again. This produces lots of these error messages; they are only noise and do not affect the correctness of the logic.
> {code}
> +    protected Map<Long, EntryLogMetadata> extractMetaFromEntryLogs(Map<Long, EntryLogMetadata> entryLogMetaMap)
> +            throws IOException {
> +        // Extract it for every entry log except for the current one.
> +        // Entry Log ID's are just a long value that starts at 0 and increments
> +        // by 1 when the log fills up and we roll to a new one.
> +        long curLogId = entryLogger.logId;
> +        for (long entryLogId = 0; entryLogId < curLogId; entryLogId++) {
> +            // Comb the current entry log file if it has not already been extracted.
> +            if (entryLogMetaMap.containsKey(entryLogId)) {
> +                continue;
> +            }
> +            LOG.info("Extracting entry log meta from entryLogId: " + entryLogId);
> +
> +            // Read through the entry log file and extract the entry log meta
> +            entryLogMetaMap.put(entryLogId,
> +                                extractMetaFromEntryLog(entryLogger, entryLogId));
> +        }
> +        return entryLogMetaMap;
> +    }
> +
> +    static EntryLogMetadata extractMetaFromEntryLog(EntryLogger entryLogger, long entryLogId)
> +            throws IOException {
> +        EntryLogMetadata entryLogMeta = new EntryLogMetadata(entryLogId);
> +        ExtractionScanner scanner = new ExtractionScanner(entryLogMeta);
> +        try {
> +            // Read through the entry log file and extract the entry log meta
> +            entryLogger.scanEntryLog(entryLogId, scanner);
> +            LOG.info("Retrieved entry log meta data entryLogId: "
> +                     + entryLogId + ", meta: " + entryLogMeta);
> +        } catch(IOException e) {
> +            LOG.warn("Premature exception when processing " + entryLogId +
> +                     "recovery will take care of the problem", e);
> +        }
> +
> +        return entryLogMeta;
> +    }
> {code}
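
To make the "again and again" part of the quoted description concrete, the following is a hypothetical sketch of a garbage-collection pass; meta.isEmpty() and removeEntryLog(...) are assumed names for illustration, not quoted from the codebase. An empty placeholder metadata looks fully reclaimable, so each pass drops the map entry and tries to delete a file that no longer exists, and the next extraction pass recreates the empty placeholder along with the noisy warning.
{code}
// Hypothetical GC pass, for illustration only.
Iterator<Map.Entry<Long, EntryLogMetadata>> it = entryLogMetaMap.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<Long, EntryLogMetadata> entry = it.next();
    if (entry.getValue().isEmpty()) {
        // The placeholder metadata of an already-deleted log is empty, so the log looks
        // fully reclaimable: the delete attempt fails (the file is gone) and the map
        // entry is removed. On the next pass extractMetaFromEntryLogs() scans the
        // missing file again, logs the "Premature exception ..." warning again, and
        // re-inserts an empty placeholder -- hence the repeated garbage collection.
        it.remove();
        removeEntryLog(entry.getKey());   // assumed helper that deletes the entry log file
    }
}
{code}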
