Posted to dev@nutch.apache.org by "Doğacan Güney (JIRA)" <ji...@apache.org> on 2007/08/08 09:34:59 UTC

[jira] Resolved: (NUTCH-535) ParseData's contentMeta accumulates unnecessary values during parse

     [ https://issues.apache.org/jira/browse/NUTCH-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Doğacan Güney resolved NUTCH-535.
---------------------------------

    Resolution: Fixed

Fixed in rev. 563777.


> ParseData's contentMeta accumulates unnecessary values during parse
> -------------------------------------------------------------------
>
>                 Key: NUTCH-535
>                 URL: https://issues.apache.org/jira/browse/NUTCH-535
>             Project: Nutch
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: Doğacan Güney
>            Assignee: Doğacan Güney
>             Fix For: 1.0.0
>
>         Attachments: NUTCH-535.patch, NUTCH_535_v2.patch
>
>
> After NUTCH-506, if you run parse on a segment, ParseData's contentMeta accumulates the metadata of every content object parsed so far. This is because NUTCH-506 changed the constructor to create a new metadata instance (before NUTCH-506, a new metadata instance was created on every call to readFields). Hadoop reuses the same Content instance across records, so each call to Content.readFields during ParseSegment grows the metadata. Because of this, one can end up with a *huge* parse_data directory (something like 10 times larger than the content directory).
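The accumulation pattern described above can be sketched in isolation. This is a minimal, hypothetical illustration (the class and method names below are simplifications, not the actual Nutch Content or Hadoop Writable code): Hadoop reuses one object across records and calls readFields() on it repeatedly, so a readFields() that only appends to a map created once in the constructor leaks state from record to record, while a version that clears the map first does not.

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch of the Writable-reuse bug: Hadoop calls
// readFields() repeatedly on one reused instance. If readFields()
// only appends to a map created in the constructor, entries from
// earlier records accumulate.
class ContentSketch {
    private final Map<String, String> metadata = new HashMap<>();

    // Buggy pattern: never resets, so a reused instance keeps
    // growing with each deserialized record.
    void readFieldsBuggy(DataInput in) throws IOException {
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            metadata.put(in.readUTF(), in.readUTF());
        }
    }

    // Fixed pattern: clear per-record state before deserializing.
    void readFieldsFixed(DataInput in) throws IOException {
        metadata.clear();
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            metadata.put(in.readUTF(), in.readUTF());
        }
    }

    int metadataSize() {
        return metadata.size();
    }

    // Serialize a one-entry record for the demo.
    static byte[] record(String key, String value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(1);
        out.writeUTF(key);
        out.writeUTF(value);
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        ContentSketch buggy = new ContentSketch();
        ContentSketch fixed = new ContentSketch();
        // Simulate the framework feeding three records into the
        // same reused instances.
        for (String key : new String[]{"k1", "k2", "k3"}) {
            byte[] rec = record(key, "v");
            buggy.readFieldsBuggy(new DataInputStream(new ByteArrayInputStream(rec)));
            fixed.readFieldsFixed(new DataInputStream(new ByteArrayInputStream(rec)));
        }
        // Each record carries one entry, yet the buggy instance has
        // accumulated all three; the fixed one holds only the last.
        System.out.println("buggy: " + buggy.metadataSize()); // 3
        System.out.println("fixed: " + fixed.metadataSize()); // 1
    }
}
```

In the real issue the accumulated entries were written back out with each ParseData, which is why parse_data ballooned to many times the size of the content directory.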

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.