Posted to dev@nutch.apache.org by "Doğacan Güney (JIRA)" <ji...@apache.org> on 2009/01/29 20:45:59 UTC
[jira] Updated: (NUTCH-683) NUTCH-676 broke CrawlDbMerger
[ https://issues.apache.org/jira/browse/NUTCH-683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Doğacan Güney updated NUTCH-683:
--------------------------------
Attachment: crawldbmerger_v2.patch
Patch for issue
> NUTCH-676 broke CrawlDbMerger
> -----------------------------
>
> Key: NUTCH-683
> URL: https://issues.apache.org/jira/browse/NUTCH-683
> Project: Nutch
> Issue Type: Bug
> Affects Versions: 1.0.0
> Reporter: Doğacan Güney
> Assignee: Doğacan Güney
> Priority: Minor
> Fix For: 1.0.0
>
> Attachments: crawldbmerger_v2.patch
>
>
> Switching to Hadoop's MapWritable broke CrawlDbMerger. Part of the reason is that we reuse the same MapWritable instance during reduce,
> which apparently is a big no-no for Hadoop's MapWritable. Also, Hadoop's MapWritable#putAll doesn't seem to work (see HADOOP-5142),
> so we should also work around that.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
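The bug described above is a general Hadoop pitfall: a mutable object that is reused and re-emitted across reduce() calls ends up aliased by every consumer, so later calls clobber earlier output. Below is a minimal, illustrative plain-Java sketch of the failure and the fresh-instance workaround. This is not the actual crawldbmerger_v2.patch; a plain HashMap stands in for Hadoop's MapWritable, and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: HashMap stands in for Hadoop's MapWritable,
// and these methods stand in for CrawlDbMerger's reduce(). Not the real patch.
public class ReuseDemo {

    // Buggy pattern: one shared mutable instance, cleared and refilled per call.
    static final Map<String, String> shared = new HashMap<>();

    static Map<String, String> reduceBuggy(String key, String value) {
        shared.clear();            // wipes what earlier callers still reference
        shared.put(key, value);
        return shared;             // every caller gets the SAME object back
    }

    // Workaround pattern: allocate a fresh instance per reduce call.
    static Map<String, String> reduceFixed(String key, String value) {
        Map<String, String> fresh = new HashMap<>();
        fresh.put(key, value);     // copy entries one by one with put(), not putAll()
        return fresh;
    }

    // Runs both patterns over two keys; returns whether key "a" survived
    // in the first collected output of each.
    static boolean[] demo() {
        List<Map<String, String>> buggy = new ArrayList<>();
        List<Map<String, String>> fixed = new ArrayList<>();
        for (String k : new String[] {"a", "b"}) {
            buggy.add(reduceBuggy(k, "meta-" + k));
            fixed.add(reduceFixed(k, "meta-" + k));
        }
        // buggy.get(0) aliases 'shared', which by now holds only key "b".
        return new boolean[] { buggy.get(0).containsKey("a"),
                               fixed.get(0).containsKey("a") };
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0]); // false: first output was clobbered
        System.out.println(r[1]); // true: fresh instances keep their entries
    }
}
```

The patch's other workaround, avoiding MapWritable#putAll, follows the same idea: copy entries individually with put() into the fresh instance rather than relying on the broken bulk copy.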
Re: [jira] Updated: (NUTCH-683) NUTCH-676 broke CrawlDbMerger
Posted by Raghavendra Neelekani <rk...@gmail.com>.
In the WebDB, how are URLs stored? And how will the fetch generator fetch URLs from
the CrawlDB?
--
Raghavendra Keshava Neelekani