Posted to dev@nutch.apache.org by "Nguyen Manh Tien (JIRA)" <ji...@apache.org> on 2013/12/04 17:29:35 UTC
[jira] [Created] (NUTCH-1679) UpdateDb using batchId, link may override crawled page.
Nguyen Manh Tien created NUTCH-1679:
---------------------------------------
Summary: UpdateDb using batchId, link may override crawled page.
Key: NUTCH-1679
URL: https://issues.apache.org/jira/browse/NUTCH-1679
Project: Nutch
Issue Type: Bug
Affects Versions: 2.3
Reporter: Nguyen Manh Tien
The problem occurs with the HBase store; I am not sure about other stores.
Suppose that in the first crawl cycle we crawl page A and extract an outlink B.
In the second cycle we crawl page B, which also has a link pointing back to A.
In the second updatedb (run with a batchId) only page B is loaded from the store, so A is added as a new link: updatedb does not know that A already exists in the store, and the newly created row overrides A.
As a workaround, updatedb must be run without a batchId, or additionsAllowed must be set to false.
Here is the code that creates a new page:
page = new WebPage();
schedule.initializeSchedule(url, page);
page.setStatus(CrawlStatus.STATUS_UNFETCHED);
try {
  scoringFilters.initialScore(url, page);
} catch (ScoringFilterException e) {
  page.setScore(0.0f);
}
The new page overrides the old page's status, score, fetchTime, fetchInterval, retries, and metadata[CASH_KEY].
- I think we can change this so that a new page only updates a single column, e.g. 'link'; if it really is a new page, all of the fields above can be initialized in the generator instead.
- Or we could add a checkAndPut operation to the store, so that before adding a new page we first check whether it already exists.
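To illustrate the checkAndPut idea, here is a minimal in-memory sketch (a plain Map standing in for the store, not the real HBase or Gora API; class and method names are hypothetical): the write succeeds only when the row is absent, so a link rediscovered in a later cycle cannot reset an already-crawled page.

```java
import java.util.HashMap;
import java.util.Map;

public class CheckAndPutSketch {
  // In-memory stand-in for the web page store: url -> page status.
  private final Map<String, String> store = new HashMap<>();

  // Mimics checkAndPut semantics: write the row only if it is absent.
  // Returns true when the new page was inserted, false when a page
  // already existed and was left untouched.
  public boolean putIfAbsent(String url, String status) {
    if (store.containsKey(url)) {
      return false; // page already exists: do not override it
    }
    store.put(url, status);
    return true;
  }

  public String get(String url) {
    return store.get(url);
  }

  public static void main(String[] args) {
    CheckAndPutSketch db = new CheckAndPutSketch();
    // Cycle 1: page A is crawled and stored as fetched.
    db.putIfAbsent("A", "STATUS_FETCHED");
    // Cycle 2: B links back to A; without the existence check this
    // would reset A to STATUS_UNFETCHED and lose its fetch history.
    boolean added = db.putIfAbsent("A", "STATUS_UNFETCHED");
    System.out.println(added);       // false
    System.out.println(db.get("A")); // STATUS_FETCHED
  }
}
```

With the real HBase client, the same guard could be expressed with the store's checkAndPut operation (comparing the existing value against null), which makes the check-then-write atomic rather than a separate read followed by a put.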
--
This message was sent by Atlassian JIRA
(v6.1#6144)