Posted to dev@nutch.apache.org by "Otis Gospodnetic (JIRA)" <ji...@apache.org> on 2013/12/08 14:57:35 UTC
[jira] [Commented] (NUTCH-656) DeleteDuplicates based on crawlDB only
[ https://issues.apache.org/jira/browse/NUTCH-656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13842512#comment-13842512 ]
Otis Gospodnetic commented on NUTCH-656:
----------------------------------------
This patch was for 1.x only. We've ported it to 2.x. Should we reopen this issue and add a patch, or open a new one?
> DeleteDuplicates based on crawlDB only
> ---------------------------------------
>
> Key: NUTCH-656
> URL: https://issues.apache.org/jira/browse/NUTCH-656
> Project: Nutch
> Issue Type: Wish
> Components: indexer
> Reporter: Julien Nioche
> Assignee: Julien Nioche
> Attachments: NUTCH-656.patch, NUTCH-656.v2.patch, NUTCH-656.v3.patch
>
>
> The existing dedup functionality relies on Lucene indices and can't be used when indexing is delegated to SOLR.
> I was wondering whether we could use the information from the crawlDB instead to detect URLs to delete, then do the deletions in an indexer-neutral way. As far as I understand, the crawlDB contains all the elements we need for dedup, namely:
> * URL
> * signature
> * fetch time
> * score
> In map-reduce terms we would have two different jobs:
> * job 1: read the crawlDB and compare on URLs; keep only the most recent element, and store the older ones in a file for later deletion
> * job 2: read the crawlDB with a map function that emits the signature as key and URL + fetch time + score as value; the reduce function would depend on which parameter is set (i.e. signature or score) and would output a list of URLs to delete
> This assumes that we can then use the URLs to identify documents in the indices.
> Any thoughts on this? Am I missing something?
> Julien
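The reduce step of the second job described above can be sketched in plain Java. This is only an illustration: the class and method names (DedupSketch, DedupCandidate, urlsToDelete) are hypothetical, and the real Nutch job would operate on CrawlDatum records via Hadoop MapReduce rather than on in-memory lists.

```java
import java.util.*;
import java.util.stream.*;

public class DedupSketch {

    // Hypothetical stand-in for the crawlDB fields the issue lists:
    // URL, signature, fetch time, score.
    static final class DedupCandidate {
        final String url;
        final String signature;   // content signature
        final long fetchTime;     // fetch time (epoch millis)
        final float score;

        DedupCandidate(String url, String signature, long fetchTime, float score) {
            this.url = url;
            this.signature = signature;
            this.fetchTime = fetchTime;
            this.score = score;
        }
    }

    /**
     * Emulates the reduce step: group candidates by signature (the map
     * output key), keep the best entry in each group according to the
     * supplied comparator (score here; fetch time would work the same
     * way), and return the URLs of the remaining duplicates for deletion.
     */
    static List<String> urlsToDelete(List<DedupCandidate> candidates,
                                     Comparator<DedupCandidate> keepBest) {
        Map<String, List<DedupCandidate>> bySignature = candidates.stream()
            .collect(Collectors.groupingBy(c -> c.signature));
        List<String> toDelete = new ArrayList<>();
        for (List<DedupCandidate> group : bySignature.values()) {
            group.sort(keepBest.reversed());          // best entry first
            for (int i = 1; i < group.size(); i++) {  // everything else is a duplicate
                toDelete.add(group.get(i).url);
            }
        }
        return toDelete;
    }

    public static void main(String[] args) {
        List<DedupCandidate> c = List.of(
            new DedupCandidate("http://a/1", "sigA", 100L, 1.5f),
            new DedupCandidate("http://a/2", "sigA", 200L, 0.5f),
            new DedupCandidate("http://b/1", "sigB", 150L, 1.0f));
        // Keep the highest-scoring URL per signature.
        System.out.println(urlsToDelete(c, Comparator.comparingDouble(d -> d.score)));
        // prints [http://a/2]
    }
}
```

The output list is what would then be fed to the indexer-neutral deletion step, using the URL as the document identifier, as the description assumes.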
--
This message was sent by Atlassian JIRA
(v6.1#6144)