Posted to dev@nutch.apache.org by "Lewis John McGibbney (JIRA)" <ji...@apache.org> on 2012/07/18 12:45:41 UTC

[jira] [Commented] (NUTCH-1431) Introduce link 'distance' and add configurable max distance in the generator

    [ https://issues.apache.org/jira/browse/NUTCH-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13416989#comment-13416989 ] 

Lewis John McGibbney commented on NUTCH-1431:
---------------------------------------------

Ferdy, out of curiosity, can you please provide one of your usage scenarios? I am familiar with the reasons for implementing a shortest distance/path between URLs/links, but I am really keen to understand your personal drivers for maintaining this graph structure.
                
> Introduce link 'distance' and add configurable max distance in the generator
> ----------------------------------------------------------------------------
>
>                 Key: NUTCH-1431
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1431
>             Project: Nutch
>          Issue Type: New Feature
>            Reporter: Ferdy Galema
>             Fix For: 2.1
>
>         Attachments: NUTCH-1431.patch
>
>
> Introducing a new feature that enables crawling URLs within a specific distance (shortest path) from the injected source URLs. This is where the db-updater of Nutchgora really shines: because every URL in the reducer has all of its inlinks present, it is easy to determine the shortest path to that URL. (I would not know how to cleanly implement this feature for trunk.)
> Injected URLs have distance 0. Outlink URLs on those pages have distance 1, outlinks on those pages have distance 2, and so on. Outlinks that already had a smaller distance keep that distance. Of all inlinks to a page, the smallest distance is always selected in order to maintain the shortest-path guarantee.
> The Generator now has a property 'generate.max.distance' (default -1) that specifies the maximum allowed distance of URLs selected for fetching.
> Note that this is fundamentally different from the concept of crawl 'depth'. Depth is used for crawl cycles. Distance allows crawling for an unlimited number of cycles AND always staying within a certain number of 'hops' from the injected URLs.
> I will attach a patch and commit in a few days. (It does not change crawl behaviour unless otherwise configured.) Let me know if you have comments.
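The distance rule described above can be sketched in plain Java. This is only an illustration, not the actual patch code: the method and class names (`updateDistance`, `shouldGenerate`, `DistanceSketch`) are hypothetical, and the real logic lives in the Nutchgora db-updater reducer and Generator. It assumes, as the description says, that the reducer sees all inlink distances for a URL at once.

```java
import java.util.Arrays;
import java.util.List;

public class DistanceSketch {

    // Sentinel for "no known path from an injected URL yet".
    static final int UNSET = Integer.MAX_VALUE;

    // New distance for a page: injected pages stay at 0; otherwise take the
    // minimum inlink distance plus one hop (the shortest-path guarantee).
    static int updateDistance(boolean injected, List<Integer> inlinkDistances) {
        if (injected) {
            return 0;
        }
        int min = UNSET;
        for (int d : inlinkDistances) {
            if (d != UNSET && d + 1 < min) {
                min = d + 1;
            }
        }
        return min;
    }

    // Generator-side filter mirroring generate.max.distance: -1 means
    // unlimited, otherwise only URLs within maxDistance hops are selected.
    static boolean shouldGenerate(int distance, int maxDistance) {
        if (maxDistance < 0) {
            return true;
        }
        return distance != UNSET && distance <= maxDistance;
    }

    public static void main(String[] args) {
        // Inlinks at distances 2 and 5 (one inlink still unreached):
        int d = updateDistance(false, Arrays.asList(2, 5, UNSET));
        System.out.println(d);                    // 3 (shortest path: 2 + 1)
        System.out.println(shouldGenerate(d, 2)); // false: beyond max distance
        System.out.println(shouldGenerate(d, -1)); // true: unlimited
    }
}
```

Note how this differs from crawl depth: the distance value is a property of the URL itself, re-derived from its inlinks on every update cycle, so the crawl can run indefinitely while the generator keeps selecting only URLs within the configured number of hops.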

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira