Posted to dev@nutch.apache.org by "Sami Siren (JIRA)" <ji...@apache.org> on 2006/07/18 21:51:14 UTC

[jira] Commented: (NUTCH-293) support for Crawl-delay in Robots.txt

    [ http://issues.apache.org/jira/browse/NUTCH-293?page=comments#action_12421930 ] 
            
Sami Siren commented on NUTCH-293:
----------------------------------

perhaps instead of
delay = crawlDelay > 0 ? crawlDelay : serverDelay;

we could do
delay = Math.max(crawlDelay, serverDelay);

also, the delay could be calculated only once and passed as a parameter 
to blockAddr and unblockAddr
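A minimal sketch of the suggested change (the class and field names below are illustrative, not Nutch's actual fetcher code). Note that Math.max also changes the semantics slightly: a robots.txt Crawl-delay shorter than the configured server delay can no longer reduce the politeness interval, whereas the ternary version would use the smaller crawlDelay whenever it is positive.

```java
// Hypothetical sketch, not actual Nutch code: field and method names
// are assumptions for illustration only.
public class DelaySketch {

    // Configured default inter-request delay, in milliseconds.
    static long serverDelay = 1000;

    // crawlDelay is the Crawl-delay from robots.txt in milliseconds,
    // or 0 (or negative) when the site does not set one.
    static long computeDelay(long crawlDelay) {
        // Math.max takes the larger of the two delays, so robots.txt
        // can lengthen the interval but never shorten it below the
        // configured serverDelay.
        return Math.max(crawlDelay, serverDelay);
    }

    public static void main(String[] args) {
        System.out.println(computeDelay(5000)); // Crawl-delay wins
        System.out.println(computeDelay(0));    // no Crawl-delay: serverDelay
        System.out.println(computeDelay(500));  // shorter Crawl-delay ignored
    }
}
```

With this in place, the computed delay could be passed once to the block/unblock methods instead of being recomputed at each call site.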



> support for Crawl-delay in Robots.txt
> -------------------------------------
>
>                 Key: NUTCH-293
>                 URL: http://issues.apache.org/jira/browse/NUTCH-293
>             Project: Nutch
>          Issue Type: Improvement
>          Components: fetcher
>    Affects Versions: 0.8-dev
>            Reporter: Stefan Groschupf
>            Priority: Critical
>         Attachments: crawlDelayv1.patch
>
>
> Nutch needs support for Crawl-delay as defined in robots.txt; it is not an official standard, but it is a de-facto one.
> See:
> http://help.yahoo.com/help/us/ysearch/slurp/slurp-03.html
> Webmasters are starting to block Nutch since we do not support it.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira