Posted to dev@nutch.apache.org by "Chang Fan (JIRA)" <ji...@apache.org> on 2018/09/17 16:00:00 UTC

[jira] [Created] (NUTCH-2646) CLONE - Caching of redirected robots.txt may overwrite correct robots.txt rules

Chang Fan created NUTCH-2646:
--------------------------------

             Summary: CLONE - Caching of redirected robots.txt may overwrite correct robots.txt rules
                 Key: NUTCH-2646
                 URL: https://issues.apache.org/jira/browse/NUTCH-2646
             Project: Nutch
          Issue Type: Bug
          Components: fetcher, robots
    Affects Versions: 2.3.1, 1.14
            Reporter: Chang Fan
            Assignee: Sebastian Nagel
             Fix For: 2.4, 1.15


Redirected robots.txt rules are also cached for the redirect target host. This may cause the correct robots.txt rules for that host to never be fetched. E.g., http://wyomingtheband.com/robots.txt redirects to https://www.facebook.com/wyomingtheband/robots.txt. Because the fetch fails with a 404, bots are allowed to crawl wyomingtheband.com. These "allow all" rules are erroneously also cached for the redirect target host www.facebook.com, which is unambiguous in its [robots.txt|https://www.facebook.com/robots.txt] rules and does not allow crawling.

Nutch should cache redirected robots.txt rules for the target host only if the path part (if in doubt, including the query) of the redirect target URL is exactly {{/robots.txt}}.
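The proposed condition could be sketched as follows. This is a minimal illustration, not the actual Nutch patch; the class and method names are hypothetical, and it assumes the redirect target URL is available as a string:

```java
import java.net.MalformedURLException;
import java.net.URL;

/** Hypothetical helper illustrating the proposed caching condition. */
public class RobotsRedirectCheck {

    /**
     * Returns true only if the URL's path is exactly "/robots.txt"
     * and the URL carries no query part, i.e. it is safe to cache the
     * fetched rules for the redirect target host as well.
     */
    public static boolean isCacheableRobotsUrl(String urlString) {
        try {
            URL url = new URL(urlString);
            return "/robots.txt".equals(url.getPath()) && url.getQuery() == null;
        } catch (MalformedURLException e) {
            // Unparseable URL: do not cache for the target host.
            return false;
        }
    }

    public static void main(String[] args) {
        // Redirect target from the example above: rules must NOT be cached
        // for www.facebook.com, because the path is not exactly /robots.txt.
        System.out.println(isCacheableRobotsUrl(
            "https://www.facebook.com/wyomingtheband/robots.txt")); // false
        // A canonical robots.txt location: caching for this host is fine.
        System.out.println(isCacheableRobotsUrl(
            "https://www.facebook.com/robots.txt")); // true
    }
}
```

With this check in place, the rules fetched via the wyomingtheband.com redirect would not be stored under www.facebook.com, so Facebook's own robots.txt rules would still be fetched and honored.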



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)