Posted to infrastructure-issues@apache.org by "Brett Porter (JIRA)" <ji...@apache.org> on 2010/08/09 16:38:19 UTC
[jira] Reopened: (INFRA-1343) setup robots.txt and/or other access rules to prevent bots from crawling Continuum pages
[ https://issues.apache.org/jira/browse/INFRA-1343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Brett Porter reopened INFRA-1343:
---------------------------------
We need to dig this up / recreate it (and apply it to Archiva too). Googlebot and Slurp hit some problematic pages simultaneously yesterday and spiked the load.
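Beyond robots.txt, "other access rules" could mean blocking known crawlers at the web server. A minimal sketch for Apache httpd, assuming mod_rewrite is available; the bot names and the /continuum/ path are illustrative assumptions, not taken from the ticket:

```apache
# Hypothetical httpd config: return 403 to named crawlers on build pages.
# Bot names and path are assumptions for illustration only.
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_USER_AGENT} (Googlebot|Slurp) [NC]
  RewriteRule ^/continuum/ - [F]
</IfModule>
```

Unlike robots.txt, this enforces the rule server-side rather than relying on crawlers to behave.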
> setup robots.txt and/or other access rules to prevent bots from crawling Continuum pages
> -----------------------------------------------------------------------------------------
>
> Key: INFRA-1343
> URL: https://issues.apache.org/jira/browse/INFRA-1343
> Project: Infrastructure
> Issue Type: Task
> Security Level: public(Regular issues)
> Components: Continuum
> Reporter: Brett Porter
>
> We don't need search engines crawling the build pages (especially since a crawler can navigate its way all the way through a working copy). They are presumably picking up the links from mails sent to the mailing lists.
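The rule requested above could be as simple as the following robots.txt served from the site root. This is a minimal sketch; the /continuum/ path is an assumption for illustration, and Crawl-delay is a non-standard directive honored by some crawlers (e.g. Slurp) but not all:

```
# Hypothetical robots.txt: keep all compliant crawlers off the build pages.
# The path below is illustrative, not taken from the ticket.
User-agent: *
Disallow: /continuum/
Crawl-delay: 10
```

Note that robots.txt is advisory: well-behaved bots honor it, but it does not block misbehaving clients, which is why server-side access rules are mentioned as an alternative.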
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.