Posted to dev@nutch.apache.org by "Riyaz Shaik (JIRA)" <ji...@apache.org> on 2014/06/12 09:58:01 UTC
[jira] [Commented] (NUTCH-1614) Plugin to exclude URLs matching
regex list from indexing - to enable crawl but do not index
[ https://issues.apache.org/jira/browse/NUTCH-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028919#comment-14028919 ]
Riyaz Shaik commented on NUTCH-1614:
------------------------------------
I implemented a similar kind of feature for crawling our sites about a year ago. Having come across this ticket, I thought I would share the implementation approach (it is not a plugin approach like the existing filters/normalizers).
I created a util class for our customization that reads the different types of regex patterns (include and exclude) that Nutch supports.
(on) Nutch version: 2.1
* org.apache.nutch.util.RegexUtil (source code attached)
Added the following changes to the IndexerJob class:
* org.apache.nutch.indexer.IndexerJob (attached the source code)
code snippet:
{code}
package org.apache.nutch.indexer;
....
import org.apache.nutch.util.RegexUtil;
import org.apache.nutch.util.TableUtil;

public abstract class IndexerJob extends NutchTool implements Tool {

  public static final Logger LOG = LoggerFactory.getLogger(IndexerJob.class);

  public static final String INDEXING_EXCLUDE_URL_PATTERNS_FILE = "indexing.exclude.url.patterns.file";

  public void setup(Context context) throws IOException {
    .....
    String regexPatternsFileName = conf.get(INDEXING_EXCLUDE_URL_PATTERNS_FILE);
    if (regexPatternsFileName != null) {
      LOG.info("Loading indexing exclude patterns from the Nutch configuration:");
      RegexUtil.loadRegexPatterns(conf.getConfResourceAsReader(regexPatternsFileName));
    }
  }

  public void map(String key, WebPage page, Context context)
      throws IOException, InterruptedException {
    ParseStatus pstatus = page.getParseStatus();
    if (pstatus == null || !ParseStatusUtils.isSuccess(pstatus)
        || pstatus.getMinorCode() == ParseStatusCodes.SUCCESS_REDIRECT) {
      return; // filter urls not parsed
    }

    // ===== begin added code: skip URLs matching the indexing exclude patterns =====
    String pageUrl = TableUtil.unreverseUrl(key);
    if (RegexUtil.findMatch(pageUrl)) {
      LOG.info("Skipping the url: " + pageUrl + " from indexing; matched the indexing exclude url patterns.");
      return;
    }
    // ===== end added code =====
    .......
    .....
{code}
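The attached RegexUtil source is not reproduced in this comment, but from the way it is used above (loadRegexPatterns consuming a config resource Reader, findMatch testing a URL) a minimal sketch could look like the following. The method names come from the snippet above; the parsing details (one regex per line, "#" comment lines, blank lines ignored, whole-URL matching) are assumptions on my part:
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

/** Hypothetical sketch of the RegexUtil helper used by IndexerJob above. */
public class RegexUtil {

  private static final List<Pattern> EXCLUDE_PATTERNS = new ArrayList<Pattern>();

  /**
   * Reads one regex per line from the given reader; lines starting with '#'
   * are treated as comments and blank lines are skipped (assumed format).
   */
  public static void loadRegexPatterns(Reader reader) throws IOException {
    BufferedReader br = new BufferedReader(reader);
    String line;
    while ((line = br.readLine()) != null) {
      line = line.trim();
      if (line.isEmpty() || line.startsWith("#")) {
        continue;
      }
      EXCLUDE_PATTERNS.add(Pattern.compile(line));
    }
  }

  /** Returns true if the URL matches any loaded exclude pattern in full. */
  public static boolean findMatch(String url) {
    for (Pattern p : EXCLUDE_PATTERNS) {
      if (p.matcher(url).matches()) {
        return true;
      }
    }
    return false;
  }
}
{code}
Note that matches() requires the pattern to cover the entire URL, which is consistent with the whole-URL matching described in the issue; a contains-style check would use find() instead.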
* Add the following property to *??nutch-site.xml??*
{code}
<property>
  <name>indexing.exclude.url.patterns.file</name>
  <value>crawl-donot-index-patterns.txt</value>
</property>
{code}
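For reference, a crawl-donot-index-patterns.txt on the classpath might then contain entries like the following. These patterns are purely illustrative; the accepted format depends on how RegexUtil parses the file:
{code}
# pages crawled only to discover links, not indexed themselves
.*/browse/.*
.*\?page=[0-9]+.*
{code}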
> Plugin to exclude URLs matching regex list from indexing - to enable crawl but do not index
> -------------------------------------------------------------------------------------------
>
> Key: NUTCH-1614
> URL: https://issues.apache.org/jira/browse/NUTCH-1614
> Project: Nutch
> Issue Type: Improvement
> Components: indexer
> Affects Versions: 2.2.1
> Reporter: Brian
> Priority: Minor
> Labels: plugin
> Attachments: NUTCH-1614.patch
>
>
> Some pages we need to crawl (such as some main pages and different views of a main page) to get all the other pages, but we don't want to index those pages themselves. Therefore we cannot use the url filter approach.
> This plugin uses a file containing regex strings (see included sample file). If one of the regex strings matches an entire URL, that URL will be excluded from indexing.
> The file to use is specified by the following property in nutch-site.xml:
> <property>
>   <name>indexer.url.filter.exclude.regex.file</name>
>   <value>regex-indexer-exclude-urls.txt</value>
>   <description>
>     Holds the file name containing the regex strings. Any URL matching one of these strings will be excluded from indexing.
>     "#" indicates a comment line and will be ignored.
>   </description>
> </property>
--
This message was sent by Atlassian JIRA
(v6.2#6252)