Posted to dev@nutch.apache.org by "Julien Nioche (JIRA)" <ji...@apache.org> on 2014/05/31 20:27:01 UTC

[jira] [Resolved] (NUTCH-1790) solrdedup causes OutOfMemoryError in Solr

     [ https://issues.apache.org/jira/browse/NUTCH-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julien Nioche resolved NUTCH-1790.
----------------------------------

    Resolution: Not a Problem

Hi Greg, 

solrdedup has already been removed from the trunk and replaced with a more generic and robust solution, which will also be ported to 2.x at some point.

Please give the dedup class from trunk a try and see if it works as you expect. See https://issues.apache.org/jira/browse/NUTCH-656 for background.
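
For reference, the trunk job operates on the crawldb instead of fetching every document back from Solr, so the memory issue described below should not apply. Roughly, and assuming a crawldb at crawl/crawldb (check bin/nutch in your checkout for the exact usage):

    bin/nutch dedup crawl/crawldb   # mark duplicate entries in the crawldb
    bin/nutch clean crawl/crawldb   # have the configured index writer(s) delete them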

> solrdedup causes OutOfMemoryError in Solr
> -----------------------------------------
>
>                 Key: NUTCH-1790
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1790
>             Project: Nutch
>          Issue Type: Bug
>          Components: indexer
>    Affects Versions: 1.7, 2.2
>         Environment: Nutch 1.7 in local mode.
> Solr 4.7 with 2M docs under Jetty with 1GB RAM.
>            Reporter: Greg Padiasek
>         Attachments: SolrDeleteDuplicates.patch
>
>
> Nutch 1.7 and 2.2.1 use Hadoop 1.2. In local mode this Hadoop version overrides the "mapred.map.tasks" value set in mapred-site.xml and always sets it to 1. As a result Nutch issues a single query that reads ALL Solr documents in one giant response, which in turn causes Solr to consume all available RAM when the number of documents is high. I found this issue with Solr running with 2M+ docs and 1GB of JVM RAM, about 20% of which is used under normal conditions. When running "solrdedup", memory usage exceeds the available RAM, Solr throws an OutOfMemoryError and the dedup job fails.
> I think this could be solved in one of two ways: either by upgrading Nutch to a later version of the Hadoop libraries (which hopefully no longer hard-codes the "mapred.map.tasks" value), or by changing the SolrDeleteDuplicates class to "stream" documents in batches. The latter would make Nutch less dependent on the Hadoop version and was my choice. Attached is a patch that implements batch reading in local mode with a user-defined batch size. The "streaming" approach is potentially also applicable in distributed mode.
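
The following is not the attached patch, only a rough SolrJ 4.x sketch of the batched ("streaming") reading described in the last paragraph above. The class name, field list and BATCH_SIZE constant are placeholders; the actual patch makes the batch size a user-defined setting.

    // Sketch only: page through the index instead of requesting every document at once.
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocumentList;

    public class BatchedSolrReader {

      private static final int BATCH_SIZE = 1000; // placeholder for the user-defined batch size

      public static void readAll(String solrUrl) throws Exception {
        SolrServer server = new HttpSolrServer(solrUrl);
        SolrQuery query = new SolrQuery("*:*");
        query.setFields("id", "digest"); // only the fields dedup actually needs
        query.setRows(BATCH_SIZE);       // never ask Solr for the whole index in one response

        long fetched = 0;
        long numFound = Long.MAX_VALUE;
        while (fetched < numFound) {
          query.setStart((int) fetched);
          QueryResponse rsp = server.query(query);
          SolrDocumentList page = rsp.getResults();
          numFound = page.getNumFound(); // total hit count, learned from the first page
          if (page.isEmpty()) break;     // safety guard against an empty page
          // ... feed this page of documents to the dedup logic ...
          fetched += page.size();
        }
        server.shutdown();
      }
    }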



--
This message was sent by Atlassian JIRA
(v6.2#6252)