Posted to user@nutch.apache.org by Dishanker Raj <di...@adm.uib.no> on 2014/01/22 15:17:49 UTC

Repeated crawling with Solr index deduplication fails.

Hello!

Could someone explain why the following appears in the Nutch logs? I am running repeated crawls of a site with the included 'crawl' script, and Nutch issues deduplication requests to our SolrCloud cluster as it should, but the dedup step fails. Please see the log excerpt below, which shows: java.lang.Exception: java.lang.IndexOutOfBoundsException.
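
For context, the dedup step that the crawl script triggers corresponds to the SolrDeleteDuplicates tool visible in the trace below. A minimal sketch of driving it directly through ToolRunner follows; the Solr URL is only a placeholder for our collection, and the exact argument handling may differ between Nutch versions, so treat this as an illustration rather than a verified invocation:

    import org.apache.hadoop.util.ToolRunner;
    import org.apache.nutch.indexer.solr.SolrDeleteDuplicates;
    import org.apache.nutch.util.NutchConfiguration;

    public class RunSolrDedup {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at the SolrCloud collection being indexed.
            String solrUrl = "http://localhost:8983/solr/collection1";

            // SolrDeleteDuplicates is run as a Hadoop Tool, which is the call path
            // visible at the bottom of the stack trace (main -> ToolRunner.run -> run).
            int res = ToolRunner.run(NutchConfiguration.create(),
                    new SolrDeleteDuplicates(),
                    new String[] { solrUrl });
            System.exit(res);
        }
    }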

Thanks!

…………
14/01/21 19:41:58 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/01/21 19:41:59 INFO mapred.JobClient: Running job: job_local1021345478_0001
14/01/21 19:41:59 INFO mapred.LocalJobRunner: Waiting for map tasks
14/01/21 19:41:59 INFO mapred.LocalJobRunner: Starting task: attempt_local1021345478_0001_m_000000_0
14/01/21 19:41:59 INFO util.ProcessTree: setsid exited with exit code 0
14/01/21 19:41:59 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@23729adf
14/01/21 19:41:59 INFO mapred.MapTask: Processing split: org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputSplit@805b8b12
14/01/21 19:42:00 INFO mapred.JobClient:  map 0% reduce 0%
14/01/21 19:42:15 INFO mapred.MapTask: numReduceTasks: 1
14/01/21 19:42:15 INFO mapred.MapTask: io.sort.mb = 100
14/01/21 19:42:16 INFO mapred.MapTask: data buffer = 79691776/99614720
14/01/21 19:42:16 INFO mapred.MapTask: record buffer = 262144/327680
14/01/21 19:42:16 INFO mapred.LocalJobRunner: Map task executor complete.
14/01/21 19:42:16 WARN mapred.FileOutputCommitter: Output path is null in cleanup
14/01/21 19:42:16 WARN mapred.LocalJobRunner: job_local1021345478_0001
java.lang.Exception: java.lang.IndexOutOfBoundsException: Index: 60651, Size: 60651
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.IndexOutOfBoundsException: Index: 60651, Size: 60651
        at java.util.ArrayList.rangeCheck(ArrayList.java:646)
        at java.util.ArrayList.get(ArrayList.java:422)
        at org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:268)
        at org.apache.nutch.indexer.solr.SolrDeleteDuplicates$SolrInputFormat$1.next(SolrDeleteDuplicates.java:241)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:230)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:210)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
        at java.util.concurrent.FutureTask.run(FutureTask.java:273)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1156)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:626)
        at java.lang.Thread.run(Thread.java:804)
14/01/21 19:42:17 INFO mapred.JobClient: Job complete: job_local1021345478_0001
14/01/21 19:42:17 INFO mapred.JobClient: Counters: 0
14/01/21 19:42:17 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
        at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.dedup(SolrDeleteDuplicates.java:373)
        at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.run(SolrDeleteDuplicates.java:390)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.main(SolrDeleteDuplicates.java:395)
…………
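
For what it's worth, the message "Index: 60651, Size: 60651" is the classic one-past-the-end failure: get() was called with an index equal to the list's size. The following self-contained sketch is not the Nutch source, just an illustration of one pattern that produces exactly this message, namely a loop driven by a document count that no longer matches the number of documents actually returned:

    import java.util.ArrayList;
    import java.util.List;

    public class IndexPastEndSketch {
        public static void main(String[] args) {
            // Suppose a count of 60652 was recorded up front (hypothetical figure),
            // but the list actually fetched later holds only 60651 documents.
            int expectedDocs = 60652;
            List<String> returnedDocs = new ArrayList<>();
            for (int i = 0; i < 60651; i++) {
                returnedDocs.add("doc-" + i);
            }

            // Driving the loop by the stale count instead of returnedDocs.size()
            // throws java.lang.IndexOutOfBoundsException: Index: 60651, Size: 60651,
            // the same index-equals-size message as in the log above.
            for (int i = 0; i < expectedDocs; i++) {
                String doc = returnedDocs.get(i);
            }
        }
    }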


Sincerely,
Dishanker Raj

PGP Public Key: http://goo.gl/YFalnt