Posted to user@nutch.apache.org by Micah Vivion <mi...@gmail.com> on 2007/08/01 04:09:35 UTC

Re: Why does Nutch crawl keep on throwing an exception?

Greetings,

Here is the hadoop.log output from my crash - any ideas?

2007-07-31 19:06:50,702 INFO  indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2007-07-31 19:06:50,799 INFO  indexer.Indexer - Optimizing index.
2007-07-31 19:06:51,497 INFO  indexer.Indexer - Indexer: done
2007-07-31 19:06:51,498 INFO  indexer.DeleteDuplicates - Dedup: starting
2007-07-31 19:06:51,510 INFO  indexer.DeleteDuplicates - Dedup: adding indexes in: /var/webindex/data/indexes
2007-07-31 19:06:51,733 WARN  mapred.LocalJobRunner - job_2xsg2o
java.lang.ArrayIndexOutOfBoundsException: -1
         at org.apache.lucene.index.MultiReader.isDeleted(MultiReader.java:113)
         at org.apache.nutch.indexer.DeleteDuplicates$InputFormat$DDRecordReader.next(DeleteDuplicates.java:176)
         at org.apache.hadoop.mapred.MapTask$1.next(MapTask.java:157)
         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:46)
         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:175)
         at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:126)

On Jul 30, 2007, at 2:02 PM, DES wrote:

> Look in logs/hadoop.log for the actual reason for this exception. The
> console message is not really helpful.
>
> On 7/30/07, Micah Vivion <mi...@gmail.com> wrote:
>>>> Exception in thread "main" java.io.IOException: Job failed!
>>>>          at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604)
>>>>          at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439)
>>>>          at org.apache.nutch.crawl.Crawl.main(Crawl.java:135)


Re: Why does Nutch crawl keep on throwing an exception?

Posted by DES <sa...@gmail.com>.
Hi,

This exception shows that there are no IndexReaders for your index,
which happens when the index is empty.
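
For reference, here is a simplified, self-contained rendering of the
doc-to-reader lookup behind MultiReader.isDeleted() (adapted from the
Lucene 2.x sources, so names and details may differ): with zero
sub-readers the binary search never runs and falls through to -1,
which is then used as an array index.

public class MultiReaderSketch {

    // starts[i] is the first document id served by sub-reader i;
    // simplified from Lucene 2.x MultiReader.readerIndex(int).
    static int readerIndex(int n, int[] starts, int numSubReaders) {
        int lo = 0;
        int hi = numSubReaders - 1; // -1 when there are no sub-readers
        while (hi >= lo) {
            int mid = (lo + hi) >>> 1;
            if (n < starts[mid]) hi = mid - 1;
            else if (n > starts[mid]) lo = mid + 1;
            else return mid;
        }
        return hi; // stays -1 for an empty index
    }

    public static void main(String[] args) {
        // No part indexes -> no sub-readers -> the lookup yields -1.
        int i = readerIndex(0, new int[0], 0);
        System.out.println(i); // prints -1
        // MultiReader then accesses subReaders[i], raising
        // java.lang.ArrayIndexOutOfBoundsException: -1 as in your log.
    }
}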
Look in your crawl/indexes folder: maybe you skipped all documents
while indexing, or there is some exception in the Indexer or one of
its plugins.
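
If you want to check quickly, here is a small sketch (assuming the
Lucene 2.x API that Nutch shipped with at the time; the path is taken
from your log, adjust as needed) that prints how many documents each
part index holds:

import java.io.File;

import org.apache.lucene.index.IndexReader;

public class CheckIndexes {
    public static void main(String[] args) throws Exception {
        // Path from your hadoop.log; each subdirectory is assumed
        // to be one part index written by the Indexer.
        File indexesDir = new File("/var/webindex/data/indexes");
        File[] parts = indexesDir.listFiles();
        if (parts == null || parts.length == 0) {
            System.out.println("no part indexes found - nothing to dedup");
            return;
        }
        for (File part : parts) {
            IndexReader reader = IndexReader.open(part.getPath());
            System.out.println(part.getName() + ": " + reader.numDocs() + " docs");
            reader.close();
        }
    }
}

If every part index reports 0 documents (or the folder is empty), the
indexing step produced nothing and dedup has nothing to open.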

des
