Posted to user@nutch.apache.org by Lincoln Ritter <li...@lincolnritter.com> on 2008/06/11 01:48:55 UTC

'bin/nutch crawl' failing during indexing - "no segments* file found" (Plus some other questions)

Greetings,

First off, I am relatively new to Nutch and Hadoop so please forgive
my ignorance.  I'd like to use nutch/hadoop for some large scale data
collection and processing.  Since I'd like to use hadoop for some
general distributed processing, I'm trying to use nutch as a job.
After some communication with Michael, I applied the patch in
NUTCH-634: https://issues.apache.org/jira/browse/NUTCH-634.  I built
against the Hadoop 0.17 binaries and followed
http://wiki.apache.org/hadoop/GettingStartedWithHadoop to get Hadoop
up and running. I also tweaked the nutch script (bin/nutch) to invoke
'hadoop jar', passing it the Nutch job jar and the appropriate class:

# Point at the Nutch job jar and hand everything over to 'hadoop jar'.
NUTCH_JOB="${HADOOP_HOME}/nutch-1.0-dev.job"
HADOOP_COMMAND="${HADOOP_HOME}/bin/hadoop"
# $CLASS is the Nutch class that bin/nutch resolves from the subcommand.
exec "$HADOOP_COMMAND" jar "${NUTCH_JOB}" "$CLASS" "$@"


So now I am running nutch revision 663092 with hadoop 0.17!

But here's the rub: when I use 'bin/nutch crawl' I get a crash during
what I believe is the indexing stage.  The error is:

"java.io.FileNotFoundException: no segments* file found in
org.apache.nutch.indexer.FsDirectory@hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631:
files: _logs content crawl_fetch crawl_generate crawl_parse parse_data
parse_text"

(Full console output is given below)

However, as far as I can tell, these files/directories do exist in HDFS:

$ bin/hadoop fs -ls /user/lritter/crawl/segments/20080610161631
Found 7 items
/user/lritter/crawl/segments/20080610161631/_logs	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/content	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/crawl_fetch	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/crawl_generate	<dir>	2008-06-10 16:16	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/crawl_parse	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/parse_data	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
/user/lritter/crawl/segments/20080610161631/parse_text	<dir>	2008-06-10 16:17	rwxr-xr-x	lritter	supergroup
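
What strikes me is that the IndexMerger step logs "Adding
hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631", i.e. it
seems to be treating the segment directory itself as a Lucene index. I
suppose the next thing to check is what the dedup step actually left under
the indexes directory it reported; something like this, though I haven't
dug into it yet:

$ bin/hadoop fs -ls /user/lritter/crawl/indexes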

Is there something I am missing here?  Really, I don't think I need to
index everything I've fetched at this point, but I'd like to be able to
verify that the crawler is working.  It would give me much greater
confidence if I could make this part work before moving on to more
complicated things.

So, the questions I have are:
 - Any ideas on what might be causing this error?
 - Given that I'll probably be running M/R jobs over the fetched content
and then generating new docs and indexes from that, is there a good way to
verify the operation of the crawler? (A couple of commands I'm guessing at
are sketched below.)
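
For the second question, here's roughly what I was planning to try, just as
a sketch; I haven't confirmed these tools behave under the 'hadoop jar'
setup above, and the output path is an arbitrary one I made up:

# Summarize the crawldb (URL counts by fetch status), via CrawlDbReader:
bin/nutch readdb /user/lritter/crawl/crawldb -stats

# Dump a segment's fetched/parsed content to text for inspection, via
# SegmentReader ('/user/lritter/segdump' is just a placeholder output dir):
bin/nutch readseg -dump /user/lritter/crawl/segments/20080610161631 /user/lritter/segdump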

Thanks
-lincoln

$ ./bin/nutch crawl /user/lritter/urls -dir /user/lritter/crawl -depth 1
08/06/10 16:16:00 INFO crawl.Crawl: crawl started in: /user/lritter/crawl
08/06/10 16:16:00 INFO crawl.Crawl: rootUrlDir = /user/lritter/urls
08/06/10 16:16:00 INFO crawl.Crawl: threads = 10
08/06/10 16:16:00 INFO crawl.Crawl: depth = 1
08/06/10 16:16:00 INFO crawl.Injector: Injector: starting
08/06/10 16:16:00 INFO crawl.Injector: Injector: crawlDb:
/user/lritter/crawl/crawldb
08/06/10 16:16:00 INFO crawl.Injector: Injector: urlDir: /user/lritter/urls
08/06/10 16:16:00 INFO crawl.Injector: Injector: Converting injected
urls to crawl db entries.
08/06/10 16:16:02 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:16:03 INFO mapred.JobClient: Running job: job_200806101251_0091
08/06/10 16:16:04 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:16:08 INFO mapred.JobClient:  map 50% reduce 0%
08/06/10 16:16:10 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:16:16 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:16:17 INFO mapred.JobClient: Job complete: job_200806101251_0091
08/06/10 16:16:17 INFO mapred.JobClient: Counters: 16
08/06/10 16:16:17 INFO mapred.JobClient:   Job Counters
08/06/10 16:16:17 INFO mapred.JobClient:     Launched map tasks=2
08/06/10 16:16:17 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:16:17 INFO mapred.JobClient:     Data-local map tasks=2
08/06/10 16:16:17 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:16:17 INFO mapred.JobClient:     Map input records=1
08/06/10 16:16:17 INFO mapred.JobClient:     Map output records=1
08/06/10 16:16:17 INFO mapred.JobClient:     Map input bytes=25
08/06/10 16:16:17 INFO mapred.JobClient:     Map output bytes=55
08/06/10 16:16:17 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:16:17 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:16:17 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:16:17 INFO mapred.JobClient:     Reduce input records=1
08/06/10 16:16:17 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:16:17 INFO mapred.JobClient:   File Systems
08/06/10 16:16:17 INFO mapred.JobClient:     Local bytes read=181
08/06/10 16:16:17 INFO mapred.JobClient:     Local bytes written=496
08/06/10 16:16:17 INFO mapred.JobClient:     HDFS bytes read=39
08/06/10 16:16:17 INFO mapred.JobClient:     HDFS bytes written=149
08/06/10 16:16:17 INFO crawl.Injector: Injector: Merging injected urls
into crawl db.
08/06/10 16:16:18 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:16:19 INFO mapred.JobClient: Running job: job_200806101251_0092
08/06/10 16:16:20 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:16:24 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:16:29 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:16:30 INFO mapred.JobClient: Job complete: job_200806101251_0092
08/06/10 16:16:30 INFO mapred.JobClient: Counters: 16
08/06/10 16:16:30 INFO mapred.JobClient:   Job Counters
08/06/10 16:16:30 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:16:30 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:16:30 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:16:30 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:16:30 INFO mapred.JobClient:     Map input records=1
08/06/10 16:16:30 INFO mapred.JobClient:     Map output records=1
08/06/10 16:16:30 INFO mapred.JobClient:     Map input bytes=63
08/06/10 16:16:30 INFO mapred.JobClient:     Map output bytes=55
08/06/10 16:16:30 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:16:30 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:16:30 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:16:30 INFO mapred.JobClient:     Reduce input records=1
08/06/10 16:16:30 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:16:30 INFO mapred.JobClient:   File Systems
08/06/10 16:16:30 INFO mapred.JobClient:     Local bytes read=181
08/06/10 16:16:30 INFO mapred.JobClient:     Local bytes written=370
08/06/10 16:16:30 INFO mapred.JobClient:     HDFS bytes read=149
08/06/10 16:16:30 INFO mapred.JobClient:     HDFS bytes written=367
08/06/10 16:16:30 INFO crawl.Injector: Injector: done
08/06/10 16:16:31 INFO crawl.Generator: Generator: Selecting
best-scoring urls due for fetch.
08/06/10 16:16:31 INFO crawl.Generator: Generator: starting
08/06/10 16:16:31 INFO crawl.Generator: Generator: segment:
/user/lritter/crawl/segments/20080610161631
08/06/10 16:16:31 INFO crawl.Generator: Generator: filtering: true
08/06/10 16:16:32 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:16:33 INFO mapred.JobClient: Running job: job_200806101251_0093
08/06/10 16:16:34 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:16:38 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:16:43 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:16:44 INFO mapred.JobClient: Job complete: job_200806101251_0093
08/06/10 16:16:44 INFO mapred.JobClient: Counters: 16
08/06/10 16:16:44 INFO mapred.JobClient:   Job Counters
08/06/10 16:16:44 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:16:44 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:16:44 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:16:44 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:16:44 INFO mapred.JobClient:     Map input records=1
08/06/10 16:16:44 INFO mapred.JobClient:     Map output records=1
08/06/10 16:16:44 INFO mapred.JobClient:     Map input bytes=63
08/06/10 16:16:44 INFO mapred.JobClient:     Map output bytes=80
08/06/10 16:16:44 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:16:44 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:16:44 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:16:44 INFO mapred.JobClient:     Reduce input records=1
08/06/10 16:16:44 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:16:44 INFO mapred.JobClient:   File Systems
08/06/10 16:16:44 INFO mapred.JobClient:     Local bytes read=228
08/06/10 16:16:44 INFO mapred.JobClient:     Local bytes written=464
08/06/10 16:16:44 INFO mapred.JobClient:     HDFS bytes read=149
08/06/10 16:16:44 INFO mapred.JobClient:     HDFS bytes written=196
08/06/10 16:16:44 INFO crawl.Generator: Generator: Partitioning
selected urls by host, for politeness.
08/06/10 16:16:45 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:16:46 INFO mapred.JobClient: Running job: job_200806101251_0094
08/06/10 16:16:47 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:16:52 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:16:58 INFO mapred.JobClient:  map 100% reduce 50%
08/06/10 16:17:00 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:17:01 INFO mapred.JobClient: Job complete: job_200806101251_0094
08/06/10 16:17:01 INFO mapred.JobClient: Counters: 16
08/06/10 16:17:01 INFO mapred.JobClient:   Job Counters
08/06/10 16:17:01 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:17:01 INFO mapred.JobClient:     Launched reduce tasks=2
08/06/10 16:17:01 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:17:01 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:17:01 INFO mapred.JobClient:     Map input records=1
08/06/10 16:17:01 INFO mapred.JobClient:     Map output records=1
08/06/10 16:17:01 INFO mapred.JobClient:     Map input bytes=88
08/06/10 16:17:01 INFO mapred.JobClient:     Map output bytes=102
08/06/10 16:17:01 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:17:01 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:17:01 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:17:01 INFO mapred.JobClient:     Reduce input records=1
08/06/10 16:17:01 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:17:01 INFO mapred.JobClient:   File Systems
08/06/10 16:17:01 INFO mapred.JobClient:     Local bytes read=372
08/06/10 16:17:01 INFO mapred.JobClient:     Local bytes written=736
08/06/10 16:17:01 INFO mapred.JobClient:     HDFS bytes read=196
08/06/10 16:17:01 INFO mapred.JobClient:     HDFS bytes written=256
08/06/10 16:17:01 INFO crawl.Generator: Generator: done.
08/06/10 16:17:01 INFO fetcher.Fetcher: Fetcher: starting
08/06/10 16:17:01 INFO fetcher.Fetcher: Fetcher: segment:
/user/lritter/crawl/segments/20080610161631
08/06/10 16:17:02 INFO mapred.FileInputFormat: Total input paths to process : 2
08/06/10 16:17:03 INFO mapred.JobClient: Running job: job_200806101251_0095
08/06/10 16:17:04 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:17:11 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:17:15 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:17:16 INFO mapred.JobClient: Job complete: job_200806101251_0095
08/06/10 16:17:16 INFO mapred.JobClient: Counters: 15
08/06/10 16:17:16 INFO mapred.JobClient:   Job Counters
08/06/10 16:17:16 INFO mapred.JobClient:     Launched map tasks=2
08/06/10 16:17:16 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:17:16 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:17:16 INFO mapred.JobClient:     Map input records=1
08/06/10 16:17:16 INFO mapred.JobClient:     Map output records=1
08/06/10 16:17:16 INFO mapred.JobClient:     Map input bytes=84
08/06/10 16:17:16 INFO mapred.JobClient:     Map output bytes=77
08/06/10 16:17:16 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:17:16 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:17:16 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:17:16 INFO mapred.JobClient:     Reduce input records=1
08/06/10 16:17:16 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:17:16 INFO mapred.JobClient:   File Systems
08/06/10 16:17:16 INFO mapred.JobClient:     Local bytes read=206
08/06/10 16:17:16 INFO mapred.JobClient:     Local bytes written=549
08/06/10 16:17:16 INFO mapred.JobClient:     HDFS bytes read=256
08/06/10 16:17:16 INFO mapred.JobClient:     HDFS bytes written=1324
08/06/10 16:17:16 INFO fetcher.Fetcher: Fetcher: done
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: starting
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: db:
/user/lritter/crawl/crawldb
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: segments:
[/user/lritter/crawl/segments/20080610161631]
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: additions allowed: true
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: URL normalizing: true
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: URL filtering: true
08/06/10 16:17:16 INFO crawl.CrawlDb: CrawlDb update: Merging segment
data into db.
08/06/10 16:17:17 INFO mapred.FileInputFormat: Total input paths to process : 3
08/06/10 16:17:18 INFO mapred.JobClient: Running job: job_200806101251_0096
08/06/10 16:17:19 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:17:26 INFO mapred.JobClient:  map 66% reduce 0%
08/06/10 16:17:28 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:17:37 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:17:38 INFO mapred.JobClient: Job complete: job_200806101251_0096
08/06/10 16:17:38 INFO mapred.JobClient: Counters: 16
08/06/10 16:17:38 INFO mapred.JobClient:   Job Counters
08/06/10 16:17:38 INFO mapred.JobClient:     Launched map tasks=3
08/06/10 16:17:38 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:17:38 INFO mapred.JobClient:     Data-local map tasks=3
08/06/10 16:17:38 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:17:38 INFO mapred.JobClient:     Map input records=2
08/06/10 16:17:38 INFO mapred.JobClient:     Map output records=2
08/06/10 16:17:38 INFO mapred.JobClient:     Map input bytes=139
08/06/10 16:17:38 INFO mapred.JobClient:     Map output bytes=131
08/06/10 16:17:38 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:17:38 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:17:38 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:17:38 INFO mapred.JobClient:     Reduce input records=2
08/06/10 16:17:38 INFO mapred.JobClient:     Reduce output records=1
08/06/10 16:17:38 INFO mapred.JobClient:   File Systems
08/06/10 16:17:38 INFO mapred.JobClient:     Local bytes read=265
08/06/10 16:17:38 INFO mapred.JobClient:     Local bytes written=790
08/06/10 16:17:38 INFO mapred.JobClient:     HDFS bytes read=483
08/06/10 16:17:38 INFO mapred.JobClient:     HDFS bytes written=367
08/06/10 16:17:38 INFO crawl.CrawlDb: CrawlDb update: done
08/06/10 16:17:38 INFO crawl.LinkDb: LinkDb: starting
08/06/10 16:17:38 INFO crawl.LinkDb: LinkDb: linkdb: /user/lritter/crawl/linkdb
08/06/10 16:17:38 INFO crawl.LinkDb: LinkDb: URL normalize: true
08/06/10 16:17:38 INFO crawl.LinkDb: LinkDb: URL filter: true
08/06/10 16:17:38 INFO crawl.LinkDb: LinkDb: adding segment:
hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631
08/06/10 16:17:40 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:17:41 INFO mapred.JobClient: Running job: job_200806101251_0097
08/06/10 16:17:42 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:17:47 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:17:52 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:17:53 INFO mapred.JobClient: Job complete: job_200806101251_0097
08/06/10 16:17:53 INFO mapred.JobClient: Counters: 16
08/06/10 16:17:53 INFO mapred.JobClient:   Job Counters
08/06/10 16:17:53 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:17:53 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:17:53 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:17:53 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:17:53 INFO mapred.JobClient:     Map input records=0
08/06/10 16:17:53 INFO mapred.JobClient:     Map output records=0
08/06/10 16:17:53 INFO mapred.JobClient:     Map input bytes=0
08/06/10 16:17:53 INFO mapred.JobClient:     Map output bytes=0
08/06/10 16:17:53 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:17:53 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:17:53 INFO mapred.JobClient:     Reduce input groups=0
08/06/10 16:17:53 INFO mapred.JobClient:     Reduce input records=0
08/06/10 16:17:53 INFO mapred.JobClient:     Reduce output records=0
08/06/10 16:17:53 INFO mapred.JobClient:   File Systems
08/06/10 16:17:53 INFO mapred.JobClient:     Local bytes read=115
08/06/10 16:17:53 INFO mapred.JobClient:     Local bytes written=238
08/06/10 16:17:53 INFO mapred.JobClient:     HDFS bytes read=128
08/06/10 16:17:53 INFO mapred.JobClient:     HDFS bytes written=255
08/06/10 16:17:53 INFO crawl.LinkDb: LinkDb: done
08/06/10 16:17:53 INFO indexer.Indexer: Indexer: starting
08/06/10 16:17:53 INFO indexer.Indexer: Indexer: linkdb:
/user/lritter/crawl/linkdb
08/06/10 16:17:53 INFO indexer.Indexer: Indexer: adding segment:
hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631
08/06/10 16:17:54 INFO mapred.FileInputFormat: Total input paths to process : 6
08/06/10 16:17:55 INFO mapred.JobClient: Running job: job_200806101251_0098
08/06/10 16:17:56 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:18:01 INFO mapred.JobClient:  map 33% reduce 0%
08/06/10 16:18:04 INFO mapred.JobClient:  map 66% reduce 0%
08/06/10 16:18:06 INFO mapred.JobClient:  map 83% reduce 0%
08/06/10 16:18:07 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:18:14 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:18:15 INFO mapred.JobClient: Job complete: job_200806101251_0098
08/06/10 16:18:15 INFO mapred.JobClient: Counters: 16
08/06/10 16:18:15 INFO mapred.JobClient:   Job Counters
08/06/10 16:18:15 INFO mapred.JobClient:     Launched map tasks=6
08/06/10 16:18:15 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:18:15 INFO mapred.JobClient:     Data-local map tasks=6
08/06/10 16:18:15 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:18:15 INFO mapred.JobClient:     Map input records=2
08/06/10 16:18:15 INFO mapred.JobClient:     Map output records=2
08/06/10 16:18:15 INFO mapred.JobClient:     Map input bytes=139
08/06/10 16:18:15 INFO mapred.JobClient:     Map output bytes=133
08/06/10 16:18:15 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:18:15 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:18:15 INFO mapred.JobClient:     Reduce input groups=1
08/06/10 16:18:15 INFO mapred.JobClient:     Reduce input records=2
08/06/10 16:18:15 INFO mapred.JobClient:     Reduce output records=0
08/06/10 16:18:15 INFO mapred.JobClient:   File Systems
08/06/10 16:18:15 INFO mapred.JobClient:     Local bytes read=310
08/06/10 16:18:15 INFO mapred.JobClient:     Local bytes written=1193
08/06/10 16:18:15 INFO mapred.JobClient:     HDFS bytes read=865
08/06/10 16:18:15 INFO mapred.JobClient:     HDFS bytes written=40
08/06/10 16:18:15 INFO indexer.Indexer: Indexer: done
08/06/10 16:18:15 INFO indexer.DeleteDuplicates: Dedup: starting
08/06/10 16:18:15 INFO indexer.DeleteDuplicates: Dedup: adding indexes
in: /user/lritter/crawl/indexes
08/06/10 16:18:16 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:18:17 INFO mapred.JobClient: Running job: job_200806101251_0099
08/06/10 16:18:18 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:18:24 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:18:30 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:18:31 INFO mapred.JobClient: Job complete: job_200806101251_0099
08/06/10 16:18:31 INFO mapred.JobClient: Counters: 15
08/06/10 16:18:31 INFO mapred.JobClient:   Job Counters
08/06/10 16:18:31 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:18:31 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:18:31 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:18:31 INFO mapred.JobClient:     Map input records=0
08/06/10 16:18:31 INFO mapred.JobClient:     Map output records=0
08/06/10 16:18:31 INFO mapred.JobClient:     Map input bytes=0
08/06/10 16:18:31 INFO mapred.JobClient:     Map output bytes=0
08/06/10 16:18:31 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:18:31 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:18:31 INFO mapred.JobClient:     Reduce input groups=0
08/06/10 16:18:31 INFO mapred.JobClient:     Reduce input records=0
08/06/10 16:18:31 INFO mapred.JobClient:     Reduce output records=0
08/06/10 16:18:31 INFO mapred.JobClient:   File Systems
08/06/10 16:18:31 INFO mapred.JobClient:     Local bytes read=135
08/06/10 16:18:31 INFO mapred.JobClient:     Local bytes written=278
08/06/10 16:18:31 INFO mapred.JobClient:     HDFS bytes read=40
08/06/10 16:18:31 INFO mapred.JobClient:     HDFS bytes written=106
08/06/10 16:18:32 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:18:33 INFO mapred.JobClient: Running job: job_200806101251_0100
08/06/10 16:18:34 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:18:38 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:18:43 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:18:44 INFO mapred.JobClient: Job complete: job_200806101251_0100
08/06/10 16:18:44 INFO mapred.JobClient: Counters: 16
08/06/10 16:18:44 INFO mapred.JobClient:   Job Counters
08/06/10 16:18:44 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:18:44 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:18:44 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:18:44 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:18:44 INFO mapred.JobClient:     Map input records=0
08/06/10 16:18:44 INFO mapred.JobClient:     Map output records=0
08/06/10 16:18:44 INFO mapred.JobClient:     Map input bytes=0
08/06/10 16:18:44 INFO mapred.JobClient:     Map output bytes=0
08/06/10 16:18:44 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:18:44 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:18:44 INFO mapred.JobClient:     Reduce input groups=0
08/06/10 16:18:44 INFO mapred.JobClient:     Reduce input records=0
08/06/10 16:18:44 INFO mapred.JobClient:     Reduce output records=0
08/06/10 16:18:44 INFO mapred.JobClient:   File Systems
08/06/10 16:18:44 INFO mapred.JobClient:     Local bytes read=138
08/06/10 16:18:44 INFO mapred.JobClient:     Local bytes written=284
08/06/10 16:18:44 INFO mapred.JobClient:     HDFS bytes read=106
08/06/10 16:18:44 INFO mapred.JobClient:     HDFS bytes written=103
08/06/10 16:18:45 INFO mapred.FileInputFormat: Total input paths to process : 1
08/06/10 16:18:46 INFO mapred.JobClient: Running job: job_200806101251_0101
08/06/10 16:18:47 INFO mapred.JobClient:  map 0% reduce 0%
08/06/10 16:18:53 INFO mapred.JobClient:  map 100% reduce 0%
08/06/10 16:18:59 INFO mapred.JobClient:  map 100% reduce 100%
08/06/10 16:19:00 INFO mapred.JobClient: Job complete: job_200806101251_0101
08/06/10 16:19:00 INFO mapred.JobClient: Counters: 15
08/06/10 16:19:00 INFO mapred.JobClient:   Job Counters
08/06/10 16:19:00 INFO mapred.JobClient:     Launched map tasks=1
08/06/10 16:19:00 INFO mapred.JobClient:     Launched reduce tasks=1
08/06/10 16:19:00 INFO mapred.JobClient:     Data-local map tasks=1
08/06/10 16:19:00 INFO mapred.JobClient:   Map-Reduce Framework
08/06/10 16:19:00 INFO mapred.JobClient:     Map input records=0
08/06/10 16:19:00 INFO mapred.JobClient:     Map output records=0
08/06/10 16:19:00 INFO mapred.JobClient:     Map input bytes=0
08/06/10 16:19:00 INFO mapred.JobClient:     Map output bytes=0
08/06/10 16:19:00 INFO mapred.JobClient:     Combine input records=0
08/06/10 16:19:00 INFO mapred.JobClient:     Combine output records=0
08/06/10 16:19:00 INFO mapred.JobClient:     Reduce input groups=0
08/06/10 16:19:00 INFO mapred.JobClient:     Reduce input records=0
08/06/10 16:19:00 INFO mapred.JobClient:     Reduce output records=0
08/06/10 16:19:00 INFO mapred.JobClient:   File Systems
08/06/10 16:19:00 INFO mapred.JobClient:     Local bytes read=117
08/06/10 16:19:00 INFO mapred.JobClient:     Local bytes written=242
08/06/10 16:19:00 INFO mapred.JobClient:     HDFS bytes read=103
08/06/10 16:19:00 INFO indexer.DeleteDuplicates: Dedup: done
08/06/10 16:19:00 INFO indexer.IndexMerger: merging indexes to:
/user/lritter/crawl/index
08/06/10 16:19:00 INFO indexer.IndexMerger: Adding
hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631
java.io.FileNotFoundException: no segments* file found in
org.apache.nutch.indexer.FsDirectory@hdfs://localhost:54310/user/lritter/crawl/segments/20080610161631:
files: _logs content crawl_fetch crawl_generate crawl_parse parse_data
parse_text
	at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:587)
	at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:251)
	at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2156)
	at org.apache.nutch.indexer.IndexMerger.merge(IndexMerger.java:97)
	at org.apache.nutch.crawl.Crawl.main(Crawl.java:151)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
	at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
	at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)

-lincoln

--
lincolnritter.com
