Posted to user@nutch.apache.org by shjiang <ji...@souchang.com> on 2006/10/16 05:03:45 UTC
How to read content of a particular url from the crawldb?
I cannot find any API that supports reading the content of
a specified URL from the crawldb.
Re: Re: How to read content of a particular url from the crawldb?
Posted by Albert Chern <al...@gmail.com>.
That's a good point. I think the Lucene index might be the only place
where that information is stored. If you really needed it, I guess you
could build your own mapping of URL to segment. However, I am not
that familiar with Nutch so I will let someone with more experience
answer this.
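The "build your own mapping" idea could be sketched roughly as below. This is purely illustrative and not a Nutch API: it assumes you have already extracted, for each segment, the list of URLs it contains (for example by dumping each segment), and it just indexes them for O(1) lookup:

```python
# Sketch: index URLs by segment so you can look up which segment holds
# a given URL. build_url_index and its inputs are hypothetical names;
# nothing here comes from Nutch itself.

def build_url_index(segments):
    """segments: dict of segment name -> iterable of URLs it holds."""
    index = {}
    for segment, urls in segments.items():
        for url in urls:
            # A URL refetched later can appear in several segments;
            # keep the most recent one (segment names are timestamps).
            if url not in index or segment > index[url]:
                index[url] = segment
    return index

segments = {
    "20061013144233": ["http://www.nokia.com.cn/", "http://example.com/a"],
    "20061014093000": ["http://example.com/a"],
}
index = build_url_index(segments)
print(index["http://www.nokia.com.cn/"])  # -> 20061013144233
print(index["http://example.com/a"])      # -> 20061014093000
```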
On 10/15/06, shjiang <ji...@souchang.com> wrote:
> From version 0.8, "bin/nutch segread" has been replaced by
> "bin/nutch readseg".
> I tried the command: bin/nutch readseg -get
> ./crawl/segments/20061013144233/ http://www.nokia.com.cn/
> and I can get the entire content of the URL.
> The problem is that there are several segments directories under
> ./crawl/segments/, so how can I know which segment holds the content
> of the specified URL?
> > You could do it from the command line using bin/nutch segread, or you
> > could do it in Java by opening map file readers on the directories
> > called "content" found in each segment.
> >
> > On 10/15/06, shjiang <ji...@souchang.com> wrote:
> >> I cannot find any API that supports reading the content of
> >> a specified URL from the crawldb.
Re: How to read content of a particular url from the crawldb?
Posted by shjiang <ji...@souchang.com>.
From version 0.8, "bin/nutch segread" has been replaced by
"bin/nutch readseg".
I tried the command: bin/nutch readseg -get
./crawl/segments/20061013144233/ http://www.nokia.com.cn/
and I can get the entire content of the URL.
The problem is that there are several segments directories under
./crawl/segments/, so how can I know which segment holds the content
of the specified URL?
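Lacking such an index, one brute-force workaround is simply to try every segment in turn. A rough sketch, with the caveats that the paths, the nutch launcher location, and the success check are all assumptions for a typical 0.8-style layout (if readseg exits 0 even for a missing URL, you would need to inspect its output more carefully):

```python
# Sketch: run "readseg -get" against each segment directory until one
# returns content for the URL. Paths and the nutch command location are
# assumptions; adjust for your install.
import glob
import subprocess

def find_in_segments(url, segments_dir="./crawl/segments",
                     nutch="bin/nutch"):
    for seg in sorted(glob.glob(segments_dir + "/*/")):
        result = subprocess.run([nutch, "readseg", "-get", seg, url],
                                capture_output=True, text=True)
        # Heuristic success check: the command succeeded and printed
        # something. May need tightening depending on readseg's behavior.
        if result.returncode == 0 and result.stdout.strip():
            return seg, result.stdout
    return None, None

# Usage (assuming a crawl in ./crawl and bin/nutch on the path):
#   seg, content = find_in_segments("http://www.nokia.com.cn/")
```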
> You could do it from the command line using bin/nutch segread, or you
> could do it in Java by opening map file readers on the directories
> called "content" found in each segment.
>
> On 10/15/06, shjiang <ji...@souchang.com> wrote:
>> I cannot find any API that supports reading the content of
>> a specified URL from the crawldb.
Re: How to read content of a particular url from the crawldb?
Posted by Albert Chern <al...@gmail.com>.
You could do it from the command line using bin/nutch segread, or you
could do it in Java by opening map file readers on the directories
called "content" found in each segment.
On 10/15/06, shjiang <ji...@souchang.com> wrote:
> I cannot find any API that supports reading the content of
> a specified URL from the crawldb.