Posted to user@nutch.apache.org by Marco Didonna <m....@gmail.com> on 2011/02/08 11:06:42 UTC

Distributed Indexing with nutch

Hi everyone,
I've built a small Hadoop program that constructs an inverted index from
a text collection. It performs basic analysis: tokenization,
lowercasing, and stopword removal. I was wondering whether I could reuse
some Nutch components, since I assume they've undergone more intensive
tuning and are therefore more efficient. I looked through the javadoc
(org.apache.nutch.indexer package) for hints but didn't find any helpful
material... I hope someone can point me to the right place, ideally with
some example code.
I'd like to stress that I need nothing but the indexing capabilities of
Nutch - no crawling or other stuff - and I need the whole thing to work
on Hadoop :)

Thanks for your time

MD
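
A minimal sketch of the kind of job described above; the class name, the
one-document-per-line input layout, and the stopword list are all
hypothetical:

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits one (term, docId) pair per term occurrence, after tokenization,
// lowercasing and stopword removal. A reducer then groups the postings
// per term.
public class InvertedIndexMapper extends Mapper<LongWritable, Text, Text, Text> {

  private static final Set<String> STOPWORDS =
      new HashSet<String>(Arrays.asList("a", "an", "and", "of", "the", "to"));

  // Output objects are reused across calls instead of allocated per token.
  private final Text term = new Text();
  private final Text docId = new Text();

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    // Assumed input layout: one document per line, "docId<TAB>text".
    String[] parts = line.toString().split("\t", 2);
    if (parts.length < 2) return;
    docId.set(parts[0]);

    StringTokenizer tok = new StringTokenizer(parts[1]);
    while (tok.hasMoreTokens()) {
      String t = tok.nextToken().toLowerCase();
      if (STOPWORDS.contains(t)) continue;
      term.set(t);
      context.write(term, docId);
    }
  }
}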

Re: Distributed Indexing with nutch

Posted by Julien Nioche <li...@gmail.com>.
See https://github.com/jnioche/behemoth/wiki

Please use this list for Nutch-related questions only. Thanks.


> I downloaded the Behemoth source and was looking for some
> documentation to understand what I could reuse... I found none, and
> no script to generate it. Could you point me in the right direction?
> I would appreciate it.
>
> MD
>



-- 
Open Source Solutions for Text Engineering

http://digitalpebble.blogspot.com/
http://www.digitalpebble.com

Re: Distributed Indexing with nutch

Posted by Marco Didonna <m....@gmail.com>.
On 8 February 2011 11:23, Julien Nioche <li...@gmail.com> wrote:
> [...]

I downloaded the Behemoth source and was looking for some documentation
to understand what I could reuse... I found none, and no script to
generate it. Could you point me in the right direction? I would
appreciate it.

MD

Re: Distributed Indexing with nutch

Posted by Marco Didonna <m....@gmail.com>.
On 8 February 2011 12:48, Claudio Martella <cl...@tis.bz.it> wrote:
> [...]
> Are you planning on developing a faster Lucene? :)

Being a newcomer, I really don't think so :) The thing is, I've learned
that in a mapper you should create as few objects as possible (for
efficiency reasons), and to get the positional term vector for a
document I create lots of objects:
IndexWriter, IndexReader, Directory, Document, TermPositions... I just
thought there was a better way :)

MD
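
A lighter-weight alternative, sketched against the Lucene 3.x attribute
API (class names vary between versions, so treat the details as
approximate): run the Analyzer's TokenStream directly over the document
text and read positions from PositionIncrementAttribute, with no
IndexWriter, IndexReader or Directory involved at all.

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.Version;

public class PositionalTerms {

  // Prints each term with its position, without building an index.
  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
    TokenStream ts =
        analyzer.tokenStream("body", new StringReader("Hello inverted index world"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute incr = ts.addAttribute(PositionIncrementAttribute.class);

    ts.reset();
    int position = -1;
    while (ts.incrementToken()) {
      position += incr.getPositionIncrement(); // usually +1 per token
      System.out.println(term.toString() + " @ " + position);
    }
    ts.end();
    ts.close();
  }
}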

Re: Distributed Indexing with nutch

Posted by Claudio Martella <cl...@tis.bz.it>.
I also use Lucene for the analysis of documents, and I don't see the
problem.
Lucene's tokens are pooled and re-used, so I wouldn't worry about that
overhead.

Are you planning on developing a faster Lucene? :)



On 2/8/11 12:35 PM, Marco Didonna wrote:
> [...]


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
claudio.martella@tis.bz.it http://www.tis.bz.it




Re: Distributed Indexing with nutch

Posted by Marco Didonna <m....@gmail.com>.
On 8 February 2011 11:46, Claudio Martella <cl...@tis.bz.it> wrote:
> [...]

Actually, my implementation is based on Jimmy Lin's examples, because I
used his book as an introduction to MapReduce algorithm design :)
However, I also take into account term positions and the section (title,
body, abstract) in which a term occurs. That said, I think my approach
is a little dirty, as I use Lucene's classes to get the positional term
vector: too many objects are created and memory is wasted, IMHO. So I
was looking for something a little more elegant and "hadoopish".

Thanks for your answer.

MD
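
One "hadoopish" shape for this, sketched with hypothetical names: the
mapper emits one (term, "docId,section,position") pair per occurrence
and lets the shuffle do the grouping, so no Lucene index objects are
needed at all; a reducer like the following then assembles the postings.

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// The shuffle delivers all occurrences of a term together; this reducer
// concatenates them into a single posting line per term.
public class PositionalIndexReducer extends Reducer<Text, Text, Text, Text> {

  private final Text postings = new Text();

  @Override
  protected void reduce(Text term, Iterable<Text> occurrences, Context context)
      throws IOException, InterruptedException {
    StringBuilder sb = new StringBuilder();
    for (Text occ : occurrences) {
      if (sb.length() > 0) sb.append(' ');
      sb.append(occ.toString()); // e.g. "d42,title,3"
    }
    postings.set(sb.toString());
    context.write(term, postings); // term -> "d42,title,3 d42,body,17 ..."
  }
}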

Re: Distributed Indexing with nutch

Posted by Claudio Martella <cl...@tis.bz.it>.
Hi Marco,

as Julien suggests, Nutch is probably not the right place to look.
My personal advice is to have a look at Jimmy Lin's Cloud9, and
specifically:

http://www.umiacs.umd.edu/~jimmylin/Cloud9/docs/exercises/indexing.html



On 2/8/11 11:23 AM, Julien Nioche wrote:
> [...]


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
claudio.martella@tis.bz.it http://www.tis.bz.it




Re: Distributed Indexing with nutch

Posted by Julien Nioche <li...@gmail.com>.
Hi Marco

Nutch now delegates indexing and searching to SOLR; all the steps you
described (tokenization, lowercasing, etc.) are implemented there, and
Nutch does not do anything special about them. From a Nutch point of
view, indexing consists of gathering data from various sources (crawldb,
segments, linkdb), applying some simple transformations (indexing
filters), and then sending the result to SOLR.
You can of course write a custom map-reduce function with SOLR embedded,
but that's not what we do in Nutch. Have a look at the SOLR mailing
lists; you'll probably find more info there.
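
For reference, a sketch of what such an indexing filter looks like in
Nutch 1.x; the interface differs slightly between versions, so treat the
details as approximate, and the added field is made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.nutch.crawl.CrawlDatum;
import org.apache.nutch.crawl.Inlinks;
import org.apache.nutch.indexer.IndexingFilter;
import org.apache.nutch.indexer.NutchDocument;
import org.apache.nutch.parse.Parse;

// A custom indexing filter (Nutch 1.x plugin): adds one field to each
// document on its way to SOLR. Registered through the plugin system.
public class ExampleIndexingFilter implements IndexingFilter {

  private Configuration conf;

  public NutchDocument filter(NutchDocument doc, Parse parse, Text url,
      CrawlDatum datum, Inlinks inlinks) {
    String title = parse.getData().getTitle();
    if (title != null) {
      doc.add("title_lc", title.toLowerCase()); // hypothetical field
    }
    return doc; // returning null would drop the document
  }

  public void setConf(Configuration conf) { this.conf = conf; }
  public Configuration getConf() { return conf; }
}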

HTH

Julien

PS: (shameless self-promotion for one of my pet projects) Behemoth
(https://github.com/jnioche/behemoth) is about doing large-scale text
processing on Hadoop. There is a component which delegates the indexing
of documents to SOLR, but it could be modified to do what you described
and have SOLR instances within the map/reduce functions.
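
A sketch of that "SOLR inside map/reduce" idea, assuming the SolrJ
client of that era (CommonsHttpSolrServer); the configuration key and
schema fields are made up:

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Pushes each document straight into a SOLR instance from the reducer.
public class SolrIndexingReducer extends Reducer<Text, Text, Text, Text> {

  private SolrServer solr;

  @Override
  protected void setup(Context context) throws IOException {
    // Hypothetical configuration key holding the SOLR endpoint.
    String url = context.getConfiguration().get("solr.server.url");
    solr = new CommonsHttpSolrServer(url);
  }

  @Override
  protected void reduce(Text docId, Iterable<Text> fields, Context context)
      throws IOException, InterruptedException {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", docId.toString());
    for (Text f : fields) {
      doc.addField("content", f.toString());
    }
    try {
      solr.add(doc); // real code would batch adds and commit at the end
    } catch (Exception e) {
      throw new IOException(e);
    }
  }
}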

On 8 February 2011 10:06, Marco Didonna <m....@gmail.com> wrote:

> [...]



-- 
Open Source Solutions for Text Engineering

http://digitalpebble.blogspot.com/
http://www.digitalpebble.com