Posted to java-user@lucene.apache.org by Herbert Roitblat <he...@orcatec.com> on 2010/04/12 20:15:13 UTC

How to get the tokens for a given document

Hi, folks.
I appreciate the help people have been offering.
Here is my problem.  My immediate need is to get the tokens for a document from the Lucene index.  I have a list of documents that I walk, one at a time.  Right now I am getting the tokens and their frequencies, but the problem is that they stay in the heap as I move from document to document.

Is there another way to get the tokens given a document ID?

Thanks,
I'm looking for alternative ways to skin this cat.

Herb

Re: How to get the tokens for a given document

Posted by Herbert L Roitblat <he...@orcatec.com>.
Thanks David.
I think that I neglected to say that I am using pyLucene 2.4.0.

Your suggestion is almost what we're doing:

>indexReader.getTermFreqVector(ID, fieldName)

        # run the query; search(Query) returns a Hits result object
        self.hits = list(self.lSearcher.search(self.query))
        if self.hits:
            # pyLucene hands back generic objects, so cast to a Hit
            self.hit = lucene.Hit.cast_(self.hits[0])
            # pull the term vectors for every field of this document
            self.tfvs = self.lReader.getTermFreqVectors(self.hit.id)

At the very least, I may be able to reduce overhead by passing the 
fieldName to indexReader.getTermFreqVector() instead of fetching the 
vectors for every field.
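
Something along these lines is what I have in mind (a rough sketch 
against pyLucene 2.4; "contents" stands in for our real field name):

    # Sketch only: ask for just one field's vector instead of all of them,
    # so the other fields' tokens are never materialized on the heap.
    tfv = self.lReader.getTermFreqVector(self.hit.id, "contents")
    if tfv is not None:
        terms = tfv.getTerms()              # String[] of tokens
        freqs = tfv.getTermFrequencies()    # parallel int[] of counts
        for term, freq in zip(terms, freqs):
            print term, freq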

The problem I'm facing is that all of the tokens in all of the fields in 
all of the documents get added to the heap, and it runs out of space.  
I'm looking for other ways of getting the information I need that might 
not fill up the heap.

Thanks again.
Herb


Your other suggestion may also be what we end up doing.  Since our 
documents can be in any language, I will have to make sure that I use 
the right analyzer.

>load that field and analyze it again. 
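
If we go that route, I imagine it would look something like this (a 
sketch only; "contents" and the StandardAnalyzer are placeholders, and 
the field would have to have been stored):

    # Sketch: re-tokenize the stored text of one field.  StandardAnalyzer
    # is a stand-in; the right analyzer depends on the document's language.
    doc = self.lReader.document(self.hit.id)
    text = doc.get("contents")
    if text is not None:
        analyzer = lucene.StandardAnalyzer()
        stream = analyzer.tokenStream("contents", lucene.StringReader(text))
        token = stream.next()
        while token is not None:
            print token.termText()
            token = stream.next()
        stream.close()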

David Causse wrote:
> Hi,
>
> Are you walking indexReader.terms() and then indexReader.termDocs(Term t)
> for each term, matching your docID against the TermDocs enum? In other
> words, walking the whole index?
>
> You need a forward index, and Lucene's index is inverted, but IMHO you
> have two solutions with Lucene (sadly, they both require re-indexing):
>  - Store the text you indexed; when you have to walk the terms inside a
>    doc, just load that field and analyze it again.
>  - Use a TermVector: when you create your content field, use the Field
>    constructor that accepts the TermVector enum. You can then walk it at
>    search time with indexReader.getTermFreqVector(ID, fieldName)
>
> Hope it helps.
>
> On Mon, Apr 12, 2010 at 11:15:13AM -0700, Herbert Roitblat wrote:
>   
>> Hi, folks.
>> I appreciate the help people have been offering.
>> Here is my problem.  My immediate need is to get the tokens for a document from the Lucene index.  I have a list of documents that I walk, one at a time.  Right now, I am getting the tokens and their frequencies and the problem is that these stay in the heap as I move from document to document.
>>
>> Is there another way to get the tokens given a document ID?
>>
>> Thanks,
>> I'm looking for alternative ways to skin this cat.
>>
>> Herb
>>     
>
>   

Re: How to get the tokens for a given document

Posted by David Causse <dc...@spotter.com>.
Hi,

Are you walking indexReader.terms() and then indexReader.termDocs(Term t)
for each term, matching your docID against the TermDocs enum? In other
words, walking the whole index?
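
If so, that is roughly this pattern (variable names made up), and it
visits every term in the index for each document you process:

    # Sketch of the inverted-index walk: every term in the whole index is
    # enumerated just to find the ones that occur in a single document.
    term_enum = reader.terms()
    while term_enum.next():
        term = term_enum.term()
        term_docs = reader.termDocs(term)
        while term_docs.next():
            if term_docs.doc() == doc_id:
                print term.text(), term_docs.freq()
        term_docs.close()
    term_enum.close()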

You need a forward index, and Lucene's index is inverted, but IMHO you
have two solutions with Lucene (sadly, they both require re-indexing):
 - Store the text you indexed; when you have to walk the terms inside a
   doc, just load that field and analyze it again.
 - Use a TermVector: when you create your content field, use the Field
   constructor that accepts the TermVector enum. You can then walk it at
   search time with indexReader.getTermFreqVector(ID, fieldName); a
   rough sketch follows below.
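
Roughly like this (the index path, the "contents" field name and the
text / reader / doc_id variables are all made up; it assumes the JVM
has already been started with lucene.initVM()):

    # Index time: ask Lucene to keep a term vector for the content field.
    writer = lucene.IndexWriter(
        lucene.FSDirectory.getDirectory("/path/to/index"),
        lucene.StandardAnalyzer(), True,
        lucene.IndexWriter.MaxFieldLength.UNLIMITED)
    doc = lucene.Document()
    doc.add(lucene.Field("contents", text,
                         lucene.Field.Store.NO,
                         lucene.Field.Index.ANALYZED,
                         lucene.Field.TermVector.YES))
    writer.addDocument(doc)
    writer.close()

    # Search time: walk the stored vector for one document, no re-analysis.
    tfv = reader.getTermFreqVector(doc_id, "contents")
    if tfv is not None:
        for term, freq in zip(tfv.getTerms(), tfv.getTermFrequencies()):
            print term, freq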

Hope it helps.

On Mon, Apr 12, 2010 at 11:15:13AM -0700, Herbert Roitblat wrote:
> Hi, folks.
> I appreciate the help people have been offering.
> Here is my problem.  My immediate need is to get the tokens for a document from the Lucene index.  I have a list of documents that I walk, one at a time.  Right now, I am getting the tokens and their frequencies and the problem is that these stay in the heap as I move from document to document.
> 
> Is there another way to get the tokens given a document ID?
> 
> Thanks,
> I'm looking for alternative ways to skin this cat.
> 
> Herb

-- 
David Causse
Spotter
http://www.spotter.com/

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org