Posted to dev@lucene.apache.org by "Adar, Eytan" <ey...@exch.hpl.hp.com> on 2001/12/07 20:16:17 UTC

indexing race condition?

I have a piece of code that does online indexing: it watches a set of files,
indexes new ones, and removes entries for files that get deleted.

The problem I'm encountering is that until I flush/close the index, newly
added documents aren't visible.  This means that if my user adds a file and
then immediately deletes it, the delete misses it and the text stays in the index.

I've tried calling optimize(), but that doesn't seem to do it.  It seems I
actually need to close the writer and reopen it, and I don't want to do that
after every new document.

In other words:

add(d1) -> delete(d1) -> get(d1) = d1  (not what I want)
add(d1) -> close index -> delete(d1) -> get(d1) = null (what I want, but
inefficient)

I could just queue up all the delete requests and execute them (once in a
while) after I close the index.  The problem is that some of my delete
operations are actually part of a "replace" procedure (delete then add).
Waiting on the deletes would mean I wipe the document entirely from the
index, including the fresh copy (not what I wanted).

I could start doing weird things with timestamping, so that a queued delete
only removes copies of the document added before it was queued, but that
seems like a headache.
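In case it helps make the idea concrete, here is roughly what I mean by the
timestamping scheme, as a plain Java sketch with no Lucene in it at all: tag
each queued delete with a sequence number, and at flush time only remove
copies that were added before that number, so a delete-then-add "replace"
survives.  PendingIndex and every name in it are made up for illustration;
this is not Lucene API.

```java
import java.util.*;

// Hypothetical sketch: queue deletes with a sequence-number cutoff so that
// a copy re-added after the delete was queued survives the flush.
class PendingIndex {
    private long seq = 0;
    // docId -> sequence numbers of the copies currently "in the index"
    private final Map<String, List<Long>> docs = new HashMap<>();
    // queued deletes: docId -> cutoff (remove copies added before this seq)
    private final List<Map.Entry<String, Long>> pendingDeletes = new ArrayList<>();

    void add(String id) {
        docs.computeIfAbsent(id, k -> new ArrayList<>()).add(seq++);
    }

    void delete(String id) {
        // record the cutoff now; the actual removal happens at flush time
        pendingDeletes.add(new AbstractMap.SimpleEntry<>(id, seq));
    }

    void flush() {
        for (Map.Entry<String, Long> d : pendingDeletes) {
            List<Long> copies = docs.get(d.getKey());
            if (copies == null) continue;
            long cutoff = d.getValue();
            copies.removeIf(s -> s < cutoff);  // spare copies added later
            if (copies.isEmpty()) docs.remove(d.getKey());
        }
        pendingDeletes.clear();
    }

    boolean contains(String id) {
        return docs.containsKey(id);
    }
}
```

A plain add-then-delete disappears at flush, while a replace (delete then
re-add) keeps the newer copy, which is the behavior I'm after.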

Hopefully this makes some sense, and if anyone has a suggestion/solution I'd
love to hear it.

Thanks,

Eytan
