Posted to issues@lucene.apache.org by "Julie Tibshirani (Jira)" <ji...@apache.org> on 2022/05/23 22:27:00 UTC

[jira] [Comment Edited] (LUCENE-10590) Indexing all zero vectors leads to heat death of the universe

    [ https://issues.apache.org/jira/browse/LUCENE-10590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541185#comment-17541185 ] 

Julie Tibshirani edited comment on LUCENE-10590 at 5/23/22 10:26 PM:
---------------------------------------------------------------------

I don't have a deep understanding of what's happening, but wanted to share this discussion from hnswlib: [https://github.com/nmslib/hnswlib/issues/263#issuecomment-739549454]. It looks like HNSW can really fall apart if there are a lot of duplicate vectors. The duplicates all link to each other, creating a highly disconnected graph. I've often seen libraries recommend that users deduplicate vectors before indexing them ([https://github.com/facebookresearch/faiss/wiki/FAQ#searching-duplicate-vectors-is-slow]). I guess indexing all zero vectors is an extreme version of this!
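To make that failure mode concrete, here is a toy sketch (not Lucene's or hnswlib's actual code; the brute-force `build_neighbors` selection is a deliberate simplification of HNSW's neighbor selection, used only for illustration) showing how duplicate vectors crowd out links to distinct vectors:

```python
import random

def build_neighbors(vectors, m=4):
    """Naive neighbor selection: link each vector to its m nearest
    others by squared distance. This is a simplified stand-in for
    HNSW's neighbor selection, not the real diversity heuristic."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    graph = {}
    for i, v in enumerate(vectors):
        others = sorted((j for j in range(len(vectors)) if j != i),
                        key=lambda j: dist2(v, vectors[j]))
        graph[i] = others[:m]
    return graph

random.seed(0)
# 20 duplicates of the zero vector plus 10 distinct random vectors.
vectors = [[0.0, 0.0]] * 20 + [[random.random(), random.random()] for _ in range(10)]
graph = build_neighbors(vectors)

# Every duplicate links only to other duplicates: the ties at
# distance 0 crowd out all links to the distinct vectors, so the
# duplicate cluster has no outgoing edges to the rest of the graph.
dup_ids = set(range(20))
print(all(set(graph[i]) <= dup_ids for i in dup_ids))  # True
```

With enough duplicates, greedy search that enters the duplicate cluster can only circulate inside it, which matches the disconnection the hnswlib thread describes.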



> Indexing all zero vectors leads to heat death of the universe
> -------------------------------------------------------------
>
>                 Key: LUCENE-10590
>                 URL: https://issues.apache.org/jira/browse/LUCENE-10590
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Michael Sokolov
>            Priority: Major
>
> By accident while testing something else, I ran a luceneutil test indexing 1M 100d vectors where every vector was all zeroes. This caused indexing to take a very long time (~40x normal, though it did eventually complete), and search performance was similarly bad. We should not degrade by orders of magnitude even on the worst possible data.
> I'm not entirely sure what the issue is, but perhaps we keep exploring the graph as long as we keep finding hits that are "better", where "better" means (score, -docid) >= (lowest score, -docid). If that's right and all docs have the same score, then we probably need to either switch to a strict > comparison (though this could hurt recall in normal cases) or introduce some kind of minimum score threshold?
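The >= vs > distinction above can be sketched with a toy best-first traversal (this is not Lucene's HnswGraphSearcher; the `keep_ties` flag, the breadth-first queue, and the omission of docid tiebreaks are all simplifications I'm assuming for illustration):

```python
from collections import deque

def search(graph, scores, entry, topk=3, keep_ties=True):
    """Simplified graph exploration: a candidate is only expanded
    while its score is competitive with the worst result collected
    so far. keep_ties=True uses the >= comparison, keep_ties=False
    the strict > comparison discussed in the comment."""
    visited = {entry}
    queue = deque([entry])
    results = []   # scores of the current top-k results
    expanded = 0
    while queue:
        node = queue.popleft()
        score = scores[node]
        if len(results) >= topk:
            worst = min(results)
            competitive = score >= worst if keep_ties else score > worst
            if not competitive:
                continue   # prune: do not expand this node's neighbors
        expanded += 1
        results = sorted(results + [score], reverse=True)[:topk]
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return expanded

# A chain graph where every doc has an identical score, mimicking
# the all-zero-vectors case where every distance ties.
n = 1000
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
scores = {i: 0.0 for i in range(n)}

print(search(graph, scores, 0, keep_ties=True))   # 1000: explores everything
print(search(graph, scores, 0, keep_ties=False))  # 3: stops once top-k is full
```

With ties counted as competitive, every node scores "as good as" the worst result, so the search walks the entire graph; with a strict comparison it stops as soon as the top-k fills, which is the ~40x blowup vs. early termination trade-off the comment is weighing.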



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org