Posted to dev@lucene.apache.org by "Michael Sun (JIRA)" <ji...@apache.org> on 2016/12/02 21:29:58 UTC

[jira] [Comment Edited] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs

    [ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716573#comment-15716573 ] 

Michael Sun edited comment on SOLR-9764 at 12/2/16 9:29 PM:
------------------------------------------------------------

bq.  I do not know how it would perform when actually used as a filterCache entry, compared to the current bitset implementation.
RoaringDocIdSet looks pretty interesting. However, judging from the link in the comments, https://www.elastic.co/blog/frame-of-reference-and-roaring-bitmaps, it looks like RoaringDocIdSet doesn't save any memory in the case where a query matches all docs.

Basically the idea of RoaringDocIdSet is to divide the entire bitmap into multiple chunks. Each chunk is stored as either a bitmap or an integer array (using diff compression), depending on the number of matched docs in that chunk: if the count is above a certain threshold, a bitmap is used for that chunk; otherwise an integer array is used (sketched below). That can help in some use cases, but in this use case it would fall back to something equivalent to FixedBitSet.
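For illustration, a sketch of that per-chunk container choice (hypothetical code, not Lucene's actual RoaringDocIdSet):

{code:java}
// Hypothetical sketch of the roaring container choice. Docs are split into
// chunks of 2^16 ids; each chunk picks the cheaper representation for its
// cardinality.
class RoaringSketch {
  static final int CHUNK_SIZE = 1 << 16;
  // A short[] costs 2 bytes per doc; the 8 KB bitmap covers the whole
  // chunk, so the break-even point is 8192 / 2 = 4096 docs per chunk.
  static final int ARRAY_THRESHOLD = 4096;

  static Object buildChunk(int[] docsInChunk) {
    if (docsInChunk.length < ARRAY_THRESHOLD) {
      // Sparse chunk: sorted array of 16-bit offsets within the chunk
      // (only the low 16 bits of each doc id are kept).
      short[] offsets = new short[docsInChunk.length];
      for (int i = 0; i < docsInChunk.length; i++) {
        offsets[i] = (short) (docsInChunk[i] & 0xFFFF);
      }
      return offsets;
    }
    // Dense chunk: plain 8 KB bitmap. A query matching all docs fills
    // every chunk, so every chunk takes this branch and the whole set
    // becomes equivalent to one big FixedBitSet.
    long[] bits = new long[CHUNK_SIZE / 64];
    for (int doc : docsInChunk) {
      int offset = doc & 0xFFFF;
      bits[offset >>> 6] |= 1L << (offset & 63);
    }
    return bits;
  }
}
{code}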

In addition, the 'official' website for roaring bitmaps, http://roaringbitmap.org, mentions that roaring bitmaps can also use run length encoding to store a bitmap chunk, but it also says one of the main goals of roaring bitmaps is to avoid the main drawback of run length encoding, which is expensive random access (sketched below). I need to dig into the source code to understand it better. Any suggestions are welcome.
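For illustration, a sketch of why run length encoding trades random access for space (a hypothetical run container, not RoaringBitmap's actual code):

{code:java}
// Hypothetical run-length encoded chunk: sorted (start, length) runs.
// A fully matched chunk collapses to a single run, which is why RLE is
// so compact for the match-all case, but a membership test now needs a
// binary search over the runs instead of a single O(1) bit test.
static boolean contains(int[] runStarts, int[] runLengths, int doc) {
  int lo = 0, hi = runStarts.length - 1;
  while (lo <= hi) {
    int mid = (lo + hi) >>> 1;
    if (doc < runStarts[mid]) {
      hi = mid - 1;                 // doc is before this run
    } else if (doc >= runStarts[mid] + runLengths[mid]) {
      lo = mid + 1;                 // doc is after this run
    } else {
      return true;                  // doc falls inside this run
    }
  }
  return false;
}
{code}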




> Design a memory efficient DocSet if a query returns all docs
> ------------------------------------------------------------
>
>                 Key: SOLR-9764
>                 URL: https://issues.apache.org/jira/browse/SOLR-9764
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Michael Sun
>         Attachments: SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR_9764_no_cloneMe.patch
>
>
> In some use cases, particularly with time series data where a collection alias is used and data is partitioned into multiple small collections by timestamp, a filter query can match all documents in a collection. Currently a BitDocSet is used, which contains a large array of long integers with every bit set to 1. After querying, the resulting DocSet saved in the filter cache is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for the last 14 days of data, each collection holding one day. A filter query for the last week of data would result in at least six DocSets in the filter cache, each matching all documents in one of six collections.
> This issue is to design a new DocSet that is memory efficient for such a use case. The new DocSet removes the large array, reducing memory usage and GC pressure without losing the advantages of a large filter cache.
> In particular, for use cases with time series data, where a collection alias is used and data is partitioned into multiple small collections by timestamp, the gain can be large.
> For further optimization, it may be helpful to design a DocSet with run length encoding. Thanks [~mmokhtar] for the suggestion.
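> A minimal sketch of the idea (a hypothetical MatchAllDocSet, not the attached patches): since every doc matches, the set only needs the doc count, not one bit per doc.
> {code:java}
> class MatchAllDocSet {
>   private final int maxDoc; // total number of docs in the index
>
>   MatchAllDocSet(int maxDoc) { this.maxDoc = maxDoc; }
>
>   boolean exists(int doc) { return doc >= 0 && doc < maxDoc; } // O(1), no array
>
>   int size() { return maxDoc; }
>   // Memory: a few bytes per cache entry, vs. roughly maxDoc / 8 bytes
>   // for the long[] behind a BitDocSet (e.g. ~12.5 MB at 100M docs).
> }
> {code}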


