Posted to dev@lucene.apache.org by "Varun Thacker (JIRA)" <ji...@apache.org> on 2016/12/15 02:35:58 UTC

[jira] [Created] (SOLR-9866) Reduce memory pressure for expand component

Varun Thacker created SOLR-9866:
-----------------------------------

             Summary: Reduce memory pressure for expand component
                 Key: SOLR-9866
                 URL: https://issues.apache.org/jira/browse/SOLR-9866
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Varun Thacker


A client was having memory pressure issues when running queries with collapse and expand.

I created a setup on my machine with dummy data to reproduce this. This ticket concentrates on just the expand part, as that's the top culprit according to some sampling I did with YourKit.

Started Solr with {{./bin/solr start -p 8984 -m 4g}} and created a collection called "ct" (collapse testing).
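The ticket doesn't record the exact create command; assuming the standard script was used against the instance above, it would have been something like:

{code}
# Assumed command - creates the "ct" collection on the instance started on port 8984
./bin/solr create -c ct -p 8984
{code}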

The indexing code below indexes 10M documents, with every tenth document reusing the previous document's collapse value, so roughly 1 in 10 documents is a duplicate within its collapse group.

{code}
public void index() throws Exception {
    // Point the client at the instance started above on port 8984.
    HttpSolrClient client = new HttpSolrClient.Builder().withBaseSolrUrl("http://localhost:8984/solr").build();

    client.deleteByQuery("ct", "*:*");
    client.commit("ct");

    // Index 10M documents, with every tenth document duplicating the previous document's collapse value.
    List<SolrInputDocument> docs = new ArrayList<>(1000);
    for (int i = 0; i < 1000 * 1000 * 10; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", i);
      if (i % 10 == 0 && i != 0) {
        doc.addField("collapseField1_s", i - 1); // with docValues
        doc.addField("collapseField2_s", i - 1); // without docValues (field name assumed; the original listed collapseField1_s twice)
      } else {
        doc.addField("collapseField1_s", i); // with docValues
        doc.addField("collapseField2_s", i); // without docValues (field name assumed)
      }
      docs.add(doc);
      if (docs.size() == 1000) {
        client.add("ct", docs);
        docs.clear();
      }
    }
    client.commit("ct");
  }
{code}

I wrote a script to fire 3k queries of the form {{&fq=\{!collapse field=collapseField1\}&expand=true&expand.rows=1000}}.
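The load script itself isn't attached; a rough SolrJ equivalent of what it does might look like the sketch below. The filter query, expand parameters, and rows come from the description above; the main query of {{*:*}} is an assumption.

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ExpandLoad {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient.Builder().withBaseSolrUrl("http://localhost:8984/solr").build();
    // Fire 3k collapse+expand queries against the "ct" collection.
    for (int i = 0; i < 3000; i++) {
      SolrQuery q = new SolrQuery("*:*");                    // main query assumed
      q.addFilterQuery("{!collapse field=collapseField1}");  // collapse on the group field
      q.set("expand", "true");                               // expand the collapsed groups
      q.set("expand.rows", 1000);                            // up to 1000 docs per group
      client.query("ct", q);
    }
    client.close();
  }
}
{code}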

I enabled "Object Allocation Recording" in YourKit and am attaching two sets of screenshots: 
 - Stock Solr 6.3: for 1 query (original-1) and for the 3k queries (original-3k), plus GC logs from the 3k-query run
 - Patched Solr: for 1 query (patch-1) and for the 3k queries (patch-3k), plus GC logs from the 3k-query run

The patch is nothing but tweaking the initial allocation sizes. I haven't fully verified that it's correct, but {{TestExpandComponent}} was happy.
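To illustrate the general idea only (this is not the actual patch, and these are not Solr's internal classes): a per-request structure pre-sized to a large fixed capacity allocates and discards far more memory on every query than one sized to what the query actually needs.

{code}
import java.util.HashMap;
import java.util.Map;

// Generic illustration of the "initial allocation size" idea, with made-up numbers.
public class AllocationSketch {
  public static void main(String[] args) {
    int expectedGroups = 128; // hypothetical number of collapse groups hit by one query

    // Oversized: ~131k buckets allocated up front on every request, mostly unused.
    Map<Long, int[]> oversized = new HashMap<>(1 << 17);

    // Right-sized: a small initial capacity tied to the expected group count.
    Map<Long, int[]> rightSized = new HashMap<>(expectedGroups * 2);

    System.out.println(oversized.size() + " " + rightSized.size());
  }
}
{code}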

 


