Posted to solr-user@lucene.apache.org by "Haschart, Robert J (rh9ec)" <rh...@virginia.edu> on 2020/02/05 20:41:16 UTC

StatelessScriptUpdateProcessorFactory causing OOM errors?

I've recently started looking at using the updateRequestProcessorChain to ensure the presence of certain fields in our Solr records.  The reason for doing so is that we have records from several different sources that are processed in different ways, and by adding the field via the updateRequestProcessorChain I don't have to duplicate the logic for creating the fields in several different places.

At first it seemed that I might be able to accomplish what I needed with the TemplateUpdateProcessorFactory, the CloneFieldUpdateProcessorFactory, and the RegexReplaceProcessorFactory, but I quickly went beyond what they can easily accomplish.

example1:
A document will have one or more pool_f_stored value(s) and a full_title_tsearch_stored value.
Generate a field where the field name(s) is drawn from the pool_f_stored value(s) and the field value is equal to the value of the full_title_tsearch_stored field.  (This adds a pool-specific title browse field.)
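Boiled down to a plain-JavaScript sketch (outside Solr; the function name and sample values are illustrative), example 1 is just building one field name per pool value:

```javascript
// Build one pool-specific browse field per pool value, all carrying
// the same title value (function and field names are illustrative).
function poolTitleFields(pools, title) {
  var fields = {};
  for (var i = 0; i < pools.length; i++) {
    fields["full_" + pools[i] + "_title_f"] = title;
  }
  return fields;
}
```

The actual update-processor script in the thread applies the same construction to a SolrInputDocument.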

example2:
A document will have one (or more) values in a field named uva_availability_f_stored; these values will be drawn from the set of strings { Online, On shelf, Request, <anything else> }.  These strings should be mapped to the integer values { 3, 2, 1, 0 } respectively, and a field named uva_availability_isort should be added containing only the largest of those values.
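The mapping described above can be sketched as a plain function (outside Solr; the function name is illustrative):

```javascript
// Rank each availability string; anything not in the table ranks 0.
var AVAILABILITY_RANKS = { "Online": 3, "On shelf": 2, "Request": 1 };

// Return the largest rank among the given availability values.
function availabilityRank(values) {
  var best = 0;
  for (var i = 0; i < values.length; i++) {
    var rank = AVAILABILITY_RANKS[values[i]] || 0;
    if (rank > best) best = rank;
  }
  return best;
}
```

The update-processor script in the thread applies this same max-of-ranks logic to the document's field values.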

So I tried using the StatelessScriptUpdateProcessorFactory, wrote short javascript implementations to accomplish the above, called the scripts from the updateRequestProcessorChain, and tested, and everything seemed great.

However, when I ran the bulk of our 9 million records through the indexing process, Solr would repeatedly, unceremoniously throw an OOM error and terminate, usually citing "# java.lang.OutOfMemoryError: Metaspace" as the reason.
The only difference from previous runs is that now I am calling the three javascript scripts during the updateRequestProcessorChain.

If I comment out those steps in the updateRequestProcessorChain, I can index all 9 million items with no problem.

Any thoughts on why this would be the case?  Any suggestions on how to track this down?  Any known "gotchas" with using javascript scripts from within the updateRequestProcessorChain?

Java version:
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)
Solr version:
solr-spec    7.3.0
solr-impl    7.3.0 98a6b3d642928b1ac9076c6c5a369472581f7633 - woody - 2018-03-28 14:37:45

javascript for example 1:

function processAdd(cmd) {

  // Declare locals with "var": without it these become shared globals
  // in the script engine, which is unsafe when updates run concurrently.
  var doc = cmd.solrDoc;  // org.apache.solr.common.SolrInputDocument
  var field_value_name = params.get("field_value");
  var field_value = doc.getFieldValue(field_value_name);
  logger.debug("update-script#processAdd: field_value=" + field_value);

  if (field_value != null)
  {
      var field_name_name = params.get("field_name");
      var field_names_response = doc.getFieldValues(field_name_name);
      var field_names = (field_names_response != null) ? field_names_response.toArray() : null;
      for (var i = 0; field_names != null && i < field_names.length; i++)
      {
          var field_name = "full_" + field_names[i] + "_title_f";
          doc.setField(field_name, field_value);
      }
  }
}

solrconfig.xml snippet to call the script:

     <processor class="solr.StatelessScriptUpdateProcessorFactory">
         <str name="script">title_browse.js</str>
         <lst name="params">
            <str name="field_name">pool_f_stored</str>
            <str name="field_value">full_title_tsearchf_stored</str>
         </lst>
     </processor>

javascript for example 2:

function processAdd(cmd) {

  // Declare locals with "var" so they are not shared globals across
  // concurrent updates.
  var doc = cmd.solrDoc;  // org.apache.solr.common.SolrInputDocument
  var field_name = params.get("field_name");
  var field_value_name = params.get("field_value");
  logger.debug("update-script#processAdd: field_value_name=" + field_value_name);
  var field_values_result = doc.getFieldValues(field_value_name);
  var field_values = (field_values_result != null) ? field_values_result.toArray() : null;
  logger.debug("update-script#processAdd: field_value count=" + (field_values == null ? "null" : field_values.length));

  if (field_name != null && field_values != null && field_values.length > 0)
  {
      var value = 0;
      for (var i = 0; i < field_values.length; i++)
      {
          var field_value = field_values[i];
          if (field_value.equals("Request"))       value = Math.max(value, 1);
          else if (field_value.equals("On shelf")) value = Math.max(value, 2);
          else if (field_value.equals("Online"))   value = Math.max(value, 3);
      }
      doc.setField(field_name, value);
  }
}

solrconfig.xml snippet to call the example 2 script:

     <processor class="solr.StatelessScriptUpdateProcessorFactory">
         <str name="script">availability_rank.js</str>
         <lst name="params">
            <str name="field_name">uva_availability_isort</str>
            <str name="field_value">uva_availability_f_stored</str>
         </lst>
     </processor>



Re: StatelessScriptUpdateProcessorFactory causing OOM errors?

Posted by Erick Erickson <er...@gmail.com>.
How many fields do you wind up having? It looks on a quick glance like
it depends on the values of fields. While I’ve seen Solr/Lucene handle
indexes with over 1M different fields, it’s unsatisfactory.

What I’m wondering is if you are adding a zillion different fields to your
docs as time passes and eventually the structures that are needed to
maintain your field mappings are blowing up memory.

If that’s the case, you need an alternative design because your
performance will be unacceptable.
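One way to check the field-count hypothesis is the Luke request handler, which lists every field actually present in the index. A sketch (requires a running Solr; "mycore" is an illustrative core name):

```shell
# Ask the Luke handler for the field list, skipping per-field term
# statistics; the "fields" object in the JSON response has one entry
# per indexed field.
curl -s "http://localhost:8983/solr/mycore/admin/luke?numTerms=0&wt=json"
```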

May be off base, if so we can dig further.

Best,
Erick

> On Feb 5, 2020, at 3:41 PM, Haschart, Robert J (rh9ec) <rh...@virginia.edu> wrote:
> 
> StatelessScriptUpdateProcessorFactory


Re: StatelessScriptUpdateProcessorFactory causing OOM errors?

Posted by Erick Erickson <er...@gmail.com>.
Robert:

My concern with fixing this by adding memory is that it may just be kicking the can down the road. Assuming there really is some leak, the leaked objects will eventually accumulate and you’ll hit another OOM. If that were the case, I’d expect a cursory look at your memory usage to show it increasing steadily over time as your script is used. When I looked at your script, I didn’t see anything obvious...

Now, all that said, if you bump the memory and it stays within some band, maybe you were just running close to your limits before and got “lucky”.

Here's another possibility:

- your commit interval is too long. While I constantly find them set too short, it’s also possible to set them too long. To support real-time get, Solr needs to keep pointers into the TLOGs for all documents that have been added since the last searcher was opened. I can’t really make this square with switching from a jar to a script, but…

You’d probably need to enable the OOM killer script and enable heap-dump-on-oom to really get to the bottom of this, or maybe just take a heap dump after a while when you’re indexing docs. 
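The heap-dump-on-OOM flags Erick mentions can be passed through SOLR_OPTS (for example in solr.in.sh). A sketch, assuming a standard Solr install; the dump path and Metaspace cap are illustrative values:

```shell
# Take a heap dump when an OOM fires, write it somewhere with space,
# and cap Metaspace so growth is bounded and visible in the dump.
SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError"
SOLR_OPTS="$SOLR_OPTS -XX:HeapDumpPath=/var/solr/logs"
SOLR_OPTS="$SOLR_OPTS -XX:MaxMetaspaceSize=512m"
```

These are standard HotSpot options; the resulting .hprof file can then be opened in a heap analyzer to see what is filling Metaspace.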

Best,
Erick

> On Feb 13, 2020, at 2:45 PM, Jörn Franke <jo...@gmail.com> wrote:
> 
> I had also issues with this factory when creating atomic updates inside it. They worked, but searchers were never closed and new ones were opened and stayed open, with all the issues related to that. Maybe one needs to look into that in more detail. However - it is a script in the end, so it could always be a bug in your script as well.
> 
>> Am 13.02.2020 um 19:21 schrieb Haschart, Robert J (rh9ec) <rh...@virginia.edu>:


Re: StatelessScriptUpdateProcessorFactory causing OOM errors?

Posted by Jörn Franke <jo...@gmail.com>.
I had also issues with this factory when creating atomic updates inside it. They worked, but searchers were never closed and new ones were opened and stayed open, with all the issues related to that. Maybe one needs to look into that in more detail. However - it is a script in the end, so it could always be a bug in your script as well.

> Am 13.02.2020 um 19:21 schrieb Haschart, Robert J (rh9ec) <rh...@virginia.edu>:

Re: StatelessScriptUpdateProcessorFactory causing OOM errors?

Posted by "Haschart, Robert J (rh9ec)" <rh...@virginia.edu>.
Erick,

Sorry I didn't see this response; for some reason solr-user has stopped being delivered to my mailbox.

The script that adds a field based on the value(s) in some other field doesn't add a large number of different fields to the index.
The pool_f field only has a total of 11 different values, and except for some rare cases, any given record only has a single value in that field; those rare cases will have two values.

I had previously implemented the same functionality with a small jar file containing a customized version of TemplateUpdateProcessorFactory that could generate different field names, but since I needed another bit of functionality in the update chain, I decided to port the original functionality to a script, since the "development overhead" of adding a script is less than that of adding multiple additional custom UpdateProcessorFactory objects.

I had been running Solr with the memory flag "-m 8G", and it had been running fine with that setting for at least a year, even recently when the customized Java version of TemplateUpdateProcessorFactory was being invoked to perform essentially the same processing step.

However, when I tried to accomplish the same thing via javascript through StatelessScriptUpdateProcessorFactory and started a re-index, it would die after about 1 million records had been indexed.  And since it is merely my (massive) development machine, close to zero searches come through while the re-index is happening.

I've managed to work around the issue on my dev box by upping the memory for Solr to 16G, and haven't had an OOM since doing that, but I'm hesitant to push these changes to our AWS-hosted production instances, since running out of memory and terminating there would be more of an issue.

-Bob



________________________________
    From: Erick Erickson <er...@gmail.com>
    Subject: Re: StatelessScriptUpdateProcessorFactory causing OOM errors?
    Date: Thu, 6 Feb 2020 09:18:41 -0500
