Posted to solr-user@lucene.apache.org by Rajdeep Sahoo <ra...@gmail.com> on 2020/01/19 17:25:02 UTC

Solr 7.7 heap space is getting full

We are using Solr 7.7. RAM size is 24 GB and the allocated heap is 12 GB. We
have completed indexing; after starting the server, the heap space suddenly
gets full.
   We added GC params, but it is still not working. The JDK version is 1.8.
Please find the GC params below:
-XX:NewRatio=2
-XX:SurvivorRatio=3
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+CMSScavengeBeforeRemark \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:PretenureSizeThreshold=512m \
-XX:CMSFullGCsBeforeCompaction=1 \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=70 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
-XX:+UseLargePages \
-XX:+AggressiveOpts \
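
For reference, flags like these normally live in solr.in.sh. A minimal sketch of
how that might look, with JDK 8 GC logging switched on so the heap actually
retained after each collection can be inspected later (the stock solr.in.sh
variables are assumed, and the log path and the 12g figure are only examples):

SOLR_HEAP="12g"
GC_TUNE=" \
  -XX:NewRatio=2 \
  -XX:SurvivorRatio=3 \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+ParallelRefProcEnabled \
"
# JDK 8 style GC logging, readable with GCViewer or similar tools
GC_LOG_OPTS="-Xloggc:server/logs/solr_gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps"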

Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Please reply anyone

On Sun, 19 Jan, 2020, 10:55 PM Rajdeep Sahoo, <ra...@gmail.com>
wrote:

> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb. We
> have completed indexing after starting the server suddenly heap space is
> getting full.
>    Added gc params  , still not working and jdk version is 1.8 .
> Please find the below gc  params
> -XX:NewRatio=2
> -XX:SurvivorRatio=3
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+CMSScavengeBeforeRemark \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:PretenureSizeThreshold=512m \
> -XX:CMSFullGCsBeforeCompaction=1 \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=70 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled
> -XX:+ParallelRefProcEnabled
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
>

Re: Solr 7.7 heap space is getting full

Posted by Erick Erickson <er...@gmail.com>.
Walter’s comment (that I’ve seen too BTW) is something
to pursue if (and only if) you have proof that Solr is spinning
up thousands of threads. Do you have any proof of that?

Having several hundred threads running is quite common BTW.

Attach jconsole or take a thread dump and it’ll be obvious.

However, having thousands of threads is fairly rare in my experience.

You simply must take a heap dump and analyze it to have any hope
of identifying exactly what the issue is. It’s quite possible that you
simply need more memory. It’s possible you don’t have docValues
enabled for all the fields you facet, group, sort, or use function
queries with. It’s possible that…. 
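
A minimal sketch of the commands involved (assuming a single local Solr JVM,
JDK 8 tooling on the PATH, and that Solr was started via the stock scripts, so
the pgrep pattern below is an assumption):

# find the Solr JVM's process id
SOLR_PID=$(pgrep -f start.jar | head -1)

# thread dump: each line starting with a quote is one thread
jstack "$SOLR_PID" > threads.txt
grep -c '^"' threads.txt

# heap dump of live objects, for analysis in Eclipse MAT or a similar tool
jmap -dump:live,format=b,file=solr-heap.hprof "$SOLR_PID"

# or watch memory and threads interactively
jconsole "$SOLR_PID"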

Best,
Erick

> On Feb 6, 2020, at 9:07 PM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> If we reduce the no of threads then is it going to help.
>  Is there any other way to debug this.
> 
> 
> On Mon, 3 Feb, 2020, 2:52 AM Walter Underwood, <wu...@wunderwood.org>
> wrote:
> 
>> The only time I’ve ever had an OOM is when Solr gets a huge load
>> spike and fires up 2000 threads. Then it runs out of space for stacks.
>> 
>> I’ve never run anything other than an 8GB heap, starting with Solr 1.3
>> at Netflix.
>> 
>> Agreed about filter cache, though I’d expect heavy use of that to most
>> often be part of a faceted search system.
>> 
>> wunder
>> Walter Underwood
>> wunder@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Feb 2, 2020, at 12:36 PM, Erick Erickson <er...@gmail.com>
>> wrote:
>>> 
>>> Mostly I was reacting to the statement that the number
>>> of docs increased by over 4x and then there were
>>> memory problems.
>>> 
>>> Hmmm, that said, what does “heap space is getting full”
>>> mean anyway? If you’re hitting OOMs, that’s one thing. If
>>> you’re measuring the amount of heap consumed and
>>> noticing that it fills up, that’s totally normal. Java will
>>> collect garbage when it needs to. If you attach something
>>> like jconsole to Solr you’ll see memory grow and shrink
>>> quite regularly. Take a look at your garbage collection logs
>>> with something like GCViewer to see how much memory is
>>> still required after a GC cycle. If that number is reasonable
>>> then there’s no problem.
>>> 
>>> Walter:
>>> 
>>> Well, the expectation that one can keep adding docs without
>>> considering heap size is simply naive. The filterCache
>>> for instance grows linearly with the number of documents
>>> (OK, if it it stores the full bitset). Real Time Get requires
>>> on-heap structures to keep track of changed docs between
>>> commits. Etc.
>>> 
>>> The OP hasn’t even told us whether docValues are enabled
>>> appropriately, which if not set for fields needing it will also
>>> grow heap requirements linearly with the number of docs.
>>> 
>>> I’ll totally agree that the relationship between the size of
>>> the index on disk and heap is iffy at best. But if more heap is
>>> _not_ needed for bigger indexes then we’d never hit OOMs
>>> no matter how many docs we put in 4G.
>>> 
>>> Best,
>>> Erick
>>> 
>>> 
>>> 
>>>> On Feb 2, 2020, at 11:18 AM, Walter Underwood <wu...@wunderwood.org>
>> wrote:
>>>> 
>>>> We CANNOT diagnose anything until you tell us the error message!
>>>> 
>>>> Erick, I strongly disagree that more heap is needed for bigger indexes.
>>>> Except for faceting, Lucene was designed to stream index data and
>>>> work regardless of the size of the index. Indexing is in RAM buffer
>>>> sized chunks, so large updates also don’t need extra RAM.
>>>> 
>>>> wunder
>>>> Walter Underwood
>>>> wunder@wunderwood.org
>>>> http://observer.wunderwood.org/  (my blog)
>>>> 
>>>>> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <ra...@gmail.com>
>> wrote:
>>>>> 
>>>>> We have allocated 16 gb of heap space  out of 24 g.
>>>>> There are 3 solr cores here, for one core when the no of documents are
>>>>> getting increased i.e. around 4.5 lakhs,then this scenario is
>> happening.
>>>>> 
>>>>> 
>>>>> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
>>>>> wrote:
>>>>> 
>>>>>> Allocate more heap and possibly add more RAM.
>>>>>> 
>>>>>> What are you expectations? You can't continue to
>>>>>> add documents to your Solr instance without regard to
>>>>>> how much heap you’ve allocated. You’ve put over 4x
>>>>>> the number of docs on the node. There’s no magic here.
>>>>>> You can’t continue to add docs to a Solr instance without
>>>>>> increasing the heap at some point.
>>>>>> 
>>>>>> And as far as I know, you’ve never told us how much heap yo
>>>>>> _are_ allocating. The default for Java processes is 512M, which
>>>>>> is quite small. so perhaps it’s a simple matter of starting Solr
>>>>>> with the -XmX parameter set to something larger.
>>>>>> 
>>>>>> Best,
>>>>>> Erick
>>>>>> 
>>>>>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <
>> rajdeepsahoo2012@gmail.com>
>>>>>> wrote:
>>>>>>> 
>>>>>>> What can we do in this scenario as the solr master node is going
>> down and
>>>>>>> the indexing is failing.
>>>>>>> Please provide some workaround for this issue.
>>>>>>> 
>>>>>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <
>> wunder@wunderwood.org>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> What message do you get about the heap space.
>>>>>>>> 
>>>>>>>> It is completely normal for Java to use all of heap before running a
>>>>>> major
>>>>>>>> GC. That
>>>>>>>> is how the JVM works.
>>>>>>>> 
>>>>>>>> wunder
>>>>>>>> Walter Underwood
>>>>>>>> wunder@wunderwood.org
>>>>>>>> http://observer.wunderwood.org/  (my blog)
>>>>>>>> 
>>>>>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <
>> rajdeepsahoo2012@gmail.com>
>>>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>> Please reply anyone
>>>>>>>>> 
>>>>>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>>>>>>>> rajdeepsahoo2012@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> This is happening when the no of indexed document count is
>> increasing.
>>>>>>>>>> With 1 million docs it's working fine but when it's crossing 4.5
>>>>>>>>>> million it's heap space is getting full.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>>>>>>>> michael@michaelgibney.net>
>>>>>>>>>> wrote:
>>>>>>>>>> 
>>>>>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ...
>> does
>>>>>>>>>>> this mean that some variant of this configuration was working
>> for you
>>>>>>>>>>> at some point, or just that the failure happens quickly?
>>>>>>>>>>> 
>>>>>>>>>>> If heap space and faceting are indeed the bottleneck, you might
>> make
>>>>>>>>>>> sure that you have docValues enabled for your facet field
>> fieldTypes,
>>>>>>>>>>> and perhaps set uninvertible=false.
>>>>>>>>>>> 
>>>>>>>>>>> I'm not seeing where large numbers of facets initially came from
>> in
>>>>>>>>>>> this thread? But on that topic this is perhaps relevant,
>> regarding
>>>>>> the
>>>>>>>>>>> potential utility of a facet cache:
>>>>>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>>>>>>>> 
>>>>>>>>>>> Michael
>>>>>>>>>>> 
>>>>>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk>
>> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>>>>>>>> I  had a similar issue with a large number of facets. There is
>> no
>>>>>> way
>>>>>>>>>>>>> (At least I know) your can get an acceptable response time from
>>>>>>>>>>>>> search engine with high number of facets.
>>>>>>>>>>>> 
>>>>>>>>>>>> Just for the record then it is doable under specific
>> circumstances
>>>>>>>>>>>> (static single-shard index, only String fields, Solr 4 with
>> patch,
>>>>>>>>>>>> fixed list of facet fields):
>>>>>>>>>>>> 
>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>>>>>>>> 
>>>>>>>>>>>> More usable for the current case would be to play with
>> facet.threads
>>>>>>>>>>>> and throw hardware with many CPU-cores after the problem.
>>>>>>>>>>>> 
>>>>>>>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>> 
>>> 
>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
If we reduce the number of threads, is it going to help?
  Is there any other way to debug this?


On Mon, 3 Feb, 2020, 2:52 AM Walter Underwood, <wu...@wunderwood.org>
wrote:

> The only time I’ve ever had an OOM is when Solr gets a huge load
> spike and fires up 2000 threads. Then it runs out of space for stacks.
>
> I’ve never run anything other than an 8GB heap, starting with Solr 1.3
> at Netflix.
>
> Agreed about filter cache, though I’d expect heavy use of that to most
> often be part of a faceted search system.
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Feb 2, 2020, at 12:36 PM, Erick Erickson <er...@gmail.com>
> wrote:
> >
> > Mostly I was reacting to the statement that the number
> > of docs increased by over 4x and then there were
> > memory problems.
> >
> > Hmmm, that said, what does “heap space is getting full”
> > mean anyway? If you’re hitting OOMs, that’s one thing. If
> > you’re measuring the amount of heap consumed and
> > noticing that it fills up, that’s totally normal. Java will
> > collect garbage when it needs to. If you attach something
> > like jconsole to Solr you’ll see memory grow and shrink
> > quite regularly. Take a look at your garbage collection logs
> > with something like GCViewer to see how much memory is
> > still required after a GC cycle. If that number is reasonable
> > then there’s no problem.
> >
> > Walter:
> >
> > Well, the expectation that one can keep adding docs without
> > considering heap size is simply naive. The filterCache
> > for instance grows linearly with the number of documents
> > (OK, if it it stores the full bitset). Real Time Get requires
> > on-heap structures to keep track of changed docs between
> > commits. Etc.
> >
> > The OP hasn’t even told us whether docValues are enabled
> > appropriately, which if not set for fields needing it will also
> > grow heap requirements linearly with the number of docs.
> >
> > I’ll totally agree that the relationship between the size of
> > the index on disk and heap is iffy at best. But if more heap is
> > _not_ needed for bigger indexes then we’d never hit OOMs
> > no matter how many docs we put in 4G.
> >
> > Best,
> > Erick
> >
> >
> >
> >> On Feb 2, 2020, at 11:18 AM, Walter Underwood <wu...@wunderwood.org>
> wrote:
> >>
> >> We CANNOT diagnose anything until you tell us the error message!
> >>
> >> Erick, I strongly disagree that more heap is needed for bigger indexes.
> >> Except for faceting, Lucene was designed to stream index data and
> >> work regardless of the size of the index. Indexing is in RAM buffer
> >> sized chunks, so large updates also don’t need extra RAM.
> >>
> >> wunder
> >> Walter Underwood
> >> wunder@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >>>
> >>> We have allocated 16 gb of heap space  out of 24 g.
> >>> There are 3 solr cores here, for one core when the no of documents are
> >>> getting increased i.e. around 4.5 lakhs,then this scenario is
> happening.
> >>>
> >>>
> >>> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
> >>> wrote:
> >>>
> >>>> Allocate more heap and possibly add more RAM.
> >>>>
> >>>> What are you expectations? You can't continue to
> >>>> add documents to your Solr instance without regard to
> >>>> how much heap you’ve allocated. You’ve put over 4x
> >>>> the number of docs on the node. There’s no magic here.
> >>>> You can’t continue to add docs to a Solr instance without
> >>>> increasing the heap at some point.
> >>>>
> >>>> And as far as I know, you’ve never told us how much heap yo
> >>>> _are_ allocating. The default for Java processes is 512M, which
> >>>> is quite small. so perhaps it’s a simple matter of starting Solr
> >>>> with the -XmX parameter set to something larger.
> >>>>
> >>>> Best,
> >>>> Erick
> >>>>
> >>>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <
> rajdeepsahoo2012@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> What can we do in this scenario as the solr master node is going
> down and
> >>>>> the indexing is failing.
> >>>>> Please provide some workaround for this issue.
> >>>>>
> >>>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <
> wunder@wunderwood.org>
> >>>>> wrote:
> >>>>>
> >>>>>> What message do you get about the heap space.
> >>>>>>
> >>>>>> It is completely normal for Java to use all of heap before running a
> >>>> major
> >>>>>> GC. That
> >>>>>> is how the JVM works.
> >>>>>>
> >>>>>> wunder
> >>>>>> Walter Underwood
> >>>>>> wunder@wunderwood.org
> >>>>>> http://observer.wunderwood.org/  (my blog)
> >>>>>>
> >>>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <
> rajdeepsahoo2012@gmail.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>> Please reply anyone
> >>>>>>>
> >>>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
> >>>>>> rajdeepsahoo2012@gmail.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> This is happening when the no of indexed document count is
> increasing.
> >>>>>>>> With 1 million docs it's working fine but when it's crossing 4.5
> >>>>>>>> million it's heap space is getting full.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
> >>>>>> michael@michaelgibney.net>
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ...
> does
> >>>>>>>>> this mean that some variant of this configuration was working
> for you
> >>>>>>>>> at some point, or just that the failure happens quickly?
> >>>>>>>>>
> >>>>>>>>> If heap space and faceting are indeed the bottleneck, you might
> make
> >>>>>>>>> sure that you have docValues enabled for your facet field
> fieldTypes,
> >>>>>>>>> and perhaps set uninvertible=false.
> >>>>>>>>>
> >>>>>>>>> I'm not seeing where large numbers of facets initially came from
> in
> >>>>>>>>> this thread? But on that topic this is perhaps relevant,
> regarding
> >>>> the
> >>>>>>>>> potential utility of a facet cache:
> >>>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
> >>>>>>>>>
> >>>>>>>>> Michael
> >>>>>>>>>
> >>>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk>
> wrote:
> >>>>>>>>>>
> >>>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> >>>>>>>>>>> I  had a similar issue with a large number of facets. There is
> no
> >>>> way
> >>>>>>>>>>> (At least I know) your can get an acceptable response time from
> >>>>>>>>>>> search engine with high number of facets.
> >>>>>>>>>>
> >>>>>>>>>> Just for the record then it is doable under specific
> circumstances
> >>>>>>>>>> (static single-shard index, only String fields, Solr 4 with
> patch,
> >>>>>>>>>> fixed list of facet fields):
> >>>>>>>>>>
> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
> >>>>>>>>>>
> >>>>>>>>>> More usable for the current case would be to play with
> facet.threads
> >>>>>>>>>> and throw hardware with many CPU-cores after the problem.
> >>>>>>>>>>
> >>>>>>>>>> - Toke Eskildsen, Royal Danish Library
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>
> >
>
>

Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
The only time I’ve ever had an OOM is when Solr gets a huge load
spike and fires up 2000 threads. Then it runs out of space for stacks.

I’ve never run anything other than an 8GB heap, starting with Solr 1.3
at Netflix.

Agreed about filter cache, though I’d expect heavy use of that to most
often be part of a faceted search system.
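
A rough way to check whether thread stacks are eating the memory, as a sketch
(the pgrep pattern is an assumption, and 256k is only an example value, not a
recommendation):

SOLR_PID=$(pgrep -f start.jar | head -1)

# number of native threads in the Solr JVM
ps -o nlwp= -p "$SOLR_PID"

# each thread gets a native stack outside the Java heap; at the 64-bit
# JDK 8 default of roughly 1 MB per stack, 2000 threads is about 2 GB.
# The per-thread stack size is controlled with -Xss, e.g. -Xss256k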

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 2, 2020, at 12:36 PM, Erick Erickson <er...@gmail.com> wrote:
> 
> Mostly I was reacting to the statement that the number 
> of docs increased by over 4x and then there were
> memory problems. 
> 
> Hmmm, that said, what does “heap space is getting full”
> mean anyway? If you’re hitting OOMs, that’s one thing. If
> you’re measuring the amount of heap consumed and
> noticing that it fills up, that’s totally normal. Java will 
> collect garbage when it needs to. If you attach something
> like jconsole to Solr you’ll see memory grow and shrink
> quite regularly. Take a look at your garbage collection logs
> with something like GCViewer to see how much memory is
> still required after a GC cycle. If that number is reasonable
> then there’s no problem.
> 
> Walter:
> 
> Well, the expectation that one can keep adding docs without
> considering heap size is simply naive. The filterCache
> for instance grows linearly with the number of documents
> (OK, if it it stores the full bitset). Real Time Get requires 
> on-heap structures to keep track of changed docs between
> commits. Etc. 
> 
> The OP hasn’t even told us whether docValues are enabled
> appropriately, which if not set for fields needing it will also
> grow heap requirements linearly with the number of docs.
> 
> I’ll totally agree that the relationship between the size of
> the index on disk and heap is iffy at best. But if more heap is
> _not_ needed for bigger indexes then we’d never hit OOMs
> no matter how many docs we put in 4G.
> 
> Best,
> Erick
> 
> 
> 
>> On Feb 2, 2020, at 11:18 AM, Walter Underwood <wu...@wunderwood.org> wrote:
>> 
>> We CANNOT diagnose anything until you tell us the error message!
>> 
>> Erick, I strongly disagree that more heap is needed for bigger indexes.
>> Except for faceting, Lucene was designed to stream index data and
>> work regardless of the size of the index. Indexing is in RAM buffer
>> sized chunks, so large updates also don’t need extra RAM.
>> 
>> wunder
>> Walter Underwood
>> wunder@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
>>> 
>>> We have allocated 16 gb of heap space  out of 24 g.
>>> There are 3 solr cores here, for one core when the no of documents are
>>> getting increased i.e. around 4.5 lakhs,then this scenario is happening.
>>> 
>>> 
>>> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
>>> wrote:
>>> 
>>>> Allocate more heap and possibly add more RAM.
>>>> 
>>>> What are you expectations? You can't continue to
>>>> add documents to your Solr instance without regard to
>>>> how much heap you’ve allocated. You’ve put over 4x
>>>> the number of docs on the node. There’s no magic here.
>>>> You can’t continue to add docs to a Solr instance without
>>>> increasing the heap at some point.
>>>> 
>>>> And as far as I know, you’ve never told us how much heap yo
>>>> _are_ allocating. The default for Java processes is 512M, which
>>>> is quite small. so perhaps it’s a simple matter of starting Solr
>>>> with the -XmX parameter set to something larger.
>>>> 
>>>> Best,
>>>> Erick
>>>> 
>>>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <ra...@gmail.com>
>>>> wrote:
>>>>> 
>>>>> What can we do in this scenario as the solr master node is going down and
>>>>> the indexing is failing.
>>>>> Please provide some workaround for this issue.
>>>>> 
>>>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
>>>>> wrote:
>>>>> 
>>>>>> What message do you get about the heap space.
>>>>>> 
>>>>>> It is completely normal for Java to use all of heap before running a
>>>> major
>>>>>> GC. That
>>>>>> is how the JVM works.
>>>>>> 
>>>>>> wunder
>>>>>> Walter Underwood
>>>>>> wunder@wunderwood.org
>>>>>> http://observer.wunderwood.org/  (my blog)
>>>>>> 
>>>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
>>>>>> wrote:
>>>>>>> 
>>>>>>> Please reply anyone
>>>>>>> 
>>>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>>>>>> rajdeepsahoo2012@gmail.com>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> This is happening when the no of indexed document count is increasing.
>>>>>>>> With 1 million docs it's working fine but when it's crossing 4.5
>>>>>>>> million it's heap space is getting full.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>>>>>> michael@michaelgibney.net>
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>>>>>>>> this mean that some variant of this configuration was working for you
>>>>>>>>> at some point, or just that the failure happens quickly?
>>>>>>>>> 
>>>>>>>>> If heap space and faceting are indeed the bottleneck, you might make
>>>>>>>>> sure that you have docValues enabled for your facet field fieldTypes,
>>>>>>>>> and perhaps set uninvertible=false.
>>>>>>>>> 
>>>>>>>>> I'm not seeing where large numbers of facets initially came from in
>>>>>>>>> this thread? But on that topic this is perhaps relevant, regarding
>>>> the
>>>>>>>>> potential utility of a facet cache:
>>>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>>>>>> 
>>>>>>>>> Michael
>>>>>>>>> 
>>>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>>>>>>>>>> 
>>>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>>>>>> I  had a similar issue with a large number of facets. There is no
>>>> way
>>>>>>>>>>> (At least I know) your can get an acceptable response time from
>>>>>>>>>>> search engine with high number of facets.
>>>>>>>>>> 
>>>>>>>>>> Just for the record then it is doable under specific circumstances
>>>>>>>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>>>>>>>> fixed list of facet fields):
>>>>>>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>>>>>> 
>>>>>>>>>> More usable for the current case would be to play with facet.threads
>>>>>>>>>> and throw hardware with many CPU-cores after the problem.
>>>>>>>>>> 
>>>>>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>> 
>>>> 
>> 
> 


Re: Solr 7.7 heap space is getting full

Posted by Erick Erickson <er...@gmail.com>.
Mostly I was reacting to the statement that the number 
of docs increased by over 4x and then there were
memory problems. 

Hmmm, that said, what does “heap space is getting full”
mean anyway? If you’re hitting OOMs, that’s one thing. If
you’re measuring the amount of heap consumed and
noticing that it fills up, that’s totally normal. Java will 
collect garbage when it needs to. If you attach something
like jconsole to Solr you’ll see memory grow and shrink
quite regularly. Take a look at your garbage collection logs
with something like GCViewer to see how much memory is
still required after a GC cycle. If that number is reasonable
then there’s no problem.

Walter:

Well, the expectation that one can keep adding docs without
considering heap size is simply naive. The filterCache
for instance grows linearly with the number of documents
(OK, if it stores the full bitset). Real Time Get requires
on-heap structures to keep track of changed docs between
commits. Etc. 

The OP hasn’t even told us whether docValues are enabled
appropriately, which if not set for fields needing it will also
grow heap requirements linearly with the number of docs.

I’ll totally agree that the relationship between the size of
the index on disk and heap is iffy at best. But if more heap is
_not_ needed for bigger indexes then we’d never hit OOMs
no matter how many docs we put in 4G.
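
A sketch of how to read the "live" heap without a full dump (assumes JDK 8
tools on the PATH and the usual start.jar process; the 5000 ms interval is
just an example):

SOLR_PID=$(pgrep -f start.jar | head -1)

# old-gen occupancy right after a full GC approximates the memory
# Solr actually needs; sample every 5 seconds
jstat -gcutil "$SOLR_PID" 5000

# or force one full GC and then look at heap usage
jcmd "$SOLR_PID" GC.run
jmap -heap "$SOLR_PID"

# rough filterCache math: one full-bitset entry is about maxDoc/8 bytes,
# so that cache (and hence heap demand) grows linearly with document count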

Best,
Erick



> On Feb 2, 2020, at 11:18 AM, Walter Underwood <wu...@wunderwood.org> wrote:
> 
> We CANNOT diagnose anything until you tell us the error message!
> 
> Erick, I strongly disagree that more heap is needed for bigger indexes.
> Except for faceting, Lucene was designed to stream index data and
> work regardless of the size of the index. Indexing is in RAM buffer
> sized chunks, so large updates also don’t need extra RAM.
> 
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
> 
>> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
>> 
>> We have allocated 16 gb of heap space  out of 24 g.
>>  There are 3 solr cores here, for one core when the no of documents are
>> getting increased i.e. around 4.5 lakhs,then this scenario is happening.
>> 
>> 
>> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
>> wrote:
>> 
>>> Allocate more heap and possibly add more RAM.
>>> 
>>> What are you expectations? You can't continue to
>>> add documents to your Solr instance without regard to
>>> how much heap you’ve allocated. You’ve put over 4x
>>> the number of docs on the node. There’s no magic here.
>>> You can’t continue to add docs to a Solr instance without
>>> increasing the heap at some point.
>>> 
>>> And as far as I know, you’ve never told us how much heap yo
>>> _are_ allocating. The default for Java processes is 512M, which
>>> is quite small. so perhaps it’s a simple matter of starting Solr
>>> with the -XmX parameter set to something larger.
>>> 
>>> Best,
>>> Erick
>>> 
>>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <ra...@gmail.com>
>>> wrote:
>>>> 
>>>> What can we do in this scenario as the solr master node is going down and
>>>> the indexing is failing.
>>>> Please provide some workaround for this issue.
>>>> 
>>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
>>>> wrote:
>>>> 
>>>>> What message do you get about the heap space.
>>>>> 
>>>>> It is completely normal for Java to use all of heap before running a
>>> major
>>>>> GC. That
>>>>> is how the JVM works.
>>>>> 
>>>>> wunder
>>>>> Walter Underwood
>>>>> wunder@wunderwood.org
>>>>> http://observer.wunderwood.org/  (my blog)
>>>>> 
>>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
>>>>> wrote:
>>>>>> 
>>>>>> Please reply anyone
>>>>>> 
>>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>>>>> rajdeepsahoo2012@gmail.com>
>>>>>> wrote:
>>>>>> 
>>>>>>> This is happening when the no of indexed document count is increasing.
>>>>>>> With 1 million docs it's working fine but when it's crossing 4.5
>>>>>>> million it's heap space is getting full.
>>>>>>> 
>>>>>>> 
>>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>>>>> michael@michaelgibney.net>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>>>>>>> this mean that some variant of this configuration was working for you
>>>>>>>> at some point, or just that the failure happens quickly?
>>>>>>>> 
>>>>>>>> If heap space and faceting are indeed the bottleneck, you might make
>>>>>>>> sure that you have docValues enabled for your facet field fieldTypes,
>>>>>>>> and perhaps set uninvertible=false.
>>>>>>>> 
>>>>>>>> I'm not seeing where large numbers of facets initially came from in
>>>>>>>> this thread? But on that topic this is perhaps relevant, regarding
>>> the
>>>>>>>> potential utility of a facet cache:
>>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>>>>> 
>>>>>>>> Michael
>>>>>>>> 
>>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>>>>>>>>> 
>>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>>>>> I  had a similar issue with a large number of facets. There is no
>>> way
>>>>>>>>>> (At least I know) your can get an acceptable response time from
>>>>>>>>>> search engine with high number of facets.
>>>>>>>>> 
>>>>>>>>> Just for the record then it is doable under specific circumstances
>>>>>>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>>>>>>> fixed list of facet fields):
>>>>>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>>>>> 
>>>>>>>>> More usable for the current case would be to play with facet.threads
>>>>>>>>> and throw hardware with many CPU-cores after the problem.
>>>>>>>>> 
>>>>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
> 


Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
We CANNOT diagnose anything until you tell us the error message!

Erick, I strongly disagree that more heap is needed for bigger indexes.
Except for faceting, Lucene was designed to stream index data and
work regardless of the size of the index. Indexing is in RAM buffer
sized chunks, so large updates also don’t need extra RAM.
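
For what it's worth, a sketch of where the actual error usually shows up in a
default Solr 7 install (paths and file names are assumptions; a service
install typically uses /var/solr/logs instead of server/logs):

# the main log and the GC log
tail -n 200 server/logs/solr.log
ls server/logs/solr_gc.log*

# on Linux the stock start script registers an OnOutOfMemoryError hook
# (bin/oom_solr.sh) that kills the JVM and leaves a marker log behind
ls server/logs/solr_oom_killer-*.log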

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> We have allocated 16 gb of heap space  out of 24 g.
>   There are 3 solr cores here, for one core when the no of documents are
> getting increased i.e. around 4.5 lakhs,then this scenario is happening.
> 
> 
> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
> wrote:
> 
>> Allocate more heap and possibly add more RAM.
>> 
>> What are you expectations? You can't continue to
>> add documents to your Solr instance without regard to
>> how much heap you’ve allocated. You’ve put over 4x
>> the number of docs on the node. There’s no magic here.
>> You can’t continue to add docs to a Solr instance without
>> increasing the heap at some point.
>> 
>> And as far as I know, you’ve never told us how much heap yo
>> _are_ allocating. The default for Java processes is 512M, which
>> is quite small. so perhaps it’s a simple matter of starting Solr
>> with the -XmX parameter set to something larger.
>> 
>> Best,
>> Erick
>> 
>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <ra...@gmail.com>
>> wrote:
>>> 
>>> What can we do in this scenario as the solr master node is going down and
>>> the indexing is failing.
>>> Please provide some workaround for this issue.
>>> 
>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
>>> wrote:
>>> 
>>>> What message do you get about the heap space.
>>>> 
>>>> It is completely normal for Java to use all of heap before running a
>> major
>>>> GC. That
>>>> is how the JVM works.
>>>> 
>>>> wunder
>>>> Walter Underwood
>>>> wunder@wunderwood.org
>>>> http://observer.wunderwood.org/  (my blog)
>>>> 
>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
>>>> wrote:
>>>>> 
>>>>> Please reply anyone
>>>>> 
>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>>>> rajdeepsahoo2012@gmail.com>
>>>>> wrote:
>>>>> 
>>>>>> This is happening when the no of indexed document count is increasing.
>>>>>> With 1 million docs it's working fine but when it's crossing 4.5
>>>>>> million it's heap space is getting full.
>>>>>> 
>>>>>> 
>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>>>> michael@michaelgibney.net>
>>>>>> wrote:
>>>>>> 
>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>>>>>> this mean that some variant of this configuration was working for you
>>>>>>> at some point, or just that the failure happens quickly?
>>>>>>> 
>>>>>>> If heap space and faceting are indeed the bottleneck, you might make
>>>>>>> sure that you have docValues enabled for your facet field fieldTypes,
>>>>>>> and perhaps set uninvertible=false.
>>>>>>> 
>>>>>>> I'm not seeing where large numbers of facets initially came from in
>>>>>>> this thread? But on that topic this is perhaps relevant, regarding
>> the
>>>>>>> potential utility of a facet cache:
>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>>>> 
>>>>>>> Michael
>>>>>>> 
>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>>>>>>>> 
>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>>>> I  had a similar issue with a large number of facets. There is no
>> way
>>>>>>>>> (At least I know) your can get an acceptable response time from
>>>>>>>>> search engine with high number of facets.
>>>>>>>> 
>>>>>>>> Just for the record then it is doable under specific circumstances
>>>>>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>>>>>> fixed list of facet fields):
>>>>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>>>> 
>>>>>>>> More usable for the current case would be to play with facet.threads
>>>>>>>> and throw hardware with many CPU-cores after the problem.
>>>>>>>> 
>>>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>> 
>>>> 
>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
We have allocated 16 GB of heap space out of 24 GB.
   There are 3 Solr cores here; for one core, when the number of documents
increases to around 4.5 lakh (450,000), this scenario is happening.


On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <er...@gmail.com>
wrote:

> Allocate more heap and possibly add more RAM.
>
> What are you expectations? You can't continue to
> add documents to your Solr instance without regard to
> how much heap you’ve allocated. You’ve put over 4x
> the number of docs on the node. There’s no magic here.
> You can’t continue to add docs to a Solr instance without
> increasing the heap at some point.
>
> And as far as I know, you’ve never told us how much heap yo
>  _are_ allocating. The default for Java processes is 512M, which
> is quite small. so perhaps it’s a simple matter of starting Solr
> with the -XmX parameter set to something larger.
>
> Best,
> Erick
>
> > On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >
> > What can we do in this scenario as the solr master node is going down and
> > the indexing is failing.
> > Please provide some workaround for this issue.
> >
> > On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
> > wrote:
> >
> >> What message do you get about the heap space.
> >>
> >> It is completely normal for Java to use all of heap before running a
> major
> >> GC. That
> >> is how the JVM works.
> >>
> >> wunder
> >> Walter Underwood
> >> wunder@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
> >> wrote:
> >>>
> >>> Please reply anyone
> >>>
> >>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
> >> rajdeepsahoo2012@gmail.com>
> >>> wrote:
> >>>
> >>>> This is happening when the no of indexed document count is increasing.
> >>>>  With 1 million docs it's working fine but when it's crossing 4.5
> >>>> million it's heap space is getting full.
> >>>>
> >>>>
> >>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
> >> michael@michaelgibney.net>
> >>>> wrote:
> >>>>
> >>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
> >>>>> this mean that some variant of this configuration was working for you
> >>>>> at some point, or just that the failure happens quickly?
> >>>>>
> >>>>> If heap space and faceting are indeed the bottleneck, you might make
> >>>>> sure that you have docValues enabled for your facet field fieldTypes,
> >>>>> and perhaps set uninvertible=false.
> >>>>>
> >>>>> I'm not seeing where large numbers of facets initially came from in
> >>>>> this thread? But on that topic this is perhaps relevant, regarding
> the
> >>>>> potential utility of a facet cache:
> >>>>> https://issues.apache.org/jira/browse/SOLR-13807
> >>>>>
> >>>>> Michael
> >>>>>
> >>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
> >>>>>>
> >>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> >>>>>>> I  had a similar issue with a large number of facets. There is no
> way
> >>>>>>> (At least I know) your can get an acceptable response time from
> >>>>>>> search engine with high number of facets.
> >>>>>>
> >>>>>> Just for the record then it is doable under specific circumstances
> >>>>>> (static single-shard index, only String fields, Solr 4 with patch,
> >>>>>> fixed list of facet fields):
> >>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
> >>>>>>
> >>>>>> More usable for the current case would be to play with facet.threads
> >>>>>> and throw hardware with many CPU-cores after the problem.
> >>>>>>
> >>>>>> - Toke Eskildsen, Royal Danish Library
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>
> >>
>
>

Re: Solr 7.7 heap space is getting full

Posted by Erick Erickson <er...@gmail.com>.
Allocate more heap and possibly add more RAM. 

What are your expectations? You can't continue to
add documents to your Solr instance without regard to
how much heap you’ve allocated. You’ve put over 4x 
the number of docs on the node. There’s no magic here. 
You can’t continue to add docs to a Solr instance without
increasing the heap at some point.

And as far as I know, you’ve never told us how much heap you
 _are_ allocating. The default for Java processes is 512M, which
is quite small, so perhaps it’s a simple matter of starting Solr
with the -Xmx parameter set to something larger.
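
For example, a minimal sketch of setting the heap explicitly (the 16g value is
only an illustration):

# one-off, on the command line
bin/solr start -m 16g        # sets both -Xms and -Xmx to 16g

# or permanently, in solr.in.sh
SOLR_HEAP="16g"

# equivalent raw JVM flags: -Xms16g -Xmx16g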

Best,
Erick

> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> What can we do in this scenario as the solr master node is going down and
> the indexing is failing.
> Please provide some workaround for this issue.
> 
> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
> wrote:
> 
>> What message do you get about the heap space.
>> 
>> It is completely normal for Java to use all of heap before running a major
>> GC. That
>> is how the JVM works.
>> 
>> wunder
>> Walter Underwood
>> wunder@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
>> wrote:
>>> 
>>> Please reply anyone
>>> 
>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
>> rajdeepsahoo2012@gmail.com>
>>> wrote:
>>> 
>>>> This is happening when the no of indexed document count is increasing.
>>>>  With 1 million docs it's working fine but when it's crossing 4.5
>>>> million it's heap space is getting full.
>>>> 
>>>> 
>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
>> michael@michaelgibney.net>
>>>> wrote:
>>>> 
>>>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>>>> this mean that some variant of this configuration was working for you
>>>>> at some point, or just that the failure happens quickly?
>>>>> 
>>>>> If heap space and faceting are indeed the bottleneck, you might make
>>>>> sure that you have docValues enabled for your facet field fieldTypes,
>>>>> and perhaps set uninvertible=false.
>>>>> 
>>>>> I'm not seeing where large numbers of facets initially came from in
>>>>> this thread? But on that topic this is perhaps relevant, regarding the
>>>>> potential utility of a facet cache:
>>>>> https://issues.apache.org/jira/browse/SOLR-13807
>>>>> 
>>>>> Michael
>>>>> 
>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>>>>>> 
>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>>>> I  had a similar issue with a large number of facets. There is no way
>>>>>>> (At least I know) your can get an acceptable response time from
>>>>>>> search engine with high number of facets.
>>>>>> 
>>>>>> Just for the record then it is doable under specific circumstances
>>>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>>>> fixed list of facet fields):
>>>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>>>> 
>>>>>> More usable for the current case would be to play with facet.threads
>>>>>> and throw hardware with many CPU-cores after the problem.
>>>>>> 
>>>>>> - Toke Eskildsen, Royal Danish Library
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
What can we do in this scenario, as the Solr master node is going down and
the indexing is failing?
 Please provide some workaround for this issue.

On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <wu...@wunderwood.org>
wrote:

> What message do you get about the heap space.
>
> It is completely normal for Java to use all of heap before running a major
> GC. That
> is how the JVM works.
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >
> > Please reply anyone
> >
> > On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
> rajdeepsahoo2012@gmail.com>
> > wrote:
> >
> >> This is happening when the no of indexed document count is increasing.
> >>   With 1 million docs it's working fine but when it's crossing 4.5
> >> million it's heap space is getting full.
> >>
> >>
> >> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
> michael@michaelgibney.net>
> >> wrote:
> >>
> >>> Rajdeep, you say that "suddenly" heap space is getting full ... does
> >>> this mean that some variant of this configuration was working for you
> >>> at some point, or just that the failure happens quickly?
> >>>
> >>> If heap space and faceting are indeed the bottleneck, you might make
> >>> sure that you have docValues enabled for your facet field fieldTypes,
> >>> and perhaps set uninvertible=false.
> >>>
> >>> I'm not seeing where large numbers of facets initially came from in
> >>> this thread? But on that topic this is perhaps relevant, regarding the
> >>> potential utility of a facet cache:
> >>> https://issues.apache.org/jira/browse/SOLR-13807
> >>>
> >>> Michael
> >>>
> >>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
> >>>>
> >>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> >>>>> I  had a similar issue with a large number of facets. There is no way
> >>>>> (At least I know) your can get an acceptable response time from
> >>>>> search engine with high number of facets.
> >>>>
> >>>> Just for the record then it is doable under specific circumstances
> >>>> (static single-shard index, only String fields, Solr 4 with patch,
> >>>> fixed list of facet fields):
> >>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
> >>>>
> >>>> More usable for the current case would be to play with facet.threads
> >>>> and throw hardware with many CPU-cores after the problem.
> >>>>
> >>>> - Toke Eskildsen, Royal Danish Library
> >>>>
> >>>>
> >>>
> >>
>
>

Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
What message do you get about the heap space?

It is completely normal for Java to use all of the heap before running a
major GC. That is how the JVM works.

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> Please reply anyone
> 
> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <ra...@gmail.com>
> wrote:
> 
>> This is happening when the no of indexed document count is increasing.
>>   With 1 million docs it's working fine but when it's crossing 4.5
>> million it's heap space is getting full.
>> 
>> 
>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <mi...@michaelgibney.net>
>> wrote:
>> 
>>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>>> this mean that some variant of this configuration was working for you
>>> at some point, or just that the failure happens quickly?
>>> 
>>> If heap space and faceting are indeed the bottleneck, you might make
>>> sure that you have docValues enabled for your facet field fieldTypes,
>>> and perhaps set uninvertible=false.
>>> 
>>> I'm not seeing where large numbers of facets initially came from in
>>> this thread? But on that topic this is perhaps relevant, regarding the
>>> potential utility of a facet cache:
>>> https://issues.apache.org/jira/browse/SOLR-13807
>>> 
>>> Michael
>>> 
>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>>>> 
>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>>>>> I  had a similar issue with a large number of facets. There is no way
>>>>> (At least I know) your can get an acceptable response time from
>>>>> search engine with high number of facets.
>>>> 
>>>> Just for the record then it is doable under specific circumstances
>>>> (static single-shard index, only String fields, Solr 4 with patch,
>>>> fixed list of facet fields):
>>>> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>>>> 
>>>> More usable for the current case would be to play with facet.threads
>>>> and throw hardware with many CPU-cores after the problem.
>>>> 
>>>> - Toke Eskildsen, Royal Danish Library
>>>> 
>>>> 
>>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Please reply anyone

On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <ra...@gmail.com>
wrote:

> This is happening when the no of indexed document count is increasing.
>    With 1 million docs it's working fine but when it's crossing 4.5
> million it's heap space is getting full.
>
>
> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <mi...@michaelgibney.net>
> wrote:
>
>> Rajdeep, you say that "suddenly" heap space is getting full ... does
>> this mean that some variant of this configuration was working for you
>> at some point, or just that the failure happens quickly?
>>
>> If heap space and faceting are indeed the bottleneck, you might make
>> sure that you have docValues enabled for your facet field fieldTypes,
>> and perhaps set uninvertible=false.
>>
>> I'm not seeing where large numbers of facets initially came from in
>> this thread? But on that topic this is perhaps relevant, regarding the
>> potential utility of a facet cache:
>> https://issues.apache.org/jira/browse/SOLR-13807
>>
>> Michael
>>
>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>> >
>> > On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
>> > > I  had a similar issue with a large number of facets. There is no way
>> > > (At least I know) your can get an acceptable response time from
>> > > search engine with high number of facets.
>> >
>> > Just for the record then it is doable under specific circumstances
>> > (static single-shard index, only String fields, Solr 4 with patch,
>> > fixed list of facet fields):
>> > https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>> >
>> > More usable for the current case would be to play with facet.threads
>> > and throw hardware with many CPU-cores after the problem.
>> >
>> > - Toke Eskildsen, Royal Danish Library
>> >
>> >
>>
>

Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
This is happening as the number of indexed documents increases.
   With 1 million docs it's working fine, but when it crosses 4.5 million,
the heap space is getting full.


On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <mi...@michaelgibney.net>
wrote:

> Rajdeep, you say that "suddenly" heap space is getting full ... does
> this mean that some variant of this configuration was working for you
> at some point, or just that the failure happens quickly?
>
> If heap space and faceting are indeed the bottleneck, you might make
> sure that you have docValues enabled for your facet field fieldTypes,
> and perhaps set uninvertible=false.
>
> I'm not seeing where large numbers of facets initially came from in
> this thread? But on that topic this is perhaps relevant, regarding the
> potential utility of a facet cache:
> https://issues.apache.org/jira/browse/SOLR-13807
>
> Michael
>
> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
> >
> > On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> > > I  had a similar issue with a large number of facets. There is no way
> > > (At least I know) your can get an acceptable response time from
> > > search engine with high number of facets.
> >
> > Just for the record then it is doable under specific circumstances
> > (static single-shard index, only String fields, Solr 4 with patch,
> > fixed list of facet fields):
> > https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
> >
> > More usable for the current case would be to play with facet.threads
> > and throw hardware with many CPU-cores after the problem.
> >
> > - Toke Eskildsen, Royal Danish Library
> >
> >
>

Re: Solr 7.7 heap space is getting full

Posted by Michael Gibney <mi...@michaelgibney.net>.
Rajdeep, you say that "suddenly" heap space is getting full ... does
this mean that some variant of this configuration was working for you
at some point, or just that the failure happens quickly?

If heap space and faceting are indeed the bottleneck, you might make
sure that you have docValues enabled for your facet field fieldTypes,
and perhaps set uninvertible=false.
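
A sketch of doing that through the Schema API (the core name "mycore" and
field name "category" are made up, and enabling docValues on an existing
field requires a full reindex):

# inspect the current field definitions
curl "http://localhost:8983/solr/mycore/schema/fields"

# redefine a facet field with docValues on and uninverting disabled
curl -X POST -H 'Content-type:application/json' \
  "http://localhost:8983/solr/mycore/schema" -d '{
    "replace-field": {
      "name": "category",
      "type": "string",
      "indexed": true,
      "stored": false,
      "docValues": true,
      "uninvertible": false
    }
  }'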

I'm not seeing where large numbers of facets initially came from in
this thread? But on that topic this is perhaps relevant, regarding the
potential utility of a facet cache:
https://issues.apache.org/jira/browse/SOLR-13807

Michael

On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <to...@kb.dk> wrote:
>
> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> > I  had a similar issue with a large number of facets. There is no way
> > (At least I know) your can get an acceptable response time from
> > search engine with high number of facets.
>
> Just for the record then it is doable under specific circumstances
> (static single-shard index, only String fields, Solr 4 with patch,
> fixed list of facet fields):
> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
>
> More usable for the current case would be to play with facet.threads
> and throw hardware with many CPU-cores after the problem.
>
> - Toke Eskildsen, Royal Danish Library
>
>

Re: Solr 7.7 heap space is getting full

Posted by Toke Eskildsen <to...@kb.dk>.
On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> I  had a similar issue with a large number of facets. There is no way
> (At least I know) your can get an acceptable response time from
> search engine with high number of facets.

Just for the record, it is doable under specific circumstances
(static single-shard index, only String fields, Solr 4 with patch,
fixed list of facet fields):
https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/

More useful for the current case would be to play with facet.threads
and to throw hardware with many CPU cores at the problem.
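
For example (core and field names are made up; 8 threads is arbitrary):

curl -G "http://localhost:8983/solr/mycore/select" \
  --data-urlencode "q=*:*" \
  --data-urlencode "rows=0" \
  --data-urlencode "facet=true" \
  --data-urlencode "facet.field=brand" \
  --data-urlencode "facet.field=color" \
  --data-urlencode "facet.threads=8"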

- Toke Eskildsen, Royal Danish Library



Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
The problem is happening for one index only; for the other two indexes both
indexing and search are working fine.
  But for this one index, after indexing completes, the heap space is getting
full and Solr is not responding at all.
  The index sizes are almost the same, around 1 GB each.
  When loading the core by selecting it, the JVM heap space is becoming full.


On Mon, 20 Jan, 2020, 9:26 AM Rajdeep Sahoo, <ra...@gmail.com>
wrote:

> Anything else regarding gc tuning.
>
> On Mon, 20 Jan, 2020, 8:08 AM Rajdeep Sahoo, <ra...@gmail.com>
> wrote:
>
>> Initially we were getting the warning message as  ulimit is low i.e. 1024
>> so we changed it to 65000
>> Using ulimit -u 65000.
>>
>> Then the error was failed to reserve shared memory error =1
>>  Because of this we removed
>>    -xx : +uselargepages
>>
>> Now in console log it is showing
>> Could not find or load main class \
>>
>> And solr is not starting up
>>
>>
>> On Mon, 20 Jan, 2020, 7:50 AM Mehai, Lotfi, <lm...@ptfs.com.invalid>
>> wrote:
>>
>>> I  had a similar issue with a large number of facets. There is no way (At
>>> least I know) your can get an acceptable response time from search engine
>>> with high number of facets.
>>> The way we solved the issue was to cache shallow Facets data structure in
>>> the web services. Facts structures are refreshed periodically. We don't
>>> have near real time indexation requirements. Page response time is under
>>> 5s.
>>>
>>> Here the URLs for our worst use case:
>>> https://www.govinfo.gov/app/collection/cfr
>>> https://www.govinfo.gov/app/cfrparts/month
>>>
>>> I hope that helps.
>>>
>>> Lotfi Mehai
>>> https://www.linkedin.com/in/lmehai/
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Jan 19, 2020 at 9:05 PM Rajdeep Sahoo <
>>> rajdeepsahoo2012@gmail.com>
>>> wrote:
>>>
>>> > Initially we were getting the warning message as  ulimit is low i.e.
>>> 1024
>>> > so we changed it to 65000
>>> > Using ulimit -u 65000.
>>> >
>>> > Then the error was failed to reserve shared memory error =1
>>> >  Because of this we removed
>>> >    -xx : +uselargepages
>>> >
>>> > Now in console log it is showing
>>> > Could not find or load main class \
>>> >
>>> > And solr is not starting up
>>> >
>>> >
>>> >
>>> > On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wunder@wunderwood.org
>>> >
>>> > wrote:
>>> >
>>> > > What message do you get that means the heap space is full?
>>> > >
>>> > > Java will always use all of the heap, either as live data or
>>> > > not-yet-collected garbage.
>>> > >
>>> > > wunder
>>> > > Walter Underwood
>>> > > wunder@wunderwood.org
>>> > > http://observer.wunderwood.org/  (my blog)
>>> > >
>>> > > > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <
>>> rajdeepsahoo2012@gmail.com
>>> > >
>>> > > wrote:
>>> > > >
>>> > > > Hi,
>>> > > > Currently there is no request or indexing is happening.
>>> > > >  It's just start up
>>> > > > And during that time heap is getting full.
>>> > > > Index size is approx 1 g.
>>> > > >
>>> > > >
>>> > > > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <
>>> wunder@wunderwood.org
>>> > >
>>> > > > wrote:
>>> > > >
>>> > > >> A new garbage collector won’t fix it, but it might help a bit.
>>> > > >>
>>> > > >> Requesting 200 facet fields and having 50-60 of them with results
>>> is a
>>> > > >> huge amount of work for Solr. A typical faceting implementation
>>> might
>>> > > have
>>> > > >> three to five facets. Your requests will be at least 10X to 20X
>>> > slower.
>>> > > >>
>>> > > >> Check the CPU during one request. It should use nearly 100% of a
>>> > single
>>> > > >> CPU. If it a lot lower than 100%, you have another bottleneck.
>>> That
>>> > > might
>>> > > >> be insufficient heap or accessing disk during query requests (not
>>> > enough
>>> > > >> RAM). If it is near 100%, the only thing you can do is get a
>>> faster
>>> > CPU.
>>> > > >>
>>> > > >> One other question, how frequently is the index updated?
>>> > > >>
>>> > > >> wunder
>>> > > >> Walter Underwood
>>> > > >> wunder@wunderwood.org
>>> > > >> http://observer.wunderwood.org/  (my blog)
>>> > > >>
>>> > > >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <
>>> > rajdeepsahoo2012@gmail.com
>>> > > >
>>> > > >> wrote:
>>> > > >>>
>>> > > >>> Hi,
>>> > > >>> Still facing the same issue...
>>> > > >>> Anything else that we need to check.
>>> > > >>>
>>> > > >>>
>>> > > >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <
>>> > wunder@wunderwood.org
>>> > > >
>>> > > >>> wrote:
>>> > > >>>
>>> > > >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
>>> > > running
>>> > > >>>> that combination in prod for three years with no problems.
>>> > > >>>>
>>> > > >>>> SOLR_HEAP=8g
>>> > > >>>> # Use G1 GC  -- wunder 2017-01-23
>>> > > >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
>>> > > >>>> GC_TUNE=" \
>>> > > >>>> -XX:+UseG1GC \
>>> > > >>>> -XX:+ParallelRefProcEnabled \
>>> > > >>>> -XX:G1HeapRegionSize=8m \
>>> > > >>>> -XX:MaxGCPauseMillis=200 \
>>> > > >>>> -XX:+UseLargePages \
>>> > > >>>> -XX:+AggressiveOpts \
>>> > > >>>> “
>>> > > >>>>
>>> > > >>>> wunder
>>> > > >>>> Walter Underwood
>>> > > >>>> wunder@wunderwood.org
>>> > > >>>> http://observer.wunderwood.org/  (my blog)
>>> > > >>>>
>>> > > >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <
>>> > > rajdeepsahoo2012@gmail.com
>>> > > >>>
>>> > > >>>> wrote:
>>> > > >>>>>
>>> > > >>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space
>>> is 12
>>> > > gb.
>>> > > >>>> We
>>> > > >>>>> have completed indexing after starting the server suddenly heap
>>> > space
>>> > > >> is
>>> > > >>>>> getting full.
>>> > > >>>>> Added gc params  , still not working and jdk version is 1.8 .
>>> > > >>>>> Please find the below gc  params
>>> > > >>>>> -XX:NewRatio=2
>>> > > >>>>> -XX:SurvivorRatio=3
>>> > > >>>>> -XX:TargetSurvivorRatio=90 \
>>> > > >>>>> -XX:MaxTenuringThreshold=8 \
>>> > > >>>>> -XX:+UseConcMarkSweepGC \
>>> > > >>>>> -XX:+CMSScavengeBeforeRemark \
>>> > > >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
>>> > > >>>>> -XX:PretenureSizeThreshold=512m \
>>> > > >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
>>> > > >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
>>> > > >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
>>> > > >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
>>> > > >>>>> -XX:+CMSParallelRemarkEnabled
>>> > > >>>>> -XX:+ParallelRefProcEnabled
>>> > > >>>>> -XX:+UseLargePages \
>>> > > >>>>> -XX:+AggressiveOpts \
>>> > > >>>>
>>> > > >>>>
>>> > > >>
>>> > > >>
>>> > >
>>> > >
>>> >
>>>
>>

Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Is there anything else we should try regarding GC tuning?

On Mon, 20 Jan, 2020, 8:08 AM Rajdeep Sahoo, <ra...@gmail.com>
wrote:

> Initially we were getting the warning message as  ulimit is low i.e. 1024
> so we changed it to 65000
> Using ulimit -u 65000.
>
> Then the error was failed to reserve shared memory error =1
>  Because of this we removed
>    -xx : +uselargepages
>
> Now in console log it is showing
> Could not find or load main class \
>
> And solr is not starting up
>
>
> On Mon, 20 Jan, 2020, 7:50 AM Mehai, Lotfi, <lm...@ptfs.com.invalid>
> wrote:
>
>> I  had a similar issue with a large number of facets. There is no way (At
>> least I know) your can get an acceptable response time from search engine
>> with high number of facets.
>> The way we solved the issue was to cache shallow Facets data structure in
>> the web services. Facts structures are refreshed periodically. We don't
>> have near real time indexation requirements. Page response time is under
>> 5s.
>>
>> Here the URLs for our worst use case:
>> https://www.govinfo.gov/app/collection/cfr
>> https://www.govinfo.gov/app/cfrparts/month
>>
>> I hope that helps.
>>
>> Lotfi Mehai
>> https://www.linkedin.com/in/lmehai/
>>
>>
>>
>>
>>
>> On Sun, Jan 19, 2020 at 9:05 PM Rajdeep Sahoo <rajdeepsahoo2012@gmail.com
>> >
>> wrote:
>>
>> > Initially we were getting the warning message as  ulimit is low i.e.
>> 1024
>> > so we changed it to 65000
>> > Using ulimit -u 65000.
>> >
>> > Then the error was failed to reserve shared memory error =1
>> >  Because of this we removed
>> >    -xx : +uselargepages
>> >
>> > Now in console log it is showing
>> > Could not find or load main class \
>> >
>> > And solr is not starting up
>> >
>> >
>> >
>> > On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wu...@wunderwood.org>
>> > wrote:
>> >
>> > > What message do you get that means the heap space is full?
>> > >
>> > > Java will always use all of the heap, either as live data or
>> > > not-yet-collected garbage.
>> > >
>> > > wunder
>> > > Walter Underwood
>> > > wunder@wunderwood.org
>> > > http://observer.wunderwood.org/  (my blog)
>> > >
>> > > > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <
>> rajdeepsahoo2012@gmail.com
>> > >
>> > > wrote:
>> > > >
>> > > > Hi,
>> > > > Currently there is no request or indexing is happening.
>> > > >  It's just start up
>> > > > And during that time heap is getting full.
>> > > > Index size is approx 1 g.
>> > > >
>> > > >
>> > > > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <
>> wunder@wunderwood.org
>> > >
>> > > > wrote:
>> > > >
>> > > >> A new garbage collector won’t fix it, but it might help a bit.
>> > > >>
>> > > >> Requesting 200 facet fields and having 50-60 of them with results
>> is a
>> > > >> huge amount of work for Solr. A typical faceting implementation
>> might
>> > > have
>> > > >> three to five facets. Your requests will be at least 10X to 20X
>> > slower.
>> > > >>
>> > > >> Check the CPU during one request. It should use nearly 100% of a
>> > single
>> > > >> CPU. If it a lot lower than 100%, you have another bottleneck. That
>> > > might
>> > > >> be insufficient heap or accessing disk during query requests (not
>> > enough
>> > > >> RAM). If it is near 100%, the only thing you can do is get a faster
>> > CPU.
>> > > >>
>> > > >> One other question, how frequently is the index updated?
>> > > >>
>> > > >> wunder
>> > > >> Walter Underwood
>> > > >> wunder@wunderwood.org
>> > > >> http://observer.wunderwood.org/  (my blog)
>> > > >>
>> > > >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <
>> > rajdeepsahoo2012@gmail.com
>> > > >
>> > > >> wrote:
>> > > >>>
>> > > >>> Hi,
>> > > >>> Still facing the same issue...
>> > > >>> Anything else that we need to check.
>> > > >>>
>> > > >>>
>> > > >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <
>> > wunder@wunderwood.org
>> > > >
>> > > >>> wrote:
>> > > >>>
>> > > >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
>> > > running
>> > > >>>> that combination in prod for three years with no problems.
>> > > >>>>
>> > > >>>> SOLR_HEAP=8g
>> > > >>>> # Use G1 GC  -- wunder 2017-01-23
>> > > >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
>> > > >>>> GC_TUNE=" \
>> > > >>>> -XX:+UseG1GC \
>> > > >>>> -XX:+ParallelRefProcEnabled \
>> > > >>>> -XX:G1HeapRegionSize=8m \
>> > > >>>> -XX:MaxGCPauseMillis=200 \
>> > > >>>> -XX:+UseLargePages \
>> > > >>>> -XX:+AggressiveOpts \
>> > > >>>> “
>> > > >>>>
>> > > >>>> wunder
>> > > >>>> Walter Underwood
>> > > >>>> wunder@wunderwood.org
>> > > >>>> http://observer.wunderwood.org/  (my blog)
>> > > >>>>
>> > > >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <
>> > > rajdeepsahoo2012@gmail.com
>> > > >>>
>> > > >>>> wrote:
>> > > >>>>>
>> > > >>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space
>> is 12
>> > > gb.
>> > > >>>> We
>> > > >>>>> have completed indexing after starting the server suddenly heap
>> > space
>> > > >> is
>> > > >>>>> getting full.
>> > > >>>>> Added gc params  , still not working and jdk version is 1.8 .
>> > > >>>>> Please find the below gc  params
>> > > >>>>> -XX:NewRatio=2
>> > > >>>>> -XX:SurvivorRatio=3
>> > > >>>>> -XX:TargetSurvivorRatio=90 \
>> > > >>>>> -XX:MaxTenuringThreshold=8 \
>> > > >>>>> -XX:+UseConcMarkSweepGC \
>> > > >>>>> -XX:+CMSScavengeBeforeRemark \
>> > > >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
>> > > >>>>> -XX:PretenureSizeThreshold=512m \
>> > > >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
>> > > >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
>> > > >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
>> > > >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
>> > > >>>>> -XX:+CMSParallelRemarkEnabled
>> > > >>>>> -XX:+ParallelRefProcEnabled
>> > > >>>>> -XX:+UseLargePages \
>> > > >>>>> -XX:+AggressiveOpts \
>> > > >>>>
>> > > >>>>
>> > > >>
>> > > >>
>> > >
>> > >
>> >
>>
>

Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Initially we were getting a warning that the ulimit was too low (1024), so
we changed it to 65000 using ulimit -u 65000.

Then the error was "failed to reserve shared memory (error = 1)", so we
removed -XX:+UseLargePages.

Now the console log shows "Could not find or load main class \" and Solr
is not starting up.
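
That "Could not find or load main class \" failure usually means a stray
backslash token ended up in the JVM arguments, so java treats "\" as the
class it should run. It is easy to cause when a flag is removed from a
multi-line option block but a line-continuation backslash (or a backslash
followed by a trailing space) is left behind. A minimal sketch of the
relevant fragment, assuming the flags are set through GC_TUNE in
bin/solr.in.sh (the flag list here is illustrative, trimmed from the one
posted earlier):

# When dropping -XX:+UseLargePages, delete the whole line, make sure no
# line ends in a backslash followed by a space, and leave no trailing
# backslash on the last flag before the closing quote.
GC_TUNE=" \
-XX:+UseConcMarkSweepGC \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=70 \
-XX:+ParallelRefProcEnabled"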


On Mon, 20 Jan, 2020, 7:50 AM Mehai, Lotfi, <lm...@ptfs.com.invalid> wrote:

> I  had a similar issue with a large number of facets. There is no way (At
> least I know) your can get an acceptable response time from search engine
> with high number of facets.
> The way we solved the issue was to cache shallow Facets data structure in
> the web services. Facts structures are refreshed periodically. We don't
> have near real time indexation requirements. Page response time is under
> 5s.
>
> Here the URLs for our worst use case:
> https://www.govinfo.gov/app/collection/cfr
> https://www.govinfo.gov/app/cfrparts/month
>
> I hope that helps.
>
> Lotfi Mehai
> https://www.linkedin.com/in/lmehai/
>
>
>
>
>
> On Sun, Jan 19, 2020 at 9:05 PM Rajdeep Sahoo <ra...@gmail.com>
> wrote:
>
> > Initially we were getting the warning message as  ulimit is low i.e. 1024
> > so we changed it to 65000
> > Using ulimit -u 65000.
> >
> > Then the error was failed to reserve shared memory error =1
> >  Because of this we removed
> >    -xx : +uselargepages
> >
> > Now in console log it is showing
> > Could not find or load main class \
> >
> > And solr is not starting up
> >
> >
> >
> > On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wu...@wunderwood.org>
> > wrote:
> >
> > > What message do you get that means the heap space is full?
> > >
> > > Java will always use all of the heap, either as live data or
> > > not-yet-collected garbage.
> > >
> > > wunder
> > > Walter Underwood
> > > wunder@wunderwood.org
> > > http://observer.wunderwood.org/  (my blog)
> > >
> > > > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <
> rajdeepsahoo2012@gmail.com
> > >
> > > wrote:
> > > >
> > > > Hi,
> > > > Currently there is no request or indexing is happening.
> > > >  It's just start up
> > > > And during that time heap is getting full.
> > > > Index size is approx 1 g.
> > > >
> > > >
> > > > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <
> wunder@wunderwood.org
> > >
> > > > wrote:
> > > >
> > > >> A new garbage collector won’t fix it, but it might help a bit.
> > > >>
> > > >> Requesting 200 facet fields and having 50-60 of them with results
> is a
> > > >> huge amount of work for Solr. A typical faceting implementation
> might
> > > have
> > > >> three to five facets. Your requests will be at least 10X to 20X
> > slower.
> > > >>
> > > >> Check the CPU during one request. It should use nearly 100% of a
> > single
> > > >> CPU. If it a lot lower than 100%, you have another bottleneck. That
> > > might
> > > >> be insufficient heap or accessing disk during query requests (not
> > enough
> > > >> RAM). If it is near 100%, the only thing you can do is get a faster
> > CPU.
> > > >>
> > > >> One other question, how frequently is the index updated?
> > > >>
> > > >> wunder
> > > >> Walter Underwood
> > > >> wunder@wunderwood.org
> > > >> http://observer.wunderwood.org/  (my blog)
> > > >>
> > > >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <
> > rajdeepsahoo2012@gmail.com
> > > >
> > > >> wrote:
> > > >>>
> > > >>> Hi,
> > > >>> Still facing the same issue...
> > > >>> Anything else that we need to check.
> > > >>>
> > > >>>
> > > >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <
> > wunder@wunderwood.org
> > > >
> > > >>> wrote:
> > > >>>
> > > >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
> > > running
> > > >>>> that combination in prod for three years with no problems.
> > > >>>>
> > > >>>> SOLR_HEAP=8g
> > > >>>> # Use G1 GC  -- wunder 2017-01-23
> > > >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> > > >>>> GC_TUNE=" \
> > > >>>> -XX:+UseG1GC \
> > > >>>> -XX:+ParallelRefProcEnabled \
> > > >>>> -XX:G1HeapRegionSize=8m \
> > > >>>> -XX:MaxGCPauseMillis=200 \
> > > >>>> -XX:+UseLargePages \
> > > >>>> -XX:+AggressiveOpts \
> > > >>>> “
> > > >>>>
> > > >>>> wunder
> > > >>>> Walter Underwood
> > > >>>> wunder@wunderwood.org
> > > >>>> http://observer.wunderwood.org/  (my blog)
> > > >>>>
> > > >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <
> > > rajdeepsahoo2012@gmail.com
> > > >>>
> > > >>>> wrote:
> > > >>>>>
> > > >>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space is
> 12
> > > gb.
> > > >>>> We
> > > >>>>> have completed indexing after starting the server suddenly heap
> > space
> > > >> is
> > > >>>>> getting full.
> > > >>>>> Added gc params  , still not working and jdk version is 1.8 .
> > > >>>>> Please find the below gc  params
> > > >>>>> -XX:NewRatio=2
> > > >>>>> -XX:SurvivorRatio=3
> > > >>>>> -XX:TargetSurvivorRatio=90 \
> > > >>>>> -XX:MaxTenuringThreshold=8 \
> > > >>>>> -XX:+UseConcMarkSweepGC \
> > > >>>>> -XX:+CMSScavengeBeforeRemark \
> > > >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> > > >>>>> -XX:PretenureSizeThreshold=512m \
> > > >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
> > > >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
> > > >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
> > > >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
> > > >>>>> -XX:+CMSParallelRemarkEnabled
> > > >>>>> -XX:+ParallelRefProcEnabled
> > > >>>>> -XX:+UseLargePages \
> > > >>>>> -XX:+AggressiveOpts \
> > > >>>>
> > > >>>>
> > > >>
> > > >>
> > >
> > >
> >
>

Re: Solr 7.7 heap space is getting full

Posted by "Mehai, Lotfi" <lm...@ptfs.com.INVALID>.
I had a similar issue with a large number of facets. There is no way (at
least that I know of) you can get an acceptable response time from the
search engine with a high number of facets.
The way we solved the issue was to cache a shallow facets data structure in
the web services. Facet structures are refreshed periodically. We don't
have near-real-time indexing requirements. Page response time is under 5s.

Here are the URLs for our worst use case:
https://www.govinfo.gov/app/collection/cfr
https://www.govinfo.gov/app/cfrparts/month

I hope that helps.

Lotfi Mehai
https://www.linkedin.com/in/lmehai/
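
A very rough sketch of that approach, assuming the expensive facet
response is pre-fetched on a schedule (for example from cron) and the web
tier serves the cached copy instead of querying Solr on every page view;
the collection name, facet field and file path below are illustrative:

# refreshed periodically, e.g. every 15 minutes from cron
curl -s 'http://localhost:8983/solr/mycoll/select?q=*:*&rows=0&facet=true&facet.field=collection_code&facet.limit=-1&wt=json' > /var/cache/app/facets.json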





On Sun, Jan 19, 2020 at 9:05 PM Rajdeep Sahoo <ra...@gmail.com>
wrote:

> Initially we were getting the warning message as  ulimit is low i.e. 1024
> so we changed it to 65000
> Using ulimit -u 65000.
>
> Then the error was failed to reserve shared memory error =1
>  Because of this we removed
>    -xx : +uselargepages
>
> Now in console log it is showing
> Could not find or load main class \
>
> And solr is not starting up
>
>
>
> On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wu...@wunderwood.org>
> wrote:
>
> > What message do you get that means the heap space is full?
> >
> > Java will always use all of the heap, either as live data or
> > not-yet-collected garbage.
> >
> > wunder
> > Walter Underwood
> > wunder@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
> >
> > > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <rajdeepsahoo2012@gmail.com
> >
> > wrote:
> > >
> > > Hi,
> > > Currently there is no request or indexing is happening.
> > >  It's just start up
> > > And during that time heap is getting full.
> > > Index size is approx 1 g.
> > >
> > >
> > > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <wunder@wunderwood.org
> >
> > > wrote:
> > >
> > >> A new garbage collector won’t fix it, but it might help a bit.
> > >>
> > >> Requesting 200 facet fields and having 50-60 of them with results is a
> > >> huge amount of work for Solr. A typical faceting implementation might
> > have
> > >> three to five facets. Your requests will be at least 10X to 20X
> slower.
> > >>
> > >> Check the CPU during one request. It should use nearly 100% of a
> single
> > >> CPU. If it a lot lower than 100%, you have another bottleneck. That
> > might
> > >> be insufficient heap or accessing disk during query requests (not
> enough
> > >> RAM). If it is near 100%, the only thing you can do is get a faster
> CPU.
> > >>
> > >> One other question, how frequently is the index updated?
> > >>
> > >> wunder
> > >> Walter Underwood
> > >> wunder@wunderwood.org
> > >> http://observer.wunderwood.org/  (my blog)
> > >>
> > >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <
> rajdeepsahoo2012@gmail.com
> > >
> > >> wrote:
> > >>>
> > >>> Hi,
> > >>> Still facing the same issue...
> > >>> Anything else that we need to check.
> > >>>
> > >>>
> > >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <
> wunder@wunderwood.org
> > >
> > >>> wrote:
> > >>>
> > >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
> > running
> > >>>> that combination in prod for three years with no problems.
> > >>>>
> > >>>> SOLR_HEAP=8g
> > >>>> # Use G1 GC  -- wunder 2017-01-23
> > >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> > >>>> GC_TUNE=" \
> > >>>> -XX:+UseG1GC \
> > >>>> -XX:+ParallelRefProcEnabled \
> > >>>> -XX:G1HeapRegionSize=8m \
> > >>>> -XX:MaxGCPauseMillis=200 \
> > >>>> -XX:+UseLargePages \
> > >>>> -XX:+AggressiveOpts \
> > >>>> “
> > >>>>
> > >>>> wunder
> > >>>> Walter Underwood
> > >>>> wunder@wunderwood.org
> > >>>> http://observer.wunderwood.org/  (my blog)
> > >>>>
> > >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <
> > rajdeepsahoo2012@gmail.com
> > >>>
> > >>>> wrote:
> > >>>>>
> > >>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12
> > gb.
> > >>>> We
> > >>>>> have completed indexing after starting the server suddenly heap
> space
> > >> is
> > >>>>> getting full.
> > >>>>> Added gc params  , still not working and jdk version is 1.8 .
> > >>>>> Please find the below gc  params
> > >>>>> -XX:NewRatio=2
> > >>>>> -XX:SurvivorRatio=3
> > >>>>> -XX:TargetSurvivorRatio=90 \
> > >>>>> -XX:MaxTenuringThreshold=8 \
> > >>>>> -XX:+UseConcMarkSweepGC \
> > >>>>> -XX:+CMSScavengeBeforeRemark \
> > >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> > >>>>> -XX:PretenureSizeThreshold=512m \
> > >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
> > >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
> > >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
> > >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
> > >>>>> -XX:+CMSParallelRemarkEnabled
> > >>>>> -XX:+ParallelRefProcEnabled
> > >>>>> -XX:+UseLargePages \
> > >>>>> -XX:+AggressiveOpts \
> > >>>>
> > >>>>
> > >>
> > >>
> >
> >
>

Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Initially we were getting a warning that the ulimit was too low (1024), so
we changed it to 65000 using ulimit -u 65000.

Then the error was "failed to reserve shared memory (error = 1)", so we
removed -XX:+UseLargePages.

Now the console log shows "Could not find or load main class \" and Solr
is not starting up.



On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wu...@wunderwood.org>
wrote:

> What message do you get that means the heap space is full?
>
> Java will always use all of the heap, either as live data or
> not-yet-collected garbage.
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >
> > Hi,
> > Currently there is no request or indexing is happening.
> >  It's just start up
> > And during that time heap is getting full.
> > Index size is approx 1 g.
> >
> >
> > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <wu...@wunderwood.org>
> > wrote:
> >
> >> A new garbage collector won’t fix it, but it might help a bit.
> >>
> >> Requesting 200 facet fields and having 50-60 of them with results is a
> >> huge amount of work for Solr. A typical faceting implementation might
> have
> >> three to five facets. Your requests will be at least 10X to 20X slower.
> >>
> >> Check the CPU during one request. It should use nearly 100% of a single
> >> CPU. If it a lot lower than 100%, you have another bottleneck. That
> might
> >> be insufficient heap or accessing disk during query requests (not enough
> >> RAM). If it is near 100%, the only thing you can do is get a faster CPU.
> >>
> >> One other question, how frequently is the index updated?
> >>
> >> wunder
> >> Walter Underwood
> >> wunder@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <rajdeepsahoo2012@gmail.com
> >
> >> wrote:
> >>>
> >>> Hi,
> >>> Still facing the same issue...
> >>> Anything else that we need to check.
> >>>
> >>>
> >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wunder@wunderwood.org
> >
> >>> wrote:
> >>>
> >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
> running
> >>>> that combination in prod for three years with no problems.
> >>>>
> >>>> SOLR_HEAP=8g
> >>>> # Use G1 GC  -- wunder 2017-01-23
> >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> >>>> GC_TUNE=" \
> >>>> -XX:+UseG1GC \
> >>>> -XX:+ParallelRefProcEnabled \
> >>>> -XX:G1HeapRegionSize=8m \
> >>>> -XX:MaxGCPauseMillis=200 \
> >>>> -XX:+UseLargePages \
> >>>> -XX:+AggressiveOpts \
> >>>> “
> >>>>
> >>>> wunder
> >>>> Walter Underwood
> >>>> wunder@wunderwood.org
> >>>> http://observer.wunderwood.org/  (my blog)
> >>>>
> >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <
> rajdeepsahoo2012@gmail.com
> >>>
> >>>> wrote:
> >>>>>
> >>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12
> gb.
> >>>> We
> >>>>> have completed indexing after starting the server suddenly heap space
> >> is
> >>>>> getting full.
> >>>>> Added gc params  , still not working and jdk version is 1.8 .
> >>>>> Please find the below gc  params
> >>>>> -XX:NewRatio=2
> >>>>> -XX:SurvivorRatio=3
> >>>>> -XX:TargetSurvivorRatio=90 \
> >>>>> -XX:MaxTenuringThreshold=8 \
> >>>>> -XX:+UseConcMarkSweepGC \
> >>>>> -XX:+CMSScavengeBeforeRemark \
> >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> >>>>> -XX:PretenureSizeThreshold=512m \
> >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
> >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
> >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
> >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
> >>>>> -XX:+CMSParallelRemarkEnabled
> >>>>> -XX:+ParallelRefProcEnabled
> >>>>> -XX:+UseLargePages \
> >>>>> -XX:+AggressiveOpts \
> >>>>
> >>>>
> >>
> >>
>
>

Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
What message do you get that means the heap space is full?

Java will always use all of the heap, either as live data or not-yet-collected garbage.

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)
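
One quick way to separate live data from not-yet-collected garbage is to
watch old-generation occupancy across full collections. A sketch, assuming
a standard JDK with its tools on the PATH (<solr-pid> is the Solr process
id):

jstat -gcutil <solr-pid> 5000            # O = old gen occupancy %, FGC = full GC count
jcmd <solr-pid> GC.run                   # ask the JVM for a full GC (diagnostic use only)
jmap -histo:live <solr-pid> | head -25   # largest classes among live objects (forces a full GC)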

> On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> Hi,
> Currently there is no request or indexing is happening.
>  It's just start up
> And during that time heap is getting full.
> Index size is approx 1 g.
> 
> 
> On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <wu...@wunderwood.org>
> wrote:
> 
>> A new garbage collector won’t fix it, but it might help a bit.
>> 
>> Requesting 200 facet fields and having 50-60 of them with results is a
>> huge amount of work for Solr. A typical faceting implementation might have
>> three to five facets. Your requests will be at least 10X to 20X slower.
>> 
>> Check the CPU during one request. It should use nearly 100% of a single
>> CPU. If it a lot lower than 100%, you have another bottleneck. That might
>> be insufficient heap or accessing disk during query requests (not enough
>> RAM). If it is near 100%, the only thing you can do is get a faster CPU.
>> 
>> One other question, how frequently is the index updated?
>> 
>> wunder
>> Walter Underwood
>> wunder@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <ra...@gmail.com>
>> wrote:
>>> 
>>> Hi,
>>> Still facing the same issue...
>>> Anything else that we need to check.
>>> 
>>> 
>>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wu...@wunderwood.org>
>>> wrote:
>>> 
>>>> With Java 1.8, I would use the G1 garbage collector. We’ve been running
>>>> that combination in prod for three years with no problems.
>>>> 
>>>> SOLR_HEAP=8g
>>>> # Use G1 GC  -- wunder 2017-01-23
>>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
>>>> GC_TUNE=" \
>>>> -XX:+UseG1GC \
>>>> -XX:+ParallelRefProcEnabled \
>>>> -XX:G1HeapRegionSize=8m \
>>>> -XX:MaxGCPauseMillis=200 \
>>>> -XX:+UseLargePages \
>>>> -XX:+AggressiveOpts \
>>>> “
>>>> 
>>>> wunder
>>>> Walter Underwood
>>>> wunder@wunderwood.org
>>>> http://observer.wunderwood.org/  (my blog)
>>>> 
>>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <rajdeepsahoo2012@gmail.com
>>> 
>>>> wrote:
>>>>> 
>>>>> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb.
>>>> We
>>>>> have completed indexing after starting the server suddenly heap space
>> is
>>>>> getting full.
>>>>> Added gc params  , still not working and jdk version is 1.8 .
>>>>> Please find the below gc  params
>>>>> -XX:NewRatio=2
>>>>> -XX:SurvivorRatio=3
>>>>> -XX:TargetSurvivorRatio=90 \
>>>>> -XX:MaxTenuringThreshold=8 \
>>>>> -XX:+UseConcMarkSweepGC \
>>>>> -XX:+CMSScavengeBeforeRemark \
>>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
>>>>> -XX:PretenureSizeThreshold=512m \
>>>>> -XX:CMSFullGCsBeforeCompaction=1 \
>>>>> -XX:+UseCMSInitiatingOccupancyOnly \
>>>>> -XX:CMSInitiatingOccupancyFraction=70 \
>>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
>>>>> -XX:+CMSParallelRemarkEnabled
>>>>> -XX:+ParallelRefProcEnabled
>>>>> -XX:+UseLargePages \
>>>>> -XX:+AggressiveOpts \
>>>> 
>>>> 
>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Hi,
Currently no requests are being served and no indexing is happening.
It's just starting up, and during that time the heap is getting full.
Index size is approx 1 GB.
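
If the heap really is filling up right after startup with no traffic, a
heap dump of live objects should show what is holding the memory. A
sketch, assuming a standard JDK (<solr-pid> is the Solr process id):

jmap -dump:live,format=b,file=/tmp/solr-heap.hprof <solr-pid>
# open the .hprof file in Eclipse MAT or VisualVM and inspect the dominator tree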


On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <wu...@wunderwood.org>
wrote:

> A new garbage collector won’t fix it, but it might help a bit.
>
> Requesting 200 facet fields and having 50-60 of them with results is a
> huge amount of work for Solr. A typical faceting implementation might have
> three to five facets. Your requests will be at least 10X to 20X slower.
>
> Check the CPU during one request. It should use nearly 100% of a single
> CPU. If it a lot lower than 100%, you have another bottleneck. That might
> be insufficient heap or accessing disk during query requests (not enough
> RAM). If it is near 100%, the only thing you can do is get a faster CPU.
>
> One other question, how frequently is the index updated?
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >
> > Hi,
> > Still facing the same issue...
> > Anything else that we need to check.
> >
> >
> > On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wu...@wunderwood.org>
> > wrote:
> >
> >> With Java 1.8, I would use the G1 garbage collector. We’ve been running
> >> that combination in prod for three years with no problems.
> >>
> >> SOLR_HEAP=8g
> >> # Use G1 GC  -- wunder 2017-01-23
> >> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> >> GC_TUNE=" \
> >> -XX:+UseG1GC \
> >> -XX:+ParallelRefProcEnabled \
> >> -XX:G1HeapRegionSize=8m \
> >> -XX:MaxGCPauseMillis=200 \
> >> -XX:+UseLargePages \
> >> -XX:+AggressiveOpts \
> >> “
> >>
> >> wunder
> >> Walter Underwood
> >> wunder@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <rajdeepsahoo2012@gmail.com
> >
> >> wrote:
> >>>
> >>> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb.
> >> We
> >>> have completed indexing after starting the server suddenly heap space
> is
> >>> getting full.
> >>>  Added gc params  , still not working and jdk version is 1.8 .
> >>> Please find the below gc  params
> >>> -XX:NewRatio=2
> >>> -XX:SurvivorRatio=3
> >>> -XX:TargetSurvivorRatio=90 \
> >>> -XX:MaxTenuringThreshold=8 \
> >>> -XX:+UseConcMarkSweepGC \
> >>> -XX:+CMSScavengeBeforeRemark \
> >>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> >>> -XX:PretenureSizeThreshold=512m \
> >>> -XX:CMSFullGCsBeforeCompaction=1 \
> >>> -XX:+UseCMSInitiatingOccupancyOnly \
> >>> -XX:CMSInitiatingOccupancyFraction=70 \
> >>> -XX:CMSMaxAbortablePrecleanTime=6000 \
> >>> -XX:+CMSParallelRemarkEnabled
> >>> -XX:+ParallelRefProcEnabled
> >>> -XX:+UseLargePages \
> >>> -XX:+AggressiveOpts \
> >>
> >>
>
>

Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
A new garbage collector won’t fix it, but it might help a bit.

Requesting 200 facet fields and having 50-60 of them with results is a huge amount of work for Solr. A typical faceting implementation might have three to five facets. Your requests will be at least 10X to 20X slower.

Check the CPU during one request. It should use nearly 100% of a single CPU. If it is a lot lower than 100%, you have another bottleneck. That might be insufficient heap or accessing disk during query requests (not enough RAM). If it is near 100%, the only thing you can do is get a faster CPU.

One other question, how frequently is the index updated?

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)
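
A sketch of how that check can be done on Linux while a single query is
running, assuming the usual tools are installed (<solr-pid> is the Solr
process id):

top -H -p <solr-pid>   # per-thread view; one thread pegged near 100% suggests CPU-bound
iostat -x 5            # sustained disk reads during queries suggest the index is not cached in RAM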

> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> Hi,
> Still facing the same issue...
> Anything else that we need to check.
> 
> 
> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wu...@wunderwood.org>
> wrote:
> 
>> With Java 1.8, I would use the G1 garbage collector. We’ve been running
>> that combination in prod for three years with no problems.
>> 
>> SOLR_HEAP=8g
>> # Use G1 GC  -- wunder 2017-01-23
>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
>> GC_TUNE=" \
>> -XX:+UseG1GC \
>> -XX:+ParallelRefProcEnabled \
>> -XX:G1HeapRegionSize=8m \
>> -XX:MaxGCPauseMillis=200 \
>> -XX:+UseLargePages \
>> -XX:+AggressiveOpts \
>> “
>> 
>> wunder
>> Walter Underwood
>> wunder@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <ra...@gmail.com>
>> wrote:
>>> 
>>> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb.
>> We
>>> have completed indexing after starting the server suddenly heap space is
>>> getting full.
>>>  Added gc params  , still not working and jdk version is 1.8 .
>>> Please find the below gc  params
>>> -XX:NewRatio=2
>>> -XX:SurvivorRatio=3
>>> -XX:TargetSurvivorRatio=90 \
>>> -XX:MaxTenuringThreshold=8 \
>>> -XX:+UseConcMarkSweepGC \
>>> -XX:+CMSScavengeBeforeRemark \
>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
>>> -XX:PretenureSizeThreshold=512m \
>>> -XX:CMSFullGCsBeforeCompaction=1 \
>>> -XX:+UseCMSInitiatingOccupancyOnly \
>>> -XX:CMSInitiatingOccupancyFraction=70 \
>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
>>> -XX:+CMSParallelRemarkEnabled
>>> -XX:+ParallelRefProcEnabled
>>> -XX:+UseLargePages \
>>> -XX:+AggressiveOpts \
>> 
>> 


Re: Solr 7.7 heap space is getting full

Posted by Rajdeep Sahoo <ra...@gmail.com>.
Hi,
Still facing the same issue.
Is there anything else that we need to check?


On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wu...@wunderwood.org>
wrote:

> With Java 1.8, I would use the G1 garbage collector. We’ve been running
> that combination in prod for three years with no problems.
>
> SOLR_HEAP=8g
> # Use G1 GC  -- wunder 2017-01-23
> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> GC_TUNE=" \
> -XX:+UseG1GC \
> -XX:+ParallelRefProcEnabled \
> -XX:G1HeapRegionSize=8m \
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> “
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <ra...@gmail.com>
> wrote:
> >
> > We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb.
> We
> > have completed indexing after starting the server suddenly heap space is
> > getting full.
> >   Added gc params  , still not working and jdk version is 1.8 .
> > Please find the below gc  params
> > -XX:NewRatio=2
> > -XX:SurvivorRatio=3
> > -XX:TargetSurvivorRatio=90 \
> > -XX:MaxTenuringThreshold=8 \
> > -XX:+UseConcMarkSweepGC \
> > -XX:+CMSScavengeBeforeRemark \
> > -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> > -XX:PretenureSizeThreshold=512m \
> > -XX:CMSFullGCsBeforeCompaction=1 \
> > -XX:+UseCMSInitiatingOccupancyOnly \
> > -XX:CMSInitiatingOccupancyFraction=70 \
> > -XX:CMSMaxAbortablePrecleanTime=6000 \
> > -XX:+CMSParallelRemarkEnabled
> > -XX:+ParallelRefProcEnabled
> > -XX:+UseLargePages \
> > -XX:+AggressiveOpts \
>
>

Re: Solr 7.7 heap space is getting full

Posted by Walter Underwood <wu...@wunderwood.org>.
With Java 1.8, I would use the G1 garbage collector. We’ve been running that combination in prod for three years with no problems.

SOLR_HEAP=8g
# Use G1 GC  -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=200 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
"

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)
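
Those settings normally live in the include script rather than on the java
command line; a sketch, assuming a standard Solr 7.x layout (the exact
path depends on whether the service installer was used):

# bin/solr.in.sh, or /etc/default/solr.in.sh for a service install
SOLR_HEAP=8g
GC_TUNE="-XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=200"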

> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <ra...@gmail.com> wrote:
> 
> We are using solr 7.7 . Ram size is 24 gb and allocated space is 12 gb. We
> have completed indexing after starting the server suddenly heap space is
> getting full.
>   Added gc params  , still not working and jdk version is 1.8 .
> Please find the below gc  params
> -XX:NewRatio=2
> -XX:SurvivorRatio=3
> -XX:TargetSurvivorRatio=90 \
> -XX:MaxTenuringThreshold=8 \
> -XX:+UseConcMarkSweepGC \
> -XX:+CMSScavengeBeforeRemark \
> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> -XX:PretenureSizeThreshold=512m \
> -XX:CMSFullGCsBeforeCompaction=1 \
> -XX:+UseCMSInitiatingOccupancyOnly \
> -XX:CMSInitiatingOccupancyFraction=70 \
> -XX:CMSMaxAbortablePrecleanTime=6000 \
> -XX:+CMSParallelRemarkEnabled
> -XX:+ParallelRefProcEnabled
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \