Posted to users@solr.apache.org by Vincenzo D'Amore <v....@gmail.com> on 2022/03/16 09:56:06 UTC

Solr dashboard - number of CPUs available

Hi all,

just asking whether I can rely on the number of processors the Solr dashboard
shows.

Just to give you some context, I have a 2-node SolrCloud instance running in
Kubernetes.
Looking at the Solr dashboard (8.3.0) I see there is only 1 CPU available per
Solr instance, but the Solr pods are deployed on two different kube nodes, and
entering the pod with
kubectl exec -ti solr-0 -- /bin/bash
and running top I see there are 16 cores available for each Solr instance.

Looking at the Solr Kubernetes StatefulSet I see there are no limits; the
CPU requests/limits are not even defined.
I'm digging deeper, but as far as I can tell the Solr dashboard doesn't lie.
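
For reference, this is roughly how I'm checking whether any CPU requests or
limits are set (assuming the StatefulSet is simply named solr and lives in
the default namespace):

  # resources block of the Solr container in the StatefulSet spec, if any
  kubectl get statefulset solr -o jsonpath='{.spec.template.spec.containers[0].resources}'

  # what was actually applied to the running pod
  kubectl describe pod solr-0 | grep -i -E -A4 'limits|requests'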

I need help trying to understand if the solr dashboard is reliable.

Best regards,
Vincenzo

-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/16/2022 6:11 PM, Vincenzo D'Amore wrote:
> Given that, I’ll put the question another way.
> If Java doesn't correctly detect the number of CPUs, how can the overall performance be affected by this?

I could be wrong, but I don't think Solr makes decisions based on CPU 
count, at least not without special config.  I think the build system 
and some of the tests do use the CPU count.  I did find something called 
RateLimitManager that looks at the CPU count.  It seems to be a query 
rate limiter for SolrCloud, but I think that would have to be 
specifically configured to take effect.
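
If you want a rough check of what the JVM inside the pod actually detects
(assuming a reasonably recent JDK in the container), something like this
should do it:

  # what the OS inside the container reports
  kubectl exec -ti solr-0 -- nproc

  # what the JVM will use; -1 means it auto-detects the processor count
  kubectl exec -ti solr-0 -- java -XX:+PrintFlagsFinal -version | grep ActiveProcessorCount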

Thanks,
Shawn


Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
I've also found this: CONTAINER.fs.coreRoot.spins: true
Can this be considered a problem big enough to affect the overall
performance?
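
For what it's worth, I pulled that from the Solr metrics API, roughly like
this (assuming the default port, run from inside the pod):

  curl 'http://localhost:8983/solr/admin/metrics?prefix=CONTAINER.fs'

If I understand correctly, spins is Lucene's guess at whether the index sits
on spinning disks, and that guess feeds the ConcurrentMergeScheduler defaults.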

On Fri, Mar 18, 2022 at 6:46 PM Dave <ha...@gmail.com> wrote:

> Walter I agree, but with large indexes (850+Gb before merge) I just found
> 31 to be my happy spot.  As well as set xms and xmx to the same value, I
> have no proof but it seems to take less processing to keep them the same
> than to keep allocating different memory footprints
>
> > On Mar 18, 2022, at 1:38 PM, Vincenzo D'Amore <v....@gmail.com>
> wrote:
> >
> > I did it right now in prod environment:
> > {
> >  "responseHeader":{
> >    "zkConnected":true,
> >    "status":0,
> >    "QTime":1943,
> >    "params":{
> >      "q":"*:*",
> >      "rows":"1"}},
> >
> > then for a while, the QTime is 0. I assume (obviously) that it is cached,
> > but after a while the cache expires....
> >
> >> On Fri, Mar 18, 2022 at 6:22 PM Dave <ha...@gmail.com>
> wrote:
> >>
> >> I’ve found that each solr instance will take as many cores as it needs
> per
> >> request. Your 2 sec response sounds like you just started the server and
> >> then did that search. I never trust the first search as nothing has been
> >> put into memory yet. I like to give my jvms 31 gb each and let Linux
> cache
> >> the rest of the files as it sees fit, with swap turned completely off.
> Also
> >> *:* can be heavier than you think if you have every field indexed since
> >> it’s like a punch card like system where all the fields have to match.
> >>
> >>> On Mar 18, 2022, at 12:45 PM, Vincenzo D'Amore <v....@gmail.com>
> >> wrote:
> >>>
> >>> Thanks for your support, just sharing what I found until now.
> >>>
> >>> I'm working with SolrCloud with a 2 node deployment. This deployment
> has
> >>> many indexes but a main one 160GB index that has become very slow.
> >>> Select *:* rows=1 take 2 seconds.
> >>> SolrCloud instances are running in kubernetes and are deployed in a pod
> >>> with 128GB RAM but only 16GB to JVM.
> >>> Looking at Solr Documentation I've found nothing specific about what
> >>> happens to the performance if the number of CPUs is not correctly
> >> detected.
> >>> The only interesting page is the following and it seems to match with
> >> your
> >>> suggestion.
> >>> At the end of paragraph there is a not very clear reference about how
> the
> >>> Concurrent Merge Scheduler behavior can be impacted by the number of
> >>> detected CPUs.
> >>>
> >>>> Similarly, the system property lucene.cms.override_core_count can be
> set
> >>> to the number of CPU cores to override the auto-detected processor
> count.
> >>>
> >>>> Taking Solr to Production > Dynamic Defaults for
> >> ConcurrentMergeScheduler
> >>>>
> >>>
> >>
> https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
> >>>
> >>>
> >>>
> >>>> On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be>
> >> wrote:
> >>>>
> >>>> I don't know how it affects solr, but if you're interested in java's
> >>>> support to detect cgroup/container limits on cpu/memory etc, you can
> use
> >>>> these links as starting points to investigate.
> >>>> It affect some jvm configuration, like initial GC selection & settings
> >>>> that can affect performance.
> >>>> It was only backported to java 8 quite recently, so if you're still on
> >>>> that might want to check if you're on the latest version.
> >>>>
> >>>> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
> >>>> https://bugs.openjdk.java.net/browse/JDK-8264136
> >>>>
> >>>>
> >>>>> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
> >>>>> Hi Shawn, thanks for your help.
> >>>>>
> >>>>> Given that I’ll put the question in another way.
> >>>>> If Java don’t correctly detect the number of CPU how the overall
> >>>>> performance can be affected by this?
> >>>>>
> >>>>> Ciao,
> >>>>> Vincenzo
> >>>>>
> >>>>> --
> >>>>> mobile: 3498513251
> >>>>> skype: free.dev
> >>>>>
> >>>>>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org>
> wrote:
> >>>>>>
> >>>>>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
> >>>>>>> just asking how can I rely on the number of processors the solr
> >>>> dashboard
> >>>>>>> shows.
> >>>>>>>
> >>>>>>> Just to give you a context, I have a 2 nodes solrcloud instance
> >>>> running in
> >>>>>>> kubernetes.
> >>>>>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu
> available
> >>>> per
> >>>>>>> solr instance.
> >>>>>>> but the Solr pods are deployed in two different kube nodes, and
> >>>> entering
> >>>>>>> the pod with the
> >>>>>>> kubectl exec -ti solr-0  -- /bin/bash
> >>>>>>> and running top I see there are 16 cores available for each solr
> >>>> instance.
> >>>>>>
> >>>>>> The dashboard info comes from Java, and Java gets it from the OS.
> How
> >>>> that works with containers is something I don't know much about.
> Here's
> >>>> what Linux says about a server I have which has two six-core Intel
> CPUs
> >>>> with hyperthreading.  This is bare metal, not a VM or container:
> >>>>>>
> >>>>>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
> >>>>>> processor    : 0
> >>>>>> processor    : 1
> >>>>>> processor    : 2
> >>>>>> processor    : 3
> >>>>>> processor    : 4
> >>>>>> processor    : 5
> >>>>>> processor    : 6
> >>>>>> processor    : 7
> >>>>>> processor    : 8
> >>>>>> processor    : 9
> >>>>>> processor    : 10
> >>>>>> processor    : 11
> >>>>>> processor    : 12
> >>>>>> processor    : 13
> >>>>>> processor    : 14
> >>>>>> processor    : 15
> >>>>>> processor    : 16
> >>>>>> processor    : 17
> >>>>>> processor    : 18
> >>>>>> processor    : 19
> >>>>>> processor    : 20
> >>>>>> processor    : 21
> >>>>>> processor    : 22
> >>>>>> processor    : 23
> >>>>>>
> >>>>>> If I start Solr on that server, the dashboard reports 24 processors.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Shawn
> >>>>>>
> >>>>
> >>>
> >>>
> >>> --
> >>> Vincenzo D'Amore
> >>
> >
> >
> > --
> > Vincenzo D'Amore
>


-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Dave <ha...@gmail.com>.
Walter, I agree, but with large indexes (850+ GB before merge) I just found 31 GB to be my happy spot.  I also set Xms and Xmx to the same value; I have no proof, but it seems to take less processing to keep them the same than to keep allocating different memory footprints.
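
For what it's worth, on my nodes that just means something like this in
solr.in.sh (paths and sizes obviously depend on your install):

  # same min and max heap; staying under 32g keeps compressed oops enabled
  SOLR_HEAP="31g"
  # equivalent explicit form:
  # SOLR_JAVA_MEM="-Xms31g -Xmx31g"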

> On Mar 18, 2022, at 1:38 PM, Vincenzo D'Amore <v....@gmail.com> wrote:
> 
> I did it right now in prod environment:
> {
>  "responseHeader":{
>    "zkConnected":true,
>    "status":0,
>    "QTime":1943,
>    "params":{
>      "q":"*:*",
>      "rows":"1"}},
> 
> then for a while, the QTime is 0. I assume (obviously) that it is cached,
> but after a while the cache expires....
> 
>> On Fri, Mar 18, 2022 at 6:22 PM Dave <ha...@gmail.com> wrote:
>> 
>> I’ve found that each solr instance will take as many cores as it needs per
>> request. Your 2 sec response sounds like you just started the server and
>> then did that search. I never trust the first search as nothing has been
>> put into memory yet. I like to give my jvms 31 gb each and let Linux cache
>> the rest of the files as it sees fit, with swap turned completely off. Also
>> *:* can be heavier than you think if you have every field indexed since
>> it’s like a punch card like system where all the fields have to match.
>> 
>>> On Mar 18, 2022, at 12:45 PM, Vincenzo D'Amore <v....@gmail.com>
>> wrote:
>>> 
>>> Thanks for your support, just sharing what I found until now.
>>> 
>>> I'm working with SolrCloud with a 2 node deployment. This deployment has
>>> many indexes but a main one 160GB index that has become very slow.
>>> Select *:* rows=1 take 2 seconds.
>>> SolrCloud instances are running in kubernetes and are deployed in a pod
>>> with 128GB RAM but only 16GB to JVM.
>>> Looking at Solr Documentation I've found nothing specific about what
>>> happens to the performance if the number of CPUs is not correctly
>> detected.
>>> The only interesting page is the following and it seems to match with
>> your
>>> suggestion.
>>> At the end of paragraph there is a not very clear reference about how the
>>> Concurrent Merge Scheduler behavior can be impacted by the number of
>>> detected CPUs.
>>> 
>>>> Similarly, the system property lucene.cms.override_core_count can be set
>>> to the number of CPU cores to override the auto-detected processor count.
>>> 
>>>> Taking Solr to Production > Dynamic Defaults for
>> ConcurrentMergeScheduler
>>>> 
>>> 
>> https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
>>> 
>>> 
>>> 
>>>> On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be>
>> wrote:
>>>> 
>>>> I don't know how it affects solr, but if you're interested in java's
>>>> support to detect cgroup/container limits on cpu/memory etc, you can use
>>>> these links as starting points to investigate.
>>>> It affect some jvm configuration, like initial GC selection & settings
>>>> that can affect performance.
>>>> It was only backported to java 8 quite recently, so if you're still on
>>>> that might want to check if you're on the latest version.
>>>> 
>>>> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
>>>> https://bugs.openjdk.java.net/browse/JDK-8264136
>>>> 
>>>> 
>>>>> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
>>>>> Hi Shawn, thanks for your help.
>>>>> 
>>>>> Given that I’ll put the question in another way.
>>>>> If Java don’t correctly detect the number of CPU how the overall
>>>>> performance can be affected by this?
>>>>> 
>>>>> Ciao,
>>>>> Vincenzo
>>>>> 
>>>>> --
>>>>> mobile: 3498513251
>>>>> skype: free.dev
>>>>> 
>>>>>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
>>>>>> 
>>>>>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
>>>>>>> just asking how can I rely on the number of processors the solr
>>>> dashboard
>>>>>>> shows.
>>>>>>> 
>>>>>>> Just to give you a context, I have a 2 nodes solrcloud instance
>>>> running in
>>>>>>> kubernetes.
>>>>>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available
>>>> per
>>>>>>> solr instance.
>>>>>>> but the Solr pods are deployed in two different kube nodes, and
>>>> entering
>>>>>>> the pod with the
>>>>>>> kubectl exec -ti solr-0  -- /bin/bash
>>>>>>> and running top I see there are 16 cores available for each solr
>>>> instance.
>>>>>> 
>>>>>> The dashboard info comes from Java, and Java gets it from the OS. How
>>>> that works with containers is something I don't know much about.  Here's
>>>> what Linux says about a server I have which has two six-core Intel CPUs
>>>> with hyperthreading.  This is bare metal, not a VM or container:
>>>>>> 
>>>>>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
>>>>>> processor    : 0
>>>>>> processor    : 1
>>>>>> processor    : 2
>>>>>> processor    : 3
>>>>>> processor    : 4
>>>>>> processor    : 5
>>>>>> processor    : 6
>>>>>> processor    : 7
>>>>>> processor    : 8
>>>>>> processor    : 9
>>>>>> processor    : 10
>>>>>> processor    : 11
>>>>>> processor    : 12
>>>>>> processor    : 13
>>>>>> processor    : 14
>>>>>> processor    : 15
>>>>>> processor    : 16
>>>>>> processor    : 17
>>>>>> processor    : 18
>>>>>> processor    : 19
>>>>>> processor    : 20
>>>>>> processor    : 21
>>>>>> processor    : 22
>>>>>> processor    : 23
>>>>>> 
>>>>>> If I start Solr on that server, the dashboard reports 24 processors.
>>>>>> 
>>>>>> Thanks,
>>>>>> Shawn
>>>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> Vincenzo D'Amore
>> 
> 
> 
> -- 
> Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
I just did it right now in the prod environment:
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":1943,
    "params":{
      "q":"*:*",
      "rows":"1"}},

then for a while, the QTime is 0. I assume (obviously) that it is cached,
but after a while the cache expires....
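
This is the kind of request I'm timing, by the way (collection name is just a
placeholder, default port assumed):

  curl 'http://localhost:8983/solr/mycollection/select?q=*:*&rows=1'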

On Fri, Mar 18, 2022 at 6:22 PM Dave <ha...@gmail.com> wrote:

> I’ve found that each solr instance will take as many cores as it needs per
> request. Your 2 sec response sounds like you just started the server and
> then did that search. I never trust the first search as nothing has been
> put into memory yet. I like to give my jvms 31 gb each and let Linux cache
> the rest of the files as it sees fit, with swap turned completely off. Also
> *:* can be heavier than you think if you have every field indexed since
> it’s like a punch card like system where all the fields have to match.
>
> > On Mar 18, 2022, at 12:45 PM, Vincenzo D'Amore <v....@gmail.com>
> wrote:
> >
> > Thanks for your support, just sharing what I found until now.
> >
> > I'm working with SolrCloud with a 2 node deployment. This deployment has
> > many indexes but a main one 160GB index that has become very slow.
> > Select *:* rows=1 take 2 seconds.
> > SolrCloud instances are running in kubernetes and are deployed in a pod
> > with 128GB RAM but only 16GB to JVM.
> > Looking at Solr Documentation I've found nothing specific about what
> > happens to the performance if the number of CPUs is not correctly
> detected.
> > The only interesting page is the following and it seems to match with
> your
> > suggestion.
> > At the end of paragraph there is a not very clear reference about how the
> > Concurrent Merge Scheduler behavior can be impacted by the number of
> > detected CPUs.
> >
> >> Similarly, the system property lucene.cms.override_core_count can be set
> > to the number of CPU cores to override the auto-detected processor count.
> >
> >> Taking Solr to Production > Dynamic Defaults for
> ConcurrentMergeScheduler
> >>
> >
> https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
> >
> >
> >
> >> On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be>
> wrote:
> >>
> >> I don't know how it affects solr, but if you're interested in java's
> >> support to detect cgroup/container limits on cpu/memory etc, you can use
> >> these links as starting points to investigate.
> >> It affect some jvm configuration, like initial GC selection & settings
> >> that can affect performance.
> >> It was only backported to java 8 quite recently, so if you're still on
> >> that might want to check if you're on the latest version.
> >>
> >> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
> >> https://bugs.openjdk.java.net/browse/JDK-8264136
> >>
> >>
> >>> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
> >>> Hi Shawn, thanks for your help.
> >>>
> >>> Given that I’ll put the question in another way.
> >>> If Java don’t correctly detect the number of CPU how the overall
> >>> performance can be affected by this?
> >>>
> >>> Ciao,
> >>> Vincenzo
> >>>
> >>> --
> >>> mobile: 3498513251
> >>> skype: free.dev
> >>>
> >>>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
> >>>>
> >>>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
> >>>>> just asking how can I rely on the number of processors the solr
> >> dashboard
> >>>>> shows.
> >>>>>
> >>>>> Just to give you a context, I have a 2 nodes solrcloud instance
> >> running in
> >>>>> kubernetes.
> >>>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available
> >> per
> >>>>> solr instance.
> >>>>> but the Solr pods are deployed in two different kube nodes, and
> >> entering
> >>>>> the pod with the
> >>>>> kubectl exec -ti solr-0  -- /bin/bash
> >>>>> and running top I see there are 16 cores available for each solr
> >> instance.
> >>>>
> >>>> The dashboard info comes from Java, and Java gets it from the OS. How
> >> that works with containers is something I don't know much about.  Here's
> >> what Linux says about a server I have which has two six-core Intel CPUs
> >> with hyperthreading.  This is bare metal, not a VM or container:
> >>>>
> >>>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
> >>>> processor    : 0
> >>>> processor    : 1
> >>>> processor    : 2
> >>>> processor    : 3
> >>>> processor    : 4
> >>>> processor    : 5
> >>>> processor    : 6
> >>>> processor    : 7
> >>>> processor    : 8
> >>>> processor    : 9
> >>>> processor    : 10
> >>>> processor    : 11
> >>>> processor    : 12
> >>>> processor    : 13
> >>>> processor    : 14
> >>>> processor    : 15
> >>>> processor    : 16
> >>>> processor    : 17
> >>>> processor    : 18
> >>>> processor    : 19
> >>>> processor    : 20
> >>>> processor    : 21
> >>>> processor    : 22
> >>>> processor    : 23
> >>>>
> >>>> If I start Solr on that server, the dashboard reports 24 processors.
> >>>>
> >>>> Thanks,
> >>>> Shawn
> >>>>
> >>
> >
> >
> > --
> > Vincenzo D'Amore
>


-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by dmitri maziuk <dm...@gmail.com>.
On 2022-03-18 1:35 PM, Vincenzo D'Amore wrote:

> At last, I'm looking at Solr metric but really not sure how to understand
> if it is CPU bound or IO bound.

iostat, iotop, even the regular top has 'wa'(it) number -- you'll 
probably have to install them in your container first.
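
For example, if the image is Debian-based, something along these lines
(you may need to exec in as root):

  apt-get update && apt-get install -y sysstat procps   # iostat comes from sysstat, top from procps
  iostat -x 5    # extended per-device stats every 5s; watch %util and await
  top            # the 'wa' figure is the iowait percentage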

Dima


Re: Solr dashboard - number of CPUs available

Posted by Dave <ha...@gmail.com>.
Again, never ever trust the result speed of a cold search.  Are you warming your index?

https://solr.apache.org/guide/6_6/query-settings-in-solrconfig.html
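
A quick way to see what warming is configured right now is the Config API,
e.g. (hypothetical collection name, default port):

  curl 'http://localhost:8983/solr/mycollection/config?indent=on'

Look for firstSearcher/newSearcher listeners and the autowarmCount on the
caches in the output.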

> On Mar 18, 2022, at 4:23 PM, Vincenzo D'Amore <v....@gmail.com> wrote:
> 
> perSegFilter
> class:org.apache.solr.search.LRUCache
> description:LRU Cache(maxSize=10, initialSize=0, autowarmCount=10,
> regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
> stats:
> CACHE.searcher.perSegFilter.cumulative_evictions:0
> CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
> CACHE.searcher.perSegFilter.cumulative_evictionsRamUsage:0
> CACHE.searcher.perSegFilter.cumulative_hitratio:0
> CACHE.searcher.perSegFilter.cumulative_hits:0
> CACHE.searcher.perSegFilter.cumulative_inserts:0
> CACHE.searcher.perSegFilter.cumulative_lookups:0
> CACHE.searcher.perSegFilter.evictions:0
> CACHE.searcher.perSegFilter.evictionsIdleTime:0
> CACHE.searcher.perSegFilter.evictionsRamUsage:0
> CACHE.searcher.perSegFilter.hitratio:0
> CACHE.searcher.perSegFilter.hits:0
> CACHE.searcher.perSegFilter.inserts:0
> CACHE.searcher.perSegFilter.lookups:0
> CACHE.searcher.perSegFilter.maxIdleTime:-1
> CACHE.searcher.perSegFilter.maxRamMB:-1
> CACHE.searcher.perSegFilter.maxSize:10
> CACHE.searcher.perSegFilter.ramBytesUsed:160
> CACHE.searcher.perSegFilter.size:0
> CACHE.searcher.perSegFilter.warmupTime:0
> queryResultCache
> class:org.apache.solr.search.LRUCache
> description:LRU Cache(maxSize=512, initialSize=512)
> stats:
> CACHE.searcher.queryResultCache.cumulative_evictions:18328
> CACHE.searcher.queryResultCache.cumulative_evictionsIdleTime:0
> CACHE.searcher.queryResultCache.cumulative_evictionsRamUsage:0
> CACHE.searcher.queryResultCache.cumulative_hitratio:0.52
> CACHE.searcher.queryResultCache.cumulative_hits:41849
> CACHE.searcher.queryResultCache.cumulative_inserts:38338
> CACHE.searcher.queryResultCache.cumulative_lookups:80187
> CACHE.searcher.queryResultCache.evictions:10172
> CACHE.searcher.queryResultCache.evictionsIdleTime:0
> CACHE.searcher.queryResultCache.evictionsRamUsage:0
> CACHE.searcher.queryResultCache.hitratio:0.44
> CACHE.searcher.queryResultCache.hits:8385
> CACHE.searcher.queryResultCache.inserts:10684
> CACHE.searcher.queryResultCache.lookups:19069
> CACHE.searcher.queryResultCache.maxIdleTime:-1
> CACHE.searcher.queryResultCache.maxRamMB:-1
> CACHE.searcher.queryResultCache.maxSize:512
> CACHE.searcher.queryResultCache.ramBytesUsed:2582208
> CACHE.searcher.queryResultCache.size:512
> CACHE.searcher.queryResultCache.warmupTime:0
> fieldValueCache
> class:org.apache.solr.search.FastLRUCache
> description:Concurrent LRU Cache(maxSize=10000, initialSize=10,
> minSize=9000, acceptableSize=9500, cleanupThread=false)
> stats:
> CACHE.searcher.fieldValueCache.cleanupThread:false
> CACHE.searcher.fieldValueCache.cumulative_evictions:0
> CACHE.searcher.fieldValueCache.cumulative_hitratio:0
> CACHE.searcher.fieldValueCache.cumulative_hits:0
> CACHE.searcher.fieldValueCache.cumulative_idleEvictions:0
> CACHE.searcher.fieldValueCache.cumulative_inserts:0
> CACHE.searcher.fieldValueCache.cumulative_lookups:0
> CACHE.searcher.fieldValueCache.evictions:0
> CACHE.searcher.fieldValueCache.hitratio:0
> CACHE.searcher.fieldValueCache.hits:0
> CACHE.searcher.fieldValueCache.idleEvictions:0
> CACHE.searcher.fieldValueCache.inserts:0
> CACHE.searcher.fieldValueCache.lookups:0
> CACHE.searcher.fieldValueCache.maxRamMB:-1
> CACHE.searcher.fieldValueCache.ramBytesUsed:1328
> CACHE.searcher.fieldValueCache.size:0
> CACHE.searcher.fieldValueCache.warmupTime:0
> fieldCache
> class:org.apache.solr.search.SolrFieldCacheBean
> description:Provides introspection of the Solr FieldCache
> stats:
> CACHE.core.fieldCache.entries_count:13
> CACHE.core.fieldCache.entry#0:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;17c9d12b',
> field='vmEnabled', size =~ 19 KB
> CACHE.core.fieldCache.entry#1:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;8c7641c',
> field='vmEnabled', size =~ 27.9 KB
> CACHE.core.fieldCache.entry#10:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4124d95b',
> field='vmEnabled', size =~ 4 MB
> CACHE.core.fieldCache.entry#11:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;2b6b13f0',
> field='vmEnabled', size =~ 2.3 MB
> CACHE.core.fieldCache.entry#12:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;68b638d3',
> field='vmEnabled', size =~ 333.7 KB
> CACHE.core.fieldCache.entry#2:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4ead0cea',
> field='vmEnabled', size =~ 4.1 MB
> CACHE.core.fieldCache.entry#3:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;5a181271',
> field='vmEnabled', size =~ 197.1 KB
> CACHE.core.fieldCache.entry#4:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;f0fad35',
> field='vmEnabled', size =~ 31.4 KB
> CACHE.core.fieldCache.entry#5:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;237effff',
> field='vmEnabled', size =~ 207 KB
> CACHE.core.fieldCache.entry#6:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4a5ef757',
> field='vmEnabled', size =~ 4 MB
> CACHE.core.fieldCache.entry#7:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;ed3d5a8',
> field='vmEnabled', size =~ 3.9 MB
> CACHE.core.fieldCache.entry#8:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;42602d42',
> field='vmEnabled', size =~ 241.7 KB
> CACHE.core.fieldCache.entry#9:segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;19f94cee',
> field='vmEnabled', size =~ 17.6 KB
> CACHE.core.fieldCache.total_size:19.3 MB
> filterCache
> class:org.apache.solr.search.FastLRUCache
> description:Concurrent LRU Cache(maxSize=512, initialSize=512, minSize=460,
> acceptableSize=486, cleanupThread=false)
> stats:
> CACHE.searcher.filterCache.cleanupThread:false
> CACHE.searcher.filterCache.cumulative_evictions:2217
> CACHE.searcher.filterCache.cumulative_hitratio:0.95
> CACHE.searcher.filterCache.cumulative_hits:1213572
> CACHE.searcher.filterCache.cumulative_idleEvictions:0
> CACHE.searcher.filterCache.cumulative_inserts:67924
> CACHE.searcher.filterCache.cumulative_lookups:1279606
> CACHE.searcher.filterCache.evictions:530
> CACHE.searcher.filterCache.hitratio:1
> CACHE.searcher.filterCache.hits:304411
> CACHE.searcher.filterCache.idleEvictions:0
> CACHE.searcher.filterCache.inserts:1409
> CACHE.searcher.filterCache.lookups:305327
> CACHE.searcher.filterCache.maxRamMB:-1
> CACHE.searcher.filterCache.ramBytesUsed:1716745269
> CACHE.searcher.filterCache.size:471
> CACHE.searcher.filterCache.warmupTime:0
> documentCache
> class:org.apache.solr.search.LRUCache
> description:LRU Cache(maxSize=512, initialSize=512)
> stats:
> CACHE.searcher.documentCache.cumulative_evictions:15097
> CACHE.searcher.documentCache.cumulative_evictionsIdleTime:0
> CACHE.searcher.documentCache.cumulative_evictionsRamUsage:0
> CACHE.searcher.documentCache.cumulative_hitratio:0.45
> CACHE.searcher.documentCache.cumulative_hits:20482
> CACHE.searcher.documentCache.cumulative_inserts:25025
> CACHE.searcher.documentCache.cumulative_lookups:45507
> CACHE.searcher.documentCache.evictions:7651
> CACHE.searcher.documentCache.evictionsIdleTime:0
> CACHE.searcher.documentCache.evictionsRamUsage:0
> CACHE.searcher.documentCache.hitratio:0.51
> CACHE.searcher.documentCache.hits:8566
> CACHE.searcher.documentCache.inserts:8164
> CACHE.searcher.documentCache.lookups:16730
> CACHE.searcher.documentCache.maxIdleTime:-1
> CACHE.searcher.documentCache.maxRamMB:-1
> CACHE.searcher.documentCache.maxSize:512
> CACHE.searcher.documentCache.ramBytesUsed:1077408
> CACHE.searcher.documentCache.size:512
> CACHE.searcher.documentCache.warmupTime:0
> 
>> On Fri, Mar 18, 2022 at 9:22 PM Vincenzo D'Amore <v....@gmail.com> wrote:
>> 
>> You were right, *:* first query rows=1 QTime=1947, second query rows=2
>> QTime=0
>> 
>> This is the CACHE, not sure how read this:
>> 
>> 
>>   - perSegFilter
>>      - class:org.apache.solr.search.LRUCache
>>      - description:LRU Cache(maxSize=10, initialSize=0,
>>      autowarmCount=10,
>>      regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
>>      - stats:
>>         - CACHE.searcher.perSegFilter.cumulative_evictions:0
>>         - CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
>>         - CACHE.searcher.perSegFilter.cumulative_evictionsRamUsage:0
>>         - CACHE.searcher.perSegFilter.cumulative_hitratio:0
>>         - CACHE.searcher.perSegFilter.cumulative_hits:0
>>         - CACHE.searcher.perSegFilter.cumulative_inserts:0
>>         - CACHE.searcher.perSegFilter.cumulative_lookups:0
>>         - CACHE.searcher.perSegFilter.evictions:0
>>         - CACHE.searcher.perSegFilter.evictionsIdleTime:0
>>         - CACHE.searcher.perSegFilter.evictionsRamUsage:0
>>         - CACHE.searcher.perSegFilter.hitratio:0
>>         - CACHE.searcher.perSegFilter.hits:0
>>         - CACHE.searcher.perSegFilter.inserts:0
>>         - CACHE.searcher.perSegFilter.lookups:0
>>         - CACHE.searcher.perSegFilter.maxIdleTime:-1
>>         - CACHE.searcher.perSegFilter.maxRamMB:-1
>>         - CACHE.searcher.perSegFilter.maxSize:10
>>         - CACHE.searcher.perSegFilter.ramBytesUsed:160
>>         - CACHE.searcher.perSegFilter.size:0
>>         - CACHE.searcher.perSegFilter.warmupTime:0
>>      - queryResultCache
>>      - class:org.apache.solr.search.LRUCache
>>      - description:LRU Cache(maxSize=512, initialSize=512)
>>      - stats:
>>         - CACHE.searcher.queryResultCache.cumulative_evictions:18328
>>         - CACHE.searcher.queryResultCache.cumulative_evictionsIdleTime:0
>>         - CACHE.searcher.queryResultCache.cumulative_evictionsRamUsage:0
>>         - CACHE.searcher.queryResultCache.cumulative_hitratio:0.52
>>         - CACHE.searcher.queryResultCache.cumulative_hits:41849
>>         - CACHE.searcher.queryResultCache.cumulative_inserts:38338
>>         - CACHE.searcher.queryResultCache.cumulative_lookups:80187
>>         - CACHE.searcher.queryResultCache.evictions:10172
>>         - CACHE.searcher.queryResultCache.evictionsIdleTime:0
>>         - CACHE.searcher.queryResultCache.evictionsRamUsage:0
>>         - CACHE.searcher.queryResultCache.hitratio:0.44
>>         - CACHE.searcher.queryResultCache.hits:8385
>>         - CACHE.searcher.queryResultCache.inserts:10684
>>         - CACHE.searcher.queryResultCache.lookups:19069
>>         - CACHE.searcher.queryResultCache.maxIdleTime:-1
>>         - CACHE.searcher.queryResultCache.maxRamMB:-1
>>         - CACHE.searcher.queryResultCache.maxSize:512
>>         - CACHE.searcher.queryResultCache.ramBytesUsed:2582208
>>         - CACHE.searcher.queryResultCache.size:512
>>         - CACHE.searcher.queryResultCache.warmupTime:0
>>      - fieldValueCache
>>      - class:org.apache.solr.search.FastLRUCache
>>      - description:Concurrent LRU Cache(maxSize=10000, initialSize=10,
>>      minSize=9000, acceptableSize=9500, cleanupThread=false)
>>      - stats:
>>         - CACHE.searcher.fieldValueCache.cleanupThread:false
>>         - CACHE.searcher.fieldValueCache.cumulative_evictions:0
>>         - CACHE.searcher.fieldValueCache.cumulative_hitratio:0
>>         - CACHE.searcher.fieldValueCache.cumulative_hits:0
>>         - CACHE.searcher.fieldValueCache.cumulative_idleEvictions:0
>>         - CACHE.searcher.fieldValueCache.cumulative_inserts:0
>>         - CACHE.searcher.fieldValueCache.cumulative_lookups:0
>>         - CACHE.searcher.fieldValueCache.evictions:0
>>         - CACHE.searcher.fieldValueCache.hitratio:0
>>         - CACHE.searcher.fieldValueCache.hits:0
>>         - CACHE.searcher.fieldValueCache.idleEvictions:0
>>         - CACHE.searcher.fieldValueCache.inserts:0
>>         - CACHE.searcher.fieldValueCache.lookups:0
>>         - CACHE.searcher.fieldValueCache.maxRamMB:-1
>>         - CACHE.searcher.fieldValueCache.ramBytesUsed:1328
>>         - CACHE.searcher.fieldValueCache.size:0
>>         - CACHE.searcher.fieldValueCache.warmupTime:0
>>      - fieldCache
>>      - class:org.apache.solr.search.SolrFieldCacheBean
>>      - description:Provides introspection of the Solr FieldCache
>>      - stats:
>>         - CACHE.core.fieldCache.entries_count:13
>>         - CACHE.core.fieldCache.entry#0:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;17c9d12b',
>>         field='vmEnabled', size =~ 19 KB
>>         - CACHE.core.fieldCache.entry#1:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;8c7641c',
>>         field='vmEnabled', size =~ 27.9 KB
>>         - CACHE.core.fieldCache.entry#10:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4124d95b',
>>         field='vmEnabled', size =~ 4 MB
>>         - CACHE.core.fieldCache.entry#11:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;2b6b13f0',
>>         field='vmEnabled', size =~ 2.3 MB
>>         - CACHE.core.fieldCache.entry#12:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;68b638d3',
>>         field='vmEnabled', size =~ 333.7 KB
>>         - CACHE.core.fieldCache.entry#2:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4ead0cea',
>>         field='vmEnabled', size =~ 4.1 MB
>>         - CACHE.core.fieldCache.entry#3:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;5a181271',
>>         field='vmEnabled', size =~ 197.1 KB
>>         - CACHE.core.fieldCache.entry#4:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;f0fad35',
>>         field='vmEnabled', size =~ 31.4 KB
>>         - CACHE.core.fieldCache.entry#5:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;237effff',
>>         field='vmEnabled', size =~ 207 KB
>>         - CACHE.core.fieldCache.entry#6:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4a5ef757',
>>         field='vmEnabled', size =~ 4 MB
>>         - CACHE.core.fieldCache.entry#7:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;ed3d5a8',
>>         field='vmEnabled', size =~ 3.9 MB
>>         - CACHE.core.fieldCache.entry#8:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;42602d42',
>>         field='vmEnabled', size =~ 241.7 KB
>>         - CACHE.core.fieldCache.entry#9:
>>         segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;19f94cee',
>>         field='vmEnabled', size =~ 17.6 KB
>>         - CACHE.core.fieldCache.total_size:19.3 MB
>>      - filterCache
>>      - class:org.apache.solr.search.FastLRUCache
>>      - description:Concurrent LRU Cache(maxSize=512, initialSize=512,
>>      minSize=460, acceptableSize=486, cleanupThread=false)
>>      - stats:
>>         - CACHE.searcher.filterCache.cleanupThread:false
>>         - CACHE.searcher.filterCache.cumulative_evictions:2217
>>         - CACHE.searcher.filterCache.cumulative_hitratio:0.95
>>         - CACHE.searcher.filterCache.cumulative_hits:1213572
>>         - CACHE.searcher.filterCache.cumulative_idleEvictions:0
>>         - CACHE.searcher.filterCache.cumulative_inserts:67924
>>         - CACHE.searcher.filterCache.cumulative_lookups:1279606
>>         - CACHE.searcher.filterCache.evictions:530
>>         - CACHE.searcher.filterCache.hitratio:1
>>         - CACHE.searcher.filterCache.hits:304411
>>         - CACHE.searcher.filterCache.idleEvictions:0
>>         - CACHE.searcher.filterCache.inserts:1409
>>         - CACHE.searcher.filterCache.lookups:305327
>>         - CACHE.searcher.filterCache.maxRamMB:-1
>>         - CACHE.searcher.filterCache.ramBytesUsed:1716745269
>>         - CACHE.searcher.filterCache.size:471
>>         - CACHE.searcher.filterCache.warmupTime:0
>>      - documentCache
>>      - class:org.apache.solr.search.LRUCache
>>      - description:LRU Cache(maxSize=512, initialSize=512)
>>      - stats:
>>         - CACHE.searcher.documentCache.cumulative_evictions:15097
>>         - CACHE.searcher.documentCache.cumulative_evictionsIdleTime:0
>>         - CACHE.searcher.documentCache.cumulative_evictionsRamUsage:0
>>         - CACHE.searcher.documentCache.cumulative_hitratio:0.45
>>         - CACHE.searcher.documentCache.cumulative_hits:20482
>>         - CACHE.searcher.documentCache.cumulative_inserts:25025
>>         - CACHE.searcher.documentCache.cumulative_lookups:45507
>>         - CACHE.searcher.documentCache.evictions:7651
>>         - CACHE.searcher.documentCache.evictionsIdleTime:0
>>         - CACHE.searcher.documentCache.evictionsRamUsage:0
>>         - CACHE.searcher.documentCache.hitratio:0.51
>>         - CACHE.searcher.documentCache.hits:8566
>>         - CACHE.searcher.documentCache.inserts:8164
>>         - CACHE.searcher.documentCache.lookups:16730
>>         - CACHE.searcher.documentCache.maxIdleTime:-1
>>         - CACHE.searcher.documentCache.maxRamMB:-1
>>         - CACHE.searcher.documentCache.maxSize:512
>>         - CACHE.searcher.documentCache.ramBytesUsed:1077408
>>         - CACHE.searcher.documentCache.size:512
>>         - CACHE.searcher.documentCache.warmupTime:0
>> 
>> 
>> On Fri, Mar 18, 2022 at 8:52 PM matthew sporleder <ms...@gmail.com>
>> wrote:
>> 
>>> My guess is that it's thrashing on a "cold" open of the index file.  I'm
>>> sure the next query of *:*&rows=2 is pretty fast since caches get
>>> populated.
>>> 
>>> I don't know what to say for next steps - lower the jvm memory and/or
>>> check
>>> the stats in the admin console -> core select -> Plugins/Stats -> CACHE.
>>> 
>>> What are the storage speeds?  IMHO you are disk bound.
>>> 
>>> 
>>> On Fri, Mar 18, 2022 at 3:42 PM Vincenzo D'Amore <v....@gmail.com>
>>> wrote:
>>> 
>>>> Is it possible that there are too frequent commits? I mean if each
>>> commit
>>>> usually invalidates the cache, even a stupid *:* rows=1 can be
>>>> affected.
>>>> How can I see how frequent commits are? Or when the latest commit has
>>> been
>>>> done?
>>>> 
>>>> On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore <v....@gmail.com>
>>>> wrote:
>>>> 
>>>>> Ok, everything you said is right, but nevertheless even right now a
>>>> stupid
>>>>> *:* rows=1 runs in almost 2 seconds.
>>>>> The average document size is pretty small, less than roughly 100/200
>>>>> bytes.
>>>>> Does someone know if the average doc size is available in the metrics?
>>>>> 
>>>>> {
>>>>>  "responseHeader":{
>>>>>    "zkConnected":true,
>>>>>    "status":0,
>>>>>    "QTime":2033,
>>>>>    "params":{
>>>>>      "q":"*:*",
>>>>>      "rows":"1"}},
>>>>> 
>>>>> On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <
>>> msporleder@gmail.com>
>>>>> wrote:
>>>>> 
>>>>>> You are getting this general advice but, sadly, it depends on your
>>> doc
>>>>>> sizes, query complexity, write frequency, and a bunch of other stuff
>>> I
>>>>>> don't know about.
>>>>>> 
>>>>>> I prefer to run with the *minimum* JVM memory to handle throughput
>>>>>> (without
>>>>>> OOM) and let the OS do caching because I update/write to the index
>>> every
>>>>>> few minutes making my *solr* caching pretty worthless.
>>>>>> 
>>>>>> tuning solr also includes tuning queries.  Start with timing id:123
>>> type
>>>>>> K:V lookups and work your complexity up from there.  Use debug=true
>>> and
>>>>>> attempt to read it.
>>>>>> 
>>>>>> There are many many knobs.  You need to set a baseline, then a
>>> target,
>>>>>> then
>>>>>> a thesis on how to get there.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v.damore@gmail.com
>>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> We have modified the kubernetes configuration and restarted
>>> SolrCloud
>>>>>>> cluster, now we have 16 cores per Solr instance.
>>>>>>> The performance does not seem to be improved though.
>>>>>>> The load average is 0.43 0.83 1.00, to me it seems an IO bound
>>>> problem.
>>>>>>> Looking at the index I see 162M documents, 234M maxDocs, 71M
>>>> deleted...
>>>>>>> maybe this core needs to be optimized.
>>>>>>> The INDEX.size is 70GB, what do you think if I raise the size
>>>> allocated
>>>>>>> from the JVM to 64GB in order to have the index in memory?
>>>>>>> At last, I'm looking at Solr metric but really not sure how to
>>>>>> understand
>>>>>>> if it is CPU bound or IO bound.
>>>>>>> 
>>>>>>> On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <
>>>> wunder@wunderwood.org
>>>>>>> 
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> First look at the system metrics. Is it CPU bound or IO bound?
>>> Each
>>>>>>>> request is single threaded, so a CPU bound system will have one
>>> core
>>>>>> used
>>>>>>>> at roughly 100% for that time. An IO bound system will not be
>>> using
>>>>>> much
>>>>>>>> CPU but will have threads in iowait and lots of disk reads.
>>>>>>>> 
>>>>>>>> After you know that, then you know what to work on. If it is IO
>>>> bound,
>>>>>>> get
>>>>>>>> enough RAM for the OS, JVM, and index files to all be in memory.
>>> If
>>>>>> it is
>>>>>>>> CPU bound, get a faster processor and work on the config to have
>>> the
>>>>>>>> request do less work. Sharding can also help.
>>>>>>>> 
>>>>>>>> I’m not a fan of always choosing 31 GB for the JVM. Allocate
>>> only as
>>>>>> much
>>>>>>>> as is needed. Java will use the whole heap whether it is needed
>>> or
>>>>>> not.
>>>>>>> You
>>>>>>>> might only need 8 GB. All of our clusters run with 16 GB. That
>>>>>> includes
>>>>>>>> some machines with 36 cores.
>>>>>>>> 
>>>>>>>> 
>>>>>>> --
>>>>>>> Vincenzo D'Amore
>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Vincenzo D'Amore
>>>>> 
>>>>> 
>>>> 
>>>> --
>>>> Vincenzo D'Amore
>>>> 
>>> 
>> 
>> 
>> --
>> Vincenzo D'Amore
>> 
>> 
> 
> -- 
> Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
perSegFilter
class:org.apache.solr.search.LRUCache
description:LRU Cache(maxSize=10, initialSize=0, autowarmCount=10,
regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
stats:
CACHE.searcher.perSegFilter.cumulative_evictions:0
CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
CACHE.searcher.perSegFilter.cumulative_evictionsRamUsage:0
CACHE.searcher.perSegFilter.cumulative_hitratio:0
CACHE.searcher.perSegFilter.cumulative_hits:0
CACHE.searcher.perSegFilter.cumulative_inserts:0
CACHE.searcher.perSegFilter.cumulative_lookups:0
CACHE.searcher.perSegFilter.evictions:0
CACHE.searcher.perSegFilter.evictionsIdleTime:0
CACHE.searcher.perSegFilter.evictionsRamUsage:0
CACHE.searcher.perSegFilter.hitratio:0
CACHE.searcher.perSegFilter.hits:0
CACHE.searcher.perSegFilter.inserts:0
CACHE.searcher.perSegFilter.lookups:0
CACHE.searcher.perSegFilter.maxIdleTime:-1
CACHE.searcher.perSegFilter.maxRamMB:-1
CACHE.searcher.perSegFilter.maxSize:10
CACHE.searcher.perSegFilter.ramBytesUsed:160
CACHE.searcher.perSegFilter.size:0
CACHE.searcher.perSegFilter.warmupTime:0
queryResultCache
class:org.apache.solr.search.LRUCache
description:LRU Cache(maxSize=512, initialSize=512)
stats:
CACHE.searcher.queryResultCache.cumulative_evictions:18328
CACHE.searcher.queryResultCache.cumulative_evictionsIdleTime:0
CACHE.searcher.queryResultCache.cumulative_evictionsRamUsage:0
CACHE.searcher.queryResultCache.cumulative_hitratio:0.52
CACHE.searcher.queryResultCache.cumulative_hits:41849
CACHE.searcher.queryResultCache.cumulative_inserts:38338
CACHE.searcher.queryResultCache.cumulative_lookups:80187
CACHE.searcher.queryResultCache.evictions:10172
CACHE.searcher.queryResultCache.evictionsIdleTime:0
CACHE.searcher.queryResultCache.evictionsRamUsage:0
CACHE.searcher.queryResultCache.hitratio:0.44
CACHE.searcher.queryResultCache.hits:8385
CACHE.searcher.queryResultCache.inserts:10684
CACHE.searcher.queryResultCache.lookups:19069
CACHE.searcher.queryResultCache.maxIdleTime:-1
CACHE.searcher.queryResultCache.maxRamMB:-1
CACHE.searcher.queryResultCache.maxSize:512
CACHE.searcher.queryResultCache.ramBytesUsed:2582208
CACHE.searcher.queryResultCache.size:512
CACHE.searcher.queryResultCache.warmupTime:0
fieldValueCache
class:org.apache.solr.search.FastLRUCache
description:Concurrent LRU Cache(maxSize=10000, initialSize=10,
minSize=9000, acceptableSize=9500, cleanupThread=false)
stats:
CACHE.searcher.fieldValueCache.cleanupThread:false
CACHE.searcher.fieldValueCache.cumulative_evictions:0
CACHE.searcher.fieldValueCache.cumulative_hitratio:0
CACHE.searcher.fieldValueCache.cumulative_hits:0
CACHE.searcher.fieldValueCache.cumulative_idleEvictions:0
CACHE.searcher.fieldValueCache.cumulative_inserts:0
CACHE.searcher.fieldValueCache.cumulative_lookups:0
CACHE.searcher.fieldValueCache.evictions:0
CACHE.searcher.fieldValueCache.hitratio:0
CACHE.searcher.fieldValueCache.hits:0
CACHE.searcher.fieldValueCache.idleEvictions:0
CACHE.searcher.fieldValueCache.inserts:0
CACHE.searcher.fieldValueCache.lookups:0
CACHE.searcher.fieldValueCache.maxRamMB:-1
CACHE.searcher.fieldValueCache.ramBytesUsed:1328
CACHE.searcher.fieldValueCache.size:0
CACHE.searcher.fieldValueCache.warmupTime:0
fieldCache
class:org.apache.solr.search.SolrFieldCacheBean
description:Provides introspection of the Solr FieldCache
stats:
CACHE.core.fieldCache.entries_count:13
CACHE.core.fieldCache.entry#0:segment='org.apache.lucene.index.IndexReader$CacheKey@17c9d12b',
field='vmEnabled', size =~ 19 KB
CACHE.core.fieldCache.entry#1:segment='org.apache.lucene.index.IndexReader$CacheKey@8c7641c',
field='vmEnabled', size =~ 27.9 KB
CACHE.core.fieldCache.entry#10:segment='org.apache.lucene.index.IndexReader$CacheKey@4124d95b',
field='vmEnabled', size =~ 4 MB
CACHE.core.fieldCache.entry#11:segment='org.apache.lucene.index.IndexReader$CacheKey@2b6b13f0',
field='vmEnabled', size =~ 2.3 MB
CACHE.core.fieldCache.entry#12:segment='org.apache.lucene.index.IndexReader$CacheKey@68b638d3',
field='vmEnabled', size =~ 333.7 KB
CACHE.core.fieldCache.entry#2:segment='org.apache.lucene.index.IndexReader$CacheKey@4ead0cea',
field='vmEnabled', size =~ 4.1 MB
CACHE.core.fieldCache.entry#3:segment='org.apache.lucene.index.IndexReader$CacheKey@5a181271',
field='vmEnabled', size =~ 197.1 KB
CACHE.core.fieldCache.entry#4:segment='org.apache.lucene.index.IndexReader$CacheKey@f0fad35',
field='vmEnabled', size =~ 31.4 KB
CACHE.core.fieldCache.entry#5:segment='org.apache.lucene.index.IndexReader$CacheKey@237effff',
field='vmEnabled', size =~ 207 KB
CACHE.core.fieldCache.entry#6:segment='org.apache.lucene.index.IndexReader$CacheKey@4a5ef757',
field='vmEnabled', size =~ 4 MB
CACHE.core.fieldCache.entry#7:segment='org.apache.lucene.index.IndexReader$CacheKey@ed3d5a8',
field='vmEnabled', size =~ 3.9 MB
CACHE.core.fieldCache.entry#8:segment='org.apache.lucene.index.IndexReader$CacheKey@42602d42',
field='vmEnabled', size =~ 241.7 KB
CACHE.core.fieldCache.entry#9:segment='org.apache.lucene.index.IndexReader$CacheKey@19f94cee',
field='vmEnabled', size =~ 17.6 KB
CACHE.core.fieldCache.total_size:19.3 MB
filterCache
class:org.apache.solr.search.FastLRUCache
description:Concurrent LRU Cache(maxSize=512, initialSize=512, minSize=460,
acceptableSize=486, cleanupThread=false)
stats:
CACHE.searcher.filterCache.cleanupThread:false
CACHE.searcher.filterCache.cumulative_evictions:2217
CACHE.searcher.filterCache.cumulative_hitratio:0.95
CACHE.searcher.filterCache.cumulative_hits:1213572
CACHE.searcher.filterCache.cumulative_idleEvictions:0
CACHE.searcher.filterCache.cumulative_inserts:67924
CACHE.searcher.filterCache.cumulative_lookups:1279606
CACHE.searcher.filterCache.evictions:530
CACHE.searcher.filterCache.hitratio:1
CACHE.searcher.filterCache.hits:304411
CACHE.searcher.filterCache.idleEvictions:0
CACHE.searcher.filterCache.inserts:1409
CACHE.searcher.filterCache.lookups:305327
CACHE.searcher.filterCache.maxRamMB:-1
CACHE.searcher.filterCache.ramBytesUsed:1716745269
CACHE.searcher.filterCache.size:471
CACHE.searcher.filterCache.warmupTime:0
documentCache
class:org.apache.solr.search.LRUCache
description:LRU Cache(maxSize=512, initialSize=512)
stats:
CACHE.searcher.documentCache.cumulative_evictions:15097
CACHE.searcher.documentCache.cumulative_evictionsIdleTime:0
CACHE.searcher.documentCache.cumulative_evictionsRamUsage:0
CACHE.searcher.documentCache.cumulative_hitratio:0.45
CACHE.searcher.documentCache.cumulative_hits:20482
CACHE.searcher.documentCache.cumulative_inserts:25025
CACHE.searcher.documentCache.cumulative_lookups:45507
CACHE.searcher.documentCache.evictions:7651
CACHE.searcher.documentCache.evictionsIdleTime:0
CACHE.searcher.documentCache.evictionsRamUsage:0
CACHE.searcher.documentCache.hitratio:0.51
CACHE.searcher.documentCache.hits:8566
CACHE.searcher.documentCache.inserts:8164
CACHE.searcher.documentCache.lookups:16730
CACHE.searcher.documentCache.maxIdleTime:-1
CACHE.searcher.documentCache.maxRamMB:-1
CACHE.searcher.documentCache.maxSize:512
CACHE.searcher.documentCache.ramBytesUsed:1077408
CACHE.searcher.documentCache.size:512
CACHE.searcher.documentCache.warmupTime:0

On Fri, Mar 18, 2022 at 9:22 PM Vincenzo D'Amore <v....@gmail.com> wrote:

> You were right, *:* first query rows=1 QTime=1947, second query rows=2
> QTime=0
>
> This is the CACHE, not sure how read this:
>
>
>    - perSegFilter
>       - class:org.apache.solr.search.LRUCache
>       - description:LRU Cache(maxSize=10, initialSize=0,
>       autowarmCount=10,
>       regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
>       - stats:
>          - CACHE.searcher.perSegFilter.cumulative_evictions:0
>          - CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
>          - CACHE.searcher.perSegFilter.cumulative_evictionsRamUsage:0
>          - CACHE.searcher.perSegFilter.cumulative_hitratio:0
>          - CACHE.searcher.perSegFilter.cumulative_hits:0
>          - CACHE.searcher.perSegFilter.cumulative_inserts:0
>          - CACHE.searcher.perSegFilter.cumulative_lookups:0
>          - CACHE.searcher.perSegFilter.evictions:0
>          - CACHE.searcher.perSegFilter.evictionsIdleTime:0
>          - CACHE.searcher.perSegFilter.evictionsRamUsage:0
>          - CACHE.searcher.perSegFilter.hitratio:0
>          - CACHE.searcher.perSegFilter.hits:0
>          - CACHE.searcher.perSegFilter.inserts:0
>          - CACHE.searcher.perSegFilter.lookups:0
>          - CACHE.searcher.perSegFilter.maxIdleTime:-1
>          - CACHE.searcher.perSegFilter.maxRamMB:-1
>          - CACHE.searcher.perSegFilter.maxSize:10
>          - CACHE.searcher.perSegFilter.ramBytesUsed:160
>          - CACHE.searcher.perSegFilter.size:0
>          - CACHE.searcher.perSegFilter.warmupTime:0
>       - queryResultCache
>       - class:org.apache.solr.search.LRUCache
>       - description:LRU Cache(maxSize=512, initialSize=512)
>       - stats:
>          - CACHE.searcher.queryResultCache.cumulative_evictions:18328
>          - CACHE.searcher.queryResultCache.cumulative_evictionsIdleTime:0
>          - CACHE.searcher.queryResultCache.cumulative_evictionsRamUsage:0
>          - CACHE.searcher.queryResultCache.cumulative_hitratio:0.52
>          - CACHE.searcher.queryResultCache.cumulative_hits:41849
>          - CACHE.searcher.queryResultCache.cumulative_inserts:38338
>          - CACHE.searcher.queryResultCache.cumulative_lookups:80187
>          - CACHE.searcher.queryResultCache.evictions:10172
>          - CACHE.searcher.queryResultCache.evictionsIdleTime:0
>          - CACHE.searcher.queryResultCache.evictionsRamUsage:0
>          - CACHE.searcher.queryResultCache.hitratio:0.44
>          - CACHE.searcher.queryResultCache.hits:8385
>          - CACHE.searcher.queryResultCache.inserts:10684
>          - CACHE.searcher.queryResultCache.lookups:19069
>          - CACHE.searcher.queryResultCache.maxIdleTime:-1
>          - CACHE.searcher.queryResultCache.maxRamMB:-1
>          - CACHE.searcher.queryResultCache.maxSize:512
>          - CACHE.searcher.queryResultCache.ramBytesUsed:2582208
>          - CACHE.searcher.queryResultCache.size:512
>          - CACHE.searcher.queryResultCache.warmupTime:0
>       - fieldValueCache
>       - class:org.apache.solr.search.FastLRUCache
>       - description:Concurrent LRU Cache(maxSize=10000, initialSize=10,
>       minSize=9000, acceptableSize=9500, cleanupThread=false)
>       - stats:
>          - CACHE.searcher.fieldValueCache.cleanupThread:false
>          - CACHE.searcher.fieldValueCache.cumulative_evictions:0
>          - CACHE.searcher.fieldValueCache.cumulative_hitratio:0
>          - CACHE.searcher.fieldValueCache.cumulative_hits:0
>          - CACHE.searcher.fieldValueCache.cumulative_idleEvictions:0
>          - CACHE.searcher.fieldValueCache.cumulative_inserts:0
>          - CACHE.searcher.fieldValueCache.cumulative_lookups:0
>          - CACHE.searcher.fieldValueCache.evictions:0
>          - CACHE.searcher.fieldValueCache.hitratio:0
>          - CACHE.searcher.fieldValueCache.hits:0
>          - CACHE.searcher.fieldValueCache.idleEvictions:0
>          - CACHE.searcher.fieldValueCache.inserts:0
>          - CACHE.searcher.fieldValueCache.lookups:0
>          - CACHE.searcher.fieldValueCache.maxRamMB:-1
>          - CACHE.searcher.fieldValueCache.ramBytesUsed:1328
>          - CACHE.searcher.fieldValueCache.size:0
>          - CACHE.searcher.fieldValueCache.warmupTime:0
>       - fieldCache
>       - class:org.apache.solr.search.SolrFieldCacheBean
>       - description:Provides introspection of the Solr FieldCache
>       - stats:
>          - CACHE.core.fieldCache.entries_count:13
>          - CACHE.core.fieldCache.entry#0:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;17c9d12b',
>          field='vmEnabled', size =~ 19 KB
>          - CACHE.core.fieldCache.entry#1:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;8c7641c',
>          field='vmEnabled', size =~ 27.9 KB
>          - CACHE.core.fieldCache.entry#10:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4124d95b',
>          field='vmEnabled', size =~ 4 MB
>          - CACHE.core.fieldCache.entry#11:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;2b6b13f0',
>          field='vmEnabled', size =~ 2.3 MB
>          - CACHE.core.fieldCache.entry#12:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;68b638d3',
>          field='vmEnabled', size =~ 333.7 KB
>          - CACHE.core.fieldCache.entry#2:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4ead0cea',
>          field='vmEnabled', size =~ 4.1 MB
>          - CACHE.core.fieldCache.entry#3:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;5a181271',
>          field='vmEnabled', size =~ 197.1 KB
>          - CACHE.core.fieldCache.entry#4:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;f0fad35',
>          field='vmEnabled', size =~ 31.4 KB
>          - CACHE.core.fieldCache.entry#5:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;237effff',
>          field='vmEnabled', size =~ 207 KB
>          - CACHE.core.fieldCache.entry#6:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;4a5ef757',
>          field='vmEnabled', size =~ 4 MB
>          - CACHE.core.fieldCache.entry#7:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;ed3d5a8',
>          field='vmEnabled', size =~ 3.9 MB
>          - CACHE.core.fieldCache.entry#8:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;42602d42',
>          field='vmEnabled', size =~ 241.7 KB
>          - CACHE.core.fieldCache.entry#9:
>          segment='org.apache.lucene.index.IndexReader$CacheKey@&#8203;19f94cee',
>          field='vmEnabled', size =~ 17.6 KB
>          - CACHE.core.fieldCache.total_size:19.3 MB
>       - filterCache
>       - class:org.apache.solr.search.FastLRUCache
>       - description:Concurrent LRU Cache(maxSize=512, initialSize=512,
>       minSize=460, acceptableSize=486, cleanupThread=false)
>       - stats:
>          - CACHE.searcher.filterCache.cleanupThread:false
>          - CACHE.searcher.filterCache.cumulative_evictions:2217
>          - CACHE.searcher.filterCache.cumulative_hitratio:0.95
>          - CACHE.searcher.filterCache.cumulative_hits:1213572
>          - CACHE.searcher.filterCache.cumulative_idleEvictions:0
>          - CACHE.searcher.filterCache.cumulative_inserts:67924
>          - CACHE.searcher.filterCache.cumulative_lookups:1279606
>          - CACHE.searcher.filterCache.evictions:530
>          - CACHE.searcher.filterCache.hitratio:1
>          - CACHE.searcher.filterCache.hits:304411
>          - CACHE.searcher.filterCache.idleEvictions:0
>          - CACHE.searcher.filterCache.inserts:1409
>          - CACHE.searcher.filterCache.lookups:305327
>          - CACHE.searcher.filterCache.maxRamMB:-1
>          - CACHE.searcher.filterCache.ramBytesUsed:1716745269
>          - CACHE.searcher.filterCache.size:471
>          - CACHE.searcher.filterCache.warmupTime:0
>       - documentCache
>       - class:org.apache.solr.search.LRUCache
>       - description:LRU Cache(maxSize=512, initialSize=512)
>       - stats:
>          - CACHE.searcher.documentCache.cumulative_evictions:15097
>          - CACHE.searcher.documentCache.cumulative_evictionsIdleTime:0
>          - CACHE.searcher.documentCache.cumulative_evictionsRamUsage:0
>          - CACHE.searcher.documentCache.cumulative_hitratio:0.45
>          - CACHE.searcher.documentCache.cumulative_hits:20482
>          - CACHE.searcher.documentCache.cumulative_inserts:25025
>          - CACHE.searcher.documentCache.cumulative_lookups:45507
>          - CACHE.searcher.documentCache.evictions:7651
>          - CACHE.searcher.documentCache.evictionsIdleTime:0
>          - CACHE.searcher.documentCache.evictionsRamUsage:0
>          - CACHE.searcher.documentCache.hitratio:0.51
>          - CACHE.searcher.documentCache.hits:8566
>          - CACHE.searcher.documentCache.inserts:8164
>          - CACHE.searcher.documentCache.lookups:16730
>          - CACHE.searcher.documentCache.maxIdleTime:-1
>          - CACHE.searcher.documentCache.maxRamMB:-1
>          - CACHE.searcher.documentCache.maxSize:512
>          - CACHE.searcher.documentCache.ramBytesUsed:1077408
>          - CACHE.searcher.documentCache.size:512
>          - CACHE.searcher.documentCache.warmupTime:0
>
>
> On Fri, Mar 18, 2022 at 8:52 PM matthew sporleder <ms...@gmail.com>
> wrote:
>
>> My guess is that it's thrashing on a "cold" open of the index file.  I'm
>> sure the next query of *:*&rows=2 is pretty fast since caches get
>> populated.
>>
>> I don't know what to say for next steps - lower the jvm memory and/or
>> check
>> the stats in the admin console -> core select -> Plugins/Stats -> CACHE.
>>
>> What are the storage speeds?  IMHO you are disk bound.
>>
>>
>> On Fri, Mar 18, 2022 at 3:42 PM Vincenzo D'Amore <v....@gmail.com>
>> wrote:
>>
>> > Is it possible that there are too frequent commits? I mean if each
>> commit
>> > usually invalidates the cache, even a stupid *:* rows=1 can be
>> > affected.
>> > How can I see how frequent commits are? Or when the latest commit has
>> been
>> > done?
>> >
>> > On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore <v....@gmail.com>
>> > wrote:
>> >
>> > > Ok, everything you said is right, but nevertheless even right now a
>> > stupid
>> > > *:* rows=1 runs in almost 2 seconds.
>> > > The average document size is pretty small, less than roughly 100/200
>> > > bytes.
>> > > Does someone know if the average doc size is available in the metrics?
>> > >
>> > > {
>> > >   "responseHeader":{
>> > >     "zkConnected":true,
>> > >     "status":0,
>> > >     "QTime":2033,
>> > >     "params":{
>> > >       "q":"*:*",
>> > >       "rows":"1"}},
>> > >
>> > > On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <
>> msporleder@gmail.com>
>> > > wrote:
>> > >
>> > >> You are getting this general advice but, sadly, it depends on your
>> doc
>> > >> sizes, query complexity, write frequency, and a bunch of other stuff
>> I
>> > >> don't know about.
>> > >>
>> > >> I prefer to run with the *minimum* JVM memory to handle throughput
>> > >> (without
>> > >> OOM) and let the OS do caching because I update/write to the index
>> every
>> > >> few minutes making my *solr* caching pretty worthless.
>> > >>
>> > >> tuning solr also includes tuning queries.  Start with timing id:123
>> type
>> > >> K:V lookups and work your complexity up from there.  Use debug=true
>> and
>> > >> attempt to read it.
>> > >>
>> > >> There are many many knobs.  You need to set a baseline, then a
>> target,
>> > >> then
>> > >> a thesis on how to get there.
>> > >>
>> > >>
>> > >>
>> > >> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v.damore@gmail.com
>> >
>> > >> wrote:
>> > >>
>> > >> > We have modified the kubernetes configuration and restarted
>> SolrCloud
>> > >> > cluster, now we have 16 cores per Solr instance.
>> > >> > The performance does not seem to be improved though.
>> > >> > The load average is 0.43 0.83 1.00, to me it seems an IO bound
>> > problem.
>> > >> > Looking at the index I see 162M documents, 234M maxDocs, 71M
>> > deleted...
>> > >> > maybe this core needs to be optimized.
>> > >> > The INDEX.size is 70GB, what do you think if I raise the size
>> > allocated
>> > >> > from the JVM to 64GB in order to have the index in memory?
>> > >> > At last, I'm looking at Solr metric but really not sure how to
>> > >> understand
>> > >> > if it is CPU bound or IO bound.
>> > >> >
>> > >> > On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <
>> > wunder@wunderwood.org
>> > >> >
>> > >> > wrote:
>> > >> >
>> > >> > > First look at the system metrics. Is it CPU bound or IO bound?
>> Each
>> > >> > > request is single threaded, so a CPU bound system will have one
>> core
>> > >> used
>> > >> > > at roughly 100% for that time. An IO bound system will not be
>> using
>> > >> much
>> > >> > > CPU but will have threads in iowait and lots of disk reads.
>> > >> > >
>> > >> > > After you know that, then you know what to work on. If it is IO
>> > bound,
>> > >> > get
>> > >> > > enough RAM for the OS, JVM, and index files to all be in memory.
>> If
>> > >> it is
>> > >> > > CPU bound, get a faster processor and work on the config to have
>> the
>> > >> > > request do less work. Sharding can also help.
>> > >> > >
>> > >> > > I’m not a fan of always choosing 31 GB for the JVM. Allocate
>> only as
>> > >> much
>> > >> > > as is needed. Java will use the whole heap whether it is needed
>> or
>> > >> not.
>> > >> > You
>> > >> > > might only need 8 GB. All of our clusters run with 16 GB. That
>> > >> includes
>> > >> > > some machines with 36 cores.
>> > >> > >
>> > >> > >
>> > >> > --
>> > >> > Vincenzo D'Amore
>> > >> >
>> > >>
>> > >
>> > >
>> > > --
>> > > Vincenzo D'Amore
>> > >
>> > >
>> >
>> > --
>> > Vincenzo D'Amore
>> >
>>
>
>
> --
> Vincenzo D'Amore
>
>

-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
You were right, *:* first query rows=1 QTime=1947, second query rows=2
QTime=0

This is the CACHE section, not sure how to read this:


   - perSegFilter
      - class:org.apache.solr.search.LRUCache
      - description:LRU Cache(maxSize=10, initialSize=0, autowarmCount=10,
      regenerator=org.apache.solr.search.NoOpRegenerator@642f416a)
      - stats:
         - CACHE.searcher.perSegFilter.cumulative_evictions:0
         - CACHE.searcher.perSegFilter.cumulative_evictionsIdleTime:0
         - CACHE.searcher.perSegFilter.cumulative_evictionsRamUsage:0
         - CACHE.searcher.perSegFilter.cumulative_hitratio:0
         - CACHE.searcher.perSegFilter.cumulative_hits:0
         - CACHE.searcher.perSegFilter.cumulative_inserts:0
         - CACHE.searcher.perSegFilter.cumulative_lookups:0
         - CACHE.searcher.perSegFilter.evictions:0
         - CACHE.searcher.perSegFilter.evictionsIdleTime:0
         - CACHE.searcher.perSegFilter.evictionsRamUsage:0
         - CACHE.searcher.perSegFilter.hitratio:0
         - CACHE.searcher.perSegFilter.hits:0
         - CACHE.searcher.perSegFilter.inserts:0
         - CACHE.searcher.perSegFilter.lookups:0
         - CACHE.searcher.perSegFilter.maxIdleTime:-1
         - CACHE.searcher.perSegFilter.maxRamMB:-1
         - CACHE.searcher.perSegFilter.maxSize:10
         - CACHE.searcher.perSegFilter.ramBytesUsed:160
         - CACHE.searcher.perSegFilter.size:0
         - CACHE.searcher.perSegFilter.warmupTime:0
      - queryResultCache
      - class:org.apache.solr.search.LRUCache
      - description:LRU Cache(maxSize=512, initialSize=512)
      - stats:
         - CACHE.searcher.queryResultCache.cumulative_evictions:18328
         - CACHE.searcher.queryResultCache.cumulative_evictionsIdleTime:0
         - CACHE.searcher.queryResultCache.cumulative_evictionsRamUsage:0
         - CACHE.searcher.queryResultCache.cumulative_hitratio:0.52
         - CACHE.searcher.queryResultCache.cumulative_hits:41849
         - CACHE.searcher.queryResultCache.cumulative_inserts:38338
         - CACHE.searcher.queryResultCache.cumulative_lookups:80187
         - CACHE.searcher.queryResultCache.evictions:10172
         - CACHE.searcher.queryResultCache.evictionsIdleTime:0
         - CACHE.searcher.queryResultCache.evictionsRamUsage:0
         - CACHE.searcher.queryResultCache.hitratio:0.44
         - CACHE.searcher.queryResultCache.hits:8385
         - CACHE.searcher.queryResultCache.inserts:10684
         - CACHE.searcher.queryResultCache.lookups:19069
         - CACHE.searcher.queryResultCache.maxIdleTime:-1
         - CACHE.searcher.queryResultCache.maxRamMB:-1
         - CACHE.searcher.queryResultCache.maxSize:512
         - CACHE.searcher.queryResultCache.ramBytesUsed:2582208
         - CACHE.searcher.queryResultCache.size:512
         - CACHE.searcher.queryResultCache.warmupTime:0
      - fieldValueCache
      - class:org.apache.solr.search.FastLRUCache
      - description:Concurrent LRU Cache(maxSize=10000, initialSize=10,
      minSize=9000, acceptableSize=9500, cleanupThread=false)
      - stats:
         - CACHE.searcher.fieldValueCache.cleanupThread:false
         - CACHE.searcher.fieldValueCache.cumulative_evictions:0
         - CACHE.searcher.fieldValueCache.cumulative_hitratio:0
         - CACHE.searcher.fieldValueCache.cumulative_hits:0
         - CACHE.searcher.fieldValueCache.cumulative_idleEvictions:0
         - CACHE.searcher.fieldValueCache.cumulative_inserts:0
         - CACHE.searcher.fieldValueCache.cumulative_lookups:0
         - CACHE.searcher.fieldValueCache.evictions:0
         - CACHE.searcher.fieldValueCache.hitratio:0
         - CACHE.searcher.fieldValueCache.hits:0
         - CACHE.searcher.fieldValueCache.idleEvictions:0
         - CACHE.searcher.fieldValueCache.inserts:0
         - CACHE.searcher.fieldValueCache.lookups:0
         - CACHE.searcher.fieldValueCache.maxRamMB:-1
         - CACHE.searcher.fieldValueCache.ramBytesUsed:1328
         - CACHE.searcher.fieldValueCache.size:0
         - CACHE.searcher.fieldValueCache.warmupTime:0
      - fieldCache
      - class:org.apache.solr.search.SolrFieldCacheBean
      - description:Provides introspection of the Solr FieldCache
      - stats:
         - CACHE.core.fieldCache.entries_count:13
         - CACHE.core.fieldCache.entry#0:
         segment='org.apache.lucene.index.IndexReader$CacheKey@17c9d12b',
         field='vmEnabled', size =~ 19 KB
         - CACHE.core.fieldCache.entry#1:
         segment='org.apache.lucene.index.IndexReader$CacheKey@8c7641c',
         field='vmEnabled', size =~ 27.9 KB
         - CACHE.core.fieldCache.entry#10:
         segment='org.apache.lucene.index.IndexReader$CacheKey@4124d95b',
         field='vmEnabled', size =~ 4 MB
         - CACHE.core.fieldCache.entry#11:
         segment='org.apache.lucene.index.IndexReader$CacheKey@2b6b13f0',
         field='vmEnabled', size =~ 2.3 MB
         - CACHE.core.fieldCache.entry#12:
         segment='org.apache.lucene.index.IndexReader$CacheKey@68b638d3',
         field='vmEnabled', size =~ 333.7 KB
         - CACHE.core.fieldCache.entry#2:
         segment='org.apache.lucene.index.IndexReader$CacheKey@4ead0cea',
         field='vmEnabled', size =~ 4.1 MB
         - CACHE.core.fieldCache.entry#3:
         segment='org.apache.lucene.index.IndexReader$CacheKey@5a181271',
         field='vmEnabled', size =~ 197.1 KB
         - CACHE.core.fieldCache.entry#4:
         segment='org.apache.lucene.index.IndexReader$CacheKey@f0fad35',
         field='vmEnabled', size =~ 31.4 KB
         - CACHE.core.fieldCache.entry#5:
         segment='org.apache.lucene.index.IndexReader$CacheKey@237effff',
         field='vmEnabled', size =~ 207 KB
         - CACHE.core.fieldCache.entry#6:
         segment='org.apache.lucene.index.IndexReader$CacheKey@4a5ef757',
         field='vmEnabled', size =~ 4 MB
         - CACHE.core.fieldCache.entry#7:
         segment='org.apache.lucene.index.IndexReader$CacheKey@ed3d5a8',
         field='vmEnabled', size =~ 3.9 MB
         - CACHE.core.fieldCache.entry#8:
         segment='org.apache.lucene.index.IndexReader$CacheKey@42602d42',
         field='vmEnabled', size =~ 241.7 KB
         - CACHE.core.fieldCache.entry#9:
         segment='org.apache.lucene.index.IndexReader$CacheKey@19f94cee',
         field='vmEnabled', size =~ 17.6 KB
         - CACHE.core.fieldCache.total_size:19.3 MB
      - filterCache
      - class:org.apache.solr.search.FastLRUCache
      - description:Concurrent LRU Cache(maxSize=512, initialSize=512,
      minSize=460, acceptableSize=486, cleanupThread=false)
      - stats:
         - CACHE.searcher.filterCache.cleanupThread:false
         - CACHE.searcher.filterCache.cumulative_evictions:2217
         - CACHE.searcher.filterCache.cumulative_hitratio:0.95
         - CACHE.searcher.filterCache.cumulative_hits:1213572
         - CACHE.searcher.filterCache.cumulative_idleEvictions:0
         - CACHE.searcher.filterCache.cumulative_inserts:67924
         - CACHE.searcher.filterCache.cumulative_lookups:1279606
         - CACHE.searcher.filterCache.evictions:530
         - CACHE.searcher.filterCache.hitratio:1
         - CACHE.searcher.filterCache.hits:304411
         - CACHE.searcher.filterCache.idleEvictions:0
         - CACHE.searcher.filterCache.inserts:1409
         - CACHE.searcher.filterCache.lookups:305327
         - CACHE.searcher.filterCache.maxRamMB:-1
         - CACHE.searcher.filterCache.ramBytesUsed:1716745269
         - CACHE.searcher.filterCache.size:471
         - CACHE.searcher.filterCache.warmupTime:0
      - documentCache
      - class:org.apache.solr.search.LRUCache
      - description:LRU Cache(maxSize=512, initialSize=512)
      - stats:
         - CACHE.searcher.documentCache.cumulative_evictions:15097
         - CACHE.searcher.documentCache.cumulative_evictionsIdleTime:0
         - CACHE.searcher.documentCache.cumulative_evictionsRamUsage:0
         - CACHE.searcher.documentCache.cumulative_hitratio:0.45
         - CACHE.searcher.documentCache.cumulative_hits:20482
         - CACHE.searcher.documentCache.cumulative_inserts:25025
         - CACHE.searcher.documentCache.cumulative_lookups:45507
         - CACHE.searcher.documentCache.evictions:7651
         - CACHE.searcher.documentCache.evictionsIdleTime:0
         - CACHE.searcher.documentCache.evictionsRamUsage:0
         - CACHE.searcher.documentCache.hitratio:0.51
         - CACHE.searcher.documentCache.hits:8566
         - CACHE.searcher.documentCache.inserts:8164
         - CACHE.searcher.documentCache.lookups:16730
         - CACHE.searcher.documentCache.maxIdleTime:-1
         - CACHE.searcher.documentCache.maxRamMB:-1
         - CACHE.searcher.documentCache.maxSize:512
         - CACHE.searcher.documentCache.ramBytesUsed:1077408
         - CACHE.searcher.documentCache.size:512
         - CACHE.searcher.documentCache.warmupTime:0
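
As an aside, the same numbers can be pulled from the Metrics API instead of the
admin UI, which makes it easier to watch them change over time. A minimal
sketch; the host, port and cache prefix below are assumptions, adjust them to
your deployment:

# dump only the filterCache metrics for every core on this node
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.searcher.filterCache'

# poll all searcher caches every 5 seconds to see hit ratios and evictions move
watch -n 5 "curl -s 'http://localhost:8983/solr/admin/metrics?group=core&prefix=CACHE.searcher'"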


On Fri, Mar 18, 2022 at 8:52 PM matthew sporleder <ms...@gmail.com>
wrote:

> My guess is that it's thrashing on a "cold" open of the index file.  I'm
> sure the next query of *:*&rows=2 is pretty fast since caches get
> populated.
>
> I don't know what to say for next steps - lower the jvm memory and/or check
> the stats in the admin console -> core select -> Plugins/Stats -> CACHE.
>
> What are the storage speeds?  IMHO you are disk bound.
>
>
> On Fri, Mar 18, 2022 at 3:42 PM Vincenzo D'Amore <v....@gmail.com>
> wrote:
>
> > Is it possible that there are too frequent commits? I mean if each commit
> > usually invalidates the cache, even a stupid *:* rows=1 can be
> > affected.
> > How can I see how frequent commits are? Or when the latest commit has
> been
> > done?
> >
> > On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore <v....@gmail.com>
> > wrote:
> >
> > > Ok, everything you said is right, but nevertheless even right now a
> > stupid
> > > *:* rows=1 runs in almost 2 seconds.
> > > The average document size is pretty small, less than roughly 100/200
> > > bytes.
> > > Does someone know if the average doc size is available in the metrics?
> > >
> > > {
> > >   "responseHeader":{
> > >     "zkConnected":true,
> > >     "status":0,
> > >     "QTime":2033,
> > >     "params":{
> > >       "q":"*:*",
> > >       "rows":"1"}},
> > >
> > > On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <
> msporleder@gmail.com>
> > > wrote:
> > >
> > >> You are getting this general advice but, sadly, it depends on your doc
> > >> sizes, query complexity, write frequency, and a bunch of other stuff I
> > >> don't know about.
> > >>
> > >> I prefer to run with the *minimum* JVM memory to handle throughput
> > >> (without
> > >> OOM) and let the OS do caching because I update/write to the index
> every
> > >> few minutes making my *solr* caching pretty worthless.
> > >>
> > >> tuning solr also includes tuning queries.  Start with timing id:123
> type
> > >> K:V lookups and work your complexity up from there.  Use debug=true
> and
> > >> attempt to read it.
> > >>
> > >> There are many many knobs.  You need to set a baseline, then a target,
> > >> then
> > >> a thesis on how to get there.
> > >>
> > >>
> > >>
> > >> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v....@gmail.com>
> > >> wrote:
> > >>
> > >> > We have modified the kubernetes configuration and restarted
> SolrCloud
> > >> > cluster, now we have 16 cores per Solr instance.
> > >> > The performance does not seem to be improved though.
> > >> > The load average is 0.43 0.83 1.00, to me it seems an IO bound
> > problem.
> > >> > Looking at the index I see 162M documents, 234M maxDocs, 71M
> > deleted...
> > >> > maybe this core needs to be optimized.
> > >> > The INDEX.size is 70GB, what do you think if I raise the size
> > allocated
> > >> > from the JVM to 64GB in order to have the index in memory?
> > >> > At last, I'm looking at Solr metric but really not sure how to
> > >> understand
> > >> > if it is CPU bound or IO bound.
> > >> >
> > >> > On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <
> > wunder@wunderwood.org
> > >> >
> > >> > wrote:
> > >> >
> > >> > > First look at the system metrics. Is it CPU bound or IO bound?
> Each
> > >> > > request is single threaded, so a CPU bound system will have one
> core
> > >> used
> > >> > > at roughly 100% for that time. An IO bound system will not be
> using
> > >> much
> > >> > > CPU but will have threads in iowait and lots of disk reads.
> > >> > >
> > >> > > After you know that, then you know what to work on. If it is IO
> > bound,
> > >> > get
> > >> > > enough RAM for the OS, JVM, and index files to all be in memory.
> If
> > >> it is
> > >> > > CPU bound, get a faster processor and work on the config to have
> the
> > >> > > request do less work. Sharding can also help.
> > >> > >
> > >> > > I’m not a fan of always choosing 31 GB for the JVM. Allocate only
> as
> > >> much
> > >> > > as is needed. Java will use the whole heap whether it is needed or
> > >> not.
> > >> > You
> > >> > > might only need 8 GB. All of our clusters run with 16 GB. That
> > >> includes
> > >> > > some machines with 36 cores.
> > >> > >
> > >> > >
> > >> > --
> > >> > Vincenzo D'Amore
> > >> >
> > >>
> > >
> > >
> > > --
> > > Vincenzo D'Amore
> > >
> > >
> >
> > --
> > Vincenzo D'Amore
> >
>


-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by matthew sporleder <ms...@gmail.com>.
My guess is that it's thrashing on a "cold" open of the index file.  I'm
sure the next query of *:*&rows=2 is pretty fast since caches get populated.

I don't know what to say for next steps - lower the jvm memory and/or check
the stats in the admin console -> core select -> Plugins/Stats -> CACHE.

What are the storage speeds?  IMHO you are disk bound.
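
A rough sketch of how to check that from inside the pod, assuming the sysstat
tools are installed there (device names will differ):

# per-device utilization and average wait while the slow query is replayed;
# %util near 100 and a high await point at the disks
iostat -x 5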


On Fri, Mar 18, 2022 at 3:42 PM Vincenzo D'Amore <v....@gmail.com> wrote:

> Is it possible that there are too frequent commits? I mean if each commit
> usually invalidates the cache, even a stupid *:* rows=1 can be
> affected.
> How can I see how frequent commits are? Or when the latest commit has been
> done?
>
> On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore <v....@gmail.com>
> wrote:
>
> > Ok, everything you said is right, but nevertheless even right now a
> stupid
> > *:* rows=1 runs in almost 2 seconds.
> > The average document size is pretty small, less than roughly 100/200
> > bytes.
> > Does someone know if the average doc size is available in the metrics?
> >
> > {
> >   "responseHeader":{
> >     "zkConnected":true,
> >     "status":0,
> >     "QTime":2033,
> >     "params":{
> >       "q":"*:*",
> >       "rows":"1"}},
> >
> > On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <ms...@gmail.com>
> > wrote:
> >
> >> You are getting this general advice but, sadly, it depends on your doc
> >> sizes, query complexity, write frequency, and a bunch of other stuff I
> >> don't know about.
> >>
> >> I prefer to run with the *minimum* JVM memory to handle throughput
> >> (without
> >> OOM) and let the OS do caching because I update/write to the index every
> >> few minutes making my *solr* caching pretty worthless.
> >>
> >> tuning solr also includes tuning queries.  Start with timing id:123 type
> >> K:V lookups and work your complexity up from there.  Use debug=true and
> >> attempt to read it.
> >>
> >> There are many many knobs.  You need to set a baseline, then a target,
> >> then
> >> a thesis on how to get there.
> >>
> >>
> >>
> >> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v....@gmail.com>
> >> wrote:
> >>
> >> > We have modified the kubernetes configuration and restarted SolrCloud
> >> > cluster, now we have 16 cores per Solr instance.
> >> > The performance does not seem to be improved though.
> >> > The load average is 0.43 0.83 1.00, to me it seems an IO bound
> problem.
> >> > Looking at the index I see 162M documents, 234M maxDocs, 71M
> deleted...
> >> > maybe this core needs to be optimized.
> >> > The INDEX.size is 70GB, what do you think if I raise the size
> allocated
> >> > from the JVM to 64GB in order to have the index in memory?
> >> > At last, I'm looking at Solr metric but really not sure how to
> >> understand
> >> > if it is CPU bound or IO bound.
> >> >
> >> > On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <
> wunder@wunderwood.org
> >> >
> >> > wrote:
> >> >
> >> > > First look at the system metrics. Is it CPU bound or IO bound? Each
> >> > > request is single threaded, so a CPU bound system will have one core
> >> used
> >> > > at roughly 100% for that time. An IO bound system will not be using
> >> much
> >> > > CPU but will have threads in iowait and lots of disk reads.
> >> > >
> >> > > After you know that, then you know what to work on. If it is IO
> bound,
> >> > get
> >> > > enough RAM for the OS, JVM, and index files to all be in memory. If
> >> it is
> >> > > CPU bound, get a faster processor and work on the config to have the
> >> > > request do less work. Sharding can also help.
> >> > >
> >> > > I’m not a fan of always choosing 31 GB for the JVM. Allocate only as
> >> much
> >> > > as is needed. Java will use the whole heap whether it is needed or
> >> not.
> >> > You
> >> > > might only need 8 GB. All of our clusters run with 16 GB. That
> >> includes
> >> > > some machines with 36 cores.
> >> > >
> >> > >
> >> > --
> >> > Vincenzo D'Amore
> >> >
> >>
> >
> >
> > --
> > Vincenzo D'Amore
> >
> >
>
> --
> Vincenzo D'Amore
>

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
Is it possible that commits are too frequent? I mean, if each commit
usually invalidates the cache, even a stupid *:* rows=1 can be affected.
How can I see how frequent the commits are? Or when the latest commit was
done?
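
I guess I can check it roughly like this (a sketch; the host, collection and
core names below are placeholders for my real ones):

# how commits are configured (autoCommit / autoSoftCommit maxTime)
curl 'http://localhost:8983/solr/mycollection/config/updateHandler'

# cumulative commit counters; sampling this twice and diffing gives the rate
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=UPDATE.updateHandler'

# last commit time of a core (index/lastModified in the response)
curl 'http://localhost:8983/solr/mycollection_shard1_replica_n1/admin/luke?show=index&numTerms=0'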

On Fri, Mar 18, 2022 at 8:36 PM Vincenzo D'Amore <v....@gmail.com> wrote:

> Ok, everything you said is right, but nevertheless even right now a stupid
> *:* rows=1 runs in almost 2 seconds.
> The average document size is pretty small, less than roughly 100/200
> bytes.
> Does someone know if the average doc size is available in the metrics?
>
> {
>   "responseHeader":{
>     "zkConnected":true,
>     "status":0,
>     "QTime":2033,
>     "params":{
>       "q":"*:*",
>       "rows":"1"}},
>
> On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <ms...@gmail.com>
> wrote:
>
>> You are getting this general advice but, sadly, it depends on your doc
>> sizes, query complexity, write frequency, and a bunch of other stuff I
>> don't know about.
>>
>> I prefer to run with the *minimum* JVM memory to handle throughput
>> (without
>> OOM) and let the OS do caching because I update/write to the index every
>> few minutes making my *solr* caching pretty worthless.
>>
>> tuning solr also includes tuning queries.  Start with timing id:123 type
>> K:V lookups and work your complexity up from there.  Use debug=true and
>> attempt to read it.
>>
>> There are many many knobs.  You need to set a baseline, then a target,
>> then
>> a thesis on how to get there.
>>
>>
>>
>> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v....@gmail.com>
>> wrote:
>>
>> > We have modified the kubernetes configuration and restarted SolrCloud
>> > cluster, now we have 16 cores per Solr instance.
>> > The performance does not seem to be improved though.
>> > The load average is 0.43 0.83 1.00, to me it seems an IO bound problem.
>> > Looking at the index I see 162M documents, 234M maxDocs, 71M deleted...
>> > maybe this core needs to be optimized.
>> > The INDEX.size is 70GB, what do you think if I raise the size allocated
>> > from the JVM to 64GB in order to have the index in memory?
>> > At last, I'm looking at Solr metric but really not sure how to
>> understand
>> > if it is CPU bound or IO bound.
>> >
>> > On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <wunder@wunderwood.org
>> >
>> > wrote:
>> >
>> > > First look at the system metrics. Is it CPU bound or IO bound? Each
>> > > request is single threaded, so a CPU bound system will have one core
>> used
>> > > at roughly 100% for that time. An IO bound system will not be using
>> much
>> > > CPU but will have threads in iowait and lots of disk reads.
>> > >
>> > > After you know that, then you know what to work on. If it is IO bound,
>> > get
>> > > enough RAM for the OS, JVM, and index files to all be in memory. If
>> it is
>> > > CPU bound, get a faster processor and work on the config to have the
>> > > request do less work. Sharding can also help.
>> > >
>> > > I’m not a fan of always choosing 31 GB for the JVM. Allocate only as
>> much
>> > > as is needed. Java will use the whole heap whether it is needed or
>> not.
>> > You
>> > > might only need 8 GB. All of our clusters run with 16 GB. That
>> includes
>> > > some machines with 36 cores.
>> > >
>> > >
>> > --
>> > Vincenzo D'Amore
>> >
>>
>
>
> --
> Vincenzo D'Amore
>
>

-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
Ok, everything you said is right, but nevertheless even right now a stupid
*:* rows=1 runs in almost 2 seconds.
The average document size is pretty small, less than roughly 100-200 bytes.
Does anyone know if the average doc size is available in the metrics?

{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":2033,
    "params":{
      "q":"*:*",
      "rows":"1"}},

On Fri, Mar 18, 2022 at 7:50 PM matthew sporleder <ms...@gmail.com>
wrote:

> You are getting this general advice but, sadly, it depends on your doc
> sizes, query complexity, write frequency, and a bunch of other stuff I
> don't know about.
>
> I prefer to run with the *minimum* JVM memory to handle throughput (without
> OOM) and let the OS do caching because I update/write to the index every
> few minutes making my *solr* caching pretty worthless.
>
> tuning solr also includes tuning queries.  Start with timing id:123 type
> K:V lookups and work your complexity up from there.  Use debug=true and
> attempt to read it.
>
> There are many many knobs.  You need to set a baseline, then a target, then
> a thesis on how to get there.
>
>
>
> On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v....@gmail.com>
> wrote:
>
> > We have modified the kubernetes configuration and restarted SolrCloud
> > cluster, now we have 16 cores per Solr instance.
> > The performance does not seem to be improved though.
> > The load average is 0.43 0.83 1.00, to me it seems an IO bound problem.
> > Looking at the index I see 162M documents, 234M maxDocs, 71M deleted...
> > maybe this core needs to be optimized.
> > The INDEX.size is 70GB, what do you think if I raise the size allocated
> > from the JVM to 64GB in order to have the index in memory?
> > At last, I'm looking at Solr metric but really not sure how to understand
> > if it is CPU bound or IO bound.
> >
> > On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <wu...@wunderwood.org>
> > wrote:
> >
> > > First look at the system metrics. Is it CPU bound or IO bound? Each
> > > request is single threaded, so a CPU bound system will have one core
> used
> > > at roughly 100% for that time. An IO bound system will not be using
> much
> > > CPU but will have threads in iowait and lots of disk reads.
> > >
> > > After you know that, then you know what to work on. If it is IO bound,
> > get
> > > enough RAM for the OS, JVM, and index files to all be in memory. If it
> is
> > > CPU bound, get a faster processor and work on the config to have the
> > > request do less work. Sharding can also help.
> > >
> > > I’m not a fan of always choosing 31 GB for the JVM. Allocate only as
> much
> > > as is needed. Java will use the whole heap whether it is needed or not.
> > You
> > > might only need 8 GB. All of our clusters run with 16 GB. That includes
> > > some machines with 36 cores.
> > >
> > >
> > --
> > Vincenzo D'Amore
> >
>


-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by matthew sporleder <ms...@gmail.com>.
You are getting this general advice but, sadly, it depends on your doc
sizes, query complexity, write frequency, and a bunch of other stuff I
don't know about.

I prefer to run with the *minimum* JVM memory to handle throughput (without
OOM) and let the OS do caching because I update/write to the index every
few minutes making my *solr* caching pretty worthless.

tuning solr also includes tuning queries.  Start with timing id:123 type
K:V lookups and work your complexity up from there.  Use debug=true and
attempt to read it.

There are many many knobs.  You need to set a baseline, then a target, then
a thesis on how to get there.
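
For example, something like this as a starting point (a sketch; the collection
name, id field and value are placeholders):

# baseline: a single-document lookup by key
curl 'http://localhost:8983/solr/mycollection/select?q=id:123&rows=1'

# the same query with debug output, to see the per-component prepare/process timings
curl 'http://localhost:8983/solr/mycollection/select?q=id:123&rows=1&debug=true'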



On Fri, Mar 18, 2022 at 2:36 PM Vincenzo D'Amore <v....@gmail.com> wrote:

> We have modified the kubernetes configuration and restarted SolrCloud
> cluster, now we have 16 cores per Solr instance.
> The performance does not seem to be improved though.
> The load average is 0.43 0.83 1.00, to me it seems an IO bound problem.
> Looking at the index I see 162M documents, 234M maxDocs, 71M deleted...
> maybe this core needs to be optimized.
> The INDEX.size is 70GB, what do you think if I raise the size allocated
> from the JVM to 64GB in order to have the index in memory?
> At last, I'm looking at Solr metric but really not sure how to understand
> if it is CPU bound or IO bound.
>
> On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <wu...@wunderwood.org>
> wrote:
>
> > First look at the system metrics. Is it CPU bound or IO bound? Each
> > request is single threaded, so a CPU bound system will have one core used
> > at roughly 100% for that time. An IO bound system will not be using much
> > CPU but will have threads in iowait and lots of disk reads.
> >
> > After you know that, then you know what to work on. If it is IO bound,
> get
> > enough RAM for the OS, JVM, and index files to all be in memory. If it is
> > CPU bound, get a faster processor and work on the config to have the
> > request do less work. Sharding can also help.
> >
> > I’m not a fan of always choosing 31 GB for the JVM. Allocate only as much
> > as is needed. Java will use the whole heap whether it is needed or not.
> You
> > might only need 8 GB. All of our clusters run with 16 GB. That includes
> > some machines with 36 cores.
> >
> >
> --
> Vincenzo D'Amore
>

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
Hi all, just sharing, I think I found what's wrong in this SolrCloud deployment.
Looking at the Solr metrics I see the SEARCHER.new entry incrementing very
rapidly, one, two or even three times every 5 seconds.

SEARCHER.new: 11848 ... 3/4 seconds ... SEARCHER.new: 11849 ... 5 seconds
... SEARCHER.new: 11850

It seems something is committing frequently; looking at
UPDATE.updateHandler.commits I basically see the same numbers, just a little
bit higher.
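
This is roughly how I'm sampling it (a sketch; the host and the log path are
assumptions for my setup):

# sample the new-searcher and commit counters, run a couple of times and compare
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=SEARCHER.new'
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=UPDATE.updateHandler.commits'

# the Solr log also shows each commit and whether it opens a new searcher
grep "start commit" /var/solr/logs/solr.log | tail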


On Fri, Mar 18, 2022 at 11:56 PM Shawn Heisey <ap...@elyograg.org> wrote:

> On 3/18/22 12:35, Vincenzo D'Amore wrote:
> > The INDEX.size is 70GB, what do you think if I raise the size allocated
> > from the JVM to 64GB in order to have the index in memory?
>
> Solr and Java do not put the index into memory.  The OS does.  If you
> raise the heap size, there will be LESS memory available for caching the
> index.
>
> The advice to use a 31GB heap is because as soon as the max heap size
> hits 32GB, Java switches from 32 bit pointers to 64 bit pointers.
> Specifying a 32GB heap actually means the program will get LESS memory
> than a max heap of 31GB.  I once researched it (not very deeply), and I
> think the break-even point for software like Solr doesn't occur until
> the heap size is well beyond 40GB. So in many cases, 31GB is a better
> setting.
>
> Thanks,
> Shawn
>
>

-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/18/22 12:35, Vincenzo D'Amore wrote:
> The INDEX.size is 70GB, what do you think if I raise the size allocated
> from the JVM to 64GB in order to have the index in memory?

Solr and Java do not put the index into memory.  The OS does.  If you 
raise the heap size, there will be LESS memory available for caching the 
index.

The advice to use a 31GB heap is because as soon as the max heap size 
hits 32GB, Java switches from 32 bit pointers to 64 bit pointers.  
Specifying a 32GB heap actually means the program will get LESS memory 
than a max heap of 31GB.  I once researched it (not very deeply), and I 
think the break-even point for software like Solr doesn't occur until 
the heap size is well beyond 40GB. So in many cases, 31GB is a better 
setting.
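
A quick way to see where that cutoff lands on a given JDK, as a sketch (run it
with the same java binary Solr uses):

# compressed oops still enabled at 31g...
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

# ...and disabled once the heap reaches 32g
java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops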

Thanks,
Shawn


Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
We have modified the kubernetes configuration and restarted SolrCloud
cluster, now we have 16 cores per Solr instance.
The performance does not seem to be improved though.
The load average is 0.43 0.83 1.00; to me it seems like an IO-bound problem.
Looking at the index I see 162M documents, 234M maxDocs, 71M deleted...
maybe this core needs to be optimized.
The INDEX.size is 70GB; what do you think about raising the size allocated
to the JVM to 64GB in order to have the index in memory?
Lastly, I'm looking at the Solr metrics but I'm really not sure how to tell
whether it is CPU bound or IO bound.
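
Before deciding on an optimize I plan to look at where the deleted documents
actually sit, roughly like this (a sketch; the core and collection names are
placeholders):

# per-segment document and deleted-document counts
curl 'http://localhost:8983/solr/mycollection_shard1_replica_n1/admin/segments'

# a lighter alternative to a full optimize: only rewrite segments with many deletes
curl 'http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true'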

On Fri, Mar 18, 2022 at 6:34 PM Walter Underwood <wu...@wunderwood.org>
wrote:

> First look at the system metrics. Is it CPU bound or IO bound? Each
> request is single threaded, so a CPU bound system will have one core used
> at roughly 100% for that time. An IO bound system will not be using much
> CPU but will have threads in iowait and lots of disk reads.
>
> After you know that, then you know what to work on. If it is IO bound, get
> enough RAM for the OS, JVM, and index files to all be in memory. If it is
> CPU bound, get a faster processor and work on the config to have the
> request do less work. Sharding can also help.
>
> I’m not a fan of always choosing 31 GB for the JVM. Allocate only as much
> as is needed. Java will use the whole heap whether it is needed or not. You
> might only need 8 GB. All of our clusters run with 16 GB. That includes
> some machines with 36 cores.
>
>
-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Walter Underwood <wu...@wunderwood.org>.
First look at the system metrics. Is it CPU bound or IO bound? Each request is single threaded, so a CPU bound system will have one core used at roughly 100% for that time. An IO bound system will not be using much CPU but will have threads in iowait and lots of disk reads.

After you know that, then you know what to work on. If it is IO bound, get enough RAM for the OS, JVM, and index files to all be in memory. If it is CPU bound, get a faster processor and work on the config to have the request do less work. Sharding can also help.

I’m not a fan of always choosing 31 GB for the JVM. Allocate only as much as is needed. Java will use the whole heap whether it is needed or not. You might only need 8 GB. All of our clusters run with 16 GB. That includes some machines with 36 cores.
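
For that first look, a sketch of what to watch on the node while a slow query
runs (assuming the sysstat tools are available there):

# per-core CPU usage; one core pinned near 100% during the request suggests CPU bound
mpstat -P ALL 5

# a high "wa" (iowait) column plus heavy block reads in "bi" suggests IO bound
vmstat 5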

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Mar 18, 2022, at 10:22 AM, Dave <ha...@gmail.com> wrote:
> 
> I’ve found that each solr instance will take as many cores as it needs per request. Your 2 sec response sounds like you just started the server and then did that search. I never trust the first search as nothing has been put into memory yet. I like to give my jvms 31 gb each and let Linux cache the rest of the files as it sees fit, with swap turned completely off. Also *:* can be heavier than you think if you have every field indexed since it’s like a punch card like system where all the fields have to match.  
> 
>> On Mar 18, 2022, at 12:45 PM, Vincenzo D'Amore <v....@gmail.com> wrote:
>> 
>> Thanks for your support, just sharing what I found until now.
>> 
>> I'm working with SolrCloud with a 2 node deployment. This deployment has
>> many indexes but a main one 160GB index that has become very slow.
>> Select *:* rows=1 take 2 seconds.
>> SolrCloud instances are running in kubernetes and are deployed in a pod
>> with 128GB RAM but only 16GB to JVM.
>> Looking at Solr Documentation I've found nothing specific about what
>> happens to the performance if the number of CPUs is not correctly detected.
>> The only interesting page is the following and it seems to match with your
>> suggestion.
>> At the end of paragraph there is a not very clear reference about how the
>> Concurrent Merge Scheduler behavior can be impacted by the number of
>> detected CPUs.
>> 
>>> Similarly, the system property lucene.cms.override_core_count can be set
>> to the number of CPU cores to override the auto-detected processor count.
>> 
>>> Taking Solr to Production > Dynamic Defaults for ConcurrentMergeScheduler
>>> 
>> https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
>> 
>> 
>> 
>>> On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be> wrote:
>>> 
>>> I don't know how it affects solr, but if you're interested in java's
>>> support to detect cgroup/container limits on cpu/memory etc, you can use
>>> these links as starting points to investigate.
>>> It affects some jvm configuration, like initial GC selection & settings
>>> that can affect performance.
>>> It was only backported to java 8 quite recently, so if you're still on
>>> that might want to check if you're on the latest version.
>>> 
>>> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
>>> https://bugs.openjdk.java.net/browse/JDK-8264136
>>> 
>>> 
>>>> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
>>>> Hi Shawn, thanks for your help.
>>>> 
>>>> Given that I’ll put the question in another way.
>>>> If Java don’t correctly detect the number of CPU how the overall
>>>> performance can be affected by this?
>>>> 
>>>> Ciao,
>>>> Vincenzo
>>>> 
>>>> --
>>>> mobile: 3498513251
>>>> skype: free.dev
>>>> 
>>>>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
>>>>> 
>>>>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
>>>>>> just asking how can I rely on the number of processors the solr
>>> dashboard
>>>>>> shows.
>>>>>> 
>>>>>> Just to give you a context, I have a 2 nodes solrcloud instance
>>> running in
>>>>>> kubernetes.
>>>>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available
>>> per
>>>>>> solr instance.
>>>>>> but the Solr pods are deployed in two different kube nodes, and
>>> entering
>>>>>> the pod with the
>>>>>> kubectl exec -ti solr-0  -- /bin/bash
>>>>>> and running top I see there are 16 cores available for each solr
>>> instance.
>>>>> 
>>>>> The dashboard info comes from Java, and Java gets it from the OS. How
>>> that works with containers is something I don't know much about.  Here's
>>> what Linux says about a server I have which has two six-core Intel CPUs
>>> with hyperthreading.  This is bare metal, not a VM or container:
>>>>> 
>>>>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
>>>>> processor    : 0
>>>>> processor    : 1
>>>>> processor    : 2
>>>>> processor    : 3
>>>>> processor    : 4
>>>>> processor    : 5
>>>>> processor    : 6
>>>>> processor    : 7
>>>>> processor    : 8
>>>>> processor    : 9
>>>>> processor    : 10
>>>>> processor    : 11
>>>>> processor    : 12
>>>>> processor    : 13
>>>>> processor    : 14
>>>>> processor    : 15
>>>>> processor    : 16
>>>>> processor    : 17
>>>>> processor    : 18
>>>>> processor    : 19
>>>>> processor    : 20
>>>>> processor    : 21
>>>>> processor    : 22
>>>>> processor    : 23
>>>>> 
>>>>> If I start Solr on that server, the dashboard reports 24 processors.
>>>>> 
>>>>> Thanks,
>>>>> Shawn
>>>>> 
>>> 
>> 
>> 
>> -- 
>> Vincenzo D'Amore


Re: Solr dashboard - number of CPUs available

Posted by Dave <ha...@gmail.com>.
I’ve found that each solr instance will take as many cores as it needs per request. Your 2 sec response sounds like you just started the server and then did that search. I never trust the first search as nothing has been put into memory yet. I like to give my jvms 31 gb each and let Linux cache the rest of the files as it sees fit, with swap turned completely off. Also *:* can be heavier than you think if you have every field indexed since it’s like a punch card like system where all the fields have to match.  
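
For the swap part, a quick check on the node looks something like this (a
sketch; changing anything needs root):

# confirm swap is actually off
swapon --show
free -h

# if it cannot be turned off, at least discourage it
sudo sysctl vm.swappiness=1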

> On Mar 18, 2022, at 12:45 PM, Vincenzo D'Amore <v....@gmail.com> wrote:
> 
> Thanks for your support, just sharing what I found until now.
> 
> I'm working with SolrCloud with a 2 node deployment. This deployment has
> many indexes but a main one 160GB index that has become very slow.
> Select *:* rows=1 take 2 seconds.
> SolrCloud instances are running in kubernetes and are deployed in a pod
> with 128GB RAM but only 16GB to JVM.
> Looking at Solr Documentation I've found nothing specific about what
> happens to the performance if the number of CPUs is not correctly detected.
> The only interesting page is the following and it seems to match with your
> suggestion.
> At the end of paragraph there is a not very clear reference about how the
> Concurrent Merge Scheduler behavior can be impacted by the number of
> detected CPUs.
> 
>> Similarly, the system property lucene.cms.override_core_count can be set
> to the number of CPU cores to override the auto-detected processor count.
> 
>> Taking Solr to Production > Dynamic Defaults for ConcurrentMergeScheduler
>> 
> https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
> 
> 
> 
>> On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be> wrote:
>> 
>> I don't know how it affects solr, but if you're interested in java's
>> support to detect cgroup/container limits on cpu/memory etc, you can use
>> these links as starting points to investigate.
>> It affects some jvm configuration, like initial GC selection & settings
>> that can affect performance.
>> It was only backported to java 8 quite recently, so if you're still on
>> that might want to check if you're on the latest version.
>> 
>> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
>> https://bugs.openjdk.java.net/browse/JDK-8264136
>> 
>> 
>>> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
>>> Hi Shawn, thanks for your help.
>>> 
>>> Given that I’ll put the question in another way.
>>> If Java don’t correctly detect the number of CPU how the overall
>>> performance can be affected by this?
>>> 
>>> Ciao,
>>> Vincenzo
>>> 
>>> --
>>> mobile: 3498513251
>>> skype: free.dev
>>> 
>>>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
>>>> 
>>>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
>>>>> just asking how can I rely on the number of processors the solr
>> dashboard
>>>>> shows.
>>>>> 
>>>>> Just to give you a context, I have a 2 nodes solrcloud instance
>> running in
>>>>> kubernetes.
>>>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available
>> per
>>>>> solr instance.
>>>>> but the Solr pods are deployed in two different kube nodes, and
>> entering
>>>>> the pod with the
>>>>> kubectl exec -ti solr-0  -- /bin/bash
>>>>> and running top I see there are 16 cores available for each solr
>> instance.
>>>> 
>>>> The dashboard info comes from Java, and Java gets it from the OS. How
>> that works with containers is something I don't know much about.  Here's
>> what Linux says about a server I have which has two six-core Intel CPUs
>> with hyperthreading.  This is bare metal, not a VM or container:
>>>> 
>>>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
>>>> processor    : 0
>>>> processor    : 1
>>>> processor    : 2
>>>> processor    : 3
>>>> processor    : 4
>>>> processor    : 5
>>>> processor    : 6
>>>> processor    : 7
>>>> processor    : 8
>>>> processor    : 9
>>>> processor    : 10
>>>> processor    : 11
>>>> processor    : 12
>>>> processor    : 13
>>>> processor    : 14
>>>> processor    : 15
>>>> processor    : 16
>>>> processor    : 17
>>>> processor    : 18
>>>> processor    : 19
>>>> processor    : 20
>>>> processor    : 21
>>>> processor    : 22
>>>> processor    : 23
>>>> 
>>>> If I start Solr on that server, the dashboard reports 24 processors.
>>>> 
>>>> Thanks,
>>>> Shawn
>>>> 
>> 
> 
> 
> -- 
> Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
Thanks for your support, just sharing what I found until now.

I'm working with SolrCloud with a 2 node deployment. This deployment has
many indexes, but the main one, a 160GB index, has become very slow.
A select *:* rows=1 takes 2 seconds.
SolrCloud instances are running in kubernetes and are deployed in a pod
with 128GB RAM but only 16GB for the JVM.
Looking at Solr Documentation I've found nothing specific about what
happens to the performance if the number of CPUs is not correctly detected.
The only interesting page is the following and it seems to match with your
suggestion.
At the end of the paragraph there is a brief, not very clear reference to how
the ConcurrentMergeScheduler behavior can be impacted by the number of
detected CPUs.

> Similarly, the system property lucene.cms.override_core_count can be set
to the number of CPU cores to override the auto-detected processor count.

> Taking Solr to Production > Dynamic Defaults for ConcurrentMergeScheduler
>
https://solr.apache.org/guide/8_3/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler
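
If the merge scheduler does turn out to be the victim of the wrong CPU count,
the property can be set wherever Solr picks up its JVM options, e.g. in
solr.in.sh or via the SOLR_OPTS environment variable in the pod spec. A sketch,
with the value 16 being an assumption for my nodes:

# solr.in.sh (or SOLR_OPTS in the statefulset environment)
SOLR_OPTS="$SOLR_OPTS -Dlucene.cms.override_core_count=16"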



On Thu, Mar 17, 2022 at 1:22 PM Thomas Matthijs <li...@selckin.be> wrote:

> I don't know how it affects solr, but if you're interested in java's
> support to detect cgroup/container limits on cpu/memory etc, you can use
> these links as starting points to investigate.
> It affects some jvm configuration, like initial GC selection & settings
> that can affect performance.
> It was only backported to java 8 quite recently, so if you're still on
> that might want to check if you're on the latest version.
>
> https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
> https://bugs.openjdk.java.net/browse/JDK-8264136
>
>
> On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
> > Hi Shawn, thanks for your help.
> >
> > Given that I’ll put the question in another way.
> > If Java don’t correctly detect the number of CPU how the overall
> > performance can be affected by this?
> >
> > Ciao,
> > Vincenzo
> >
> > --
> > mobile: 3498513251
> > skype: free.dev
> >
> >> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
> >>
> >> On 3/16/22 03:56, Vincenzo D'Amore wrote:
> >>> just asking how can I rely on the number of processors the solr
> dashboard
> >>> shows.
> >>>
> >>> Just to give you a context, I have a 2 nodes solrcloud instance
> running in
> >>> kubernetes.
> >>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available
> per
> >>> solr instance.
> >>> but the Solr pods are deployed in two different kube nodes, and
> entering
> >>> the pod with the
> >>> kubectl exec -ti solr-0  -- /bin/bash
> >>> and running top I see there are 16 cores available for each solr
> instance.
> >>
> >> The dashboard info comes from Java, and Java gets it from the OS. How
> that works with containers is something I don't know much about.  Here's
> what Linux says about a server I have which has two six-core Intel CPUs
> with hyperthreading.  This is bare metal, not a VM or container:
> >>
> >> elyograg@smeagol:~$ grep processor /proc/cpuinfo
> >> processor    : 0
> >> processor    : 1
> >> processor    : 2
> >> processor    : 3
> >> processor    : 4
> >> processor    : 5
> >> processor    : 6
> >> processor    : 7
> >> processor    : 8
> >> processor    : 9
> >> processor    : 10
> >> processor    : 11
> >> processor    : 12
> >> processor    : 13
> >> processor    : 14
> >> processor    : 15
> >> processor    : 16
> >> processor    : 17
> >> processor    : 18
> >> processor    : 19
> >> processor    : 20
> >> processor    : 21
> >> processor    : 22
> >> processor    : 23
> >>
> >> If I start Solr on that server, the dashboard reports 24 processors.
> >>
> >> Thanks,
> >> Shawn
> >>
>


-- 
Vincenzo D'Amore

Re: Solr dashboard - number of CPUs available

Posted by Thomas Matthijs <li...@selckin.be>.
I don't know how it affects solr, but if you're interested in java's support to detect cgroup/container limits on cpu/memory etc, you can use these links as starting points to investigate.
It affects some JVM configuration, like initial GC selection & settings that can affect performance.
It was only backported to Java 8 quite recently, so if you're still on that you might want to check that you're on the latest version.

https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115
https://bugs.openjdk.java.net/browse/JDK-8264136
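
A sketch of how to check what the JVM actually detects inside the pod, and how
to pin the value if detection is wrong (these flags exist on recent JDK 8u191+
and JDK 11 builds):

# what the JVM thinks it has
java -XX:+PrintFlagsFinal -version | grep -E 'ActiveProcessorCount|UseContainerSupport'

# force the count explicitly, e.g. via solr.in.sh
SOLR_OPTS="$SOLR_OPTS -XX:ActiveProcessorCount=16"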


On Thu, Mar 17, 2022, at 01:11, Vincenzo D'Amore wrote:
> Hi Shawn, thanks for your help. 
>
> Given that I’ll put the question in another way. 
> If Java don’t correctly detect the number of CPU how the overall 
> performance can be affected by this?
>
> Ciao,
> Vincenzo
>
> --
> mobile: 3498513251
> skype: free.dev
>
>> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
>> 
>> On 3/16/22 03:56, Vincenzo D'Amore wrote:
>>> just asking how can I rely on the number of processors the solr dashboard
>>> shows.
>>> 
>>> Just to give you a context, I have a 2 nodes solrcloud instance running in
>>> kubernetes.
>>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available per
>>> solr instance.
>>> but the Solr pods are deployed in two different kube nodes, and entering
>>> the pod with the
>>> kubectl exec -ti solr-0  -- /bin/bash
>>> and running top I see there are 16 cores available for each solr instance.
>> 
>> The dashboard info comes from Java, and Java gets it from the OS. How that works with containers is something I don't know much about.  Here's what Linux says about a server I have which has two six-core Intel CPUs with hyperthreading.  This is bare metal, not a VM or container:
>> 
>> elyograg@smeagol:~$ grep processor /proc/cpuinfo
>> processor    : 0
>> processor    : 1
>> processor    : 2
>> processor    : 3
>> processor    : 4
>> processor    : 5
>> processor    : 6
>> processor    : 7
>> processor    : 8
>> processor    : 9
>> processor    : 10
>> processor    : 11
>> processor    : 12
>> processor    : 13
>> processor    : 14
>> processor    : 15
>> processor    : 16
>> processor    : 17
>> processor    : 18
>> processor    : 19
>> processor    : 20
>> processor    : 21
>> processor    : 22
>> processor    : 23
>> 
>> If I start Solr on that server, the dashboard reports 24 processors.
>> 
>> Thanks,
>> Shawn
>>

Re: Solr dashboard - number of CPUs available

Posted by Vincenzo D'Amore <v....@gmail.com>.
Hi Shawn, thanks for your help. 

Given that, I’ll put the question another way.
If Java doesn’t correctly detect the number of CPUs, how can the overall performance be affected by this?

Ciao,
Vincenzo

--
mobile: 3498513251
skype: free.dev

> On 16 Mar 2022, at 18:56, Shawn Heisey <el...@elyograg.org> wrote:
> 
> On 3/16/22 03:56, Vincenzo D'Amore wrote:
>> just asking how can I rely on the number of processors the solr dashboard
>> shows.
>> 
>> Just to give you a context, I have a 2 nodes solrcloud instance running in
>> kubernetes.
>> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available per
>> solr instance.
>> but the Solr pods are deployed in two different kube nodes, and entering
>> the pod with the
>> kubectl exec -ti solr-0  -- /bin/bash
>> and running top I see there are 16 cores available for each solr instance.
> 
> The dashboard info comes from Java, and Java gets it from the OS. How that works with containers is something I don't know much about.  Here's what Linux says about a server I have which has two six-core Intel CPUs with hyperthreading.  This is bare metal, not a VM or container:
> 
> elyograg@smeagol:~$ grep processor /proc/cpuinfo
> processor    : 0
> processor    : 1
> processor    : 2
> processor    : 3
> processor    : 4
> processor    : 5
> processor    : 6
> processor    : 7
> processor    : 8
> processor    : 9
> processor    : 10
> processor    : 11
> processor    : 12
> processor    : 13
> processor    : 14
> processor    : 15
> processor    : 16
> processor    : 17
> processor    : 18
> processor    : 19
> processor    : 20
> processor    : 21
> processor    : 22
> processor    : 23
> 
> If I start Solr on that server, the dashboard reports 24 processors.
> 
> Thanks,
> Shawn
> 

Re: Solr dashboard - number of CPUs available

Posted by Shawn Heisey <el...@elyograg.org>.
On 3/16/22 03:56, Vincenzo D'Amore wrote:
> just asking how can I rely on the number of processors the solr dashboard
> shows.
>
> Just to give you a context, I have a 2 nodes solrcloud instance running in
> kubernetes.
> Looking at solr dashboard (8.3.0) I see there is only 1 cpu available per
> solr instance.
> but the Solr pods are deployed in two different kube nodes, and entering
> the pod with the
> kubectl exec -ti solr-0  -- /bin/bash
> and running top I see there are 16 cores available for each solr instance.

The dashboard info comes from Java, and Java gets it from the OS. How 
that works with containers is something I don't know much about.  Here's 
what Linux says about a server I have which has two six-core Intel CPUs 
with hyperthreading.  This is bare metal, not a VM or container:

elyograg@smeagol:~$ grep processor /proc/cpuinfo
processor    : 0
processor    : 1
processor    : 2
processor    : 3
processor    : 4
processor    : 5
processor    : 6
processor    : 7
processor    : 8
processor    : 9
processor    : 10
processor    : 11
processor    : 12
processor    : 13
processor    : 14
processor    : 15
processor    : 16
processor    : 17
processor    : 18
processor    : 19
processor    : 20
processor    : 21
processor    : 22
processor    : 23

If I start Solr on that server, the dashboard reports 24 processors.
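
From what I understand, inside a container the count Java reports can come from
the cgroup CPU quota rather than /proc/cpuinfo, so it may be worth comparing
both in the pod. A sketch; paths differ between cgroup v1 and v2:

nproc                                        # what the container reports
cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us \
    /sys/fs/cgroup/cpu/cpu.cfs_period_us     # cgroup v1 quota and period
cat /sys/fs/cgroup/cpu.max                   # cgroup v2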

Thanks,
Shawn