Posted to solr-user@lucene.apache.org by jimtronic <ji...@gmail.com> on 2013/03/11 03:00:15 UTC

Memory Guidance

I'm having trouble pinpointing problems while load testing my setup.

If you saw these numbers on your dashboard, would they worry you?

Physical Memory  97.6%
14.64 GB of 15.01 GB

File Descriptor Count  19.1%
196 of 1024

JVM-Memory  95%
1.67 GB (dark gray)
1.76 GB (med gray)
1.76 GB





Re: Memory Guidance

Posted by jimtronic <ji...@gmail.com>.
Thanks, this is on linux and it's dedicated to solr.

It's been hard for me to pinpoint problems -- or even to tell whether there
is a problem!

My general approach has been to see how much I can put onto one box. So I
have 13 separate solr cores, some of which are very active in terms of
writes, reads, and sorts. There are also periodic DIH (DataImportHandler)
updates.

I'm running load tests that try to mimic a set of real users signing up and
doing various things on the site with some random think times. Everything
works great for a couple of hours, but then slows down.

I realize this is all kind of vague, but I'm at the point where I'm
wondering what I should even be monitoring. The main things I'm tracking are
QTime and the number of concurrent users I'm able to support in the tests.
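For anyone following along, here's roughly how I've been sampling per-core
stats between test runs. A rough sketch; the host, port, and core name
"core0" are placeholders for my setup:

    # pull per-core statistics (request counts, average request time,
    # cache hit ratios) from the admin mbeans handler
    curl -s 'http://localhost:8983/solr/core0/admin/mbeans?stats=true&wt=json'

    # sample every 30 seconds during a load test run
    while true; do
      curl -s 'http://localhost:8983/solr/core0/admin/mbeans?stats=true&wt=json' \
        > "stats-$(date +%s).json"
      sleep 30
    done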

Jim

On Mon, Mar 11, 2013 at 12:37 PM, Shawn Heisey wrote:

> On 3/11/2013 11:14 AM, Shawn Heisey wrote:
>
> > On 3/10/2013 8:00 PM, jimtronic wrote:
> >> I'm having trouble pinpointing problems while load testing my setup.
> >>
> >> If you saw these numbers on your dashboard, would they worry you?
> >>
> >> Physical Memory  97.6%
> >> 14.64 GB of 15.01 GB
> >>
> >> File Descriptor Count  19.1%
> >> 196 of 1024
> >>
> >> JVM-Memory  95%
> >> 1.67 GB (dark gray)
> >> 1.76 GB (med gray)
> >> 1.76 GB
> >
> > What OS?  If it's a unix/linux environment, the full output of the
> > 'free' command will be important.  Generally speaking, it's normal for
> > any computer (client or server, regardless of OS) to use all available
> > memory when under load.
>
> Replying to myself.  The cold must be getting to me. :)
>
> If nothing else is running on this server except for Solr, and your
> index is less than 15GB in size, these numbers would not worry me at
> all.  If your index is less than 30GB in size, you might still be OK,
> but at that point your index would exceed available RAM.  Chances are
> that you would be able to cache enough of it for good performance,
> depending on your schema.  The reason that I say this is that you have
> about 2GB of RAM given to Solr, leaving about 13-14GB for OS disk caching.
>
> If the server is shared with other things, particularly a busy database
> or busy web server, then the above paragraph might not apply - you may
> not have enough resources for Solr to work effectively.
>
> Thanks,
> Shawn
>





Re: Memory Guidance

Posted by Shawn Heisey <so...@elyograg.org>.
On 3/11/2013 11:14 AM, Shawn Heisey wrote:
> On 3/10/2013 8:00 PM, jimtronic wrote:
>> I'm having trouble pinpointing problems while load testing my setup.
>>
>> If you saw these numbers on your dashboard, would they worry you?
>>
>> Physical Memory  97.6%
>> 14.64 GB of 15.01 GB
>>
>> File Descriptor Count  19.1%
>> 196 of 1024
>>
>> JVM-Memory  95%
>> 1.67 GB (dark gray)
>> 1.76 GB (med gray)
>> 1.76 GB
>
> What OS?  If it's a unix/linux environment, the full output of the
> 'free' command will be important.  Generally speaking, it's normal for
> any computer (client or server, regardless of OS) to use all available
> memory when under load.

Replying to myself.  The cold must be getting to me. :)

If nothing else is running on this server except for Solr, and your 
index is less than 15GB in size, these numbers would not worry me at 
all.  If your index is less than 30GB in size, you might still be OK, 
but at that point your index would exceed available RAM.  Chances are 
that you would be able to cache enough of it for good performance, 
depending on your schema.  The reason that I say this is that you have 
about 2GB of RAM given to Solr, leaving about 13-14GB for OS disk caching.
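If you're not sure how your total index size compares, here's a quick check
from the shell (the solr home path is just a placeholder for wherever your
cores actually live):

    # on-disk size of each core's index, with a grand total at the end
    du -shc /path/to/solr/home/*/data/index

Compare that total against the 13-14GB the OS has available for caching.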

If the server is shared with other things, particularly a busy database 
or busy web server, then the above paragraph might not apply - you may 
not have enough resources for Solr to work effectively.

Thanks,
Shawn


Re: Memory Guidance

Posted by Shawn Heisey <so...@elyograg.org>.
On 3/10/2013 8:00 PM, jimtronic wrote:
> I'm having trouble pinpointing problems while load testing my setup.
>
> If you saw these numbers on your dashboard, would they worry you?
>
> Physical Memory  97.6%
> 14.64 GB of 15.01 GB
>
> File Descriptor Count  19.1%
> 196 of 1024
>
> JVM-Memory  95%
> 1.67 GB (dark gray)
> 1.76 GB (med gray)
> 1.76 GB

What OS?  If it's a unix/linux environment, the full output of the 
'free' command will be important.  Generally speaking, it's normal for 
any computer (client or server, regardless of OS) to use all available 
memory when under load.
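For reference, here's what I'd look for in that output (assuming a typical
Linux 'free'; the column layout varies a little between versions):

    free -m
    # On the Mem: line, 'used' includes the OS page cache, so a high
    # number there is normal.  The '-/+ buffers/cache' line shows what
    # applications actually hold.  'cached' is memory the kernel can
    # reclaim instantly, and it's what keeps index access fast.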

Thanks,
Shawn


Re: Memory Guidance

Posted by jimtronic <ji...@gmail.com>.
Thanks for the link.

I'm seeing erratic behavior under heavy load, but it's difficult for me to
determine where the problem actually is.

I'll post some more specific questions/details as they arise.




Re: Memory Guidance

Posted by Timothy Potter <th...@gmail.com>.
Hi Jim,

I'd venture to guess your Solr core is using MMapDirectory and if so, then
the physical memory value is correct and nothing to worry about. The index
is mapped into virtual memory using memory-mapped I/O.
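If you want to confirm that mmap is actually in play, something like the
following works on my boxes. The config path is a placeholder, and the
pgrep pattern assumes you start Solr with the example jetty's start.jar:

    # the directoryFactory setting; the stock default on 64-bit Linux
    # ends up using MMapDirectory under the hood
    grep directoryFactory /path/to/solrconfig.xml

    # memory-mapped index files show up in the process's address space
    pmap $(pgrep -f start.jar) | grep data/index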

The file descriptor count looks fine too, but when using MMapDirectory make
sure your OS reports unlimited for ulimit -v (max virtual memory) and
ulimit -m (max resident set size).
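For example, in bash on Linux:

    # check the limits for the user that runs Solr
    ulimit -v    # should print 'unlimited'
    ulimit -m    # should print 'unlimited'

    # if not, raise them in the shell that launches Solr
    ulimit -v unlimited
    ulimit -m unlimited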

You'll probably want to give your JVM a bit more RAM if you can spare it
(but not too much more), especially if you do lots of custom sorting.
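As a rough sketch, again assuming the example jetty (the 3GB figure is only
an illustration, not a recommendation; size it to your own sorting and
cache load):

    # start Solr with a larger fixed-size heap; keeping -Xms and -Xmx
    # equal avoids pauses from the heap growing and shrinking
    java -Xms3g -Xmx3g -jar start.jar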

Good read if you haven't seen it:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

Tim

On Sun, Mar 10, 2013 at 8:00 PM, jimtronic <ji...@gmail.com> wrote:

> I'm having trouble pinpointing problems while load testing my setup.
>
> If you saw these numbers on your dashboard, would they worry you?
>
> Physical Memory  97.6%
> 14.64 GB of 15.01 GB
>
> File Descriptor Count  19.1%
> 196 of 1024
>
> JVM-Memory  95%
> 1.67 GB (dark gray)
> 1.76 GB (med gray)
> 1.76 GB