Posted to solr-user@lucene.apache.org by David Hastings <ha...@gmail.com> on 2017/07/25 19:03:30 UTC

Optimize stalls at the same point

I am trying to optimize a rather large index (417 GB) because it's sitting
at 28% deletions.  However, when optimizing, it stops at exactly 492.24 GB
every time.  When I restart Solr it will fall back down to 417 GB, and
again, if I send an optimize command, it hits the exact same 492.24 GB and
stops optimizing.  There is plenty of space on the drive, and I'm running
it with -Xmx100000m -Xms7000m on a machine with 132 GB of RAM and 24 cores.
I have never run into this problem before, but I have also never had the
index get this large.  Any ideas?
(Solr 5.2, btw)
thanks,
-Dave
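
For context, the optimize itself is normally issued through the core's
update handler.  A minimal sketch of the kind of call involved, with host,
port, and core name as placeholders rather than anything from this thread:

    # ask Solr to merge the core down to one segment, which is what
    # reclaims the space held by the 28% deleted documents
    curl 'http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=1&waitSearcher=false'

waitSearcher=false skips waiting for a new searcher to warm; even so,
expect the request to run for a long time against an index this size.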

Re: Optimize stalls at the same point

Posted by David Hastings <ha...@gmail.com>.
It turned out, I think, to be a large GC operation, as it has since
resumed optimizing.  Current Java options are as follows for the indexing
server (they are different for the search servers); if you have any
suggestions for changes, I am more than happy to hear them.  Honestly,
they have just been passed down from one installation to the next ever
since we used to use Tomcat to host Solr:
-server -Xss256k -d64 -Xmx100000m -Xms7000m -XX:NewRatio=3
-XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=8 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc
-XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime
-Xloggc:XXXXXX/solr-5.2.1/server/logs/solr_gc.log

And for my live searchers I use:
-server -Xss256k -Xms50000m -Xmx50000m -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=8
-XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50
-XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC -XX:+PrintGCDetails
-XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime
-Xloggc:/SSD2TB01/solr-5.2.1/server/logs/solr_gc.log
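
Since -XX:+PrintGCApplicationStoppedTime is enabled in both sets of
options, the large-GC theory can be checked directly against solr_gc.log.
A rough sketch of one way to pull out long stop-the-world pauses; the
1-second threshold is an arbitrary choice, not something from this thread:

    # print GC log lines where application threads were stopped >= 1 second
    awk '/Total time for which application threads were stopped/ {
           for (i = 1; i <= NF; i++)
             if ($i == "stopped:" && $(i + 1) + 0 >= 1.0) print
         }' solr_gc.log

A multi-second pause that lines up with the point where the index stopped
growing would point at GC rather than the merge itself.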



On Tue, Jul 25, 2017 at 4:02 PM, Walter Underwood <wu...@wunderwood.org>
wrote:

> Are you sure you need a 100GB heap? The stall could be a major GC.
>
> We run with an 8GB heap. We also run with Xmx equal to Xms; growing memory
> to the max was really time-consuming after startup.
>
> What version of Java? What GC options?
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)

Re: Optimize stalls at the same point

Posted by Walter Underwood <wu...@wunderwood.org>.
Are you sure you need a 100GB heap? The stall could be a major GC.

We run with an 8GB heap. We also run with Xmx equal to Xms; growing memory to the max was really time-consuming after startup.

What version of Java? What GC options?

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)
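
On the Xms-equals-Xmx point: with Solr 5.x started through bin/solr, the
usual place to pin the heap is bin/solr.in.sh.  A sketch, assuming the
stock startup scripts (the 8g value just mirrors the example above, not a
measured recommendation for a 417 GB index):

    # bin/solr.in.sh
    SOLR_JAVA_MEM="-Xms8g -Xmx8g"

Passing -m 8g to bin/solr start accomplishes the same thing at launch.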

