Posted to solr-user@lucene.apache.org by "Joshi, Shital" <Sh...@gs.com> on 2014/04/08 19:28:31 UTC

solr4 performance question

Hi,

We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM heap on 60 GB machines and 40 GB of index.
We're constantly noticing that Solr queries take longer while an update (with the commit=false setting) is in progress. A query that usually takes 0.5 seconds can take up to 2 minutes while updates are in progress. And this is not the case with all queries; the behavior is very sporadic.

Any pointers to nail down this issue would be appreciated.

Is there a way to find how much of a query result came from the cache? Can we enable any log settings to start printing what came from the cache vs. what was actually queried?

Thanks!

Re: OutOfMemoryError while merging large indexes

Posted by Furkan KAMACI <fu...@gmail.com>.
Hi;

According to Sun, the error is thrown "if too much time is being spent in
garbage collection: if more than 98% of the total time is spent in garbage
collection and less than 2% of the heap is recovered, an OutOfMemoryError
will be thrown." Specifying more memory should help. You should also check
here: http://wiki.apache.org/solr/SolrPerformanceProblems and here:
http://wiki.apache.org/solr/ShawnHeisey
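
To confirm that the JVM really is spending most of its time in garbage
collection before adding memory, you can turn on GC logging. A minimal
sketch with the standard HotSpot flags of that era (the log path is a
placeholder, and start.jar assumes the stock Jetty launcher):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/solr/gc.log -jar start.jar

If the log shows back-to-back full GCs that each reclaim almost nothing,
the GC-overhead diagnosis above is confirmed.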

Thanks;
Furkan KAMACI


2014-04-09 4:25 GMT+03:00 François Schiettecatte <fs...@gmail.com>:

> Have you tried using:
>
>         -XX:-UseGCOverheadLimit
>
> François
>
> On Apr 8, 2014, at 6:06 PM, Haiying Wang <ha...@yahoo.com> wrote:
>
> > Hi,
> >
> > We were trying to merge a large index (9 GB, 21 million docs) into the
> > current index (only 13 MB), using the mergeindexes command of
> > CoreAdminHandler, but we always run into an OOM error. We currently set
> > the max heap size to 4 GB for the Solr server. We are using Solr 4.6.0
> > and did not change the original solrconfig.xml.
> >
> > Is there any setting/configuration that could help the mergeindexes
> > process complete without running into an OOM error? I can increase the
> > max JVM heap size, but I am afraid that may not scale in case larger
> > indexes need to be merged in the future, and I am hoping the index
> > merge can be performed with a limited memory footprint. Please help.
> > Thanks!
> >
> > The JVM heap settings:   -Xmx4096M -Xms512M
> >
> > Command used:
> >
> > curl "http://dev101:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/solr/tmp/data/snapshot.20140407194442777"
> >
> > OOM error stack trace:
> >
> > Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
> >         at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:133)
> >         at java.lang.StringCoding.decode(StringCoding.java:179)
> >         at java.lang.String.<init>(String.java:483)
> >         at java.lang.String.<init>(String.java:539)
> >         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.readField(CompressingStoredFieldsReader.java:187)
> >         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:351)
> >         at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
> >         at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
> >         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:345)
> >         at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:316)
> >         at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
> >         at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2555)
> >         at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:449)
> >         at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:88)
> >         at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:59)
> >         at org.apache.solr.update.processor.LogUpdateProcessor.processMergeIndexes(LogUpdateProcessorFactory.java:149)
> >         at org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:384)
> >         at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
> >         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> >         at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
> >         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
> >         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
> >         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> >         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> >         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> >         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> >         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> >         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> >         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> >         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> >         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> >         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> >
> > Regards,
> >
> > Haiying
>
>

Re: OutOfMemoryError while merging large indexes

Posted by Haiying Wang <ha...@yahoo.com>.
Thanks, François,

I tried -XX:-UseGCOverheadLimit and now I get a real OOM error: "java.lang.OutOfMemoryError: Java heap space".

Has anyone tried merging large indexes? What was your heap size setting for Solr?

Regards,

Haiying
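
One way to take the merge out of the Solr server's heap entirely is
Lucene's standalone IndexMergeTool, run offline in its own JVM against the
index directories. A sketch, assuming the Lucene 4.6.0 jars that ship with
Solr and hypothetical paths (no Solr core may be writing to these
directories while it runs):

    java -Xmx4g -cp lucene-core-4.6.0.jar:lucene-misc-4.6.0.jar \
        org.apache.lucene.misc.IndexMergeTool \
        /solr/tmp/merged-index \
        /solr/collection1/data/index \
        /solr/tmp/data/snapshot.20140407194442777

The first argument is the destination directory; the merged result can
then be swapped in or loaded as a new core.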




________________________________
 From: François Schiettecatte <fs...@gmail.com>
To: solr-user@lucene.apache.org; Haiying Wang <ha...@yahoo.com> 
Sent: Tuesday, April 8, 2014 8:25 PM
Subject: Re: OutOfMemoryError while merging large indexes
 

Have you tried using:

    -XX:-UseGCOverheadLimit 

François


On Apr 8, 2014, at 6:06 PM, Haiying Wang <ha...@yahoo.com> wrote:

> Hi,
> 
> We were trying to merge a large index (9 GB, 21 million docs) into the current index (only 13 MB), using the mergeindexes command of CoreAdminHandler, but we always run into an OOM error. We currently set the max heap size to 4 GB for the Solr server. We are using Solr 4.6.0 and did not change the original solrconfig.xml.
> 
> Is there any setting/configuration that could help the mergeindexes process complete without running into an OOM error? I can increase the max JVM heap size, but I am afraid that may not scale in case larger indexes need to be merged in the future, and I am hoping the index merge can be performed with a limited memory footprint. Please help. Thanks!
> 
> The JVM heap settings:   -Xmx4096M -Xms512M
> 
> Command used:
> 
> 
> curl "http://dev101:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/solr/tmp/data/snapshot.20140407194442777"
> 
> OOM error stack trace:
> 
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:133)
>         at java.lang.StringCoding.decode(StringCoding.java:179)
>         at java.lang.String.<init>(String.java:483)
>         at java.lang.String.<init>(String.java:539)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.readField(CompressingStoredFieldsReader.java:187)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:351)
>         at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
>         at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:345)
>         at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:316)
>         at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
>         at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2555)
>         at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:449)
>         at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:88)
>         at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:59)
>         at org.apache.solr.update.processor.LogUpdateProcessor.processMergeIndexes(LogUpdateProcessorFactory.java:149)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:384)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
>         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>         at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> 
> Regards,
> 
> Haiying

Re: OutOfMemoryError while merging large indexes

Posted by François Schiettecatte <fs...@gmail.com>.
Have you tried using:

	-XX:-UseGCOverheadLimit 

François

On Apr 8, 2014, at 6:06 PM, Haiying Wang <ha...@yahoo.com> wrote:

> Hi,
> 
> We were trying to merge a large index (9 GB, 21 million docs) into the current index (only 13 MB), using the mergeindexes command of CoreAdminHandler, but we always run into an OOM error. We currently set the max heap size to 4 GB for the Solr server. We are using Solr 4.6.0 and did not change the original solrconfig.xml.
> 
> Is there any setting/configuration that could help the mergeindexes process complete without running into an OOM error? I can increase the max JVM heap size, but I am afraid that may not scale in case larger indexes need to be merged in the future, and I am hoping the index merge can be performed with a limited memory footprint. Please help. Thanks!
> 
> The JVM heap settings:   -Xmx4096M -Xms512M
> 
> Command used:
> 
> 
> curl "http://dev101:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/solr/tmp/data/snapshot.20140407194442777"
> 
> OOM error stack trace:
> 
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:133)
>         at java.lang.StringCoding.decode(StringCoding.java:179)
>         at java.lang.String.<init>(String.java:483)
>         at java.lang.String.<init>(String.java:539)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.readField(CompressingStoredFieldsReader.java:187)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:351)
>         at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
>         at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
>         at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:345)
>         at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:316)
>         at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
>         at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2555)
>         at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:449)
>         at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:88)
>         at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:59)
>         at org.apache.solr.update.processor.LogUpdateProcessor.processMergeIndexes(LogUpdateProcessorFactory.java:149)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:384)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
>         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>         at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> 
> Regards,
> 
> Haiying


OutOfMemoryError while merging large indexes

Posted by Haiying Wang <ha...@yahoo.com>.
Hi,

We were trying to merge a large index (9 GB, 21 million docs) into the current index (only 13 MB), using the mergeindexes command of CoreAdminHandler, but we always run into an OOM error. We currently set the max heap size to 4 GB for the Solr server. We are using Solr 4.6.0 and did not change the original solrconfig.xml.

Is there any setting/configuration that could help the mergeindexes process complete without running into an OOM error? I can increase the max JVM heap size, but I am afraid that may not scale in case larger indexes need to be merged in the future, and I am hoping the index merge can be performed with a limited memory footprint. Please help. Thanks!

The JVM heap settings:   -Xmx4096M -Xms512M

Command used:


curl "http://dev101:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/solr/tmp/data/snapshot.20140407194442777"

OOM error stack trace:

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:133)
        at java.lang.StringCoding.decode(StringCoding.java:179)
        at java.lang.String.<init>(String.java:483)
        at java.lang.String.<init>(String.java:539)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.readField(CompressingStoredFieldsReader.java:187)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:351)
        at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
        at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:345)
        at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:316)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
        at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2555)
        at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:449)
        at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:88)
        at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:59)
        at org.apache.solr.update.processor.LogUpdateProcessor.processMergeIndexes(LogUpdateProcessorFactory.java:149)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:384)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)

Regards,

Haiying

Re: solr4 performance question

Posted by Erick Erickson <er...@gmail.com>.
bq:   solr.autoCommit.maxTime:600000
       <maxDocs>100000</maxDocs>
       <openSearcher>true</openSearcher>

Every 100K documents or 10 minutes (whichever comes first), your
current searchers will be closed and a new searcher opened, and all the
warmup queries etc. might run. I suspect you're not doing much with
autowarming and/or newSearcher queries, so occasionally your search has
to wait for caches to be read, terms to be populated, etc.

Some possibilities to test this (see the config sketch after this list):
1> create some newSearcher queries in solrconfig.xml
2> specify a reasonable autowarm count for queryResultCache (don't go
crazy here, start with 16 or similar)
3> set openSearcher to false above. In this case you won't be able to
see the documents until either a hard or soft commit happens; you
could cure this with a single hard commit at the end of your indexing
run. It all depends on what latency you can tolerate in terms of
searching newly-indexed documents.
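
For 1> and 2>, a minimal sketch against a stock Solr 4.x solrconfig.xml
(the warming query and the autowarmCount of 16 are placeholders to tune
for your own queries):

    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">*:*</str>
        </lst>
      </arr>
    </listener>

    <queryResultCache class="solr.LRUCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="16"/>

And for 3>, the single hard commit at the end of the indexing run can be
issued explicitly with the same endpoint your load script already uses:

    curl "http://$solr_url/solr/$solr_core/update?commit=true"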

Here's a reference...

http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick

On Tue, Apr 8, 2014 at 12:11 PM, Joshi, Shital <Sh...@gs.com> wrote:
> We don't do any soft commit. This is our hard commit setting.
>
> <autoCommit>
>        <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
>        <maxDocs>100000</maxDocs>
>        <openSearcher>true</openSearcher>
> </autoCommit>
>
> We use this update command:
>
>  solr_command=$(cat<<EnD
> time zcat --force $file2load | /usr/bin/curl --proxy "" --silent --show-error --max-time 3600 \
> "http://$solr_url/solr/$solr_core/update/csv?\
> commit=false\
> &separator=|\
> &escape=\\\
> &trim=true\
> &header=false\
> &skipLines=2\
> &overwrite=true\
> &_shard_=$shardid\
> &fieldnames=$fieldnames\
> &f.cs_rep.split=true\
> &f.cs_rep.separator=%5E"  --data-binary @-  -H 'Content-type:text/plain; charset=utf-8'
> EnD)
>
>
> -----Original Message-----
> From: Erick Erickson [mailto:erickerickson@gmail.com]
> Sent: Tuesday, April 08, 2014 2:21 PM
> To: solr-user@lucene.apache.org
> Subject: Re: solr4 performance question
>
> What do you have for your _softcommit_ settings in solrconfig.xml? I'm
> guessing you're using SolrJ or similar, but the solrconfig settings
> will trip a commit as well.
>
> For that matter, what are all your commit settings in solrconfig.xml,
> both hard and soft?
>
> Best,
> Erick
>
> On Tue, Apr 8, 2014 at 10:28 AM, Joshi, Shital <Sh...@gs.com> wrote:
>> Hi,
>>
>> We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM heap on 60 GB machines and 40 GB of index.
>> We're constantly noticing that Solr queries take longer while an update (with the commit=false setting) is in progress. A query that usually takes 0.5 seconds can take up to 2 minutes while updates are in progress. And this is not the case with all queries; the behavior is very sporadic.
>>
>> Any pointers to nail down this issue would be appreciated.
>>
>> Is there a way to find how much of a query result came from the cache? Can we enable any log settings to start printing what came from the cache vs. what was actually queried?
>>
>> Thanks!

RE: solr4 performance question

Posted by "Joshi, Shital" <Sh...@gs.com>.
We don't do any soft commit. This is our hard commit setting. 

<autoCommit>
       <maxTime>${solr.autoCommit.maxTime:600000}</maxTime>
       <maxDocs>100000</maxDocs>
       <openSearcher>true</openSearcher>       
</autoCommit>

We use this update command: 

 solr_command=$(cat<<EnD
time zcat --force $file2load | /usr/bin/curl --proxy "" --silent --show-error --max-time 3600 \
"http://$solr_url/solr/$solr_core/update/csv?\
commit=false\
&separator=|\
&escape=\\\
&trim=true\
&header=false\
&skipLines=2\
&overwrite=true\
&_shard_=$shardid\
&fieldnames=$fieldnames\
&f.cs_rep.split=true\
&f.cs_rep.separator=%5E"  --data-binary @-  -H 'Content-type:text/plain; charset=utf-8'
EnD)


-----Original Message-----
From: Erick Erickson [mailto:erickerickson@gmail.com] 
Sent: Tuesday, April 08, 2014 2:21 PM
To: solr-user@lucene.apache.org
Subject: Re: solr4 performance question

What do you have for your _softcommit_ settings in solrconfig.xml? I'm
guessing you're using SolrJ or similar, but the solrconfig settings
will trip a commit as well.

For that matter, what are all your commit settings in solrconfig.xml,
both hard and soft?

Best,
Erick

On Tue, Apr 8, 2014 at 10:28 AM, Joshi, Shital <Sh...@gs.com> wrote:
> Hi,
>
> We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM heap on 60 GB machines and 40 GB of index.
> We're constantly noticing that Solr queries take longer while an update (with the commit=false setting) is in progress. A query that usually takes 0.5 seconds can take up to 2 minutes while updates are in progress. And this is not the case with all queries; the behavior is very sporadic.
>
> Any pointers to nail down this issue would be appreciated.
>
> Is there a way to find how much of a query result came from the cache? Can we enable any log settings to start printing what came from the cache vs. what was actually queried?
>
> Thanks!

Re: solr4 performance question

Posted by Erick Erickson <er...@gmail.com>.
What do you have for your _softcommit_ settings in solrconfig.xml? I'm
guessing you're using SolrJ or similar, but the solrconfig settings
will trip a commit as well.

For that matter, what are all your commit settings in solrconfig.xml,
both hard and soft?

Best,
Erick

On Tue, Apr 8, 2014 at 10:28 AM, Joshi, Shital <Sh...@gs.com> wrote:
> Hi,
>
> We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM heap on 60 GB machines and 40 GB of index.
> We're constantly noticing that Solr queries take longer while an update (with the commit=false setting) is in progress. A query that usually takes 0.5 seconds can take up to 2 minutes while updates are in progress. And this is not the case with all queries; the behavior is very sporadic.
>
> Any pointers to nail down this issue would be appreciated.
>
> Is there a way to find how much of a query result came from the cache? Can we enable any log settings to start printing what came from the cache vs. what was actually queried?
>
> Thanks!

Re: solr4 performance question

Posted by Furkan KAMACI <fu...@gmail.com>.
Hi Joshi;

Click the Plugins/Stats section under your collection in the Solr Admin UI.
You will see cache statistics for the different types of caches; hitratio
and evictions are good statistics to look at first. You should also read
here: https://wiki.apache.org/solr/SolrPerformanceFactors
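
The same statistics are also available over HTTP if you want to script the
check; a sketch using the mbeans handler, assuming a core named collection1
on localhost:

    curl "http://localhost:8983/solr/collection1/admin/mbeans?cat=CACHE&stats=true&wt=json"

Look at the queryResultCache, filterCache, and documentCache entries for
hits, hitratio, and evictions.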

Thanks;
Furkan KAMACI


2014-04-08 20:28 GMT+03:00 Joshi, Shital <Sh...@gs.com>:

> Hi,
>
> We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM heap
> on 60 GB machines and 40 GB of index.
> We're constantly noticing that Solr queries take longer while an update
> (with the commit=false setting) is in progress. A query that usually takes
> 0.5 seconds can take up to 2 minutes while updates are in progress. And
> this is not the case with all queries; the behavior is very sporadic.
>
> Any pointers to nail down this issue would be appreciated.
>
> Is there a way to find how much of a query result came from the cache? Can
> we enable any log settings to start printing what came from the cache vs.
> what was actually queried?
>
> Thanks!
>