Posted to solr-user@lucene.apache.org by Joel Cohen <jo...@bluefly.com> on 2014/02/25 19:31:03 UTC

Autocommit, opensearchers and ingestion

Hi all,

I'm working with Solr 4.6.1 and I'm trying to tune my ingestion process.
The ingestion runs a big DB query and then does some ETL on it and inserts
via SolrJ.

I have a 4 node cluster with 1 shard per node running in Tomcat with
-Xmx4096M. Each node has a separate instance of Zookeeper on it, plus the
ingestion server has one as well. The Solr servers have 8 cores and 64 GB
of total RAM. The ingestion server is a VM with 8 GB and 2 cores.

My ingestion code uses a few settings to control concurrency and batch size.

solr.update.batchSize=500
solr.threadCount=4
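(For illustration, a minimal sketch of what a batched, multi-threaded loader like
this might look like. The record type, the `sink` callback, and all names here are
hypothetical; in the real ingestion code the sink would be a SolrJ call such as
adding a batch of SolrInputDocuments to a CloudSolrServer.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class BatchedIngest {
    // Groups records into batches of batchSize and submits each batch
    // to a fixed pool of threadCount workers, mirroring the
    // solr.update.batchSize / solr.threadCount settings above.
    public static void ingest(Iterable<String> records, int batchSize,
                              int threadCount, Consumer<List<String>> sink)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        List<String> batch = new ArrayList<>(batchSize);
        for (String rec : records) {
            batch.add(rec);
            if (batch.size() == batchSize) {
                final List<String> toSend = batch;    // hand off a full batch
                pool.submit(() -> sink.accept(toSend));
                batch = new ArrayList<>(batchSize);
            }
        }
        if (!batch.isEmpty()) {                       // flush the remainder
            final List<String> toSend = batch;
            pool.submit(() -> sink.accept(toSend));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```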

With this setup, I'm getting a lot of errors and the ingestion is taking
much longer than it should.

Every so often during the ingestion I get these errors on the Solr servers:

WARN  shard1 - 2014-02-25 11:18:34.341;
org.apache.solr.update.UpdateLog$LogReplayer; Starting log replay
tlog{file=/usr/local/solr_shard1/productCatalog/data/tlog/tlog.0000000000000014074
refcount=2} active=true starting pos=776774
WARN  shard1 - 2014-02-25 11:18:37.275;
org.apache.solr.update.UpdateLog$LogReplayer; Log replay finished.
recoveryInfo=RecoveryInfo{adds=4065 deletes=0 deleteByQuery=0 errors=0
positionOfStart=776774}
WARN  shard1 - 2014-02-25 11:18:37.960; org.apache.solr.core.SolrCore;
[productCatalog] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
WARN  shard1 - 2014-02-25 11:18:37.961; org.apache.solr.core.SolrCore;
[productCatalog] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
WARN  shard1 - 2014-02-25 11:18:37.961; org.apache.solr.core.SolrCore;
[productCatalog] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
ERROR shard1 - 2014-02-25 11:18:37.961;
org.apache.solr.common.SolrException; org.apache.solr.common.SolrException:
Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try
again later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1575)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1346)
        at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:592)

I cut threads down to 1 and batchSize down to 100 and the errors go away,
but the upload time jumps up by a factor of 25.

My solrconfig.xml has:

     <autoCommit>
       <maxDocs>${solr.autoCommit.maxDocs:10000}</maxDocs>
       <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
       <openSearcher>false</openSearcher>
     </autoCommit>

     <autoSoftCommit>
       <maxTime>${solr.autoSoftCommit.maxTime:1000}</maxTime>
     </autoSoftCommit>

I turned autowarmCount down to 0 for all the caches. What else can I tune
to allow me to run bigger batch sizes and more threads in my upload script?
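(For reference, zeroing autowarm on a cache looks like this in solrconfig.xml;
the size values below are illustrative, not taken from my config:)

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```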

-- 

joel cohen, senior system engineer

e joel.cohen@bluefly.com p 212.944.8000 x276
bluefly, inc. 42 w. 39th st. new york, ny 10018
www.bluefly.com <http://www.bluefly.com/?referer=autosig> | *fly since
2013...*

Re: Autocommit, opensearchers and ingestion

Posted by rulinma <ru...@gmail.com>.
good




Re: Autocommit, opensearchers and ingestion

Posted by Mark Miller <ma...@gmail.com>.

On Feb 26, 2014, at 5:24 PM, Joel Cohen <jo...@bluefly.com> wrote:

>  he's told me that he's doing commits in his SolrJ code
> every 1000 items (configurable). Does that override my Solr server settings?

Yes. Even if you have configured autocommit, explicit commits are applied on demand. Generally, clients should not send their own commits if you are using autocommit. If clients want to control this, it's best to set up hard autocommit and have clients use commitWithin for soft commits.

It generally doesn’t make sense for a client to make explicit hard commits with SolrCloud.

- Mark

http://about.me/markrmiller
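(As a concrete sketch of the commitWithin approach: the interval and field values
below are illustrative, not from this thread. In the XML update format it is an
attribute on the add element:)

```xml
<add commitWithin="60000">
  <doc>
    <field name="id">SKU-1234</field>
    <field name="name">example product</field>
  </doc>
</add>
```

SolrJ exposes the same thing through the add overload that takes a
commitWithin value in milliseconds, e.g. server.add(doc, 60000).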

Re: Autocommit, opensearchers and ingestion

Posted by Joel Cohen <jo...@bluefly.com>.
I read that blog too! Great info. I've bumped up the commit times and
turned the ingestion up a bit as well. I've upped hard commit to 5 minutes
and the soft commit to 60 seconds.

     <autoCommit>
       <maxTime>${solr.autoCommit.maxTime:300000}</maxTime>
       <openSearcher>false</openSearcher>
     </autoCommit>

     <autoSoftCommit>
       <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
     </autoSoftCommit>

I'm still getting the same issue. After speaking to the engineer working on
the ingestion code, he's told me that he's doing commits in his SolrJ code
every 1000 items (configurable). Does that override my Solr server settings?


On Tue, Feb 25, 2014 at 3:27 PM, Erick Erickson <er...@gmail.com> wrote:

> [snip: quoted reply trimmed; it appears in full later in this thread]




Re: Autocommit, opensearchers and ingestion

Posted by Erick Erickson <er...@gmail.com>.
Gopal: I'm glad somebody noticed that blog!

Joel:
For bulk loads it's a Good Thing to lengthen out
your soft autocommit interval. A lot. Every second
poor Solr is trying to open up a new searcher while
you're throwing lots of documents at it. That's what's
generating the "too many searchers" problem I'd
guess. Soft commits are less expensive than hard
commits with openSearcher=true (you're not doing this,
and you shouldn't be). But soft commits aren't free.
All the top-level caches are thrown away and autowarming
is performed.....

Also, I'd probably consider just leaving off the bit about
maxDocs in your hard commit, I find it rarely does all
that much good. After all, even if you have to replay the
transaction log, you're only talking 15 seconds here.

Best,
Erick


On Tue, Feb 25, 2014 at 12:08 PM, Gopal Patwa <go...@gmail.com> wrote:

> [snip: quoted messages trimmed; they appear in full later in this thread]

Re: Autocommit, opensearchers and ingestion

Posted by Gopal Patwa <go...@gmail.com>.
This blog post by Erick will help you understand the different commit options
and transaction logs, and it provides some recommendations for the ingestion
process.

http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/


On Tue, Feb 25, 2014 at 11:40 AM, Furkan KAMACI <fu...@gmail.com> wrote:

> [snip: quoted messages trimmed; they appear in full later in this thread]

Re: Autocommit, opensearchers and ingestion

Posted by Furkan KAMACI <fu...@gmail.com>.
Hi;

You should read here:
http://wiki.apache.org/solr/FAQ#What_does_.22exceeded_limit_of_maxWarmingSearchers.3DX.22_mean.3F

On the other hand, do you have 4 Zookeeper instances as a quorum?

Thanks;
Furkan KAMACI


2014-02-25 20:31 GMT+02:00 Joel Cohen <jo...@bluefly.com>:

> [snip: original message quoted in full at the top of this thread]