Posted to solr-user@lucene.apache.org by vsilgalis <vs...@gmail.com> on 2015/09/22 18:24:21 UTC

Solr 4.10.2 Cores in Recovery

We have a collection with 2 shards and 3 nodes per shard, running Solr 4.10.2.

Our issue is that cores that go into recovery never recover; they stay in a
constant state of recovery unless we restart the node and then reload the
core on the leader.  Updates do seem to reach the server, since the
transaction log grows over time, and when we restart the node it replays the
transaction log successfully and chugs along in recovery until we reload the
core on the leader.  If we hit the maxWarmingSearchers error, could that
break something that prevents recovery?
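
For reference, the reload step is just a CoreAdmin RELOAD call against
whichever node is currently the leader for that shard, along the lines of:

http://<leader-host>:8080/solr/admin/cores?action=RELOAD&core=collection1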

Here is the log I have for the node that is in recovery:
INFO  - 2015-09-18 15:10:25.332;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.41:8080/solr/collection1/|http://0.0.0.45:8080/solr/collection1/
{suggestions={}}
INFO  - 2015-09-18 15:10:25.332;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.40:8080/solr/collection1/|http://0.0.0.42:8080/solr/collection1/|http://0.0.0.44:8080/solr/collection1/
{suggestions={}}
INFO  - 2015-09-18 15:10:25.609;
org.apache.solr.update.DirectUpdateHandler2; start
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
WARN  - 2015-09-18 15:10:25.642; org.apache.solr.core.SolrCore;
[collection1] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
ERROR - 2015-09-18 15:10:25.642; org.apache.solr.common.SolrException; auto
commit error...:org.apache.solr.common.SolrException: Error opening new
searcher. exceeded limit of maxWarmingSearchers=2, try again
later.
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1663)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1421)
        at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:615)
        at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
        at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

INFO  - 2015-09-18 15:10:26.429;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.40:8080/solr/collection1/|http://0.0.0.42:8080/solr/collection1/|http://0.0.0.44:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:26.429;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.41:8080/solr/collection1/|http://0.0.0.45:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:26.430;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.40:8080/solr/collection1/|http://0.0.0.42:8080/solr/collection1/|http://0.0.0.44:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:26.430;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.41:8080/solr/collection1/|http://0.0.0.45:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:27.359;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.40:8080/solr/collection1/|http://0.0.0.42:8080/solr/collection1/|http://0.0.0.44:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:27.359;
org.apache.solr.handler.component.SpellCheckComponent;
http://0.0.0.41:8080/solr/collection1/|http://0.0.0.45:8080/solr/collection1/
null
INFO  - 2015-09-18 15:10:27.710;
org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener;
Building spell index for spell checker: default
INFO  - 2015-09-18 15:10:27.766; org.apache.solr.cloud.RecoveryStrategy;
PeerSync Recovery was successful - registering as Active. core=collection1
INFO  - 2015-09-18 15:10:27.766; org.apache.solr.cloud.ZkController;
publishing core=collection1 state=active collection=collection1
INFO  - 2015-09-18 15:10:27.773;
org.apache.solr.update.DefaultSolrCoreState; Running recovery - first
canceling any ongoing recovery
WARN  - 2015-09-18 15:10:27.774; org.apache.solr.cloud.RecoveryStrategy;
Stopping recovery for core=collection1 coreNodeName=solrserver4
INFO  - 2015-09-18 15:10:27.774; org.apache.solr.cloud.RecoveryStrategy;
Starting recovery process.  core=collection1 recoveringAfterStartup=false
INFO  - 2015-09-18 15:10:27.776; org.apache.solr.cloud.RecoveryStrategy;
Finished recovery process. core=collection1
INFO  - 2015-09-18 15:10:27.776; org.apache.solr.cloud.RecoveryStrategy;
Starting recovery process.  core=collection1 recoveringAfterStartup=false
INFO  - 2015-09-18 15:10:27.776;
org.apache.solr.update.DefaultSolrCoreState; Running recovery - first
canceling any ongoing recovery
WARN  - 2015-09-18 15:10:27.777; org.apache.solr.cloud.RecoveryStrategy;
Stopping recovery for core=collection1 coreNodeName=solrserver4
INFO  - 2015-09-18 15:10:27.777; org.apache.solr.cloud.RecoveryStrategy;
Finished recovery process. core=collection1
INFO  - 2015-09-18 15:10:27.778;
org.apache.solr.update.DefaultSolrCoreState; Running recovery - first
canceling any ongoing recovery
INFO  - 2015-09-18 15:10:27.778; org.apache.solr.cloud.RecoveryStrategy;
Starting recovery process.  core=collection1 recoveringAfterStartup=false
WARN  - 2015-09-18 15:10:27.778; org.apache.solr.cloud.RecoveryStrategy;
Stopping recovery for core=collection1 coreNodeName=solrserver4
INFO  - 2015-09-18 15:10:27.778; org.apache.solr.cloud.RecoveryStrategy;
Finished recovery process. core=collection1
INFO  - 2015-09-18 15:10:27.779;
org.apache.solr.update.DefaultSolrCoreState; Running recovery - first
canceling any ongoing recovery
INFO  - 2015-09-18 15:10:27.779; org.apache.solr.cloud.RecoveryStrategy;
Starting recovery process.  core=collection1 recoveringAfterStartup=false
WARN  - 2015-09-18 15:10:27.779; org.apache.solr.cloud.RecoveryStrategy;
Stopping recovery for core=collection1 coreNodeName=solrserver4

The starting/stopping recovery cycle just repeats constantly.

Let me know what else is needed to help troubleshoot this issue.

Thanks




Re: Solr 4.10.2 Cores in Recovery

Posted by Shawn Heisey <ap...@elyograg.org>.
On 9/23/2015 10:10 AM, vsilgalis wrote:
> Thanks guys, this is exactly what I needed, something to dig into and follow
> up on.
>
> I do have a question regarding searcher warmup. When looking here:
> http://0.0.0.0.43:8080/solr/#/collections/plugins/core?entry=searcher
>
> is the warmupTime shown there specific to the last searcher's warmup time?
>
> Are there any other important things I can track with graphite?

The warmup time on the stats page is specific to the active searcher. 
When a commit happens that makes new documents visible, a new searcher
is created, warmed, and swapped into service.  The old searcher is no
longer accessible at that point and will eventually be reclaimed by the
Java garbage collector, so all statistics info on that searcher is lost.
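
If you want to feed those numbers into graphite, one option (assuming your
solrconfig still has the stock admin handlers) is to poll the mbeans handler,
which returns the searcher entry, including warmupTime, as JSON; adjust the
host and core name to match your setup:

http://<host>:8080/solr/collection1/admin/mbeans?stats=true&cat=CORE&wt=json

The cache entries (cat=CACHE), with their hit ratios and eviction counts, are
also worth graphing.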

Thanks,
Shawn


Re: Solr 4.10.2 Cores in Recovery

Posted by vsilgalis <vs...@gmail.com>.
Shawn Heisey-2 wrote
> On 9/22/2015 11:54 AM, vsilgalis wrote:
>> I've actually read that article a few times.
>>
>> Yeah I know we aren't perfect in opening searchers. Yes we are committing
>> from the client, this is something that is changing in our next code
>> release, AND we are auto soft committing every second.  
>>
>> <filterCache class="solr.FastLRUCache" size="32768" initialSize="32768"
>> autowarmCount="256"/>
>> <queryResultCache class="solr.LRUCache" size="32768" initialSize="32768"
>> autowarmCount="256"/>
>> <documentCache class="solr.LRUCache" size="32768" initialSize="32768"
>> autowarmCount="256"/>
> 
> Those are huge caches.  Especially the filterCache, because each filter
> entry can be megabytes in size, depending on how many documents are in
> the core.  If your index ever reaches the point where the filterCache
> can grow to thousands of entries, your heap memory usage may grow out of
> control.
> 
> The documentCache cannot autowarm, so that autowarmCount setting is
> irrelevant.  The other two are important, and 256 is a pretty large
> number for that setting.  It is unlikely that your autowarming completes
> in less than one second.
> 
> I've repeated some of what Erick already told you, but I would like to
> add the following.  On your autoSoftCommit interval of one second, the
> article that Erick linked has this to say:
> 
> -------
> Set your soft commit interval to as long as you can stand. Don't listen
> to your product manager who says "we need no more than 1 second
> latency". Really. Push back hard and see if the /user/ is best served or
> will even notice. Soft commits and NRT are pretty amazing, but they’re
> not free.
> -------
> 
> This autoSoftCommit interval, especially with large indexes, can cause a
> performance death spiral.  In SolrCloud, that death spiral tends to
> cause constant replica recovery.  A previous message you sent to the
> list indicated that your shards are each 10GB in size, which counts as a
> large index.  Many people have indexes that are larger, but that's still
> pretty big.
> 
> Thanks,
> Shawn

Thanks guys, this is exactly what I needed, something to dig into and follow
up on.

I do have a question regarding searcher warmup. When looking here:
http://0.0.0.0.43:8080/solr/#/collections/plugins/core?entry=searcher

is the warmupTime shown there specific to the last searcher's warmup time?

Are there any other important things I can track with graphite?

Thanks,
Vytenis






Re: Solr 4.10.2 Cores in Recovery

Posted by Shawn Heisey <ap...@elyograg.org>.
On 9/22/2015 11:54 AM, vsilgalis wrote:
> I've actually read that article a few times.
>
> Yeah I know we aren't perfect in opening searchers. Yes we are committing
> from the client, this is something that is changing in our next code
> release, AND we are auto soft committing every second.  
>
> <filterCache class="solr.FastLRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>
> <queryResultCache class="solr.LRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>
> <documentCache class="solr.LRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>

Those are huge caches.  Especially the filterCache, because each filter
entry can be megabytes in size, depending on how many documents are in
the core.  If your index ever reaches the point where the filterCache
can grow to thousands of entries, your heap memory usage may grow out of
control.

The documentCache cannot autowarm, so that autowarmCount setting is
irrelevant.  The other two are important, and 256 is a pretty large
number for that setting.  It is unlikely that your autowarming completes
in less than one second.
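
As a rough starting point only (not a recommendation for your exact index --
you will want to test and adjust), something in this neighborhood is far more
typical:

<filterCache class="solr.FastLRUCache" size="512" initialSize="512"
autowarmCount="16"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512"
autowarmCount="16"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"
autowarmCount="0"/>

With autowarmCount that low, warming has a much better chance of finishing
before the next commit arrives.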

I've repeated some of what Erick already told you, but I would like to
add the following.  On your autoSoftCommit interval of one second, the
article that Erick linked has this to say:

-------
Set your soft commit interval to as long as you can stand. Don't listen
to your product manager who says "we need no more than 1 second
latency". Really. Push back hard and see if the /user/ is best served or
will even notice. Soft commits and NRT are pretty amazing, but they’re
not free.
-------

This autoSoftCommit interval, especially with large indexes, can cause a
performance death spiral.  In SolrCloud, that death spiral tends to
cause constant replica recovery.  A previous message you sent to the
list indicated that your shards are each 10GB in size, which counts as a
large index.  Many people have indexes that are larger, but that's still
pretty big.
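
For what it's worth, a commit configuration along these lines in
solrconfig.xml is a reasonable shape to start from (the intervals are
placeholders -- make the soft commit as long as you can stand -- and stop
sending explicit commits from the client):

<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>

The hard commit with openSearcher=false keeps the transaction log under
control without opening a new searcher; only the soft commit makes new
documents visible.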

Thanks,
Shawn


Re: Solr 4.10.2 Cores in Recovery

Posted by Erick Erickson <er...@gmail.com>.
Yep. Sounds bad.

First of all, your filterCache will potentially occupy
(maxDoc / 8) * 32,768 bytes, plus some slop.
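
To put a purely illustrative number on that (substitute your real maxDoc):
with 20 million documents in a core, each filterCache entry is roughly
20,000,000 / 8 = 2.5 MB, so a cache that actually filled its 32,768 slots
could want on the order of 80 GB of heap in the worst case.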

Additionally, you're replaying the last 256 filter queries every time
you open a new searcher (i.e. whenever you do a soft commit or a hard commit
with openSearcher=true; actually, probably whenever you commit from the
client). This is most likely a huge waste.

Your queryResultCache is replaying the most recent 256 queries every
time it opens a searcher too. This is also most likely a huge waste of
time/cycles.

Your documentCache doesn't autowarm so it's not a factor.

These cache settings are quite outside the norm. I suggest you pare them
waaaaaay down, implement some kind of load tester, and gradually
increase them until you see diminishing returns. That's usually at
much smaller numbers than you might expect, although YMMV.

Beware filter queries with NOW, see:
http://lucidworks.com/blog/date-math-now-and-filter-queries/
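
The gist of that article, with a hypothetical field name: a filter like

fq=timestamp:[NOW-7DAYS TO NOW]

creates a brand-new cache entry on every request, because NOW changes every
millisecond, while rounding with date math, e.g.

fq=timestamp:[NOW/DAY-7DAYS TO NOW/DAY+1DAY]

produces the same filter all day long, so the cache entry actually gets
reused.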

Best,
Erick

On Tue, Sep 22, 2015 at 10:54 AM, vsilgalis <vs...@gmail.com> wrote:
> Erick Erickson wrote
>> Things shouldn't be going into recovery that often.
>>
>> Exceeding the maxwarming searchers indicates that you're committing
>> very often, and that your autowarming interval exceeds the interval
>> between
>> commits (either hard commit with openSearcher set to true or soft
>> commits).
>>
>> I'd focus on that bit first. How are you committing, what are your
>> autowarm
>> settings etc?
>>
>> Are you committing from the client? Do you have very high (> 32 IMO)
>> autowarm counts for your caches in solrconfig.xml? etc.
>>
>> Here's a long writeup of commits -n- stuff:
>> https://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
>>
>> Best,
>> Erick
>
> I've actually read that article a few times.
>
> Yeah I know we aren't perfect in opening searchers. Yes we are committing
> from the client, this is something that is changing in our next code
> release, AND we are auto soft committing every second.
>
> <filterCache class="solr.FastLRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>
> <queryResultCache class="solr.LRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>
> <documentCache class="solr.LRUCache" size="32768" initialSize="32768"
> autowarmCount="256"/>
>
> Sounds like this might be bad?
>
>
>

Re: Solr 4.10.2 Cores in Recovery

Posted by vsilgalis <vs...@gmail.com>.
Erick Erickson wrote
> Things shouldn't be going into recovery that often.
> 
> Exceeding the maxwarming searchers indicates that you're committing
> very often, and that your autowarming interval exceeds the interval
> between
> commits (either hard commit with openSearcher set to true or soft
> commits).
> 
> I'd focus on that bit first. How are you committing, what are your
> autowarm
> settings etc?
> 
> Are you committing from the client? Do you have very high (> 32 IMO)
> autowarm counts for your caches in solrconfig.xml? etc.
> 
> Here's a long writeup of commits -n- stuff:
> https://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
> 
> Best,
> Erick

I've actually read that article a few times.

Yeah I know we aren't perfect in opening searchers. Yes we are committing
from the client, this is something that is changing in our next code
release, AND we are auto soft committing every second.  

<filterCache class="solr.FastLRUCache" size="32768" initialSize="32768"
autowarmCount="256"/>
<queryResultCache class="solr.LRUCache" size="32768" initialSize="32768"
autowarmCount="256"/>
<documentCache class="solr.LRUCache" size="32768" initialSize="32768"
autowarmCount="256"/>

Sounds like this might be bad?




Re: Solr 4.10.2 Cores in Recovery

Posted by Erick Erickson <er...@gmail.com>.
Things shouldn't be going into recovery that often.

Exceeding maxWarmingSearchers indicates that you're committing very often,
and that your autowarming takes longer than the interval between commits
(either hard commits with openSearcher set to true, or soft commits).
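
That limit is the maxWarmingSearchers setting in the <query> section of
solrconfig.xml, which in the stock config looks like:

<maxWarmingSearchers>2</maxWarmingSearchers>

Raising it usually just masks the real problem, which is committing faster
than warming can finish.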

I'd focus on that bit first. How are you committing, what are your autowarm
settings etc?

Are you committing from the client? Do you have very high (> 32 IMO)
autowarm counts for your caches in solrconfig.xml? etc.

Here's a long writeup of commits -n- stuff:
https://lucidworks.com/blog/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick

On Tue, Sep 22, 2015 at 9:24 AM, vsilgalis <vs...@gmail.com> wrote:
> We have a collection with 2 shards, 3 nodes per shard running solr 4.10.2
>
> Our issue is that cores that get in recovery never recover, they are in a
> constant state of recovery unless we restart the node and then reload the
> core on the leader.  Updates seem to get to the server fine as the
> transaction log grows over time and when we restart the node it replays the
> transaction log successfully and chugs along in recovery until we reload the
> core on the leader.  If we hit the maxwarmingsearchers error would that
> break something that prevents recovery?