Posted to dev@lucene.apache.org by "Joseph Duchesne (JIRA)" <ji...@apache.org> on 2014/02/13 16:38:19 UTC

[jira] [Updated] (SOLR-5724) Two node, one shard Solr instance intermittently going offline

     [ https://issues.apache.org/jira/browse/SOLR-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joseph Duchesne updated SOLR-5724:
----------------------------------

    Description: 
One server is stuck in state "recovering" while the other is stuck in state "down". After waiting 45 minutes or so for the cluster to recover, the statuses were the same. 

Log messages on the "recovering" server (just the individual error messages, for brevity; I can provide full stack traces if that is helpful):
{quote}
We are not the leader
ClusterState says we are the leader, but locally we don't think so
cancelElection did not find election node to remove
We are not the leader
No registered leader was found, collection:listsC slice:shard1
No registered leader was found, collection:listsC slice:shard1
{quote}
On the "down" server at the same timeframe:
{quote}
org.apache.solr.common.SolrException; forwarding update to http://10.0.2.48:8983/solr/listsC/ failed - retrying ... retries: 3
org.apache.solr.update.StreamingSolrServers$1; error
Error while trying to recover. core=listsC:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: We are not the leader
Recovery failed - trying again... (0) core=listsC
Stopping recovery for zkNodeName=core_node2core=listsC
org.apache.solr.update.StreamingSolrServers$1; error
org.apache.solr.common.SolrException: Service Unavailable
{quote}
I am not sure what is causing this; however, it has happened 3 times in the past week. If there are any additional logs I can provide, or if there is anything I can do to try to figure this out myself, I will gladly try to help. 

  was:
One server is stuck in state "recovering" while the other is stuck in state "down". After waiting 45 minutes or so for the cluster to recover, the statuses were the same. 

Log messages on the "recovering" server:
We are not the leader
ClusterState says we are the leader, but locally we don't think so
cancelElection did not find election node to remove
We are not the leader
No registered leader was found, collection:listsC slice:shard1
No registered leader was found, collection:listsC slice:shard1

On the "down" server at the same timeframe:
org.apache.solr.common.SolrException; forwarding update to http://10.0.2.48:8983/solr/listsC/ failed - retrying ... retries: 3
org.apache.solr.update.StreamingSolrServers$1; error
Error while trying to recover. core=listsC:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: We are not the leader
Recovery failed - trying again... (0) core=listsC
Stopping recovery for zkNodeName=core_node2core=listsC
org.apache.solr.update.StreamingSolrServers$1; error
org.apache.solr.common.SolrException: Service Unavailable

I am not sure what is causing this; however, it has happened 3 times in the past week. If there are any additional logs I can provide, or if there is anything I can do to try to figure this out myself, I will gladly try to help. 


> Two node, one shard Solr instance intermittently going offline 
> ---------------------------------------------------------------
>
>                 Key: SOLR-5724
>                 URL: https://issues.apache.org/jira/browse/SOLR-5724
>             Project: Solr
>          Issue Type: Bug
>    Affects Versions: 4.6.1
>         Environment: Ubuntu 12.04.3 LTS, 64 bit,  java version "1.6.0_45"
> Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)
>            Reporter: Joseph Duchesne
>
> One server is stuck in state "recovering" while the other is stuck in state "down". After waiting 45 minutes or so for the cluster to recover, the statuses were the same. 
> Log messages on the "recovering" server (just the individual error messages, for brevity; I can provide full stack traces if that is helpful):
> {quote}
> We are not the leader
> ClusterState says we are the leader, but locally we don't think so
> cancelElection did not find election node to remove
> We are not the leader
> No registered leader was found, collection:listsC slice:shard1
> No registered leader was found, collection:listsC slice:shard1
> {quote}
> On the "down" server at the same timeframe:
> {quote}
> org.apache.solr.common.SolrException; forwarding update to http://10.0.2.48:8983/solr/listsC/ failed - retrying ... retries: 3
> org.apache.solr.update.StreamingSolrServers$1; error
> Error while trying to recover. core=listsC:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: We are not the leader
> Recovery failed - trying again... (0) core=listsC
> Stopping recovery for zkNodeName=core_node2core=listsC
> org.apache.solr.update.StreamingSolrServers$1; error
> org.apache.solr.common.SolrException: Service Unavailable
> {quote}
> I am not sure what is causing this; however, it has happened 3 times in the past week. If there are any additional logs I can provide, or if there is anything I can do to try to figure this out myself, I will gladly try to help. 
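Editor's note: the replica states described above ("recovering", "down") are recorded in SolrCloud's clusterstate.json in ZooKeeper, so one quick way to confirm the symptom is to parse that document and list every replica that is not "active". Below is a minimal, hedged sketch of such a check; the structure mirrors the 4.x clusterstate.json layout, but the node names (`core_node1`) and IPs in the sample are assumptions modeled on the logs above, not data from the actual cluster.

```python
import json

# Hypothetical clusterstate.json fragment modeled on the report:
# collection "listsC", one shard, two replicas. Node names and the
# 10.0.2.47 address are illustrative assumptions, not taken from the issue.
SAMPLE = json.dumps({
    "listsC": {
        "shards": {
            "shard1": {
                "replicas": {
                    "core_node1": {"state": "recovering",
                                   "base_url": "http://10.0.2.47:8983/solr"},
                    "core_node2": {"state": "down",
                                   "base_url": "http://10.0.2.48:8983/solr"},
                }
            }
        }
    }
})

def stuck_replicas(clusterstate_json):
    """Return (collection, shard, core, state) for every replica whose
    state is anything other than 'active'."""
    stuck = []
    for coll, cdata in json.loads(clusterstate_json).items():
        for shard, sdata in cdata.get("shards", {}).items():
            for core, rdata in sdata.get("replicas", {}).items():
                if rdata.get("state") != "active":
                    stuck.append((coll, shard, core, rdata["state"]))
    return stuck
```

In practice the JSON would come from ZooKeeper (e.g. `get /clusterstate.json` in the ZooKeeper CLI) rather than the embedded sample; a cluster matching the report would show both replicas of shard1 listed as stuck.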



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
