Posted to issues@lucene.apache.org by "Lei Wu (Jira)" <ji...@apache.org> on 2019/09/16 20:25:00 UTC

[jira] [Updated] (SOLR-13765) Deadlock on Solr cloud request

     [ https://issues.apache.org/jira/browse/SOLR-13765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei Wu updated SOLR-13765:
--------------------------
    Description: 
Hi there,

We are seeing a deadlock issue with SolrCloud requests.

Say we have a collection with one shard and two replicas for that shard. For whatever reason the cluster appears to be active, but each individual replica is down. When a request comes in, Solr on the node hosting replica 1 tries to find a remote node (replica 2) to handle the request, since the local core (replica 1) is down; when the other node (replica 2) receives the request, it does the same thing and forwards the request back to the original node (replica 1). The two nodes keep proxying the request back and forth, which eventually exhausts all available sockets and fails with `Too many open files`.

It is not clear why HttpSolrCall.getRemoteCoreUrl considers a replica that is not active when picking a node to handle the request, but taking that logic out seems to fix the problem.
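For reference, below is a minimal sketch of the kind of guard we have in mind, written in Java against the SolrJ cloud classes (ClusterState, Slice, Replica). The class and method names of the helper are hypothetical and this is not the actual HttpSolrCall code; it only illustrates restricting the remote core URL to replicas that are ACTIVE and hosted on live nodes, and failing fast when no such replica exists.

```java
import java.util.Set;

import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

/**
 * Hypothetical helper illustrating the suggested behavior: only consider
 * replicas that are ACTIVE and on a live node when choosing a remote core
 * URL, and return null (fail fast) when none qualifies.
 */
public class ActiveRemoteCorePicker {

  public static String pickRemoteCoreUrl(ClusterState clusterState,
                                          String collectionName,
                                          String localNodeName) {
    Set<String> liveNodes = clusterState.getLiveNodes();
    for (Slice slice : clusterState.getCollection(collectionName).getSlices()) {
      for (Replica replica : slice.getReplicas()) {
        boolean remote = !replica.getNodeName().equals(localNodeName);
        boolean active = replica.getState() == Replica.State.ACTIVE
            && liveNodes.contains(replica.getNodeName());
        if (remote && active) {
          // Forward only to a replica that can actually serve the request.
          return replica.getCoreUrl();
        }
      }
    }
    // No active remote replica: let the caller return an error instead of
    // proxying to a down replica, which is what creates the forwarding loop.
    return null;
  }
}
```

With a guard like this, the node hosting replica 1 would return an error to the client when no active remote replica exists, instead of bouncing the request to replica 2 and back until sockets run out.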

> Deadlock on Solr cloud request
> ------------------------------
>
>                 Key: SOLR-13765
>                 URL: https://issues.apache.org/jira/browse/SOLR-13765
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>    Affects Versions: 7.7.2
>            Reporter: Lei Wu
>            Priority: Major
>
> Hi there,
> We are seeing a deadlock issue with SolrCloud requests.
> Say we have a collection with one shard and two replicas for that shard. For whatever reason the cluster appears to be active, but each individual replica is down. When a request comes in, Solr on the node hosting replica 1 tries to find a remote node (replica 2) to handle the request, since the local core (replica 1) is down; when the other node (replica 2) receives the request, it does the same thing and forwards the request back to the original node (replica 1). The two nodes keep proxying the request back and forth, which eventually exhausts all available sockets and fails with `Too many open files`.
> It is not clear why HttpSolrCall.getRemoteCoreUrl considers a replica that is not active when picking a node to handle the request, but taking that logic out seems to fix the problem.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org