Posted to solr-user@lucene.apache.org by Sathya <sa...@gmail.com> on 2014/03/20 06:44:21 UTC

Solr4.7 No live SolrServers available to handle this request

Hi Friends,

I am new to Solr. I have 5 Solr nodes on 5 different machines. When I index
the data, sometimes a "*No live SolrServers available to handle this
request*" exception occurs on 1 or 2 machines.

I don't know why this happens or how to solve it. Kindly help me to solve
this issue.



--
View this message in context: http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: Solr4.7 No live SolrServers available to handle this request

Posted by Sathya <sa...@gmail.com>.
Hi Greg,

This is my clusterstate.json:

WatchedEvent state:SyncConnected type:None path:null
[zk: 10.10.1.72:2185(CONNECTED) 0] get /clusterstate.json
{"set_recent":{
    "shards":{
      "shard1":{
        "range":"80000000-d554ffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard1_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard1_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard1_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard1_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard1_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard1_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard1_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard1_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard1_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard1_replica5",
            "node_name":"10.10.1.14:5050_solr"}}},
      "shard2":{
        "range":"d5550000-2aa9ffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard2_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard2_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard2_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard2_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard2_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard2_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard2_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard2_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard2_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard2_replica5",
            "node_name":"10.10.1.14:5050_solr"}}},
      "shard3":{
        "range":"2aaa0000-7fffffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard3_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard3_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard3_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard3_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard3_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard3_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard3_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard3_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard3_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard3_replica5",
            "node_name":"10.10.1.14:5050_solr"}}}},
    "maxShardsPerNode":"3",
    "router":{"name":"compositeId"},
    "replicationFactor":"5"}}
cZxid = 0x100000014
ctime = Tue Mar 18 13:05:38 IST 2014
mZxid = 0x50000027c
mtime = Mon Mar 24 14:22:24 IST 2014
pZxid = 0x100000014
cversion = 0
dataVersion = 387
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4182
numChildren = 0
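For anyone puzzled by ranges like "80000000-d554ffff" above: the compositeId router splits the signed 32-bit hash space evenly across the shards and rounds each interior split point down to a 0x10000 boundary, which is why the ranges end in ffff. A rough sketch of that arithmetic (illustrative Python, not Solr's actual Java code):

```python
def partition_ranges(num_shards):
    """Approximate Solr's CompositeIdRouter range partitioning: split the
    signed 32-bit hash space into num_shards ranges, rounding each interior
    split point down to a 0x10000 boundary."""
    start = -0x80000000          # 0x80000000 interpreted as signed 32-bit
    end = 0x7FFFFFFF
    step = 2**32 // num_shards
    ranges = []
    lo = start
    for i in range(1, num_shards + 1):
        if i == num_shards:
            hi = end
        else:
            # round the split point down to a multiple of 0x10000
            hi = ((start + step * i) & ~0xFFFF) - 1
        ranges.append("{:08x}-{:08x}".format(lo & 0xFFFFFFFF, hi & 0xFFFFFFFF))
        lo = hi + 1
    return ranges
```

With num_shards=3 this reproduces exactly the three shard ranges in the clusterstate above; with num_shards=1 it gives the single full range 80000000-7fffffff.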


Kindly let me know if you need any further inputs.



--
View this message in context: http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679p4126478.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: Solr4.7 No live SolrServers available to handle this request

Posted by Michael Sokolov <ms...@safaribooksonline.com>.
Excellent, thanks Shalin!

On 3/22/2014 3:32 PM, Shalin Shekhar Mangar wrote:
> Thanks Michael! I just committed your fix. It will be released with 4.7.1
>
> On Fri, Mar 21, 2014 at 8:30 PM, Michael Sokolov
> <ms...@safaribooksonline.com> wrote:
>> I just managed to track this down -- as you said the disconnect was a red
>> herring.
>>
>> Ultimately the problem was caused by a custom analysis component we wrote
>> that was raising an IOException -- it was missing some configuration files
>> it relies on.
>>
>> What might be interesting for solr devs to have a look at is that the exception
>> was completely swallowed by JavabinCodec, making it very difficult to track
>> down the problem.  Furthermore -- if the /add request was routed directly to
>> the shard where the document was destined to end up, then the IOException
>> raised by the analysis component (a char filter) showed up in the Solr HTTP
>> response (probably because my client used XML format in one test -- javabin
>> is used internally in SolrCloud).  But if the request was routed to a
>> different shard, then the only exception that showed up anywhere (in the
>> logs, in the HTTP response) was kind of irrelevant.
>>
>> I think this could be fixed pretty easily; see SOLR-5985 for my suggestion.
>>
>> -Mike
>>
>>
>>
>> On 03/21/2014 10:20 AM, Greg Walters wrote:
>>> Broken pipe errors are generally caused by unexpected disconnections and
>>> are sometimes hard to track down. Given the stack traces you've provided
>>> it's hard to point to any one thing and I suspect the relevant information
>>> was snipped out in the "long dump of document fields". You might grab the
>>> entire error from the client you're uploading documents with, the server
>>> you're connected to and any other nodes that have an error at the same time
>>> and put it on pastebin or the like.
>>>
>>> Thanks,
>>> Greg
>>>
>>> On Mar 20, 2014, at 3:36 PM, Michael Sokolov
>>> <ms...@safaribooksonline.com> wrote:
>>>
>>>> I'm getting a similar exception when writing documents (on the client
>>>> side).  I can write one document fine, but the second (which is being routed
>>>> to a different shard) generates the error.  It happens every time -
>>>> definitely not a resource issue or timing problem since this database is
>>>> completely empty -- I'm just getting started and running some tests, so
>>>> there must be some kind of setup problem.  But it's difficult to diagnose
>>>> (for me, anyway)!  I'd appreciate any insight, hints, guesses, etc. since
>>>> I'm stuck. Thanks!
>>>>
>>>> One node (the leader?) is reporting "Internal Server Error" in its log,
>>>> and another node (presumably the shard where the document is being directed)
>>>> bombs out like this:
>>>>
>>>> ERROR - 2014-03-20 15:56:53.022; org.apache.solr.common.SolrException;
>>>> null:org.apache.solr.common.SolrException: ERROR adding document
>>>> SolrInputDocument(
>>>>
>>>> ... long dump of document fields
>>>>
>>>> )
>>>>      at
>>>> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:99)
>>>>      at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
>>>>      at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
>>>>      at
>>>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
>>>>      at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
>>>>      at
>>>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
>>>>      at
>>>> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
>>>>      at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
>>>>      at
>>>> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
>>>>      at
>>>> org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
>>>>      at
>>>> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
>>>>      at
>>>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>>>>      at
>>>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>>>>      at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
>>>>      at
>>>> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:721)
>>>> ...
>>>> Caused by: java.net.SocketException: Broken pipe
>>>>          at java.net.SocketOutputStream.socketWrite0(Native Method)
>>>>          at
>>>> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
>>>>          at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
>>>>          at
>>>> org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
>>>>          at
>>>> org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
>>>>          at
>>>> org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:366)
>>>>          at
>>>> org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:240)
>>>>          at
>>>> org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:119)
>>>>          at
>>>> org.apache.coyote.http11.AbstractOutputBuffer.doWrite(AbstractOutputBuffer.java:192)
>>>>          at org.apache.coyote.Response.doWrite(Response.java:520)
>>>>          at
>>>> org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:408)
>>>>          ... 37 more
>>>>
>>>> This is with Solr 4.6.1, Tomcat 7.  Here's my clusterstate.json. Updates
>>>> are being sent to the test1x3 collection.
>>>>
>>>>
>>>> {
>>>>    "test3x1":{
>>>>      "shards":{
>>>>        "shard1":{
>>>>          "range":"80000000-d554ffff",
>>>>          "state":"active",
>>>>          "replicas":{"core_node1":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.37:8080/solr",
>>>>              "core":"test3x1_shard1_replica1",
>>>>              "node_name":"10.4.24.37:8080_solr",
>>>>              "leader":"true"}}},
>>>>        "shard2":{
>>>>          "range":"d5550000-2aa9ffff",
>>>>          "state":"active",
>>>>          "replicas":{"core_node3":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.39:8080/solr",
>>>>              "core":"test3x1_shard2_replica1",
>>>>              "node_name":"10.4.24.39:8080_solr",
>>>>              "leader":"true"}}},
>>>>        "shard3":{
>>>>          "range":"2aaa0000-7fffffff",
>>>>          "state":"active",
>>>>          "replicas":{"core_node2":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.38:8080/solr",
>>>>              "core":"test3x1_shard3_replica1",
>>>>              "node_name":"10.4.24.38:8080_solr",
>>>>              "leader":"true"}}}},
>>>>      "maxShardsPerNode":"1",
>>>>      "router":{"name":"compositeId"},
>>>>      "replicationFactor":"1"},
>>>>    "test1x3":{
>>>>      "shards":{"shard1":{
>>>>          "range":"80000000-7fffffff",
>>>>          "state":"active",
>>>>          "replicas":{
>>>>            "core_node1":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.39:8080/solr",
>>>>              "core":"test1x3_shard1_replica2",
>>>>              "node_name":"10.4.24.39:8080_solr",
>>>>              "leader":"true"},
>>>>            "core_node2":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.38:8080/solr",
>>>>              "core":"test1x3_shard1_replica1",
>>>>              "node_name":"10.4.24.38:8080_solr"},
>>>>            "core_node3":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.37:8080/solr",
>>>>              "core":"test1x3_shard1_replica3",
>>>>              "node_name":"10.4.24.37:8080_solr"}}}},
>>>>      "maxShardsPerNode":"1",
>>>>      "router":{"name":"compositeId"},
>>>>      "replicationFactor":"3"},
>>>>    "test2x2":{
>>>>      "shards":{
>>>>        "shard1":{
>>>>          "range":"80000000-ffffffff",
>>>>          "state":"active",
>>>>          "replicas":{
>>>>            "core_node1":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.39:8080/solr",
>>>>              "core":"test2x2_shard1_replica1",
>>>>              "node_name":"10.4.24.39:8080_solr"},
>>>>            "core_node4":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.38:8080/solr",
>>>>              "core":"test2x2_shard1_replica2",
>>>>              "node_name":"10.4.24.38:8080_solr",
>>>>              "leader":"true"}}},
>>>>        "shard2":{
>>>>          "range":"0-7fffffff",
>>>>          "state":"active",
>>>>          "replicas":{
>>>>            "core_node2":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.37:8080/solr",
>>>>              "core":"test2x2_shard2_replica1",
>>>>              "node_name":"10.4.24.37:8080_solr",
>>>>              "leader":"true"},
>>>>            "core_node3":{
>>>>              "state":"active",
>>>>              "base_url":"http://10.4.24.39:8080/solr",
>>>>              "core":"test2x2_shard2_replica2",
>>>>              "node_name":"10.4.24.39:8080_solr"}}}},
>>>>      "maxShardsPerNode":"2",
>>>>      "router":{"name":"compositeId"},
>>>>      "replicationFactor":"2"}}
>>>>
>>>>
>>>>
>>>> On 03/20/2014 09:44 AM, Greg Walters wrote:
>>>>> Sathya,
>>>>>
>>>>> I assume you're using Solr Cloud. Please provide your clusterstate.json
>>>>> while you're seeing this issue and check your logs for any exceptions. With
>>>>> no information from you it's hard to troubleshoot any issues!
>>>>>
>>>>> Thanks,
>>>>> Greg
>>>>>
>>>>> On Mar 20, 2014, at 12:44 AM, Sathya <sa...@gmail.com> wrote:
>>>>>
>>>>>> Hi Friends,
>>>>>>
>>>>>> I am new to Solr. I have 5 Solr nodes on 5 different machines. When I
>>>>>> index the data, sometimes a "*No live SolrServers available to handle
>>>>>> this request*" exception occurs on 1 or 2 machines.
>>>>>>
>>>>>> I don't know why this happens or how to solve it. Kindly help me to
>>>>>> solve this issue.
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> View this message in context:
>>>>>> http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679.html
>>>>>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>
>
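An aside on the swallowed-exception point above: the fix Mike suggests in SOLR-5985 amounts to propagating the root cause instead of dropping it. In miniature (Python used here as a neutral sketch; the hypothetical analyze() stands in for the failing custom char filter):

```python
def analyze(text):
    # Stand-in for the custom char filter: it fails because a
    # configuration file it relies on is missing.
    raise IOError("missing char filter configuration file")

def add_document_swallowing(text):
    try:
        return analyze(text)
    except IOError:
        # Swallowing here loses the root cause; the caller only ever
        # sees some later, unrelated failure (e.g. a broken pipe).
        return None

def add_document_chained(text):
    try:
        return analyze(text)
    except IOError as e:
        # Propagate with the original attached, so logs show the real cause.
        raise RuntimeError("ERROR adding document") from e
```

With the chained version, the "ERROR adding document" failure still carries the original IOError as its cause, which is exactly the information that was missing from the logs in the scenario above.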


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Greg Walters <gr...@answers.com>.
Sathya,

We're still missing a fair amount of information here, though it looks like your cluster is healthy. How are you indexing, and what's the request you're sending that results in the error you're seeing? Have you checked your nodes' logs for errors that correspond with the one you're seeing while indexing?

Thanks,
Greg
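
To make the question above concrete: the client only routes to replicas that are both marked "active" in clusterstate.json and whose node appears in ZooKeeper's /live_nodes. A quick way to cross-check a pasted clusterstate against a live-nodes list (illustrative Python, not SolrJ):

```python
def unavailable_replicas(clusterstate, live_nodes):
    """List (collection, shard, replica) entries a cloud-aware client
    would skip: state is not 'active', or the node is not live."""
    bad = []
    for coll, cdata in clusterstate.items():
        for shard, sdata in cdata["shards"].items():
            for name, rep in sdata["replicas"].items():
                if rep["state"] != "active" or rep["node_name"] not in live_nodes:
                    bad.append((coll, shard, name))
    return bad
```

An empty result means every replica looks routable, in which case the error is more likely transient (a node briefly dropping out of /live_nodes) and the timestamps in the nodes' logs become the thing to correlate.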

On Mar 22, 2014, at 2:32 PM, Shalin Shekhar Mangar <sh...@gmail.com> wrote:

> Thanks Michael! I just committed your fix. It will be released with 4.7.1
> 
> On Fri, Mar 21, 2014 at 8:30 PM, Michael Sokolov
> <ms...@safaribooksonline.com> wrote:
>> I just managed to track this down -- as you said the disconnect was a red
>> herring.
>> 
>> Ultimately the problem was caused by a custom analysis component we wrote
>> that was raising an IOException -- it was missing some configuration files
>> it relies on.
>> 
>> What might be interesting for solr devs to have a look at is that the exception
>> was completely swallowed by JavabinCodec, making it very difficult to track
>> down the problem.  Furthermore -- if the /add request was routed directly to
>> the shard where the document was destined to end up, then the IOException
>> raised by the analysis component (a char filter) showed up in the Solr HTTP
>> response (probably because my client used XML format in one test -- javabin
>> is used internally in SolrCloud).  But if the request was routed to a
>> different shard, then the only exception that showed up anywhere (in the
>> logs, in the HTTP response) was kind of irrelevant.
>> 
>> I think this could be fixed pretty easily; see SOLR-5985 for my suggestion.
>> 
>> -Mike
>> 
>> 
>> 
>> On 03/21/2014 10:20 AM, Greg Walters wrote:
>>> 
>>> Broken pipe errors are generally caused by unexpected disconnections and
>> are sometimes hard to track down. Given the stack traces you've provided
>>> it's hard to point to any one thing and I suspect the relevant information
>>> was snipped out in the "long dump of document fields". You might grab the
>>> entire error from the client you're uploading documents with, the server
>>> you're connected to and any other nodes that have an error at the same time
>>> and put it on pastebin or the like.
>>> 
>>> Thanks,
>>> Greg
>>> 
>>> On Mar 20, 2014, at 3:36 PM, Michael Sokolov
>>> <ms...@safaribooksonline.com> wrote:
>>> 
>>>> I'm getting a similar exception when writing documents (on the client
>>>> side).  I can write one document fine, but the second (which is being routed
>>>> to a different shard) generates the error.  It happens every time -
>>>> definitely not a resource issue or timing problem since this database is
>>>> completely empty -- I'm just getting started and running some tests, so
>>>> there must be some kind of setup problem.  But it's difficult to diagnose
>>>> (for me, anyway)!  I'd appreciate any insight, hints, guesses, etc. since
>>>> I'm stuck. Thanks!
>>>> 
>>>> One node (the leader?) is reporting "Internal Server Error" in its log,
>>>> and another node (presumably the shard where the document is being directed)
>>>> bombs out like this:
>>>> 
>>>> ERROR - 2014-03-20 15:56:53.022; org.apache.solr.common.SolrException;
>>>> null:org.apache.solr.common.SolrException: ERROR adding document
>>>> SolrInputDocument(
>>>> 
>>>> ... long dump of document fields
>>>> 
>>>> )
>>>>    at
>>>> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:99)
>>>>    at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
>>>>    at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
>>>>    at
>>>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
>>>>    at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
>>>>    at
>>>> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
>>>>    at
>>>> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
>>>>    at
>>>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
>>>>    at
>>>> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
>>>>    at
>>>> org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
>>>>    at
>>>> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
>>>>    at
>>>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>>>>    at
>>>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>>>>    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
>>>>    at
>>>> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:721)
>>>> ...
>>>> Caused by: java.net.SocketException: Broken pipe
>>>>        at java.net.SocketOutputStream.socketWrite0(Native Method)
>>>>        at
>>>> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
>>>>        at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
>>>>        at
>>>> org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
>>>>        at
>>>> org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
>>>>        at
>>>> org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:366)
>>>>        at
>>>> org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:240)
>>>>        at
>>>> org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:119)
>>>>        at
>>>> org.apache.coyote.http11.AbstractOutputBuffer.doWrite(AbstractOutputBuffer.java:192)
>>>>        at org.apache.coyote.Response.doWrite(Response.java:520)
>>>>        at
>>>> org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:408)
>>>>        ... 37 more
>>>> 
>>>> This is with Solr 4.6.1, Tomcat 7.  Here's my clusterstate.json. Updates
>>>> are being sent to the test1x3 collection.
>>>> 
>>>> 
>>>> {
>>>>  "test3x1":{
>>>>    "shards":{
>>>>      "shard1":{
>>>>        "range":"80000000-d554ffff",
>>>>        "state":"active",
>>>>        "replicas":{"core_node1":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.37:8080/solr",
>>>>            "core":"test3x1_shard1_replica1",
>>>>            "node_name":"10.4.24.37:8080_solr",
>>>>            "leader":"true"}}},
>>>>      "shard2":{
>>>>        "range":"d5550000-2aa9ffff",
>>>>        "state":"active",
>>>>        "replicas":{"core_node3":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.39:8080/solr",
>>>>            "core":"test3x1_shard2_replica1",
>>>>            "node_name":"10.4.24.39:8080_solr",
>>>>            "leader":"true"}}},
>>>>      "shard3":{
>>>>        "range":"2aaa0000-7fffffff",
>>>>        "state":"active",
>>>>        "replicas":{"core_node2":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.38:8080/solr",
>>>>            "core":"test3x1_shard3_replica1",
>>>>            "node_name":"10.4.24.38:8080_solr",
>>>>            "leader":"true"}}}},
>>>>    "maxShardsPerNode":"1",
>>>>    "router":{"name":"compositeId"},
>>>>    "replicationFactor":"1"},
>>>>  "test1x3":{
>>>>    "shards":{"shard1":{
>>>>        "range":"80000000-7fffffff",
>>>>        "state":"active",
>>>>        "replicas":{
>>>>          "core_node1":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.39:8080/solr",
>>>>            "core":"test1x3_shard1_replica2",
>>>>            "node_name":"10.4.24.39:8080_solr",
>>>>            "leader":"true"},
>>>>          "core_node2":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.38:8080/solr",
>>>>            "core":"test1x3_shard1_replica1",
>>>>            "node_name":"10.4.24.38:8080_solr"},
>>>>          "core_node3":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.37:8080/solr",
>>>>            "core":"test1x3_shard1_replica3",
>>>>            "node_name":"10.4.24.37:8080_solr"}}}},
>>>>    "maxShardsPerNode":"1",
>>>>    "router":{"name":"compositeId"},
>>>>    "replicationFactor":"3"},
>>>>  "test2x2":{
>>>>    "shards":{
>>>>      "shard1":{
>>>>        "range":"80000000-ffffffff",
>>>>        "state":"active",
>>>>        "replicas":{
>>>>          "core_node1":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.39:8080/solr",
>>>>            "core":"test2x2_shard1_replica1",
>>>>            "node_name":"10.4.24.39:8080_solr"},
>>>>          "core_node4":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.38:8080/solr",
>>>>            "core":"test2x2_shard1_replica2",
>>>>            "node_name":"10.4.24.38:8080_solr",
>>>>            "leader":"true"}}},
>>>>      "shard2":{
>>>>        "range":"0-7fffffff",
>>>>        "state":"active",
>>>>        "replicas":{
>>>>          "core_node2":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.37:8080/solr",
>>>>            "core":"test2x2_shard2_replica1",
>>>>            "node_name":"10.4.24.37:8080_solr",
>>>>            "leader":"true"},
>>>>          "core_node3":{
>>>>            "state":"active",
>>>>            "base_url":"http://10.4.24.39:8080/solr",
>>>>            "core":"test2x2_shard2_replica2",
>>>>            "node_name":"10.4.24.39:8080_solr"}}}},
>>>>    "maxShardsPerNode":"2",
>>>>    "router":{"name":"compositeId"},
>>>>    "replicationFactor":"2"}}
>>>> 
>>>> 
>>>> 
>>>> On 03/20/2014 09:44 AM, Greg Walters wrote:
>>>>> 
>>>>> Sathya,
>>>>> 
>>>>> I assume you're using Solr Cloud. Please provide your clusterstate.json
>>>>> while you're seeing this issue and check your logs for any exceptions. With
>>>>> no information from you it's hard to troubleshoot any issues!
>>>>> 
>>>>> Thanks,
>>>>> Greg
>>>>> 
>>>>> On Mar 20, 2014, at 12:44 AM, Sathya <sa...@gmail.com> wrote:
>>>>> 
>>>>>> Hi Friends,
>>>>>> 
>>>>>> I am new to Solr. I have 5 Solr nodes on 5 different machines. When I
>>>>>> index the data, sometimes a "*No live SolrServers available to handle
>>>>>> this request*" exception occurs on 1 or 2 machines.
>>>>>> 
>>>>>> I don't know why this happens or how to solve it. Kindly help me to
>>>>>> solve this issue.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> View this message in context:
>>>>>> http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679.html
>>>>>> Sent from the Solr - User mailing list archive at Nabble.com.
>> 
>> 
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.
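
The "broken pipe" symptom discussed in this thread is easy to reproduce in isolation: it is simply a write to a socket whose peer has already disconnected. A minimal POSIX demonstration (CPython ignores SIGPIPE, so the write surfaces as a BrokenPipeError instead of killing the process):

```python
import socket

def provoke_broken_pipe():
    # A connected pair of local sockets; the "peer" side hangs up first.
    writer, peer = socket.socketpair()
    peer.close()  # the unexpected disconnection
    try:
        writer.sendall(b"update request body")
    except BrokenPipeError:
        return "Broken pipe"
    finally:
        writer.close()
    return "no error"
```

This is why the stack traces above bottom out in java.net.SocketException: Broken pipe inside Tomcat's output buffer: the receiving side went away mid-response, and the write itself is only the messenger, not the root cause.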


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Shalin Shekhar Mangar <sh...@gmail.com>.
Thanks Michael! I just committed your fix. It will be released with 4.7.1

On Fri, Mar 21, 2014 at 8:30 PM, Michael Sokolov
<ms...@safaribooksonline.com> wrote:
> I just managed to track this down -- as you said the disconnect was a red
> herring.
>
> Ultimately the problem was caused by a custom analysis component we wrote
> that was raising an IOException -- it was missing some configuration files
> it relies on.
>
> What might be interesting for solr devs to have a look at is that the exception
> was completely swallowed by JavabinCodec, making it very difficult to track
> down the problem.  Furthermore -- if the /add request was routed directly to
> the shard where the document was destined to end up, then the IOException
> raised by the analysis component (a char filter) showed up in the Solr HTTP
> response (probably because my client used XML format in one test -- javabin
> is used internally in SolrCloud).  But if the request was routed to a
> different shard, then the only exception that showed up anywhere (in the
> logs, in the HTTP response) was kind of irrelevant.
>
> I think this could be fixed pretty easily; see SOLR-5985 for my suggestion.
>
> -Mike
>
>
>
> On 03/21/2014 10:20 AM, Greg Walters wrote:
>>
>> Broken pipe errors are generally caused by unexpected disconnections and
>> are sometimes hard to track down. Given the stack traces you've provided
>> it's hard to point to any one thing and I suspect the relevant information
>> was snipped out in the "long dump of document fields". You might grab the
>> entire error from the client you're uploading documents with, the server
>> you're connected to and any other nodes that have an error at the same time
>> and put it on pastebin or the like.
>>
>> Thanks,
>> Greg
>>
>> On Mar 20, 2014, at 3:36 PM, Michael Sokolov
>> <ms...@safaribooksonline.com> wrote:
>>
>>> I'm getting a similar exception when writing documents (on the client
>>> side).  I can write one document fine, but the second (which is being routed
>>> to a different shard) generates the error.  It happens every time -
>>> definitely not a resource issue or timing problem since this database is
>>> completely empty -- I'm just getting started and running some tests, so
>>> there must be some kind of setup problem.  But it's difficult to diagnose
>>> (for me, anyway)!  I'd appreciate any insight, hints, guesses, etc. since
>>> I'm stuck. Thanks!
>>>
>>> One node (the leader?) is reporting "Internal Server Error" in its log,
>>> and another node (presumably the shard where the document is being directed)
>>> bombs out like this:
>>>



-- 
Regards,
Shalin Shekhar Mangar.

Re: Solr4.7 No live SolrServers available to handle this request

Posted by Michael Sokolov <ms...@safaribooksonline.com>.
I just managed to track this down -- as you said the disconnect was a 
red herring.

Ultimately the problem was caused by a custom analysis component we 
wrote that was raising an IOException -- it was missing some 
configuration files it relies on.

What might be interesting for Solr devs to have a look at is that the
exception was completely swallowed by JavaBinCodec, making it very
difficult to track down the problem.  Furthermore -- if the /add request 
was routed directly to the shard where the document was destined to end 
up, then the IOException raised by the analysis component (a char 
filter) showed up in the Solr HTTP response (probably because my client 
used XML format in one test -- javabin is used internally in 
SolrCloud).  But if the request was routed to a different shard, then 
the only exception that showed up anywhere (in the logs, in the HTTP 
response) was kind of irrelevant.

I think this could be fixed pretty easily; see SOLR-5985 for my suggestion.

-Mike
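
The class of bug described above, a broad catch that discards the root cause so only a generic error surfaces, is easy to reproduce outside Solr. A minimal illustrative sketch in Python follows (this is not Solr's actual code; the names are invented for the example):

```python
class UpdateError(Exception):
    """Generic error, analogous to the unhelpful exception Solr surfaced."""

def analyze(doc):
    # Stand-in for a custom analysis component with a missing config file.
    raise IOError("missing stopwords.txt")

def load_swallowing(doc):
    # Anti-pattern: the root cause is discarded, analogous to the
    # pre-SOLR-5985 behavior described above.
    try:
        return analyze(doc)
    except Exception:
        raise UpdateError("ERROR adding document")

def load_chaining(doc):
    # Fix: chain the original exception so the root cause stays visible.
    try:
        return analyze(doc)
    except Exception as e:
        raise UpdateError("ERROR adding document") from e

try:
    load_swallowing({})
except UpdateError as e:
    print(e.__cause__)        # None: the IOError is gone

try:
    load_chaining({})
except UpdateError as e:
    print(repr(e.__cause__))  # the original IOError is preserved
```

In Python 3 the `from e` form preserves the root cause on `__cause__`; swallowing the exception entirely, as happened here, loses it for good and leaves only an irrelevant secondary error in the logs.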


On 03/21/2014 10:20 AM, Greg Walters wrote:
> Broken pipe errors are generally caused by unexpected disconnections and are sometimes hard to track down. Given the stack traces you've provided, it's hard to point to any one thing, and I suspect the relevant information was snipped out in the "long dump of document fields". You might grab the entire error from the client you're uploading documents with, the server you're connected to, and any other nodes that have an error at the same time, and put it on pastebin or the like.
>
> Thanks,
> Greg


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Greg Walters <gr...@answers.com>.
Broken pipe errors are generally caused by unexpected disconnections and are sometimes hard to track down. Given the stack traces you've provided, it's hard to point to any one thing, and I suspect the relevant information was snipped out in the "long dump of document fields". You might grab the entire error from the client you're uploading documents with, the server you're connected to, and any other nodes that have an error at the same time, and put it on pastebin or the like.

Thanks,
Greg

On Mar 20, 2014, at 3:36 PM, Michael Sokolov <ms...@safaribooksonline.com> wrote:

> I'm getting a similar exception when writing documents (on the client side).  I can write one document fine, but the second (which is being routed to a different shard) generates the error.  It happens every time - definitely not a resource issue or timing problem since this database is completely empty -- I'm just getting started and running some tests, so there must be some kind of setup problem.  But it's difficult to diagnose (for me, anyway)!  I'd appreciate any insight, hints, guesses, etc. since I'm stuck. Thanks!
> 


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Michael Sokolov <ms...@safaribooksonline.com>.
I'm getting a similar exception when writing documents (on the client 
side).  I can write one document fine, but the second (which is being 
routed to a different shard) generates the error.  It happens every time 
- definitely not a resource issue or timing problem since this database 
is completely empty -- I'm just getting started and running some tests, 
so there must be some kind of setup problem.  But it's difficult to 
diagnose (for me, anyway)!  I'd appreciate any insight, hints, guesses, 
etc. since I'm stuck. Thanks!

One node (the leader?) is reporting "Internal Server Error" in its log, 
and another node (presumably the shard where the document is being 
directed) bombs out like this:

ERROR - 2014-03-20 15:56:53.022; org.apache.solr.common.SolrException; 
null:org.apache.solr.common.SolrException: ERROR adding document 
SolrInputDocument(

... long dump of document fields

)
     at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:99)
     at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:166)
     at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:136)
     at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:225)
     at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:121)
     at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:190)
     at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
     at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:173)
     at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:106)
     at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:58)
     at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
     at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
     at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
     at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:721)
...
Caused by: java.net.SocketException: Broken pipe
         at java.net.SocketOutputStream.socketWrite0(Native Method)
         at 
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
         at 
org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
         at 
org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
         at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:366)
         at 
org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:240)
         at 
org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:119)
         at 
org.apache.coyote.http11.AbstractOutputBuffer.doWrite(AbstractOutputBuffer.java:192)
         at org.apache.coyote.Response.doWrite(Response.java:520)
         at 
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:408)
         ... 37 more

This is with Solr 4.6.1, Tomcat 7.  Here's my clusterstate.json. Updates 
are being sent to the test1x3 collection


{
   "test3x1":{
     "shards":{
       "shard1":{
         "range":"80000000-d554ffff",
         "state":"active",
         "replicas":{"core_node1":{
             "state":"active",
             "base_url":"http://10.4.24.37:8080/solr",
             "core":"test3x1_shard1_replica1",
             "node_name":"10.4.24.37:8080_solr",
             "leader":"true"}}},
       "shard2":{
         "range":"d5550000-2aa9ffff",
         "state":"active",
         "replicas":{"core_node3":{
             "state":"active",
             "base_url":"http://10.4.24.39:8080/solr",
             "core":"test3x1_shard2_replica1",
             "node_name":"10.4.24.39:8080_solr",
             "leader":"true"}}},
       "shard3":{
         "range":"2aaa0000-7fffffff",
         "state":"active",
         "replicas":{"core_node2":{
             "state":"active",
             "base_url":"http://10.4.24.38:8080/solr",
             "core":"test3x1_shard3_replica1",
             "node_name":"10.4.24.38:8080_solr",
             "leader":"true"}}}},
     "maxShardsPerNode":"1",
     "router":{"name":"compositeId"},
     "replicationFactor":"1"},
   "test1x3":{
     "shards":{"shard1":{
         "range":"80000000-7fffffff",
         "state":"active",
         "replicas":{
           "core_node1":{
             "state":"active",
             "base_url":"http://10.4.24.39:8080/solr",
             "core":"test1x3_shard1_replica2",
             "node_name":"10.4.24.39:8080_solr",
             "leader":"true"},
           "core_node2":{
             "state":"active",
             "base_url":"http://10.4.24.38:8080/solr",
             "core":"test1x3_shard1_replica1",
             "node_name":"10.4.24.38:8080_solr"},
           "core_node3":{
             "state":"active",
             "base_url":"http://10.4.24.37:8080/solr",
             "core":"test1x3_shard1_replica3",
             "node_name":"10.4.24.37:8080_solr"}}}},
     "maxShardsPerNode":"1",
     "router":{"name":"compositeId"},
     "replicationFactor":"3"},
   "test2x2":{
     "shards":{
       "shard1":{
         "range":"80000000-ffffffff",
         "state":"active",
         "replicas":{
           "core_node1":{
             "state":"active",
             "base_url":"http://10.4.24.39:8080/solr",
             "core":"test2x2_shard1_replica1",
             "node_name":"10.4.24.39:8080_solr"},
           "core_node4":{
             "state":"active",
             "base_url":"http://10.4.24.38:8080/solr",
             "core":"test2x2_shard1_replica2",
             "node_name":"10.4.24.38:8080_solr",
             "leader":"true"}}},
       "shard2":{
         "range":"0-7fffffff",
         "state":"active",
         "replicas":{
           "core_node2":{
             "state":"active",
             "base_url":"http://10.4.24.37:8080/solr",
             "core":"test2x2_shard2_replica1",
             "node_name":"10.4.24.37:8080_solr",
             "leader":"true"},
           "core_node3":{
             "state":"active",
             "base_url":"http://10.4.24.39:8080/solr",
             "core":"test2x2_shard2_replica2",
             "node_name":"10.4.24.39:8080_solr"}}}},
     "maxShardsPerNode":"2",
     "router":{"name":"compositeId"},
     "replicationFactor":"2"}}
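
When chasing "No live SolrServers" errors, it can help to check the cluster state mechanically rather than by eye: every shard should have at least one active replica and an active leader. A minimal sketch in plain Python, run over a trimmed copy of the test1x3 entry above (the replica set is shortened and one replica is marked "recovering" purely for illustration):

```python
import json

# Trimmed copy of the test1x3 collection from the clusterstate above.
clusterstate = json.loads("""
{"test1x3": {"shards": {"shard1": {
    "range": "80000000-7fffffff",
    "state": "active",
    "replicas": {
        "core_node1": {"state": "active",
                       "base_url": "http://10.4.24.39:8080/solr",
                       "leader": "true"},
        "core_node2": {"state": "recovering",
                       "base_url": "http://10.4.24.38:8080/solr"}}}}}}
""")

def shard_health(state):
    """Yield (collection, shard, active_replica_count, has_active_leader)."""
    for coll, cdata in state.items():
        for shard, sdata in cdata["shards"].items():
            replicas = list(sdata["replicas"].values())
            active = [r for r in replicas if r["state"] == "active"]
            leader_ok = any(r.get("leader") == "true" and r["state"] == "active"
                            for r in replicas)
            yield coll, shard, len(active), leader_ok

for coll, shard, n_active, ok in shard_health(clusterstate):
    print(f"{coll}/{shard}: {n_active} active replica(s), leader ok: {ok}")
```

A shard reporting zero active replicas, or no active leader, is the kind of condition that makes the client-side load balancer raise "No live SolrServers available to handle this request".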



On 03/20/2014 09:44 AM, Greg Walters wrote:
> Sathya,
>
> I assume you're using Solr Cloud. Please provide your clusterstate.json while you're seeing this issue and check your logs for any exceptions. With no information from you it's hard to troubleshoot any issues!
>
> Thanks,
> Greg


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Sathya <sa...@gmail.com>.
Hi Greg,

Where can I find the clusterstate.json? I have a ZooKeeper ensemble. Can you
please tell me where I can find it?

Thanks.



--
View this message in context: http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679p4126452.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: Solr4.7 No live SolrServers available to handle this request

Posted by Greg Walters <gr...@answers.com>.
Sathya,

I assume you're using Solr Cloud. Please provide your clusterstate.json while you're seeing this issue and check your logs for any exceptions. With no information from you it's hard to troubleshoot any issues!

Thanks,
Greg

On Mar 20, 2014, at 12:44 AM, Sathya <sa...@gmail.com> wrote:

> Hi Friends,
> 
> I am new to Solr. I have 5 Solr nodes on 5 different machines. When I index
> data, the exception "*No live SolrServers available to handle this request*"
> sometimes occurs on 1 or 2 of the machines.
> 
> I don't know why this happens or how to solve it. Kindly help me to solve
> this issue.
> 
> 
> 
> --
> View this message in context: http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679.html
> Sent from the Solr - User mailing list archive at Nabble.com.
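[A note on reading clusterstate.json once you have it: "state":"active" on a replica is not enough by itself — a replica only counts as usable if its node_name also appears under the ephemeral /live_nodes znodes in ZooKeeper. A hedged sketch of that cross-check; the function and variable names are mine, but the replica dict shape matches the clusterstate.json entries pasted in this thread:]

```python
# Sketch: a replica is usable only if it is marked active AND its node is
# present in /live_nodes. Names here are illustrative.
def is_replica_live(replica, live_nodes):
    """replica: one entry from a shard's 'replicas' map; live_nodes: set of node names."""
    return replica.get("state") == "active" and replica.get("node_name") in live_nodes

# Example, using one replica entry from this thread:
replica = {
    "state": "active",
    "base_url": "http://10.10.1.16:4040/solr",
    "core": "set_recent_shard1_replica1",
    "node_name": "10.10.1.16:4040_solr",
}
print(is_replica_live(replica, {"10.10.1.16:4040_solr"}))  # True
print(is_replica_live(replica, set()))                     # False: node not in /live_nodes
```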


Re: Solr4.7 No live SolrServers available to handle this request

Posted by Sathya <sa...@gmail.com>.
Hi Greg,

This is my clusterstate.json.

WatchedEvent state:SyncConnected type:None path:null
[zk: 10.10.1.72:2185(CONNECTED) 0] get /clusterstate.json
{"set_recent":{
    "shards":{
      "shard1":{
        "range":"80000000-d554ffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard1_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard1_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard1_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard1_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard1_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard1_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard1_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard1_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard1_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard1_replica5",
            "node_name":"10.10.1.14:5050_solr"}}},
      "shard2":{
        "range":"d5550000-2aa9ffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard2_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard2_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard2_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard2_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard2_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard2_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard2_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard2_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard2_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard2_replica5",
            "node_name":"10.10.1.14:5050_solr"}}},
      "shard3":{
        "range":"2aaa0000-7fffffff",
        "state":"active",
        "replicas":{
          "10.10.1.16:4040_solr_set_recent_shard3_replica1":{
            "state":"active",
            "base_url":"http://10.10.1.16:4040/solr",
            "core":"set_recent_shard3_replica1",
            "node_name":"10.10.1.16:4040_solr"},
          "10.10.1.72:2020_solr_set_recent_shard3_replica2":{
            "state":"active",
            "base_url":"http://10.10.1.72:2020/solr",
            "core":"set_recent_shard3_replica2",
            "node_name":"10.10.1.72:2020_solr"},
          "10.10.1.19:3030_solr_set_recent_shard3_replica3":{
            "state":"active",
            "base_url":"http://10.10.1.19:3030/solr",
            "core":"set_recent_shard3_replica3",
            "node_name":"10.10.1.19:3030_solr",
            "leader":"true"},
          "10.10.1.21:1010_solr_set_recent_shard3_replica4":{
            "state":"active",
            "base_url":"http://10.10.1.21:1010/solr",
            "core":"set_recent_shard3_replica4",
            "node_name":"10.10.1.21:1010_solr"},
          "10.10.1.14:5050_solr_set_recent_shard3_replica5":{
            "state":"active",
            "base_url":"http://10.10.1.14:5050/solr",
            "core":"set_recent_shard3_replica5",
            "node_name":"10.10.1.14:5050_solr"}}}},
    "maxShardsPerNode":"3",
    "router":{"name":"compositeId"},
    "replicationFactor":"5"}}
cZxid = 0x100000014
ctime = Tue Mar 18 13:05:38 IST 2014
mZxid = 0x50000027c
mtime = Mon Mar 24 14:22:24 IST 2014
pZxid = 0x100000014
cversion = 0
dataVersion = 387
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4182
numChildren = 0


Kindly let me know if you need any further inputs.



--
View this message in context: http://lucene.472066.n3.nabble.com/Solr4-7-No-live-SolrServers-available-to-handle-this-request-tp4125679p4126479.html
Sent from the Solr - User mailing list archive at Nabble.com.
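[Reading the state above: every replica reports "active" and each shard has a leader, so this snapshot looks healthy on its own; "No live SolrServers available" is raised when, at request time, a shard has no replica that is both active and hosted on a live node. A hedged sketch that scans a clusterstate dict for such shards — the function names are mine, and the dict shape matches the JSON pasted above:]

```python
# Sketch: find shards that have no replica which is both "active" and on a
# node listed in /live_nodes. Such a shard is exactly what triggers
# "No live SolrServers available" for requests routed to it.
def shards_without_live_replica(clusterstate, live_nodes):
    """Return {collection: [shard, ...]} for shards with no serving replica."""
    dead = {}
    for coll_name, coll in clusterstate.items():
        for shard_name, shard in coll.get("shards", {}).items():
            replicas = shard.get("replicas", {}).values()
            if not any(r.get("state") == "active" and r.get("node_name") in live_nodes
                       for r in replicas):
                dead.setdefault(coll_name, []).append(shard_name)
    return dead

# Tiny example in the shape of the clusterstate above:
state = {"set_recent": {"shards": {
    "shard1": {"replicas": {
        "r1": {"state": "active", "node_name": "10.10.1.19:3030_solr"}}},
    "shard2": {"replicas": {
        "r1": {"state": "down", "node_name": "10.10.1.19:3030_solr"}}},
}}}
print(shards_without_live_replica(state, {"10.10.1.19:3030_solr"}))
# -> {'set_recent': ['shard2']}
```

[Run against the live clusterstate and the current /live_nodes listing from ZooKeeper while the error is occurring, a check like this narrows down which shard and which nodes to look at in the logs.]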