Posted to solr-user@lucene.apache.org by diyun2008 <di...@gmail.com> on 2013/09/07 16:10:53 UTC

Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

I have installed SolrCloud with Solr 4.4 and ZooKeeper 3.4.5, and I'm testing
a requirement of 10k collections on one Solr server.
When I create collections against the Solr server
(admin/collections?action=CREATE&name=europetest${loopcnt}&numShards=2&replicationFactor=2&maxShardsPerNode=2)
with JMeter, I find that every time the number of collections reaches 600+,
Solr and ZooKeeper stop working correctly.
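
For reference, here is roughly what the JMeter test does, written as a small
Python sketch (the host, port, and timeout are placeholders for my setup, not
part of the actual test plan):

    import requests  # assumes the 'requests' package is installed

    SOLR = "http://localhost:8983/solr"  # placeholder host/port

    # Create collections one after another, like the JMeter loop does.
    for i in range(10000):
        resp = requests.get(
            f"{SOLR}/admin/collections",
            params={
                "action": "CREATE",
                "name": f"europetest{i}",
                "numShards": 2,
                "replicationFactor": 2,
                "maxShardsPerNode": 2,
            },
            timeout=120,
        )
        # Around the 600th collection the CREATE calls start timing out.
        print(i, resp.status_code)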

I checked the logs. Here is the Solr log:
07:56:01,149 ERROR SolrException: null:org.apache.solr.common.SolrException: createcollection the collection time out:60s
    at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:175)
    at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
    at org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:290)
    at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:112)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:781)

07:57:23,523 ERROR SolrException: org.apache.solr.common.SolrException: createcollection the collection error [Watcher fired on path: /overseer/collection-queue-work/qnr-0000001590 state: SyncConnected type NodeDeleted]
    at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:178)
    at org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
    at org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:290)
    at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:112)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:781)


And here are the ZooKeeper logs:

2013-09-07 15:56:17,498 [myid:1] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@354] - Exception causing close of session 0x140f624fa990000 due to java.io.IOException: Len error 1048971
2013-09-07 15:56:17,507 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /9.110.83.131:21210 which had sessionid 0x140f624fa990000
2013-09-07 15:56:29,966 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /9.110.83.131:23793
2013-09-07 15:56:29,967 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@832] - Client attempting to renew session 0x140f624fa990000 at /9.110.83.131:23793
2013-09-07 15:56:29,967 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:Learner@107] - Revalidating client: 0x140f624fa990000
2013-09-07 15:56:29,968 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:ZooKeeperServer@595] - Established session 0x140f624fa990000 with negotiated timeout 120000 for client /9.110.83.131:23793
2013-09-07 15:56:51,124 [myid:1] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@354] - Exception causing close of session 0x140f624fa990000 due to java.io.IOException: Len error 1048971
2013-09-07 15:56:51,125 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /9.110.83.131:23793 which had sessionid 0x140f624fa990000

I'm very confused by this problem. I have tried adding more memory to
ZooKeeper and Solr, but it doesn't help. I would really appreciate it if
someone could help me.






Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by diyun2008 <di...@gmail.com>.
Thank you, Lance, for sharing your experience. That will be useful to me.




Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by Lance Norskog <go...@gmail.com>.
Yes, Solr/Lucene works fine with other indexes this large. There are 
many indexes with hundreds of gigabytes and hundreds of millions of 
documents. My experience years ago was that at this scale, searching 
worked great, sorting & facets less so, and the real problem was IT: a 
200G blob of data is a pain in the neck to administer.

As always, every index is different, but you should not have problems 
doing the merge that you describe.

Lance

On 09/08/2013 09:01 PM, diyun2008 wrote:
> Thank you Erick. That's very useful to me. I have already started merging
> lots of collections down to 15 collections, but there's another question. If
> I merge 1000 collections into one collection, the new collection will have
> about 20G of data and about 30M records. On one Solr server I would create
> 15 such big collections. So I don't know whether Solr can support that much
> data in one collection (20G of data, 30M records) or on one Solr server
> (15*20G of data, 15*30M records), or whether I need to buy new servers for
> Solr and do sharding to support that.


Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by diyun2008 <di...@gmail.com>.
Thank you Yago. That seems a bit strange. Do you know of an official document
that details this? I really need more evidence to make a decision. I mean, I
need to compare the two methods and find out which has more advantages in
terms of performance and cost. I will also change my parameters and do more
testing. I will have at least 15K collections. If you have more experience, I
would really appreciate any further advice from you.




Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by diyun2008 <di...@gmail.com>.
Thank you very much for your advice.




Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by Yago Riveiro <ya...@gmail.com>.
If you have 15K collections, I guess you are doing custom sharding and not using collection sharding.

My first approach was the same as what you are doing. In fact, I had the same lot-of-cores issue. I use -Djute.maxbuffer without any issue.

In recent versions, Solr implements a way to do sharding using a prefix in your document ID, so I replaced my lot of cores with one collection with several shards. Now, with the SPLITSHARD feature, you can split any shard that reaches a considerable size.

Downside: I don't know whether the SPLITSHARD feature honors the compositeId routing defined at collection creation.

Recommendation: if you don't want the lot-of-cores issue to bite you through some weird issue or anomalous behavior, reduce the number of cores as much as possible and split shards as necessary when performance starts to hurt in your environment.
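
To make that concrete, here is a rough Python sketch of the compositeId
approach (the collection name, field name, and host are only placeholders,
and I'm assuming the collection was created with the default compositeId
router):

    import requests  # assumes the 'requests' package is installed

    SOLR = "http://localhost:8983/solr"   # placeholder host/port
    COLLECTION = "logs"                   # placeholder collection name

    # Documents that share the prefix before '!' are routed to the same shard.
    docs = [{"id": f"tenant42!doc{i}", "text_t": "example"} for i in range(3)]
    requests.post(f"{SOLR}/{COLLECTION}/update?commit=true", json=docs).raise_for_status()

    # When a shard grows too large, split it in two with the Collections API.
    requests.get(
        f"{SOLR}/admin/collections",
        params={"action": "SPLITSHARD", "collection": COLLECTION, "shard": "shard1"},
    ).raise_for_status()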

-- 
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Monday, September 9, 2013 at 3:09 PM, diyun2008 wrote:

> I just found the option "-Djute.maxbuffer" in the ZooKeeper admin document,
> but it's listed under "Unsafe Options", and I don't really know what that
> means. Maybe it will bring some stability problems? Does anyone have real
> practical experience with this parameter? I will have at least 15K
> collections, or I will have to merge them down to a smaller number.



Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by diyun2008 <di...@gmail.com>.
I just found the option "-Djute.maxbuffer" in the ZooKeeper admin document,
but it's listed under "Unsafe Options", and I don't really know what that
means. Maybe it will bring some stability problems? Does anyone have real
practical experience with this parameter? I will have at least 15K
collections, or I will have to merge them down to a smaller number.





Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by Yago Riveiro <ya...@gmail.com>.
If you want to have more collections, you need to configure the -Djute.maxbuffer variable on both ZooKeeper and Solr to override the default limit.

In ZooKeeper you can configure it in the zookeeper-env.sh file. On the Solr side, pass the variable like any other JVM system property.

Note: in both cases the configured value needs to be the same, or bad things can happen.
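
As a sanity check before and after raising the limit, you can look at how big
the shared cluster-state znode actually is. A rough Python sketch using the
kazoo client (the ZooKeeper address and the 4 MB figure are just example
values, not a recommendation):

    from kazoo.client import KazooClient  # assumes the 'kazoo' package is installed

    PROPOSED_JUTE_MAXBUFFER = 4 * 1024 * 1024  # example value; set the same on ZK and Solr

    zk = KazooClient(hosts="localhost:2181")   # placeholder ZooKeeper address
    zk.start()
    try:
        # In Solr 4.x the state of every collection lives in one znode,
        # so clusterstate.json is usually what outgrows the ~1 MB default.
        data, _stat = zk.get("/clusterstate.json")
        print(f"clusterstate.json is {len(data)} bytes "
              f"(proposed jute.maxbuffer: {PROPOSED_JUTE_MAXBUFFER})")
    finally:
        zk.stop()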

-- 
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Monday, September 9, 2013 at 5:01 AM, diyun2008 wrote:

> Thank you Erick. That's very useful to me. I have already started merging
> lots of collections down to 15 collections, but there's another question. If
> I merge 1000 collections into one collection, the new collection will have
> about 20G of data and about 30M records. On one Solr server I would create
> 15 such big collections. So I don't know whether Solr can support that much
> data in one collection (20G of data, 30M records) or on one Solr server
> (15*20G of data, 15*30M records), or whether I need to buy new servers for
> Solr and do sharding to support that.



Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by diyun2008 <di...@gmail.com>.
Thank you Erick. That's very useful to me. I have already started merging
lots of collections down to 15 collections, but there's another question. If
I merge 1000 collections into one collection, the new collection will have
about 20G of data and about 30M records. On one Solr server I would create
15 such big collections. So I don't know whether Solr can support that much
data in one collection (20G of data, 30M records) or on one Solr server
(15*20G of data, 15*30M records), or whether I need to buy new servers for
Solr and do sharding to support that.




Re: Solr4.4 or zookeeper 3.4.5 do not support too many collections? more than 600?

Posted by Erick Erickson <er...@gmail.com>.
Right, I _think_ that the znodes ZK uses are limited to 1M, and it looks like
the 600th collection pushes the ZK cluster state past 1M. 1024*1024 is
1,048,576, which is waaaaay suspiciously close to the 1,048,971 in your
"Len error" message.

At 600 collections you're pushing past this limit, it looks like. I'm not
quite sure where it can be changed. Here's a good discussion of this:
http://lucene.472066.n3.nabble.com/gt-1MB-file-to-Zookeeper-td3958614.html
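
Back-of-the-envelope, as a Python sketch (the per-collection figure below is
only inferred from the numbers in this thread, not measured):

    ZK_DEFAULT_LIMIT = 1024 * 1024  # ~1 MB default jute.maxbuffer
    FAILED_WRITE = 1048971          # the "Len error 1048971" from the ZooKeeper log

    print(FAILED_WRITE - ZK_DEFAULT_LIMIT)  # 395 bytes over the limit

    # If ~600 collections (2 shards x 2 replicas each) fill ~1 MB of cluster
    # state, each collection costs very roughly this much JSON:
    per_collection = ZK_DEFAULT_LIMIT / 600
    print(round(per_collection))            # ~1748 bytes

    # At that rate, 10k or 15k collections would need a far larger buffer:
    for n in (10000, 15000):
        print(n, "collections ->", round(n * per_collection / (1024 * 1024), 1), "MB")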

Best,
Erick


On Sat, Sep 7, 2013 at 10:10 AM, diyun2008 <di...@gmail.com> wrote:

> I have installed SolrCloud with Solr 4.4 and ZooKeeper 3.4.5, and I'm
> testing a requirement of 10k collections on one Solr server.
> When I create collections against the Solr server
> (admin/collections?action=CREATE&name=europetest${loopcnt}&numShards=2&replicationFactor=2&maxShardsPerNode=2)
> with JMeter, I find that every time the number of collections reaches 600+,
> Solr and ZooKeeper stop working correctly.
>
> I'm very confused by this problem. I have tried adding more memory to
> ZooKeeper and Solr, but it doesn't help. I would really appreciate it if
> someone could help me.