Posted to solr-user@lucene.apache.org by Ranveer <ra...@gmail.com> on 2012/01/24 07:22:00 UTC
Re: CLOSE_WAIT after connecting to multiple shards from a primary shard
Hi Mukund,
Since I have been getting this issue for a long time, I have done some
trial and error. In my case I am connecting to the local Tomcat server
using SolrJ. SolrJ's HttpClient defaults to 2 connections per host and
20 in total. As I have heavy load and a lot of dependency on Solr, that
seems very low. To increase the default per-host and total connection
limits I did:
MultiThreadedHttpConnectionManager connectionManager = new MultiThreadedHttpConnectionManager();
DefaultHttpMethodRetryHandler retryhandler = new DefaultHttpMethodRetryHandler(0, false);
// raise the pool limits well above the defaults
connectionManager.getParams().setMaxTotalConnections(500);
connectionManager.getParams().setDefaultMaxConnectionsPerHost(500);
connectionManager.closeIdleConnections(0L);
HttpClient httpClient = new HttpClient(connectionManager);
httpClient.getParams().setParameter("http.method.retry-handler", retryhandler);
server = new CommonsHttpSolrServer(getSolrURL(), httpClient);
I have 5 cores set up; I run the above code for each core in a static
block and reuse the same instance across all classes. But it does not
seem to have any effect: the server still dies randomly every 1 to 2
days. I am using Tomcat instead of Jetty, and I have already increased
maxThreads in Tomcat to 500. Is there any limitation of Tomcat under
this much stress? On the other hand, when I look using telnet, not many
connections are open. I suspect HttpClient.
Any help?
regards
On Tuesday 24 January 2012 07:27 AM, Mukunda Madhava wrote:
> Hi Ranveer,
> I don't have any solution to this problem. I haven't got any response
> from the forums either.
>
> I implemented a custom design for distributed searching, as it gave me
> better control over the open connections.
>
> On Sun, Jan 22, 2012 at 10:05 PM, Ranveer Kumar
> <ranveer.solr@gmail.com> wrote:
>
>
> Hi Mukunda,
>
> Did you get a solution? Actually, I am also getting the same problem.
> Please help me overcome this problem.
>
> regards
> Ranveer
>
> On Thu, Jun 2, 2011 at 12:37 AM, Mukunda Madhava
> <mukunda.ms@gmail.com> wrote:
>
> Hi Otis,
> Sending to solr-user mailing list.
>
> We see these CLOSE_WAIT connections even when I do a simple HTTP
> request via curl, that is, even when I do a simple curl using a
> primary and secondary shard query, e.g.
>
> curl "
> http://primaryshardhost:8180/solr/core0/select?q=*%3A*&shards=secondaryshardhost1:8090/solr/appgroup1_11053000_11053100
> "
>
> While fetching data it is in ESTABLISHED state
>
> -sh-3.2$ netstat | grep ESTABLISHED | grep 8090
> tcp   0   0 primaryshardhost:36805  secondaryshardhost1:8090  ESTABLISHED
>
> After the request has come back, it is in CLOSE_WAIT state
>
> -sh-3.2$ netstat | grep CLOSE_WAIT | grep 8090
> tcp   1   0 primaryshardhost:36805  secondaryshardhost1:8090  CLOSE_WAIT
>
> Why does Solr keep the connections to the shards in CLOSE_WAIT?
>
> Is this a feature of Solr? If we modify an OS property (I don't know
> how) to clean up the CLOSE_WAITs, will it cause an issue with
> subsequent searches?
>
> Can someone help me please?
>
> thanks,
> Mukunda
>
> On Mon, May 30, 2011 at 5:59 PM, Otis Gospodnetic
> <otis_gospodnetic@yahoo.com> wrote:
>
> > Hi,
> >
> > A few things:
> > 1) why not send this to the Solr list?
> > 2) you talk about searching, but the code sample is about
> > optimizing the index.
> >
> > 3) I don't have the SolrJ API in front of me, but isn't there a
> > CommonsHttpSolrServer ctor that takes in a URL instead of an
> > HttpClient instance? Try that one.
> >
> > Otis
> > -----
> > Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> > Lucene ecosystem search :: http://search-lucene.com/
> >
> >
> >
> > ----- Original Message ----
> > > From: Mukunda Madhava <mukunda.ms@gmail.com>
> > > To: general@lucene.apache.org
> > > Sent: Mon, May 30, 2011 1:54:07 PM
> > > Subject: CLOSE_WAIT after connecting to multiple shards from a
> > > primary shard
> > >
> > > Hi,
> > > We have a "primary" Solr shard and multiple "secondary" shards. We
> > > query data from the secondary shards by specifying the "shards"
> > > param in the query params.
> > >
> > > But we found that after receiving the data, there are a large number
> > > of CLOSE_WAIT connections on the secondary shards from the primary
> > > shard.
> > >
> > > For example:
> > >
> > > tcp   1   0 primaryshardhost:56109  secondaryshardhost1:8090  CLOSE_WAIT
> > > tcp   1   0 primaryshardhost:51049  secondaryshardhost1:8090  CLOSE_WAIT
> > > tcp   1   0 primaryshardhost:49537  secondaryshardhost1:8089  CLOSE_WAIT
> > > tcp   1   0 primaryshardhost:44109  secondaryshardhost2:8090  CLOSE_WAIT
> > > tcp   1   0 primaryshardhost:32041  secondaryshardhost2:8090  CLOSE_WAIT
> > > tcp   1   0 primaryshardhost:48533  secondaryshardhost2:8089  CLOSE_WAIT
> > >
> > >
> > > We open the Solr connections as below..
> > >
> > > SimpleHttpConnectionManager cm = new SimpleHttpConnectionManager(true);
> > > cm.closeIdleConnections(0L);
> > > HttpClient httpClient = new HttpClient(cm);
> > > solrServer = new CommonsHttpSolrServer(url, httpClient);
> > > solrServer.optimize();
> > >
> > > But still we see these issues. Any ideas?
> > > --
> > > Thanks,
> > > Mukunda
> > >
> >
>
>
>
> --
> Thanks,
> Mukunda
>
>
>
>
>
> --
> Thanks,
> Mukunda
>
Re: CLOSE_WAIT after connecting to multiple shards from a primary shard
Posted by Mikhail Khludnev <mk...@griddynamics.com>.
Hello,
AFAIK, by setting connectionManager.closeIdleConnections(0L); you are
preventing your HTTP connections from being cached, i.e. disabling
keep-alive. If you increase that idle timeout enough, you won't see many
CLOSE_WAIT connections.
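For what it's worth, CLOSE_WAIT is plain TCP rather than anything
Solr-specific: a socket sits in CLOSE_WAIT on the side that has received
the peer's FIN but has not yet closed its own end. A minimal JDK-only
sketch (class and host names are illustrative, no Solr or HttpClient
involved) that reproduces the state the netstat output above shows:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {
    public static void main(String[] args) throws Exception {
        // Throwaway server that closes each accepted connection immediately,
        // like a shard dropping an idle HTTP connection.
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            try {
                Socket s = server.accept();
                s.close(); // server closes first and sends its FIN
            } catch (IOException ignored) {
            }
        });
        acceptor.start();

        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        InputStream in = client.getInputStream();
        int eof = in.read(); // returns -1 once the server's FIN arrives
        System.out.println("read() returned " + eof);
        // Until client.close() is called, `netstat | grep CLOSE_WAIT` on
        // this host shows the connection in CLOSE_WAIT -- the same state
        // the primary shard is stuck in when pooled connections are never
        // reused or closed.
        client.close(); // our FIN finally releases the socket
        acceptor.join();
        server.close();
    }
}
```

Pausing (e.g. with a sleep) between the read and client.close() and
running netstat in another terminal makes the CLOSE_WAIT entry visible.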
Some explanation, and a solution for the JDK's HTTP client
(URLConnection) rather than for your Commons one, is available here:
http://blog.griddynamics.com/2011/04/fast-hessian-methods-leads-to.html
Let me know if it works for you. I have suggested it many times here but
never got any feedback. Please update us; it's worth knowing for the
community.
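Concretely, this suggestion applied to the Commons HttpClient 3.x setup
from Ranveer's code might look like the sketch below: keep the pool
limits, drop the closeIdleConnections(0L) call, and let a background
IdleConnectionTimeoutThread reap connections after a nonzero idle
timeout. The 30-second value and the class name are illustrative
assumptions, not recommendations:

```java
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.util.IdleConnectionTimeoutThread;

public class KeepAliveHttpClientFactory {
    // Nonzero idle timeout so keep-alive connections are actually reused;
    // 30s is an arbitrary illustration, tune for your load.
    static final long IDLE_TIMEOUT_MS = 30000L;

    public static HttpClient build() {
        MultiThreadedHttpConnectionManager cm = new MultiThreadedHttpConnectionManager();
        cm.getParams().setMaxTotalConnections(500);
        cm.getParams().setDefaultMaxConnectionsPerHost(500);
        // Do NOT call cm.closeIdleConnections(0L) here -- closing idle
        // connections immediately is what disables keep-alive. Instead,
        // reap idle connections periodically in the background.
        IdleConnectionTimeoutThread reaper = new IdleConnectionTimeoutThread();
        reaper.setConnectionTimeout(IDLE_TIMEOUT_MS); // close after 30s idle
        reaper.setTimeoutInterval(10000L);            // check every 10s
        reaper.addConnectionManager(cm);
        reaper.setDaemon(true);
        reaper.start();
        return new HttpClient(cm);
    }
}
```

An HttpClient built this way can be passed to the CommonsHttpSolrServer
constructor exactly as in the code quoted earlier in the thread.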
Regards
--
Sincerely yours
Mikhail Khludnev
Lucid Certified
Apache Lucene/Solr Developer
Grid Dynamics
<http://www.griddynamics.com>
<mk...@griddynamics.com>