Posted to solr-user@lucene.apache.org by Abhi Basu <90...@gmail.com> on 2018/03/29 14:25:56 UTC

Solr 7.2 cannot see all running nodes

What am I missing? I followed the instructions at
http://blog.thedigitalgroup.com/susheelk/2015/08/03/solrcloud-2-nodes-solr-1-node-zk-setup/#comment-4321
on 4 nodes. The only difference is that I have 3 external ZooKeeper servers. This
is how I am starting each Solr node:

./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g

They all start without any errors, but when I try to create a collection
with 2 shards / 2 replicas (2S/2R), I get an error saying only one node is running.

./server/scripts/cloud-scripts/zkcli.sh -zkhost
zk0-esohad,zk1-esohad,zk3-esohad:2181 -cmd upconfig -confname
ems-collection -confdir
/usr/local/bin/solr-7.2.1/server/solr/configsets/ems-collection-72_configs/conf


"Operation create caused
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
and the number of nodes currently live or live and part of your
createNodeSet is 1. This allows a maximum of 1 to be created. Value of
numShards is 2, value of nrtReplicas is 2, value of tlogReplicas is 0 and
value of pullReplicas is 0. This requires 4 shards to be created (higher
than the allowed number)",
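
For reference, the create request is roughly of this form (the host name here is
just an example, not the literal command I ran):

curl "http://solr-node1:8983/solr/admin/collections?action=CREATE&name=ems-collection&numShards=2&replicationFactor=2&collection.configName=ems-collection"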


Any ideas?

Thanks,

Abhi

-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Abhi Basu <90...@gmail.com>.
Just an update: adding host names to solr.xml and using "-z
zk1:2181,zk2:2181,zk3:2181" worked. I can now see 4 live nodes and am able to
create the collection with 2S/2R.
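
In case it helps anyone else, the working start command is roughly of this form
(actual host names substituted):

./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p 8983 \
  -z zk1:2181,zk2:2181,zk3:2181 -m 8g

with the real host name set in solr.xml on each node (passing -h <hostname> or
setting SOLR_HOST in solr.in.sh should have the same effect), so each node
registers under its actual address rather than a loopback address.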

Thanks for your help, greatly appreciate it.

Regards,

Abhi

On Thu, Mar 29, 2018 at 1:45 PM, Abhi Basu <90...@gmail.com> wrote:

> Also, another question, where it says to copy the zoo.cfg from
> /solr72/server/solr folder to /solr72/server/solr/node1/solr, should I
> actually be grabbing the zoo.cfg from one of my external zk nodes?
>
> Thanks,
>
> Abhi
>
> On Thu, Mar 29, 2018 at 1:04 PM, Abhi Basu <90...@gmail.com> wrote:
>
>> Ok, will give it a try along with the host name.
>>
>>
>> On Thu, Mar 29, 2018 at 12:20 PM, Webster Homer <we...@sial.com>
>> wrote:
>>
>>> This Zookeeper ensemble doesn't look right.
>>> >
>>> > ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/
>>> -p
>>> > 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
>>>
>>>
>>> Shouldn't the zookeeper ensemble be specified as:
>>>   zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181
>>>
>>> You should put the zookeeper port on each node in the comma separated
>>> list.
>>> I don't know if this is your problem, but I think your solr nodes will
>>> only
>>> be connecting to 1 zookeeper
>>>
>>> On Thu, Mar 29, 2018 at 10:56 AM, Walter Underwood <
>>> wunder@wunderwood.org>
>>> wrote:
>>>
>>> > I had that problem. Very annoying and we probably should require
>>> special
>>> > flag to use localhost.
>>> >
>>> > We need to start solr like this:
>>> >
>>> > ./solr start -c -h `hostname`
>>> >
>>> > If anybody ever forgets, we get a 127.0.0.1 node that shows down in
>>> > cluster status. No idea how to get rid of that.
>>> >
>>> > wunder
>>> > Walter Underwood
>>> > wunder@wunderwood.org
>>> > http://observer.wunderwood.org/  (my blog)
>>> >
>>> > > On Mar 29, 2018, at 7:46 AM, Shawn Heisey <ap...@elyograg.org>
>>> wrote:
>>> > >
>>> > > On 3/29/2018 8:25 AM, Abhi Basu wrote:
>>> > >> "Operation create caused
>>> > >> exception:":"org.apache.solr.common.SolrException:org.
>>> > apache.solr.common.SolrException:
>>> > >> Cannot create collection ems-collection. Value of maxShardsPerNode
>>> is 1,
>>> > >> and the number of nodes currently live or live and part of your
>>> > >
>>> > > I'm betting that all your nodes are registering themselves with the
>>> same
>>> > name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an
>>> address
>>> > on the loopback interface.
>>> > >
>>> > > Usually this problem (on an OS other than Windows, at least) is
>>> caused
>>> > by an incorrect /etc/hosts file that maps your hostname to a  loopback
>>> > address instead of a real address.
>>> > >
>>> > > You can override the value that SolrCloud uses to register itself
>>> into
>>> > zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh,
>>> I
>>> > think this is the SOLR_HOST variable, which gets translated into
>>> -Dhost=XXX
>>> > on the java commandline.  It can also be configured in solr.xml.
>>> > >
>>> > > Thanks,
>>> > > Shawn
>>> > >
>>> >
>>> >
>>>
>>>
>>
>>
>>
>> --
>> Abhi Basu
>>
>
>
>
> --
> Abhi Basu
>



-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/29/2018 12:45 PM, Abhi Basu wrote:
> Also, another question, where it says to copy the zoo.cfg from
> /solr72/server/solr folder to /solr72/server/solr/node1/solr, should I
> actually be grabbing the zoo.cfg from one of my external zk nodes?

If you're using ZooKeeper processes that are separate from Solr, then the
zoo.cfg in the Solr directory is unimportant.

Doing anything related to zoo.cfg in a Solr directory would imply that
you are running Solr with the embedded ZK, which is not recommended in
most cases. The primary issue with the embedded ZK is that when you
stop Solr, you also stop ZK.
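
To illustrate the difference (commands are a sketch, with example host names):

# embedded ZK: Solr launches its own ZooKeeper on the Solr port + 1000 (9983 here)
./bin/solr start -c -p 8983

# external ensemble: only the -z connection string matters; Solr-side zoo.cfg is ignored
./bin/solr start -c -p 8983 -z zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181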

Thanks,
Shawn


Re: Solr 7.2 cannot see all running nodes

Posted by Abhi Basu <90...@gmail.com>.
Also, another question, where it says to copy the zoo.cfg from
/solr72/server/solr folder to /solr72/server/solr/node1/solr, should I
actually be grabbing the zoo.cfg from one of my external zk nodes?

Thanks,

Abhi

On Thu, Mar 29, 2018 at 1:04 PM, Abhi Basu <90...@gmail.com> wrote:

> Ok, will give it a try along with the host name.
>
>
> On Thu, Mar 29, 2018 at 12:20 PM, Webster Homer <we...@sial.com>
> wrote:
>
>> This Zookeeper ensemble doesn't look right.
>> >
>> > ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/
>> -p
>> > 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
>>
>>
>> Shouldn't the zookeeper ensemble be specified as:
>>   zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181
>>
>> You should put the zookeeper port on each node in the comma separated
>> list.
>> I don't know if this is your problem, but I think your solr nodes will
>> only
>> be connecting to 1 zookeeper
>>
>> On Thu, Mar 29, 2018 at 10:56 AM, Walter Underwood <wunder@wunderwood.org
>> >
>> wrote:
>>
>> > I had that problem. Very annoying and we probably should require special
>> > flag to use localhost.
>> >
>> > We need to start solr like this:
>> >
>> > ./solr start -c -h `hostname`
>> >
>> > If anybody ever forgets, we get a 127.0.0.1 node that shows down in
>> > cluster status. No idea how to get rid of that.
>> >
>> > wunder
>> > Walter Underwood
>> > wunder@wunderwood.org
>> > http://observer.wunderwood.org/  (my blog)
>> >
>> > > On Mar 29, 2018, at 7:46 AM, Shawn Heisey <ap...@elyograg.org>
>> wrote:
>> > >
>> > > On 3/29/2018 8:25 AM, Abhi Basu wrote:
>> > >> "Operation create caused
>> > >> exception:":"org.apache.solr.common.SolrException:org.
>> > apache.solr.common.SolrException:
>> > >> Cannot create collection ems-collection. Value of maxShardsPerNode
>> is 1,
>> > >> and the number of nodes currently live or live and part of your
>> > >
>> > > I'm betting that all your nodes are registering themselves with the
>> same
>> > name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an
>> address
>> > on the loopback interface.
>> > >
>> > > Usually this problem (on an OS other than Windows, at least) is caused
>> > by an incorrect /etc/hosts file that maps your hostname to a  loopback
>> > address instead of a real address.
>> > >
>> > > You can override the value that SolrCloud uses to register itself into
>> > zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh,
>> I
>> > think this is the SOLR_HOST variable, which gets translated into
>> -Dhost=XXX
>> > on the java commandline.  It can also be configured in solr.xml.
>> > >
>> > > Thanks,
>> > > Shawn
>> > >
>> >
>> >
>>
>>
>
>
>
> --
> Abhi Basu
>



-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Abhi Basu <90...@gmail.com>.
Ok, will give it a try along with the host name.


On Thu, Mar 29, 2018 at 12:20 PM, Webster Homer <we...@sial.com>
wrote:

> This Zookeeper ensemble doesn't look right.
> >
> > ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/
> -p
> > 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
>
>
> Shouldn't the zookeeper ensemble be specified as:
>   zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181
>
> You should put the zookeeper port on each node in the comma separated list.
> I don't know if this is your problem, but I think your solr nodes will only
> be connecting to 1 zookeeper
>
> On Thu, Mar 29, 2018 at 10:56 AM, Walter Underwood <wu...@wunderwood.org>
> wrote:
>
> > I had that problem. Very annoying and we probably should require special
> > flag to use localhost.
> >
> > We need to start solr like this:
> >
> > ./solr start -c -h `hostname`
> >
> > If anybody ever forgets, we get a 127.0.0.1 node that shows down in
> > cluster status. No idea how to get rid of that.
> >
> > wunder
> > Walter Underwood
> > wunder@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
> >
> > > On Mar 29, 2018, at 7:46 AM, Shawn Heisey <ap...@elyograg.org> wrote:
> > >
> > > On 3/29/2018 8:25 AM, Abhi Basu wrote:
> > >> "Operation create caused
> > >> exception:":"org.apache.solr.common.SolrException:org.
> > apache.solr.common.SolrException:
> > >> Cannot create collection ems-collection. Value of maxShardsPerNode is
> 1,
> > >> and the number of nodes currently live or live and part of your
> > >
> > > I'm betting that all your nodes are registering themselves with the
> same
> > name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an
> address
> > on the loopback interface.
> > >
> > > Usually this problem (on an OS other than Windows, at least) is caused
> > by an incorrect /etc/hosts file that maps your hostname to a  loopback
> > address instead of a real address.
> > >
> > > You can override the value that SolrCloud uses to register itself into
> > zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh,
> I
> > think this is the SOLR_HOST variable, which gets translated into
> -Dhost=XXX
> > on the java commandline.  It can also be configured in solr.xml.
> > >
> > > Thanks,
> > > Shawn
> > >
> >
> >
>
>



-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Webster Homer <we...@sial.com>.
This Zookeeper ensemble doesn't look right.
>
> ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
> 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g


Shouldn't the zookeeper ensemble be specified as:
  zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181

You should put the ZooKeeper port on each host in the comma-separated list.
I don't know if this is your problem, but I think your Solr nodes will only
be connecting to one ZooKeeper.
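
In other words, something like this (paths copied from your command, with the
port added to every host):

./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p 8983 \
  -z zk0-esohad:2181,zk1-esohad:2181,zk3-esohad:2181 -m 8g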

On Thu, Mar 29, 2018 at 10:56 AM, Walter Underwood <wu...@wunderwood.org>
wrote:

> I had that problem. Very annoying and we probably should require special
> flag to use localhost.
>
> We need to start solr like this:
>
> ./solr start -c -h `hostname`
>
> If anybody ever forgets, we get a 127.0.0.1 node that shows down in
> cluster status. No idea how to get rid of that.
>
> wunder
> Walter Underwood
> wunder@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Mar 29, 2018, at 7:46 AM, Shawn Heisey <ap...@elyograg.org> wrote:
> >
> > On 3/29/2018 8:25 AM, Abhi Basu wrote:
> >> "Operation create caused
> >> exception:":"org.apache.solr.common.SolrException:org.
> apache.solr.common.SolrException:
> >> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
> >> and the number of nodes currently live or live and part of your
> >
> > I'm betting that all your nodes are registering themselves with the same
> name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an address
> on the loopback interface.
> >
> > Usually this problem (on an OS other than Windows, at least) is caused
> by an incorrect /etc/hosts file that maps your hostname to a  loopback
> address instead of a real address.
> >
> > You can override the value that SolrCloud uses to register itself into
> zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh, I
> think this is the SOLR_HOST variable, which gets translated into -Dhost=XXX
> on the java commandline.  It can also be configured in solr.xml.
> >
> > Thanks,
> > Shawn
> >
>
>


Re: Solr 7.2 cannot see all running nodes

Posted by Walter Underwood <wu...@wunderwood.org>.
I had that problem. It is very annoying, and we should probably require a special flag to use localhost.

We need to start solr like this:

./solr start -c -h `hostname`

If anybody ever forgets, we get a 127.0.0.1 node that shows as down in cluster status. No idea how to get rid of that.

wunder
Walter Underwood
wunder@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Mar 29, 2018, at 7:46 AM, Shawn Heisey <ap...@elyograg.org> wrote:
> 
> On 3/29/2018 8:25 AM, Abhi Basu wrote:
>> "Operation create caused
>> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
>> and the number of nodes currently live or live and part of your
> 
> I'm betting that all your nodes are registering themselves with the same name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an address on the loopback interface.
> 
> Usually this problem (on an OS other than Windows, at least) is caused by an incorrect /etc/hosts file that maps your hostname to a  loopback address instead of a real address.
> 
> You can override the value that SolrCloud uses to register itself into zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh, I think this is the SOLR_HOST variable, which gets translated into -Dhost=XXX on the java commandline.  It can also be configured in solr.xml.
> 
> Thanks,
> Shawn
> 


Re: Solr 7.2 cannot see all running nodes

Posted by Abhi Basu <90...@gmail.com>.
So, in solr.xml on each node, should I set the host to the actual host
name?

<solr>

  <solrcloud>

    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>

    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>

    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
    <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
    <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>

  </solrcloud>

</solr>
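
For example (host name made up), I could either hard-code it per node:

    <str name="host">solr-node1.example.com</str>

or leave the ${host:} property form as-is and pass -Dhost=solr-node1.example.com
(or set SOLR_HOST in solr.in.sh) when starting that node.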


On Thu, Mar 29, 2018 at 9:46 AM, Shawn Heisey <ap...@elyograg.org> wrote:

> On 3/29/2018 8:25 AM, Abhi Basu wrote:
>
>> "Operation create caused
>> exception:":"org.apache.solr.common.SolrException:org.apache
>> .solr.common.SolrException:
>> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
>> and the number of nodes currently live or live and part of your
>>
>
> I'm betting that all your nodes are registering themselves with the same
> name, and that name is probably either 127.0.0.1 or 127.1.1.0 -- an address
> on the loopback interface.
>
> Usually this problem (on an OS other than Windows, at least) is caused by
> an incorrect /etc/hosts file that maps your hostname to a  loopback address
> instead of a real address.
>
> You can override the value that SolrCloud uses to register itself into
> zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh, I
> think this is the SOLR_HOST variable, which gets translated into -Dhost=XXX
> on the java commandline.  It can also be configured in solr.xml.
>
> Thanks,
> Shawn
>
>


-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/29/2018 8:25 AM, Abhi Basu wrote:
> "Operation create caused
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
> and the number of nodes currently live or live and part of your

I'm betting that all your nodes are registering themselves with the same
name, and that name is probably either 127.0.0.1 or 127.0.1.1 -- an
address on the loopback interface.

Usually this problem (on an OS other than Windows, at least) is caused
by an incorrect /etc/hosts file that maps your hostname to a loopback
address instead of a real address.

You can override the value that SolrCloud uses to register itself into 
zookeeper so it doesn't depend on the OS configuration.  In solr.in.sh, 
I think this is the SOLR_HOST variable, which gets translated into 
-Dhost=XXX on the java commandline.  It can also be configured in solr.xml.
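
For example (placeholder host name), in solr.in.sh:

SOLR_HOST="solr-node1.example.com"

bin/solr turns that into -Dhost=solr-node1.example.com, which is the property the
default <str name="host">${host:}</str> entry in solr.xml picks up.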

Thanks,
Shawn


Re: Solr 7.2 cannot see all running nodes

Posted by Abhi Basu <90...@gmail.com>.
Yes, the admin UI is only showing one live node.

Checking the ZK logs.

Thanks,

Abhi

On Thu, Mar 29, 2018 at 9:32 AM, Ganesh Sethuraman <ga...@gmail.com>
wrote:

> may be you can check int he Admin UI --> Cloud --> Tree --> /live_nodes. To
> see the list of live nodes before running. If it is less than what you
> expected, check the Zoo keeper logs? or make sure connectivity between the
> shards and zookeeper.
>
> On Thu, Mar 29, 2018 at 10:25 AM, Abhi Basu <90...@gmail.com> wrote:
>
> > What am I missing? I used the following instructions
> > http://blog.thedigitalgroup.com/susheelk/2015/08/03/
> > solrcloud-2-nodes-solr-1-node-zk-setup/#comment-4321
> > on 4  nodes. The only difference is I have 3 external zk servers. So this
> > is how I am starting each solr node:
> >
> > ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/
> -p
> > 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
> >
> > They all run without any errors, but when trying to create a collection
> > with 2S/2R, I get an error saying only one node is running.
> >
> > ./server/scripts/cloud-scripts/zkcli.sh -zkhost
> > zk0-esohad,zk1-esohad,zk3-esohad:2181 -cmd upconfig -confname
> > ems-collection -confdir
> > /usr/local/bin/solr-7.2.1/server/solr/configsets/ems-
> > collection-72_configs/conf
> >
> >
> > "Operation create caused
> > exception:":"org.apache.solr.common.SolrException:org.
> apache.solr.common.
> > SolrException:
> > Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
> > and the number of nodes currently live or live and part of your
> > createNodeSet is 1. This allows a maximum of 1 to be created. Value of
> > numShards is 2, value of nrtReplicas is 2, value of tlogReplicas is 0 and
> > value of pullReplicas is 0. This requires 4 shards to be created (higher
> > than the allowed number)",
> >
> >
> > Any ideas?
> >
> > Thanks,
> >
> > Abhi
> >
> > --
> > Abhi Basu
> >
>



-- 
Abhi Basu

Re: Solr 7.2 cannot see all running nodes

Posted by Ganesh Sethuraman <ga...@gmail.com>.
Maybe you can check in the Admin UI --> Cloud --> Tree --> /live_nodes to
see the list of live nodes before creating the collection. If it is fewer than
you expected, check the ZooKeeper logs, or make sure there is connectivity
between the Solr nodes and ZooKeeper.
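
If the admin UI isn't handy, the same check can be done from the command line,
e.g. (host names are examples):

./server/scripts/cloud-scripts/zkcli.sh -zkhost zk0-esohad:2181 -cmd ls /live_nodes

or

curl "http://solr-node1:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"

and look at the live_nodes list in the output.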

On Thu, Mar 29, 2018 at 10:25 AM, Abhi Basu <90...@gmail.com> wrote:

> What am I missing? I used the following instructions
> http://blog.thedigitalgroup.com/susheelk/2015/08/03/
> solrcloud-2-nodes-solr-1-node-zk-setup/#comment-4321
> on 4  nodes. The only difference is I have 3 external zk servers. So this
> is how I am starting each solr node:
>
> ./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/ -p
> 8983 -z zk0-esohad,zk1-esohad,zk3-esohad:2181 -m 8g
>
> They all run without any errors, but when trying to create a collection
> with 2S/2R, I get an error saying only one node is running.
>
> ./server/scripts/cloud-scripts/zkcli.sh -zkhost
> zk0-esohad,zk1-esohad,zk3-esohad:2181 -cmd upconfig -confname
> ems-collection -confdir
> /usr/local/bin/solr-7.2.1/server/solr/configsets/ems-
> collection-72_configs/conf
>
>
> "Operation create caused
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.
> SolrException:
> Cannot create collection ems-collection. Value of maxShardsPerNode is 1,
> and the number of nodes currently live or live and part of your
> createNodeSet is 1. This allows a maximum of 1 to be created. Value of
> numShards is 2, value of nrtReplicas is 2, value of tlogReplicas is 0 and
> value of pullReplicas is 0. This requires 4 shards to be created (higher
> than the allowed number)",
>
>
> Any ideas?
>
> Thanks,
>
> Abhi
>
> --
> Abhi Basu
>