Posted to solr-user@lucene.apache.org by Mark Miller <ma...@gmail.com> on 2015/10/01 19:34:44 UTC

Re: Cloud Deployment Strategy... In the Cloud

On Wed, Sep 30, 2015 at 10:36 AM Steve Davids <sd...@gmail.com> wrote:

> Our project built a custom "admin" webapp that we use for various O&M
> activities, so I went ahead and added the ability to upload a Zip
> distribution, which then uses SolrJ to forward the extracted contents to
> ZK. This package is built and uploaded via a Gradle build task, which
> makes life easy on us by allowing us to jam stuff into ZK, which is
> sitting in a private network (local VPC), without necessarily needing to
> be on a ZK machine. We then moved on to creating collections (trivial)
> and adding/removing replicas. As for adding replicas, I am rather
> confused as to why I would need to specify a specific shard for replica
> placement; before, when I threw down a core.properties file, the machine
> would automatically come up and figure out which shard it should join
> based on reasonable assumptions - why wouldn't the same logic apply here?


I'd file a JIRA issue for the functionality.
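
For anyone following along, that ZooKeeper upload really only needs
SolrJ's ZK client, no Solr node or ZK shell involved. A rough, untested
sketch (the zkHost string, local path, and config set name are
placeholders; ZkConfigManager is the config-set helper in recent SolrJ):

import java.nio.file.Paths;

import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkConfigManager;

public class PushConfigToZk {
  public static void main(String[] args) throws Exception {
    // Placeholder connect string for the ZK ensemble inside the VPC.
    String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr";

    // SolrZkClient talks straight to ZooKeeper, so this can run from
    // anywhere that can reach the ensemble.
    SolrZkClient zkClient = new SolrZkClient(zkHost, 30000);
    try {
      // Upload the extracted zip contents (solrconfig.xml, schema, etc.)
      // as a named config set that collections can reference.
      new ZkConfigManager(zkClient).uploadConfigDir(
          Paths.get("/tmp/extracted-config"), "myconfig");
    } finally {
      zkClient.close();
    }
  }
}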


> I then saw that a Rule-based Replica Placement
> <https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement>
> feature was added, which I thought would be reasonable, but after looking
> at the tests <https://issues.apache.org/jira/browse/SOLR-7577> it appears
> to still require a shard parameter for adding a replica, which seems to
> defeat the entire purpose.


I was not involved in the addReplica command, but the predefined stuff
worked that way just to make bootstrapping a cluster really simple. I
don't see why addReplica couldn't follow the same logic if no shard was
specified.
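
For reference, this is roughly what the call looks like today, shard and
all (an untested sketch against the 5.x SolrJ CollectionAdminRequest API;
the collection, shard, and node names below are made up):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class AddReplicaSketch {
  public static void main(String[] args) throws Exception {
    String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr"; // placeholder
    CloudSolrClient client = new CloudSolrClient(zkHost);
    try {
      CollectionAdminRequest.AddReplica addReplica =
          new CollectionAdminRequest.AddReplica();
      addReplica.setCollectionName("mycollection");
      addReplica.setShardName("shard1");        // the parameter in question
      addReplica.setNode("10.0.0.5:8983_solr"); // optional: pin to a node
      addReplica.process(client);
    } finally {
      client.close();
    }
  }
}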


> So after getting bummed out about that, I took a look at the delete
> replica request, since machines come and go and we need to start dropping
> them. I found that delete replica requires a collection, shard, and
> replica name, and if all I have is the name of the machine, it appears
> the only way to figure out what to remove is by walking the clusterstate
> tree for all collections and determining which replicas are candidates
> for removal, which seems unnecessarily complicated.
>

You should not need the shard for this call. The combination of collection
and replica core node name will already be unique. Another JIRA issue?
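
In the meantime, a client that only knows the machine name ends up doing
more or less the walk described above. A rough, untested SolrJ sketch
(the zkHost and node name are placeholders; it issues one DELETEREPLICA
per matching replica):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;

public class DropReplicasOnNode {
  public static void main(String[] args) throws Exception {
    String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr"; // placeholder
    String goneNode = "10.0.0.5:8983_solr";            // node_name as it appears in cluster state

    CloudSolrClient client = new CloudSolrClient(zkHost);
    try {
      client.connect();
      ClusterState clusterState = client.getZkStateReader().getClusterState();

      // Walk every collection and shard, deleting replicas hosted on the
      // node that went away.
      for (String collectionName : clusterState.getCollections()) {
        DocCollection collection = clusterState.getCollection(collectionName);
        for (Slice slice : collection.getSlices()) {
          for (Replica replica : slice.getReplicas()) {
            if (goneNode.equals(replica.getNodeName())) {
              CollectionAdminRequest.DeleteReplica delete =
                  new CollectionAdminRequest.DeleteReplica();
              delete.setCollectionName(collectionName);
              delete.setShardName(slice.getName());
              delete.setReplica(replica.getName()); // core node name
              delete.process(client);
            }
          }
        }
      }
    } finally {
      client.close();
    }
  }
}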


>
> Hopefully I don't come off as complaining, but rather as looking at it
> from a client perspective: the Collections API doesn't seem simple to
> use, and really the only reason I am messing around with it now is that
> there are repeated threats to make "zk as truth" the default in the 5.x
> branch at some point in the future. I would personally advocate that
> something like autoManageReplicas
> <https://issues.apache.org/jira/browse/SOLR-5748> be introduced to make
> life much simpler on clients, as this appears to be the thing I am trying
> to implement externally.
>
> If anyone has happened to build a system to orchestrate Solr for cloud
> infrastructure and has some pointers, it would be greatly appreciated.
>
> Thanks,
>
> -Steve
>
>
> --
- Mark
about.me/markrmiller