Posted to commits@lucene.apache.org by no...@apache.org on 2019/06/07 02:39:15 UTC

[lucene-solr] 01/02: SOLR-13329: ref guide

This is an automated email from the ASF dual-hosted git repository.

noble pushed a commit to branch branch_8x
in repository https://gitbox.apache.org/repos/asf/lucene-solr.git

commit 10242afb1b34561b47ffcafbdfddfdae51018291
Author: noble <no...@apache.org>
AuthorDate: Tue Jun 4 15:36:16 2019 +1000

    SOLR-13329: ref guide
---
 .../src/solrcloud-autoscaling-api.adoc             |  6 +--
 .../solrcloud-autoscaling-policy-preferences.adoc  | 57 ++++++++++++++++------
 2 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/solr/solr-ref-guide/src/solrcloud-autoscaling-api.adoc b/solr/solr-ref-guide/src/solrcloud-autoscaling-api.adoc
index a30b3ae..e8203ec 100644
--- a/solr/solr-ref-guide/src/solrcloud-autoscaling-api.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-autoscaling-api.adoc
@@ -150,7 +150,7 @@ If there is no autoscaling policy configured or if you wish to use a configurati
 ----
  curl -X POST -H 'Content-type:application/json'  -d '{
  "cluster-policy": [
-   {"replica": 0,  "port" : "7574"}   ]
+   {"replica": 0, "put" : "on-each", "nodeset": {"port" : "7574"}}]
  }' http://localhost:8983/api/cluster/autoscaling/diagnostics?omitHeader=true
 ----
 
@@ -334,7 +334,7 @@ If there is no autoscaling policy configured or if you wish to use a configurati
 ----
 curl -X POST -H 'Content-type:application/json'  -d '{
  "cluster-policy": [
-   {"replica": 0,  "port" : "7574"}
+    {"replica": 0, "put" : "on-each", "nodeset": {"port" : "7574"}}
    ]
 }' http://localhost:8983/solr/admin/autoscaling/suggestions?omitHeader=true
 ----
@@ -629,7 +629,7 @@ Refer to the <<solrcloud-autoscaling-policy-preferences.adoc#policy-specificatio
 {
 "set-policy": {
   "policy1": [
-    {"replica": "1", "shard": "#EACH", "port": "8983"}
+    {"replica": "1", "shard": "#EACH", "nodeset":{"port": "8983"}}
     ]
   }
 }
diff --git a/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc b/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
index c54caf4..57a14de 100644
--- a/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
@@ -118,6 +118,37 @@ Per-collection rules have four parts:
 
 ==== Node Selector
 
+A node selector is specified using either the `node` attribute or the `nodeset` attribute. It filters the set of nodes to which the rule is applied.
+
+Examples:
+
+[source,json]
+{ "replica" : "<2", "node":"#ANY"}
+
+
+[source,json]
+//place 3 replicas in the group of nodes node-name1, node-name2
+{  "replica" : "3",  "nodeset":["node-name1","node-name2"]}
+
+[source,json]
+{ "nodeset":{"<property-name>":"<property-value>"}}
+
+The property name can be one of `node`, `host`, `sysprop.*`, `freedisk`, `ip_*`, `nodeRole`, `heapUsage`, or `metrics.*`.
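+
+For instance, a sketch of a selector using the `host` property (the hostname is a placeholder):
+
+[source,json]
+{"replica": "<2", "nodeset": {"host": "host1.example.com"}}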
+
+When using the `nodeset` attribute, an optional attribute `put` can be used to specify how to distribute the replicas in that node set.
+
+For example:
+
+[source,json]
+//put one replica on each node with a system property zone=east
+{ "replica":1, "put" :"on-each", "nodeset":{"sysprop.zone":"east"}}
+
+[source,json]
+//put a total of 2 replicas on the set of nodes with the system property zone=east
+//(without "put": "on-each", the replica count applies to the node set as a whole)
+{"replica": 2, "nodeset": {"sysprop.zone": "east"}}
+
 Rule evaluation is restricted to node(s) matching the value of one of the following attributes: <<node-attribute,`node`>>, <<port-attribute,`port`>>, <<ip-attributes,`ip_\*`>>, <<sysprop-attribute,`sysprop.*`>>, or <<diskType-attribute,`diskType`>>.  For replica/core count constraints other than `#EQUAL`, a condition specified in one of the following attributes may instead be used to select nodes: <<freedisk-attribute,`freedisk`>>, <<host-attribute,`host`>>, <<sysLoadAvg-attribute,`sysLoadAvg`>>, <<heapUsage-attribute,`heapUsage`>>, <<nodeRole-attribute,`nodeRole`>>, or <<metrics-attribute,`metrics.*`>>.
 
 Except for `node`, the attributes above cause selected nodes to be partitioned into node groups. A node group is referred to as a "bucket". Those attributes usable with the `#EQUAL` directive may define buckets either via the special function <<each-function,`#EACH`>> or an <<array-operator,array>> `["value1", ...]` (a subset of all possible values); in both cases, each node is placed in the bucket corresponding to the matching attribute value.
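+
+For example, an array value partitions the matching nodes into one bucket per listed value; the zone-based rules later in this section use the same pattern:
+
+[source,json]
+{"replica": "#EQUAL", "shard": "#EACH", "nodeset": {"sysprop.zone": ["east", "west"]}}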
@@ -309,14 +340,14 @@ Do not place more than 10 cores in <<any-function,any>> node. This rule can only
 Place exactly 1 replica of <<each-function,each>> shard of collection `xyz` on a node running on port `8983`.
 
 [source,json]
-{"replica": 1, "shard": "#EACH", "collection": "xyz", "port": "8983"}
+{"replica": 1, "shard": "#EACH", "collection": "xyz", "nodeset": {"port": "8983"}}
 
 ==== Place Replicas Based on a System Property
 
 Place <<all-function,all>> replicas on nodes with system property `availability_zone=us-east-1a`.
 
 [source,json]
-{"replica": "#ALL", "sysprop.availability_zone": "us-east-1a"}
+{"replica": "#ALL", "nodeset": "sysprop.availability_zone": "us-east-1a"}}
 
 ==== Use Percentage
 
@@ -343,8 +374,8 @@ Distribute replicas of <<each-function,each>> shard of each collection across da
 
 [source,json]
 ----
-{"replica": "33%", "shard": "#EACH", "sysprop.zone": "east"}
-{"replica": "66%", "shard": "#EACH", "sysprop.zone": "west"}
+{"replica": "33%", "shard": "#EACH", "nodeset":{ "sysprop.zone": "east"}}
+{"replica": "66%", "shard": "#EACH", "nodeset":{"sysprop.zone": "west"}}
 ----
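+
+With 3 replicas per shard, for example, these rules place roughly 1 replica (33% of 3) in the `east` zone and 2 replicas (66% of 3) in the `west` zone.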
 
 For the above rules to work, all nodes must be started with a system property called `"zone"`.
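+
+For example, the property could be set on the command line when starting each node (a sketch; adapt to how you launch Solr):
+
+[source,bash]
+bin/solr start -c -Dzone=east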
@@ -354,48 +385,44 @@ For the above rules to work, all nodes must the started with a system property c
 For <<each-function,each>> shard of each collection, distribute replicas equally across the `east` and `west` zones.
 
 [source,json]
-{"replica": "#EQUAL", "shard": "#EACH", "sysprop.zone": ["east", "west"]}
+{"replica": "#EQUAL", "shard": "#EACH", "nodeset":{"sysprop.zone": ["east", "west"]}}
 
-Distribute replicas equally across <<each-function,each>> zone.
-
-[source,json]
-{"replica": "#EQUAL", "shard": "#EACH", "sysprop.zone": "#EACH"}
 
 ==== Place Replicas Based on Node Role
 
 Do not place any replica on any node that has the overseer role. Note that the role is added by the `addRole` collection API. It is *not* automatically the node which is currently the overseer.
 
 [source,json]
-{"replica": 0, "nodeRole": "overseer"}
+{"replica": 0, "put" :"on-each", "nodeset":{ "nodeRole": "overseer"}}
 
 ==== Place Replicas Based on Free Disk
 
 Place <<all-function,all>> replicas in nodes where <<freedisk-attribute,freedisk>> is greater than 500GB.
 
 [source,json]
-{"replica": "#ALL", "freedisk": ">500"}
+{"replica": "#ALL", "nodeset":{ "freedisk": ">500"}}
 
 Keep all replicas in nodes where <<freedisk-attribute,freedisk>> percentage is greater than `50%`.
 
 [source,json]
-{"replica": "#ALL", "freedisk": ">50%"}
+{"replica": "#ALL", "nodeset":{"freedisk": ">50%"}}
 
 ==== Try to Place Replicas Based on Free Disk
 
 When possible, place <<all-function,all>> replicas in nodes where <<freedisk-attribute,freedisk>> is greater than 500GB.  Here we use the <<Rule Strictness,`strict`>> attribute to signal that this rule is to be honored on a best effort basis.
 
 [source,json]
-{"replica": "#ALL", "freedisk": ">500", "strict": false}
+{"replica": "#ALL", "nodeset":{ "freedisk": ">500"}, "strict": false}
 
 ==== Place All Replicas of Type TLOG on Nodes with SSD Drives
 
 [source,json]
-{"replica": "#ALL", "type": "TLOG",  "diskType": "ssd"}
+{"replica": "#ALL", "type": "TLOG", "nodeset": {"diskType": "ssd"}}
 
 ==== Place All Replicas of Type PULL on Nodes with Rotational Disk Drives
 
 [source,json]
-{"replica": "#ALL", "type": "PULL", "diskType": "rotational"}
+{"replica": "#ALL", "type": "PULL", "nodeset" : {"diskType": "rotational"}}
 
 [[collection-specific-policy]]
 == Defining Collection-Specific Policies