Posted to commits@lucene.apache.org by ct...@apache.org on 2017/04/20 19:39:12 UTC

[2/4] lucene-solr:jira/solr-10290: SOLR-10290: update raw content files

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/performance-statistics-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/performance-statistics-reference.adoc b/solr/solr-ref-guide/src/performance-statistics-reference.adoc
index 97cc4cd..2b3cff3 100644
--- a/solr/solr-ref-guide/src/performance-statistics-reference.adoc
+++ b/solr/solr-ref-guide/src/performance-statistics-reference.adoc
@@ -30,12 +30,12 @@ Both Update Request Handler and Search Request Handler along with handlers like
 |Attribute |Description
 |15minRateReqsPerSecond |Requests per second received over the past 15 minutes.
 |5minRateReqsPerSecond |Requests per second received over the past 5 minutes.
-|75thPcRequestTime |Request processing time for the request which belongs to the 75th Percentile. E.g. if 100 requests are received, then the 75th fastest request time will be reported by this statistic.
-|95thPcRequestTime |Request processing time in milliseconds for the request which belongs to the 95th Percentile. E.g. if 80 requests are received, then the 76th fastest request time will be reported in this statistic.
+|75thPcRequestTime |Request processing time for the request which belongs to the 75th Percentile. E.g., if 100 requests are received, then the 75th fastest request time will be reported by this statistic.
+|95thPcRequestTime |Request processing time in milliseconds for the request which belongs to the 95th Percentile. E.g., if 80 requests are received, then the 76th fastest request time will be reported in this statistic.
 |999thPcRequestTime |Request processing time in milliseconds for the request which belongs to the 99.9th Percentile. E.g., if 1000 requests are received, then the 999th fastest request time will be reported in this statistic.
-|99thPcRequestTime |Request processing time in milliseconds for the request which belongs to the 99th Percentile. E.g. if 200 requests are received, then the 198th fastest request time will be reported in this statistic.
+|99thPcRequestTime |Request processing time in milliseconds for the request which belongs to the 99th Percentile. E.g., if 200 requests are received, then the 198th fastest request time will be reported in this statistic.
 |avgRequestsPerSecond |Average number of requests received per second.
-|avgTimePerRequest |Average time taken for processing the requests.
+|avgTimePerRequest |Average time taken for processing the requests. This parameter will decay over time, with a bias toward activity in the last 5 minutes.
 |errors |Number of errors encountered by the handler.
 |clientErrors |Number of syntax or parse errors made by clients while making requests.
 |handlerStart |Epoch time when the handler was registered.
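+
+These statistics can also be retrieved programmatically, which can be handy for monitoring. A minimal sketch, assuming a core named "techproducts" (substitute your own core name):
+
+[source,bash]
+----
+# Fetch request handler statistics (including the rates and percentile
+# request times described above) from the MBeans endpoint as JSON.
+curl "http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&wt=json"
+----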

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/putting-the-pieces-together.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/putting-the-pieces-together.adoc b/solr/solr-ref-guide/src/putting-the-pieces-together.adoc
index 4abbd94..b5b64cb 100644
--- a/solr/solr-ref-guide/src/putting-the-pieces-together.adoc
+++ b/solr/solr-ref-guide/src/putting-the-pieces-together.adoc
@@ -27,9 +27,9 @@ Note that the `types` and `fields` sections are optional, meaning you are free t
 [[PuttingthePiecesTogether-ChoosingAppropriateNumericTypes]]
 == Choosing Appropriate Numeric Types
 
-For general numeric needs, use `TrieIntField`, `TrieLongField`, `TrieFloatField`, and `TrieDoubleField` with `precisionStep="0"`.
+For general numeric needs, consider using one of the `IntPointField`, `LongPointField`, `FloatPointField`, or `DoublePointField` classes, depending on the specific values you expect. These "Dimensional Point" based numeric classes use specially encoded data structures to support efficient range queries regardless of the size of the ranges used. Enable <<docvalues.adoc#docvalues,DocValues>> on these fields as needed for sorting and/or faceting.
 
-If you expect users to make frequent range queries on numeric types, use the default `precisionStep` (by not specifying it) or specify it as `precisionStep="8"` (which is the default). This offers faster speed for range queries at the expense of increasing index size.
+Some Solr features may not yet work with "Dimensional Points", in which case you may want to consider the equivalent `TrieIntField`, `TrieLongField`, `TrieFloatField`, and `TrieDoubleField` classes. Configure `precisionStep="0"` if you wish to minimize index size, but if you expect users to make frequent range queries on numeric types, use the default `precisionStep` (by not specifying it) or specify it as `precisionStep="8"` (which is the default). This offers faster speed for range queries at the expense of increasing index size.
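+
+For example, a point-based field type could be registered via the Schema API. A minimal sketch, assuming a collection named "gettingstarted" (the field type name here is illustrative):
+
+[source,bash]
+----
+# Add a long field type backed by Dimensional Points, with docValues
+# enabled to support sorting and faceting.
+curl -X POST -H 'Content-type:application/json' --data-binary '{
+  "add-field-type": {
+    "name": "plong",
+    "class": "solr.LongPointField",
+    "docValues": true
+  }
+}' http://localhost:8983/solr/gettingstarted/schema
+----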
 
 [[PuttingthePiecesTogether-WorkingWithText]]
 == Working With Text

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
index 624c0ce..896c850 100644
--- a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
@@ -32,6 +32,8 @@ The Statistics page in the Solr Admin UI will display information about the perf
 
 Each cache has settings to define its initial size (`initialSize`), maximum size (`size`) and number of items to use during warming (`autowarmCount`). The LRU and FastLRU cache implementations can take a percentage instead of an absolute value for `autowarmCount`.
 
+FastLRUCache and LFUCache support a `showItems` attribute. This is the number of cache items to display in the stats page for the cache, and is useful for debugging.
+
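+A minimal sketch of a cache configured with these attributes in `solrconfig.xml` (the values here are illustrative):
+
+[source,xml]
+----
+<filterCache class="solr.FastLRUCache"
+             size="512"
+             initialSize="512"
+             autowarmCount="128"
+             showItems="32"/>
+----
+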
 Details of each cache are described below.
 
 [[QuerySettingsinSolrConfig-filterCache]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/result-grouping.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-grouping.adoc b/solr/solr-ref-guide/src/result-grouping.adoc
index 1ffb241..81ea51a 100644
--- a/solr/solr-ref-guide/src/result-grouping.adoc
+++ b/solr/solr-ref-guide/src/result-grouping.adoc
@@ -8,7 +8,7 @@ Result Grouping groups documents with a common field value into groups and retur
 [NOTE]
 ====
 
-Solr's <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand>> feature is newer and mostly overlaps with Result Grouping. There are features unique to both, and they have different performance characteristics. Prefer C&E to Result Grouping.
+Solr's <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand>> feature is newer and mostly overlaps with Result Grouping. There are features unique to both, and they have different performance characteristics. That said, in most cases Collapse and Expand is preferable to Result Grouping (see the sketch following this note).
 
 ====
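+
+For illustration, a Collapse and Expand request might look like the following sketch (assuming the "techproducts" example collection and a `manu_id_s` field to collapse on):
+
+[source,bash]
+----
+# Collapse results to one document per manu_id_s value, then expand to
+# also retrieve the collapsed members of each group.
+curl "http://localhost:8983/solr/techproducts/select" \
+  --data-urlencode "q=ipod" \
+  --data-urlencode "fq={!collapse field=manu_id_s}" \
+  --data-urlencode "expand=true"
+----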
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index 8e97dfd..06480cd 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -6,12 +6,12 @@ Solr allows configuring roles to control user access to the system. This is acco
 
 The roles can be used with any of the authentication plugins or with a custom authentication plugin if you have created one. You will only need to ensure that you configure the role-to-user mappings with the proper user IDs that your authentication system provides.
 
-Once defined through the API, roles are stored in `security.json` in ZooKeeper. This means this feature is available **when using Solr in SolrCloud mode only**.
+Once defined through the API, roles are stored in `security.json`.
 
 [[Rule-BasedAuthorizationPlugin-EnabletheAuthorizationPlugin]]
 == Enable the Authorization Plugin
 
-The plugin must be enabled in `security.json`. This file and how to upload it to ZooKeeper is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnabledPluginswithsecurity.json,Enable Plugins with security.json>>.
+The plugin must be enabled in `security.json`. This file, and where to put it in your system, are described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnabledPluginswithsecurity.json,Enable Plugins with security.json>>.
 
 This file has two parts, the `authentication` part and the `authorization` part. The `authentication` part stores information about the class being used for authentication.
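+
+A minimal sketch of such a `security.json`, using the well-known `solr:SolrRocks` example credentials:
+
+[source,json]
+----
+{
+  "authentication": {
+    "class": "solr.BasicAuthPlugin",
+    "credentials": {
+      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
+    }
+  },
+  "authorization": {
+    "class": "solr.RuleBasedAuthorizationPlugin",
+    "permissions": [{ "name": "security-edit", "role": "admin" }],
+    "user-role": { "solr": "admin" }
+  }
+}
+----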
 
@@ -89,8 +89,8 @@ The pre-defined permissions are:
 ** OVERSEERSTATUS
 ** CLUSTERSTATUS
 ** REQUESTSTATUS
-* **update**: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>).
-* **read**: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`.
+* **update**: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
+* **read**: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, and `/sql`. This applies to all collections by default (`collection:"*"`).
 * **all**: Any requests coming to Solr.
 
 [[Rule-BasedAuthorizationPlugin-AuthorizationAPI]]
@@ -123,7 +123,7 @@ Several properties can be used to define your custom permission.
 |collection a|
 The collection or collections the permission will apply to.
 
-When the path that will be allowed is collection-specific, such as when setting permissions to allow useof the Schema API, omitting the collection property will allow the defined path and/or method for all collections. However, when the path is one that is non-collection-specific, such as the Collections API, the collection value must be `null`.
+When the path that will be allowed is collection-specific, such as when setting permissions to allow use of the Schema API, omitting the collection property will allow the defined path and/or method for all collections. However, when the path is one that is non-collection-specific, such as the Collections API, the collection value must be `null`. The default value is `*` (all collections).
 
 |path |A request handler name, such as `/update` or `/select`. A wild card is supported, to allow for all paths as appropriate (such as, `/update/*`).
 |method |HTTP methods that are allowed for this permission. You could allow only GET requests, or have a role that allows PUT and POST requests. The method values that are allowed for this property are GET, POST, PUT, DELETE, and HEAD.
@@ -154,7 +154,7 @@ If the commands LIST and CLUSTERSTATUS are case insensitive, the above example s
 |role |The name of the role(s) to give this permission. This name will be used to map user IDs to the role to grant these permissions. The value can be a wildcard (`*`), which means any authenticated user is allowed, but a request with no user is not.
 |===
 
-The following would create a new permission named "collection-mgr" that is allowed to create and list collections. The permission will be placed before the "read" permission. Note also that we have defined "collection as `null`, this is because requests to the Collections API are never collection-specific.
+The following creates a new permission named "collection-mgr" that is allowed to create and list collections. The permission will be placed before the "read" permission. Note also that we have defined "collection" as `null`; this is because requests to the Collections API are never collection-specific.
 
 [source,bash]
 ----
@@ -167,12 +167,22 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
 }' http://localhost:8983/solr/admin/authorization 
 ----
 
-[[Rule-BasedAuthorizationPlugin-updateordeletepermissions]]
-==== update or delete permissions
+The following applies an update permission on all collections to a role called '`dev`' and a read permission to a role called '`guest`':
 
-Permissions can be accessed using their index in the list. Use the GET /security/authorization to see the existing permissions and their indices.
+[source,bash]
+----
+curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{ 
+  "set-permission": {"name": "update, "role":"dev"},
+  "set-permission": {"name": "read, "role":"guest"},
+}' http://localhost:8983/solr/admin/authorization 
+----
+
+[[Rule-BasedAuthorizationPlugin-UpdateorDeletePermissions]]
+=== Update or Delete Permissions
+
+Permissions can be accessed using their index in the list. Use the `/admin/authorization` API to see the existing permissions and their indices.
 
-the following example updates the `'role'` attribute of permission at index `'3'`
+The following example updates the '`role`' attribute of permission at index '`3`':
 
 [source,bash]
 ----
@@ -182,7 +192,7 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
 }' http://localhost:8983/solr/admin/authorization 
 ----
 
-the following example deletes permission at index `'3'`
+The following example deletes permission at index '`3`':
 
 [source,bash]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
index cb4dc17..eebf665 100644
--- a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
+++ b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
@@ -182,4 +182,4 @@ Rules are specified per collection during collection creation as request paramet
 snitch=class:EC2Snitch&rule=shard:*,replica:1,dc:dc1&rule=shard:*,replica:<2,dc:dc3
 ----
 
-These rules are persisted in `clusterstate.json` in Zookeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION>> API.
+These rules are persisted in `clusterstate.json` in ZooKeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION>> API.
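+
+For example, the rules on an existing collection could be changed with a request along these lines (a sketch; the collection name is illustrative):
+
+[source,bash]
+----
+# Replace the placement rules on collection "mycollection" so that no
+# node holds more than one replica of any shard.
+curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=mycollection&rule=replica:<2,node:*"
+----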

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 3af5834..f189fce 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -2,25 +2,35 @@
 :page-shortname: schema-api
 :page-permalink: schema-api.html
 
-The Schema API provides read and write access to the Solr schema for each collection (or core, when using standalone Solr). Read access to all schema elements is supported. Fields, dynamic fields, field types and copyField rules may be added, removed or replaced. Future Solr releases will extend write access to allow more schema elements to be modified.
+The Schema API utilizes the ManagedIndexSchemaFactory class, which is the default schema factory in modern Solr versions. See the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more information about choosing a schema factory for your index.
 
-.Re-index after schema modifications!
+This API provides read and write access to the Solr schema for each collection (or core, when using standalone Solr). Read access to all schema elements is supported. Fields, dynamic fields, field types and copyField rules may be added, removed or replaced. Future Solr releases will extend write access to allow more schema elements to be modified.
+
+.Why is hand editing of the managed schema discouraged?
 [IMPORTANT]
 ====
 
-If you modify your schema, you will likely need to re-index all documents. If you do not, you may lose access to documents, or not be able to interpret them properly, e.g. after replacing a field type.
+The file named "managed-schema" in the example configurations may include a note that recommends never hand-editing the file. Before the Schema API existed, such edits were the only way to make changes to the schema, and users may have a strong desire to continue making changes this way.
 
-Modifying your schema will never modify any documents that are already indexed. Again, you must re-index documents in order to apply schema changes to them.
+Hand-editing is discouraged because such edits may be lost if the Schema API described here is later used to make a change, unless the core or collection is reloaded or Solr is restarted before using the Schema API. If care is taken to always reload or restart after a manual edit, then there is no problem at all with doing those edits.
 
 ====
 
-To enable schema modification with this API, the schema will need to be managed and mutable. See the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more information.
+The API allows two output modes for all calls: JSON or XML. When requesting the complete schema, there is another output mode modeled after the managed-schema file itself, which is in XML format.
 
-The API allows two output modes for all calls: JSON or XML. When requesting the complete schema, there is another output mode which is XML modeled after the schema.xml file itself.
+When modifying the schema with the API, a core reload will automatically occur in order for the changes to be available immediately for documents indexed thereafter. Previously indexed documents will *not* be automatically updated - they *must* be re-indexed if existing index data uses schema elements that you changed.
 
-When modifying the schema with the API, a core reload will automatically occur in order for the changes to be available immediately for documents indexed thereafter. Previously indexed documents will *not* be automatically handled - they *must* be re-indexed if they used schema elements that you changed.
+.Re-index after schema modifications!
+[IMPORTANT]
+====
+
+If you modify your schema, you will likely need to re-index all documents. If you do not, you may lose access to documents, or not be able to interpret them properly, e.g. after replacing a field type.
+
+Modifying your schema will never modify any documents that are already indexed. You must re-index documents in order to apply schema changes to them. Queries and updates made after the change may encounter errors that were not present before the change. Completely deleting the index and rebuilding it is usually the only option to fix such errors.
+
+====
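+
+As a concrete sketch of a schema modification, the following Schema API call adds a field to the "gettingstarted" collection (the field name and type here are illustrative) and triggers the automatic core reload described above:
+
+[source,bash]
+----
+# Add a stored date field via the Schema API.
+curl -X POST -H 'Content-type:application/json' --data-binary '{
+  "add-field": {
+    "name": "sell_by",
+    "type": "tdate",
+    "stored": true
+  }
+}' http://localhost:8983/solr/gettingstarted/schema
+----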
 
-The base address for the API is `http://<host>:<port>/solr/<collection_name>`. If for example you run Solr's "```cloud```" example (via the `bin/solr` command shown below), which creates a "```gettingstarted```" collection, then the base URL (as in all the sample URLs in this section) would be: `http://localhost:8983/solr/gettingstarted` .
+The base address for the API is `http://<host>:<port>/solr/<collection_name>`. If for example you run Solr's "```cloud```" example (via the `bin/solr` command shown below), which creates a "```gettingstarted```" collection, then the base URL for that collection (as in all the sample URLs in this section) would be: `http://localhost:8983/solr/gettingstarted` .
 
 [source,bash]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index a19b4a0..eeece8f 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -449,7 +449,7 @@ An example of this command with these parameters is:
 [WARNING]
 ====
 
-This command does **not** automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that uses this configuration set.
+This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that use this configuration set.
 
 ====
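+
+For instance, a collection using the uploaded configset could be reloaded with a request like this sketch (the collection name is illustrative):
+
+[source,bash]
+----
+# Reload the collection so the newly uploaded configuration takes effect.
+curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
+----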
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index c92e93d..2d37c6a 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -9,29 +9,43 @@ Solr supports location data for use in spatial/geospatial searches. Using spatia
 * Sort or boost scoring by distance between points, or relative area between rectangles
 * Generate a 2D grid of facet count numbers for heatmap generation or point-plotting.
 
-There are three main field types available for spatial search:
+There are four main field types available for spatial search:
 
-* `LatLonType` and its non-geodetic twin `PointType`
+* `LatLonPointSpatialField`
+* `LatLonType` (now deprecated) and its non-geodetic twin PointType
 * `SpatialRecursivePrefixTreeFieldType` (RPT for short), including `RptWithGeometrySpatialField`, a derivative
 * `BBoxField`
 
-RPT offers more features than LatLonType and fast filter performance, although LatLonType is more appropriate when efficient distance sorting/boosting is desired. They can both be used simultaneously for what each does best – LatLonType for sorting/boosting, RPT for filtering. If you need to index shapes other than points (e.g. a circle or polygon) then use RPT.
+LatLonPointSpatialField is the ideal field type for the most common use-cases for lat-lon point data. It replaces LatLonType, which still exists for backwards compatibility. RPT offers more features for advanced or custom use cases, such as polygons and heatmaps.
+
+RptWithGeometrySpatialField is for indexing and searching non-point data, though it can handle points too. It cannot do sorting/boosting.
 
 BBoxField is for indexing bounding boxes, querying by a box, specifying a search predicate (Intersects,Within,Contains,Disjoint,Equals), and a relevancy sort/boost like overlapRatio or simply the area.
 
-Some details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
+Some esoteric details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
+
+[[SpatialSearch-LatLonPointSpatialField]]
+== LatLonPointSpatialField
+
+Here's how LatLonPointSpatialField should usually be configured in the schema:
+
+`<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>`
+
+LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLPSF internally uses a 2-dimensional Lucene "Points" (BKD tree) index when "indexed" is enabled (the default). When "docValues" is enabled, a latitude and longitude pair are bit-interleaved into 64 bits and put into Lucene DocValues. The accuracy of the docValues data is about a centimeter.
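+
+For illustration, a simple distance filter against such a field might look like this sketch (the field and collection names are illustrative):
+
+[source,bash]
+----
+# Find documents within 5 km of the given point, using the geofilt
+# query parser against a LatLonPointSpatialField named "location".
+curl "http://localhost:8983/solr/mycollection/select" \
+  --data-urlencode "q=*:*" \
+  --data-urlencode "fq={!geofilt sfield=location pt=45.15,-93.85 d=5}"
+----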
 
-[[SpatialSearch-IndexingandConfiguration]]
-== Indexing and Configuration
+[[SpatialSearch-IndexingPoints]]
+== Indexing Points
 
-For indexing geodetic points (latitude and longitude), supply the pair of numbers as a string with a comma separating them in latitude then longitude order. For non-geodetic points, the order is x,y for PointType, and for RPT you must use a space instead of a comma, or use WKT or GeoJSON.
+For indexing geodetic points (latitude and longitude), supply them as a pair of numbers in "lat,lon" order (comma separated).
 
-See the section <<SpatialSearch-RPT,RPT>> below for RPT configuration specifics.
+For indexing non-geodetic points, the format depends on the field type: use "x y" (a space) for RPT, but "x,y" (a comma) for PointType.
 
-[[SpatialSearch-SpatialFilters]]
-== Spatial Filters
+If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. However, these formats are much bulkier than the raw coordinates for such simple data. (They are not supported by the deprecated LatLonType or PointType.)
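+
+For example, indexing a document with a geodetic point in "lat,lon" form might look like this sketch (the collection and field names are illustrative):
+
+[source,bash]
+----
+# Index a single document whose "location" field holds a lat,lon pair.
+curl -X POST -H 'Content-type:application/json' \
+  'http://localhost:8983/solr/mycollection/update?commit=true' \
+  --data-binary '[{"id": "1", "location": "45.17614,-93.87341"}]'
+----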
 
-There are 2 types of Spatial filters, which both support the following parameters:
+[[SpatialSearch-SearchingwithQueryParsers]]
+== Searching with Query Parsers
+
+There are two spatial Solr "query parsers" for geospatial search: `geofilt` and `bbox`. They take the following parameters:
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
@@ -42,7 +56,7 @@ There are 2 types of Spatial filters, which both support the following parameter
 |pt |the center point using the format "lat,lon" if latitude & longitude. Otherwise, "x,y" for PointType or "x y" for RPT field types.
 |sfield |a spatial indexed field
 |score a|
-(Advanced option; RPT and BBoxField field types only) If the query is used in a scoring context (e.g. as the main query in `q`), this _<<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameter>>_ determines what scores will be produced. Valid values are:
+(Advanced option; not supported by LatLonType (deprecated) or PointType) If the query is used in a scoring context (e.g. as the main query in `q`), this _<<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameter>>_ determines what scores will be produced. Valid values are:
 
 * `none` - A fixed score of 1.0. (the default)
 * `kilometers` - distance in kilometers between the field value and the specified center point
@@ -54,17 +68,17 @@ There are 2 types of Spatial filters, which both support the following parameter
 [WARNING]
 ====
 
-Don't use this for indexed non-point shapes (e.g. polygons). The results will be erroneous. And with RPT, it's only recommended for multi-valued point data, as the implementation doesn't scale very well and for single-valued fields, you should instead use a separate LatLonType field purely for distance sorting.
+Don't use this for indexed non-point shapes (e.g. polygons). The results will be erroneous. And with RPT, it's only recommended for multi-valued point data, as the implementation doesn't scale very well and for single-valued fields, you should instead use a separate non-RPT field purely for distance sorting.
 
 ====
 
-When used with `BBoxField`,additional options are supported:
+When used with `BBoxField`, additional options are supported:
 
 * `overlapRatio` - The relative overlap between the indexed shape & query shape.
 * `area` - haversine based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
 * `area2D` - cartesian coordinates based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
 
-|filter |(Advanced option; RPT and BBoxField field types only) If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
+|filter |(Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
 |===
 
 [[SpatialSearch-geofilt]]
@@ -93,16 +107,18 @@ When a bounding box includes a pole, the bounding box ends up being a "bounding
 [[SpatialSearch-Filteringbyanarbitraryrectangle]]
 === Filtering by an arbitrary rectangle
 
-Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range. Here's an example: `&q=*:*&fq=store:[45,-94 TO 46,-93]`. LatLonType does *not* support rectangles that cross the dateline, but RPT does. If you are using RPT with non-geospatial coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
+Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range. Here's an example: `&q=*:*&fq=store:[45,-94 TO 46,-93]`. LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are using non-geospatial coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
+
+// OLD_CONFLUENCE_ID: SpatialSearch-Optimizing:CacheorNot
 
-// OLD_CONFLUENCE_ID: SpatialSearch-Optimization:SolrPostFiltering
+[[SpatialSearch-Optimizing_CacheorNot]]
+=== Optimizing: Cache or Not
 
-[[SpatialSearch-Optimization_SolrPostFiltering]]
-=== Optimization: Solr Post Filtering
+It's most common to put a spatial query into an "fq" parameter – a filter query. By default, Solr will cache the query in the filter cache. If you know the filter query (be it spatial or not) is fairly unique and not likely to get a cache hit then specify `cache="false"` as a local-param as seen in the following example. The only spatial types which stand to benefit from this technique are LatLonPointSpatialField and LatLonType (deprecated). Enable docValues on the field (if it isn't already enabled). LatLonType (deprecated) additionally requires a `cost="100"` (or more) local-param.
 
-Most likely, the fastest spatial filters will be to simply use the RPT field type. However, sometimes it may be faster to use LatLonType with _Solr post filtering_ in circumstances when both the spatial query isn't worth caching and there aren't many matching documents that match the non-spatial filters (e.g. keyword queries and other filters). To use _Solr post filtering_ with LatLonType, use the `bbox` or `geofilt` query parsers in a filter query but specify `cache=false` and `cost=100` (or greater) as local parameters. Here's a short example:
+`&q=...mykeywords...&fq=...someotherfilters...&fq={!geofilt cache=false}&sfield=store&pt=45.15,-93.85&d=5`
 
-`&q=...mykeywords...&fq=...someotherfilters...&fq={!geofilt cache=false cost=100}&sfield=store&pt=45.15,-93.85&d=5`
+LLPSF does not support Solr's "PostFilter".
 
 // OLD_CONFLUENCE_ID: SpatialSearch-DistanceSortingorBoosting(FunctionQueries)
 
@@ -151,17 +167,19 @@ Using the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> or <<t
 [[SpatialSearch-RPT]]
 == RPT
 
-RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) and an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonType:
+RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) and an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonPointSpatialField:
 
+* Non-geodetic – geo=false: general x & y (__not__ latitude and longitude)
 * Query by polygons and other complex shapes, in addition to circles & rectangles
-* Multi-valued indexed fields
-* Ability to index non-point shapes (e.g. polygons) as well as points
-* Rectangles with user-specified corners that can cross the dateline
-* Multi-value distance sort and score boosting _(warning: non-optimized)_
-* Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
-* Heatmap grid faceting capability
+* Ability to index non-point shapes (e.g. polygons) as well as points – see RptWithGeometrySpatialField
+* Heatmap grid faceting
 
-RPT incorporates the basic features of LatLonType and PointType, such as lat-lon bounding boxes and circles, in addition to supporting geofilt, bbox, geodist, and a range-queries. RPT with Geometry is defined further below.
+RPT _shares_ various features in common with LatLonPointSpatialField. Some are listed here:
+
+* Latitude/Longitude indexed point data; possibly multi-valued
+* Fast filtering with geofilt, bbox filters, and range query syntax (dateline crossing is supported)
+* Sort/boost via geodist
+* Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
 
 [[SpatialSearch-Schemaconfiguration]]
 === Schema configuration
@@ -198,7 +216,7 @@ This is used to specify the units for distance measurements used throughout the
 [[SpatialSearch-JTSandPolygons]]
 === JTS and Polygons
 
-As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon. All other shapes, including even line-strings, are supported without JTS. JTS stands for http://sourceforge.net/projects/jts-topo-suite/[JTS Topology Suite], which does not come with Solr due to its LGPL license. You must download it (a JAR file) and put that in a special location internal to Solr: `SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/`. You can readily download it here: https://repo1.maven.org/maven2/com/vividsolutions/jts-core/ It will not work if placed in other more typical Solr lib directories, unfortunately. When activated, there are additional configuration attributes available; see https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html[org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory] for the Javadocs, and remember to look at the superclass's options in https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/SpatialContextFactory.html[SpatialContextFactory] as well. One option in particular you should most likely enable is `autoIndex` (i.e. use JTS's PreparedGeometry) as it's been shown to be a major performance boost for non-trivial polygons.
+As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon. All other shapes, including even line-strings, are supported without JTS. JTS stands for http://sourceforge.net/projects/jts-topo-suite/[JTS Topology Suite], which does not come with Solr due to its LGPL license. You must download it (a JAR file) and put that in a special location internal to Solr: `SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/`. You can readily download it here: https://repo1.maven.org/maven2/com/vividsolutions/jts-core/. It will not work if placed in other more typical Solr lib directories, unfortunately. When activated, there are additional configuration attributes available; see https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html[org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory] for the Javadocs, and remember to look at the superclass's options in https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/SpatialContextFactory.html[SpatialContextFactory] as well. One option in particular you should most likely enable is `autoIndex` (i.e., use JTS's PreparedGeometry) as it's been shown to be a major performance boost for non-trivial polygons.
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/streaming-expressions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/streaming-expressions.adoc b/solr/solr-ref-guide/src/streaming-expressions.adoc
index ac0cb7a..7ea7d62 100644
--- a/solr/solr-ref-guide/src/streaming-expressions.adoc
+++ b/solr/solr-ref-guide/src/streaming-expressions.adoc
@@ -3,9 +3,9 @@
 :page-permalink: streaming-expressions.html
 :page-children: graph-traversal
 
-Streaming Expressions provide a simple yet powerful stream processing language for SolrCloud. They are a suite of functions that can be combined to perform many different parallel computing tasks. These functions are the basis for the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>.
+Streaming Expressions provide a simple yet powerful stream processing language for SolrCloud. They are a suite of functions that can be combined to perform many different parallel computing tasks. These functions are the basis for the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>.
 
-There are several available functions, including those that implement:
+There is a growing library of functions that can be combined to implement:
 
 * Request/response stream processing
 * Batch stream processing
@@ -14,7 +14,12 @@ There are several available functions, including those that implement:
 * Parallel relational algebra (distributed joins, intersections, unions, complements)
 * Publish/subscribe messaging
 * Distributed graph traversal
-* Machine Learning and parallel iterative model training
+* Machine learning and parallel iterative model training
+* Anomaly detection
+* Recommendation systems
+* Retrieve and rank services
+* Text classification and feature extraction
+* Streaming NLP
 
 Streams from outside systems can be joined with streams originating from Solr and users can add their own stream functions by following Solr's {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/io/stream/package-summary.html[Java streaming API].
 
@@ -101,6 +106,9 @@ Because streaming expressions relies on the `/export` handler, many of the field
 
 Stream sources originate streams.
 
+[[StreamingExpressions-echo]]
+=== echo
+
 [[StreamingExpressions-search]]
 === search
 
@@ -133,6 +141,11 @@ expr=search(collection1,
        sort="a_f asc, a_i asc") 
 ----
 
+// OLD_CONFLUENCE_ID: StreamingExpressions-shuffle(6.6)
+
+[[StreamingExpressions-shuffle_6.6_]]
+=== shuffle (6.6)
+
 [[StreamingExpressions-jdbc]]
 === jdbc
 
@@ -367,12 +380,41 @@ random(baskets,
 
 In the example above the `random` function is searching the baskets collection for all rows where "productID:productX". It will return 100 pseudo-random results. The field list returned is the basketID.
 
+[[StreamingExpressions-significantTerms]]
+=== significantTerms
+
+The `significantTerms` function queries a SolrCloud collection, but instead of returning documents, it returns significant terms found in documents in the result set. The `significantTerms` function scores terms based on how frequently they appear in the result set and how rarely they appear in the entire corpus. The `significantTerms` function emits a tuple for each term which contains the term, the score, the foreground count and the background count. The foreground count is how many documents in the result set the term appears in. The background count is how many documents in the entire corpus the term appears in. The foreground and background counts are global for the collection.
+
+[[StreamingExpressions-Parameters.6]]
+==== Parameters
+
+* `collection`: (Mandatory) The collection that the function is run on.
+* `q`: (Mandatory) The query that describes the foreground document set.
+* `limit`: (Optional, Default 20) The max number of terms to return.
+* `minDocFreq`: (Optional, Defaults to 5 documents) The minimum number of documents the term must appear in on a shard. This is a float value. If greater than 1.0 it's considered the absolute number of documents. If less than 1.0 it's treated as a percentage of documents.
+* `maxDocFreq`: (Optional, Defaults to 30% of documents) The maximum number of documents the term can appear in on a shard. This is a float value. If greater than 1.0 it's considered the absolute number of documents. If less than 1.0 it's treated as a percentage of documents.
+* `minTermLength`: (Optional, Default 4) The minimum length of the term to be considered significant.
+
+[[StreamingExpressions-Syntax.6]]
+==== Syntax
+
+[source,java]
+----
+significantTerms(collection1, 
+                 q="body:Solr", 
+                 minDocFreq="10",
+                 maxDocFreq=".20",
+                 minTermLength="5")
+----
+
+In the example above the `significantTerms` function is querying `collection1` and returning at most 20 significant terms (the default `limit`, since none is specified) that appear in 10 or more documents but not more than 20% of the corpus.
+
 [[StreamingExpressions-shortestPath]]
 === shortestPath
 
 The `shortestPath` function is an implementation of a shortest path graph traversal. The `shortestPath` function performs an iterative breadth-first search through an unweighted graph to find the shortest paths between two nodes in a graph. The `shortestPath` function emits a tuple for each path found. Each tuple emitted will contain a `path` key which points to a `List` of nodeIDs comprising the path.
 
-[[StreamingExpressions-Parameters.6]]
+[[StreamingExpressions-Parameters.7]]
 ==== Parameters
 
 * `collection`: (Mandatory) The collection that the shortestPath search will be run on.
@@ -384,7 +426,7 @@ The `shortestPath` function is an implementation of a shortest path graph traver
 * `fq`: (Optional) Filter query
 * `maxDepth`: (Mandatory) Limits the search to a maximum depth in the graph.
 
-[[StreamingExpressions-Syntax.6]]
+[[StreamingExpressions-Syntax.7]]
 ==== Syntax
 
 [source,java]
@@ -408,14 +450,14 @@ The search starts from the nodeID "john@company.com" in the `from_address` field
 
 The `stats` function gathers simple aggregations for a search result set. The stats function does not support rollups over buckets, so the stats stream always returns a single tuple with the rolled up stats. Under the covers the stats function pushes down the generation of the stats into the search engine using the StatsComponent. The stats function currently supports the following metrics: `count(*)`, `sum()`, `avg()`, `min()`, and `max()`.
 
-[[StreamingExpressions-Parameters.7]]
+[[StreamingExpressions-Parameters.8]]
 ==== Parameters
 
 * `collection`: (Mandatory) Collection the stats will be aggregated from.
 * `q`: (Mandatory) The query to build the aggregations from.
 * `metrics`: (Mandatory) The metrics to include in the result tuple. Current supported metrics are `sum(col)`, `avg(col)`, `min(col)`, `max(col)` and `count(*)`
 
-[[StreamingExpressions-Syntax.7]]
+[[StreamingExpressions-Syntax.8]]
 ==== Syntax
 
 [source,java]
@@ -442,7 +484,7 @@ The `train` function wraps a <<StreamingExpressions-features,features>> function
 
 With each iteration the `train` function emits a tuple with the model. The model contains the feature terms, weights, and the confusion matrix for the model. The optimized model can then be used to classify documents based on their feature terms.
 
-[[StreamingExpressions-Parameters.8]]
+[[StreamingExpressions-Parameters.9]]
 ==== Parameters
 
 * `collection`: (Mandatory) Collection that holds the training set
@@ -453,7 +495,7 @@ With each iteration the `train` function emits a tuple with the model. The model
 * `maxIterations`: (Mandatory) How many training iterations to perform.
 * `positiveLabel`: (defaults to 1) The value in the outcome field that defines a positive outcome.
 
-[[StreamingExpressions-Syntax.8]]
+[[StreamingExpressions-Syntax.9]]
 ==== Syntax
 
 [source,java]
@@ -479,7 +521,7 @@ The topic function should be considered in beta until https://issues.apache.org/
 
 ====
 
-[[StreamingExpressions-Parameters.9]]
+[[StreamingExpressions-Parameters.10]]
 ==== Parameters
 
 * `checkpointCollection`: (Mandatory) The collection where the topic checkpoints are stored.
@@ -489,7 +531,7 @@ The topic function should be considered in beta until https://issues.apache.org/
 * `fl`: (Mandatory) The field list returned by the topic function.
 * `initialCheckpoint`: (Optional) Sets the initial Solr `_version_` number to start reading from the queue. If not set, it defaults to the highest version in the index. Setting to 0 will process all records that match query in the index.
 
-[[StreamingExpressions-Syntax.9]]
+[[StreamingExpressions-Syntax.10]]
 ==== Syntax
 
 [source,java]
@@ -506,6 +548,11 @@ topic(checkpointCollection,
 
 Stream decorators wrap other stream functions or perform operations on the stream.
 
+// OLD_CONFLUENCE_ID: StreamingExpressions-cartesianProduct(6.6)
+
+[[StreamingExpressions-cartesianProduct_6.6_]]
+=== cartesianProduct (6.6)
+
 [[StreamingExpressions-classify]]
 === classify
 
@@ -517,14 +564,14 @@ Each tuple that is classified is assigned two scores:
 
 **score_d**: The score of the document that has not been squashed between 0 and 1. The score may be positive or negative. The higher the score the better the document fits the class. This un-squashed score will be useful in query re-ranking and recommendation use cases. This score is particularly useful when multiple high ranking documents have a probability_d score of 1, which won't provide a meaningful ranking between documents.
 
-[[StreamingExpressions-Parameters.10]]
+[[StreamingExpressions-Parameters.11]]
 ==== Parameters
 
 * `model expression`: (Mandatory) Retrieves the stored logistic regression model.
 * `field`: (Mandatory) The field in the tuples to apply the classifier to. By default the analyzer for this field in the schema will be used to extract the features.
 * `analyzerField`: (Optional) Specifies a different field to find the analyzer from in the schema.
 
-[[StreamingExpressions-Syntax.10]]
+[[StreamingExpressions-Syntax.11]]
 ==== Syntax
 
 [source,java]
@@ -546,7 +593,7 @@ In the example above the `classify expression` is retrieving the model using the
 
 The `commit` function wraps a single stream (A) and given a collection and batch size will send commit messages to the collection when the batch size is fulfilled or the end of stream is reached. A commit stream is used most frequently with an update stream and as such the commit will take into account possible summary tuples coming from the update stream. All tuples coming into the commit stream will be returned out of the commit stream - no tuples will be dropped and no tuples will be added.
 
-[[StreamingExpressions-Parameters.11]]
+[[StreamingExpressions-Parameters.12]]
 ==== Parameters
 
 * `collection`: The collection to send commit messages to (required)
@@ -556,7 +603,7 @@ The `commit` function wraps a single stream (A) and given a collection and batch
 * `softCommit`: The value passed directly to the commit handler (true/false, default: false)
 * `StreamExpression for StreamA` (required)
 
-[[StreamingExpressions-Syntax.11]]
+[[StreamingExpressions-Syntax.12]]
 ==== Syntax
 
 [source,java]
@@ -577,14 +624,14 @@ commit(
 
 The `complement` function wraps two streams (A and B) and emits tuples from A which do not exist in B. The tuples are emitted in the order in which they appear in stream A. Both streams must be sorted by the fields being used to determine equality (using the `on` parameter).
 
-[[StreamingExpressions-Parameters.12]]
+[[StreamingExpressions-Parameters.13]]
 ==== Parameters
 
 * `StreamExpression for StreamA`
 * `StreamExpression for StreamB`
 * `on`: Fields to be used for checking equality of tuples between A and B. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.12]]
+[[StreamingExpressions-Syntax.13]]
 ==== Syntax
 
 [source,java]
@@ -614,7 +661,7 @@ With continuous push streaming the `daemon` function wraps another function and
 
 In order to facilitate the pushing of tuples, the `daemon` function must wrap another stream decorator that pushes the tuples somewhere. One example of this is the `update` function, which wraps a stream and sends the tuples to another SolrCloud collection for indexing.
 
-[[StreamingExpressions-Syntax.13]]
+[[StreamingExpressions-Syntax.14]]
 ==== Syntax
 
 [source,java]
@@ -729,6 +776,9 @@ while(true) {
 daemonStream.close();
 ----
 
+[[StreamingExpressions-eval]]
+=== eval
+
 [[StreamingExpressions-executor]]
 === executor
 
@@ -738,13 +788,13 @@ The `executor` function does not do anything specific with the output of the exp
 
 This model allows for asynchronous execution of jobs where the output is stored in a SolrCloud collection where it can be accessed as the job progresses.
 
-[[StreamingExpressions-Parameters.13]]
+[[StreamingExpressions-Parameters.14]]
 ==== Parameters
 
 * `threads`: (Optional) The number of threads in the executors thread pool for executing expressions.
 * `StreamExpression`: (Mandatory) The stream source which contains the Streaming Expressions to execute.
 
-[[StreamingExpressions-Syntax.14]]
+[[StreamingExpressions-Syntax.15]]
 ==== Syntax
 
 [source,java]
@@ -767,7 +817,7 @@ In the example above a <<StreamingExpressions-daemon,daemon>> wraps an executor*
 
 The `fetch` function iterates a stream and fetches additional fields and adds them to the tuples. The `fetch` function fetches in batches to limit the number of calls back to Solr. Tuples streamed from the `fetch` function will contain the original fields and the additional fields that were fetched. The `fetch` function supports one-to-one fetches. Many-to-one fetches, where the stream source contains duplicate keys, will also work, but one-to-many fetches are currently not supported by this function.
 
-[[StreamingExpressions-Parameters.14]]
+[[StreamingExpressions-Parameters.15]]
 ==== Parameters
 
 * `Collection`: (Mandatory) The collection to fetch the fields from.
@@ -776,7 +826,7 @@ The `fetch` function iterates a stream and fetches additional fields and adds th
 * `on`: Fields to be used for checking equality of tuples between stream source and fetched records. Formatted as `on="fieldNameInTuple=fieldNameInCollection"`.
 * `batchSize`: (Optional) The batch fetch size.
 
-[[StreamingExpressions-Syntax.15]]
+[[StreamingExpressions-Syntax.16]]
 ==== Syntax
 
 [source,java]
@@ -794,17 +844,15 @@ The example above fetches addresses for users by matching the username in the tu
 
 The `having` expression wraps a stream and applies a boolean operation to each tuple. It emits only tuples for which the boolean operation returns **true**.
 
-[[StreamingExpressions-Parameters.15]]
+[[StreamingExpressions-Parameters.16]]
 ==== Parameters
 
 * `StreamExpression`: (Mandatory) The stream source for the having function.
-* `booleanOperation`: (Madatory) The following boolean operations are supported: *eq* (numeric equals), *gt* (numeric greater than), *lt* (numeric less than), *gteq* (numeric greater than or equal to), *lteq* (numeric less than or equal to), **and**, **or**, and **not**. Boolean operations can be nested to form complex boolean logic.
-
-The numeric comparison operations compare the value in a specific field with a numeric value. For example: **eq**(field1, 10), returns true if *field1* is equal to 10.
+* `booleanEvaluator`: (Mandatory) The following boolean operations are supported: *eq* (equals), *gt* (greater than), *lt* (less than), *gteq* (greater than or equal to), *lteq* (less than or equal to), **and**, **or**, *eor* (exclusive or), and **not**. Boolean evaluators can be nested with other evaluators to form complex boolean logic.
 
-The parameter order for numeric comparison operations matters. The first parameter of comparison operations is the field name, the second parameter is the numeric to compare to.
+The comparison evaluators compare the value in a specific field with a value, whether a string, number, or boolean. For example, **eq**(field1, 10) returns true if *field1* is equal to 10.
 
-[[StreamingExpressions-Syntax.16]]
+[[StreamingExpressions-Syntax.17]]
 ==== Syntax
 
 [source,java]
@@ -828,14 +876,14 @@ The `leftOuterJoin` function wraps two streams, Left and Right, and emits tuples
 
 You can wrap the incoming streams with a `select` function to be specific about which field values are included in the emitted tuple.
 
-[[StreamingExpressions-Parameters.16]]
+[[StreamingExpressions-Parameters.17]]
 ==== Parameters
 
 * `StreamExpression for StreamLeft`
 * `StreamExpression for StreamRight`
 * `on`: Fields to be used for checking equality of tuples between Left and Right. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.17]]
+[[StreamingExpressions-Syntax.18]]
 ==== Syntax
 
 [source,java]
@@ -872,14 +920,14 @@ You can wrap the incoming streams with a `select` function to be specific about
 
 The hashJoin function can be used when the tuples of Left and Right cannot be put in the same order. Because the tuples are out of order, this stream functions by reading all values from the Right stream during the open operation and will store all tuples in memory. The result of this is a memory footprint equal to the size of the Right stream.
 
-[[StreamingExpressions-Parameters.17]]
+[[StreamingExpressions-Parameters.18]]
 ==== Parameters
 
 * `StreamExpression for StreamLeft`
 * `hashed=StreamExpression for StreamRight`
 * `on`: Fields to be used for checking equality of tuples between Left and Right. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.18]]
+[[StreamingExpressions-Syntax.19]]
 ==== Syntax
 
 [source,java]
@@ -912,14 +960,14 @@ hashJoin(
 
 Wraps two streams Left and Right and for every tuple in Left which exists in Right will emit a tuple containing the fields of both tuples. This supports one-one, one-many, many-one, and many-many inner join scenarios. The tuples are emitted in the order in which they appear in the Left stream. Both streams must be sorted by the fields being used to determine equality (the 'on' parameter). If both tuples contain a field of the same name then the value from the Right stream will be used in the emitted tuple. You can wrap the incoming streams with a select(...) to be specific about which field values are included in the emitted tuple.
 
-[[StreamingExpressions-Parameters.18]]
+[[StreamingExpressions-Parameters.19]]
 ==== Parameters
 
 * `StreamExpression for StreamLeft`
 * `StreamExpression for StreamRight`
 * `on`: Fields to be used for checking equality of tuples between Left and Right. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.19]]
+[[StreamingExpressions-Syntax.20]]
 ==== Syntax
 
 [source,java]
@@ -952,14 +1000,14 @@ innerJoin(
 
 The `intersect` function wraps two streams, A and B, and emits tuples from A which *DO* exist in B. The tuples are emitted in the order in which they appear in stream A. Both streams must be sorted by the fields being used to determine equality (the `on` parameter). Only tuples from A are emitted.
 
-[[StreamingExpressions-Parameters.19]]
+[[StreamingExpressions-Parameters.20]]
 ==== Parameters
 
 * `StreamExpression for StreamA`
 * `StreamExpression for StreamB`
 * `on`: Fields to be used for checking equality of tuples between A and B. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.20]]
+[[StreamingExpressions-Syntax.21]]
 ==== Syntax
 
 [source,java]
@@ -982,7 +1030,7 @@ intersect(
 
 The `merge` function merges two or more streaming expressions and maintains the ordering of the underlying streams. Because the order is maintained, the sorts of the underlying streams must line up with the on parameter provided to the merge function.
 
-[[StreamingExpressions-Parameters.20]]
+[[StreamingExpressions-Parameters.21]]
 ==== Parameters
 
 * `StreamExpression A`
@@ -990,7 +1038,7 @@ The `merge` function merges two or more streaming expressions and maintains the
 * `Optional StreamExpression C,D,....Z`
 * `on`: Sort criteria for performing the merge. Of the form `fieldName order` where order is `asc` or `desc`. Multiple fields can be provided in the form `fieldA order, fieldB order`.
 
-[[StreamingExpressions-Syntax.21]]
+[[StreamingExpressions-Syntax.22]]
 ==== Syntax
 
 [source,java]
@@ -1043,12 +1091,12 @@ The null expression can be wrapped by the parallel function and sent to worker n
 2.  Are tuples being evenly distributed across the workers, or is the hash partitioning sending more documents to a single worker?
 3.  Are all workers processing data at the same speed, or is one of the workers the source of the bottleneck?
 
-[[StreamingExpressions-Parameters.21]]
+[[StreamingExpressions-Parameters.22]]
 ==== Parameters
 
 * `StreamExpression`: (Mandatory) The expression read by the null function.
 
-[[StreamingExpressions-Syntax.22]]
+[[StreamingExpressions-Syntax.23]]
 ==== Syntax
 
 [source,java]
@@ -1071,14 +1119,14 @@ You can wrap the incoming streams with a `select` function to be specific about
 
 The outerHashJoin stream can be used when the tuples of Left and Right cannot be put in the same order. Because the tuples are out of order, this stream functions by reading all values from the Right stream during the open operation and will store all tuples in memory. The result of this is a memory footprint equal to the size of the Right stream.
 
-[[StreamingExpressions-Parameters.22]]
+[[StreamingExpressions-Parameters.23]]
 ==== Parameters
 
 * `StreamExpression for StreamLeft`
 * `hashed=StreamExpression for StreamRight`
 * `on`: Fields to be used for checking equality of tuples between Left and Right. Can be of the format `on="fieldName"`, `on="fieldNameInLeft=fieldNameInRight"`, or `on="fieldName, otherFieldName=rightOtherFieldName"`.
 
-[[StreamingExpressions-Syntax.23]]
+[[StreamingExpressions-Syntax.24]]
 ==== Syntax
 
 [source,java]
@@ -1123,7 +1171,7 @@ The worker nodes can be from the same collection as the data, or they can be a d
 
 ====
 
-[[StreamingExpressions-Parameters.23]]
+[[StreamingExpressions-Parameters.24]]
 ==== Parameters
 
 * `collection`: Name of the worker collection to send the StreamExpression to.
@@ -1132,7 +1180,7 @@ The worker nodes can be from the same collection as the data, or they can be a d
 * `zkHost`: (Optional) The ZooKeeper connect string where the worker collection resides.
 * `sort`: The sort criteria for ordering tuples returned by the worker nodes.
 
-[[StreamingExpressions-Syntax.24]]
+[[StreamingExpressions-Syntax.25]]
 ==== Syntax
 
 [source,java]
@@ -1151,7 +1199,7 @@ The expression above shows a `parallel` function wrapping a `reduce` function. T
 [[StreamingExpressions-priority]]
 === priority
 
-The `priority` function is a simple priority scheduler for the <<StreamingExpressions-executor,executor>> function. It doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from it's underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
+The `priority` function is a simple priority scheduler for the <<StreamingExpressions-executor,executor>> function. The executor function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from its underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
 
 The `priority` function wraps two <<StreamingExpressions-topic,topics>> that are both emitting tuples that contain streaming expressions to execute. The first topic is considered the higher priority task queue.
 
@@ -1159,13 +1207,13 @@ Each time the `priority` function is called, it checks the higher priority task
 
 The `priority` function will only emit a batch of tasks from one of the queues each time it is called. This ensures that no lower priority tasks are executed until the higher priority queue has no tasks to run.
 
-[[StreamingExpressions-Parameters.24]]
+[[StreamingExpressions-Parameters.25]]
 ==== Parameters
 
 * `topic expression`: (Mandatory) The high priority task queue.
 * `topic expression`: (Mandatory) The lower priority task queue.
 
-[[StreamingExpressions-Syntax.25]]
+[[StreamingExpressions-Syntax.26]]
 ==== Syntax
 
 [source,java]
@@ -1192,14 +1240,14 @@ The reduce function relies on the sort order of the underlying stream. According
 
 ====
 
-[[StreamingExpressions-Parameters.25]]
+[[StreamingExpressions-Parameters.26]]
 ==== Parameters
 
 * `StreamExpression`: (Mandatory)
 * `by`: (Mandatory) A comma separated list of fields to group by.
 * `Reduce Operation`: (Mandatory)
 
-[[StreamingExpressions-Syntax.26]]
+[[StreamingExpressions-Syntax.27]]
 ==== Syntax
 
 [source,java]
@@ -1217,14 +1265,14 @@ The `rollup` function wraps another stream function and rolls up aggregates over
 
 The rollup function also needs to process entire result sets in order to perform its aggregations. When the underlying stream is the `search` function, the `/export` handler can be used to provide full sorted result sets to the rollup function. This sorted approach allows the rollup function to perform aggregations over very high cardinality fields. The disadvantage of this approach is that the tuples must be sorted and streamed across the network to a worker node to be aggregated. For faster aggregation over low to moderate cardinality fields, the `facet` function can be used.
 
-[[StreamingExpressions-Parameters.26]]
+[[StreamingExpressions-Parameters.27]]
 ==== Parameters
 
 * `StreamExpression` (Mandatory)
 * `over`: (Mandatory) A list of fields to group by.
 * `metrics`: (Mandatory) The list of metrics to compute. Currently supported metrics are `sum(col)`, `avg(col)`, `min(col)`, `max(col)`, `count(*)`.
 
-[[StreamingExpressions-Syntax.27]]
+[[StreamingExpressions-Syntax.28]]
 ==== Syntax
 
 [source,java]
@@ -1254,9 +1302,9 @@ See section in <<graph-traversal.adoc#GraphTraversal-UsingthescoreNodesFunctiont
 [[StreamingExpressions-select]]
 === select
 
-The `select` function wraps a streaming expression and outputs tuples containing a subset or modified set of fields from the incoming tuples. The list of fields included in the output tuple can contain aliases to effectively rename fields. One can provide a list of operations to perform on any fields, such as `replace` to replace the value of a field with some other value or the value of another field in the tuple.
+The `select` function wraps a streaming expression and outputs tuples containing a subset or modified set of fields from the incoming tuples. The list of fields included in the output tuple can contain aliases to effectively rename fields. The select stream supports both operations and evaluators. One can provide a list of operations and evaluators to perform on any fields, such as `replace`, `add`, or `if`.
 
-[[StreamingExpressions-Parameters.27]]
+[[StreamingExpressions-Parameters.28]]
 ==== Parameters
 
 * `StreamExpression`
@@ -1265,19 +1313,20 @@ The `select` function wraps a streaming expression and outputs tuples containing
 * `replace(fieldName, value, withValue=replacementValue)`: if `incomingTuple[fieldName] == value` then `outgoingTuple[fieldName]` will be set to `replacementValue`. `value` can be the string "null" to replace a null value with some other value.
 * `replace(fieldName, value, withField=otherFieldName)`: if `incomingTuple[fieldName] == value` then `outgoingTuple[fieldName]` will be set to the value of `incomingTuple[otherFieldName]`. `value` can be the string "null" to replace a null value with some other value.
 
-[[StreamingExpressions-Syntax.28]]
+[[StreamingExpressions-Syntax.29]]
 ==== Syntax
 
 [source,java]
 ----
-// output tuples with fields teamName, wins, and losses where a null value for wins or losses is translated to the value of 0
+// output tuples with fields teamName, wins, losses, and winPercentage where a null value for wins or losses is translated to the value of 0
 select(
   search(collection1, fl="id,teamName_s,wins,losses", q="*:*", sort="id asc"),
   teamName_s as teamName,
   wins,
   losses,
   replace(wins,null,withValue=0),
-  replace(losses,null,withValue=0)
+  replace(losses,null,withValue=0),
+  if(eq(0,add(wins,losses)), 0, div(wins,add(wins,losses))) as winPercentage
 )
 ----
 
@@ -1286,13 +1335,13 @@ select(
 
 The `sort` function wraps a streaming expression and re-orders the tuples. The sort function emits all incoming tuples in the new sort order. The sort function reads all tuples from the incoming stream, re-orders them using an algorithm with `O(nlog(n))` performance characteristics, where n is the total number of tuples in the incoming stream, and then outputs the tuples in the new sort order. Because all tuples are read into memory, the memory consumption of this function grows linearly with the number of tuples in the incoming stream.
 
-[[StreamingExpressions-Parameters.28]]
+[[StreamingExpressions-Parameters.29]]
 ==== Parameters
 
 * `StreamExpression`
 * `by`: Sort criteria for re-ordering the tuples
 
-[[StreamingExpressions-Syntax.29]]
+[[StreamingExpressions-Syntax.30]]
 ==== Syntax
 
 The expression below finds dog owners and orders the results by owner and pet name. Notice that it uses an efficient innerJoin by first ordering by the person/owner id and then re-orders the final output by the owner and pet names.
@@ -1314,14 +1363,14 @@ sort(
 
 The `top` function wraps a streaming expression and re-orders the tuples. The top function emits only the top N tuples in the new sort order. The top function re-orders the underlying stream so the sort criteria *does not* have to match up with the underlying stream.
 
-[[StreamingExpressions-Parameters.29]]
+[[StreamingExpressions-Parameters.30]]
 ==== Parameters
 
 * `n`: Number of top tuples to return.
 * `StreamExpression`
 * `sort`: Sort criteria for selecting the top N tuples.
 
-[[StreamingExpressions-Syntax.30]]
+[[StreamingExpressions-Syntax.31]]
 ==== Syntax
 
 The expression below finds the top 3 results of the underlying search. Notice that it reverses the sort order. The top function re-orders the results of the underlying stream.
@@ -1344,13 +1393,13 @@ The `unique` function wraps a streaming expression and emits a unique stream of
 
 The unique function implements a non-co-located unique algorithm. This means that records with the same unique `over` field do not need to be co-located on the same shard. When executed in parallel, the `partitionKeys` parameter must be the same as the unique `over` field so that records with the same keys will be shuffled to the same worker.
 
-[[StreamingExpressions-Parameters.30]]
+[[StreamingExpressions-Parameters.31]]
 ==== Parameters
 
 * `StreamExpression`
 * `over`: The unique criteria.
 
-[[StreamingExpressions-Syntax.31]]
+[[StreamingExpressions-Syntax.32]]
 ==== Syntax
 
 [source,java]
@@ -1369,14 +1418,14 @@ unique(
 
 The `update` function wraps another function and sends the tuples to a SolrCloud collection for indexing.
 
-[[StreamingExpressions-Parameters.31]]
+[[StreamingExpressions-Parameters.32]]
 ==== Parameters
 
 * `destinationCollection`: (Mandatory) The collection where the tuples will be indexed.
 * `batchSize`: (Mandatory) The indexing batch size.
 * `StreamExpression`: (Mandatory)
 
-[[StreamingExpressions-Syntax.32]]
+[[StreamingExpressions-Syntax.33]]
 ==== Syntax
 
 [source,java]
@@ -1391,3 +1440,504 @@ The `update` function wraps another functions and sends the tuples to a SolrClou
 ----
 
 The example above sends the tuples returned by the `search` function to the `destinationCollection` to be indexed.
+
+[[StreamingExpressions-StreamEvaluators]]
+== Stream Evaluators
+
+Stream Evaluators can be used to evaluate (calculate) new values based on other values in a tuple. That newly evaluated value can be put into the tuple (as part of a `select(...)` clause), used to filter streams (as part of a `having(...)` clause), and so on. Evaluators can contain field names, raw values, or other evaluators, giving you the ability to create complex evaluation logic, including conditional if/then choices.
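+
+As a sketch of both uses (assuming a hypothetical `collection1` with numeric fields `fieldA` and `fieldB`):
+
+[source,java]
+----
+// add a computed field, total, to each tuple
+select(
+  search(collection1, q="*:*", fl="id,fieldA,fieldB", sort="id asc"),
+  id,
+  add(fieldA,fieldB) as total
+)
+
+// emit only the tuples where fieldA is greater than fieldB
+having(
+  search(collection1, q="*:*", fl="id,fieldA,fieldB", sort="id asc"),
+  gt(fieldA,fieldB)
+)
+----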
+
+In cases where you want to use raw values as part of an evaluation, you will need to consider the order in which evaluator parameters are parsed.
+
+1.  If the parameter can be parsed into a valid number, then it is considered a number. For example, `add(3,4.5)`
+2.  If the parameter can be parsed into a valid boolean, then it is considered a boolean. For example, `eq(true,false)`
+3.  If the parameter can be parsed into a valid evaluator, then it is considered an evaluator. For example, `eq(add(10,4),add(7,7))`
+4.  The parameter is considered a field name, even if it is quoted. For example, `eq(fieldA,"fieldB")`
+
+If you wish to use a raw string as part of an evaluation, use the `raw(string)` evaluator. This will always return the raw value, no matter what is entered.
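+
+For example, a short sketch of the difference (the field names are hypothetical):
+
+[source,java]
+----
+eq(fieldA,fieldB)      // compares the value of fieldA to the value of fieldB
+eq(fieldA,raw(fieldB)) // compares the value of fieldA to the string "fieldB"
+----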
+
+[[StreamingExpressions-abs]]
+=== abs
+
+The `abs` function will return the absolute value of the provided single parameter. The `abs` function will fail to execute if the value is non-numeric. If a null value is found then null will be returned as the result.
+
+[[StreamingExpressions-Parameters.33]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.34]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `abs` evaluator. Only one parameter is accepted. Returns a numeric value.
+
+[source,java]
+----
+abs(1) // 1, not really a good use case for it
+abs(-1) // 1, not really a good use case for it
+abs(add(fieldA,fieldB)) // absolute value of fieldA + fieldB
+abs(fieldA) // absolute value of fieldA
+----
+
+[[StreamingExpressions-add]]
+=== add
+
+The `add` function will take two or more numeric values and add them together. The `add` function will fail to execute if any of the values are non-numeric. If a null value is found then null will be returned as the result.
+
+[[StreamingExpressions-Parameters.34]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+* `Field Name | Raw Number | Number Evaluator`
+* `......`
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.35]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `add` evaluator. The number of parameters is not limited and their order does not matter, except that at least two parameters are required. Returns a numeric value.
+
+[source,java]
+----
+add(1,2,3,4) // 1 + 2 + 3 + 4 == 10
+add(1,fieldA) // 1 + value of fieldA
+add(fieldA,1.4) // value of fieldA + 1.4
+add(fieldA,fieldB,fieldC) // value of fieldA + value of fieldB + value of fieldC
+add(fieldA,div(fieldA,fieldB)) // value of fieldA + (value of fieldA / value of fieldB)
+add(fieldA,if(gt(fieldA,fieldB),fieldA,fieldB)) // if fieldA > fieldB then fieldA + fieldA, else fieldA + fieldB
+----
+
+[[StreamingExpressions-div]]
+=== div
+
+The `div` function will take two numeric values and divide the first by the second. The function will fail to execute if either value is non-numeric or null, or if the second value is 0. Returns a numeric value.
+
+[[StreamingExpressions-Parameters.35]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.36]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `div` evaluator. The first value will be divided by the second and as such the second cannot be 0.
+
+[source,java]
+----
+div(1,2) // 1 / 2
+div(1,fieldA) // 1 / fieldA
+div(fieldA,1.4) // fieldA / 1.4
+div(fieldA,add(fieldA,fieldB)) // fieldA / (fieldA + fieldB)
+----
+
+[[StreamingExpressions-log]]
+=== log
+
+The `log` function will return the natural log of the provided single parameter. The `log` function will fail to execute if the value is non-numeric. If a null value is found, then null will be returned as the result.
+
+[[StreamingExpressions-Parameters.36]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.37]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `log` evaluator. Only one parameter is accepted. Returns a numeric value.
+
+[source,java]
+----
+log(100) 
+log(add(fieldA,fieldB)) 
+log(fieldA)
+----
+
+[[StreamingExpressions-mult]]
+=== mult
+
+The `mult` function will take two or more numeric values and multiply them together. The `mult` function will fail to execute if any of the values are non-numeric. If a null value is found then null will be returned as the result.
+
+[[StreamingExpressions-Parameters.37]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+* `Field Name | Raw Number | Number Evaluator`
+* `......`
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.38]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `mult` evaluator. The number of parameters is not limited and their order does not matter, except that at least two parameters are required. Returns a numeric value.
+
+[source,java]
+----
+mult(1,2,3,4) // 1 * 2 * 3 * 4
+mult(1,fieldA) // 1 * value of fieldA
+mult(fieldA,1.4) // value of fieldA * 1.4
+mult(fieldA,fieldB,fieldC) // value of fieldA * value of fieldB * value of fieldC
+mult(fieldA,div(fieldA,fieldB)) // value of fieldA * (value of fieldA / value of fieldB)
+mult(fieldA,if(gt(fieldA,fieldB),fieldA,fieldB)) // if fieldA > fieldB then fieldA * fieldA, else fieldA * fieldB
+----
+
+[[StreamingExpressions-sub]]
+=== sub
+
+The `sub` function will take two or more numeric values and subtract them, from left to right. The `sub` function will fail to execute if any of the values are non-numeric. If a null value is found then null will be returned as the result.
+
+[[StreamingExpressions-Parameters.38]]
+==== Parameters
+
+* `Field Name | Raw Number | Number Evaluator`
+* `Field Name | Raw Number | Number Evaluator`
+* `......`
+* `Field Name | Raw Number | Number Evaluator`
+
+[[StreamingExpressions-Syntax.39]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `sub` evaluator. The number of parameters is not limited, though at least two are required; unlike `add` and `mult`, their order matters, as values are subtracted from left to right. Returns a numeric value.
+
+[source,java]
+----
+sub(1,2,3,4) // 1 - 2 - 3 - 4
+sub(1,fieldA) // 1 - value of fieldA
+sub(fieldA,1.4) // value of fieldA - 1.4
+sub(fieldA,fieldB,fieldC) // value of fieldA - value of fieldB - value of fieldC
+sub(fieldA,div(fieldA,fieldB)) // value of fieldA - (value of fieldA / value of fieldB)
+if(gt(fieldA,fieldB),sub(fieldA,fieldB),sub(fieldB,fieldA)) // if fieldA > fieldB then fieldA - fieldB, else fieldB - fieldA
+----
+
+[[StreamingExpressions-pow]]
+=== pow
+
+[[StreamingExpressions-mod]]
+=== mod
+
+[[StreamingExpressions-ceil]]
+=== ceil
+
+[[StreamingExpressions-floor]]
+=== floor
+
+[[StreamingExpressions-sin]]
+=== sin
+
+[[StreamingExpressions-asin]]
+=== asin
+
+[[StreamingExpressions-sinh]]
+=== sinh
+
+[[StreamingExpressions-cos]]
+=== cos
+
+[[StreamingExpressions-acos]]
+=== acos
+
+[[StreamingExpressions-atan]]
+=== atan
+
+[[StreamingExpressions-round]]
+=== round
+
+[[StreamingExpressions-sqrt]]
+=== sqrt
+
+[[StreamingExpressions-cbrt]]
+=== cbrt
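+
+The math evaluators above are stubs that are not yet documented in this guide. Assuming they follow the same numeric pattern as `abs` and `add` (each parameter being a field name, raw number, or number evaluator), usage would look like the following sketch; the semantics shown in the comments are assumptions, not documented behavior:
+
+[source,java]
+----
+pow(fieldA,2) // assumed: fieldA raised to the power of 2
+mod(fieldA,3) // assumed: remainder of fieldA divided by 3
+ceil(fieldA)  // assumed: smallest integer greater than or equal to fieldA
+floor(fieldA) // assumed: largest integer less than or equal to fieldA
+sqrt(fieldA)  // assumed: square root of fieldA
+----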
+
+[[StreamingExpressions-and]]
+=== and
+
+The `and` function will return the logical AND of at least two boolean parameters. The function will fail to execute if any parameters are non-boolean or null. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.39]]
+==== Parameters
+
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `......`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+
+[[StreamingExpressions-Syntax.40]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `and` evaluator. At least two parameters are required, but there is no limit to how many you can use.
+
+[source,java]
+----
+and(true,fieldA) // true && fieldA
+and(fieldA,fieldB) // fieldA && fieldB
+and(or(fieldA,fieldB),fieldC) // (fieldA || fieldB) && fieldC
+and(fieldA,fieldB,fieldC,or(fieldD,fieldE),fieldF)
+----
+
+[[StreamingExpressions-eq]]
+=== eq
+
+The `eq` function will return whether all the parameters are equal, as per Java's standard `equals(...)` function. The function accepts parameters of any type, but will fail to execute if all the parameters are not of the same type; that is, all Boolean, all String, or all Numeric. If any parameters are null and at least one parameter is not null, then false will be returned. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.40]]
+==== Parameters
+
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+* `......`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.41]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `eq` evaluator.
+
+[source,java]
+----
+eq(1,2) // 1 == 2
+eq(1,fieldA) // 1 == fieldA
+eq(fieldA,val(foo)) // fieldA == "foo"
+eq(add(fieldA,fieldB),6) // fieldA + fieldB == 6
+----
+
+[[StreamingExpressions-eor]]
+=== eor
+
+The `eor` function will return the logical exclusive or of at least two boolean parameters. The function will fail to execute if any parameters are non-boolean or null. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.41]]
+==== Parameters
+
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `......`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+
+[[StreamingExpressions-Syntax.42]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `eor` evaluator. At least two parameters are required, but there is no limit to how many you can use.
+
+[source,java]
+----
+eor(true,fieldA) // true iff fieldA is false
+eor(fieldA,fieldB) // true iff either fieldA or fieldB is true but not both
+eor(eq(fieldA,fieldB),eq(fieldC,fieldD)) // true iff either fieldA == fieldB or fieldC == fieldD but not both
+----
+
+[[StreamingExpressions-gteq]]
+=== gteq
+
+The `gteq` function will return whether the first parameter is greater than or equal to the second parameter. The function accepts numeric and string parameters, but will fail to execute if the parameters are not of the same type; that is, both String or both Numeric. If any parameters are null then an error will be raised. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.42]]
+==== Parameters
+
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.43]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `gteq` evaluator.
+
+[source,java]
+----
+gteq(1,2) // 1 >= 2
+gteq(1,fieldA) // 1 >= fieldA
+gteq(fieldA,val(foo)) // fieldA >= "foo"
+gteq(add(fieldA,fieldB),6) // fieldA + fieldB >= 6
+----
+
+[[StreamingExpressions-gt]]
+=== gt
+
+The `gt` function will return whether the first parameter is greater than the second parameter. The function accepts numeric or string parameters, but will fail to execute if the parameters are not of the same type; that is, both String or both Numeric. If any parameters are null then an error will be raised. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.43]]
+==== Parameters
+
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.44]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `gt` evaluator.
+
+[source,java]
+----
+gt(1,2) // 1 > 2
+gt(1,fieldA) // 1 > fieldA
+gt(fieldA,val(foo)) // fieldA > "foo"
+gt(add(fieldA,fieldB),6) // fieldA + fieldB > 6
+----
+
+[[StreamingExpressions-if]]
+=== if
+
+The `if` function works like a standard conditional if/then statement. If the first parameter is true, then the second parameter will be returned, else the third parameter will be returned. The function accepts a boolean as the first parameter and anything as the second and third parameters. An error will occur if the first parameter is not a boolean or is null.
+
+[[StreamingExpressions-Parameters.44]]
+==== Parameters
+
+* `Field Name | Raw Value | Boolean Evaluator`
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.45]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `if` evaluator.
+
+[source,java]
+----
+if(fieldA,fieldB,fieldC) // if fieldA is true then fieldB else fieldC
+if(gt(fieldA,5), fieldA, 5) // if fieldA > 5 then fieldA else 5
+if(eq(fieldB,null), null, div(fieldA,fieldB)) // if fieldB is null then null else fieldA / fieldB
+----
+
+[[StreamingExpressions-lteq]]
+=== lteq
+
+The `lteq` function will return whether the first parameter is less than or equal to the second parameter. The function accepts numeric and string parameters, but will fail to execute if the parameters are not of the same type; that is, both String or both Numeric. If any parameters are null then an error will be raised. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.45]]
+==== Parameters
+
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.46]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `lteq` evaluator.
+
+[source,java]
+----
+lteq(1,2) // 1 <= 2
+lteq(1,fieldA) // 1 <= fieldA
+lteq(fieldA,val(foo)) // fieldA <= "foo"
+lteq(add(fieldA,fieldB),6) // fieldA + fieldB <= 6
+----
+
+[[StreamingExpressions-lt]]
+=== lt
+
+The `lt` function will return whether the first parameter is less than the second parameter. The function accepts numeric or string parameters, but will fail to execute if the parameters are not of the same type; that is, both String or both Numeric. If any parameters are null then an error will be raised. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.46]]
+==== Parameters
+
+* `Field Name | Raw Value | Evaluator`
+* `Field Name | Raw Value | Evaluator`
+
+[[StreamingExpressions-Syntax.47]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `lt` evaluator.
+
+[source,java]
+----
+lt(1,2) // 1 < 2
+lt(1,fieldA) // 1 < fieldA
+lt(fieldA,val(foo)) // fieldA < "foo"
+lt(add(fieldA,fieldB),6) // fieldA + fieldB < 6
+----
+
+[[StreamingExpressions-not]]
+=== not
+
+The `not` function will return the logical NOT of a single boolean parameter. The function will fail to execute if the parameter is non-boolean or null. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.47]]
+==== Parameters
+
+* `Field Name | Raw Boolean | Boolean Evaluator`
+
+[[StreamingExpressions-Syntax.48]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `not` evaluator. Only one parameter is allowed.
+
+[source,java]
+----
+not(true) // false
+not(fieldA) // true if fieldA is false else false
+not(eq(fieldA,fieldB)) // true if fieldA != fieldB
+----
+
+[[StreamingExpressions-or]]
+=== or
+
+The `or` function will return the logical OR of at least two boolean parameters. The function will fail to execute if any parameters are non-boolean or null. Returns a boolean value.
+
+[[StreamingExpressions-Parameters.48]]
+==== Parameters
+
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+* `......`
+* `Field Name | Raw Boolean | Boolean Evaluator`
+
+[[StreamingExpressions-Syntax.49]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `or` evaluator. At least two parameters are required, but there is no limit to how many you can use.
+
+[source,java]
+----
+or(true,fieldA) // true || fieldA
+or(fieldA,fieldB) // fieldA || fieldB
+or(and(fieldA,fieldB),fieldC) // (fieldA && fieldB) || fieldC
+or(fieldA,fieldB,fieldC,and(fieldD,fieldE),fieldF)
+----
+
+[[StreamingExpressions-analyze]]
+=== analyze
+
+[[StreamingExpressions-second]]
+=== second
+
+[[StreamingExpressions-minute]]
+=== minute
+
+[[StreamingExpressions-hour]]
+=== hour
+
+[[StreamingExpressions-day]]
+=== day
+
+[[StreamingExpressions-month]]
+=== month
+
+[[StreamingExpressions-year]]
+=== year
+
+[[StreamingExpressions-convert]]
+=== convert
+
+[[StreamingExpressions-raw]]
+=== raw
+
+The `raw` function will return its parameter exactly as provided, without evaluating it. This is useful for cases where you want to use a string as part of another evaluator.
+
+[[StreamingExpressions-Parameters.49]]
+==== Parameters
+
+* `Raw Value`
+
+[[StreamingExpressions-Syntax.50]]
+==== Syntax
+
+The expressions below show the various ways in which you can use the `raw` evaluator. Whatever is inside will be returned as-is. Internal evaluators are considered strings and are not evaluated.
+
+[source,java]
+----
+raw(foo) // "foo"
+raw(count(*)) // "count(*)"
+raw(45) // 45
+raw(true) // "true" (note: this returns the string "true" and not the boolean true)
+eq(raw(fieldA), fieldA) // true if the value of fieldA equals the string "fieldA" 
+----
+
+[[StreamingExpressions-UUID]]
+=== UUID