Posted to commits@lucene.apache.org by ct...@apache.org on 2020/10/09 17:05:23 UTC

[lucene-solr] 01/02: Ref Guide: fix typos, formatting issues, etc. backport from master; also includes backport of field type docs from PR #1865

This is an automated email from the ASF dual-hosted git repository.

ctargett pushed a commit to branch branch_8x
in repository https://gitbox.apache.org/repos/asf/lucene-solr.git

commit 1632f881d8f373c279a0a5938b820fa7acced3d2
Author: Cassandra Targett <ca...@lucidworks.com>
AuthorDate: Fri Oct 9 10:42:24 2020 -0500

    Ref Guide: fix typos, formatting issues, etc. backport from master; also includes backport of field type docs from PR #1865
---
 .../src/basic-authentication-plugin.adoc           |  2 +-
 .../src/documents-fields-and-schema-design.adoc    |  2 +
 solr/solr-ref-guide/src/exporting-result-sets.adoc |  2 +-
 .../src/field-types-included-with-solr.adoc        | 62 ++++++++++++++--------
 .../src/hadoop-authentication-plugin.adoc          |  2 +-
 solr/solr-ref-guide/src/index-replication.adoc     |  2 +-
 .../src/indexconfig-in-solrconfig.adoc             |  7 ++-
 .../src/indexing-nested-documents.adoc             | 10 ++--
 solr/solr-ref-guide/src/math-expressions.adoc      |  3 +-
 solr/solr-ref-guide/src/other-parsers.adoc         | 16 +++---
 .../src/searching-nested-documents.adoc            | 42 +++++++--------
 .../src/shards-and-indexing-data-in-solrcloud.adoc |  6 +--
 solr/solr-ref-guide/src/statistics.adoc            |  4 +-
 .../src/stream-source-reference.adoc               |  2 +-
 solr/solr-ref-guide/src/streaming-expressions.adoc |  2 +-
 .../src/the-query-elevation-component.adoc         |  9 ++--
 .../src/updating-parts-of-documents.adoc           |  8 +--
 solr/solr-ref-guide/src/vectorization.adoc         |  8 +--
 18 files changed, 107 insertions(+), 82 deletions(-)

diff --git a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
index 6638269..64bef86 100644
--- a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
@@ -68,7 +68,7 @@ If you are using SolrCloud, you must upload `security.json` to ZooKeeper. You ca
 
 [source,bash]
 ----
-bin/solr zk cp file:path_to_local_security.json zk:/security.json -z localhost:9983
+$ bin/solr zk cp file:path_to_local_security.json zk:/security.json -z localhost:9983
 ----
 
 NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<setting-up-an-external-zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from the above command.
diff --git a/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
index e3aa3ea..ee6fd2c 100644
--- a/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
+++ b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
@@ -40,3 +40,5 @@ This section includes the following topics:
 <<docvalues.adoc#docvalues,DocValues>>: Describes how to create a docValues index for faster lookups.
 
 <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>: Automatically add previously unknown schema fields using value-based field type guessing.
+
+<<luke-request-handler.adoc#luke-request-handler,Luke Request Handler>>: The request handler that provides access to information about fields in the index. This request handler powers the <<schema-browser-screen.adoc#schema-browser-screen,Schema Browser>> page of Solr's Admin UI.
diff --git a/solr/solr-ref-guide/src/exporting-result-sets.adoc b/solr/solr-ref-guide/src/exporting-result-sets.adoc
index 674ab96..cd32431 100644
--- a/solr/solr-ref-guide/src/exporting-result-sets.adoc
+++ b/solr/solr-ref-guide/src/exporting-result-sets.adoc
@@ -39,7 +39,7 @@ You can use `/export` to make requests to export the result set of a query.
 
 All queries must include `sort` and `fl` parameters, or the query will return an error. Filter queries are also supported.
 
-Optional parameter `batchSize` determines the size of the internal buffers for partial results. The default value is 30000 but users may want to specify smaller values to limit the memory use (at the cost of degraded performance) or higher values to improve export performance (the relationship is not linear and larger values don't bring proportionally larger performance increases).
+An optional parameter `batchSize` determines the size of the internal buffers for partial results. The default value is `30000`, but users may want to specify smaller values to limit memory use (at the cost of degraded performance) or higher values to improve export performance (the relationship is not linear and larger values don't bring proportionally larger performance increases).
 
 The supported response writers are `json` and `javabin`. For backward compatibility reasons `wt=xsort` is also supported as input, but `wt=xsort` behaves the same as `wt=json`. The default output format is `json`.
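As an illustrative sketch only (the collection name `techproducts` and the field names are assumptions for this example, not part of the change above), an `/export` request with an explicit `batchSize` might look like:

```shell
# Export matching documents sorted by id, using a smaller internal
# buffer (batchSize) to trade export speed for lower memory use.
# Collection and field names here are hypothetical.
curl "http://localhost:8983/solr/techproducts/export?q=*:*&sort=id+asc&fl=id,name&batchSize=10000&wt=json"
```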
 
diff --git a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
index 1e98d86..230f45f 100644
--- a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
@@ -16,21 +16,23 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The following table lists the field types that are available in Solr. The `org.apache.solr.schema` package includes all the classes listed in this table.
+The following table lists the recommended field types that are available in Solr. A second table further down lists the deprecated types, for those migrating from older versions of Solr. The {solr-javadocs}/solr-core/org/apache/solr/schema/package-summary.html[`org.apache.solr.schema`] package includes all the classes listed in these tables.
 
 // TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
 
+== Recommended Field Types
+
 [cols="25,75",options="header"]
 |===
 |Class |Description
+|BBoxField |Indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
+
 |BinaryField |Binary data.
 
 |BoolField |Contains either true or false. Values of `1`, `t`, or `T` in the first character are interpreted as `true`. Any other values in the first character are interpreted as `false`.
 
 |CollationField |Supports Unicode collation for sorting and range queries. The ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>> for more information.
 
-|CurrencyField |*Deprecated*. Use CurrencyFieldType instead.
-
 |CurrencyFieldType |Supports currencies and exchange rates. See the section <<working-with-currencies-and-exchange-rates.adoc#working-with-currencies-and-exchange-rates,Working with Currencies and Exchange Rates>> for more information.
 
 |DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
@@ -41,8 +43,6 @@ The following table lists the field types that are available in Solr. The `org.a
 
 |ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>> for more information.
 
-|EnumField |*Deprecated*. Use EnumFieldType instead.
-
 |EnumFieldType |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>> for more information.
 
 |FloatPointField |Floating point field (32-bit IEEE floating point). This class encodes float values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
@@ -53,10 +53,10 @@ The following table lists the field types that are available in Solr. The `org.a
 
 |LatLonPointSpatialField |A latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
 
-|LatLonType |*Deprecated*. Consider using the LatLonPointSpatialField instead. A single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
-
 |LongPointField |Long field (64-bit signed integer). This class encodes long values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
 
+|NestPathField |Specialized field type that stores enhanced information when <<indexing-nested-documents.adoc#schema-configuration,working with nested documents>>.
+
 |PointType |A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields). See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
 
 |PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing.
@@ -65,32 +65,50 @@ Configuration and usage of PreAnalyzedField is documented in the section  <<work
 
 |RandomSortField |Does not contain a value. Queries that sort on this field type will return results in random order. Use a dynamic field to use this feature.
 
-|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
+|RankField |Can be used to store scoring factors to improve document ranking. To be used in combination with <<other-parsers.adoc#ranking-query-parser,RankQParserPlugin>>.
 
-|StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
+|RptWithGeometrySpatialField |A derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry. See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information and usage with geospatial results transformer.
 
-|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute.
+|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute. See <<common-query-parameters.adoc#sort-parameter,sort parameter discussion>> for details.
 
-|TextField |Text, usually multiple words or tokens.
-
-|TrieDateField |*Deprecated*. Use DatePointField instead.
-
-|TrieDoubleField |*Deprecated*. Use DoublePointField instead.
-
-|TrieFloatField |*Deprecated*. Use FloatPointField instead.
-
-|TrieIntField |*Deprecated*. Use IntPointField instead.
+|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
 
-|TrieLongField |*Deprecated*. Use LongPointField instead.
+|StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
 
-|TrieField |*Deprecated*. This field takes a `type` parameter to define the specific class of Trie* field to use; Use an appropriate Point Field type instead.
+|TextField |Text, usually multiple words or tokens. In normal usage, only fields of type TextField or SortableTextField will specify an <<analyzers.adoc#analyzers,analyzer>>.
 
 |UUIDField |Universally Unique Identifier (UUID). Pass in a value of `NEW` and Solr will create a new UUID.
 
-*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using UUIDUpdateProcessorFactory to generate UUID values when documents are added is recommended instead.
+*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using <<update-request-processors.adoc#update-request-processors,UUIDUpdateProcessorFactory>> to generate UUID values when documents are added is recommended instead.
 |===
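A minimal sketch of the `UUIDUpdateProcessorFactory` approach recommended in the note above, assuming a `solrconfig.xml` update chain (the chain name and target field here are illustrative, not prescribed by the change):

```xml
<!-- Hypothetical chain: generates a UUID for the "id" field when absent -->
<updateRequestProcessorChain name="uuid">
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```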
 
+== Deprecated Field Types
+
 NOTE: All Trie* numeric and date field types have been deprecated in favor of *Point field types.
       Point field types are better at range queries (speed, memory, disk), however simple field:value queries underperform
       relative to Trie. Either accept this, or continue to use Trie fields.
       This shortcoming may be addressed in a future release.
+
+[cols="25,75",options="header"]
+|===
+|Class |Description
+
+|CurrencyField |Use CurrencyFieldType instead.
+
+|EnumField |Use EnumFieldType instead.
+
+|LatLonType |Consider using the LatLonPointSpatialField instead. A single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#spatial-search,Spatial Search>> for more information.
+
+|TrieDateField |Use DatePointField instead.
+
+|TrieDoubleField |Use DoublePointField instead.
+
+|TrieFloatField |Use FloatPointField instead.
+
+|TrieIntField |Use IntPointField instead.
+
+|TrieLongField |Use LongPointField instead.
+
+|TrieField |This field takes a `type` parameter to define the specific class of Trie* field to use; use an appropriate Point Field type instead.
+
+|===
diff --git a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
index 1644aef..d51935c 100644
--- a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
@@ -28,7 +28,7 @@ For some of the authentication schemes (e.g., Kerberos), Solr provides a native
 
 There are two plugin classes:
 
-* `HadoopAuthPlugin`: This can be used with standalone Solr as well as Solrcloud with <<authentication-and-authorization-plugins.adoc#securing-inter-node-requests,PKI authentication>> for internode communication.
+* `HadoopAuthPlugin`: This can be used with standalone Solr as well as SolrCloud with <<authentication-and-authorization-plugins.adoc#securing-inter-node-requests,PKI authentication>> for internode communication.
 * `ConfigurableInternodeAuthHadoopPlugin`: This is an extension of HadoopAuthPlugin that allows you to configure the authentication scheme for internode communication.
 
 [TIP]
diff --git a/solr/solr-ref-guide/src/index-replication.adoc b/solr/solr-ref-guide/src/index-replication.adoc
index e635ca1..7795481 100644
--- a/solr/solr-ref-guide/src/index-replication.adoc
+++ b/solr/solr-ref-guide/src/index-replication.adoc
@@ -351,7 +351,7 @@ http://_leader_host:port_/solr/_core_name_/replication?command=restorestatus
 +
 This command is used to check the status of a restore operation. This command takes no parameters.
 +
-The status value can be "In Progress" , "success" or "failed". If it failed then an "exception" will also be sent in the response.
+The status value can be "In Progress", "success", or "failed". If it failed then an "exception" will also be sent in the response.
 
 `deletebackup`::
 Delete any backup created using the `backup` command.
diff --git a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
index 8615a4b8..b75f1e3 100644
--- a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
@@ -111,8 +111,11 @@ Conversely, keeping more segments can accelerate indexing, because merges happen
 
 When a document is deleted or updated, the document is marked as deleted but is not removed from the index until the segment is merged. There are two parameters that can be adjusted when using the default TieredMergePolicy that influence the number of deleted documents in an index.
 
-* `forceMergeDeletesPctAllowed (default 10.0)`. When the external expungeDeletes command is issued, any segment that has more than this percent deleted documents will be merged into a new segment and the data associated with the deleted documents will be purged. A value of 0.0 will make expungeDeletes behave essentially identically to `optimize`.
-* `deletesPctAllowed (default 33.0)`. During normal segment merging, a "best effort" is made to insure that the total percentage of deleted documents in the index is below this threshold.  Valid settings are between 20% and 50%. 33% was chosen as the default because as this setting approaches 20%, considerable load is added to the system.
+`forceMergeDeletesPctAllowed`::
+(default 10.0) When the external expungeDeletes command is issued, any segment that has more than this percent deleted documents will be merged into a new segment and the data associated with the deleted documents will be purged. A value of 0.0 will make expungeDeletes behave essentially identically to `optimize`.
+
+`deletesPctAllowed`::
+(default 33.0) During normal segment merging, a "best effort" is made to ensure that the total percentage of deleted documents in the index is below this threshold. Valid settings are between 20% and 50%. 33% was chosen as the default because as this setting approaches 20%, considerable load is added to the system.
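As a hedged sketch of where these two thresholds live, they would be set on the merge policy factory in `solrconfig.xml`; the values below are illustrative, not recommendations:

```xml
<!-- Illustrative values only: tighten the deleted-docs threshold to 25%
     and purge segments with more than 5% deletes on expungeDeletes -->
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <double name="deletesPctAllowed">25.0</double>
  <double name="forceMergeDeletesPctAllowed">5.0</double>
</mergePolicyFactory>
```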
 
 === Customizing Merge Policies
 
diff --git a/solr/solr-ref-guide/src/indexing-nested-documents.adoc b/solr/solr-ref-guide/src/indexing-nested-documents.adoc
index c8665dd..a02e1aa 100644
--- a/solr/solr-ref-guide/src/indexing-nested-documents.adoc
+++ b/solr/solr-ref-guide/src/indexing-nested-documents.adoc
@@ -19,12 +19,16 @@
 // under the License.
 
 Solr supports indexing nested documents, described here, and ways to <<searching-nested-documents.adoc#searching-nested-documents,search and retrieve>> them very efficiently.
+
+By way of example: nested documents in Solr can be used to bind a blog post (parent document) with comments (child documents) -- or as a way to model major product lines as parent documents, with multiple types of child documents representing individual SKUs (with unique sizes / colors) and supporting documentation (either directly nested under the products, or under individual SKUs).
+
 The "top most" parent with all children is referred to as a "root level" document or "block document" and it explains some of the nomenclature of related features.
+
 At query time, the <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> can search these relationships,
  and the `<<transforming-result-documents.adoc#child-childdoctransformerfactory,[child]>>` Document Transformer can attach child (or other "descendent") documents to the result documents.
 In terms of performance, indexing the relationships between documents usually yields much faster queries than an equivalent "<<other-parsers#join-query-parser,query time join>>",
  since the relationships are already stored in the index and do not need to be computed.
+
+However, nested documents are less flexible than query time joins as they impose rules that some applications may not be able to accept.
 Nested documents may be indexed via either the XML or JSON data syntax, and are also supported by <<using-solrj.adoc#using-solrj,SolrJ>> with javabin.
 
@@ -32,7 +36,7 @@ Nested documents may be indexed via either the XML or JSON data syntax, and is a
 [CAUTION]
 ====
 .Re-Indexing Considerations
-With the exception of in-place updates, <<#maintaining-integrity-with-updates-and-deletes,blocks of nested documents must be updated/deleted together>>.  Modifying or replacing individual child documents requires re-indexing of the entire block (either explicitly/externally, or under the covers inside of Solr).  For some applications this may result in a lot of extra indexing overhead and may not be worth the performance gains at query time.
+With the exception of in-place updates, <<#maintaining-integrity-with-updates-and-deletes,blocks of nested documents must be updated/deleted together>>.  Modifying or replacing individual child documents requires reindexing of the entire block (either explicitly/externally, or under the covers inside of Solr).  For some applications this may result in a lot of extra indexing overhead and may not be worth the performance gains at query time.
 ====
 
 [#example-indexing-syntax]
@@ -248,13 +252,13 @@ There are several additional schema considerations that should be considered for
 
 [TIP]
 ====
-When using Solr Cloud it is a _VERY_ good idea to use <<shards-and-indexing-data-in-solrcloud#document-routing,prefix based compositeIds>> with a common prefix for all documents in the block.  This makes it much easier to apply <<updating-parts-of-documents#updating-child-documents,atomic updates to individual child documents>>
+When using SolrCloud it is a _VERY_ good idea to use <<shards-and-indexing-data-in-solrcloud#document-routing,prefix based compositeIds>> with a common prefix for all documents in the block.  This makes it much easier to apply <<updating-parts-of-documents#updating-child-documents,atomic updates to individual child documents>>.
 ====
 
 
 == Maintaining Integrity with Updates and Deletes
 
-Blocks of nested documents can be modified simply by adding/replacing the root document with more or fewer child/descendent documents as an application desires.  This can either be done explicitly/externaly by an indexing client completely re-indexing the root level document, or internally by Solr when a client uses <<updating-parts-of-documents#updating-child-documents,atomic updates>> to modify child documents.  This aspect isn't different than updating any normal document except that  [...]
+Blocks of nested documents can be modified simply by adding/replacing the root document with more or fewer child/descendent documents as an application desires.  This can either be done explicitly/externally by an indexing client completely reindexing the root level document, or internally by Solr when a client uses <<updating-parts-of-documents#updating-child-documents,atomic updates>> to modify child documents.  This aspect isn't different than updating any normal document except that S [...]
 
 Clients should however be very careful to *never* add a root document that has the same `id` of a child document -- or vice-versa.  Solr does not prevent clients from attempting this, but *_it will violate integrity assumptions that Solr expects._*
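To sketch the compositeId tip above (the collection, field names, and the `P11!` prefix are assumptions mirroring the surrounding examples, not part of this change), every document in a block would share one route prefix:

```shell
# Hypothetical: parent and all descendants share the "P11!" route prefix,
# so the whole block lands on (and can be atomically updated on) one shard.
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/gettingstarted/update?commit=true' -d '[
  { "id": "P11!P11",
    "name_s": "Stapler",
    "skus": [
      { "id": "P11!S21", "color_s": "RED"   },
      { "id": "P11!S31", "color_s": "BLACK" }
    ]
  }
]'
```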
 
diff --git a/solr/solr-ref-guide/src/math-expressions.adoc b/solr/solr-ref-guide/src/math-expressions.adoc
index 595a7b1..2bcb18d 100644
--- a/solr/solr-ref-guide/src/math-expressions.adoc
+++ b/solr/solr-ref-guide/src/math-expressions.adoc
@@ -28,7 +28,7 @@ mathematical coverage starting with basic scalar math and
 ending with machine learning. Along the way the guide covers variables
 and data structures and techniques for combining Solr's
 powerful streams with mathematical functions to make every
-record in your Solr Cloud cluster computable.
+record in your SolrCloud cluster computable.
 
 *<<scalar-math.adoc#scalar-math,Scalar Math>>*: The functions that apply to scalar numbers.
 
@@ -61,4 +61,3 @@ record in your Solr Cloud cluster computable.
 *<<machine-learning.adoc#machine-learning,Machine Learning>>*: Functions used in machine learning.
 
 *<<computational-geometry.adoc#computational-geometry,Computational Geometry>>*: Convex Hulls and Enclosing Disks.
-
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index 5395938..e50dcf9 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -184,9 +184,9 @@ A common mistake is to try and use a `which` parameter that is more restrictive
 q={!parent which="title:join"}comments:support
 ----
 
-This type of query will frequenly not work the way you might expect.  Since the `which` param only identifies _some_ of the "parent" documents, the resulting query can match "parent" documents it should not, because it will mistakenly identify all documents which do _not_ match the `which="title:join"` Block Mask as children of the next "parent" document in the index (that does match this Mask).
+This type of query will frequently not work the way you might expect.  Since the `which` param only identifies _some_ of the "parent" documents, the resulting query can match "parent" documents it should not, because it will mistakenly identify all documents which do _not_ match the `which="title:join"` Block Mask as children of the next "parent" document in the index (that does match this Mask).
 
-A similar problematic situation can arise when mixing parent/child documents with "simple" documents that have no children _and do not match the query used to identify 'parent' documents_.  For example, if we add the following document to our existing parent/child example documents...
+A similar problematic situation can arise when mixing parent/child documents with "simple" documents that have no children _and do not match the query used to identify 'parent' documents_.  For example, if we add the following document to our existing parent/child example documents:
 
 [source,xml]
 ----
@@ -237,21 +237,21 @@ Comma separated list of tags for excluding queries from parameters above. See ex
 {!bool filter=foo should=bar}
 ----
 
-Parameters might also be multivalue references. The former example above is equivlent to 
+Parameters might also be multivalue references. The first example above is equivalent to:
 
 [source,text]
 ----
 q={!bool must=$ref}&ref=foo&ref=bar
 ----
 
-Referred queries might be excuded via tags. Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>.
+Referred queries might be excluded via tags. Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>.
 
 [source,text]
 ----
 q={!bool must=$ref excludeTags=t2}&ref={!tag=t1}foo&ref={!tag=t2}bar
 ----
 
-Since the later query is excluded via `t2`, the resulting query is equivalent to 
+Since the latter query is excluded via `t2`, the resulting query is equivalent to:
 
 [source,text]
 ----
@@ -633,7 +633,7 @@ The upper bound for the hash range for the query
 
 === Hash Range Cache Config
 
-The hash range query parser uses a special cache to improve the speedup of the queries.  The following should be added to the solrconfig.xml for the various fields that you want to perform the hash range query on.  Note the name of the cache should be the field name prefixed by "hash_".
+The hash range query parser uses a special cache to improve query performance. The following should be added to the `solrconfig.xml` for the various fields that you want to perform the hash range query on. Note the name of the cache should be the field name prefixed by "hash_".
 
 [source,xml]
 ----
@@ -851,7 +851,7 @@ This parameter improves the performance of the cross-collection join, but it dep
 If this parameter is not specified, the cross collection join query will try to determine the correct value automatically.
 
 `ttl`::
-The length of time that a cross colleciton join query in the cache will be considered valid, in seconds.
+The length of time that a cross collection join query in the cache will be considered valid, in seconds.
 Defaults to `3600` (one hour).
 The cross collection join query will not be aware of changes to the remote collection, so if the remote collection is updated, cached cross collection queries may give inaccurate results.
 After the `ttl` period has expired, the cross collection join query will re-execute the join against the remote collection.
@@ -894,7 +894,7 @@ Details about using the `LTRQParserPlugin` can be found in the <<learning-to-ran
 
 == Max Score Query Parser
 
-The `MaxScoreQParser` extends the `LuceneQParser` but returns the Max score from the clauses. It does this by wrapping all `SHOULD` clauses in a `DisjunctionMaxQuery` with tie=1.0. Any `MUST` or `PROHIBITED` clauses are passed through as-is. Non-boolean queries, e.g., NumericRange falls-through to the `LuceneQParser` parser behavior.
+The `MaxScoreQParser` extends the `LuceneQParser` but returns the Max score from the clauses. It does this by wrapping all `SHOULD` clauses in a `DisjunctionMaxQuery` with `tie=1.0`. Any `MUST` or `PROHIBITED` clauses are passed through as-is. Non-boolean queries, e.g., NumericRange falls-through to the `LuceneQParser` parser behavior.
 
 Example:
 
diff --git a/solr/solr-ref-guide/src/searching-nested-documents.adoc b/solr/solr-ref-guide/src/searching-nested-documents.adoc
index 8d13ff4..127ba4e 100644
--- a/solr/solr-ref-guide/src/searching-nested-documents.adoc
+++ b/solr/solr-ref-guide/src/searching-nested-documents.adoc
@@ -22,7 +22,6 @@ These features require `\_root_` and `\_nest_path_` to be declared in the schema
 Please refer to the <<indexing-nested-documents.adoc#indexing-nested-documents, Indexing Nested Documents>>
 section for more details about schema and index configuration.
 
-
 [NOTE]
 This section does not show case faceting on nested documents. For nested document faceting, please refer to the
 <<blockjoin-faceting#blockjoin-faceting, Block Join Faceting>> section.
@@ -41,7 +40,7 @@ For a detailed explanation of this transformer, and specifics on it's syntax & l
 
 A simple query matching all documents with a description that includes "staplers":
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=description_t:staplers'
 {
@@ -56,7 +55,7 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=descr
 
 The same query with the addition of the `[child]` transformer is shown below.  Note that the `numFound` has not changed; we are still matching the same set of documents, but when returning those documents the nested children are also returned as pseudo-fields.
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=description_t:staplers&fl=*,[child]'
 {
@@ -79,7 +78,7 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=descr
                 "pages_i":1,
                 "content_t":"...",
                 "_version_":1672933224035123200}]},
-          
+
           {
             "id":"P11!S31",
             "color_s":"BLACK",
@@ -92,7 +91,7 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=descr
             "pages_i":1,
             "content_t":"How to use your stapler ...",
             "_version_":1672933224035123200},
-          
+
           {
             "id":"P11!D61",
             "name_s":"Warranty Details",
@@ -102,14 +101,13 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=descr
   }}
 ----
 
-
 === Child Query Parser
 
 The `{!child}` query parser can be used to search for the _descendent_ documents of parent documents matching a wrapped query. For a detailed explanation of this parser, see the section <<other-parsers.adoc#block-join-children-query-parser, Block Join Children Query Parser>>.
 
 Let's consider again the `description_t:staplers` query used above -- if we wrap that query in a `{!child}` query parser then instead of "matching" & returning the product-level documents, we instead match all of the _descendant_ child documents of the original query:
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -d 'q={!child of="*:* -_nest_path_:*"}description_t:staplers'
 {
@@ -145,11 +143,12 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -
   }}
 ----
 
-In this example we've used `\*:* -\_nest_path_:*` as our <<other-parsers#block-mask,`of` parameter>> to indicate we want to consider all documents which don't have a nest path -- ie: all "root" level document -- as the set of possible parents.
+In this example we've used `\*:* -\_nest_path_:*` as our <<other-parsers.adoc#block-mask,`of` parameter>> to indicate we want to consider all documents which don't have a nest path -- i.e., all "root" level documents -- as the set of possible parents.
 
-By changing the `of` param to match ancestors at specific `\_nest_path_` levels, we can narrow down the list of children we return.  In the query below, we search for all descendents of `skus` (using an `of` param that identifies all documents that do _not_ have a `\_nest_path_` with the prefix `/skus/*`) with a `price_i` less then `50`:
+By changing the `of` param to match ancestors at specific `\_nest_path_` levels, we can narrow down the list of children we return.
+In the query below, we search for all descendants of `skus` (using an `of` parameter that identifies all documents that do _not_ have a `\_nest_path_` with the prefix `/skus/*`) with a `price_i` less than `50`:
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' --data-urlencode 'q={!child of="*:* -_nest_path_:\\/skus\\/*"}(+price_i:[* TO 50] +_nest_path_:\/skus)'
 {
@@ -169,28 +168,26 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -
 ====
 Note that in the above example, the `/` characters in the `\_nest_path_` were "double escaped" in the `of` parameter:
 
-* One level of `\` escaping is neccessary to prevent the `/` from being interpreted as a {lucene-javadocs}/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Regexp_Searches[Regex Query]
-* An additional level of "escaping the escape character" is neccessary because the `of` local parameter is a quoted string; so we need a second `\` to ensure the first `\` is preserved and passed as is to the query parser.
+* One level of `\` escaping is necessary to prevent the `/` from being interpreted as a {lucene-javadocs}/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Regexp_Searches[Regex Query]
+* An additional level of "escaping the escape character" is necessary because the `of` local parameter is a quoted string; so we need a second `\` to ensure the first `\` is preserved and passed as is to the query parser.
 
 (You can see that only a single level of `\` escaping is needed in the body of the query string -- to prevent the Regex syntax -- because it's not a quoted string local param)
 
-You may find it more convinient to use <<local-parameters-in-queries#parameter-dereferencing,parameter references>> in conjunction with <<other-parsers#other-parsers,other parsers>> that do not treat `/` as a special character to express the same query in a more verbose form:
+You may find it more convenient to use <<local-parameters-in-queries#parameter-dereferencing,parameter references>> in conjunction with <<other-parsers#other-parsers,other parsers>> that do not treat `/` as a special character to express the same query in a more verbose form:
 
-[source,bash]
+[source,curl]
 ----
 curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' --data-urlencode 'q={!child of=$block_mask}(+price_i:[* TO 50] +{!field f="_nest_path_" v="/skus"})' --data-urlencode 'block_mask=(*:* -{!prefix f="_nest_path_" v="/skus/"})'
 ----
-
 ====
 
-
 === Parent Query Parser
 
-The inverse of the `{!child}` query parser is the `{!parent}` query parser, which let's you search for the _ancestor_ documents of some child documents matching a wrapped query.  For a detailed explanation of this parser, see the section <<other-parsers.adoc#block-join-parent-query-parser,Block Join Parent Query Parser>>.
+The inverse of the `{!child}` query parser is the `{!parent}` query parser, which lets you search for the _ancestor_ documents of some child documents matching a wrapped query. For a detailed explanation of this parser, see the section <<other-parsers.adoc#block-join-parent-query-parser,Block Join Parent Query Parser>>.
 
 Let's first consider this example of searching for all "manual" type documents that have exactly `1` page:
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=pages_i:1'
 {
@@ -218,7 +215,7 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select?omitHeader=true&q=pages
 
 We can wrap that query in a `{!parent}` query to return the details of all products that are ancestors of these manuals:
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' --data-urlencode 'q={!parent which="*:* -_nest_path_:*"}(+_nest_path_:\/skus\/manuals +pages_i:1)'
 {
@@ -236,11 +233,11 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -
   }}
 ----
 
-In this example we've used `\*:* -\_nest_path_:*` as our <<other-parsers#block-mask,`which` parameter>> to indicate we want to consider all documents which don't have a nest path -- ie: all "root" level document -- as the set of possible parents.
+In this example we've used `\*:* -\_nest_path_:*` as our <<other-parsers#block-mask,`which` parameter>> to indicate we want to consider all documents which don't have a nest path -- i.e., all "root" level documents -- as the set of possible parents.
 
 By changing the `which` param to match ancestors at specific `\_nest_path_` levels, we can change the type of ancestors we return.  In the query below, we search for `skus` (using a `which` param that identifies all documents that do _not_ have a `\_nest_path_` with the prefix `/skus/*`) that are the ancestors of `manuals` with exactly `1` page:
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' --data-urlencode 'q={!parent which="*:* -_nest_path_:\\/skus\\/*"}(+_nest_path_:\/skus\/manuals +pages_i:1)'
 {
@@ -263,7 +260,6 @@ $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -
 Note that in the above example, the `/` characters in the `\_nest_path_` were "double escaped" in the `which` parameter, for the <<#double-escaping-nest-path-slashes,same reasons discussed above>> regarding the `{!child}` parser's `of` parameter.
 ====
 
-
 === Combining Block Join Query Parsers with Child Doc Transformer
 
 The combination of these two parsers with the `[child]` transformer enables seamless creation of very powerful queries.
@@ -276,7 +272,7 @@ Here for example is a query where:
 *** "lifetime guarantee" in their content
 * each returned (sku) document also includes any descendant (manuals) documents it has
 
-[source,bash]
+[source,curl]
 ----
 $ curl 'http://localhost:8983/solr/gettingstarted/select' -d 'omitHeader=true' -d 'fq=color_s:RED' --data-urlencode 'q={!child of="*:* -_nest_path_:*" filters=$parent_fq}' --data-urlencode 'parent_fq={!parent which="*:* -_nest_path_:*"}(+_nest_path_:"/manuals" +content_t:"lifetime guarantee")' -d 'fl=*,[child]'
 {
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
index ca0c3ef..c76429a 100644
--- a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -120,9 +120,9 @@ More details on how to use shard splitting is in the section on the Collection A
 
 == Ignoring Commits from Client Applications in SolrCloud
 
-In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
+In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and `autoSoftCommit` to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
 
-NOTE: Using auto soft commit or commitWithin requires the client app to embrace the realities of "eventual consistency". Solr will make documents searchable at _roughly_ the same time across replicas of a collection but there are no hard guarantees. Consequently, in rare cases, it's possible for a document to show up in one search only for it not to appear in a subsequent search occurring immediately after the first search when the second search is routed to a different replica. Also, do [...]
+NOTE: Using `autoSoftCommit` or `commitWithin` requires the client app to embrace the realities of "eventual consistency". Solr will make documents searchable at _roughly_ the same time across replicas of a collection but there are no hard guarantees. Consequently, in rare cases, it's possible for a document to show up in one search only for it not to appear in a subsequent search occurring immediately after the first search when the second search is routed to a different replica. Also,  [...]
 
 To enforce a policy where client applications should not send explicit commits, you should update all client applications that index data into SolrCloud. However, that is not always feasible, so Solr provides the `IgnoreCommitOptimizeUpdateProcessorFactory`, which allows you to ignore explicit commits and/or optimize requests from client applications without having to refactor your client application code.
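
A minimal sketch of the corresponding `solrconfig.xml` settings is shown below (the `maxTime` values are illustrative, not recommendations):

[source,xml]
----
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commits flush updates to stable storage but do not open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commits make recent updates visible to searchers -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
----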
 
@@ -140,7 +140,7 @@ To activate this request processor you'll need to add the following to your `sol
 </updateRequestProcessorChain>
 ----
 
-As shown in the example above, the processor will return 200 to the client but will ignore the commit / optimize request. Notice that you need to wire-in the implicit processors needed by SolrCloud as well, since this custom chain is taking the place of the default chain.
+As shown in the example above, the processor will return 200 to the client but will ignore the commit or optimize request. Notice that you need to wire-in the implicit processors needed by SolrCloud as well, since this custom chain is taking the place of the default chain.
 
 In the following example, the processor will raise an exception with a 403 code with a customized error message:
 
diff --git a/solr/solr-ref-guide/src/statistics.adoc b/solr/solr-ref-guide/src/statistics.adoc
index 48b81ed..426576b 100644
--- a/solr/solr-ref-guide/src/statistics.adoc
+++ b/solr/solr-ref-guide/src/statistics.adoc
@@ -369,7 +369,7 @@ In the example below covariance is calculated for two numeric
 arrays.
 
 The example below uses arrays created by the `array` function. It's important to note that
-vectorized data from Solr Cloud collections can be used with any function that
+vectorized data from SolrCloud collections can be used with any function that
 operates on arrays.
 
 [source,text]
@@ -774,4 +774,4 @@ When this expression is sent to the `/stream` handler it responds with:
     ]
   }
 }
-----
\ No newline at end of file
+----
diff --git a/solr/solr-ref-guide/src/stream-source-reference.adoc b/solr/solr-ref-guide/src/stream-source-reference.adoc
index 1203023..d31cc3c 100644
--- a/solr/solr-ref-guide/src/stream-source-reference.adoc
+++ b/solr/solr-ref-guide/src/stream-source-reference.adoc
@@ -554,7 +554,7 @@ With each iteration the `train` function emits a tuple with the model. The model
 
 * `collection`: (Mandatory) Collection that holds the training set
 * `q`: (Mandatory) The query that defines the training set. The IDF for the features will be generated on the
-* `name`: (Mandatory) The name of model. This can be used to retrieve the model if they stored in a Solr Cloud collection.
+* `name`: (Mandatory) The name of the model. This can be used to retrieve the model if it is stored in a SolrCloud collection.
 * `field`: (Mandatory) The text field to extract the features from.
 * `outcome`: (Mandatory) The field that defines the class, positive or negative
 * `maxIterations`: (Mandatory) How many training iterations to perform.
diff --git a/solr/solr-ref-guide/src/streaming-expressions.adoc b/solr/solr-ref-guide/src/streaming-expressions.adoc
index fce1004..1e29ccd 100644
--- a/solr/solr-ref-guide/src/streaming-expressions.adoc
+++ b/solr/solr-ref-guide/src/streaming-expressions.adoc
@@ -17,7 +17,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Streaming Expressions provide a simple yet powerful stream processing language for Solr Cloud.
+Streaming Expressions provide a simple yet powerful stream processing language for SolrCloud.
 
 Streaming expressions are a suite of functions that can be combined to perform many different parallel computing tasks. These functions are the basis for the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>.
 
diff --git a/solr/solr-ref-guide/src/the-query-elevation-component.adoc b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
index a4ef4f5..dc69a81 100644
--- a/solr/solr-ref-guide/src/the-query-elevation-component.adoc
+++ b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
@@ -61,8 +61,11 @@ Optionally, in the Query Elevation Component configuration you can also specify
 The Query Elevation Search Component takes the following parameters:
 
 `queryFieldType`::
-Specifies which fieldType should be used to analyze the incoming text. For example, it may be appropriate to use a fieldType with a LowerCaseFilter. Other example, if you need to unescape backslash-escaped queries, then you can define the fieldType to preprocess with a PatternReplaceCharFilter. Here is the corresponding example of fieldType (traditionally in `schema.xml`):
-
+Specifies which field type should be used to analyze the incoming text. For example, it may be appropriate to use a field type with a `LowerCaseFilter`.
++
+Another example: if you need to unescape backslash-escaped queries, you can define the field type to preprocess with a `PatternReplaceCharFilter`.
+Here is the corresponding example of a field type (traditionally in `managed-schema` or `schema.xml`):
++
 [source,xml]
 ----
 <fieldType name="unescapelowercase" class="solr.TextField">
@@ -73,7 +76,7 @@ Specifies which fieldType should be used to analyze the incoming text. For examp
   </analyzer>
 </fieldType>
 ----
-
++
 // NOTE: {IsAlphabetic} and {Digit} below are escaped with '\' so Asciidoctor does not treat them as attributes during conversion to HTML.
 For example, to unescape only non-alphanumeric, the pattern could be `\\([^\p\{IsAlphabetic}\p\{Digit}])`.
 
diff --git a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
index ccd09e4..a804f6c 100644
--- a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
+++ b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
@@ -147,7 +147,7 @@ curl -X POST 'http://localhost:8983/solr/gettingstarted/update?commit=true' -H '
 } ]'
 ----
 
-==== Replacing all child documents 
+==== Replacing All Child Documents
 
 As with normal (multiValued) fields, the `set` keyword can be used to replace all child documents in a pseudo-field:
 
@@ -164,11 +164,11 @@ curl -X POST 'http://localhost:8983/solr/gettingstarted/update?commit=true' -H '
                           "name_s": "How to get Red ink stains out of fabric",
                           "content_t": "... vinegar ...",
                         } ] }
-                     
+
 } ]'
 ----
 
-==== Adding a child document
+==== Adding a Child Document
 
 As with normal (multiValued) fields, the `add` keyword can be used to add additional child documents to a pseudo-field:
 
@@ -185,7 +185,7 @@ curl -X POST 'http://localhost:8983/solr/gettingstarted/update?commit=true' -H '
 ----
 
 
-==== Removing a child document
+==== Removing a Child Document
 
 As with normal (multiValued) fields, the `remove` keyword can be used to remove a child document (by `id`) from its pseudo-field:
 
diff --git a/solr/solr-ref-guide/src/vectorization.adoc b/solr/solr-ref-guide/src/vectorization.adoc
index 5c08a58..26a6f60 100644
--- a/solr/solr-ref-guide/src/vectorization.adoc
+++ b/solr/solr-ref-guide/src/vectorization.adoc
@@ -26,7 +26,7 @@ vectorize text fields.
 == Streams
 
 Streaming Expressions has a wide range of stream sources that can be used to
-retrieve data from Solr Cloud collections. Math expressions can be used
+retrieve data from SolrCloud collections. Math expressions can be used
 to vectorize and analyze the result sets.
 
 Below are some of the key stream sources:
@@ -62,13 +62,13 @@ on by math expressions in the same manner as result sets originating from Solr.
 
 * *`topic`*: Messaging is an important foundational technology for large-scale computing. The `topic`
 function provides publish/subscribe messaging capabilities by treating
-Solr Cloud as a distributed message queue. Topics are extremely powerful
+SolrCloud as a distributed message queue. Topics are extremely powerful
 because they allow subscription by query. Topics can be used to support a broad set of
 use cases including bulk text mining operations and AI alerting.
 
 * *`nodes`*: Graph queries are frequently used by recommendation engines and are an important
 machine learning tool. The `nodes` function provides fast, distributed, breadth
-first graph traversal over documents in a Solr Cloud collection. The node sets collected
+first graph traversal over documents in a SolrCloud collection. The node sets collected
 by the `nodes` function can be operated on by statistical and machine learning expressions to
 gain more insight into the graph.
 
@@ -251,7 +251,7 @@ When this expression is sent to the `/stream` handler it responds with:
 == Facet Co-occurrence Matrices
 
 The `facet` function can be used to quickly perform multi-dimension aggregations of categorical data from
-records stored in a Solr Cloud collection. These multi-dimension aggregations can represent co-occurrence
+records stored in a SolrCloud collection. These multi-dimension aggregations can represent co-occurrence
 counts for the values in the dimensions. The `pivot` function can be used to move two dimensional
 aggregations into a co-occurrence matrix. The co-occurrence matrix can then be clustered or analyzed for
 correlations to learn about the hidden connections within the data.
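
For instance, a two-dimension `facet` aggregation can be pivoted into a matrix with a sketch like the following (the collection and field names are hypothetical):

[source,text]
----
let(a=facet(reviews, q="*:*", buckets="product_s, term_s", bucketSizeLimit=100, count(*)),
    b=pivot(a, "product_s", "term_s", "count(*)"))
----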