Posted to commits@solr.apache.org by ct...@apache.org on 2021/07/22 20:54:20 UTC

[solr] branch main updated: SOLR-14444: miscellaneous ref guide cleanups (#234)

This is an automated email from the ASF dual-hosted git repository.

ctargett pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/main by this push:
     new 34ce242  SOLR-14444: miscellaneous ref guide cleanups (#234)
34ce242 is described below

commit 34ce242b14cf2cf725428186604020c82ce452e6
Author: Cassandra Targett <ct...@apache.org>
AuthorDate: Thu Jul 22 15:54:15 2021 -0500

    SOLR-14444: miscellaneous ref guide cleanups (#234)
---
 solr/solr-ref-guide/src/about-this-guide.adoc      |  3 ++-
 solr/solr-ref-guide/src/analyzers.adoc             |  2 +-
 solr/solr-ref-guide/src/backup-restore.adoc        |  3 ++-
 .../src/block-join-query-parser.adoc               |  6 +++--
 solr/solr-ref-guide/src/charfilterfactories.adoc   |  2 +-
 .../src/collapse-and-expand-results.adoc           |  3 ++-
 solr/solr-ref-guide/src/collection-management.adoc |  2 +-
 .../src/common-query-parameters.adoc               |  4 ++-
 solr/solr-ref-guide/src/configuration-files.adoc   |  2 +-
 solr/solr-ref-guide/src/configuring-solr-xml.adoc  |  3 ++-
 solr/solr-ref-guide/src/copy-fields.adoc           |  3 ++-
 .../src/currencies-exchange-rates.adoc             |  7 ++---
 solr/solr-ref-guide/src/date-formatting-math.adoc  |  3 ++-
 solr/solr-ref-guide/src/de-duplication.adoc        |  2 +-
 solr/solr-ref-guide/src/docker-networking.adoc     |  3 ++-
 solr/solr-ref-guide/src/document-transformers.adoc |  3 ++-
 .../src/documents-fields-schema-design.adoc        |  3 ++-
 solr/solr-ref-guide/src/docvalues.adoc             | 17 +++++++-----
 solr/solr-ref-guide/src/dynamic-fields.adoc        |  2 +-
 solr/solr-ref-guide/src/edismax-query-parser.adoc  |  2 +-
 solr/solr-ref-guide/src/enum-fields.adoc           |  2 +-
 .../src/external-files-processes.adoc              |  2 +-
 solr/solr-ref-guide/src/faceting.adoc              |  3 ++-
 .../src/field-type-definitions-and-properties.adoc |  8 +++---
 solr/solr-ref-guide/src/fields.adoc                |  2 +-
 solr/solr-ref-guide/src/filters.adoc               |  9 ++++---
 solr/solr-ref-guide/src/function-queries.adoc      |  9 ++++---
 solr/solr-ref-guide/src/graph-traversal.adoc       | 30 ++++++++++++++--------
 solr/solr-ref-guide/src/highlighting.adoc          |  8 +++---
 .../src/implicit-requesthandlers.adoc              | 15 +++++++----
 .../src/indexing-with-update-handlers.adoc         |  7 ++---
 solr/solr-ref-guide/src/json-query-dsl.adoc        |  3 ++-
 solr/solr-ref-guide/src/language-analysis.adoc     | 16 ++++++++----
 solr/solr-ref-guide/src/language-detection.adoc    |  3 ++-
 solr/solr-ref-guide/src/learning-to-rank.adoc      |  3 ++-
 solr/solr-ref-guide/src/managed-resources.adoc     |  6 ++---
 solr/solr-ref-guide/src/other-parsers.adoc         |  3 ++-
 solr/solr-ref-guide/src/package-manager.adoc       |  5 ++--
 .../solr-ref-guide/src/parallel-sql-interface.adoc | 11 +++++---
 .../src/partial-document-updates.adoc              | 15 +++++++----
 solr/solr-ref-guide/src/phonetic-matching.adoc     |  3 ++-
 solr/solr-ref-guide/src/ping.adoc                  |  3 ++-
 .../src/query-elevation-component.adoc             |  2 +-
 .../src/query-syntax-and-parsers.adoc              |  3 ++-
 solr/solr-ref-guide/src/reindexing.adoc            |  9 ++++---
 solr/solr-ref-guide/src/response-writers.adoc      |  2 +-
 solr/solr-ref-guide/src/result-clustering.adoc     | 13 ++++++----
 solr/solr-ref-guide/src/result-grouping.adoc       |  3 ++-
 .../src/rule-based-authorization-plugin.adoc       |  2 +-
 solr/solr-ref-guide/src/schema-api.adoc            | 14 +++++-----
 solr/solr-ref-guide/src/schema-elements.adoc       |  6 ++---
 .../src/script-update-processor.adoc               |  2 +-
 solr/solr-ref-guide/src/shard-management.adoc      |  3 ++-
 solr/solr-ref-guide/src/solr-glossary.adoc         | 17 +++++++-----
 solr/solr-ref-guide/src/solr-in-docker.adoc        |  3 ++-
 solr/solr-ref-guide/src/solr-plugins.adoc          |  3 ++-
 solr/solr-ref-guide/src/solr-upgrade-notes.adoc    |  6 +++--
 .../src/solrcloud-shards-indexing.adoc             |  6 +++--
 .../solrcloud-with-legacy-configuration-files.adoc |  4 +--
 solr/solr-ref-guide/src/spatial-search.adoc        |  3 ++-
 solr/solr-ref-guide/src/spell-checking.adoc        |  3 ++-
 solr/solr-ref-guide/src/standard-query-parser.adoc |  3 ++-
 solr/solr-ref-guide/src/stats-component.adoc       |  5 ++--
 .../src/stream-decorator-reference.adoc            |  3 ++-
 .../src/stream-source-reference.adoc               |  3 ++-
 solr/solr-ref-guide/src/suggester.adoc             |  4 +--
 solr/solr-ref-guide/src/tagger-handler.adoc        | 11 +++-----
 .../src/taking-solr-to-production.adoc             |  9 ++++---
 solr/solr-ref-guide/src/term-vector-component.adoc |  9 ++++---
 solr/solr-ref-guide/src/tokenizers.adoc            |  5 ++--
 .../src/transforming-and-indexing-custom-json.adoc |  3 ++-
 .../src/upgrading-a-solr-cluster.adoc              |  6 +++--
 .../src/user-managed-index-replication.adoc        |  3 ++-
 .../src/zookeeper-access-control.adoc              |  6 +++--
 74 files changed, 257 insertions(+), 158 deletions(-)

diff --git a/solr/solr-ref-guide/src/about-this-guide.adoc b/solr/solr-ref-guide/src/about-this-guide.adoc
index 00d243b..c8d8ec7 100644
--- a/solr/solr-ref-guide/src/about-this-guide.adoc
+++ b/solr/solr-ref-guide/src/about-this-guide.adoc
@@ -69,7 +69,8 @@ The first has grown somewhat organically as Solr has developed over time, but th
 In many cases, but not all, the parameters and outputs of API calls are the same between the two styles.
 In all cases the paths and endpoints used are different.
 
-Throughout this Guide, we have added examples of both styles with sections labeled "V1 API" and "V2 API". As of the 7.2 version of this Guide, these examples are not yet complete - more coverage will be added as future versions of the Guide are released.
+Throughout this Guide, we have added examples of both styles with sections labeled "V1 API" and "V2 API".
+As of the 7.2 version of this Guide, these examples are not yet complete - more coverage will be added as future versions of the Guide are released.
 
 The section <<v2-api.adoc#,V2 API>> provides more information about how to work with the new API structure, including how to disable it if you choose to do so.
 
diff --git a/solr/solr-ref-guide/src/analyzers.adoc b/solr/solr-ref-guide/src/analyzers.adoc
index 535926a..8d5deee 100644
--- a/solr/solr-ref-guide/src/analyzers.adoc
+++ b/solr/solr-ref-guide/src/analyzers.adoc
@@ -18,7 +18,7 @@
 
 An analyzer examines the text of fields and generates a token stream.
 
-Analyzers are specified as a child of the `<fieldType>` element in the `schema.xml` configuration file (in the same `conf/` directory as `solrconfig.xml`).
+Analyzers are specified as a child of the `<fieldType>` element in <<solr-schema.adoc#,Solr's schema>>.
 
 In normal usage, only fields of type `solr.TextField` or `solr.SortableTextField` will specify an analyzer.
 The simplest way to configure an analyzer is with a single `<analyzer>` element whose class attribute is a fully qualified Java class name.
diff --git a/solr/solr-ref-guide/src/backup-restore.adoc b/solr/solr-ref-guide/src/backup-restore.adoc
index f3a63e0..0ad5deb 100644
--- a/solr/solr-ref-guide/src/backup-restore.adoc
+++ b/solr/solr-ref-guide/src/backup-restore.adoc
@@ -229,7 +229,8 @@ http://localhost:8983/solr/gettingstarted/replication?command=restorestatus&wt=x
 </response>
 ----
 
-The status value can be "In Progress", "success" or "failed". If it failed then an "exception" will also be sent in the response.
+The status value can be "In Progress", "success" or "failed".
+If it failed then an "exception" will also be sent in the response.
 
 === Create Snapshot API
 
diff --git a/solr/solr-ref-guide/src/block-join-query-parser.adoc b/solr/solr-ref-guide/src/block-join-query-parser.adoc
index 5f3055d..27e8932 100644
--- a/solr/solr-ref-guide/src/block-join-query-parser.adoc
+++ b/solr/solr-ref-guide/src/block-join-query-parser.adoc
@@ -99,7 +99,8 @@ This is equivalent to:
 q={!child of=<blockMask>}+<someParents> +BRAND:Foo +NAME:Bar
 
 Notice "$" syntax in `filters` for referencing queries; comma-separated tags `excludeTags` allows to exclude certain queries by tagging.
-Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>. Note, that filtering is applied to the subordinate clause (`<someParents>`), and the intersection result is joined to the children.
+Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>.
+Note, that filtering is applied to the subordinate clause (`<someParents>`), and the intersection result is joined to the children.
 
 === All Children Syntax
 
@@ -161,7 +162,8 @@ q={!parent which=<blockMask>}+<someChildren> +COLOR:Red +SIZE:XL
 
 Notice the "$" syntax in `filters` for referencing queries.
 Comma-separated tags in `excludeTags` allow excluding certain queries by tagging.
-Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>. Note that filtering is applied to the subordinate clause (`<someChildren>`) first, and the intersection result is joined to the parents.
+Overall the idea is similar to <<faceting.adoc#tagging-and-excluding-filters, excluding fq in facets>>.
+Note that filtering is applied to the subordinate clause (`<someChildren>`) first, and the intersection result is joined to the parents.
 
 === Scoring with the Block Join Parent Query Parser
 
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
index 21fc23e..f20923f 100644
--- a/solr/solr-ref-guide/src/charfilterfactories.adoc
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -241,7 +241,7 @@ s|Required |Default: none
 +
 The text to use to replace matching patterns.
 
-You can configure this filter in `schema.xml` like this:
+You can configure this filter in the schema like this:
 
 [.dynamic-tabs]
 --
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index 41b16f1..ea1f537 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -186,7 +186,8 @@ fq={!collapse cost=1000 field=group_field}
 
 === Block Collapsing
 
-When collapsing on the `\_root_` field, using `nullPolicy=expand` or `nullPolicy=ignore`, the Collapsing Query Parser can take advantage of the fact that all docs with identical field values are adjacent to each other in the index in a single <<indexing-nested-documents.adoc#,"block" of nested documents>>. This allows the collapsing logic to be much faster and more memory efficient.
+When collapsing on the `\_root_` field, using `nullPolicy=expand` or `nullPolicy=ignore`, the Collapsing Query Parser can take advantage of the fact that all docs with identical field values are adjacent to each other in the index in a single <<indexing-nested-documents.adoc#,"block" of nested documents>>.
+This allows the collapsing logic to be much faster and more memory efficient.
 
 The default collapsing logic must keep track of all group head documents -- for all groups encountered so far -- until it has evaluated all documents, because each document it considers may become the new group head of any group.
 
diff --git a/solr/solr-ref-guide/src/collection-management.adoc b/solr/solr-ref-guide/src/collection-management.adoc
index 608dbb6..b1b6040 100644
--- a/solr/solr-ref-guide/src/collection-management.adoc
+++ b/solr/solr-ref-guide/src/collection-management.adoc
@@ -284,7 +284,7 @@ If the status is anything other than "success", an error message will explain wh
 [[reload]]
 == RELOAD: Reload a Collection
 
-The RELOAD action is used when you have changed a configuration file in ZooKeeper, like uploading a new `schema.xml`.
+The RELOAD action is used when you have changed a configuration file in ZooKeeper, like uploading a new `solrconfig.xml`.
 Solr automatically reloads collections when certain files, monitored via a watch in ZooKeeper are changed,
 such as `security.json`.
 However, for changes to files in configsets, like uploading a new schema, you will need to manually trigger the RELOAD.
diff --git a/solr/solr-ref-guide/src/common-query-parameters.adoc b/solr/solr-ref-guide/src/common-query-parameters.adoc
index 314fc2a..26e6f06 100644
--- a/solr/solr-ref-guide/src/common-query-parameters.adoc
+++ b/solr/solr-ref-guide/src/common-query-parameters.adoc
@@ -222,7 +222,9 @@ fl=id,title,[explain]
 
 === Field Name Aliases
 
-You can change the key used to in the response for a field, function, or transformer by prefixing it with a `_"displayName_:`". For example:
+You can change the key used to in the response for a field, function, or transformer by prefixing it with a `_displayName_:` value.
+
+For example, `why_score` is the display name below:
 
 [source,text]
 ----
diff --git a/solr/solr-ref-guide/src/configuration-files.adoc b/solr/solr-ref-guide/src/configuration-files.adoc
index f0781c7..c84d4be 100644
--- a/solr/solr-ref-guide/src/configuration-files.adoc
+++ b/solr/solr-ref-guide/src/configuration-files.adoc
@@ -76,7 +76,7 @@ For more details on `core.properties`, see the section <<core-discovery.adoc#,Co
 ** `solrconfig.xml` controls high-level behavior.
 You can, for example, specify an alternate location for the data directory.
 For more information on `solrconfig.xml`, see <<configuring-solrconfig-xml.adoc#,Configuring solrconfig.xml>>.
-** `managed-schema` (or `schema.xml`) describes the documents you will ask Solr to index.
+** `managed-schema` or `schema.xml` describes the documents you will ask Solr to index.
 The schema defines a document as a collection of fields.
 You can define both the field types and the fields themselves.
 Field type definitions are powerful and include information about how Solr processes incoming field values and query values.
diff --git a/solr/solr-ref-guide/src/configuring-solr-xml.adoc b/solr/solr-ref-guide/src/configuring-solr-xml.adoc
index 42bcfb1..306d834 100644
--- a/solr/solr-ref-guide/src/configuring-solr-xml.adoc
+++ b/solr/solr-ref-guide/src/configuring-solr-xml.adoc
@@ -57,7 +57,8 @@ The default `solr.xml` file looks like this:
 </solr>
 ----
 
-As you can see, the discovery Solr configuration is "SolrCloud friendly". However, the presence of the `<solrcloud>` element does _not_ mean that the Solr instance is running in SolrCloud mode.
+As you can see, the discovery Solr configuration is "SolrCloud friendly".
+However, the presence of the `<solrcloud>` element does _not_ mean that the Solr instance is running in SolrCloud mode.
 Unless the `-DzkHost` or `-DzkRun` are specified at startup time, this section is ignored.
 
 == Solr.xml Parameters
diff --git a/solr/solr-ref-guide/src/copy-fields.adoc b/solr/solr-ref-guide/src/copy-fields.adoc
index c51f6c4..8883d49 100644
--- a/solr/solr-ref-guide/src/copy-fields.adoc
+++ b/solr/solr-ref-guide/src/copy-fields.adoc
@@ -19,7 +19,8 @@
 You might want to interpret some document fields in more than one way.
 Solr has a mechanism for making copies of fields so that you can apply several distinct field types to a single piece of incoming information.
 
-The name of the field you want to copy is the _source_, and the name of the copy is the _destination_. In `schema.xml`, it's very simple to make copies of fields:
+The name of the field you want to copy is the _source_, and the name of the copy is the _destination_.
+In the schema file, it's very simple to make copies of fields:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/currencies-exchange-rates.adoc b/solr/solr-ref-guide/src/currencies-exchange-rates.adoc
index 20eaeed..111acd5 100644
--- a/solr/solr-ref-guide/src/currencies-exchange-rates.adoc
+++ b/solr/solr-ref-guide/src/currencies-exchange-rates.adoc
@@ -35,7 +35,7 @@ The following features are supported:
 CurrencyField has been deprecated in favor of CurrencyFieldType; all configuration examples below use CurrencyFieldType.
 ====
 
-The `currency` field type is defined in `schema.xml`.
+The `currency` field type is defined in the <<solr-schema.adoc#,schema>>.
 This is the default configuration of this type.
 
 [source,xml]
@@ -46,7 +46,8 @@ This is the default configuration of this type.
 ----
 
 In this example, we have defined the name and class of the field type, and defined the `defaultCurrency` as "USD", for U.S. Dollars.
-We have also defined a `currencyConfig` to use a file called "currency.xml". This is a file of exchange rates between our default currency to other currencies.
+We have also defined a `currencyConfig` to use a file called "currency.xml".
+This is a file of exchange rates between our default currency to other currencies.
 There is an alternate implementation that would allow regular downloading of currency data.
 See <<Exchange Rates>> below for more.
 
@@ -96,7 +97,7 @@ Natively, two provider types are supported: `FileExchangeRateProvider` or `OpenE
 This provider requires you to provide a file of exchange rates.
 It is the default, meaning that to use this provider you only need to specify the file path and name as a value for `currencyConfig` in the definition for this type.
 
-There is a sample `currency.xml` file included with Solr, found in the same directory as the `schema.xml` file.
+There is a sample `currency.xml` file included with Solr, found in the same directory as the schema file.
 Here is a small snippet from this file:
 
 [source,xml]
diff --git a/solr/solr-ref-guide/src/date-formatting-math.adoc b/solr/solr-ref-guide/src/date-formatting-math.adoc
index 1753e27..1f1b7a8 100644
--- a/solr/solr-ref-guide/src/date-formatting-math.adoc
+++ b/solr/solr-ref-guide/src/date-formatting-math.adoc
@@ -19,7 +19,8 @@
 == Date Formatting
 
 Solr's date fields (`DatePointField`, `DateRangeField` and the deprecated `TrieDateField`) represent "dates" as a point in time with millisecond precision.
-The format used is a restricted form of the canonical representation of dateTime in the http://www.w3.org/TR/xmlschema-2/#dateTime[XML Schema specification] – a restricted subset of https://en.wikipedia.org/wiki/ISO_8601[ISO-8601]. For those familiar with Java date handling, Solr uses {java-javadocs}java/time/format/DateTimeFormatter.html#ISO_INSTANT[DateTimeFormatter.ISO_INSTANT] for formatting, and parsing too with "leniency".
+The format used is a restricted form of the canonical representation of dateTime in the http://www.w3.org/TR/xmlschema-2/#dateTime[XML Schema specification] – a restricted subset of https://en.wikipedia.org/wiki/ISO_8601[ISO-8601].
+For those familiar with Java date handling, Solr uses {java-javadocs}java/time/format/DateTimeFormatter.html#ISO_INSTANT[DateTimeFormatter.ISO_INSTANT] for formatting, and parsing too with "leniency".
 
 `YYYY-MM-DDThh:mm:ssZ`
 
diff --git a/solr/solr-ref-guide/src/de-duplication.adoc b/solr/solr-ref-guide/src/de-duplication.adoc
index 8df6dc6..183d22f 100644
--- a/solr/solr-ref-guide/src/de-duplication.adoc
+++ b/solr/solr-ref-guide/src/de-duplication.adoc
@@ -40,7 +40,7 @@ When a document is added, a signature will automatically be generated and attach
 
 == Configuration Options
 
-There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in `schema.xml`.
+There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in the <<solr-schema.adoc#,schema>>.
 
 === In solrconfig.xml
 
diff --git a/solr/solr-ref-guide/src/docker-networking.adoc b/solr/solr-ref-guide/src/docker-networking.adoc
index f0639df..e8d7c95 100644
--- a/solr/solr-ref-guide/src/docker-networking.adoc
+++ b/solr/solr-ref-guide/src/docker-networking.adoc
@@ -230,7 +230,8 @@ ssh -n trinity10.lan "docker pull brandnetworks/tcpproxy && docker run -p 8001 -
 docker port zksolrproxy 8002
 ----
 
-Or use a suitably configured HAProxy to round-robin between all Solr nodes. Or, instead of the overlay network, use http://www.projectcalico.org[Project Calico] and configure L3 routing so you do not need to mess with proxies.
+Or use a suitably configured HAProxy to round-robin between all Solr nodes.
+Or, instead of the overlay network, use http://www.projectcalico.org[Project Calico] and configure L3 routing so you do not need to mess with proxies.
 
 Now I can get to Solr on `http://trinity10:32774/solr/#/`.
 In the Cloud -> Tree -> /live_nodes view I see the Solr nodes.
diff --git a/solr/solr-ref-guide/src/document-transformers.adoc b/solr/solr-ref-guide/src/document-transformers.adoc
index c8c2fc9..b53185f 100644
--- a/solr/solr-ref-guide/src/document-transformers.adoc
+++ b/solr/solr-ref-guide/src/document-transformers.adoc
@@ -401,7 +401,8 @@ In a sense this double-storage between docValues and stored-value storage isn't
 
 === [features] - LTRFeatureLoggerTransformerFactory
 
-The "LTR" prefix stands for <<learning-to-rank.adoc#,Learning To Rank>>. This transformer returns the values of features and it can be used for feature extraction and feature logging.
+The "LTR" prefix stands for <<learning-to-rank.adoc#,Learning To Rank>>.
+This transformer returns the values of features and it can be used for feature extraction and feature logging.
 
 [source,plain]
 ----
diff --git a/solr/solr-ref-guide/src/documents-fields-schema-design.adoc b/solr/solr-ref-guide/src/documents-fields-schema-design.adoc
index b105683..880bfaf 100644
--- a/solr/solr-ref-guide/src/documents-fields-schema-design.adoc
+++ b/solr/solr-ref-guide/src/documents-fields-schema-design.adoc
@@ -20,7 +20,8 @@
 The fundamental premise of Solr is simple.
 You give it a lot of information, then later you can ask it questions and find the piece of information you want.
 
-The part where you feed in all the information is called _indexing_ or _updating_. When you ask a question, it's called a _query_.
+The part where you feed in all the information is called _indexing_ or _updating_.
+When you ask a question, it's called a _query_.
 
 One way to understand how Solr works is to think of a loose-leaf book of recipes.
 Every time you add a recipe to the book, you update the index at the back.
diff --git a/solr/solr-ref-guide/src/docvalues.adoc b/solr/solr-ref-guide/src/docvalues.adoc
index e46d1c1..8a7be90 100644
--- a/solr/solr-ref-guide/src/docvalues.adoc
+++ b/solr/solr-ref-guide/src/docvalues.adoc
@@ -20,7 +20,8 @@ DocValues are a way of recording field values internally that is more efficient
 
 == Why DocValues?
 
-The standard way that Solr builds the index is with an _inverted index_. This style builds a list of terms found in all the documents in the index and next to each term is a list of documents that the term appears in (as well as how many times the term appears in that document).
+The standard way that Solr builds the index is with an _inverted index_.
+This style builds a list of terms found in all the documents in the index and next to each term is a list of documents that the term appears in (as well as how many times the term appears in that document).
 This makes search very fast - since users search by terms, having a ready list of term-to-document values makes the query process faster.
 
 For other features that we now commonly associate with search, such as sorting, faceting, and highlighting, this approach is not very efficient.
@@ -35,9 +36,9 @@ This approach promises to relieve some of the memory requirements of the fieldCa
 
 To use docValues, you only need to enable it for a field that you will use it with.
 As with all schema design, you need to define a field type and then define fields of that type with docValues enabled.
-All of these actions are done in `schema.xml`.
+All of these actions are done in the <<solr-schema.adoc#,schema>>.
 
-Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#,configset>>:
+Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from Solr's `sample_techproducts_configs` <<config-sets.adoc#,configset>>:
 
 [source,xml]
 ----
@@ -45,7 +46,7 @@ Enabling a field for docValues only requires adding `docValues="true"` to the fi
 ----
 
 [IMPORTANT]
-If you have already indexed data into your Solr index, you will need to completely reindex your content after changing your field definitions in `schema.xml` in order to successfully use docValues.
+If you have already indexed data into your Solr index, you will need to completely reindex your content after changing your field definitions in the schema in order to successfully use docValues.
 
 DocValues are only available for specific field types.
 The types chosen determine the underlying Lucene docValue type that will be used.
@@ -70,8 +71,10 @@ Entries are kept in sorted order and duplicates are removed.
 
 These Lucene types are related to how the {lucene-javadocs}/core/org/apache/lucene/index/DocValuesType.html[values are sorted and stored].
 
-There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#docvaluesformat,used by the field type>>. The default implementation employs a mixture of loading some things into memory and keeping some on disk.
-In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation]. For example, you could choose to keep everything in memory by specifying `docValuesFormat="Direct"` on a field type:
+There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#docvaluesformat,used by the field type>>.
+The default implementation employs a mixture of loading some things into memory and keeping some on disk.
+In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation].
+For example, you could choose to keep everything in memory by specifying `docValuesFormat="Direct"` on a field type:
 
 [source,xml]
 ----
@@ -82,7 +85,7 @@ Please note that the `docValuesFormat` option may change in future releases.
 
 [NOTE]
 Lucene index back-compatibility is only supported for the default codec.
-If you choose to customize the `docValuesFormat` in your `schema.xml`, upgrading to a future version of Solr may require you to either switch back to the default codec and optimize your index to rewrite it into the default codec before upgrading, or re-build your entire index from scratch after upgrading.
+If you choose to customize the `docValuesFormat` in your schema, upgrading to a future version of Solr may require you to either switch back to the default codec and optimize your index to rewrite it into the default codec before upgrading, or re-build your entire index from scratch after upgrading.
 
 == Using DocValues
 
diff --git a/solr/solr-ref-guide/src/dynamic-fields.adoc b/solr/solr-ref-guide/src/dynamic-fields.adoc
index 873bd22..ead4959 100644
--- a/solr/solr-ref-guide/src/dynamic-fields.adoc
+++ b/solr/solr-ref-guide/src/dynamic-fields.adoc
@@ -34,5 +34,5 @@ Like regular fields, dynamic fields have a name, a field type, and options.
 <dynamicField name="*_i" type="int" indexed="true"  stored="true"/>
 ----
 
-It is recommended that you include basic dynamic field mappings (like that shown above) in your `schema.xml`.
+It is recommended that you include basic dynamic field mappings (like that shown above) in your schema.
 The mappings can be very useful.
diff --git a/solr/solr-ref-guide/src/edismax-query-parser.adoc b/solr/solr-ref-guide/src/edismax-query-parser.adoc
index 9aeff4a..1eabda2 100644
--- a/solr/solr-ref-guide/src/edismax-query-parser.adoc
+++ b/solr/solr-ref-guide/src/edismax-query-parser.adoc
@@ -124,7 +124,7 @@ By default, no aliasing is used and field names specified in the query string ar
 
 == Examples of eDisMax Queries
 
-All of the sample URLs in this section assume you are running Solr's "```techproducts```" example:
+All of the sample URLs in this section assume you are running Solr's "techproducts" example:
 
 [source,bash]
 ----
diff --git a/solr/solr-ref-guide/src/enum-fields.adoc b/solr/solr-ref-guide/src/enum-fields.adoc
index a5628d5..6981c22 100644
--- a/solr/solr-ref-guide/src/enum-fields.adoc
+++ b/solr/solr-ref-guide/src/enum-fields.adoc
@@ -25,7 +25,7 @@ Examples of this are severity lists, or risk definitions.
 EnumField has been deprecated in favor of EnumFieldType; all configuration examples below use EnumFieldType.
 ====
 
-== Defining an EnumFieldType in schema.xml
+== Defining an EnumFieldType in the Schema
 
 The EnumFieldType type definition is quite simple, as in this example defining field types for "priorityLevel" and "riskLevel" enumerations:
 
diff --git a/solr/solr-ref-guide/src/external-files-processes.adoc b/solr/solr-ref-guide/src/external-files-processes.adoc
index 2925180..81175c3 100644
--- a/solr/solr-ref-guide/src/external-files-processes.adoc
+++ b/solr/solr-ref-guide/src/external-files-processes.adoc
@@ -38,7 +38,7 @@ You might want to update the rank of all the documents daily or hourly, while th
 Without `ExternalFileField`, you would need to update each document just to change the rank.
 Using `ExternalFileField` is much more efficient because all document values for a particular field are stored in an external file that can be updated as frequently as you wish.
 
-In `schema.xml`, the definition of this field type might look like this:
+In the <<solr-schema.adoc#,schema>>, the definition of this field type might look like this:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index c2ea402..e4a15ff 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -447,7 +447,8 @@ The results are typically displayed in a second table showing the summarized dat
 Pivot faceting lets you create a summary table of the results from a faceting documents by multiple fields.
 
 Another way to look at it is that the query produces a Decision Tree, in that Solr tells you "for facet A, the constraints/counts are X/N, Y/M, etc.
-If you were to constrain A by X, then the constraint counts for B would be S/P, T/Q, etc.". In other words, it tells you in advance what the "next" set of facet results would be for a field if you apply a constraint from the current facet results.
+If you were to constrain A by X, then the constraint counts for B would be S/P, T/Q, etc."
+In other words, it tells you in advance what the "next" set of facet results would be for a field if you apply a constraint from the current facet results.
 
 `facet.pivot`::
 +
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 8becad8..7327b20 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -57,7 +57,7 @@ Here is an example of a field type definition for a type called `text_general`:
 <2> The rest of the definition is about field analysis, described in <<document-analysis.adoc#,Document Analysis in Solr>>.
 
 The implementing class is responsible for making sure the field is handled correctly.
-In the class names in `schema.xml`, the string `solr` is shorthand for `org.apache.solr.schema` or `org.apache.solr.analysis`.
+In the class names, the string `solr` is shorthand for `org.apache.solr.schema` or `org.apache.solr.analysis`.
 Therefore, `solr.TextField` is really `org.apache.solr.schema.TextField`.
 
 == Field Type Properties
@@ -180,7 +180,7 @@ This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has b
 [NOTE]
 ====
 Lucene index back-compatibility is only supported for the default codec.
-If you choose to customize the `postingsFormat` or `docValuesFormat` in your `schema.xml`, upgrading to a future version of Solr may require you to either switch back to the default codec and optimize your index to rewrite it into the default codec before upgrading, or re-build your entire index from scratch after upgrading.
+If you choose to customize the `postingsFormat` or `docValuesFormat` in your schema, upgrading to a future version of Solr may require you to either switch back to the default codec and optimize your index to rewrite it into the default codec before upgrading, or re-build your entire index from scratch after upgrading.
 ====
 
 === Field Default Properties
@@ -188,7 +188,7 @@ If you choose to customize the `postingsFormat` or `docValuesFormat` in your `sc
 These are properties that can be specified either on the field types, or on individual fields to override the values provided by the field types.
 
 The default values for each property depend on the underlying `FieldType` class, which in turn may depend on the `version` attribute of the `<schema/>`.
-The table below includes the default value for most `FieldType` implementations provided by Solr, assuming a `schema.xml` that declares `version="1.6"`.
+The table below includes the default value for most `FieldType` implementations provided by Solr, assuming a schema that declares `version="1.6"`.
 
 // tags this table for inclusion in another page
 // tag::field-params[]
@@ -231,7 +231,7 @@ One technique is using a text field as a catch-all for keyword searching.
 Most users are not sophisticated about their searches and the most common search is likely to be a simple keyword search.
 You can use `copyField` to take a variety of fields and funnel them all into a single text field for keyword searches.
 
-In the `schema.xml` file for the "```techproducts```" example included with Solr, `copyField` declarations are used to dump the contents of `cat`, `name`, `manu`, `features`, and `includes` into a single field, `text`. In addition, it could be a good idea to copy `ID` into `text` in case users wanted to search for a particular product by passing its product number to a keyword search.
+In the schema for the "techproducts" example included with Solr, `copyField` declarations are used to dump the contents of `cat`, `name`, `manu`, `features`, and `includes` into a single field, `text`.
+In addition, it could be a good idea to copy `ID` into `text` in case users wanted to search for a particular product by passing its product number to a keyword search.
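The funneling behavior can be mimicked in a few lines of Python. This is a sketch only, with made-up document values; the real copying happens inside Solr at index time, and the destination is a multiValued field rather than a joined string:

```python
# Toy sketch: emulating copyField declarations that funnel several
# source fields into a single catch-all "text" field.
def apply_copy_fields(doc, sources, dest="text"):
    parts = [str(doc[f]) for f in sources if f in doc]
    copied = dict(doc)
    copied[dest] = " ".join(parts)  # multiValued in Solr; joined here for brevity
    return copied

doc = {"id": "SP2514N", "name": "Samsung SpinPoint", "cat": "hard drive"}
indexed = apply_copy_fields(doc, ["cat", "name", "manu", "features", "includes"])
```

A keyword search against `text` now matches terms that originally appeared in any of the source fields.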
 
 Another technique is using `copyField` to use the same field in different ways.
 Suppose you have a field that is a list of authors, like this:
diff --git a/solr/solr-ref-guide/src/fields.adoc b/solr/solr-ref-guide/src/fields.adoc
index 44bc41c..e8d6136 100644
--- a/solr/solr-ref-guide/src/fields.adoc
+++ b/solr/solr-ref-guide/src/fields.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Fields are defined in the fields element of `schema.xml`.
+Fields are defined in the fields element of a <<solr-schema.adoc#,schema>>.
 Once you have the field types set up, defining the fields themselves is simple.
 
 == Example Field Definition
diff --git a/solr/solr-ref-guide/src/filters.adoc b/solr/solr-ref-guide/src/filters.adoc
index 95314b3..c1a6ab9 100644
--- a/solr/solr-ref-guide/src/filters.adoc
+++ b/solr/solr-ref-guide/src/filters.adoc
@@ -1047,7 +1047,8 @@ This filter is generally only useful at index time.
 
 == ICU Folding Filter
 
-This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode TR #30: Character Foldings] in addition to the `NFKC_Casefold` normalization form as described in <<ICU Normalizer 2 Filter>>. This filter is a better substitute for the combined behavior of the <<ASCII Folding Filter>>, <<Lower Case Filter>>, and <<ICU Normalizer 2 Filter>>.
+This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode TR #30: Character Foldings] in addition to the `NFKC_Casefold` normalization form as described in <<ICU Normalizer 2 Filter>>.
+This filter is a better substitute for the combined behavior of the <<ASCII Folding Filter>>, <<Lower Case Filter>>, and <<ICU Normalizer 2 Filter>>.
 
 To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
@@ -3423,9 +3424,11 @@ Note: although this filter produces correct token graphs, it cannot consume an i
 
 The rules for determining delimiters are determined as follows:
 
-* A change in case within a word: "CamelCase" -> "Camel", "Case". This can be disabled by setting `splitOnCaseChange="0"`.
+* A change in case within a word: "CamelCase" -> "Camel", "Case".
+This can be disabled by setting `splitOnCaseChange="0"`.
 
-* A transition from alpha to numeric characters or vice versa: "Gonzo5000" -> "Gonzo", "5000" "4500XL" -> "4500", "XL". This can be disabled by setting `splitOnNumerics="0"`.
+* A transition from alpha to numeric characters or vice versa: "Gonzo5000" -> "Gonzo", "5000"; "4500XL" -> "4500", "XL".
+This can be disabled by setting `splitOnNumerics="0"`.
 
 * Non-alphanumeric characters (discarded): "hot-spot" -> "hot", "spot"
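The three rules above can be sketched with ordinary regular expressions. This is an illustrative approximation only, not the actual `WordDelimiterGraphFilter` implementation (which also handles token graphs, catenation options, and more):

```python
# Toy sketch of the delimiter rules: break on lower->Upper case changes,
# on alpha<->numeric transitions, then split on non-alphanumeric characters.
import re

def split_word(token):
    token = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", token)                       # CamelCase
    token = re.sub(r"(?<=[A-Za-z])(?=[0-9])|(?<=[0-9])(?=[A-Za-z])", " ", token)  # Gonzo5000
    return [t for t in re.split(r"[^A-Za-z0-9]+", token) if t]               # hot-spot
```

Disabling `splitOnCaseChange` or `splitOnNumerics` corresponds to skipping the first or second substitution.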
 
diff --git a/solr/solr-ref-guide/src/function-queries.adoc b/solr/solr-ref-guide/src/function-queries.adoc
index e0df316..f827253 100644
--- a/solr/solr-ref-guide/src/function-queries.adoc
+++ b/solr/solr-ref-guide/src/function-queries.adoc
@@ -20,7 +20,8 @@ Function queries enable you to generate a relevancy score using the actual value
 
 Function queries are supported by the <<dismax-query-parser.adoc#,DisMax>>, <<edismax-query-parser.adoc#,Extended DisMax>>, and <<standard-query-parser.adoc#,standard>> query parsers.
 
-Function queries use _functions_. The functions can be a constant (numeric or string literal), a field, another function or a parameter substitution argument.
+Function queries use _functions_.
+The functions can be a constant (numeric or string literal), a field, another function or a parameter substitution argument.
 You can use these functions to modify the ranking of results for users.
 These could be used to change the ranking of results based on a user's location, or some other calculation.
 
@@ -30,7 +31,8 @@ Functions must be expressed as function calls (for example, `sum(a,b)` instead o
 
 There are several ways of using function queries in a Solr query:
 
-* Via an explicit query parser that expects function arguments, such <<other-parsers.adoc#function-query-parser,`func`>> or <<other-parsers.adoc#function-range-query-parser,`frange`>>. For example:
+* Via an explicit query parser that expects function arguments, such as <<other-parsers.adoc#function-query-parser,`func`>> or <<other-parsers.adoc#function-range-query-parser,`frange`>>.
+For example:
 +
 [source,text]
 ----
@@ -92,7 +94,8 @@ Returns the absolute value of the specified value or function.
 * `abs(-5)`
 
 === childfield(field) Function
-Returns the value of the given field for one of the matched child docs when searching by <<block-join-query-parser.adoc#block-join-parent-query-parser,{!parent}>>. It can be used only in `sort` parameter.
+Returns the value of the given field for one of the matched child docs when searching by <<block-join-query-parser.adoc#block-join-parent-query-parser,{!parent}>>.
+It can be used only in the `sort` parameter.
 
 *Syntax Examples*
 
diff --git a/solr/solr-ref-guide/src/graph-traversal.adoc b/solr/solr-ref-guide/src/graph-traversal.adoc
index ce210f9..f608bec 100644
--- a/solr/solr-ref-guide/src/graph-traversal.adoc
+++ b/solr/solr-ref-guide/src/graph-traversal.adoc
@@ -30,7 +30,8 @@ Some sample use cases are provided later in the document.
 [IMPORTANT]
 ====
 This document assumes a basic understanding of graph terminology and streaming expressions.
-You can begin exploring graph traversal concepts with this https://en.wikipedia.org/wiki/Graph_traversal[Wikipedia article]. More details about streaming expressions are available in this Guide, in the section <<streaming-expressions.adoc#,Streaming Expressions>>.
+You can begin exploring graph traversal concepts with this https://en.wikipedia.org/wiki/Graph_traversal[Wikipedia article].
+More details about streaming expressions are available in this Guide, in the section <<streaming-expressions.adoc#,Streaming Expressions>>.
 ====
 
 == Basic Syntax
@@ -127,7 +128,8 @@ nodes(emails,
       scatter="branches, leaves")
 ----
 
-The `scatter` parameter controls whether to emit the _branches_ with the _leaves_. The root nodes are considered "branches" because they are not the outer-most level of the traversal.
+The `scatter` parameter controls whether to emit the _branches_ with the _leaves_.
+The root nodes are considered "branches" because they are not the outer-most level of the traversal.
 
 When scattering both branches and leaves, the output would look like this:
 
@@ -224,7 +226,8 @@ In this scenario the `walk` parameter maps the `node` field to the `from` field.
 Remember that the node IDs collected from the inner `nodes` expression are placed in the `node` field.
 
 Put more simply, the inner expression gathers all the people that "\johndoe@apache.org" has emailed.
-We can call this group the "friends of \johndoe@apache.org". The outer expression gathers all the people that the "friends of \johndoe@apache.org" have emailed.
+We can call this group the "friends of \johndoe@apache.org".
+The outer expression gathers all the people that the "friends of \johndoe@apache.org" have emailed.
 This is a basic friends-of-friends traversal.
 
 This construct of nesting `nodes` functions is the basic technique for doing a controlled traversal through the graph.
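The nesting can be sketched as two hops over an in-memory edge list. This is a toy illustration of what the nested `nodes` expressions compute, with hypothetical addresses, not Solr's streaming implementation:

```python
# Toy sketch: a "friends of friends" traversal, one gather() call per
# nested nodes() expression.
def gather(edges, from_nodes):
    """One hop: collect everyone the given senders emailed."""
    return {to for frm, to in edges if frm in from_nodes}

emails = [
    ("johndoe@apache.org", "a@apache.org"),
    ("johndoe@apache.org", "b@apache.org"),
    ("a@apache.org", "c@apache.org"),
    ("b@apache.org", "d@apache.org"),
]
friends = gather(emails, {"johndoe@apache.org"})  # inner nodes expression
friends_of_friends = gather(emails, friends)      # outer nodes expression
```

Each additional level of nesting adds one more hop through the graph.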
@@ -315,7 +318,8 @@ The sample code below shows steps 1 and 2 of the recommendation:
        count(*)))
 ----
 
-In the example above, the inner search expression searches the `logs` collection and returning all the articles viewed by "user1". The outer `nodes` expression takes all the articles emitted from the inner search expression and finds all the records in the logs collection for those articles.
+In the example above, the inner search expression searches the `logs` collection and returns all the articles viewed by "user1".
+The outer `nodes` expression takes all the articles emitted from the inner search expression and finds all the records in the logs collection for those articles.
 It then gathers and aggregates the users that have read the articles.
 The `maxDocFreq` parameter limits the articles returned to those that appear in no more than 10,000 log records (per shard).
 This guards against returning articles that have been viewed by millions of users.
@@ -367,7 +371,8 @@ nodes(logs,
       gather="contentID")
 ----
 
-The example above finds all people who sent emails with a body that contains "solr rocks". It then finds all the people these people have emailed.
+The example above finds all people who sent emails with a body that contains "solr rocks".
+It then finds all the people these people have emailed.
 Then it traverses to the logs collection and gathers all the content IDs that these people have edited.
 
 == Combining nodes With Other Streaming Expressions
@@ -397,7 +402,8 @@ Here is an example of using the streaming expression library to intersect two fr
                                   scatter="branches,leaves")))
 ----
 
-The example above gathers two separate friend networks, one rooted with "\johndoe@apache.org" and another rooted with "\janedoe@apache.org". The friend networks are then sorted by the `node` field, and intersected.
+The example above gathers two separate friend networks, one rooted with "\johndoe@apache.org" and another rooted with "\janedoe@apache.org".
+The friend networks are then sorted by the `node` field, and intersected.
 The resulting node set will be the intersection of the two friend networks.
 
 == Sample Use Cases for Graph Traversal
@@ -426,7 +432,8 @@ top(n="5",
 
 Let's break down exactly what this traversal is doing.
 
-. The first expression evaluated is the inner `random` expression, which returns 500 random basketIDs, from the `baskets` collection, that have the `productID` "ABC". The `random` expression is very useful for recommendations because it limits the traversal to a fixed set of baskets, and because it adds the element of surprise into the recommendation.
+. The first expression evaluated is the inner `random` expression, which returns 500 random basketIDs, from the `baskets` collection, that have the `productID` "ABC".
+The `random` expression is very useful for recommendations because it limits the traversal to a fixed set of baskets, and because it adds the element of surprise into the recommendation.
 Using the `random` function you can provide fast sample sets from very large graphs.
 . The outer `nodes` expression finds all the records in the `baskets` collection for the basketIDs generated in step 1.
 It also filters out `productID` "ABC" so it doesn't show up in the results.
@@ -513,9 +520,11 @@ top(n="5",
 Let's break down the expression above step-by-step.
 
 . The first expression evaluated is the inner `search` expression.
-This expression searches the `logs` collection for all records matching "user1". This is the user we are making the recommendation for.
+This expression searches the `logs` collection for all records matching "user1".
+This is the user we are making the recommendation for.
 +
-There is a filter applied to pull back only records where the "action:read". It returns the `articleID` for each record found.
+There is a filter applied to pull back only records matching "action:read".
+It returns the `articleID` for each record found.
 In other words, this expression returns all the articles "user1" has read.
 . The inner `nodes` expression operates over the articleIDs returned from step 1.
 It takes each `articleID` found and searches them against the `articleID` field.
@@ -556,7 +565,8 @@ nodes(proteins,
 Let's break down exactly what this traversal is doing.
 
 . The inner `nodes` expression traverses in the `proteins` collection.
-It finds all the edges in the graph where the name of the protein is "NRAS". Then it gathers the proteins in the `interacts` field.
+It finds all the edges in the graph where the name of the protein is "NRAS".
+Then it gathers the proteins in the `interacts` field.
 This gathers all the proteins that "NRAS" interacts with.
 . The outer `nodes` expression also works with the `proteins` collection.
 It gathers all the drugs that correspond to proteins emitted from step 1.
diff --git a/solr/solr-ref-guide/src/highlighting.adoc b/solr/solr-ref-guide/src/highlighting.adoc
index 6cacd5d..87bf571 100644
--- a/solr/solr-ref-guide/src/highlighting.adoc
+++ b/solr/solr-ref-guide/src/highlighting.adoc
@@ -302,7 +302,7 @@ In contrast, the Unified Highlighter can only be chosen exclusively.
 
 The Unified Highlighter is exclusively configured via search parameters.
 In contrast, some settings for the Original and FastVector Highlighters are set in `solrconfig.xml`.
-There's a robust example of the latter in the "```techproducts```" configset.
+There's a robust example of the latter in the "techproducts" configset.
 
 In addition to further information below, more information can be found in the {solr-javadocs}/core/org/apache/solr/highlight/package-summary.html[Solr javadocs].
 
@@ -409,7 +409,8 @@ If `true`, use the leading portion of the text as a snippet if a proper highligh
 |Optional |Default: `1.2`
 |===
 +
-Specifies BM25 term frequency normalization parameter 'k1'. For example, it can be set to `0` to rank passages solely based on the number of query terms that match.
+Specifies BM25 term frequency normalization parameter 'k1'.
+For example, it can be set to `0` to rank passages solely based on the number of query terms that match.
 
 `hl.score.b`::
 +
@@ -418,7 +419,8 @@ Specifies BM25 term frequency normalization parameter 'k1'. For example, it can
 |Optional |Default: `0.75`
 |===
 +
-Specifies BM25 length normalization parameter 'b'. For example, it can be set to "0" to ignore the length of passages entirely when ranking.
+Specifies BM25 length normalization parameter 'b'.
+For example, it can be set to "0" to ignore the length of passages entirely when ranking.
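The effect of `hl.score.k1` and `hl.score.b` follows from the standard BM25 term-frequency factor, sketched here in Python for illustration (this is the textbook formula, not Solr's highlighter code):

```python
# Toy sketch of the BM25 term-frequency factor, to make the k1 and b
# edge cases visible.
def bm25_tf(tf, k1=1.2, b=0.75, dl=100, avgdl=100):
    return (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * dl / avgdl))

# k1=0: every matching term contributes the same weight regardless of
# how often it occurs, so ranking depends only on how many query terms match.
assert bm25_tf(1, k1=0) == bm25_tf(10, k1=0) == 1.0

# b=0: the passage-length ratio dl/avgdl drops out of the denominator,
# so length is ignored entirely.
assert bm25_tf(3, b=0, dl=50) == bm25_tf(3, b=0, dl=500)
```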
 
 `hl.score.pivot`::
 +
diff --git a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
index f53ef8c..7789527 100644
--- a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
+++ b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
@@ -48,7 +48,8 @@ Health:: Report the health of the node (_available only in SolrCloud mode_)
 v2: `api/node/health` |{solr-javadocs}/core/org/apache/solr/handler/admin/HealthCheckHandler.html[HealthCheckHandler] |
 |===
 +
-This endpoint also accepts additional request parameters. Please see {solr-javadocs}/core/org/apache/solr/handler/admin/HealthCheckHandler.html[Javadocs] for details.
+This endpoint also accepts additional request parameters.
+Please see {solr-javadocs}/core/org/apache/solr/handler/admin/HealthCheckHandler.html[Javadocs] for details.
 
 Logging:: Retrieve and modify registered loggers.
 +
@@ -60,7 +61,8 @@ Logging:: Retrieve and modify registered loggers.
 v2: `api/node/logging` |{solr-javadocs}/core/org/apache/solr/handler/admin/LoggingHandler.html[LoggingHandler] |`_ADMIN_LOGGING`
 |===
 
-Luke:: Expose the internal Lucene index. This handler must have a collection name in the path to the endpoint.
+Luke:: Expose the internal Lucene index.
+This handler must have a collection name in the path to the endpoint.
 +
 *Documentation*: <<luke-request-handler.adoc#,Luke Request Handler>>
 +
@@ -70,7 +72,8 @@ Luke:: Expose the internal Lucene index. This handler must have a collection nam
 |`solr/<collection>/admin/luke` |{solr-javadocs}/core/org/apache/solr/handler/admin/LukeRequestHandler.html[LukeRequestHandler] |`_ADMIN_LUKE`
 |===
 
-MBeans:: Provide info about all registered {solr-javadocs}/core/org/apache/solr/core/SolrInfoBean.html[SolrInfoMBeans]. This handler must have a collection name in the path to the endpoint.
+MBeans:: Provide info about all registered {solr-javadocs}/core/org/apache/solr/core/SolrInfoBean.html[SolrInfoMBeans].
+This handler must have a collection name in the path to the endpoint.
 +
 *Documentation*: <<mbean-request-handler.adoc#,MBean Request Handler>>
 +
@@ -80,7 +83,8 @@ MBeans:: Provide info about all registered {solr-javadocs}/core/org/apache/solr/
 |`solr/<collection>/admin/mbeans` |{solr-javadocs}/core/org/apache/solr/handler/admin/SolrInfoMBeanHandler.html[SolrInfoMBeanHandler] |`_ADMIN_MBEANS`
 |===
 
-Ping:: Health check. This handler must have a collection name in the path to the endpoint.
+Ping:: Health check.
+This handler must have a collection name in the path to the endpoint.
 +
 *Documentation*: <<ping.adoc#,Ping>>
 +
@@ -90,7 +94,8 @@ Ping:: Health check. This handler must have a collection name in the path to the
 |`solr/<collection>/admin/ping` |{solr-javadocs}/core/org/apache/solr/handler/PingRequestHandler.html[PingRequestHandler] |`_ADMIN_PING`
 |===
 
-Plugins:: Return info about all registered plugins. This handler must have a collection name in the path to the endpoint.
+Plugins:: Return info about all registered plugins.
+This handler must have a collection name in the path to the endpoint.
 +
 [cols="3*.",frame=none,grid=cols,options="header"]
 |===
diff --git a/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc b/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
index 19a5f49..0ac0d32 100644
--- a/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
+++ b/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
@@ -274,7 +274,8 @@ The status field will be non-zero in case of failure.
 
 === Using XSLT to Transform XML Index Updates
 
-The Scripting contrib module provides a separate XSLT Update Request Handler that allows you to index any arbitrary XML by using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation]. You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#,configset>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
+The Scripting contrib module provides a separate XSLT Update Request Handler that allows you to index any arbitrary XML by using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation].
+You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#,configset>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
 
 Learn more about adding the `dist/solr-scripting-*.jar` file into Solr's <<libs.adoc#lib-directories,Lib Directories>>.
 
@@ -407,7 +408,7 @@ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/my_
 ]'
 ----
 
-A sample JSON file is provided at `example/exampledocs/books.json` and contains an array of objects that you can add to the Solr `techproducts` example:
+A sample JSON file is provided at `example/exampledocs/books.json` and contains an array of objects that you can add to the Solr "techproducts" example:
 
 [source,bash]
 ----
@@ -513,7 +514,7 @@ This is covered in the section <<transforming-and-indexing-custom-json.adoc#,Tra
 
 CSV formatted update requests may be sent to Solr's `/update` handler using `Content-Type: application/csv` or `Content-Type: text/csv`.
 
-A sample CSV file is provided at `example/exampledocs/books.csv` that you can use to add some documents to the Solr `techproducts` example:
+A sample CSV file is provided at `example/exampledocs/books.csv` that you can use to add some documents to the Solr "techproducts" example:
 
 [source,bash]
 ----
diff --git a/solr/solr-ref-guide/src/json-query-dsl.adoc b/solr/solr-ref-guide/src/json-query-dsl.adoc
index 33d0fdf..80f8481 100644
--- a/solr/solr-ref-guide/src/json-query-dsl.adoc
+++ b/solr/solr-ref-guide/src/json-query-dsl.adoc
@@ -396,7 +396,8 @@ curl -X POST http://localhost:8983/solr/techproducts/query -d '
 ----
 
 Overall this example doesn't make much sense; it just demonstrates the syntax.
-This feature is useful in <<json-faceting-domain-changes.adoc#adding-domain-filters,filtering domain>> in JSON Facet API <<json-facet-api.adoc#changing-the-domain,domain changes>>. Note that these declarations add request parameters underneath, so using same names with other parameters might cause unexpected behavior.
+This feature is useful in <<json-faceting-domain-changes.adoc#adding-domain-filters,filtering domain>> in JSON Facet API <<json-facet-api.adoc#changing-the-domain,domain changes>>.
+Note that these declarations add request parameters underneath, so using the same names as other parameters might cause unexpected behavior.
 
 == Tagging in JSON Query DSL
 Query and filter clauses can also be individually "tagged". Tags serve as handles for query clauses, allowing them to be referenced from elsewhere in the request.
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index e236904..e73453f 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -415,7 +415,8 @@ If you specify "de" as the language and "CH" as the country, you will get German
 <copyField source="manu" dest="manuGERMAN"/>
 ----
 
-In the example above, we defined the strength as "primary". The strength of the collation determines how strict the sort order will be, but it also depends upon the language.
+In the example above, we defined the strength as "primary".
+The strength of the collation determines how strict the sort order will be, but it also depends upon the language.
 For example, in English, "primary" strength ignores differences in case and accents.
 
 Another example:
@@ -1434,7 +1435,8 @@ See the example under <<Traditional Chinese>>.
 
 === Simplified Chinese
 
-For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model.
+For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>.
+This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model.
 To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
 See the `solr/contrib/analysis-extras/README.md` for information on which jars you need to add.
 
@@ -2486,7 +2488,8 @@ For normalization, there is a `NorwegianNormalizationFilterFactory` which is a v
 ==== Norwegian Light Stemmer
 
 The `NorwegianLightStemFilterFactory` requires a "two-pass" sort for the -dom and -het endings.
-This means that in the first pass the word "kristendom" is stemmed to "kristen", and then all the general rules apply so it will be further stemmed to "krist". The effect of this is that "kristen," "kristendom," "kristendommen," and "kristendommens" will all be stemmed to "krist."
+This means that in the first pass the word "kristendom" is stemmed to "kristen", and then all the general rules apply so it will be further stemmed to "krist".
+The effect of this is that "kristen," "kristendom," "kristendommen," and "kristendommens" will all be stemmed to "krist."
 
 The second pass is to pick up -dom and -het endings.
 Consider this example:
@@ -2608,7 +2611,8 @@ Valid values are:
 
 ==== Norwegian Normalization Filter
 
-This filter normalize use of the interchangeable Scandinavian characters æÆäÄöÖøØåÅ and folded variants (ae, oe and aa) by transforming them to æÆøØåÅ. This is a variant of `ScandinavianNormalizationFilter`, with folding rules customized for Norwegian.
+This filter normalizes the use of the interchangeable Scandinavian characters æÆäÄöÖøØåÅ and folded variants (ae, oe and aa) by transforming them to æÆøØåÅ.
+This is a variant of `ScandinavianNormalizationFilter`, with folding rules customized for Norwegian.
 
 *Factory class:* `solr.NorwegianNormalizationFilterFactory`
 
@@ -2913,7 +2917,9 @@ Scandinavian is a language group spanning three languages <<Norwegian>>, <<Swedi
 Swedish å, ä, ö are in fact the same letters as Norwegian and Danish å, æ, ø and thus interchangeable when used between these languages.
 They are however folded differently when people type them on a keyboard lacking these characters.
 
-In that situation almost all Swedish people use a, a, o instead of å, ä, ö. Norwegians and Danes on the other hand usually type aa, ae and oe instead of å, æ and ø. Some do however use a, a, o, oo, ao and sometimes permutations of everything above.
+In that situation almost all Swedish people use a, a, o instead of å, ä, ö.
+Norwegians and Danes on the other hand usually type aa, ae and oe instead of å, æ and ø.
+Some do however use a, a, o, oo, ao and sometimes permutations of everything above.
 
 There are two filters for helping with normalization between Scandinavian languages: one is `solr.ScandinavianNormalizationFilterFactory` trying to preserve the special characters (æäöå) and another `solr.ScandinavianFoldingFilterFactory` which folds these to the more broad ø/ö -> o, etc.
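The contrast between the two filters can be sketched with a couple of replacement tables. The mappings below are deliberately simplified and purely illustrative; the real filters handle many more cases and operate on token streams:

```python
# Toy sketch: normalization maps interchangeable spellings to canonical
# special characters, while folding collapses them to plain ASCII.
NORMALIZE = {"ä": "æ", "ö": "ø", "aa": "å", "ae": "æ", "oe": "ø"}
FOLD = {"å": "a", "ä": "a", "æ": "a", "ö": "o", "ø": "o"}

def apply_map(text, mapping):
    # Apply longer source sequences first so "aa" wins over single chars.
    for src, dst in sorted(mapping.items(), key=lambda kv: -len(kv[0])):
        text = text.replace(src, dst)
    return text

normalized = apply_map("blaabaer", NORMALIZE)  # keyboard spelling -> "blåbær"
folded = apply_map(normalized, FOLD)           # -> "blabar"
```

Normalization preserves the special characters so matching stays language-aware; folding trades that precision for broader cross-spelling matches.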
 
diff --git a/solr/solr-ref-guide/src/language-detection.adoc b/solr/solr-ref-guide/src/language-detection.adoc
index 60ab7bf..7318e83 100644
--- a/solr/solr-ref-guide/src/language-detection.adoc
+++ b/solr/solr-ref-guide/src/language-detection.adoc
@@ -91,7 +91,8 @@ s|Required |Default: none
 +
 An OpenNLP language detection model.
 +
-The OpenNLP project provides a pre-trained 103 language model on the http://opennlp.apache.org/models.html[OpenNLP site's model dowload page]. Model training instructions are provided on the http://opennlp.apache.org/docs/{ivy-opennlp-version}/manual/opennlp.html#tools.langdetect[OpenNLP website].
+The OpenNLP project provides a pre-trained model covering 103 languages on the http://opennlp.apache.org/models.html[OpenNLP site's model download page].
+Model training instructions are provided on the http://opennlp.apache.org/docs/{ivy-opennlp-version}/manual/opennlp.html#tools.langdetect[OpenNLP website].
 +
 See <<resource-loading.adoc#,Resource Loading>> for information on where to put the model.
 
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index baf5a0a..cd7a74e 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -678,7 +678,8 @@ As an alternative to the above-described `DefaultWrapperModel`, it is possible t
 
 === Applying Changes
 
-The feature store and the model store are both <<managed-resources.adoc#,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
+The feature store and the model store are both <<managed-resources.adoc#,Managed Resources>>.
+Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
 
 === LTR Examples
 
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
index 17ee531..b5cf5bb 100644
--- a/solr/solr-ref-guide/src/managed-resources.adoc
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -144,7 +144,7 @@ This is because it is more common to add a term to an existing list than it is t
 === Managing Synonyms
 
 For the most part, the API for managing synonyms behaves similarly to the API for stop words, except instead of working with a list of words, it uses a map, where the value for each entry in the map is a set of synonyms for a term.
-As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#,configset>> includes a pre-built set of synonym mappings suitable for the sample data that is activated by the following field type definition in `schema.xml`:
+As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#,configset>> includes a pre-built set of synonym mappings suitable for the sample data that is activated by the following field type definition:
 
 [source,xml]
 ----
@@ -245,7 +245,7 @@ See the section <<reindexing.adoc#,Reindexing>> for more information about reind
 
 Metadata about registered ManagedResources is available using the `/schema/managed` endpoint for each collection.
 
-Assuming you have the `managed_en` field type shown above defined in your `schema.xml`, sending a GET request to the following resource will return metadata about which schema-related resources are being managed by the RestManager:
+Assuming you have the `managed_en` field type shown above defined in your schema, sending a GET request to the following resource will return metadata about which schema-related resources are being managed by the RestManager:
 
 [source,bash]
 ----
@@ -306,7 +306,7 @@ For most users, creating resources in this way should never be necessary, since
 
 However, you may want to explicitly delete managed resources if they are no longer being used by a Solr component.
 
-For instance, the managed resource for German that we created above can be deleted because there are no Solr components that are using it, whereas the managed resource for English stop words cannot be deleted because there is a token filter declared in `schema.xml` that is using it.
+For instance, the managed resource for German that we created above can be deleted because there are no Solr components that are using it, whereas the managed resource for English stop words cannot be deleted because there is a token filter declared in the schema that is using it.
 
 [source,bash]
 ----
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index 7068e44..7764b68 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -211,7 +211,8 @@ A mix of ordered and unordered complex phrase queries:
 === Complex Phrase Parser Limitations
 
 Performance is sensitive to the number of unique terms that are associated with a pattern.
-For instance, searching for "a*" will form a large OR clause (technically a SpanOr with many terms) for all of the terms in your index for the indicated field that start with the single letter 'a'. It may be prudent to restrict wildcards to at least two or preferably three letters as a prefix.
+For instance, searching for "a*" will form a large OR clause (technically a SpanOr with many terms) for all of the terms in your index for the indicated field that start with the single letter 'a'.
+It may be prudent to restrict wildcards to at least two or preferably three letters as a prefix.
 Allowing very short prefixes may result in too many low-quality documents being returned.
 
 Note that leading wildcards such as "*a" are also supported, with consequent performance implications.
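An editor's aside, not part of the commit: the minimum-prefix advice above is easy to enforce client-side before a wildcard term ever reaches the complex phrase parser. A sketch, assuming a three-letter threshold per the text's recommendation (the `safe_wildcard` helper name is an invention for illustration):

```python
# Editor's sketch: reject wildcard terms whose literal prefix is too short
# to be cheap for the complex phrase parser.
def safe_wildcard(term, min_prefix=3):
    """Return True if `term` has no wildcard, or its literal prefix
    before the first '*' is at least `min_prefix` characters long."""
    if "*" not in term:
        return True                       # not a wildcard term at all
    if term.startswith("*"):
        return False                      # leading wildcard: empty prefix
    return len(term.split("*", 1)[0]) >= min_prefix
```

For example, `safe_wildcard("appl*")` passes while `safe_wildcard("a*")` and `safe_wildcard("*a")` do not.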
diff --git a/solr/solr-ref-guide/src/package-manager.adoc b/solr/solr-ref-guide/src/package-manager.adoc
index 4a29068..bdcf05b 100644
--- a/solr/solr-ref-guide/src/package-manager.adoc
+++ b/solr/solr-ref-guide/src/package-manager.adoc
@@ -19,7 +19,8 @@
 
 The package manager in Solr allows installation and updating of Solr-specific packages in Solr's cluster environment.
 
-In this system, a _package_ is a set of Java jar files (usually one) containing one or more <<solr-plugins.adoc#,Solr plugins>>. Each jar file is also accompanied by a signature string (which can be verified against a supplied public key).
+In this system, a _package_ is a set of Java jar files (usually one) containing one or more <<solr-plugins.adoc#,Solr plugins>>.
+Each jar file is also accompanied by a signature string (which can be verified against a supplied public key).
 
 A key design aspect of this system is the ability to install or update packages in a cluster environment securely without the need to restart every node.
 
@@ -117,7 +118,7 @@ If you pass `-y` to the command, confirmation can be skipped.
 
 ==== Manual Deploy
 
-It is also possible to deploy a package's collection level plugins manually by editing a configset (e.g., `solrconfig.xml`, `managed-schema`/`schema.xml`, etc.) and reloading the collection.
+It is also possible to deploy a package's collection level plugins manually by editing a <<config-sets.adoc#,configset>> and reloading the collection.
 
 For example, if a package named `mypackage` contains a request handler, we would add it to a configset's `solrconfig.xml` like this:
 
diff --git a/solr/solr-ref-guide/src/parallel-sql-interface.adoc b/solr/solr-ref-guide/src/parallel-sql-interface.adoc
index 8f8511a..b8c47c2 100644
--- a/solr/solr-ref-guide/src/parallel-sql-interface.adoc
+++ b/solr/solr-ref-guide/src/parallel-sql-interface.adoc
@@ -310,7 +310,10 @@ The parallel SQL interface supports and pushes down most common SQL operators, s
 * IN, LIKE, BETWEEN support the NOT keyword to find rows where the condition is not true, such as `fielda NOT LIKE 'day%'`
 * String literals must be wrapped in single-quotes; double-quotes indicate database objects, not string literals.
 * A simplistic LIKE can be used with an asterisk wildcard, such as `field = 'sam*'`; this is Solr specific and not part of the SQL standard.
-* When performing ANDed range queries over a multi-valued field, Apache Calcite short-circuits to zero results if the ANDed predicates appear to be disjoint sets. For example, +++b_is <= 2 AND b_is >= 5+++ appears to Calcite to be disjoint sets, which they are from a single-valued field perspective. However, this may not be the case with multi-valued fields, as Solr might match documents. The work-around is to use Solr query syntax directly inside of an equals expression wrapped in paren [...]
+* When performing ANDed range queries over a multi-valued field, Apache Calcite short-circuits to zero results if the ANDed predicates appear to be disjoint sets.
+For example, +++b_is <= 2 AND b_is >= 5+++ appears to Calcite to be disjoint sets, which they are from a single-valued field perspective.
+However, this may not be the case with multi-valued fields, as Solr might match documents.
+The work-around is to use Solr query syntax directly inside of an equals expression wrapped in parens: +++b_is = '(+[5 TO *] +[* TO 2])'+++
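An editor's aside, not part of the commit: the work-around above can be templatized so the embedded Solr syntax is generated consistently. A minimal Python sketch (the `multivalued_range_predicate` helper and the field/bounds are illustrative assumptions):

```python
# Editor's sketch: render (field >= gte AND field <= lte) as a single
# equals expression containing embedded Solr syntax, so Calcite cannot
# short-circuit the ANDed ranges as disjoint sets.
def multivalued_range_predicate(field, gte, lte):
    """Build the Solr-syntax work-around predicate for a SQL WHERE clause."""
    return "{0} = '(+[{1} TO *] +[* TO {2}])'".format(field, gte, lte)

pred = multivalued_range_predicate("b_is", 5, 2)
# pred == "b_is = '(+[5 TO *] +[* TO 2])'"
```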
 
 === ORDER BY Clause
 
@@ -335,8 +338,10 @@ ORDER BY ... OFFSET 10 FETCH NEXT 10 ROWS ONLY
 ----
 Paging with SQL suffers the same performance penalty as paging in Solr queries using `start` and `rows`, where the distributed query must
 over-fetch `OFFSET` + `LIMIT` documents from each shard and then sort the results from each shard to generate the page of results returned to the client.
-Consequently, this feature should only be used for small OFFSET / FETCH sizes, such as paging up to 10,000 documents per shard. Solr SQL does not enforce any hard limits but the deeper you go into the results,
-each subsequent page request takes longer and consumes more resources. Solr's `cursorMark` feature for deep paging is not supported in SQL; use a SQL query without a `LIMIT` to stream large result sets through the `/export` handler instead.
+Consequently, this feature should only be used for small OFFSET / FETCH sizes, such as paging up to 10,000 documents per shard.
+Solr SQL does not enforce any hard limits, but the deeper you go into the results,
+the longer each subsequent page request takes and the more resources it consumes.
+Solr's `cursorMark` feature for deep paging is not supported in SQL; use a SQL query without a `LIMIT` to stream large result sets through the `/export` handler instead.
 SQL `OFFSET` is not intended for deep-paging type use cases.
 
 === LIMIT Clause
diff --git a/solr/solr-ref-guide/src/partial-document-updates.adoc b/solr/solr-ref-guide/src/partial-document-updates.adoc
index b8bdcb6..ce4a46e 100644
--- a/solr/solr-ref-guide/src/partial-document-updates.adoc
+++ b/solr/solr-ref-guide/src/partial-document-updates.adoc
@@ -19,11 +19,14 @@
 Once you have indexed the content you need in your Solr index, you will want to start thinking about your strategy for dealing with changes to those documents.
 Solr supports three approaches to updating documents that have only partially changed.
 
-The first is _<<Atomic Updates,atomic updates>>_. This approach allows changing only one or more fields of a document without having to reindex the entire document.
+The first is _<<Atomic Updates,atomic updates>>_.
+This approach allows changing only one or more fields of a document without having to reindex the entire document.
 
-The second approach is known as _<<In-Place Updates,in-place updates>>_. This approach is similar to atomic updates (is a subset of atomic updates in some sense), but can be used only for updating single valued non-indexed and non-stored docValue-based numeric fields.
+The second approach is known as _<<In-Place Updates,in-place updates>>_.
+This approach is similar to atomic updates (it is a subset of atomic updates in some sense), but can be used only for updating single-valued, non-indexed, and non-stored docValues-based numeric fields.
 
-The third approach is known as _<<Optimistic Concurrency,optimistic concurrency>>_ or _optimistic locking_. It is a feature of many NoSQL databases, and allows conditional updating a document based on its version.
+The third approach is known as _<<Optimistic Concurrency,optimistic concurrency>>_ or _optimistic locking_.
+It is a feature of many NoSQL databases, and allows conditionally updating a document based on its version.
 This approach includes semantics and rules for how to deal with version matches or mis-matches.
 
 Atomic Updates (and in-place updates) and Optimistic Concurrency may be used as independent strategies for managing changes to documents, or they may be combined: you can use optimistic concurrency to conditionally apply an atomic update.
@@ -360,7 +363,8 @@ $ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/t
     "bbb",1632740120250548224]}
 ----
 
-In this example, we have added 2 documents "aaa" and "bbb". Because we added `versions=true` to the request, the response shows the document version for each document.
+In this example, we have added 2 documents "aaa" and "bbb".
+Because we added `versions=true` to the request, the response shows the document version for each document.
 
 [source,bash]
 ----
@@ -469,7 +473,8 @@ $ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/t
     "ccc",1632740949182382080]}
 ----
 
-In this example, we have added 2 documents "aaa" and "ccc". As we have specified the parameter `\_version_=-1`, this request should not add the document with the id `aaa` because it already exists.
+In this example, we have added 2 documents "aaa" and "ccc".
+As we have specified the parameter `\_version_=-1`, this request should not add the document with the id `aaa` because it already exists.
 The request succeeds and does not throw an error because the `failOnVersionConflicts=false` parameter is specified.
 The response shows that only document `ccc` is added and `aaa` is silently ignored.
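An editor's aside, not part of the commit: the `_version_` rules that optimistic concurrency applies can be expressed as a small predicate. A sketch, assuming the standard semantics of the "Updating Parts of Documents" page (the `version_check_passes` name is an invention for illustration; `current` is the version stored in the index, or `None` when the document is absent):

```python
# Editor's sketch of optimistic concurrency's _version_ semantics.
def version_check_passes(supplied, current):
    if supplied == 0:
        return True                   # 0: no version check is performed
    if supplied == 1:
        return current is not None    # 1: the document must already exist
    if supplied < 0:
        return current is None        # <0 (e.g., -1): document must NOT exist
    return supplied == current        # >1: must match the indexed version exactly
```

Under these rules, the `_version_=-1` request above passes the check for the new document `ccc` (no indexed version) and fails it for the pre-existing `aaa`, which is then silently ignored because `failOnVersionConflicts=false` is set.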
 
diff --git a/solr/solr-ref-guide/src/phonetic-matching.adoc b/solr/solr-ref-guide/src/phonetic-matching.adoc
index 566988f..04ab0af 100644
--- a/solr/solr-ref-guide/src/phonetic-matching.adoc
+++ b/solr/solr-ref-guide/src/phonetic-matching.adoc
@@ -38,7 +38,8 @@ Finally, it applies language-independent rules regarding such things as voiced a
 
 For example, assume that the matches found when searching for Stephen in a database are "Stefan", "Steph", "Stephen", "Steve", "Steven", "Stove", and "Stuffin". "Stefan", "Stephen", and "Steven" are probably relevant, and are names that you want to see.
 "Stuffin", however, is probably not relevant.
-Also rejected were "Steph", "Steve", and "Stove". Of those, "Stove" is probably not one that we would have wanted.
+Also rejected were "Steph", "Steve", and "Stove".
+Of those, "Stove" is probably not one that we would have wanted.
 But "Steph" and "Steve" are possibly ones that you might be interested in.
 
 For Solr, BMPM searching is available for the following languages:
diff --git a/solr/solr-ref-guide/src/ping.adoc b/solr/solr-ref-guide/src/ping.adoc
index dd450d2..0433700 100644
--- a/solr/solr-ref-guide/src/ping.adoc
+++ b/solr/solr-ref-guide/src/ping.adoc
@@ -21,7 +21,8 @@ Choosing Ping under a core name issues a `ping` request to check whether the cor
 .Ping Option in Core Dropdown
 image::images/ping/ping.png[image,width=171,height=195]
 
-The search executed by a Ping is configured with the <<request-parameters-api.adoc#,Request Parameters API>>. See <<implicit-requesthandlers.adoc#,Implicit Request Handlers>> for the paramset to use for the `/admin/ping` endpoint.
+The search executed by a Ping is configured with the <<request-parameters-api.adoc#,Request Parameters API>>.
+See <<implicit-requesthandlers.adoc#,Implicit Request Handlers>> for the paramset to use for the `/admin/ping` endpoint.
 
 The Ping option doesn't open a page, but the status of the request can be seen on the core overview page shown when clicking on a collection name.
 The length of time the request has taken is displayed next to the Ping option, in milliseconds.
diff --git a/solr/solr-ref-guide/src/query-elevation-component.adoc b/solr/solr-ref-guide/src/query-elevation-component.adoc
index 97a9100..9a132a2 100644
--- a/solr/solr-ref-guide/src/query-elevation-component.adoc
+++ b/solr/solr-ref-guide/src/query-elevation-component.adoc
@@ -24,7 +24,7 @@ Although this component will work with any QueryParser, it makes the most sense
 
 The Query Elevation Component also supports distributed searching.
 
-All of the sample configuration and queries used in this section assume you are running Solr's "```techproducts```" example:
+All of the sample configuration and queries used in this section assume you are running Solr's "techproducts" example:
 
 [source,bash]
 ----
diff --git a/solr/solr-ref-guide/src/query-syntax-and-parsers.adoc b/solr/solr-ref-guide/src/query-syntax-and-parsers.adoc
index 16a2ad2..f2419de 100644
--- a/solr/solr-ref-guide/src/query-syntax-and-parsers.adoc
+++ b/solr/solr-ref-guide/src/query-syntax-and-parsers.adoc
@@ -40,7 +40,8 @@ This section explains how to specify a query parser and describes the syntax and
 There are some query parameters common to all Solr parsers; these are discussed in the section <<common-query-parameters.adoc#common-query-parameters,Common Query Parameters>>.
 
 Query parsers are also called `QParserPlugins`.
-They are all subclasses of {solr-javadocs}/core/org/apache/solr/search/QParserPlugin.html[QParserPlugin]. If you have custom parsing needs, you may want to extend that class to create your own query parser.
+They are all subclasses of {solr-javadocs}/core/org/apache/solr/search/QParserPlugin.html[QParserPlugin].
+If you have custom parsing needs, you may want to extend that class to create your own query parser.
 
 ****
 // This tags the below list so it can be used in the parent page section list
diff --git a/solr/solr-ref-guide/src/reindexing.adoc b/solr/solr-ref-guide/src/reindexing.adoc
index 5585382..070d50c 100644
--- a/solr/solr-ref-guide/src/reindexing.adoc
+++ b/solr/solr-ref-guide/src/reindexing.adoc
@@ -22,7 +22,8 @@ These changes include editing properties of fields or field types; adding fields
 
 It's important to be aware that failing to reindex can have both obvious and subtle consequences for Solr or for users finding what they are looking for.
 
-"Reindex" in this context means _first delete the existing index and repeat the process you used to ingest the entire corpus from the system-of-record_. It is strongly recommended that Solr users have a consistent, repeatable process for indexing so that the indexes can be recreated as the need arises.
+"Reindex" in this context means _first delete the existing index and repeat the process you used to ingest the entire corpus from the system-of-record_.
+It is strongly recommended that Solr users have a consistent, repeatable process for indexing so that the indexes can be recreated as the need arises.
 
 [CAUTION]
 ====
@@ -76,7 +77,8 @@ Any change to the index-time analysis chain requires reindexing in almost all ca
 
 === Solrconfig Changes
 Identifying changes to solrconfig.xml that alter how data is ingested and thus require reindexing is less straightforward.
-The general rule is "anything that changes what gets stored in the index requires reindexing". Here are several known examples.
+The general rule is "anything that changes what gets stored in the index requires reindexing".
+Here are several known examples.
 
 The parameter `luceneMatchVersion` in solrconfig.xml controls the compatibility of Solr with Lucene.
 Since this parameter can change the rules for analysis behind the scenes, it's always recommended to reindex when changing it.
@@ -116,7 +118,8 @@ They allow you to recreate the Lucene index without having Lucene segments linge
 
 [CAUTION]
 ====
-A Lucene index is a _lossy abstraction designed for fast search_. Once a document is added to the index, the original data cannot be assumed to be available.
+A Lucene index is a _lossy abstraction designed for fast search_.
+Once a document is added to the index, the original data cannot be assumed to be available.
 Therefore it is not possible for Lucene to "fix up" existing documents to reflect changes to the schema; they must be indexed again.
 
 There are a number of technical reasons why re-ingesting all documents without first deleting the entire corpus is difficult and error-prone to code and maintain.
diff --git a/solr/solr-ref-guide/src/response-writers.adoc b/solr/solr-ref-guide/src/response-writers.adoc
index 168327d..de20edd 100644
--- a/solr/solr-ref-guide/src/response-writers.adoc
+++ b/solr/solr-ref-guide/src/response-writers.adoc
@@ -74,7 +74,7 @@ Here is a sample response for a simple query like `q=id:VS1GB400C3`:
   }}
 ----
 
-The default mime type for the JSON writer is `application/json`, however this can be overridden in the `solrconfig.xml` - such as in this example from the "`techproducts`" configuration:
+The default mime type for the JSON writer is `application/json`, however this can be overridden in the `solrconfig.xml` - such as in this example from the "techproducts" configset:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/result-clustering.adoc b/solr/solr-ref-guide/src/result-clustering.adoc
index 9b08687..4ace945 100644
--- a/solr/solr-ref-guide/src/result-clustering.adoc
+++ b/solr/solr-ref-guide/src/result-clustering.adoc
@@ -26,7 +26,8 @@ The *clustering* (or *cluster analysis*) plugin attempts to automatically discov
 
 The clustering algorithm in Solr is applied to the documents included in the search results of each query -- this is called _on-line_ clustering.
 
-Clusters discovered for a given query can be perceived as _dynamic facets_. This is beneficial when regular faceting is difficult (field values are not known in advance) or when the queries are exploratory in nature.
+Clusters discovered for a given query can be perceived as _dynamic facets_.
+This is beneficial when regular faceting is difficult (field values are not known in advance) or when the queries are exploratory in nature.
 Take a look at the https://search.carrot2.org/#/search/web/apache%20solr/treemap[Carrot^2^] project's demo page to see an example of search results clustering in action (the groups in the visualization have been discovered automatically in search results to the right, there is no external information involved).
 
 image::images/result-clustering/carrot2.png[image,width=900]
@@ -112,7 +113,7 @@ The `labels` element of each cluster is a dynamically discovered phrase that des
 
 == Solr Distribution Example
 
-The `techproducts` example included with Solr is pre-configured with all the necessary components for result clustering -- but they are disabled by default.
+The "techproducts" example included with Solr is pre-configured with all the necessary components for result clustering -- but they are disabled by default.
 
 To enable the clustering component extension and the dedicated search handler configured to use it, specify a JVM System Property when running the example:
 
@@ -325,7 +326,8 @@ A commercial clustering algorithm `Lingo3G` plugs into the same extension point
 ****
 The question of which algorithm to choose depends on the amount of traffic, the expected result, and the input data (each algorithm will cluster the input slightly differently).
 There is no single answer to which algorithm is "the best": Lingo3G provides hierarchical clusters, while Lingo and STC provide flat clusters.
-STC is faster than Lingo, but arguably produces less intuitive clusters, Lingo3G is the fastest algorithm but is not free or open source... Experiment and pick one that suits your needs.
+STC is faster than Lingo, but arguably produces less intuitive clusters; Lingo3G is the fastest algorithm but is not free or open source...
+Experiment and pick one that suits your needs.
 
 For a comparison of characteristics of these algorithms see the following links:
 
@@ -475,7 +477,8 @@ We highly recommend tuning both for production uses.
 Improving the default language resources to include words and phrases common to a particular document domain will improve clustering quality significantly.
 
 Carrot^2^ algorithms have an extensive set of parameters and language resource tuning options.
-Please refer to https://carrot2.github.io/release/latest/[up-to-date project documentation]. In particular, the language resources section and each algorithm's attributes section.
+Please refer to https://carrot2.github.io/release/latest/[up-to-date project documentation].
+In particular, the language resources section and each algorithm's attributes section.
 
 
 === Changing Clustering Algorithm Parameters
@@ -526,7 +529,7 @@ The following rules apply.
 
 * If the parameter is added to the engine's configuration in `solrconfig.xml`, the core must be reloaded for the changes to be picked up.
 Alternatively, pass the parameter via the request URL to change things dynamically on a per-request basis.
-For example, if you have the `techproducts` example running, this will cut the clusters to only those containing at least three documents:
+For example, if you have the "techproducts" example running, this will cut the clusters to only those containing at least three documents:
  `http://localhost:8983/solr/techproducts/clustering?q=\*:*&rows=100&wt=json&preprocessing.documentAssigner.minClusterSize=3`
 
 * For complex types, the parameter key with the name of the instantiated type must precede any of its own parameters.
diff --git a/solr/solr-ref-guide/src/result-grouping.adoc b/solr/solr-ref-guide/src/result-grouping.adoc
index d3ddf1e..6335092 100644
--- a/solr/solr-ref-guide/src/result-grouping.adoc
+++ b/solr/solr-ref-guide/src/result-grouping.adoc
@@ -29,7 +29,8 @@ There are features unique to both, and they have different performance character
 That said, in most cases Collapse and Expand is preferable to Result Grouping.
 ====
 
-Result Grouping is separate from <<faceting.adoc#,Faceting>>. Though it is conceptually similar, faceting returns all relevant results and allows the user to refine the results based on the facet category.
+Result Grouping is separate from <<faceting.adoc#,Faceting>>.
+Though it is conceptually similar, faceting returns all relevant results and allows the user to refine the results based on the facet category.
 For example, if you search for "shoes" on a footwear retailer's e-commerce site, Solr would return all results for that query term, along with selectable facets such as "size," "color," "brand," and so on.
 
 You can however combine grouping with faceting.
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index 1f51820..77a758b 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -462,7 +462,7 @@ As an example, consider the permissions below:
 All of the permissions in this list match `/select` queries.
 But different permissions will be used depending on the collection being queried.
 
-For a query to the `techproducts` collection, permission 3 will be used because it specifically targets `techproducts`.
+For a query to the "techproducts" collection, permission 3 will be used because it specifically targets "techproducts".
 Only users with the `other` role will be authorized.
 
 For a query to a collection called `collection1` on the other hand, the most specific permission present is permission 2, so _all_ roles are given access.
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index a90f466..8890d9c 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -93,7 +93,7 @@ Previously indexed documents will *not* be automatically handled - they *must* b
 The `add-field` command adds a new field definition to your schema.
 If a field with the same name exists an error is thrown.
 
-All of the properties available when defining a field with manual `schema.xml` edits can be passed via the API.
+All of the properties available when defining a field with manual schema edits can be passed via the API.
 These request attributes are described in detail in the section <<fields.adoc#,Fields>>.
 
 For example, to define a new stored field named "sell_by", of type "pdate", you would POST the following request:
@@ -167,7 +167,7 @@ The `replace-field` command replaces a field's definition.
 Note that you must supply the full definition for a field - this command will *not* partially modify a field's definition.
 If the field does not exist in the schema an error is thrown.
 
-All of the properties available when defining a field with manual `schema.xml` edits can be passed via the API.
+All of the properties available when defining a field with manual schema edits can be passed via the API.
 These request attributes are described in detail in the section <<fields.adoc#,Fields>>.
 
 For example, to replace the definition of an existing field "sell_by", to make it be of type "date" and to not be stored, you would POST the following request:
@@ -207,7 +207,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `add-dynamic-field` command adds a new dynamic field rule to your schema.
 
-All of the properties available when editing `schema.xml` can be passed with the POST request.
+All of the properties available when editing the schema can be passed with the POST request.
 The section <<dynamic-fields.adoc#,Dynamic Fields>> has details on all of the attributes that can be defined for a dynamic field rule.
 
 For example, to create a new dynamic field rule where all incoming fields ending with "_s" would be stored and have field type "string", you can POST a request like this:
@@ -281,7 +281,7 @@ The `replace-dynamic-field` command replaces a dynamic field rule in your schema
 Note that you must supply the full definition for a dynamic field rule - this command will *not* partially modify a dynamic field rule's definition.
 If the dynamic field rule does not exist in the schema an error is thrown.
 
-All of the properties available when editing `schema.xml` can be passed with the POST request.
+All of the properties available when editing the schema can be passed with the POST request.
 The section <<dynamic-fields.adoc#,Dynamic Fields>> has details on all of the attributes that can be defined for a dynamic field rule.
 
 For example, to replace the definition of the "*_s" dynamic field rule with one where the field type is "text_general" and it's not stored, you can POST a request like this:
@@ -321,7 +321,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `add-field-type` command adds a new field type to your schema.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request.
+All of the field type properties available when editing the schema by hand are available for use in a POST request.
 The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc.
 Details of all of the available options are described in the section <<field-types.adoc#,Field Types>>.
 
@@ -440,7 +440,7 @@ The `replace-field-type` command replaces a field type in your schema.
 Note that you must supply the full definition for a field type - this command will *not* partially modify a field type's definition.
 If the field type does not exist in the schema an error is thrown.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request.
+All of the field type properties available when editing the schema by hand are available for use in a POST request.
 The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc.
 Details of all of the available options are described in the section <<field-types.adoc#,Field Types>>.
 
@@ -487,7 +487,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `add-copy-field` command adds a new copy field rule to your schema.
 
-The attributes supported by the command are the same as when creating copy field rules by manually editing the `schema.xml`, as below:
+The attributes supported by the command are the same as when creating copy field rules by manually editing the schema, as below:
 
 `source`::
 +
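An editor's aside, not part of the commit: the `add-field` hunks above note that any property usable in a manual schema edit can be passed via the API. A minimal Python sketch of building that command body, mirroring the "sell_by"/"pdate" example mentioned in the hunks (the `add_field_command` helper name is an invention for illustration):

```python
import json

# Editor's sketch: build the JSON command body POSTed to
# /solr/<collection>/schema for the Schema API's add-field command.
def add_field_command(name, field_type, **props):
    """Any property allowed in a manual schema edit may be passed via props."""
    field = {"name": name, "type": field_type}
    field.update(props)
    return json.dumps({"add-field": field})

body = add_field_command("sell_by", "pdate", stored=True)
# body == '{"add-field": {"name": "sell_by", "type": "pdate", "stored": true}}'
```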
diff --git a/solr/solr-ref-guide/src/schema-elements.adoc b/solr/solr-ref-guide/src/schema-elements.adoc
index 31b1358..d470988 100644
--- a/solr/solr-ref-guide/src/schema-elements.adoc
+++ b/solr/solr-ref-guide/src/schema-elements.adoc
@@ -33,7 +33,7 @@ However, the way you interact with the file will change.
 If you are using the managed schema, it is expected that you only interact with the file with the Schema API, and never make manual edits.
 If you do not use the managed schema, you will only be able to make manual edits to the file; the Schema API will not support any modifications.
 
-Note that if you are not using the Schema API yet you do use SolrCloud, you will need to interact with `schema.xml` through ZooKeeper using `upconfig` and `downconfig` commands to make a local copy and upload your changes.
+Note that if you are not using the Schema API but are using SolrCloud, you will need to interact with the schema file through ZooKeeper using the `upconfig` and `downconfig` commands to make a local copy and upload your changes.
 The options for doing this are described in <<solr-control-script-reference.adoc#,Solr Control Script Reference>> and <<zookeeper-file-management.adoc#,ZooKeeper File Management>>.
 
 == Structure of the Schema File
@@ -97,7 +97,7 @@ Similarity is a Lucene class used to score a document in searching.
 Each collection has one "global" Similarity.
 By default Solr uses an implicit {solr-javadocs}/core/org/apache/solr/search/similarities/SchemaSimilarityFactory.html[`SchemaSimilarityFactory`] which allows individual field types to be configured with a "per-type" specific Similarity and implicitly uses `BM25Similarity` for any field type which does not have an explicit Similarity.
 
-This default behavior can be overridden by declaring a top level `<similarity/>` element in your `schema.xml`, outside of any single field type.
+This default behavior can be overridden by declaring a top level `<similarity/>` element in your schema, outside of any single field type.
 This similarity declaration can either refer directly to the name of a class with a no-argument constructor, such as in this example showing `BM25Similarity`:
 
 [source,xml]
@@ -117,7 +117,7 @@ or by referencing a `SimilarityFactory` implementation, which may take optional
 </similarity>
 ----
 
-In most cases, specifying global level similarity like this will cause an error if your `schema.xml` also includes field type specific `<similarity/>` declarations.
+In most cases, specifying global level similarity like this will cause an error if your schema also includes field type specific `<similarity/>` declarations.
 One key exception to this is that you may explicitly declare a {solr-javadocs}/core/org/apache/solr/search/similarities/SchemaSimilarityFactory.html[`SchemaSimilarityFactory`] and specify what that default behavior will be for all field types that do not declare an explicit Similarity using the name of field type (specified by `defaultSimFromFieldType`) that _is_ configured with a specific similarity:
 
 [source,xml]
diff --git a/solr/solr-ref-guide/src/script-update-processor.adoc b/solr/solr-ref-guide/src/script-update-processor.adoc
index ed535d5..5ca3cdb 100644
--- a/solr/solr-ref-guide/src/script-update-processor.adoc
+++ b/solr/solr-ref-guide/src/script-update-processor.adoc
@@ -126,7 +126,7 @@ You can see the message recorded in the Solr logging UI.
 
 === Javascript
 
-Note: There is a JavaScript example `update-script.js` as part of the `techproducts` configset.
+Note: There is a JavaScript example `update-script.js` as part of the "techproducts" configset.
 Check `solrconfig.xml` and uncomment the update request processor definition to enable this feature.
 
 [source,javascript]
diff --git a/solr/solr-ref-guide/src/shard-management.adoc b/solr/solr-ref-guide/src/shard-management.adoc
index f84f7d4..544377e 100644
--- a/solr/solr-ref-guide/src/shard-management.adoc
+++ b/solr/solr-ref-guide/src/shard-management.adoc
@@ -271,7 +271,8 @@ Current implementation details and limitations:
 === SPLITSHARD Response
 
 The output will include the status of the request and the new shard names, which will use the original shard as their basis, adding an underscore and a number.
-For example, "shard1" will become "shard1_0" and "shard1_1". If the status is anything other than "success", an error message will explain why the request failed.
+For example, "shard1" will become "shard1_0" and "shard1_1".
+If the status is anything other than "success", an error message will explain why the request failed.
 
 [[createshard]]
 == CREATESHARD: Create a Shard
diff --git a/solr/solr-ref-guide/src/solr-glossary.adoc b/solr/solr-ref-guide/src/solr-glossary.adoc
index 319ca0b..ad16513 100644
--- a/solr/solr-ref-guide/src/solr-glossary.adoc
+++ b/solr/solr-ref-guide/src/solr-glossary.adoc
@@ -66,7 +66,7 @@ Multiple cores can run on a single node.
 See also <<solrclouddef,SolrCloud>>.
 
 [[corereload]]Core reload::
-To re-initialize a Solr core after changes to `schema.xml`, `solrconfig.xml` or other configuration files.
+To re-initialize a Solr core after changes to the schema file, `solrconfig.xml` or other configuration files.
 
 [[SolrGlossary-D]]
 === D
@@ -76,7 +76,8 @@ Distributed search is one where queries are processed across more than one <<sha
 
 [[document]]Document::
 A group of <<field,fields>> and their values.
-Documents are the basic unit of data in a <<collection,collection>>. Documents are assigned to <<shard,shards>> using standard hashing, or by specifically assigning a shard within the document ID.
+Documents are the basic unit of data in a <<collection,collection>>.
+Documents are assigned to <<shard,shards>> using standard hashing, or by specifically assigning a shard within the document ID.
 Documents are versioned after each write operation.
 
 [[SolrGlossary-E]]
@@ -120,7 +121,8 @@ See also <<solrclouddef,SolrCloud>>.
 === M
 
 [[metadata]]Metadata::
-Literally, _data about data_. Metadata is information about a document, such as its title, author, or location.
+Literally, _data about data_.
+Metadata is information about a document, such as its title, author, or location.
 
 [[SolrGlossary-N]]
 === N
@@ -177,7 +179,8 @@ Logic and configuration parameters used by request handlers to process query req
 Examples of search components include faceting, highlighting, and "more like this" functionality.
 
 [[shard]]Shard::
-In SolrCloud, a logical partition of a single <<collection,Collection>>. Every shard consists of at least one physical <<replica,Replica>>, but there may be multiple Replicas distributed across multiple <<node,Nodes>> for fault tolerance.
+In SolrCloud, a logical partition of a single <<collection,Collection>>.
+Every shard consists of at least one physical <<replica,Replica>>, but there may be multiple Replicas distributed across multiple <<node,Nodes>> for fault tolerance.
 See also <<solrclouddef,SolrCloud>>.
 
 [[solrclouddef]]<<cluster-types.adoc#solrcloud-mode,SolrCloud>>::
@@ -218,7 +221,8 @@ See http://en.wikipedia.org/wiki/Tf-idf and {lucene-javadocs}/core/org/apache/lu
 See also <<idf,Inverse document frequency (IDF)>>.
 
 [[transactionlog]]Transaction log::
-An append-only log of write operations maintained by each <<replica,Replica>>. This log is required with SolrCloud implementations and is created and managed automatically by Solr.
+An append-only log of write operations maintained by each <<replica,Replica>>.
+This log is required with SolrCloud implementations and is created and managed automatically by Solr.
 
 [[SolrGlossary-W]]
 === W
@@ -230,6 +234,7 @@ A wildcard allows a substitution of one or more letters of a word to account for
 === Z
 
 [[zookeeper]]ZooKeeper::
-Also known as http://zookeeper.apache.org/[Apache ZooKeeper]. The system used by SolrCloud to keep track of configuration files and node names for a cluster.
+Also known as http://zookeeper.apache.org/[Apache ZooKeeper].
+The system used by SolrCloud to keep track of configuration files and node names for a cluster.
 A ZooKeeper cluster is used as the central configuration store for the cluster, a coordinator for operations requiring distributed synchronization, and the system of record for cluster topology.
 See also <<solrclouddef,SolrCloud>>.
diff --git a/solr/solr-ref-guide/src/solr-in-docker.adoc b/solr/solr-ref-guide/src/solr-in-docker.adoc
index d5393f8..d4f87a5 100644
--- a/solr/solr-ref-guide/src/solr-in-docker.adoc
+++ b/solr/solr-ref-guide/src/solr-in-docker.adoc
@@ -299,7 +299,8 @@ jattach 10 jcmd GC.heap_info
 
 == Updating from Solr 5-7 to 8+
 
-In Solr 8, the Solr Docker image switched from just extracting the Solr tar, to using the <<taking-solr-to-production.adoc#service-installation-script,service installation script>>. This was done for various reasons: to bring it in line with the recommendations by the Solr Ref Guide and to make it easier to mount volumes.
+In Solr 8, the Solr Docker image switched from simply extracting the Solr tar to using the <<taking-solr-to-production.adoc#service-installation-script,service installation script>>.
+This was done for various reasons: to bring it in line with the recommendations by the Solr Ref Guide and to make it easier to mount volumes.
 
 This is a backwards incompatible change, and means that if you're upgrading from an older version, you will most likely need to make some changes.
 If you don't want to upgrade at this time, specify `solr:7` as your container image.
diff --git a/solr/solr-ref-guide/src/solr-plugins.adoc b/solr/solr-ref-guide/src/solr-plugins.adoc
index dda54b3..3ec40f5 100644
--- a/solr/solr-ref-guide/src/solr-plugins.adoc
+++ b/solr/solr-ref-guide/src/solr-plugins.adoc
@@ -36,7 +36,8 @@ One resource is the Solr Wiki documentation on plugins at https://cwiki.apache.o
 There are essentially two types of plugins in Solr:
 
 * Collection level plugins.
-These are registered on individual collections, either by hand-editing the `solrconfig.xml` or schema files for the collection's configset or by using the <<config-api.adoc#,config API>> or <<schema-api.adoc#,schema API>>. Examples of these are query parsers, request handlers, update request processors, value source parsers, response writers etc.
+These are registered on individual collections, either by hand-editing the `solrconfig.xml` or schema files for the collection's configset or by using the <<config-api.adoc#,config API>> or <<schema-api.adoc#,schema API>>.
+Examples of these are query parsers, request handlers, update request processors, value source parsers, response writers, etc.
 
 * Cluster level (or Core Container level) plugins.
 These are plugins that are installed at the cluster level, and every Solr node has one instance of each of these plugins.
diff --git a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
index 4356acd..250a980 100644
--- a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
+++ b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
@@ -222,7 +222,8 @@ See the section <<json-facet-api.adoc#relatedness-options,relatedness() Options>
 
 *solr.in.sh / solr.in.cmd*
 
-* Solr has relied on the `SOLR_STOP_WAIT` parameter defined in `solr.in.sh` or `solr.in.cmd` to determine how long to wait on _startup_. A new parameter `SOLR_START_WAIT` allows defining how long Solr should wait for start up to complete.
+* Solr has relied on the `SOLR_STOP_WAIT` parameter defined in `solr.in.sh` or `solr.in.cmd` to determine how long to wait on _startup_.
+A new parameter `SOLR_START_WAIT` allows defining how long Solr should wait for start up to complete.
 +
 If the time set by this parameter is exceeded, Solr will exit the startup process and return the last few lines of the `solr.log` file to the terminal.
 +
@@ -743,7 +744,8 @@ If you prefer to use CMS or any other GC method, you can modify the `GC_TUNE` se
 == Upgrading from 7.x Releases
 
 The upgrade from 7.x to Solr 8.0 introduces several major changes that you should be aware of before upgrading.
-These changes are described in the section <<major-changes-in-solr-8.adoc#,Major Changes in Solr 8>>. It's strongly recommended that you do a thorough review of that section before starting your upgrade.
+These changes are described in the section <<major-changes-in-solr-8.adoc#,Major Changes in Solr 8>>.
+It's strongly recommended that you do a thorough review of that section before starting your upgrade.
 
 [NOTE]
 If you run in SolrCloud mode, you must be on Solr version 7.3 or higher in order to upgrade to 8.x.
diff --git a/solr/solr-ref-guide/src/solrcloud-shards-indexing.adoc b/solr/solr-ref-guide/src/solrcloud-shards-indexing.adoc
index 1b0d16e..6595eb4 100644
--- a/solr/solr-ref-guide/src/solrcloud-shards-indexing.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-shards-indexing.adoc
@@ -127,7 +127,8 @@ If you use the `compositeId` router (the default), you can send documents with a
 The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently.
 
 For example, if you want to co-locate documents for a customer, you could use the customer name or ID as the prefix.
-If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
+If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345".
+The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
 
 Then at query time, you include the prefix(es) in your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM!`) to direct queries to specific shards.
 In some situations, this may improve query performance because it overcomes network latency when querying all the shards.
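The ID and query construction described above can be sketched in shell (the customer name and document ID are the hypothetical values from the example):

```shell
# Build a routed document ID for customer "IBM" and document "12345".
# The '!' separates the routing prefix from the rest of the ID.
CUSTOMER="IBM"
DOC_ID="12345"
ROUTED_ID="${CUSTOMER}!${DOC_ID}"
echo "$ROUTED_ID"    # prints IBM!12345

# At query time, pass the same prefix via the _route_ parameter
# so only the matching shard(s) are queried.
QUERY="q=solr&_route_=${CUSTOMER}!"
echo "$QUERY"        # prints q=solr&_route_=IBM!
```

Any consistent prefix works; the point is that indexing and querying must agree on it.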
@@ -166,7 +167,8 @@ In most cases, when running in SolrCloud mode, indexing client applications shou
 Rather, you should configure auto commits with `openSearcher=false` and `autoSoftCommit` to make recent updates visible in search requests.
 This ensures that auto commits occur on a regular schedule in the cluster.
 
-NOTE: Using `autoSoftCommit` or `commitWithin` requires the client app to embrace the realities of "eventual consistency". Solr will make documents searchable at _roughly_ the same time across replicas of a collection but there are no hard guarantees.
+NOTE: Using `autoSoftCommit` or `commitWithin` requires the client app to embrace the realities of "eventual consistency".
+Solr will make documents searchable at _roughly_ the same time across replicas of a collection but there are no hard guarantees.
 Consequently, in rare cases, it's possible for a document to show up in one search only for it not to appear in a subsequent search occurring immediately after the first search when the second search is routed to a different replica.
 Also, documents added in a particular order (even in the same batch) might become searchable out of the order of submission when there is sharding.
 The document will become visible on all replicas of a shard after the next `autoCommit` or `commitWithin` interval expires.
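As a sketch of the commit settings described above, the relevant `solrconfig.xml` fragment might look like this (the time values are illustrative only, not recommendations):

```xml
<!-- Hard commits flush updates to stable storage on a schedule,
     but do not open a new searcher. -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- Soft commits make recent updates visible to search requests. -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```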
diff --git a/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
index cc9e8e7..fc3e56c 100644
--- a/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
@@ -22,9 +22,9 @@ All of the required configuration is already set up in the sample configurations
 You only need to add the following if you are migrating old configuration files.
 Do not remove these files and parameters from a new Solr instance if you intend to use Solr in SolrCloud mode.
 
-These properties exist in 3 files: `schema.xml`, `solrconfig.xml`, and `solr.xml`.
+These properties exist in 3 files: `schema.xml` or `managed-schema`, `solrconfig.xml`, and `solr.xml`.
 
-. In `schema.xml`, you must have a `\_version_` field defined:
+. In the schema file, you must have a `\_version_` field defined:
 +
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index d64cd7c..23c6941 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -63,7 +63,8 @@ For indexing non-geodetic points, it depends.
 Use `x y` (a space) if RPT.
 For PointType however, use `x,y` (a comma).
 
-If you'd rather use a standard industry format, Solr supports https://en.wikipedia.org/wiki/Well-known_text[WKT] and http://geojson.org/[GeoJSON]. However it's much bulkier than the raw coordinates for such simple data.
+If you'd rather use a standard industry format, Solr supports https://en.wikipedia.org/wiki/Well-known_text[WKT] and http://geojson.org/[GeoJSON].
+However, it's much bulkier than the raw coordinates for such simple data.
 (Not supported by the deprecated LatLonType or PointType)
 
 === Indexing GeoJSON and WKT
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
index 48f9bbc..37789ea 100644
--- a/solr/solr-ref-guide/src/spell-checking.adoc
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -173,7 +173,8 @@ The results are combined and collations can contain a mix of corrections from bo
 
 === Add It to a Request Handler
 
-Queries will be sent to a <<query-syntax-and-parsers.adoc#,RequestHandler>>. If every request should generate a suggestion, then you would add the following to the `requestHandler` that you are using:
+Queries will be sent to a <<query-syntax-and-parsers.adoc#,RequestHandler>>.
+If every request should generate a suggestion, then you would add the following to the `requestHandler` that you are using:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/standard-query-parser.adoc b/solr/solr-ref-guide/src/standard-query-parser.adoc
index 2d0bcc0..e7effed 100644
--- a/solr/solr-ref-guide/src/standard-query-parser.adoc
+++ b/solr/solr-ref-guide/src/standard-query-parser.adoc
@@ -46,7 +46,8 @@ Default parameter values are specified in `solrconfig.xml`, or overridden by que
 == Standard Query Parser Response
 
 By default, the response from the standard query parser contains one `<result>` block, which is unnamed.
-If the <<common-query-parameters.adoc#debug-parameter,`debug` parameter>> is used, then an additional `<lst>` block will be returned, using the name "debug". This will contain useful debugging info, including the original query string, the parsed query string, and explain info for each document in the <result> block.
+If the <<common-query-parameters.adoc#debug-parameter,`debug` parameter>> is used, then an additional `<lst>` block will be returned, using the name "debug".
+This will contain useful debugging info, including the original query string, the parsed query string, and explain info for each document in the `<result>` block.
 If the <<common-query-parameters.adoc#explainother-parameter,`explainOther` parameter>> is also used, then additional explain info will be provided for all the documents matching that query.
 
 === Sample Responses
diff --git a/solr/solr-ref-guide/src/stats-component.adoc b/solr/solr-ref-guide/src/stats-component.adoc
index 371e0dc..4a36eff 100644
--- a/solr/solr-ref-guide/src/stats-component.adoc
+++ b/solr/solr-ref-guide/src/stats-component.adoc
@@ -18,7 +18,7 @@
 
 The Stats component returns simple statistics for numeric, string, and date fields within the document set.
 
-The sample queries in this section assume you are running the "```techproducts```" example included with Solr:
+The sample queries in this section assume you are running the "techproducts" example included with Solr:
 
 [source,bash]
 ----
@@ -141,7 +141,8 @@ This statistic is computed for numeric and date field types and is computed by d
 
 `percentiles`::
 A list of percentile values based on cut-off points specified by the parameter value, such as `1,99,99.9`.
-These values are an approximation, using the https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[t-digest algorithm]. This statistic is computed for numeric field types and is not computed by default.
+These values are an approximation, using the https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[t-digest algorithm].
+This statistic is computed for numeric field types and is not computed by default.
 
 `distinctValues`::
 The set of all distinct values for the field/function in all of the documents in the set.
diff --git a/solr/solr-ref-guide/src/stream-decorator-reference.adoc b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
index 85ad0c1..682f8aa 100644
--- a/solr/solr-ref-guide/src/stream-decorator-reference.adoc
+++ b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
@@ -388,7 +388,8 @@ As you can see in the examples above, the `cartesianProduct` function does suppo
 == classify
 
 The `classify` function classifies tuples using a logistic regression text classification model.
-It was designed specifically to work with models trained using the <<stream-source-reference.adoc#train,train function>>. The `classify` function uses the <<stream-source-reference.adoc#model,model function>> to retrieve a stored model and then scores a stream of tuples using the model.
+It was designed specifically to work with models trained using the <<stream-source-reference.adoc#train,train function>>.
+The `classify` function uses the <<stream-source-reference.adoc#model,model function>> to retrieve a stored model and then scores a stream of tuples using the model.
 The tuples read by the classifier must contain a text field that can be used for classification.
 The classify function uses a Lucene analyzer to extract the features from the text so the model can be applied.
 By default the `classify` function looks for the analyzer using the name of the text field in the tuple.
diff --git a/solr/solr-ref-guide/src/stream-source-reference.adoc b/solr/solr-ref-guide/src/stream-source-reference.adoc
index 5be176b..770e70c 100644
--- a/solr/solr-ref-guide/src/stream-source-reference.adoc
+++ b/solr/solr-ref-guide/src/stream-source-reference.adoc
@@ -450,7 +450,8 @@ random(baskets,
        fl="basketID")
 ----
 
-In the example above the `random` function is searching the baskets collections for all rows where "productID:productX". It will return 100 pseudo-random results.
+In the example above, the `random` function is searching the baskets collection for all rows where "productID:productX".
+It will return 100 pseudo-random results.
 The field list returned is the basketID.
 
 == significantTerms
diff --git a/solr/solr-ref-guide/src/suggester.adoc b/solr/solr-ref-guide/src/suggester.adoc
index f3ca4c5..e21888d 100644
--- a/solr/solr-ref-guide/src/suggester.adoc
+++ b/solr/solr-ref-guide/src/suggester.adoc
@@ -30,10 +30,10 @@ The main features of this Suggester are:
 * Term dictionary pluggability, giving you the flexibility to choose the dictionary implementation
 * Distributed support
 
-The `solrconfig.xml` found in Solr's "```techproducts```" example has a Suggester implementation configured already.
+The `solrconfig.xml` found in Solr's "techproducts" example has a Suggester implementation configured already.
 For more on search components, see the section <<requesthandlers-searchcomponents.adoc#,Request Handlers and Search Components>>.
 
-The "```techproducts```" example `solrconfig.xml` has a `suggest` search component and a `/suggest` request handler already configured.
+The "techproducts" example `solrconfig.xml` has a `suggest` search component and a `/suggest` request handler already configured.
 You can use that as the basis for your configuration, or create it from scratch, as detailed below.
 
 == Adding the Suggest Search Component
diff --git a/solr/solr-ref-guide/src/tagger-handler.adoc b/solr/solr-ref-guide/src/tagger-handler.adoc
index 9406030..ca9d4a2 100644
--- a/solr/solr-ref-guide/src/tagger-handler.adoc
+++ b/solr/solr-ref-guide/src/tagger-handler.adoc
@@ -182,16 +182,13 @@ Solr's parameters for controlling the response format are also supported, such a
 
 This is a tutorial that demonstrates how to configure and use the text
 tagger with the popular http://www.geonames.org/[Geonames] data set.
-It's more than a tutorial;
-it's a how-to with information that wasn't described above.
+It's more than a tutorial; it's a how-to with information that wasn't described above.
 
 === Create and Configure a Solr Collection
 
-Create a Solr collection named "geonames". For the tutorial, we'll
-assume the default "data-driven" configuration.
-It's good for
-experimentation and getting going fast but not for production or being
-optimal.
+Create a Solr collection named "geonames".
+For the tutorial, we'll assume the default "data-driven" configuration.
+It's good for experimentation and getting going fast but not for production or being optimal.
 
 [source,bash]
 bin/solr create -c geonames
diff --git a/solr/solr-ref-guide/src/taking-solr-to-production.adoc b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
index e609041..cd3c7aa 100644
--- a/solr/solr-ref-guide/src/taking-solr-to-production.adoc
+++ b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
@@ -278,7 +278,8 @@ For instance, to ensure all znodes created by SolrCloud are stored under `/solr`
 ZK_HOST=zk1,zk2,zk3/solr
 ----
 
-Before using a chroot for the first time, you need to create the root path (znode) in ZooKeeper by using the <<solr-control-script-reference.adoc#,Solr Control Script>>. We can use the mkroot command for that:
+Before using a chroot for the first time, you need to create the root path (znode) in ZooKeeper by using the <<solr-control-script-reference.adoc#,Solr Control Script>>.
+We can use the `mkroot` command for that:
 
 [source,bash]
 ----
@@ -358,7 +359,8 @@ ulimit -a
 These four settings in particular are important to have set very high, unlimited if possible.
 
 * max processes (`ulimit -u`): 65,000 is the recommended _minimum_.
-* file handles (`ulimit -n`): 65,000 is the recommended _minimum_. All the files used by all replicas have their file handles open at once so this can grow quite large.
+* file handles (`ulimit -n`): 65,000 is the recommended _minimum_.
+All the files used by all replicas have their file handles open at once so this can grow quite large.
 * virtual memory (`ulimit -v`): Set to unlimited.
 This is used by MMapping the indexes.
 * max memory size (`ulimit -m`): Also used by MMap, set to unlimited.
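The four limits above can be checked on a Linux host with a short script (a sketch; output varies by system):

```shell
# Print the current per-process limits discussed above.
echo "max processes:   $(ulimit -u)"
echo "open files:      $(ulimit -n)"
echo "virtual memory:  $(ulimit -v)"
echo "max memory size: $(ulimit -m)"
```

Remember that limits set interactively do not persist; configure them in `/etc/security/limits.conf` (or your distribution's equivalent) for the user running Solr.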
@@ -385,7 +387,8 @@ Errors such as "too many open files", "connection error", and "max processes exc
 
 When running a Java application like Lucene/Solr, having the OS swap memory to disk is a very bad situation.
 We usually prefer a hard crash so other healthy Solr nodes can take over, instead of letting a Solr node swap, causing terrible performance, timeouts and an unstable system.
-So our recommendation is to disable swap on the host altogether or reduce the "swappiness". These instructions are valid for Linux environments.
+So our recommendation is to disable swap on the host altogether or reduce the "swappiness".
+These instructions are valid for Linux environments.
 Also note that when running Solr in a Docker container, these changes must be applied to the *host*.
 
 ==== Disabling Swap
diff --git a/solr/solr-ref-guide/src/term-vector-component.adoc b/solr/solr-ref-guide/src/term-vector-component.adoc
index dfed603..fdb0292 100644
--- a/solr/solr-ref-guide/src/term-vector-component.adoc
+++ b/solr/solr-ref-guide/src/term-vector-component.adoc
@@ -23,7 +23,7 @@ For each document in the response, the TermVectorComponent can return the term
 == Term Vector Component Configuration
 
 The TermVectorComponent is not enabled implicitly in Solr - it must be explicitly configured in your `solrconfig.xml` file.
-The examples on this page show how it is configured in Solr's "```techproducts```" example:
+The examples on this page show how it is configured in Solr's "techproducts" example:
 
 [source,bash]
 ----
@@ -38,7 +38,7 @@ To enable this component, you need to configure it using a `searchComponent`
 ----
 
 A request handler must then be configured to use this component name.
-In the `techproducts` example, the component is associated with a special request handler named `/tvrh`, that enables term vectors by default using the `tv=true` parameter; but you can associate it with any request handler:
+In the "techproducts" example, the component is associated with a special request handler named `/tvrh` that enables term vectors by default using the `tv=true` parameter, but you can associate it with any request handler:
 
 [source,xml]
 ----
@@ -52,7 +52,7 @@ In the `techproducts` example, the component is associated with a special reques
 </requestHandler>
 ----
 
-Once your handler is defined, you may use in conjunction with any schema (that has a `uniqueKeyField)` to fetch term vectors for fields configured with the `termVector` attribute, such as in the `techproducts` sample schema.
+Once your handler is defined, you may use it in conjunction with any schema (that has a `uniqueKeyField`) to fetch term vectors for fields configured with the `termVector` attribute, such as in the "techproducts" sample schema.
 For example:
 
 [source,xml]
@@ -224,7 +224,8 @@ If `true`, returns document term frequency info for each term in the document.
 If `true`, calculates TF / DF (i.e., TF * IDF) for each term.
 Please note that this is a _literal_ calculation of "Term Frequency multiplied by Inverse Document Frequency" and *not* a classical TF-IDF similarity measure.
 +
-This parameter requires both `tv.tf` and `tv.df` to be "true". This can be computationally expensive.
+This parameter requires both `tv.tf` and `tv.df` to be `true`.
+This can be computationally expensive.
 (The results are not shown in example output)
 
 To see an example of TermVector component output, see the Wiki page: https://cwiki.apache.org/confluence/display/solr/TermVectorComponentExampleOptions
diff --git a/solr/solr-ref-guide/src/tokenizers.adoc b/solr/solr-ref-guide/src/tokenizers.adoc
index f8bf47e..12e571f 100644
--- a/solr/solr-ref-guide/src/tokenizers.adoc
+++ b/solr/solr-ref-guide/src/tokenizers.adoc
@@ -31,7 +31,7 @@ It's also possible for more than one token to have the same position or refer to
 Keep this in mind if you use token metadata for things like highlighting search results in the field text.
 
 == About Tokenizers
-You configure the tokenizer for a text field type in `schema.xml` with a `<tokenizer>` element, as a child of `<analyzer>`:
+You configure the tokenizer for a text field type in the <<solr-schema.adoc#,schema>> with a `<tokenizer>` element, as a child of `<analyzer>`:
 
 [.dynamic-tabs]
 --
@@ -480,7 +480,8 @@ Edge n-gram range of 2 to 5
 
 This tokenizer processes multilingual text and tokenizes it appropriately based on its script attribute.
 
-You can customize this tokenizer's behavior by specifying http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[per-script rule files]. To add per-script rules, add a `rulefiles` argument, which should contain a comma-separated list of `code:rulefile` pairs in the following format: four-letter ISO 15924 script code, followed by a colon, then a resource path.
+You can customize this tokenizer's behavior by specifying http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[per-script rule files].
+To add per-script rules, add a `rulefiles` argument, which should contain a comma-separated list of `code:rulefile` pairs in the following format: four-letter ISO 15924 script code, followed by a colon, then a resource path.
 For example, to specify rules for Latin (script code "Latn") and Cyrillic (script code "Cyrl"), you would enter `Latn:my.Latin.rules.rbbi,Cyrl:my.Cyrillic.rules.rbbi`.
 
 The default configuration for `solr.ICUTokenizerFactory` provides UAX#29 word break rules tokenization (like `solr.StandardTokenizer`), but also includes custom tailorings for Hebrew (specializing handling of double and single quotation marks), for syllable tokenization for Khmer, Lao, and Myanmar, and dictionary-based word segmentation for CJK characters.
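A sketch of how the `rulefiles` argument might appear in a field type definition (the `.rbbi` file names are the hypothetical ones from the example above and would need to exist as resources in the configset):

```xml
<fieldType name="text_icu_custom" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- rulefiles maps four-letter ISO 15924 script codes to RBBI rule resources -->
    <tokenizer class="solr.ICUTokenizerFactory"
               rulefiles="Latn:my.Latin.rules.rbbi,Cyrl:my.Cyrillic.rules.rbbi"/>
  </analyzer>
</fieldType>
```

Scripts without an entry in `rulefiles` fall back to the tokenizer's default behavior.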
diff --git a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
index eef7373..19f5707 100644
--- a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
+++ b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
@@ -656,7 +656,8 @@ curl 'http://localhost:8983/api/collections/techproducts/update/json'\
 ====
 --
 
-In the above example, we've said all of the fields should be added to a field in Solr named 'txt'. This will add multiple fields to a single field, so whatever field you choose should be multi-valued.
+In the above example, we've said all of the fields should be added to a field in Solr named 'txt'.
+This will add multiple fields to a single field, so whatever field you choose should be multi-valued.
 
 The default behavior is to use the fully qualified name (FQN) of the node.
 So, if we don't define any field mappings, like this:
diff --git a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
index f665ef2..92b132d 100644
--- a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
+++ b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
@@ -59,12 +59,14 @@ This means that you won't need to move any index files around to perform the upg
 === Step 1: Stop Solr
 
 Begin by stopping the Solr node you want to upgrade.
-After stopping the node, if using a replication (i.e., collections with `replicationFactor` less than 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the <<cloud-screens.adoc#,Cloud panel in the Solr Admin UI>>. If not using replication, then any collections with shards hosted on the downed node will be temporarily off-line.
+After stopping the node, if using replication (i.e., collections with `replicationFactor` greater than 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the <<cloud-screens.adoc#,Cloud panel in the Solr Admin UI>>.
+If not using replication, then any collections with shards hosted on the downed node will be temporarily off-line.
 
 
 === Step 2: Install Solr as a Service
 
-Please follow the instructions to install Solr as a Service on Linux documented at <<taking-solr-to-production.adoc#,Taking Solr to Production>>. Use the `-n` parameter to avoid automatic start of Solr by the installer script.
+Please follow the instructions to install Solr as a Service on Linux documented at <<taking-solr-to-production.adoc#,Taking Solr to Production>>.
+Use the `-n` parameter to avoid automatic start of Solr by the installer script.
 You need to update the `/etc/default/solr.in.sh` include file in the next step to complete the upgrade process.
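For illustration, the step above might look like the following on a Linux node (the release filename and flags are examples; `-n` skips the automatic start so `/etc/default/solr.in.sh` can be updated first):

```shell
# Extract the service installer script from the new release archive
# (the version number here is only an example)
tar xzf solr-9.0.0.tgz solr-9.0.0/bin/install_solr_service.sh --strip-components=2

# -n: install but do not start Solr; -f: upgrade an existing installation
sudo bash ./install_solr_service.sh solr-9.0.0.tgz -n -f
```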
 
 [NOTE]
diff --git a/solr/solr-ref-guide/src/user-managed-index-replication.adoc b/solr/solr-ref-guide/src/user-managed-index-replication.adoc
index eb2cfd8..d3beb16 100644
--- a/solr/solr-ref-guide/src/user-managed-index-replication.adoc
+++ b/solr/solr-ref-guide/src/user-managed-index-replication.adoc
@@ -475,7 +475,8 @@ http://_leader_host:port_/solr/_core_name_/replication?command=restorestatus
 This command is used to check the status of a restore operation.
 This command takes no parameters.
 +
-The status value can be "In Progress", "success", or "failed". If it failed then an "exception" will also be sent in the response.
+The status value can be "In Progress", "success", or "failed".
+If it failed then an "exception" will also be sent in the response.
 
 `deletebackup`::
 Delete any backup created using the `backup` command.
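As a sketch, a restore-status check could be issued with curl (host, port, and core name below are placeholders):

```shell
# Ask the core for the status of the most recent restore operation
curl "http://localhost:8983/solr/techproducts/replication?command=restorestatus&wt=json"
# A successful run returns a "restorestatus" section whose "status"
# field is "In Progress", "success", or "failed" (plus an "exception"
# message when it failed)
```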
diff --git a/solr/solr-ref-guide/src/zookeeper-access-control.adoc b/solr/solr-ref-guide/src/zookeeper-access-control.adoc
index 946a9f4..9642d35 100644
--- a/solr/solr-ref-guide/src/zookeeper-access-control.adoc
+++ b/solr/solr-ref-guide/src/zookeeper-access-control.adoc
@@ -79,7 +79,8 @@ You control which credentials provider will be used by configuring the `zkCreden
 
 You can always make your own implementation, but Solr comes with two implementations:
 
-* `org.apache.solr.common.cloud.DefaultZkCredentialsProvider`: Its `getCredentials()` returns a list of length zero, or "no credentials used". This is the default.
+* `org.apache.solr.common.cloud.DefaultZkCredentialsProvider`: Its `getCredentials()` returns a list of length zero, or "no credentials used".
+This is the default.
 * `org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider`: This lets you define your credentials using system properties.
 It supports at most one set of credentials.
 ** The schema is "digest".
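For example, the digest credentials (and the matching ACL provider discussed below) can be supplied as system properties from `solr.in.sh`; the usernames and passwords here are placeholders:

```shell
# solr.in.sh fragment: select the VMParams providers and pass digest credentials
SOLR_ZK_CREDS_AND_ACLS="-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider \
  -DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider \
  -DzkDigestUsername=admin-user -DzkDigestPassword=admin-password \
  -DzkDigestReadonlyUsername=readonly-user -DzkDigestReadonlyPassword=readonly-password"
SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS"
```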
@@ -99,7 +100,8 @@ You control which ACLs will be added by configuring `zkACLProvider` property in
 You can always make your own implementation, but Solr comes with:
 
 * `org.apache.solr.common.cloud.DefaultZkACLProvider`: It returns a list of length one for all `zNodePath`-s.
-The single ACL entry in the list is "open-unsafe". This is the default.
+The single ACL entry in the list is "open-unsafe".
+This is the default.
 * `org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider`: This lets you define your ACLs using system properties.
 The `getACLsToAdd()` implementation will apply only admin ACLs to pre-defined sensitive paths as defined by `SecurityAwareZkACLProvider` (`/security.json` and `/security/*`) and both admin and user ACLs to the rest of the contents.
 The two sets of roles will be defined as: