Posted to commits@solr.apache.org by ct...@apache.org on 2021/11/29 20:52:42 UTC

[solr] 01/04: Fix refs in indexing guide + cleanups + move 'pure nav' pages aside

This is an automated email from the ASF dual-hosted git repository.

ctargett pushed a commit to branch jira/solr-15556-antora
in repository https://gitbox.apache.org/repos/asf/solr.git

commit 420b96b943b88861f40c2c427f2c51fe60a217ea
Author: Cassandra Targett <ct...@apache.org>
AuthorDate: Thu Nov 25 12:53:08 2021 -0600

    Fix refs in indexing guide + cleanups + move 'pure nav' pages aside
---
 .../modules/deployment-guide/pages/aliases.adoc    |  2 +-
 .../deployment-guide/pages/backup-restore.adoc     |  2 +-
 .../pages/cluster-node-management.adoc             |  2 +-
 .../monitoring-with-prometheus-and-grafana.adoc    |  2 +-
 .../modules/deployment-guide/pages/python.adoc     |  2 +-
 .../deployment-guide/pages/security-ui.adoc        |  2 +-
 .../getting-started/pages/tutorial-films.adoc      |  2 +-
 .../modules/indexing-guide/indexing-nav.adoc       | 81 +++++++++++-----------
 .../indexing-guide/pages/analysis-screen.adoc      |  4 +-
 .../modules/indexing-guide/pages/analyzers.adoc    |  2 +-
 .../indexing-guide/pages/content-streams.adoc      |  4 +-
 .../modules/indexing-guide/pages/copy-fields.adoc  |  2 +-
 .../pages/currencies-exchange-rates.adoc           |  6 +-
 .../indexing-guide/pages/de-duplication.adoc       |  6 +-
 .../indexing-guide/pages/document-analysis.adoc    | 25 +++----
 .../indexing-guide/pages/documents-screen.adoc     |  8 +--
 .../modules/indexing-guide/pages/docvalues.adoc    | 14 ++--
 .../pages/external-files-processes.adoc            |  6 +-
 .../pages/field-properties-by-use-case.adoc        |  8 +--
 .../field-type-definitions-and-properties.adoc     | 20 +++---
 .../pages/field-types-included-with-solr.adoc      | 36 +++++-----
 .../modules/indexing-guide/pages/fields.adoc       |  4 +-
 .../modules/indexing-guide/pages/filters.adoc      | 38 +++++-----
 .../pages/indexing-nested-documents.adoc           | 43 +++++-------
 .../indexing-guide/pages/indexing-with-tika.adoc   | 12 ++--
 .../pages/indexing-with-update-handlers.adoc       | 26 ++++---
 .../indexing-guide/pages/language-analysis.adoc    | 57 +++++++--------
 .../indexing-guide/pages/language-detection.adoc   |  4 +-
 .../indexing-guide/pages/luke-request-handler.adoc |  2 +-
 .../pages/partial-document-updates.adoc            | 12 ++--
 .../indexing-guide/pages/phonetic-matching.adoc    | 21 +++---
 .../modules/indexing-guide/pages/post-tool.adoc    |  2 +-
 .../modules/indexing-guide/pages/reindexing.adoc   | 16 ++---
 .../modules/indexing-guide/pages/schema-api.adoc   | 29 ++++----
 .../pages/schema-browser-screen.adoc               | 10 +--
 .../indexing-guide/pages/schema-designer.adoc      | 29 ++++----
 .../indexing-guide/pages/schema-elements.adoc      | 16 ++---
 .../indexing-guide/pages/schemaless-mode.adoc      | 22 +++---
 .../modules/indexing-guide/pages/tokenizers.adoc   |  6 +-
 .../transforming-and-indexing-custom-json.adoc     |  6 +-
 .../pages => src/old-pages}/field-types.adoc       |  0
 .../old-pages}/fields-and-schema-design.adoc       |  0
 .../old-pages}/indexing-data-operations.adoc       |  0
 .../old-pages}/schema-indexing-guide.adoc          |  0
 .../pages => src/old-pages}/solr-schema.adoc       |  0
 45 files changed, 288 insertions(+), 303 deletions(-)

diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
index 6d16f6b..cea787c 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
@@ -72,7 +72,7 @@ WARNING: It's extremely important with all routed aliases that the route values
 Reindexing a document with a different route value for the same ID produces two distinct documents with the same ID accessible via the alias.
 All query time behavior of the routed alias is *_undefined_* and not easily predictable once duplicate ID's exist.
 
-CAUTION: It is a bad idea to use "data driven" mode (aka xref:configuration-guide:schemaless-mode.adoc[]) with routed aliases, as duplicate schema mutations might happen concurrently leading to errors.
+CAUTION: It is a bad idea to use "data driven" mode (aka xref:indexing-guide:schemaless-mode.adoc[]) with routed aliases, as duplicate schema mutations might happen concurrently leading to errors.
 
 
 === Time Routed Aliases
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
index 53dadfc..a960749 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
@@ -659,7 +659,7 @@ An example configuration using the overall and GCS-client properties can be seen
 === S3BackupRepository
 
 Stores and retrieves backup files in an Amazon S3 bucket.
-This plugin must first be xref:solr-plugins.adoc#installing-plugins[installed] before using.
+This plugin must first be xref:configuration-guide:solr-plugins.adoc#installing-plugins[installed] before using.
 
 This plugin uses the https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html[default AWS credentials provider chain], so ensure that your credentials are set appropriately (e.g., via env var, or in `~/.aws/credentials`, etc.).
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
index d0e954f..4be6a66 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
@@ -601,7 +601,7 @@ The node to be removed.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be xref:collections-api.adoc#asynchronous-calls[processed asynchronously].
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 [[addrole]]
 == ADDROLE: Add a Role
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
index 28b5867..6834f2d 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
@@ -18,7 +18,7 @@
 
 If you use https://prometheus.io[Prometheus] and https://grafana.com[Grafana] for metrics storage and data visualization, Solr includes a Prometheus exporter to collect metrics and other data.
 
-A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics which come from the xref:metrics-reporting.adoc#metrics-api[Metrics API], but also facet counts which come from xref:query-guide:facet.adoc[] and responses to xref:configuration-guide:collections-api.adoc[] commands and xref:ping.adoc[] requests.
+A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics which come from the xref:metrics-reporting.adoc#metrics-api[Metrics API], but also facet counts which come from xref:query-guide:faceting.adoc[] and responses to xref:configuration-guide:collections-api.adoc[] commands and xref:ping.adoc[] requests.
 
 This graphic provides a more detailed view:
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
index 10d2ca8..f2fbd3b 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes an output format specifically for xref:query-guide:response-writers.adoc#python-response-writer[Python Response Writer], but the xref:response-writers.adoc#json-response-writer[JSON Response Writer] is a little more robust.
+Solr includes an output format specifically for xref:query-guide:response-writers.adoc#python-response-writer[Python Response Writer], but the xref:query-guide:response-writers.adoc#json-response-writer[JSON Response Writer] is a little more robust.
 
 == Simple Python
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
index 2d5aefb..a5fa8a9 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
@@ -21,7 +21,7 @@ The Security screen allows administrators with the `security-edit` permission to
 The Security screen works with Solr running in cloud and standalone modes.
 
 .Security Screen
-image::solr-admin-ui/security.png[]
+image::getting-started:solr-admin-ui/security.png[]
 
 == Getting Started
 
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
index 3a207d8..dd5e736 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
@@ -36,7 +36,7 @@ When it's done start the second node, and tell it how to connect to to ZooKeeper
 
 `./bin/solr start -c -p 7574 -s example/cloud/node2/solr -z localhost:9983`
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:deployment-guide:zookeeper-ensemble#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
 
 === Create a New Collection
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/indexing-nav.adoc b/solr/solr-ref-guide/modules/indexing-guide/indexing-nav.adoc
index 70bfa9f..a5fbe59 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/indexing-nav.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/indexing-nav.adoc
@@ -1,46 +1,45 @@
 .Schema and Indexing Guide
-* xref:schema-indexing-guide.adoc[]
 
-** xref:solr-schema.adoc[]
-*** xref:schema-elements.adoc[]
-*** xref:schema-api.adoc[]
-*** xref:schemaless-mode.adoc[]
-*** xref:schema-designer.adoc[]
-*** xref:schema-browser-screen.adoc[]
+* Solr Schema
+** xref:schema-elements.adoc[]
+** xref:schema-api.adoc[]
+** xref:schemaless-mode.adoc[]
+** xref:schema-designer.adoc[]
+** xref:schema-browser-screen.adoc[]
 
-** xref:fields-and-schema-design.adoc[]
-*** xref:fields.adoc[]
-*** xref:field-types.adoc[]
-**** xref:field-type-definitions-and-properties.adoc[]
-**** xref:field-types-included-with-solr.adoc[]
-**** xref:currencies-exchange-rates.adoc[]
-**** xref:date-formatting-math.adoc[]
-**** xref:enum-fields.adoc[]
-**** xref:external-files-processes.adoc[]
-**** xref:field-properties-by-use-case.adoc[]
-*** xref:copy-fields.adoc[]
-*** xref:dynamic-fields.adoc[]
-*** xref:docvalues.adoc[]
-*** xref:luke-request-handler.adoc[]
+* Fields & Schema Design
+** xref:fields.adoc[]
+** Field Types
+*** xref:field-type-definitions-and-properties.adoc[]
+*** xref:field-types-included-with-solr.adoc[]
+*** xref:currencies-exchange-rates.adoc[]
+*** xref:date-formatting-math.adoc[]
+*** xref:enum-fields.adoc[]
+*** xref:external-files-processes.adoc[]
+*** xref:field-properties-by-use-case.adoc[]
+** xref:copy-fields.adoc[]
+** xref:dynamic-fields.adoc[]
+** xref:docvalues.adoc[]
+** xref:luke-request-handler.adoc[]
 
-** xref:document-analysis.adoc[]
-*** xref:analyzers.adoc[]
-*** xref:tokenizers.adoc[]
-*** xref:filters.adoc[]
-*** xref:charfilterfactories.adoc[]
-*** xref:language-analysis.adoc[]
-*** xref:phonetic-matching.adoc[]
-*** xref:analysis-screen.adoc[]
+* xref:document-analysis.adoc[]
+** xref:analyzers.adoc[]
+** xref:tokenizers.adoc[]
+** xref:filters.adoc[]
+** xref:charfilterfactories.adoc[]
+** xref:language-analysis.adoc[]
+** xref:phonetic-matching.adoc[]
+** xref:analysis-screen.adoc[]
 
-** xref:indexing-data-operations.adoc[]
-*** xref:indexing-with-update-handlers.adoc[]
-**** xref:transforming-and-indexing-custom-json.adoc[]
-*** xref:indexing-with-tika.adoc[]
-*** xref:indexing-nested-documents.adoc[]
-*** xref:post-tool.adoc[]
-*** xref:documents-screen.adoc[]
-*** xref:partial-document-updates.adoc[]
-*** xref:reindexing.adoc[]
-*** xref:language-detection.adoc[]
-*** xref:de-duplication.adoc[]
-*** xref:content-streams.adoc[]
+* Indexing & Data Operations
+** xref:indexing-with-update-handlers.adoc[]
+*** xref:transforming-and-indexing-custom-json.adoc[]
+** xref:indexing-with-tika.adoc[]
+** xref:indexing-nested-documents.adoc[]
+** xref:post-tool.adoc[]
+** xref:documents-screen.adoc[]
+** xref:partial-document-updates.adoc[]
+** xref:reindexing.adoc[]
+** xref:language-detection.adoc[]
+** xref:de-duplication.adoc[]
+** xref:content-streams.adoc[]
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/analysis-screen.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/analysis-screen.adoc
index bd92b79..2d78f2f 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/analysis-screen.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/analysis-screen.adoc
@@ -16,9 +16,9 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Once you've <<field-type-definitions-and-properties.adoc#,defined a field type in your Schema>>, and specified the analysis steps that you want applied to it, you should test it out to make sure that it behaves the way you expect it to.
+Once you've xref:field-type-definitions-and-properties.adoc[defined a field type in your Schema], and specified the analysis steps that you want applied to it, you should test it out to make sure that it behaves the way you expect it to.
 
-Luckily, there is a very handy page in the Solr <<solr-admin-ui.adoc#,admin interface>> that lets you do just that.
+Luckily, there is a very handy page in the Solr Admin UI that lets you do just that.
 You can invoke the analyzer for any text field, provide sample input, and display the resulting token stream.
 
 For example, let's look at some of the "Text" field types available in the `bin/solr -e techproducts` example configuration, and use the Analysis Screen (`\http://localhost:8983/solr/#/techproducts/analysis`) to compare how the tokens produced at index time for the sentence "Running an Analyzer" match up with a slightly different query text of "run my analyzer".
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/analyzers.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/analyzers.adoc
index 8d5deee..280f928 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/analyzers.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/analyzers.adoc
@@ -18,7 +18,7 @@
 
 An analyzer examines the text of fields and generates a token stream.
 
-Analyzers are specified as a child of the `<fieldType>` element in <<solr-schema.adoc#,Solr's schema>>.
+Analyzers are specified as a child of the `<fieldType>` element in xref:schema-elements.adoc[Solr's schema].
 
 In normal usage, only fields of type `solr.TextField` or `solr.SortableTextField` will specify an analyzer.
 The simplest way to configure an analyzer is with a single `<analyzer>` element whose class attribute is a fully qualified Java class name.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/content-streams.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/content-streams.adoc
index 9e87596..55411b0 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/content-streams.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/content-streams.adoc
@@ -55,7 +55,7 @@ In `solrconfig.xml`, you can enable it by changing the following `enableRemoteSt
 
 When `enableRemoteStreaming` is not specified in `solrconfig.xml`, the default behavior is to _not_ allow remote streaming (i.e., `enableRemoteStreaming="false"`).
 
-Remote streaming can also be enabled through the <<config-api.adoc#,Config API>> as follows:
+Remote streaming can also be enabled through the xref:configuration-guide:config-api.adoc[] as follows:
 
 [.dynamic-tabs]
 --
@@ -90,5 +90,5 @@ Gzip doesn't apply to `stream.body`.
 
 == Debugging Requests
 
-The implicit "dump" RequestHandler (see <<implicit-requesthandlers.adoc#,Implicit Request Handlers>>) simply outputs the contents of the Solr QueryRequest using the specified writer type `wt`.
+The implicit "dump" RequestHandler (see xref:configuration-guide:implicit-requesthandlers.adoc[]) simply outputs the contents of the Solr QueryRequest using the specified writer type `wt`.
 This is a useful tool to help understand what streams are available to the RequestHandlers.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/copy-fields.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/copy-fields.adoc
index 8883d49..93a130d 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/copy-fields.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/copy-fields.adoc
@@ -28,7 +28,7 @@ In the schema file, it's very simple to make copies of fields:
 ----
 
 In this example, we want Solr to copy the `cat` field to a field named `text`.
-Fields are copied before <<document-analysis.adoc#,analysis>> is done, meaning you can have two fields with identical original content, but which use different analysis chains and are stored in the index differently.
+Fields are copied before xref:document-analysis.adoc[analysis], meaning you can have two fields with identical original content, but which use different analysis chains and are stored in the index differently.
 
 In the example above, if the `text` destination field has data of its own in the input documents, the contents of the `cat` field will be added as additional values – just as if all of the values had originally been specified by the client.
 Remember to configure your fields as `multivalued="true"` if they will ultimately get multiple values (either from a multivalued source or from multiple `copyField` directives).
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/currencies-exchange-rates.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/currencies-exchange-rates.adoc
index 111acd5..7d089ac 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/currencies-exchange-rates.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/currencies-exchange-rates.adoc
@@ -35,7 +35,7 @@ The following features are supported:
 CurrencyField has been deprecated in favor of CurrencyFieldType; all configuration examples below use CurrencyFieldType.
 ====
 
-The `currency` field type is defined in the <<solr-schema.adoc#,schema>>.
+The `currency` field type is defined in the xref:schema-elements.adoc[schema].
 This is the default configuration of this type.
 
 [source,xml]
@@ -51,7 +51,7 @@ This is a file of exchange rates between our default currency to other currencie
 There is an alternate implementation that would allow regular downloading of currency data.
 See <<Exchange Rates>> below for more.
 
-Many of the example schemas that ship with Solr include a <<dynamic-fields.adoc#,dynamic field>> that uses this type, such as this example:
+Many of the example schemas that ship with Solr include a xref:dynamic-fields.adoc[dynamic field] that uses this type, such as this example:
 
 [source,xml]
 ----
@@ -83,7 +83,7 @@ The currency code field will use the `"*_s_ns"` dynamic field, which must exist
 .Atomic Updates won't work if dynamic sub-fields are stored
 [NOTE]
 ====
-As noted in <<partial-document-updates.adoc#field-storage,Atomic Update Field Storage>>, stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates.
+As noted in xref:partial-document-updates.adoc#field-storage[Atomic Update Field Storage], stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates.
 To avoid this problem, specify `stored="false"` on those dynamic fields.
 ====
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/de-duplication.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/de-duplication.adoc
index 183d22f..8d87504 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/de-duplication.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/de-duplication.adoc
@@ -40,11 +40,11 @@ When a document is added, a signature will automatically be generated and attach
 
 == Configuration Options
 
-There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in the <<solr-schema.adoc#,schema>>.
+There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in the xref:schema-elements.adoc[schema].
 
 === In solrconfig.xml
 
-The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#,Update Request Processor Chain>>, as in this example:
+The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an xref:configuration-guide:update-request-processors.adoc[Update Request Processor Chain], as in this example:
 
 [source,xml]
 ----
@@ -125,7 +125,7 @@ There are 2 important things to keep in mind when using `SignatureUpdateProcesso
 
 . The `overwriteDupes=true` setting does not work _except_ in the special case of using the uniqueKey field as the `signatureField`.
 Attempting De-duplication on any other `signatureField` will not work correctly because of how updates are forwarded to replicas
-. When using the uniqueKey field as the `signatureField`, `SignatureUpdateProcessorFactory` must be run prior to the `<<update-request-processors.adoc#update-processors-in-solrcloud,DistributedUpdateProcessor>>` to ensure that documents can be routed to the correct shard leader based on the (generated) uniqueKey field.
+. When using the uniqueKey field as the `signatureField`, `SignatureUpdateProcessorFactory` must be run prior to the xref:configuration-guide:update-request-processors.adoc#update-processors-in-solrcloud[`DistributedUpdateProcessor`] to ensure that documents can be routed to the correct shard leader based on the (generated) uniqueKey field.
 
 (Using any other `signatureField` with `overwriteDupes=false` -- to generate a Signature for each document with out De-duplication -- has no limitations.)
 ====
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/document-analysis.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/document-analysis.adoc
index e24ccaa..5b56449 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/document-analysis.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/document-analysis.adoc
@@ -1,11 +1,4 @@
 = Document Analysis in Solr
-:page-children: analyzers, \
-    tokenizers, \
-    filters, \
-    charfilterfactories, \
-    language-analysis, \
-    phonetic-matching, \
-    analysis-screen
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -26,11 +19,11 @@
 The following sections describe how Solr breaks down and works with textual data.
 There are three main concepts to understand: analyzers, tokenizers, and filters.
 
-* <<analyzers.adoc#,Field analyzers>> are used both during ingestion, when a document is indexed, and at query time.
+* xref:analyzers.adoc[Field analyzers] are used both during ingestion, when a document is indexed, and at query time.
 An analyzer examines the text of fields and generates a token stream.
 Analyzers may be a single class or they may be composed of a series of tokenizer and filter classes.
-* <<tokenizers.adoc#,Tokenizers>> break field data into lexical units, or _tokens_.
-* <<filters.adoc#,Filters>> examine a stream of tokens and keep them, transform or discard them, or create new ones.
+* xref:tokenizers.adoc[] break field data into lexical units, or _tokens_.
+* xref:filters.adoc[] examine a stream of tokens and keep them, transform or discard them, or create new ones.
 Tokenizers and filters may be combined to form pipelines, or _chains_, where the output of one is input to the next.
 Such a sequence of tokenizers and filters is called an _analyzer_ and the resulting output of an analyzer is used to match query results or build indices.
 
@@ -54,12 +47,12 @@ It also serves as a guide so that you can configure your own analysis classes if
 // tag::analysis-sections[]
 [cols="1,1",frame=none,grid=none,stripes=none]
 |===
-| <<analyzers.adoc#,Analyzers>>: Overview of Solr analyzers.
-| <<tokenizers.adoc#,Tokenizers>>: Tokenizers and tokenizer factory classes.
-| <<filters.adoc#,Filters>>: Filters and filter factory classes.
-| <<charfilterfactories.adoc#,CharFilterFactories>>: Filters for pre-processing input characters.
-| <<language-analysis.adoc#,Language Analysis>>: Tokenizers and filters for character set conversion and specific languages.
-| <<analysis-screen.adoc#,Analysis Screen>>: Admin UI for testing field analysis.
+| xref:analyzers.adoc[]: Overview of Solr analyzers.
+| xref:tokenizers.adoc[]: Tokenizers and tokenizer factory classes.
+| xref:filters.adoc[]: Filters and filter factory classes.
+| xref:charfilterfactories.adoc[]: Filters for pre-processing input characters.
+| xref:language-analysis.adoc[]: Tokenizers and filters for character set conversion and specific languages.
+| xref:analysis-screen.adoc[]: Admin UI for testing field analysis.
 |===
 // end::analysis-sections[]
 ****
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/documents-screen.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/documents-screen.adoc
index 22ca858..df17fbf 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/documents-screen.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/documents-screen.adoc
@@ -31,8 +31,8 @@ The screen allows you to:
 ====
 There are other ways to load data, see also these sections:
 
-* <<indexing-with-update-handlers.adoc#,Indexing with Update Handlers>>
-* <<indexing-with-tika.adoc#,Indexing with Solr Cell and Apache Tika>>
+* xref:indexing-with-update-handlers.adoc[]
+* xref:indexing-with-tika.adoc[]
 ====
 
 == Common Fields
@@ -44,7 +44,7 @@ The remaining parameters may change depending on the document type selected.
 * Document(s): Enter a properly-formatted Solr document corresponding to the `Document Type` selected.
 XML and JSON documents must be formatted in a Solr-specific format, a small illustrative document will be shown.
 CSV files should have headers corresponding to fields defined in the schema.
-More details can be found at: <<indexing-with-update-handlers.adoc#,Indexing with Update Handlers>>.
+More details can be found in xref:indexing-with-update-handlers.adoc[].
 * Commit Within: Specify the number of milliseconds between the time the document is submitted and when it is available for searching.
 * Overwrite: If `true` the new document will replace an existing document with the same value in the `id` field.
 If `false` multiple documents with the same id can be added.
@@ -74,7 +74,7 @@ If using `/update` for the Request-Handler option, you will be limited to XML, C
 Other document types (e.g., Word, PDF, etc.) can be indexed using the ExtractingRequestHandler (aka, Solr Cell).
 You must modify the RequestHandler to `/update/extract`, which must be defined in your `solrconfig.xml` file with your desired defaults.
 You should also add `&literal.id` shown in the "Extracting Request Handler Params" field so the file chosen is given a unique id.
-More information can be found at: <<indexing-with-tika.adoc#,Indexing with Solr Cell and Apache Tika>>.
+More information can be found in xref:indexing-with-tika.adoc[].
 
 == Solr Command
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/docvalues.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/docvalues.adoc
index 8a7be90..e56b6be 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/docvalues.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/docvalues.adoc
@@ -36,9 +36,9 @@ This approach promises to relieve some of the memory requirements of the fieldCa
 
 To use docValues, you only need to enable it for a field that you will use it with.
 As with all schema design, you need to define a field type and then define fields of that type with docValues enabled.
-All of these actions are done in the <<solr-schema.adoc#,schema>>.
+All of these actions are done in the xref:schema-elements.adoc[schema].
 
-Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from Solr's `sample_techproducts_configs` <<config-sets.adoc#,configset>>:
+Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from Solr's `sample_techproducts_configs` xref:configuration-guide:config-sets.adoc[configset]:
 
 [source,xml]
 ----
@@ -71,7 +71,7 @@ Entries are kept in sorted order and duplicates are removed.
 
 These Lucene types are related to how the {lucene-javadocs}/core/org/apache/lucene/index/DocValuesType.html[values are sorted and stored].
 
-There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#docvaluesformat,used by the field type>>.
+There is an additional configuration option available, which is to modify the xref:field-type-definitions-and-properties.adoc#docvaluesformat[`docValuesFormat`] used by the field type.
 The default implementation employs a mixture of loading some things into memory and keeping some on disk.
 In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation].
 For example, you could choose to keep everything in memory by specifying `docValuesFormat="Direct"` on a field type:
@@ -91,16 +91,16 @@ If you choose to customize the `docValuesFormat` in your schema, upgrading to a
 
 === Sorting, Faceting & Functions
 
-If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#sort-parameter,sorting>>, <<faceting.adoc#,faceting>> or <<function-queries.adoc#,function queries>>.
+If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for xref:query-guide:common-query-parameters.adoc#sort-parameter[sorting], xref:query-guide:faceting.adoc[faceting], or xref:query-guide:function-queries.adoc[function queries].
 
 === Retrieving DocValues During Search
 
 Field values retrieved during search queries are typically returned from stored values.
 However, non-stored docValues fields will be also returned along with other stored fields when all fields (or pattern matching globs) are specified to be returned (e.g., "`fl=*`") for search queries depending on the effective value of the `useDocValuesAsStored` parameter for each field.
 For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`.
-See <<field-type-definitions-and-properties.adoc#,Field Type Definitions and Properties>> & <<fields.adoc#,Fields>> for more details.
+See xref:field-type-definitions-and-properties.adoc[] and xref:fields.adoc[] for more details.
 
-When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the <<common-query-parameters.adoc#fl-field-list-parameter,`fl` parameter>>, but will not match glob patterns (`"*"`).
+When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the xref:query-guide:common-query-parameters.adoc#fl-field-list-parameter[`fl` parameter], but will not match glob patterns (`"*"`).
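For instance, assuming a non-stored docValues field named `popularity` (a hypothetical field name), it could still be requested explicitly by name:

[source,text]
----
q=*:*&fl=id,popularity
----

A glob such as `fl=*` would not return it when `useDocValuesAsStored="false"`.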
 
 Returning DocValues along with "regular" stored fields at query time has performance implications that stored fields may not: because DocValues are column-oriented, they may incur additional cost to retrieve for each returned document.
 
@@ -109,7 +109,7 @@ If you require the multi-valued fields to be returned in the original insertion
 
 In cases where the query is returning _only_ docValues fields, performance may improve, since returning stored fields requires disk reads and decompression, whereas returning docValues fields in the `fl` list only requires memory access.
 
-When retrieving fields from their docValues form (such as when using the <<exporting-result-sets.adoc#,/export handler>>, <<streaming-expressions.adoc#,streaming expressions>> or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
+When retrieving fields from their docValues form (such as when using the xref:query-guide:exporting-result-sets.adoc[/export handler], xref:query-guide:streaming-expressions.adoc[streaming expressions], or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
 
 . Order is _not_ preserved.
 When retrieving stored fields, the insertion order is the return order.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/external-files-processes.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/external-files-processes.adoc
index 81175c3..7d4ade2 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/external-files-processes.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/external-files-processes.adoc
@@ -29,7 +29,7 @@ Another way to think of this is that, instead of specifying the field in documen
 ====
 External fields are not searchable.
 They can be used only for function queries or display.
-For more information on function queries, see the section on <<function-queries.adoc#,Function Queries>>.
+For more information on function queries, see the section on xref:query-guide:function-queries.adoc[].
 ====
 
 The `ExternalFileField` type is handy for cases where you want to update a particular field in many documents more often than you want to update the rest of the documents.
@@ -38,7 +38,7 @@ You might want to update the rank of all the documents daily or hourly, while th
 Without `ExternalFileField`, you would need to update each document just to change the rank.
 Using `ExternalFileField` is much more efficient because all document values for a particular field are stored in an external file that can be updated as frequently as you wish.
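The external file itself is a plain text file of key=value pairs, where each key is the unique key of a document; a hedged sketch with hypothetical document ids:

[source,text]
----
doc33=1.414
doc34=3.14159
doc40=42
----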
 
-In the <<solr-schema.adoc#,schema>>, the definition of this field type might look like this:
+In the xref:schema-elements.adoc[schema], the definition of this field type might look like this:
 
 [source,xml]
 ----
@@ -77,7 +77,7 @@ The file does not need to be sorted, but Solr will be able to perform the lookup
 === Reloading an External File
 
 It's possible to define an event listener to reload an external file either when a searcher is reloaded or when a new searcher is started.
-See the section <<caches-warming.adoc#query-related-listeners,Query-Related Listeners>> for more information, but a sample definition in `solrconfig.xml` might look like this:
+See the section xref:configuration-guide:caches-warming.adoc#query-related-listeners[Query-Related Listeners] for more information, but a sample definition in `solrconfig.xml` might look like this:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/field-properties-by-use-case.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/field-properties-by-use-case.adoc
index 3b84c6b..e39169f 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/field-properties-by-use-case.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/field-properties-by-use-case.adoc
@@ -44,13 +44,13 @@ Notes:
 2. [[fpbuc_2,2]] Will be used if present, but not necessary.
 3. [[fpbuc_3,3]] (if termVectors=true)
 4. [[fpbuc_4,4]] A tokenizer must be defined for the field, but it doesn't need to be indexed.
-5. [[fpbuc_5,5]] Described in <<document-analysis.adoc#,Document Analysis in Solr>>.
+5. [[fpbuc_5,5]] Described in xref:document-analysis.adoc[].
 6. [[fpbuc_6,6]] Term vectors are not mandatory here.
 If not true, then a stored field is analyzed.
 So term vectors are recommended, but only required if `stored=false`.
 7. [[fpbuc_7,7]] For most field types, either `indexed` or `docValues` must be true, but both are not required.
-<<docvalues.adoc#,DocValues>> can be more efficient in many cases.
+xref:docvalues.adoc[] can be more efficient in many cases.
 For `[Int/Long/Float/Double/Date]PointFields`, `docValues=true` is required.
 8. [[fpbuc_8,8]] Stored content will be used by default, but docValues can alternatively be used.
-See <<docvalues.adoc#,DocValues>>.
-9. [[fpbuc_9,9]] Multi-valued sorting may be performed on docValues-enabled fields using the two-argument `field()` function, e.g., `field(myfield,min)`; see the <<function-queries.adoc#field-function,field() function in Function Queries>>.
+See xref:docvalues.adoc[].
+9. [[fpbuc_9,9]] Multi-valued sorting may be performed on docValues-enabled fields using the two-argument `field()` function, e.g., `field(myfield,min)`; see the xref:query-guide:function-queries.adoc#field-function[field() function in Function Queries].
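For example, to sort on the minimum value of a hypothetical multi-valued, docValues-enabled field named `myfield`:

[source,text]
----
sort=field(myfield,min) asc
----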
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/field-type-definitions-and-properties.adoc
index 7327b20..e415023 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/field-type-definitions-and-properties.adoc
@@ -27,7 +27,7 @@ A field type definition can include four types of information:
 
 == Field Type Definitions in the Schema
 
-Field types are defined in the collection's <<solr-schema.adoc#,schema>>.
+Field types are defined in the collection's xref:schema-elements.adoc[schema].
 Each field type is defined between `fieldType` elements.
 They can optionally be grouped within a `types` element.
 
@@ -54,7 +54,7 @@ Here is an example of a field type definition for a type called `text_general`:
 ----
 
 <1> The first line in the example above contains the field type name, `text_general`, and the name of the implementing class, `solr.TextField`.
-<2> The rest of the definition is about field analysis, described in <<document-analysis.adoc#,Document Analysis in Solr>>.
+<2> The rest of the definition is about field analysis, described in xref:document-analysis.adoc[].
 
 The implementing class is responsible for making sure the field is handled correctly.
 In the class names, the string `solr` is shorthand for `org.apache.solr.schema` or `org.apache.solr.analysis`.
@@ -151,10 +151,10 @@ This blog post http://opensourceconnections.com/blog/2017/11/21/solr-synonyms-me
 |Optional |Default: `true`
 |===
 +
-For text fields, applicable when querying with <<standard-query-parser.adoc#standard-query-parser-parameters,`sow=false`>> (which is the default for the `sow` parameter).
-Use `true` for field types with query analyzers including graph-aware filters, e.g., <<filters.adoc#synonym-graph-filter,Synonym Graph Filter>> and <<filters.adoc#word-delimiter-graph-filter,Word Delimiter Graph Filter>>.
+For text fields, applicable when querying with xref:query-guide:standard-query-parser.adoc#standard-query-parser-parameters[`sow=false`] (the default).
+Use `true` for field types with query analyzers including graph-aware filters, e.g., xref:filters.adoc#synonym-graph-filter[Synonym Graph Filter] and xref:filters.adoc#word-delimiter-graph-filter[Word Delimiter Graph Filter].
 +
-Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filters.adoc#shingle-filter,Shingle Filter>>.
+Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., xref:filters.adoc#shingle-filter[Shingle Filter].
 
 [[docvaluesformat]]
 `docValuesFormat`::
@@ -197,16 +197,16 @@ The table below includes the default value for most `FieldType` implementations
 |Property |Description |Implicit Default
 |`indexed` |If `true`, the value of the field can be used in queries to retrieve matching documents. |`true`
 |`stored` |If `true`, the actual value of the field can be retrieved by queries.  |`true`
-|`docValues` |If `true`, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |`false`
+|`docValues` |If `true`, the value of the field will be put in a column-oriented xref:docvalues.adoc[] structure. |`false`
 |`sortMissingFirst`, `sortMissingLast` |Control the placement of documents when a sort field is not present. |`false`
 |`multiValued` |If `true`, indicates that a single document might contain multiple values for this field type. |`false`
-|`uninvertible` |If `true`, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up large in memory data structure to serve in place of <<docvalues.adoc#,DocValues>>. *Defaults to `true` for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.* |`true`
+|`uninvertible` |If `true`, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up large in memory data structure to serve in place of xref:docvalues.adoc[]. *Defaults to `true` for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.* |`true`
 |`omitNorms` |If `true`, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |*
 |`omitTermFreqAndPositions` |If `true`, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position will silently fail to find documents if issued on a field with this option. *This property defaults to true for all field types that are not text fields.* |*
 |`omitPositions` |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |*
 |`termVectors`, `termPositions`, `termOffsets`, `termPayloads` |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset, and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |`false`
 |`required` |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |`false`
-|`useDocValuesAsStored` |If the field has <<docvalues.adoc#,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |`true`
+|`useDocValuesAsStored` |If the field has xref:docvalues.adoc[] enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an xref:query-guide:common-query-parameters.adoc#fl-field-list-parameter[fl parameter]. |`true`
 |`large` |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |`false`
 |===
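As an illustration of how these properties combine, a hedged sketch of a field type overriding a few of the defaults above (the type name is hypothetical):

[source,xml]
----
<fieldType name="pint_dv" class="solr.IntPointField"
           docValues="true" uninvertible="false" multiValued="false"/>
----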
 
@@ -216,7 +216,7 @@ The table below includes the default value for most `FieldType` implementations
 
 For general numeric needs, consider using one of the `IntPointField`, `LongPointField`, `FloatPointField`, or `DoublePointField` classes, depending on the specific values you expect.
 These "Dimensional Point" based numeric classes use specially encoded data structures to support efficient range queries regardless of the size of the ranges used.
-Enable <<docvalues.adoc#,DocValues>> on these fields as needed for sorting and/or faceting.
+Enable xref:docvalues.adoc[] on these fields as needed for sorting and/or faceting.
 
 Some Solr features may not yet work with "Dimensional Points", in which case you may want to consider the equivalent `TrieIntField`, `TrieLongField`, `TrieFloatField`, and `TrieDoubleField` classes.
 These field types are deprecated and are likely to be removed in a future major Solr release, but they can still be used if necessary.
@@ -255,4 +255,4 @@ Finally, for faceting, use the primary author only via a `StrField`:
 A field type may optionally specify a `<similarity/>` that will be used when scoring documents that refer to fields with this type, as long as the "global" similarity for the collection allows it.
 
 By default, any field type which does not define a similarity, uses `BM25Similarity`.
-For more details, and examples of configuring both global & per-type Similarities, please see <<schema-elements.adoc#similarity,Schema Elements>>.
+For more details, and examples of configuring both global & per-type similarities, please see xref:schema-elements.adoc#similarity[Similarity].
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/field-types-included-with-solr.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/field-types-included-with-solr.adoc
index 74268c0..0faba5d 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/field-types-included-with-solr.adoc
@@ -25,61 +25,61 @@ The {solr-javadocs}/core/org/apache/solr/schema/package-summary.html[`org.apache
 [%autowidth.stretch,options="header"]
 |===
 |Class |Description
-|BBoxField | Indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. See the section <<spatial-search.adoc#,Spatial Search>> for more information.
+|BBoxField | Indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. See the section xref:query-guide:spatial-search.adoc[] for more information.
 
 |BinaryField |Binary data.
 
 |BoolField |Contains either true or false. Values of `1`, `t`, or `T` in the first character are interpreted as `true`. Any other values in the first character are interpreted as `false`.
 
-|CollationField |Supports Unicode collation for sorting and range queries. The ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>> for more information.
+|CollationField |Supports Unicode collation for sorting and range queries. The ICUCollationField is a better choice if you can use ICU4J. See the section xref:language-analysis.adoc#unicode-collation[Unicode Collation] for more information.
 
-|CurrencyFieldType |Supports currencies and exchange rates. See the section <<currencies-exchange-rates.adoc#,Currencies and Exchange Rates>> for more information.
+|CurrencyFieldType |Supports currencies and exchange rates. See the section xref:currencies-exchange-rates.adoc[] for more information.
 
-|DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<date-formatting-math.adoc#,Date Formatting and Date Math>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
+|DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section xref:date-formatting-math.adoc[] for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
 
-|DatePointField |Date field. Represents a point in time with millisecond precision, encoded using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. See the section <<date-formatting-math.adoc#,Working with Dates>> for more details on the supported syntax. For single valued fields, `docValues="true"` must be used to enable sorting.
+|DatePointField |Date field. Represents a point in time with millisecond precision, encoded using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. See the section xref:date-formatting-math.adoc[] for more details on the supported syntax. For single valued fields, `docValues="true"` must be used to enable sorting.
 
 |DoublePointField |Double field (64-bit IEEE floating point). This class encodes double values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
 
-|ExternalFileField |Pulls values from a file on disk. See the section <<external-files-processes.adoc#,External Files and Processes>> for more information.
+|ExternalFileField |Pulls values from a file on disk. See the section xref:external-files-processes.adoc[] for more information.
 
-|EnumFieldType |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<enum-fields.adoc#,Enum Fields>> for more information.
+|EnumFieldType |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section xref:enum-fields.adoc[] for more information.
 
 |FloatPointField |Floating point field (32-bit IEEE floating point). This class encodes float values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
 
-|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>> for more information.
+|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section xref:language-analysis.adoc#unicode-collation[Unicode Collation] for more information.
 
 |IntPointField |Integer field (32-bit signed integer). This class encodes int values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
 
-|LatLonPointSpatialField |A latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma. See the section <<spatial-search.adoc#,Spatial Search>> for more information.
+|LatLonPointSpatialField |A latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma. See the section xref:query-guide:spatial-search.adoc[] for more information.
 
 |LongPointField |Long field (64-bit signed integer). This class encodes long values using a "Dimensional Points" based data structure that allows for very efficient searches for specific values, or ranges of values. For single valued fields, `docValues="true"` must be used to enable sorting.
 
-|NestPathField | Specialized field type storing ehanced information, when <<indexing-nested-documents.adoc#schema-configuration,working with nested documents>>.
+|NestPathField | Specialized field type storing enhanced information, when xref:indexing-nested-documents.adoc#schema-configuration[working with nested documents].
 
-|PointType |A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields). See <<spatial-search.adoc#,Spatial Search>> for more information.
+|PointType |A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields). See xref:query-guide:spatial-search.adoc[] for more information.
 
 |PreAnalyzedField |Provides a way to send serialized token streams to Solr, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing.
 
-Configuration and usage of PreAnalyzedField is documented in the section  <<external-files-processes.adoc#the-preanalyzedfield-type,The PreAnalyzedField Type>>.
+Configuration and usage of PreAnalyzedField is documented in the section  xref:external-files-processes.adoc#the-preanalyzedfield-type[PreAnalyzedField Type].
 
 |RandomSortField |Does not contain a value. Queries that sort on this field type will return results in random order. Use a dynamic field to use this feature.
 
-|RankField |Can be used to store scoring factors to improve document ranking. To be used in combination with <<other-parsers.adoc#ranking-query-parser,RankQParserPlugin>>
+|RankField |Can be used to store scoring factors to improve document ranking. To be used in combination with xref:query-guide:other-parsers.adoc#ranking-query-parser[RankQParserPlugin].
 
-|RptWithGeometrySpatialField |A derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry. See <<spatial-search.adoc#,Spatial Search>> for more information and usage with geospatial results transformer.
+|RptWithGeometrySpatialField |A derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry. See xref:query-guide:spatial-search.adoc[] for more information and usage with geospatial results transformer.
 
-|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute. See <<common-query-parameters.adoc#sort-parameter,sort parameter discussion>> for details.
+|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute. See xref:query-guide:common-query-parameters.adoc#sort-parameter[sort parameter discussion] for details.
 
-|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See <<spatial-search.adoc#,Spatial Search>> for more information.
+|SpatialRecursivePrefixTreeFieldType |(RPT for short) Accepts latitude comma longitude strings or other shapes in WKT format. See xref:query-guide:spatial-search.adoc[] for more information.
 
 |StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
 
-|TextField |Text, usually multiple words or tokens. In normal usage, only fields of type TextField or SortableTextField will specify an <<analyzers.adoc#,analyzer>>.
+|TextField |Text, usually multiple words or tokens. In normal usage, only fields of type TextField or SortableTextField will specify an xref:analyzers.adoc[analyzer].
 
 |UUIDField |Universally Unique Identifier (UUID). Pass in a value of `NEW` and Solr will create a new UUID.
 
-*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using <<update-request-processors.adoc#,UUIDUpdateProcessorFactory>> to generate UUID values when documents are added is recommended instead.
+*Note*: configuring a UUIDField instance with a default value of `NEW` is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using xref:configuration-guide:update-request-processors.adoc[UUIDUpdateProcessorFactory] to generate UUID values when documents are added is recommended instead.
 |===
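The recommended alternative can be sketched as a `solrconfig.xml` update chain (the chain name and field name here are hypothetical):

[source,xml]
----
<updateRequestProcessorChain name="uuid">
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
----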
 
 == Deprecated Field Types
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/fields.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/fields.adoc
index ce21946..064a3d2 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/fields.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/fields.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Fields are defined in the fields element of a <<solr-schema.adoc#,schema>>.
+Fields are defined in the fields element of a xref:schema-elements.adoc[schema].
 Once you have the field types set up, defining the fields themselves is simple.
 
 == Example Field Definition
@@ -69,7 +69,7 @@ If this property is not specified, there is no default.
 
 Fields can have many of the same properties as field types.
 Properties from the table below which are specified on an individual field will override any explicit value for that property specified on the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation.
-The table below is reproduced from <<field-type-definitions-and-properties.adoc#,Field Type Definitions and Properties>>, which has more details:
+The table below is reproduced from xref:field-type-definitions-and-properties.adoc[], which has more details:
 
 --
 include::field-type-definitions-and-properties.adoc[tag=field-params]
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/filters.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/filters.adoc
index 95c56ae..9fd4b73 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/filters.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/filters.adoc
@@ -20,7 +20,7 @@ Filters examine a stream of tokens and keep them, transform them, or discard the
 
 == About Filters
 
-Like <<tokenizers.adoc#,tokenizers>>, filters consume input and produce a stream of tokens.
+Like xref:tokenizers.adoc[tokenizers], filters consume input and produce a stream of tokens.
 Filters also derive from `org.apache.lucene.analysis.TokenStream` but unlike tokenizers, a filter's input is another TokenStream.
 The job of a filter is usually easier than that of a tokenizer since in most cases a filter looks at each token in the stream sequentially and decides whether to pass it along, replace it, or discard it.
 
@@ -90,7 +90,7 @@ Solr includes several language-specific stemmers created by the http://snowball.
 The generic <<Snowball Porter Stemmer Filter>> can be used to configure any of these language stemmers.
 Solr also includes a convenience wrapper for the English Snowball stemmer.
 There are also several purpose-built stemmers for non-English languages.
-These stemmers are described in <<language-analysis.adoc#,Language Analysis>>.
+These stemmers are described in xref:language-analysis.adoc[].
 
 === Filters with Arguments
 
@@ -199,7 +199,7 @@ If `true`, the original token is preserved: "thé" -> "the", "thé"
 == Beider-Morse Filter
 
 Implements the Beider-Morse Phonetic Matching (BMPM) algorithm, which allows identification of similar names, even if they are spelled differently or in different languages.
-More information about how this works is available in the section on <<phonetic-matching.adoc#beider-morse-phonetic-matching-bmpm,Phonetic Matching>>.
+More information about how this works is available in the section xref:phonetic-matching.adoc#beider-morse-phonetic-matching-bmpm[Beider-Morse Phonetic Matching].
 
 [IMPORTANT]
 ====
@@ -284,7 +284,7 @@ The value `auto` will allow the filter to identify the language, or a comma-sepa
 
 == Classic Filter
 
-This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic Tokenizer>> and strips periods from acronyms and "'s" from possessives.
+This filter takes the output of the xref:tokenizers.adoc#classic-tokenizer[Classic Tokenizer] and strips periods from acronyms and "'s" from possessives.
 
 *Factory class:* `solr.ClassicFilterFactory`
 
@@ -328,9 +328,9 @@ This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic
 
 This filter, for use in `index` time analysis, creates word shingles by combining common tokens such as stop words with regular tokens.
 This can result in an index with more unique terms, but is useful for creating phrase queries containing common words, such as "the cat", in a way that will typically be much faster than if the combined tokens are not used, because only the term positions of documents containing both terms in sequence have to be considered.
-Correct usage requires being paired with <<#common-grams-query-filter,Common Grams Query Filter>> during `query` analysis.
+Correct usage requires being paired with <<Common Grams Query Filter>> during `query` analysis.
 
-These filters can also be combined with <<#stop-filter,Stop Filter>> so searching for `"the cat"` would match different documents then `"a cat"`, while pathological searches for either `"the"` or `"a"` would not match any documents.
+These filters can also be combined with <<Stop Filter>> so searching for `"the cat"` would match different documents than `"a cat"`, while pathological searches for either `"the"` or `"a"` would not match any documents.
 
 *Factory class:* `solr.CommonGramsFilterFactory`
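A hedged sketch of an index/query analyzer pairing for this filter (the field type name and words file are hypothetical):

[source,xml]
----
<fieldType name="text_common_grams" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
----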
 
@@ -409,18 +409,18 @@ If `true`, the filter ignores the case of words when comparing them to the commo
 
 == Common Grams Query Filter
 
-This filter is used for the `query` time analysis aspect of <<#common-grams-filter,Common Grams Filter>> -- see that filer for a description of arguments, example configuration, and sample input/output.
+This filter is used for the `query` time analysis aspect of <<Common Grams Filter>> -- see that filter for a description of arguments, example configuration, and sample input/output.
 
 == Collation Key Filter
 
 Collation allows sorting of text in a language-sensitive way.
 It is usually used for sorting, but can also be used with advanced searches.
-We've covered this in much more detail in the section on <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
+We've covered this in much more detail in the section on xref:language-analysis.adoc#unicode-collation[Unicode Collation].
 
 == Daitch-Mokotoff Soundex Filter
 
 Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of similar names, even if they are spelled differently.
-More information about how this works is available in the section on <<phonetic-matching.adoc#,Phonetic Matching>>.
+More information about how this works is available in the section on xref:phonetic-matching.adoc[].
 
 *Factory class:* `solr.DaitchMokotoffSoundexFilterFactory`
 
@@ -468,7 +468,7 @@ Setting this to `false` will enable phonetic matching, but the exact spelling of
 == Double Metaphone Filter
 
 This filter creates tokens using the http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[`DoubleMetaphone`] encoding algorithm from commons-codec.
-For more information, see the <<phonetic-matching.adoc#,Phonetic Matching>> section.
+For more information, see xref:phonetic-matching.adoc[].
 
 *Factory class:* `solr.DoubleMetaphoneFilterFactory`
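 
 As a sketch, the filter can be added to an analyzer like any other token filter; with `inject="true"` (the default), the original token is kept alongside its phonetic encoding:
 
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.DoubleMetaphoneFilterFactory" inject="true"/>
 </analyzer>
 ----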
 
@@ -1050,7 +1050,7 @@ This filter is generally only useful at index time.
 This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode TR #30: Character Foldings] in addition to the `NFKC_Casefold` normalization form as described in <<ICU Normalizer 2 Filter>>.
 This filter is a better substitute for the combined behavior of the <<ASCII Folding Filter>>, <<Lower Case Filter>>, and <<ICU Normalizer 2 Filter>>.
 
-To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this filter, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 *Factory class:* `solr.ICUFoldingFilterFactory`
@@ -1191,7 +1191,7 @@ See the http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[Uni
 
 For detailed information about these normalization forms, see http://unicode.org/reports/tr15/[Unicode Normalization Forms].
 
-To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this filter, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 == ICU Transform Filter
@@ -1244,7 +1244,7 @@ For a full list of ICU System Transforms, see http://demo.icu-project.org/icu-bi
 
 For detailed information about ICU Transforms, see http://userguide.icu-project.org/transforms/general.
 
-To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this filter, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 == Keep Word Filter
@@ -1703,7 +1703,7 @@ All other characters are left unchanged.
 
 == Managed Stop Filter
 
-This is specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#,managed from a REST API.>>
+This is a specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are xref:configuration-guide:managed-resources.adoc[managed from a REST API].
 
 *Arguments:*
 
@@ -1750,7 +1750,7 @@ See <<Stop Filter>> for example input/output.
 
 == Managed Synonym Filter
 
-This is specialized version of the <<Synonym Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#,managed from a REST API.>>
+This is a specialized version of the <<Synonym Filter>> that uses a mapping of synonyms that is xref:configuration-guide:managed-resources.adoc[managed from a REST API].
 
 .Managed Synonym Filter has been Deprecated
 [WARNING]
@@ -1764,7 +1764,7 @@ For arguments and examples, see the <<Synonym Graph Filter>> below.
 
 == Managed Synonym Graph Filter
 
-This is specialized version of the <<Synonym Graph Filter>> that uses a mapping on synonyms that is <<managed-resources.adoc#,managed from a REST API.>>
+This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping of synonyms that is xref:configuration-guide:managed-resources.adoc[managed from a REST API].
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output.
 This filter is a replacement for the Managed Synonym Filter, which produces incorrect graphs for multi-token synonyms.
@@ -2199,7 +2199,7 @@ Otherwise the token is passed through.
 == Phonetic Filter
 
 This filter creates tokens using one of the phonetic encoding algorithms in the `org.apache.commons.codec.language` package.
-For more information, see the section on <<phonetic-matching.adoc#,Phonetic Matching>>.
+For more information, see the section on xref:phonetic-matching.adoc[].
 
 *Factory class:* `solr.PhoneticFilterFactory`
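 
 A minimal analyzer sketch selecting one of the supported encoders (`Metaphone` is used here purely as an illustration):
 
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.PhoneticFilterFactory" encoder="Metaphone" inject="true"/>
 </analyzer>
 ----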
 
@@ -3062,7 +3062,7 @@ s|Required |Default: none
 The path to a file that contains a list of synonyms, one per line.
 In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with `\#` are ignored.
 This may be a comma-separated list of paths.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 +
 There are two ways to specify synonym mappings:
 +
@@ -3429,7 +3429,7 @@ With the example below, for a token "example.com" with type `<URL>`, the token e
 == Type Token Filter
 
 This filter blacklists or whitelists a specified list of token types, assuming the tokens have type metadata associated with them.
-For example, the <<tokenizers.adoc#uax29-url-email-tokenizer,UAX29 URL Email Tokenizer>> emits "<URL>" and "<EMAIL>" typed tokens, as well as other types.
+For example, the xref:tokenizers.adoc#uax29-url-email-tokenizer[UAX29 URL Email Tokenizer] emits "<URL>" and "<EMAIL>" typed tokens, as well as other types.
 This filter would allow you to pull out only e-mail addresses from text as tokens, if you wish.
 
 *Factory class:* `solr.TypeTokenFilterFactory`
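 
 A sketch of the e-mail-address use case described above (the `email_type.txt` file name is an assumption; it would contain the single line `<EMAIL>`):
 
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
   <filter class="solr.TypeTokenFilterFactory" types="email_type.txt" useWhitelist="true"/>
 </analyzer>
 ----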
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-nested-documents.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-nested-documents.adoc
index 44dc993..799a4fc 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-nested-documents.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-nested-documents.adoc
@@ -16,23 +16,20 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr supports indexing nested documents, described here, and ways to <<searching-nested-documents.adoc#,search and retrieve>> them very efficiently.
+Solr supports indexing nested documents, described here, and ways to xref:query-guide:searching-nested-documents.adoc[search and retrieve] them very efficiently.
 
-By way of examples: nested documents in Solr can be used to bind a blog post (parent document)
-with comments (child documents) -- or as a way to model major product lines as parent documents,
+By way of examples: nested documents in Solr can be used to bind a blog post (parent document) with comments (child documents) -- or as a way to model major product lines as parent documents,
 with multiple types of child documents representing individual SKUs (with unique sizes / colors) and supporting documentation (either directly nested under the products, or under individual SKUs).
 
 The "top-most" parent with all children is referred to as a "root" document (formerly a "block document"), which explains some of the nomenclature of related features.
 
-At query time, the <<block-join-query-parser.adoc#,Block Join Query Parsers>> can search these relationships,
- and the `<<document-transformers.adoc#child-childdoctransformerfactory,[child]>>` Document Transformer can attach child (or other "descendent") documents to the result documents.
-In terms of performance, indexing the relationships between documents usually yields much faster queries than an equivalent "<<other-parsers#join-query-parser,query time join>>",
+At query time, the xref:query-guide:block-join-query-parser.adoc[] can search these relationships, and the xref:query-guide:document-transformers.adoc#child-childdoctransformerfactory[`[child]`] Document Transformer can attach child (or other "descendant") documents to the result documents.
+In terms of performance, indexing the relationships between documents usually yields much faster queries than an equivalent xref:query-guide:join-query-parser.adoc["query time join"],
  since the relationships are already stored in the index and do not need to be computed.
 
 However, nested documents are less flexible than query time joins, as they impose rules that some applications may not be able to accept.
-Nested documents may be indexed via either the XML or JSON data syntax, and is also supported by <<solrj.adoc#,SolrJ>> with javabin.
-
+Nested documents may be indexed via either the XML or JSON data syntax, and are also supported by xref:deployment-guide:solrj.adoc[] with javabin.
 
 [CAUTION]
 ====
@@ -125,7 +122,7 @@ There is no "child document" field type.
 
 [CAUTION]
 =====
-The <<indexing-with-update-handlers#json-update-convenience-paths,`/update/json/docs` convenience path>> will automatically flatten complex JSON documents by default -- so to index nested JSON documents make sure to use `/update`.
+The xref:indexing-with-update-handlers.adoc#json-update-convenience-paths[`/update/json/docs` convenience path] will automatically flatten complex JSON documents by default -- so to index nested JSON documents make sure to use `/update`.
 =====
 ====
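 
 As a sketch, a nested JSON document posted to `/update` might look like the following (field names are illustrative, and a schema defining `\_root_` and `\_nest_path_` is assumed):
 
 [source,json]
 ----
 { "id": "P1",
   "name_s": "Product One",
   "comments": [
     { "id": "P1-C1", "comment_s": "A comment on product one" },
     { "id": "P1-C2", "comment_s": "Another comment" }
   ]
 }
 ----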
 
@@ -232,13 +229,13 @@ Indexing nested documents _requires_ an indexed field named `\_root_`:
 ----
 
 *Do not add this field to an index that already has data!
-<<reindexing.adoc#changes-that-require-reindex,You must reindex.>>*
+xref:reindexing.adoc#changes-that-require-reindex[You must reindex].*
 
 * Solr automatically populates this field in _all_ documents with the `id` value of its root document
 -- its highest ancestor, possibly itself.
 * This field must be indexed (`indexed="true"`) but doesn't need to
 be either stored (`stored="true"`) or use doc values (`docValues="true"`), however you are free to do so if you find it useful.
-If you want to use `uniqueBlock(\_root_)` <<json-facet-api#stat-facet-functions,field type limitation>>, then you should enable docValues.
+If you want to use `uniqueBlock(\_root_)` xref:query-guide:json-facet-api.adoc#stat-facet-functions[field type limitation], then you should enable docValues.
 
 Preferably, you will also define `\_nest_path_` which adds features and ease-of-use:
 
@@ -250,11 +247,11 @@ Preferably, you will also define `\_nest_path_` which adds features and ease-of-
 
 * Solr automatically populates this field for any child document but not root documents.
 * This field enables Solr to properly record & reconstruct the named and nested relationship of documents
-when using the `<<searching-nested-documents.adoc#child-doc-transformer,[child]>>` doc transformer.
-** If this field does not exist, the `[child]` transformer will return all descendent child documents as a flattened list -- just as if they had been <<#indexing-anonymous-children,indexed as anonymous children>>.
+when using the xref:query-guide:searching-nested-documents.adoc#child-doc-transformer[`[child]`] doc transformer.
+** If this field does not exist, the `[child]` transformer will return all descendant child documents as a flattened list -- just as if they had been <<indexing-anonymous-children,indexed as anonymous children>>.
 * If you do not use `\_nest_path_` it is strongly recommended that every document have some
 field that differentiates root documents from their nested children -- and differentiates different "types" of child documents.
-This is not strictly necessary, so long as it's possible to write a "filter" query that can be used to isolate and select only parent documents for use in the <<block-join-query-parser#,block join query parsers>> and <<searching-nested-documents.adoc#child-doc-transformer,[child]>> doc transformer
+This is not strictly necessary, so long as it's possible to write a "filter" query that can be used to isolate and select only parent documents for use in the xref:query-guide:block-join-query-parser.adoc[] and xref:query-guide:searching-nested-documents.adoc#child-doc-transformer[`[child]`] doc transformer.
 * It's possible to query on this field, although at present this is documented only in the
 context of `[child]`'s `childFilter` parameter.
 
@@ -279,17 +276,16 @@ documents.
 [TIP]
 ====
 When using SolrCloud it is a _VERY_ good idea to use
-<<solrcloud-shards-indexing.adoc#document-routing,prefix based compositeIds>> with a
+xref:deployment-guide:solrcloud-shards-indexing.adoc#document-routing[prefix based compositeIds] with a
 common prefix for all documents in the nested document tree.
 This makes it much easier to apply
-<<partial-document-updates#updating-child-documents,atomic updates to individual child documents>>
+xref:partial-document-updates.adoc#updating-child-documents[atomic updates] to individual child documents.
 ====
 
-
 == Maintaining Integrity with Updates and Deletes
 
-Nested document trees can be modified with Solr's
-<<partial-document-updates#updating-child-documents,atomic update>> feature to
+Nested document trees can be modified with
+xref:partial-document-updates.adoc#atomic-updates[atomic updates] to
 manipulate any document in a nested tree, and even to add new child documents.
 This aspect isn't different than updating any normal document -- Solr internally deletes the old
 nested document tree and it adds the newly modified one.
@@ -302,14 +298,11 @@ Clients should be very careful to *never* violate this.
 
 To delete an entire nested document tree, you can simply delete-by-ID using the `id` of the root document.
 Delete-by-ID will not work with the `id` of a child document, since only root document IDs are considered.
-Instead, use delete-by-query (most efficient) or <<partial-document-updates#,atomic updates>> to remove the child document from it's parent.
+Instead, use delete-by-query (most efficient) or xref:partial-document-updates.adoc#atomic-updates[atomic updates] to remove the child document from its parent.
 
 If you use Solr's delete-by-query APIs, you *MUST* be careful to structure any deletion query so that no descendant children remain of any documents being deleted.
 *_Doing otherwise will violate integrity assumptions that Solr expects._*
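 
 Since every document in a tree shares its root's `id` in the `\_root_` field, one way to satisfy this (a sketch, assuming a root document with id `P1`) is to delete by that field, which removes the root and all of its descendants together:
 
 [source,xml]
 ----
 <delete>
   <query>_root_:P1</query>
 </delete>
 ----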
 
-
-
-
 == Indexing Anonymous Children
 
 Although not recommended, it is also possible to index child documents "anonymously":
@@ -424,10 +417,10 @@ include::example$IndexingNestedDocuments.java[tag=anon-kids]
 This simplified approach was common in older versions of Solr, and can still be used with "Root-Only" schemas that do not contain any other nested related fields apart from `\_root_`.
 Many schemas in existence are this way simply because default configsets are this way, even if the application isn't using nested documents.
 
-This approach should *NOT* be used when schemas include a `\_nest_path_` field, as the existence of that field triggers assumptions and changes in behavior in various query time functionality, such as the <<searching-nested-documents.adoc#child-doc-transformer,[child]>>, that will not work when nested documents do not have any intrinsic "nested path" information.
+This approach should *NOT* be used when schemas include a `\_nest_path_` field, as the existence of that field triggers assumptions and changes in behavior in various query time functionality, such as xref:query-guide:searching-nested-documents.adoc#child-doc-transformer[`[child]`], that will not work when nested documents do not have any intrinsic "nested path" information.
 
 The results of indexing anonymous nested children with a "Root-Only" schema are similar to what happens if you attempt to index "pseudo field" nested documents using a "Root-Only" schema.
-Notably: since there is no nested path information for the <<searching-nested-documents.adoc#child-doc-transformer,[child]>> transformer to use to reconstruct the structure of a nest of documents, it returns all matching children as a flat list, similar in structure to how they were originally indexed:
+Notably: since there is no nested path information for the xref:query-guide:searching-nested-documents.adoc#child-doc-transformer[`[child]`] transformer to use to reconstruct the structure of a nest of documents, it returns all matching children as a flat list, similar in structure to how they were originally indexed:
 
 [.dynamic-tabs]
 --
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
index bb5a46d..025b9b8 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
@@ -47,7 +47,7 @@ You can configure which elements should be included/ignored, and which should ma
 * Solr Cell maps each piece of metadata onto a field.
 By default it maps to the same name but several parameters control how this is done.
 * When Solr Cell finishes creating the internal `SolrInputDocument`, the rest of the Lucene/Solr indexing stack takes over.
-The next step after any update handler is the <<update-request-processors.adoc#,Update Request Processor>> chain.
+The next step after any update handler is the xref:configuration-guide:update-request-processors.adoc[Update Request Processor] chain.
 
 Solr Cell is a contrib, which means it's not automatically included with Solr but must be configured.
 The example configsets have Solr Cell configured, but if you are not using those, you will want to pay attention to the section <<solrconfig.xml Configuration>> below.
@@ -70,7 +70,7 @@ that have a lot of rich media embedded in them.
 For these reasons, Solr Cell is not recommended for use in a production system.
 
 It is a best practice to use Solr Cell as a proof-of-concept tool during development and then run Tika as an external
-process that sends the extracted documents to Solr (via <<solrj.adoc#,SolrJ>>) for indexing.
+process that sends the extracted documents to Solr (via xref:deployment-guide:solrj.adoc[]) for indexing.
 This way, any extraction failures that occur are isolated from Solr itself and can be handled gracefully.
 
 For a few examples of how this could be done, see this blog post by Erick Erickson, https://lucidworks.com/2012/02/14/indexing-with-solrj/[Indexing with SolrJ].
@@ -405,7 +405,7 @@ Also see the section <<Defining XPath Expressions>> for an example.
 
 === solrconfig.xml Configuration
 
-If you have started Solr with one of the supplied <<config-sets.adoc#,example configsets>>, you may already have the `ExtractingRequestHandler` configured by default.
+If you have started Solr with one of the supplied xref:configuration-guide:config-sets.adoc[example configsets], you may already have the `ExtractingRequestHandler` configured by default.
 
 If it is not already configured, you will need to configure `solrconfig.xml` to find the `ExtractingRequestHandler` and its dependencies:
 
@@ -434,9 +434,9 @@ In this setup, all field names are lower-cased (with the `lowernames` parameter)
 
 [TIP]
 ====
-You may need to configure <<update-request-processors.adoc#,Update Request Processors>> (URPs) that parse numbers and dates and do other manipulations on the metadata fields generated by Solr Cell.
+You may need to configure xref:configuration-guide:update-request-processors.adoc[] (URPs) that parse numbers and dates and do other manipulations on the metadata fields generated by Solr Cell.
 
-In Solr's `_default` configset, <<schemaless-mode.adoc#,"schemaless">> (aka data driven, or field guessing) mode is enabled, which does a variety of such processing already.
+In Solr's `_default` configset, xref:schemaless-mode.adoc[schemaless mode] (aka data driven, or field guessing) is enabled, which does a variety of such processing already.
 
 If you instead explicitly define the fields for your schema, you can selectively specify the desired URPs.
 An easy way to specify this is to configure the parameter `processor` (under `defaults`) to `uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date`.
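 
 A sketch of how that might look in the handler definition (abbreviated; only the `defaults` shown here are assumptions for illustration):
 
 [source,xml]
 ----
 <requestHandler name="/update/extract" startup="lazy"
                 class="solr.extraction.ExtractingRequestHandler">
   <lst name="defaults">
     <str name="lowernames">true</str>
     <str name="processor">uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date</str>
   </lst>
 </requestHandler>
 ----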
@@ -604,7 +604,7 @@ curl "http://localhost:8983/solr/gettingstarted/update/extract?literal.id=doc6&d
 == Using Solr Cell with SolrJ
 
 SolrJ is a Java client that you can use to add documents to the index, update the index, or query the index.
-You'll find more information on SolrJ in <<solrj.adoc#,SolrJ>>.
+You'll find more information on SolrJ in xref:deployment-guide:solrj.adoc[].
 
 Here's an example of using Solr Cell and SolrJ to add documents to a Solr index.
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-update-handlers.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-update-handlers.adoc
index 0ac0d32..22706e1 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-update-handlers.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-update-handlers.adoc
@@ -18,18 +18,16 @@
 // under the License.
 
 Update handlers are request handlers designed to add, delete and update documents to the index.
-In addition to having plugins for importing rich documents <<indexing-with-tika.adoc#,Indexing with Solr Cell and Apache Tika>>, Solr natively supports indexing structured documents in XML, CSV, and JSON.
+In addition to plugins for importing rich documents (see xref:indexing-with-tika.adoc[]), Solr natively supports indexing structured documents in XML, CSV, and JSON.
 
 The recommended way to configure and use request handlers is with path based names that map to paths in the request URL.
-However, request handlers can also be specified with the `qt` (query type) parameter if the <<requestdispatcher.adoc#,`requestDispatcher`>> is appropriately configured.
+However, request handlers can also be specified with the `qt` (query type) parameter if the xref:configuration-guide:requestdispatcher.adoc[`requestDispatcher`] is appropriately configured.
 It is possible to access the same handler using more than one name, which can be useful if you wish to specify different sets of default options.
 
-A single unified update request handler supports XML, CSV, JSON, and javabin update requests, delegating to the appropriate `ContentStreamLoader` based on the `Content-Type` of the <<content-streams.adoc#,ContentStream>>.
+A single unified update request handler supports XML, CSV, JSON, and javabin update requests, delegating to the appropriate `ContentStreamLoader` based on the `Content-Type` of the xref:content-streams.adoc[ContentStream].
 
 If you need to pre-process documents after they are loaded but before they are indexed (or even checked against the schema),
-Solr has document-preprocessing plugins for Update Request Handlers,
-called <<update-request-processors.adoc#,Update Request Processors>>,
-which allow for default and custom configuration chains.
+Solr has document preprocessing plugins for Update Request Handlers, called xref:configuration-guide:update-request-processors.adoc[], which allow for default and custom configuration chains.
 
 == UpdateRequestHandler Configuration
 
@@ -184,7 +182,7 @@ A single delete message can contain multiple delete operations.
 ====
 
 When using the Join query parser in a Delete By Query, you should use the `score` parameter with a value of "none" to avoid a `ClassCastException`.
-See the section on the <<other-parsers.adoc#,Join Query Parser>> for more details on the `score` parameter.
+See the section on the xref:query-guide:join-query-parser.adoc[] for more details on the `score` parameter.
 
 ====
 
@@ -250,7 +248,7 @@ This alternative `curl` command performs equivalent operations but with minimal
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" -T "myfile.xml" -X POST
 ----
 
-Short requests can also be sent using a HTTP GET command, if enabled in <<requestdispatcher.adoc#requestparsers-element,`requestParsers`>> element of `solrconfig.xml`, URL-encoding the request, as in the following.
+Short requests can also be sent using an HTTP GET command, if enabled in the xref:configuration-guide:requestdispatcher.adoc#requestparsers-element[`requestParsers`] element of `solrconfig.xml`, URL-encoding the request, as in the following.
 Note the escaping of "<" and ">":
 
 [source,bash]
@@ -275,9 +273,9 @@ The status field will be non-zero in case of failure.
 === Using XSLT to Transform XML Index Updates
 
 The Scripting contrib module provides a separate XSLT Update Request Handler that allows you to index any arbitrary XML by using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation].
-You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#,configset>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
+You must have an XSLT stylesheet in the `conf/xslt` directory of your xref:configuration-guide:config-sets.adoc[configset] that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
 
-Learn more about adding the `dist/solr-scripting-*.jar` file into Solr's <<libs.adoc#lib-directories,Lib Directories>>.
+Learn more about adding the `dist/solr-scripting-*.jar` file into Solr's xref:configuration-guide:libs.adoc#lib-directories[Lib Directories].
 
 === tr Parameter
 
@@ -287,7 +285,7 @@ The transformation must be found in the Solr `conf/xslt` directory.
 
 === XSLT Configuration
 
-The example below, from the `sample_techproducts_configs` <<config-sets.adoc#,configset>> in the Solr distribution, shows how the XSLT Update Request Handler is configured.
+The example below, from the `sample_techproducts_configs` xref:configuration-guide:config-sets.adoc[configset] in the Solr distribution, shows how the XSLT Update Request Handler is configured.
 
 [source,xml]
 ----
@@ -358,12 +356,12 @@ $ curl -o standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/
 $ curl -X POST -H "Content-Type: text/xml" -d @standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/update/xslt?commit=true&tr=updateXml.xsl"
 ----
 
-NOTE: You can see the opposite export/import cycle using the `tr` parameter in <<response-writers.adoc#xslt-writer-example,Response Writer XSLT example>>.
+NOTE: You can see the opposite export/import cycle using the `tr` parameter in the xref:query-guide:response-writers.adoc#xslt-writer-example[Response Writer XSLT example].
 
 == JSON Formatted Index Updates
 
 Solr can accept JSON that conforms to a defined structure, or can accept arbitrary JSON-formatted documents.
-If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described in the section <<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>.
+If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described in the section xref:transforming-and-indexing-custom-json.adoc[].
 
 === Solr-Style JSON
 
@@ -507,7 +505,7 @@ The `/update/json` path may be useful for clients sending in JSON formatted upda
 === Custom JSON Documents
 
 Solr can support custom JSON.
-This is covered in the section <<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>.
+This is covered in the section xref:transforming-and-indexing-custom-json.adoc[].
 
 
 == CSV Formatted Index Updates
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/language-analysis.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/language-analysis.adoc
index 676add4..10ca240 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/language-analysis.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/language-analysis.adoc
@@ -25,7 +25,7 @@ Tokens are delimited by white space and/or a relatively small set of punctuation
 In other languages the tokenization rules are often not so simple.
 Some European languages may also require special tokenization rules, such as rules for decompounding German words.
 
-For information about language detection at index time, see <<language-detection.adoc#,Language Detection>>.
+For information about language detection at index time, see xref:language-detection.adoc[].
 
 == KeywordMarkerFilterFactory
 
@@ -33,7 +33,7 @@ Protects words from being modified by stemmers.
 A customized protected word list may be specified with the "protected" attribute in the schema.
 Any words in the protected word list will not be modified by any stemmer in Solr.
 
-A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#,configset>> directory:
+A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` xref:configuration-guide:config-sets.adoc[configset] directory:
 
 [.dynamic-tabs]
 --
@@ -186,7 +186,7 @@ s|Required |Default: none
 The path of a file that contains a list of simple words, one per line.
 Blank lines and lines that begin with "`#`" are ignored.
 +
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 `minWordSize`::
 +
@@ -269,7 +269,7 @@ Unicode Collation in Solr is fast, because all the work is done at index time.
 Rather than specifying an analyzer within `<fieldtype ... class="solr.TextField">`, the `solr.CollationField` and `solr.ICUCollationField` field type classes provide this functionality.
 `solr.ICUCollationField`, which is backed by http://site.icu-project.org[the ICU4J library], provides more flexible configuration, has more locales, is significantly faster, and requires less memory and less index space, since its keys are smaller than those produced by the JDK implementation that backs `solr.CollationField`.
 
-To use `solr.ICUCollationField`, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use `solr.ICUCollationField`, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 `solr.ICUCollationField` and `solr.CollationField` fields can be created in two ways:
@@ -695,13 +695,13 @@ On the other hand, it can reduce precision because language-specific character d
 
 The `lucene/analysis/opennlp` module provides OpenNLP integration via several analysis components: a tokenizer, a part-of-speech tagging filter, a phrase chunking filter, and a lemmatization filter.
 In addition to these analysis components, Solr also provides an update request processor to extract named entities.
-See also <<update-request-processors.adoc#update-processor-factories-that-can-be-loaded-as-plugins,Update Processor Factories That Can Be Loaded as Plugins>>.
+See also xref:configuration-guide:update-request-processors.adoc#update-processor-factories-that-can-be-loaded-as-plugins[Update Processor Factories That Can Be Loaded as Plugins].
 
 NOTE: The <<OpenNLP Tokenizer>> must be used with all other OpenNLP analysis components, for two reasons.
 First, the OpenNLP Tokenizer detects and marks the sentence boundaries required by all the OpenNLP filters.
 Second, since the pre-trained OpenNLP models used by these filters were trained using the corresponding language-specific sentence-detection/tokenization models, the same tokenization using the same models must be used at runtime for optimal performance.
 
-To use the OpenNLP components, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use the OpenNLP components, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
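
As a quick illustration, a field type that wires the OpenNLP tokenizer into an analyzer might look like the sketch below. The model file names (`en-sent.bin`, `en-token.bin`) are placeholders, not files shipped with Solr; substitute whatever language-specific models you have made available to Solr's resource loader:

```xml
<!-- Sketch only: the model file names are hypothetical examples. -->
<fieldType name="text_opennlp" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.OpenNLPTokenizerFactory"
               sentenceModel="en-sent.bin"
               tokenizerModel="en-token.bin"/>
  </analyzer>
</fieldType>
```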
 
 === OpenNLP Tokenizer
@@ -722,7 +722,7 @@ s|Required |Default: none
 |===
 +
 The path of a language-specific OpenNLP sentence detection model file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 `tokenizerModel`::
 +
@@ -732,7 +732,7 @@ s|Required |Default: none
 |===
 +
 The path of a language-specific OpenNLP tokenization model file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 *Example:*
 
@@ -783,11 +783,12 @@ s|Required |Default: none
 |===
 +
 The path of a language-specific OpenNLP POS tagger model file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 *Examples:*
 
-The OpenNLP tokenizer will tokenize punctuation, which is useful for following token filters, but ordinarily you don't want to include punctuation in your index, so the `TypeTokenFilter` (<<filters.adoc#type-token-filter,described here>>) is included in the examples below, with `stop.pos.txt` containing the following:
+The OpenNLP tokenizer will tokenize punctuation, which is useful for following token filters.
+Ordinarily you don't want to include punctuation in your index, so the xref:filters.adoc#type-token-filter[`TypeTokenFilter`] is included in the examples below, with `stop.pos.txt` containing the following:
 
 .stop.pos.txt
 [source,text]
@@ -839,7 +840,7 @@ Index the POS for each token as a payload:
 ====
 --
 
-Index the POS for each token as a synonym, after prefixing the POS with "@" (see the <<filters.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
+Index the POS for each token as a synonym, after prefixing the POS with "@" (see the xref:filters.adoc#type-as-synonym-filter[TypeAsSynonymFilter description]):
 
 [source,xml]
 ----
@@ -888,7 +889,7 @@ s|Required |Default: none
 |===
 +
 The path of a language-specific OpenNLP phrase chunker model file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 *Examples*:
 
@@ -928,7 +929,7 @@ Index the phrase chunk label for each token as a payload:
 ====
 --
 
-Index the phrase chunk label for each token as a synonym, after prefixing it with "#" (see the <<filters.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
+Index the phrase chunk label for each token as a synonym, after prefixing it with "#" (see the xref:filters.adoc#type-as-synonym-filter[TypeAsSynonymFilter description]):
 
 [source,xml]
 ----
@@ -963,7 +964,7 @@ Either `dictionary` or `lemmatizerModel` must be provided, and both may be provi
 |===
 +
 The path of a lemmatization dictionary file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g., `wrote[tab]write[tab]VBD`.
 
 `lemmatizerModel`::
@@ -974,7 +975,7 @@ The dictionary file must be encoded as UTF-8, with one entry per line, in the fo
 |===
 +
 The path of a language-specific OpenNLP lemmatizer model file.
-See <<resource-loading.adoc#,Resource Loading>> for more information.
+See xref:configuration-guide:resource-loading.adoc[] for more information.
 
 *Examples:*
 
@@ -1324,12 +1325,12 @@ The stemmer language, `Catalan` in this case.
 
 === Traditional Chinese
 
-The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text.
+The default configuration of the xref:tokenizers.adoc#icu-tokenizer[ICU Tokenizer] is suitable for Traditional Chinese text.
 It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See the `solr/contrib/analysis-extras/README.md` for information on which jars you need to add.
 
-<<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text.
+The xref:tokenizers.adoc#standard-tokenizer[Standard Tokenizer] can also be used to tokenize Traditional Chinese text.
 Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character.
 When combined with <<CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
 
@@ -1377,7 +1378,7 @@ When combined with <<CJK Bigram Filter>>, overlapping bigrams of Chinese charact
 
 === CJK Bigram Filter
 
-Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> or <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>.
+Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from the xref:tokenizers.adoc#standard-tokenizer[Standard Tokenizer] or the xref:tokenizers.adoc#icu-tokenizer[ICU Tokenizer].
 
 By default, all CJK characters produce bigrams, but finer grained control is available by specifying orthographic type arguments `han`, `hiragana`, `katakana`, and `hangul`.
 When set to `false`, characters of the corresponding type will be passed through as unigrams, and will not be included in any bigrams.
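
To make the pairing concrete, a minimal analyzer sketch that feeds the Standard Tokenizer into the CJK bigram filter, with all four orthographic types left at their bigram-producing defaults, could look like this:

```xml
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.CJKWidthFilterFactory"/> <!-- optional half/full-width normalization -->
  <filter class="solr.CJKBigramFilterFactory"
          han="true" hiragana="true" katakana="true" hangul="true"/>
</analyzer>
```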
@@ -1440,12 +1441,12 @@ See the example under <<Traditional Chinese>>.
 
 For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>.
 This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model.
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See the `solr/contrib/analysis-extras/README.md` for information on which jars you need to add.
 
-The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text.
+The default configuration of the xref:tokenizers.adoc#icu-tokenizer[ICU Tokenizer] is also suitable for Simplified Chinese text.
 It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See the `solr/contrib/analysis-extras/README.md` for information on which jars you need to add.
 
 Also useful for Chinese analysis:
@@ -1503,7 +1504,7 @@ Also useful for Chinese analysis:
 
 For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the `solr.HMMChineseTokenizerFactory` in the `analysis-extras` contrib module.
 This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model.
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 *Factory class:* `solr.HMMChineseTokenizerFactory`
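
Used on its own, the factory slots into an analyzer like any other tokenizer; a bare-bones sketch:

```xml
<analyzer>
  <tokenizer class="solr.HMMChineseTokenizerFactory"/>
</analyzer>
```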
@@ -2346,7 +2347,7 @@ Removes terms with one of the configured parts-of-speech.
 |===
 +
 Filename for a list of parts-of-speech for which to remove terms.
-See `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#,configset>> for an example.
+See `conf/lang/stoptags_ja.txt` in the `sample_techproducts_configs` xref:configuration-guide:config-sets.adoc[configset] for an example.
 
 ==== Japanese Katakana Stem Filter
 
@@ -2550,10 +2551,10 @@ This filter replaces term text with the Reading Attribute, the Hangul transcript
 === Hebrew, Lao, Myanmar, Khmer
 
 Lucene provides support, in addition to UAX#29 word break rules, for Hebrew's use of the double and single quote characters, and for segmenting Lao, Myanmar, and Khmer into syllables with the `solr.ICUTokenizerFactory` in the `analysis-extras` contrib module.
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
-See <<tokenizers.adoc#icu-tokenizer,the ICUTokenizer>> for more information.
+See xref:tokenizers.adoc#icu-tokenizer[ICUTokenizer] for more information.
 
 === Latvian
 
@@ -2825,7 +2826,7 @@ Solr includes support for normalizing Persian, and Lucene includes an example st
 
 Solr provides support for Polish stemming with the `solr.StempelPolishStemFilterFactory`, and `solr.MorphologikFilterFactory` for lemmatization, in the `contrib/analysis-extras` module.
 The `solr.StempelPolishStemFilterFactory` component includes an algorithmic stemmer with tables for Polish.
-To use either of these filters, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use either of these filters, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 *Factory class:* `solr.StempelPolishStemFilterFactory` and `solr.MorfologikFilterFactory`
@@ -3373,7 +3374,7 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
 === Ukrainian
 
 Solr provides support for Ukrainian lemmatization with the `solr.MorphologikFilterFactory`, in the `contrib/analysis-extras` module.
-To use this filter, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this filter, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See `solr/contrib/analysis-extras/README.md` for instructions on which jars you need to add.
 
 Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzers-morfologik` jar.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/language-detection.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/language-detection.adoc
index 7318e83..68122c1 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/language-detection.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/language-detection.adoc
@@ -29,7 +29,7 @@ In general, the LangDetect implementation supports more languages with higher pe
 
 For specific information on each of these language identification implementations, including a list of supported languages for each, see the relevant project websites.
 
-For more information about language analysis in Solr, see <<language-analysis.adoc#,Language Analysis>>.
+For more information about language analysis in Solr, see xref:language-analysis.adoc[].
 
 == Configuring Language Detection
 
@@ -94,7 +94,7 @@ An OpenNLP language detection model.
 The OpenNLP project provides a pre-trained 103 language model on the http://opennlp.apache.org/models.html[OpenNLP site's model download page].
 Model training instructions are provided on the http://opennlp.apache.org/docs/{ivy-opennlp-version}/manual/opennlp.html#tools.langdetect[OpenNLP website].
 +
-See <<resource-loading.adoc#,Resource Loading>> for information on where to put the model.
+See xref:configuration-guide:resource-loading.adoc[] for information on where to put the model.
 
 ==== OpenNLP Language Codes
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/luke-request-handler.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/luke-request-handler.adoc
index 7d3734b..bb0987d 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/luke-request-handler.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/luke-request-handler.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The Luke Request Handler offers programmatic access to the information provided on the <<schema-browser-screen#schema-browser-screen,Schema Browser>> page of the Admin UI.
+The Luke Request Handler offers programmatic access to the information provided on the xref:schema-browser-screen.adoc[] page of the Admin UI.
 It is modeled after Luke, the Lucene Index Browser by Andrzej Bialecki.
 It is an implicit handler, so you don't need to define it in `solrconfig.xml`.
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/partial-document-updates.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/partial-document-updates.adoc
index 9e601f8..3aee891 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/partial-document-updates.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/partial-document-updates.adoc
@@ -139,13 +139,13 @@ In-place updates avoid that.
 .Routing Updates using child document Ids in SolrCloud
 
 When SolrCloud receives document updates, the
-<<solrcloud-shards-indexing.adoc#document-routing,document routing>> rules for the collection is used to determine which shard should process the update based on the `id` of the document.
+xref:deployment-guide:solrcloud-shards-indexing.adoc#document-routing[document routing] rules for the collection are used to determine which shard should process the update based on the `id` of the document.
 
 When sending an update that specifies the `id` of a _child document_ this will not work by default: the correct shard to send the document to is based on the `id` of the "Root" document for the block the child document is in, *not* the `id` of the child document being updated.
 
 Solr offers two solutions to address this:
 
-* Clients may specify a <<solrcloud-shards-indexing.adoc#document-routing,`\_route_` parameter>>, with the `id` of the Root document as the parameter value, on each update to tell Solr which shard should process the update.
+* Clients may specify a xref:deployment-guide:solrcloud-shards-indexing.adoc#document-routing[`\_route_` parameter], with the `id` of the Root document as the parameter value, on each update to tell Solr which shard should process the update.
 * Clients can use the (default) `compositeId` router's "prefix routing" feature when indexing all documents to ensure that all child/descendent documents in a Block use the same `id` prefix as the Root level document.
 This will cause Solr's default routing logic to automatically send child document updates to the correct shard.
 
@@ -159,13 +159,13 @@ equivalent, but it may be absent or not equivalent (e.g., when using the `implic
 All of the examples below use `id` prefixes, so no `\_route_` parameter will be necessary for these examples.
 ====
 
-For the upcoming examples, we'll assume an index containing the same documents covered in <<indexing-nested-documents#example-indexing-syntax,Indexing Nested Documents>>:
+For the upcoming examples, we'll assume an index containing the same documents covered in xref:indexing-nested-documents.adoc#example-indexing-syntax[Indexing Nested Documents]:
 
 include::indexing-nested-documents.adoc[tag=sample-indexing-deeply-nested-documents]
 
 ==== Modifying Child Document Fields
 
-All of the <<#atomic-updates,Atomic Update operations>> mentioned above are supported for "real" fields of Child Documents:
+All of the <<atomic-updates,Atomic Update operations>> mentioned above are supported for "real" fields of Child Documents:
 
 [source,bash]
 ----
@@ -345,7 +345,7 @@ Use the parameter `failOnVersionConflicts=false` to avoid failure of the entire
 If the document being updated does not include the `\_version_` field, and atomic updates are not being used, the document will be treated by normal Solr rules, which is usually to discard the previous version.
 
 When using Optimistic Concurrency, clients can include an optional `versions=true` request parameter to indicate that the _new_ versions of the documents being added should be included in the response.
-This allows clients to immediately know what the `\_version_` is of every document added without needing to make a redundant <<realtime-get.adoc#,`/get` request>>.
+This allows clients to immediately know what the `\_version_` is of every document added without needing to make a redundant xref:configuration-guide:realtime-get.adoc[`/get` request].
 
 Following are some examples using `versions=true` in queries:
 
@@ -487,7 +487,7 @@ Optimistic Concurrency is extremely powerful, and works very efficiently because
 However, in some situations users may want to configure their own document specific version field, where the version values are assigned on a per-document basis by an external system, and have Solr reject updates that attempt to replace a document with an "older" version.
 In situations like this the {solr-javadocs}/core/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.html[`DocBasedVersionConstraintsProcessorFactory`] can be useful.
 
-The basic usage of `DocBasedVersionConstraintsProcessorFactory` is to configure it in `solrconfig.xml` as part of the <<update-request-processors.adoc#update-request-processor-configuration,UpdateRequestProcessorChain>> and specify the name of your custom `versionField` in your schema that should be checked when validating updates:
+The basic usage of `DocBasedVersionConstraintsProcessorFactory` is to configure it in `solrconfig.xml` as part of the xref:configuration-guide:update-request-processors.adoc#update-request-processor-configuration[UpdateRequestProcessorChain] and specify the name of your custom `versionField` in your schema that should be checked when validating updates:
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/phonetic-matching.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/phonetic-matching.adoc
index 04ab0af..2f6fd13 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/phonetic-matching.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/phonetic-matching.adoc
@@ -22,7 +22,7 @@ For overviews of and comparisons between algorithms, see http://en.wikipedia.org
 
 == Beider-Morse Phonetic Matching (BMPM)
 
-For examples of how to use this encoding in your analyzer, see <<filters.adoc#beider-morse-filter,Beider Morse Filter>> in the Filter Descriptions section.
+For examples of how to use this encoding in your analyzer, see xref:filters.adoc#beider-morse-filter[Beider Morse Filter] in the Filter Descriptions section.
 
 Beider-Morse Phonetic Matching (BMPM) is a "soundalike" tool that lets you search using a new phonetic matching system.
 BMPM helps you search for personal names (or just surnames) in a Solr/Lucene index, and is far superior to the existing phonetic codecs, such as regular soundex, metaphone, caverphone, etc.
@@ -65,7 +65,7 @@ For more information, see here: http://stevemorse.org/phoneticinfo.htm and http:
 
 == Daitch-Mokotoff Soundex
 
-To use this encoding in your analyzer, see <<filters.adoc#daitch-mokotoff-soundex-filter,Daitch-Mokotoff Soundex Filter>> in the Filter Descriptions section.
+To use this encoding in your analyzer, see xref:filters.adoc#daitch-mokotoff-soundex-filter[Daitch-Mokotoff Soundex Filter] in the Filter Descriptions section.
 
 The Daitch-Mokotoff Soundex algorithm is a refinement of the Russel and American Soundex algorithms, yielding greater accuracy in matching especially Slavic and Yiddish surnames with similar pronunciation but differences in spelling.
 
@@ -82,15 +82,16 @@ For more information, see http://en.wikipedia.org/wiki/Daitch%E2%80%93Mokotoff_S
 
 == Double Metaphone
 
-To use this encoding in your analyzer, see <<filters.adoc#double-metaphone-filter,Double Metaphone Filter>> in the Filter Descriptions section.
-Alternatively, you may specify `encoder="DoubleMetaphone"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>, but note that the Phonetic Filter version will *not* provide the second ("alternate") encoding that is generated by the Double Metaphone Filter for some tokens.
+To use this encoding in your analyzer, see xref:filters.adoc#double-metaphone-filter[Double Metaphone Filter] in the Filter Descriptions section.
+
+Alternatively, you may specify `encoder="DoubleMetaphone"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter], but note that the Phonetic Filter version will *not* provide the second ("alternate") encoding that is generated by the Double Metaphone Filter for some tokens.
 
 Encodes tokens using the double metaphone algorithm by Lawrence Philips.
 See the original article at http://www.drdobbs.com/the-double-metaphone-search-algorithm/184401251?pgno=2
 
 == Metaphone
 
-To use this encoding in your analyzer, specify `encoder="Metaphone"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="Metaphone"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 Encodes tokens using the Metaphone algorithm by Lawrence Philips, described in "Hanging on the Metaphone" in Computer Language, Dec. 1990.
 
@@ -99,7 +100,7 @@ Another reference for more information is http://www.drdobbs.com/the-double-meta
 
 == Soundex
 
-To use this encoding in your analyzer, specify `encoder="Soundex"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="Soundex"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 Encodes tokens using the Soundex algorithm, which is used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes.
 
@@ -107,7 +108,7 @@ See also http://en.wikipedia.org/wiki/Soundex.
 
 == Refined Soundex
 
-To use this encoding in your analyzer, specify `encoder="RefinedSoundex"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="RefinedSoundex"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 Encodes tokens using an improved version of the Soundex algorithm.
 
@@ -115,7 +116,7 @@ See http://en.wikipedia.org/wiki/Soundex.
 
 == Caverphone
 
-To use this encoding in your analyzer, specify `encoder="Caverphone"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="Caverphone"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 Caverphone is an algorithm created by the Caversham Project at the University of Otago.
 The algorithm is optimised for accents present in the southern part of the city of Dunedin, New Zealand.
@@ -124,7 +125,7 @@ See http://en.wikipedia.org/wiki/Caverphone and the Caverphone 2.0 specification
 
 == Kölner Phonetik a.k.a. Cologne Phonetic
 
-To use this encoding in your analyzer, specify `encoder="ColognePhonetic"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="ColognePhonetic"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 The Kölner Phonetik, an algorithm published by Hans Joachim Postel in 1969, is optimized for the German language.
 
@@ -132,7 +133,7 @@ See http://de.wikipedia.org/wiki/K%C3%B6lner_Phonetik
 
 == NYSIIS
 
-To use this encoding in your analyzer, specify `encoder="Nysiis"` with the <<filters.adoc#phonetic-filter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoder="Nysiis"` with the xref:filters.adoc#phonetic-filter[Phonetic Filter].
 
 NYSIIS is an encoding used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes.
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
index d02facb..f2d303b 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
@@ -121,7 +121,7 @@ bin/post -c signals -params "separator=%09" -type text/csv data.tsv
 ----
 
 The content type (`-type`) parameter is required to treat the file as the proper type, otherwise it will be ignored and a WARNING logged as it does not know what type of content a .tsv file is.
-The <<indexing-with-update-handlers.adoc#csv-formatted-index-updates,CSV handler>> supports the `separator` parameter, and is passed through using the `-params` setting.
+The xref:indexing-with-update-handlers.adoc#csv-formatted-index-updates[CSV handler] supports the `separator` parameter, and is passed through using the `-params` setting.
 
 === Indexing JSON
 
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/reindexing.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/reindexing.adoc
index 070d50c..7b0bf11 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/reindexing.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/reindexing.adoc
@@ -27,7 +27,7 @@ It is strongly recommended that Solr users have a consistent, repeatable process
 
 [CAUTION]
 ====
-Re-ingesting all the documents in your corpus without first insuring that all documents and Lucene segments have been deleted is *not* sufficient, see the section <<reindexing.adoc#reindexing-strategies,Reindexing Strategies>>.
+Re-ingesting all the documents in your corpus without first ensuring that all documents and Lucene segments have been deleted is *not* sufficient; see the section <<Reindexing Strategies>>.
 ====
 
 Reindexing is recommended during major upgrades, so in addition to covering what types of configuration changes should trigger a reindex, this section will also cover strategies for reindexing.
@@ -57,7 +57,7 @@ This type of change is usually only made during or because of a major upgrade.
 When you change your schema by adding fields, removing fields, or changing the field or field type definitions you generally do so with the intent that those changes alter how documents are searched.
 The full effects of those changes are not reflected in the corpus as a whole until all documents are reindexed.
 
-Changes to *any* field/field type property described in <<field-type-definitions-and-properties.adoc#field-type-properties,Field Type Properties>> must be reindexed in order for the change to be reflected in _all_ documents.
+Changes to *any* field/field type property described in xref:field-type-definitions-and-properties.adoc#field-type-properties[Field Type Properties] must be reindexed in order for the change to be reflected in _all_ documents.
 
 [CAUTION]
 ====
@@ -69,7 +69,7 @@ Negative impacts on the user may not be immediately apparent.
 
 ==== Changing Field Analysis
 
-Beyond specific field-level properties, <<analyzers.adoc#,analysis chains>> are also configured on field types, and are applied at index and query time.
+Beyond specific field-level properties, xref:analyzers.adoc[analysis chains] are also configured on field types, and are applied at index and query time.
 
 If separate analysis chains are defined for query and indexing events for a field and you change _only_ the query-time analysis chain, reindexing is not necessary.
 
@@ -80,11 +80,11 @@ Identifying changes to solrconfig.xml that alter how data is ingested and thus r
 The general rule is "anything that changes what gets stored in the index requires reindexing".
 Here are several known examples.
 
-The parameter `luceneMatchVersion` in solrconfig.xml controls the compatibility of Solr with Lucene.
+The parameter `luceneMatchVersion` in `solrconfig.xml` controls the compatibility of Solr with Lucene.
 Since this parameter can change the rules for analysis behind the scenes, it's always recommended to reindex when changing it.
 Usually this is only changed in conjunction with a major upgrade.
 
-If you make a change to Solr's <<update-request-processors.adoc#,Update Request Processors>>, it's generally because you want to change something about how _update requests_ (documents) are _processed_ (indexed).
+If you make a change to Solr's xref:configuration-guide:update-request-processors.adoc[], it's generally because you want to change something about how _update requests_ (documents) are _processed_ (indexed).
 In this case, we recommend that you reindex your documents to implement the changes you've made just as if you had changed the schema.
 
 Similarly, if you change the `codecFactory` parameter in `solrconfig.xml`, it is again strongly recommended that you
@@ -139,7 +139,7 @@ It's important to verify that *all* documents have been deleted, as that ensures
 deleted as well.
 
 To verify that there are no segments in your index, look in the data/index directory and confirm it has no segments files.
-Since the data directory can be customized, see the section <<index-location-format.adoc#specifying-a-location-for-index-data-with-the-datadir-parameter,Specifying a Location for Index Data with the dataDir Parameter>> for the location of your index files.
+Since the data directory can be customized, see the section xref:configuration-guide:index-location-format.adoc#specifying-a-location-for-index-data-with-the-datadir-parameter[Specifying a Location for Index Data with the dataDir Parameter] for the location of your index files.
 
 Note you will need to verify the indexes have been removed in every shard and every replica on every node of a cluster.
 It is not sufficient to only query for the number of documents because you may have no documents but still have index
@@ -152,14 +152,14 @@ A variation on this approach is to delete and recreate your collection using the
 
 === Index to Another Collection
 
-Another approach is to use index to a new collection and use Solr's <<alias-management.adoc#createalias,collection alias>> feature to seamlessly point the application to a new collection without downtime.
+Another approach is to index into a new collection and use Solr's xref:deployment-guide:alias-management.adoc#createalias[collection alias] feature to seamlessly point the application to a new collection without downtime.
 
 This option is only available for Solr installations running in SolrCloud mode.
 
 With this approach, you will index your documents into a new collection that uses your changes and, once indexing and testing are complete, create an alias that points your front-end at the new collection.
 From that point, new queries and updates will be routed to the new collection seamlessly.
 
-Once the alias is in place and you are satisfied you no longer need the old data, you can delete the old collection with the Collections API <<collection-management.adoc#delete,DELETE command>>.
+Once the alias is in place and you are satisfied you no longer need the old data, you can delete the old collection with the Collections API xref:deployment-guide:collection-management.adoc#delete[DELETE command].
 
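The steps above can be sketched with the Collections API; the collection and alias names here are hypothetical, not taken from the guide:

```shell
# Point an alias at the newly built collection (creates it, or re-points an
# existing alias of the same name):
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=products&collections=products_v2"

# Once you are satisfied you no longer need the old data, delete the old collection:
curl "http://localhost:8983/solr/admin/collections?action=DELETE&name=products_v1"
```

Because CREATEALIAS re-points atomically, queries against the alias never see a window with no backing collection.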
 [NOTE]
 One advantage of this option is that you can switch back to the old collection if you discover problems your testing did not uncover.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
index b73f082..c58ec64 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-api.adoc
@@ -24,7 +24,7 @@ Fields, dynamic fields, field types and copyField rules may be added, removed or
 Future Solr releases will extend write access to allow more schema elements to be modified.
 
 The Schema API utilizes the `ManagedIndexSchemaFactory` class, which is the default schema factory in modern Solr versions.
-See the section <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>> for more information about choosing a schema factory for your index.
+See the section xref:configuration-guide:schema-factory.adoc[] for more information about choosing a schema factory for your index.
 
 .Hand editing of the managed schema is discouraged
 [NOTE]
@@ -55,7 +55,7 @@ You must reindex documents in order to apply schema changes to them.
 Queries and updates made after the change may encounter errors that were not present before the change.
 Completely deleting the index and rebuilding it is usually the only option to fix such errors.
 
-See the section <<reindexing.adoc#,Reindexing>> for more information about reindexing.
+See the section xref:reindexing.adoc[] for more information about reindexing.
 ====
 
 All of the examples in this section assume you are running the "techproducts" Solr example:
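The launch command itself falls outside this hunk; as a sketch, the "techproducts" example is typically started with:

```shell
# Start a single-node Solr with the techproducts example collection preloaded.
bin/solr start -e techproducts
```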
@@ -96,7 +96,7 @@ The `add-field` command adds a new field definition to your schema.
 If a field with the same name exists an error is thrown.
 
 All of the properties available when defining a field with manual schema edits can be passed via the API.
-These request attributes are described in detail in the section <<fields.adoc#,Fields>>.
+These request attributes are described in detail in the section xref:fields.adoc[].
 
 For example, to define a new stored field named "sell_by", of type "pdate", you would POST the following request:
 
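The request body is cut off by the hunk boundary; a sketch of such an `add-field` request (assuming the "techproducts" example above) could look like:

```shell
# Sketch: add a stored "sell_by" field of type "pdate" via the Schema API.
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field": {
    "name": "sell_by",
    "type": "pdate",
    "stored": true
  }
}' http://localhost:8983/solr/techproducts/schema
```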
@@ -170,7 +170,7 @@ Note that you must supply the full definition for a field - this command will *n
 If the field does not exist in the schema an error is thrown.
 
 All of the properties available when defining a field with manual schema edits can be passed via the API.
-These request attributes are described in detail in the section <<fields.adoc#,Fields>>.
+These request attributes are described in detail in the section xref:fields.adoc[].
 
 For example, to replace the definition of an existing field "sell_by", to make it be of type "date" and to not be stored, you would POST the following request:
 
@@ -210,7 +210,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 The `add-dynamic-field` command adds a new dynamic field rule to your schema.
 
 All of the properties available when editing the schema can be passed with the POST request.
-The section <<dynamic-fields.adoc#,Dynamic Fields>> has details on all of the attributes that can be defined for a dynamic field rule.
+The section xref:dynamic-fields.adoc[] has details on all of the attributes that can be defined for a dynamic field rule.
 
 For example, to create a new dynamic field rule where all incoming fields ending with "_s" would be stored and have field type "string", you can POST a request like this:
 
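The request itself lies outside this hunk; a sketch of such an `add-dynamic-field` request (assuming the "techproducts" example) might be:

```shell
# Sketch: dynamic rule matching any incoming field name ending in "_s".
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-dynamic-field": {
    "name": "*_s",
    "type": "string",
    "stored": true
  }
}' http://localhost:8983/solr/techproducts/schema
```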
@@ -284,7 +284,7 @@ Note that you must supply the full definition for a dynamic field rule - this co
 If the dynamic field rule does not exist in the schema an error is thrown.
 
 All of the properties available when editing the schema can be passed with the POST request.
-The section <<dynamic-fields.adoc#,Dynamic Fields>> has details on all of the attributes that can be defined for a dynamic field rule.
+The section xref:dynamic-fields.adoc[] has details on all of the attributes that can be defined for a dynamic field rule.
 
 For example, to replace the definition of the "*_s" dynamic field rule with one where the field type is "text_general" and it's not stored, you can POST a request like this:
 
@@ -325,7 +325,7 @@ The `add-field-type` command adds a new field type to your schema.
 
 All of the field type properties available when editing the schema by hand are available for use in a POST request.
 The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc.
-Details of all of the available options are described in the section <<field-types.adoc#,Field Types>>.
+Details of all of the available options are described in the section xref:field-type-definitions-and-properties.adoc[].
 
 For example, to create a new field type named "myNewTxtField", you can POST a request as follows:
 
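The request body falls outside this hunk; a sketch of an `add-field-type` request follows. The analyzer shown is an assumption for illustration, not the definition used in the published guide:

```shell
# Sketch: a simple TextField type with an assumed tokenizer/filter chain.
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field-type": {
    "name": "myNewTxtField",
    "class": "solr.TextField",
    "analyzer": {
      "tokenizer": {"class": "solr.StandardTokenizerFactory"},
      "filters": [{"class": "solr.LowerCaseFilterFactory"}]
    }
  }
}' http://localhost:8983/solr/techproducts/schema
```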
@@ -444,7 +444,7 @@ If the field type does not exist in the schema an error is thrown.
 
 All of the field type properties available when editing the schema by hand are available for use in a POST request.
 The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc.
-Details of all of the available options are described in the section <<field-types.adoc#,Field Types>>.
+Details of all of the available options are described in the section xref:field-type-definitions-and-properties.adoc[].
 
 For example, to replace the definition of a field type named "myNewTxtField", you can make a POST request as follows:
 
@@ -517,7 +517,7 @@ A field or an array of fields to which the source field will be copied.
 |===
 +
 The upper limit for the number of characters to be copied.
-The section <<copy-fields.adoc#,Copy Fields>> has more details.
+The section xref:copy-fields.adoc[] has more details.
 
 For example, to define a rule to copy the field "shelf" to the "location" and "catchall" fields, you would POST the following request:
 
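The request is elided by the hunk boundary; a sketch of such an `add-copy-field` request (assuming the "techproducts" example) could be:

```shell
# Sketch: copy the "shelf" field into both "location" and "catchall".
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-copy-field": {
    "source": "shelf",
    "dest": ["location", "catchall"]
  }
}' http://localhost:8983/solr/techproducts/schema
```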
@@ -944,7 +944,7 @@ The output will include each field and any defined configuration for each field.
 The defined configuration can vary for each field, but will minimally include the field `name`, the `type`, if it is `indexed` and if it is `stored`.
 
 If `multiValued` is defined as either true or false (most likely true), that will also be shown.
-See the section <<fields.adoc#,Fields>> for more information about each parameter.
+See the section xref:fields.adoc[] for more information about each parameter.
 
 ==== List Fields Examples
 
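The examples themselves are outside this hunk; a minimal sketch of listing requests against the "techproducts" example:

```shell
# Sketch: list all field definitions in the schema.
curl -X GET "http://localhost:8983/solr/techproducts/schema/fields"

# Or retrieve the definition of a single field:
curl -X GET "http://localhost:8983/solr/techproducts/schema/fields/id"
```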
@@ -1060,7 +1060,7 @@ If `false`, only explicitly specified field properties will be included.
 
 The output will include each dynamic field rule and the defined configuration for each rule.
 The defined configuration can vary for each rule, but will minimally include the dynamic field `name`, the `type`, if it is `indexed` and if it is `stored`.
-See the section <<dynamic-fields.adoc#,Dynamic Fields>> for more information about each parameter.
+See the section xref:dynamic-fields.adoc[] for more information about each parameter.
 
 ==== List Dynamic Field Examples
 
@@ -1189,7 +1189,7 @@ If `false`, only explicitly specified field properties will be included.
 The output will include each field type and any defined configuration for the type.
 The defined configuration can vary for each type, but will minimally include the field type `name` and the `class`.
 If query or index analyzers, tokenizers, or filters are defined, those will also be shown with other defined parameters.
-See the section <<field-types.adoc#,Field Types>> for more information about how to configure various types of fields.
+See the section xref:field-type-definitions-and-properties.adoc[] for more information about how to configure various types of fields.
 
 ==== List Field Type Examples
 
@@ -1312,7 +1312,7 @@ If not specified, all copyField-s will be included in the response.
 ==== List Copy Field Response
 
 The output will include the `source` and `dest` (destination) of each copy field rule defined in `schema.xml`.
-For more information about copy fields, see the section <<copy-fields.adoc#,Copy Fields>>.
+For more information about copy fields, see the section xref:copy-fields.adoc[].
 
 ==== List Copy Field Examples
 
@@ -1649,6 +1649,5 @@ curl -X GET "http://localhost:8983/api/collections/techproducts/schema/similarit
 
 == Manage Resource Data
 
-The <<managed-resources.adoc#,Managed Resources>> REST API provides a mechanism for any Solr plugin to expose resources that should support CRUD (Create, Read, Update, Delete) operations.
+The xref:configuration-guide:managed-resources.adoc[] REST API provides a mechanism for any Solr plugin to expose resources that should support CRUD (Create, Read, Update, Delete) operations.
 Depending on which field types and analyzers are configured in your Schema, additional `/schema/` REST API paths may exist.
-See the <<managed-resources.adoc#,Managed Resources>> section for more information and examples.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-browser-screen.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-browser-screen.adoc
index 723af74..1502eb1 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-browser-screen.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-browser-screen.adoc
@@ -24,7 +24,7 @@ If there is nothing chosen, use the pull-down menu to choose the field or field
 .Schema Browser Screen
 image::schema-browser-screen/schema_browser_terms.png[image,height=400]
 
-The screen provides a great deal of useful information about each particular field and fieldtype in the Schema, and provides a quick UI for adding fields or fieldtypes using the <<schema-api.adoc#,Schema API>> (if enabled).
+The screen provides a great deal of useful information about each particular field and fieldtype in the Schema, and provides a quick UI for adding fields or fieldtypes using the xref:schema-api.adoc[] (if enabled).
 In the example above, we have chosen the `cat` field.
 On the left side of the main view window, we see the field name, that it is copied to the `\_text_` (because of a copyField rule) and that it uses the `strings` fieldtype.
 Click on one of those field or fieldtype names, and you can see the corresponding definitions.
@@ -32,18 +32,18 @@ Click on one of those field or fieldtype names, and you can see the correspondin
 In the right part of the main view, we see the specific properties of how the `cat` field is defined – either explicitly or implicitly via its fieldtype, as well as how many documents have populated this field.
 Then we see the analyzer used for indexing and query processing.
 Click the icon to the left of either of those, and you'll see the definitions for the tokenizers and/or filters that are used.
-The output of these processes is the information you see when testing how content is handled for a particular field with the <<analysis-screen.adoc#,Analysis Screen>>.
+The output of these processes is the information you see when testing how content is handled for a particular field with the xref:analysis-screen.adoc[].
 
 Under the analyzer information is a button to *Load Term Info*.
 Clicking that button will show the top _N_ terms that are in a sample shard for that field, as well as a histogram showing the number of terms with various frequencies.
-Click on a term, and you will be taken to the <<query-screen.adoc#,Query Screen>> to see the results of a query of that term in that field.
+Click on a term, and you will be taken to the xref:query-guide:query-screen.adoc[] to see the results of a query of that term in that field.
 If you want to always see the term information for a field, choose *Autoload* and it will always appear when there are terms for a field.
 A histogram shows the number of terms with a given frequency in the field.
 
 [IMPORTANT]
 ====
 Term Information is loaded from a single arbitrarily selected core of the collection, to provide a representative sample.
-Full <<faceting.adoc#,Field Facet>> query results are needed to see precise term counts across the entire collection.
+Full xref:query-guide:faceting.adoc[Field Facet] query results are needed to see precise term counts across the entire collection.
 ====
 
-For programmatic access to the underlying information in this screen please reference the <<luke-request-handler.adoc#,Luke Request Handler>>
+For programmatic access to the underlying information in this screen please reference the xref:luke-request-handler.adoc[].
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-designer.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-designer.adoc
index 13576fa..06c98f3 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-designer.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-designer.adoc
@@ -20,7 +20,7 @@
 The Schema Designer screen lets you interactively design a new schema using sample data.
 
 .Schema Designer screen
-image::solr-admin-ui/schema-designer.png[image]
+image::getting-started:solr-admin-ui/schema-designer.png[image]
 
 There are a number of panels on the Schema Designer screen to provide immediate feedback when you make changes to the schema, including:
 
@@ -35,10 +35,10 @@ You can safely experiment with changes and see the impact on query results immed
 Once data is indexed using a published schema, there are severe restrictions on the type of changes you can make to the schema without needing a full re-index.
 When designing a new schema, the Schema Designer re-indexes your sample data automatically when you make changes. However, the designer does not re-index data in collections using a published schema.
 
-.Security Requirements
+.Authorization Requirements
 [NOTE]
 ====
-If the <<rule-based-authorization-plugin.adoc#,Rule-based Authorization Plugin>> is enabled for your Solr installation, then users need to have the `config-edit` and `config-read` permissions to use the Schema Designer.
+If the xref:deployment-guide:rule-based-authorization-plugin.adoc[] is enabled for your Solr installation, then users need to have the `config-edit` and `config-read` permissions to use the Schema Designer.
 ====
 
 == Getting Started
@@ -63,7 +63,7 @@ Click on the btn:[Analyze Documents] button to submit the sample documents to th
 
 === Temporary Configset and Collection
 
-Behind the scenes, the Schema Designer API creates a temporary <<config-sets.adoc#,Configset>> (schema + solrconfig.xml + supporting files) in Zookeeper.
+Behind the scenes, the Schema Designer API creates a temporary xref:configuration-guide:config-sets.adoc[] (schema + `solrconfig.xml` + supporting files) in ZooKeeper.
 In addition, the Schema Designer API creates a temporary collection with a single shard and replica to hold sample documents.
 These temporary resources are persisted to disk and exist until the schema is published or manually deleted using the Schema Designer API cleanup endpoint (`/api/schema-designer/cleanup`).
 
@@ -116,14 +116,14 @@ After sending the sample documents to the Schema Designer backend, you can open
 [NOTE]
 ====
 The Schema Designer API is primarily intended to support an interactive experience in the UI vs. being used programmatically by developers.
-To create and manage Configsets and Schemas programmatically, see the <<configsets-api.adoc#,Configset>> and <<schema-api.adoc#,Schema>> APIs.
+To create and manage Configsets and Schemas programmatically, see the sections xref:configuration-guide:configsets-api.adoc[] and xref:schema-api.adoc[].
 ====
 
 == Schema Editor
 
 After analyzing your sample documents, the Schema Designer loads the schema in the *Schema Editor* in the middle panel.
 The editor renders the schema as a tree component composed of Fields, Dynamic Fields, Field Types, and Files.
-For more information about schema objects, see <<fields-and-schema-design.adoc#,Fields and Schema Design>>.
+For more information about schema objects, see xref:fields.adoc[].
 
 image::schema-designer/schema-editor-root.png[image,width=700]
 
@@ -137,10 +137,10 @@ Consequently, the Schema Designer focuses primarily on the schema aspects of a C
 
 When you click on the root node of the Schema Editor tree, you can refine top-level schema properties, including:
 
-* Languages: The `_default` schema includes text fields for a number of common languages. You can include all text analyzers in your schema or select a subset based on the languages your search application needs to support. The designer will remove all the unnecessary field types for languages you don't need. For more information about text analysis and languages, see: <<language-analysis.adoc#,Language Analysis>>
-* Dynamic fields allow Solr to index fields that you did not explicitly define in your schema. Dynamic fields can make your application less brittle by providing some flexibility in the documents you can add to Solr. It is recommended to keep the default set of dynamic fields enabled for your schema. Unchecking this option removes all dynamic fields from your schema. For more information about dynamic fields, see: <<dynamic-fields.adoc#,Dynamic Fields>>
-* Field guessing (aka "schemaless mode") allows Solr to detect the "best" field type for unknown fields encountered during indexing. Field guessing also performs some field transformations, such as removing spaces from field names. If you use the schema designer to create your schema based on sample documents, you may not need to enable this feature. However, with this feature disabled, you need to make sure the incoming data matches the schema exactly or indexing errors may occur. For m [...]
-* Enabling this feature adds the `_root_` and `_nest_path_` fields to your schema. For more information about indexing nested child documents, see: <<indexing-nested-documents.adoc#,Indexing Nested Documents>>
+* Languages: The `_default` schema includes text fields for a number of common languages. You can include all text analyzers in your schema or select a subset based on the languages your search application needs to support. The designer will remove all the unnecessary field types for languages you don't need. For more information about text analysis and languages, see xref:language-analysis.adoc[].
+* Dynamic fields allow Solr to index fields that you did not explicitly define in your schema. Dynamic fields can make your application less brittle by providing some flexibility in the documents you can add to Solr. It is recommended to keep the default set of dynamic fields enabled for your schema. Unchecking this option removes all dynamic fields from your schema. For more information about dynamic fields, see xref:dynamic-fields.adoc[].
+* Field guessing (aka "schemaless mode") allows Solr to detect the "best" field type for unknown fields encountered during indexing. Field guessing also performs some field transformations, such as removing spaces from field names. If you use the schema designer to create your schema based on sample documents, you may not need to enable this feature. However, with this feature disabled, you need to make sure the incoming data matches the schema exactly or indexing errors may occur. For m [...]
+* Enabling this feature adds the `_root_` and `_nest_path_` fields to your schema. For more information about indexing nested child documents, see xref:indexing-nested-documents.adoc[].
 
 Only make changes to these top-level schema properties when you fully understand how they impact the behavior of your search application.
 When first starting out, you can leave the default settings and focus your attention on the fields and field types in the schema.
@@ -148,7 +148,7 @@ When first starting out, you can leave the default settings and focus your atten
 === Schema Fields
 
 Click on the *Fields* node in the editor tree to see an overview of the fields in your schema,
-along with the <<field-type-definitions-and-properties.adoc#,properties>> that govern how the field will be indexed by Solr.
+along with the xref:field-type-definitions-and-properties.adoc[properties] that govern how the field will be indexed by Solr.
 
 image::schema-designer/schema-editor-fields.png[image,width=750]
 
@@ -190,13 +190,14 @@ When you select a text-based field in the tree, the *Text Analysis* panel shows
 
 image::schema-designer/text-analysis.png[image,width=600]
 
-If you need to change the text analysis strategy for a field, you need to edit the Field Type. For more information about text analysis, see: <<analyzers.adoc#,Analyzers>>.
+If you need to change the text analysis strategy for a field, you need to edit the Field Type. For more information about text analysis, see xref:analyzers.adoc[].
 
 == Query Tester
 
 The *Query Tester* panel lets you experiment with queries executed against your sample document set using the current schema.
+
 Using the Query Tester, you can see how changes to the schema impact the behavior of queries, such as matching, sorting, faceting, and highlighting.
-The Query Tester form is not intended to demonstrate all possible <<query-guide.adoc#,query features>> available in Solr.
+The Query Tester form is not intended to demonstrate all possible xref:query-guide.adoc[query features] available in Solr.
 
 image::schema-designer/query-tester.png[image]
 
@@ -233,4 +234,4 @@ and does not prevent someone from changing the schema using the Schema API direc
 Once the publish action completes, the temporary Configset and collection are deleted and the Schema Designer UI resets back to a fresh state.
 
 Alternatively, instead of publishing to ZooKeeper, you can also download the Configset to a zip file containing the schema, `solrconfig.xml`, and supporting files.
-The zip file can be uploaded to other Solr instances using the <<configsets-api.adoc#,Configset API>> or saved in version control.
+The zip file can be uploaded to other Solr instances using the xref:configuration-guide:configsets-api.adoc[] or saved in version control.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-elements.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-elements.adoc
index 6e19834..abb1b74 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-elements.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schema-elements.adoc
@@ -21,12 +21,12 @@ Solr stores details about the field types and fields it is expected to understan
 == Solr's Schema File
 The name and location of Solr's schema file may vary depending on how you initially configured Solr or if you modified it later.
 
-* `managed-schema.xml` is the name for the schema file Solr uses by default to support making schema changes at runtime via the <<schema-api.adoc#,Schema API>>, or <<schemaless-mode.adoc#,Schemaless Mode>> features.
+* `managed-schema.xml` is the name for the schema file Solr uses by default to support making schema changes at runtime via the xref:schema-api.adoc[] or xref:schemaless-mode.adoc[] features.
 +
-You may <<schema-factory.adoc#,explicitly configure the managed schema features>> to use an alternative filename if you choose, but the contents of the files are still updated automatically by Solr.
-* `schema.xml` is the traditional name for a schema file which can be edited manually by users who use the <<schema-factory.adoc#,`ClassicIndexSchemaFactory`>>.
+You may xref:configuration-guide:schema-factory.adoc[explicitly configure] the managed schema features to use an alternative filename if you choose, but the contents of the files are still updated automatically by Solr.
+* `schema.xml` is the traditional name for a schema file which can be edited manually by users who use the xref:configuration-guide:schema-factory.adoc#classicindexschemafactory[`ClassicIndexSchemaFactory`].
 * If you are using SolrCloud you may not be able to find any file by these names on the local filesystem.
-You will only be able to see the schema through the Schema API (if enabled) or through the Solr Admin UI's <<cloud-screens.adoc#,Cloud Screens>>.
+You will only be able to see the schema through the Schema API (if enabled) or through the Solr Admin UI's xref:deployment-guide:cloud-screens.adoc[].
 
 Whichever filename is in use in your installation, the structure of the file does not change.
 However, the way you interact with the file will change.
@@ -34,7 +34,7 @@ If you are using the managed schema, it is expected that you only interact with
 If you do not use the managed schema, you will only be able to make manual edits to the file, the Schema API will not support any modifications.
 
 Note that if you are not using the Schema API but you do use SolrCloud, you will need to interact with the schema file through ZooKeeper using `upconfig` and `downconfig` commands to make a local copy and upload your changes.
-The options for doing this are described in <<solr-control-script-reference.adoc#,Solr Control Script Reference>> and <<zookeeper-file-management.adoc#,ZooKeeper File Management>>.
+The options for doing this are described in xref:deployment-guide:solr-control-script-reference.adoc[] and xref:deployment-guide:zookeeper-file-management.adoc[].
 
 == Structure of the Schema File
 
@@ -55,13 +55,13 @@ This example is not real XML, but shows the primary elements that make up a sche
 ----
 
 The most commonly defined elements are `types` and `fields`, where the field types and the actual fields are configured.
-The sections <<field-types.adoc#,Field Types>>, and <<fields.adoc#,Fields>> describe how to configure these for your schema.
+The sections xref:field-type-definitions-and-properties.adoc[] and xref:fields.adoc[] describe how to configure these for your schema.
 
-These are supplemented by `copyFields`, described in <<copy-fields.adoc#,Copy Fields>>, and `dynamicFields`, described in <<dynamic-fields.adoc#,Dynamic Fields>>.
+These are supplemented by `copyFields`, described in xref:copy-fields.adoc[], and `dynamicFields`, described in xref:dynamic-fields.adoc[].
 
 The `uniqueKey` described in <<Unique Key>> below must always be defined.
 
-A default `similarity` will be used, but can be modifed as described in the section <<Similarity>> below.
+A default `similarity` will be used, but can be modified as described in the section <<Similarity>> below.
 
 .Types and fields are optional tags
 [NOTE]
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schemaless-mode.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/schemaless-mode.adoc
index 311b677..5e0884e 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/schemaless-mode.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/schemaless-mode.adoc
@@ -21,13 +21,13 @@ Schemaless Mode is a set of Solr features that, when used together, allow users
 These Solr features, all controlled via `solrconfig.xml`, are:
 
 . Managed schema: Schema modifications are made at runtime through Solr APIs, which requires the use of a `schemaFactory` that supports these changes.
-See the section <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>> for more details.
+See the section xref:configuration-guide:schema-factory.adoc[] for more details.
 . Field value class guessing: Previously unseen fields are run through a cascading set of value-based parsers, which guess the Java class of field values - parsers for Boolean, Integer, Long, Float, Double, and Date are currently available.
-. Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see <<field-types.adoc#,Field Types>>.
+. Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see xref:field-type-definitions-and-properties.adoc[].
 
 == Using the Schemaless Example
 
-The three features of schemaless mode are pre-configured in the `_default` <<config-sets.adoc#,configset>> in the Solr distribution.
+The three features of schemaless mode are pre-configured in the `_default` xref:configuration-guide:config-sets.adoc[configset] in the Solr distribution.
 To start an example instance of Solr using these configs, run the following command:
 
 [source,bash]
@@ -37,7 +37,7 @@ bin/solr start -e schemaless
 
 This will launch a single Solr server, and automatically create a collection (named "```gettingstarted```") that contains only three fields in the initial schema: `id`, `\_version_`, and `\_text_`.
 
-You can use the `/schema/fields` <<schema-api.adoc#,Schema API>> to confirm this: `curl \http://localhost:8983/solr/gettingstarted/schema/fields` will output:
+You can use the `/schema/fields` xref:schema-api.adoc[] to confirm this: `curl \http://localhost:8983/solr/gettingstarted/schema/fields` will output:
 
 [source,json]
 ----
@@ -74,9 +74,9 @@ If, however, you would like to implement schemaless on your own, you should make
 
 === Enable Managed Schema
 
-As described in the section <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>>, Managed Schema support is enabled by default, unless your configuration specifies that `ClassicIndexSchemaFactory` should be used.
+As described in the section xref:configuration-guide:schema-factory.adoc[], Managed Schema support is enabled by default, unless your configuration specifies that `ClassicIndexSchemaFactory` should be used.
 
-You can configure the `ManagedIndexSchemaFactory` (and control the resource file used, or disable future modifications) by adding an explicit `<schemaFactory/>` like the one below, please see <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>> for more details on the options available.
+You can configure the `ManagedIndexSchemaFactory` (and control the resource file used, or disable future modifications) by adding an explicit `<schemaFactory/>` like the one below, please see xref:configuration-guide:schema-factory.adoc[] for more details on the options available.
 
 [source,xml]
 ----
@@ -88,7 +88,7 @@ You can configure the `ManagedIndexSchemaFactory` (and control the resource file
 
 === Enable Field Class Guessing
 
-In Solr, an <<update-request-processors.adoc#,UpdateRequestProcessorChain>> defines a chain of plugins that are applied to documents before or while they are indexed.
+In Solr, an xref:configuration-guide:update-request-processors.adoc[UpdateRequestProcessorChain] defines a chain of plugins that are applied to documents before or while they are indexed.
 
 The field guessing aspect of Solr's schemaless mode uses a specially-defined UpdateRequestProcessorChain that allows Solr to guess field types.
 You can also define the default field type classes to use.
@@ -203,7 +203,7 @@ Once the UpdateRequestProcessorChain has been defined, you must instruct your Up
 There are two ways to do this.
 The update chain shown above has a `default=true` attribute, which causes it to be used for any update handler.
 
-An alternative, more explicit way is to use <<initparams.adoc#,InitParams>> to set the defaults on all `/update` request handlers:
+An alternative, more explicit way is to use xref:configuration-guide:initparams.adoc[] to set the defaults on all `/update` request handlers:
 
 [source,xml]
 ----
@@ -219,7 +219,7 @@ IMPORTANT: After all of these changes have been made, Solr should be restarted o
 === Disabling Automatic Field Guessing
 
 Automatic field creation can be disabled with the `update.autoCreateFields` property.
-To do this, you can use <<solr-control-script-reference.adoc#set-or-unset-configuration-properties,`bin/solr config`>> with a command such as:
+To do this, you can use xref:deployment-guide:solr-control-script-reference.adoc#set-or-unset-configuration-properties[`bin/solr config`] with a command such as:
 
 [source,bash]
 bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false
@@ -304,9 +304,9 @@ In addition string versions of the text fields are indexed, using copyFields to
 .You Can Still Be Explicit
 [TIP]
 ====
-Even if you want to use schemaless mode for most fields, you can still use the <<schema-api.adoc#,Schema API>> to pre-emptively create some fields, with explicit types, before you index documents that use them.
+Even if you want to use schemaless mode for most fields, you can still use the xref:schema-api.adoc[] to pre-emptively create some fields, with explicit types, before you index documents that use them.
 
-Internally, the Schema API and the Schemaless Update Processors both use the same <<schema-factory.adoc#,Managed Schema>> functionality.
+Internally, the Schema API and the Schemaless Update Processors both use the same xref:configuration-guide:schema-factory.adoc[Managed Schema] functionality.
 
 Also, if you do not need the `*_str` version of a text field, you can simply remove the `copyField` definition from the auto-generated schema and it will not be re-added since the original field is now defined.
 ====
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/tokenizers.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/tokenizers.adoc
index 12e571f..34a4275 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/tokenizers.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/tokenizers.adoc
@@ -31,7 +31,7 @@ It's also possible for more than one token to have the same position or refer to
 Keep this in mind if you use token metadata for things like highlighting search results in the field text.
 
 == About Tokenizers
-You configure the tokenizer for a text field type in the <<solr-schema.adoc#,schema>> with a `<tokenizer>` element, as a child of `<analyzer>`:
+You configure the tokenizer for a text field type in the xref:schema-elements.adoc[schema] with a `<tokenizer>` element, as a child of `<analyzer>`:
 
 [.dynamic-tabs]
 --
@@ -549,7 +549,7 @@ The default configuration for `solr.ICUTokenizerFactory` provides UAX#29 word br
 [IMPORTANT]
 ====
 
-To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<solr-plugins.adoc#installing-plugins,Solr Plugins>>).
+To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section xref:configuration-guide:solr-plugins.adoc#installing-plugins[Installing Plugins]).
 See the `solr/contrib/analysis-extras/README.md` for information on which jars you need to add.
 
 ====
@@ -913,4 +913,4 @@ Valid values:
 
 == OpenNLP Tokenizer and OpenNLP Filters
 
-See <<language-analysis.adoc#opennlp-integration,OpenNLP Integration>> for information about using the OpenNLP Tokenizer, along with information about available OpenNLP token filters.
+See xref:language-analysis.adoc#opennlp-integration[OpenNLP Integration] for information about using the OpenNLP Tokenizer, along with information about available OpenNLP token filters.
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/transforming-and-indexing-custom-json.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/transforming-and-indexing-custom-json.adoc
index e1a2065..621874d 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/transforming-and-indexing-custom-json.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/transforming-and-indexing-custom-json.adoc
@@ -61,7 +61,7 @@ Wildcards can be used here, see <<Using Wildcards for Field Names>> below for mo
 |Optional |Default: `false`
 |===
 +
-This parameter is particularly convenient when the fields in the input JSON are not available in the schema and <<schemaless-mode.adoc#,schemaless mode>> is not enabled.
+This parameter is particularly convenient when the fields in the input JSON are not available in the schema and xref:schemaless-mode.adoc[schemaless mode] is not enabled.
 This will index all the fields into the default search field (using the `df` parameter); only the `uniqueKey` field is mapped to the corresponding field in the schema.
 If the input JSON does not have a value for the `uniqueKey` field, a UUID is generated for it.
 
@@ -332,12 +332,12 @@ Solr will automatically attempt to add the content of the field from the JSON in
 ====
 Documents will be rejected during indexing if their fields do not already exist in the schema.
 So, if you are NOT using schemaless mode, you must pre-create all fields.
-If you are working in <<schemaless-mode.adoc#,Schemaless Mode>>, however, fields that don't exist will be created on the fly with Solr's best guess for the field type.
+If you are working in xref:schemaless-mode.adoc[Schemaless Mode], however, fields that don't exist will be created on the fly with Solr's best guess for the field type.
 ====
 
 === Reusing Parameters in Multiple Requests
 
-You can store and re-use parameters with Solr's <<request-parameters-api.adoc#,Request Parameters API>>.
+You can store and re-use parameters with Solr's xref:configuration-guide:request-parameters-api.adoc[].
 
 Say we wanted to define parameters to split documents at the `exams` field, and map several other fields.
 We could make an API request such as:
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/field-types.adoc b/solr/solr-ref-guide/src/old-pages/field-types.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/indexing-guide/pages/field-types.adoc
rename to solr/solr-ref-guide/src/old-pages/field-types.adoc
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/fields-and-schema-design.adoc b/solr/solr-ref-guide/src/old-pages/fields-and-schema-design.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/indexing-guide/pages/fields-and-schema-design.adoc
rename to solr/solr-ref-guide/src/old-pages/fields-and-schema-design.adoc
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-data-operations.adoc b/solr/solr-ref-guide/src/old-pages/indexing-data-operations.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/indexing-guide/pages/indexing-data-operations.adoc
rename to solr/solr-ref-guide/src/old-pages/indexing-data-operations.adoc
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/schema-indexing-guide.adoc b/solr/solr-ref-guide/src/old-pages/schema-indexing-guide.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/indexing-guide/pages/schema-indexing-guide.adoc
rename to solr/solr-ref-guide/src/old-pages/schema-indexing-guide.adoc
diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/solr-schema.adoc b/solr/solr-ref-guide/src/old-pages/solr-schema.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/indexing-guide/pages/solr-schema.adoc
rename to solr/solr-ref-guide/src/old-pages/solr-schema.adoc