Posted to commits@solr.apache.org by ct...@apache.org on 2021/11/25 04:28:18 UTC

[solr] branch jira/solr-15556-antora updated: Fix link refs for deployment guide and move 'pure nav' files to the side

This is an automated email from the ASF dual-hosted git repository.

ctargett pushed a commit to branch jira/solr-15556-antora
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/jira/solr-15556-antora by this push:
     new d49e6cc  Fix link refs for deployment guide and move 'pure nav' files to the side
d49e6cc is described below

commit d49e6cc859a535fcc5f67f2071f9fba30187662a
Author: Cassandra Targett <ct...@apache.org>
AuthorDate: Wed Nov 24 22:27:27 2021 -0600

    Fix link refs for deployment guide and move 'pure nav' files to the side
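
The link fixes below all follow a single conversion pattern — a sketch, with examples copied from the hunks in this diff: same-guide links move from the old angle-bracket cross-reference syntax to Antora xref macros, and links that cross guides gain the target module coordinate.

----
// old cross-reference syntax (removed lines):
<<aliases.adoc#routed-aliases,Routed Aliases>>
// Antora xref syntax (added lines):
xref:aliases.adoc#routed-aliases[Routed Aliases]
// cross-guide link, qualified with the target module name:
xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously]
----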
---
 .../configuration-guide/pages/caches-warming.adoc  |   4 +-
 .../pages/requesthandlers-searchcomponents.adoc    |   2 +-
 .../modules/deployment-guide/deployment-nav.adoc   | 113 +++++++++++----------
 .../deployment-guide/pages/alias-management.adoc   |  16 +--
 .../modules/deployment-guide/pages/aliases.adoc    |  32 +++---
 .../authentication-and-authorization-plugins.adoc  |   8 +-
 .../deployment-guide/pages/backup-restore.adoc     |  20 ++--
 .../pages/basic-authentication-plugin.adoc         |  12 +--
 .../pages/cert-authentication-plugin.adoc          |   4 +-
 .../deployment-guide/pages/client-apis.adoc        |  13 ++-
 .../deployment-guide/pages/cloud-screens.adoc      |   4 +-
 .../pages/cluster-node-management.adoc             |   8 +-
 .../pages/collection-management.adoc               |  52 +++++-----
 .../pages/collections-core-admin.adoc              |   6 +-
 .../pages/configuring-logging.adoc                 |   6 +-
 .../modules/deployment-guide/pages/docker-faq.adoc |   4 +-
 .../deployment-guide/pages/enabling-ssl.adoc       |  10 +-
 .../pages/hadoop-authentication-plugin.adoc        |  10 +-
 .../deployment-guide/pages/installing-solr.adoc    |  26 ++---
 .../modules/deployment-guide/pages/javascript.adoc |   2 +-
 .../deployment-guide/pages/jmx-with-solr.adoc      |   2 +-
 .../deployment-guide/pages/jvm-settings.adoc       |   2 +-
 .../pages/jwt-authentication-plugin.adoc           |   4 +-
 .../pages/kerberos-authentication-plugin.adoc      |  14 +--
 .../pages/mbean-request-handler.adoc               |   6 +-
 .../deployment-guide/pages/metrics-reporting.adoc  |   2 +-
 .../monitoring-with-prometheus-and-grafana.adoc    |  21 ++--
 .../pages/performance-statistics-reference.adoc    |   4 +-
 .../modules/deployment-guide/pages/ping.adoc       |   4 +-
 .../modules/deployment-guide/pages/python.adoc     |   2 +-
 .../deployment-guide/pages/replica-management.adoc |  10 +-
 .../modules/deployment-guide/pages/ruby.adoc       |   2 +-
 .../pages/rule-based-authorization-plugin.adoc     |  26 ++---
 .../deployment-guide/pages/securing-solr.adoc      |  24 ++---
 .../deployment-guide/pages/security-ui.adoc        |  23 ++---
 .../deployment-guide/pages/shard-management.adoc   |  12 +--
 .../pages/solr-control-script-reference.adoc       |  30 +++---
 .../deployment-guide/pages/solr-in-docker.adoc     |  14 +--
 .../pages/solrcloud-distributed-requests.adoc      |   8 +-
 .../pages/solrcloud-shards-indexing.adoc           |   6 +-
 .../solrcloud-with-legacy-configuration-files.adoc |   2 +-
 .../modules/deployment-guide/pages/solrj.adoc      |   2 +-
 .../pages/taking-solr-to-production.adoc           |  20 ++--
 .../pages/upgrading-a-solr-cluster.adoc            |  12 +--
 .../pages/user-managed-distributed-search.adoc     |   8 +-
 .../pages/user-managed-index-replication.adoc      |   6 +-
 .../pages/zookeeper-access-control.adoc            |   4 +-
 .../deployment-guide/pages/zookeeper-ensemble.adoc |   4 +-
 .../pages/zookeeper-file-management.adoc           |  11 +-
 .../pages/zookeeper-utilities.adoc                 |   6 +-
 .../getting-started/pages/tutorial-films.adoc      |   2 +-
 .../pages => src/old-pages}/deployment-guide.adoc  |   0
 .../old-pages}/installation-deployment.adoc        |   0
 .../pages => src/old-pages}/monitoring-solr.adoc   |   0
 .../pages => src/old-pages}/scaling-solr.adoc      |   0
 .../old-pages}/solrcloud-clusters.adoc             |   0
 .../old-pages}/user-managed-clusters.adoc          |   0
 57 files changed, 321 insertions(+), 324 deletions(-)

diff --git a/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc b/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc
index 14313f8..23197c7 100644
--- a/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc
+++ b/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc
@@ -223,10 +223,10 @@ Sets the maximum number of clauses allowed when parsing a boolean query string.
 This limit only impacts boolean queries specified by a user as part of a query string, and provides per-collection controls on how complex user specified boolean queries can be.
 Query strings that specify more clauses than this will result in an error.
 
-If this per-collection limit is greater than the <<configuring-solr-xml#global-maxbooleanclauses,global `maxBooleanClauses` limit>> specified in `solr.xml`, it will have no effect, as that setting also limits the size of user specified boolean queries.
+If this per-collection limit is greater than the xref:configuring-solr-xml.adoc#global-maxbooleanclauses[global `maxBooleanClauses` limit] specified in `solr.xml`, it will have no effect, as that setting also limits the size of user specified boolean queries.
 
 In default configurations this property uses the value of the `solr.max.booleanClauses` system property if specified.
-This is the same system property used in the <<configuring-solr-xml#global-maxbooleanclauses,global `maxBooleanClauses` setting>> in the default `solr.xml` making it easy for Solr administrators to increase both values (in all collections) without needing to search through and update the `solrconfig.xml` files in each collection.
+This is the same system property used in the xref:configuring-solr-xml#global-maxbooleanclauses[global `maxBooleanClauses` setting] in the default `solr.xml` making it easy for Solr administrators to increase both values (in all collections) without needing to search through and update the `solrconfig.xml` files in each collection.
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/modules/configuration-guide/pages/requesthandlers-searchcomponents.adoc b/solr/solr-ref-guide/modules/configuration-guide/pages/requesthandlers-searchcomponents.adoc
index 59acc40..8a3144c 100644
--- a/solr/solr-ref-guide/modules/configuration-guide/pages/requesthandlers-searchcomponents.adoc
+++ b/solr/solr-ref-guide/modules/configuration-guide/pages/requesthandlers-searchcomponents.adoc
@@ -291,7 +291,7 @@ It's possible to define some components as being used before (with `first-compon
 
 NOTE: The component registered with the name "debug" will always be executed after the "last-components"
 
-If you define `components` instead, the <<#default-components,default components (above)>> will not be executed, and `first-components` and `last-components` are disallowed.
+If you define `components` instead, the <<default-components>> will not be executed, and `first-components` and `last-components` are disallowed.
 This should be considered as a last-resort option as the default list may change in a later Solr version.
 
 [source,xml]
diff --git a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
index cba6b0d..d8a4757 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/deployment-nav.adoc
@@ -1,72 +1,75 @@
 .Deployment Guide
-* xref:deployment-guide.adoc[]
-** xref:solr-control-script-reference.adoc[]
 
-** xref:installation-deployment.adoc[]
-*** xref:system-requirements.adoc[]
-*** xref:installing-solr.adoc[]
-*** xref:taking-solr-to-production.adoc[]
-*** xref:jvm-settings.adoc[]
-*** xref:upgrading-a-solr-cluster.adoc[]
-**** xref:indexupgrader-tool.adoc[]
-*** xref:backup-restore.adoc[]
-*** xref:solr-in-docker.adoc[]
-**** xref:docker-faq.adoc[]
-**** xref:docker-networking.adoc[]
-*** xref:solr-on-hdfs.adoc[]
+* xref:solr-control-script-reference.adoc[]
 
-** xref:scaling-solr.adoc[]
-*** xref:cluster-types.adoc[]
-*** xref:user-managed-clusters.adoc[]
-**** xref:user-managed-index-replication.adoc[]
-**** xref:user-managed-distributed-search.adoc[]
-*** xref:solrcloud-clusters.adoc[]
-**** xref:solrcloud-shards-indexing.adoc[]
-**** xref:solrcloud-recoveries-and-write-tolerance.adoc[]
-**** xref:solrcloud-distributed-requests.adoc[]
-**** xref:aliases.adoc[]
+* Installation & Deployment
+** xref:system-requirements.adoc[]
+** xref:installing-solr.adoc[]
+** xref:taking-solr-to-production.adoc[]
+** xref:jvm-settings.adoc[]
+** xref:upgrading-a-solr-cluster.adoc[]
+*** xref:indexupgrader-tool.adoc[]
+** xref:backup-restore.adoc[]
+** xref:solr-in-docker.adoc[]
+*** xref:docker-faq.adoc[]
+*** xref:docker-networking.adoc[]
+** xref:solr-on-hdfs.adoc[]
+
+* Scaling Solr
+** xref:cluster-types.adoc[]
+** User-Managed Clusters
+*** xref:user-managed-index-replication.adoc[]
+*** xref:user-managed-distributed-search.adoc[]
+** SolrCloud Clusters
+*** xref:solrcloud-shards-indexing.adoc[]
+*** xref:solrcloud-recoveries-and-write-tolerance.adoc[]
+*** xref:solrcloud-distributed-requests.adoc[]
+*** xref:aliases.adoc[]
+*** Collections API
 **** xref:cluster-node-management.adoc[]
 **** xref:collection-management.adoc[]
 **** xref:shard-management.adoc[]
 **** xref:replica-management.adoc[]
 **** xref:alias-management.adoc[]
+*** ZooKeeper Configuration
 **** xref:zookeeper-ensemble.adoc[]
 **** xref:zookeeper-file-management.adoc[]
 **** xref:zookeeper-utilities.adoc[]
 **** xref:solrcloud-with-legacy-configuration-files.adoc[]
+*** Admin UI
 **** xref:collections-core-admin.adoc[]
 **** xref:cloud-screens.adoc[]
 
-** xref:monitoring-solr.adoc[]
-*** xref:configuring-logging.adoc[]
-*** xref:ping.adoc[]
-*** xref:metrics-reporting.adoc[]
-*** xref:performance-statistics-reference.adoc[]
-*** xref:plugins-stats-screen.adoc[]
-*** xref:mbean-request-handler.adoc[]
-*** xref:monitoring-with-prometheus-and-grafana.adoc[]
-*** xref:jmx-with-solr.adoc[]
-*** xref:thread-dump.adoc[]
-*** xref:distributed-tracing.adoc[]
-*** xref:circuit-breakers.adoc[]
-*** xref:rate-limiters.adoc[]
-*** xref:task-management.adoc[]
+* Monitoring Solr
+** xref:configuring-logging.adoc[]
+** xref:ping.adoc[]
+** xref:metrics-reporting.adoc[]
+** xref:performance-statistics-reference.adoc[]
+** xref:plugins-stats-screen.adoc[]
+** xref:mbean-request-handler.adoc[]
+** xref:monitoring-with-prometheus-and-grafana.adoc[]
+** xref:jmx-with-solr.adoc[]
+** xref:thread-dump.adoc[]
+** xref:distributed-tracing.adoc[]
+** xref:circuit-breakers.adoc[]
+** xref:rate-limiters.adoc[]
+** xref:task-management.adoc[]
 
-** xref:securing-solr.adoc[]
-*** xref:authentication-and-authorization-plugins.adoc[]
-**** xref:basic-authentication-plugin.adoc[]
-**** xref:kerberos-authentication-plugin.adoc[]
-**** xref:jwt-authentication-plugin.adoc[]
-**** xref:cert-authentication-plugin.adoc[]
-**** xref:hadoop-authentication-plugin.adoc[]
-**** xref:rule-based-authorization-plugin.adoc[]
-*** xref:audit-logging.adoc[]
-*** xref:enabling-ssl.adoc[]
-*** xref:zookeeper-access-control.adoc[]
-*** xref:security-ui.adoc[]
+* xref:securing-solr.adoc[]
+** xref:authentication-and-authorization-plugins.adoc[]
+*** xref:basic-authentication-plugin.adoc[]
+*** xref:kerberos-authentication-plugin.adoc[]
+*** xref:jwt-authentication-plugin.adoc[]
+*** xref:cert-authentication-plugin.adoc[]
+*** xref:hadoop-authentication-plugin.adoc[]
+*** xref:rule-based-authorization-plugin.adoc[]
+** xref:audit-logging.adoc[]
+** xref:enabling-ssl.adoc[]
+** xref:zookeeper-access-control.adoc[]
+** xref:security-ui.adoc[]
 
-** xref:client-apis.adoc[]
-*** xref:solrj.adoc[]
-*** xref:javascript.adoc[]
-*** xref:python.adoc[]
-*** xref:ruby.adoc[]
+* xref:client-apis.adoc[]
+** xref:solrj.adoc[]
+** xref:javascript.adoc[]
+** xref:python.adoc[]
+** xref:ruby.adoc[]
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/alias-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/alias-management.adoc
index 1e57f51..15fdc27 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/alias-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/alias-management.adoc
@@ -25,7 +25,7 @@ Some use cases for collection aliasing:
 * Time series data
 * Reindexing content behind the scenes
 
-For an overview of aliases in Solr, see the section <<aliases.adoc#,Aliases>>.
+For an overview of aliases in Solr, see the section xref:aliases.adoc[].
 
 [[createalias]]
 == CREATEALIAS: Create or Modify an Alias for a Collection
@@ -46,7 +46,7 @@ While it is possible to send updates to an alias spanning multiple collections,
 *Routed aliases* are aliases with additional capabilities to act as a kind of super-collection that route updates to the correct collection.
 
 Routing is data driven and may be based on a temporal field or on categories   specified in a field (normally string based).
-See <<aliases.adoc#routed-aliases,Routed Aliases>> for some important high-level information before getting started.
+See xref:aliases.adoc#routed-aliases[Routed Aliases] for some important high-level information before getting started.
 
 [source,text]
 ----
@@ -92,7 +92,7 @@ It must therefore adhere to normal requirements for collection naming.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 ==== Standard Alias Parameters
 
@@ -123,7 +123,7 @@ s|Required |Default: none
 The type of routing to use.
 Presently only `time` and `category` and `Dimensional[]` are valid.
 +
-In the case of a multi dimensional routed alias (aka "DRA", see <<aliases.adoc#dimensional-routed-aliases,Aliases>>), it is required to express all the dimensions in the same order that they will appear in the dimension
+In the case of a xref:aliases.adoc#dimensional-routed-aliases[multi-dimensional routed alias] (aka "DRA"), it is required to express all the dimensions in the same order that they will appear in the dimension
 array.
 The format for a DRA `router.name` is `Dimensional[dim1,dim2]` where `dim1` and `dim2` are valid `router.name` values for each sub-dimension.
 Note that DRA's are very new, and only 2D DRA's are presently supported.
@@ -147,7 +147,7 @@ This field is required on all incoming documents.
 |Optional |Default: none
 |===
 +
-The `*` wildcard can be replaced with any parameter from the <<collection-management.adoc#create,CREATE>> command except `name`.
+The `*` wildcard can be replaced with any parameter from the xref:collection-management.adoc#create[CREATE] command except `name`.
 All other fields are identical in requirements and naming except that we insist that the configset be explicitly specified.
 The configset must be created beforehand, either uploaded or copied and modified.
 It's probably a bad idea to use "data driven" mode as schema mutations might happen concurrently leading to errors.
@@ -161,7 +161,7 @@ It's probably a bad idea to use "data driven" mode as schema mutations might hap
 s|Required |Default: none
 |===
 +
-The start date/time of data for this time routed alias in Solr's standard date/time format (i.e., ISO-8601 or "NOW" optionally with <<date-formatting-math.adoc#date-math,date math>>).
+The start date/time of data for this time routed alias in Solr's standard date/time format (i.e., ISO-8601 or "NOW" optionally with xref:indexing-guide:date-formatting-math.adoc#date-math[date math]).
 +
 The first collection created for the alias will be internally named after this value.
 If a document is submitted with an earlier value for `router.field` then the earliest collection the alias points to then it will yield an error since it can't be routed.
@@ -680,7 +680,7 @@ A dictionary of name/value pairs of properties to be set.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === ALIASPROP Response
 
@@ -738,7 +738,7 @@ The name of the alias to delete.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === DELETEALIAS Response
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
index 7a2c512..6d16f6b 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/aliases.adoc
@@ -32,15 +32,15 @@ In other cases update commands are rejected with an error since there is no logi
 
 == Standard Aliases
 
-Standard aliases are created and updated using the <<alias-management.adoc#createalias,CREATEALIAS>> command.
+Standard aliases are created and updated using the xref:alias-management.adoc#createalias[CREATEALIAS] command.
 
-The current list of collections that are members of an alias can be verified via the <<cluster-node-management.adoc#clusterstatus,CLUSTERSTATUS>> command.
+The current list of collections that are members of an alias can be verified via the xref:cluster-node-management.adoc#clusterstatus[CLUSTERSTATUS] command.
 
-The full definition of all aliases including metadata about that alias (in the case of routed aliases, see below) can be verified via the <<alias-management.adoc#listaliases,LISTALIASES>> command.
+The full definition of all aliases including metadata about that alias (in the case of routed aliases, see below) can be verified via the xref:alias-management.adoc#listaliases[LISTALIASES] command.
 
-Alternatively this information is available by checking `/aliases.json` in ZooKeeper with either the native ZooKeeper  client or in the <<cloud-screens.adoc#tree-view,tree page>> of the cloud menu in the admin UI.
+Alternatively this information is available by checking `/aliases.json` in ZooKeeper with either the native ZooKeeper  client or in the xref:cloud-screens.adoc#tree-view[tree page] of the cloud menu in the admin UI.
 
-Aliases may be deleted via the <<alias-management.adoc#deletealias,DELETEALIAS>> command.
+Aliases may be deleted via the xref:alias-management.adoc#deletealias[DELETEALIAS] command.
 When deleting an alias, underlying collections are *unaffected*.
 
 [TIP]
@@ -48,7 +48,7 @@ When deleting an alias, underlying collections are *unaffected*.
 Any alias (standard or routed) that references multiple collections may complicate relevancy.
 By default, SolrCloud scores documents on a per-shard basis.
 
-With multiple collections in an alias this is always a problem, so if you have a use case for which BM25 or TF/IDF relevancy is important you will want to turn on one of the <<solrcloud-distributed-requests.adoc#distributedidf,ExactStatsCache>> implementations.
+With multiple collections in an alias this is always a problem, so if you have a use case for which BM25 or TF/IDF relevancy is important you will want to turn on one of the xref:solrcloud-distributed-requests.adoc#distributedidf[ExactStatsCache] implementations.
 
 However, for analytical use cases where results are sorted on numeric, date, or alphanumeric field values, rather than relevancy calculations, this is not a problem.
 ====
@@ -59,7 +59,7 @@ To address the update limitations associated with standard aliases and provide a
 There are presently two types of routed alias: time routed and category routed.
 These are described in detail below, but share some common behavior.
 
-When processing an update for a routed alias, Solr initializes its <<update-request-processors.adoc#,UpdateRequestProcessor>> chain as usual, but when `DistributedUpdateProcessor` (DUP) initializes, it detects that the update targets a routed alias and injects `RoutedAliasUpdateProcessor` (RAUP) in front of itself.
+When processing an update for a routed alias, Solr initializes its xref:configuration-guide:update-request-processors.adoc[] chain as usual, but when `DistributedUpdateProcessor` (DUP) initializes, it detects that the update targets a routed alias and injects `RoutedAliasUpdateProcessor` (RAUP) in front of itself.
 RAUP, in coordination with the Overseer, is the main part of a routed alias, and must immediately precede DUP.
 It is not possible to configure custom chains with other types of UpdateRequestProcessors between RAUP and DUP.
 
@@ -72,7 +72,7 @@ WARNING: It's extremely important with all routed aliases that the route values
 Reindexing a document with a different route value for the same ID produces two distinct documents with the same ID accessible via the alias.
 All query time behavior of the routed alias is *_undefined_* and not easily predictable once duplicate ID's exist.
 
-CAUTION: It is a bad idea to use "data driven" mode (aka <<schemaless-mode.adoc#,schemaless-mode>>) with routed aliases, as duplicate schema mutations might happen concurrently leading to errors.
+CAUTION: It is a bad idea to use "data driven" mode (aka xref:configuration-guide:schemaless-mode.adoc[]) with routed aliases, as duplicate schema mutations might happen concurrently leading to errors.
 
 
 === Time Routed Aliases
@@ -86,8 +86,8 @@ If you need to store a lot of timestamped data in Solr, such as logs or IoT sens
 
 ==== How It Works
 
-First you create a time routed aliases using the <<alias-management.adoc#createalias,CREATEALIAS>> command with the desired router settings.
-Most of the settings are editable at a later time using the <<alias-management.adoc#aliasprop,ALIASPROP>> command.
+First you create a time routed aliases using the xref:alias-management.adoc#createalias[CREATEALIAS] command with the desired router settings.
+Most of the settings are editable at a later time using the xref:alias-management.adoc#aliasprop[ALIASPROP] command.
 
 The first collection will be created automatically, along with an alias pointing to it.
 Each underlying Solr "core" in a collection that is a member of a TRA has a special core property referencing the alias.
@@ -115,7 +115,7 @@ Each time a new collection is added, the oldest collections in the TRA are exami
 All this happens synchronously, potentially adding seconds to the update request and indexing latency.
 +
 If `router.preemptiveCreateMath` is configured and if the document arrives within this window then it will occur asynchronously.
-See <<alias-management.adoc#time-routed-alias-parameters,Time Routed Alias Parameters>> for more information.
+See xref:alias-management.adoc#time-routed-alias-parameters[Time Routed Alias Parameters] for more information.
 
 Any other type of update like a commit or delete is routed by RAUP to all collections.
 Generally speaking, this is not a performance concern.
@@ -145,15 +145,15 @@ This approach allows for simplified indexing of data that must be segregated int
 
 ==== How It Works
 
-First you create a category routed alias using the <<alias-management.adoc#createalias,CREATEALIAS>> command with the desired router settings.
-Most of the settings are editable at a later time using the <<alias-management.adoc#aliasprop,ALIASPROP>> command.
+First you create a category routed alias using the xref:alias-management.adoc#createalias[CREATEALIAS] command with the desired router settings.
+Most of the settings are editable at a later time using the xref:alias-management.adoc#aliasprop[ALIASPROP] command.
 
 The alias will be created with a special place-holder collection which will always be named `myAlias\__CRA__NEW_CATEGORY_ROUTED_ALIAS_WAITING_FOR_DATA\__TEMP`.
 The first document indexed into the CRA will create a second collection named `myAlias__CRA__foo` (for a routed field value of `foo`).
 The second document indexed will cause the temporary place holder collection to be deleted.
 Thereafter collections will be created whenever a new value for the field is encountered.
 
-CAUTION: To guard against runaway collection creation options for limiting the total number of categories, and for rejecting values that don't match, a regular expression parameter is provided (see <<alias-management.adoc#category-routed-alias-parameters,Category Routed Alias Parameters>> for details).
+CAUTION: To guard against runaway collection creation options for limiting the total number of categories, and for rejecting values that don't match, a regular expression parameter is provided (see xref:alias-management.adoc#category-routed-alias-parameters[Category Routed Alias Parameters] for details).
 +
 Note that by providing very large or very permissive values for these options you are accepting the risk that garbled data could potentially create thousands of collections and bring your cluster to a grinding halt.
 
@@ -223,7 +223,7 @@ More dimensions will be supported in the future (see https://issues.apache.org/j
 ==== How It Works
 
 First you create a dimensional routed alias with the desired router settings for each dimension.
-See the <<alias-management.adoc#createalias,CREATEALIAS>> command documentation for details on how to specify the per-dimension configuration.
+See the xref:alias-management.adoc#createalias[CREATEALIAS] command documentation for details on how to specify the per-dimension configuration.
 Typical collection names will be of the form (example is for category x time example, with 30 minute intervals):
 
 [source,text]
@@ -314,7 +314,7 @@ If a new category is introduced at a later date and indexing latency is an impor
 
 * If the number of extra time slices to be created is not very large, then sending a single document out of band from regular indexing, and waiting for collection creation to complete before allowing the new category to be sent via the SLA constrained process.
 
-* If the above procedure is likely to create an extreme number of collections, and the earliest possible document in the new category is known, the start time for the time dimension may be adjusted using the <<alias-management.adoc#aliasprop,ALIASPROP>> command
+* If the above procedure is likely to create an extreme number of collections, and the earliest possible document in the new category is known, the start time for the time dimension may be adjusted using the xref:alias-management.adoc#aliasprop[ALIASPROP] command
 
 === Improvement Possibilities
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/authentication-and-authorization-plugins.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/authentication-and-authorization-plugins.adoc
index 8cd2002..edbf2ee 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/authentication-and-authorization-plugins.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/authentication-and-authorization-plugins.adoc
@@ -100,12 +100,12 @@ Then use the `bin/solr zk` command to upload the file:
 >bin/solr zk cp ./security.json zk:security.json -z localhost:2181
 ----
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from the above command.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
 
 [WARNING]
 ====
 Whenever you use any security plugins and store `security.json` in ZooKeeper, we highly recommend that you implement access control in your ZooKeeper nodes.
-Information about how to enable this is available in the section <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
+Information about how to enable this is available in the section xref:zookeeper-access-control.adoc[].
 ====
 
 Once `security.json` has been uploaded to ZooKeeper, you should use the appropriate APIs for the plugins you're using to update it.
@@ -191,8 +191,8 @@ The Admin UI is an AngularJS application running inside your browser, and is tre
 When authentication is required the Admin UI will presented you with a login dialogue.
 The authentication plugins currently supported by the Admin UI are:
 
-* <<basic-authentication-plugin.adoc#,Basic Authentication Plugin>>
-* <<jwt-authentication-plugin.adoc#,JWT Authentication Plugin>>
+* xref:basic-authentication-plugin.adoc[]
+* xref:jwt-authentication-plugin.adoc[]
 
 If your plugin of choice is not supported, the Admin UI will still let you perform unrestricted operations, while for restricted operations you will need to interact with Solr by sending HTTP requests instead of through the graphical user interface of the Admin UI.
 All operations supported by Admin UI can be performed through Solr's APIs.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
index b35eebc..53dadfc 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/backup-restore.adoc
@@ -24,7 +24,7 @@ If you run a user-managed cluster or a single-node installation, you will use th
 
 [NOTE]
 ====
-Backups (and Snapshots) capture data that has been <<commits-transaction-logs.adoc#hard-commits-vs-soft-commits,hard committed>>.
+Backups (and Snapshots) capture data that has been xref:configuration-guide:commits-transaction-logs.adoc#hard-commits-vs-soft-commits[hard committed].
 Committing changes using `softCommit=true` may result in changes that are visible in search results but not included in subsequent backups.
 
 Likewise, committing changes using `openSearcher=false` may result in changes committed to disk and included in subsequent backups, even if they are not currently visible in search results.
@@ -32,7 +32,7 @@ Likewise, committing changes using `openSearcher=false` may result in changes co
 
 == SolrCloud Clusters
 
-Support for backups in SolrCloud is provided with the <<collection-management.adoc#,Collections API>>.
+Support for backups in SolrCloud is provided with the xref:collection-management.adoc#backup[Collections API].
 This allows the backups to be generated across multiple shards, and restored to the same number of shards and replicas as the original collection.
 
 NOTE: SolrCloud Backup/Restore requires a shared file system mounted at the same path on all nodes, or HDFS.
@@ -40,20 +40,20 @@ NOTE: SolrCloud Backup/Restore requires a shared file system mounted at the same
 Four different API commands are supported:
 
 * `action=BACKUP`: This command backs up Solr indexes and configurations.
-More information is available in the section <<collection-management.adoc#backup,Backup Collection>>.
+More information is available in the section xref:collection-management.adoc#backup[Backup Collection].
 * `action=RESTORE`: This command restores Solr indexes and configurations.
-More information is available in the section <<collection-management.adoc#restore,Restore Collection>>.
+More information is available in the section xref:collection-management.adoc#restore[Restore Collection].
 * `action=LISTBACKUP`: This command lists the backup points available at a specified location, displaying metadata for each.
-More information is available in the section <<collection-management.adoc#listbackup,List Backups>>.
+More information is available in the section xref:collection-management.adoc#listbackup[List Backups].
 * `action=DELETEBACKUP`: This command allows deletion of backup files or whole backups.
-More information is available in the section <<collection-management.adoc#deletebackup,Delete Backups>>.
+More information is available in the section xref:collection-management.adoc#deletebackup[Delete Backups].
 
 == User-Managed Clusters and Single-Node Installations
 
 Backups and restoration uses Solr's replication handler.
 Out of the box, Solr includes implicit support for replication so this API can be used.
 Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml`.
-For details on configuring the replication handler, see the section <<user-managed-index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
+For details on configuring the replication handler, see the section xref:user-managed-index-replication.adoc#configuring-the-replicationhandler[Configuring the ReplicationHandler].
 
 === Backup API
 
@@ -105,7 +105,7 @@ If a name is not specified then the directory name will have the following forma
 The number of backups to keep.
 If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error.
 Also, this parameter is not taken into consideration if the backup name is specified.
-More information about `maxNumberOfBackups` can be found in the section <<user-managed-index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
+More information about `maxNumberOfBackups` can be found in the section xref:user-managed-index-replication.adoc#configuring-the-replicationhandler[Configuring the ReplicationHandler].
 
 `repository`::
 +
@@ -461,7 +461,7 @@ An example configuration using these properties can be found below:
 
 === GCSBackupRepository
 
-Stores and retrieves backup files in a Google Cloud Storage ("GCS") bucket. This plugin must first be <<solr-plugins.adoc#installing-plugins,installed>> before using.
+Stores and retrieves backup files in a Google Cloud Storage ("GCS") bucket. This plugin must first be xref:configuration-guide:solr-plugins.adoc#installing-plugins[installed] before using.
 
 GCSBackupRepository accepts the following options for overall configuration:
 
@@ -659,7 +659,7 @@ An example configuration using the overall and GCS-client properties can be seen
 === S3BackupRepository
 
 Stores and retrieves backup files in an Amazon S3 bucket.
-This plugin must first be <<solr-plugins.adoc#installing-plugins,installed>> before using.
+This plugin must first be xref:solr-plugins.adoc#installing-plugins[installed] before using.
 
 This plugin uses the https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/credentials.html[default AWS credentials provider chain], so ensure that your credentials are set appropriately (e.g., via env var, or in `~/.aws/credentials`, etc.).
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/basic-authentication-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/basic-authentication-plugin.adoc
index 1674c2a..2d72999 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/basic-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/basic-authentication-plugin.adoc
@@ -19,12 +19,12 @@
 Solr can support Basic authentication for users with the use of the `BasicAuthPlugin`.
 
 This plugin only provides user authentication.
-To control user permissions, you may need to configure an authorization plugin as described in the section <<rule-based-authorization-plugin.adoc#,Rule-Based Authorization Plugin>>.
+To control user permissions, you may need to configure an authorization plugin as described in the section xref:rule-based-authorization-plugin.adoc[].
 
 == Enable Basic Authentication
 
 To use Basic authentication, you must first create a `security.json` file.
-This file and where to put it is described in detail in the section <<authentication-and-authorization-plugins.adoc#configuring-security-json,Configuring security.json>>.
+This file and where to put it is described in detail in the section xref:authentication-and-authorization-plugins.adoc#configuring-security-json[Configuring security.json].
 
 If running in cloud mode, you can use the `bin/solr auth` command-line utility to enable security for a new installation, see: `bin/solr auth --help` for more details.
 
@@ -70,7 +70,7 @@ However, you will want to ensure that `blockUnknown` is set to `true` or omitted
 [WARNING]
 ====
 If you set `blockUnknown` to `false`, then *any* request that is not explicitly protected by a permission will be accessible to anonymous users!
-Consequently, you should define a role binding for every <<rule-based-authorization-plugin.adoc#permissions,predefined>> permission you want to protect.
+Consequently, you should define a role binding for every xref:rule-based-authorization-plugin.adoc#permissions[predefined permission] you want to protect.
 You can assign the special `role: null` binding for requests that you want to allow anonymous users to access. To protect all endpoints except those with `role:null`,
 you can add a role binding for the `all` permission and place it in the last position in `security.json`.
 ====
@@ -78,14 +78,14 @@ you can add a role binding for the `all` permission and place it in the last pos
 If `realm` is not defined, it will default to `solr`.
 
 If you are using SolrCloud, you must upload `security.json` to ZooKeeper.
-An example command and more information about securing your setup can be found at <<authentication-and-authorization-plugins#in-a-solrcloud-cluster,Authentication and Authorization Plugins In a SolrCloud Cluster>>.
+An example command and more information about securing your setup can be found at xref:authentication-and-authorization-plugins#in-a-solrcloud-cluster[Authentication and Authorization Plugins In a SolrCloud Cluster].
 
 === Caveats
 
 There are a few things to keep in mind when using the Basic authentication plugin.
 
 * Credentials are sent in plain text by default.
-It's recommended to use SSL for communication when Basic authentication is enabled, as described in the section <<enabling-ssl.adoc#,Enabling SSL>>.
+It's recommended to use SSL for communication when Basic authentication is enabled, as described in the section xref:enabling-ssl.adoc[].
 
 * A user who has access to write permissions to `security.json` will be able to modify all permissions and user permission assignments.
 Special care should be taken to only grant access to editing security to appropriate users.
@@ -96,7 +96,7 @@ Even with Basic authentication enabled, you should not unnecessarily expose Solr
 == Combining Basic Authentication with Other Schemes
 :experimental:
 
-When using other authentication schemes, such as the <<jwt-authentication-plugin.adoc#,JWT Authentication Plugin>>, you may still want to use Basic authentication for a small set of "service account" oriented client applications.
+When using other authentication schemes, such as the xref:jwt-authentication-plugin.adoc[], you may still want to use Basic authentication for a small set of "service account" oriented client applications.
 Solr provides the `MultiAuthPlugin` to support multiple authentication schemes. For example, you may want to integrate Solr with an OIDC provider for user accounts,
 but also use Basic for authenticating requests coming from the Prometheus metrics exporter. The `MultiAuthPlugin` uses the scheme of the `Authorization` header to determine which
 plugin should handle each request. The `MultiAuthPlugin` is useful when running Solr on Kubernetes as you can delegate user management and authentication to an OIDC provider for end-users,
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/cert-authentication-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/cert-authentication-plugin.adoc
index 1c968bd..681388e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cert-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cert-authentication-plugin.adoc
@@ -36,7 +36,7 @@ An example `security.json` is shown below:
 === Certificate Validation
 
 Parts of certificate validation, including verifying the trust chain and peer hostname/ip address will be done by the web servlet container before the request ever reaches the authentication plugin.
-These checks are described in the <<enabling-ssl.adoc#,Enabling SSL>> section.
+These checks are described in the xref:enabling-ssl.adoc[] section.
 
 This plugin provides no additional checking beyond what has been configured via SSL properties.
 
@@ -58,4 +58,4 @@ It is best practice to verify the actual contents of certificates issued by your
 == Using Certificate Auth with Clients (including SolrJ)
 
 With certificate authentication enabled, all client requests must include a valid certificate.
-This is identical to the <<enabling-ssl.adoc#example-client-actions,client requirements>> when using SSL.
+This is identical to the xref:enabling-ssl.adoc#example-client-actions[client requirements] when using SSL.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/client-apis.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/client-apis.adoc
index f47d9a1..792d9a0 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/client-apis.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/client-apis.adoc
@@ -29,10 +29,10 @@ Solr offers documentation on the following client integrations:
 // tag::client-sections[]
 [width=100%,cols="1,1",frame=none,grid=none,stripes=none]
 |===
-| <<solrj.adoc#,SolrJ>>: SolrJ, an API for working with Java applications.
-| <<javascript.adoc#,JavaScript>>: JavaScript clients.
-| <<python.adoc#,Python>>: Python and JSON responses.
-| <<ruby.adoc#,Ruby>>: Solr with Ruby applications.
+| xref:solrj.adoc[]: SolrJ, an API for working with Java applications.
+| xref:javascript.adoc[]: JavaScript clients.
+| xref:python.adoc[]: Python and JSON responses.
+| xref:ruby.adoc[]: Solr with Ruby applications.
 |===
 //end::client-sections[]
 ****
@@ -57,8 +57,7 @@ The other operations are similar, although in certain cases the HTTP request is
 An index operation, for example, may contain a document in the body of the request.
 
 Solr also features an EmbeddedSolrServer that offers a Java API without requiring an HTTP connection.
-For details, see <<solrj.adoc#,SolrJ>>.
-
+For details, see xref:solrj.adoc[].
 
 == Choosing an Output Format
 
@@ -67,7 +66,7 @@ Parsing the responses is a slightly more thorny problem.
 Fortunately, Solr makes it easy to choose an output format that will be easy to handle on the client side.
 
 Specify a response format using the `wt` parameter in a query.
-The available response formats are documented in <<response-writers.adoc#,Response Writers>>.
+The available response formats are documented in xref:query-guide:response-writers.adoc.
 
 Most client APIs hide this detail for you, so for many types of client applications, you won't ever have to specify a `wt` parameter.
 In JavaScript, however, the interface to Solr is a little closer to the metal, so you will need to add this parameter yourself.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/cloud-screens.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/cloud-screens.adoc
index ea72fbb..a4f2323 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cloud-screens.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cloud-screens.adoc
@@ -16,12 +16,12 @@
 // specific language governing permissions and limitations
 // under the License.
 
-This screen provides status information about each collection & node in your cluster, as well as access to the low level data being stored in <<zookeeper-file-management.adoc#,ZooKeeper files>>.
+This screen provides status information about each collection & node in your cluster, as well as access to the low level data being stored in xref:zookeeper-file-management.adoc[ZooKeeper files].
 
 .Only Visible When using SolrCloud
 [NOTE]
 ====
-The "Cloud" menu option is only available when Solr is running <<cluster-types.adoc#solrcloud-mode,SolrCloud>>.
+The "Cloud" menu option is only available when Solr is running xref:cluster-types.adoc#solrcloud-mode[SolrCloud].
 User-managed clusters or single-node installations will not display this option.
 ====
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
index e595ad13..d0e954f 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/cluster-node-management.adoc
@@ -229,7 +229,7 @@ curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/jso
 +
 The name of the property.
 Supported properties names are `location`, `maxCoresPerNode`, `urlScheme`, and `defaultShardPreferences`.
-If the <<distributed-tracing.adoc#,Jaeger tracing contrib>> has been enabled, the property `samplePercentage` is also available.
+If the xref:distributed-tracing.adoc[Jaeger tracing contrib] has been enabled, the property `samplePercentage` is also available.
 +
 Other properties can be set (for example, if you need them for custom plugins) but they must begin with the prefix `ext.`.
 Unknown properties that don't begin with `ext.` will be rejected.
@@ -344,7 +344,7 @@ Support for the "collectionDefaults" key will be removed in Solr 9.
 === Default Shard Preferences
 
 Using the `defaultShardPreferences` parameter, you can implement rack or availability zone awareness.
-First, make sure to "label" your nodes using a <<property-substitution.adoc#jvm-system-properties,system property>> (e.g., `-Drack=rack1`).
+First, make sure to "label" your nodes using a xref:configuration-guide:property-substitution.adoc#jvm-system-properties[system property] (e.g., `-Drack=rack1`).
 Then, set the value of `defaultShardPreferences` to `node.sysprop:sysprop.YOUR_PROPERTY_NAME` like this:
 
 [source,bash]
@@ -539,7 +539,7 @@ Keep in mind that this can lead to very high network and disk I/O if the replica
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 `timeout`::
 +
@@ -601,7 +601,7 @@ The node to be removed.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 [[addrole]]
 == ADDROLE: Add a Role
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/collection-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/collection-management.adoc
index f77886c..8376982 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/collection-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/collection-management.adoc
@@ -97,7 +97,7 @@ The `compositeId` router hashes the value in the uniqueKey field and looks up th
 When using the `implicit` router, the `shards` parameter is required.
 When using the `compositeId` router, the `numShards` parameter is required.
 +
-For more information, see also the section <<solrcloud-shards-indexing.adoc#document-routing,Document Routing>>.
+For more information, see also the section xref:solrcloud-shards-indexing.adoc#document-routing[Document Routing].
 
 `numShards`::
 +
@@ -130,7 +130,7 @@ The number of replicas to be created for each shard.
 +
 This will create a NRT type of replica.
 If you want another type of replica, see the `tlogReplicas` and `pullReplicas` parameters below.
-See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
+See the section xref:solrcloud-shards-indexing.adoc#types-of-replicas[Types of Replicas] for more information about replica types.
 
 `nrtReplicas`::
 +
@@ -152,7 +152,7 @@ If you want all of your replicas to be of this type, you can simply use `replica
 +
 The number of TLOG replicas to create for this collection.
 This type of replica maintains a transaction log but only updates its index via replication from a leader.
-See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
+See the section xref:solrcloud-shards-indexing.adoc#types-of-replicas[Types of Replicas] for more information about replica types.
 
 `pullReplicas`::
 +
@@ -164,7 +164,7 @@ See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Repl
 The number of PULL replicas to create for this collection.
 This type of replica does not maintain a transaction log and only updates its index via replication from a leader.
 This type is not eligible to become a leader and should not be the only type of replicas in the collection.
-See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
+See the section xref:solrcloud-shards-indexing.adoc#types-of-replicas[Types of Replicas] for more information about replica types.
 
 `createNodeSet` (v1), `nodeSet` (v2)::
 +
@@ -178,7 +178,7 @@ The format is a comma-separated list of node_names, such as `localhost:8983_solr
 +
 If not provided, the CREATE operation will create shard-replicas spread across all live Solr nodes.
 +
-Alternatively, use the special value of `EMPTY` to initially create no shard-replica within the new collection and then later use the <<replica-management.adoc#addreplica,ADDREPLICA>> operation to add shard-replicas when and where required.
+Alternatively, use the special value of `EMPTY` to initially create no shard-replica within the new collection and then later use the xref:replica-management.adoc#addreplica[ADDREPLICA] operation to add shard-replicas when and where required.
 
 `createNodeSet.shuffle` (v1), `shuffleNodes` (v2)::
 +
@@ -215,7 +215,7 @@ When such a collection is deleted, its autocreated configset will be deleted by
 If this parameter is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the `uniqueKey` field.
 If the field specified is null in the document, the document will be rejected.
 +
-Please note that <<realtime-get.adoc#,RealTime Get>> or retrieval by document ID would also require the parameter `\_route_` (or `shard.keys`) to avoid a distributed search.
+Please note that xref:configuration-guide:realtime-get.adoc[] or retrieval by document ID would also require the parameter `\_route_` (or `shard.keys`) to avoid a distributed search.
 
 `perReplicaState`::
 +
@@ -234,7 +234,7 @@ If `true` the states of individual replicas will be maintained as individual chi
 |===
 +
 Set core property _name_ to _value_.
-See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
+See the section xref:configuration-guide:core-discovery.adoc[] for details on supported properties and values.
 +
 [WARNING]
 ====
@@ -262,7 +262,7 @@ The default is `false`, which means that the API will return the status of the s
 +
 When a collection is created additionally an alias can be created that points to this collection.
 This parameter allows specifying the name of this alias, effectively combining
-this operation with <<alias-management.adoc#createalias,CREATEALIAS>>
+this operation with xref:alias-management.adoc#createalias[CREATEALIAS].
 
 `async`::
 +
@@ -271,10 +271,10 @@ this operation with <<alias-management.adoc#createalias,CREATEALIAS>>
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 Collections are first created in read-write mode but can be put in `readOnly`
-mode using the <<collection-management.adoc#modifycollection,MODIFYCOLLECTION>> action.
+mode using the xref:collection-management.adoc#modifycollection[MODIFYCOLLECTION] action.
 
 === CREATE Response
 
@@ -351,7 +351,7 @@ This parameter is required by the V1 API.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === RELOAD Response
 
@@ -442,7 +442,7 @@ See the <<create,CREATE action>> section above for details on these attributes.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 [[readonlymode]]
 ==== Read-Only Mode
@@ -643,7 +643,7 @@ The name of the collection to delete.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === DELETE Response
 
@@ -845,7 +845,7 @@ The timeout, in seconds, until which write requests made to the source collectio
 |===
 +
 Set core property _name_ to _value_.
-See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
+See the section xref:configuration-guide:core-discovery.adoc[] for details on supported properties and values.
 
 `async`::
 +
@@ -854,7 +854,7 @@ See the section <<core-discovery.adoc#,Core Discovery>> for details on supported
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === MIGRATE Response
 
@@ -991,7 +991,7 @@ If `true` then after the processing is successfully finished the source collecti
 |Optional |Default: none
 |===
 +
-Optional request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Optional request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 There are additionally a number of optional parameters that determine the target collection layout.
 If they are not specified in the request then their values are copied from the source collection.
@@ -1566,7 +1566,7 @@ curl -X POST http://localhost:8983/api/collections -H 'Content-Type: application
 --
 
 The BACKUP command will backup Solr indexes and configurations for a specified collection.
-The BACKUP command <<backup-restore.adoc#,takes one copy from each shard for the indexes>>.
+The BACKUP command xref:backup-restore.adoc[takes one copy from each shard for the indexes].
 For configurations, it backs up the configset that was associated with the collection and metadata.
 
 Backup data is stored in the repository based on the provided `name` and `location`.
@@ -1609,7 +1609,7 @@ s|Required |Default: none
 |===
 +
 The location on a shared drive for the backup command to write to.
-This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
+This parameter is required, unless a default location is defined on the repository configuration, or set as a xref:cluster-node-management.adoc#clusterprop[cluster property].
 +
 If the location path is on a mounted drive, the mount must be available on the node that serves as the overseer, even if the overseer node does not host a replica of the collection being backed up.
 Since any node can take the overseer role at any time, a best practice to avoid possible backup failures is to ensure the mount point is available on all nodes of the cluster.
@@ -1624,7 +1624,7 @@ Repeated backups of the same collection are done incrementally, so that files un
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 `repository`::
 +
@@ -1695,7 +1695,7 @@ s|Required |Default: none
 |===
 +
 The repository location to list backups from.
-This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
+This parameter is required, unless a default location is defined on the repository configuration, or set as a xref:cluster-node-management.adoc#clusterprop[cluster property].
 +
 If the location path is on a mounted drive, the mount must be available on the node that serves as the overseer, even if the overseer node does not host a replica of the collection being backed up.
 Since any node can take the overseer role at any time, a best practice to avoid possible backup failures is to ensure the mount point is available on all nodes of the cluster.
@@ -1717,7 +1717,7 @@ If no repository is specified then the local filesystem repository will be used
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === LISTBACKUP Example
 
@@ -1834,7 +1834,7 @@ Optionally, you can override some parameters documented below.
 
 While restoring, if a configset with the same name exists in ZooKeeper, Solr will reuse it; otherwise, it will upload the backed-up configset to ZooKeeper and use that.
 
-You can use the collection <<alias-management.adoc#createalias,CREATEALIAS>> command to make sure clients don't need to change the endpoint to query or index against the newly restored collection.
+You can use the collection xref:alias-management.adoc#createalias[CREATEALIAS] command to make sure clients don't need to change the endpoint to query or index against the newly restored collection.
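
A sketch of that restore-then-alias workflow, with placeholder backup, collection, and alias names:

[source,bash]
----
# Restore the backup into a new collection...
curl 'http://localhost:8983/solr/admin/collections?action=RESTORE&name=myBackup&location=/var/backups/solr&collection=techproducts_restored'

# ...then point the alias clients already use at the restored collection.
curl 'http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=techproducts&collections=techproducts_restored'
----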
 
 === RESTORE Parameters
 
@@ -1865,7 +1865,7 @@ The name of the existing backup that you want to restore.
 |===
 +
 The location on a shared drive for the RESTORE command to read from.
-Alternately it can be set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
+Alternately it can be set as a xref:cluster-node-management.adoc#clusterprop[cluster property].
 
 `async`::
 +
@@ -1874,7 +1874,7 @@ Alternately it can be set as a <<cluster-node-management.adoc#clusterprop,cluste
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 `repository`::
 +
@@ -2001,7 +2001,7 @@ s|Required |Default: none
 |===
 +
 The repository location to delete backups from.
-This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
+This parameter is required, unless a default location is defined on the repository configuration, or set as a xref:cluster-node-management.adoc#clusterprop[cluster property].
 +
 If the location path is on a mounted drive, the mount must be available on the node that serves as the overseer, even if the overseer node does not host a replica of the collection being backed up.
 Since any node can take the overseer role at any time, a best practice to avoid possible backup failures is to ensure the mount point is available on all nodes of the cluster.
@@ -2055,7 +2055,7 @@ Only one of `backupId`, `maxNumBackupPoints`, and `purgeUnused` may be specified
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 [[rebalanceleaders]]
 == REBALANCELEADERS: Rebalance Leaders
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/collections-core-admin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/collections-core-admin.adoc
index f71ff3a..1e66093 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/collections-core-admin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/collections-core-admin.adoc
@@ -16,13 +16,13 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The Collections screen provides some basic functionality for managing your Collections, powered by the <<collections-api.adoc#,Collections API>>.
+The Collections screen provides some basic functionality for managing your Collections, powered by the xref:configuration-guide:collections-api.adoc[].
 
 [NOTE]
 ====
 If you are running a user-managed cluster or a single-node installation, you will not see a Collections option in the left nav menu of the Admin UI.
 
-You will instead see a "Core Admin" screen that supports some comparable Core level information & manipulation via the <<coreadmin-api.adoc#,CoreAdmin API>> instead.
+You will instead see a "Core Admin" screen that supports comparable core-level information and manipulation via the xref:configuration-guide:coreadmin-api.adoc[].
 ====
 
 The main display of this page provides a list of collections that exist in your cluster.
@@ -35,6 +35,6 @@ image::collections-core-admin/collection-admin.png[image,width=653,height=250]
 
 Replicas can be deleted by clicking the red "X" next to the replica name.
 
-If the shard is inactive, for example after a <<shard-management.adoc#splitshard,SPLITSHARD action>>, an option to delete the shard will appear as a red "X" next to the shard name.
+If the shard is inactive, for example after a xref:shard-management.adoc#splitshard[SPLITSHARD action], an option to delete the shard will appear as a red "X" next to the shard name.
 
 image::collections-core-admin/DeleteShard.png[image,width=486,height=250]
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/configuring-logging.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/configuring-logging.adoc
index 3e750fe..847fd3a 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/configuring-logging.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/configuring-logging.adoc
@@ -90,7 +90,7 @@ There are two ways:
 The first way is to set the `SOLR_LOG_LEVEL` environment variable before you start Solr, or place the same variable in `bin/solr.in.sh` or `bin/solr.in.cmd`.
 The variable must contain an uppercase string with a supported log level (see above).
 
-The second way is to start Solr with the -v or -q options, see <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for details.
+The second way is to start Solr with the `-v` or `-q` options; see xref:solr-control-script-reference.adoc[] for details.
 Examples:
 
 [source,bash]
@@ -111,7 +111,7 @@ The format of the log messages can be changed by https://logging.apache.org/log4
 
 When you're ready to deploy Solr in production, set the variable `SOLR_LOGS_DIR` to the location where you want Solr to write log files, such as `/var/solr/logs`.
 You may also want to tweak `log4j2.xml`.
-Note that if you installed Solr as a service using the instructions provided in <<taking-solr-to-production.adoc#,Taking Solr to Production>>, then see `/var/solr/log4j2.xml` instead of the default `server/resources` version.
+Note that if you installed Solr as a service using the instructions provided in xref:taking-solr-to-production.adoc[], then see `/var/solr/log4j2.xml` instead of the default `server/resources` version.
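
For example, a sketch of the relevant line in `solr.in.sh` (or `/var/solr/solr.in.sh` for a service installation); the path shown is only an example:

[source,bash]
----
# Write solr.log and the other log files to a dedicated directory.
SOLR_LOGS_DIR=/var/solr/logs
----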
 
 When starting Solr in the foreground (`-f` option), all logs will be sent to the console, in addition to `solr.log`.
 When Solr is started in the background, it writes all `stdout` and `stderr` output to a log file in `solr-<port>-console.log` and automatically disables the CONSOLE logger configured in `log4j2.xml`, having the same effect as if you removed the CONSOLE appender from the rootLogger manually.
@@ -149,4 +149,4 @@ The log file under which you can find all these queries is called `solr_slow_req
 == Logging Select Request Parameters
 
 In addition to the logging options described above, it's possible to log only a selected list of request parameters (such as those sent with queries) with an additional request parameter called `logParamsList`.
-See the section on <<common-query-parameters.adoc#logparamslist-parameter,logParamsList Parameter>> for more information.
+See the section on xref:query-guide:common-query-parameters.adoc#logparamslist-parameter[logParamsList Parameter] for more information.
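
For example, with a hypothetical query against a collection named `techproducts`, only the `q` and `fq` values from this request would be logged:

[source,bash]
----
# logParamsList restricts the logged parameters to q and fq for this request.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&fq=inStock:true&logParamsList=q,fq'
----
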
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
index bbd0731..945b7ba 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/docker-faq.adoc
@@ -178,7 +178,7 @@ This is especially a problem for ZooKeeper 3.4.6; future versions are better at
 Docker 1.10 has a new `--ip` configuration option that allows you to specify an IP address for a container.
 It also has a `--ip-range` option that allows you to specify the range that other containers get addresses from.
 Used together, you can implement static addresses.
-See the <<docker-networking.adoc#,Solr Docker networking guide>> for more information.
+See the xref:docker-networking.adoc[] for more information.
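
A rough sketch of that approach (the network name and addresses are arbitrary examples; see the networking guide for a complete setup):

[source,bash]
----
# Reserve part of the subnet for dynamic assignment...
docker network create --subnet 192.168.22.0/24 --ip-range 192.168.22.128/25 solrnet

# ...so addresses outside that range can be assigned statically.
docker run --name zk1   --network solrnet --ip 192.168.22.10 -d zookeeper
docker run --name solr1 --network solrnet --ip 192.168.22.11 -d solr
----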
 
 == Can I run ZooKeeper and Solr with Docker Links?
 
@@ -223,7 +223,7 @@ Then go to `+http://localhost:8983/solr/#/~cloud+` (adjust the hostname for your
 
 == How can I run ZooKeeper and Solr with Docker Compose?
 
-See the <<solr-in-docker.adoc#docker-compose,docker compose example>>.
+See the xref:solr-in-docker.adoc#docker-compose[docker compose example].
 
 == How can I get rid of "shared memory" warnings on Solr startup?
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
index ddbe268..d90203e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
@@ -72,7 +72,7 @@ To activate the SSL settings, uncomment and update the set of properties beginni
 ====
 [.tab-label]**nix (solr.in.sh)*
 
-NOTE: If you setup Solr as a service on Linux using the steps outlined in <<taking-solr-to-production.adoc#,Taking Solr to Production>>, then make these changes in `/var/solr/solr.in.sh`.
+NOTE: If you set up Solr as a service on Linux using the steps outlined in xref:taking-solr-to-production.adoc[], then make these changes in `/var/solr/solr.in.sh`.
 
 [source,bash]
 ----
@@ -218,7 +218,7 @@ There are several related JIRA tickets where SSL support is being planned/worked
 After creating the keystore described above and before you start any SolrCloud nodes, you must configure your Solr cluster properties in ZooKeeper so that Solr nodes know to communicate via SSL.
 
 This section assumes you have created and started an external ZooKeeper.
-See <<zookeeper-ensemble.adoc#,ZooKeeper Ensemble>> for more information.
+See xref:zookeeper-ensemble.adoc[] for more information.
 
 The `urlScheme` cluster-wide property needs to be set to `https` before any Solr node starts up.
 The examples below use the `zkcli` tool that comes with Solr to do this.
@@ -245,7 +245,7 @@ C:\> server\scripts\cloud-scripts\zkcli.bat -zkhost server1:2181,server2:2181,se
 --
 
 Be sure to use the correct `zkhost` value for your system.
-If you have set up your ZooKeeper ensemble to use a <<taking-solr-to-production.adoc#zookeeper-chroot,chroot for Solr>>, make sure to include it in the `zkhost` string, e.g., `-zkhost server1:2181,server2:2181,server3:2181/solr`.
+If you have set up your ZooKeeper ensemble to use a xref:taking-solr-to-production.adoc#zookeeper-chroot[chroot for Solr], make sure to include it in the `zkhost` string, e.g., `-zkhost server1:2181,server2:2181,server3:2181/solr`.
 
 === Update Cluster Properties for Existing Collections
 
@@ -253,7 +253,7 @@ If you are using SolrCloud and have collections created before enabling SSL, you
 
 If you do not have existing collections or are not using SolrCloud, you can skip ahead and start Solr.
 
-Updating cluster properties can be done with the Collections API <<cluster-node-management.adoc#clusterprop,CLUSTERPROP>> command, as in this example (update the hostname and port as appropriate for your system):
+Updating cluster properties can be done with the Collections API xref:cluster-node-management.adoc#clusterprop[CLUSTERPROP command], as in this example (update the hostname and port as appropriate for your system):
 
 [source,terminal]
 $ curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'
@@ -292,7 +292,7 @@ C:\> bin\solr.cmd -p 8984
 
 === Start SolrCloud
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from all of the `bin/solr`/`bin\solr.cmd` commands below.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from all of the `bin/solr`/`bin\solr.cmd` commands below.
 
 Start each Solr node with the Solr control script as shown in the examples below.
 Customize the values for the parameters shown as necessary and add any used in your system.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/hadoop-authentication-plugin.adoc
index 46c9fe9..35c023e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/hadoop-authentication-plugin.adoc
@@ -29,11 +29,11 @@ Please review the Hadoop documentation for v{ivy-hadoop-version} used by this ve
 
 For some authentication schemes (e.g., Kerberos), Solr provides a native authentication plugin implementation.
 If you require a more stable setup, in terms of configuration, ability to perform rolling upgrades, backward compatibility, etc., you should consider using one of these plugins.
-Please review the section <<authentication-and-authorization-plugins.adoc#,Authentication and Authorization Plugins>> for an overview of authentication plugin options in Solr.
+Please review the section xref:authentication-and-authorization-plugins.adoc[] for an overview of authentication plugin options in Solr.
 
 There are two plugin classes:
 
-* `HadoopAuthPlugin`: This can be used with SolrCloud, user-managed, and single-node installations as well as SolrCloud with <<authentication-and-authorization-plugins.adoc#securing-inter-node-requests,PKI authentication>> for internode communication.
+* `HadoopAuthPlugin`: This can be used with SolrCloud, user-managed, and single-node installations as well as SolrCloud with xref:authentication-and-authorization-plugins.adoc#securing-inter-node-requests[PKI authentication] for internode communication.
 * `ConfigurableInternodeAuthHadoopPlugin`: This is an extension of `HadoopAuthPlugin` that allows you to configure the authentication scheme for internode communication.
 
 [TIP]
@@ -138,12 +138,12 @@ Only applicable for `ConfigurableInternodeAuthHadoopPlugin`.
 
 === Kerberos Authentication using Hadoop Authentication Plugin
 
-This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>>.
+This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the xref:kerberos-authentication-plugin.adoc[].
 
 After consulting the Hadoop authentication library's documentation, you can supply per-host configuration parameters using the `solr.*` prefix.
 
 As an example, the Hadoop authentication library expects a parameter `kerberos.principal`, which can be supplied as a system property named `solr.kerberos.principal` when starting a Solr node.
-Refer to the section <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>> for other typical configuration parameters.
+Refer to the section xref:kerberos-authentication-plugin.adoc[] for other typical configuration parameters.
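
One way to supply that property is via the standard `SOLR_OPTS` variable in `solr.in.sh`; the principal below is a placeholder, and the property could equally be passed when starting the node:

[source,bash]
----
# Maps the Hadoop auth parameter kerberos.principal to this Solr node.
SOLR_OPTS="$SOLR_OPTS -Dsolr.kerberos.principal=HTTP/solr1.example.com@EXAMPLE.COM"
----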
 
 The example below uses `ConfigurableInternodeAuthHadoopPlugin`, and hence you must provide the `clientBuilderFactory` implementation.
 As a result, all internode communication will use the Kerberos mechanism, instead of PKI authentication.
@@ -185,7 +185,7 @@ Without it, forwarded requests will authenticate as Solr server credentials inst
 
 Similar to the previous example, this is an example of setting up a Solr cluster that uses delegation tokens.
 
-Refer to the parameters in the Hadoop https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[authentication library's documentation] or refer to the section <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>> for further details.
+Refer to the parameters in the Hadoop https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[authentication library's documentation] or refer to the section xref:kerberos-authentication-plugin.adoc[] for further details.
 
 Please note that this example does not use Kerberos and the requests made to Solr must contain valid delegation tokens.
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
index 88c2a2f..960be5e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/installing-solr.adoc
@@ -19,7 +19,7 @@
 
 Installation of Solr on Unix-compatible or Windows servers generally requires simply extracting (or unzipping) the download package.
 
-Please be sure to review the <<system-requirements.adoc#,System Requirements>> before starting Solr.
+Please be sure to review the xref:system-requirements.adoc[] before starting Solr.
 
 == Available Solr Packages
 
@@ -41,7 +41,7 @@ This will suffice as an initial development environment, but take care not to ov
 When you've progressed past initial evaluation of Solr, you'll want to take care to plan your implementation.
 You may need to reinstall Solr on another server or make a clustered SolrCloud environment.
 
-When you're ready to setup Solr for a production environment, please refer to the instructions provided on the <<taking-solr-to-production.adoc#,Taking Solr to Production>> page.
+When you're ready to set up Solr for a production environment, please refer to the instructions provided on the xref:taking-solr-to-production.adoc[] page.
 
 .What Size Server Do I Need?
 [NOTE]
@@ -54,7 +54,7 @@ A very good blog post that discusses the issues to consider is https://lucidwork
 
 One thing to note when planning your installation is that a hard limit exists in Lucene for the number of documents in a single index: approximately 2.14 billion documents (2,147,483,647 to be exact).
 In practice, it is highly unlikely that such a large number of documents would fit and perform well in a single index, and you will likely need to distribute your index across a cluster before you ever approach this number.
-If you know you will exceed this number of documents in total before you've even started indexing, it's best to plan your installation with <<cluster-types.adoc#solrcloud-mode,SolrCloud>> as part of your design from the start.
+If you know you will exceed this number of documents in total before you've even started indexing, it's best to plan your installation with xref:cluster-types.adoc#solrcloud-mode[SolrCloud] as part of your design from the start.
 
 == Package Installation
 
@@ -75,11 +75,11 @@ After installing Solr, you'll see the following directories and files within the
 bin/::
 This directory includes several important scripts that will make using Solr easier.
 
-solr and solr.cmd::: This is <<solr-control-script-reference.adoc#,Solr's Control Script>>, also known as `bin/solr` (*nix) / `bin/solr.cmd` (Windows).
+solr and solr.cmd::: This is xref:solr-control-script-reference.adoc[Solr's Control Script], also known as `bin/solr` (*nix) / `bin/solr.cmd` (Windows).
 This script is the preferred tool to start and stop Solr.
 You can also create collections or cores, configure authentication, and work with configuration files when running in SolrCloud mode.
 
-post::: The <<post-tool.adoc#,PostTool>>, which provides a simple command line interface for POSTing content to Solr.
+post::: The xref:indexing-guide:post-tool.adoc[], which provides a simple command line interface for POSTing content to Solr.
 
 solr.in.sh and solr.in.cmd:::
 These are property files for *nix and Windows systems, respectively.
@@ -88,7 +88,7 @@ Many of these settings can be overridden when using `bin/solr` / `bin/solr.cmd`,
 
 install_solr_services.sh:::
 This script is used on *nix systems to install Solr as a service.
-It is described in more detail in the section <<taking-solr-to-production.adoc#,Taking Solr to Production>>.
+It is described in more detail in the section xref:taking-solr-to-production.adoc[].
 
 contrib/::
 Solr's `contrib` directory includes add-on plugins for specialized features of Solr.
@@ -112,19 +112,19 @@ A README in this directory provides a detailed overview, but here are some highl
 * Solr's Admin UI (`server/solr-webapp`)
 * Jetty libraries (`server/lib`)
 * Log files (`server/logs`) and log configurations (`server/resources`).
-See the section <<configuring-logging.adoc#,Configuring Logging>> for more details on how to customize Solr's default logging.
+See the section xref:configuring-logging.adoc[] for more details on how to customize Solr's default logging.
 * Sample configsets (`server/solr/configsets`)
 
 == Solr Examples
 
 Solr includes a number of example documents and configurations to use when getting started.
-If you ran through the <<solr-tutorial.adoc#,Solr Tutorial>>, you have already interacted with some of these files.
+If you ran through the xref:getting-started:solr-tutorial.adoc[], you have already interacted with some of these files.
 
 Here are the examples included with Solr:
 
 exampledocs::
 This is a small set of simple CSV, XML, and JSON files that can be used with `bin/post` when first getting started with Solr.
-For more information about using `bin/post` with these files, see <<post-tool.adoc#,Post Tool>>.
+For more information about using `bin/post` with these files, see xref:indexing-guide:post-tool.adoc[].
 
 files::
 The `files` directory provides a basic search UI for documents such as Word or PDF that you may have stored locally.
@@ -157,7 +157,7 @@ This will start Solr in the background, listening on port 8983.
 
 When you start Solr in the background, the script will wait to make sure Solr starts correctly before returning to the command line prompt.
 
-TIP: All of the options for the Solr CLI are described in the section <<solr-control-script-reference.adoc#,Solr Control Script Reference>>.
+TIP: All of the options for the Solr CLI are described in the section xref:solr-control-script-reference.adoc[].
 
 === Start Solr with a Specific Bundled Example
 
@@ -171,11 +171,11 @@ bin/solr -e techproducts
 ----
 
 Currently, the available examples you can run are: techproducts, schemaless, and cloud.
-See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
+See the section xref:solr-control-script-reference.adoc#running-with-example-configurations[Running with Example Configurations] for details on each example.
 
 .Getting Started with SolrCloud
-NOTE: Running the `cloud` example starts Solr in <<cluster-types.adoc#solrcloud-mode,SolrCloud>> mode.
-For more information on starting Solr in SolrCloud mode, see the section <<tutorial-solrcloud.adoc#,Getting Started with SolrCloud>>.
+NOTE: Running the `cloud` example starts Solr in xref:cluster-types.adoc#solrcloud-mode[SolrCloud] mode.
+For more information on starting Solr in SolrCloud mode, see the section xref:getting-started:tutorial-solrcloud.adoc[].
 
 === Check if Solr is Running
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/javascript.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/javascript.adoc
index b4e9057..51d5460 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/javascript.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/javascript.adoc
@@ -22,7 +22,7 @@ You don't need to install any packages or configure anything.
 
 HTTP requests can be sent to Solr using the standard `XMLHttpRequest` mechanism.
 
-By default, Solr sends <<response-writers.adoc#json-response-writer,JavaScript Object Notation (JSON) responses>>, which are easily interpreted in JavaScript.
+By default, Solr sends xref:query-guide:response-writers.adoc#json-response-writer[JavaScript Object Notation (JSON) responses], which are easily interpreted in JavaScript.
 You don't need to add anything to the request URL to have responses sent as JSON.
 
 For more information and an excellent example, take a look at the SolJSON page on the Solr Wiki:
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/jmx-with-solr.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/jmx-with-solr.adoc
index 695948a..8e53c7b 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/jmx-with-solr.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/jmx-with-solr.adoc
@@ -26,7 +26,7 @@ If you are unfamiliar with JMX, you may  find the following overview useful: htt
 
 == Configuring JMX
 
-JMX support is configured by defining a metrics reporter, as described in the section the section <<metrics-reporting.adoc#jmx-reporter,JMX Reporter>>.
+JMX support is configured by defining a metrics reporter, as described in the section xref:metrics-reporting.adoc#jmx-reporter[JMX Reporter].
 
 If you have an existing MBean server running in Solr's JVM, or if you start Solr with the system property `-Dcom.sun.management.jmxremote`, Solr will automatically identify its location on startup even if you have not defined a reporter explicitly in `solr.xml`.
 You can also define the location of the MBean server with parameters defined in the reporter definition.
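
As a sketch, the system-property route could be wired up through the standard `SOLR_OPTS` variable in `solr.in.sh` (defining a JMX reporter in `solr.xml` remains the more explicit option):

[source,bash]
----
# Start an MBean server in Solr's JVM so the JMX reporter can attach to it.
SOLR_OPTS="$SOLR_OPTS -Dcom.sun.management.jmxremote"
----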
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/jvm-settings.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/jvm-settings.adoc
index 220fec2..f2a73a4 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/jvm-settings.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/jvm-settings.adoc
@@ -48,7 +48,7 @@ When heaps grow to larger sizes, it is imperative to test extensively before goi
 * Modern hardware can be configured with hundreds of gigabytes of physical RAM and many CPUs.
 It is often better in these cases to run multiple JVMs, each with a limited amount of memory allocated to their heaps.
 One way to achieve this is to run Solr as a https://hub.docker.com/_/solr?tab=tags[Docker container].
-* It's good practice to periodically re-analyze the GC logs and/or monitor with <<metrics-reporting#metrics-reporting,Metrics Reporting>> to see if the memory usage has changed due to changes in your application, number of documents, etc.
+* It's good practice to periodically re-analyze the GC logs and/or monitor with xref:metrics-reporting.adoc[] to see if the memory usage has changed due to changes in your application, number of documents, etc.
 * On *nix systems, Solr will run with an "OOM killer script" (see `solr/bin/oom_solr.sh`).
 This will forcefully stop Solr when the heap is exhausted rather than continue in an indeterminate state.
 You can additionally request a heap dump on OOM through the values in `solr.in.sh`.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/jwt-authentication-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/jwt-authentication-plugin.adoc
index 4623b3d..a1d3451 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/jwt-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/jwt-authentication-plugin.adoc
@@ -193,13 +193,13 @@ Configuring both will cause an error.
 
 === Multiple Authentication Schemes
 
-Solr provides the <<basic-authentication-plugin.adoc#combining-basic-authentication-with-other-schemes,MultiAuthPlugin>> to support multiple authentication schemes based on the `Authorization` header.
+Solr provides the xref:basic-authentication-plugin.adoc#combining-basic-authentication-with-other-schemes[MultiAuthPlugin] to support multiple authentication schemes based on the `Authorization` header.
 This allows you to configure Solr to delegate user management and authentication to an OIDC provider using the `JWTAuthPlugin`,
 but also allow a small set of service accounts to use `Basic` authentication when using OIDC is not supported or practical.
 
 == Editing JWT Authentication Plugin Configuration
 
-All properties mentioned above can be set or changed using the <<basic-authentication-plugin.adoc#editing-basic-authentication-plugin-configuration,Authentication API>>.
+All properties mentioned above can be set or changed using the xref:basic-authentication-plugin.adoc#editing-basic-authentication-plugin-configuration[Authentication API].
 You can thus start with a simple configuration with only `class` and `blockUnknown=false` configured and then configure the rest using the API.
 
 === Set a Configuration Property
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/kerberos-authentication-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/kerberos-authentication-plugin.adoc
index 1ba50ab..5354f4a 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/kerberos-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/kerberos-authentication-plugin.adoc
@@ -19,13 +19,13 @@
 If you are using Kerberos to secure your network environment, the Kerberos authentication plugin can be used to secure a Solr cluster.
 
 This allows Solr to use a Kerberos service principal and keytab file to authenticate with ZooKeeper and between nodes of the Solr cluster (if applicable).
-Users of the Admin UI and all clients (such as <<solrj.adoc#,SolrJ>>) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
+Users of the Admin UI and all clients (such as xref:solrj.adoc[]) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
 
 Support for Kerberos authentication is available in SolrCloud, user-managed, or single-node installations.
 
 [TIP]
 ====
-If you are using Solr with a Hadoop cluster secured with Kerberos and intend to store your Solr indexes in HDFS, also see the section <<solr-on-hdfs.adoc#,Solr on HDFS>> for additional steps to configure Solr for that purpose.
+If you are using Solr with a Hadoop cluster secured with Kerberos and intend to store your Solr indexes in HDFS, also see the section xref:solr-on-hdfs.adoc[] for additional steps to configure Solr for that purpose.
 The instructions on this page apply only to scenarios where Solr will be secured with Kerberos.
 If you only need to store your indexes in a Kerberized HDFS system, please see the Running Solr on HDFS section.
 ====
@@ -36,7 +36,7 @@ When setting up Solr to use Kerberos, configurations are put in place for Solr t
 The configurations define the service principal name and the location of the keytab file that contains the credentials.
 
 As with all authentication plugins, Kerberos authentication configuration is stored in `security.json`.
-This file is discussed in the section <<authentication-and-authorization-plugins.adoc#configuring-security-json,Configuring security.json>>.
+This file is discussed in the section xref:authentication-and-authorization-plugins.adoc#configuring-security-json[Configuring security.json].
 
 === Service Principals and Keytab Files
 
@@ -205,7 +205,7 @@ If you are using Solr in a single-node installation, you need to create the `sec
 
 [IMPORTANT]
 ====
-If you already have a `/security.json` file in ZooKeeper, download the file, add or modify the authentication section and upload it back to ZooKeeper using the <<zookeeper-utilities.adoc#,ZooKeeper Utilities>> available in Solr.
+If you already have a `/security.json` file in ZooKeeper, download the file, add or modify the authentication section and upload it back to ZooKeeper using the xref:zookeeper-utilities.adoc[] available in Solr.
 ====
 
 === Define a JAAS Configuration File
@@ -260,7 +260,7 @@ The path should be enclosed in double-quotes.
 === Solr Startup Parameters
 
 While starting up Solr, the following host-specific parameters need to be passed.
-These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
+These parameters can be passed at the command line with the `bin/solr` start command (see xref:solr-control-script-reference.adoc[] for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
 
 `solr.kerberos.name.rules`::
 +
@@ -373,7 +373,7 @@ Delegation tokens can reduce the load because they do not access the server afte
 * If requests or permissions need to be delegated to another user.
 
 To enable delegation tokens, several parameters must be defined.
-These parameters can be passed at the command line with the `bin/solr` start command (<<solr-control-script-reference.adoc#,Solr Control Script Reference>>) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
+These parameters can be passed at the command line with the `bin/solr` start command (see xref:solr-control-script-reference.adoc[]) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
 
 `solr.kerberos.delegation.token.enabled`::
 +
@@ -451,7 +451,7 @@ Note you also need to customize the `-z` property as appropriate for the locatio
 $ bin/solr -c -z server1:2181,server2:2181,server3:2181/solr
 ----
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from the above command.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
 
 === Test the Configuration
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/mbean-request-handler.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/mbean-request-handler.adoc
index a2f9945..9cdc447 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/mbean-request-handler.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/mbean-request-handler.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-The MBean Request Handler offers programmatic access to the information provided on the <<plugins-stats-screen.adoc#,Plugin/Stats>> page of the Admin UI.
+The MBean Request Handler offers programmatic access to the information provided on the xref:plugins-stats-screen.adoc[] of the Admin UI.
 
 The MBean Request Handler accepts the following parameters:
 
@@ -57,11 +57,11 @@ The default is `false`.
 |===
 +
 The output format.
-This operates the same as the <<response-writers.adoc#,`wt` parameter in a query>>.
+This operates the same as the xref:query-guide:response-writers.adoc[`wt` parameter in a query].
 
 == MBeanRequestHandler Examples
 
-All of the examples in this section assume you are running the <<tutorial-techproducts.adoc#,"techproducts" example>>.
+All of the examples in this section assume you are running the xref:getting-started:tutorial-techproducts.adoc["techproducts" example].
 
 To return information about the CACHE category only:
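
A sketch of such a request, assuming the local `techproducts` core from the tutorial example mentioned above:

[source,bash]
----
# Limit the MBean report to the CACHE category.
curl 'http://localhost:8983/solr/techproducts/admin/mbeans?cat=CACHE&wt=json'
----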
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/metrics-reporting.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/metrics-reporting.adoc
index 5734bee..1464f4e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/metrics-reporting.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/metrics-reporting.adoc
@@ -104,7 +104,7 @@ When making requests with the <<Metrics API>>, you can specify `&group=jetty` to
 
 The metrics available in your system can be customized by modifying the `<metrics>` element in `solr.xml`.
 
-TIP: See also the section <<configuring-solr-xml.adoc#,Format of Solr.xml>> for more information about the `solr.xml` file, where to find it, and how to edit it.
+TIP: See also the section xref:configuration-guide:configuring-solr-xml.adoc[] for more information about the `solr.xml` file, where to find it, and how to edit it.
 
 === Disabling the Metrics Collection
 The `<metrics>` element in `solr.xml` supports one attribute `enabled`, which takes a boolean value,
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
index f636c2f..28b5867 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-with-prometheus-and-grafana.adoc
@@ -18,7 +18,7 @@
 
 If you use https://prometheus.io[Prometheus] and https://grafana.com[Grafana] for metrics storage and data visualization, Solr includes a Prometheus exporter to collect metrics and other data.
 
-A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics which come from <<metrics-reporting.adoc#,Metrics API>>, but also facet counts which come from <<query-guide.adoc#,Query Guide>> and responses to <<collections-api.adoc#,Collections API>> commands and <<ping.adoc#,PingRequestHandler>> requests.
+A Prometheus exporter (`solr-exporter`) allows users to monitor not only Solr metrics, which come from the xref:metrics-reporting.adoc#metrics-api[Metrics API], but also facet counts, which come from xref:query-guide:facet.adoc[], and responses to xref:configuration-guide:collections-api.adoc[] commands and xref:ping.adoc[] requests.
 
 This graphic provides a more detailed view:
 
@@ -212,7 +212,7 @@ Extra JVM options.
 |===
 +
 Credentials for connecting to a ZooKeeper host that is protected with ACLs.
-For more information on what to include in this variable, refer to the section <<zookeeper-access-control.adoc#zookeeper-acls-in-solr-scripts,ZooKeeper Access Control>> or the <<getting-metrics-from-a-secured-solrcloud,example below>>.
+For more information on what to include in this variable, refer to the section xref:zookeeper-access-control.adoc#zookeeper-acls-in-solr-scripts[ZooKeeper ACLs in Solr Scripts] or the example <<getting-metrics-from-a-secured-solrcloud>> below.
 
 `CLASSPATH_PREFIX`::
 +
@@ -223,15 +223,14 @@ For more information on what to include in this variable, refer to the section <
 +
 Location of extra libraries to load when starting the `solr-exporter`.
 
-All <<#command-line-parameters,command line parameters>> are able to be provided via environment variables when using the `./bin` scripts.
+All <<command-line-parameters>> can be provided via environment variables when using the `./bin` scripts.
 
 === Getting Metrics from a Secured SolrCloud
 
-Your SolrCloud might be secured by measures described in <<securing-solr.adoc#,Securing Solr>>.
-The security configuration can be injected into `solr-exporter` using environment variables in a fashion similar to other clients using <<solrj.adoc#,SolrJ>>.
-This is possible because the main script picks up <<Environment Variable Options>>  and passes them on to the Java process.
+Your SolrCloud security configuration can be injected into `solr-exporter` using environment variables in a fashion similar to other clients using xref:solrj.adoc[].
+This is possible because the main script picks up <<Environment Variable Options>> and passes them on to the Java process.
 
-Example for a SolrCloud instance secured by <<basic-authentication-plugin.adoc#,Basic Authentication>>, <<enabling-ssl.adoc#,SSL>> and <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>:
+The following example assumes a SolrCloud instance secured by xref:basic-authentication-plugin.adoc[], xref:enabling-ssl.adoc[SSL] and xref:zookeeper-access-control.adoc[].
 
 Suppose you have a file `basicauth.properties` with the Solr Basic-Auth credentials:
 
@@ -409,10 +408,10 @@ Between these elements, the data the `solr-exporter` should request is defined.
 There are several possible types of requests to make:
 
 [horizontal]
-`<ping>`:: Scrape the response to a <<ping.adoc#,PingRequestHandler>> request.
-`<metrics>`:: Scrape the response to a <<metrics-reporting.adoc#metrics-api,Metrics API>> request.
-`<collections>`:: Scrape the response to a <<collections-api.adoc#,Collections API>> request.
-`<search>`:: Scrape the response to a <<query-guide.adoc#,search>> request.
+`<ping>`:: Scrape the response to a xref:ping.adoc[] request.
+`<metrics>`:: Scrape the response to a xref:metrics-reporting.adoc#metrics-api[Metrics API] request.
+`<collections>`:: Scrape the response to a xref:configuration-guide:collections-api.adoc[] request.
+`<search>`:: Scrape the response to a xref:query-guide:query-syntax-and-parsers.adoc[query] request.
 
 Within each of these types, we need to define the query and how to work with the response.
 To do this, we define two additional elements:
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/performance-statistics-reference.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/performance-statistics-reference.adoc
index 79c3578..9f2f7f8 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/performance-statistics-reference.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/performance-statistics-reference.adoc
@@ -19,7 +19,7 @@
 This page explains some of the statistics that Solr exposes.
 
 There are two approaches to retrieving metrics.
-First, you can use the <<metrics-reporting.adoc#metrics-api,Metrics API>>, or you can enable JMX and get metrics from the <<mbean-request-handler.adoc#,MBean Request Handler>> or via an external tool such as JConsole.
+First, you can use the xref:metrics-reporting.adoc#metrics-api[Metrics API]; second, you can enable JMX and get metrics from the xref:mbean-request-handler.adoc[] or via an external tool such as JConsole.
 The below descriptions focus on retrieving the metrics using the Metrics API, but the metric names are the same if using the MBean Request Handler or an external tool.
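
For instance, a sketch of a Metrics API request; the `group` and `prefix` filters come from the Metrics API documentation, and the values shown are examples only:

[source,bash]
----
# Fetch core-level metrics whose names start with QUERY.
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=QUERY&wt=json'
----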
 
 These statistics are per core.
@@ -219,4 +219,4 @@ When eviction by heap usage is enabled, the following additional statistics are
 |evictionsRamUsage| Number of cache evictions for the current index searcher because heap usage exceeded maxRamMB.
 |===
 
-More information on Solr caches is available in the section <<caches-warming.adoc#,Caches and Query Warming>>.
+More information on Solr caches is available in the section xref:configuration-guide:caches-warming.adoc[].
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/ping.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/ping.adoc
index 1941800..d85c080 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/ping.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/ping.adoc
@@ -21,8 +21,8 @@ Choosing Ping under a core name issues a `ping` request to check whether the cor
 .Ping Option in Core Dropdown
 image::ping/ping.png[image,width=171,height=195]
 
-The search executed by a Ping is configured with the <<request-parameters-api.adoc#,Request Parameters API>>.
-See <<implicit-requesthandlers.adoc#,Implicit Request Handlers>> for the paramset to use for the `/admin/ping` endpoint.
+The search executed by a Ping is configured with the xref:configuration-guide:request-parameters-api.adoc[].
+See xref:configuration-guide:implicit-requesthandlers.adoc[] for the paramset to use for the `/admin/ping` endpoint.
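
The same check can also be issued directly against the endpoint; a sketch, assuming a core named `techproducts`:

[source,bash]
----
# The response status is "OK" when the core is up and the configured query succeeds.
curl 'http://localhost:8983/solr/techproducts/admin/ping?wt=json'
----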
 
 The Ping option doesn't open a page, but the status of the request can be seen on the core overview page shown when clicking on a collection name.
 The length of time the request has taken is displayed next to the Ping option, in milliseconds.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
index 900979f..10d2ca8 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/python.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes an output format specifically for <<response-writers.adoc#python-response-writer,Python>>, but <<response-writers.adoc#json-response-writer,JSON output>> is a little more robust.
+Solr includes an output format specifically for xref:query-guide:response-writers.adoc#python-response-writer[Python], but the xref:query-guide:response-writers.adoc#json-response-writer[JSON Response Writer] is a little more robust.
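
A quick sketch of requesting the Python format (the collection name and query are placeholders):

[source,bash]
----
# wt=python asks Solr to return the response in its Python format.
curl 'http://localhost:8983/solr/techproducts/select?q=*:*&wt=python'
----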
 
 == Simple Python
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/replica-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/replica-management.adoc
index 2965b18..8e7bf30 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/replica-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/replica-management.adoc
@@ -189,7 +189,7 @@ These possible values are allowed:
 * `pull`: The PULL type does not maintain a transaction log and only updates its index via replication.
 This type is not eligible to become a leader.
 +
-See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica type options.
+See the section xref:solrcloud-shards-indexing.adoc#types-of-replicas[Types of Replicas] for more information about replica type options.
 
 `nrtReplicas`::
 +
@@ -229,7 +229,7 @@ Defaults to `1` if `type` is `pull` otherwise `0`.
 |===
 +
 Set core property _name_ to _value_.
-See <<core-discovery.adoc#,Core Discovery>> for details about supported properties and values.
+See xref:configuration-guide:core-discovery.adoc[] for details about supported properties and values.
 
 [WARNING]
 ====
@@ -255,7 +255,7 @@ If `false`, the API will return the status of the single action, which may be be
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === Additional Examples using ADDREPLICA
 
@@ -445,7 +445,7 @@ Defaults to `true`, but is ignored if the replica does not have the property `sh
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 
 [[deletereplica]]
@@ -579,7 +579,7 @@ When set to `true`, no action will be taken if the replica is active.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 [[addreplicaprop]]
 == ADDREPLICAPROP: Add Replica Property
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/ruby.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/ruby.adoc
index 84f0eee..92b71b5 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/ruby.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/ruby.adoc
@@ -16,7 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr has an optional Ruby response format that extends the <<response-writers.adoc#json-response-writer,JSON output>> to allow the response to be safely eval'd by Ruby's interpreter
+Solr has an optional Ruby response format that extends the xref:query-guide:response-writers.adoc#json-response-writer[JSON Response Writer] to allow the response to be safely eval'd by Ruby's interpreter.
 
 This Ruby response format differs from JSON in the following ways:
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/rule-based-authorization-plugin.adoc
index 68634d6..2496c1c 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/rule-based-authorization-plugin.adoc
@@ -43,7 +43,7 @@ The users that RBAP sees come from whatever authentication plugin has been confi
 RBAP is compatible with all of the authentication plugins that Solr ships with out of the box.
 It is also compatible with any custom authentication plugins users might write, provided that the plugin sets a user principal on the HttpServletRequest it receives.
 
-The user value seen by RBAP in each case depends on the authentication plugin being used: the Kerberos principal if the <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>> is being used, the "sub" JWT claim if the <<jwt-authentication-plugin.adoc#,JWT Authentication Plugin>> is being used, etc.
+The user value seen by RBAP in each case depends on the authentication plugin being used: the Kerberos principal if the xref:kerberos-authentication-plugin.adoc[] is being used, the "sub" JWT claim if the xref:jwt-authentication-plugin.adoc[] is being used, etc.
 
 === Roles
 
@@ -69,7 +69,7 @@ Administrators can use permissions from a list of predefined options or define t
 == Configuring the Rule-Based Authorization Plugins
 
 Like all of Solr's security plugins, configuration for RBAP lives in a file or ZooKeeper node with the name `security.json`.
-See <<authentication-and-authorization-plugins.adoc#configuring-security-json,Configuring security.json>> for more information on how to setup `security.json` in your cluster.
+See xref:authentication-and-authorization-plugins.adoc#configuring-security-json[Configuring security.json] for more information on how to set up `security.json` in your cluster.
 
 Solr offers an <<Authorization API>> for making changes to RBAP configuration.
 Authorized administrators should use this to make changes under most circumstances.
@@ -154,7 +154,7 @@ For some plugins the principal name and short name may be the same.
 
 === Example for RuleBasedAuthorizationPlugin and BasicAuth
 
-This example `security.json` shows how the <<basic-authentication-plugin.adoc#,Basic authentication plugin>> can work with the `RuleBasedAuthorizationPlugin` plugin:
+This example `security.json` shows how the xref:basic-authentication-plugin.adoc[] can work with the `RuleBasedAuthorizationPlugin`:
 
 [source,json]
 ----
@@ -199,7 +199,7 @@ All other APIs are left open, and can be accessed by both users.
 
 === Example for External Role RuleBasedAuthorizationPlugin with JWT auth
 
-This example `security.json` shows how the <<jwt-authentication-plugin.adoc#,JWT authentication plugin>>, which pulls user and user roles from JWT claims, can work with the `ExternalRoleRuleBasedAuthorizationPlugin` plugin:
+This example `security.json` shows how the xref:jwt-authentication-plugin.adoc[], which pulls user and user roles from JWT claims, can work with the `ExternalRoleRuleBasedAuthorizationPlugin`:
 
 [source,json]
 ----
@@ -394,22 +394,22 @@ The predefined permission names (and their effects) are:
 
 * *security-edit*: this permission is allowed to edit the security configuration, meaning any update action that modifies `security.json` through the APIs will be allowed.
 * *security-read*: this permission is allowed to read the security configuration, meaning any action that reads `security.json` settings through the APIs will be allowed.
-* *schema-edit*: this permission is allowed to edit a collection's schema using the <<schema-api.adoc#,Schema API>>.
+* *schema-edit*: this permission is allowed to edit a collection's schema using the xref:indexing-guide:schema-api.adoc[].
 Note that this allows schema edit permissions for _all_ collections.
 If edit permissions should only be applied to specific collections, a custom permission would need to be created.
-* *schema-read*: this permission is allowed to read a collection's schema using the <<schema-api.adoc#,Schema API>>.
+* *schema-read*: this permission is allowed to read a collection's schema using the xref:indexing-guide:schema-api.adoc[].
 Note that this allows schema read permissions for _all_ collections.
 If read permissions should only be applied to specific collections, a custom permission would need to be created.
-* *config-edit*: this permission is allowed to edit a collection's configuration using the <<config-api.adoc#,Config API>>, the <<request-parameters-api.adoc#,Request Parameters API>>, and other APIs which modify `configoverlay.json`.
+* *config-edit*: this permission is allowed to edit a collection's configuration using the xref:configuration-guide:config-api.adoc[], the xref:configuration-guide:request-parameters-api.adoc[], and other APIs which modify `configoverlay.json`.
 Note that this allows configuration edit permissions for _all_ collections.
 If edit permissions should only be applied to specific collections, a custom permission would need to be created.
-* *config-read*: this permission is allowed to read a collection's configuration using the <<config-api.adoc#,Config API>>, the <<request-parameters-api.adoc#,Request Parameters API>>, <<configsets-api.adoc#configsets-list,Configsets API>>, the Admin UI's <<configuration-files.adoc#files-screen,Files Screen>>, and other APIs accessing configuration.
+* *config-read*: this permission is allowed to read a collection's configuration using the xref:configuration-guide:config-api.adoc[], the xref:configuration-guide:request-parameters-api.adoc[], xref:configuration-guide:configsets-api.adoc#configsets-list[Configsets API], the Admin UI's xref:configuration-guide:configuration-files.adoc#files-screen[Files Screen], and other APIs accessing configuration.
 Note that this allows configuration read permissions for _all_ collections.
 If read permissions should only be applied to specific collections, a custom permission would need to be created.
-* *metrics-read*: this permission allows access to Solr's <<metrics-reporting.adoc#metrics-api,Metrics API>>.
+* *metrics-read*: this permission allows access to Solr's xref:metrics-reporting.adoc#metrics-api[Metrics API].
 * *core-admin-edit*: Core admin commands that can mutate the system state.
 * *core-admin-read*: Read operations on the Core Admin API.
-* *collection-admin-edit*: this permission is allowed to edit a collection's configuration using the <<collections-api.adoc#,Collections API>>.
+* *collection-admin-edit*: this permission is allowed to edit a collection's configuration using the xref:configuration-guide:collections-api.adoc[].
 Note that this allows configuration edit permissions for _all_ collections.
 If edit permissions should only be applied to specific collections, a custom permission would need to be created.
 +
@@ -438,7 +438,7 @@ Specifically, the following actions of the Collections API would be allowed:
 | REBALANCELEADERS
 |===
 
-* *collection-admin-read*: this permission is allowed to read a collection's configuration using the <<collections-api.adoc#,Collections API>>.
+* *collection-admin-read*: this permission is allowed to read a collection's configuration using the xref:configuration-guide:collections-api.adoc[].
 Note that this allows configuration read permissions for _all_ collections.
 If read permissions should only be applied to specific collections, a custom permission would need to be created.
 +
@@ -450,10 +450,10 @@ CLUSTERSTATUS +
 REQUESTSTATUS
 
 * *update*: this permission is allowed to perform any update action on any collection.
-This includes sending documents for indexing (using an <<requesthandlers-searchcomponents.adoc#update-request-handlers,update request handler>>).
+This includes sending documents for indexing (using an xref:configuration-guide:requesthandlers-searchcomponents.adoc#update-request-handlers[update request handler]).
 This applies to all collections by default (`collection:"*"`).
 * *read*: this permission is allowed to perform any read action on any collection.
-This includes querying using search handlers (using <<requesthandlers-searchcomponents.adoc#search-handlers,request handlers>>) such as `/select`, `/get`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`.
+This includes querying using search handlers (using xref:configuration-guide:requesthandlers-searchcomponents.adoc#search-handlers[request handlers]) such as `/select`, `/get`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`.
 This applies to all collections by default (`collection:"*"`).
 * *zk-read*: Permission to read content from ZK (`/api/cluster/zk/data/*`, `/api/cluster/zk/ls/*`).
 * *all*: Any requests coming to Solr.
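+
+As a sketch only (the admin credentials and role name below are hypothetical), a predefined permission such as `read` can be assigned to a role through the Authorization API:
+
+[source,bash]
+----
+# Hypothetical example: grant the predefined "read" permission to a "readers" role,
+# assuming Basic Authentication with an admin user "solr" / "SolrRocks".
+curl -u solr:SolrRocks -X POST -H 'Content-type:application/json' \
+  -d '{"set-permission": {"name": "read", "role": "readers"}}' \
+  http://localhost:8983/solr/admin/authorization
+----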
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/securing-solr.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/securing-solr.adoc
index 31fd7d5..0ced3b8 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/securing-solr.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/securing-solr.adoc
@@ -36,13 +36,13 @@ When planning how to secure Solr, you should consider which of the available fea
 Encrypting traffic to/from Solr and between Solr nodes prevents sensitive data from being leaked on the network.
 TLS is also normally a requirement to prevent credential sniffing when using authentication.
 
-See the section <<enabling-ssl.adoc#,Enabling TLS (SSL)>> for details.
+See the section xref:enabling-ssl.adoc[] for details.
 
 == Authentication and Authorization
 
-Use the <<security-ui.adoc#,Security>> screen in the Admin UI to manage users, roles, and permissions.
+Use the xref:security-ui.adoc[] screen in the Admin UI to manage users, roles, and permissions.
 
-See chapter <<authentication-and-authorization-plugins.adoc#,Configuring Authentication and Authorization>> to learn how to work with the `security.json` file.
+See section xref:authentication-and-authorization-plugins.adoc[] to learn how to work with the `security.json` file.
 
 [#securing-solr-auth-plugins]
 === Authentication Plugins
@@ -53,11 +53,11 @@ The authentication plugins that ship with Solr are:
 // tag::list-of-authentication-plugins[]
 [width=100%,cols="1,1",frame=none,grid=none,stripes=none]
 |===
-| <<basic-authentication-plugin.adoc#,Basic Authentication Plugin>>
-| <<kerberos-authentication-plugin.adoc#,Kerberos Authentication Plugin>>
-| <<jwt-authentication-plugin.adoc#,JWT Authentication Plugin>>
-| <<cert-authentication-plugin.adoc#,Certificate Authentication Plugin>>
-| <<hadoop-authentication-plugin.adoc#,Hadoop Authentication Plugin>>
+| xref:basic-authentication-plugin.adoc[]
+| xref:kerberos-authentication-plugin.adoc[]
+| xref:jwt-authentication-plugin.adoc[]
+| xref:cert-authentication-plugin.adoc[]
+| xref:hadoop-authentication-plugin.adoc[]
 |
 |===
 // end::list-of-authentication-plugins[]
@@ -70,15 +70,15 @@ The authorization plugins that ship with Solr are:
 // tag::list-of-authorization-plugins[]
 [width=100%,cols="1,1",frame=none,grid=none,stripes=none]
 |===
-| <<rule-based-authorization-plugin.adoc#,Rule-Based Authorization Plugin>>
-| <<rule-based-authorization-plugin.adoc#,External Role Rule-Based Authorization Plugin>>
+| xref:rule-based-authorization-plugin.adoc[]
+| xref:rule-based-authorization-plugin.adoc[External Role Rule-Based Authorization Plugin]
 |===
 // end::list-of-authorization-plugins[]
 
 == Audit Logging
 
 Audit logging will record an audit trail of incoming requests to your cluster, such as users being denied access to admin APIs.
-Learn more about audit logging and how to implement an audit logger plugin in the section <<audit-logging.adoc#,Audit Logging>>.
+Learn more about audit logging and how to implement an audit logger plugin in the section xref:audit-logging.adoc[].
 
 == Request Logging
 
@@ -100,7 +100,7 @@ SOLR_IP_BLACKLIST="192.168.0.3, 192.168.0.4"
 == Securing ZooKeeper Traffic
 
 ZooKeeper is a central and important part of a SolrCloud cluster and understanding how to secure
-its content is covered in the section <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
+its content is covered in the section xref:zookeeper-access-control.adoc[].
 
 == Network Configuration
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
index 0fa1061..2d5aefb 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/security-ui.adoc
@@ -29,7 +29,7 @@ The Security screen warns you if security is not enabled for Solr. You are stron
 
 image::security-ui/security-not-enabled-warn.png[image,width=500]
 
-When first getting started with Solr, use the `bin/solr auth` command-line utility to enable security for your Solr installation (cloud mode only), see <<solr-control-script-reference.adoc#authentication,bin/solr auth>> for usage instructions.
+When first getting started with Solr, use the `bin/solr auth` command-line utility to enable security for your Solr installation (cloud mode only); see xref:solr-control-script-reference.adoc#authentication[`bin/solr auth`] for usage instructions.
 For example, the following command will enable *basic authentication* and prompt you for the username and password for the initial user with administrative access:
 [source,bash]
 ----
@@ -43,13 +43,13 @@ You do not need to restart Solr as the security configuration will be refreshed
 The Security screen provides the following features:
 
 * Security Settings: Details about the configured authentication and authorization plugins.
-* Users: Read, create, update, and delete user accounts if using the <<basic-authentication-plugin.adoc#,Basic Authentication>> plugin; this panel is disabled for all other authentication plugins.
-* Roles: Read, create, and update roles if using the <<rule-based-authorization-plugin.adoc#,Rule-based Authorization>> plugin; this panel is disabled for all other authorization plugins.
-* Permissions: Read, create, update, and delete permissions if using the <<rule-based-authorization-plugin.adoc#,Rule-based Authorization>> plugin.
+* Users: Read, create, update, and delete user accounts if using the xref:basic-authentication-plugin.adoc[] plugin; this panel is disabled for all other authentication plugins.
+* Roles: Read, create, and update roles if using the xref:rule-based-authorization-plugin.adoc[] plugin; this panel is disabled for all other authorization plugins.
+* Permissions: Read, create, update, and delete permissions if using the xref:rule-based-authorization-plugin.adoc[] plugin.
 
 == User Management
 
-Administrators can read, create, update, and delete user accounts when using the <<basic-authentication-plugin.adoc#,Basic Authentication>> plugin.
+Administrators can read, create, update, and delete user accounts when using the xref:basic-authentication-plugin.adoc[] plugin.
 
 image::security-ui/users.png[image,width=500]
 
@@ -68,11 +68,12 @@ For systems with many user accounts, use the filter controls at the top of the u
 
 image::security-ui/filter-users.png[image,width=400]
 
-For other authentication plugins, such as the <<jwt-authentication-plugin.adoc#,JWT Authentication>> plugin, this panel will be disabled as users are managed by an external system.
+For other authentication plugins, such as the xref:jwt-authentication-plugin.adoc[] plugin, this panel will be disabled as users are managed by an external system.
 
 == Role Management
 
-<<rule-based-authorization-plugin.adoc#roles,Roles>> link users to permissions. If using the <<rule-based-authorization-plugin.adoc#,Rule-based Authorization>> plugin, administrators can read, create, and update roles. Deleting roles is not supported.
+xref:rule-based-authorization-plugin.adoc#roles[Roles] link users to permissions.
+If using the Rule-based Authorization plugin, administrators can read, create, and update roles. Deleting roles is not supported.
 
 image::security-ui/roles.png[image,width=500]
 
@@ -86,7 +87,7 @@ The *Permissions* panel on the Security screen allows administrators to read, cr
 
 image::security-ui/permissions.png[image,width=900]
 
-For detailed information about how permissions work in Solr, see: <<rule-based-authorization-plugin.adoc#permissions,Rule-based Authorization Permissions>>.
+For detailed information about how permissions work in Solr, see: xref:rule-based-authorization-plugin.adoc#permissions[Rule-based Authorization Permissions].
 
 === Add Permission
 
@@ -102,9 +103,5 @@ If you do not select any roles for a permission, then the permission is assigned
 However, if *Block anonymous requests* (`blockUnknown=true`) is checked, then anonymous users will not be allowed to make requests, so permissions with the `null` role are effectively inactive.
 
 To edit a permission, simply click on the corresponding row in the table. When editing a permission, the current index of the permission in the list of permissions is editable.
-This allows you to re-order permissions if needed; see <<rule-based-authorization-plugin.adoc#permission-ordering-and-resolution,Permission Ordering>>.
+This allows you to re-order permissions if needed; see xref:rule-based-authorization-plugin.adoc#permission-ordering-and-resolution[Permission Ordering].
 In general, you want permissions listed from most specific to least specific in `security.json`.
-
-
-
-
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/shard-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/shard-management.adoc
index 544377e..798c562 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/shard-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/shard-management.adoc
@@ -108,7 +108,7 @@ When using `splitMethod=rewrite` (default) you must ensure that the node running
 Also, the first replicas of resulting sub-shards will always be placed on the shard leader node.
 
 Shard splitting can be a long running process.
-In order to avoid timeouts, you should run this as an <<collections-api.adoc#asynchronous-calls,asynchronous call>>.
+In order to avoid timeouts, you should run this as an xref:configuration-guide:collections-api.adoc#asynchronous-calls[asynchronous call].
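+
+For illustration only (the collection, shard, and request ID are hypothetical), an asynchronous split might look like this:
+
+[source,bash]
+----
+# Hypothetical example: split shard1 of "mycollection" asynchronously,
+# tracking the request with the ID "split-1".
+curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=split-1"
+----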
 
 === SPLITSHARD Parameters
 
@@ -203,7 +203,7 @@ A float value which must be smaller than `0.5` that allows to vary the sub-shard
 |===
 +
 Set core property _name_ to _value_.
-See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
+See the section xref:configuration-guide:core-discovery.adoc[] for details on supported properties and values.
 
 `waitForFinalState`::
 +
@@ -231,7 +231,7 @@ If `true` then each stage of processing will be timed and a `timing` section wil
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 `splitByPrefix`::
 +
@@ -413,7 +413,7 @@ The defaults for the collection are used if omitted.
 |===
 +
 Set core property _name_ to _value_.
-See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
+See the section xref:configuration-guide:core-discovery.adoc[] for details on supported properties and values.
 
 `waitForFinalState`::
 +
@@ -432,7 +432,7 @@ If `false`, the API will return the status of the single action, which may be be
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === CREATESHARD Response
 
@@ -534,7 +534,7 @@ Set this to `false` to prevent the index directory from being deleted.
 |Optional |Default: none
 |===
 +
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+Request ID to track this action which will be xref:configuration-guide:collections-api.adoc#asynchronous-calls[processed asynchronously].
 
 === DELETESHARD Response
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
index db956c9..5b07182 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-control-script-reference.adoc
@@ -23,7 +23,7 @@ You can start and stop Solr, create and delete collections or cores, perform ope
 You can find the script in the `bin/` directory of your Solr installation.
 The `bin/solr` script makes Solr easier to work with by providing simple commands and options to quickly accomplish common goals.
 
-More examples of `bin/solr` in use are available throughout this Guide, but particularly in the sections <<installing-solr.adoc#starting-solr,Starting Solr>> and <<tutorial-solrcloud.adoc#,Getting Started with SolrCloud>>.
+More examples of `bin/solr` in use are available throughout this Guide, but particularly in the sections xref:installing-solr.adoc#starting-solr[Starting Solr] and xref:getting-started:tutorial-solrcloud.adoc[].
 
 == Starting and Stopping
 
@@ -77,7 +77,7 @@ Start Solr in SolrCloud mode, which will also launch the embedded ZooKeeper inst
 +
 This option can be shortened to simply `-c`.
 +
-If you are already running a ZooKeeper ensemble that you want to use instead of the embedded (single-node) ZooKeeper, you should also either specify `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) or pass the `-z` parameter.
+If you are already running a ZooKeeper ensemble that you want to use instead of the embedded (single-node) ZooKeeper, you should also either specify `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) or pass the `-z` parameter.
 +
 For more details, see the section <<SolrCloud Mode>> below.
 +
@@ -283,7 +283,7 @@ The `-c` and `-cloud` options are equivalent:
 
 If you specify a ZooKeeper connection string, such as `-z 192.168.1.4:2181`, then Solr will connect to ZooKeeper and join the cluster.
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from all `bin/solr` commands.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from all `bin/solr` commands.
 
 When starting Solr in SolrCloud mode, if you do not define `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` nor specify the `-z` option, then Solr will launch an embedded ZooKeeper server listening on the Solr port + 1000.
 For example, if Solr is running on port 8983, then the embedded ZooKeeper will listen on port 9983.
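+
+As a minimal sketch (the ZooKeeper hosts and chroot are placeholders), joining an existing ensemble instead of launching the embedded ZooKeeper could look like this:
+
+[source,bash]
+----
+# Hypothetical example: start in SolrCloud mode against an external ZooKeeper ensemble.
+bin/solr start -c -z zk1:2181,zk2:2181,zk3:2181/solr
+----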
@@ -297,7 +297,7 @@ To do this use the `mkroot` command outlined below, for example: `bin/solr zk mk
 
 When starting in SolrCloud mode, the interactive script session will prompt you to choose a configset to use.
 
-For more information about starting Solr in SolrCloud mode, see also the section <<tutorial-solrcloud.adoc#,Getting Started with SolrCloud>>.
+For more information about starting Solr in SolrCloud mode, see also the section xref:getting-started:tutorial-solrcloud.adoc[].
 
 ==== Running with Example Configurations
 
@@ -305,11 +305,11 @@ For more information about starting Solr in SolrCloud mode, see also the section
 
 The example configurations allow you to get started quickly with a configuration that mirrors what you hope to accomplish with Solr.
 
-Each example launches Solr with a managed schema, which allows use of the <<schema-api.adoc#,Schema API>> to make schema edits, but does not allow manual editing of a Schema file.
+Each example launches Solr with a managed schema, which allows use of the xref:indexing-guide:schema-api.adoc[] to make schema edits, but does not allow manual editing of a Schema file.
 
-If you would prefer to manually modify a `schema.xml` file directly, you can change this default as described in the section <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>>.
+If you would prefer to manually modify a `schema.xml` file directly, you can change this default as described in the section xref:configuration-guide:schema-factory.adoc[].
 
-Unless otherwise noted in the descriptions below, the examples do not enable SolrCloud nor <<schemaless-mode.adoc#,schemaless mode>>.
+Unless otherwise noted in the descriptions below, the examples do not enable SolrCloud or xref:indexing-guide:schemaless-mode.adoc[].
 
 The following examples are provided:
 
@@ -322,8 +322,8 @@ When using this example, you can choose from any of the available configsets fou
 +
 The configset used can be found in `$SOLR_HOME/server/solr/configsets/sample_techproducts_configs`.
 
-* *schemaless*: This example starts a single-node Solr instance using a managed schema, as described in the section <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>>, and provides a very minimal pre-defined schema.
-Solr will run in <<schemaless-mode.adoc#,Schemaless Mode>> with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents.
+* *schemaless*: This example starts a single-node Solr instance using a managed schema, as described in the section xref:configuration-guide:schema-factory.adoc[], and provides a very minimal pre-defined schema.
+Solr will run in xref:indexing-guide:schemaless-mode.adoc[] with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents.
 +
 The configset used can be found in `$SOLR_HOME/server/solr/configsets/_default`.
 
@@ -743,8 +743,8 @@ Currently, this script only enables Basic Authentication, and is only available
 
 The command `bin/solr auth enable` configures Solr to use Basic Authentication when accessing the User Interface, using `bin/solr` and any API requests.
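+
+A minimal sketch (the username and password shown are placeholders):
+
+[source,bash]
+----
+# Hypothetical example: enable Basic Authentication with an initial admin user.
+bin/solr auth enable -credentials admin:adminPassword
+----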
 
-TIP: For more information about Solr's authentication plugins, see the section <<securing-solr.adoc#,Securing Solr>>.
-For more information on Basic Authentication support specifically, see the section  <<basic-authentication-plugin.adoc#,Basic Authentication Plugin>>.
+TIP: For more information about Solr's authentication plugins, see the section xref:securing-solr.adoc[].
+For more information on Basic Authentication support specifically, see the section xref:basic-authentication-plugin.adoc[].
 
 The `bin/solr auth enable` command makes several changes to enable Basic Authentication:
 
@@ -866,7 +866,7 @@ However, the `basicAuth.conf` file is not removed with either option.
 
 == Set or Unset Configuration Properties
 
-The `bin/solr` script enables a subset of the Config API: <<config-api.adoc#commands-for-common-properties,(un)setting common properties>> and <<config-api.adoc#commands-for-user-defined-properties,(un)setting user-defined properties>>.
+The `bin/solr` script enables a subset of the Config API: xref:configuration-guide:config-api.adoc#commands-for-common-properties[(un)setting common properties] and xref:configuration-guide:config-api.adoc#commands-for-user-defined-properties[(un)setting user-defined properties].
 
 `bin/solr config [options]`
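+
+For example (the collection name and value are illustrative), a common property can be set with `-action set-property`:
+
+[source,bash]
+----
+# Hypothetical example: raise the autoSoftCommit maxTime for "mycollection".
+bin/solr config -c mycollection -p 8983 -action set-property -property updateHandler.autoSoftCommit.maxTime -value 10000
+----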
 
@@ -888,7 +888,7 @@ To unset a previously set common property, specify `-action unset-property` with
 
 === Set or Unset User-Defined Properties
 
-To set the user-defined property `update.autoCreateFields` to `false` (to disable <<schemaless-mode.adoc#,Schemaless Mode>>):
+To set the user-defined property `update.autoCreateFields` to `false` (to disable xref:indexing-guide:schemaless-mode.adoc[]):
 
 `bin/solr config -c mycollection -p 8983 -action set-user-property -property update.autoCreateFields -value false`
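+
+To remove that same user-defined property later, a sketch (same hypothetical collection):
+
+[source,bash]
+----
+# Hypothetical example: unset the user-defined property again.
+bin/solr config -c mycollection -p 8983 -action unset-user-property -property update.autoCreateFields
+----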
 
@@ -1041,7 +1041,7 @@ bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configse
 ====
 This command does *not* automatically make changes effective!
 It simply uploads the configuration sets to ZooKeeper.
-You can use the Collection API's <<collection-management.adoc#reload,RELOAD command>> to reload any collections that uses this configuration set.
+You can use the Collection API's xref:collection-management.adoc#reload[RELOAD command] to reload any collections that use this configuration set.
 ====
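+
+A reload could look like the following sketch (the collection name is hypothetical):
+
+[source,bash]
+----
+# Hypothetical example: reload "mycollection" so the uploaded configset takes effect.
+curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
+----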
 
 === Download a Configuration Set
@@ -1459,7 +1459,7 @@ bin/solr export -url http://localhost:8983/solr/gettingstarted -1 -out 1MDocs.js
 
 === Importing Documents to a Collection
 
-Once you have exported documents in a file, you can use the <<indexing-with-update-handlers.adoc#,/update request handler>> to import them to a new Solr collection.
+Once you have exported documents in a file, you can use the xref:indexing-guide:indexing-with-update-handlers.adoc[/update request handler] to import them to a new Solr collection.
 
 *Example: import `jsonl` files*
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
index 4953e4c..045b7e0 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solr-in-docker.adoc
@@ -92,7 +92,7 @@ docker run --name solr_demo -d -p 8983:8983 solr solr-demo
 
 == How the Image Works
 
-The container contains an installation of Solr, as installed by the <<taking-solr-to-production.adoc#service-installation-script,service installation script>>.
+The container contains an installation of Solr, as installed by the xref:taking-solr-to-production.adoc#service-installation-script[service installation script].
 This stores the Solr distribution in `/opt/solr`, and configures Solr to use `/var/solr` to store data and logs, using the `/etc/default/solr` file for configuration.
 If you want to persist the data, mount a volume or directory on `/var/solr`.
 Solr expects some files and directories in `/var/solr`; if you use your own directory or volume you can either pre-populate them, or let Solr docker copy them for you.
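+
+As a sketch (the volume name is arbitrary), persisting data with a named Docker volume could look like this:
+
+[source,bash]
+----
+# Hypothetical example: mount a named volume on /var/solr so index data survives container restarts.
+docker run -d -p 8983:8983 --name my_solr -v solrdata:/var/solr solr
+----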
@@ -104,7 +104,7 @@ The Solr docker distribution adds scripts in `/opt/solr/docker/scripts` to make
 === Creating Cores
 
 When Solr runs in standalone mode, you create "cores" to store data.
-On a non-Docker Solr, you would run the server in the background, then use the <<solr-control-script-reference.adoc#,Solr control script>> to create cores and load data.
+On a non-Docker Solr, you would run the server in the background, then use the xref:solr-control-script-reference.adoc[Solr control script] to create cores and load data.
 With Solr docker you have various options.
 
 ==== Manually
@@ -136,7 +136,7 @@ docker run -d -p 8983:8983 --name my_solr -v $PWD/config/solr:/my_core_config/co
 ----
 
 N.B. When specifying the full path to the configset, the actual core configuration should be located inside that directory in the `conf` directory.
-See <<config-sets.adoc#,Configsets>> for details.
+See xref:configuration-guide:config-sets.adoc[] for details.
 
 ==== Using solr-create Command
 
@@ -188,8 +188,8 @@ The fourth way is to use the remote API, from the host or from one of the contai
 curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted3&numShards=1&collection.configName=_default'
 ----
 
-If you want to use a custom config for your collection, you first need to upload it, and then refer to it by name when you create the collection.
-See the Ref guide on how to use the <<solr-control-script-reference.adoc#upload-a-configuration-set,ZooKeeper upload>> or the <<configsets-api.adoc#configsets-upload,Configsets API>>.
+If you want to use a custom configuration for your collection, you first need to upload it, and then refer to it by name when you create the collection.
+You can use the xref:solr-control-script-reference.adoc#upload-a-configuration-set[`bin/solr zk` command] or the xref:configuration-guide:configsets-api.adoc#configsets-upload[Configsets API].
 
 === Loading Your Own Data
 
@@ -222,7 +222,7 @@ Alternatively, you can make the data available on a volume at Solr start time, a
 === solr.in.sh Configuration
 
 In Solr it is common to configure settings in https://github.com/apache/solr/blob/main/solr/bin/solr.in.sh[solr.in.sh],
-as documented in the <<taking-solr-to-production.adoc#environment-overrides-include-file,Taking Solr to Production page>>.
+as documented in the section xref:taking-solr-to-production.adoc#environment-overrides-include-file[Environment Overrides Include File].
 
 The `solr.in.sh` file can be found in `/etc/default`:
 
@@ -314,7 +314,7 @@ jattach 10 jcmd GC.heap_info
 
 == Updating from Solr 5-7 to 8+
 
-In Solr 8, the Solr Docker image switched from just extracting the Solr tar, to using the <<taking-solr-to-production.adoc#service-installation-script,service installation script>>.
+In Solr 8, the Solr Docker image switched from just extracting the Solr tar, to using the xref:taking-solr-to-production.adoc#service-installation-script[service installation script].
 This was done for various reasons: to bring it in line with the recommendations by the Solr Ref Guide and to make it easier to mount volumes.
 
 This is a backwards incompatible change, and means that if you're upgrading from an older version, you will most likely need to make some changes.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-distributed-requests.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-distributed-requests.adoc
index 9a95f11..b8af8e8 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-distributed-requests.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-distributed-requests.adoc
@@ -115,7 +115,7 @@ There are several ways to control how queries are routed.
 
 === Limiting Which Shards are Queried
 
-While one of the advantages of using SolrCloud is the ability to query very large collections distributed across various shards, in some cases you may have configured Solr so you know <<solrcloud-shards-indexing.adoc#document-routing,you are only interested in results from a specific subset of shards>>.
+While one of the advantages of using SolrCloud is the ability to query very large collections distributed across various shards, in some cases you may have configured Solr with specific xref:solrcloud-shards-indexing.adoc#document-routing[document routing].
 You have the option of searching over all of your data or just parts of it.
 
 Because SolrCloud automatically load balances queries, a query across all shards for a collection is simply a query that does not define a `shards` parameter:
@@ -222,7 +222,7 @@ The difference is that the non-leader `TLOG` replica also captures updates in it
 
 `node.sysprop`::
 Query will be routed to nodes with same defined system properties as the current one.
-For example, if you start Solr nodes on different racks, you'll want to identify those nodes by a <<property-substitution.adoc#jvm-system-properties,system property>> (e.g., `-Drack=rack1`).
+For example, if you start Solr nodes on different racks, you'll want to identify those nodes by a xref:configuration-guide:property-substitution.adoc#jvm-system-properties[system property] (e.g., `-Drack=rack1`).
 Then, queries can contain `shards.preference=node.sysprop:sysprop.rack`, to make sure you always hit shards with the same value of `rack`.
 
 *Examples*:
@@ -285,7 +285,7 @@ The `\_route_` parameter can be used to specify a route key which is used to fig
 For example, if you have a document with a unique key "user1!123", then specifying the route key as "_route_=user1!" (notice the trailing '!' character) will route the request to the shard which hosts that user.
 You can specify multiple route keys separated by comma.
 This parameter can be leveraged when we have shard data by users.
-See <<solrcloud-shards-indexing.adoc#document-routing,Document Routing>> for more information
+See xref:solrcloud-shards-indexing.adoc#document-routing[Document Routing] for more information.
 
 [source,plain]
 ----
@@ -335,7 +335,7 @@ To add a `shardHandlerFactory` to the standard search handler, provide a configu
 
 NOTE: The `shardHandlerFactory` relies on the `allowUrls` parameter configured in `solr.xml`, which controls which nodes are allowed to talk to each other.
 This means that the configuration of hosts is global instead of per-core or per-collection.
-See the section <<configuring-solr-xml.adoc#allow-urls, Format of solr.allowUrls>> for details.
+See the section xref:configuration-guide:configuring-solr-xml.adoc#allow-urls[Format of solr.allowUrls] for details.
 
 The `HttpShardHandlerFactory` accepts the following parameters:
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-shards-indexing.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-shards-indexing.adoc
index 6595eb4..062a7d2 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-shards-indexing.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-shards-indexing.adoc
@@ -117,11 +117,11 @@ When it rejoins the cluster, it would replicate from the leader and when that is
 === Queries with Preferred Replica Types
 
 By default all replicas serve queries.
-See the section <<solrcloud-distributed-requests.adoc#shards-preference-parameter,shards.preference Parameter>> for details on how to indicate preferred replica types for queries.
+See the section xref:solrcloud-distributed-requests.adoc#shards-preference-parameter[shards.preference Parameter] for details on how to indicate preferred replica types for queries.
 
 == Document Routing
 
-Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collection-management.adoc#create,creating your collection>>.
+Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when xref:collection-management.adoc#create[creating your collection].
 
 If you use the `compositeId` router (the default), you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing.
 The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently.
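+
+As an illustration (the collection, prefix, and field names are hypothetical), a document routed with a `compositeId` prefix might be indexed like this:
+
+[source,bash]
+----
+# Hypothetical example: "customerA!" is the route prefix embedded in the document ID.
+curl -X POST -H 'Content-Type: application/json' \
+  'http://localhost:8983/solr/mycollection/update?commit=true' \
+  -d '[{"id": "customerA!doc1", "title_s": "example document"}]'
+----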
@@ -159,7 +159,7 @@ It currently allows splitting a shard into two pieces.
 The existing shard is left as-is, so the split action effectively makes two copies of the data as new shards.
 You can delete the old shard at a later time when you're ready.
 
-More details on how to use shard splitting is in the section on the Collection API's <<shard-management.adoc#splitshard,SPLITSHARD command>>.
+More details on how to use shard splitting is in the section on the Collection API's xref:shard-management.adoc#splitshard[SPLITSHARD command].
 
 == Ignoring Commits from Client Applications in SolrCloud
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-with-legacy-configuration-files.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-with-legacy-configuration-files.adoc
index dc6af6c..afc8cb8 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-with-legacy-configuration-files.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-with-legacy-configuration-files.adoc
@@ -74,4 +74,4 @@ If you do not want the DistributedUpdateProcessFactory auto-injected into your c
 In the update process, Solr skips updating processors that have already been run on other nodes.
 +
 For more on the default update request processor chain and options, see
-the section <<update-request-processors.adoc#default-update-request-processor-chain,Default Update Request Processor Chain>>.
+the section xref:configuration-guide:update-request-processors.adoc#default-update-request-processor-chain[Default Update Request Processor Chain].
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
index 0b5e105..b46b3ed 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/solrj.adoc
@@ -174,7 +174,7 @@ processing time of the largest update request.
 
 === Cloud Request Routing
 
-The SolrJ `CloudSolrClient` implementations (`CloudSolrClient` and `CloudHttp2SolrClient`) respect the <<solrcloud-distributed-requests.adoc#shards-preference-parameter,shards.preference parameter>>.
+The SolrJ `CloudSolrClient` implementations (`CloudSolrClient` and `CloudHttp2SolrClient`) respect the xref:solrcloud-distributed-requests.adoc#shards-preference-parameter[shards.preference parameter].
 Therefore requests sent to single-sharded collections, using either of the above clients, will be routed the same way that distributed requests are routed to individual shards.
 If no `shards.preference` parameter is provided, the clients will default to sorting replicas randomly.
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/taking-solr-to-production.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/taking-solr-to-production.adoc
index cd3c7aa..04fb63b 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/taking-solr-to-production.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/taking-solr-to-production.adoc
@@ -54,7 +54,7 @@ With this approach, the files in `/opt/solr` will remain untouched and all files
 
 === Create the Solr User
 
-Running Solr as `root` is not recommended for security reasons, and the <<solr-control-script-reference.adoc#,control script>> start command will refuse to do so.
+Running Solr as `root` is not recommended for security reasons, and the xref:solr-control-script-reference.adoc#starting-and-stopping[`bin/solr start`] command will refuse to do so.
 Consequently, you should determine the username of a system user that will own all of the Solr files and the running Solr process.
 By default, the installation script will create the *solr* user, but you can override this setting using the -u option.
 If your organization has specific requirements for creating new user accounts, then you should create the user before running the script.
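+
+A sketch of the installation command (the archive name depends on your Solr version):
+
+[source,bash]
+----
+# Hypothetical example: install as the "solr" user without starting the service yet.
+sudo bash ./install_solr_service.sh solr-9.0.0.tgz -u solr -n
+----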
@@ -117,7 +117,7 @@ The Solr home directory (not to be confused with the Solr installation directory
 By default, the installation script uses `/var/solr/data`.
 If the `-d` option is used on the install script, then this will change to the `data` subdirectory in the location given to the -d option.
 Take a moment to inspect the contents of the Solr home directory on your system.
-If you do not <<zookeeper-file-management.adoc#,store `solr.xml` in ZooKeeper>>, the home directory must contain a `solr.xml` file.
+If you do not xref:zookeeper-file-management.adoc[store `solr.xml` in ZooKeeper], the home directory must contain a `solr.xml` file.
 When Solr starts up, the Solr Control Script passes the location of the home directory using the `-Dsolr.solr.home=...` system property.
 
 ==== Environment Overrides Include File
@@ -136,7 +136,7 @@ SOLR_PID_DIR=/var/solr
 SOLR_HOME=/var/solr/data
 ----
 
-The `SOLR_PID_DIR` variable sets the directory where the <<solr-control-script-reference.adoc#,control script>> will write out a file containing the Solr server’s process ID.
+The `SOLR_PID_DIR` variable sets the directory where the xref:solr-control-script-reference.adoc[control script] will write out a file containing the Solr server’s process ID.
 
 ==== Log Settings
 
@@ -150,7 +150,7 @@ LOG4J_PROPS=/var/solr/log4j2.xml
 SOLR_LOGS_DIR=/var/solr/logs
 ----
 
-For more information about Log4J configuration, please see: <<configuring-logging.adoc#,Configuring Logging>>
+For more information about Log4J configuration, please see: xref:configuring-logging.adoc[].
 
 ==== init.d Script
 
@@ -219,7 +219,7 @@ This scheduler uses multiple threads to merge Lucene segments in the background.
 By default, the `ConcurrentMergeScheduler` auto-detects defaults for `maxThreadCount` and `maxMergeCount` accordingly.
 `maxThreadCount` is set to 4 or half the number of processors available to the JVM, whichever is greater, and `maxMergeCount` is set to `maxThreadCount+5`.
 
-If you have a spinning disk, it is best to explicitly set values for `maxThreadCount` and `maxMergeCount` in the <<index-segments-merging.adoc#mergescheduler, IndexConfig section of SolrConfig.xml>> so that values appropriate to your hardware are used.
+If you have a spinning disk, it is best to explicitly set values for `maxThreadCount` and `maxMergeCount` in the xref:configuration-guide:index-segments-merging.adoc#mergescheduler[IndexConfig section of SolrConfig.xml] so that values appropriate to your hardware are used.
 
 === Memory and GC Settings
 
@@ -235,10 +235,10 @@ SOLR_HEAP="8g"
 [NOTE]
 ====
 Do not allocate a very large Java Heap unless you know you need it.
-See <<jvm-settings.adoc#choosing-memory-heap-settings,Choosing Memory Heap Settings>> for details.
+See xref:jvm-settings.adoc#choosing-memory-heap-settings[Choosing Memory Heap Settings] for details.
 ====
 
-Also, the <<solr-control-script-reference.adoc#,Solr Control Script>> comes with a set of pre-configured Garbage First Garbage Collection settings that have shown to work well with Solr for a number of different workloads.
+Also, the xref:solr-control-script-reference.adoc[Solr Control Script] comes with a set of pre-configured Garbage First Garbage Collection settings that have shown to work well with Solr for a number of different workloads.
 However, these settings may not work well for your specific use of Solr.
 Consequently, you may need to change the GC settings, which should also be done with the `GC_TUNE` variable in the `/etc/default/solr.in.sh` include file.
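+
+A sketch only (the flags shown are illustrative, not a tuning recommendation):
+
+[source,bash]
+----
+# Hypothetical example: override the default GC settings in /etc/default/solr.in.sh.
+GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
+----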
 
@@ -246,7 +246,7 @@ For more information about garbage collection settings refer to following articl
 . https://cwiki.apache.org/confluence/display/solr/ShawnHeisey
 . https://www.oracle.com/technetwork/articles/java/g1gc-1984535.html
 
-You can also refer to <<jvm-settings.adoc#,JVM Settings>> for tuning your memory and garbage collection settings.
+You can also refer to xref:jvm-settings.adoc[] for tuning your memory and garbage collection settings.
 
 ==== Out-of-Memory Shutdown Hook
 
@@ -278,7 +278,7 @@ For instance, to ensure all znodes created by SolrCloud are stored under `/solr`
 ZK_HOST=zk1,zk2,zk3/solr
 ----
 
-Before using a chroot for the first time, you need to create the root path (znode) in ZooKeeper by using the <<solr-control-script-reference.adoc#,Solr Control Script>>.
+Before using a chroot for the first time, you need to create the root path (znode) in ZooKeeper by using the xref:solr-control-script-reference.adoc[Solr Control Script].
 We can use the mkroot command for that:
 
 [source,bash]
@@ -289,7 +289,7 @@ bin/solr zk mkroot /solr -z <ZK_node>:<ZK_PORT>
 [NOTE]
 ====
 If you also want to bootstrap ZooKeeper with existing `solr_home`, you can instead use the `zkcli.sh` / `zkcli.bat` `bootstrap` command, which will also create the chroot path if it does not exist.
-See <<zookeeper-utilities.adoc#,ZooKeeper Utilities>> for more info.
+See xref:zookeeper-utilities.adoc[] for more info.
 ====
 
 === Solr Hostname
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/upgrading-a-solr-cluster.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/upgrading-a-solr-cluster.adoc
index 92b132d..2fb214e 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/upgrading-a-solr-cluster.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/upgrading-a-solr-cluster.adoc
@@ -17,7 +17,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-This page covers how to upgrade an existing Solr cluster that was installed using the <<taking-solr-to-production.adoc#,service installation scripts>>.
+This page covers how to upgrade an existing Solr cluster that was installed using the xref:taking-solr-to-production.adoc[service installation scripts].
 
 IMPORTANT: The steps outlined on this page assume you use the default service name of `solr`.
 If you use an alternate service name or Solr installation directory, some of the paths and commands mentioned below will have to be modified accordingly.
@@ -26,11 +26,11 @@ If you use an alternate service name or Solr installation directory, some of the
 
 Here is a checklist of things you need to prepare before starting the upgrade process:
 
-. Examine the <<solr-upgrade-notes.adoc#,Solr Upgrade Notes>> to determine if any behavior changes in the new version of Solr will affect your installation.
+. Examine the xref:upgrade-notes:solr-upgrade-notes.adoc[] to determine if any behavior changes in the new version of Solr will affect your installation.
 . If not using replication (i.e., collections with `replicationFactor` less than 2), then you should make a backup of each collection.
 If all of your collections use replication, then you don't technically need to make a backup since you will be upgrading and verifying each node individually.
 . Determine which Solr node is currently hosting the Overseer leader process in SolrCloud, as you should upgrade this node last.
-To determine the Overseer, use the Overseer Status API, see: <<collections-api.adoc#,Collections API>>.
+To determine the Overseer, use the xref:cluster-node-management.adoc#overseerstatus[Overseer Status API].
 . Plan to perform your upgrade during a system maintenance window if possible.
 You'll be doing a rolling restart of your cluster (each node, one-by-one), but we still recommend doing the upgrade when system usage is minimal.
 . Verify the cluster is currently healthy and all replicas are active, as you should not perform an upgrade on a degraded cluster.
@@ -40,7 +40,7 @@ You'll be doing a rolling restart of your cluster (each node, one-by-one), but w
 * `SOLR_HOST`: The hostname each Solr node used to register with ZooKeeper when joining the SolrCloud cluster; this value will be used to set the *host* Java system property when starting the new Solr process.
 * `SOLR_PORT`: The port each Solr node is listening on, such as 8983.
 * `SOLR_HOME`: The absolute path to the Solr home directory for each Solr node; this directory must contain a `solr.xml` file.
-This value will be passed to the new Solr process using the `solr.solr.home` system property, see: <<configuring-solr-xml.adoc#,Configuring solr.xml>>.
+This value will be passed to the new Solr process using the `solr.solr.home` system property, see: xref:configuration-guide:configuring-solr-xml.adoc[].
 +
 If you are upgrading from an installation of Solr 5.x or later, these values can typically be found in either `/var/solr/solr.in.sh` or `/etc/default/solr.in.sh`.
 
@@ -59,13 +59,13 @@ This means that you won't need to move any index files around to perform the upg
 === Step 1: Stop Solr
 
 Begin by stopping the Solr node you want to upgrade.
-After stopping the node, if using a replication (i.e., collections with `replicationFactor` less than 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the <<cloud-screens.adoc#,Cloud panel in the Solr Admin UI>>.
+After stopping the node, if using replication (i.e., collections with `replicationFactor` greater than 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the xref:cloud-screens.adoc[] in the Solr Admin UI.
 If not using replication, then any collections with shards hosted on the downed node will be temporarily off-line.
 
 
 === Step 2: Install Solr as a Service
 
-Please follow the instructions to install Solr as a Service on Linux documented at <<taking-solr-to-production.adoc#,Taking Solr to Production>>.
+Please follow the instructions to install Solr as a Service on Linux documented at xref:taking-solr-to-production.adoc[].
 Use the `-n` parameter to avoid automatic start of Solr by the installer script.
 You need to update the `/etc/default/solr.in.sh` include file in the next step to complete the upgrade process.
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-distributed-search.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-distributed-search.adoc
index a88a122..4600396 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-distributed-search.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-distributed-search.adoc
@@ -18,7 +18,7 @@
 
 When using traditional index sharding, you will need to consider how to query your documents.
 
-It is highly recommended that you use <<cluster-types.adoc#solrcloud-mode,SolrCloud>> when needing to scale up or scale out.
+It is highly recommended that you use xref:cluster-types.adoc#solrcloud-mode[SolrCloud] when needing to scale up or scale out.
 The setup described below is legacy and was used prior to the existence of SolrCloud.
 SolrCloud provides for a truly distributed set of features with support for things like automatic routing, leader election, optimistic concurrency and other sanity checks that are expected out of a distributed system.
 
@@ -75,7 +75,7 @@ The following components support distributed search:
 
 The nodes allowed in the `shards` parameter are configurable through the `allowUrls` property in `solr.xml`.
 This allow-list is automatically configured for SolrCloud but needs explicit configuration for user-managed clusters.
-Read more details in the section <<solrcloud-distributed-requests.adoc#configuring-the-shardhandlerfactory,Configuring the ShardHandlerFactory>>.
+Read more details in the section xref:solrcloud-distributed-requests.adoc#configuring-the-shardhandlerfactory[Configuring the ShardHandlerFactory].
 
 == Limitations to User-Managed Distributed Search
 
@@ -96,12 +96,12 @@ Document adds and deletes are forwarded to the appropriate server/shard based on
 
 Formerly a limitation was that TF/IDF relevancy computations only used shard-local statistics.
 This is still the case by default.
-If your data isn't randomly distributed, or if you want more exact statistics, then you can configure the <<solrcloud-distributed-requests#distributedidf,`ExactStatsCache`>>.
+If your data isn't randomly distributed, or if you want more exact statistics, then you can configure the xref:solrcloud-distributed-requests.adoc#distributedidf[`ExactStatsCache`].
 
 == Avoiding Distributed Deadlock
 
 Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock.
-It can be avoided by following the instructions in the section  <<solrcloud-distributed-requests.adoc#,Distributed Requests>>.
+It can be avoided by following the instructions in the section xref:solrcloud-distributed-requests.adoc[].
 
 == Load Balancing Requests
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
index ea0e592..b09be99 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-index-replication.adoc
@@ -43,7 +43,7 @@ Configuring replication is therefore similar to any normal request handler.
 .Replication In SolrCloud
 [NOTE]
 ====
-Although there is no explicit concept of leader or follower nodes in a <<cluster-types.adoc#solrcloud-mode,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery" – but this is done in a peer to peer manner.
+Although there is no explicit concept of leader or follower nodes in a xref:cluster-types.adoc#solrcloud-mode[SolrCloud cluster], the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery" – but this is done in a peer to peer manner.
 
 When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path.
 Solr does this implicitly unless overridden explicitly in your `solrconfig.xml`.
@@ -55,7 +55,7 @@ If you wish to override the default behavior, make certain that you do not set a
 In addition to `ReplicationHandler` configuration options specific to the leader and follower roles described below, there are a few special configuration options that are generally supported (even when using SolrCloud).
 
 * `maxNumberOfBackups` an integer value dictating the maximum number of backups this node will keep on disk as it receives `backup` commands.
-* Similar to most other request handlers in Solr you may configure a set of <<requesthandlers-searchcomponents.adoc#search-handlers,defaults, invariants, and/or appends>> parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<HTTP API Commands for the ReplicationHandler,processing commands>>.
+* Similar to most other request handlers in Solr you may configure a set of xref:configuration-guide:requesthandlers-searchcomponents.adoc#search-handlers[defaults, invariants, and/or appends] parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<HTTP API Commands for the ReplicationHandler,processing commands>>.
 
 === Configuring a Leader Server
 
@@ -519,6 +519,6 @@ No query followers need to be taken out of service.
 The optimized index can be distributed in the background as queries are being normally serviced.
 The optimization can occur at any time convenient to the application providing index updates.
 
-While optimizing may have some benefits in some situations, a rapidly changing index will not retain those benefits for long, and since optimization is an intensive process, it may be better to consider other options, such as lowering the merge factor (discussed in the section on <<index-segments-merging.adoc#merge-factors,Index Configuration>>).
+While optimizing may have some benefits in some situations, a rapidly changing index will not retain those benefits for long, and since optimization is an intensive process, it may be better to consider other options, such as lowering the merge factor (discussed in the section on xref:configuration-guide:index-segments-merging.adoc#merge-factors[Controlling Segment Sizes]).
 
 TIP: Do not elect to optimize your index unless you have tangible evidence that it will significantly improve your search performance.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-access-control.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-access-control.adoc
index 9642d35..6844128 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-access-control.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-access-control.adoc
@@ -71,7 +71,7 @@ If you use `SolrZkClient` in your application, the descriptions below will be tr
 
 === Controlling Credentials
 
-You control which credentials provider will be used by configuring the `zkCredentialsProvider` property in the `<solrcloud>` section of <<configuring-solr-xml.adoc#,`solr.xml`>> to the name of a class (on the classpath) implementing the {solr-javadocs}/solrj/org/apache/solr/common/cloud/ZkCredentialsProvider.html[`ZkCredentialsProvider`] interface.
+You control which credentials provider will be used by configuring the `zkCredentialsProvider` property in the `<solrcloud>` section of xref:configuration-guide:configuring-solr-xml.adoc[`solr.xml`] to the name of a class (on the classpath) implementing the {solr-javadocs}/solrj/org/apache/solr/common/cloud/ZkCredentialsProvider.html[`ZkCredentialsProvider`] interface.
 
 `solr.xml` defines the `zkCredentialsProvider` such that it will take on the value of the same-named `zkCredentialsProvider` system property if it is defined (in `solr.in.sh/.cmd` - see <<ZooKeeper ACLs in Solr Scripts,below>>), or if not, default to the `DefaultZkCredentialsProvider` implementation.
 
@@ -91,7 +91,7 @@ This set of credentials will be added to the list of credentials returned by `ge
 
 === Controlling ACLs
 
-You control which ACLs will be added by configuring `zkACLProvider` property in the `<solrcloud>` section of <<configuring-solr-xml.adoc#,`solr.xml`>> to the name of a class (on the classpath) implementing the {solr-javadocs}/solrj/org/apache/solr/common/cloud/ZkACLProvider.html[`ZkACLProvider`] interface.
+You control which ACLs will be added by configuring `zkACLProvider` property in the `<solrcloud>` section of xref:configuration-guide:configuring-solr-xml.adoc[`solr.xml`] to the name of a class (on the classpath) implementing the {solr-javadocs}/solrj/org/apache/solr/common/cloud/ZkACLProvider.html[`ZkACLProvider`] interface.
 
 `solr.xml` defines the `zkACLProvider` such that it will take on the value of the same-named `zkACLProvider` system property if it is defined (in `solr.in.sh/.cmd` - see <<ZooKeeper ACLs in Solr Scripts,below>>), or if not, default to the `DefaultZkACLProvider` implementation.
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-ensemble.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-ensemble.adoc
index 26d3e48..218aa41 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-ensemble.adoc
@@ -383,7 +383,7 @@ Creating a chroot is done with a `bin/solr` command:
 [source,text]
 bin/solr zk mkroot /solr -z zk1:2181,zk2:2181,zk3:2181
 
-See the section <<solr-control-script-reference.adoc#create-a-znode-supports-chroot,Create a znode>> for more examples of this command.
+See the section xref:solr-control-script-reference.adoc#create-a-znode-supports-chroot[Create a znode] for more examples of this command.
 
 Once the znode is created, it behaves in a similar way to a directory on a filesystem: the data stored by Solr in ZooKeeper is nested beneath the main data directory and won't be mixed with data from another system or process that uses the same ZooKeeper ensemble.
 
@@ -549,4 +549,4 @@ set SOLR_OPTS=%SOLR_OPTS% -Djute.maxbuffer=0x200000
 
 You may also want to secure the communication between ZooKeeper and Solr.
 
-To setup ACL protection of znodes, see the section <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
+To setup ACL protection of znodes, see the section xref:zookeeper-access-control.adoc[].
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-file-management.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-file-management.adoc
index 236ce87..fa5ee7c 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-file-management.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-file-management.adoc
@@ -43,16 +43,15 @@ bin/solr create -c mycollection -d _default
 ----
 
 The create command will upload a copy of the `_default` configuration directory to ZooKeeper under `/configs/mycollection`.
-Refer to the <<solr-control-script-reference.adoc#,Solr Control Script Reference>> page for more details about the create command for creating collections.
+Refer to the xref:solr-control-script-reference.adoc[] for more details about the create command for creating collections.
 
-Once a configuration directory has been uploaded to ZooKeeper, you can update them using the <<solr-control-script-reference.adoc#,Solr Control Script>>
+Once a configuration directory has been uploaded to ZooKeeper, you can update it using the xref:solr-control-script-reference.adoc[Solr Control Script].
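+
+For example, after editing a local copy of the configuration directory, it can be re-uploaded under the same name (the path and names here are illustrative):
+
+[source,bash]
+----
+bin/solr zk upconfig -n mycollection -d /path/to/local/configset -z localhost:2181
+----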
 
 IMPORTANT: It's a good idea to keep these files under version control.
 
-
 == Uploading Configuration Files using bin/solr or SolrJ
 
-In production situations, <<config-sets.adoc#,Configsets>> can also be uploaded to ZooKeeper independent of collection creation using either Solr's <<solr-control-script-reference.adoc#,Solr Control Script>> or SolrJ.
+In production situations, xref:configuration-guide:config-sets.adoc[] can also be uploaded to ZooKeeper independent of collection creation using either Solr's xref:solr-control-script-reference.adoc[Solr Control Script] or xref:solrj.adoc[].
 
 The below command can be used to upload a new configset using the bin/solr script.
 
@@ -82,7 +81,7 @@ To update or change your SolrCloud configuration files:
 == Preparing ZooKeeper before First Cluster Start
 
 If you will share the same ZooKeeper instance with other applications you should use a _chroot_ in ZooKeeper.
-Please see <<taking-solr-to-production.adoc#zookeeper-chroot,ZooKeeper chroot>> for instructions.
+Please see xref:taking-solr-to-production.adoc#zookeeper-chroot[ZooKeeper chroot] for instructions.
 
 There are certain configuration files containing cluster wide configuration.
 Since some of these are crucial for the cluster to function properly, you may need to upload such files to ZooKeeper before starting your Solr cluster for the first time.
@@ -95,4 +94,4 @@ If you for example would like to keep your `solr.xml` in ZooKeeper to avoid havi
 bin/solr zk cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
 ----
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from the above command.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-utilities.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-utilities.adoc
index db3ec44..48e6f9d 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-utilities.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/zookeeper-utilities.adoc
@@ -20,12 +20,12 @@ A ZooKeeper Command Line Interface (CLI) script is available to allow you to int
 
 While Solr's Admin UI includes pages dedicated to the state of your SolrCloud cluster, it does not allow you to download or modify related configuration files.
 
-TIP: See the section <<cloud-screens.adoc#,Cloud Screens>> for more information about using the Admin UI screens.
+TIP: See the section xref:cloud-screens.adoc[] for more information about using the Admin UI screens.
 
 The ZooKeeper CLI script found in `server/scripts/cloud-scripts` lets you upload configuration information to ZooKeeper.
 It also provides a few other commands that let you link collection sets to collections, make ZooKeeper paths or clear them, and download configurations from ZooKeeper to the local filesystem.
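+
+For example, uploading a configuration directory with `zkcli.sh` (the local directory and config name here are illustrative) might look like:
+
+[source,bash]
+----
+server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/local/configset -confname myconfig
+----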
 
-Many of the functions provided by the zkCli.sh script are also provided by the <<solr-control-script-reference.adoc#,Solr Control Script>>, which may be more familiar as the start script ZooKeeper maintenance commands are very similar to Unix commands.
+Many of the functions provided by the `zkCli.sh` script are also provided by the xref:solr-control-script-reference.adoc[Solr Control Script], which may be more familiar, as the start script's ZooKeeper maintenance commands are very similar to Unix commands.
 
 .Solr's zkcli.sh vs ZooKeeper's zkCli.sh
 [IMPORTANT]
@@ -156,7 +156,7 @@ This can be useful to create a chroot path in ZooKeeper before first cluster sta
 This command will add or modify a single cluster property in `clusterprops.json`.
 Use this command instead of the usual getfile -> edit -> putfile cycle.
 
-Unlike the CLUSTERPROP command on the <<cluster-node-management.adoc#clusterprop,Collections API>>, this command does *not* require a running Solr cluster.
+Unlike the xref:cluster-node-management.adoc#clusterprop[CLUSTERPROP] command on the Collections API, this command does *not* require a running Solr cluster.
 
 [source,bash]
 ----
diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
index d5a171b..3a207d8 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
@@ -36,7 +36,7 @@ When it's done start the second node, and tell it how to connect to ZooKeeper
 
 `./bin/solr start -c -p 7574 -s example/cloud/node2/solr -z localhost:9983`
 
-NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see <<zookeeper-ensemble#updating-solr-include-files,instructions>>) you can omit `-z <zk host string>` from the above command.
+NOTE: If you have defined `ZK_HOST` in `solr.in.sh`/`solr.in.cmd` (see xref:deployment-guide:zookeeper-ensemble.adoc#updating-solr-include-files[Updating Solr Include Files]) you can omit `-z <zk host string>` from the above command.
 
 === Create a New Collection
 
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/deployment-guide.adoc b/solr/solr-ref-guide/src/old-pages/deployment-guide.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/deployment-guide.adoc
rename to solr/solr-ref-guide/src/old-pages/deployment-guide.adoc
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/installation-deployment.adoc b/solr/solr-ref-guide/src/old-pages/installation-deployment.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/installation-deployment.adoc
rename to solr/solr-ref-guide/src/old-pages/installation-deployment.adoc
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-solr.adoc b/solr/solr-ref-guide/src/old-pages/monitoring-solr.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/monitoring-solr.adoc
rename to solr/solr-ref-guide/src/old-pages/monitoring-solr.adoc
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/scaling-solr.adoc b/solr/solr-ref-guide/src/old-pages/scaling-solr.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/scaling-solr.adoc
rename to solr/solr-ref-guide/src/old-pages/scaling-solr.adoc
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-clusters.adoc b/solr/solr-ref-guide/src/old-pages/solrcloud-clusters.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/solrcloud-clusters.adoc
rename to solr/solr-ref-guide/src/old-pages/solrcloud-clusters.adoc
diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-clusters.adoc b/solr/solr-ref-guide/src/old-pages/user-managed-clusters.adoc
similarity index 100%
rename from solr/solr-ref-guide/modules/deployment-guide/pages/user-managed-clusters.adoc
rename to solr/solr-ref-guide/src/old-pages/user-managed-clusters.adoc