Posted to commits@solr.apache.org by ep...@apache.org on 2021/10/22 18:12:59 UTC

[solr] branch main updated: SOLR-14834: Update all public-visible links from wiki.apache.org to RefGuide or cwiki (#307)

This is an automated email from the ASF dual-hosted git repository.

epugh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/main by this push:
     new 1850be2  SOLR-14834: Update all public-visible links from wiki.apache.org to RefGuide or cwiki (#307)
1850be2 is described below

commit 1850be283594f9451fe8df022801e61fc40b6a70
Author: Eric Pugh <ep...@opensourceconnections.com>
AuthorDate: Fri Oct 22 14:12:53 2021 -0400

    SOLR-14834: Update all public-visible links from wiki.apache.org to RefGuide or cwiki (#307)
    
    All references to the old wiki have been updated to point to either the appropriate Ref Guide or cwiki pages. This doesn't attempt to solve the versioning issues of the Ref Guide. I've also removed many pointers to the 6.6 version of the Ref Guide in favour of the latest Ref Guide, which should help our SEO on Google point users to the latest version of the docs.
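    A sweep like this can be partially scripted. The sketch below is purely
    illustrative (it was not the process used for this commit, and the URL
    pair shown is just one of the mappings in the diff); it rewrites one
    old-wiki link to its Ref Guide replacement in a throwaway file:

    ```shell
    # Hypothetical example of one old-wiki -> Ref Guide rewrite.
    old='http://wiki.apache.org/solr/LanguageDetection'
    new='https://solr.apache.org/guide/language-detection.html'

    # Demonstrate on a temp file rather than a real checkout.
    tmp=$(mktemp)
    printf 'Please refer to %s for details.\n' "$old" > "$tmp"

    # Use '|' as the sed delimiter so the URLs need no escaping;
    # -i.bak edits in place and keeps a backup (portable to BSD sed).
    sed -i.bak "s|$old|$new|g" "$tmp"
    cat "$tmp"
    ```

    In practice each mapping had to be chosen by hand, since the old wiki
    pages don't correspond one-to-one to Ref Guide pages.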
---
 solr/contrib/clustering/README.md                  |   2 +-
 solr/contrib/extraction/README.md                  |   5 +-
 solr/contrib/langid/README.md                      |   3 +-
 ...angDetectLanguageIdentifierUpdateProcessor.java |   4 +-
 ...ctLanguageIdentifierUpdateProcessorFactory.java |  12 +-
 .../LanguageIdentifierUpdateProcessor.java         |   2 +-
 .../OpenNLPLangDetectUpdateProcessorFactory.java   |   4 +-
 .../TikaLanguageIdentifierUpdateProcessor.java     |   2 +-
 ...kaLanguageIdentifierUpdateProcessorFactory.java |   4 +-
 solr/contrib/ltr/README.md                         |  16 +-
 .../test-files/solr/collection1/conf/schema.xml    |   2 +-
 solr/contrib/prometheus-exporter/README.md         |  14 +-
 .../solr/collection1/conf/managed-schema           |   4 +-
 .../solr/collection1/conf/solrconfig.xml           |  44 ---
 solr/contrib/scripting/README.md                   |   4 +-
 .../apache/solr/cloud/overseer/package-info.java   |   8 +-
 .../java/org/apache/solr/cloud/package-info.java   |   8 +-
 .../handler/component/SpellCheckComponent.java     |  26 +-
 .../java/org/apache/solr/schema/CurrencyField.java |  12 +-
 .../org/apache/solr/schema/CurrencyFieldType.java  |  57 ++-
 .../apache/solr/search/ExtendedDismaxQParser.java  | 398 ++++++++++-----------
 .../solr/search/ExtendedDismaxQParserPlugin.java   |   4 +-
 .../src/java/org/apache/solr/search/QParser.java   |   8 +-
 .../solr/search/join/ScoreJoinQParserPlugin.java   |  17 +-
 .../solr/spelling/AbstractLuceneSpellChecker.java  |  12 +-
 .../solr/spelling/IndexBasedSpellChecker.java      |   6 +-
 .../org/apache/solr/spelling/QueryConverter.java   |  12 +-
 .../org/apache/solr/spelling/SolrSpellChecker.java |  26 +-
 .../org/apache/solr/util/plugin/package-info.java  |   8 +-
 .../solr/collection1/conf/schema-spellchecker.xml  |   9 -
 .../solr/collection1/conf/schema-trie.xml          |  24 --
 .../test-files/solr/collection1/conf/schema11.xml  |  91 ++---
 .../solr/collection1/conf/schema_latest.xml        | 131 ++-----
 solr/licenses/README.committers.txt                |  12 +-
 solr/server/solr/README.md                         |   4 +-
 .../configsets/_default/conf/managed-schema.xml    |   4 +-
 .../solr/configsets/_default/conf/solrconfig.xml   |  18 +-
 .../sample_techproducts_configs/conf/elevate.xml   |   6 +-
 .../conf/managed-schema                            | 236 ++++++------
 .../conf/solrconfig.xml                            |  22 +-
 .../solr-ref-guide/src/index-segments-merging.adoc |   2 +-
 .../solrj/request/ContentStreamUpdateRequest.java  |   8 +-
 .../solr/common/cloud/rule/package-info.java       |   8 +-
 .../solr/common/params/QueryElevationParams.java   |   6 +-
 .../collections.collection.shards.Commands.json    |   6 +-
 solr/solrj/src/resources/apispec/core.Update.json  |   2 +-
 .../src/resources/apispec/cores.Commands.json      |   2 +-
 solr/solrj/src/resources/apispec/cores.Status.json |   2 +-
 .../src/resources/apispec/cores.core.Commands.json |  14 +-
 .../apispec/cores.core.Commands.split.json         |   2 +-
 .../src/resources/apispec/metrics.history.json     |   2 +-
 solr/webapp/web/index.html                         |   4 +-
 solr/webapp/web/partials/login.html                |  20 +-
 solr/webapp/web/partials/sqlquery.html             |   2 +-
 54 files changed, 577 insertions(+), 784 deletions(-)
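A follow-up check after a sweep like the one above is to grep the tree for any surviving old-wiki links. This is a hypothetical verification step, not part of the commit, demonstrated on a throwaway directory instead of a real checkout:

```shell
# Create a tiny stand-in for the source tree: one stale file, one clean.
root=$(mktemp -d)
printf 'See http://wiki.apache.org/solr/SolrCloud\n' > "$root/stale.md"
printf 'See https://solr.apache.org/guide/solrcloud.html\n' > "$root/ok.md"

# -r recurses, -l prints only the matching filenames;
# grep exits non-zero when no stale links remain.
grep -rl 'wiki\.apache\.org/solr' "$root"
```

Run from a real checkout, the same `grep -rl` over the repository root would list any files this commit missed.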

diff --git a/solr/contrib/clustering/README.md b/solr/contrib/clustering/README.md
index 5e9dcb5..f51149b 100644
--- a/solr/contrib/clustering/README.md
+++ b/solr/contrib/clustering/README.md
@@ -1,4 +1,4 @@
 The Clustering contrib plugin for Solr provides a generic mechanism for plugging in third party clustering implementations.
 It currently provides clustering support for search results using the Carrot2 project.
 
-See https://lucene.apache.org/solr/guide/result-clustering for how to get started.
+See https://solr.apache.org/guide/result-clustering.html for how to get started.
diff --git a/solr/contrib/extraction/README.md b/solr/contrib/extraction/README.md
index 1e425ca..c92078c 100644
--- a/solr/contrib/extraction/README.md
+++ b/solr/contrib/extraction/README.md
@@ -7,11 +7,10 @@ Introduction
 Apache Solr Extraction provides a means for extracting and indexing content contained in "rich" documents, such
 as Microsoft Word, Adobe PDF, etc.  (Each name is a trademark of their respective owners)  This contrib module
 uses Apache Tika to extract content and metadata from the files, which can then be indexed.  For more information,
-see https://solr.apache.org/guide/uploading-data-with-solr-cell-using-apache-tika.html
+see https://solr.apache.org/guide/indexing-with-tika.html
 
 Getting Started
 ---------------
 You will need Solr up and running.  Then, simply add the extraction JAR file, plus the Tika dependencies (in the ./lib folder)
-to your Solr Home lib directory.  See https://solr.apache.org/guide/uploading-data-with-solr-cell-using-apache-tika.html for more details on hooking it in
+to your Solr Home lib directory.  See https://solr.apache.org/guide/indexing-with-tika.html for more details on hooking it in
  and configuring.
-
diff --git a/solr/contrib/langid/README.md b/solr/contrib/langid/README.md
index 09c1ff7..ebb4b59 100644
--- a/solr/contrib/langid/README.md
+++ b/solr/contrib/langid/README.md
@@ -13,7 +13,8 @@ Language detector implementations are pluggable.
 
 Getting Started
 ---------------
-Please refer to the module documentation at http://wiki.apache.org/solr/LanguageDetection
+Please refer to the Solr Ref Guide at https://solr.apache.org/guide/language-detection.html
+for more information.
 
 Dependencies
 ------------
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java
index e1e6fa3..3206656 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessor.java
@@ -36,7 +36,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Identifies the language of a set of input fields using https://github.com/shuyo/language-detection
  * <p>
- * See <a href="https://solr.apache.org/guide/detecting-languages-during-indexing.html">Detecting Languages During
+ * See <a href="https://solr.apache.org/guide/language-detection.html">Detecting Languages During
  * Indexing</a> in the Solr Ref Guide
  * @since 3.5
  */
@@ -44,7 +44,7 @@ public class LangDetectLanguageIdentifierUpdateProcessor extends LanguageIdentif
 
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  public LangDetectLanguageIdentifierUpdateProcessor(SolrQueryRequest req, 
+  public LangDetectLanguageIdentifierUpdateProcessor(SolrQueryRequest req,
       SolrQueryResponse rsp, UpdateRequestProcessor next) {
     super(req, rsp, next);
   }
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
index e312f5c..7a0db3a 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LangDetectLanguageIdentifierUpdateProcessorFactory.java
@@ -37,20 +37,20 @@ import com.cybozu.labs.langdetect.DetectorFactory;
 import com.cybozu.labs.langdetect.LangDetectException;
 
 /**
- * Identifies the language of a set of input fields using 
+ * Identifies the language of a set of input fields using
  * http://code.google.com/p/language-detection
  * <p>
  * The UpdateProcessorChain config entry can take a number of parameters
  * which may also be passed as HTTP parameters on the update request
  * and override the defaults. Here is the simplest processor config possible:
- * 
+ *
  * <pre class="prettyprint" >
  * &lt;processor class=&quot;org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory&quot;&gt;
  *   &lt;str name=&quot;langid.fl&quot;&gt;title,text&lt;/str&gt;
  *   &lt;str name=&quot;langid.langField&quot;&gt;language_s&lt;/str&gt;
  * &lt;/processor&gt;
  * </pre>
- * See <a href="http://wiki.apache.org/solr/LanguageDetection">http://wiki.apache.org/solr/LanguageDetection</a>
+ * See <a href="https://solr.apache.org/guide/language-detection.html">https://solr.apache.org/guide/language-detection.html</a>
  * @since 3.5
  */
 public class LangDetectLanguageIdentifierUpdateProcessorFactory extends
@@ -105,11 +105,11 @@ public class LangDetectLanguageIdentifierUpdateProcessorFactory extends
     }
     return new LangDetectLanguageIdentifierUpdateProcessor(req, rsp, next);
   }
-  
-  
+
+
   // DetectorFactory is totally global, so we only want to do this once... ever!!!
   static boolean loaded;
-  
+
   // profiles we will load from classpath
   static final String languages[] = {
     "af", "ar", "bg", "bn", "cs", "da", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gu",
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
index 0043e46..ff630f6 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/LanguageIdentifierUpdateProcessor.java
@@ -45,7 +45,7 @@ import org.slf4j.LoggerFactory;
  *   Identifies the language of a set of input fields.
  *   Also supports mapping of field names based on detected language.
  * </p>
- * See <a href="https://solr.apache.org/guide/detecting-languages-during-indexing.html">Detecting Languages During Indexing</a> in reference guide
+ * See <a href="https://solr.apache.org/guide/language-detection.html">Detecting Languages During Indexing</a> in reference guide
  * @since 3.5
  * @lucene.experimental
  */
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
index 8bcc6de..14e9fa9 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/OpenNLPLangDetectUpdateProcessorFactory.java
@@ -37,7 +37,7 @@ import opennlp.tools.langdetect.LanguageDetectorModel;
  * The UpdateProcessorChain config entry can take a number of parameters
  * which may also be passed as HTTP parameters on the update request
  * and override the defaults. Here is the simplest processor config possible:
- * 
+ *
  * <pre class="prettyprint" >
  * &lt;processor class=&quot;org.apache.solr.update.processor.OpenNLPLangDetectUpdateProcessorFactory&quot;&gt;
  *   &lt;str name=&quot;langid.fl&quot;&gt;title,text&lt;/str&gt;
@@ -45,7 +45,7 @@ import opennlp.tools.langdetect.LanguageDetectorModel;
  *   &lt;str name="langid.model"&gt;langdetect-183.bin&lt;/str&gt;
  * &lt;/processor&gt;
  * </pre>
- * See <a href="http://wiki.apache.org/solr/LanguageDetection">http://wiki.apache.org/solr/LanguageDetection</a>
+ * See <a href="https://solr.apache.org/guide/language-detection.html#configuring-opennlp-language-detection">https://solr.apache.org/guide/language-detection.html#configuring-opennlp-language-detection</a>
  *
  * @since 7.3.0
  */
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessor.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessor.java
index ecce415..5537780 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessor.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessor.java
@@ -32,7 +32,7 @@ import org.slf4j.LoggerFactory;
  * LanguageIdentifier.
  * The tika-core-x.y.jar must be on the classpath
  * <p>
- * See <a href="http://wiki.apache.org/solr/LanguageDetection">http://wiki.apache.org/solr/LanguageDetection</a>
+ * See <a href="https://solr.apache.org/guide/language-detection.html#configuring-tika-language-detection">https://solr.apache.org/guide/language-detection.html#configuring-tika-language-detection</a>
  * @since 3.5
  */
 public class TikaLanguageIdentifierUpdateProcessor extends LanguageIdentifierUpdateProcessor {
diff --git a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
index 0c5f9e6..4c79dd5 100644
--- a/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
+++ b/solr/contrib/langid/src/java/org/apache/solr/update/processor/TikaLanguageIdentifierUpdateProcessorFactory.java
@@ -31,14 +31,14 @@ import org.apache.solr.util.plugin.SolrCoreAware;
  * The UpdateProcessorChain config entry can take a number of parameters
  * which may also be passed as HTTP parameters on the update request
  * and override the defaults. Here is the simplest processor config possible:
- * 
+ *
  * <pre class="prettyprint" >
  * &lt;processor class=&quot;org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory&quot;&gt;
  *   &lt;str name=&quot;langid.fl&quot;&gt;title,text&lt;/str&gt;
  *   &lt;str name=&quot;langid.langField&quot;&gt;language_s&lt;/str&gt;
  * &lt;/processor&gt;
  * </pre>
- * See <a href="http://wiki.apache.org/solr/LanguageDetection">http://wiki.apache.org/solr/LanguageDetection</a>
+ * See <a href="https://solr.apache.org/guide/language-detection.html#configuring-tika-language-detection">https://solr.apache.org/guide/language-detection.html#configuring-tika-language-detection</a>
  * @since 3.5
  */
 public class TikaLanguageIdentifierUpdateProcessorFactory extends
diff --git a/solr/contrib/ltr/README.md b/solr/contrib/ltr/README.md
index e1fe1f4..d029f7e 100644
--- a/solr/contrib/ltr/README.md
+++ b/solr/contrib/ltr/README.md
@@ -7,17 +7,5 @@ deploy that model to Solr and use it to rerank your top X search results.
 
 # Getting Started With Solr Learning To Rank
 
-For information on how to get started with solr ltr please see:
- * [Solr Reference Guide's section on Learning To Rank](https://lucene.apache.org/solr/guide/learning-to-rank.html)
-
-# Getting Started With Solr
-
-For information on how to get started with solr please see:
- * [solr/README.md](../../README.md)
- * [Solr Tutorial](https://lucene.apache.org/solr/guide/solr-tutorial.html)
-
-# How To Contribute
-
-For information on how to contribute see:
- * http://wiki.apache.org/lucene-java/HowToContribute
- * http://wiki.apache.org/solr/HowToContribute
+For information on how to get started with Solr LTR please see:
+ * [Solr Reference Guide's section on Learning To Rank](https://solr.apache.org/guide/learning-to-rank.html)
diff --git a/solr/contrib/ltr/src/test-files/solr/collection1/conf/schema.xml b/solr/contrib/ltr/src/test-files/solr/collection1/conf/schema.xml
index 288a953..53bc026 100644
--- a/solr/contrib/ltr/src/test-files/solr/collection1/conf/schema.xml
+++ b/solr/contrib/ltr/src/test-files/solr/collection1/conf/schema.xml
@@ -109,7 +109,7 @@
   <!-- Similarity is the scoring routine for each document vs. a query.
        A custom Similarity or SimilarityFactory may be specified here, but
        the default is fine for most applications.
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
+       For more info: https://solr.apache.org/guide/schema-elements.html#similarity
     -->
   <!--
      <similarity class="com.example.solr.CustomSimilarityFactory">
diff --git a/solr/contrib/prometheus-exporter/README.md b/solr/contrib/prometheus-exporter/README.md
index a574fa9..f69556d 100644
--- a/solr/contrib/prometheus-exporter/README.md
+++ b/solr/contrib/prometheus-exporter/README.md
@@ -6,16 +6,4 @@ Apache Solr Prometheus Exporter (solr-exporter) provides a way for you to expose
 # Getting Started With Solr Prometheus Exporter
 
 For information on how to get started with solr-exporter please see:
- * [Solr Reference Guide's section on Monitoring Solr with Prometheus and Grafana](https://lucene.apache.org/solr/guide/monitoring-solr-with-prometheus-and-grafana.html)
-
-# Getting Started With Solr
-
-For information on how to get started with solr please see:
- * [solr/README.md](../../README.md)
- * [Solr Tutorial](https://lucene.apache.org/solr/guide/solr-tutorial.html)
-
-# How To Contribute
-
-For information on how to contribute see:
- * http://wiki.apache.org/lucene-java/HowToContribute
- * http://wiki.apache.org/solr/HowToContribute
+ * [Solr Reference Guide's section on Monitoring Solr with Prometheus and Grafana](https://solr.apache.org/guide/monitoring-with-prometheus-and-grafana.html)
diff --git a/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/managed-schema b/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/managed-schema
index e910982..346d6b1 100644
--- a/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/managed-schema
+++ b/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/managed-schema
@@ -23,7 +23,7 @@
 
 
  For more information, on how to customize this file, please see
- http://lucene.apache.org/solr/guide/documents-fields-and-schema-design.html
+ http://lucene.apache.org/solr/guide/fields-and-schema-design.html
 
  PERFORMANCE NOTE: this schema includes many optional features and should not
  be used for benchmarking.  To improve performance one could
@@ -401,7 +401,7 @@
     <!-- Similarity is the scoring routine for each document vs. a query.
        A custom Similarity or SimilarityFactory may be specified here, but
        the default is fine for most applications.
-       For more info: http://lucene.apache.org/solr/guide/other-schema-elements.html#OtherSchemaElements-Similarity
+       For more info: http://lucene.apache.org/solr/guide/schema-elements.html#similarity
     -->
     <!--
      <similarity class="com.example.solr.CustomSimilarityFactory">
diff --git a/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/solrconfig.xml b/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/solrconfig.xml
index 068635d..03dbe40 100644
--- a/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/solrconfig.xml
+++ b/solr/contrib/prometheus-exporter/src/test-files/solr/collection1/conf/solrconfig.xml
@@ -102,26 +102,6 @@
 
   </requestDispatcher>
 
-  <!-- Request Handlers
-
-       http://wiki.apache.org/solr/SolrRequestHandler
-
-       Incoming queries will be dispatched to a specific handler by name
-       based on the path specified in the request.
-
-       If a Request Handler is declared with startup="lazy", then it will
-       not be initialized until the first request that uses it.
-
-    -->
-  <!-- SearchHandler
-
-       http://wiki.apache.org/solr/SearchHandler
-
-       For processing Search Queries, the primary Request Handler
-       provided with Solr is "SearchHandler" It delegates to a sequent
-       of SearchComponents (see below) and supports distributed
-       queries across multiple shards
-    -->
   <requestHandler name="/select" class="solr.SearchHandler">
     <lst name="defaults">
       <str name="echoParams">explicit</str>
@@ -129,30 +109,6 @@
     </lst>
   </requestHandler>
 
-  <!-- Update Processors
-
-       Chains of Update Processor Factories for dealing with Update
-       Requests can be declared, and then used by name in Update
-       Request Processors
-
-       http://wiki.apache.org/solr/UpdateRequestProcessor
-
-    -->
-
-  <!-- Add unknown fields to the schema
-
-       Field type guessing update processors that will
-       attempt to parse string-typed field values as Booleans, Longs,
-       Doubles, or Dates, and then add schema fields with the guessed
-       field types. Text content will be indexed as "text_general" as
-       well as a copy to a plain string version in *_str.
-
-       These require that the schema is both managed and mutable, by
-       declaring schemaFactory as ManagedIndexSchemaFactory, with
-       mutable specified as true.
-
-       See http://wiki.apache.org/solr/GuessingFieldTypes
-    -->
   <updateProcessor class="solr.UUIDUpdateProcessorFactory" name="uuid"/>
   <updateProcessor class="solr.RemoveBlankFieldUpdateProcessorFactory" name="remove-blank"/>
   <updateProcessor class="solr.FieldNameMutatingUpdateProcessorFactory" name="field-name-mutating">
diff --git a/solr/contrib/scripting/README.md b/solr/contrib/scripting/README.md
index 3436a65..2ed9ed7 100644
--- a/solr/contrib/scripting/README.md
+++ b/solr/contrib/scripting/README.md
@@ -10,5 +10,5 @@ Today, the ScriptUpdateProcessorFactory allows Java scripting engines to support
 ## Getting Started
 
 For information on how to get started please see:
- * [Solr Reference Guide's section on Update Request Processors](https://lucene.apache.org/solr/guide/update-request-processors.html)
-  * [Solr Reference Guide's section on ScriptUpdateProcessorFactory](https://lucene.apache.org/solr/guide/script-update-processor.html)
+ * [Solr Reference Guide's section on Update Request Processors](https://solr.apache.org/guide/update-request-processors.html)
+ * [Solr Reference Guide's section on ScriptUpdateProcessorFactory](https://solr.apache.org/guide/script-update-processor.html)
diff --git a/solr/core/src/java/org/apache/solr/cloud/overseer/package-info.java b/solr/core/src/java/org/apache/solr/cloud/overseer/package-info.java
index dbd3b1d6..33a1e7f 100644
--- a/solr/core/src/java/org/apache/solr/cloud/overseer/package-info.java
+++ b/solr/core/src/java/org/apache/solr/cloud/overseer/package-info.java
@@ -14,10 +14,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
- 
-/** 
- * Classes for updating cluster state in <a href="http://wiki.apache.org/solr/SolrCloud">SolrCloud</a> mode.
+
+/**
+ * Classes for updating cluster state in SolrCloud mode.
  */
 package org.apache.solr.cloud.overseer;
-
-
diff --git a/solr/core/src/java/org/apache/solr/cloud/package-info.java b/solr/core/src/java/org/apache/solr/cloud/package-info.java
index 096d6fa..1a41348 100644
--- a/solr/core/src/java/org/apache/solr/cloud/package-info.java
+++ b/solr/core/src/java/org/apache/solr/cloud/package-info.java
@@ -14,10 +14,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
- 
-/** 
- * Classes for dealing with ZooKeeper when operating in <a href="http://wiki.apache.org/solr/SolrCloud">SolrCloud</a> mode.
+
+/**
+ * Classes for dealing with ZooKeeper when operating in SolrCloud mode.
  */
 package org.apache.solr.cloud;
-
-
diff --git a/solr/core/src/java/org/apache/solr/handler/component/SpellCheckComponent.java b/solr/core/src/java/org/apache/solr/handler/component/SpellCheckComponent.java
index 55cb7f0..48e3b43 100644
--- a/solr/core/src/java/org/apache/solr/handler/component/SpellCheckComponent.java
+++ b/solr/core/src/java/org/apache/solr/handler/component/SpellCheckComponent.java
@@ -81,7 +81,7 @@ import org.slf4j.LoggerFactory;
  * and suggestions using the Lucene contributed SpellChecker.
  *
  * <p>
- * Refer to http://wiki.apache.org/solr/SpellCheckComponent for more details
+ * Refer to https://solr.apache.org/guide/spell-checking.html for more details
  * </p>
  *
  * @since solr 1.3
@@ -161,7 +161,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         int alternativeTermCount = params.getInt(SpellingParams.SPELLCHECK_ALTERNATIVE_TERM_COUNT, 0);
         //If specified, this can be a discrete # of results, or a percentage of fq results.
         Integer maxResultsForSuggest = maxResultsForSuggest(rb);
-        
+
         ModifiableSolrParams customParams = new ModifiableSolrParams();
         for (String checkerName : getDictionaryNames(params)) {
           customParams.add(getCustomParams(checkerName, params));
@@ -174,7 +174,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         } else {
           hits = hitsLong.longValue();
         }
-        
+
         SpellingResult spellingResult = null;
         if (maxResultsForSuggest == null || hits <= maxResultsForSuggest) {
           SuggestMode suggestMode = SuggestMode.SUGGEST_WHEN_NOT_IN_INDEX;
@@ -216,12 +216,12 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
       }
     }
   }
-  
+
   private Integer maxResultsForSuggest(ResponseBuilder rb) {
     SolrParams params = rb.req.getParams();
     float maxResultsForSuggestParamValue = params.getFloat(SpellingParams.SPELLCHECK_MAX_RESULTS_FOR_SUGGEST, 0.0f);
     Integer maxResultsForSuggest = null;
-    
+
     if (maxResultsForSuggestParamValue > 0.0f) {
       if (maxResultsForSuggestParamValue == (int) maxResultsForSuggestParamValue) {
         // If a whole number was passed in, this is a discrete number of documents
@@ -230,10 +230,10 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         // If a fractional value was passed in, this is the % of documents returned by the specified filter
         // If no specified filter, we use the most restrictive filter of the fq parameters
         String maxResultsFilterQueryString = params.get(SpellingParams.SPELLCHECK_MAX_RESULTS_FOR_SUGGEST_FQ);
-        
+
         int maxResultsByFilters = Integer.MAX_VALUE;
         SolrIndexSearcher searcher = rb.req.getSearcher();
-        
+
         try {
           if (maxResultsFilterQueryString != null) {
             // Get the default Lucene query parser
@@ -243,7 +243,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
           } else {
             List<Query> filters = rb.getFilters();
 
-            // Get the maximum possible hits within these filters (size of most restrictive filter). 
+            // Get the maximum possible hits within these filters (size of most restrictive filter).
             if (filters != null) {
               for (Query query : filters) {
                 DocSet s = searcher.getDocSet(query);
@@ -260,7 +260,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
           log.error("Error", e);
           return null;
         }
-        
+
         // Recalculate maxResultsForSuggest if filters were specified
         if (maxResultsByFilters != Integer.MAX_VALUE) {
           maxResultsForSuggest = Math.round(maxResultsByFilters * maxResultsForSuggestParamValue);
@@ -269,7 +269,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
     }
     return maxResultsForSuggest;
   }
-  
+
   protected void addCollationsToResponse(SolrParams params, SpellingResult spellingResult, ResponseBuilder rb, String q,
       NamedList<Object> response, boolean suggestionsMayOverlap) {
     int maxCollations = params.getInt(SPELLCHECK_MAX_COLLATIONS, 1);
@@ -290,7 +290,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         .setDocCollectionLimit(maxCollationCollectDocs)
     ;
     List<SpellCheckCollation> collations = collator.collate(spellingResult, q, rb);
-    //by sorting here we guarantee a non-distributed request returns all 
+    //by sorting here we guarantee a non-distributed request returns all
     //results in the same order as a distributed request would,
     //even in cases when the internal rank is the same.
     Collections.sort(collations);
@@ -383,7 +383,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         origQuery = params.get(CommonParams.Q);
       }
     }
-    
+
     long hits = rb.grouping() ? rb.totalHitCount : rb.getNumberDocumentsFound();
     boolean isCorrectlySpelled = hits > (maxResultsForSuggest==null ? 0 : maxResultsForSuggest);
 
@@ -473,7 +473,7 @@ public class SpellCheckComponent extends SearchComponent implements SolrCoreAwar
         mergeData.origVsSuggested.put(suggestion.getToken(), suggested);
       }
 
-      // sum up original frequency          
+      // sum up original frequency
       int origFreq = 0;
       Integer o = mergeData.origVsFreq.get(suggestion.getToken());
       if (o != null)  origFreq += o;
diff --git a/solr/core/src/java/org/apache/solr/schema/CurrencyField.java b/solr/core/src/java/org/apache/solr/schema/CurrencyField.java
index 5acff21..f94165a 100644
--- a/solr/core/src/java/org/apache/solr/schema/CurrencyField.java
+++ b/solr/core/src/java/org/apache/solr/schema/CurrencyField.java
@@ -31,7 +31,7 @@ import org.apache.solr.common.SolrException.ErrorCode;
 /**
  * Field type for support of monetary values.
  * <p>
- * See <a href="http://wiki.apache.org/solr/CurrencyField">http://wiki.apache.org/solr/CurrencyField</a>
+ * See <a href="https://solr.apache.org/guide/currencies-exchange-rates.html">https://solr.apache.org/guide/currencies-exchange-rates.html</a>
  * @deprecated Use {@link CurrencyFieldType}
  */
 @Deprecated
@@ -45,12 +45,12 @@ public class CurrencyField extends CurrencyFieldType implements SchemaAware, Res
 
   @Override
   protected void init(IndexSchema schema, Map<String, String> args) {
-    
+
     // Fail if amountLongSuffix or codeStrSuffix are specified
     List<String> unknownParams = new ArrayList<>();
     fieldSuffixAmountRaw = args.get(PARAM_FIELD_SUFFIX_AMOUNT_RAW);
     if (fieldSuffixAmountRaw != null) {
-      unknownParams.add(PARAM_FIELD_SUFFIX_AMOUNT_RAW); 
+      unknownParams.add(PARAM_FIELD_SUFFIX_AMOUNT_RAW);
     }
     fieldSuffixCurrency = args.get(PARAM_FIELD_SUFFIX_CURRENCY);
     if (fieldSuffixCurrency != null) {
@@ -59,7 +59,7 @@ public class CurrencyField extends CurrencyFieldType implements SchemaAware, Res
     if ( ! unknownParams.isEmpty()) {
       throw new SolrException(ErrorCode.SERVER_ERROR, "Unknown parameter(s): " + unknownParams);
     }
-    
+
     String precisionStepString = args.get(PARAM_PRECISION_STEP);
     if (precisionStepString == null) {
       precisionStepString = DEFAULT_PRECISION_STEP;
@@ -73,7 +73,7 @@ public class CurrencyField extends CurrencyFieldType implements SchemaAware, Res
     //
     // In theory we should fix this, but since this class is already deprecated, we'll leave it alone
     // to simplify the risk of back-compat break for existing users.
-    
+
     // Initialize field type for amount
     fieldTypeAmountRaw = new TrieLongField();
     fieldTypeAmountRaw.setTypeName(FIELD_TYPE_AMOUNT_RAW);
@@ -104,7 +104,7 @@ public class CurrencyField extends CurrencyFieldType implements SchemaAware, Res
   }
 
   /**
-   * When index schema is informed, add dynamic fields "*____currency" and "*____amount_raw". 
+   * When index schema is informed, add dynamic fields "*____currency" and "*____amount_raw".
    *
    * {@inheritDoc}
    *
diff --git a/solr/core/src/java/org/apache/solr/schema/CurrencyFieldType.java b/solr/core/src/java/org/apache/solr/schema/CurrencyFieldType.java
index 43c3d7d..95c089d 100644
--- a/solr/core/src/java/org/apache/solr/schema/CurrencyFieldType.java
+++ b/solr/core/src/java/org/apache/solr/schema/CurrencyFieldType.java
@@ -49,11 +49,11 @@ import org.slf4j.LoggerFactory;
 /**
  * Field type for support of monetary values.
  * <p>
- * See <a href="http://wiki.apache.org/solr/CurrencyField">http://wiki.apache.org/solr/CurrencyField</a>
+ * See <a href="https://solr.apache.org/guide/currencies-exchange-rates.html">https://solr.apache.org/guide/currencies-exchange-rates.html</a>
  */
 public class CurrencyFieldType extends FieldType implements SchemaAware, ResourceLoaderAware {
   private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
-  
+
   protected static final String PARAM_DEFAULT_CURRENCY = "defaultCurrency";
   protected static final String DEFAULT_DEFAULT_CURRENCY = "USD";
   protected static final String PARAM_RATE_PROVIDER_CLASS = "providerClass";
@@ -66,7 +66,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
   protected FieldType fieldTypeAmountRaw;
   protected String fieldSuffixAmountRaw;
   protected String fieldSuffixCurrency;
-  
+
   private String exchangeRateProviderClass;
   private String defaultCurrency;
   private ExchangeRateProvider provider;
@@ -137,7 +137,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
         args.remove(PARAM_FIELD_SUFFIX_AMOUNT_RAW);
       }
     }
-    
+
     if (fieldTypeCurrency == null) {       // Don't initialize if subclass already has done so
       fieldSuffixCurrency = args.get(PARAM_FIELD_SUFFIX_CURRENCY);
       if (fieldSuffixCurrency == null) {
@@ -182,7 +182,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
 
     return f;
   }
-  
+
   private SchemaField getAmountField(SchemaField field) {
     return schema.getField(field.getName() + POLY_FIELD_SEPARATOR + fieldSuffixAmountRaw);
   }
@@ -192,7 +192,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
   }
 
   /**
-   * When index schema is informed, get field types for the configured dynamic sub-fields 
+   * When index schema is informed, get field types for the configured dynamic sub-fields
    *
    * {@inheritDoc}
    *
@@ -256,19 +256,19 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
 
   /**
    * <p>
-   * Returns a ValueSource over this field in which the numeric value for 
-   * each document represents the indexed value as converted to the default 
-   * currency for the field, normalized to its most granular form based 
+   * Returns a ValueSource over this field in which the numeric value for
+   * each document represents the indexed value as converted to the default
+   * currency for the field, normalized to its most granular form based
    * on the default fractional digits.
    * </p>
    * <p>
-   * For example: If the default Currency specified for a field is 
-   * <code>USD</code>, then the values returned by this value source would 
+   * For example: If the default Currency specified for a field is
+   * <code>USD</code>, then the values returned by this value source would
    * represent the equivalent number of "cents" (ie: value in dollars * 100)
-   * after converting each document's native currency to USD -- because the 
-   * default fractional digits for <code>USD</code> is "<code>2</code>".  
+   * after converting each document's native currency to USD -- because the
+   * default fractional digits for <code>USD</code> is "<code>2</code>".
    * So for a document whose indexed value was currently equivalent to
-   * "<code>5.43,USD</code>" using the exchange provider for this field, 
+   * "<code>5.43,USD</code>" using the exchange provider for this field,
    * this ValueSource would return a value of "<code>543</code>"
    * </p>
    *
@@ -286,18 +286,18 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
 
   /**
    * <p>
-   * Returns a ValueSource over this field in which the numeric value for 
-   * each document represents the value from the underlying 
-   * <code>RawCurrencyValueSource</code> as converted to the specified target 
+   * Returns a ValueSource over this field in which the numeric value for
+   * each document represents the value from the underlying
+   * <code>RawCurrencyValueSource</code> as converted to the specified target
    * Currency.
    * </p>
    * <p>
    * For example: If the <code>targetCurrencyCode</code> param is set to
-   * <code>USD</code>, then the values returned by this value source would 
+   * <code>USD</code>, then the values returned by this value source would
    * represent the equivalent number of dollars after converting each
-   * document's raw value to <code>USD</code>.  So for a document whose 
+   * document's raw value to <code>USD</code>.  So for a document whose
    * indexed value was currently equivalent to "<code>5.43,USD</code>"
-   * using the exchange provider for this field, this ValueSource would 
+   * using the exchange provider for this field, this ValueSource would
    * return a value of "<code>5.43</code>"
    * </p>
    *
@@ -394,7 +394,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
       if (null == targetCurrency) {
         throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "Currency code not supported by this JVM: " + targetCurrencyCode);
       }
-      // the target digits & currency of our source, 
+      // the target digits & currency of our source,
       // become the source digits & currency of ourselves
       this.rate = provider.getExchangeRate
           (source.getTargetCurrency().getCurrencyCode(),
@@ -405,7 +405,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
     public FunctionValues getValues(Map<Object, Object> context, LeafReaderContext reader)
         throws IOException {
       final FunctionValues amounts = source.getValues(context, reader);
-      // the target digits & currency of our source, 
+      // the target digits & currency of our source,
       // become the source digits & currency of ourselves
       final String sourceCurrencyCode = source.getTargetCurrency().getCurrencyCode();
       final double divisor = Math.pow(10D, targetCurrency.getDefaultFractionDigits());
@@ -477,14 +477,14 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
 
   /**
    * <p>
-   * A value source whose values represent the "raw" (ie: normalized using 
-   * the number of default fractional digits) values in the specified 
+   * A value source whose values represent the "raw" (ie: normalized using
+   * the number of default fractional digits) values in the specified
   * target currency.
    * </p>
    * <p>
-   * For example: if the specified target currency is "<code>USD</code>" 
-   * then the numeric values are the number of pennies in the value 
-   * (ie: <code>$n * 100</code>) since the number of default fractional 
+   * For example: if the specified target currency is "<code>USD</code>"
+   * then the numeric values are the number of pennies in the value
+   * (ie: <code>$n * 100</code>) since the number of default fractional
    * digits for <code>USD</code> is "<code>2</code>")
    * </p>
    * @see ConvertedCurrencyValueSource
@@ -570,7 +570,7 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
         public long longVal(int doc) throws IOException {
           long amount = amounts.longVal(doc);
           // bail fast using whatever amounts defaults to if no value
-          // (if we don't do this early, currencyOrd may be < 0, 
+          // (if we don't do this early, currencyOrd may be < 0,
          // causing index bounds exception)
           if ( ! exists(doc) ) {
             return amount;
@@ -679,4 +679,3 @@ public class CurrencyFieldType extends FieldType implements SchemaAware, Resourc
   }
 
 }
-
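The CurrencyFieldType javadoc above describes normalizing an indexed value to its "most granular form based on the default fractional digits" (e.g. "5.43,USD" becomes 543 cents, because USD has 2 fraction digits). A minimal standalone sketch of that arithmetic — not Solr's implementation, just the JDK Currency API it relies on, with a hypothetical helper name:

```java
import java.util.Currency;

// Sketch of the normalization described in the javadoc: scale an amount by
// 10^defaultFractionDigits for its currency, so "5.43,USD" -> 543.
public class CurrencyNormalize {
    static long toRawValue(double amount, String currencyCode) {
        Currency c = Currency.getInstance(currencyCode);
        // Round to avoid floating-point drift (5.43 * 100 is not exactly 543.0).
        return Math.round(amount * Math.pow(10, c.getDefaultFractionDigits()));
    }

    public static void main(String[] args) {
        System.out.println(toRawValue(5.43, "USD")); // 543 (USD has 2 fraction digits)
        System.out.println(toRawValue(500, "JPY"));  // 500 (JPY has 0 fraction digits)
    }
}
```

Note the rounding step: multiplying a double by a power of ten is not exact, which is one reason the real field type stores amounts as longs in a raw sub-field.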
diff --git a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
index 1663c93..b3e7dac 100644
--- a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
+++ b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java
@@ -68,7 +68,7 @@ import com.google.common.collect.Multimaps;
 
 /**
  * Query parser that generates DisjunctionMaxQueries based on user configuration.
- * See Wiki page http://wiki.apache.org/solr/ExtendedDisMax
+ * See Wiki page https://solr.apache.org/guide/edismax-query-parser.html
  */
 public class ExtendedDismaxQParser extends QParser {
 
@@ -83,14 +83,14 @@ public class ExtendedDismaxQParser extends QParser {
   private static class U extends SolrPluginUtils {
     /* :NOOP */
   }
-  
+
   /** shorten the class references for utilities */
   private static interface DMP extends DisMaxParams {
     /**
      * User fields. The fields that can be used by the end user to create field-specific queries.
      */
     public static String UF = "uf";
-    
+
     /**
      * Lowercase Operators. If set to true, 'or' and 'and' will be considered OR and AND, otherwise
      * lowercase operators will be considered terms to search for.
@@ -108,29 +108,29 @@ public class ExtendedDismaxQParser extends QParser {
      */
     public static String STOPWORDS = "stopwords";
   }
-  
+
   private ExtendedDismaxConfiguration config;
   private Query parsedUserQuery;
   private Query altUserQuery;
   private List<Query> boostQueries;
   private boolean parsed = false;
-  
-  
+
+
   public ExtendedDismaxQParser(String qstr, SolrParams localParams, SolrParams params, SolrQueryRequest req) {
     super(qstr, localParams, params, req);
     config = this.createConfiguration(qstr,localParams,params,req);
   }
-  
+
   @Override
   public Query parse() throws SyntaxError {
 
     parsed = true;
-    
+
     /* the main query we will execute.  we disable the coord because
      * this query is an artificial construct
      */
     BooleanQuery.Builder query = new BooleanQuery.Builder();
-    
+
     /* * * Main User Query * * */
     parsedUserQuery = null;
     String userQuery = getString();
@@ -154,11 +154,11 @@ public class ExtendedDismaxQParser extends QParser {
       up.setPhraseSlop(config.qslop);     // slop for explicit user phrase queries
       up.setAllowLeadingWildcard(true);
       up.setAllowSubQueryParsing(config.userFields.isAllowed(MagicFieldName.QUERY.field));
-      
+
       // defer escaping and only do if lucene parsing fails, or we need phrases
       // parsing fails.  Need to sloppy phrase queries anyway though.
       List<Clause> clauses = splitIntoClauses(userQuery, false);
-      
+
       // Always rebuild mainUserQuery from clauses to catch modifications from splitIntoClauses
       // This was necessary for userFields modifications to get propagated into the query.
       // Convert lower or mixed case operators to uppercase if we saw them.
@@ -167,36 +167,36 @@ public class ExtendedDismaxQParser extends QParser {
       // We don't use a regex for this because it might change and AND or OR in
       // a phrase query in a case sensitive field.
       String mainUserQuery = rebuildUserQuery(clauses, config.lowercaseOperators);
-      
+
       // but always for unstructured implicit bqs created by getFieldQuery
       up.minShouldMatch = config.minShouldMatch;
 
       up.setSplitOnWhitespace(config.splitOnWhitespace);
-      
+
       parsedUserQuery = parseOriginalQuery(up, mainUserQuery, clauses, config);
-      
+
       if (parsedUserQuery == null) {
         parsedUserQuery = parseEscapedQuery(up, escapeUserQuery(clauses), config);
       }
-      
+
       query.add(parsedUserQuery, BooleanClause.Occur.MUST);
-      
+
       addPhraseFieldQueries(query, clauses, config);
-      
+
     }
-    
+
     /* * * Boosting Query * * */
     boostQueries = getBoostQueries();
     for(Query f : boostQueries) {
       query.add(f, BooleanClause.Occur.SHOULD);
     }
-    
+
     /* * * Boosting Functions * * */
     List<Query> boostFunctions = getBoostFunctions();
     for(Query f : boostFunctions) {
       query.add(f, BooleanClause.Occur.SHOULD);
     }
-    
+
     //
     // create a boosted query (scores multiplied by boosts)
     //
@@ -208,14 +208,14 @@ public class ExtendedDismaxQParser extends QParser {
     } else if (boosts.size() == 1) {
       topQuery = FunctionScoreQuery.boostByValue(topQuery, boosts.get(0).asDoubleValuesSource());
     }
-    
+
     return topQuery;
   }
-  
+
   /**
    * Validate query field names. Must be explicitly defined in the schema or match a dynamic field pattern.
    * Checks source field(s) represented by a field alias
-   * 
+   *
    * @param up parser used
    * @throws SyntaxError for invalid field name
    */
@@ -224,13 +224,13 @@ public class ExtendedDismaxQParser extends QParser {
     for (String fieldName : config.queryFields.keySet()) {
       buildQueryFieldList(fieldName, up.getAlias(fieldName), flds, up);
     }
-    
+
     checkFieldsInSchema(flds);
   }
-  
+
   /**
    * Build list of source (non-alias) query field names. Recursive through aliases.
-   * 
+   *
    * @param fieldName query field name
    * @param alias field alias
    * @param flds list of query field names
@@ -246,10 +246,10 @@ public class ExtendedDismaxQParser extends QParser {
     up.validateCyclicAliasing(fieldName);
     flds.addAll(getFieldsFromAlias(up, alias));
   }
-  
+
   /**
    * Return list of source (non-alias) field names from an alias
-   * 
+   *
    * @param up parser used
    * @param a field alias
    * @return list of source fields
@@ -263,10 +263,10 @@ public class ExtendedDismaxQParser extends QParser {
 
     return lst;
   }
-  
+
   /**
    * Verify field name exists in schema, explicit or dynamic field pattern
-   * 
+   *
    * @param fieldName source field name to verify
    * @throws SyntaxError for invalid field name
    */
@@ -280,7 +280,7 @@ public class ExtendedDismaxQParser extends QParser {
 
   /**
    * Verify list of source field names
-   * 
+   *
    * @param flds list of source field names to verify
    * @throws SyntaxError for invalid field name
    */
@@ -289,17 +289,17 @@ public class ExtendedDismaxQParser extends QParser {
         checkFieldInSchema(fieldName);
     }
   }
-  
+
   /**
   * Adds shingled phrase queries to all the fields specified in the pf, pf2 and pf3 parameters
-   * 
+   *
    */
   protected void addPhraseFieldQueries(BooleanQuery.Builder query, List<Clause> clauses,
       ExtendedDismaxConfiguration config) throws SyntaxError {
 
     // sloppy phrase queries for proximity
     List<FieldParams> allPhraseFields = config.getAllPhraseFields();
-    
+
     if (allPhraseFields.size() > 0) {
       // find non-field clauses
       List<Clause> normalClauses = new ArrayList<>(clauses.size());
@@ -340,7 +340,7 @@ public class ExtendedDismaxQParser extends QParser {
       SolrParams localParams, SolrParams params, SolrQueryRequest req) {
     return new ExtendedDismaxConfiguration(localParams,params,req);
   }
-  
+
   /**
    * Creates an instance of ExtendedSolrQueryParser, the query parser that's going to be used
    * to parse the query.
@@ -348,9 +348,9 @@ public class ExtendedDismaxQParser extends QParser {
   protected ExtendedSolrQueryParser createEdismaxQueryParser(QParser qParser, String field) {
     return new ExtendedSolrQueryParser(qParser, field);
   }
-  
+
   /**
-   * Parses an escaped version of the user's query.  This method is called 
+   * Parses an escaped version of the user's query.  This method is called
    * in the event that the original query encounters exceptions during parsing.
    *
    * @param up parser used
@@ -363,7 +363,7 @@ public class ExtendedDismaxQParser extends QParser {
   protected Query parseEscapedQuery(ExtendedSolrQueryParser up,
       String escapedUserQuery, ExtendedDismaxConfiguration config) throws SyntaxError {
     Query query = up.parse(escapedUserQuery);
-    
+
     if (query instanceof BooleanQuery) {
       BooleanQuery.Builder t = new BooleanQuery.Builder();
       SolrPluginUtils.flattenBooleanQuery(t, (BooleanQuery)query);
@@ -372,7 +372,7 @@ public class ExtendedDismaxQParser extends QParser {
     }
     return query;
   }
-  
+
   /**
   * Parses the user's original query.  This method attempts to cleanly parse the specified query string using the specified parser; any Exceptions are ignored, resulting in null being returned.
    *
@@ -385,23 +385,23 @@ public class ExtendedDismaxQParser extends QParser {
    */
    protected Query parseOriginalQuery(ExtendedSolrQueryParser up,
       String mainUserQuery, List<Clause> clauses, ExtendedDismaxConfiguration config) {
-    
+
     Query query = null;
     try {
       up.setRemoveStopFilter(!config.stopwords);
       up.exceptions = true;
       query = up.parse(mainUserQuery);
-      
+
       if (shouldRemoveStopFilter(config, query)) {
         // if the query was all stop words, remove none of them
         up.setRemoveStopFilter(true);
-        query = up.parse(mainUserQuery);          
+        query = up.parse(mainUserQuery);
       }
     } catch (Exception e) {
       // ignore failure and reparse later after escaping reserved chars
       up.exceptions = false;
     }
-    
+
     if(query == null) {
       return null;
     }
@@ -429,18 +429,18 @@ public class ExtendedDismaxQParser extends QParser {
       Query query) {
     return config.stopwords && isEmpty(query);
   }
-  
+
   private String escapeUserQuery(List<Clause> clauses) {
     StringBuilder sb = new StringBuilder();
     for (Clause clause : clauses) {
-      
+
       boolean doQuote = clause.isPhrase;
-      
+
       String s=clause.val;
       if (!clause.isPhrase && ("OR".equals(s) || "AND".equals(s) || "NOT".equals(s))) {
         doQuote=true;
       }
-      
+
       if (clause.must != 0) {
         sb.append(clause.must);
       }
@@ -488,10 +488,10 @@ public class ExtendedDismaxQParser extends QParser {
   }
 
   /**
-   * Generates a query string from the raw clauses, uppercasing 
+   * Generates a query string from the raw clauses, uppercasing
    * 'and' and 'or' as needed.
    * @param clauses the clauses of the query string to be rebuilt
-   * @param lowercaseOperators if true, lowercase 'and' and 'or' clauses will 
+   * @param lowercaseOperators if true, lowercase 'and' and 'or' clauses will
    *        be recognized as operators and uppercased in the final query string.
    * @return the generated query string.
    */
@@ -513,7 +513,7 @@ public class ExtendedDismaxQParser extends QParser {
     }
     return sb.toString();
   }
-  
+
   /**
    * Parses all multiplicative boosts
    */
@@ -534,7 +534,7 @@ public class ExtendedDismaxQParser extends QParser {
     }
     return boosts;
   }
-  
+
   /**
    * Parses all function queries
    */
@@ -556,7 +556,7 @@ public class ExtendedDismaxQParser extends QParser {
     }
     return boostFunctions;
   }
-  
+
   /**
    * Parses all boost queries
    */
@@ -571,7 +571,7 @@ public class ExtendedDismaxQParser extends QParser {
     }
     return boostQueries;
   }
-  
+
   /**
    * Extracts all the aliased fields from the requests and adds them to up
    */
@@ -590,10 +590,10 @@ public class ExtendedDismaxQParser extends QParser {
       }
     }
   }
-  
+
   /**
    * Modifies the main query by adding a new optional Query consisting
-   * of shingled phrase queries across the specified clauses using the 
+   * of shingled phrase queries across the specified clauses using the
    * specified field =&gt; boost mappings.
    *
    * @param mainQuery Where the phrase boosting queries will be added
@@ -602,22 +602,22 @@ public class ExtendedDismaxQParser extends QParser {
    * @param shingleSize how big the phrases should be, 0 means a single phrase
    * @param tiebreaker tie breaker value for the DisjunctionMaxQueries
    */
-  protected void addShingledPhraseQueries(final BooleanQuery.Builder mainQuery, 
+  protected void addShingledPhraseQueries(final BooleanQuery.Builder mainQuery,
       final List<Clause> clauses,
       final Collection<FieldParams> fields,
       int shingleSize,
       final float tiebreaker,
       final int slop)
           throws SyntaxError {
-    
-    if (null == fields || fields.isEmpty() || 
-        null == clauses || clauses.size() < shingleSize ) 
+
+    if (null == fields || fields.isEmpty() ||
+        null == clauses || clauses.size() < shingleSize )
       return;
-    
+
     if (0 == shingleSize) shingleSize = clauses.size();
-    
+
     final int lastClauseIndex = shingleSize-1;
-    
+
     StringBuilder userPhraseQuery = new StringBuilder();
     for (int i=0; i < clauses.size() - lastClauseIndex; i++) {
       userPhraseQuery.append('"');
@@ -628,7 +628,7 @@ public class ExtendedDismaxQParser extends QParser {
       userPhraseQuery.append('"');
       userPhraseQuery.append(' ');
     }
-    
+
     /* for parsing sloppy phrases using DisjunctionMaxQueries */
     ExtendedSolrQueryParser pp = createEdismaxQueryParser(this, IMPOSSIBLE_FIELD_NAME);
 
@@ -636,32 +636,32 @@ public class ExtendedDismaxQParser extends QParser {
     pp.setPhraseSlop(slop);
     pp.setRemoveStopFilter(true);  // remove stop filter and keep stopwords
     pp.setSplitOnWhitespace(config.splitOnWhitespace);
-    
+
     /* :TODO: reevaluate using makeDismax=true vs false...
-     * 
-     * The DismaxQueryParser always used DisjunctionMaxQueries for the 
+     *
+     * The DismaxQueryParser always used DisjunctionMaxQueries for the
      * pf boost, for the same reasons it used them for the qf fields.
      * When Yonik first wrote the ExtendedDismaxQParserPlugin, he added
-     * the "makeDismax=false" property to use BooleanQueries instead, but 
+     * the "makeDismax=false" property to use BooleanQueries instead, but
      * when asked why his response was "I honestly don't recall" ...
      *
      * https://issues.apache.org/jira/browse/SOLR-1553?focusedCommentId=12793813#action_12793813
      *
-     * so for now, we continue to use dismax style queries because it 
-     * seems the most logical and is back compatible, but we should 
-     * try to figure out what Yonik was thinking at the time (because he 
+     * so for now, we continue to use dismax style queries because it
+     * seems the most logical and is back compatible, but we should
+     * try to figure out what Yonik was thinking at the time (because he
      * rarely does things for no reason)
      */
-    pp.makeDismax = true; 
-    
-    
+    pp.makeDismax = true;
+
+
     // minClauseSize is independent of the shingleSize because of stop words
-    // (if they are removed from the middle, so be it, but we need at least 
+    // (if they are removed from the middle, so be it, but we need at least
     // two or there shouldn't be a boost)
-    pp.minClauseSize = 2;  
-    
+    pp.minClauseSize = 2;
+
     // TODO: perhaps we shouldn't use synonyms either...
-    
+
     Query phrase = pp.parse(userPhraseQuery.toString());
     if (phrase != null) {
       mainQuery.add(phrase, BooleanClause.Occur.SHOULD);
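The shingling loop above slides a window of shingleSize over the clause values and emits each window as a quoted phrase (the pf/pf2/pf3 boost phrases). A hypothetical standalone sketch of just that windowing step, outside the parser:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the userPhraseQuery construction: each window of shingleSize
// consecutive clause values becomes one quoted phrase; shingleSize 0 means
// a single phrase spanning all clauses.
public class Shingles {
    static List<String> shingledPhrases(List<String> clauses, int shingleSize) {
        if (shingleSize == 0) shingleSize = clauses.size();
        List<String> phrases = new ArrayList<>();
        if (clauses.size() < shingleSize) return phrases; // too few clauses to boost
        for (int i = 0; i <= clauses.size() - shingleSize; i++) {
            phrases.add("\"" + String.join(" ", clauses.subList(i, i + shingleSize)) + "\"");
        }
        return phrases;
    }

    public static void main(String[] args) {
        // pf2-style bigrams over a three-term query
        System.out.println(shingledPhrases(List.of("quick", "brown", "fox"), 2));
    }
}
```

For pf2 (shingleSize 2) over "quick brown fox" this yields the phrases "quick brown" and "brown fox", which the real code then hands to the sloppy-phrase parser as SHOULD clauses.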
@@ -685,14 +685,14 @@ public class ExtendedDismaxQParser extends QParser {
   public String[] getDefaultHighlightFields() {
     return config.queryFields.keySet().toArray(new String[0]);
   }
-  
+
   @Override
   public Query getHighlightQuery() throws SyntaxError {
     if (!parsed)
       parse();
     return parsedUserQuery == null ? altUserQuery : parsedUserQuery;
   }
-  
+
   @Override
   public void addDebugInfo(NamedList<Object> debugInfo) {
     super.addDebugInfo(debugInfo);
@@ -706,11 +706,11 @@ public class ExtendedDismaxQParser extends QParser {
   }
 
   protected static class Clause {
-    
+
     boolean isBareWord() {
       return must==0 && !isPhrase;
     }
-    
+
     protected String field;
     protected String rawField;  // if the clause is +(foo:bar) then rawField=(foo
     protected boolean isPhrase;
@@ -721,11 +721,11 @@ public class ExtendedDismaxQParser extends QParser {
     protected String val;  // the field value (minus the field name, +/-, quotes)
     protected String raw;  // the raw clause w/o leading/trailing whitespace
   }
-  
+
   public List<Clause> splitIntoClauses(String s, boolean ignoreQuote) {
     ArrayList<Clause> lst = new ArrayList<>(4);
     Clause clause;
-    
+
     int pos=0;
     int end=s.length();
     char ch=0;
@@ -734,21 +734,21 @@ public class ExtendedDismaxQParser extends QParser {
     while (pos < end) {
       clause = new Clause();
       disallowUserField = true;
-      
+
       ch = s.charAt(pos);
-      
+
       while (Character.isWhitespace(ch)) {
         if (++pos >= end) break;
         ch = s.charAt(pos);
       }
-      
-      start = pos;      
-      
+
+      start = pos;
+
       if ((ch=='+' || ch=='-') && (pos+1)<end) {
         clause.must = ch;
         pos++;
       }
-      
+
       clause.field = getFieldName(s, pos, end);
       if(clause.field != null && !config.userFields.isAllowed(clause.field)) {
         clause.field = null;
@@ -760,19 +760,19 @@ public class ExtendedDismaxQParser extends QParser {
         pos += colon - pos; // skip the field name
         pos++;  // skip the ':'
       }
-      
+
       if (pos>=end) break;
-      
-      
+
+
       char inString=0;
-      
+
       ch = s.charAt(pos);
       if (!ignoreQuote && ch=='"') {
         clause.isPhrase = true;
         inString = '"';
         pos++;
       }
-      
+
       StringBuilder sb = new StringBuilder();
       while (pos < end) {
         ch = s.charAt(pos++);
@@ -797,7 +797,7 @@ public class ExtendedDismaxQParser extends QParser {
             break;
           }
         }
-        
+
         if (inString == 0) {
           if (!ignoreQuote && ch == '"') {
             // end of the token if we aren't in a string, backing
@@ -835,16 +835,16 @@ public class ExtendedDismaxQParser extends QParser {
         sb.append(ch);
       }
       clause.val = sb.toString();
-      
+
       if (clause.isPhrase) {
         if (inString != 0) {
           // detected bad quote balancing... retry
           // parsing with quotes like any other char
           return splitIntoClauses(s, true);
         }
-        
+
         // special syntax in a string isn't special
-        clause.hasSpecialSyntax = false;        
+        clause.hasSpecialSyntax = false;
       } else {
         // an empty clause... must be just a + or - on its own
         if (clause.val.length() == 0) {
@@ -859,7 +859,7 @@ public class ExtendedDismaxQParser extends QParser {
           }
         }
       }
-      
+
       if (clause != null) {
         if(disallowUserField) {
           clause.raw = s.substring(start, pos);
@@ -879,13 +879,13 @@ public class ExtendedDismaxQParser extends QParser {
         lst.add(clause);
       }
     }
-    
+
     return lst;
   }
-  
-  /** 
-   * returns a field name or legal field alias from the current 
-   * position of the string 
+
+  /**
+   * returns a field name or legal field alias from the current
+   * position of the string
    */
   public String getFieldName(String s, int pos, int end) {
     if (pos >= end) return null;
@@ -907,10 +907,10 @@ public class ExtendedDismaxQParser extends QParser {
     boolean isInSchema = getReq().getSchema().getFieldTypeNoEx(fname) != null;
     boolean isAlias = config.solrParams.get("f."+fname+".qf") != null;
     boolean isMagic = (null != MagicFieldName.get(fname));
-    
+
     return (isInSchema || isAlias || isMagic) ? fname : null;
   }
-  
+
   public static List<String> split(String s, boolean ignoreQuote) {
     ArrayList<String> lst = new ArrayList<>(4);
     int pos=0, start=0, end=s.length();
@@ -937,15 +937,15 @@ public class ExtendedDismaxQParser extends QParser {
     if (start < end) {
       lst.add(s.substring(start,end));
     }
-    
+
     if (inString != 0) {
       // unbalanced quote... ignore them
       return split(s, true);
     }
-    
+
     return lst;
   }
-  
+
   enum QType {
     FIELD,
     PHRASE,
@@ -954,43 +954,43 @@ public class ExtendedDismaxQParser extends QParser {
     FUZZY,
     RANGE
   }
-  
-  
+
+
   static final RuntimeException unknownField = new RuntimeException("UnknownField");
   static {
     unknownField.fillInStackTrace();
   }
-  
+
   /**
    * A subclass of SolrQueryParser that supports aliasing fields for
    * constructing DisjunctionMaxQueries.
    */
   public static class ExtendedSolrQueryParser extends SolrQueryParser {
-    
+
     /** A simple container for storing alias info
      */
     protected static class Alias {
       public float tie;
       public Map<String,Float> fields;
     }
-    
+
     boolean makeDismax=true;
     boolean allowWildcard=true;
     int minClauseSize = 0;    // minimum number of clauses per phrase query...
     // used when constructing boosting part of query via sloppy phrases
     boolean exceptions;  //  allow exceptions to be thrown (for example on a missing field)
-    
+
     private Map<String, Analyzer> nonStopFilterAnalyzerPerField;
     private boolean removeStopFilter;
     String minShouldMatch; // for inner boolean queries produced from a single fieldQuery
-    
+
     /**
      * Where we store a map from field name we expect to see in our query
      * string, to Alias object containing the fields to use in our
      * DisjunctionMaxQuery and the tiebreaker to use.
      */
     protected Map<String,Alias> aliases = new HashMap<>(3);
-    
+
     private QType type;
     private String field;
     private String val;
@@ -1000,7 +1000,7 @@ public class ExtendedDismaxQParser extends QParser {
     private boolean bool2;
     private float flt;
     private int slop;
-    
+
     public ExtendedSolrQueryParser(QParser parser, String defaultField) {
       super(parser, defaultField);
       // Respect the q.op parameter before mm will be applied later
@@ -1008,11 +1008,11 @@ public class ExtendedDismaxQParser extends QParser {
       QueryParser.Operator defaultOp = QueryParsing.parseOP(defaultParams.get(QueryParsing.OP));
       setDefaultOperator(defaultOp);
     }
-    
+
     public void setRemoveStopFilter(boolean remove) {
       removeStopFilter = remove;
     }
-    
+
     @Override
     protected Query getBooleanQuery(List<BooleanClause> clauses) throws SyntaxError {
       Query q = super.getBooleanQuery(clauses);
@@ -1021,7 +1021,7 @@ public class ExtendedDismaxQParser extends QParser {
       }
       return q;
     }
-    
+
     /**
      * Add an alias to this query parser.
      *
@@ -1040,7 +1040,7 @@ public class ExtendedDismaxQParser extends QParser {
       a.fields = fieldBoosts;
       aliases.put(field, a);
     }
-    
+
     /**
      * Returns the aliases found for a field.
      * Returns null if there are no aliases for the field
@@ -1049,7 +1049,7 @@ public class ExtendedDismaxQParser extends QParser {
     protected Alias getAlias(String field) {
       return aliases.get(field);
     }
-    
+
     @Override
     protected Query getFieldQuery(String field, String val, boolean quoted, boolean raw) throws SyntaxError {
       this.type = quoted ? QType.PHRASE : QType.FIELD;
@@ -1059,7 +1059,7 @@ public class ExtendedDismaxQParser extends QParser {
       this.slop = getPhraseSlop(); // unspecified
       return getAliasedQuery();
     }
-    
+
     @Override
     protected Query getFieldQuery(String field, String val, int slop) throws SyntaxError {
       this.type = QType.PHRASE;
@@ -1091,9 +1091,9 @@ public class ExtendedDismaxQParser extends QParser {
       this.vals = null;
       return getAliasedQuery();
     }
-    
+
     @Override
-    protected Query newFieldQuery(Analyzer analyzer, String field, String queryText, 
+    protected Query newFieldQuery(Analyzer analyzer, String field, String queryText,
                                   boolean quoted, boolean fieldAutoGenPhraseQueries, boolean enableGraphQueries,
                                   SynonymQueryStyle synonymQueryStyle)
         throws SyntaxError {
@@ -1111,7 +1111,7 @@ public class ExtendedDismaxQParser extends QParser {
       }
       return super.newFieldQuery(actualAnalyzer, field, queryText, quoted, fieldAutoGenPhraseQueries, enableGraphQueries, synonymQueryStyle);
     }
-    
+
     @Override
     protected Query getRangeQuery(String field, String a, String b, boolean startInclusive, boolean endInclusive) throws SyntaxError {
       this.type = QType.RANGE;
@@ -1123,7 +1123,7 @@ public class ExtendedDismaxQParser extends QParser {
       this.bool2 = endInclusive;
       return getAliasedQuery();
     }
-    
+
     @Override
     protected Query getWildcardQuery(String field, String val) throws SyntaxError {
       if (val.equals("*")) {
@@ -1139,7 +1139,7 @@ public class ExtendedDismaxQParser extends QParser {
       this.vals = null;
       return getAliasedQuery();
     }
-    
+
     @Override
     protected Query getFuzzyQuery(String field, String val, float minSimilarity) throws SyntaxError {
       this.type = QType.FUZZY;
@@ -1149,7 +1149,7 @@ public class ExtendedDismaxQParser extends QParser {
       this.flt = minSimilarity;
       return getAliasedQuery();
     }
-    
+
     /**
      * Delegates to the super class unless the field has been specified
      * as an alias -- in which case we recurse on each of
@@ -1169,7 +1169,7 @@ public class ExtendedDismaxQParser extends QParser {
         // that the query expanded to multiple clauses.
         // DisMaxQuery.rewrite() removes itself if there is just a single clause anyway.
         // if (lst.size()==1) return lst.get(0);
-        
+
         if (makeDismax) {
           DisjunctionMaxQuery q = new DisjunctionMaxQuery(lst, a.tie);
           return q;
@@ -1181,7 +1181,7 @@ public class ExtendedDismaxQParser extends QParser {
           return QueryUtils.build(q, parser);
         }
       } else {
-        
+
         // verify that a fielded query is actually on a field that exists... if not,
         // then throw an exception to get us out of here, and we'll treat it like a
         // literal when we try the escape+re-parse.
@@ -1191,7 +1191,7 @@ public class ExtendedDismaxQParser extends QParser {
             throw unknownField;
           }
         }
-        
+
         return getQuery();
       }
     }
@@ -1211,7 +1211,7 @@ public class ExtendedDismaxQParser extends QParser {
         if (lst == null || lst.size() == 0) {
           return getQuery();
         }
-        
+
         // make a DisjunctionMaxQuery in this case too... it will stop
         // the "mm" processing from making everything required in the case
         // that the query expanded to multiple clauses.
@@ -1242,7 +1242,7 @@ public class ExtendedDismaxQParser extends QParser {
             }
             return QueryUtils.build(q, parser);
           } else {
-            return new DisjunctionMaxQuery(lst, a.tie); 
+            return new DisjunctionMaxQuery(lst, a.tie);
           }
         } else {
           BooleanQuery.Builder q = new BooleanQuery.Builder();
@@ -1314,10 +1314,10 @@ public class ExtendedDismaxQParser extends QParser {
       if (q == null) {
         return;
       }
-      
+
       boolean required = operator == AND_OPERATOR;
-      BooleanClause.Occur occur = required ? BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD;  
-      
+      BooleanClause.Occur occur = required ? BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD;
+
       if (q instanceof BooleanQuery) {
         boolean allOptionalDisMaxQueries = true;
         for (BooleanClause c : ((BooleanQuery)q).clauses()) {
@@ -1348,7 +1348,7 @@ public class ExtendedDismaxQParser extends QParser {
         throw new SyntaxError("Field aliases lead to a cycle");
       }
     }
-    
+
     private boolean validateField(String field, Set<String> set) {
       if(this.getAlias(field) == null) {
         return false;
@@ -1366,12 +1366,12 @@ public class ExtendedDismaxQParser extends QParser {
       }
       return hascycle;
     }
-    
+
     protected List<Query> getQueries(Alias a) throws SyntaxError {
       if (a == null) return null;
       if (a.fields.size()==0) return null;
       List<Query> lst= new ArrayList<>(4);
-      
+
       for (String f : a.fields.keySet()) {
         this.field = f;
         Query sub = getAliasedQuery();
@@ -1407,7 +1407,7 @@ public class ExtendedDismaxQParser extends QParser {
 
     private Query getQuery() {
       try {
-        
+
         switch (type) {
           case FIELD:  // fallthrough
           case PHRASE:
@@ -1454,7 +1454,7 @@ public class ExtendedDismaxQParser extends QParser {
           case RANGE: return super.getRangeQuery(field, val, val2, bool, bool2);
         }
         return null;
-        
+
       } catch (Exception e) {
         // an exception here is due to the field query not being compatible with the input text
         // for example, passing a string to a numeric field.
@@ -1468,25 +1468,25 @@ public class ExtendedDismaxQParser extends QParser {
       if (!(qa instanceof TokenizerChain)) {
         return qa;
       }
-      
+
       TokenizerChain tcq = (TokenizerChain) qa;
       Analyzer ia = ft.getIndexAnalyzer();
       if (ia == qa || !(ia instanceof TokenizerChain)) {
         return qa;
       }
       TokenizerChain tci = (TokenizerChain) ia;
-      
+
       // make sure that there isn't a stop filter in the indexer
       for (TokenFilterFactory tf : tci.getTokenFilterFactories()) {
         if (tf instanceof StopFilterFactory) {
           return qa;
         }
       }
-      
+
       // now if there is a stop filter in the query analyzer, remove it
       int stopIdx = -1;
       TokenFilterFactory[] facs = tcq.getTokenFilterFactories();
-      
+
       for (int i = 0; i < facs.length; i++) {
         TokenFilterFactory tf = facs[i];
         if (tf instanceof StopFilterFactory) {
@@ -1494,30 +1494,30 @@ public class ExtendedDismaxQParser extends QParser {
           break;
         }
       }
-      
+
       if (stopIdx == -1) {
         // no stop filter exists
         return qa;
       }
-      
+
       TokenFilterFactory[] newtf = new TokenFilterFactory[facs.length - 1];
       for (int i = 0, j = 0; i < facs.length; i++) {
         if (i == stopIdx) continue;
         newtf[j++] = facs[i];
       }
-      
+
       TokenizerChain newa = new TokenizerChain(tcq.getCharFilterFactories(), tcq.getTokenizerFactory(), newtf);
       newa.setPositionIncrementGap(tcq.getPositionIncrementGap(fieldName));
       return newa;
     }
   }
-  
+
   static boolean isEmpty(Query q) {
     if (q==null) return true;
     if (q instanceof BooleanQuery && ((BooleanQuery)q).clauses().size()==0) return true;
     return false;
   }
-  
+
   /**
    * Class that encapsulates the input from userFields parameter and can answer whether
    * a field allowed or disallowed as fielded query in the query string
@@ -1526,13 +1526,13 @@ public class ExtendedDismaxQParser extends QParser {
     private Map<String,Float> userFieldsMap;
     private DynamicField[] dynamicUserFields;
     private DynamicField[] negativeDynamicUserFields;
-    
+
     UserFields(Map<String, Float> ufm) {
       userFieldsMap = ufm;
       if (0 == userFieldsMap.size()) {
         userFieldsMap.put("*", null);
       }
-      
+
       // Process dynamic patterns in userFields
       ArrayList<DynamicField> dynUserFields = new ArrayList<>();
       ArrayList<DynamicField> negDynUserFields = new ArrayList<>();
@@ -1553,30 +1553,30 @@ public class ExtendedDismaxQParser extends QParser {
       Collections.sort(negDynUserFields);
       negativeDynamicUserFields = negDynUserFields.toArray(new DynamicField[negDynUserFields.size()]);
     }
-    
+
     /**
      * Is the given field name allowed according to UserFields spec given in the uf parameter?
      * @param fname the field name to examine
      * @return true if the fielded queries are allowed on this field
      */
     public boolean isAllowed(String fname) {
-      boolean res = ((userFieldsMap.containsKey(fname) || isDynField(fname, false)) && 
+      boolean res = ((userFieldsMap.containsKey(fname) || isDynField(fname, false)) &&
           !userFieldsMap.containsKey("-"+fname) &&
           !isDynField(fname, true));
       return res;
     }
-    
+
     private boolean isDynField(String field, boolean neg) {
       return getDynFieldForName(field, neg) == null ? false : true;
     }
-    
+
     private String getDynFieldForName(String f, boolean neg) {
       for( DynamicField df : neg?negativeDynamicUserFields:dynamicUserFields ) {
         if( df.matches( f ) ) return df.wildcard;
       }
       return null;
     }
-    
+
     /**
      * Finds the default user field boost associated with the given field.
      * This is parsed from the uf parameter, and may be specified as wildcards, e.g. *name^2.0 or *^3.0
@@ -1589,18 +1589,18 @@ public class ExtendedDismaxQParser extends QParser {
             userFieldsMap.get(getDynFieldForName(field, false)); // Dynamic field
     }
   }
-  
+
   /* Represents a dynamic field, for easier matching, inspired by same class in IndexSchema */
   static class DynamicField implements Comparable<DynamicField> {
     final static int STARTS_WITH=1;
     final static int ENDS_WITH=2;
     final static int CATCHALL=3;
-    
+
     final String wildcard;
     final int type;
-    
+
     final String str;
-    
+
     protected DynamicField(String wildcard) {
       this.wildcard = wildcard;
       if (wildcard.equals("*")) {
@@ -1619,7 +1619,7 @@ public class ExtendedDismaxQParser extends QParser {
         throw new SolrException(ErrorCode.BAD_REQUEST, "dynamic field name must start or end with *");
       }
     }
-    
+
     /*
      * Returns true if the regex wildcard for this DynamicField would match the input field name
      */
@@ -1629,7 +1629,7 @@ public class ExtendedDismaxQParser extends QParser {
       else if (type==ENDS_WITH && name.endsWith(str)) return true;
       else return false;
     }
-    
+
     /**
      * Sort order is based on length of regex.  Longest comes first.
      * @param other The object to compare to.
@@ -1641,57 +1641,57 @@ public class ExtendedDismaxQParser extends QParser {
     public int compareTo(DynamicField other) {
       return other.wildcard.length() - wildcard.length();
     }
-    
+
     @Override
     public String toString() {
       return this.wildcard;
     }
   }
-  
+
   /**
    * Simple container for configuration information used when parsing queries
    */
   public static class ExtendedDismaxConfiguration {
-    
+
     /**
-     * The field names specified by 'qf' that (most) clauses will 
-     * be queried against 
+     * The field names specified by 'qf' that (most) clauses will
+     * be queried against
      */
     protected Map<String,Float> queryFields;
-    
-    /** 
-     * The field names specified by 'uf' that users are 
+
+    /**
+     * The field names specified by 'uf' that users are
      * allowed to include literally in their query string.  The Float
-     * boost values will be applied automatically to any clause using that 
-     * field name. '*' will be treated as an alias for any 
+     * boost values will be applied automatically to any clause using that
+     * field name. '*' will be treated as an alias for any
      * field that exists in the schema. Wildcards are allowed to
      * express dynamicFields.
      */
     protected UserFields userFields;
-    
+
     protected String[] boostParams;
     protected String[] multBoosts;
     protected SolrParams solrParams;
     protected String minShouldMatch;
-    
+
     protected List<FieldParams> allPhraseFields;
-    
+
     protected float tiebreaker;
-    
+
     protected int qslop;
-    
+
     protected boolean stopwords;
 
     protected boolean mmAutoRelax;
-    
+
     protected String altQ;
-    
+
     protected boolean lowercaseOperators;
-    
+
     protected  String[] boostFuncs;
 
     protected boolean splitOnWhitespace;
-    
+
     protected IndexSchema schema;
 
     public ExtendedDismaxConfiguration(SolrParams localParams,
@@ -1710,63 +1710,63 @@ public class ExtendedDismaxQParser extends QParser {
       pslop[0] = solrParams.getInt(DisMaxParams.PS, 0);
       pslop[2] = solrParams.getInt(DisMaxParams.PS2, pslop[0]);
       pslop[3] = solrParams.getInt(DisMaxParams.PS3, pslop[0]);
-      
+
       List<FieldParams> phraseFields = U.parseFieldBoostsAndSlop(solrParams.getParams(DMP.PF),0,pslop[0]);
       List<FieldParams> phraseFields2 = U.parseFieldBoostsAndSlop(solrParams.getParams(DMP.PF2),2,pslop[2]);
       List<FieldParams> phraseFields3 = U.parseFieldBoostsAndSlop(solrParams.getParams(DMP.PF3),3,pslop[3]);
-      
+
       allPhraseFields = new ArrayList<>(phraseFields.size() + phraseFields2.size() + phraseFields3.size());
       allPhraseFields.addAll(phraseFields);
       allPhraseFields.addAll(phraseFields2);
       allPhraseFields.addAll(phraseFields3);
-      
+
       tiebreaker = solrParams.getFloat(DisMaxParams.TIE, 0.0f);
-      
+
       qslop = solrParams.getInt(DisMaxParams.QS, 0);
-      
+
       stopwords = solrParams.getBool(DMP.STOPWORDS, true);
 
       mmAutoRelax = solrParams.getBool(DMP.MM_AUTORELAX, false);
-      
+
       altQ = solrParams.get( DisMaxParams.ALTQ );
 
       lowercaseOperators = solrParams.getBool(DMP.LOWERCASE_OPS, false);
-      
+
       /* * * Boosting Query * * */
       boostParams = solrParams.getParams(DisMaxParams.BQ);
-      
+
       boostFuncs = solrParams.getParams(DisMaxParams.BF);
-      
+
       multBoosts = solrParams.getParams(DMP.MULT_BOOST);
 
       splitOnWhitespace = solrParams.getBool(QueryParsing.SPLIT_ON_WHITESPACE, SolrQueryParser.DEFAULT_SPLIT_ON_WHITESPACE);
     }
     /**
-     * 
+     *
      * @return true if there are valid multiplicative boost queries
      */
     public boolean hasMultiplicativeBoosts() {
       return multBoosts!=null && multBoosts.length>0;
     }
-    
+
     /**
-     * 
+     *
      * @return true if there are valid boost functions
      */
     public boolean hasBoostFunctions() {
       return null != boostFuncs && 0 != boostFuncs.length;
     }
     /**
-     * 
+     *
      * @return true if there are valid boost params
      */
     public boolean hasBoostParams() {
       return boostParams!=null && boostParams.length>0;
     }
-    
+
     public List<FieldParams> getAllPhraseFields() {
       return allPhraseFields;
     }
   }
-  
+
 }
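The DynamicField helper in the diff above matches `uf` wildcard patterns by prefix or suffix and sorts longer patterns first (see its `matches` and `compareTo` methods). A minimal standalone sketch of that matching logic — class and method names here are illustrative, not Solr's actual API, and unlike Solr's class this sketch treats a pattern with no `*` as an exact match instead of throwing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DynamicFieldSketch {

    // Returns true if a wildcard pattern ("*", "text_*", "*_s") matches a field name.
    public static boolean matches(String wildcard, String name) {
        if (wildcard.equals("*")) return true;                                  // catch-all
        if (wildcard.startsWith("*")) return name.endsWith(wildcard.substring(1));   // *_suffix
        if (wildcard.endsWith("*")) return name.startsWith(wildcard.substring(0, wildcard.length() - 1)); // prefix_*
        return wildcard.equals(name);                                           // no wildcard: exact match (sketch only)
    }

    // Longest pattern first, mirroring DynamicField.compareTo in the diff above.
    public static List<String> sortByLength(List<String> patterns) {
        List<String> out = new ArrayList<>(patterns);
        out.sort((a, b) -> b.length() - a.length());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(matches("*_s", "manu_id_s"));                 // true
        System.out.println(sortByLength(Arrays.asList("*", "text_*")));  // [text_*, *]
    }
}
```

Sorting longest-first means the most specific pattern wins when several wildcards could match the same field name, which is why `compareTo` is length-based.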
diff --git a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java
index cc5a697..4dfb709 100644
--- a/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java
+++ b/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java
@@ -21,7 +21,7 @@ import org.apache.solr.request.SolrQueryRequest;
 
 /**
  * An advanced multi-field query parser based on the DisMax parser.
- * See Wiki page http://wiki.apache.org/solr/ExtendedDisMax
+ * See Wiki page https://solr.apache.org/guide/edismax-query-parser.html
  */
 public class ExtendedDismaxQParserPlugin extends QParserPlugin {
   public static final String NAME = "edismax";
@@ -30,4 +30,4 @@ public class ExtendedDismaxQParserPlugin extends QParserPlugin {
   public QParser createParser(String qstr, SolrParams localParams, SolrParams params, SolrQueryRequest req) {
     return new ExtendedDismaxQParser(qstr, localParams, params, req);
   }
-}
\ No newline at end of file
+}
diff --git a/solr/core/src/java/org/apache/solr/search/QParser.java b/solr/core/src/java/org/apache/solr/search/QParser.java
index c472f4c..3ebcde9 100644
--- a/solr/core/src/java/org/apache/solr/search/QParser.java
+++ b/solr/core/src/java/org/apache/solr/search/QParser.java
@@ -33,7 +33,7 @@ import org.apache.solr.request.SolrQueryRequest;
 
 /**
  * <b>Note: This API is experimental and may change in non backward-compatible ways in the future</b>
- * 
+ *
  *
  */
 public abstract class QParser {
@@ -53,12 +53,12 @@ public abstract class QParser {
 
   protected String stringIncludingLocalParams;   // the original query string including any local params
   protected boolean valFollowedParams;           // true if the value "qstr" followed the localParams
-  protected int localParamsEnd;                  // the position one past where the localParams ended 
+  protected int localParamsEnd;                  // the position one past where the localParams ended
 
   /**
    * Constructor for the QParser
    * @param qstr The part of the query string specific to this parser
-   * @param localParams The set of parameters that are specific to this QParser.  See http://wiki.apache.org/solr/LocalParams
+   * @param localParams The set of parameters that are specific to this QParser.  See https://solr.apache.org/guide/local-parameters-in-queries.html
    * @param params The rest of the {@link org.apache.solr.common.params.SolrParams}
    * @param req The original {@link org.apache.solr.request.SolrQueryRequest}.
    */
@@ -77,7 +77,7 @@ public abstract class QParser {
         Map<Object,Collection<Object>> tagMap = (Map<Object, Collection<Object>>)req.getContext().get("tags");
         if (tagMap == null) {
           tagMap = new HashMap<>();
-          context.put("tags", tagMap);          
+          context.put("tags", tagMap);
         }
         if (tagStr.indexOf(',') >= 0) {
           List<String> tags = StrUtils.splitSmart(tagStr, ',');
diff --git a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
index 51f2dcc..c87e318 100644
--- a/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
+++ b/solr/core/src/java/org/apache/solr/search/join/ScoreJoinQParserPlugin.java
@@ -48,24 +48,24 @@ import org.apache.solr.uninverting.UninvertingReader;
 import org.apache.solr.util.RefCounted;
 
 /**
- * Create a query-time join query with scoring. 
+ * Create a query-time join query with scoring.
  * It just calls  {@link JoinUtil#createJoinQuery(String, boolean, String, Query, org.apache.lucene.search.IndexSearcher, ScoreMode)}.
  * It runs subordinate query and collects values of "from"  field and scores, then it lookups these collected values in "to" field, and
  * yields aggregated scores.
- * Local parameters are similar to {@link JoinQParserPlugin} <a href="http://wiki.apache.org/solr/Join">{!join}</a>
- * This plugin doesn't have own name, and is called by specifying local parameter <code>{!join score=...}...</code>. 
+ * Local parameters are similar to {@link JoinQParserPlugin} <a href="https://solr.apache.org/guide/join-query-parser.html">{!join}</a>
+ * This plugin doesn't have its own name, and is called by specifying local parameter <code>{!join score=...}...</code>.
  * Note: this parser is invoked even if you specify <code>score=none</code>.
  * <br>Example:<code>q={!join from=manu_id_s to=id score=total}foo</code>
  * <ul>
  *  <li>from - "foreign key" field name to collect values while enumerating subordinate query (denoted as <code>foo</code> in example above).
  *             it's better to have this field declared as <code>type="string" docValues="true"</code>.
- *             note: if <a href="http://wiki.apache.org/solr/DocValues">docValues</a> are not enabled for this field, it will work anyway, 
- *             but it costs some memory for {@link UninvertingReader}. 
+ *             note: if <a href="https://solr.apache.org/guide/docvalues.html">docValues</a> are not enabled for this field, it will work anyway,
+ *             but it costs some memory for {@link UninvertingReader}.
  *             Also, numeric doc values are not supported until <a href="https://issues.apache.org/jira/browse/LUCENE-5868">LUCENE-5868</a>.
  *             Thus, it only supports {@link DocValuesType#SORTED}, {@link DocValuesType#SORTED_SET}, {@link DocValuesType#BINARY}.  </li>
  *  <li>fromIndex - optional parameter, a core name where subordinate query should run (and <code>from</code> values are collected) rather than current core.
- *             <br>Example:<code>q={!join from=manu_id_s to=id score=total fromIndex=products}foo</code> 
- *  <li>to - "primary key" field name which is searched for values collected from subordinate query. 
+ *             <br>Example:<code>q={!join from=manu_id_s to=id score=total fromIndex=products}foo</code>
+ *  <li>to - "primary key" field name which is searched for values collected from subordinate query.
  *             it should be declared as <code>indexed="true"</code>. Now it's treated as a single value field.</li>
  *  <li>score - one of {@link ScoreMode}: <code>none,avg,total,max,min</code>. Capital case is also accepted.</li>
  * </ul>
@@ -346,6 +346,3 @@ public class ScoreJoinQParserPlugin extends QParserPlugin {
     return fromReplica;
   }
 }
-
-
-
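The Javadoc above documents the `{!join ...}` local-params syntax, e.g. `q={!join from=manu_id_s to=id score=total}foo`. A small sketch that assembles such a query string from its parts — the helper name is illustrative, and a real client would also need to escape parameter values:

```java
public class JoinQuerySketch {

    // Builds a {!join ...} local-params query string in the shape shown in the
    // ScoreJoinQParserPlugin javadoc: {!join from=... to=... score=...}subQuery
    public static String joinQuery(String from, String to, String score, String subQuery) {
        return "{!join from=" + from + " to=" + to + " score=" + score + "}" + subQuery;
    }

    public static void main(String[] args) {
        System.out.println(joinQuery("manu_id_s", "id", "total", "foo"));
        // prints: {!join from=manu_id_s to=id score=total}foo
    }
}
```

Note the javadoc's caveat: the parser is invoked even for `score=none`, so the `score` local param is what routes the query to this plugin at all.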
diff --git a/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java b/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java
index 85befe2..c068c64 100644
--- a/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java
+++ b/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java
@@ -43,16 +43,16 @@ import org.apache.solr.search.SolrIndexSearcher;
 
 /**
  * Abstract base class for all Lucene-based spell checking implementations.
- * 
+ *
  * <p>
- * Refer to <a href="http://wiki.apache.org/solr/SpellCheckComponent">SpellCheckComponent</a>
+ * Refer to <a href="https://solr.apache.org/guide/spell-checking.html">https://solr.apache.org/guide/spell-checking.html</a>
  * for more details.
  * </p>
- * 
+ *
  * @since solr 1.3
  */
 public abstract class AbstractLuceneSpellChecker extends SolrSpellChecker {
-  
+
   public static final String SPELLCHECKER_ARG_NAME = "spellchecker";
   public static final String LOCATION = "sourceLocation";
   public static final String INDEX_DIR = "spellcheckIndexDir";
@@ -130,14 +130,14 @@ public abstract class AbstractLuceneSpellChecker extends SolrSpellChecker {
     }
     return name;
   }
-  
+
   @Override
   public SpellingResult getSuggestions(SpellingOptions options) throws IOException {
     SpellingResult result = new SpellingResult(options.tokens);
     IndexReader reader = determineReader(options.reader);
     Term term = field != null ? new Term(field, "") : null;
     float theAccuracy = (options.accuracy == Float.MIN_VALUE) ? spellChecker.getAccuracy() : options.accuracy;
-    
+
     int count = Math.max(options.count, AbstractLuceneSpellChecker.DEFAULT_SUGGESTION_COUNT);
     for (Token token : options.tokens) {
       String tokenText = new String(token.buffer(), 0, token.length());
diff --git a/solr/core/src/java/org/apache/solr/spelling/IndexBasedSpellChecker.java b/solr/core/src/java/org/apache/solr/spelling/IndexBasedSpellChecker.java
index 154e68b..416cfad 100644
--- a/solr/core/src/java/org/apache/solr/spelling/IndexBasedSpellChecker.java
+++ b/solr/core/src/java/org/apache/solr/spelling/IndexBasedSpellChecker.java
@@ -32,12 +32,12 @@ import java.io.IOException;
  * <p>
  * A spell checker implementation that loads words from Solr as well as arbitrary Lucene indices.
  * </p>
- * 
+ *
  * <p>
- * Refer to <a href="http://wiki.apache.org/solr/SpellCheckComponent">SpellCheckComponent</a>
+ * Refer to <a href="https://solr.apache.org/guide/spell-checking.html">https://solr.apache.org/guide/spell-checking.html</a>
  * for more details.
  * </p>
- * 
+ *
  * @since solr 1.3
  **/
 public class IndexBasedSpellChecker extends AbstractLuceneSpellChecker {
diff --git a/solr/core/src/java/org/apache/solr/spelling/QueryConverter.java b/solr/core/src/java/org/apache/solr/spelling/QueryConverter.java
index 193c15d..471e61d 100644
--- a/solr/core/src/java/org/apache/solr/spelling/QueryConverter.java
+++ b/solr/core/src/java/org/apache/solr/spelling/QueryConverter.java
@@ -26,7 +26,7 @@ import org.apache.solr.util.plugin.NamedListInitializedPlugin;
  * input "raw" queries into a set of tokens for spell checking. It is used to
  * "parse" the CommonParams.Q (the input query) and converts it to tokens.
  * </p>
- * 
+ *
  * <p>
  * It is only invoked for the CommonParams.Q parameter, and <b>not</b> the
  * "spellcheck.q" parameter. Systems that use their own query parser or those
@@ -35,20 +35,20 @@ import org.apache.solr.util.plugin.NamedListInitializedPlugin;
  * (SpellingQueryConverter) by overriding the appropriate methods on the
  * SpellingQueryConverter and registering it in the solrconfig.xml
  * </p>
- * 
+ *
  * <p>
- * Refer to <a href="http://wiki.apache.org/solr/SpellCheckComponent">SpellCheckComponent</a>
+ * Refer to <a href="https://solr.apache.org/guide/spell-checking.html">https://solr.apache.org/guide/spell-checking.html</a>
  * for more details
  * </p>
- * 
+ *
  * @since solr 1.3
  */
 public abstract class QueryConverter implements NamedListInitializedPlugin {
   protected Analyzer analyzer;
-  
+
   /**
    * <p>This term is marked prohibited in the query with the minus sign.</p>
-   * 
+   *
    */
   public static final int PROHIBITED_TERM_FLAG = 16384;
   /**
diff --git a/solr/core/src/java/org/apache/solr/spelling/SolrSpellChecker.java b/solr/core/src/java/org/apache/solr/spelling/SolrSpellChecker.java
index c67a10d..b752385 100644
--- a/solr/core/src/java/org/apache/solr/spelling/SolrSpellChecker.java
+++ b/solr/core/src/java/org/apache/solr/spelling/SolrSpellChecker.java
@@ -39,10 +39,10 @@ import org.apache.solr.search.SolrIndexSearcher;
 
 /**
  * <p>
- * Refer to <a href="http://wiki.apache.org/solr/SpellCheckComponent">SpellCheckComponent</a>
+ * Refer to <a href="https://solr.apache.org/guide/spell-checking.html">https://solr.apache.org/guide/spell-checking.html</a>
  * for more details.
  * </p>
- * 
+ *
  * @since solr 1.3
  */
 public abstract class SolrSpellChecker {
@@ -86,26 +86,26 @@ public abstract class SolrSpellChecker {
     } catch(UnsupportedOperationException uoe) {
       //just use .5 as a default
     }
-    
+
     StringDistance sd = null;
     try {
-      sd = getStringDistance() == null ? new LevenshteinDistance() : getStringDistance();    
+      sd = getStringDistance() == null ? new LevenshteinDistance() : getStringDistance();
     } catch(UnsupportedOperationException uoe) {
       sd = new LevenshteinDistance();
     }
-    
+
     SpellingResult result = new SpellingResult();
     for (Map.Entry<String, HashSet<String>> entry : mergeData.origVsSuggested.entrySet()) {
       String original = entry.getKey();
-      
-      //Only use this suggestion if all shards reported it as misspelled, 
+
+      //Only use this suggestion if all shards reported it as misspelled,
       //unless it was not a term original to the user's query
       //(WordBreakSolrSpellChecker can add new terms to the response, and we want to keep these)
       Integer numShards = mergeData.origVsShards.get(original);
       if(numShards<mergeData.totalNumberShardResponses && mergeData.isOriginalToQuery(original)) {
         continue;
       }
-      
+
       HashSet<String> suggested = entry.getValue();
       SuggestWordQueue sugQueue = new SuggestWordQueue(numSug);
       for (String suggestion : suggested) {
@@ -145,7 +145,7 @@ public abstract class SolrSpellChecker {
     }
     return result;
   }
-  
+
   public Analyzer getQueryAnalyzer() {
     return analyzer;
   }
@@ -165,15 +165,15 @@ public abstract class SolrSpellChecker {
    * (re)Builds the spelling index.  May be a NOOP if the implementation doesn't require building, or can't be rebuilt.
    */
   public abstract void build(SolrCore core, SolrIndexSearcher searcher) throws IOException;
-  
+
   /**
-   * Get the value of {@link SpellingParams#SPELLCHECK_ACCURACY} if supported.  
+   * Get the value of {@link SpellingParams#SPELLCHECK_ACCURACY} if supported.
    * Otherwise throws UnsupportedOperationException.
    */
   protected float getAccuracy() {
     throw new UnsupportedOperationException();
   }
-  
+
   /**
    * Get the distance implementation used by this spellchecker, or NULL if not applicable.
    */
@@ -191,7 +191,7 @@ public abstract class SolrSpellChecker {
    * @throws IOException if there is an error producing suggestions
    */
   public abstract SpellingResult getSuggestions(SpellingOptions options) throws IOException;
-  
+
   public boolean isSuggestionsMayOverlap() {
     return false;
   }
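The merge loop in the SolrSpellChecker diff above keeps a suggestion only when every shard reported the original term as misspelled, unless the term was not original to the user's query (e.g. added by WordBreakSolrSpellChecker). A simplified sketch of just that guard — the class and method are illustrative, not Solr's API:

```java
public class MergeFilterSketch {

    // Mirrors the guard in the merge loop above: skip the original term's
    // suggestions when fewer than all shards flagged it, unless the term
    // was introduced by the spellchecker rather than the user's query.
    public static boolean keep(int shardsReporting, int totalShards, boolean originalToQuery) {
        if (shardsReporting < totalShards && originalToQuery) {
            return false; // not all shards agreed this user term is misspelled
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(keep(2, 3, true));  // false: only 2 of 3 shards agreed
        System.out.println(keep(3, 3, true));  // true
        System.out.println(keep(1, 3, false)); // true: term added by the spellchecker itself
    }
}
```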
diff --git a/solr/core/src/java/org/apache/solr/util/plugin/package-info.java b/solr/core/src/java/org/apache/solr/util/plugin/package-info.java
index e0e3a43..4a5ac1e 100644
--- a/solr/core/src/java/org/apache/solr/util/plugin/package-info.java
+++ b/solr/core/src/java/org/apache/solr/util/plugin/package-info.java
@@ -14,12 +14,10 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
- 
-/** 
- * Common APIs related to implementing <a href="http://wiki.apache.org/solr/SolrPlugins">Solr plugins</a>
+
+/**
+ * Common APIs related to implementing <a href="https://solr.apache.org/guide/solr-plugins.html">Solr plugins</a>
  * <p>
  * See also: {@link org.apache.solr.util.SolrPluginUtils}.
  */
 package org.apache.solr.util.plugin;
-
-
diff --git a/solr/core/src/test-files/solr/collection1/conf/schema-spellchecker.xml b/solr/core/src/test-files/solr/collection1/conf/schema-spellchecker.xml
index 896f139..d5286ab 100644
--- a/solr/core/src/test-files/solr/collection1/conf/schema-spellchecker.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/schema-spellchecker.xml
@@ -16,15 +16,6 @@
  limitations under the License.
 -->
 
-<!-- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
--->
-
 <schema name="Solr SpellCheck Test" version="1.1">
   <!-- attribute "name" is the name of this schema and is only used for display purposes.
        Applications should change this to reflect the nature of the search collection.
diff --git a/solr/core/src/test-files/solr/collection1/conf/schema-trie.xml b/solr/core/src/test-files/solr/collection1/conf/schema-trie.xml
index 8669d64..f42afb3 100644
--- a/solr/core/src/test-files/solr/collection1/conf/schema-trie.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/schema-trie.xml
@@ -16,18 +16,6 @@
  limitations under the License.
 -->
 
-<!--
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default)
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
--->
 
 <schema name="example" version="1.2">
   <!-- attribute "name" is the name of this schema and is only used for display purposes.
@@ -125,18 +113,6 @@
    -->
   <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
 
-  <!-- solr.TextField allows the specification of custom text analyzers
-       specified as a tokenizer and a list of token filters. Different
-       analyzers may be specified for indexing and querying.
-
-       The optional positionIncrementGap puts space between multiple fields of
-       this type on the same document, with the purpose of preventing false phrase
-       matching across fields.
-
-       For more info on customizing your analyzer chain, please see
-       http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-   -->
-
   <!-- One can also specify an existing Analyzer class that has a
        default constructor via the class attribute on the analyzer element
   <fieldType name="text_greek" class="solr.TextField">
diff --git a/solr/core/src/test-files/solr/collection1/conf/schema11.xml b/solr/core/src/test-files/solr/collection1/conf/schema11.xml
index d791793..f4c90c8 100644
--- a/solr/core/src/test-files/solr/collection1/conf/schema11.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/schema11.xml
@@ -16,19 +16,6 @@
  limitations under the License.
 -->
 
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
--->
-
 <schema name="example" version="1.1">
   <!-- attribute "name" is the name of this schema and is only used for display purposes.
        Applications should change this to reflect the nature of the search collection.
@@ -46,7 +33,7 @@
        org.apache.solr.analysis package.
     -->
 
-    <!-- The StrField type is not analyzed, but indexed/stored verbatim.  
+    <!-- The StrField type is not analyzed, but indexed/stored verbatim.
        - StrField and TextField support an optional compressThreshold which
        limits compression (if enabled in the derived fields) to values which
        exceed a certain size (in characters).
@@ -77,7 +64,7 @@
   <fieldType name="float" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
   <fieldType name="long" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
   <fieldType name="double" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="0" positionIncrementGap="0"/>
-  
+
   <!-- Point Fields -->
   <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
   <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
@@ -87,7 +74,7 @@
 
     <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
          is a more restricted form of the canonical representation of dateTime
-         http://www.w3.org/TR/xmlschema-2/#dateTime    
+         http://www.w3.org/TR/xmlschema-2/#dateTime
          The trailing "Z" designates UTC time and is mandatory.
          Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
          All other components are mandatory.
@@ -102,7 +89,7 @@
                NOW/DAY+6MONTHS+3DAYS
                   ... 6 months and 3 days in the future from the start of
                       the current day
-                      
+
          Consult the TrieDateField javadocs for more information.
       -->
     <fieldType name="date" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" sortMissingLast="true" omitNorms="true"/>
@@ -111,28 +98,16 @@
     <!-- The "RandomSortField" is not used to store or search any
          data.  You can declare fields of this type in your schema
          to generate pseudo-random orderings of your docs for sorting
-         purposes.  The ordering is generated based on the field name 
+         purposes.  The ordering is generated based on the field name
          and the version of the index, As long as the index version
          remains unchanged, and the same field name is reused,
-         the ordering of the docs will be consistent.  
+         the ordering of the docs will be consistent.
          If you want different pseudo-random orderings of documents,
          for the same version of the index, use a dynamicField and
          change the name
      -->
     <fieldType name="random" class="solr.RandomSortField" indexed="true" />
 
-    <!-- solr.TextField allows the specification of custom text analyzers
-         specified as a tokenizer and a list of token filters. Different
-         analyzers may be specified for indexing and querying.
-
-         The optional positionIncrementGap puts space between multiple fields of
-         this type on the same document, with the purpose of preventing false phrase
-         matching across fields.
-
-         For more info on customizing your analyzer chain, please see
-         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-     -->
-
     <!-- One can also specify an existing Analyzer class that has a
          default constructor via the class attribute on the analyzer element
     <fieldType name="text_greek" class="solr.TextField">
@@ -225,13 +200,13 @@
         <filter class="solr.TrimFilterFactory" />
         <!-- The PatternReplaceFilter gives you the flexibility to use
              Java Regular expression to replace any sequence of characters
-             matching a pattern with an arbitrary replacement string, 
+             matching a pattern with an arbitrary replacement string,
              which may include back references to portions of the original
              string matched by the pattern.
-             
+
              See the Java Regular Expression documentation for more
              information on pattern and replacement string syntax.
-             
+
              http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
           -->
         <filter class="solr.PatternReplaceFilterFactory"
@@ -240,16 +215,16 @@
       </analyzer>
     </fieldType>
 
-    <!-- since fields of this type are by default not stored or indexed, any data added to 
-         them will be ignored outright 
-     --> 
-    <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField" /> 
+    <!-- since fields of this type are by default not stored or indexed, any data added to
+         them will be ignored outright
+     -->
+    <fieldType name="ignored" stored="false" indexed="false" class="solr.StrField" />
 
     <fieldType name="file" keyField="id" defVal="1" stored="false" indexed="false" class="solr.ExternalFileField" />
 
     <fieldType name="sfile" keyField="sfile_s" defVal="1" stored="false" indexed="false" class="solr.ExternalFileField" />
 
-    
+
     <fieldType name="tint" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0"/>
     <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0"/>
     <fieldType name="tlong" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0"/>
@@ -257,13 +232,13 @@
     <fieldType name="tdouble4" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" precisionStep="4" omitNorms="true" positionIncrementGap="0"/>
     <fieldType name="tdate" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0"/>
 
-    
+
     <fieldType name="tints" class="${solr.tests.IntegerFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0" precisionStep="0" multiValued="true" />
     <fieldType name="tfloats" class="${solr.tests.FloatFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0" precisionStep="0" multiValued="true"/>
     <fieldType name="tlongs" class="${solr.tests.LongFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0" precisionStep="0" multiValued="true"/>
     <fieldType name="tdoubles" class="${solr.tests.DoubleFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0" precisionStep="0" multiValued="true" />
     <fieldType name="tdates" class="${solr.tests.DateFieldType}" docValues="${solr.tests.numeric.dv}" omitNorms="true" positionIncrementGap="0" precisionStep="0" multiValued="true" />
- 
+
 
     <!-- Poly field -->
     <fieldType name="xy" class="solr.PointType" dimension="2" subFieldType="double"/>
@@ -291,7 +266,7 @@ valued. -->
         <filter class="solr.LowerCaseFilterFactory"/>
         <filter class="solr.LengthFilterFactory" min="2" max="32768"/>
       </analyzer>
-    </fieldType>  
+    </fieldType>
 
 
     <!-- Enum type -->
@@ -308,7 +283,7 @@ valued. -->
        boosting for the field, and saves some memory).  Only full-text
        fields or fields that need an index-time boost need norms.
      termVectors: [false] set to true to store the term vector for a given field.
-       When using MoreLikeThis, fields used for similarity should be stored for 
+       When using MoreLikeThis, fields used for similarity should be stored for
        best performance.
    -->
 
@@ -339,7 +314,7 @@ valued. -->
    <field name="cat_length" type="text_length" indexed="true" stored="true" multiValued="true"/>
 
    <!-- see TestMinMaxOnMultiValuedField -->
-   <!-- NOTE: "string" type configured with sortMissingLast="true" 
+   <!-- NOTE: "string" type configured with sortMissingLast="true"
         we need a multivalued string for sort testing using sortMissing*="false"
    -->
    <field name="val_strs_dv" type="string" indexed="true" stored="true"
@@ -361,12 +336,12 @@ valued. -->
    <field name="val_bool_missl_s_dv" type="boolean" docValues="true" multiValued="true" sortMissingFirst="false" sortMissingLast="true" />
    <field name="val_enum_missf_s_dv" type="severityType" docValues="true" multiValued="true" sortMissingFirst="true" sortMissingLast="false" />
    <field name="val_enum_missl_s_dv" type="severityType" docValues="true" multiValued="true" sortMissingFirst="false" sortMissingLast="true" />
-   
+
 
    <!-- Enum type -->
    <field name="severity" type="severityType" docValues="true" indexed="true" stored="true" multiValued="false"/>
 
-   
+
    <!-- Dynamic field definitions.  If a field name is not found, dynamicFields
         will be used if the name matches any of the patterns.
         RESTRICTION: the glob-like pattern in the name attribute must have
@@ -383,7 +358,7 @@ valued. -->
    <dynamicField name="*_l"    type="long"   indexed="true"  stored="true"/>
    <dynamicField name="*_f"    type="float"  indexed="true"  stored="true"/>
    <dynamicField name="*_d"    type="double" indexed="true"  stored="true"/>
-   
+
     <!-- Test trie fields explicitly -->
    <dynamicField name="*_ti"      type="tint"    indexed="true"  stored="true"/>
    <dynamicField name="*_ti_dv"   type="tint"    indexed="true"  stored="true" docValues="true"/>
@@ -416,7 +391,7 @@ valued. -->
    <dynamicField name="*_tdts"     type="tdates"   indexed="true"  stored="true"/>
    <dynamicField name="*_tdts_dv"   type="tdates"   indexed="true"  stored="true" docValues="true"/>
    <dynamicField name="*_tdts_ni_dv" type="tdates"   indexed="false"  stored="true" docValues="true"/>
-   
+
    <!-- Test point fields explicitly -->
    <dynamicField name="*_i_p"      type="pint"    indexed="true"  stored="true" docValues="true"/>
    <dynamicField name="*_is_p"      type="pint"    indexed="true"  stored="true" docValues="true" multiValued="true"/>
@@ -454,10 +429,10 @@ valued. -->
    <dynamicField name="*_b"  type="boolean" indexed="true"  stored="true"/>
    <dynamicField name="*_dt" type="date"    indexed="true"  stored="true"/>
    <dynamicField name="*_ws" type="text_ws" indexed="true"  stored="true"/>
-   
+
   <!-- Indexed, but NOT uninvertible -->
   <dynamicField name="*_s_not_uninvert" type="string" indexed="true" stored="false" docValues="false" uninvertible="false" />
-   
+
    <!-- for testing tfidf functions, see TestFunctionQuery.testTFIDFFunctions -->
    <dynamicField name="*_tfidf"  type="tfidf_text"    indexed="true"  stored="true" />
    <fieldType name="tfidf_text" class="solr.TextField" positionIncrementGap="100">
@@ -486,18 +461,18 @@ valued. -->
        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      </analyzer>
    </fieldType>
-     
+
    <dynamicField name="*_extf" type="file"/>
    <dynamicField name="*_extfs" type="sfile"/>
 
    <dynamicField name="*_random" type="random" />
 
-   <!-- uncomment the following to ignore any fields that don't already match an existing 
-        field name or dynamic field, rather than reporting them as an error. 
-        alternately, change the type="ignored" to some other type e.g. "text" if you want 
-        unknown fields indexed and/or stored by default --> 
+   <!-- uncomment the following to ignore any fields that don't already match an existing
+        field name or dynamic field, rather than reporting them as an error.
+        alternately, change the type="ignored" to some other type e.g. "text" if you want
+        unknown fields indexed and/or stored by default -->
    <!--dynamicField name="*" type="ignored" /-->
-   
+
   <!-- For testing payload function -->
   <dynamicField name="*_dpf" type="delimited_payloads_float" indexed="true"  stored="true"/>
   <dynamicField name="*_dpi" type="delimited_payloads_int" indexed="true"  stored="true"/>
@@ -534,8 +509,8 @@ valued. -->
   </fieldType>
 
 
-  
- <!-- Field to use to determine and enforce document uniqueness. 
+
+ <!-- Field to use to determine and enforce document uniqueness.
       Unless this field is marked with required="false", it will be a required field
    -->
  <uniqueKey>id</uniqueKey>
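The dynamicField patterns defined in this schema (e.g. `*_i`, `*_s`, `*_dt`) let clients add fields by naming convention rather than explicit declaration. A hedged sketch of an update document that would match them (document and field names are illustrative):

```xml
<!-- Hypothetical document: "popularity_i" matches the *_i (int) pattern and
     "category_s" matches *_s (string); neither needs an explicit <field> entry. -->
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="popularity_i">42</field>
    <field name="category_s">books</field>
  </doc>
</add>
```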
diff --git a/solr/core/src/test-files/solr/collection1/conf/schema_latest.xml b/solr/core/src/test-files/solr/collection1/conf/schema_latest.xml
index 693847b..9d48541 100644
--- a/solr/core/src/test-files/solr/collection1/conf/schema_latest.xml
+++ b/solr/core/src/test-files/solr/collection1/conf/schema_latest.xml
@@ -16,57 +16,28 @@
  limitations under the License.
 -->
 
-<!--  
- This is the Solr schema file. This file should be named "schema.xml" and
- should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
- or located where the classloader for the Solr webapp can find it.
-
- This example schema is the recommended starting point for users.
- It should be kept correct and concise, usable out-of-the-box.
-
- For more information, on how to customize this file, please see
- http://wiki.apache.org/solr/SchemaXml
-
- PERFORMANCE NOTE: this schema includes many optional features and should not
- be used for benchmarking.  To improve performance one could
-  - set stored="false" for all fields possible (esp large fields) when you
-    only need to search on the field but don't need to return the original
-    value.
-  - set indexed="false" if you don't need to search on the field, but only
-    return the field as a result of searching on other indexed fields.
-  - remove all unneeded copyField statements
-  - for best index size and searching performance, set "index" to false
-    for all general text fields, use copyField to copy them to the
-    catchall "text" field, and use that for searching.
-  - For maximum indexing performance, use the StreamingUpdateSolrServer
-    java client.
-  - Remember to run the JVM in server mode, and use a higher logging level
-    that avoids logging every request
--->
-
 <schema name="example" version="1.6">
   <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       version="x.y" is Solr's version number for the schema syntax and 
+       version="x.y" is Solr's version number for the schema syntax and
        semantics.  It should not normally be changed by applications.
 
-       1.0: multiValued attribute did not exist, all fields are multiValued 
+       1.0: multiValued attribute did not exist, all fields are multiValued
             by nature
-       1.1: multiValued attribute introduced, false by default 
-       1.2: omitTermFreqAndPositions attribute introduced, true by default 
+       1.1: multiValued attribute introduced, false by default
+       1.2: omitTermFreqAndPositions attribute introduced, true by default
             except for text fields.
        1.3: removed optional field compress feature
        1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
-            behavior when a single string produces multiple tokens.  Defaults 
+            behavior when a single string produces multiple tokens.  Defaults
             to off for version >= 1.4
-       1.5: omitNorms defaults to true for primitive field types 
+       1.5: omitNorms defaults to true for primitive field types
             (int, float, boolean, string...)
        1.6: useDocValuesAsStored defaults to true.
      -->
 
   <!-- Valid attributes for fields:
     name: mandatory - the name for the field
-    type: mandatory - the name of a field type from the 
+    type: mandatory - the name of a field type from the
       fieldTypes
     indexed: true if this field should be indexed (searchable or sortable)
     stored: true if this field should be retrievable
@@ -89,9 +60,9 @@
       given field.
       When using MoreLikeThis, fields used for similarity should be
       stored for best performance.
-    termPositions: Store position information with the term vector.  
+    termPositions: Store position information with the term vector.
       This will increase storage costs.
-    termOffsets: Store offset information with the term vector. This 
+    termOffsets: Store offset information with the term vector. This
       will increase storage costs.
     required: The field is required.  It will throw an error if the
       value does not exist
@@ -117,7 +88,7 @@
   <field name="_root_" type="string" indexed="true" stored="false"/>
 
   <!-- Only remove the "id" field if you have a very good reason to. While not strictly
-    required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
+    required, it is highly recommended. A <uniqueKey> is present in almost all Solr
     installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
   -->
   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
@@ -194,7 +165,7 @@
     -->
 
   <!-- Dynamic field definitions allow using convention over configuration
-      for fields via the specification of patterns to match field names. 
+      for fields via the specification of patterns to match field names.
       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
       RESTRICTION: the glob-like pattern in the name attribute must have
       a "*" only at the start or the end.  -->
@@ -242,7 +213,7 @@
   <dynamicField name="*_ddsS" type="double" indexed="true" stored="true" multiValued="true" docValues="true"/>
   <dynamicField name="*_dtdS" type="date" indexed="true" stored="true" docValues="true"/>
   <dynamicField name="*_dtdsS" type="date" indexed="true" stored="true" multiValued="true" docValues="true"/>
-  
+
   <!-- docvalues, not indexed (N suffix) and not stored -->
   <dynamicField name="*_sdN" type="string" indexed="false" stored="false" docValues="true"/>
   <dynamicField name="*_sdsN" type="string" indexed="false" stored="false" multiValued="true" docValues="true"/>
@@ -288,9 +259,9 @@
 
   <dynamicField name="random_*" type="random"/>
 
-  <!-- uncomment the following to ignore any fields that don't already match an existing 
-       field name or dynamic field, rather than reporting them as an error. 
-       alternately, change the type="ignored" to some other type e.g. "text" if you want 
+  <!-- uncomment the following to ignore any fields that don't already match an existing
+       field name or dynamic field, rather than reporting them as an error.
+       alternately, change the type="ignored" to some other type e.g. "text" if you want
        unknown fields indexed and/or stored by default -->
   <!--dynamicField name="*" type="ignored" multiValued="true" /-->
 
@@ -306,7 +277,7 @@
   <field name="where_s_multi_not_uninvert_dv" type="string" indexed="true" stored="false" docValues="true" uninvertible="false" multiValued="true" />
   <field name="where_s_single_not_uninvert_dv" type="string" indexed="true" stored="false" docValues="true" uninvertible="false" multiValued="false" />
 
-  <!-- Field to use to determine and enforce document uniqueness. 
+  <!-- Field to use to determine and enforce document uniqueness.
        Unless this field is marked with required="false", it will be a required field
     -->
   <uniqueKey>id</uniqueKey>
@@ -337,16 +308,16 @@
 
   <!-- Create a string version of author for faceting -->
   <copyField source="author" dest="author_s"/>
-  
+
   <copyField source="where_s" dest="where_s_multi_not_uninvert"/>
   <copyField source="where_s" dest="where_s_multi_not_uninvert_dv"/>
   <copyField source="where_s" dest="where_s_single_not_uninvert"/>
   <copyField source="where_s" dest="where_s_single_not_uninvert_dv"/>
   <copyField source="where_s" dest="where_s_not_indexed_sS"/>
 
-  <!-- Above, multiple source fields are copied to the [text] field. 
-   Another way to map multiple source fields to the same 
-   destination field is to use the dynamic field syntax. 
+  <!-- Above, multiple source fields are copied to the [text] field.
+   Another way to map multiple source fields to the same
+   destination field is to use the dynamic field syntax.
    copyField also supports a maxChars to copy setting.  -->
 
   <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
@@ -417,10 +388,10 @@
   <fieldType name="tfloat" class="${solr.tests.FloatFieldType}" precisionStep="8" positionIncrementGap="0"/>
   <fieldType name="tlong" class="${solr.tests.LongFieldType}" precisionStep="8" positionIncrementGap="0"/>
   <fieldType name="tdouble" class="${solr.tests.DoubleFieldType}" precisionStep="8" positionIncrementGap="0"/>
-  
+
   <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
        is a more restricted form of the canonical representation of dateTime
-       http://www.w3.org/TR/xmlschema-2/#dateTime    
+       http://www.w3.org/TR/xmlschema-2/#dateTime
        The trailing "Z" designates UTC time and is mandatory.
        Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
        All other components are mandatory.
@@ -435,7 +406,7 @@
              NOW/DAY+6MONTHS+3DAYS
                 ... 6 months and 3 days in the future from the start of
                     the current day
-                    
+
        Consult the DateField javadocs for more information.
 
        Note: For faster range queries, consider the tdate type
@@ -451,28 +422,17 @@
 
   <!-- The "RandomSortField" is not used to store or search any
        data.  You can declare fields of this type in your schema
-       to generate pseudo-random orderings of your docs for sorting 
+       to generate pseudo-random orderings of your docs for sorting
        or function purposes.  The ordering is generated based on the field
        name and the version of the index. As long as the index version
        remains unchanged, and the same field name is reused,
-       the ordering of the docs will be consistent.  
+       the ordering of the docs will be consistent.
        If you want different pseudo-random orderings of documents,
        for the same version of the index, use a dynamicField and
        change the field name in the request.
    -->
   <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
 
-  <!-- solr.TextField allows the specification of custom text analyzers
-       specified as a tokenizer and a list of token filters. Different
-       analyzers may be specified for indexing and querying.
-
-       The optional positionIncrementGap puts space between multiple fields of
-       this type on the same document, with the purpose of preventing false phrase
-       matching across fields.
-
-       For more info on customizing your analyzer chain, please see
-       http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
-   -->
 
   <!-- One can also specify an existing Analyzer class that has a
        default constructor via the class attribute on the analyzer element.
@@ -678,13 +638,13 @@
       <filter class="solr.TrimFilterFactory"/>
       <!-- The PatternReplaceFilter gives you the flexibility to use
            Java Regular expression to replace any sequence of characters
-           matching a pattern with an arbitrary replacement string, 
+           matching a pattern with an arbitrary replacement string,
            which may include back references to portions of the original
            string matched by the pattern.
-           
+
            See the Java Regular Expression documentation for more
            information on pattern and replacement string syntax.
-           
+
            http://docs.oracle.com/javase/7/docs/api/java/util/regex/package-summary.html
         -->
       <filter class="solr.PatternReplaceFilterFactory"
@@ -706,7 +666,7 @@
       <!--
       The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
       a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
-      Attributes of the DelimitedPayloadTokenFilterFactory : 
+      Attributes of the DelimitedPayloadTokenFilterFactory :
        "delimiter" - a one character delimiter. Default is | (pipe)
  "encoder" - how to encode the following value into a playload
     float -> org.apache.lucene.analysis.payloads.FloatEncoder,
@@ -726,7 +686,7 @@
     </analyzer>
   </fieldType>
 
-  <!-- 
+  <!--
     Example of using PathHierarchyTokenizerFactory at index time, so
     queries for paths match documents at that path, or in descendent paths
   -->
@@ -738,7 +698,7 @@
       <tokenizer class="solr.KeywordTokenizerFactory"/>
     </analyzer>
   </fieldType>
-  <!-- 
+  <!--
     Example of using PathHierarchyTokenizerFactory at query time, so
     queries for paths match documents at that path, or in ancestor paths
   -->
@@ -757,7 +717,7 @@
 
   <!-- This point type indexes the coordinates as separate fields (subFields)
     If subFieldType is defined, it references a type, and a dynamic field
-    definition is created matching *___<typename>.  Alternately, if 
+    definition is created matching *___<typename>.  Alternately, if
     subFieldSuffix is defined, that is used to create the subFields.
     Example: if subFieldType="double", then the coordinates would be
       indexed in fields myloc_0___double,myloc_1___double.
@@ -771,24 +731,9 @@
   <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
   <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
 
-  <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
-    For more information about this and other Spatial fields new to Solr 4, see:
-    http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
-  -->
   <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
              geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>
 
-  <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
-       Parameters:
-         defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
-         precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount
-         providerClass:   Lets you plug in other exchange provider backend:
-                          solr.FileExchangeRateProvider is the default and takes one parameter:
-                            currencyConfig: name of an xml file holding exchange rates
-                          solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
-                            ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
-                            refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
-  -->
   <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD"
              currencyConfig="currency.xml"/>
 
@@ -796,16 +741,4 @@
   <!-- some examples for different languages (generally ordered by ISO code) -->
   <!-- REMOVED.  these reference things not in the test config, like lang/stopwords_en.txt -->
 
-
-  <!-- Similarity is the scoring routine for each document vs. a query.
-       A custom Similarity or SimilarityFactory may be specified here, but 
-       the default is fine for most applications.  
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
-    -->
-  <!--
-     <similarity class="com.example.solr.CustomSimilarityFactory">
-       <str name="paramkey">param value</str>
-     </similarity>
-    -->
-
 </schema>
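The date math syntax referenced in the schema comments above (e.g. NOW/DAY+6MONTHS+3DAYS) is accepted not only in queries but also as input for date-typed fields. A hedged sketch, assuming illustrative field names ending in the *_dt dynamic-field suffix:

```xml
<!-- Hypothetical update document using Solr date math as field values:
     NOW/HOUR rounds down to the start of the current hour; NOW-1DAY is 24 hours ago. -->
<add>
  <doc>
    <field name="id">doc-2</field>
    <field name="last_seen_dt">NOW/HOUR</field>
    <field name="created_dt">NOW-1DAY</field>
  </doc>
</add>
```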
diff --git a/solr/licenses/README.committers.txt b/solr/licenses/README.committers.txt
index 6580750..17dab04 100644
--- a/solr/licenses/README.committers.txt
+++ b/solr/licenses/README.committers.txt
@@ -5,25 +5,25 @@
 
 Under no circumstances should any new files be added to this directory
 without careful consideration of how LICENSE.txt and NOTICE.txt in the
-parent directory should be updated to reflect the addition. 
+parent directory should be updated to reflect the addition.
 
 Even if a Jar being added is from another Apache project, it should be
 mentioned in NOTICE.txt, and may have additional Attribution or
 Licensing information that also needs to be added to the appropriate
-file.  
+file.
 
 ---
 
 If an existing Jar is replaced with a newer version, the same
 consideration should be given as if it were an entirely new file:
 verify that no updates need to be made to LICENSE.txt or NOTICE.txt
-based on changes in the terms of the dependency being updated. 
+based on changes in the terms of the dependency being updated.
 
 ---
 
-When adding a jar or updating an existing jar, be sure to include/update 
+When adding a jar or updating an existing jar, be sure to include/update
 xyz-LICENSE.txt and if applicable, xyz-NOTICE.txt.  These files often
-change across versions of the dependency, so when updating be SURE to 
+change across versions of the dependency, so when updating be SURE to
 update them to the recent version. This also allows others to see
 what changed with respect to licensing in the commit diff.
 
@@ -38,5 +38,3 @@ along with the specific version information.  If the version is a
 When upgrading Lucene-Java Jars, remember to generate new Analysis
 factories for any new Tokenizers or TokenFilters.  See the wiki for
 details...
-
-  http://wiki.apache.org/solr/CommitterInfo
diff --git a/solr/server/solr/README.md b/solr/server/solr/README.md
index e556760..cf6a3f1 100644
--- a/solr/server/solr/README.md
+++ b/solr/server/solr/README.md
@@ -51,7 +51,7 @@ Although solr.xml can be configured to look for SolrCore Instance Directories
 in any path, simple sub-directories of the Solr Home Dir using relative paths
 are common for many installations.
 
-### Core Discovery 
+### Core Discovery
 
 During startup, Solr will scan sub-directories of Solr home looking for
 a specific file named core.properties. If core.properties is found in a
@@ -62,7 +62,7 @@ example/solr/collection1/core.properties
 
 For more information about core discovery, please see:
 
-https://lucene.apache.org/solr/guide/defining-core-properties.html
+https://lucene.apache.org/solr/guide/core-discovery.html
 
 ### A Shared 'lib' Directory
 
diff --git a/solr/server/solr/configsets/_default/conf/managed-schema.xml b/solr/server/solr/configsets/_default/conf/managed-schema.xml
index 5be18eb..6b5796c 100644
--- a/solr/server/solr/configsets/_default/conf/managed-schema.xml
+++ b/solr/server/solr/configsets/_default/conf/managed-schema.xml
@@ -23,7 +23,7 @@
 
 
  For more information, on how to customize this file, please see
- http://lucene.apache.org/solr/guide/documents-fields-and-schema-design.html
+ http://lucene.apache.org/solr/guide/fields-and-schema-design.html
 
  PERFORMANCE NOTE: this schema includes many optional features and should not
  be used for benchmarking.  To improve performance one could
@@ -1020,7 +1020,7 @@
     <!-- Similarity is the scoring routine for each document vs. a query.
        A custom Similarity or SimilarityFactory may be specified here, but
        the default is fine for most applications.
-       For more info: http://lucene.apache.org/solr/guide/other-schema-elements.html#OtherSchemaElements-Similarity
+       For more info: http://lucene.apache.org/solr/guide/schema-elements.html#similarity
     -->
     <!--
      <similarity class="com.example.solr.CustomSimilarityFactory">
diff --git a/solr/server/solr/configsets/_default/conf/solrconfig.xml b/solr/server/solr/configsets/_default/conf/solrconfig.xml
index 3d03574..59a85ae 100644
--- a/solr/server/solr/configsets/_default/conf/solrconfig.xml
+++ b/solr/server/solr/configsets/_default/conf/solrconfig.xml
@@ -18,7 +18,7 @@
 
 <!--
      For more details about configurations options that may appear in
-     this file, see http://wiki.apache.org/solr/SolrConfigXml.
+     this file, see https://solr.apache.org/guide/configuring-solrconfig-xml.html.
 -->
 <config>
   <!-- In all configuration below, a prefix of "solr." for class names
@@ -202,7 +202,7 @@
                    'simple' is the default
 
          More details on the nuances of each LockFactory...
-         http://wiki.apache.org/lucene-java/AvailableLockFactories
+         https://cwiki.apache.org/confluence/display/lucene/AvailableLockFactories
     -->
     <lockType>${solr.lock.type:native}</lockType>
 
@@ -255,7 +255,7 @@
        parameters. Remove this to disable exposing Solr configuration
        and statistics to JMX.
 
-       For more details see http://wiki.apache.org/solr/SolrJmx
+       For more details see https://solr.apache.org/guide/jmx-with-solr.html
     -->
   <jmx />
   <!-- If you want to connect to a particular server, specify the
@@ -293,7 +293,7 @@
          Instead of enabling autoCommit, consider using "commitWithin"
          when adding documents.
 
-         http://wiki.apache.org/solr/UpdateXmlMessages
+         https://solr.apache.org/guide/indexing-with-update-handlers.html
 
          maxDocs - Maximum number of documents to add since the last
                    commit before automatically triggering a new commit.
@@ -368,11 +368,11 @@
   <query>
 
     <!-- Maximum number of clauses allowed when parsing a boolean query string.
-         
+
          This limit only impacts boolean queries specified by a user as part of a query string,
          and provides per-collection controls on how complex user specified boolean queries can
          be.  Query strings that specify more clauses then this will result in an error.
-         
+
          If this per-collection limit is greater then the global `maxBooleanClauses` limit
          specified in `solr.xml`, it will have no effect, as that setting also limits the size
          of user specified boolean queries.
@@ -1071,7 +1071,7 @@
 
   <!-- Response Writers
 
-       http://wiki.apache.org/solr/QueryResponseWriter
+       https://solr.apache.org/guide/response-writers.html
 
        Request responses will be written using the writer specified by
        the 'wt' request parameter matching the name of a registered
@@ -1121,7 +1121,7 @@
 
   <!-- Function Parsers
 
-       http://wiki.apache.org/solr/FunctionQuery
+       https://solr.apache.org/guide/function-queries.html
 
        Multiple ValueSourceParsers can be registered by name, and then
        used as function names when using the "func" QParser.
@@ -1134,7 +1134,7 @@
 
 
   <!-- Document Transformers
-       http://wiki.apache.org/solr/DocTransformers
+       https://solr.apache.org/guide/document-transformers.html
     -->
   <!--
      Could be something like:
diff --git a/solr/server/solr/configsets/sample_techproducts_configs/conf/elevate.xml b/solr/server/solr/configsets/sample_techproducts_configs/conf/elevate.xml
index 2c09ebe..b4072d0 100644
--- a/solr/server/solr/configsets/sample_techproducts_configs/conf/elevate.xml
+++ b/solr/server/solr/configsets/sample_techproducts_configs/conf/elevate.xml
@@ -20,7 +20,7 @@
      loaded once at startup.  If it is found in Solr's data
      directory, it will be re-loaded every commit.
 
-   See http://wiki.apache.org/solr/QueryElevationComponent for more info
+   See https://solr.apache.org/guide/query-elevation-component.html for more info
 
 -->
 <elevate>
@@ -32,9 +32,9 @@
   </query>
 
 for use with techproducts example
- 
+
   <query text="ipod">
-    <doc id="MA147LL/A" />  put the actual ipod at the top 
+    <doc id="MA147LL/A" />  put the actual ipod at the top
     <doc id="IW-02" exclude="true" /> exclude this cable
   </query>
 -->
diff --git a/solr/server/solr/configsets/sample_techproducts_configs/conf/managed-schema b/solr/server/solr/configsets/sample_techproducts_configs/conf/managed-schema
index 81de184..2aec064 100644
--- a/solr/server/solr/configsets/sample_techproducts_configs/conf/managed-schema
+++ b/solr/server/solr/configsets/sample_techproducts_configs/conf/managed-schema
@@ -16,10 +16,10 @@
  limitations under the License.
 -->
 
-<!--  
+<!--
  This is the Solr schema file. This file should be named "schema.xml" and
  should be in the conf directory under the solr home
- (i.e. ./solr/conf/schema.xml by default) 
+ (i.e. ./solr/conf/schema.xml by default)
  or located where the classloader for the Solr webapp can find it.
 
  This example schema is the recommended starting point for users.
@@ -47,19 +47,19 @@
 
 <schema name="example" version="1.6">
   <!-- attribute "name" is the name of this schema and is only used for display purposes.
-       version="x.y" is Solr's version number for the schema syntax and 
+       version="x.y" is Solr's version number for the schema syntax and
        semantics.  It should not normally be changed by applications.
 
-       1.0: multiValued attribute did not exist, all fields are multiValued 
+       1.0: multiValued attribute did not exist, all fields are multiValued
             by nature
-       1.1: multiValued attribute introduced, false by default 
-       1.2: omitTermFreqAndPositions attribute introduced, true by default 
+       1.1: multiValued attribute introduced, false by default
+       1.2: omitTermFreqAndPositions attribute introduced, true by default
             except for text fields.
        1.3: removed optional field compress feature
        1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
-            behavior when a single string produces multiple tokens.  Defaults 
+            behavior when a single string produces multiple tokens.  Defaults
             to off for version >= 1.4
-       1.5: omitNorms defaults to true for primitive field types 
+       1.5: omitNorms defaults to true for primitive field types
             (int, float, boolean, string...)
        1.6: useDocValuesAsStored defaults to true.
      -->
@@ -67,15 +67,15 @@
 
    <!-- Valid attributes for fields:
      name: mandatory - the name for the field
-     type: mandatory - the name of a field type from the 
+     type: mandatory - the name of a field type from the
        fieldTypes
      indexed: true if this field should be indexed (searchable or sortable)
      stored: true if this field should be retrievable
      docValues: true if this field should have doc values. Doc values are
-       useful (required, if you are using *Point fields) for faceting, 
-       grouping, sorting and function queries. Doc values will make the index 
-       faster to load, more NRT-friendly and more memory-efficient. 
-       They however come with some limitations: they are currently only 
+       useful (required, if you are using *Point fields) for faceting,
+       grouping, sorting and function queries. Doc values will make the index
+       faster to load, more NRT-friendly and more memory-efficient.
+       They however come with some limitations: they are currently only
        supported by StrField, UUIDField, all *PointFields, and depending
        on the field type, they might require the field to be single-valued,
        be required or have a default value (check the documentation
@@ -90,9 +90,9 @@
        given field.
        When using MoreLikeThis, fields used for similarity should be
        stored for best performance.
-     termPositions: Store position information with the term vector.  
+     termPositions: Store position information with the term vector.
        This will increase storage costs.
-     termOffsets: Store offset information with the term vector. This 
+     termOffsets: Store offset information with the term vector. This
        will increase storage costs.
      termPayloads: Store payload information with the term vector. This
        will increase storage costs.
@@ -111,25 +111,25 @@
 
    <!-- If you remove this field, you must _also_ disable the update log in solrconfig.xml
       or Solr won't start. _version_ and update log are required for SolrCloud
-   --> 
+   -->
    <!-- doc values are enabled by default for primitive types such as long so we don't index the version field  -->
    <field name="_version_" type="plong" indexed="false" stored="false"/>
-   
+
    <!-- points to the root document of a block of nested documents. Required for nested
       document support, may be removed otherwise
    -->
    <field name="_root_" type="string" indexed="true" stored="false" docValues="false" />
 
    <!-- Only remove the "id" field if you have a very good reason to. While not strictly
-     required, it is highly recommended. A <uniqueKey> is present in almost all Solr 
+     required, it is highly recommended. A <uniqueKey> is present in almost all Solr
      installations. See the <uniqueKey> declaration below where <uniqueKey> is set to "id".
-     Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely 
+     Do NOT change the type and apply index-time analysis to the <uniqueKey> as it will likely
      make routing in SolrCloud and document replacement in general fail. Limited _query_ time
      analysis is possible as long as the indexing process is guaranteed to index the term
      in a compatible way. Any analysis applied to the <uniqueKey> should _not_ produce multiple
      tokens
-   -->   
-   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" /> 
+   -->
+   <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
 
    <field name="pre" type="preanalyzed" indexed="true" stored="true"/>
    <field name="sku" type="text_en_splitting_tight" indexed="true" stored="true" omitNorms="true"/>
@@ -173,7 +173,7 @@
         using copyField below. This is to save space. Use this field for returning and
         highlighting document content. Use the "text" field to search the content. -->
    <field name="content" type="text_general" indexed="false" stored="true" multiValued="true"/>
-   
+
 
    <!-- catchall field, containing all other searchable text fields (implemented
         via copyField further on in this schema  -->
@@ -191,11 +191,11 @@
 
 
    <!-- Dynamic field definitions allow using convention over configuration
-       for fields via the specification of patterns to match field names. 
+       for fields via the specification of patterns to match field names.
        EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
        RESTRICTION: the glob-like pattern in the name attribute must have
        a "*" only at the start or the end.  -->
-   
+
    <dynamicField name="*_i"  type="pint"    indexed="true"  stored="true"/>
    <dynamicField name="*_is" type="pint"    indexed="true"  stored="true"  multiValued="true"/>
    <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
@@ -225,14 +225,14 @@
 
    <dynamicField name="random_*" type="random" />
 
-   <!-- uncomment the following to ignore any fields that don't already match an existing 
-        field name or dynamic field, rather than reporting them as an error. 
-        alternately, change the type="ignored" to some other type e.g. "text" if you want 
-        unknown fields indexed and/or stored by default --> 
+   <!-- uncomment the following to ignore any fields that don't already match an existing
+        field name or dynamic field, rather than reporting them as an error.
+        alternately, change the type="ignored" to some other type e.g. "text" if you want
+        unknown fields indexed and/or stored by default -->
    <!--dynamicField name="*" type="ignored" multiValued="true" /-->
 
 
- <!-- Field to use to determine and enforce document uniqueness. 
+ <!-- Field to use to determine and enforce document uniqueness.
       Unless this field is marked with required="false", it will be a required field
    -->
  <uniqueKey>id</uniqueKey>
@@ -260,21 +260,21 @@
    <copyField source="content_type" dest="text"/>
    <copyField source="resourcename" dest="text"/>
    <copyField source="url" dest="text"/>
-   
+
    <!-- Create a string version of author for faceting -->
    <copyField source="author" dest="author_s"/>
-  
-   <!-- Above, multiple source fields are copied to the [text] field. 
-    Another way to map multiple source fields to the same 
-    destination field is to use the dynamic field syntax. 
+
+   <!-- Above, multiple source fields are copied to the [text] field.
+    Another way to map multiple source fields to the same
+    destination field is to use the dynamic field syntax.
     copyField also supports a maxChars to copy setting.  -->
-     
+
    <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
 
    <!-- copy name to alphaNameSort, a field designed for sorting by name -->
    <!-- <copyField source="name" dest="alphaNameSort"/> -->
- 
-  
+
+
     <!-- field type definitions. The "name" attribute is
        just a label to be used by field definitions.  The "class"
        attribute and any other attributes determine the real
@@ -302,7 +302,7 @@
        - If sortMissingLast="false" and sortMissingFirst="false" (the default),
          then default lucene sorting will be used which places docs without the
          field first in an ascending sort and last in a descending sort.
-    -->    
+    -->
 
     <!--
       Numeric field types that index values using KD-trees.
@@ -312,7 +312,7 @@
     <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
     <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
     <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>
-    
+
     <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
     <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
     <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
@@ -320,7 +320,7 @@
 
     <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
          is a more restricted form of the canonical representation of dateTime
-         http://www.w3.org/TR/xmlschema-2/#dateTime    
+         http://www.w3.org/TR/xmlschema-2/#dateTime
          The trailing "Z" designates UTC time and is mandatory.
          Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
          All other components are mandatory.
@@ -335,10 +335,10 @@
                NOW/DAY+6MONTHS+3DAYS
                   ... 6 months and 3 days in the future from the start of
                       the current day
-                      
+
          Consult the DatePointField javadocs for more information.
       -->
-      
+
     <!-- KD-tree versions of date fields -->
     <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
     <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>
@@ -348,11 +348,11 @@
 
     <!-- The "RandomSortField" is not used to store or search any
          data.  You can declare fields of this type it in your schema
-         to generate pseudo-random orderings of your docs for sorting 
+         to generate pseudo-random orderings of your docs for sorting
          or function purposes.  The ordering is generated based on the field
          name and the version of the index. As long as the index version
          remains unchanged, and the same field name is reused,
-         the ordering of the docs will be consistent.  
+         the ordering of the docs will be consistent.
          If you want different pseudo-random orderings of documents,
          for the same version of the index, use a dynamicField and
          change the field name in the request.
@@ -423,11 +423,11 @@
         <filter name="lowercase"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- SortableTextField generaly functions exactly like TextField,
          except that it supports, and by default uses, docValues for sorting (or faceting)
          on the first 1024 characters of the original field values (which is configurable).
-         
+
          This makes it a bit more useful then TextField in many situations, but the trade-off
          is that it takes up more space on disk; which is why it's not used in place of TextField
          for every fieldType in this _default schema.
@@ -606,13 +606,13 @@
         <filter name="trim" />
         <!-- The PatternReplaceFilter gives you the flexibility to use
              Java Regular expression to replace any sequence of characters
-             matching a pattern with an arbitrary replacement string, 
+             matching a pattern with an arbitrary replacement string,
              which may include back references to portions of the original
              string matched by the pattern.
-             
+
              See the Java Regular Expression documentation for more
              information on pattern and replacement string syntax.
-             
+
              http://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html
           -->
         <filter name="patternReplace"
@@ -620,7 +620,7 @@
         />
       </analyzer>
     </fieldType>
-    
+
     <fieldType name="phonetic" stored="false" indexed="true" class="solr.TextField" >
       <analyzer>
         <tokenizer name="standard"/>
@@ -634,7 +634,7 @@
         <!--
         The DelimitedPayloadTokenFilter can put payloads on tokens... for example,
         a token of "foo|1.4"  would be indexed as "foo" with a payload of 1.4f
-        Attributes of the DelimitedPayloadTokenFilterFactory : 
+        Attributes of the DelimitedPayloadTokenFilterFactory :
          "delimiter" - a one character delimiter. Default is | (pipe)
    "encoder" - how to encode the following value into a playload
       float -> org.apache.lucene.analysis.payloads.FloatEncoder,
@@ -654,7 +654,7 @@
       </analyzer>
     </fieldType>
 
-    <!-- 
+    <!--
       Example of using PathHierarchyTokenizerFactory at index time, so
       queries for paths match documents at that path, or in descendent paths
     -->
@@ -666,7 +666,7 @@
   <tokenizer name="keyword" />
       </analyzer>
     </fieldType>
-    <!-- 
+    <!--
       Example of using PathHierarchyTokenizerFactory at query time, so
       queries for paths match documents at that path, or in ancestor paths
     -->
@@ -680,12 +680,12 @@
     </fieldType>
 
     <!-- since fields of this type are by default not stored or indexed,
-         any data added to them will be ignored outright.  --> 
+         any data added to them will be ignored outright.  -->
     <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
 
     <!-- This point type indexes the coordinates as separate fields (subFields)
       If subFieldType is defined, it references a type, and a dynamic field
-      definition is created matching *___<typename>.  Alternately, if 
+      definition is created matching *___<typename>.  Alternately, if
       subFieldSuffix is defined, that is used to create the subFields.
       Example: if subFieldType="double", then the coordinates would be
         indexed in fields myloc_0___double,myloc_1___double.
@@ -714,7 +714,7 @@
 
    <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
         Parameters:
-          amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field. 
+          amountLongSuffix: Required. Refers to a dynamic field for the raw amount sub-field.
                               The dynamic field must have a field type that extends LongValueFieldType.
                               Note: If you expect to use Atomic Updates, this dynamic field may not be stored.
           codeStrSuffix:    Required. Refers to a dynamic field for the currency code sub-field.
@@ -736,7 +736,7 @@
 
     <!-- Arabic -->
     <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- for any non-arabic -->
         <filter name="lowercase"/>
@@ -749,26 +749,26 @@
 
     <!-- Bulgarian -->
     <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
-        <tokenizer name="standard"/> 
+      <analyzer>
+        <tokenizer name="standard"/>
         <filter name="lowercase"/>
-        <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
+        <filter name="stop" ignoreCase="true" words="lang/stopwords_bg.txt" />
         <filter name="bulgarianStem"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Catalan -->
     <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- removes l', etc -->
         <filter name="elision" ignoreCase="true" articles="lang/contractions_ca.txt"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_ca.txt" />
-        <filter name="snowballPorter" language="Catalan"/>       
+        <filter name="snowballPorter" language="Catalan"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
     <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
       <analyzer>
@@ -795,27 +795,27 @@
 
     <!-- Czech -->
     <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_cz.txt" />
         <filter name="czechStem"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Danish -->
     <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
-        <filter name="snowballPorter" language="Danish"/>       
+        <filter name="snowballPorter" language="Danish"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- German -->
     <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
@@ -825,10 +825,10 @@
         <!-- more aggressive: <filter name="snowballPorter" language="German2"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Greek -->
     <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- greek specific lowercase for sigma -->
         <filter name="greekLowercase"/>
@@ -836,10 +836,10 @@
         <filter name="greekStem"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Spanish -->
     <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
@@ -860,14 +860,14 @@
 
     <!-- Basque -->
     <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_eu.txt" />
         <filter name="snowballPorter" language="Basque"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Persian -->
     <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
       <analyzer>
@@ -880,10 +880,10 @@
         <filter name="stop" ignoreCase="true" words="lang/stopwords_fa.txt" />
       </analyzer>
     </fieldType>
-    
+
     <!-- Finnish -->
     <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
@@ -891,10 +891,10 @@
         <!-- less aggressive: <filter name="finnishLightStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- French -->
     <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- removes l', etc -->
         <filter name="elision" ignoreCase="true" articles="lang/contractions_fr.txt"/>
@@ -905,10 +905,10 @@
         <!-- more aggressive: <filter name="snowballPorter" language="French"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Irish -->
     <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- removes d', etc -->
         <filter name="elision" ignoreCase="true" articles="lang/contractions_ga.txt"/>
@@ -919,10 +919,10 @@
         <filter name="snowballPorter" language="Irish"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Galician -->
     <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_gl.txt" />
@@ -930,10 +930,10 @@
         <!-- less aggressive: <filter name="galicianMinimalStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Hindi -->
     <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <!-- normalizes unicode representation -->
@@ -944,10 +944,10 @@
         <filter name="hindiStem"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Hungarian -->
     <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
@@ -955,20 +955,20 @@
         <!-- less aggressive: <filter name="hungarianLightStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Armenian -->
     <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_hy.txt" />
         <filter name="snowballPorter" language="Armenian"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Indonesian -->
     <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_id.txt" />
@@ -976,10 +976,10 @@
         <filter name="indonesianStem" stemDerivational="true"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Italian -->
     <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <!-- removes l', etc -->
         <filter name="elision" ignoreCase="true" articles="lang/contractions_it.txt"/>
@@ -989,7 +989,7 @@
         <!-- more aggressive: <filter name="snowballPorter" language="Italian"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
 
          NOTE: If you want to optimize search for precision, use default operator AND in your request
@@ -1041,7 +1041,7 @@
         <filter name="lowercase"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Korean morphological analysis -->
     <dynamicField name="*_txt_ko" type="text_ko"  indexed="true"  stored="true"/>
     <fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
@@ -1052,7 +1052,7 @@
 
           This dictionary was built with MeCab, it defines a format for the features adapted
           for the Korean language.
-          
+
           Nori also has a convenient user dictionary feature that allows overriding the statistical
           model with your own entries for segmentation, part-of-speech tags and readings without a need
           to specify weights. Notice that user dictionaries have not been subject to extensive testing.
@@ -1065,7 +1065,7 @@
         -->
         <tokenizer name="korean" decompoundMode="discard" outputUnknownUnigrams="false"/>
         <!-- Removes some part of speech stuff like EOMI (Pos.E), you can add a parameter 'tags',
-          listing the tags to remove. By default it removes: 
+          listing the tags to remove. By default it removes:
           E, IC, J, MAG, MAJ, MM, SP, SSC, SSO, SC, SE, XPN, XSA, XSN, XSV, UNA, NA, VSV
           This is basically an equivalent to stemming.
         -->
@@ -1078,17 +1078,17 @@
 
     <!-- Latvian -->
     <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_lv.txt" />
         <filter name="latvianStem"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Dutch -->
     <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
@@ -1096,10 +1096,10 @@
         <filter name="snowballPorter" language="Dutch"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Norwegian -->
     <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
@@ -1109,10 +1109,10 @@
         <!-- The "light" and "minimal" stemmers support variants: nb=Bokmål, nn=Nynorsk, no=Both -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Portuguese -->
     <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
@@ -1122,20 +1122,20 @@
         <!-- most aggressive: <filter name="portugueseStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Romanian -->
     <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_ro.txt" />
         <filter name="snowballPorter" language="Romanian"/>
       </analyzer>
     </fieldType>
-    
+
     <!-- Russian -->
     <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
@@ -1143,10 +1143,10 @@
         <!-- less aggressive: <filter name="russianLightStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Swedish -->
     <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
@@ -1154,19 +1154,19 @@
         <!-- less aggressive: <filter name="swedishLightStem"/> -->
       </analyzer>
     </fieldType>
-    
+
     <!-- Thai -->
     <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="thai"/>
         <filter name="lowercase"/>
         <filter name="stop" ignoreCase="true" words="lang/stopwords_th.txt" />
       </analyzer>
     </fieldType>
-    
+
     <!-- Turkish -->
     <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
-      <analyzer> 
+      <analyzer>
         <tokenizer name="standard"/>
         <filter name="apostrophe"/>
         <filter name="turkishLowercase"/>
@@ -1184,9 +1184,9 @@
     </fieldType>
 
   <!-- Similarity is the scoring routine for each document vs. a query.
-       A custom Similarity or SimilarityFactory may be specified here, but 
-       the default is fine for most applications.  
-       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
+       A custom Similarity or SimilarityFactory may be specified here, but
+       the default is fine for most applications.
+       For more info: https://solr.apache.org/guide/schema-elements.html#similarity
     -->
   <!--
      <similarity class="com.example.solr.CustomSimilarityFactory">
diff --git a/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml b/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
index 8a5b255..a09070b 100644
--- a/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
+++ b/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
@@ -18,7 +18,7 @@
 
 <!--
      For more details about configurations options that may appear in
-     this file, see http://wiki.apache.org/solr/SolrConfigXml.
+     this file, see https://solr.apache.org/guide/configuring-solrconfig-xml.html.
 -->
 <config>
   <!-- In all configuration below, a prefix of "solr." for class names
@@ -214,7 +214,7 @@
                    'simple' is the default
 
          More details on the nuances of each LockFactory...
-         http://wiki.apache.org/lucene-java/AvailableLockFactories
+         https://cwiki.apache.org/confluence/display/lucene/AvailableLockFactories
     -->
     <lockType>${solr.lock.type:native}</lockType>
 
@@ -268,7 +268,7 @@
        parameters. Remove this to disable exposing Solr configuration
        and statistics to JMX.
 
-       For more details see http://wiki.apache.org/solr/SolrJmx
+       For more details see https://solr.apache.org/guide/jmx-with-solr.html
     -->
   <jmx />
   <!-- If you want to connect to a particular server, specify the
@@ -306,7 +306,7 @@
          Instead of enabling autoCommit, consider using "commitWithin"
          when adding documents.
 
-         http://wiki.apache.org/solr/UpdateXmlMessages
+         https://solr.apache.org/guide/indexing-with-update-handlers.html
 
          maxDocs - Maximum number of documents to add since the last
                    commit before automatically triggering a new commit.
@@ -811,7 +811,7 @@
 
   <!-- Solr Cell Update Request Handler
 
-       https://lucene.apache.org/solr/guide/uploading-data-with-solr-cell-using-apache-tika.html
+       https://lucene.apache.org/solr/guide/indexing-with-tika.html
 
     -->
   <requestHandler name="/update/extract"
@@ -830,7 +830,7 @@
 
   <!-- XSLT Update Request Handler
 
-       https://lucene.apache.org/solr/guide/uploading-data-with-index-handlers.html#using-xslt-to-transform-xml-index-updates
+       https://lucene.apache.org/solr/guide/indexing-with-update-handlers.html#using-xslt-to-transform-xml-index-updates
 
     -->
   <requestHandler name="/update/xslt"
@@ -1105,7 +1105,7 @@
 
   <!-- Query Elevation Component
 
-       https://lucene.apache.org/solr/guide/the-query-elevation-component.html
+       https://lucene.apache.org/solr/guide/query-elevation-component.html
 
        a search component that enables you to configure the top
        results for a given query regardless of the normal lucene
@@ -1270,7 +1270,7 @@
        making this example suitable for detecting languages form full-text
        rich documents injected via ExtractingRequestHandler.
 
-       See more about langId at https://lucene.apache.org/solr/guide/detecting-languages-during-indexing.html
+       See more about langId at https://lucene.apache.org/solr/guide/language-detection.html
     -->
     <!--
      <updateRequestProcessorChain name="langid">
@@ -1305,7 +1305,7 @@
 
   <!-- Response Writers
 
-       http://wiki.apache.org/solr/QueryResponseWriter
+       https://solr.apache.org/guide/response-writers.html
 
        Request responses will be written using the writer specified by
        the 'wt' request parameter matching the name of a registered
@@ -1357,7 +1357,7 @@
 
   <!-- Function Parsers
 
-       http://wiki.apache.org/solr/FunctionQuery
+       https://solr.apache.org/guide/function-queries.html
 
        Multiple ValueSourceParsers can be registered by name, and then
        used as function names when using the "func" QParser.
@@ -1381,7 +1381,7 @@
   <queryParser enable="${solr.ltr.enabled:false}" name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin"/>
 
   <!-- Document Transformers
-       http://wiki.apache.org/solr/DocTransformers
+       https://solr.apache.org/guide/document-transformers.html
     -->
   <!--
      Could be something like:
diff --git a/solr/solr-ref-guide/src/index-segments-merging.adoc b/solr/solr-ref-guide/src/index-segments-merging.adoc
index 34a3bdc..839400f 100644
--- a/solr/solr-ref-guide/src/index-segments-merging.adoc
+++ b/solr/solr-ref-guide/src/index-segments-merging.adoc
@@ -272,7 +272,7 @@ Many <<Merging Index Segments,Merge Policy>> implementations support `noCFSRatio
 The Segments Info screen in the Admin UI lets you see a visualization of the various segments in the underlying Lucene index for this core, with information about the size of each segment – both bytes and in number of documents – as well as other basic metadata about those segments.
 Most visible is the number of deleted documents, but you can hover your mouse over the segments to see additional numeric details.
 
-image::images/segments-info/segments_info.png[image,width=486,height=250]
+image::images/index-segments-merging/segments_info.png[image,width=486,height=250]
 
 This information may be useful for people to help make decisions about the optimal <<merging-index-segments,merge settings>> for their data.
 
diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.java b/solr/solrj/src/java/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.java
index 6b387c0..762b1e1 100644
--- a/solr/solrj/src/java/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.java
+++ b/solr/solrj/src/java/org/apache/solr/client/solrj/request/ContentStreamUpdateRequest.java
@@ -32,9 +32,9 @@ import org.apache.solr.common.util.ContentStreamBase;
  * Basic functionality to upload a File or {@link org.apache.solr.common.util.ContentStream} to a Solr Cell or some
  * other handler that takes ContentStreams (CSV)
  * <p>
- * See http://wiki.apache.org/solr/ExtractingRequestHandler<br>
- * See http://wiki.apache.org/solr/UpdateCSV
- * 
+ * See https://solr.apache.org/guide/indexing-with-tika.html<br>
+ * See https://solr.apache.org/guide/indexing-with-update-handlers.html
+ *
  *
  **/
 public class ContentStreamUpdateRequest extends AbstractUpdateRequest {
@@ -94,5 +94,5 @@ public class ContentStreamUpdateRequest extends AbstractUpdateRequest {
   public void addContentStream(ContentStream contentStream){
     contentStreams.add(contentStream);
   }
-  
+
 }
diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/rule/package-info.java b/solr/solrj/src/java/org/apache/solr/common/cloud/rule/package-info.java
index c91d889..2ba021f 100644
--- a/solr/solrj/src/java/org/apache/solr/common/cloud/rule/package-info.java
+++ b/solr/solrj/src/java/org/apache/solr/common/cloud/rule/package-info.java
@@ -14,10 +14,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
- 
-/** 
- * Classes for managing Replica placement strategy when operating in <a href="http://wiki.apache.org/solr/SolrCloud">SolrCloud</a> mode.
+
+/**
+ * Classes for managing Replica placement strategy when operating in SolrCloud mode.
  */
 package org.apache.solr.common.cloud.rule;
-
-
diff --git a/solr/solrj/src/java/org/apache/solr/common/params/QueryElevationParams.java b/solr/solrj/src/java/org/apache/solr/common/params/QueryElevationParams.java
index e77408e..d2fb35a 100644
--- a/solr/solrj/src/java/org/apache/solr/common/params/QueryElevationParams.java
+++ b/solr/solrj/src/java/org/apache/solr/common/params/QueryElevationParams.java
@@ -31,7 +31,7 @@ public interface QueryElevationParams {
    * The name of the field that editorial results will be written out as when using the QueryElevationComponent, which
    * automatically configures the EditorialMarkerFactory.  The default name is "elevated"
    * <br>
-   * See http://wiki.apache.org/solr/DocTransformers
+   * See https://solr.apache.org/guide/query-elevation-component.html
    */
   String EDITORIAL_MARKER_FIELD_NAME = "editorialMarkerFieldName";
 
@@ -40,7 +40,7 @@ public interface QueryElevationParams {
    * automatically configures the EditorialMarkerFactory.  The default name is "excluded".  This is only used
    * when {@link #MARK_EXCLUDES} is set to true at query time.
    * <br>
-   * See http://wiki.apache.org/solr/DocTransformers
+   * See https://solr.apache.org/guide/query-elevation-component.html
    */
   String EXCLUDE_MARKER_FIELD_NAME = "excludeMarkerFieldName";
 
@@ -62,4 +62,4 @@ public interface QueryElevationParams {
    */
   String ELEVATE_ONLY_DOCS_MATCHING_QUERY = "elevateOnlyDocsMatchingQuery";
 
-}
\ No newline at end of file
+}
diff --git a/solr/solrj/src/resources/apispec/collections.collection.shards.Commands.json b/solr/solrj/src/resources/apispec/collections.collection.shards.Commands.json
index b82bf8a..0d6eb52 100644
--- a/solr/solrj/src/resources/apispec/collections.collection.shards.Commands.json
+++ b/solr/solrj/src/resources/apispec/collections.collection.shards.Commands.json
@@ -30,7 +30,7 @@
         },
         "coreProperties":{
           "type":"object",
-          "documentation": "https://lucene.apache.org/solr/guide/defining-core-properties.html",
+          "documentation": "https://solr.apache.org/guide/core-discovery.html",
           "description": "Allows adding core.properties for the collection. Some examples of core properties you may want to modify include the config set, the node name, the data directory, among others.",
           "additionalProperties":true
         },
@@ -61,7 +61,7 @@
         },
         "coreProperties": {
           "type": "object",
-          "documentation": "https://lucene.apache.org/solr/guide/defining-core-properties.html",
+          "documentation": "https://solr.apache.org/guide/core-discovery.html",
           "description": "Allows adding core.properties for the collection. Some examples of core properties you may want to modify include the config set, the node name, the data directory, among others.",
           "additionalProperties": true
         },
@@ -104,7 +104,7 @@
         },
         "coreProperties": {
           "type": "object",
-          "documentation": "https://lucene.apache.org/solr/guide/defining-core-properties.html",
+          "documentation": "https://solr.apache.org/guide/core-discovery.html",
           "description": "Allows adding core.properties for the collection. Some examples of core properties you may want to modify include the config set and the node name, among others.",
           "additionalProperties": true
         },
diff --git a/solr/solrj/src/resources/apispec/core.Update.json b/solr/solrj/src/resources/apispec/core.Update.json
index ad42cb5..e04859e 100644
--- a/solr/solrj/src/resources/apispec/core.Update.json
+++ b/solr/solrj/src/resources/apispec/core.Update.json
@@ -1,5 +1,5 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/uploading-data-with-index-handlers.html",
+  "documentation": "https://lucene.apache.org/solr/guide/indexing-with-update-handlers.html",
   "methods": [
     "POST"
   ],
diff --git a/solr/solrj/src/resources/apispec/cores.Commands.json b/solr/solrj/src/resources/apispec/cores.Commands.json
index 13f349e..29777de 100644
--- a/solr/solrj/src/resources/apispec/cores.Commands.json
+++ b/solr/solrj/src/resources/apispec/cores.Commands.json
@@ -60,7 +60,7 @@
         },
         "props": {
           "type": "object",
-          "documentation": "https://lucene.apache.org/solr/guide/defining-core-properties.html",
+          "documentation": "https://solr.apache.org/guide/core-discovery.html",
           "description": "Allows adding core.properties for the collection.",
           "additionalProperties": true
         },
diff --git a/solr/solrj/src/resources/apispec/cores.Status.json b/solr/solrj/src/resources/apispec/cores.Status.json
index e674f14..f62d203 100644
--- a/solr/solrj/src/resources/apispec/cores.Status.json
+++ b/solr/solrj/src/resources/apispec/cores.Status.json
@@ -1,5 +1,5 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-status",
+  "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-status",
   "description": "Provides status and other information about the status of each core. Individual cores can be requested by core name.",
   "methods": [
     "GET"
diff --git a/solr/solrj/src/resources/apispec/cores.core.Commands.json b/solr/solrj/src/resources/apispec/cores.core.Commands.json
index 7cdbe537f..34a70d6 100644
--- a/solr/solrj/src/resources/apispec/cores.core.Commands.json
+++ b/solr/solrj/src/resources/apispec/cores.core.Commands.json
@@ -1,5 +1,5 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html",
+  "documentation": "https://solr.apache.org/guide/coreadmin-api.html",
   "description": "Actions that are peformed on individual cores, such as reloading, swapping cores, renaming, and others.",
   "methods": [
     "POST"
@@ -12,12 +12,12 @@
   "commands": {
     "reload": {
       "type":"object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-reload",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-reload",
       "description": "Reloads a core. This is useful when you have made changes on disk such as editing the schema or solrconfig.xml files. Most APIs reload cores automatically, so this should not be necessary if changes were made with those APIs."
     },
     "swap": {
       "type":"object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-swap",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-swap",
       "description": "Swaps the names of two existing Solr cores. This can be used to swap new content into production. The former core can be swapped back if necessary. Using this API is not supported in SolrCloud mode.",
       "properties": {
         "with": {
@@ -35,7 +35,7 @@
     },
     "rename": {
       "type": "object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-rename",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-rename",
       "description": "Change the name of a core.",
       "properties": {
         "to": {
@@ -53,7 +53,7 @@
     },
     "unload": {
       "type": "object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-unload",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-unload",
       "description": "Removes a core. Active requests would continue to be processed, but new requests will not be sent to the new core. If a core is registered under more than one name, only the name given in the request is removed.",
       "properties": {
         "deleteIndex": {
@@ -79,7 +79,7 @@
     },
     "merge-indexes": {
       "type":"object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-mergeindexes",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-mergeindexes",
       "description":"Merges one or more indexes to another index. The indexes must have completed commits, and should be locked against writes until the merge is complete to avoid index corruption. The target core (which is the core that should be used as the endpoint for this command) must exist before using this command. A commit should also be performed on this core after the merge is complete.",
       "properties": {
         "indexDir": {
@@ -105,7 +105,7 @@
     "split":  { "#include": "cores.core.Commands.split"},
     "request-recovery": {
       "type":"object",
-      "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-requestrecovery",
+      "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-requestrecovery",
       "description": "Manually asks a core to recover by synching with a leader. It may help SolrCloud clusters where a node refuses to come back up. However, it is considered an expert-level command, and should be used very carefully."
     },
     "force-prepare-for-leadership": {
diff --git a/solr/solrj/src/resources/apispec/cores.core.Commands.split.json b/solr/solrj/src/resources/apispec/cores.core.Commands.split.json
index 5da6c6e..e20a0e5 100644
--- a/solr/solrj/src/resources/apispec/cores.core.Commands.split.json
+++ b/solr/solrj/src/resources/apispec/cores.core.Commands.split.json
@@ -1,5 +1,5 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/coreadmin-api.html#coreadmin-split",
+  "documentation": "https://solr.apache.org/guide/coreadmin-api.html#coreadmin-split",
   "description": "Allows splitting an index into two or more new indexes.",
   "type": "object",
   "properties": {
diff --git a/solr/solrj/src/resources/apispec/metrics.history.json b/solr/solrj/src/resources/apispec/metrics.history.json
index 975d775..1d1a376 100644
--- a/solr/solrj/src/resources/apispec/metrics.history.json
+++ b/solr/solrj/src/resources/apispec/metrics.history.json
@@ -1,5 +1,5 @@
 {
-  "documentation": "https://lucene.apache.org/solr/guide/metrics-reporting.html",
+  "documentation": "https://solr.apache.org/guide/metrics-reporting.html",
   "description": "Metrics history handler allows retrieving samples of past metrics recorded in the .system collection.",
   "methods": [
     "GET"
diff --git a/solr/webapp/web/index.html b/solr/webapp/web/index.html
index a3713a4..7e90c2b 100644
--- a/solr/webapp/web/index.html
+++ b/solr/webapp/web/index.html
@@ -246,11 +246,11 @@ limitations under the License.
 
         <ul>
 
-          <li class="documentation"><a href="http://lucene.apache.org/solr/"><span>Documentation</span></a></li>
+          <li class="documentation"><a href="https://solr.apache.org/guide/"><span>Documentation</span></a></li>
           <li class="issues"><a href="http://issues.apache.org/jira/browse/SOLR"><span>Issue Tracker</span></a></li>
           <li class="irc"><a href="http://webchat.freenode.net/?channels=#solr"><span>IRC Channel</span></a></li>
           <li class="mailinglist"><a href="http://wiki.apache.org/solr/UsingMailingLists"><span>Community forum</span></a></li>
-          <li class="wiki-query-syntax"><a href="https://lucene.apache.org/solr/guide/query-syntax-and-parsing.html"><span>Solr Query Syntax</span></a></li>
+          <li class="wiki-query-syntax"><a href="https://solr.apache.org/guide/query-syntax-and-parsers.html"><span>Solr Query Syntax</span></a></li>
 
         </ul>
 
diff --git a/solr/webapp/web/partials/login.html b/solr/webapp/web/partials/login.html
index e770453..46f15dc 100644
--- a/solr/webapp/web/partials/login.html
+++ b/solr/webapp/web/partials/login.html
@@ -62,11 +62,11 @@ limitations under the License.
 
   <div ng-show="authScheme === 'Negotiate'">
     <h1>Kerberos Authentication</h1>
-    <p>Your browser did not provide the required information to authenticate using Kerberos. 
-      Please check that your computer has a valid ticket for communicating with Solr, 
-      and that your browser is properly configured to provide that ticket when required. 
-      For more information, consult 
-      <a href="https://lucene.apache.org/solr/guide/kerberos-authentication-plugin.html">
+    <p>Your browser did not provide the required information to authenticate using Kerberos.
+      Please check that your computer has a valid ticket for communicating with Solr,
+      and that your browser is properly configured to provide that ticket when required.
+      For more information, consult
+      <a href="https://solr.apache.org/guide/kerberos-authentication-plugin.html">
         Solr's Kerberos documentation
       </a>.
     </p>
@@ -83,7 +83,7 @@ WWW-Authenticate: {{wwwAuthHeader}}</pre>
       Please check that your computer has a valid PKI certificate for communicating with Solr,
       and that your browser is properly configured to provide that certificate when required.
       For more information, consult
-      <a href="https://lucene.apache.org/solr/guide/cert-authentication-plugin.html">
+      <a href="https://solr.apache.org/guide/cert-authentication-plugin.html">
         Solr's Certificate Authentication documentation
       </a>.
     </p>
@@ -93,7 +93,7 @@ WWW-Authenticate: {{wwwAuthHeader}}</pre>
 WWW-Authenticate: {{wwwAuthHeader}}</pre>
     <hr/>
   </div>
-  
+
   <div ng-show="authScheme === 'Bearer'">
     <h1>OpenID Connect (JWT) authentication</h1>
     <div class="login-error" ng-show="statusText || authParamsError || error">
@@ -129,7 +129,7 @@ WWW-Authenticate: {{wwwAuthHeader}}</pre>
         <p>
           In order to log in to the identity provider, you need to load this page from the Solr node registered as callback node:<br/>
           {{jwtFindLoginNode()}}<br/>
-          After successful login you will be able to navigate to other nodes. 
+          After successful login you will be able to navigate to other nodes.
         </p>
         <p>
           <form name="form" ng-submit="jwtGotoLoginNode()" role="form">
@@ -157,14 +157,14 @@ WWW-Authenticate: {{wwwAuthHeader}}</pre>
     </div>
 
   </div>
-  
+
   <div ng-show="!authSchemeSupported">
     <h1>Authentication scheme not supported</h1>
 
     <div class="login-error">
       {{statusText}}
     </div>
-    
+
     <p>Some or all Solr operations are protected by an authentication scheme that is not yet supported by this Admin UI ({{authScheme}}).</p>
     <p>Solr returned an error response:
     <hr/>
diff --git a/solr/webapp/web/partials/sqlquery.html b/solr/webapp/web/partials/sqlquery.html
index 3f9f257..5af437d 100644
--- a/solr/webapp/web/partials/sqlquery.html
+++ b/solr/webapp/web/partials/sqlquery.html
@@ -23,7 +23,7 @@ limitations under the License.
       </label>
       <textarea name="stmt" ng-model="stmt" id="sqlexp"></textarea>
       <button type="submit" ng-click="doQuery()">Execute</button>
-      <span><a href="https://lucene.apache.org/solr/guide/8_8/parallel-sql-interface.html" target="_out">syntax help</a></span>
+      <span><a href="https://solr.apache.org/guide/parallel-sql-interface.html" target="_out">syntax help</a></span>
     </form>
   </div>
   <div id="sql-response">