Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/07 19:04:31 UTC

lucene-solr:jira/solr-10290: SOLR-10296: conversion, letter L; some other cleanups

Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-10290 c4b547c55 -> 7d7fb52ab


SOLR-10296: conversion, letter L; some other cleanups


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/7d7fb52a
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/7d7fb52a
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/7d7fb52a

Branch: refs/heads/jira/solr-10290
Commit: 7d7fb52ab134a1f4227cab448c5a82890ef4e855
Parents: c4b547c
Author: Cassandra Targett <ct...@apache.org>
Authored: Sun May 7 14:03:53 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Sun May 7 14:03:53 2017 -0500

----------------------------------------------------------------------
 .../src/implicit-requesthandlers.adoc           |   6 +-
 solr/solr-ref-guide/src/index.adoc              |  22 +-
 .../src/indexing-and-basic-data-operations.adoc |   2 +-
 .../src/initparams-in-solrconfig.adoc           |   2 +-
 solr/solr-ref-guide/src/language-analysis.adoc  | 297 +++++++------------
 solr/solr-ref-guide/src/learning-to-rank.adoc   | 190 ++++++------
 .../src/local-parameters-in-queries.adoc        |  15 +-
 solr/solr-ref-guide/src/logging.adoc            |   6 +-
 8 files changed, 226 insertions(+), 314 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
index 8884ba0..a70b096 100644
--- a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
+++ b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
@@ -5,7 +5,7 @@
 Solr ships with many out-of-the-box RequestHandlers, which are called implicit because they are not configured in `solrconfig.xml`.
 
 [[ImplicitRequestHandlers-ListofImplicitlyAvailableEndpoints]]
-=== List of Implicitly Available Endpoints
+== List of Implicitly Available Endpoints
 
 [cols=",,,",options="header",]
 |===
@@ -39,7 +39,7 @@ Solr ships with many out-of-the-box RequestHandlers, which are called implicit b
 |===
 
 [[ImplicitRequestHandlers-HowtoViewtheConfiguration]]
-=== How to View the Configuration
+== How to View the Configuration
 
 You can see configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>. For example, for the `gettingstarted` collection:
 
@@ -54,6 +54,6 @@ To include the expanded paramset in the response, as well as the effective param
 `curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export&expandParams=true"`
 
 [[ImplicitRequestHandlers-HowtoEdittheConfiguration]]
-=== How to Edit the Configuration
+== How to Edit the Configuration
 
 Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via the <<request-parameters-api.adoc#request-parameters-api, Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
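For illustration, a minimal sketch of such an edit through the Request Parameters API, assuming a local `gettingstarted` collection and that `_UPDATE` is the paramset name listed for the `/update` endpoints:

[source,bash]
----
# Sketch: create or replace the _UPDATE paramset backing the implicit
# /update handlers; the paramset name, collection name, and the
# commitWithin default are all assumptions for this example.
curl http://localhost:8983/solr/gettingstarted/config/params -H 'Content-type:application/json' -d '{
  "set": {
    "_UPDATE": {"commitWithin": "10000"}
  }
}'
----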

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/index.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/index.adoc b/solr/solr-ref-guide/src/index.adoc
index f0d957e..d1f0a84 100644
--- a/solr/solr-ref-guide/src/index.adoc
+++ b/solr/solr-ref-guide/src/index.adoc
@@ -7,24 +7,24 @@ This reference guide describes Apache Solr, the open source solution for search.
 
 This Guide contains the following sections:
 
-**<<getting-started.adoc#getting-started,Getting Started>>**: This section guides you through the installation and setup of Solr.
+*<<getting-started.adoc#getting-started,Getting Started>>*: This section guides you through the installation and setup of Solr.
 
-**<<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>**: This section introduces the Solr Web-based user interface. From your browser you can view configuration files, submit queries, view logfile settings and Java environment settings, and monitor and control distributed configurations.
+*<<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>*: This section introduces the Solr Web-based user interface. From your browser you can view configuration files, submit queries, view logfile settings and Java environment settings, and monitor and control distributed configurations.
 
-**<<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>**: This section describes how Solr organizes its data for indexing. It explains how a Solr schema defines the fields and field types which Solr uses to organize data within the document files it indexes.
+*<<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>*: This section describes how Solr organizes its data for indexing. It explains how a Solr schema defines the fields and field types which Solr uses to organize data within the document files it indexes.
 
-**<<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>**: This section explains how Solr prepares text for indexing and searching. Analyzers parse text and produce a stream of tokens, lexical units used for indexing and searching. Tokenizers break field data down into tokens. Filters perform other transformational or selective work on token streams.
+*<<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>*: This section explains how Solr prepares text for indexing and searching. Analyzers parse text and produce a stream of tokens, lexical units used for indexing and searching. Tokenizers break field data down into tokens. Filters perform other transformational or selective work on token streams.
 
-**<<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>>**: This section describes the indexing process and basic index operations, such as commit, optimize, and rollback.
+*<<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>>*: This section describes the indexing process and basic index operations, such as commit, optimize, and rollback.
 
-**<<searching.adoc#searching,Searching>>**: This section presents an overview of the search process in Solr. It describes the main components used in searches, including request handlers, query parsers, and response writers. It lists the query parameters that can be passed to Solr, and it describes features such as boosting and faceting, which can be used to fine-tune search results.
+*<<searching.adoc#searching,Searching>>*: This section presents an overview of the search process in Solr. It describes the main components used in searches, including request handlers, query parsers, and response writers. It lists the query parameters that can be passed to Solr, and it describes features such as boosting and faceting, which can be used to fine-tune search results.
 
-**<<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>**: This section discusses performance tuning for Solr. It begins with an overview of the `solrconfig.xml` file, then tells you how to configure cores with `solr.xml`, how to configure the Lucene index writer, and more.
+*<<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>*: This section discusses performance tuning for Solr. It begins with an overview of the `solrconfig.xml` file, then tells you how to configure cores with `solr.xml`, how to configure the Lucene index writer, and more.
 
-**<<managing-solr.adoc#managing-solr,Managing Solr>>**: This section discusses important topics for running and monitoring Solr. Other topics include how to back up a Solr instance, and how to run Solr with Java Management Extensions (JMX).
+*<<managing-solr.adoc#managing-solr,Managing Solr>>*: This section discusses important topics for running and monitoring Solr. Other topics include how to back up a Solr instance, and how to run Solr with Java Management Extensions (JMX).
 
-**<<solrcloud.adoc#solrcloud,SolrCloud>>**: This section describes the newest and most exciting of Solr's new features, SolrCloud, which provides comprehensive distributed capabilities.
+*<<solrcloud.adoc#solrcloud,SolrCloud>>*: This section describes the newest and most exciting of Solr's new features, SolrCloud, which provides comprehensive distributed capabilities.
 
-**<<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>**: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple services.
+*<<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>*: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple servers.
 
-**<<client-apis.adoc#client-apis,Client APIs>>**: This section tells you how to access Solr through various client APIs, including JavaScript, JSON, and Ruby.
+*<<client-apis.adoc#client-apis,Client APIs>>*: This section tells you how to access Solr through various client APIs, including JavaScript, JSON, and Ruby.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
index 6d6d184..ebd35e1 100644
--- a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
+++ b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
@@ -28,6 +28,6 @@ This section describes how Solr adds data to its index. It covers the following
 * *<<uima-integration.adoc#uima-integration,UIMA Integration>>*: Information about integrating Solr with Apache's Unstructured Information Management Architecture (UIMA). UIMA lets you define custom pipelines of Analysis Engines that incrementally add metadata to your documents as annotations.
 
 [[IndexingandBasicDataOperations-IndexingUsingClientAPIs]]
-=== Indexing Using Client APIs
+== Indexing Using Client APIs
 
 Using client APIs, such as <<using-solrj.adoc#using-solrj,SolrJ>>, from your applications is an important option for updating Solr indexes. See the <<client-apis.adoc#client-apis,Client APIs>> section for more information.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
index f63c39d..7e0492a 100644
--- a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
@@ -47,7 +47,7 @@ For example, if an `<initParams>` section has the name "myParams", you can call
 |===
 
 [[InitParamsinSolrConfig-Wildcards]]
-=== Wildcards
+== Wildcards
 
 An `<initParams>` section can support wildcards to define nested paths that should use the parameters defined. A single asterisk (\*) denotes that a nested path one level deeper should use the parameters. Double asterisks (**) denote that all nested paths, no matter how deep, should use the parameters.
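To confirm which handler paths a wildcard block actually matched, the effective configuration can be pulled from the Config API; a sketch, with the collection name assumed:

[source,bash]
----
# Sketch: fetch the live, merged configuration and inspect its initParams
# section to see the paths covered by a wildcard rule
# (collection name and port are assumptions).
curl "http://localhost:8983/solr/gettingstarted/config"
----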
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/language-analysis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index df098a3..2ce20ea 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -2,7 +2,11 @@
 :page-shortname: language-analysis
 :page-permalink: language-analysis.html
 
-This section contains information about tokenizers and filters related to character set conversion or for use with specific languages. For the European languages, tokenization is fairly straightforward. Tokens are delimited by white space and/or a relatively small set of punctuation characters. In other languages the tokenization rules are often not so simple. Some European languages may require special tokenization rules as well, such as rules for decompounding German words.
+This section contains information about tokenizers and filters related to character set conversion or for use with specific languages.
+
+For the European languages, tokenization is fairly straightforward. Tokens are delimited by white space and/or a relatively small set of punctuation characters.
+
+In other languages the tokenization rules are often not so simple. Some European languages may also require special tokenization rules, such as rules for decompounding German words.
 
 For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>.
 
@@ -24,12 +28,12 @@ A sample Solr `protwords.txt` with comments can be found in the `sample_techprod
 </fieldtype>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-KeywordRepeatFilterFactory]]
 == KeywordRepeatFilterFactory
 
-Emits each token twice, one with the `KEYWORD` attribute and once without. If placed before a stemmer, the result will be that you will get the unstemmed token preserved on the same position as the stemmed one. Queries matching the original exact term will get a better score while still maintaining the recall benefit of stemming. Another advantage of keeping the original token is that wildcard truncation will work as expected.
+Emits each token twice, once with the `KEYWORD` attribute and once without.
+
+If placed before a stemmer, the unstemmed token will be preserved at the same position as the stemmed one. Queries matching the original exact term will get a better score while still maintaining the recall benefit of stemming. Another advantage of keeping the original token is that wildcard truncation will work as expected.
 
 To configure, add the `KeywordRepeatFilterFactory` early in the analysis chain. It is recommended to also include `RemoveDuplicatesTokenFilterFactory` to avoid duplicates when tokens are not stemmed.
 
@@ -47,14 +51,8 @@ A sample fieldType configuration could look like this:
 </fieldtype>
 ----
 
-[IMPORTANT]
-====
-
-When adding the same token twice, it will also score twice (double), so you may have to re-tune your ranking rules.
+IMPORTANT: When the same token is added twice, it will also be scored twice (doubled), so you may have to re-tune your ranking rules.
 
-====
-
-<<main,Back to Top>>
 
 [[LanguageAnalysis-StemmerOverrideFilterFactory]]
 == StemmerOverrideFilterFactory
@@ -76,12 +74,10 @@ A sample http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-fil
 </fieldtype>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-DictionaryCompoundWordTokenFilter]]
 == Dictionary Compound Word Token Filter
 
-This filter splits, or __decompounds__, compound words into individual words using a dictionary of the component words. Each input token is passed through unchanged. If it can also be decompounded into subwords, each subword is also added to the stream at the same logical position.
+This filter splits, or _decompounds_, compound words into individual words using a dictionary of the component words. Each input token is passed through unchanged. If it can also be decompounded into subwords, each subword is also added to the stream at the same logical position.
 
 Compound words are most commonly found in Germanic languages.
 
@@ -89,15 +85,15 @@ Compound words are most commonly found in Germanic languages.
 
 *Arguments:*
 
-`dictionary`: (required) The path of a file that contains a list of simple words, one per line. Blank lines and lines that begin with "#" are ignored. This path may be an absolute path, or path relative to the Solr config directory.
+`dictionary`:: (required) The path of a file that contains a list of simple words, one per line. Blank lines and lines that begin with "#" are ignored. This path may be an absolute path, or path relative to the Solr config directory.
 
-`minWordSize`: (integer, default 5) Any token shorter than this is not decompounded.
+`minWordSize`:: (integer, default 5) Any token shorter than this is not decompounded.
 
-`minSubwordSize`: (integer, default 2) Subwords shorter than this are not emitted as tokens.
+`minSubwordSize`:: (integer, default 2) Subwords shorter than this are not emitted as tokens.
 
-`maxSubwordSize`: (integer, default 15) Subwords longer than this are not emitted as tokens.
+`maxSubwordSize`:: (integer, default 15) Subwords longer than this are not emitted as tokens.
 
-`onlyLongestMatch`: (true/false) If true (the default), only the longest matching subwords will generate new tokens.
+`onlyLongestMatch`:: (true/false) If true (the default), only the longest matching subwords will generate new tokens.
 
 *Example:*
 
@@ -117,8 +113,6 @@ Assume that `germanwords.txt` contains at least the following words: `dumm kopf
 
 *Out:* "Donaudampfschiff"(1), "Donau"(1), "dampf"(1), "schiff"(1), "dummkopf"(2), "dumm"(2), "kopf"(2)
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-UnicodeCollation]]
 == Unicode Collation
 
@@ -139,31 +133,31 @@ Rather than specifying an analyzer within `<fieldtype ... class="solr.TextField"
 
 Using a System collator:
 
-`locale`: (required) http://www.rfc-editor.org/rfc/rfc3066.txt[RFC 3066] locale ID. See http://demo.icu-project.org/icu-bin/locexp[the ICU locale explorer] for a list of supported locales.
+`locale`:: (required) http://www.rfc-editor.org/rfc/rfc3066.txt[RFC 3066] locale ID. See http://demo.icu-project.org/icu-bin/locexp[the ICU locale explorer] for a list of supported locales.
 
-`strength`: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
+`strength`:: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
 
-`decomposition`: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
+`decomposition`:: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
 
 Using a Tailored ruleset:
 
-`custom`: (required) Path to a UTF-8 text file containing rules supported by the ICU http://icu-project.org/apiref/icu4j/com/ibm/icu/text/RuleBasedCollator.html[`RuleBasedCollator`]
+`custom`:: (required) Path to a UTF-8 text file containing rules supported by the ICU http://icu-project.org/apiref/icu4j/com/ibm/icu/text/RuleBasedCollator.html[`RuleBasedCollator`]
 
-`strength`: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
+`strength`:: Valid values are `primary`, `secondary`, `tertiary`, `quaternary`, or `identical`. See http://userguide.icu-project.org/collation/concepts#TOC-Comparison-Levels[Comparison Levels in ICU Collation Concepts] for more information.
 
-`decomposition`: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
+`decomposition`:: Valid values are `no` or `canonical`. See http://userguide.icu-project.org/collation/concepts#TOC-Normalization[Normalization in ICU Collation Concepts] for more information.
 
 Expert options:
 
-`alternate`: Valid values are `shifted` or `non-ignorable`. Can be used to ignore punctuation/whitespace.
+`alternate`:: Valid values are `shifted` or `non-ignorable`. Can be used to ignore punctuation/whitespace.
 
-`caseLevel`: (true/false) If true, in combination with `strength="primary"`, accents are ignored but case is taken into account. The default is false. See http://userguide.icu-project.org/collation/concepts#TOC-CaseLevel[CaseLevel in ICU Collation Concepts] for more information.
+`caseLevel`:: (true/false) If true, in combination with `strength="primary"`, accents are ignored but case is taken into account. The default is false. See http://userguide.icu-project.org/collation/concepts#TOC-CaseLevel[CaseLevel in ICU Collation Concepts] for more information.
 
-`caseFirst`: Valid values are `lower` or `upper`. Useful to control which is sorted first when case is not ignored.
+`caseFirst`:: Valid values are `lower` or `upper`. Useful to control which is sorted first when case is not ignored.
 
-`numeric`: (true/false) If true, digits are sorted according to numeric value, e.g. foobar-9 sorts before foobar-10. The default is false.
+`numeric`:: (true/false) If true, digits are sorted according to numeric value, e.g. foobar-9 sorts before foobar-10. The default is false.
 
-`variableTop`: Single character or contraction. Controls what is variable for `alternate`
+`variableTop`:: Single character or contraction. Controls what is variable for `alternate`.
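Once a collated field exists and is populated (for instance via `copyField`, as in the `solr.CollationField` example later in this section), locale-aware ordering is an ordinary sort on that field; a sketch, with the field and collection names assumed:

[source,bash]
----
# Sketch: sort by a collated copy of the "manu" field so German collation
# rules, rather than raw code points, determine the order.
curl "http://localhost:8983/solr/gettingstarted/select?q=*:*&sort=manuGERMAN%20asc"
----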
 
 [[LanguageAnalysis-SortingTextforaSpecificLanguage]]
 === Sorting Text for a Specific Language
@@ -278,26 +272,25 @@ The principles of JDK Collation are the same as those of ICU Collation; you just
 
 Using a System collator (see http://www.oracle.com/technetwork/java/javase/java8locales-2095355.html[Oracle's list of locales supported in Java 8]):
 
-`language`: (required) http://www.loc.gov/standards/iso639-2/php/code_list.php[ISO-639] language code
+`language`:: (required) http://www.loc.gov/standards/iso639-2/php/code_list.php[ISO-639] language code
 
-`country`: http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm[ISO-3166] country code
+`country`:: http://www.iso.org/iso/country_codes/iso_3166_code_lists/country_names_and_code_elements.htm[ISO-3166] country code
 
-`variant`: Vendor or browser-specific code
+`variant`:: Vendor or browser-specific code
 
-`strength`: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
+`strength`:: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
 
-`decomposition`: Valid values are `no`, `canonical`, or `full`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
+`decomposition`:: Valid values are `no`, `canonical`, or `full`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
 
 Using a Tailored ruleset:
 
-`custom`: (required) Path to a UTF-8 text file containing rules supported by the http://docs.oracle.com/javase/8/docs/api/java/text/RuleBasedCollator.html[`JDK RuleBasedCollator`]
+`custom`:: (required) Path to a UTF-8 text file containing rules supported by the http://docs.oracle.com/javase/8/docs/api/java/text/RuleBasedCollator.html[`JDK RuleBasedCollator`]
 
-`strength`: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
+`strength`:: Valid values are `primary`, `secondary`, `tertiary` or `identical`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
 
-`decomposition`: Valid values are `no`, `canonical`, or `full`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
-
-*A `solr.CollationField` example:*
+`decomposition`:: Valid values are `no`, `canonical`, or `full`. See http://docs.oracle.com/javase/8/docs/api/java/text/Collator.html[Oracle Java 8 Collator javadocs] for more information.
 
+.A `solr.CollationField` example:
 [source,xml]
 ----
 <fieldType name="collatedGERMAN" class="solr.CollationField"
@@ -310,15 +303,10 @@ Using a Tailored ruleset:
 <copyField source="manu" dest="manuGERMAN"/>
 ----
 
-<<main,Back to Top>>
-
-// OLD_CONFLUENCE_ID: LanguageAnalysis-ASCII&DecimalFoldingFilters
-
-[[LanguageAnalysis-ASCII_DecimalFoldingFilters]]
 == ASCII & Decimal Folding Filters
 
 [[LanguageAnalysis-AsciiFolding]]
-=== Ascii Folding
+=== ASCII Folding
 
 This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. Only those characters with reasonable ASCII alternatives are converted.
 
@@ -347,7 +335,7 @@ This can increase recall by causing more matches. On the other hand, it can redu
 [[LanguageAnalysis-DecimalDigitFolding]]
 === Decimal Digit Folding
 
-This filter converts any character in the Unicode "Decimal Number" general category (`"Nd"`) into their equivalent Basic Latin digits (0-9).
+This filter converts any character in the Unicode "Decimal Number" general category (`Nd`) into its equivalent Basic Latin digit (0-9).
 
 This can increase recall by causing more matches. On the other hand, it can reduce precision because language-specific character differences may be lost.
 
@@ -365,51 +353,47 @@ This can increase recall by causing more matches. On the other hand, it can redu
 </analyzer>
 ----
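The effect of either folding filter can likewise be checked with the Field Analysis API; a sketch, assuming a field type named `text_folded` that includes one of the filters above:

[source,bash]
----
# Sketch: confirm that the accented input "résumé" is folded to "resume"
# by the analysis chain of an assumed text_folded field type.
curl "http://localhost:8983/solr/gettingstarted/analysis/field?analysis.fieldtype=text_folded&analysis.fieldvalue=r%C3%A9sum%C3%A9"
----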
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Language-SpecificFactories]]
 == Language-Specific Factories
 
 These factories are each designed to work with specific languages. The languages covered here are:
 
-* <<LanguageAnalysis-Arabic,Arabic>>
-* <<LanguageAnalysis-BrazilianPortuguese,Brazilian Portuguese>>
-* <<LanguageAnalysis-Bulgarian,Bulgarian>>
-* <<LanguageAnalysis-Catalan,Catalan>>
-* <<LanguageAnalysis-Chinese,Chinese>>
-* <<LanguageAnalysis-SimplifiedChinese,Simplified Chinese>>
-* <<LanguageAnalysis-CJK,CJK>>
+* <<Arabic>>
+* <<Brazilian Portuguese>>
+* <<Bulgarian>>
+* <<Catalan>>
+* <<Chinese>>
+* <<Simplified Chinese>>
+* <<CJK>>
 * <<LanguageAnalysis-Czech,Czech>>
 * <<LanguageAnalysis-Danish,Danish>>
 
-* <<LanguageAnalysis-Dutch,Dutch>>
-* <<LanguageAnalysis-Finnish,Finnish>>
-* <<LanguageAnalysis-French,French>>
-* <<LanguageAnalysis-Galician,Galician>>
-* <<LanguageAnalysis-German,German>>
-* <<LanguageAnalysis-Greek,Greek>>
+* <<Dutch>>
+* <<Finnish>>
+* <<French>>
+* <<Galician>>
+* <<German>>
+* <<Greek>>
 * <<LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer,Hebrew, Lao, Myanmar, Khmer>>
-* <<LanguageAnalysis-Hindi,Hindi>>
-
-* <<LanguageAnalysis-Indonesian,Indonesian>>
-* <<LanguageAnalysis-Italian,Italian>>
-* <<LanguageAnalysis-Irish,Irish>>
-* <<LanguageAnalysis-Japanese,Japanese>>
-* <<LanguageAnalysis-Latvian,Latvian>>
-* <<LanguageAnalysis-Norwegian,Norwegian>>
-* <<LanguageAnalysis-Persian,Persian>>
-* <<LanguageAnalysis-Polish,Polish>>
-* <<LanguageAnalysis-Portuguese,Portuguese>>
-
-* <<LanguageAnalysis-Romanian,Romanian>>
-* <<LanguageAnalysis-Russian,Russian>>
-* <<LanguageAnalysis-Scandinavian,Scandinavian>>
-* <<LanguageAnalysis-Serbian,Serbian>>
-* <<LanguageAnalysis-Spanish,Spanish>>
-* <<LanguageAnalysis-Swedish,Swedish>>
-* <<LanguageAnalysis-Thai,Thai>>
-* <<LanguageAnalysis-Turkish,Turkish>>
-* <<LanguageAnalysis-Ukrainian,Ukrainian>>
+* <<Hindi>>
+* <<Indonesian>>
+* <<Italian>>
+* <<Irish>>
+* <<Japanese>>
+* <<Latvian>>
+* <<Norwegian>>
+* <<Persian>>
+* <<Polish>>
+* <<Portuguese>>
+* <<Romanian>>
+* <<Russian>>
+* <<Scandinavian>>
+* <<Serbian>>
+* <<Spanish>>
+* <<Swedish>>
+* <<Thai>>
+* <<Turkish>>
+* <<Ukrainian>>
 
 [[LanguageAnalysis-Arabic]]
 === Arabic
@@ -433,8 +417,6 @@ This algorithm defines both character normalization and stemming, so these are s
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-BrazilianPortuguese]]
 === Brazilian Portuguese
 
@@ -460,8 +442,6 @@ This is a Java filter written specifically for stemming the Brazilian dialect of
 
 *Out:* "pra", "pra"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Bulgarian]]
 === Bulgarian
 
@@ -475,15 +455,13 @@ Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/j
 
 [source,xml]
 ----
-<analyzer>  
+<analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.LowerCaseFilterFactory"/>
   <filter class="solr.BulgarianStemFilterFactory"/>
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Catalan]]
 === Catalan
 
@@ -493,7 +471,7 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 
 *Arguments:*
 
-`language`: (required) stemmer language, "Catalan" in this case
+`language`:: (required) stemmer language, "Catalan" in this case
 
 *Example:*
 
@@ -502,7 +480,7 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory" 
+  <filter class="solr.ElisionFilterFactory"
           articles="lang/contractions_ca.txt"/>
   <filter class="solr.SnowballPorterFilterFactory" language="Catalan" />
 </analyzer>
@@ -514,8 +492,6 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 
 *Out:* "llengu"(1), "llengu"(2)
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Chinese]]
 === Chinese
 
@@ -556,8 +532,6 @@ The Chinese Filter Factory is deprecated as of Solr 3.4. Use the <<filter-descri
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-SimplifiedChinese]]
 === Simplified Chinese
 
@@ -585,8 +559,6 @@ Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactor
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-CJK]]
 === CJK
 
@@ -605,8 +577,6 @@ This tokenizer breaks Chinese, Japanese and Korean language text into tokens. Th
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Czech]]
 === Czech
 
@@ -633,8 +603,6 @@ Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.c
 
 *Out:* "preziden", "preziden", "preziden"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Danish]]
 === Danish
 
@@ -646,7 +614,7 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Arguments:*
 
-`language`: (required) stemmer language, "Danish" in this case
+`language`:: (required) stemmer language, "Danish" in this case
 
 *Example:*
 
@@ -665,7 +633,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "undersøg"(1), "undersøg"(2)
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Dutch]]
 === Dutch
@@ -676,7 +643,7 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
 
 *Arguments:*
 
-`language`: (required) stemmer language, "Dutch" in this case
+`language`:: (required) stemmer language, "Dutch" in this case
 
 *Example:*
 
@@ -695,8 +662,6 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "kanal", "kanal"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Finnish]]
 === Finnish
 
@@ -722,7 +687,6 @@ Solr includes support for stemming Finnish, and Lucene includes an example stopw
 
 *Out:* "kala", "kala"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-French]]
 === French
@@ -736,9 +700,9 @@ Removes article elisions from a token stream. This filter can be useful for lang
 
 *Arguments:*
 
-`articles`: The pathname of a file that contains a list of articles, one per line, to be stripped. Articles are words such as "le", which are commonly abbreviated, such as in _l'avion_ (the plane). This file should include the abbreviated form, which precedes the apostrophe. In this case, simply "__l__". If no `articles` attribute is specified, a default set of French articles is used.
+`articles`:: The pathname of a file that contains a list of articles, one per line, to be stripped. Articles are words such as "le", which are commonly abbreviated, such as in _l'avion_ (the plane). This file should include the abbreviated form, which precedes the apostrophe. In this case, simply "_l_". If no `articles` attribute is specified, a default set of French articles is used.
 
-`ignoreCase`: (boolean) If true, the filter ignores the case of words when comparing them to the common word file. Defaults to `false`
+`ignoreCase`:: (boolean) If true, the filter ignores the case of words when comparing them to the common word file. Defaults to `false`
 
 *Example:*
 
@@ -746,7 +710,7 @@ Removes article elisions from a token stream. This filter can be useful for lang
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.ElisionFilterFactory" 
+  <filter class="solr.ElisionFilterFactory"
           ignoreCase="true"
           articles="lang/contractions_fr.txt"/>
 </analyzer>
@@ -774,7 +738,7 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory" 
+  <filter class="solr.ElisionFilterFactory"
           articles="lang/contractions_fr.txt"/>
   <filter class="solr.FrenchLightStemFilterFactory"/>
 </analyzer>
@@ -785,7 +749,7 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory" 
+  <filter class="solr.ElisionFilterFactory"
           articles="lang/contractions_fr.txt"/>
   <filter class="solr.FrenchMinimalStemFilterFactory"/>
 </analyzer>
@@ -797,7 +761,6 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 
 *Out:* "le", "chat", "le", "chat"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Galician]]
 === Galician
@@ -825,7 +788,6 @@ Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua
 
 *Out:* "feliz", "luz"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-German]]
 === German
@@ -868,7 +830,6 @@ Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "haus", "haus"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Greek]]
 === Greek
@@ -881,9 +842,7 @@ This filter converts uppercase letters in the Greek character set to the equival
 
 [IMPORTANT]
 ====
-
-Use of custom charsets is not longer supported as of Solr 3.1. If you need to index text in these encodings, please use Java's character set conversion facilities (InputStreamReader, and so on.) during I/O, so that Lucene can analyze this text as Unicode instead.
-
+Use of custom charsets is no longer supported as of Solr 3.1. If you need to index text in these encodings, please use Java's character set conversion facilities (InputStreamReader, etc.) during I/O, so that Lucene can analyze this text as Unicode instead.
 ====
 
 *Example:*
@@ -896,8 +855,6 @@ Use of custom charsets is not longer supported as of Solr 3.1. If you need to in
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Hindi]]
 === Hindi
 
@@ -919,7 +876,6 @@ Solr includes support for stemming Hindi following http://computing.open.ac.uk/S
 </analyzer>
 ----
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Indonesian]]
 === Indonesian
@@ -947,8 +903,6 @@ Solr includes support for stemming Indonesian (Bahasa Indonesia) following http:
 
 *Out:* "bagai", "bagai"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Italian]]
 === Italian
 
@@ -965,7 +919,7 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory" 
+  <filter class="solr.ElisionFilterFactory"
           articles="lang/contractions_it.txt"/>
   <filter class="solr.ItalianLightStemFilterFactory"/>
 </analyzer>
@@ -977,8 +931,6 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "propag", "propag", "propag"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Irish]]
 === Irish
 
@@ -988,7 +940,7 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
 
 *Arguments:*
 
-`language`: (required) stemmer language, "Irish" in this case
+`language`:: (required) stemmer language, "Irish" in this case
 
 *Example:*
 
@@ -1009,8 +961,6 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "siopadóir", "síceapaite", "fearr", "athair"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Japanese]]
 === Japanese
 
@@ -1035,9 +985,9 @@ Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form.
 
 *Arguments:*
 
-`normalizeKanji`: set to `false` to not normalize kanji iteration marks (default is `true`)
+`normalizeKanji`:: set to `false` to not normalize kanji iteration marks (default is `true`)
 
-` normalizeKana`: set to `false` to not normalize kana iteration marks (default is `true`)
+`normalizeKana`:: set to `false` to not normalize kana iteration marks (default is `true`)
 
 [[LanguageAnalysis-JapaneseTokenizer]]
 ==== Japanese Tokenizer
@@ -1050,19 +1000,19 @@ Tokenizer for Japanese that uses morphological analysis, and annotates each term
 
 *Arguments:*
 
-`mode`: Use `search` mode to get a noun-decompounding effect useful for search. `search` mode improves segmentation for search at the expense of part-of-speech accuracy. Valid values for `mode` are:
-
+`mode`:: Use `search` mode to get a noun-decompounding effect useful for search. `search` mode improves segmentation for search at the expense of part-of-speech accuracy. Valid values for `mode` are:
++
 * `normal`: default segmentation
 * `search`: segmentation useful for search (extra compound splitting)
 * `extended`: search mode plus unigramming of unknown words (experimental)
-
++
 For some applications it might be good to use `search` mode for indexing and `normal` mode for queries to increase precision and prevent parts of compounds from being matched and highlighted.
 
-`userDictionary`: filename for a user dictionary, which allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. See `lang/userdict_ja.txt` for a sample user dictionary file.
+`userDictionary`:: filename for a user dictionary, which allows overriding the statistical model with your own entries for segmentation, part-of-speech tags and readings without a need to specify weights. See `lang/userdict_ja.txt` for a sample user dictionary file.
 
-`userDictionaryEncoding`: user dictionary encoding (default is UTF-8)
+`userDictionaryEncoding`:: user dictionary encoding (default is UTF-8)
 
-`discardPunctuation`: set to `false` to keep punctuation, `true` to discard (the default)
+`discardPunctuation`:: set to `false` to keep punctuation, `true` to discard (the default)
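To see the effect of `mode` on segmentation, the Field Analysis API can be run against field types that differ only in that attribute; a sketch, assuming a `text_ja` field type like the one in the example below:

[source,bash]
----
# Sketch: segment 関西国際空港 ("Kansai International Airport") with an
# assumed text_ja field type; with mode="search" the compound is split
# into 関西 / 国際 / 空港, which mode="normal" would keep whole.
curl "http://localhost:8983/solr/gettingstarted/analysis/field?analysis.fieldtype=text_ja&analysis.fieldvalue=%E9%96%A2%E8%A5%BF%E5%9B%BD%E9%9A%9B%E7%A9%BA%E6%B8%AF"
----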
 
 [[LanguageAnalysis-JapaneseBaseFormFilter]]
 ==== Japanese Base Form Filter
@@ -1082,9 +1032,9 @@ Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` an
 
 *Arguments:*
 
-`tags`: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#config-sets,config set>> for an example.
+`tags`:: filename for a list of parts-of-speech for which to remove terms; see `conf/lang/stoptags_ja.txt` in the `sample_techproducts_config` <<config-sets.adoc#config-sets,config set>> for an example.
 
-`enablePositionIncrements`: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
+`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
 
 [[LanguageAnalysis-JapaneseKatakanaStemFilter]]
 ==== Japanese Katakana Stem Filter
@@ -1097,7 +1047,7 @@ Normalizes common katakana spelling variations ending in a long sound character
 
 *Arguments:*
 
-`minimumLength`: terms below this length will not be stemmed. Default is 4, value must be 2 or more.
+`minimumLength`:: terms below this length will not be stemmed. Default is 4, value must be 2 or more.
 
 [[LanguageAnalysis-CJKWidthFilter]]
 ==== CJK Width Filter
@@ -1115,7 +1065,7 @@ Example:
 <fieldType name="text_ja" positionIncrementGap="100" autoGeneratePhraseQueries="false">
   <analyzer>
     <!-- Uncomment if you need to handle iteration marks: -->
-    <!-- <charFilter class="solr.JapaneseIterationMarkCharFilterFactory" /> --> 
+    <!-- <charFilter class="solr.JapaneseIterationMarkCharFilterFactory" /> -->
     <tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>
     <filter class="solr.JapaneseBaseFormFilterFactory"/>
     <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt"/>
@@ -1127,10 +1077,6 @@ Example:
 </fieldType>
 ----
 
-<<main,Back to Top>>
-
-// OLD_CONFLUENCE_ID: LanguageAnalysis-Hebrew,Lao,Myanmar,Khmer
-
 [[LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer]]
 === Hebrew, Lao, Myanmar, Khmer
 
@@ -1138,8 +1084,6 @@ Lucene provides support, in addition to UAX#29 word break rules, for Hebrew's us
 
 See <<tokenizers.adoc#Tokenizers-ICUTokenizer,the ICUTokenizer>> for more information.
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Latvian]]
 === Latvian
 
@@ -1168,8 +1112,6 @@ Solr includes support for stemming Latvian, and Lucene includes an example stopw
 
 *Out:* "tirg", "tirg"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Norwegian]]
 === Norwegian
 
@@ -1186,7 +1128,7 @@ The `NorwegianLightStemFilterFactory` requires a "two-pass" sort for the -dom an
 
 The second pass is to pick up -dom and -het endings. Consider this example:
 
-[width="100%",cols="25%,25%,25%,25%",options="header",]
+[width="100%",options="header",]
 |===
 |*One pass* | |*Two passes* |
 |*Before* |*After* |*Before* |*After*
@@ -1201,8 +1143,10 @@ The second pass is to pick up -dom and -het endings. Consider this example:
 
 *Factory class:* `solr.NorwegianLightStemFilterFactory`
 
-*Arguments:* `variant:` Choose the Norwegian language variant to use. Valid values are:
+*Arguments:*
 
+`variant`:: Choose the Norwegian language variant to use. Valid values are:
++
 * `nb:` Bokmål (default)
 * `nn:` Nynorsk
 * `no:` both
@@ -1212,7 +1156,7 @@ The second pass is to pick up -dom and -het endings. Consider this example:
 [source,xml]
 ----
 <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-  <analyzer> 
+  <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
@@ -1234,8 +1178,10 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 
 *Factory class:* `solr.NorwegianMinimalStemFilterFactory`
 
-*Arguments:* `variant:` Choose the Norwegian language variant to use. Valid values are:
+*Arguments:*
 
+`variant`:: Choose the Norwegian language variant to use. Valid values are:
++
 * `nb:` Bokmål (default)
 * `nn:` Nynorsk
 * `no:` both
@@ -1245,7 +1191,7 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 [source,xml]
 ----
 <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
-  <analyzer> 
+  <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
     <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
@@ -1260,8 +1206,6 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 
 *Out:* "bil"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Persian]]
 === Persian
 
@@ -1285,8 +1229,6 @@ Solr includes support for normalizing Persian, and Lucene includes an example st
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Polish]]
 === Polish
 
@@ -1326,9 +1268,7 @@ More information about the Stempel stemmer is available in {lucene-javadocs}/ana
 
 Note the lower case filter is applied _after_ the Morfologik stemmer; this is because the Polish dictionary contains proper names, and proper term case may be important to resolve ambiguities (or even to look up the correct lemma at all).
 
-The Morfologik dictionary param value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
-
-<<main,Back to Top>>
+The Morfologik dictionary parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
 
 [[LanguageAnalysis-Portuguese]]
 === Portuguese
@@ -1374,7 +1314,6 @@ Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilte
 
 *Out:* "pra", "pra"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Romanian]]
 === Romanian
@@ -1385,7 +1324,7 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
 
 *Arguments:*
 
-`language`: (required) stemmer language, "Romanian" in this case
+`language`:: (required) stemmer language, "Romanian" in this case
 
 *Example:*
 
@@ -1398,7 +1337,6 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
 </analyzer>
 ----
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Russian]]
 === Russian
@@ -1423,14 +1361,13 @@ Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFac
 </analyzer>
 ----
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Scandinavian]]
 === Scandinavian
 
 Scandinavian is a language group spanning three languages, <<LanguageAnalysis-Norwegian,Norwegian>>, <<LanguageAnalysis-Swedish,Swedish>> and <<LanguageAnalysis-Danish,Danish>>, which are very similar.
 
-Swedish å,ä,ö are in fact the same letters as Norwegian and Danish å,æ,ø and thus interchangeable when used between these languages. They are however folded differently when people type them on a keyboard lacking these characters.
+Swedish å, ä, ö are in fact the same letters as Norwegian and Danish å, æ, ø and thus interchangeable when used between these languages. They are however folded differently when people type them on a keyboard lacking these characters.
 
 In that situation almost all Swedish people use a, a, o instead of å, ä, ö. Norwegians and Danes on the other hand usually type aa, ae and oe instead of å, æ and ø. Some do however use a, a, o, oo, ao and sometimes permutations of everything above.
 
@@ -1494,8 +1431,6 @@ It's a semantically more destructive solution than `ScandinavianNormalizationFil
 
 *Out:* "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj"
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Serbian]]
 === Serbian
 
@@ -1508,9 +1443,11 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
 
 *Factory class:* `solr.SerbianNormalizationFilterFactory`
 
-*Arguments:* `haircut` : Select the extend of normalization. Valid values are:
+*Arguments:*
 
-* bald: (Default behavior) Cyrillic characters are first converted to Latin; then, Latin characters have their diacritics removed, with the exception of "https://en.wikipedia.org/wiki/D_with_stroke[LATIN SMALL LETTER D WITH STROKE]" (U+0111) which is converted to "```dj```"
+`haircut`:: Select the extent of normalization. Valid values are:
++
+* `bald`: (Default behavior) Cyrillic characters are first converted to Latin; then, Latin characters have their diacritics removed, with the exception of https://en.wikipedia.org/wiki/D_with_stroke[LATIN SMALL LETTER D WITH STROKE] (U+0111), which is converted to "`dj`"
 * `regular`: Only Cyrillic to Latin normalization will be applied, preserving the Latin diacritics
 
 *Example:*
@@ -1524,8 +1461,6 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Spanish]]
 === Spanish
 
@@ -1552,7 +1487,6 @@ Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "tor", "tor", "tor"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Swedish]]
 === Swedish
@@ -1585,7 +1519,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "klok", "klok", "klok"
 
-<<main,Back to Top>>
 
 [[LanguageAnalysis-Thai]]
 === Thai
@@ -1606,8 +1539,6 @@ This filter converts sequences of Thai characters into individual Thai words. Un
 </analyzer>
 ----
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Turkish]]
 === Turkish
 
@@ -1647,8 +1578,6 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
 [[LanguageAnalysis-BacktoTop#main]]
 ===
 
-<<main,Back to Top>>
-
 [[LanguageAnalysis-Ukrainian]]
 === Ukrainian
 
@@ -1660,13 +1589,13 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
 
 *Arguments:*
 
-`dictionary`: (required) lemmatizer dictionary - the `lucene-analyzers-morfologik` jar contains a Ukrainian dictionary at `org/apache/lucene/analysis/uk/ukrainian.dict`.
+`dictionary`:: (required) lemmatizer dictionary - the `lucene-analyzers-morfologik` jar contains a Ukrainian dictionary at `org/apache/lucene/analysis/uk/ukrainian.dict`.
 
 *Example:*
 
 [source,xml]
 ----
-<analyzer> 
+<analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.StopFilterFactory" words="org/apache/lucene/analysis/uk/stopwords.txt"/>
   <filter class="solr.MorfologikFilterFactory" dictionary="org/apache/lucene/analysis/uk/ukrainian.dict"/>
@@ -1676,4 +1605,4 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
 
 Note the lower case filter is applied _after_ the Morfologik stemmer; this is because the Ukrainian dictionary contains proper names, and proper term case may be important to resolve ambiguities (or even to look up the correct lemma at all).
 
-The Morfologik `dictionary` param value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
+The Morfologik `dictionary` parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index 0af8084..d9805ef 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -2,7 +2,9 @@
 :page-shortname: learning-to-rank
 :page-permalink: learning-to-rank.html
 
-With the *Learning To Rank* (or *LTR* for short) contrib module you can configure and run machine learned ranking models in Solr. The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
+With the *Learning To Rank* (or *LTR* for short) contrib module you can configure and run machine learned ranking models in Solr.
+
+The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
 
 [[LearningToRank-Concepts]]
 == Concepts
@@ -67,12 +69,12 @@ The LTR contrib module includes several feature classes as well as support for c
 |===
 
 [[LearningToRank-Featureextraction]]
-==== Feature extraction
+==== Feature Extraction
 
 The ltr contrib module includes a <<transforming-result-documents.adoc#transforming-result-documents,[features]>> transformer to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, including and especially when you do not yet have an actual reranking model.
 
 [[LearningToRank-Featureselectionandmodeltraining]]
-==== Feature selection and model training
+==== Feature Selection and Model Training
 
 Feature selection and model training take place offline and outside Solr. The ltr contrib module supports two generalized forms of models as well as custom models. Each model class's javadocs contain an example to illustrate configuration of that class. Your trained model or models (e.g. different models for different customer geographies) can then be uploaded directly into Solr as JSON files using the provided REST APIs.
 
@@ -89,15 +91,15 @@ Feature selection and model training take place offline and outside Solr. The lt
 
 The `"techproducts"` example included with Solr is pre-configured with the plugins required for learning-to-rank, but they are disabled by default.
 
-To enable the plugins, please specify the `"solr.ltr.enabled"` JVM System Property when running the example:
+To enable the plugins, please specify the `solr.ltr.enabled` JVM System Property when running the example:
 
-[source,plain]
+[source,bash]
 ----
 bin/solr start -e techproducts -Dsolr.ltr.enabled=true
 ----
 
 [[LearningToRank-Uploadingfeatures]]
-=== Uploading features
+=== Uploading Features
 
 To upload features in a `/path/myFeatures.json` file, please run:
 
@@ -108,11 +110,10 @@ curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data
 
 To view the features you just uploaded please open the following URL in a browser:
 
-* http://localhost:8983/solr/techproducts/schema/feature-store/_DEFAULT_
-
-*Example: /path/myFeatures.json*
+`\http://localhost:8983/solr/techproducts/schema/feature-store/_DEFAULT_`
 
-[source,java]
+.Example: /path/myFeatures.json
+[source,json]
 ----
 [
   {
@@ -126,7 +127,7 @@ To view the features you just uploaded please open the following URL in a browse
     "name" : "isBook",
     "class" : "org.apache.solr.ltr.feature.SolrFeature",
     "params" : {
-      "fq": [ "{!terms f=cat}book" ]
+      "fq": ["{!terms f=cat}book"]
     }
   },
   {
@@ -138,15 +139,15 @@ To view the features you just uploaded please open the following URL in a browse
 ----
 
 [[LearningToRank-Extractingfeatures]]
-=== Extracting features
+=== Extracting Features
 
 To extract features as part of a query, add `[features]` to the `fl` parameter, for example:
 
-* http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,%5Bfeatures%5D
+`\http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,%5Bfeatures%5D`
 
 The output of the query will include feature values as a comma-separated list, resembling the output shown here:
 
-[source,xml]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -168,7 +169,7 @@ The output XML will include feature values as a comma-separated list, resembling
 ----
 
 [[LearningToRank-Uploadingamodel]]
-=== Uploading a model
+=== Uploading a Model
 
 To upload the model in a `/path/myModel.json` file, please run:
 
@@ -179,10 +180,9 @@ curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-b
 
 To view the model you just uploaded, please open the following URL in a browser:
 
-* http://localhost:8983/solr/techproducts/schema/model-store
-
-*Example: /path/myModel.json*
+`\http://localhost:8983/solr/techproducts/schema/model-store`
 
+.Example: /path/myModel.json
 [source,json]
 ----
 {
@@ -204,21 +204,23 @@ To view the model you just uploaded please open the following URL in a browser:
 ----
 
 [[LearningToRank-Runningarerankquery]]
-=== Running a rerank query
+=== Running a Rerank Query
 
 To rerank the results of a query, add the `rq` parameter to your search, for example:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%20reRankDocs=100%7D&fl=id,score[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel reRankDocs=100}&fl=id,score]
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score
 
 The addition of the `rq` parameter will not change the output of the search.
 
 To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter, for example:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%20reRankDocs=100%7D&fl=id,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel reRankDocs=100}&fl=id,score,[features]]
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score,[features]
 
 The output of the query will include feature values as a comma-separated list, resembling the output shown here:
 
-[source,xml]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -246,7 +248,7 @@ The output XML will include feature values as a comma-separated list, resembling
 The {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/ValueFeature.html[ValueFeature] and {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] classes support the use of external feature information, `efi` for short.
 
 [[LearningToRank-Uploadingfeatures.1]]
-==== Uploading features
+==== Uploading Features
 
 To upload features in a `/path/myEfiFeatures.json` file, please run:
 
@@ -257,11 +259,10 @@ curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data
 
 To view the features you just uploaded, please open the following URL in a browser:
 
-* http://localhost:8983/solr/techproducts/schema/feature-store/myEfiFeatureStore
-
-*Example: /path/myEfiFeatures.json*
+`\http://localhost:8983/solr/techproducts/schema/feature-store/myEfiFeatureStore`
 
-[source,java]
+.Example: /path/myEfiFeatures.json
+[source,json]
 ----
 [
   {
@@ -291,18 +292,21 @@ To view the features you just uploaded please open the following URL in a browse
 ]
 ----
 
-As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about feature `store`s in the <<LearningToRank-Lifecycle,Lifecycle>> section of this page.
+As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about feature stores in the <<Lifecycle>> section of this page.
 
 [[LearningToRank-Extractingfeatures.1]]
-==== Extracting features
+==== Extracting Features
 
 To extract `myEfiFeatureStore` features as part of a query, add `efi.*` parameters to the `[features]` part of the `fl` parameter, for example:
 
-* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]]
-* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=0%20efi.answer=13%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]]
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]
 
 [[LearningToRank-Uploadingamodel.1]]
-==== Uploading a model
+==== Uploading a Model
 
 To upload the model in a `/path/myEfiModel.json` file, please run:
 
@@ -313,11 +317,10 @@ curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' --data-b
 
 To view the model you just uploaded, please open the following URL in a browser:
 
-* http://localhost:8983/solr/techproducts/schema/model-store
-
-*Example: /path/myEfiModel.json*
+`\http://localhost:8983/solr/techproducts/schema/model-store`
 
-[source,java]
+.Example: /path/myEfiModel.json
+[source,json]
 ----
 {
   "store" : "myEfiFeatureStore",
@@ -341,28 +344,32 @@ To view the model you just uploaded please open the following URL in a browser:
 ----
 
 [[LearningToRank-Runningarerankquery.1]]
-==== Running a rerank query
+==== Running a Rerank Query
 
 To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter and `efi.*` parameters to the `rq` parameter, for example:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%7D&fl=id,cat,manu,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]] link:[]
-* link:[]http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=0%20efi.answer=13%7D&fl=id,cat,manu,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]]
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]
+
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]
 
 Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` parameter.
 
 [[LearningToRank-Extractingfeatureswhilstreranking]]
-==== Extracting features whilst reranking
+==== Extracting Features While Reranking
 
-To extract features for `myEfiFeatureStore`'s features whilst still reranking with `myModel`:
+To extract `myEfiFeatureStore` features while still reranking with `myModel`:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%7D&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]] link:[]
+[source,text]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
 
 Notice the absence of `efi.*` parameters in the `rq` parameter (because `myModel` does not use `efi` features) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
 
-Read more about model evolution in the <<LearningToRank-Lifecycle,Lifecycle>> section of this page.
+Read more about model evolution in the <<Lifecycle>> section of this page.
 
 [[LearningToRank-Trainingexample]]
-=== Training example
+=== Training Example
 
 Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder of the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository], which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com]. (The `solr/contrib/ltr/example` folder is not shipped in the Solr binary release.)
 
@@ -380,21 +387,21 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 === Minimum Requirements
 
 * Include the required contrib JARs. Note that by default paths are relative to the Solr core, so they may need to be adjusted for your configuration, or `$solr.install.dir` may need to be specified explicitly.
-
++
 [source,xml]
 ----
 <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
 ----
 
 * Declaration of the `ltr` query parser.
-
-[source,java]
++
+[source,xml]
 ----
 <queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin"/>
 ----
 
 * Configuration of the feature values cache.
-
++
 [source,xml]
 ----
 <cache name="QUERY_DOC_FV"
@@ -406,7 +413,7 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 ----
 
 * Declaration of the `[features]` transformer.
-
++
 [source,xml]
 ----
 <transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
@@ -415,7 +422,7 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 ----
 
 [[LearningToRank-Advancedoptions]]
-=== Advanced options
+=== Advanced Options
 
 [[LearningToRank-LTRThreadModule]]
 ==== LTRThreadModule
@@ -423,13 +430,13 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 A thread module can be configured for the query parser and/or the transformer to parallelize the creation of feature weights. For details, please refer to the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/LTRThreadModule.html[LTRThreadModule] javadocs.
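+
+As a sketch, assuming the `threadModule.totalPoolThreads` and `threadModule.numThreadsPerRequest` settings described in the LTRThreadModule javadocs, the `ltr` query parser declaration might be extended like this (pool sizes are illustrative):
+
+[source,xml]
+----
+<queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin">
+  <!-- illustrative pool sizes; see the LTRThreadModule javadocs for their semantics -->
+  <int name="threadModule.totalPoolThreads">10</int>
+  <int name="threadModule.numThreadsPerRequest">5</int>
+</queryParser>
+----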
 
 [[LearningToRank-Featurevectorcustomization]]
-==== Feature vector customization
+==== Feature Vector Customization
 
-The features transformer returns dense csv values such as `"featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0"`.
+The features transformer returns dense CSV values such as `featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0`.
 
-For sparse csv output such as `"featureA:0.1 featureB:0.2 featureC:0.3"` you can customize the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/response/transform/LTRFeatureLoggerTransformerFactory.html[feature logger transformer] declaration in `solrconfig.xml` as follows:
+For sparse CSV output such as `featureA:0.1 featureB:0.2 featureC:0.3` you can customize the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/response/transform/LTRFeatureLoggerTransformerFactory.html[feature logger transformer] declaration in `solrconfig.xml` as follows:
 
-[source,java]
+[source,xml]
 ----
 <transformer name="features" class="org.apache.solr.ltr.response.transform.LTRFeatureLoggerTransformerFactory">
   <str name="fvCacheName">QUERY_DOC_FV</str>
@@ -440,20 +447,15 @@ For sparse csv output such as `"featureA:0.1 featureB:0.2 featureC:0.3"` you can
 ----
 
 [[LearningToRank-Implementationandcontributions]]
-==== Implementation and contributions
+==== Implementation and Contributions
 
 .How does Solr Learning-To-Rank work under the hood?
-[NOTE]
-====
 
-Please refer to the `ltr` {solr-javadocs}/solr-ltr/org/apache/solr/ltr/package-summary.html[javadocs] for an implementation overview.
-
-====
+NOTE: Please refer to the `ltr` {solr-javadocs}/solr-ltr/org/apache/solr/ltr/package-summary.html[javadocs] for an implementation overview.
 
 .How can I write additional models and/or features?
 [NOTE]
 ====
-
 Contributions for further models, features and normalizers are welcome. Related links:
 
 * {solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LTRScoringModel.html[LTRScoringModel javadocs]
@@ -461,14 +463,13 @@ Contributions for further models, features and normalizers are welcome. Related
 * {solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/Normalizer.html[Normalizer javadocs]
 * http://wiki.apache.org/solr/HowToContribute
 * http://wiki.apache.org/lucene-java/HowToContribute
-
 ====
 
 [[LearningToRank-Lifecycle]]
 == Lifecycle
 
 [[LearningToRank-Featurestores]]
-=== Feature stores
+=== Feature Stores
 
 It is recommended that you organize all your features into stores, which are akin to namespaces:
 
@@ -478,11 +479,11 @@ It is recommended that you organise all your features into stores which are akin
 
 To discover the names of all your feature stores:
 
-* http://localhost:8983/solr/techproducts/schema/feature-store
+`\http://localhost:8983/solr/techproducts/schema/feature-store`
 
 To inspect the content of the `commonFeatureStore` feature store:
 
-* http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore
+`\http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore`
 
 [[LearningToRank-Models]]
 === Models
@@ -494,15 +495,15 @@ To inspect the content of the `commonFeatureStore` feature store:
 
 To extract `currentFeatureStore`'s features:
 
-* http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,%5Bfeatures%20store=currentFeatureStore%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features store=currentFeatureStore]] link:[]
+`\http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features store=currentFeatureStore]`
 
-To extract features for `nextFeatureStore`'s features whilst reranking with `currentModel` based on `currentFeatureStore`:
+To extract `nextFeatureStore` features while reranking with `currentModel` (based on `currentFeatureStore`):
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=currentModel%20reRankDocs=100%7D&fl=id,score,%5Bfeatures%20store=nextFeatureStore%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=currentModel reRankDocs=100}&fl=id,score,[features store=nextFeatureStore]] link:[]
+`\http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=currentModel reRankDocs=100}&fl=id,score,[features store=nextFeatureStore]`
 
 To view all models:
 
-* http://localhost:8983/solr/techproducts/schema/model-store
+`\http://localhost:8983/solr/techproducts/schema/model-store`
 
 To delete the `currentModel` model:
 
@@ -511,12 +512,7 @@ To delete the `currentModel` model:
 curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/model-store/currentModel'
 ----
 
-[IMPORTANT]
-====
-
-A feature store must be deleted only when there are no models using it.
-
-====
+IMPORTANT: A feature store must be deleted only when there are no models using it.
 
 To delete the `currentFeatureStore` feature store:
 
@@ -526,17 +522,14 @@ curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/curr
 ----
 
 [[LearningToRank-Applyingchanges]]
-=== Applying changes
+=== Applying Changes
 
 The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
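+
+For example, in SolrCloud mode a collection named `techproducts` could be reloaded via the Collections API:
+
+[source,bash]
+----
+curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'
+----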
 
 [[LearningToRank-Examples]]
 === Examples
 
-// OLD_CONFLUENCE_ID: LearningToRank-Onefeaturestore,multiplerankingmodels
-
-[[LearningToRank-Onefeaturestore_multiplerankingmodels]]
-==== One feature store, multiple ranking models
+==== One Feature Store, Multiple Ranking Models
 
 * `leftModel` and `rightModel` both use features from `commonFeatureStore`, and the only difference between the two models is the weight attached to each feature.
 * Conventions used:
@@ -546,9 +539,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ** The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
 ** The store's features are sorted alphabetically by name; this makes it easy to look up the features used in the models.
 
-*Example: /path/commonFeatureStore.json*
-
-[source,java]
+.Example: /path/commonFeatureStore.json
+[source,json]
 ----
 [
   {
@@ -576,9 +568,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ]
 ----
 
-*Example: /path/leftModel.json*
-
-[source,java]
+.Example: /path/leftModel.json
+[source,json]
 ----
 {
   "store" : "commonFeatureStore",
@@ -599,9 +590,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-*Example: /path/rightModel.json*
-
-[source,java]
+.Example: /path/rightModel.json
+[source,json]
 ----
 {
   "store" : "commonFeatureStore",
@@ -623,7 +613,7 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ----
 
 [[LearningToRank-Modelevolution]]
-==== Model evolution
+==== Model Evolution
 
 * `linearModel201701` uses features from `featureStore201701`
 * `treesModel201702` uses features from `featureStore201702`
@@ -636,9 +626,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ** The model's features and weights are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two models.
 ** The store's features are sorted alphabetically by name; this makes it easy to see the commonalities and differences between the two feature stores.
 
-*Example: /path/featureStore201701.json*
-
-[source,java]
+.Example: /path/featureStore201701.json
+[source,json]
 ----
 [
   {
@@ -666,9 +655,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ]
 ----
 
-*Example: /path/linearModel201701.json*
-
-[source,java]
+.Example: /path/linearModel201701.json
+[source,json]
 ----
 {
   "store" : "featureStore201701",
@@ -689,9 +677,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-*Example: /path/featureStore201702.json*
-
-[source,java]
+.Example: /path/featureStore201702.json
+[source,json]
 ----
 [
   {
@@ -711,9 +698,8 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 ]
 ----
 
-*Example: /path/treesModel201702.json*
-
-[source,java]
+.Example: /path/treesModel201702.json
+[source,json]
 ----
 {
   "store" : "featureStore201702",

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
index f989b0b..1d24b61 100644
--- a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
+++ b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
@@ -2,7 +2,9 @@
 :page-shortname: local-parameters-in-queries
 :page-permalink: local-parameters-in-queries.html
 
-Local parameters are arguments in a Solr request that are specific to a query parameter. Local parameters provide a way to add meta-data to certain argument types such as query strings. (In Solr documentation, local parameters are sometimes referred to as LocalParams.)
+Local parameters are arguments in a Solr request that are specific to a query parameter.
+
+Local parameters provide a way to add metadata to certain argument types such as query strings. (In Solr documentation, local parameters are sometimes referred to as LocalParams.)
 
 Local parameters are specified as prefixes to arguments. Take the following query argument, for example:
 
@@ -19,11 +21,11 @@ These local parameters would change the query to require a match on both "solr"
 
 To specify a local parameter, insert the following before the argument to be modified:
 
-* Begin with \{!
+* Begin with `{!`
 
 * Insert any number of key=value pairs separated by white space
 
-* End with } and immediately follow with the query argument
+* End with `}` and immediately follow with the query argument
 
 You may specify only one local parameters prefix per argument. Values in the key-value pairs may be quoted via single or double quotes, and backslash escaping works within quoted strings.
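+
+For example, quoting lets a parameter value contain whitespace (the boosted field list shown is illustrative):
+
+`q={!dismax qf="title^5 text"}solr rocks`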
 
@@ -46,10 +48,7 @@ is equivalent to:
 
 `fq={!type=lucene df=summary}solr rocks`
 
-// OLD_CONFLUENCE_ID: LocalParametersinQueries-SpecifyingtheParameterValuewiththe'v'Key
-
-[[LocalParametersinQueries-SpecifyingtheParameterValuewiththe_v_Key]]
-== Specifying the Parameter Value with the '`v`' Key
+== Specifying the Parameter Value with the `v` Key
 
 A special key of `v` within local parameters is an alternate way to specify the value of that parameter.
 
@@ -62,7 +61,7 @@ is equivalent to
 [[LocalParametersinQueries-ParameterDereferencing]]
 == Parameter Dereferencing
 
-Parameter dereferencing or indirection lets you use the value of another argument rather than specifying it directly. This can be used to simplify queries, decouple user input from query parameters, or decouple front-end GUI parameters from defaults set in `solrconfig.xml`.
+Parameter dereferencing, or indirection, lets you use the value of another argument rather than specifying it directly. This can be used to simplify queries, decouple user input from query parameters, or decouple front-end GUI parameters from defaults set in `solrconfig.xml`.
 
 `q={!dismax qf=myfield}solr rocks`
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/7d7fb52a/solr/solr-ref-guide/src/logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/logging.adoc b/solr/solr-ref-guide/src/logging.adoc
index 1d88264..a048b24 100644
--- a/solr/solr-ref-guide/src/logging.adoc
+++ b/solr/solr-ref-guide/src/logging.adoc
@@ -6,11 +6,9 @@ The Logging page shows recent messages logged by this Solr node.
 
 When you click the link for "Logging", a page similar to the one below will be displayed:
 
+.The Main Logging Screen, including an example of an error due to a bad document sent by a client
 image::images/logging/logging.png[image,width=621,height=250]
 
-
-_The Main Logging Screen, including an example of an error due to a bad document sent by a client_
-
 While this example shows logged messages for only one core, if you have multiple cores in a single instance, messages from each core will be listed, along with the logging level for each.
 
 [[Logging-SelectingaLoggingLevel]]
@@ -18,7 +16,7 @@ While this example shows logged messages for only one core, if you have multiple
 
 When you select the *Level* link on the left, you see the hierarchy of classpaths and classnames for your instance. A row highlighted in yellow indicates that the class has logging capabilities. Click on a highlighted row, and a menu will appear to allow you to change the log level for that class. Characters in boldface indicate that the class will not be affected by level changes to root.
 
+.Log level selection
 image::images/logging/level_menu.png[image,width=589,height=250]
 
-
 For an explanation of the various logging levels, see <<configuring-logging.adoc#configuring-logging,Configuring Logging>>.