Posted to commits@solr.apache.org by ct...@apache.org on 2021/07/08 16:34:48 UTC

[solr] branch main updated: SOLR-14444: Ref Guide: add required/default tables to all params, part I (#208)

This is an automated email from the ASF dual-hosted git repository.

ctargett pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/solr.git


The following commit(s) were added to refs/heads/main by this push:
     new d3c1b61  SOLR-14444: Ref Guide: add required/default tables to all params, part I (#208)
d3c1b61 is described below

commit d3c1b61d7b35c3e1e838ead8991650cd7bc03a30
Author: Cassandra Targett <ct...@apache.org>
AuthorDate: Thu Jul 8 11:33:33 2021 -0500

    SOLR-14444: Ref Guide: add required/default tables to all params, part I (#208)
---
 solr/solr-ref-guide/src/alias-management.adoc      | 165 +++-
 solr/solr-ref-guide/src/analytics.adoc             | 223 ++++--
 solr/solr-ref-guide/src/backup-restore.adoc        | 155 +++-
 solr/solr-ref-guide/src/charfilterfactories.adoc   |  48 +-
 .../src/cluster-node-management.adoc               | 130 ++-
 solr/solr-ref-guide/src/cluster-plugins.adoc       |  59 +-
 .../src/collapse-and-expand-results.adoc           | 107 ++-
 solr/solr-ref-guide/src/collection-management.adoc | 700 ++++++++++++----
 solr/solr-ref-guide/src/collections-api.adoc       |  19 +-
 solr/solr-ref-guide/src/configsets-api.adoc        |  52 +-
 solr/solr-ref-guide/src/configuring-solr-xml.adoc  | 265 +++++-
 solr/solr-ref-guide/src/core-discovery.adoc        | 132 ++-
 solr/solr-ref-guide/src/coreadmin-api.adoc         | 231 +++++-
 solr/solr-ref-guide/src/de-duplication.adoc        |  37 +-
 solr/solr-ref-guide/src/document-transformers.adoc |  43 +-
 solr/solr-ref-guide/src/enabling-ssl.adoc          |  18 +
 solr/solr-ref-guide/src/enum-fields.adoc           |  23 +-
 solr/solr-ref-guide/src/faceting.adoc              | 259 ++++--
 .../src/field-type-definitions-and-properties.adoc | 108 ++-
 solr/solr-ref-guide/src/fields.adoc                |  43 +-
 solr/solr-ref-guide/src/filters.adoc               | 889 +++++++++++++++++----
 solr/solr-ref-guide/src/highlighting.adoc          | 354 ++++++--
 .../solr-ref-guide/src/index-segments-merging.adoc |  61 +-
 solr/solr-ref-guide/src/indexing-with-tika.adoc    | 185 +++--
 .../src/indexing-with-update-handlers.adoc         | 179 ++++-
 .../src/major-changes-in-solr-7.adoc               |   4 +-
 solr/solr-ref-guide/src/other-parsers.adoc         |   8 +-
 .../src/rule-based-authorization-plugin.adoc       |  12 +-
 solr/solr-ref-guide/src/schema-api.adoc            |   2 +-
 solr/solr-ref-guide/src/solr-upgrade-notes.adoc    |   2 +-
 30 files changed, 3773 insertions(+), 740 deletions(-)

diff --git a/solr/solr-ref-guide/src/alias-management.adoc b/solr/solr-ref-guide/src/alias-management.adoc
index 18f5658..1e57f51 100644
--- a/solr/solr-ref-guide/src/alias-management.adoc
+++ b/solr/solr-ref-guide/src/alias-management.adoc
@@ -75,17 +75,34 @@ NOTE: Only updates are routed and queries are distributed to all collections in
 === CREATEALIAS Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The alias name to be created.
-This parameter is required.
 If the alias is to be routed it also functions as a prefix for the names of the dependent collections that will be created.
 It must therefore adhere to normal requirements for collection naming.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 ==== Standard Alias Parameters
 
 `collections`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A comma-separated list of collections to be aliased.
 The collections must already exist in the cluster.
 This parameter signals the creation of a standard alias.
@@ -97,22 +114,39 @@ If routing parameters are present this parameter is prohibited.
 Most routed alias parameters become _alias properties_ that can subsequently be inspected and <<aliasprop,modified>>.
 
 `router.name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The type of routing to use.
 Presently only `time` and `category` and `Dimensional[]` are valid.
++
 In the case of a multi-dimensional routed alias (aka "DRA", see <<aliases.adoc#dimensional-routed-aliases,Aliases>>), it is required to express all the dimensions in the same order that they will appear in the dimension
 array.
-The format for a DRA router.name is Dimensional[dim1,dim2] where dim1 and dim2 are valid router.name values for each sub-dimension.
+The format for a DRA `router.name` is `Dimensional[dim1,dim2]` where `dim1` and `dim2` are valid `router.name` values for each sub-dimension.
 Note that DRAs are very new, and only 2D DRAs are presently supported.
 Higher numbers of dimensions will be supported soon.
-See examples below for further clarification on how to configure
-individual dimensions.
-This parameter is required.
+See examples below for further clarification on how to configure individual dimensions.
 
 `router.field`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The field to inspect to determine which underlying collection an incoming document should be routed to.
 This field is required on all incoming documents.
 
 `create-collection.*`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The `*` wildcard can be replaced with any parameter from the <<collection-management.adoc#create,CREATE>> command except `name`.
 All other fields are identical in requirements and naming except that we insist that the configset be explicitly specified.
 The configset must be created beforehand, either uploaded or copied and modified.
@@ -121,54 +155,87 @@ It's probably a bad idea to use "data driven" mode as schema mutations might hap
 ==== Time Routed Alias Parameters
 
 `router.start`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The start date/time of data for this time routed alias in Solr's standard date/time format (i.e., ISO-8601 or "NOW" optionally with <<date-formatting-math.adoc#date-math,date math>>).
 +
 The first collection created for the alias will be internally named after this value.
-If a document is submitted with an earlier value for router.field then the earliest collection the alias points to then it will yield an error since it can't be routed.
+If a document is submitted with an earlier value for `router.field` than the earliest collection the alias points to, it will yield an error since it can't be routed.
 This date/time MUST NOT have a milliseconds component other than 0.
 Particularly, this means `NOW` will fail 999 times out of 1000, though `NOW/SECOND`, `NOW/MINUTE`, etc., will work just fine.
-This parameter is required.
 
 `TZ`::
-The timezone to be used when evaluating any date math in router.start or router.interval.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `UTC`
+|===
++
+The timezone to be used when evaluating any date math in `router.start` or `router.interval`.
 This is equivalent to the same parameter supplied to search queries, but note that in this case it is persisted with most of the other parameters
 as an alias property.
 +
 If GMT-4 is supplied for this value then a document dated 2018-01-14T21:00:00:01.2345Z would be stored in the myAlias_2018-01-15_01 collection (assuming an interval of +1HOUR).
-+
-The default timezone is UTC.
+
 
 `router.interval`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 A date math expression that will be appended to a timestamp to determine the next collection in the series.
 Any date math expression that can be evaluated if appended to a timestamp of the form 2018-01-15T16:17:18 will work here.
-+
-This parameter is required.
 
 `router.maxFutureMs`::
-The maximum milliseconds into the future that a document is allowed to have in `router.field` for it to be accepted without error.
-If there was no limit, than an erroneous value could trigger many collections to be created.
 +
-The default is `600000` or 10 minutes.
+[%autowidth,frame=none]
+|===
+|Optional |Default: `600000` milliseconds
+|===
++
+The maximum milliseconds into the future that a document is allowed to have in `router.field` for it to be accepted without error.
+If there was no limit, then an erroneous value could trigger many collections to be created.
 
 `router.preemptiveCreateMath`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A date math expression that results in early creation of new collections.
 +
 If a document arrives with a timestamp that is after the end time of the most recent collection minus this interval, then the next (and only the next) collection will be created asynchronously.
++
+Without this setting, collections are created synchronously when required by the document timestamp and thus block the flow of documents until the collection is created (possibly several seconds).
 Preemptive creation reduces these hiccups.
 If set to enough time (perhaps an hour or more) then if there are problems creating a collection, this window of time might be enough to take
 corrective action.
-However after a successful preemptive creation, the collection is consuming resources without being used, and new documents will tend to be routed through it only to be routed elsewhere.
+However, after a successful preemptive creation the collection is consuming resources without being used, and new documents will tend to be routed through it only to be routed elsewhere.
++
 Also, note that `router.autoDeleteAge` is currently evaluated relative to the date of a newly created collection, so you may want to increase the delete age by the preemptive window amount so that the oldest collection isn't deleted too
 soon.
-Note that it has to be possible to subtract the interval specified from a date, so if prepending a minus sign creates invalid date math, this will cause an error.
++
+It must be possible to subtract the interval specified from a date, so if prepending a minus sign creates invalid date math, this will cause an error.
 Also note that a document that is itself destined for a collection that does not exist will still trigger synchronous creation up to that destination collection but will not trigger additional async preemptive creation.
 Only one type of collection creation can happen per document.
 Example: `90MINUTES`.
 +
-This property is blank by default indicating just-in-time, synchronous creation of new collections.
+This property is empty by default indicating just-in-time, synchronous creation of new collections.
 
 `router.autoDeleteAge`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A date math expression that results in the oldest collections getting deleted automatically.
 +
 The date math is relative to the timestamp of a newly created collection (typically close to the current time), and thus this must produce an earlier time via rounding and/or subtracting.
@@ -181,12 +248,25 @@ The default is not to delete.
 ==== Category Routed Alias Parameters
 
 `router.maxCardinality`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum number of categories allowed for this alias.
 This setting safeguards against the inadvertent creation of an infinite number of collections in the event of bad data.
 
 `router.mustMatch`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A regular expression that the value of the field specified by `router.field` must match before a corresponding collection will be created.
-Note that changing this setting after data has been added will not alter the data already indexed.
+Changing this setting after data has been added will not alter the data already indexed.
++
 Any valid Java regular expression pattern may be specified.
 This expression is pre-compiled at the start of each request so batching of updates is strongly recommended.
 Overly complex patterns will produce CPU or garbage collection overhead during indexing as determined by the JVM's implementation of regular expressions.
@@ -194,8 +274,15 @@ Overly complex patterns will produce CPU or garbage collection overhead during i
 ==== Dimensional Routed Alias Parameters
 
 `router.#.`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This prefix denotes which position in the dimension array is being referred to for purposes of dimension configuration.
-For example in a Dimensional[time,category] router.0.start would be used to set the start time for the time dimension.
++
+For example in a `Dimensional[time,category]` alias, `router.0.start` would be used to set the start time for the time dimension.
 
 
 === CREATEALIAS Response
@@ -560,16 +647,39 @@ Routed aliases may cease to function, function incorrectly, or cause errors if p
 === ALIASPROP Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The alias name on which to set properties.
-This parameter is required.
 
 `property._name_=_value_` (v1)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Set property _name_ to _value_.
 
 `"properties":{"name":"value"}` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A dictionary of name/value pairs of properties to be set.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 === ALIASPROP Response
@@ -613,10 +723,21 @@ curl -X POST http://localhost:8983/api/collections -H 'Content-Type: application
 === DELETEALIAS Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the alias to delete.
-This parameter is required.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 === DELETEALIAS Response
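The alias commands documented above can be exercised with plain curl; a minimal sketch, assuming a Solr node at localhost:8983 and hypothetical alias, collection, and field names:

```shell
SOLR="http://localhost:8983/solr/admin/collections"

# Standard alias: name and collections are required.
curl "$SOLR?action=CREATEALIAS&name=testalias&collections=logs_jan,logs_feb"

# Time routed alias: router.name, router.field, router.start, and router.interval
# are all required, and the configset must be given explicitly via
# create-collection.* ('+1DAY' is URL-encoded as %2B1DAY).
curl "$SOLR?action=CREATEALIAS&name=timedata&router.name=time&router.field=evt_dt&router.start=NOW/DAY&router.interval=%2B1DAY&create-collection.collection.configName=_default&create-collection.numShards=1"

# Deleting needs only the alias name; async is optional.
curl "$SOLR?action=DELETEALIAS&name=testalias"
```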
diff --git a/solr/solr-ref-guide/src/analytics.adoc b/solr/solr-ref-guide/src/analytics.adoc
index bfae49d..b66e974 100644
--- a/solr/solr-ref-guide/src/analytics.adoc
+++ b/solr/solr-ref-guide/src/analytics.adoc
@@ -497,28 +497,65 @@ The two current sortable facets are <<value-facets, Analytic Value Facets>> and
 ==== Parameters
 
 `criteria`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The list of criteria to sort the facet by.
 +
 It takes the following parameters:
 
-`type`::: The type of sort.
+`type`:::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The type of sort.
 There are two possible values:
 * `expression`: Sort by the value of an expression defined in the same grouping.
 * `facetvalue`: Sort by the string-representation of the facet value.
 
 `Direction`:::
-_(Optional)_ The direction to sort.
-* `ascending` _(Default)_
-* `descending`
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `ascending`
+|===
++
+The direction to sort.
+The options are `ascending` or `descending`.
 
 `expression`:::
-When `type = expression`, the name of an expression defined in the same grouping.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+When `type` is `expression`, the name of an expression defined in the same grouping.
 
 `limit`::
-Limit the number of returned facet values to the top _N_.  _(Optional)_
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `-1`
+|===
++
+Limit the number of returned facet values to the top _N_.
+The default means there is no limit.
 
 `offset`::
- When a limit is set, skip the top _N_ facet values. _(Optional)_
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+When a limit is set, skip the top _N_ facet values.
 
 [source,json]
 .Example Sort Request
@@ -542,24 +579,36 @@ Limit the number of returned facet values to the top _N_.  _(Optional)_
 
 === Value Facets
 
-Value Facets are used to group documents by the value of a mapping expression applied to each document.
+Value facets are used to group documents by the value of a mapping expression applied to each document.
 Mapping expressions are expressions that do not include a reduction function.
 
 For more information, refer to the <<expression-components, Expressions section>>.
+For example:
 
-* `mult(quantity, sum(price, tax))`: breakup documents by the revenue generated
-* `fillmissing(state, "N/A")`: breakup documents by state, where N/A is used when the document doesn't contain a state
+* `mult(quantity, sum(price, tax))`: break up documents by the revenue generated.
+* `fillmissing(state, "N/A")`: break up documents by state, where N/A is used when the document doesn't contain a state.
 
-Value Facets can be sorted.
+Value facets can be sorted.
 
 ==== Parameters
 
-`expression`:: The expression to choose a facet bucket for each document.
-`sort`:: A <<Facet Sorting,sort>> for the results of the pivot.
+`expression`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The expression to choose a facet bucket for each document.
 
-[NOTE]
-.Optional Parameters
-The `sort` parameter is optional.
+`sort`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+A <<Facet Sorting,sort>> for the results of the pivot.
 
 [source,json]
 .Example Value Facet Request
@@ -595,15 +644,14 @@ The `sort` parameter is optional.
 ----
 
 [NOTE]
-.Field Facets
-This is a replacement for Field Facets in the original Analytics Component.
-Field Facet functionality is maintained in Value Facets by using the name of a field as the expression.
+This is a replacement for field facets that existed in the original Analytics Component.
+Field facet functionality is maintained in value facets by using the name of a field as the expression.
 
 === Analytic Pivot Facets
 
 Pivot Facets are used to group documents by the value of multiple mapping expressions applied to each document.
 
-Pivot Facets work much like layers of <<value-facets,Analytic Value Facets>>.
+Pivot Facets work much like layers of <<Value Facets>>.
 A list of pivots is required, and the order of the list directly impacts the results returned.
 The first pivot given will be treated like a normal value facet.
 The second pivot given will be treated like one value facet for each value of the first pivot.
@@ -616,15 +664,42 @@ Sorting in each pivot is independent of the other pivots.
 
 ==== Parameters
 
-`pivots`:: The list of pivots to calculate a drill-down facet for.
+`pivots`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The list of pivots to calculate a drill-down facet for.
 The list is ordered by top-most to bottom-most level.
-`name`::: The name of the pivot.
-`expression`::: The expression to choose a facet bucket for each document.
-`sort`::: A <<Facet Sorting,sort>> for the results of the pivot.
 
-[NOTE]
-.Optional Parameters
-The `sort` parameter within the pivot object is optional, and can be given in any, none or all of the provided pivots.
+`name`:::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of the pivot.
+
+`expression`:::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The expression to choose a facet bucket for each document.
+
+`sort`:::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+A <<Facet Sorting,sort>> for the results of the pivot.
 
 [source,json]
 .Example Pivot Facet Request
@@ -700,33 +775,80 @@ Refer to the <<faceting.adoc#range-faceting,Range Facet documentation>> for addi
 
 ==== Parameters
 
-`field`:: Field to be faceted over
-`start`:: The bottom end of the range
-`end`:: The top end of the range
-`gap`:: A list of range gaps to generate facet buckets.
-If the buckets do not add up to fit the `start` to `end` range,
-then the last `gap` value will repeated as many times as needed to fill any unused range.
-`hardend`:: Whether to cutoff the last facet bucket range at the `end` value if it spills over.
-Defaults to `false`.
-`include`:: The boundaries to include in the facet buckets.
-Defaults to `lower`.
+`field`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Field to be faceted over.
+
+`start`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The bottom end of the range.
+
+`end`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The top end of the range.
+
+`gap`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+A list of range gaps to generate facet buckets.
+If the buckets do not add up to fit the `start` to `end` range, then the last `gap` value will be repeated as many times as needed to fill any unused range.
+
+`hardend`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Whether to cut off the last facet bucket range at the `end` value if it spills over.
+
+`include`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `lower`
+|===
++
+The boundaries to include in the facet buckets.
 * `lower` - All gap-based ranges include their lower bound.
 * `upper` - All gap-based ranges include their upper bound.
-* `edge` - The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one) even if the corresponding upper/lower option is not specified.
+* `edge` - The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one), even if the corresponding upper/lower option is not specified.
 * `outer` - The `before` and `after` ranges will be inclusive of their bounds, even if the first or last ranges already include those boundaries.
-* `all` - Includes all options: `lower`, `upper`, `edge`, and `outer`
-`others`:: Additional ranges to include in the facet.
-Defaults to `none`.
+* `all` - Includes all options: `lower`, `upper`, `edge`, and `outer`.
+
+`others`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `none`
+|===
++
+Additional ranges to include in the facet.
 * `before` - All records with field values lower than the lower bound of the first range.
 * `after` - All records with field values greater than the upper bound of the last range.
 * `between` - All records with field values between the lower bound of the first range and the upper bound of the last range.
 * `none` - Include facet buckets for none of the above.
 * `all` - Include facet buckets for `before`, `after` and `between`.
 
-[NOTE]
-.Optional Parameters
-The `hardend`, `include` and `others` parameters are all optional.
-
 [source,json]
 .Example Range Facet Request
 ----
@@ -798,11 +920,18 @@ The `hardend`, `include` and `others` parameters are all optional.
 
 === Query Facets
 
-Query Facets are used to group documents by given set of queries.
+Query facets are used to group documents by a given set of queries.
 
 ==== Parameters
 
-`queries`:: The list of queries to facet by.
+`queries`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The list of queries to facet by.
 
 [source,json]
 .Example Query Facet Request
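The range facet parameters above combine into a request body along these lines; a hedged sketch with hypothetical grouping, expression, and field names:

```json
{
  "analytics": {
    "groupings": {
      "sales": {
        "expressions": {
          "max_sale": "max(sale_raw_due)"
        },
        "facets": {
          "sale_ranges": {
            "type": "range",
            "field": "sale_raw_due",
            "start": "0",
            "end": "100",
            "gap": ["5", "10", "30"],
            "hardend": true,
            "include": ["lower", "upper"],
            "others": ["after", "between"]
          }
        }
      }
    }
  }
}
```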
diff --git a/solr/solr-ref-guide/src/backup-restore.adoc b/solr/solr-ref-guide/src/backup-restore.adoc
index 3a25bf6..f3a63e0 100644
--- a/solr/solr-ref-guide/src/backup-restore.adoc
+++ b/solr/solr-ref-guide/src/backup-restore.adoc
@@ -374,9 +374,15 @@ However since the "overseer" role often moves from node to node in a cluster, it
 
 A LocalFileSystemRepository instance is used as a default by any backup and restore commands that don't explicitly provide a `repository` parameter or have a default specified in `solr.xml`.
 
-LocalFileSystemRepository accepts the following configuration options:
+LocalFileSystemRepository accepts the following configuration option:
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A valid file path (accessible to Solr locally) to use for backup storage and retrieval.
 Used as a fallback when users don't provide a `location` parameter in their Backup or Restore API commands.
 
@@ -401,18 +407,40 @@ WARNING: HdfsBackupRepository is deprecated and may be removed or relocated in a
 HdfsBackupRepository accepts the following configuration options:
 
 `solr.hdfs.buffer.size`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `4096` bytes (4 KB)
+|===
++
 The size, in bytes, of the buffer used to transfer data to and from HDFS.
-Defaults to 4096 (4KB).
 Better throughput is often attainable with a larger buffer, where memory allows.
 
 `solr.hdfs.home`::
-Required.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 An HDFS URI in the format `hdfs://<host>:<port>/<hdfsBaseFilePath>` that points Solr to the HDFS cluster to store (or retrieve) backup files on.
 
 `solr.hdfs.permissions.umask-mode`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A permission umask used when creating files in HDFS.
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A valid directory path on the HDFS cluster to use for backup storage and retrieval.
 Used as a fallback when users don't provide a `location` parameter in their Backup or Restore API commands.
 
@@ -437,19 +465,37 @@ Stores and retrieves backup files in a Google Cloud Storage ("GCS") bucket.
 GCSBackupRepository accepts the following options for overall configuration:
 
 `gcsBucket`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The GCS bucket to read and write all backup files to.
 If not specified, GCSBackupRepository will use the value of the `GCS_BUCKET` environment variable.
 If both values are absent, the value `solrBackupsBucket` will be used as a default.
 
 `gcsCredentialPath`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 A path on the local filesystem (accessible by Solr) to a https://cloud.google.com/iam/docs/creating-managing-service-account-keys[Google Cloud service account key] file.
 If not specified, GCSBackupRepository will use the value of the `GCS_CREDENTIAL_PATH` environment variable.
 If both values are absent, an error will be thrown as GCS requires credentials for most usage.
 
 `location`::
-A valid "directory" path in the given GCS bucket to us for backup strage and retrieval.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+A valid "directory" path in the given GCS bucket to use for backup storage and retrieval.
 (GCS uses a flat storage model, but Solr's backup functionality names blobs in a way that approximates hierarchical directory storage.)
-Used as a fallback when user's don't provide a `location` parameter in their Backup or Restore API commands
+Used as a fallback when users don't provide a `location` parameter in their Backup or Restore API commands.
 
 In addition to these properties for overall configuration, GCSBackupRepository gives users detailed control over the client used to communicate with GCS.
 These properties are unlikely to interest most users, but may be valuable for those looking to micromanage performance or those subject to a flaky network.
@@ -457,76 +503,139 @@ These properties are unlikely to interest most users, but may be valuable for th
 GCSBackupRepository accepts the following advanced client-configuration options:
 
 `gcsWriteBufferSizeBytes`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `16777216` bytes (16 MB)
+|===
++
 The buffer size, in bytes, to use when sending data to GCS.
-`16777216` bytes (i.e., 16 MB) is used by default if not specified.
 
 `gcsReadBufferSizeBytes`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2097152` bytes (2 MB)
+|===
++
 The buffer size, in bytes, to use when copying data from GCS.
-`2097152` bytes (i.e., 2 MB) is used by default if not specified.
 
 `gcsClientHttpConnectTimeoutMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `20000` milliseconds
+|===
++
 The connection timeout, in milliseconds, for all HTTP requests made by the GCS client.
-"0" may be used to request an infinite timeout.
-A negative integer, or not specifying a value at all, will result in a value of `20000` (or 20 seconds).
+`0` may be used to request an infinite timeout.
+A negative integer, or not specifying a value at all, will result in the default value.
 
 `gcsClientHttpReadTimeoutMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `20000` milliseconds
+|===
++
 The read timeout, in milliseconds, for reading data on an established connection.
-"0" may be used to request an infinite timeout.
-A negative integer, or not specifying a value at all, will result in a value of 20000 (or 20 seconds).
+`0` may be used to request an infinite timeout.
+A negative integer, or not specifying a value at all, will result in the default value.
 
 `gcsClientMaxRetries`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `10`
+|===
++
 The maximum number of times to retry an operation upon failure.
 The GCS client will retry operations until this value is reached, or the time spent across all attempts exceeds `gcsClientMaxRequestTimeoutMillis`.
-"0" may be used to specify that no retries should be done.
-If not specified, this value defaults to 10.
+`0` may be used to specify that no retries should be done.
 
 `gcsClientMaxRequestTimeoutMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `300000` milliseconds (5 minutes)
+|===
++
 The maximum amount of time to spend on all retries of an operation that has failed.
 The GCS client will retry operations until either this timeout has been reached, or until `gcsClientMaxRetries` attempts have failed.
-If not specified the value 300000 (5 minutes) is used by default.
 
 `gcsClientHttpInitialRetryDelayMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1000` milliseconds
+|===
++
 The time, in milliseconds, to delay before the first retry of an HTTP request that has failed.
 This value also factors in to subsequent retries - see the `gcsClientHttpRetryDelayMultiplier` description below for more information.
-If `gcsClientMaxRetries` is 0, this property is ignored as no retries are attempted.
-If not specified the value 1000 (1 second) is used by default.
+If `gcsClientMaxRetries` is `0`, this property is ignored as no retries are attempted.
 
 `gcsClientHttpRetryDelayMultiplier`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1.0`
+|===
++
 A floating-point multiplier used to scale the delay between each successive retry of a failed HTTP request.
 The greater this number is, the more quickly the retry delay compounds and scales.
 +
 Under the covers, the GCS client uses an exponential backoff strategy between retries, governed by the formula: stem:[gcsClientH\t\tpInitialRetryDelayMillis*(gcsClientH\t\tpRetryDelayM\u\l\tiplier)^(retryNum-1)].
 The first retry will have a delay of stem:[gcsClientH\t\tpInitialRetryDelayMillis], the second a delay of stem:[gcsClientH\t\tpInitialRetryDelayMillis * gcsClientH\t\tpRetryDelayM\u\l\tiplier], the third a delay of stem:[gcsClientH\t\tpInitialRetryDelayMillis * gcsClientH\t\tpRetryDelayM\u\l\tiplier^2], and so on.
 +
-If not specified the value 1.0 is used by default, ensuring that `gcsClientHttpInitialRetryDelayMillis` is used between each retry attempt.
+If not specified the value `1.0` is used by default, ensuring that `gcsClientHttpInitialRetryDelayMillis` is used between each retry attempt.
 
 `gcsClientHttpMaxRetryDelayMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `30000` milliseconds
+|===
++
 The maximum delay, in milliseconds, between retry attempts on a failed HTTP request.
 This is commonly used to cap the exponential growth in retry-delay that occurs over multiple attempts.
 See the `gcsClientHttpRetryDelayMultiplier` description above for more information on how each delay is calculated when not subject to this maximum.
-If not specified the value 30000 (30 seconds) is used by default.
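Taken together, the initial delay, multiplier, and maximum delay describe a capped exponential backoff. As a rough illustration outside of Solr (the multiplier of `2.0` below is hypothetical; the actual default of `1.0` keeps the delay constant between retries):

```python
# Sketch of the documented backoff formula:
#   delay = gcsClientHttpInitialRetryDelayMillis * multiplier^(retryNum - 1),
# capped at gcsClientHttpMaxRetryDelayMillis.
initial_ms = 1000    # gcsClientHttpInitialRetryDelayMillis default
multiplier = 2.0     # hypothetical gcsClientHttpRetryDelayMultiplier (default is 1.0)
max_ms = 30000       # gcsClientHttpMaxRetryDelayMillis default

def retry_delay_ms(retry_num: int) -> int:
    """Delay before the Nth retry (1-based), capped at the maximum."""
    return int(min(initial_ms * multiplier ** (retry_num - 1), max_ms))

print([retry_delay_ms(n) for n in range(1, 7)])
# -> [1000, 2000, 4000, 8000, 16000, 30000]
```

Note how the sixth retry's uncapped delay (32000 ms) is clipped to `gcsClientHttpMaxRetryDelayMillis`.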
 
 `gcsClientRpcInitialTimeoutMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `10000` milliseconds
+|===
++
 The time, in milliseconds, to wait on an RPC request before timing out.
 This value also factors in to subsequent retries - see the `gcsClientRpcTimeoutMultiplier` description below for more information.
-If `gcsClientMaxRetries` is 0, this property is ignored as no retries are attempted.
-If not specified the value 10000 (10 seconds) is used by default.
+If `gcsClientMaxRetries` is `0`, this property is ignored as no retries are attempted.
 
 `gcsClientRpcTimeoutMultiplier`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1.0`
+|===
++
 A floating-point multiplier used to scale the timeout on each successive attempt of a failed RPC request.
 The greater this number is, the more quickly the timeout compounds and scales.
 +
 Under the covers, the GCS client uses an exponential backoff strategy for RPC timeouts, governed by the formula: stem:[gcsClientRpcInitialTimeoutMillis*(gcsClientRpcTimeoutM\u\l\tiplier)^(retryNum-1)].
 The first retry will have a timeout of stem:[gcsClientRpcInitialTimeoutMillis], the second a timeout of stem:[gcsClientRpcInitialTimeoutMillis * gcsClientRpcTimeoutM\u\l\tiplier], the third a timeout of stem:[gcsClientRpcInitialTimeoutMillis * gcsClientRpcTimeoutM\u\l\tiplier^2], and so on.
 +
-If not specified the value 1.0 is used by default, ensuring that `gcsClientRpcInitialTimeoutMillis` is used on each RPC attempt.
+If not specified the value `1.0` is used by default, ensuring that `gcsClientRpcInitialTimeoutMillis` is used on each RPC attempt.
 
 `gcsClientRpcMaxTimeoutMillis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `30000` milliseconds
+|===
++
 The maximum timeout, in milliseconds, for retry attempts of a failed RPC request.
 This is commonly used to cap the exponential growth in timeout that occurs over multiple attempts.
 See the `gcsClientRpcTimeoutMultiplier` description above for more information on how each timeout is calculated when not subject to this maximum.
-If not specified the value 30000 (30 seconds) is used by default.
-
 
 An example configuration using the overall and GCS-client properties can be seen below:
 
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
index ae06264..21fc23e 100644
--- a/solr/solr-ref-guide/src/charfilterfactories.adoc
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -156,16 +156,36 @@ This filter performs pre-tokenization Unicode normalization using http://site.ic
 
 Arguments:
 
-`form`:: A http://unicode.org/reports/tr15/[Unicode Normalization Form], one of `nfc`, `nfkc`, `nfkc_cf`.
-Default is `nfkc_cf`.
+`form`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `nfkc_cf`
+|===
++
+A http://unicode.org/reports/tr15/[Unicode Normalization Form], one of `nfc`, `nfkc`, or `nfkc_cf`.
 
-`mode`:: Either `compose` or `decompose`.
+`mode`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `compose`
+|===
++
+Either `compose` or `decompose`.
 Default is `compose`.
 Use `decompose` with `name="nfc"` or `name="nfkc"` to get NFD or NFKD, respectively.
 
-`filter`:: A http://www.icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet] pattern.
+`filter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `[]`
+|===
++
+A http://www.icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet] pattern.
 Codepoints outside the set are always left unchanged.
-Default is `[]` (the null set, no filtering - all codepoints are subject to normalization).
+Default is `[]`, the null set, meaning no filtering is done (all codepoints are subject to normalization).
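As a rough illustration of what the normalization forms above do (using Python's `unicodedata` module rather than ICU; Python lacks a direct `nfkc_cf` form, but NFKC followed by casefolding approximates it):

```python
import unicodedata

s = "\ufb01ancée ²"  # contains the "fi" ligature (U+FB01) and a superscript two

# NFC composes characters but keeps compatibility characters like the ligature.
print(unicodedata.normalize("NFC", s))
# NFKC additionally maps compatibility characters to their plain equivalents.
print(unicodedata.normalize("NFKC", s))   # -> "fiancée 2"
# NFKC plus casefolding roughly corresponds to the nfkc_cf form.
print(unicodedata.normalize("NFKC", s).casefold())
```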
 
 Example:
 
@@ -203,9 +223,23 @@ This filter uses http://www.regular-expressions.info/reference.html[regular expr
 
 Arguments:
 
-`pattern`:: the regular expression pattern to apply to the incoming text.
+`pattern`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The regular expression pattern to apply to the incoming text.
 
-`replacement`:: the text to use to replace matching patterns.
+`replacement`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The text to use to replace matching patterns.
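As a rough sketch of how `pattern` and `replacement` interact (shown with Python's `re` module rather than Solr itself; the pattern below is hypothetical, and note that Solr's replacement strings reference capture groups as `$1` where Python uses `\1`):

```python
import re

# Hypothetical pattern: rewrite "NN-NN" ranges as "NN to NN" via capture groups.
pattern = r"(\d+)-(\d+)"
replacement = r"\1 to \2"  # a Solr charfilter config would write this as "$1 to $2"

print(re.sub(pattern, replacement, "see pages 12-34 and 56-78"))
# -> see pages 12 to 34 and 56 to 78
```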
 
 You can configure this filter in `schema.xml` like this:
 
diff --git a/solr/solr-ref-guide/src/cluster-node-management.adoc b/solr/solr-ref-guide/src/cluster-node-management.adoc
index 3e69179..e595ad13 100644
--- a/solr/solr-ref-guide/src/cluster-node-management.adoc
+++ b/solr/solr-ref-guide/src/cluster-node-management.adoc
@@ -68,15 +68,33 @@ We do not currently have a V2 equivalent.
 === CLUSTERSTATUS Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The collection or alias name for which information is requested.
 If omitted, information on all collections in the cluster will be returned.
 If an alias is supplied, information on the collections in the alias will be returned.
 
 `shard`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The shard(s) for which information is requested.
 Multiple shard names can be specified as a comma-separated list.
 
 `\_route_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This can be used if you need the details of the shard a particular document belongs to and you don't know which shard it falls under.
 
 === CLUSTERSTATUS Response
@@ -203,6 +221,12 @@ curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/jso
 === CLUSTERPROP Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of the property.
 Supported property names are `location`, `maxCoresPerNode`, `urlScheme`, and `defaultShardPreferences`.
 If the <<distributed-tracing.adoc#,Jaeger tracing contrib>> has been enabled, the property `samplePercentage` is also available.
@@ -211,6 +235,12 @@ Other properties can be set (for example, if you need them for custom plugins) b
 Unknown properties that don't begin with `ext.` will be rejected.
 
 `val`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The value of the property.
 If the value is empty or null, the property is unset.
 
@@ -370,20 +400,41 @@ curl -X POST http://localhost:8983/api/collections/techproducts -H 'Content-Type
 === BALANCESHARDUNIQUE Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to balance the property in.
-This parameter is required.
 
 `property`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The property to balance.
 The literal `property.` is prepended to this property if not specified explicitly.
-This parameter is required.
 
 `onlyactivenodes`::
-Defaults to `true`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 Normally, the property is instantiated on active nodes only.
 If this parameter is specified as `false`, then inactive nodes are also included for distribution.
 
 `shardUnique`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Something of a safety valve.
 There is one pre-defined property (`preferredLeader`) that defaults this value to `true`.
 For all other properties that are balanced, this must be set to `true` or an error message will be returned.
@@ -453,24 +504,51 @@ We do not currently have a V2 equivalent.
 === REPLACENODE Parameters
 
 `sourceNode`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The source node from which the replicas need to be copied from.
-This parameter is required.
 
 `targetNode`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The target node where replicas will be copied.
 If this parameter is not provided, Solr will identify nodes automatically based on policies or number of cores in each node.
 
 `parallel`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If this flag is set to `true`, all replicas are created in separate threads.
 Keep in mind that this can lead to very high network and disk I/O if the replicas have very large indices.
-The default is `false`.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 `timeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `300` seconds
+|===
++
 Time in seconds to wait until new replicas are created, and until leader replicas are fully recovered.
-The default is `300`, or 5 minutes.
 
 [IMPORTANT]
 ====
@@ -508,10 +586,21 @@ We do not currently have a V2 equivalent.
 === DELETENODE Parameters
 
 `node`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The node to be removed.
-This parameter is required.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 [[addrole]]
@@ -560,14 +649,24 @@ curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/jso
 === ADDROLE Parameters
 
 `role`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the role.
 The only supported role as of now is `overseer`.
-This parameter is required.
 
 `node`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the node that will be assigned the role.
 It is possible to assign a role even before that node is started.
-This parameter is started.
 
 === ADDROLE Response
 
@@ -635,11 +734,22 @@ curl -X POST http://localhost:8983/api/cluster -H 'Content-Type: application/jso
 === REMOVEROLE Parameters
 
 `role`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the role.
 The only supported role as of now is `overseer`.
-This parameter is required.
 
 `node`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the node where the role should be removed.
 
 
diff --git a/solr/solr-ref-guide/src/cluster-plugins.adoc b/solr/solr-ref-guide/src/cluster-plugins.adoc
index ad43eba..d4cae1b 100644
--- a/solr/solr-ref-guide/src/cluster-plugins.adoc
+++ b/solr/solr-ref-guide/src/cluster-plugins.adoc
@@ -37,24 +37,53 @@ The configuration is a JSON map where keys are the unique plugin names, and valu
 The following common plugin properties are supported:
 
 `name`::
-(required) unique plugin name.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+A unique plugin name.
 Some plugin types require using one of the predefined names to properly function.
-By convention such predefined names use a leading-dot prefix (e.g., `.placement-plugin`)
+By convention such predefined names use a leading-dot prefix (e.g., `.placement-plugin`).
 
 `class`::
-(required) implementation class.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The implementation class.
 This can be specified as a fully-qualified class name if the class is available as a part of Solr, or it can be also specified using the `<package>:<className>` syntax to refer to a class inside one of the Solr packages.
 
 `version`::
-(required when class is loaded from a package).
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Solr package version.
++
+This parameter is required when the class is loaded from a package and not from Solr itself.
 
 `path-prefix`::
-(optional, default is `none`).
-Path prefix to be added to the REST API endpoints defined in the plugin.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `none`
+|===
++
+A path prefix to be added to the REST API endpoints defined in the plugin.
 
 `config`::
-(optional, default is `none`).
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `none`
+|===
++
 A JSON map of additional plugin configuration parameters.
 Plugins that implement `ConfigurablePlugin` interface will be initialized with a
 plugin-specific configuration object deserialized from this map.
@@ -104,19 +133,19 @@ dynamically changed and reconfigured without restarting the Solr nodes, and the
 Plugins with these names are used in specific parts of Solr.
 Their names are reserved and cannot be used for other plugin types:
 
-`.placement-plugin`::
-A plugin that implements `PlacementPluginFactory` interface.
+* `.placement-plugin`: A plugin that implements `PlacementPluginFactory` interface.
 This type of plugin determines the replica placement strategy in the cluster.
 
-`.cluster-event-producer`::
-A plugin that implements `ClusterEventProducer` interface.
+* `.cluster-event-producer`: A plugin that implements `ClusterEventProducer` interface.
 This type of plugin is used for generating cluster-level events.
 
 === PlacementPluginFactory Plugins
+
 This type of plugin supports configurable placement strategies for collection
 replicas.
 
 === ClusterSingleton Plugins
+
 Plugins that implement `ClusterSingleton` interface are instantiated on each
 Solr node.
 However, their start/stop life-cycle, as defined in the interface, is controlled in such a way that only a single running instance of the plugin is present in the cluster at any time.
@@ -128,6 +157,7 @@ Any plugin can implement this interface to indicate to Solr that
 it requires this cluster singleton behavior.
 
 === ClusterEventProducer Plugins
+
 In order to support the generation of cluster-level events an implementation of
 `ClusterEventProducer` is created on each Solr node.
 This component is also a `ClusterSingleton`, which means that only one active instance is present in the
@@ -159,8 +189,8 @@ curl -X POST -H 'Content-type: application/json' -d '{
   http://localhost:8983/api/cluster/plugin
 ----
 
-
 === ClusterEventListener Plugins
+
 Plugins that implement the `ClusterEventListener` interface will be automatically registered with the instance of `ClusterEventProducer`.
 
 // XXX edit this once SOLR-14977 is done
@@ -168,6 +198,7 @@ Implementations will be notified of all events that are generated by the
 `ClusterEventProducer` and need to select only events that they are interested in.
 
 ==== org.apache.solr.cluster.events.impl.CollectionsRepairEventListener
+
 An implementation of listener that reacts to NODE_LOST events and checks what replicas need to be re-added to other nodes to keep the replication counts the same as before.
 
 This implementation waits for a certain period (default is 30s) to make sure the node is really down.
@@ -188,6 +219,7 @@ curl -X POST -H 'Content-type: application/json' -d '{
 == Plugin Management API
 
 === List Plugins
+
 This command uses HTTP GET and returns a list of loaded plugins and their configurations:
 
 [source,bash]
@@ -196,6 +228,7 @@ curl http://localhost:8983/api/cluster/plugin
 ----
 
 === Add Plugin
+
 This command uses HTTP POST to add a new plugin configuration.
 If a plugin with the same name already exists this results in an error.
 
@@ -212,6 +245,7 @@ curl -X POST -H 'Content-type: application/json' -d '{
 ----
 
 === Update Plugin
+
 This command uses HTTP POST to update an existing plugin configuration.
 If a plugin with this name doesn't exist this results in an error.
 
@@ -231,6 +265,7 @@ curl -X POST -H 'Content-type: application/json' -d '{
 ----
 
 === Remove Plugin
+
 This command uses HTTP POST to delete an existing plugin configuration.
 If a plugin with this name doesn't exist this results in an error.
 
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index 4f7d87a..41b16f1 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -43,61 +43,93 @@ The CollapsingQParserPlugin fully supports the QueryElevationComponent.
 The CollapsingQParser accepts the following local params:
 
 `field`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The field that is being collapsed on.
-The field must be a single valued String, Int or Float-type of field.
+The field must be a single-valued String, Int, or Float field.
 
 `min` or `max`::
-Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#,function query>>.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Selects the group head document for each group based on which document has the minimum or maximum value of the specified numeric field or <<function-queries.adoc#,function query>>.
 +
 At most only one of the `min`, `max`, or `sort` (see below) parameters may be specified.
 +
 If none are specified, the group head document of each group will be selected based on the highest scoring document in that group.
-The default is none.
 
 `sort`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Selects the group head document for each group based on which document comes first according to the specified <<common-query-parameters.adoc#sort-parameter,sort string>>.
 +
 At most only one of the `min`, `max`, (see above) or `sort` parameters may be specified.
 +
 If none are specified, the group head document of each group will be selected based on the highest scoring document in that group.
-The default is none.
 
 `nullPolicy`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `ignore`
+|===
++
 There are three available null policies:
 +
 * `ignore`: removes documents with a null value in the collapse field.
-This is the default.
 * `expand`: treats each document with a null value in the collapse field as a separate group.
 * `collapse`: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
-+
-The default is `ignore`.
 
 `hint`::
 +
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 There are two hint options available:
 +
-`top_fc`::: This stands for top level FieldCache.
+* `top_fc`: This stands for top level FieldCache.
 +
-The `hint=top_fc` hint is only available when collapsing on String fields.
+The `top_fc` hint is only available when collapsing on String fields.
 `top_fc` usually provides the best query time speed but takes the longest to warm on startup or following a commit.
 `top_fc` will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting.
 For very high cardinality (high distinct count) fields, `top_fc` may not fare so well.
 +
-`hint=block`::: This indicates that the field being collapsed on is suitable for the optimzed <<#block-collapsing,Block Collapse>> logic described below.
-+
-The default is none.
+* `block`: This indicates that the field being collapsed on is suitable for the optimized <<#block-collapsing,Block Collapse>> logic described below.
 
 `size`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `100000`
+|===
++
 Sets the initial size of the collapse data structures when collapsing on a *numeric field only*.
 +
 The data structures used for collapsing grow dynamically when collapsing on numeric fields.
 Setting the size above the number of results expected in the result set will eliminate the resizing cost.
-+
-The default is 100,000.
 
 `collectElevatedDocsWhenCollapsing`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 In combination with the <<collapse-and-expand-results.adoc#collapsing-query-parser,Collapse Query Parser>> all elevated docs are visible at the beginning of the result set.
-If this parameter is `false`, only the representative is visible if the elevated docs has the same collapse key (default is `true`).
+If this parameter is `false`, only the representative is visible if the elevated docs have the same collapse key.
 
 
 === Sample Usage Syntax
@@ -207,20 +239,42 @@ As applications iterate the main collapsed result set, they can access the _expa
 The ExpandComponent has the following parameters:
 
 `expand`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 When `true`, the ExpandComponent is enabled.
 
 `expand.field`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Field on which expand documents need to be populated.
 When `expand=true`, either this parameter must be specified or the request should use the CollapsingQParserPlugin.
 When both are specified, this parameter is given higher priority.
 
 `expand.sort`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `score desc`
+|===
++
 Orders the documents within the expanded groups.
-The default is `score desc`.
 
 `expand.rows`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `5`
+|===
++
 The number of rows to display in each group.
-The default is 5 rows.
 +
 [IMPORTANT]
 ====
@@ -229,15 +283,32 @@ Hence, scores won't be computed even if requested and `maxScore` is set to 0.
 ====
 
 `expand.q`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Overrides the main query (`q`), determines which documents to include in the main group.
 The default is to use the main query.
 
 `expand.fq`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Overrides main filter queries (`fq`), determines which documents to include in the main group.
 The default is to use the main filter queries.
 
 `expand.nullGroup`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Indicates if an expanded group can be returned containing documents with no value in the expanded field.
 This option only _enables_ support for returning a "null" expanded group.
 As with all expanded groups, it will only exist if the main group includes corresponding documents for it to expand (via `collapse` using either `nullPolicy=collapse` or `nullPolicy=expand`; or via `expand.q`) _and_ documents are found that belong in this expanded group.
-The default value is `false`.
diff --git a/solr/solr-ref-guide/src/collection-management.adoc b/solr/solr-ref-guide/src/collection-management.adoc
index fa658bf..608dbb6 100644
--- a/solr/solr-ref-guide/src/collection-management.adoc
+++ b/solr/solr-ref-guide/src/collection-management.adoc
@@ -70,13 +70,24 @@ curl -X POST http://localhost:8983/api/collections -H 'Content-Type: application
 The CREATE action allows the following parameters:
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to be created.
-This parameter is required.
 
 `router.name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `compositeId`
+|===
++
 The router name that will be used.
 The router defines how documents will be distributed among the shards.
-Possible values are `implicit` or `compositeId`, which is the default.
+Possible values are `implicit` or `compositeId`.
 +
 The `implicit` router does not automatically route documents to different shards.
 Whichever shard you indicate on the indexing request (or within each document) will be used as the destination for those documents.
@@ -89,38 +100,79 @@ When using the `compositeId` router, the `numShards` parameter is required.
 For more information, see also the section <<solrcloud-shards-indexing.adoc#document-routing,Document Routing>>.
 
 `numShards`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The number of shards to be created as part of the collection.
 This is a required parameter when the `router.name` is `compositeId`.
 
 `shards`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A comma-separated list of shard names, e.g., `shard-x,shard-y,shard-z`.
 This is a required parameter when the `router.name` is `implicit`.
 
 `replicationFactor`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
 The number of replicas to be created for each shard.
-The default is `1`.
 +
 This will create a NRT type of replica.
-If you want another type of replica, see the `tlogReplicas` and `pullReplica` parameters below.
+If you want another type of replica, see the `tlogReplicas` and `pullReplicas` parameters below.
 See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
 
 `nrtReplicas`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The number of NRT (Near-Real-Time) replicas to create for this collection.
 This type of replica maintains a transaction log and updates its index locally.
 If you want all of your replicas to be of this type, you can simply use `replicationFactor` instead.
 
 `tlogReplicas`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The number of TLOG replicas to create for this collection.
 This type of replica maintains a transaction log but only updates its index via replication from a leader.
 See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
 
 `pullReplicas`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The number of PULL replicas to create for this collection.
 This type of replica does not maintain a transaction log and only updates its index via replication from a leader.
 This type is not eligible to become a leader and should not be the only type of replicas in the collection.
 See the section <<solrcloud-shards-indexing.adoc#types-of-replicas,Types of Replicas>> for more information about replica types.
 
 `createNodeSet` (v1), `nodeSet` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Allows defining the nodes to spread the new collection across.
 The format is a comma-separated list of node_names, such as `localhost:8983_solr,localhost:8984_solr,localhost:8985_solr`.
 +
@@ -129,32 +181,61 @@ If not provided, the CREATE operation will create shard-replicas spread across a
 Alternatively, use the special value of `EMPTY` to initially create no shard-replica within the new collection and then later use the <<replica-management.adoc#addreplica,ADDREPLICA>> operation to add shard-replicas when and where required.
 
 `createNodeSet.shuffle` (v1), `shuffleNodes` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 Controls whether or not the shard-replicas created for this collection will be assigned to the nodes specified by the `createNodeSet` in a sequential manner, or if the list of nodes should be shuffled prior to creating individual replicas.
 +
 A `false` value makes the results of a collection creation predictable and gives more exact control over the location of the individual shard-replicas, but `true` can be a better choice for ensuring replicas are distributed evenly across nodes.
-The default is `true`.
 +
 This parameter is ignored if `createNodeSet` is not also specified.
 
 `collection.configName` (v1), `config` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines the name of the configuration (which *must already be stored in ZooKeeper*) to use for this collection.
 +
 If not provided, Solr will use the configuration of `_default` configset to create a new (and mutable) configset named `<collectionName>.AUTOCREATED` and will use it for the new collection.
 When such a collection is deleted, its autocreated configset will be deleted by default when it is not in use by any other collection.
 
 `router.field` (v1), `router` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If this parameter is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the `uniqueKey` field.
 If the field specified is null in the document, the document will be rejected.
 +
 Please note that <<realtime-get.adoc#,RealTime Get>> or retrieval by document ID would also require the parameter `\_route_` (or `shard.keys`) to avoid a distributed search.
 
 `perReplicaState`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, the states of individual replicas will be maintained as individual children of `state.json`.
-The default is `false`.
 
 `property._name_=_value_`::
-Set core property _name_ to _value_. See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Set core property _name_ to _value_.
+See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
++
 [WARNING]
 ====
 The entries in each core.properties file are vital for Solr to function correctly.
@@ -162,19 +243,36 @@ Overriding entries can result in unusable collections.
 Altering these entries by specifying `property._name_=_value_` is an expert-level option and should only be used if you have a thorough understanding of the consequences.
 ====
 
-`async`::
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
-
 `waitForFinalState`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, the request will complete only when all affected replicas become active.
 The default is `false`, which means that the API will return the status of the single action, which may be before the new replica is online and active.
 
 `alias`::
-Starting with version 8.1 when a collection is created additionally an alias can be created
-that points to this collection.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+When a collection is created, an alias that points to the new collection can also be created.
 This parameter allows specifying the name of this alias, effectively combining
 this operation with <<alias-management.adoc#createalias,CREATEALIAS>>.
 
+`async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+
 Collections are first created in read-write mode but can be put in `readOnly`
 mode using the <<collection-management.adoc#modifycollection,MODIFYCOLLECTION>> action.
 
@@ -189,8 +287,7 @@ If the status is anything other than "success", an error message will explain wh
 The RELOAD action is used when you have changed a configuration file in ZooKeeper, like uploading a new `schema.xml`.
 Solr automatically reloads collections when certain files, monitored via a watch in ZooKeeper are changed,
 such as `security.json`.
- However, for changes to files in configsets, like uploading a new `schema.xml`, you
-will need to manually trigger the RELOAD.
+However, for changes to files in configsets, like uploading a new schema, you will need to manually trigger the RELOAD.
 
 [.dynamic-tabs]
 --
@@ -235,15 +332,25 @@ curl -X POST http://localhost:8983/api/collections/techproducts_v2 -H 'Content-T
 ====
 --
 
-
-
 === RELOAD Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of the collection to reload.
 This parameter is required by the V1 API.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 === RELOAD Response
@@ -300,47 +407,59 @@ curl -X POST http://localhost:8983/api/collections/techproducts_v2 -H 'Content-T
 === MODIFYCOLLECTION Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to be modified.
-This parameter is required.
-
-`async`::
-Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 `_attribute_=_value_`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 Key-value pairs of attribute names and attribute values.
-
++
 At least one `_attribute_` parameter is required.
-
++
 The attributes that can be modified are:
 
-* replicationFactor
-* collection.configName
-* readOnly
+* `replicationFactor`
+* `collection.configName`
+* `readOnly`
 * other custom properties that use a `property.` prefix
-
++
 See the <<create,CREATE action>> section above for details on these attributes.
 
+`async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
+
 [[readonlymode]]
 ==== Read-Only Mode
-Setting the `readOnly` attribute to `true` puts the collection in read-only mode,
-in which any index update requests are rejected.
-Other collection-level actions (e.g., adding /
-removing / moving replicas) are still available in this mode.
+Setting the `readOnly` attribute to `true` puts the collection in read-only mode, in which any index update requests are rejected.
+Other collection-level actions (e.g., adding / removing / moving replicas) are still available in this mode.
 
 The transition from the (default) read-write to read-only mode consists of the following steps:
 
 * the `readOnly` flag is changed in collection state,
-* any new update requests are rejected with 403 FORBIDDEN error code (ongoing
-  long-running requests are aborted, too),
+* any new update requests are rejected with 403 FORBIDDEN error code (ongoing long-running requests are aborted, too),
 * a forced commit is performed to flush and commit any in-flight updates.
++
+NOTE: This may potentially take a long time if there are still major segment merges running in the background.
 
-NOTE: This may potentially take a long time if there are still major segment merges running
- in the background.
-
-* a collection <<reload, RELOAD action>> is executed.
+* a collection <<reload,RELOAD action>> is executed.
 
-Removing the `readOnly` property or setting it to false enables the
-processing of updates and reloads the collection.
+Removing the `readOnly` property or setting it to false enables the processing of updates and reloads the collection.
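A minimal sketch of toggling read-only mode with MODIFYCOLLECTION follows; the host, port, and collection name are assumptions:

```shell
SOLR_URL="http://localhost:8983/solr"   # assumed host/port
COLLECTION="techproducts"               # hypothetical collection

# Put the collection in read-only mode (index updates rejected):
ENABLE_RO="${SOLR_URL}/admin/collections?action=MODIFYCOLLECTION&collection=${COLLECTION}&readOnly=true"
# An empty value removes the property, re-enabling updates and reloading the collection:
DISABLE_RO="${SOLR_URL}/admin/collections?action=MODIFYCOLLECTION&collection=${COLLECTION}&readOnly="
echo "$ENABLE_RO"
echo "$DISABLE_RO"
```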
 
 [[list]]
 == LIST: List Collections
@@ -372,7 +491,6 @@ curl -X GET http://localhost:8983/api/collections
 ====
 --
 
-
 *Output*
 
 [source,json]
@@ -389,15 +507,10 @@ curl -X GET http://localhost:8983/api/collections
 [[rename]]
 == RENAME: Rename a Collection
 
-Renaming a collection sets up a standard alias that points to the underlying collection, so
-that the same (unmodified) collection can now be referred to in query, index and admin operations
-using the new name.
+Renaming a collection sets up a standard alias that points to the underlying collection, so that the same (unmodified) collection can now be referred to in query, index and admin operations using the new name.
 
-This command does NOT actually rename the underlying Solr collection - it sets up a new one-to-one alias
-using the new name, or renames the existing alias so that it uses the new name, while still referring to
-the same underlying Solr collection.
-However, from the user's point of view the collection can now be
-accessed using the new name, and the new name can be also referred to in other aliases.
+This command does NOT actually rename the underlying Solr collection - it sets up a new one-to-one alias using the new name, or renames the existing alias so that it uses the new name, while still referring to the same underlying Solr collection.
+However, from the user's point of view the collection can now be accessed using the new name, and the new name can be also referred to in other aliases.
 
 The following limitations apply:
 
@@ -431,42 +544,50 @@ We do not currently have a V2 equivalent.
 === RENAME Command Parameters
 
 `name`::
-Name of the existing SolrCloud collection or an alias that refers to exactly one collection and is not
-a Routed Alias.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Name of the existing SolrCloud collection or an alias that refers to exactly one collection and is not a Routed Alias.
 
 `target`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 Target name of the collection.
 This will be the new alias that refers to the underlying SolrCloud collection.
-The original name (or alias) of the collection will be replaced also in the existing aliases so that they
-also refer to the new name.
+The original name (or alias) of the collection will be replaced also in the existing aliases so that they also refer to the new name.
 Target name must not be an existing alias.
 
 === Examples using RENAME
-Assuming there are two actual SolrCloud collections named `collection1` and `collection2`,
-and the following aliases already exist:
 
-* `col1 -&gt; collection1`: this resolves to `collection1`.
-* `col2 -&gt; collection2`: this resolves to `collection2`.
-* `simpleAlias -&gt; col1`: this resolves to `collection1`.
-* `compoundAlias -&gt; col1,col2`: this resolves to `collection1,collection2`
+Assuming there are two actual SolrCloud collections named `collection1` and `collection2`, and the following aliases already exist:
+
+* `col1 => collection1`: this resolves to `collection1`.
+* `col2 => collection2`: this resolves to `collection2`.
+* `simpleAlias => col1`: this resolves to `collection1`.
+* `compoundAlias => col1,col2`: this resolves to `collection1,collection2`
 
 The RENAME of `col1` to `foo` will change the aliases to the following:
 
-* `foo -&gt; collection1`: this resolves to `collection1`.
-* `col2 -&gt; collection2`: this resolves to `collection2`.
-* `simpleAlias -&gt; foo`: this resolves to `collection1`.
-* `compoundAlias -&gt; foo,col2`: this resolves to `collection1,collection2`.
+* `foo => collection1`: this resolves to `collection1`.
+* `col2 => collection2`: this resolves to `collection2`.
+* `simpleAlias => foo`: this resolves to `collection1`.
+* `compoundAlias => foo,col2`: this resolves to `collection1,collection2`.
 
 If we then rename `collection1` (which is an actual collection name) to `collection2` (which is also
 an actual collection name) the following aliases will exist now:
 
-* `foo -&gt; collection2`: this resolves to `collection2`.
-* `col2 -&gt; collection2`: this resolves to `collection2`.
-* `simpleAlias -&gt; foo`: this resolves to `collection2`.
-* `compoundAlias -&gt; foo,col2`: this would resolve now to `collection2,collection2` so it's reduced to simply `collection2`.
-* `collection1` -&gt; `collection2`: this newly created alias effectively hides `collection1` from regular query and
-update commands, which are directed now to `collection2`.
-
+* `foo => collection2`: this resolves to `collection2`.
+* `col2 => collection2`: this resolves to `collection2`.
+* `simpleAlias => foo`: this resolves to `collection2`.
+* `compoundAlias => foo,col2`: this would resolve now to `collection2,collection2` so it's reduced to simply `collection2`.
+* `collection1` => `collection2`: this newly created alias effectively hides `collection1` from regular query and update commands, which are directed now to `collection2`.
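The alias changes above are driven by plain RENAME requests. As a sketch (host and port are assumed), the two renames would be issued with URLs like these:

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# First RENAME: col1 becomes foo.
RENAME1="${SOLR_URL}/admin/collections?action=RENAME&name=col1&target=foo"
# Second RENAME: collection1 becomes collection2.
RENAME2="${SOLR_URL}/admin/collections?action=RENAME&name=collection1&target=collection2"
echo "$RENAME1"
echo "$RENAME2"
```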
 
 [[delete]]
 == DELETE: Delete a Collection
@@ -507,10 +628,21 @@ curl -X DELETE http://localhost:8983/api/collections/techproducts_v2?async=aaaa
 === DELETE Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to delete.
-This parameter is required.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 === DELETE Response
@@ -565,7 +697,6 @@ http://localhost:8983/solr/admin/collections?action=COLLECTIONPROP&name=techprod
 ====
 [.tab-label]*V2 API*
 
-
 [source,bash]
 ----
 curl -X POST http://localhost:8983/api/collections/techproducts_v2 -H 'Content-Type: application/json' -d '
@@ -580,17 +711,33 @@ curl -X POST http://localhost:8983/api/collections/techproducts_v2 -H 'Content-T
 ====
 --
 
-
-
 === COLLECTIONPROP Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of the collection for which the property would be set.
 
 `propertyName` (v1), `name` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of the property.
 
 `propertyValue` (v1), `value` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The value of the property.
 When not provided, the property is deleted.
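As a sketch (the collection and property names are hypothetical), setting and then deleting a collection property with the V1 API:

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Set property "foo" to "bar" on the collection:
SET_PROP="${SOLR_URL}/admin/collections?action=COLLECTIONPROP&name=techproducts&propertyName=foo&propertyValue=bar"
# Omitting propertyValue deletes the property:
DEL_PROP="${SOLR_URL}/admin/collections?action=COLLECTIONPROP&name=techproducts&propertyName=foo"
echo "$SET_PROP"
echo "$DEL_PROP"
```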
 
@@ -637,7 +784,6 @@ curl -X POST http://localhost:8983/api/collections/techproducts_v2 -H 'Content-T
 ====
 --
 
-
 The routing key specified by the `split.key` parameter may span multiple shards on both the source and the target collections.
 The migration is performed shard-by-shard in a single thread.
 One or more temporary collections may be created by this command during the ‘migrate’ process but they are cleaned up at the end automatically.
@@ -655,26 +801,59 @@ Please note that the MIGRATE API does not perform any de-duplication on the docu
 === MIGRATE Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the source collection from which documents will be split.
-This parameter is required.
 
 `target.collection` (v1), `target` (v2)::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the target collection to which documents will be migrated.
-This parameter is required.
 
 `split.key` (v1), `splitKey` (v2)::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The routing key prefix.
 For example, if the uniqueKey of a document is "a!123", then you would use `split.key=a!`.
-This parameter is required.
 
 `forward.timeout` (v1), `forwardTimeout` (v2)::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `60` seconds
+|===
++
 The timeout, in seconds, until which write requests made to the source collection for the given `split.key` will be forwarded to the target shard.
-The default is 60 seconds.
 
 `property._name_=_value_`::
-Set core property _name_ to _value_. See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Set core property _name_ to _value_.
+See the section <<core-discovery.adoc#,Core Discovery>> for details on supported properties and values.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
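Tying the parameters together, a sketch of a V1 MIGRATE request using the `split.key=a!` routing prefix mentioned above (the host and collection names are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Migrate documents whose routing key prefix is "a!" from a hypothetical source
# collection to a target collection, tracked asynchronously:
MIGRATE_URL="${SOLR_URL}/admin/collections?action=MIGRATE&collection=collection1&target.collection=collection2&split.key=a!&async=migrate-1"
echo "$MIGRATE_URL"
```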
 
 === MIGRATE Response
@@ -708,12 +887,8 @@ We do not currently have a V2 equivalent.
 ====
 --
 
-
-NOTE: Reindexing is potentially a lossy operation - some of the existing indexed data that is not
-available as stored fields may be lost, so users should use this command
-with caution, evaluating the potential impact by using different source and target
-collection names first, and preserving the source collection until the evaluation is
-complete.
+NOTE: Reindexing is potentially a lossy operation.
+Some of the existing indexed data that is not available as stored fields may be lost, so users should use this command with caution, evaluating the potential impact by using different source and target collection names first, and preserving the source collection until the evaluation is complete.
 
 The target collection must not exist (and may not be an alias).
 If the target collection name is the same as the source collection then first a unique sequential name will be generated for the target collection, and then after reindexing is done an alias will be created that points from the source name to the actual sequentially-named target collection.
@@ -730,54 +905,100 @@ Long-running, erroneous or crashed reindexing operations may be terminated by us
 === REINDEXCOLLECTION Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 Source collection name, may be an alias.
-This parameter is required.
 
 `cmd`::
-Optional command.
-Default command is `start`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `start`
+|===
++
 Currently supported commands are:
-* `start` - default, starts processing if not already running,
-* `abort` - aborts an already running reindexing (or clears a left-over status after a crash),
-and deletes partial results,
-* `status` - returns detailed status of a running reindexing command.
+
+* `start`: starts processing if not already running.
+* `abort`: aborts an already running reindexing (or clears a left-over status after a crash), and deletes partial results.
+* `status`: returns detailed status of a running reindexing command.
 
 `target`::
-Target collection name, optional.
-If not specified a unique name will be generated and after all documents have been copied an alias will be created that points from the source collection name to the unique sequentially-named collection, effectively "hiding"
-the original source collection from regular update and search operations.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Target collection name.
+If not specified a unique name will be generated and after all documents have been copied an alias will be created that points from the source collection name to the unique sequentially-named collection.
+This effectively "hides" the original source collection from regular update and search operations.
 
 `q`::
-Optional query to select documents for reindexing.
-Default value is `\*:*`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `\*:*`
+|===
++
+The query to select documents for reindexing.
 
 `fl`::
-Optional list of fields to reindex.
-Default value is `*`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `*`
+|===
++
+A list of fields to reindex.
 
 `rows`::
-Documents are transferred in batches.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `100`
+|===
++
+The batch size for transferring documents.
 Depending on the average size of the documents, large batch sizes may cause memory issues.
-Default value is 100.
 
 `configName`::
 `collection.configName`::
-Optional name of the configset for the target collection.
-Default is the same as the source collection.
-
-There's a number of optional parameters that determine the target collection layout.
-If they are not specified in the request then their values are copied from the source collection.
-The following parameters are currently supported (described in detail in the <<create,CREATE collection>> section):
-`numShards`, `replicationFactor`, `nrtReplicas`, `tlogReplicas`, `pullReplicas`,
-`shards`, `policy`, `createNodeSet`, `createNodeSet.shuffle`, `router.*`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: <name of the source collection>
+|===
++
+The name of the configset for the target collection.
 
 `removeSource`::
-Optional boolean.
-If true then after the processing is successfully finished the source collection will be deleted.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true` then after the processing is successfully finished the source collection will be deleted.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Optional request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
+There are additionally a number of optional parameters that determine the target collection layout.
+If they are not specified in the request then their values are copied from the source collection.
+The following parameters are currently supported (described in detail in the <<create,CREATE collection>> section):
+`numShards`, `replicationFactor`, `nrtReplicas`, `tlogReplicas`, `pullReplicas`,
+`shards`, `policy`, `createNodeSet`, `createNodeSet.shuffle`, `router.*`.
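Combining several of the parameters above, a sketch of starting and polling a reindexing run (host, port, and collection names are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
SRC="techproducts"                     # hypothetical source collection

# Start reindexing into an explicitly named target, copying only selected fields:
START_URL="${SOLR_URL}/admin/collections?action=REINDEXCOLLECTION&name=${SRC}&target=${SRC}_v2&fl=id,name&async=reindex-1"
# Poll the detailed status of the running operation:
STATUS_URL="${SOLR_URL}/admin/collections?action=REINDEXCOLLECTION&name=${SRC}&cmd=status"
echo "$START_URL"
echo "$STATUS_URL"
```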
+
 When the reindexing process has completed the target collection is marked using
 `property.rx: "finished"`, and the source collection state is updated to become read-write.
 On any errors the command will delete any temporary and target collections and also reset the state of the source collection's read-only flag.
@@ -853,29 +1074,54 @@ Such incompatibilities may result from incompatible schema changes or after migr
 === COLSTATUS Parameters
 
 `collection`::
-Collection name (optional).
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Collection name.
 If not specified, status is reported for all collections.
 
 `coreInfo`::
-Optional boolean.
-If true then additional information will be provided about
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true` then additional information will be provided about the SolrCore of shard leaders.
 
 `segments`::
-Optional boolean.
-If true then segment information will be provided.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true` then segment information will be provided.
 
 `fieldInfo`::
-Optional boolean.
-If true then detailed Lucene field information will be provided
-and their corresponding Solr schema types.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true` then detailed Lucene field information and the corresponding Solr schema types will be provided.
 
 `sizeInfo`::
-Optional boolean.
-If true then additional information about the index files
-size and their RAM usage will be provided.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true` then additional information about the index file sizes and their RAM usage will be provided.
 
 ==== Index Size Analysis Tool
+
 The `COLSTATUS` command also provides a tool for analyzing and estimating the composition of raw index data.
 Please note that this tool should be used with care because it generates a significant IO load on all shard leaders of the analyzed collections.
 A sampling threshold and a sampling percent parameters can be adjusted to reduce this load to some degree.
@@ -888,35 +1134,55 @@ In the following sections whenever "size" is mentioned it means an estimated agg
 The following parameters are specific to this tool:
 
 `rawSize`::
-Optional boolean.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true` then run the raw index data analysis tool (other boolean options below imply this option if any of them are true).
 Command response will include sections that show estimated breakdown of data size per field and per data type.
 
 `rawSizeSummary`::
-Optional boolean.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true` then include also a more detailed breakdown of data size per field and per type.
 
 `rawSizeDetails`::
-Optional boolean.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true` then provide exhaustive details that include statistical distribution of items per field and per type as well as top 20 largest items per field.
 
 `rawSizeSamplingPercent`::
-Optional float.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `5.0`
+|===
++
 When the index is larger than a certain threshold (100k documents per shard) only a part of data is actually retrieved and analyzed in order to reduce the IO load, and then the final results are extrapolated.
-Values must be greater than 0 and less or equal to 100.0.
-Default value is `5.0`.
-Very small values (between 0.0 and 1.0) may introduce significant estimation errors.
++
+Values must be greater than `0` and less than or equal to `100.0`.
+Very small values (between `0.0` and `1.0`) may introduce significant estimation errors.
 Also, values that would result in less than 10 documents being sampled are rejected with an exception.
 
-Response for this command always contains two sections:
+The response for this command always contains two sections:
 
-* `fieldsBySize` is a map where field names are keys and values are estimated sizes of raw (uncompressed) data that belongs to the field.
+* `fieldsBySize`: a map where field names are keys and values are estimated sizes of raw (uncompressed) data that belongs to the field.
 The map is sorted by size so that it's easy to see which field occupies the most space.
 
-* `typesBySize` is a map where data types are the keys and values are estimates sizes of raw (uncompressed) data of particular type.
+* `typesBySize`: a map where data types are the keys and values are estimated sizes of raw (uncompressed) data of a particular type.
 This map is also sorted by size.
 
-Optional sections include:
+Optional sections added with the above parameters include:
 
 * `summary` section containing a breakdown of data sizes for each field by data type.
 
@@ -946,6 +1212,7 @@ This information may be omitted if a field has an `omitNorms` flag in the schema
 * `points` - represents aggregated size of point values.
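As a sketch of invoking the size analysis tool described above (host and collection name are assumptions), a V1 request enabling raw size analysis with the default sampling percent:

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Raw index size analysis of one collection, sampling 5% of data on large shards:
COLSTATUS_URL="${SOLR_URL}/admin/collections?action=COLSTATUS&collection=techproducts&rawSize=true&rawSizeSamplingPercent=5.0"
echo "$COLSTATUS_URL"
```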
 
 === COLSTATUS Response
+
 The response will include an overview of the collection status, the number of
 active or inactive shards and replicas, and additional index information
 of shard leaders.
@@ -1316,15 +1583,31 @@ See the `incremental` parameter below for more information.
 === BACKUP Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to be backed up.
-This parameter is required.
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 What to name the backup that is created.
 This is checked to make sure it doesn't already exist, and otherwise an error message is raised.
-This parameter is required.
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The location on a shared drive for the backup command to write to.
 This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
 +
@@ -1335,18 +1618,42 @@ Each backup location can only hold a backup for one collection, however the same
 Repeated backups of the same collection are done incrementally, so that files unchanged since the last backup are not duplicated in the backup repository.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 `repository`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of a repository to be used for the backup.
 If no repository is specified then the local filesystem repository will be used automatically.
 
 `maxNumBackupPoints`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The upper-bound on how many backups should be retained at the backup location.
 If the current number exceeds this bound, older backups will be deleted until only `maxNumBackupPoints` backups remain.
 This parameter has no effect if `incremental=false` is specified.
 
 `incremental`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 A boolean parameter allowing users to choose whether to create an incremental (`incremental=true`) or a "snapshot" (`incremental=false`) backup.
 If unspecified, backups are done incrementally by default.
 Incremental backups are preferred in all known circumstances and snapshot backups are deprecated, so this parameter should only be used after much consideration.
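Putting the parameters together, a sketch of an incremental V1 BACKUP request with retention (the host, collection, backup name, and location are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Incremental backup to a shared location, retaining at most 3 backup points:
BACKUP_URL="${SOLR_URL}/admin/collections?action=BACKUP&collection=techproducts&name=techproducts_backup&location=/mnt/backups&maxNumBackupPoints=3"
echo "$BACKUP_URL"
```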
@@ -1371,11 +1678,22 @@ Attempting to use them on a location holding an older backup will result in an e
 === LISTBACKUP Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the backups to list.
 The backup name usually corresponds to the collection-name, but isn't required to.
-This parameter is required.
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The repository location to list backups from.
 This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
 +
@@ -1383,10 +1701,22 @@ If the location path is on a mounted drive, the mount must be available on the n
 Since any node can take the overseer role at any time, a best practice to avoid possible backup failures is to ensure the mount point is available on all nodes of the cluster.
 
 `repository`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of a repository to be used for accessing backup information.
 If no repository is specified then the local filesystem repository will be used automatically.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 === LISTBACKUP Example
@@ -1509,25 +1839,60 @@ You can use the collection <<alias-management.adoc#createalias,CREATEALIAS>> com
 === RESTORE Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The collection where the indexes will be restored into.
-This parameter is required.
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the existing backup that you want to restore.
-This parameter is required.
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The location on a shared drive for the RESTORE command to read from.
 Alternately it can be set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
 
 `repository`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of a repository to be used for the backup.
 If no repository is specified then the local filesystem repository will be used automatically.
 
 `backupId`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The ID of a specific backup point to restore from.
 +
 Backup locations can hold multiple backups of the same collection.
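A sketch of restoring a specific backup point into a new collection (host, names, location, and the `backupId` value are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Restore backup point with ID 2 into a new collection:
RESTORE_URL="${SOLR_URL}/admin/collections?action=RESTORE&name=techproducts_backup&collection=techproducts_restored&location=/mnt/backups&backupId=2"
echo "$RESTORE_URL"
```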
@@ -1620,10 +1985,21 @@ curl -X POST http://localhost:8983/v2/collections/backups -H 'Content-Type: appl
 === DELETEBACKUP Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The backup name to delete backup files from.
-This parameter is required.
 
 `location`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The repository location to delete backups from.
 This parameter is required, unless a default location is defined on the repository configuration, or set as a <<cluster-node-management.adoc#clusterprop,cluster property>>.
 +
@@ -1631,24 +2007,54 @@ If the location path is on a mounted drive, the mount must be available on the n
 Since any node can take the overseer role at any time, a best practice to avoid possible backup failures is to ensure the mount point is available on all nodes of the cluster.
 
 `repository`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of a repository to be used for deleting backup files.
 If no repository is specified then the local filesystem repository will be used automatically.
 
 `backupId`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Explicitly specify a single backup-ID to delete.
 Only one of `backupId`, `maxNumBackupPoints`, and `purgeUnused` may be specified per DELETEBACKUP request.
 
 `maxNumBackupPoints`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specify how many backups should be retained, deleting all others.
 Only one of `backupId`, `maxNumBackupPoints`, and `purgeUnused` may be specified per DELETEBACKUP request.
 
 `purgeUnused`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Solr's incremental backup support can orphan files if the backups referencing them are deleted.
 The `purgeUnused` flag parameter triggers a scan to detect these orphaned files and delete them.
 Administrators doing repeated backups at the same location should plan on using this parameter sporadically to reclaim disk space.
 Only one of `backupId`, `maxNumBackupPoints`, and `purgeUnused` may be specified per DELETEBACKUP request.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be <<collections-api.adoc#asynchronous-calls,processed asynchronously>>.
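Since `backupId`, `maxNumBackupPoints`, and `purgeUnused` are mutually exclusive, trimming and purging take two separate requests. A sketch (host, names, and location are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Keep only the 3 newest backup points:
TRIM_URL="${SOLR_URL}/admin/collections?action=DELETEBACKUP&name=techproducts_backup&location=/mnt/backups&maxNumBackupPoints=3"
# A separate request purges orphaned files left behind by earlier deletions:
PURGE_URL="${SOLR_URL}/admin/collections?action=DELETEBACKUP&name=techproducts_backup&location=/mnt/backups&purgeUnused=true"
echo "$TRIM_URL"
echo "$PURGE_URL"
```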
 
 [[rebalanceleaders]]
@@ -1697,18 +2103,28 @@ Rebalancing will only attempt to reassign leadership to those replicas that have
 === REBALANCELEADERS Parameters
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the collection to rebalance `preferredLeaders` on.
-This parameter is required.
 
 `maxAtOnce`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `Integer.MAX_VALUE`
+|===
++
 The maximum number of reassignments to have queued up at once.
-Values \<=0 are use the default value Integer.MAX_VALUE.
+Values \<= `0` use the default value `Integer.MAX_VALUE`.
 +
 When this number is reached, the process waits for one or more leaders to be successfully assigned before adding more to the queue.
 
 `maxWaitSeconds`::
-Defaults to `60`.
-This is the timeout value when waiting for leaders to be reassigned.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `60` seconds
+|===
++
+The timeout when waiting for leaders to be reassigned.
 If `maxAtOnce` is less than the number of reassignments that will take place, this is the maximum interval for any _single_ wait for at least one reassignment to complete.
 +
 For example, if 10 reassignments are to take place and `maxAtOnce` is `1` and `maxWaitSeconds` is `60`, the upper bound on the time that the command may wait is 10 minutes.
diff --git a/solr/solr-ref-guide/src/collections-api.adoc b/solr/solr-ref-guide/src/collections-api.adoc
index a2404a5..a57a8c2 100644
--- a/solr/solr-ref-guide/src/collections-api.adoc
+++ b/solr/solr-ref-guide/src/collections-api.adoc
@@ -127,9 +127,14 @@ curl -X GET http://localhost:8983/api/cluster/command-status/1000
 === REQUESTSTATUS Parameters
 
 `requestid`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The user defined request ID for the request.
 This can be used to track the status of the submitted asynchronous task.
-This parameter is required.
 
 === Examples using REQUESTSTATUS
 
@@ -218,9 +223,21 @@ curl -X DELETE http://localhost:8983/api/cluster/command-status
 === DELETESTATUS Parameters
 
 `requestid`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The request ID of the asynchronous call whose stored response should be cleared.
 
 `flush`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Set to `true` to clear all stored completed and failed async request responses.
 This is required only with the V1 API.
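As a sketch of the two V1 cleanup forms (the host and request ID `reload-1` are assumptions):

```shell
SOLR_URL="http://localhost:8983/solr"  # assumed host/port
# Clear a single stored async response by its request ID:
ONE_URL="${SOLR_URL}/admin/collections?action=DELETESTATUS&requestid=reload-1"
# Or flush all stored completed/failed responses (V1 only):
ALL_URL="${SOLR_URL}/admin/collections?action=DELETESTATUS&flush=true"
echo "$ONE_URL"
echo "$ALL_URL"
```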
 
diff --git a/solr/solr-ref-guide/src/configsets-api.adoc b/solr/solr-ref-guide/src/configsets-api.adoc
index 8b85228..87421a4 100644
--- a/solr/solr-ref-guide/src/configsets-api.adoc
+++ b/solr/solr-ref-guide/src/configsets-api.adoc
@@ -106,20 +106,41 @@ If you use any of these parameters or features, you must have enabled security f
 The `upload` command takes the following parameters:
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The configset to be created when the upload is complete.
-This parameter is required.
 
 `overwrite`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: see description
+|===
++
 If set to `true`, Solr will overwrite an existing configset with the same name (if false, the request will fail).
 If `filePath` is provided, then this option specifies whether the specified file within the configset should be overwritten if it already exists.
 Default is `false` when using the v1 API, but `true` when using the v2 API.
 
 `cleanup`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 When overwriting an existing configset (`overwrite=true`), this parameter tells Solr to delete the files in ZooKeeper that existed in the old configset but not in the one being uploaded.
-Default is `false`.
 This parameter cannot be set to true when `filePath` is used.
 
 `filePath`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This parameter allows the uploading of a single, non-zipped file to the given path under the configset in ZooKeeper.
 This functionality respects the `overwrite` parameter, so a request will fail if the given file path already exists in the configset and overwrite is set to `false`.
 The `cleanup` parameter cannot be set to true when `filePath` is used.
@@ -220,14 +241,30 @@ If you have not yet uploaded any configsets, see the <<Upload a Configset>> comm
 The following parameters are supported when creating a configset.
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The configset to be created.
-This parameter is required.
 
 `baseConfigSet`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `_default`
+|===
++
 The name of the configset to copy as a base.
-This defaults to `_default`
 
 `configSetProp._property_=_value_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A configset property from the base configset to override in the copied configset.
 
 For example, you can create a configset named "myConfigset" based on a previously defined "predefinedTemplate" configset, overriding the `immutable` property to `false`.
@@ -297,8 +334,13 @@ The `delete` command removes a configset.
 It does not remove any collections that were created with the configset.
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The configset to be deleted.
-This parameter is required.
 
 To delete a configset named "myConfigSet":
 
diff --git a/solr/solr-ref-guide/src/configuring-solr-xml.adoc b/solr/solr-ref-guide/src/configuring-solr-xml.adoc
index 68eea5f..42bcfb1 100644
--- a/solr/solr-ref-guide/src/configuring-solr-xml.adoc
+++ b/solr/solr-ref-guide/src/configuring-solr-xml.adoc
@@ -68,6 +68,12 @@ There are no attributes that you can specify in the `<solr>` tag, which is the r
 The tables below list the child nodes of each XML element in `solr.xml`.
 
 `configSetService`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `configSetService`
+|===
++
 This attribute does not need to be set.
 +
 If used, this attribute should be set to the FQN (fully qualified name) of a class that inherits from `ConfigSetService`, and you must provide a constructor with one parameter whose type is `org.apache.solr.core.CoreContainer`.
@@ -76,6 +82,12 @@ For example, `<str name="configSetService">com.myorg.CustomConfigSetService</str
 If this attribute isn't set, Solr uses the default `configSetService`: `org.apache.solr.cloud.ZkConfigSetService` when running with ZooKeeper, or `org.apache.solr.core.FileSystemConfigSetService` when running without ZooKeeper.
 
 `adminHandler`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `org.apache.solr.handler.admin.CoreAdminHandler`
+|===
++
 This attribute does not need to be set.
 +
 If used, this attribute should be set to the FQN (Fully qualified name) of a class that inherits from CoreAdminHandler.
@@ -84,26 +96,68 @@ For example, `<str name="adminHandler">com.myorg.MyAdminHandler</str>` would con
 If this attribute isn't set, Solr uses the default admin handler, `org.apache.solr.handler.admin.CoreAdminHandler`.
 
 `collectionsHandler`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 As above, for custom CollectionsHandler implementations.
 
 `infoHandler`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 As above, for custom InfoHandler implementations.
 
 `coreLoadThreads`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the number of threads that will be assigned to load cores in parallel.
 
 `replayUpdatesThreads`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 Specifies the number of threads that will be assigned to replay updates in parallel.
 This pool is shared for all cores of the node.
 The default value is equal to the number of processors.
 
 `coreRootDirectory`::
-The root of the core discovery tree, defaults to `$SOLR_HOME` (by default, `server/solr`).
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `server/solr`
+|===
++
+The root of the core discovery tree, defaults to `$SOLR_HOME`.
 
 `managementPath`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Currently non-operational.
 
 `sharedLib`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the path to a common library directory that will be shared across all cores.
 Any JAR files in this directory will be added to the search path for Solr plugins.
 If the specified path is not absolute, it will be relative to `$SOLR_HOME`.
@@ -111,6 +165,12 @@ Custom handlers may be placed in this directory.
 Note that specifying `sharedLib` will not remove `$SOLR_HOME/lib` from Solr's class path.
 
 `allowPaths`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Solr will normally only access folders relative to `$SOLR_HOME`, `$SOLR_DATA_HOME` or `coreRootDir`.
 If you need to, for example, create a core outside of these paths, you can explicitly allow the path with `allowPaths`.
 It is a comma-separated string of file system paths to allow.
@@ -118,6 +178,12 @@ The special value of `*` will allow any path on the system.
 
 [#allow-urls]
 `allowUrls`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 Comma-separated list of Solr hosts to allow.
 +
 The HTTP/HTTPS protocol may be omitted, and only the host and port are checked, i.e., `10.0.0.1:8983/solr,10.0.0.1:8984/solr`.
@@ -130,19 +196,42 @@ The allow-list can also be configured with the `solr.allowUrls` system property
 If you need to disable this feature for backwards compatibility, you can set the system property `solr.disable.allowUrls=true`.
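++
+For illustration, using the hosts from the example above, the allow-list can be set with a `<str>` element in the `<solr>` section of `solr.xml`:
++
+[source,xml]
+----
+<!-- host values here are the illustrative ones from the example above -->
+<str name="allowUrls">10.0.0.1:8983/solr,10.0.0.1:8984/solr</str>
+----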
 
 `shareSchema`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This attribute, when set to `true`, ensures that multiple cores pointing to the same schema resource file refer to the same IndexSchema object.
 Sharing the IndexSchema Object makes loading the core faster.
 If you use this feature, make sure that no core-specific property is used in your Schema file.
 
 `transientCacheSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines how many cores with `transient=true` can be loaded before swapping the least recently used core for a new core.
 
 `configSetBaseDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `$SOLR_HOME/configsets`
+|===
++
 The directory under which configsets for Solr cores can be found.
-Defaults to `$SOLR_HOME/configsets`.
 
 [[global-maxbooleanclauses]]
 `maxBooleanClauses`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 Sets the maximum number of (nested) clauses allowed in any query.
 +
 This global limit provides a safety constraint on the total number of clauses allowed in any query against any collection -- regardless of whether those clauses were explicitly specified in a query string, or were the result of query expansion/re-writing from a more complex type of query based on the terms in the index.
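++
+As a sketch, this limit can be set in the `<solr>` section of `solr.xml`; the line below mirrors the default `solr.xml` shipped with recent Solr releases, where the limit falls back to the `solr.max.booleanClauses` system property:
++
+[source,xml]
+----
+<!-- uses the system property solr.max.booleanClauses, falling back to 1024 -->
+<int name="maxBooleanClauses">${solr.max.booleanClauses:1024}</int>
+----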
@@ -162,18 +251,48 @@ This element defines several parameters that relate to SolrCloud.
 This section is ignored unless the Solr instance is started with either `-DzkRun` or `-DzkHost`.
 
 `distribUpdateConnTimeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Used to set the underlying `connTimeout` for intra-cluster updates.
 
 `distribUpdateSoTimeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Used to set the underlying `socketTimeout` for intra-cluster updates.
 
 `host`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The hostname Solr uses to access cores.
 
 `hostContext`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The URL context path.
 
 `hostPort`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `${solr.port.advertise:0}`
+|===
++
 The port Solr uses to access cores, and advertise Solr node locations through liveNodes.
 This option is only necessary if a Solr instance is listening on a different port than it wants other nodes to contact it at.
 For example, if the Solr node is running behind a proxy or in a cloud environment that allows for port mapping, such as Kubernetes.
@@ -183,48 +302,113 @@ In the default `solr.xml` file, this is set to `${solr.port.advertise:0}`.
 If no port is passed via the `solr.xml` (i.e., `0`), then Solr will default to the port that jetty is listening on, defined by `${jetty.port}`.
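++
+For example, the default `solr.xml` expresses this as:
++
+[source,xml]
+----
+<!-- when solr.port.advertise is unset (0), Solr falls back to the port jetty listens on -->
+<int name="hostPort">${solr.port.advertise:0}</int>
+----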
 
 `leaderVoteWait`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 When SolrCloud is starting up, how long each Solr node will wait for all known replicas for that shard to be found before assuming that any nodes that haven't reported are down.
 
 `leaderConflictResolveWait`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `180000` milliseconds
+|===
++
 When trying to elect a leader for a shard, this property sets the maximum time a replica will wait to see conflicting state information to be resolved; temporary conflicts in state information can occur when doing rolling restarts, especially when the node hosting the Overseer is restarted.
 +
 Typically, the default value of `180000` (ms) is sufficient for conflicts to be resolved; you may need to increase this value if you have hundreds or thousands of small collections in SolrCloud.
 
 `zkClientTimeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A timeout for connection to a ZooKeeper server.
 It is used with SolrCloud.
 
 `zkHost`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 In SolrCloud mode, the URL of the ZooKeeper host that Solr should use for cluster state information.
 
 `genericCoreNodeNames`::
-If `TRUE`, node names are not based on the address of the node, but on a generic name that identifies the core.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+If `true`, node names are not based on the address of the node, but on a generic name that identifies the core.
 When a different machine takes over serving that core, things will be much easier to understand.
 
 `zkCredentialsProvider` & `zkACLProvider`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Optional parameters that can be specified if you are using <<zookeeper-access-control.adoc#,ZooKeeper Access Control>>.
 
-
 `distributedClusterStateUpdates`::
-If `TRUE`, the internal behavior of SolrCloud is changed to not use the Overseer for collections' `state.json` updates but do this directly against ZooKeeper.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+If `true`, the internal behavior of SolrCloud is changed to not use the Overseer for collections' `state.json` updates but do this directly against ZooKeeper.
 
 === The <logging> Element
 
 `class`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The class to use for logging.
 The corresponding JAR file must be available to Solr, perhaps through a `<lib>` directive in `solrconfig.xml`.
 
 `enabled`::
-true/false - whether to enable logging or not.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+Whether to enable logging or not.
 
 ==== The <logging><watcher> Element
 
 `size`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `50`
+|===
++
 The number of log events that are buffered.
 
 `threshold`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The logging level above which your particular logging implementation will record.
-For example when using log4j one might specify DEBUG, WARN, INFO, etc.
+For example when using Log4j one might specify DEBUG, WARN, INFO, etc.
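+
+Putting the `<logging>` and `<watcher>` settings above together, here is a sketch of this section of `solr.xml`; the element names follow the entries above, while the value types and the `WARN` threshold are illustrative assumptions:
+
+[source,xml]
+----
+<logging>
+  <!-- value types and the threshold below are illustrative assumptions -->
+  <str name="enabled">true</str>
+  <watcher>
+    <int name="size">50</int>
+    <str name="threshold">WARN</str>
+  </watcher>
+</logging>
+----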
 
 === The <shardHandlerFactory> Element
 
@@ -239,41 +423,97 @@ Since this is a custom shard handler, sub-elements are specific to the implement
 The default and only shard handler provided by Solr is the `HttpShardHandlerFactory` in which case, the following sub-elements can be specified:
 
 `socketTimeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The read timeout for intra-cluster query and administrative requests.
 The default is the same as the `distribUpdateSoTimeout` specified in the `<solrcloud>` section.
 
 `connTimeout`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The connection timeout for intra-cluster query and administrative requests.
 Defaults to the `distribUpdateConnTimeout` specified in the `<solrcloud>` section.
 
 `urlScheme`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The URL scheme to be used in distributed search.
 
 `maxConnectionsPerHost`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `100000`
+|===
++
 Maximum connections allowed per host.
-Defaults to `100000`.
 
 `corePoolSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 The initial core size of the threadpool servicing requests.
-Default is `0`.
 
 `maximumPoolSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum size of the threadpool servicing requests.
 Default is unlimited.
 
 `maxThreadIdleTime`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `5` seconds
+|===
++
 The amount of time in seconds that idle threads persist in the queue before being killed.
-Default is `5` seconds.
 
 `sizeOfQueue`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If specified, the threadpool will use a backing queue of this maximum size instead of direct handoff.
 Default is to use a SynchronousQueue (direct handoff).
 
 `fairnessPolicy`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 A boolean to configure whether the threadpool favors fairness over throughput.
-Default is false to favor throughput.
 
 `replicaRouting`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 A NamedList specifying replica routing preference configuration, used to select and configure replica routing preferences.
 `default=true` may be used to set the default base replica routing preference.
@@ -314,7 +554,8 @@ If a default value is not specified, then the property must be specified at runt
 
 Any JVM system properties usually specified using the `-D` flag when starting the JVM can be used as variables in the `solr.xml` file.
 
-For example, in the `solr.xml` file shown below, the `socketTimeout` and `connTimeout` values are each set to "60000". However, if you start Solr using `bin/solr -DsocketTimeout=1000`, the `socketTimeout` option of the `HttpShardHandlerFactory` to be overridden using a value of 1000ms, while the `connTimeout` option will continue to use the default property value of "60000".
+For example, in the `solr.xml` file shown below, the `socketTimeout` and `connTimeout` values are each set to "60000".
+However, if you start Solr using `bin/solr -DsocketTimeout=1000`, the `socketTimeout` option of the `HttpShardHandlerFactory` will be overridden with a value of 1000ms, while the `connTimeout` option will continue to use the default property value of "60000".
 
 [source,xml]
 ----
diff --git a/solr/solr-ref-guide/src/core-discovery.adoc b/solr/solr-ref-guide/src/core-discovery.adoc
index ed2107f..9a01dbb 100644
--- a/solr/solr-ref-guide/src/core-discovery.adoc
+++ b/solr/solr-ref-guide/src/core-discovery.adoc
@@ -89,43 +89,131 @@ Java properties files allow the hash (`#`) or bang (`!`) characters to specify c
 
 The following properties are available:
 
-`name`:: The name of the SolrCore.
+`name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The name of the SolrCore.
 You'll use this name to reference the SolrCore when running commands with the `CoreAdminHandler`.
 
-`config`:: The configuration file name for a given core.
-The default is `solrconfig.xml`.
-
-`schema`:: The schema file name for a given core.
+`config`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `solrconfig.xml`
+|===
++
+The configuration file name for a given core.
+
+`schema`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
+The schema file name for a given core.
 The default is `schema.xml` but please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use.
 See <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>> for more details.
 
-`dataDir`:: The core's data directory (where indexes are stored) as either an absolute pathname, or a path relative to the value of `instanceDir`.
-This is `data` by default.
-
-`configSet`:: The name of a defined configset, if desired, to use to configure the core (see the section <<config-sets.adoc#,Configsets>> for more details).
-
-`properties`:: The name of the properties file for this core.
+`dataDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `data`
+|===
++
+The core's data directory (where indexes are stored) as either an absolute pathname, or a path relative to the value of `instanceDir`.
+
+`configSet`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The name of a defined configset, if desired, to use to configure the core (see the section <<config-sets.adoc#,Configsets>> for more details).
+
+`properties`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The name of the properties file for this core.
 The value can be an absolute pathname or a path relative to the value of `instanceDir`.
 
-`transient`:: If `true`, the core can be unloaded if Solr reaches the `transientCacheSize`.
-The default is `false`.
+`transient`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+When `true`, the core can be unloaded if Solr reaches the `transientCacheSize`.
 Cores are unloaded in order of least recently used first.
 _Setting this to `true` is not recommended in SolrCloud mode._
 
-`loadOnStartup`:: If `true`, the default, the core will loaded when Solr starts.
+`loadOnStartup`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+When `true`, the default, the core will be loaded when Solr starts.
 _Setting this to `false` is not recommended in SolrCloud mode._
 
-`coreNodeName`:: Used only in SolrCloud, this is a unique identifier for the node hosting this replica.
+`coreNodeName`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
+Used only in SolrCloud, this is a unique identifier for the node hosting this replica.
 By default a `coreNodeName` is generated automatically, but setting this attribute explicitly allows you to manually assign a new core to replace an existing replica.
 For example, this can be useful when replacing a machine that has had a hardware failure by restoring from backups on a new machine with a new hostname or port.
 
-`ulogDir`:: The absolute or relative directory for the update log for this core (SolrCloud only).
-
-`shard`:: The shard to assign this core to (SolrCloud only).
-
-`collection`:: The name of the collection this core is part of (SolrCloud only).
-
-`roles`:: Future parameter for SolrCloud or a way for users to mark nodes for their own use.
+`ulogDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The absolute or relative directory for the update log for this core (SolrCloud only).
+
+`shard`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The shard to assign this core to (SolrCloud only).
+
+`collection`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The name of the collection this core is part of (SolrCloud only).
+
+`roles`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Future parameter for SolrCloud or a way for users to mark nodes for their own use.
 
 Additional user-defined properties may be specified for use as variables.
 For more information on how to define local properties, see the section <<property-substitution.adoc#,Property Substitution in `solrconfig.xml`>>.
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
index 5f34f15..0c266d8 100644
--- a/solr/solr-ref-guide/src/coreadmin-api.adoc
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -85,13 +85,24 @@ curl -X GET http://localhost:8983/api/cores?indexInfo=false
 === STATUS Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of a core, as listed in the "name" attribute of a `<core>` element in `solr.xml`.
 This parameter is required in v1, and part of the url in the v2 API.
 
 `indexInfo`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 If `false`, information about the index will not be returned with a core STATUS request.
 In Solr implementations with a large number of cores (i.e., more than hundreds), retrieving the index information for each core can take a lot of time and isn't always required.
-The default is `true`.
 
 [[coreadmin-create]]
 == CREATE
@@ -168,33 +179,74 @@ The `core.properties` file must NOT exist before calling the CoreAdmin API with
 === CREATE Core Parameters
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the new core.
 Same as `name` on the `<core>` element.
-This parameter is required.
 
 `instanceDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The directory where files for this core should be stored.
 Same as `instanceDir` on the `<core>` element.
 The default is the value specified for the `name` parameter if not supplied.
 This directory must be inside `SOLR_HOME`, `SOLR_DATA_HOME` or one of the paths specified by system property `solr.allowPaths`.
 
 `config`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `solrconfig.xml`
+|===
++
 Name of the config file (i.e., `solrconfig.xml`) relative to `instanceDir`.
 
 `schema`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 Name of the schema file to use for the core.
 Please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use.
 See <<schema-factory.adoc#,Schema Factory Definition in SolrConfig>> for details.
 
 `dataDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `data`
+|===
++
 Name of the data directory relative to `instanceDir`.
 If an absolute path is used, it must be inside `SOLR_HOME`, `SOLR_DATA_HOME` or one of the paths specified by system property `solr.allowPaths`.
 
 `configSet`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Name of the configset to use for this core.
 For more information, see the section <<config-sets.adoc#,Configsets>>.
 
 `collection`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The name of the collection to which this core belongs.
 The default is the name of the core.
 `collection._param_=_value_` causes a property of `_param_=_value_` to be set if a new collection is being created.
@@ -204,14 +256,32 @@ WARNING: While it's possible to create a core for a non-existent collection, thi
 Always create a collection using the <<collections-api.adoc#,Collections API>> before creating a core directly for it.
 
 `shard`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The shard ID this core represents.
 This should only be required in special circumstances; normally you want to be auto-assigned a shard ID.
 
 `property._name_=_value_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Sets the core property _name_ to _value_.
 See the section on defining <<core-discovery.adoc#defining-core-properties-files,core.properties file contents>>.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 Use `collection.configName=_configname_` to point to the config for a new collection.
@@ -269,6 +339,12 @@ Some configuration options, such as the `dataDir` location and `IndexWriter`-rel
 === RELOAD Core Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The name of the core, as listed in the "name" attribute of a `<core>` element in `solr.xml`.
 This parameter is required in v1, and part of the url in the v2 API.
 
@@ -282,15 +358,31 @@ The `RENAME` action changes the name of a Solr core.
 === RENAME Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the Solr core to be renamed.
-This parameter is required.
 
 `other`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The new name for the Solr core.
 If the persistent attribute of `<solr>` is `true`, the new name will be written to `solr.xml` as the `name` attribute of the `<core>` element.
-This parameter is required.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 
@@ -313,14 +405,30 @@ It is not supported and can result in the core being unusable.
 === SWAP Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of one of the cores to be swapped.
-This parameter is required.
 
 `other`::
-The name of one of the cores to be swapped.
-This parameter is required.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of the other core to be swapped.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 
@@ -344,22 +452,49 @@ Unloading all cores in a SolrCloud collection causes the removal of that collect
 === UNLOAD Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of a core to be removed.
-This parameter is required.
 
 `deleteIndex`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, will remove the index when unloading the core.
-The default is `false`.
 
 `deleteDataDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, removes the `data` directory and all sub-directories.
-The default is `false`.
 
 `deleteInstanceDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, removes everything related to the core, including the index directory, configuration files and other related files.
-The default is `false`.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 [[coreadmin-mergeindexes]]
@@ -390,16 +525,39 @@ This ID can then be used to check the status of the already submitted task using
 === MERGEINDEXES Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the target core/index.
-This parameter is required.
 
 `indexDir`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Multi-valued; the directories to be merged.
 
 `srcCore`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Multi-valued; the source cores to be merged.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 
@@ -415,30 +573,65 @@ The `SPLIT` action supports five parameters, which are described in the table be
 === SPLIT Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the core to be split.
-This parameter is required.
 
 `path`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Multi-valued, the directory path in which a piece of the index will be written.
 Either this parameter or `targetCore` must be specified.
 If this is specified, the `targetCore` parameter may not be used.
 
 `targetCore`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Multi-valued, the target Solr core to which a piece of the index will be merged.
 Either this parameter or `path` must be specified.
 If this is specified, the `path` parameter may not be used.
 
 `ranges`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A comma-separated list of hash ranges in hexadecimal format.
 If this parameter is used, `split.key` should not be.
 See the <<SPLIT Examples>> below for an example of how this parameter can be used.
 
 `split.key`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The key to be used for splitting the index.
 If this parameter is used, `ranges` should not be.
 See the <<SPLIT Examples>> below for an example of how this parameter can be used.
 
 `async`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Request ID to track this action which will be processed asynchronously.
 
 === SPLIT Examples
@@ -495,8 +688,13 @@ Request the status of an already submitted asynchronous CoreAdmin API call.
 The REQUESTSTATUS command has only one parameter.
 
 `requestid`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The user-defined request ID for the asynchronous request.
-This parameter is required.
 
 The call below will return the status of an already submitted asynchronous CoreAdmin call.
 
@@ -514,8 +712,13 @@ This should be considered an "expert" level command and should be used in situat
 === REQUESTRECOVERY Parameters
 
 `core`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the core to re-sync.
-This parameter is required.
 
 === REQUESTRECOVERY Examples
 
diff --git a/solr/solr-ref-guide/src/de-duplication.adoc b/solr/solr-ref-guide/src/de-duplication.adoc
index 9d24cff..8df6dc6 100644
--- a/solr/solr-ref-guide/src/de-duplication.adoc
+++ b/solr/solr-ref-guide/src/de-duplication.adoc
@@ -62,8 +62,13 @@ The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` a
 The `SignatureUpdateProcessorFactory` takes several properties:
 
 `signatureClass`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `org.apache.solr.update.processor.Lookup3Signature`
+|===
++
 A Signature implementation for generating a signature hash.
-The default is `org.apache.solr.update.processor.Lookup3Signature`.
 +
 The full classpath of the implementation must be specified.
 The available options are described above; the associated classpaths to use are:
@@ -73,20 +78,42 @@ The available options are described above, the associated classpaths to use are:
 * `org.apache.solr.update.processor.TextProfileSignature`
 
 `fields`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: all fields
+|===
++
 The fields to use to generate the signature hash, as a comma-separated list.
 By default, all fields on the document will be used.
 
 `signatureField`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `signatureField`
+|===
++
 The name of the field used to hold the fingerprint/signature.
 The field should be defined in your schema.
-The default is `signatureField`.
 
 `enabled`::
-Set to *false* to disable de-duplication processing.
-The default is *true*.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+Set to `false` to disable de-duplication processing.
 
 `overwriteDupes`::
-If *true*, the default, when a document exists that already matches this signature, it will be overwritten.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, when a document exists that already matches this signature, it will be overwritten.
 If you are using `overwriteDupes=true` the `signatureField` must be `indexed="true"` in your Schema.
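For illustration, the properties above can be combined in an update request processor chain in `solrconfig.xml`. This is a sketch only: the chain name and the `fields` values are hypothetical and should be adapted to your schema.

```xml
<!-- Hypothetical chain; "name,features,cat" are example fields from a sample schema. -->
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain must then be referenced from an update request handler to take effect.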
 
 .Using `SignatureUpdateProcessorFactory` in SolrCloud
diff --git a/solr/solr-ref-guide/src/document-transformers.adoc b/solr/solr-ref-guide/src/document-transformers.adoc
index c6d3a8e..c8c2fc9 100644
--- a/solr/solr-ref-guide/src/document-transformers.adoc
+++ b/solr/solr-ref-guide/src/document-transformers.adoc
@@ -140,27 +140,46 @@ q=book_title:Solr&fl=id,[child childFilter=doc_type:chapter limit=100]
 If the documents involved include a `\_nest_path_` field, then it is used to re-create the hierarchical structure of the descendent documents using the original pseudo-field names the documents were indexed with, otherwise the descendent documents are returned as a flat list of <<indexing-nested-documents#indexing-anonymous-children,anonymous children>>.
 
 `childFilter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: all children
+|===
++
 A query to filter which child documents should be included.
 This can be particularly useful when you have multiple levels of hierarchical documents.
-The default is all children.
 
 `limit`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `10`
+|===
++
 The maximum number of child documents to be returned per parent document.
-The default is `10`.
 
 `fl`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The field list which the transformer is to return.
-The default is the top level `fl`).
+The default is the top level `fl`.
 +
 A further limitation is that the fields specified here should be a subset of those specified by the top-level `fl` parameter.
 
 `parentFilter`::
-Serves the same purpose as the `of`/`which` params in `{!child}`/`{!parent}` query parsers: to
-identify the set of "all parents" for the purpose of identifying the beginning & end of each
-nested document block.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Serves the same purpose as the `of`/`which` params in `{!child}`/`{!parent}` query parsers: to identify the set of "all parents" for the purpose of identifying the beginning & end of each nested document block.
 This recently became fully optional and appears to be obsolete.
-It is likely to be removed in a future Solr release, so _if you find it has some use, let the
-project know!_
+It is likely to be removed in a future Solr release, so _if you find it has some use, let the project know!_
 
 [TIP]
 ====
@@ -175,16 +194,16 @@ When the "path" begins with a `/` character, it restricts matches to documents t
 Some Examples:
 
 * `childFilter="/skus/\*:*"`
-** Matches any documents that are descendents of the current document and have a "nested path" of `/skus` -- but not any children of those `skus`
+** Matches any documents that are descendants of the current document and have a "nested path" of `/skus`, but not any children of those `skus`.
 * `childFilter="/skus/color_s:RED"`
-** Matches any documents that are descendents of the current document; match `color_s:RED`; and have a "nested path" of `/skus` -- but not any children of those `skus`
+** Matches any documents that are descendants of the current document; match `color_s:RED`; and have a "nested path" of `/skus`, but not any children of those `skus`.
 * `childFilter="/skus/manuals/\*:*"`
-** Matches any documents that are descendents of the current document and have a "nested path" of `/skus/manuals` -- but not any children of those `manuals`
+** Matches any documents that are descendants of the current document and have a "nested path" of `/skus/manuals`, but not any children of those `manuals`.
 
 When paths do not start with a `/` they are treated as "path suffixes":
 
 * `childFilter="manuals/\*:*"`
-** Matches any documents that are descendents of the current document and have a "nested path" that ends with "manuals", regardless of how deeply nested they are -- but not any children of those `manuals`
+** Matches any documents that are descendants of the current document and have a "nested path" that ends with "manuals", regardless of how deeply nested they are, but not any children of those `manuals`.
 
 ====
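Putting the path syntax together with the transformer parameters described above, a request along these lines (reusing the `skus`/`color_s` field names from the examples, which assume a matching schema) would return each matching parent along with up to five of its RED skus:

```text
q=*:*&fl=id,[child childFilter="/skus/color_s:RED" limit=5]
```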
 
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index b2de1a2..d968d58 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -158,13 +158,31 @@ Note that if the `javax.net.ssl.\*` configurations are not set, they will fallba
 Solr requires three parameters to be configured in order to use the credential store file for keystore passwords.
 
 `solr.ssl.credential.provider.chain`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The credential provider chain.
 This should be set to `hadoop`.
 
 `SOLR_HADOOP_CREDENTIAL_PROVIDER_PATH`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The path to the credential store file.
 
 `HADOOP_CREDSTORE_PASSWORD`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The password to the credential store.
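For illustration only, one way to supply these settings is via environment variables in `solr.in.sh` before starting Solr. The credential store path and password shown here are placeholders, not recommended values:

```shell
# Hypothetical values; substitute your own credential store path and password.
SOLR_OPTS="$SOLR_OPTS -Dsolr.ssl.credential.provider.chain=hadoop"
SOLR_HADOOP_CREDENTIAL_PROVIDER_PATH="jceks://file/home/solr/solr-ssl.jceks"
HADOOP_CREDSTORE_PASSWORD="changeit"
```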
 
 [.dynamic-tabs]
diff --git a/solr/solr-ref-guide/src/enum-fields.adoc b/solr/solr-ref-guide/src/enum-fields.adoc
index 9e914b3..a5628d5 100644
--- a/solr/solr-ref-guide/src/enum-fields.adoc
+++ b/solr/solr-ref-guide/src/enum-fields.adoc
@@ -32,16 +32,31 @@ The EnumFieldType type definition is quite simple, as in this example defining f
 [source,xml]
 ----
 <fieldType name="priorityLevel" class="solr.EnumFieldType" docValues="true" enumsConfig="enumsConfig.xml" enumName="priority"/>
-<fieldType name="riskLevel"     class="solr.EnumFieldType" docValues="true" enumsConfig="enumsConfig.xml" enumName="risk"    />
+<fieldType name="riskLevel"     class="solr.EnumFieldType" docValues="true" enumsConfig="enumsConfig.xml" enumName="risk" />
 ----
 
 Besides the `name` and the `class`, which are common to all field types, this type also takes two additional parameters:
 
-`enumsConfig`:: the name of a configuration file that contains the `<enum/>` list of field values and their order that you wish to use with this field type.
+`enumsConfig`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of a configuration file that contains the `<enum/>` list of field values and their order that you wish to use with this field type.
 If a path to the file is not specified, the file should be in the `conf` directory for the collection.
-`enumName`:: the name of the specific enumeration in the `enumsConfig` file to use for this type.
 
-Note that `docValues="true"` must be specified either in the EnumFieldType fieldType or field specification.
+`enumName`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of the specific enumeration in the `enumsConfig` file to use for this type.
+
+Note that `docValues="true"` must be specified either in the field type or field definition.
 
 == Defining the EnumFieldType Configuration File
 
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index c75db25..c2ea402 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -28,12 +28,23 @@ See also <<json-facet-api.adoc#, JSON Facet API>> for an alternative approach to
 There are two general parameters for controlling faceting.
 
 `facet`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If set to `true`, this parameter enables facet counts in the query response.
 If set to `false`, or to a blank or missing value, this parameter disables faceting.
 None of the other parameters listed below will have any effect unless this parameter is set to `true`.
-The default value is blank (false).
 
 `facet.query`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This parameter allows you to specify an arbitrary query in the Lucene default syntax to generate a facet count.
 +
 By default, Solr's faceting feature automatically determines the unique terms for a field and returns a count for each of those terms.
 The Text field should have `indexed="true" docValues="false"` if used for search
 Unless otherwise specified, all of the parameters below can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.<parameter>`
 
 `facet.field`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The `facet.field` parameter identifies a field that should be treated as a facet.
 It iterates over each Term in the field and generates a facet count using that Term as the constraint.
 This parameter can be specified multiple times in a query to select multiple facet fields.
@@ -69,66 +86,111 @@ This parameter can be specified multiple times in a query to select multiple fac
 IMPORTANT: If you do not set this parameter to at least one field in the schema, none of the other parameters described in this section will have any effect.
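As a quick illustration using the `techproducts` example referenced later in this section, a minimal field-faceting request might look like the following (counting only, with no documents returned):

```text
http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.field=cat
```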
 
 `facet.prefix`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The `facet.prefix` parameter limits the terms on which to facet to those starting with the given string prefix.
 This does not limit the query in any way, only the facets that would be returned in response to the query.
 
 `facet.contains`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The `facet.contains` parameter limits the terms on which to facet to those containing the given substring.
 This does not limit the query in any way, only the facets that would be returned in response to the query.
 
 `facet.contains.ignoreCase`::
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If `facet.contains` is used, the `facet.contains.ignoreCase` parameter causes case to be ignored when matching the given substring against candidate facet terms.
 
 `facet.matches`::
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Use this parameter to return facet buckets only for the terms that match a regular expression.
 
 `facet.sort`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This parameter determines the ordering of the facet field constraints.
 +
 There are two options for this parameter.
 +
---
 `count`::: Sort the constraints by count (highest count first).
 `index`::: Return the constraints sorted in their index order (lexicographic by indexed term).
 For terms in the ASCII range, this will be alphabetically sorted.
---
 +
 The default is `count` if `facet.limit` is greater than 0, otherwise, the default is `index`.
 Note that the default logic changes when <<#limiting-facet-with-certain-terms>> is used.
 
 `facet.limit`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `100`
+|===
++
 This parameter specifies the maximum number of constraint counts (essentially, the number of facets for a field that are returned) that should be returned for the facet fields.
 A negative value means that Solr will return an unlimited number of constraint counts.
-+
-The default value is `100`.
 
 `facet.offset`::
-
-The `facet.offset` parameter indicates an offset into the list of constraints to allow paging.
 +
-The default value is `0`.
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+The `facet.offset` parameter indicates an offset into the list of constraints to allow paging.
 
 `facet.mincount`::
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 The `facet.mincount` parameter specifies the minimum counts required for a facet field to be included in the response.
 If a field's counts are below the minimum, the field's facet is not returned.
-+
-The default value is `0`.
 
 `facet.missing`::
-If set to `true`, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
 +
-The default value is `false`.
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If set to `true`, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
 
 `facet.method`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `fc`
+|===
++
 The `facet.method` parameter selects the type of algorithm or method Solr should use when faceting a field.
 +
 The following methods are available.
 +
---
 `enum`::: Enumerates all terms in a field, calculating the set intersection of documents that match the term with documents that match the query.
 +
 This method is recommended for faceting multi-valued fields that have only a few distinct values.
@@ -136,7 +198,7 @@ The average number of values per document does not matter.
 +
 For example, faceting on a field with U.S. States such as `Alabama, Alaska, ... Wyoming` would lead to fifty cached filters which would be used over and over again.
 The `filterCache` should be large enough to hold all the cached filters.
-
++
 `fc`::: Calculates facet counts by iterating over documents that match the query and summing the terms that appear in each document.
 +
 This is currently implemented using an `UnInvertedField` cache if the field either is multi-valued or is tokenized (according to `FieldType.isTokened()`).
@@ -145,15 +207,21 @@ Each document is looked up in the cache to see what terms/values it contains, an
 This method is excellent for situations where the number of indexed values for the field is high, but the number of values per document is low.
 For multi-valued fields, a hybrid approach is used that uses term filters from the `filterCache` for terms that match many documents.
 The letters `fc` stand for field cache.
-
++
 `fcs`::: Per-segment field faceting for single-valued string fields.
 Enable with `facet.method=fcs` and control the number of threads used with the `threads` local parameter.
 This parameter allows faceting to be faster in the presence of rapid index changes.
---
+
 +
 The default value is `fc` (except for fields using the `BoolField` field type and when `facet.exists=true` is requested) since it tends to use less memory and is faster when a field has many unique terms in the index.
 
 `facet.enum.cache.minDf`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 This parameter indicates the minimum document frequency (the number of documents matching a term) for which the filterCache should be used when determining the constraint count for that term.
 This is only used with the `facet.method=enum` method of faceting.
 +
@@ -164,23 +232,46 @@ Then, optimize the parameter setting as necessary.
 The default value is `0`, causing the filterCache to be used for all terms in the field.
 
 `facet.exists`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 To cap facet counts at 1, specify `facet.exists=true`.
 This parameter can be used with `facet.method=enum` or when it's omitted.
 It can be used only on non-trie fields (such as strings).
 It may speed up facet counting on large indices and/or high-cardinality facet values.
 
 `facet.excludeTerms`::
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If you want to remove terms from facet counts but keep them in the index, the `facet.excludeTerms` parameter allows you to do that.
 
 `facet.overrequest.count` and `facet.overrequest.ratio`::
-In some situations, the accuracy in selecting the "top" constraints returned for a facet in a distributed Solr query can be improved by "over requesting" the number of desired constraints (i.e., `facet.limit`) from each of the individual shards.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
+In some situations, the accuracy in selecting the "top" constraints returned for a facet in a distributed Solr query can be improved by "over-requesting" the number of desired constraints (i.e., `facet.limit`) from each of the individual shards.
 In these situations, each shard is by default asked for the top `10 + (1.5 * facet.limit)` constraints.
 +
-In some situations, depending on how your docs are partitioned across your shards and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does.
+Depending on how your docs are partitioned across your shards and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does.
 This can be achieved by setting the `facet.overrequest.count` (defaults to `10`) and `facet.overrequest.ratio` (defaults to `1.5`) parameters.
 
 `facet.threads`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This parameter causes the underlying fields used in faceting to be loaded in parallel, using the number of threads specified.
 Specify as `facet.threads=N` where `N` is the maximum number of threads used.
 +
@@ -193,83 +284,137 @@ You can use Range Faceting on any date field or any numeric field that supports
 This is particularly useful for stitching together a series of range queries (as facet by query) for things like prices.
 
 `facet.range`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The `facet.range` parameter defines the field for which Solr should create range facets.
 For example:
 +
-`facet.range=price&facet.range=age`
+[source,text]
+facet.range=price&facet.range=age
 +
-`facet.range=lastModified_dt`
+[source,text]
+facet.range=lastModified_dt
 
 `facet.range.start`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The `facet.range.start` parameter specifies the lower bound of the ranges.
 You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.start`.
 For example:
 +
-`f.price.facet.range.start=0.0&f.age.facet.range.start=10`
+[source,text]
+f.price.facet.range.start=0.0&f.age.facet.range.start=10
 +
-`f.lastModified_dt.facet.range.start=NOW/DAY-30DAYS`
+[source,text]
+f.lastModified_dt.facet.range.start=NOW/DAY-30DAYS
 
 `facet.range.end`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The `facet.range.end` specifies the upper bound of the ranges.
 You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.end`.
 For example:
 +
-`f.price.facet.range.end=1000.0&f.age.facet.range.start=99`
+[source,text]
+f.price.facet.range.end=1000.0&f.age.facet.range.end=99
 +
-`f.lastModified_dt.facet.range.end=NOW/DAY+30DAYS`
+[source,text]
+f.lastModified_dt.facet.range.end=NOW/DAY+30DAYS
 
 `facet.range.gap`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The span of each range expressed as a value to be added to the lower bound.
 For date fields, this should be expressed using the {solr-javadocs}/core/org/apache/solr/util/DateMathParser.html[`DateMathParser` syntax] (such as, `facet.range.gap=%2B1DAY ... '+1DAY'`).
++
 You can specify this parameter on a per-field basis with the syntax of `f.<fieldname>.facet.range.gap`.
 For example:
 +
-`f.price.facet.range.gap=100&f.age.facet.range.gap=10`
+[source,text]
+f.price.facet.range.gap=100&f.age.facet.range.gap=10
 +
-`f.lastModified_dt.facet.range.gap=+1DAY`
+[source,text]
+f.lastModified_dt.facet.range.gap=+1DAY
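Combining the three required range parameters above into a single request (reusing the `price` field, bounds, and gap from the earlier per-field examples) gives a sketch like:

```text
http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true
   &facet.range=price&f.price.facet.range.start=0.0
   &f.price.facet.range.end=1000.0&f.price.facet.range.gap=100
```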
 
 `facet.range.hardend`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 The `facet.range.hardend` parameter is a Boolean parameter that specifies how Solr should handle cases where the `facet.range.gap` does not divide evenly between `facet.range.start` and `facet.range.end`.
 +
 If `true`, the last range constraint will have the `facet.range.end` value as an upper bound.
-If `false`, the last range will have the smallest possible upper bound greater then `facet.range.end` such that the range is the exact width of the specified range gap.
-The default value for this parameter is false.
+If `false`, the last range will have the smallest possible upper bound greater than `facet.range.end` so the range is the exact width of the specified range gap.
 +
 This parameter can be specified on a per field basis with the syntax `f.<fieldname>.facet.range.hardend`.
 
 `facet.range.include`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 By default, the ranges used to compute range faceting between `facet.range.start` and `facet.range.end` are inclusive of their lower bounds and exclusive of the upper bounds.
 The "before" range defined with the `facet.range.other` parameter is exclusive and the "after" range is inclusive.
 This default, equivalent to "lower" below, will not result in double counting at the boundaries.
 You can use the `facet.range.include` parameter to modify this behavior using the following options:
-+
---
+
 * `lower`: All gap-based ranges include their lower bound.
 * `upper`: All gap-based ranges include their upper bound.
 * `edge`: The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one) even if the corresponding upper/lower option is not specified.
 * `outer`: The "before" and "after" ranges will be inclusive of their bounds, even if the first or last ranges already include those boundaries.
 * `all`: Includes all options: `lower`, `upper`, `edge`, and `outer`.
---
+
 +
 You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.include`, and you can specify it multiple times to indicate multiple choices.
-+
-NOTE: To ensure you avoid double-counting, do not choose both `lower` and `upper`, do not choose `outer`, and do not choose `all`.
+[NOTE]
+To ensure you avoid double-counting, do not choose both `lower` and `upper`, do not choose `outer`, and do not choose `all`.
 
 `facet.range.other`::
-The `facet.range.other` parameter specifies that in addition to the counts for each range constraint between `facet.range.start` and `facet.range.end`, counts should also be computed for these options:
 +
---
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The `facet.range.other` parameter specifies that in addition to the counts for each range constraint between `facet.range.start` and `facet.range.end`, counts should also be computed for these options:
+
 * `before`: All records with field values lower than the lower bound of the first range.
 * `after`: All records with field values greater than the upper bound of the last range.
 * `between`: All records with field values between the start and end bounds of all ranges.
 * `none`: Do not compute any counts.
 * `all`: Compute counts for before, between, and after.
---
+
 +
 This parameter can be specified on a per field basis with the syntax of `f.<fieldname>.facet.range.other`.
 In addition to the `all` option, this parameter can be specified multiple times to indicate multiple choices, but `none` will override all other options.
 
 `facet.range.method`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `filter`
+|===
++
 The `facet.range.method` parameter selects the type of algorithm or method Solr should use for range faceting.
 Both methods produce the same results, but performance may vary.
 +
@@ -281,10 +426,6 @@ dv::: This method iterates the documents that match the main query, and for each
 This method will make use of <<docvalues.adoc#,docValues>> (if enabled for the field) or fieldCache.
 The `dv` method is not supported for field type DateRangeField or when using <<result-grouping.adoc#,group.facets>>.
 --
-+
-The default value for this parameter is `filter`.
-
-
 
 .Date Ranges & Time Zones
 [NOTE]
@@ -309,22 +450,33 @@ Another way to look at it is that the query produces a Decision Tree, in that So
 If you were to constrain A by X, then the constraint counts for B would be S/P, T/Q, etc.". In other words, it tells you in advance what the "next" set of facet results would be for a field if you apply a constraint from the current facet results.
 
 `facet.pivot`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The `facet.pivot` parameter defines the fields to use for the pivot.
 Multiple `facet.pivot` values will create multiple "facet_pivot" sections in the response.
 Separate each list of fields with a comma.
 
 `facet.pivot.mincount`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
 The `facet.pivot.mincount` parameter defines the minimum number of documents that need to match in order for the facet to be included in results.
-The default is 1.
 +
 Using the "`bin/solr -e techproducts`" example, a query URL like this one will return the data below, with the pivot faceting results found in the section "facet_pivot":
-
++
 [source,text]
 ----
 http://localhost:8983/solr/techproducts/select?q=*:*&facet.pivot=cat,popularity,inStock
    &facet.pivot=popularity,cat&facet=true&facet.field=cat&facet.limit=5&rows=0&facet.pivot.mincount=2
 ----
-
++
 [source,json]
 ----
 {  "facet_counts":{
@@ -596,13 +748,24 @@ This method will use <<docvalues.adoc#,docValues>> if they are enabled for the f
 Use these parameters for interval faceting:
 
 `facet.interval`::
-
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 This parameter indicates the field where interval faceting must be applied.
 It can be used multiple times in the same request to indicate multiple fields.
 +
 `facet.interval=price&facet.interval=size`
 
 `facet.interval.set`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+This parameter is used to set the intervals for the field; it can be specified multiple times to indicate multiple intervals.
 This parameter is global, which means that it will be used for all fields indicated with `facet.interval` unless there is an override for a specific field.
 To override this parameter on a specific field you can use: `f.<fieldname>.facet.interval.set`, for example:
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index a7f5533..8becad8 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -82,53 +82,101 @@ The properties that can be specified for a given field type fall into three majo
 These are the general properties for fields:
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the fieldType.
 This value gets used in field definitions, in the "type" attribute.
 It is strongly recommended that names consist of alphanumeric or underscore characters only and not start with a digit.
 This is not currently strictly enforced.
 
 `class`::
-The class name that gets used to store and index the data for this type.
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The class name used to store and index the data for this type.
 Note that you may prefix included class names with "solr." and Solr will automatically figure out which packages to search for the class, so `solr.TextField` will work.
 +
 If you are using a third-party class, you will probably need to have a fully qualified class name.
 The fully qualified equivalent for `solr.TextField` is `org.apache.solr.schema.TextField`.
 
 `positionIncrementGap`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 For multivalued fields, specifies a distance between multiple values, which prevents spurious phrase matches.
 
-`autoGeneratePhraseQueries`:: For text fields.
+`autoGeneratePhraseQueries`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+For text fields.
 If `true`, Solr automatically generates phrase queries for adjacent terms.
 If `false`, terms must be enclosed in double-quotes to be treated as phrases.
 
 `synonymQueryStyle`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `as_same_term`
+|===
++
 Query used to combine scores of overlapping query terms (i.e., synonyms).
 Consider a search for "blue tee" with query-time synonyms `tshirt,tee`.
+
+* `as_same_term`: Blends terms, i.e., `SynonymQuery(tshirt,tee)` where each term will be treated as equally important.
+This option is appropriate when terms are true synonyms (e.g., "television, tv").
+* `pick_best`: Selects the most significant synonym when scoring `Dismax(tee,tshirt)`.
+Use this when synonyms are expanding to hyponyms `(q=jeans w/ jeans=>jeans,pants)` and you want exact matches to come before parent and sibling concepts.
+* `as_distinct_terms`: Biases scoring towards the most significant synonym `(pants OR slacks)`.
 +
-Use `as_same_term` (default) to blend terms, i.e., `SynonymQuery(tshirt,tee)` where each term will be treated as equally important.
-Use `pick_best` to select the most significant synonym when scoring `Dismax(tee,tshirt)`.
-Use `as_distinct_terms` to bias scoring towards the most significant synonym `(pants OR slacks)`.
-+
-`as_same_term` is appropriate when terms are true synonyms (television, tv).
-Use `pick_best` or `as_distinct_terms` when synonyms are expanding to hyponyms `(q=jeans w/ jeans=>jeans,pants)` and you want exact to come before parent and sibling concepts.
-See this http://opensourceconnections.com/blog/2017/11/21/solr-synonyms-mea-culpa/[blog article].
+This blog post http://opensourceconnections.com/blog/2017/11/21/solr-synonyms-mea-culpa/[Solr Synonyms and Taxonomies: Mea Culpa] discusses Solr's behavior with synonym expansion.
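As a sketch, the option is set as an attribute on the field type definition. The analyzer chain shown here is illustrative, not prescriptive, and assumes a `synonyms.txt` file in the collection's `conf` directory:

```xml
<!-- Illustrative field type; the analyzer chain is an assumption. -->
<fieldType name="text_syn" class="solr.TextField" synonymQueryStyle="pick_best">
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
  </analyzer>
</fieldType>
```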
 
 `enableGraphQueries`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 For text fields, applicable when querying with <<standard-query-parser.adoc#standard-query-parser-parameters,`sow=false`>> (which is the default for the `sow` parameter).
-Use `true`, the default, for field types with query analyzers including graph-aware filters, e.g., <<filters.adoc#synonym-graph-filter,Synonym Graph Filter>> and <<filters.adoc#word-delimiter-graph-filter,Word Delimiter Graph Filter>>.
+Use `true` for field types with query analyzers including graph-aware filters, e.g., <<filters.adoc#synonym-graph-filter,Synonym Graph Filter>> and <<filters.adoc#word-delimiter-graph-filter,Word Delimiter Graph Filter>>.
 +
 Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filters.adoc#shingle-filter,Shingle Filter>>.
 
 [[docvaluesformat]]
 `docValuesFormat`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines a custom `DocValuesFormat` to use for fields of this type.
 This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has been configured in `solrconfig.xml`.
 
 `postingsFormat`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines a custom `PostingsFormat` to use for fields of this type.
 This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has been configured in `solrconfig.xml`.
 
-
 [NOTE]
 ====
 Lucene index back-compatibility is only supported for the default codec.
@@ -142,27 +190,27 @@ These are properties that can be specified either on the field types, or on indi
 The default values for each property depend on the underlying `FieldType` class, which in turn may depend on the `version` attribute of the `<schema/>`.
 The table below includes the default value for most `FieldType` implementations provided by Solr, assuming a `schema.xml` that declares `version="1.6"`.
 
-// TODO: SOLR-10655 BEGIN: refactor this into a 'field-default-properties.include.adoc' file for reuse
-
+// tags this table for inclusion in another page
+// tag::field-params[]
 [%autowidth.stretch,options="header"]
 |===
-|Property |Description |Values |Implicit Default
-|indexed |If true, the value of the field can be used in queries to retrieve matching documents. |true or false |true
-|stored |If true, the actual value of the field can be retrieved by queries. |true or false |true
-|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |true or false |false
-|sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
-|multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
-|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up large in memory data structure to serve in place of <<docvalues.adoc#,DocValues>>.  *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
-|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, data, bool, and string.* Only full-text fields or fields need norms. |true or false |*
-|omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position that are issued on a field with this option will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
-|omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
-|termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
-|required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
-|useDocValuesAsStored |If the field has <<docvalues.adoc#,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
-|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
-|===
-
-// TODO: SOLR-10655 END
+|Property |Description |Implicit Default
+|`indexed` |If `true`, the value of the field can be used in queries to retrieve matching documents. |`true`
+|`stored` |If `true`, the actual value of the field can be retrieved by queries.  |`true`
+|`docValues` |If `true`, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |`false`
+|`sortMissingFirst`, `sortMissingLast` |Control the placement of documents when a sort field is not present. |`false`
+|`multiValued` |If `true`, indicates that a single document might contain multiple values for this field type. |`false`
+|`uninvertible` |If `true`, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up large in memory data structure to serve in place of <<docvalues.adoc#,DocValues>>. *Defaults to `true` for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.* |`true`
+|`omitNorms` |If `true`, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |*
+|`omitTermFreqAndPositions` |If `true`, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on positions and are issued on a field with this option will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |*
+|`omitPositions` |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |*
+|`termVectors`, `termPositions`, `termOffsets`, `termPayloads` |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset, and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |`false`
+|`required` |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |`false`
+|`useDocValuesAsStored` |If the field has <<docvalues.adoc#,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |`true`
+|`large` |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |`false`
+|===
+
+// end::field-params[]
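+
+As an illustrative sketch (the field and type names are hypothetical), several of these properties can be combined on a single definition:
+
+[source,xml]
+----
+<field name="summary" type="text_general"
+       indexed="true" stored="true" multiValued="false"
+       termVectors="true" termPositions="true" termOffsets="true"/>
+----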
 
 == Choosing Appropriate Numeric Types
 
diff --git a/solr/solr-ref-guide/src/fields.adoc b/solr/solr-ref-guide/src/fields.adoc
index 02582ce..44bc41c 100644
--- a/solr/solr-ref-guide/src/fields.adoc
+++ b/solr/solr-ref-guide/src/fields.adoc
@@ -33,18 +33,35 @@ The following example defines a field named `price` with a type named `float` an
 Field definitions can have the following properties:
 
 `name`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the field.
 Field names should consist of alphanumeric or underscore characters only and not start with a digit.
 This is not currently strictly enforced, but other field names will not have first class support from all components and back compatibility is not guaranteed.
 Names with both leading and trailing underscores (e.g., `\_version_`) are reserved.
-Every field must have a `name`.
 
 `type`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
 The name of the `fieldType` for this field.
 This will be found in the `name` attribute on the `fieldType` definition.
-Every field must have a `type`.
 
 `default`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A default value that will be added automatically to any document that does not have a value in this field when it is indexed.
 If this property is not specified, there is no default.
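+
+For example, with this hypothetical definition, any document indexed without a `popularity` value receives `0` automatically:
+
+[source,xml]
+----
+<field name="popularity" type="pint" indexed="true" stored="true" default="0"/>
+----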
 
@@ -54,24 +71,6 @@ Fields can have many of the same properties as field types.
 Properties from the table below which are specified on an individual field will override any explicit value for that property specified on the the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation.
 The table below is reproduced from <<field-type-definitions-and-properties.adoc#,Field Type Definitions and Properties>>, which has more details:
 
-// TODO: SOLR-10655 BEGIN: refactor this into a 'field-default-properties.include.adoc' file for reuse
-
-[%autowidth.stretch,options="header"]
-|===
-|Property |Description |Values |Implicit Default
-|indexed |If true, the value of the field can be used in queries to retrieve matching documents. |true or false |true
-|stored |If true, the actual value of the field can be retrieved by queries. |true or false |true
-|docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#,DocValues>> structure. |true or false |false
-|sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
-|multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
-|uninvertible|If true, indicates that an `indexed="true" docValues="false"` field can be "un-inverted" at query time to build up large in memory data structure to serve in place of <<docvalues.adoc#,DocValues>>.  *Defaults to true for historical reasons, but users are strongly encouraged to set this to `false` for stability and use `docValues="true"` as needed.*|true or false |true
-|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, data, bool, and string.* Only full-text fields or fields need norms. |true or false |*
-|omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position that are issued on a field with this option will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
-|omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
-|termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
-|required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
-|useDocValuesAsStored |If the field has `<<docvalues.adoc#,docValues>>` enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
-|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
-|===
-
-// TODO: SOLR-10655 END
+--
+include::field-type-definitions-and-properties.adoc[tag=field-params]
+--
diff --git a/solr/solr-ref-guide/src/filters.adoc b/solr/solr-ref-guide/src/filters.adoc
index 83a153e..1810976 100644
--- a/solr/solr-ref-guide/src/filters.adoc
+++ b/solr/solr-ref-guide/src/filters.adoc
@@ -155,7 +155,14 @@ This filter converts characters from the following Unicode blocks:
 
 *Arguments:*
 
-`preserveOriginal`:: (boolean, default false) If true, the original token is preserved: "thé" -> "the", "thé"
+`preserveOriginal`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, the original token is preserved: "thé" -> "the", "thé"
 
 *Example:*
 
@@ -205,17 +212,45 @@ Any index built using this filter with earlier versions of Solr will need to be
 
 *Arguments:*
 
-`nameType`:: Types of names.
-Valid values are GENERIC, ASHKENAZI, or SEPHARDIC.
-If not processing Ashkenazi or Sephardic names, use GENERIC.
+`nameType`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `GENERIC`
+|===
++
+Types of names.
+Valid values are `GENERIC`, `ASHKENAZI`, or `SEPHARDIC`.
+If not processing Ashkenazi or Sephardic names, use `GENERIC`.
 
-`ruleType`:: Types of rules to apply.
-Valid values are APPROX or EXACT.
+`ruleType`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `APPROX`
+|===
++
+Types of rules to apply.
+Valid values are `APPROX` or `EXACT`.
 
-`concat`:: Defines if multiple possible matches should be combined with a pipe ("|").
+`concat`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+Defines if multiple possible matches should be combined with a pipe (`|`).
 
-`languageSet`:: The language set to use.
-The value "auto" will allow the Filter to identify the language, or a comma-separated list can be supplied.
+`languageSet`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `auto`
+|===
++
+The language set to use.
+The value `auto` will allow the filter to identify the language, or a comma-separated list can be supplied.
 
 *Example:*
 
@@ -301,12 +336,32 @@ These filters can also be combined with <<#stop-filter,Stop Filter>> so searchin
 
 *Arguments:*
 
-`words`:: (a common word file in .txt format) Provide the name of a common word file, such as `stopwords.txt`.
+`words`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of a common word file in .txt format, such as `stopwords.txt`.
 
-`format`:: (optional) If the stopwords list has been formatted for Snowball, you can specify `format="snowball"` so Solr can read the stopwords file.
+`format`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+If the stopwords list has been formatted for Snowball, you can specify `format="snowball"` so Solr can read the stopwords file.
 
-`ignoreCase`:: (boolean) If true, the filter ignores the case of words when comparing them to the common word file.
-The default is false.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, the filter ignores the case of words when comparing them to the common word file.
 
 *Example:*
 
@@ -371,9 +426,16 @@ More information about how this works is available in the section on <<phonetic-
 
 *Arguments:*
 
-`inject`:: (true/false) If true (the default), then new phonetic tokens are added to the stream.
+`inject`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, then new phonetic tokens are added to the stream.
 Otherwise, tokens are replaced with the phonetic equivalent.
-Setting this to false will enable phonetic matching, but the exact spelling of the target word may not match.
+Setting this to `false` will enable phonetic matching, but the exact spelling of the target word may not match.
 
 *Example:*
 
@@ -412,15 +474,29 @@ For more information, see the <<phonetic-matching.adoc#,Phonetic Matching>> sect
 
 *Arguments:*
 
-`inject`:: (true/false) If true (the default), then new phonetic tokens are added to the stream.
+`inject`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, then new phonetic tokens are added to the stream.
 Otherwise, tokens are replaced with the phonetic equivalent.
-Setting this to false will enable phonetic matching, but the exact spelling of the target word may not match.
+Setting this to `false` will enable phonetic matching, but the exact spelling of the target word may not match.
 
-`maxCodeLength`:: (integer) The maximum length of the code to be generated.
+`maxCodeLength`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The maximum length of the code to be generated.
 
 *Example:*
 
-Default behavior for inject (true): keep the original token and add phonetic token(s) at the same position.
+Default behavior for inject (`true`): keep the original token and add phonetic token(s) at the same position.
 
 [.dynamic-tabs]
 --
@@ -485,8 +561,14 @@ This filter adds a numeric floating point boost value to tokens, splitting on a
 
 *Arguments:*
 
-`delimiter`:: The character used to separate the token and the boost.
-Defaults to '|'.
+`delimiter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `\|` (_pipe symbol_)
+|===
++
+The character used to separate the token and the boost.
 
 *Example:*
 
@@ -552,11 +634,30 @@ This filter generates edge n-gram tokens of sizes within the given range.
 
 *Arguments:*
 
-`minGramSize`:: (integer, default 1) The minimum gram size.
+`minGramSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+The minimum gram size.
 
-`maxGramSize`:: (integer, default 1) The maximum gram size.
+`maxGramSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+The maximum gram size.
 
-`preserveOriginal`:: (boolean, default false) If true keep the original term even if it is shorter than `minGramSize` or longer than `maxGramSize`.
+`preserveOriginal`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, keep the original term even if it is shorter than `minGramSize` or longer than `maxGramSize`.
 
 *Example:*
 
@@ -742,12 +843,24 @@ This can be useful for clustering/linking use cases.
 
 *Arguments:*
 
-`separator`:: The character used to separate tokens combined into the single output token.
-Defaults to " " (a space character).
+`separator`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _space character_
+|===
++
+The character used to separate tokens combined into the single output token.
 
-`maxOutputTokenSize`:: The maximum length of the summarized output token.
+`maxOutputTokenSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1024`
+|===
++
+The maximum length of the summarized output token.
 If exceeded, no output token is emitted.
-Defaults to 1024.
 
 *Example:*
 
@@ -807,16 +920,42 @@ On the other hand, for languages that have no stemmer but do have an extensive d
 
 *Arguments:*
 
-`dictionary`:: (required) The path of a dictionary file.
+`dictionary`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The path to a dictionary file.
 
-`affix`:: (required) The path of a rules file.
+`affix`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The path of a rules file.
 
-`ignoreCase`:: (boolean) controls whether matching is case sensitive or not.
-The default is false.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Controls whether matching is case sensitive or not.
 
-`strictAffixParsing`:: (boolean) controls whether the affix parsing is strict or not.
-If true, an error while reading an affix rule causes a ParseException, otherwise is ignored.
-The default is true.
+`strictAffixParsing`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+Controls whether the affix parsing is strict or not.
+If `true`, an error while reading an affix rule causes a ParseException; otherwise it is ignored.
 
 *Example:*
 
@@ -917,7 +1056,13 @@ See `solr/contrib/analysis-extras/README.md` for instructions on which jars you
 
 *Arguments:*
 
-`filter`:: (string, optional) A Unicode set filter that can be used to e.g., exclude a set of characters from being processed.
+`filter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+A Unicode set filter that can be used to, e.g., exclude a set of characters from being processed.
 See the http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet javadocs] for more information.
 
 *Example without a filter:*
@@ -975,17 +1120,35 @@ Using the ICU Normalizer 2 Filter is a better-performing substitution for the <<
 
 *Arguments:*
 
-`form`:: The name of the normalization form.
-Valid options are `nfc`, `nfd`, `nfkc`, `nfkd`, or `nfkc_cf` (the default).
-Required.
+`form`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: `nfkc_cf`
+|===
++
+The name of the normalization form.
+Valid options are `nfc`, `nfd`, `nfkc`, `nfkd`, or `nfkc_cf`.
 
-`mode`:: The mode of Unicode character composition and decomposition.
-Valid options are: `compose` (the default) or `decompose`.
-Required.
+`mode`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: `compose`
+|===
++
+The mode of Unicode character composition and decomposition.
+Valid options are: `compose` or `decompose`.
 
-`filter`:: A Unicode set filter that can be used to e.g., exclude a set of characters from being processed.
+`filter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+A Unicode set filter that can be used to, e.g., exclude a set of characters from being processed.
 See the http://icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet javadocs] for more information.
-Optional.
 
 *Example with NFKC_Casefold:*
 
@@ -1040,7 +1203,14 @@ Custom rule sets are not supported.
 
 *Arguments:*
 
-`id`:: (string) The identifier for the ICU System Transform you wish to apply with this filter.
+`id`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The identifier for the ICU System Transform you wish to apply with this filter.
 For a full list of ICU System Transforms, see http://demo.icu-project.org/icu-bin/translit?TEMPLATE_FILE=data/translit_rule_main.html.
 
 *Example:*
@@ -1086,16 +1256,26 @@ This filter can be useful for building specialized indices for a constrained set
 
 *Arguments:*
 
-`words`:: (required) Path of a text file containing the list of keep words, one per line.
-Blank lines and lines that begin with "#" are ignored.
+`words`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Path to a text file containing the list of keep words, one per line.
+Blank lines and lines that begin with `\#` are ignored.
 This may be an absolute path, or a simple filename in the Solr `conf` directory.
 
-`ignoreCase`:: (true/false) If *true* then comparisons are done case-insensitively.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, then comparisons are done case-insensitively.
 If this argument is true, then the words file is assumed to contain only lowercase words.
-The default is *false*.
-
-`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens.
-*This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
 
 *Example:*
 
@@ -1227,15 +1407,26 @@ All other tokens are discarded.
 
 *Arguments:*
 
-`min`:: (integer, required) Minimum token length.
+`min`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Minimum token length.
 Tokens shorter than this are discarded.
 
-`max`:: (integer, required, must be >= min) Maximum token length.
+`max`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Maximum token length.
+Must be greater than or equal to `min`.
 Tokens longer than this are discarded.
 
-`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens.
-*This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
-
 *Example:*
 
 [.dynamic-tabs]
@@ -1281,11 +1472,23 @@ If you are wrapping a `TokenStream` which requires that the full stream of token
 *Factory class:* `solr.LimitTokenCountFilterFactory`
 
 *Arguments:*
-
-`maxTokenCount`:: (integer, required) Maximum token count.
+
+`maxTokenCount`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Maximum token count.
 After this limit has been reached, tokens are discarded.
 
-`consumeAllTokens`:: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum token count has been reached.
+`consumeAllTokens`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Whether to consume (and discard) previous token filters' tokens after the maximum token count has been reached.
 See description above.
 
 *Example:*
@@ -1340,7 +1543,14 @@ If you are wrapping a `TokenStream` which requires that the full stream of token
-`maxStartOffset`:: (integer, required) Maximum token start character offset.
+`maxStartOffset`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Maximum token start character offset.
 After this limit has been reached, tokens are discarded.
 
-`consumeAllTokens`:: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum start offset has been reached.
+`consumeAllTokens`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Whether to consume (and discard) previous token filters' tokens after the maximum start offset has been reached.
 See description above.
 
 *Example:*
@@ -1391,10 +1601,24 @@ If you are wrapping a `TokenStream` which requires that the full stream of token
 
 *Arguments:*
 
-`maxTokenPosition`:: (integer, required) Maximum token position.
+`maxTokenPosition`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Maximum token position.
 After this limit has been reached, tokens are discarded.
 
-`consumeAllTokens`:: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum start offset has been reached.
+`consumeAllTokens`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Whether to consume (and discard) previous token filters' tokens after the maximum token position has been reached.
 See description above.
 
 *Example:*
@@ -1482,7 +1706,14 @@ This is specialized version of the <<Stop Filter,Stop Words Filter Factory>> tha
 
 *Arguments:*
 
-`managed`:: The name that should be used for this set of stop words in the managed REST API.
+`managed`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name that should be used for this set of stop words in the managed REST API.
 
 *Example:*
 //TODO: make this show an actual API call.
@@ -1541,7 +1772,14 @@ NOTE: Although this filter produces correct token graphs, it cannot consume an i
 
 *Arguments:*
 
-`managed`:: The name that should be used for this mapping on synonyms in the managed REST API.
+`managed`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name that should be used for this mapping on synonyms in the managed REST API.
 
 *Example:*
 //TODO: make this show an actual API call
@@ -1594,11 +1832,31 @@ Note that tokens are ordered by position and then by gram size.
 
 *Arguments:*
 
-`minGramSize`:: (integer, default 1) The minimum gram size.
+`minGramSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+The minimum gram size.
 
-`maxGramSize`:: (integer, default 2) The maximum gram size.
+`maxGramSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2`
+|===
++
+The maximum gram size.
 
-`preserveOriginal`:: (boolean, default false) If true keep the original term even if it is shorter than `minGramSize` or longer than `maxGramSize`.
+`preserveOriginal`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, keep the original term even if it is shorter than `minGramSize` or longer than `maxGramSize`.
 
 *Example:*
 
@@ -1699,9 +1957,23 @@ Refer to the Javadoc for the `org.apache.lucene.analysis.Token` class for more i
 
 *Arguments:*
 
-`payload`:: (required) A floating point value that will be added to all matching tokens.
+`payload`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+A floating point value that will be added to all matching tokens.
 
-`typeMatch`:: (required) A token type name string.
+`typeMatch`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+A token type name string.
 Tokens with a matching type name will have their payload set to the above floating point value.
 
 *Example:*
@@ -1747,13 +2019,34 @@ Tokens which do not match are passed though unchanged.
 
 *Arguments:*
 
-`pattern`:: (required) The regular expression to test against each token, as per `java.util.regex.Pattern`.
+`pattern`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The regular expression to test against each token, as per `java.util.regex.Pattern`.
 
-`replacement`:: (required) A string to substitute in place of the matched pattern.
+`replacement`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+A string to substitute in place of the matched pattern.
 This string may contain references to capture groups in the regex pattern.
 See the Javadoc for `java.util.regex.Matcher`.
 
-`replace`:: ("all" or "first", default "all") Indicates whether all occurrences of the pattern in the token should be replaced, or only the first.
+`replace`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `all`
+|===
++
+Indicates whether all occurrences of the pattern in the token should be replaced (`all`) or only the first (`first`).
 
 *Example:*
 
@@ -1838,14 +2131,42 @@ For more information, see the section on <<phonetic-matching.adoc#,Phonetic Matc
 
 *Arguments:*
 
-`encoder`:: (required) The name of the encoder to use.
-The encoder name must be one of the following (case insensitive): `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[DoubleMetaphone]`, `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/Metaphone.html[Metaphone]`, `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/ [...]
-
-`inject`:: (true/false) If true (the default), then new phonetic tokens are added to the stream.
+`encoder`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The name of the encoder to use.
+The encoder name must be one of the following (case insensitive):
+
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[DoubleMetaphone]`
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/Metaphone.html[Metaphone]`
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/Soundex.html[Soundex]`
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/RefinedSoundex.html[RefinedSoundex]`
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/Caverphone.html[Caverphone]` (v2.0)
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/ColognePhonetic.html[ColognePhonetic]`
+* `http://commons.apache.org/proper/commons-codec/archives/{ivy-commons-codec-version}/apidocs/org/apache/commons/codec/language/Nysiis.html[Nysiis]`
+
+`inject`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, new phonetic tokens are added to the stream.
 Otherwise, tokens are replaced with the phonetic equivalent.
-Setting this to false will enable phonetic matching, but the exact spelling of the target word may not match.
+Setting this to `false` will enable phonetic matching, but the exact spelling of the target word may not match.
 
-`maxCodeLength`:: (integer) The maximum length of the code to be generated by the Metaphone or Double Metaphone encoders.
+`maxCodeLength`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The maximum length of the code to be generated by the Metaphone or Double Metaphone encoders.
 
 *Example:*
 
@@ -1975,13 +2296,34 @@ This filter enables a form of conditional filtering: it only applies its wrapped
 
 *Arguments:*
 
-`protected`:: (required) Comma-separated list of files containing protected terms, one per line.
+`protected`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Comma-separated list of files containing protected terms, one per line.
 
-`wrappedFilters`:: (required) Case-insensitive comma-separated list of `TokenFilterFactory` SPI names (strip trailing `(Token)FilterFactory` from the factory name - see the https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html[java.util.ServiceLoader interface]).
+`wrappedFilters`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Case-insensitive comma-separated list of `TokenFilterFactory` SPI names (strip trailing `(Token)FilterFactory` from the factory name - see the https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html[`java.util.ServiceLoader` interface]).
 Each filter name must be unique, so if you need to specify the same filter more than once, you must add case-insensitive unique `-id` suffixes to each same-SPI-named filter (note that the `-id` suffix is stripped prior to SPI lookup).
 
-`ignoreCase`:: (true/false, default false) Ignore case when testing for protected words.
-If true, the protected list should contain lowercase words.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Ignore case when testing for protected words.
+If `true`, the protected list should contain lowercase words.
 
 *Example:*
 
@@ -2114,19 +2456,53 @@ Tokens without wildcards are not reversed.
 
 *Arguments:*
 
-`withOriginal`:: (boolean) If true, the filter produces both original and reversed tokens at the same positions.
-If false, produces only reversed tokens.
-
-`maxPosAsterisk`:: (integer, default = 2) The maximum position of the asterisk wildcard ('*') that triggers the reversal of the query term.
+`withOriginal`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, the filter produces both original and reversed tokens at the same positions.
+If `false`, produces only reversed tokens.
+
+`maxPosAsterisk`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2`
+|===
++
+The maximum position of the asterisk wildcard ('*') that triggers the reversal of the query term.
 Terms with asterisks at positions above this value are not reversed.
 
-`maxPosQuestion`:: (integer, default = 1) The maximum position of the question mark wildcard ('?') that triggers the reversal of query term.
+`maxPosQuestion`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+The maximum position of the question mark wildcard ('?') that triggers the reversal of query term.
 To reverse only pure suffix queries (queries with a single leading asterisk), set this to 0 and `maxPosAsterisk` to 1.
 
-`maxFractionAsterisk`:: (float, default = 0.0) An additional parameter that triggers the reversal if asterisk ('*') position is less than this fraction of the query token length.
+`maxFractionAsterisk`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0.0`
+|===
++
+An additional parameter that triggers the reversal if asterisk ('*') position is less than this fraction of the query token length.
 
-`minTrailing`:: (integer, default = 2) The minimum number of trailing characters in a query token after the last wildcard character.
-For good performance this should be set to a value larger than 1.
+`minTrailing`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2`
+|===
++
+The minimum number of trailing characters in a query token after the last wildcard character.
+For good performance this should be set to a value larger than `1`.
 
 *Example:*
 
@@ -2173,15 +2549,52 @@ It combines runs of tokens into a single token.
 
 *Arguments:*
 
-`minShingleSize`:: (integer, must be >= 2, default 2) The minimum number of tokens per shingle.
+`minShingleSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2`
+|===
++
+The minimum number of tokens per shingle.
+Must be greater than or equal to `2`.
 
-`maxShingleSize`:: (integer, must be >= `minShingleSize`, default 2) The maximum number of tokens per shingle.
+`maxShingleSize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2`
+|===
++
+The maximum number of tokens per shingle.
+Must be greater than or equal to `minShingleSize`.
 
-`outputUnigrams`:: (boolean, default true) If true, then each individual token is also included at its original position.
+`outputUnigrams`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, then each individual token is also included at its original position.
 
-`outputUnigramsIfNoShingles`:: (boolean, default false) If true, then individual tokens will be output if no shingles are possible.
+`outputUnigramsIfNoShingles`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, then individual tokens will be output if no shingles are possible.
 
-`tokenSeparator`:: (string, default is " ") The string to use when joining adjacent tokens to form a shingle.
+`tokenSeparator`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _space character_
+|===
++
+The string to use when joining adjacent tokens to form a shingle.
 
 *Example:*
 
@@ -2253,13 +2666,27 @@ For more information on Snowball, visit http://snowball.tartarus.org/.
 
 *Arguments:*
 
-`language`:: (default "English") The name of a language, used to select the appropriate Porter stemmer to use.
+`language`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `English`
+|===
++
+The name of a language, used to select the appropriate Porter stemmer to use.
 Case is significant.
 This string is used to select a package name in the `org.tartarus.snowball.ext` class hierarchy.
 
-`protected`:: Path of a text file containing a list of protected words, one per line.
+`protected`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Path to a text file containing a list of protected words, one per line.
 Protected words will not be stemmed.
-Blank lines and lines that begin with "#" are ignored.
+Blank lines and lines that begin with `\#` are ignored.
 This may be an absolute path, or a simple file name in the Solr `conf` directory.
 
 *Example:*
@@ -2343,17 +2770,35 @@ A standard stop words list is included in the Solr `conf` directory, named `stop
 
 *Arguments:*
 
-`words`:: (optional) The path to a file that contains a list of stop words, one per line.
-Blank lines and lines that begin with "#" are ignored.
+`words`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The path to a file that contains a list of stop words, one per line.
+Blank lines and lines that begin with `\#` are ignored.
 This may be an absolute path, or path relative to the Solr `conf` directory.
 
-`format`:: (optional) If the stopwords list has been formatted for Snowball, you can specify `format="snowball"` so Solr can read the stopwords file.
-
-`ignoreCase`:: (true/false, default false) Ignore case when testing for stop words.
-If true, the stop list should contain lowercase words.
+`format`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+If the stopwords list has been formatted for Snowball, you can specify `format="snowball"` so Solr can read the stopwords file.
 
-`enablePositionIncrements`:: if `luceneMatchVersion` is `4.4` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens.
-*This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+Ignore case when testing for stop words.
+If `true`, the stop list should contain lowercase words.
 
 *Example:*
 
@@ -2422,19 +2867,39 @@ When using one of the analyzing suggesters, you would normally use the ordinary
 
 *Arguments:*
 
-`words`:: (optional; default: {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/core/StopAnalyzer.html[`StopAnalyzer#ENGLISH_STOP_WORDS_SET`] ) The name of a stopwords file to parse.
+`words`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/core/StopAnalyzer.html[`StopAnalyzer#ENGLISH_STOP_WORDS_SET`]
+|===
++
+The name of a stopwords file to parse.
 
-`format`:: (optional; default: `wordset`) Defines how the words file will be parsed.
+`format`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `wordset`
+|===
++
+Defines how the words file will be parsed.
 If `words` is not specified, then `format` must not be specified.
-The valid values for the format option are:
+The valid values for the `format` parameter are:
 
-`wordset`:: This is the default format, which supports one word per line (including any intra-word whitespace) and allows whole line comments beginning with the `#` character.
+* `wordset`: Supports one word per line (including any intra-word whitespace) and allows whole line comments beginning with the `\#` character.
 Blank lines are ignored.
-
-`snowball`:: This format allows for multiple words specified on each line, and trailing comments may be specified using the vertical line (`|`).
+* `snowball`: Allows for multiple words specified on each line, and trailing comments may be specified using the vertical line (`|`).
 Blank lines are ignored.
 
-`ignoreCase`:: (optional; default: *false*) If *true*, matching is case-insensitive.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, matching is case-insensitive.
 
 *Example:*
 
@@ -2504,8 +2969,15 @@ NOTE: Although this filter produces correct token graphs, it cannot consume an i
 
 *Arguments:*
 
-`synonyms`:: (required) The path of a file that contains a list of synonyms, one per line.
-In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with "`#`" are ignored.
+`synonyms`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+The path to a file that contains a list of synonyms, one per line.
+In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with `\#` are ignored.
 This may be a comma-separated list of paths.
 See <<resource-loading.adoc#,Resource Loading>> for more information.
 +
@@ -2518,22 +2990,58 @@ If the token matches any of the words, then all the words in the list are substi
 If the token matches any word on the left, then the list on the right is substituted.
 The original token will not be included unless it is also in the list on the right.
 
-`ignoreCase`:: (optional; default: `false`) If `true`, synonyms will be matched case-insensitively.
+`ignoreCase`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, synonyms will be matched case-insensitively.
 
-`expand`:: (optional; default: `true`) If `true`, a synonym will be expanded to all equivalent synonyms.
+`expand`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, a synonym will be expanded to all equivalent synonyms.
 If `false`, all equivalent synonyms will be reduced to the first in the list.
 
-`format`:: (optional; default: `solr`) Controls how the synonyms will be parsed.
-The short names `solr` (for {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/SolrSynonymParser.html[`SolrSynonymParser)`] and `wordnet` (for {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/WordnetSynonymParser.html[`WordnetSynonymParser`] ) are supported, or you may alternatively supply the name of your own {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/SynonymMap.Builder.html[`SynonymMap.Builder`] subclass.
+`format`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `solr`
+|===
++
+Controls how the synonyms will be parsed.
+The short names `solr` (for {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/SolrSynonymParser.html[`SolrSynonymParser`]) and `wordnet` (for {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/WordnetSynonymParser.html[`WordnetSynonymParser`]) are supported.
+You may alternatively supply the name of your own {lucene-javadocs}/analysis/common/org/apache/lucene/analysis/synonym/SynonymMap.Builder.html[`SynonymMap.Builder`] subclass.
 
-`tokenizerFactory`:: (optional; default: `WhitespaceTokenizerFactory`) The name of the tokenizer factory to use when parsing the synonyms file.
+`tokenizerFactory`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `WhitespaceTokenizerFactory`
+|===
++
+The name of the tokenizer factory to use when parsing the synonyms file.
 Arguments with the name prefix `tokenizerFactory.*` will be supplied as init params to the specified tokenizer factory.
 +
 Any arguments not consumed by the synonym filter factory, including those without the `tokenizerFactory.*` prefix, will also be supplied as init params to the tokenizer factory.
 +
 If `tokenizerFactory` is specified, then `analyzer` may not be, and vice versa.
 
-`analyzer`:: (optional; default: `WhitespaceTokenizerFactory`) The name of the analyzer class to use when parsing the synonyms file.
+`analyzer`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `WhitespaceTokenizerFactory`
+|===
++
+The name of the analyzer class to use when parsing the synonyms file.
 If `analyzer` is specified, then `tokenizerFactory` may not be, and vice versa.
 
 For the following examples, assume a synonyms file named `mysynonyms.txt`:
@@ -2680,10 +3188,7 @@ Most tokenizers break tokens at whitespace, so this filter is most often used fo
 
 *Factory class:* `solr.TrimFilterFactory`
 
-*Arguments:*
-
-`updateOffsets`:: if `luceneMatchVersion` is `4.3` or earlier and `updateOffsets="true"`, trimmed tokens' start and end offsets will be updated to those of the first and last characters (plus one) remaining in the token.
-*This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
+*Arguments:* None
 
 *Example:*
 
@@ -2771,7 +3276,14 @@ This filter adds the token's type, as a token at the same position as the token,
 
 *Arguments:*
 
-`prefix`:: (optional) The prefix to prepend to the token's type.
+`prefix`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The prefix to prepend to the token's type.
 
 *Examples:*
 
@@ -2841,13 +3353,24 @@ This filter would allow you to pull out only e-mail addresses from text as token
 
 *Arguments:*
 
-`types`:: Defines the location of a file of types to filter.
-
-`useWhitelist`:: If *true*, the file defined in `types` should be used as include list.
-If *false*, or undefined, the file defined in `types` is used as a blacklist.
+`types`::
++
+[%autowidth,frame=none]
+|===
+s|Required |Default: none
+|===
++
+Defines the path to a file of types to filter.
 
-`enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens.
-*This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
+`useWhitelist`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `true`, the file defined in `types` is used as an include list.
+If `false`, or undefined, the file defined in `types` is used as an exclude list.
 
 *Example:*
 
@@ -2914,28 +3437,106 @@ The rules for determining delimiters are determined as follows:
 
 *Arguments:*
 
-`generateWordParts`:: (integer, default 1) If non-zero, splits words at delimiters.
+`generateWordParts`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+If non-zero, splits words at delimiters.
 For example: "CamelCase", "hot-spot" -> "Camel", "Case", "hot", "spot"
 
-`generateNumberParts`:: (integer, default 1) If non-zero, splits numeric strings at delimiters:"1947-32" -> *"1947", "32"
+`generateNumberParts`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+If non-zero, splits numeric strings at delimiters: "1947-32" -> "1947", "32"
 
-`splitOnCaseChange`:: (integer, default 1) If 0, words are not split on camel-case changes:"BugBlaster-XL" -> "BugBlaster", "XL". Example 1 below illustrates the default (non-zero) splitting behavior.
+`splitOnCaseChange`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+If `0`, words are not split on camel-case changes: "BugBlaster-XL" -> "BugBlaster", "XL".
+Example 1 below illustrates the default (non-zero) splitting behavior.
 
-`splitOnNumerics`:: (integer, default 1) If 0, don't split words on transitions from alpha to numeric:"FemBot3000" -> "Fem", "Bot3000"
+`splitOnNumerics`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+If `0`, don't split words on transitions from alpha to numeric: "FemBot3000" -> "Fem", "Bot3000"
 
-`catenateWords`:: (integer, default 0) If non-zero, maximal runs of word parts will be joined: "hot-spot-sensor's" -> "hotspotsensor"
+`catenateWords`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+If non-zero, maximal runs of word parts will be joined: "hot-spot-sensor's" -> "hotspotsensor"
 
-`catenateNumbers`:: (integer, default 0) If non-zero, maximal runs of number parts will be joined: 1947-32" -> "194732"
+`catenateNumbers`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+If non-zero, maximal runs of number parts will be joined: "1947-32" -> "194732"
 
-`catenateAll`:: (0/1, default 0) If non-zero, runs of word and number parts will be joined: "Zap-Master-9000" -> "ZapMaster9000"
+`catenateAll`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+If non-zero, runs of word and number parts will be joined: "Zap-Master-9000" -> "ZapMaster9000"
 
-`preserveOriginal`:: (integer, default 0) If non-zero, the original token is preserved: "Zap-Master-9000" -> "Zap-Master-9000", "Zap", "Master", "9000"
+`preserveOriginal`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
+If non-zero, the original token is preserved: "Zap-Master-9000" -> "Zap-Master-9000", "Zap", "Master", "9000"
 
-`protected`:: (optional) The pathname of a file that contains a list of protected words that should be passed through without splitting.
+`protected`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The path to a file that contains a list of protected words that should be passed through without splitting.
 
-`stemEnglishPossessive`:: (integer, default 1) If 1, strips the possessive `'s` from each subword.
+`stemEnglishPossessive`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
+If `1`, strips the possessive `'s` from each subword.
 
-`types`:: (optional) The pathname of a file that contains *character \=> type* mappings, which enable customization of this filter's splitting behavior.
+`types`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The path to a file that contains *character \=> type* mappings, which enable customization of this filter's splitting behavior.
 Recognized character types: `LOWER`, `UPPER`, `ALPHA`, `DIGIT`, `ALPHANUM`, and `SUBWORD_DELIM`.
 +
 The default for any character without a customized mapping is computed from Unicode character properties.
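+
+As a hypothetical illustration, a `types` file (say, `wdftypes.txt`) could contain mappings such as:
+
+[source,text]
+----
+$ => DIGIT
+% => ALPHA
+----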
diff --git a/solr/solr-ref-guide/src/highlighting.adoc b/solr/solr-ref-guide/src/highlighting.adoc
index 950a56f..6cacd5d 100644
--- a/solr/solr-ref-guide/src/highlighting.adoc
+++ b/solr/solr-ref-guide/src/highlighting.adoc
@@ -30,21 +30,37 @@ Nonetheless, highlighting is very simple to use.
 === Common Highlighter Parameters
 You only need to set the `hl` and often `hl.fl` parameters to get results.
 The following table documents these and some other supported parameters.
-Note that many highlighting parameters support per-field overrides, such as: `f._title_txt_.hl.snippets`
+Note that many highlighting parameters support per-field overrides, such as: `f._title_txt_.hl.snippets`.
 
 `hl`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Use this parameter to enable or disable highlighting.
-The default is `false`.
 If you want to use highlighting, you must set this to `true`.
 
 `hl.method`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `original`
+|===
++
 The highlighting implementation to use.
 Acceptable values are: `unified`, `original`, `fastVector`.
-The default is `original`.
 +
 See the <<Choosing a Highlighter>> section below for more details on the differences between the available highlighters.
 
 `hl.fl`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: value of `df`
+|===
++
 Specifies a list of fields to highlight, either comma- or space-delimited.
  These must be "stored".
 A wildcard of `\*` (asterisk) can be used to match field globs, such as `text_*` or even `\*` to highlight on all fields where highlighting is possible.
@@ -52,12 +68,21 @@ When using `*`, consider adding `hl.requireFieldMatch=true`.
 +
 Note that the field(s) listed here ought to have compatible text-analysis (defined in the schema) with field(s) referenced in the query to be highlighted.
 It may be necessary to modify `hl.q` and `hl.qparser` and/or modify the text analysis.
++
 The following example uses the <<local-params.adoc#,local params>> syntax and <<edismax-query-parser.adoc#,the eDisMax parser>> to highlight fields in `hl.fl`:
-`&hl.fl=field1 field2&hl.q={!edismax qf=$hl.fl v=$q}&hl.qparser=lucene&hl.requireFieldMatch=true` (along with other applicable parameters, of course).
++
+[source,text]
+&hl.fl=field1 field2&hl.q={!edismax qf=$hl.fl v=$q}&hl.qparser=lucene&hl.requireFieldMatch=true
 +
 The default is the value of the `df` parameter which in turn has no default.
 
 `hl.q`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: value of `q`
+|===
++
 A query to use for highlighting.
 This parameter allows you to highlight different terms or fields than those being used to search for documents.
 When setting this, you might also need to set `hl.qparser`.
@@ -65,59 +90,111 @@ When setting this, you might also need to set `hl.qparser`.
 The default is the value of the `q` parameter (already parsed).
 
 `hl.qparser`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 The <<query-syntax-and-parsers.adoc#,query parser>> to use for the `hl.q` query.
 It only applies when `hl.q` is set.
 +
 The default is the value of the `defType` parameter which in turn defaults to `lucene`.
 
 `hl.requireFieldMatch`::
-By default, `false`, all query terms will be highlighted for each field to be highlighted (`hl.fl`) no matter what fields the parsed query refer to.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
+If `false`, all query terms will be highlighted for each field to be highlighted (`hl.fl`) no matter which fields the parsed query refers to.
 If set to `true`, only query terms aligning with the field being highlighted will in turn be highlighted.
 +
 If the query references fields different from the field being highlighted and they have different text analysis, the query may not highlight query terms it should have and vice versa.
 The analysis used is that of the field being highlighted (`hl.fl`), not the query fields.
 
 `hl.usePhraseHighlighter`::
-If set to `true`, the default, Solr will highlight phrase queries (and other advanced position-sensitive queries) accurately – as phrases.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If set to `true`, Solr will highlight phrase queries (and other advanced position-sensitive queries) accurately as phrases.
 If `false`, the parts of the phrase will be highlighted everywhere instead of only when it forms the given phrase.
 
 `hl.highlightMultiTerm`::
-If set to `true`, the default, Solr will highlight wildcard queries (and other `MultiTermQuery` subclasses).
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If set to `true`, Solr will highlight wildcard queries (and other `MultiTermQuery` subclasses).
 If `false`, they won't be highlighted at all.
 
 `hl.snippets`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1`
+|===
++
 Specifies maximum number of highlighted snippets to generate per field.
 It is possible for any number of snippets from zero to this value to be generated.
-The default is `1`.
 
 `hl.fragsize`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `100`
+|===
++
 Specifies the approximate size, in characters, of fragments to consider for highlighting.
-The default is `100`.
 Using `0` indicates that no fragmenting should be considered and the whole field value should be used.
 
 `hl.tag.pre`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `<em>`
+|===
++
 (`hl.simple.pre` for the Original Highlighter) Specifies the “tag” to use before a highlighted term.
 This can be any string, but is most often an HTML or XML tag.
-+
-The default is `<em>`.
 
 `hl.tag.post`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `</em>`
+|===
++
 (`hl.simple.post` for the Original Highlighter) Specifies the “tag” to use after a highlighted term.
 This can be any string, but is most often an HTML or XML tag.
-+
-The default is `</em>`.
 
 `hl.encoder`::
-If blank, the default, then the stored text will be returned without any escaping/encoding performed by the highlighter.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _empty_
+|===
++
+If blank, then the stored text will be returned without any escaping/encoding performed by the highlighter.
 If set to `html` then special HTML/XML characters will be encoded (e.g., `&` becomes `\&amp;`).
-The pre/post snippet characters are never encoded.
+The pre- and post-snippet characters are never encoded.
 
 `hl.maxAnalyzedChars`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `51200`
+|===
++
 The character limit to look for highlights, after which no highlighting will be done.
 This is mostly only a performance concern for an _analysis_ based offset source since it's the slowest.
 See <<Schema Options and Performance Considerations>>.
-+
-The default is `51200` characters.
 
 There are more parameters supported as well depending on the highlighter (via `hl.method`) chosen.
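+
+As an illustration (assuming the `techproducts` example collection with a stored `name` field), a request combining several of these parameters might look like:
+
+[source,text]
+----
+http://localhost:8983/solr/techproducts/select?q=ipod&hl=true&hl.fl=name&hl.snippets=2&hl.fragsize=70
+----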
 
@@ -178,8 +255,7 @@ You should use the `hl.method` parameter to choose a highlighter but it's also p
 
 There are four highlighters available that can be chosen at runtime with the `hl.method` parameter, in order of general recommendation:
 
-
-<<The Unified Highlighter,Unified Highlighter>>:: (`hl.method=unified`)
+<<Unified Highlighter>>:: (`hl.method=unified`)
 +
 The Unified Highlighter is the newest highlighter (as of Solr 6.4), which stands out as the most performant and accurate of the options.
 It can handle typical requirements and others possibly via plugins/extension.
@@ -198,7 +274,7 @@ Passage scoring does not consider boosts in the query.
 Some users want more/better passage breaking flexibility.
 The "alternate" fallback options are more primitive.
 
-<<The Original Highlighter,Original Highlighter>>:: (`hl.method=original`, the default)
+<<Original Highlighter>>:: (`hl.method=original`, the default)
 +
 The Original Highlighter, sometimes called the "Standard Highlighter" or "Default Highlighter", is Lucene's original highlighter – a venerable option with a high degree of customization options.
 Its query accuracy is good enough for most needs, although it's not quite as good/perfect as the Unified Highlighter.
@@ -211,7 +287,7 @@ Where this highlighter falls short is performance; it's often twice as slow as t
 And despite being the most customizable, it doesn't have a BreakIterator based fragmenter (all the others do), which could pose a challenge for some languages.
 
 
-<<The FastVector Highlighter,FastVector Highlighter>>:: (`hl.method=fastVector`)
+<<FastVector Highlighter>>:: (`hl.method=fastVector`)
 +
 The FastVector Highlighter _requires_ full term vector options (`termVectors`, `termPositions`, and `termOffsets`) on the field, and is optimized with that in mind.
 It is nearly as configurable as the Original Highlighter with some variability.
@@ -264,67 +340,140 @@ This adds substantial weight to the index – similar in size to the compressed
 If you are using the Unified Highlighter then this is not a recommended configuration since it's slower and heavier than postings with light term vectors.
 However, this could make sense if full term vectors are already needed for another use-case.
 
-== The Unified Highlighter
+== Unified Highlighter
 
 The Unified Highlighter supports these following additional parameters to the ones listed earlier:
 
 `hl.offsetSource`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 By default, the Unified Highlighter will usually pick the right offset source (see above).
 However it may be ambiguous such as during a migration from one offset source to another that hasn't completed.
 +
 The offset source can be explicitly configured to one of: `ANALYSIS`, `POSTINGS`, `POSTINGS_WITH_TERM_VECTORS`, or `TERM_VECTORS`.
 
 `hl.fragAlignRatio`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0.5`
+|===
++
 This parameter influences where the first match (i.e., highlighted text) in a passage is positioned.
++
 The default value of `0.5` means to align the match to the middle.
 A value of `0.0` means to align the match to the left, while `1.0` to align it to the right.
 This setting is a best-effort hint, as there are a variety of factors.
 When there's lots of text to be highlighted, lowering this number can help performance a lot.
 
 `hl.fragsizeIsMinimum`::
-When `true` (the default), the `hl.fragsize` parameter is treated as a (soft) minimum fragment size;
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+When `true`, the `hl.fragsize` parameter is treated as a (soft) minimum fragment size;
 provided there is enough text, the fragment is at least this size.
 When `false`, it's an optimal target -- the highlighter will _on average_ produce highlights of this length.
 A `false` setting is slower, particularly when there's lots of text and `hl.bs.type=SENTENCE`.
 
 `hl.tag.ellipsis`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 By default, each snippet is returned as a separate value (as is done with the other highlighters).
 Set this parameter to instead return one string with this text as the delimiter.
 _Note: this is likely to be removed in the future._
 
 `hl.defaultSummary`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, use the leading portion of the text as a snippet if a proper highlighted snippet can't otherwise be generated.
-The default is `false`.
 
 `hl.score.k1`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `1.2`
+|===
++
 Specifies BM25 term frequency normalization parameter 'k1'. For example, it can be set to `0` to rank passages solely based on the number of query terms that match.
-The default is `1.2`.
 
 `hl.score.b`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0.75`
+|===
++
 Specifies BM25 length normalization parameter 'b'. For example, it can be set to `0` to ignore the length of passages entirely when ranking.
-The default is `0.75`.
 
 `hl.score.pivot`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `87`
+|===
++
 Specifies BM25 average passage length in characters.
-The default is `87`.
 
 `hl.bs.language`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the breakiterator language for dividing the document into passages.
 
 `hl.bs.country`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the breakiterator country for dividing the document into passages.
 
 `hl.bs.variant`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the breakiterator variant for dividing the document into passages.
 
 `hl.bs.type`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `SENTENCE`
+|===
++
 Specifies the breakiterator type for dividing the document into passages.
 Can be `SEPARATOR`, `SENTENCE`, `WORD`*, `CHARACTER`, `LINE`, or `WHOLE`.
 `SEPARATOR` is a special value that splits text on a user-provided character in `hl.bs.separator`.
-+
-The default is `SENTENCE`.
 
 `hl.bs.separator`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Indicates which character to break the text on.
 Use only if you have defined `hl.bs.type=SEPARATOR`.
 +
@@ -332,104 +481,185 @@ This is useful when the text has already been manipulated in advance to have a s
 This character will still appear in the text as the last character of a passage.
 
 `hl.weightMatches`::
-Tells the UH to use Lucene's new "Weight Matches" API instead of doing SpanQuery conversion.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+Tells the UH to use Lucene's "Weight Matches" API instead of doing `SpanQuery` conversion.
 This is the most accurate highlighting mode reflecting the query.
 Furthermore, phrases will be highlighted as a whole instead of word by word.
 +
-The default is `true`.
-However if either `hl.usePhraseHighlighter` or `hl.multiTermQuery` are set to false, then this setting is effectively false no matter what you set it to.
+If either `hl.usePhraseHighlighter` or `hl.multiTermQuery` is set to `false`, then this setting is effectively `false` no matter what you set it to.
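+
+As a hedged illustration, a request enabling the Unified Highlighter with these parameters might look like the following (the collection name `techproducts` and the field `features` are assumptions, not requirements):
+
+[source,text]
+----
+http://localhost:8983/solr/techproducts/select?q=features:lucene&hl=true&hl.method=unified&hl.fl=features&hl.weightMatches=true
+----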
 
-== The Original Highlighter
+== Original Highlighter
 
 The Original Highlighter supports the following parameters in addition to the ones listed earlier:
 
 `hl.mergeContiguous`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Instructs Solr to collapse contiguous fragments into a single fragment.
 A value of `true` indicates contiguous fragments will be collapsed into a single fragment.
-The default value, `false`, is also the backward-compatible setting.
 
 `hl.maxMultiValuedToExamine`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `Integer.MAX_VALUE`
+|===
++
 Specifies the maximum number of entries in a multi-valued field to examine before stopping.
 This can potentially return zero results if the limit is reached before any matches are found.
 +
 If used with `hl.maxMultiValuedToMatch`, whichever limit is reached first will determine when to stop looking.
-+
-The default is `Integer.MAX_VALUE`.
 
 `hl.maxMultiValuedToMatch`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `Integer.MAX_VALUE`
+|===
++
 Specifies the maximum number of matches in a multi-valued field that are found before stopping.
 +
 If `hl.maxMultiValuedToExamine` is also defined, whichever limit is reached first will determine when to stop looking.
-+
-The default is `Integer.MAX_VALUE`.
 
 `hl.alternateField`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies a field to be used as a backup default summary if Solr cannot generate a snippet (i.e., because no terms match).
 
 `hl.maxAlternateFieldLength`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 Specifies the maximum number of characters of the field to return.
-Any value less than or equal to `0` means the field's length is unlimited (the default behavior).
+Any value less than or equal to `0` means the field's length is unlimited.
 +
 This parameter is only used in conjunction with the `hl.alternateField` parameter.
 
 `hl.highlightAlternate`::
-If set to `true`, the default, and `hl.alternateFieldName` is active, Solr will show the entire alternate field, with highlighting of occurrences.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If set to `true` and `hl.alternateFieldName` is active, Solr will show the entire alternate field, with highlighting of occurrences.
 If `hl.maxAlternateFieldLength=N` is used, Solr returns at most `N` characters surrounding the best matching fragment.
 +
 If set to `false`, or if there is no match in the alternate field either, the alternate field will be shown without highlighting.
 
 `hl.formatter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `simple`
+|===
++
 Selects a formatter for the highlighted output.
 Currently the only legal value is `simple`, which surrounds a highlighted term with a customizable pre- and post-text snippet.
 
 `hl.simple.pre`, `hl.simple.post`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _see description_
+|===
++
 Specifies the text that should appear before (`hl.simple.pre`) and after (`hl.simple.post`) a highlighted term, when using the `simple` formatter.
 The default is `<em>` and `</em>`.
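+
+For example, to wrap highlighted terms in `<strong>` tags instead (the value shown is illustrative), a request could include:
+
+[source,text]
+----
+hl.simple.pre=<strong>&hl.simple.post=</strong>
+----
+
+Remember to URL-encode these values when sending them as HTTP request parameters.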
 
 `hl.fragmenter`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `gap`
+|===
++
 Specifies a text snippet generator for highlighted text.
-The standard (default) fragmenter is `gap`, which creates fixed-sized fragments with gaps for multi-valued fields.
+The standard fragmenter is `gap`, which creates fixed-sized fragments with gaps for multi-valued fields.
 +
 Another option is `regex`, which tries to create fragments that resemble a specified regular expression.
 
 `hl.regex.slop`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0.6`
+|===
++
 When using the regex fragmenter (`hl.fragmenter=regex`), this parameter defines the factor by which the fragmenter can stray from the ideal fragment size (given by `hl.fragsize`) to accommodate a regular expression.
 +
 For instance, a slop of `0.2` with `hl.fragsize=100` should yield fragments between 80 and 120 characters in length.
 It is usually good to provide a slightly smaller `hl.fragsize` value when using the regex fragmenter.
-+
-The default is `0.6`.
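+
+As an illustrative sketch only, a request asking the regex fragmenter for roughly sentence-like fragments might combine these parameters as follows (the pattern is an example, not a recommended value, and would need to be URL-encoded in a real request):
+
+[source,text]
+----
+hl.fragmenter=regex&hl.fragsize=70&hl.regex.slop=0.2&hl.regex.pattern=[-\w ,/\n\"']{20,200}
+----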
 
 `hl.regex.pattern`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the regular expression for fragmenting.
 This could be used to extract sentences.
 
 `hl.regex.maxAnalyzedChars`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `10000`
+|===
++
 Instructs Solr to analyze only this many characters from a field when using the regex fragmenter (after which, the fragmenter produces fixed-sized fragments).
-The default is `10000`.
 +
 Note, applying a complicated regex to a huge field is computationally expensive.
 
 `hl.preserveMulti`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, multi-valued fields will return all values in the order they were saved in the index.
-If `false`, the default, only values that match the highlight request will be returned.
+If `false`, only values that match the highlight request will be returned.
 
 `hl.payloads`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 When `hl.usePhraseHighlighter` is `true` and the indexed field has payloads but not term vectors (generally quite rare), the index's payloads will be read into the highlighter's memory index along with the postings.
 +
-If this may happen and you know you don't need them for highlighting (i.e., your queries don't filter by payload) then you can save a little memory by setting this to false.
+If this may happen and you know you don't need them for highlighting (i.e., your queries don't filter by payload) then you can save a little memory by setting this to `false`.
 
 The Original Highlighter has a plugin architecture that enables new functionality to be registered in `solrconfig.xml`.
-The "```techproducts```" configset shows most of these settings explicitly.
+The "techproducts" configset shows most of these settings explicitly.
 You can use it as a guide to provide your own components, including a `SolrFormatter`, `SolrEncoder`, and `SolrFragmenter`.
 
-== The FastVector Highlighter
+== FastVector Highlighter
 
 The FastVector Highlighter (FVH) can be used in conjunction with the Original Highlighter if not all fields should be highlighted with the FVH.
 In such a mode, set `hl.method=original` and `f.yourTermVecField.hl.method=fastVector` for all fields that should use the FVH.
 One annoyance to keep in mind is that the Original Highlighter uses `hl.simple.pre` whereas the FVH (and other highlighters) use `hl.tag.pre`.
 
-In addition to the initial listed parameters, the following parameters documented for the Original Highlighter above are also supported by the FVH:
+In addition to the <<Common Highlighter Parameters>> above, the following parameters documented for the <<Original Highlighter>> above are also supported by the FVH:
 
 * `hl.alternateField`
 * `hl.maxAlternateFieldLength`
@@ -438,14 +668,25 @@ In addition to the initial listed parameters, the following parameters documente
 And here are additional parameters supported by the FVH:
 
 `hl.fragListBuilder`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `weighted`
+|===
++
 The snippet fragmenting algorithm.
 The `weighted` fragListBuilder uses IDF-weights to order fragments.
-This fragListBuilder is the default.
 +
 Other options are `single`, which returns the entire field contents as one snippet, or `simple`.
 You can select a fragListBuilder with this parameter, or modify an existing implementation in `solrconfig.xml` to be the default by adding "default=true".
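+
+For example, to make the `single` fragListBuilder the default, a `solrconfig.xml` entry along these lines could be used (a sketch based on the `sample_techproducts_configs` configset; verify the class name against your Solr version):
+
+[source,xml]
+----
+<fragListBuilder name="single"
+                 class="solr.highlight.SingleFragListBuilder"
+                 default="true"/>
+----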
 
 `hl.fragmentsBuilder`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `default`
+|===
++
 The fragments builder is responsible for formatting the fragments, which uses `<em>` and `</em>` markup by default (if `hl.tag.pre` and `hl.tag.post` are not defined).
 +
 Another pre-configured choice is `colored`, which is an example of how to use the fragments builder to insert HTML into the snippets for colored highlights if you choose.
@@ -459,10 +700,21 @@ See <<Using Boundary Scanners with the FastVector Highlighter>> below.
 See <<Using Boundary Scanners with the FastVector Highlighter>> below.
 
 `hl.phraseLimit`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `5000`
+|===
++
 The maximum number of phrases to analyze when searching for the highest-scoring phrase.
-The default is `5000`.
 
 `hl.multiValuedSeparatorChar`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: _space character_
+|===
++
 Text to use to separate one value from the next for a multi-valued field.
 The default is " " (a space).
 
diff --git a/solr/solr-ref-guide/src/index-segments-merging.adoc b/solr/solr-ref-guide/src/index-segments-merging.adoc
index 98a9adb..7f6b873 100644
--- a/solr/solr-ref-guide/src/index-segments-merging.adoc
+++ b/solr/solr-ref-guide/src/index-segments-merging.adoc
@@ -140,14 +140,26 @@ Faster index updates also means shorter commit turnaround times, which means mor
 === Controlling Deleted Document Percentages
 
 When a document is deleted or updated, the document is marked as deleted but is not removed from the index until the segment is merged.
-There are two parameters that can can be adjusted when using the default TieredMergePolicy that influence the number of deleted documents in an index.
+There are two parameters that can be adjusted when using the default `TieredMergePolicy` that influence the number of deleted documents in an index.
 
 `forceMergeDeletesPctAllowed`::
-(default `10.0`) When the external `expungeDeletes` command is issued, any segment that has more than this percent deleted documents will be merged into a new segment and the data associated with the deleted documents will be purged.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `10.0`
+|===
++
+When the external `expungeDeletes` command is issued, any segment that has more than this percent deleted documents will be merged into a new segment and the data associated with the deleted documents will be purged.
 A value of `0.0` will make `expungeDeletes` behave essentially identically to `optimize`.
 
 `deletesPctAllowed`::
-(default `33.0`) During normal segment merging, a best effort is made to insure that the total percentage of deleted documents in the index is below this threshold.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `33.0`
+|===
++
+During normal segment merging, a best effort is made to ensure that the total percentage of deleted documents in the index is below this threshold.
 Valid settings are between 20% and 50%.
 33% was chosen as the default because as this setting approaches 20%, considerable load is added to the system.
 
@@ -175,24 +187,41 @@ The merge scheduler controls how merges are performed.
 The default `ConcurrentMergeScheduler` performs merges in the background using separate threads.
 The alternative, `SerialMergeScheduler`, does not perform merges with separate threads.
 
-The `ConcurrentMergeScheduler` has the following configurable attributes:
+The `ConcurrentMergeScheduler` has the following configurable attributes.
+The defaults for these attributes are dynamically set based on whether the underlying disk drive is a rotational disk or not.
+Refer to the <<taking-solr-to-production.adoc#dynamic-defaults-for-concurrentmergescheduler, Dynamic defaults for ConcurrentMergeScheduler>> section for more details.
 
 `maxMergeCount`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum number of simultaneous merges that are allowed.
 If a merge is necessary yet we already have this many threads running, the indexing thread will block until a merge thread has completed.
 Note that Solr will only run the smallest `maxThreadCount` merges at a time.
 
 `maxThreadCount`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum number of simultaneous merge threads that should be running at once.
 This must be less than `maxMergeCount`.
 
 `ioThrottle`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A Boolean value (`true` or `false`) to explicitly control I/O throttling.
 By default throttling is enabled and the CMS will limit I/O throughput when merging to leave other operations (search, indexing) some headroom.
 
-The defaults for the above attributes are dynamically set based on whether the underlying disk drive is rotational disk or not.
-Refer to the <<taking-solr-to-production.adoc#dynamic-defaults-for-concurrentmergescheduler, Dynamic defaults for ConcurrentMergeScheduler>> section for more details.
-
 .Example: Dynamic defaults
 [source,xml]
 ----
@@ -303,12 +332,30 @@ Controls how commits are retained in case of rollback.
 The default is `SolrDeletionPolicy`, which takes the following parameters:
 
 `maxCommitsToKeep`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum number of commits to keep.
 
 `maxOptimizedCommitsToKeep`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum number of optimized commits to keep.
 
 `maxCommitAge`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 The maximum age of any commit to keep.
 This supports `DateMathParser` syntax.
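+
+A sketch of how these parameters fit together in `solrconfig.xml` (the values shown are illustrative only):
+
+[source,xml]
+----
+<deletionPolicy class="solr.SolrDeletionPolicy">
+  <str name="maxCommitsToKeep">1</str>
+  <str name="maxOptimizedCommitsToKeep">0</str>
+  <str name="maxCommitAge">30MINUTES</str>
+</deletionPolicy>
+----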
 
diff --git a/solr/solr-ref-guide/src/indexing-with-tika.adoc b/solr/solr-ref-guide/src/indexing-with-tika.adoc
index 0086acb..d3a3342 100644
--- a/solr/solr-ref-guide/src/indexing-with-tika.adoc
+++ b/solr/solr-ref-guide/src/indexing-with-tika.adoc
@@ -50,16 +50,13 @@ By default it maps to the same name but several parameters control how this is d
 The next step after any update handler is the <<update-request-processors.adoc#,Update Request Processor>> chain.
 
 Solr Cell is a contrib, which means it's not automatically included with Solr but must be configured.
-The example configsets have Solr Cell configured, but if you are not using those,
-you will want to pay attention to the section <<Configuring the ExtractingRequestHandler in solrconfig.xml>> below.
+The example configsets have Solr Cell configured, but if you are not using those, you will want to pay attention to the section <<solrconfig.xml Configuration>> below.
 
 === Solr Cell Performance Implications
 
-Rich document formats are frequently not well documented, and even in cases where there is documentation for the
-format, not everyone who creates documents will follow the specifications faithfully.
+Rich document formats are frequently not well documented, and even in cases where there is documentation for the format, not everyone who creates documents will follow the specifications faithfully.
 
-This creates a situation where Tika may encounter something that it is simply not able to handle gracefully,
-despite taking great pains to support as many formats as possible.
+This creates a situation where Tika may encounter something that it is simply not able to handle gracefully, despite taking great pains to support as many formats as possible.
 PDF files are particularly problematic, mostly due to the PDF format itself.
 
 In case of a failure processing any file, the `ExtractingRequestHandler` does not have a secondary mechanism to try to extract some text from the file; it will throw an exception and fail.
@@ -157,10 +154,15 @@ The easiest way to try out the `uprefix` parameter is to start over with a fresh
 
 The following parameters are accepted by the `ExtractingRequestHandler`.
 
-These parameters can be set for each indexing request (as request parameters), or they can be set for all requests to
-the request handler generally by defining them in `solrconfig.xml`, as described in <<Configuring the ExtractingRequestHandler in solrconfig.xml>>.
+These parameters can be set for each indexing request (as request parameters), or they can be set for all requests to the request handler by defining them in <<solrconfig.xml Configuration,`solrconfig.xml`>>.
 
 `capture`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Captures XHTML elements with the specified name for a supplementary addition to the Solr document.
 This parameter can be useful for copying chunks of the XHTML into a separate field.
 For instance, it could be used to grab paragraphs (`<p>`) and index them into a separate field.
@@ -173,6 +175,12 @@ Output: `"p": {"This is a paragraph from my document."}`
 This parameter can also be used with the `fmap._source_field_` parameter to map content from attributes to a new field.
 
 `captureAttr`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Indexes attributes of the Tika XHTML elements into separate fields, named after the element.
 If set to `true`, when extracting from HTML, Tika can return the href attributes in `<a>` tags as fields named "`a`".
 +
@@ -181,17 +189,34 @@ Example: `captureAttr=true`
 Output: `"div": {"classname1", "classname2"}`
 
 `commitWithin`::
-Add the document within the specified number of milliseconds.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Issue a commit to the index within the specified number of milliseconds.
 +
 Example: `commitWithin=10000` (10 seconds)
 
 `defaultField`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A default field to use if the `uprefix` parameter is not specified and a field cannot otherwise be determined.
 +
 Example: `defaultField=\_text_`
 
 `extractOnly`::
-Default is `false`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, returns the extracted content from Tika without indexing the document.
 This returns the extracted XHTML as a string in the response.
 When viewing on a screen, it may be useful to set the `extractFormat` parameter for a response format other than XML to aid in viewing the embedded XHTML tags.
@@ -199,29 +224,53 @@ When viewing on a screen, it may be useful to set the `extractFormat` parameter
 Example: `extractOnly=true`
 
 `extractFormat`::
-The default is `xml`, but the other option is `text`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `xml`
+|===
++
 Controls the serialization format of the extract content.
+The options are `xml` or `text`.
 The `xml` format is actually XHTML, the same format that results from passing the `-x` command to the Tika command line application, while the text format is like that produced by Tika's `-t` command.
 +
 This parameter is valid only if `extractOnly` is set to `true`.
 +
 Example: `extractFormat=text`
 +
-Output: For an example output (in XML), see https://cwiki.apache.org/confluence/display/solr/TikaExtractOnlyExampleOutput
+Output: For an example output (in XML), see https://cwiki.apache.org/confluence/display/solr/TikaExtractOnlyExampleOutput.
 
 `fmap._source_field_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Maps (moves) one field name to another.
 The `source_field` must be a field in incoming documents, and the value is the Solr field to map to.
 +
 Example: `fmap.content=text` causes the data in the `content` field generated by Tika to be moved to the Solr's `text` field.
 
 `ignoreTikaException`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If `true`, exceptions found during processing will be skipped.
 Any metadata available, however, will be indexed.
 +
 Example: `ignoreTikaException=true`
 
 `literal._fieldname_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Populates a field with the name supplied with the specified value for each document.
 The data can be multivalued if the field is multivalued.
 +
@@ -230,15 +279,26 @@ Example: `literal.doc_status=published`
 Output: `"doc_status": "published"`
 
 `literalsOverride`::
-If `true` (the default), literal field values will override other values with the same field name.
 +
-If `false`, literal values defined with `literal._fieldname_` will be appended to data already in the fields extracted
-from Tika.
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, literal field values will override other values with the same field name.
++
+If `false`, literal values defined with `literal._fieldname_` will be appended to data already in the fields extracted from Tika.
 When setting `literalsOverride` to `false`, the field must be multivalued.
 +
 Example: `literalsOverride=false`
 
 `lowernames`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, all field names will be mapped to lowercase with underscores, if needed.
 +
 Example: `lowernames=true`
@@ -246,46 +306,84 @@ Example: `lowernames=true`
 Output: Assuming input of "Content-Type", the result in documents would be a field `content_type`
 
 `multipartUploadLimitInKB`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `2048` kilobytes
+|===
++
 Defines the size in kilobytes of documents to allow.
-The default is `2048` (2Mb).
 If you have very large documents, you should increase this or they will be rejected.
 +
 Example: `multipartUploadLimitInKB=2048000`
 
 `parseContext.config`::
-If a Tika parser being used allows parameters, you can pass them to Tika by creating a parser configuration file and
-pointing Solr to it.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+If a Tika parser being used allows parameters, you can pass them to Tika by creating a parser configuration file and pointing Solr to it.
 See the section <<Parser-Specific Properties>> for more information about how to use this parameter.
 +
 Example: `parseContext.config=pdf-config.xml`
 
 `passwordsFile`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines the path and name of a file containing file name to password mappings.
-See the section
-<<Indexing Encrypted Documents>> for more information about using a password file.
+See the section <<Indexing Encrypted Documents>> for more information about using a password file.
 +
 Example: `passwordsFile=/path/to/passwords.txt`
 
 `resource.name`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Specifies the name of the file to index.
 This is optional, but Tika can use it as a hint for detecting a file's MIME type.
 +
 Example: `resource.name=mydoc.doc`
 
 `resource.password`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines a password to use for a password-protected PDF or OOXML file.
-See the section <<Indexing Encrypted Documents>>
-for more information about using this parameter.
+See the section <<Indexing Encrypted Documents>> for more information about using this parameter.
 +
 Example: `resource.password=secret`
 
 `tika.config`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Defines a file path and name to a custom Tika configuration file.
 This is only required if you have customized your Tika implementation.
 +
 Example: `tika.config=/path/to/tika.config`
 
 `uprefix`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Prefixes all fields _that are undefined in the schema_ with the given prefix.
 This is very useful when combined with dynamic field definitions.
 +
@@ -295,17 +393,21 @@ In this case, you could additionally define a rule in the Schema to not index th
 `<dynamicField name="ignored_*" type="ignored" />`
 
 `xpath`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 When extracting, only return Tika XHTML content that satisfies the given XPath expression.
 See http://tika.apache.org/{ivy-tika-version}/ for details on the format of Tika XHTML; it varies with the format being parsed.
 Also see the section <<Defining XPath Expressions>> for an example.
 
-=== Configuring the ExtractingRequestHandler in solrconfig.xml
+=== solrconfig.xml Configuration
 
-If you have started Solr with one of the supplied <<config-sets.adoc#,example configsets>>, you already have
-the `ExtractingRequestHandler` configured by default and you only need to customize it for your content.
+If you have started Solr with one of the supplied <<config-sets.adoc#,example configsets>>, you may already have the `ExtractingRequestHandler` configured by default.
 
-If you are not working with an example configset, the jars required to use Solr Cell will not be loaded automatically.
-You will need to configure your `solrconfig.xml` to find the `ExtractingRequestHandler` and its dependencies:
+If it is not already configured, you will need to configure `solrconfig.xml` to find the `ExtractingRequestHandler` and its dependencies:
 
 [source,xml]
 ----
@@ -314,8 +416,7 @@ You will need to configure your `solrconfig.xml` to find the `ExtractingRequestH
 ----
 
 You can then configure the `ExtractingRequestHandler` in `solrconfig.xml`.
-The following is the default
-configuration found in Solr's `_default` configset, which you can modify as needed:
+The following is the default configuration found in Solr's `sample_techproducts_configs` configset, which you can modify as needed:
 
 [source,xml]
 ----
@@ -333,10 +434,9 @@ In this setup, all field names are lower-cased (with the `lowernames` parameter)
 
 [TIP]
 ====
-You may need to configure <<update-request-processors.adoc#,Update Request Processors>> (URPs)
-that parse numbers and dates and do other manipulations on the metadata fields generated by Solr Cell.
+You may need to configure <<update-request-processors.adoc#,Update Request Processors>> (URPs) that parse numbers and dates and do other manipulations on the metadata fields generated by Solr Cell.
 
-In Solr's default configsets, <<schemaless-mode.adoc#,"schemaless">> (aka data driven, or field guessing) mode is enabled, which does a variety of such processing already.
+In Solr's `_default` configset, <<schemaless-mode.adoc#,"schemaless">> (aka data driven, or field guessing) mode is enabled, which does a variety of such processing already.
 
 If you instead explicitly define the fields for your schema, you can selectively specify the desired URPs.
 An easy way to specify this is to configure the parameter `processor` (under `defaults`) to `uuid,remove-blank,field-name-mutating,parse-boolean,parse-long,parse-double,parse-date`.
@@ -383,10 +483,10 @@ Consult the Tika Java API documentation for configuration parameters that can be
 
 === Indexing Encrypted Documents
 
-The ExtractingRequestHandler will decrypt encrypted files and index their content if you supply a password in either `resource.password` on the request, or in a `passwordsFile` file.
+The ExtractingRequestHandler will decrypt encrypted files and index their content if you supply a password in either `resource.password` in the request, or in a `passwordsFile` file.
 
 In the case of `passwordsFile`, the file supplied must be formatted so there is one line per rule.
-Each rule contains a file name regular expression, followed by "=", then the password in clear-text.
+Each rule contains a file name regular expression, followed by "`=`", then the password in clear-text.
 Because the passwords are in clear-text, the file should have strict access restrictions.
 
 [source,plain]
@@ -411,7 +511,7 @@ Set the parameter `literalsOverride`, which normally defaults to `true`, to `fal
 
 === Metadata Created by Tika
 
-As mentioned before, Tika produces metadata about the document.
+As mentioned earlier, Tika produces metadata about the document.
 Metadata describes different aspects of a document, such as the author's name, the number of pages, the file size, and so on.
 The metadata produced depends on the type of document submitted.
 For instance, PDFs have different metadata than Word documents do.
@@ -420,21 +520,16 @@ For instance, PDFs have different metadata than Word documents do.
 
 In addition to the metadata added by Tika's parsers, Solr adds the following metadata:
 
-`stream_name`::
-The name of the Content Stream as uploaded to Solr.
+* `stream_name`: The name of the Content Stream as uploaded to Solr.
 Depending on how the file is uploaded, this may or may not be set.
 
-`stream_source_info`::
-Any source info about the stream.
+* `stream_source_info`: Any source info about the stream.
 
-`stream_size`::
-The size of the stream in bytes.
+* `stream_size`: The size of the stream in bytes.
 
-`stream_content_type`::
-The content type of the stream, if available.
+* `stream_content_type`: The content type of the stream, if available.
 
-IMPORTANT: It's recommended to use the `extractOnly` option before indexing to discover the values Solr will
-set for these metadata elements on your content.
+IMPORTANT: It's recommended to use the `extractOnly` option before indexing to discover the values Solr will set for these metadata elements on your content.
 
 === Order of Input Processing
 
diff --git a/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc b/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
index f5e85a6..19a5f49 100644
--- a/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
+++ b/solr/solr-ref-guide/src/indexing-with-update-handlers.adoc
@@ -78,10 +78,21 @@ For example:
 The add command supports some optional attributes which may be specified.
 
 `commitWithin`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Add the document within the specified number of milliseconds.
 
 `overwrite`::
-Default is `true`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 Indicates if the unique key constraints should be checked to overwrite previous versions of the same document (see below).
 
 If the document schema defines a unique key, then by default an `/update` operation to add a document will overwrite (i.e., replace) any document in the index with the same unique key.
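As an illustrative sketch of these attributes together (the document fields and collection name are hypothetical), the payload below writes an `<add>` command to a file so it can be inspected before POSTing:

```shell
# Sketch: an <add> command using the optional commitWithin and overwrite
# attributes. The document fields here are hypothetical.
cat > add-docs.xml <<'EOF'
<add commitWithin="10000" overwrite="true">
  <doc>
    <field name="id">doc-1</field>
    <field name="title">An example document</field>
  </doc>
</add>
EOF
# With a running Solr, POST it to a collection's /update handler:
#   curl -X POST -H "Content-Type: text/xml" -d @add-docs.xml \
#     "http://localhost:8983/solr/techproducts/update"
```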
@@ -109,18 +120,38 @@ Applications requiring NRT functionality should not use optimize.
 The `<commit>` and `<optimize>` elements accept these optional attributes:
 
 `waitSearcher`::
-Default is `true`.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 Blocks until a new searcher is opened and registered as the main query searcher, making the changes visible.
 
-`expungeDeletes`:: (commit only) Default is `false`.
+`expungeDeletes`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Merges segments that have more than 10% deleted docs, expunging the deleted documents in the process.
 Resulting segments will respect `maxMergedSegmentMB`.
+This option only applies in a `<commit>` operation.
 +
-WARNING: expungeDeletes is "less expensive" than optimize, but the same warnings apply.
+WARNING: `expungeDeletes` is less expensive than optimize, but the same warnings apply.
 
-`maxSegments`:: (optimize only) Default is unlimited, resulting segments respect the `maxMergedSegmentMB` setting.
+`maxSegments`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Makes a best effort attempt to merge the segments down to no more than this number of segments but does not guarantee that the goal will be achieved.
 Unless there is tangible evidence that optimizing to a small number of segments is beneficial, this parameter should be omitted and the default behavior accepted.
+This option only applies in an `<optimize>` operation.
+The default is unlimited; resulting segments respect the `maxMergedSegmentMB` setting.
 
 Here are examples of `<commit>` and `<optimize>` using optional attributes:
 
@@ -321,17 +352,17 @@ One example usage would be to copy a Solr 1.3 index (which does not have CSV res
 
 [source,bash]
 ----
-curl -o standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=xml"
-curl -X POST -H "Content-Type: text/xml" -d @standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/update/xslt?commit=true&tr=updateXml.xsl"
+$ curl -o standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=xml"
 
+$ curl -X POST -H "Content-Type: text/xml" -d @standard_solr_xml_format.xml "http://localhost:8983/solr/techproducts/update/xslt?commit=true&tr=updateXml.xsl"
 ----
 
-NOTE: You can see the opposite export/import cycle using the `tr` parameter in   <<response-writers.adoc#xslt-writer-example,Response Writer XSLT example>>.
+NOTE: You can see the opposite export/import cycle using the `tr` parameter in <<response-writers.adoc#xslt-writer-example,Response Writer XSLT example>>.
 
 == JSON Formatted Index Updates
 
 Solr can accept JSON that conforms to a defined structure, or can accept arbitrary JSON-formatted documents.
-If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described below in the section <<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>.
+If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described in the section <<transforming-and-indexing-custom-json.adoc#,Transforming and Indexing Custom JSON>>.
 
 === Solr-Style JSON
 
@@ -339,14 +370,14 @@ JSON formatted update requests may be sent to Solr's `/update` handler using `Co
 
 JSON formatted updates can take 3 basic forms, described in depth below:
 
-* <<Adding a Single JSON Document,A single document to add>>, expressed as a top level JSON Object.
+* <<Adding a Single JSON Document,A single document>>, expressed as a top level JSON Object.
 To differentiate this from a set of commands, the `json.command=false` request parameter is required.
-* <<Adding Multiple JSON Documents,A list of documents to add>>, expressed as a top level JSON Array containing a JSON Object per document.
-* <<Sending JSON Update Commands,A sequence of update commands>>, expressed as a top level JSON Object (aka: Map).
+* <<Adding Multiple JSON Documents,A list of documents>>, expressed as a top level JSON Array containing a JSON Object per document.
+* <<Sending JSON Update Commands,A sequence of update commands>>, expressed as a top level JSON Object (a Map).
 
 ==== Adding a Single JSON Document
 
-The simplest way to add Documents via JSON is to send each document individually as a JSON Object, using the `/update/json/docs` path:
+The simplest way to add documents via JSON is to send each document individually as a JSON Object, using the `/update/json/docs` path:
 
 [source,bash]
 ----
@@ -496,101 +527,203 @@ The CSV handler allows the specification of many parameters in the URL in the fo
 The table below describes the parameters for the update handler.
 
 `separator`::
-Character used as field separator; default is ",". This parameter is global; for per-field usage, see the `split` parameter.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `,`
+|===
++
+Character used as field separator.
+This parameter is global; for per-field usage, see the `split` parameter.
 +
 Example:  `separator=%09`
 
 `trim`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 If `true`, remove leading and trailing whitespace from values.
-The default is `false`.
 This parameter can be either global or per-field.
 +
 Examples: `f.isbn.trim=true` or `trim=false`
 
 `header`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
 Set to `true` if first line of input contains field names.
 These will be used if the `fieldnames` parameter is absent.
 This parameter is global.
 
 `fieldnames`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Comma-separated list of field names to use when adding documents.
 This parameter is global.
 +
 Example: `fieldnames=isbn,price,title`
 
 `literal._field_name_`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 A literal value for a specified field name.
 This parameter is global.
 +
 Example: `literal.color=red`
 
 `skip`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Comma-separated list of field names to skip.
 This parameter is global.
 +
 Example: `skip=uninteresting,shoesize`
 
 `skipLines`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 Number of lines to discard in the input stream before the CSV data starts, including the header, if present.
-Default=`0`.
 This parameter is global.
 +
 Example: `skipLines=5`
 
-`encapsulator`:: The character optionally used to surround values to preserve characters such as the CSV separator or whitespace.
+`encapsulator`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The character optionally used to surround values to preserve characters such as the CSV separator or whitespace.
 This standard CSV format handles the encapsulator itself appearing in an encapsulated value by doubling the encapsulator.
 +
 This parameter is global; for per-field usage, see `split`.
 +
 Example: `encapsulator="`
 
-`escape`:: The character used for escaping CSV separators or other reserved characte
+`escape`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+The character used for escaping CSV separators or other reserved characters.
 If an escape is specified, the encapsulator is not used unless also explicitly specified since most formats use either encapsulation or escaping, not both.
 +
 Example: `escape=\`
 
 `keepEmpty`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `false`
+|===
++
 Keep and index zero length (empty) fields.
-The default is `false`.
 This parameter can be global or per-field.
 +
 Example: `f.price.keepEmpty=true`
 
-`map`:: Map one value to another.
-Format is value:replacement (which can be empty).
+`map`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
+Map one value to another.
+Format is `map=value:replacement`; the replacement value can be empty.
 This parameter can be global or per-field.
 +
 Example: `map=left:right` or `f.subject.map=history:bunk`
 
 `split`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 If `true`, split a field into multiple values by a separate parser.
 This parameter is used on a per-field basis.
 
 `overwrite`::
-If `true` (the default), check for and overwrite duplicate documents, based on the uniqueKey field declared in the Solr schema.
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `true`
+|===
++
+If `true`, check for and overwrite duplicate documents, based on the uniqueKey field declared in the Solr schema.
 If you know the documents you are indexing do not contain any duplicates then you may see a considerable speed up setting this to `false`.
 +
 This parameter is global.
 
 `commit`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Issues a commit after the data has been ingested.
 This parameter is global.
 
 `commitWithin`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Add the document within the specified number of milliseconds.
 This parameter is global.
 +
 Example: `commitWithin=10000`
 
 `rowid`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: none
+|===
++
 Map the `rowid` (line number) to a field specified by the value of the parameter, for instance if your CSV doesn't have a unique key and you want to use the row id as such.
 This parameter is global.
 +
 Example: `rowid=id`
 
 `rowidOffset`::
++
+[%autowidth,frame=none]
+|===
+|Optional |Default: `0`
+|===
++
 Add the given offset (as an integer) to the `rowid` before adding it to the document.
-Default is `0`.
 This parameter is global.
 +
 Example: `rowidOffset=10`
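Pulling several of these parameters together, here is a sketch of a CSV update request; the file, field names, and collection are hypothetical:

```shell
# Sketch: a small CSV file using ';' as the global separator and '|' inside
# a multi-valued field. All names here are hypothetical.
cat > books.csv <<'EOF'
id;cat;name
book-1;fiction|classic;Moby Dick
EOF
echo "First line: $(head -n 1 books.csv)"
# With a running Solr, the request would set the global separator, split the
# cat field on '|' (%7C), and commit after ingest:
#   curl "http://localhost:8983/solr/techproducts/update/csv?separator=%3B&f.cat.split=true&f.cat.separator=%7C&commit=true" \
#     -H 'Content-Type: text/csv' --data-binary @books.csv
```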
diff --git a/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc b/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
index 181eb47..0f9b8e4 100644
--- a/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
+++ b/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
@@ -53,7 +53,7 @@ At its core, Solr autoscaling provides users with a rule syntax to define prefer
 
 * There were several other new features released in earlier 6.x releases, which you may have missed:
 ** <<learning-to-rank.adoc#,Learning to Rank>>
-** <<highlighting.adoc#the-unified-highlighter,Unified Highlighter>>
+** <<highlighting.adoc#unified-highlighter,Unified Highlighter>>
 ** <<metrics-reporting.adoc#,Metrics API>>. See also information about related deprecations in the section <<JMX Support and MBeans>> below.
 ** <<other-parsers.adoc#payload-query-parsers,Payload queries>>
 ** <<stream-evaluator-reference.adoc#,Streaming Evaluators>>
@@ -169,7 +169,7 @@ The following changes were made in SolrJ.
 * The `defaultOperator` parameter in the schema is no longer supported. Use the `q.op` parameter instead. This option had been deprecated for several releases. See the section <<standard-query-parser.adoc#standard-query-parser-parameters,Standard Query Parser Parameters>> for more information.
 * The `defaultSearchField` parameter in the schema is no longer supported. Use the `df` parameter instead. This option had been deprecated for several releases. See the section <<standard-query-parser.adoc#standard-query-parser-parameters,Standard Query Parser Parameters>> for more information.
 * The `mergePolicy`, `mergeFactor` and `maxMergeDocs` parameters have been removed and are no longer supported. You should define a `mergePolicyFactory` instead. See the section <<index-segments-merging.adoc#mergepolicyfactory,the mergePolicyFactory>> for more information.
-* The PostingsSolrHighlighter has been deprecated. It's recommended that you move to using the UnifiedHighlighter instead. See the section <<highlighting.adoc#the-unified-highlighter,Unified Highlighter>> for more information about this highlighter.
+* The PostingsSolrHighlighter has been deprecated. It's recommended that you move to using the UnifiedHighlighter instead. See the section <<highlighting.adoc#unified-highlighter,Unified Highlighter>> for more information about this highlighter.
 * Index-time boosts have been removed from Lucene, and are no longer available from Solr. If any boosts are provided, they will be ignored by the indexing chain. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. See the section <<function-queries.adoc#,Function Queries>> for more information.
 * The `StandardRequestHandler` is deprecated. Use `SearchHandler` instead.
 * To improve parameter consistency in the Collections API, the parameter names `fromNode` for the MOVEREPLICA command and `source`, `target` for the REPLACENODE command have been deprecated and replaced with `sourceNode` and `targetNode` instead. The old names will continue to work for back-compatibility but they will be removed in Solr 8.
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index abf2802..31c3d6b 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -726,7 +726,7 @@ This query parser is designed to allow users to enter queries however they want,
 
 This parser takes the following parameters:
 
-q.operators::
+`q.operators`::
 Comma-separated list of names of parsing operators to enable.
 By default, all operations are enabled, and this parameter can be used to effectively disable specific operators as needed, by excluding them from the list.
 Passing an empty string with this parameter disables all operators.
@@ -757,15 +757,15 @@ At the end of terms, specifies a fuzzy query.
 |`NEAR` |`~_N_` |At the end of phrases, specifies a NEAR query |`"term1 term2"~5`
 |===
 
-q.op::
+`q.op`::
 Defines the default operator to use if none is defined by the user.
 Allowed values are `AND` and `OR`.
 `OR` is used if none is specified.
 
-qf::
+`qf`::
 A list of query fields and boosts to use when building the query.
 
-df::
+`df`::
 Defines the default field if none is defined in the Schema, or overrides the default field if it is already defined.
 
 Any errors in syntax are ignored and the query parser will interpret queries as best it can.
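As a sketch of a request using these parameters (the collection, field names, and boosts are hypothetical, and a running Solr is required to actually issue the query):

```shell
# Sketch: parameters for the simple query parser, assembled for inspection.
# %5E is '^' and %20 is a space, URL-encoded.
QUERY_PARAMS="defType=simple&q.op=AND&qf=name%5E2%20features&q=ipod%20-case~1"
echo "http://localhost:8983/solr/techproducts/select?$QUERY_PARAMS"
# Equivalently, letting curl do the encoding:
#   curl "http://localhost:8983/solr/techproducts/select" \
#     --data-urlencode "defType=simple" --data-urlencode "q.op=AND" \
#     --data-urlencode "qf=name^2 features" --data-urlencode "q=ipod -case~1"
```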
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index 6b5a4f2..1f51820 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -237,7 +237,7 @@ Administrators can write their own custom permissions that can match requests ba
 
 Each custom permission is a JSON object under the `permissions` parameter, with one or more of the properties below:
 
-name::
+`name`::
 +
 [%autowidth,frame=none]
 |===
@@ -251,7 +251,7 @@ For custom permissions, this is used only as a clue to administrators about what
 Care must be taken when setting this parameter to avoid colliding with one of Solr's <<Permissions,predefined permissions>>, whose names are reserved.
 If this name matches a predefined permission, Solr ignores any other properties set and uses the semantics of the predefined permission instead.
 
-collection::
+`collection`::
 +
 [%autowidth,frame=none]
 |===
@@ -275,7 +275,7 @@ A `collection` parameter given an alias as a value will never match because RBAP
 Instead, set a `collection` parameter that contains all collections in the alias concerned (or the `*` wildcard).
 ====
 
-path::
+`path`::
 +
 [%autowidth,frame=none]
 |===
@@ -290,7 +290,7 @@ For APIs that access collections, path values should start after the collection
 For collection-agnostic (aka, "admin") APIs, path values should start at the `/admin` path segment.
 The wildcard `\*` can be used to indicate that this permission applies to all paths.
 
-method::
+`method`::
 +
 [%autowidth,frame=none]
 |===
@@ -301,7 +301,7 @@ Defines the HTTP methods this permission applies to.
 Options include `HEAD`, `POST`, `PUT`, `GET`, `DELETE`, and the wildcard `\*`.
 Multiple values can also be specified using a JSON array.
 
-params::
+`params`::
 +
 [%autowidth,frame=none]
 |===
@@ -335,7 +335,7 @@ If the commands LIST and CLUSTERSTATUS are case insensitive, the example above c
 }
 ----
 
-role::
+`role`::
 +
 [%autowidth,frame=none]
 |===
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 782d96f..81250a9 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -1313,7 +1313,7 @@ curl -X GET "http://localhost:8983/api/collections/techproducts/schema/name"
 
 *Path Parameters*
 
-collection::
+`collection`::
 The collection (or core) name.
 
 *Query Parameters*
diff --git a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
index 2b1bf50..3dd1d9c 100644
--- a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
+++ b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
@@ -452,7 +452,7 @@ This tool is not yet officially documented in the Reference Guide, but draft doc
 *Highlighting*
 
 Solr's Unified Highlighter now has two parameters to help control passage sizing, `hl.fragAlignRatio` and `hl.fragsizeIsMinimum`.
-See the section <<highlighting.adoc#the-unified-highlighter,The Unified Highlighter>> for details about these new parameters.
+See the section <<highlighting.adoc#unified-highlighter,Unified Highlighter>> for details about these new parameters.
 Regardless of the settings, the passages may be sized differently than before.
 _Warning: These default settings were found to be a significant performance regression for apps that highlight lots of text with the default sentence break iterator.
 See the 8.6 upgrade notes for advice you can apply in 8.5._