Posted to commits@lucene.apache.org by ct...@apache.org on 2017/06/16 01:05:20 UTC

[5/6] lucene-solr:master: SOLR-10892: Change easy tables to description lists

SOLR-10892: Change easy tables to description lists


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/bf26608f
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/bf26608f
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/bf26608f

Branch: refs/heads/master
Commit: bf26608f6d2b8c1281975dfb25c3cd2110ea5260
Parents: 4eea441
Author: Cassandra Targett <ct...@apache.org>
Authored: Thu Jun 15 18:03:03 2017 -0700
Committer: Cassandra Targett <ct...@apache.org>
Committed: Thu Jun 15 18:03:35 2017 -0700

----------------------------------------------------------------------
 solr/solr-ref-guide/src/blockjoin-faceting.adoc |  19 +-
 .../src/collapse-and-expand-results.adoc        |  84 ++++----
 solr/solr-ref-guide/src/configsets-api.adoc     |  30 +--
 .../src/cross-data-center-replication-cdcr.adoc |  49 ++---
 solr/solr-ref-guide/src/de-duplication.adoc     |  41 ++--
 .../src/defining-core-properties.adoc           |  52 ++---
 solr/solr-ref-guide/src/defining-fields.adoc    |  18 +-
 .../detecting-languages-during-indexing.adoc    |  97 +++++++---
 .../src/distributed-requests.adoc               |  41 ++--
 .../field-type-definitions-and-properties.adoc  |  38 ++--
 .../src/hadoop-authentication-plugin.adoc       |  44 +++--
 solr/solr-ref-guide/src/index-replication.adoc  | 192 ++++++++++++-------
 .../src/indexconfig-in-solrconfig.adoc          |  15 +-
 .../src/initparams-in-solrconfig.adoc           |  16 +-
 .../src/kerberos-authentication-plugin.adoc     |  62 +++---
 .../src/making-and-restoring-backups.adoc       |  98 +++++-----
 .../src/mbean-request-handler.adoc              |  21 +-
 solr/solr-ref-guide/src/morelikethis.adoc       |  74 ++++---
 .../src/near-real-time-searching.adoc           |  36 ++--
 .../solr-ref-guide/src/parameter-reference.adoc |  45 ++---
 solr/solr-ref-guide/src/query-screen.adoc       |  68 ++++---
 .../src/requestdispatcher-in-solrconfig.adoc    |  17 +-
 ...lers-and-searchcomponents-in-solrconfig.adoc |  14 +-
 solr/solr-ref-guide/src/response-writers.adoc   |  93 +++++----
 solr/solr-ref-guide/src/result-clustering.adoc  |  81 ++++----
 solr/solr-ref-guide/src/result-grouping.adoc    |  77 +++++---
 .../src/rule-based-authorization-plugin.adoc    |  38 ++--
 .../src/running-solr-on-hdfs.adoc               |  91 ++++-----
 solr/solr-ref-guide/src/spatial-search.adoc     | 144 ++++++++------
 .../src/the-query-elevation-component.adoc      |  17 +-
 .../solr-ref-guide/src/the-stats-component.adoc |  73 ++++---
 .../src/the-term-vector-component.adoc          |  55 +++---
 .../solr-ref-guide/src/the-terms-component.adoc | 115 +++++------
 .../src/update-request-processors.adoc          |   6 +-
 .../src/updatehandlers-in-solrconfig.adoc       |  56 +++---
 .../src/updating-parts-of-documents.adoc        |  54 ++----
 .../src/uploading-data-with-index-handlers.adoc | 128 +++++++++----
 ...g-data-with-solr-cell-using-apache-tika.adoc | 112 +++++++----
 solr/solr-ref-guide/src/v2-api.adoc             |  24 +--
 .../src/velocity-response-writer.adoc           |  99 +++++-----
 40 files changed, 1362 insertions(+), 1072 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/blockjoin-faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blockjoin-faceting.adoc b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
index bf33aca..1a89a57 100644
--- a/solr/solr-ref-guide/src/blockjoin-faceting.adoc
+++ b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
@@ -102,14 +102,15 @@ Queries are constructed the same way as for a <<other-parsers.adoc#OtherParsers-
 http://localhost:8983/solr/bjqfacet?q={!parent which=type_s:parent}SIZE_s:XL&child.facet.field=COLOR_s
 ----
 
-As a result we should have facets for Red(1) and Blue(1), because matches on children `id=11` and `id=12` are aggregated into single hit into parent with `id=1`. The key components of the request are:
+As a result we should have facets for Red(1) and Blue(1), because matches on children `id=11` and `id=12` are aggregated into a single hit on the parent with `id=1`.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+The key components of the request shown above are:
 
-[cols="30,70",options="header"]
-|===
-|URL Part | Meaning
-|`/bjqfacet` |The name of the request handler that has been defined with one of block join facet components enabled.
-|`q={!parent ...}..` |The mandatory parent query as a main query. The parent query could also be a subordinate clause in a more complex query.
-|`child.facet.field=...` |The child document field, which might be repeated many times with several fields, as necessary.
-|===
+`/bjqfacet?`::
+The name of the request handler that has been defined with a block join facet component enabled.
+
+`q={!parent which=type_s:parent}SIZE_s:XL`::
+The mandatory parent query as a main query. The parent query could also be a subordinate clause in a more complex query.
+
+`&child.facet.field=COLOR_s`::
+The child document field, which might be repeated many times with several fields, as necessary.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index 5640b9b..106fd1c 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -34,37 +34,48 @@ The `CollapsingQParser` is really a _post filter_ that provides more performant
 
 The CollapsingQParser accepts the following local parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+field::
+The field that is being collapsed on. The field must be a single-valued String, Int, or Float field.
 
-[cols="20,60,20",options="header"]
-|===
-|Parameter |Description |Default
-|field |The field that is being collapsed on. The field must be a single valued String, Int or Float |none
-|min \| max a|
+min or max::
 Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#function-queries,function query>>.
++
+At most one of the `min`, `max`, or `sort` (see below) parameters may be specified.
++
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. The default is none.
 
-At most only one of the min, max, or sort (see below) parameters may be specified.
-
-If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
-|sort a|
+sort::
 Selects the group head document for each group based on which document comes first according to the specified <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sort string>>.
-
-At most only one of the min, max, (see above) or sort parameters may be specified.
-
-If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
-|nullPolicy a|
-There are three null policies:
-
-* *ignore*: removes documents with a null value in the collapse field. This is the default.
-* *expand*: treats each document with a null value in the collapse field as a separate group.
-* *collapse*: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
-
- |ignore
-|hint |Currently there is only one hint available: `top_fc`, which stands for top level FieldCache. The `top_fc` hint is only available when collapsing on String fields. `top_fc` usually provides the best query time speed but takes the longest to warm on startup or following a commit. `top_fc` will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting. For very high cardinality (high distinct count) fields, `top_fc` may not fare so well. |none
-|size |Sets the initial size of the collapse data structures when collapsing on a *numeric field only*. The data structures used for collapsing grow dynamically when collapsing on numeric fields. Setting the size above the number of results expected in the result set will eliminate the resizing cost. |100,000
-|===
-
-*Sample Syntax:*
++
+At most one of the `min`, `max` (see above), or `sort` parameters may be specified.
++
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. The default is none.
+
+nullPolicy::
+There are three available null policies:
++
+* `ignore`: removes documents with a null value in the collapse field. This is the default.
+* `expand`: treats each document with a null value in the collapse field as a separate group.
+* `collapse`: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
++
+The default is `ignore`.
+
+hint::
+Currently there is only one hint available: `top_fc`, which stands for top level FieldCache.
++
+The `top_fc` hint is only available when collapsing on String fields. `top_fc` usually provides the best query time speed but takes the longest to warm on startup or following a commit. `top_fc` will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting. For very high cardinality (high distinct count) fields, `top_fc` may not fare so well.
++
+The default is none.
+
+size::
+Sets the initial size of the collapse data structures when collapsing on a *numeric field only*.
++
+The data structures used for collapsing grow dynamically when collapsing on numeric fields. Setting the size above the number of results expected in the result set will eliminate the resizing cost.
++
+The default is 100,000.
+
+
+=== Sample Syntax
 
 Collapse on `group_field` selecting the document in each group with the highest scoring document:
 
@@ -137,13 +148,14 @@ Inside the expanded section there is a _map_ with each group head pointing to th
 
 The ExpandComponent has the following parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+expand.sort::
+Orders the documents within the expanded groups. The default is `score desc`.
+
+expand.rows::
+The number of rows to display in each group. The default is 5 rows.
+
+expand.q::
+Overrides the main query (`q`) to determine which documents to include in the main group. The default is to use the main query.
 
-[cols="20,60,20",options="header"]
-|===
-|Parameter |Description |Default
-|expand.sort |Orders the documents within the expanded groups |score desc
-|expand.rows |The number of rows to display in each group |5
-|expand.q |Overrides the main q parameter, determines which documents to include in the main group. |main q
-|expand.fq |Overrides main fq's, determines which documents to include in the main group. |main fq's
-|===
+expand.fq::
+Overrides the main filter queries (`fq`) to determine which documents to include in the main group. The default is to use the main filter queries.
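
For illustration, a request that collapses on a field and then expands the groups with these parameters might look like the following sketch (the collection name `techproducts` and the fields `group_s` and `price_f` are placeholders):

[source,text]
----
http://localhost:8983/solr/techproducts/query?q=memory&fq={!collapse field=group_s}&expand=true&expand.rows=3&expand.sort=price_f asc
----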

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/configsets-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/configsets-api.adoc b/solr/solr-ref-guide/src/configsets-api.adoc
index 2bd50ff..603e08e 100644
--- a/solr/solr-ref-guide/src/configsets-api.adoc
+++ b/solr/solr-ref-guide/src/configsets-api.adoc
@@ -46,15 +46,13 @@ Create a ConfigSet, based on an existing ConfigSet.
 [[ConfigSetsAPI-Input]]
 === Input
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+The following parameters are supported when creating a ConfigSet.
 
-[cols="25,10,10,10,45",options="header"]
-|===
-|Key |Type |Required |Default |Description
-|name |String |Yes | |ConfigSet to be created
-|baseConfigSet |String |Yes | |ConfigSet to copy as a base
-|configSetProp._name=value_ |String |No | |ConfigSet property from base to override
-|===
+name:: The ConfigSet to be created. This parameter is required.
+
+baseConfigSet:: The ConfigSet to copy as a base. This parameter is required.
+
+configSetProp._name_=_value_:: Any ConfigSet property from base to override.
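
For illustration, a create request using these parameters might look like the following sketch (the ConfigSet names are placeholders):

[source,text]
----
http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet&baseConfigSet=predefinedTemplate&configSetProp.immutable=false
----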
 
 [[ConfigSetsAPI-Output]]
 === Output
@@ -101,13 +99,7 @@ Delete a ConfigSet
 
 *Query Parameters*
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,15,10,15,40",options="header"]
-|===
-|Key |Type |Required |Default |Description
-|name |String |Yes | |ConfigSet to be deleted
-|===
+name:: The ConfigSet to be deleted. This parameter is required.
 
 [[ConfigSetsAPI-Output.1]]
 === Output
@@ -184,13 +176,7 @@ Upload a ConfigSet, sent in as a zipped file. Please note that a ConfigSet is up
 [[ConfigSetsAPI-Input.3]]
 === Input
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,15,10,15,40",options="header"]
-|===
-|Key |Type |Required |Default |Description
-|name |String |Yes | |ConfigSet to be created
-|===
+name:: The ConfigSet to be created when the upload is complete. This parameter is required.
 
 The body of the request should contain a zipped config set.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
index 3bd482b..6772955 100644
--- a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
+++ b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
@@ -252,15 +252,15 @@ The configuration details, defaults and options are as follows:
 
 CDCR can be configured to forward update requests to one or more replicas. A replica is defined with a “replica” list as follows:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
 
-[cols="20,10,15,55",options="header"]
-|===
-|Parameter |Required |Default |Description
-|zkHost |Yes |none |The host address for ZooKeeper of the target SolrCloud. Usually this is a comma-separated list of addresses to each node in the target ZooKeeper ensemble.
-|Source |Yes |none |The name of the collection on the Source SolrCloud to be replicated.
-|Target |Yes |none |The name of the collection on the target SolrCloud to which updates will be forwarded.
-|===
+`zkHost`::
+The host address for ZooKeeper of the target SolrCloud. Usually this is a comma-separated list of addresses to each node in the target ZooKeeper ensemble. This parameter is required.
+
+`Source`::
+The name of the collection on the Source SolrCloud to be replicated. This parameter is required.
+
+`Target`::
+The name of the collection on the target SolrCloud to which updates will be forwarded. This parameter is required.
 
 ==== The Replicator Element
 
@@ -268,39 +268,28 @@ The CDC Replicator is the component in charge of forwarding updates to the repli
 
 The replicator uses a fixed thread pool to forward updates to multiple replicas in parallel. If more than one replica is configured, one thread will forward a batch of updates from one replica at a time in a round-robin fashion. The replicator can be configured with a “replicator” list as follows:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`threadPoolSize`::
+The number of threads to use for forwarding updates. One thread per replica is recommended. The default is `2`.
 
-[cols="20,10,15,55",options="header"]
-|===
-|Parameter |Required |Default |Description
-|threadPoolSize |No |2 |The number of threads to use for forwarding updates. One thread per replica is recommended.
-|schedule |No |10 |The delay in milliseconds for the monitoring the update log(s).
-|batchSize |No |128 |The number of updates to send in one batch. The optimal size depends on the size of the documents. Large batches of large documents can increase your memory usage significantly.
-|===
+`schedule`::
+The delay in milliseconds for monitoring the update log(s). The default is `10`.
+
+`batchSize`::
+The number of updates to send in one batch. The optimal size depends on the size of the documents. Large batches of large documents can increase your memory usage significantly. The default is `128`.
 
 ==== The updateLogSynchronizer Element
 
 Expert: Non-leader nodes need to synchronize their update logs with their leader node from time to time in order to clean deprecated transaction log files. By default, such a synchronization process is performed every minute. The schedule of the synchronization can be modified with a “updateLogSynchronizer” list as follows:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,10,15,55",options="header"]
-|===
-|Parameter |Required |Default |Description
-|schedule |No |60000 |The delay in milliseconds for synchronizing the updates log.
-|===
+`schedule`::
+The delay in milliseconds for synchronizing the update log. The default is `60000`.
 
 ==== The Buffer Element
 
 CDCR is configured by default to buffer any new incoming updates. When buffering updates, the updates log will store all the updates indefinitely. Replicas do not need to buffer updates, and it is recommended to disable buffer on the target SolrCloud. The buffer can be disabled at startup with a “buffer” list and the parameter “defaultState” as follows:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,10,15,55",options="header"]
-|===
-|Parameter |Required |Default |Description
-|defaultState |No |enabled |The state of the buffer at startup.
-|===
+`defaultState`::
+The state of the buffer at startup. The default is `enabled`.
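
Putting these elements together, a hedged sketch of a `/cdcr` request handler configuration might look like the following (the ZooKeeper address and collection names are placeholders, and the numeric values are examples only; the buffer element can be added in the same way):

[source,xml]
----
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <!-- where to forward updates: target ZooKeeper ensemble and collections -->
  <lst name="replica">
    <str name="zkHost">10.240.18.211:2181</str>
    <str name="source">collection1</str>
    <str name="target">collection1</str>
  </lst>
  <!-- replicator thread pool and batching -->
  <lst name="replicator">
    <str name="threadPoolSize">8</str>
    <str name="schedule">1000</str>
    <str name="batchSize">128</str>
  </lst>
  <!-- how often non-leader nodes synchronize their update logs -->
  <lst name="updateLogSynchronizer">
    <str name="schedule">60000</str>
  </lst>
</requestHandler>
----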
 
 == CDCR API
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/de-duplication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/de-duplication.adoc b/solr/solr-ref-guide/src/de-duplication.adoc
index 8f4d01a..3e9cd46 100644
--- a/solr/solr-ref-guide/src/de-duplication.adoc
+++ b/solr/solr-ref-guide/src/de-duplication.adoc
@@ -20,17 +20,12 @@
 
 If duplicate, or near-duplicate documents are a concern in your index, de-duplication may be worth implementing.
 
-Preventing duplicate or near duplicate documents from entering an index or tagging documents with a signature/fingerprint for duplicate field collapsing can be efficiently achieved with a low collision or fuzzy hash algorithm. Solr natively supports de-duplication techniques of this type via the `Signature` class and allows for the easy addition of new hash/signature implementations. A Signature can be implemented several ways:
+Preventing duplicate or near duplicate documents from entering an index or tagging documents with a signature/fingerprint for duplicate field collapsing can be efficiently achieved with a low collision or fuzzy hash algorithm. Solr natively supports de-duplication techniques of this type via the `Signature` class and allows for the easy addition of new hash/signature implementations. A Signature can be implemented in a few ways:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+* MD5Signature: 128-bit hash used for exact duplicate detection.
+* Lookup3Signature: 64-bit hash used for exact duplicate detection. This is much faster than MD5 and smaller to index.
+* http://wiki.apache.org/solr/TextProfileSignature[TextProfileSignature]: Fuzzy hashing implementation from Apache Nutch for near duplicate detection. It's tunable but works best on longer text.
 
-[cols="30,70",options="header"]
-|===
-|Method |Description
-|MD5Signature |128-bit hash used for exact duplicate detection.
-|Lookup3Signature |64-bit hash used for exact duplicate detection. This is much faster than MD5 and smaller to index.
-|http://wiki.apache.org/solr/TextProfileSignature[TextProfileSignature] |Fuzzy hashing implementation from Apache Nutch for near duplicate detection. It's tunable but works best on longer text.
-|===
 
 Other, more sophisticated algorithms for fuzzy/near hashing can be added later.
 
@@ -68,23 +63,27 @@ The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` a
 
 The `SignatureUpdateProcessorFactory` takes several properties:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,30,50",options="header"]
-|===
-|Parameter |Default |Description
-|signatureClass |`org.apache.solr.update.processor.Lookup3Signature` a|
-A Signature implementation for generating a signature hash. The full classpath of the implementation must be specified. The available options are described above, the associated classpaths to use are:
+signatureClass::
+A Signature implementation for generating a signature hash. The default is `org.apache.solr.update.processor.Lookup3Signature`.
++
+The full classpath of the implementation must be specified. The available options are described above; the associated classpaths to use are:
 
 * `org.apache.solr.update.processor.Lookup3Signature`
 * `org.apache.solr.update.processor.MD5Signature`
 * `org.apache.solr.update.processor.TextProfileSignature`
 
-|fields |all fields |The fields to use to generate the signature hash in a comma separated list. By default, all fields on the document will be used.
-|signatureField |signatureField |The name of the field used to hold the fingerprint/signature. The field should be defined in schema.xml.
-|enabled |true |Enable/disable de-duplication processing.
-|overwriteDupes |true |If true, when a document exists that already matches this signature, it will be overwritten.
-|===
+fields::
+The fields to use to generate the signature hash in a comma separated list. By default, all fields on the document will be used.
+
+signatureField::
+The name of the field used to hold the fingerprint/signature. The field should be defined in `schema.xml`. The default is `signatureField`.
+
+enabled::
+Set to *false* to disable de-duplication processing. The default is *true*.
+
+overwriteDupes::
+If `true` (the default), when a document exists that already matches this signature, it will be overwritten.
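
For illustration, an update chain that sets these properties might look like the following sketch (the field names are placeholders; the chain still ends with the run-update processor so documents are actually indexed):

[source,xml]
----
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">id</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features,cat</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
----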
+
 
 [[De-Duplication-Inschema.xml]]
 === In schema.xml

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/defining-core-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-core-properties.adoc b/solr/solr-ref-guide/src/defining-core-properties.adoc
index 93a3d3e..a533098 100644
--- a/solr/solr-ref-guide/src/defining-core-properties.adoc
+++ b/solr/solr-ref-guide/src/defining-core-properties.adoc
@@ -70,26 +70,32 @@ The minimal `core.properties` file is an empty file, in which case all of the pr
 
 Java properties files allow the hash (`#`) or bang (`!`) characters to specify comment-to-end-of-line.
 
-This table defines the recognized properties:
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="25,75",options="header"]
-|===
-|Property |Description
-|`name` |The name of the SolrCore. You'll use this name to reference the SolrCore when running commands with the CoreAdminHandler.
-|`config` |The configuration file name for a given core. The default is `solrconfig.xml`.
-|`schema` |The schema file name for a given core. The default is `schema.xml` but please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for details.
-|`dataDir` |The core's data directory (where indexes are stored) as either an absolute pathname, or a path relative to the value of `instanceDir`. This is `data` by default.
-|`configSet` |The name of a defined configset, if desired, to use to configure the core (see the <<config-sets.adoc#config-sets,Config Sets>> for more details).
-|`properties` |The name of the properties file for this core. The value can be an absolute pathname or a path relative to the value of `instanceDir`.
-|`transient` |If *true*, the core can be unloaded if Solr reaches the `transientCacheSize`. The default if not specified is *false*. Cores are unloaded in order of least recently used first. _Setting to *true* is not recommended in SolrCloud mode._
-|`loadOnStartup` |If *true*, the default if it is not specified, the core will loaded when Solr starts. _Setting to *false* is not recommended in SolrCloud mode._
-|`coreNodeName` |Used only in SolrCloud, this is a unique identifier for the node hosting this replica. By default a coreNodeName is generated automatically, but setting this attribute explicitly allows you to manually assign a new core to replace an existing replica. For example: when replacing a machine that has had a hardware failure by restoring from backups on a new machine with a new hostname or port..
-|`ulogDir` |The absolute or relative directory for the update log for this core (SolrCloud).
-|`shard` |The shard to assign this core to (SolrCloud).
-|`collection` |The name of the collection this core is part of (SolrCloud).
-|`roles` |Future param for SolrCloud or a way for users to mark nodes for their own use.
-|===
-
-Additional "user defined" properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles,Substituting Properties in Solr Config Files>>.
+The following properties are available:
+
+`name`:: The name of the SolrCore. You'll use this name to reference the SolrCore when running commands with the CoreAdminHandler.
+
+`config`:: The configuration file name for a given core. The default is `solrconfig.xml`.
+
+`schema`:: The schema file name for a given core. The default is `schema.xml` but please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for more details.
+
+`dataDir`:: The core's data directory (where indexes are stored) as either an absolute pathname, or a path relative to the value of `instanceDir`. This is `data` by default.
+
+`configSet`:: The name of a defined configset, if desired, to use to configure the core (see the section <<config-sets.adoc#config-sets,Config Sets>> for more details).
+
+`properties`:: The name of the properties file for this core. The value can be an absolute pathname or a path relative to the value of `instanceDir`.
+
+`transient`:: If *true*, the core can be unloaded if Solr reaches the `transientCacheSize`. The default if not specified is *false*. Cores are unloaded in order of least recently used first. _Setting this to *true* is not recommended in SolrCloud mode._
+
+`loadOnStartup`:: If *true* (the default if not specified), the core will be loaded when Solr starts. _Setting this to *false* is not recommended in SolrCloud mode._
+
+`coreNodeName`:: Used only in SolrCloud, this is a unique identifier for the node hosting this replica. By default a `coreNodeName` is generated automatically, but setting this attribute explicitly allows you to manually assign a new core to replace an existing replica. For example, this can be useful when replacing a machine that has had a hardware failure by restoring from backups on a new machine with a new hostname or port.
+
+`ulogDir`:: The absolute or relative directory for the update log for this core (SolrCloud).
+
+`shard`:: The shard to assign this core to (SolrCloud).
+
+`collection`:: The name of the collection this core is part of (SolrCloud).
+
+`roles`:: Future parameter for SolrCloud or a way for users to mark nodes for their own use.
+
+Additional user-defined properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles,Substituting Properties in Solr Config Files>>.
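
For illustration, a small `core.properties` file that sets a few of these properties might look like the following sketch (the core and configset names are placeholders):

[source,text]
----
# core.properties for a core named "my_core"
name=my_core
configSet=sample_techproducts_configs
dataDir=data
loadOnStartup=true
----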

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/defining-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-fields.adoc b/solr/solr-ref-guide/src/defining-fields.adoc
index 4ef3a5d..8e6de9c 100644
--- a/solr/solr-ref-guide/src/defining-fields.adoc
+++ b/solr/solr-ref-guide/src/defining-fields.adoc
@@ -33,15 +33,16 @@ The following example defines a field named `price` with a type named `float` an
 [[DefiningFields-FieldProperties]]
 == Field Properties
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+Field definitions can have the following properties:
 
-[cols="30,70",options="header"]
-|===
-|Property |Description
-|name |The name of the field. Field names should consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced, but other field names will not have first class support from all components and back compatibility is not guaranteed. Names with both leading and trailing underscores (e.g., `\_version_`) are reserved. Every field must have a `name`.
-|type |The name of the `fieldType` for this field. This will be found in the `name` attribute on the `fieldType` definition. Every field must have a `type`.
-|default |A default value that will be added automatically to any document that does not have a value in this field when it is indexed. If this property is not specified, there is no default.
-|===
+`name`::
+The name of the field. Field names should consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced, but other field names will not have first class support from all components and back compatibility is not guaranteed. Names with both leading and trailing underscores (e.g., `\_version_`) are reserved. Every field must have a `name`.
+
+`type`::
+The name of the `fieldType` for this field. This will be found in the `name` attribute on the `fieldType` definition. Every field must have a `type`.
+
+`default`::
+A default value that will be added automatically to any document that does not have a value in this field when it is indexed. If this property is not specified, there is no default.
 
 [[DefiningFields-OptionalFieldTypeOverrideProperties]]
 == Optional Field Type Override Properties
@@ -70,4 +71,3 @@ Fields can have many of the same properties as field types. Properties from the
 |===
 
 // TODO: SOLR-10655 END
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
index b73fdf7..4003f1a 100644
--- a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
+++ b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
@@ -71,28 +71,75 @@ Here is an example of a minimal LangDetect `langid` configuration in `solrconfig
 
 As previously mentioned, both implementations of the `langid` UpdateRequestProcessor take the same parameters.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,10,10,10,40",options="header"]
-|===
-|Parameter |Type |Default |Required |Description
-|langid |Boolean |true |no |Enables and disables language detection.
-|langid.fl |string |none |yes |A comma- or space-delimited list of fields to be processed by `langid`.
-|langid.langField |string |none |yes |Specifies the field for the returned language code.
-|langid.langsField |multivalued string |none |no |Specifies the field for a list of returned language codes. If you use `langid.map.individual`, each detected language will be added to this field.
-|langid.overwrite |Boolean |false |no |Specifies whether the content of the `langField` and `langsField` fields will be overwritten if they already contain values.
-|langid.lcmap |string |none |false |A space-separated list specifying colon delimited language code mappings to apply to the detected languages. For example, you might use this to map Chinese, Japanese, and Korean to a common `cjk` code, and map both American and British English to a single `en` code by using `langid.lcmap=ja:cjk zh:cjk ko:cjk en_GB:en en_US:en`. This affects both the values put into the `langField` and `langsField` fields, as well as the field suffixes when using `langid.map`, unless overridden by `langid.map.lcmap`
-|langid.threshold |float |0.5 |no |Specifies a threshold value between 0 and 1 that the language identification score must reach before `langid` accepts it. With longer text fields, a high threshold such at 0.8 will give good results. For shorter text fields, you may need to lower the threshold for language identification, though you will be risking somewhat lower quality results. We recommend experimenting with your data to tune your results.
-|langid.whitelist |string |none |no |Specifies a list of allowed language identification codes. Use this in combination with `langid.map` to ensure that you only index documents into fields that are in your schema.
-|langid.map |Boolean |false |no |Enables field name mapping. If true, Solr will map field names for all fields listed in `langid.fl`.
-|langid.map.fl |string |none |no |A comma-separated list of fields for `langid.map` that is different than the fields specified in `langid.fl`.
-|langid.map.keepOrig |Boolean |false |no |If true, Solr will copy the field during the field name mapping process, leaving the original field in place.
-|langid.map.individual |Boolean |false |no |If true, Solr will detect and map languages for each field individually.
-|langid.map.individual.fl |string |none |no |A comma-separated list of fields for use with `langid.map.individual` that is different than the fields specified in `langid.fl`.
-|langid.fallbackFields |string |none |no |If no language is detected that meets the `langid.threshold` score, or if the detected language is not on the `langid.whitelist`, this field specifies language codes to be used as fallback values. If no appropriate fallback languages are found, Solr will use the language code specified in `langid.fallback`.
-|langid.fallback |string |none |no |Specifies a language code to use if no language is detected or specified in `langid.fallbackFields`.
-|langid.map.lcmap |string |determined by `langid.lcmap` |no |A space-separated list specifying colon delimited language code mappings to use when mapping field names. For example, you might use this to make Chinese, Japanese, and Korean language fields use a common `*_cjk` suffix, and map both American and British English fields to a single `*_en` by using `langid.map.lcmap=ja:cjk zh:cjk ko:cjk en_GB:en en_US:en`.
-|langid.map.pattern |Java regular expression |none |no |By default, fields are mapped as <field>_<language>. To change this pattern, you can specify a Java regular expression in this parameter.
-|langid.map.replace |Java replace |none |no |By default, fields are mapped as <field>_<language>. To change this pattern, you can specify a Java replace in this parameter.
-|langid.enforceSchema |Boolean |true |no |If false, the `langid` processor does not validate field names against your schema. This may be useful if you plan to rename or delete fields later in the UpdateChain.
-|===
+`langid`::
+When `true` (the default), enables language detection.
+
+`langid.fl`::
+A comma- or space-delimited list of fields to be processed by `langid`. This parameter is required.
+
+`langid.langField`::
+Specifies the field for the returned language code. This parameter is required.
+
+`langid.langsField`::
+Specifies the field for a list of returned language codes. If you use `langid.map.individual`, each detected language will be added to this field.
+
+`langid.overwrite`::
+Specifies whether the content of the `langField` and `langsField` fields will be overwritten if they already contain values. The default is `false`.
+
+`langid.lcmap`::
+A space-separated list specifying colon-delimited language code mappings to apply to the detected languages.
++
+For example, you might use this to map Chinese, Japanese, and Korean to a common `cjk` code, and map both American and British English to a single `en` code by using `langid.lcmap=ja:cjk zh:cjk ko:cjk en_GB:en en_US:en`.
++
+This affects both the values put into the `langField` and `langsField` fields, as well as the field suffixes when using `langid.map`, unless overridden by `langid.map.lcmap`.
+
+`langid.threshold`::
+Specifies a threshold value between 0 and 1 that the language identification score must reach before `langid` accepts it.
++
+With longer text fields, a high threshold such as `0.8` will give good results. For shorter text fields, you may need to lower the threshold for language identification, though you will be risking somewhat lower quality results. We recommend experimenting with your data to tune your results.
++
+The default is `0.5`.
+
+`langid.whitelist`::
+Specifies a list of allowed language identification codes. Use this in combination with `langid.map` to ensure that you only index documents into fields that are in your schema.
+
+`langid.map`::
+Enables field name mapping. If `true`, Solr will map field names for all fields listed in `langid.fl`. The default is `false`.
+
+`langid.map.fl`::
+A comma-separated list of fields for `langid.map` that is different than the fields specified in `langid.fl`.
+
+`langid.map.keepOrig`::
+If `true`, Solr will copy the field during the field name mapping process, leaving the original field in place. The default is `false`.
+
+`langid.map.individual`::
+If `true`, Solr will detect and map languages for each field individually. The default is `false`.
+
+`langid.map.individual.fl`::
+A comma-separated list of fields for use with `langid.map.individual` that is different than the fields specified in `langid.fl`.
+
+`langid.fallback`::
+Specifies a language code to use if no language is detected or specified in `langid.fallbackFields`.
+
+`langid.fallbackFields`::
+If no language is detected that meets the `langid.threshold` score, or if the detected language is not on the `langid.whitelist`, this field specifies language codes to be used as fallback values.
++
+If no appropriate fallback languages are found, Solr will use the language code specified in `langid.fallback`.
+
+`langid.map.lcmap`::
+A space-separated list specifying colon-delimited language code mappings to use when mapping field names.
++
+For example, you might use this to make Chinese, Japanese, and Korean language fields use a common `*_cjk` suffix, and map both American and British English fields to a single `*_en` by using `langid.map.lcmap=ja:cjk zh:cjk ko:cjk en_GB:en en_US:en`.
++
+A list defined with this parameter will override any configuration set with `langid.lcmap`.
+
+`langid.map.pattern`::
+By default, fields are mapped as `<field>_<language>`. To change this pattern, you can specify a Java regular expression in this parameter.
+
+`langid.map.replace`::
+By default, fields are mapped as `<field>_<language>`. To change this pattern, you can specify a Java replace in this parameter.
+
+`langid.enforceSchema`::
+If `false`, the `langid` processor does not validate field names against your schema. This may be useful if you plan to rename or delete fields later in the UpdateChain.
++
+The default is `true`.
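
For illustration, building on the minimal configuration shown earlier, a `langid` processor that sets a few of these parameters might look like the following sketch (the field names and language codes are placeholders):

[source,xml]
----
<processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
  <str name="langid.fl">title,subject,text,keywords</str>
  <str name="langid.langField">language_s</str>
  <str name="langid.whitelist">en,fr,de</str>
  <bool name="langid.map">true</bool>
  <str name="langid.fallback">en</str>
  <float name="langid.threshold">0.6</float>
</processor>
----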

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/distributed-requests.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-requests.adoc b/solr/solr-ref-guide/src/distributed-requests.adoc
index 75f023c..b89878f 100644
--- a/solr/solr-ref-guide/src/distributed-requests.adoc
+++ b/solr/solr-ref-guide/src/distributed-requests.adoc
@@ -91,21 +91,32 @@ To configure the standard handler, provide a configuration like this in `solrcon
 
 The parameters that can be specified are as follows:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,15,65",options="header"]
-|===
-|Parameter |Default |Explanation
-|`socketTimeout` |0 (use OS default) |The amount of time in ms that a socket is allowed to wait.
-|`connTimeout` |0 (use OS default) |The amount of time in ms that is accepted for binding / connecting a socket
-|`maxConnectionsPerHost` |20 |The maximum number of concurrent connections that is made to each individual shard in a distributed search.
-|`maxConnections` |`10000` |The total maximum number of concurrent connections in distributed searches.
-|`corePoolSize` |0 |The retained lowest limit on the number of threads used in coordinating distributed search.
-|`maximumPoolSize` |Integer.MAX_VALUE |The maximum number of threads used for coordinating distributed search.
-|`maxThreadIdleTime` |5 seconds |The amount of time to wait for before threads are scaled back in response to a reduction in load.
-|`sizeOfQueue` |-1 |If specified, the thread pool will use a backing queue instead of a direct handoff buffer. High throughput systems will want to configure this to be a direct hand off (with -1). Systems that desire better latency will want to configure a reasonable size of queue to handle variations in requests.
-|`fairnessPolicy` |false |Chooses the JVM specifics dealing with fair policy queuing, if enabled distributed searches will be handled in a First in First out fashion at a cost to throughput. If disabled throughput will be favored over latency.
-|===
+`socketTimeout`::
+The amount of time in ms that a socket is allowed to wait. The default is `0`, where the operating system's default will be used.
+
+`connTimeout`::
+The amount of time in ms that is accepted for binding or connecting a socket. The default is `0`, where the operating system's default will be used.
+
+`maxConnectionsPerHost`::
+The maximum number of concurrent connections that are made to each individual shard in a distributed search. The default is `20`.
+
+`maxConnections`::
+The total maximum number of concurrent connections in distributed searches. The default is `10000`.
+
+`corePoolSize`::
+The retained lowest limit on the number of threads used in coordinating distributed search. The default is `0`.
+
+`maximumPoolSize`::
+The maximum number of threads used for coordinating distributed search. The default is `Integer.MAX_VALUE`.
+
+`maxThreadIdleTime`::
+The amount of time in seconds to wait before threads are scaled back in response to a reduction in load. The default is `5`.
+
+`sizeOfQueue`::
+If specified, the thread pool will use a backing queue instead of a direct handoff buffer. High throughput systems will want to configure this to be a direct hand off (with `-1`). Systems that desire better latency will want to configure a reasonable size of queue to handle variations in requests. The default is `-1`.
+
+`fairnessPolicy`::
+Chooses the JVM specifics dealing with fair policy queuing. If enabled, distributed searches will be handled in a first-in-first-out fashion at a cost to throughput. If disabled, throughput will be favored over latency. The default is `false`.
 
 [[DistributedRequests-ConfiguringstatsCache_DistributedIDF_]]
 == Configuring statsCache (Distributed IDF)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 12d2913..89b8e90 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -75,19 +75,33 @@ The properties that can be specified for a given field type fall into three majo
 
 === General Properties
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+These are the general properties for field types:
+
+`name`::
+The name of the fieldType. This value gets used in field definitions, in the "type" attribute. It is strongly recommended that names consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced.
+
+`class`::
+The class name that gets used to store and index the data for this type. Note that you may prefix included class names with "solr." and Solr will automatically figure out which packages to search for the class - so `solr.TextField` will work.
++
+If you are using a third-party class, you will probably need to have a fully qualified class name. The fully qualified equivalent for `solr.TextField` is `org.apache.solr.schema.TextField`.
+
+`positionIncrementGap`::
+For multivalued fields, specifies a distance between multiple values, which prevents spurious phrase matches.
+
+`autoGeneratePhraseQueries`:: For text fields. If `true`, Solr automatically generates phrase queries for adjacent terms. If `false`, terms must be enclosed in double-quotes to be treated as phrases.
+
+`enableGraphQueries`::
+For text fields, applicable when querying with <<the-standard-query-parser.adoc#TheStandardQueryParser-StandardQueryParserParameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g., <<filter-descriptions.adoc#FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter>> and <<filter-descriptions.adoc#FilterDescriptions-WordDelimiterGraphFilter,Word Delimiter Graph Filter>>.
++
+Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#FilterDescriptions-ShingleFilter,Shingle Filter>>.
+
+[[FieldTypeDefinitionsandProperties-docValuesFormat]]
+`docValuesFormat`::
+Defines a custom `DocValuesFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has been configured in `solrconfig.xml`.
+
+`postingsFormat`::
+Defines a custom `PostingsFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has been configured in `solrconfig.xml`.
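
For illustration, a field type definition that uses a couple of these general properties might look like the following sketch (the analyzer chain shown is only an example):

[source,xml]
----
<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
----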
 
-[cols="30,40,30",options="header"]
-|===
-|Property |Description |Values
-|name |The name of the fieldType. This value gets used in field definitions, in the "type" attribute. It is strongly recommended that names consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced. |
-|class |The class name that gets used to store and index the data for this type. Note that you may prefix included class names with "solr." and Solr will automatically figure out which packages to search for the class - so `solr.TextField` will work. If you are using a third-party class, you will probably need to have a fully qualified class name. The fully qualified equivalent for `solr.TextField` is `org.apache.solr.schema.TextField`. |
-|positionIncrementGap |For multivalued fields, specifies a distance between multiple values, which prevents spurious phrase matches |integer
-|autoGeneratePhraseQueries |For text fields. If true, Solr automatically generates phrase queries for adjacent terms. If false, terms must be enclosed in double-quotes to be treated as phrases. |true or false
-|enableGraphQueries |For text fields, applicable when querying with <<the-standard-query-parser.adoc#TheStandardQueryParser-StandardQueryParserParameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g. <<filter-descriptions.adoc#FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter>> and <<filter-descriptions.adoc#FilterDescriptions-WordDelimiterGraphFilter,Word Delimiter Graph Filter>>. Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#FilterDescriptions-ShingleFilter,Shingle Filter>>. |true or false
-|[[FieldTypeDefinitionsandProperties-docValuesFormat]]docValuesFormat |Defines a custom `DocValuesFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory` has been configured in solrconfig.xml. |n/a
-|postingsFormat |Defines a custom `PostingsFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory` has been configured in solrconfig.xml. |n/a
-|===
 
 [NOTE]
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
index 2ac541a..1c17fbc 100644
--- a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
@@ -41,21 +41,35 @@ For most SolrCloud or standalone Solr setups, the `HadoopAuthPlugin` should suff
 [[HadoopAuthenticationPlugin-PluginConfiguration]]
 == Plugin Configuration
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,15,65",options="header"]
-|===
-|Parameter Name |Required |Description
-|class |Yes |Should be either `solr.HadoopAuthPlugin` or `solr.ConfigurableInternodeAuthHadoopPlugin`.
-|type |Yes |The type of authentication scheme to be configured. See https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[configuration] options.
-|sysPropPrefix |Yes |The prefix to be used to define the Java system property for configuring the authentication mechanism. The name of the Java system property is defined by appending the configuration parameter name to this prefix value. For example, if the prefix is 'solr' then the Java system property 'solr.kerberos.principal' defines the value of configuration parameter 'kerberos.principal'.
-|authConfigs |Yes |Configuration parameters required by the authentication scheme defined by the type property. For more details, see https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[Hadoop configuration] options.
-|defaultConfigs |No |Default values for the configuration parameters specified by the `authConfigs` property. The default values are specified as a collection of key-value pairs (i.e., `property-name:default_value`).
-|enableDelegationToken |No |Enable (or disable) the delegation tokens functionality.
-|initKerberosZk |No |For enabling initialization of kerberos before connecting to ZooKeeper (if applicable).
-|proxyUserConfigs |No |Configures proxy users for the underlying Hadoop authentication mechanism. This configuration is expressed as a collection of key-value pairs (i.e., `property-name:value`).
-|clientBuilderFactory |No |The `HttpClientBuilderFactory` implementation used for the Solr internal communication. Only applicable for `ConfigurableInternodeAuthHadoopPlugin`.
-|===
+`class`::
+Should be either `solr.HadoopAuthPlugin` or `solr.ConfigurableInternodeAuthHadoopPlugin`. This parameter is required.
+
+`type`::
+The type of authentication scheme to be configured. See https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[configuration] options. This parameter is required.
+
+`sysPropPrefix`::
+The prefix to be used to define the Java system property for configuring the authentication mechanism. This property is required.
++
+The name of the Java system property is defined by appending the configuration parameter name to this prefix value. For example, if the prefix is `solr` then the Java system property `solr.kerberos.principal` defines the value of configuration parameter `kerberos.principal`.
+
+`authConfigs`::
+Configuration parameters required by the authentication scheme defined by the type property. This property is required. For more details, see https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[Hadoop configuration] options.
+
+`defaultConfigs`::
+Default values for the configuration parameters specified by the `authConfigs` property. The default values are specified as a collection of key-value pairs (i.e., `"property-name": "default_value"`).
+
+`enableDelegationToken`::
+If `true`, the delegation tokens functionality will be enabled.
+
+`initKerberosZk`::
+For enabling initialization of Kerberos before connecting to ZooKeeper (if applicable).
+
+`proxyUserConfigs`::
+Configures proxy users for the underlying Hadoop authentication mechanism. This configuration is expressed as a collection of key-value pairs (i.e., `"property-name": "value"`).
+
+`clientBuilderFactory`::
+The `HttpClientBuilderFactory` implementation used for the Solr internal communication. Only applicable for `ConfigurableInternodeAuthHadoopPlugin`.
+
 
 [[HadoopAuthenticationPlugin-ExampleConfigurations]]
 == Example Configurations

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/index-replication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/index-replication.adoc b/solr/solr-ref-guide/src/index-replication.adoc
index df8e9c6..774b78c 100644
--- a/solr/solr-ref-guide/src/index-replication.adoc
+++ b/solr/solr-ref-guide/src/index-replication.adoc
@@ -51,21 +51,33 @@ When using SolrCloud, the `ReplicationHandler` must be available via the `/repli
 
 The table below defines the key terms associated with Solr replication.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Term |Definition
-|Index |A Lucene index is a directory of files. These files make up the searchable and returnable data of a Solr Core.
-|Distribution |The copying of an index from the master server to all slaves. The distribution process takes advantage of Lucene's index file structure.
-|Inserts and Deletes |As inserts and deletes occur in the index, the directory remains unchanged. Documents are always inserted into newly created files. Documents that are deleted are not removed from the files. They are flagged in the file, deletable, and are not removed from the files until the index is optimized.
-|Master and Slave |A Solr replication master is a single node which receives all updates initially and keeps everything organized. Solr replication slave nodes receive no updates directly, instead all changes (such as inserts, updates, deletes, etc.) are made against the single master node. Changes made on the master are distributed to all the slave nodes which service all query requests from the clients.
-|Update |An update is a single change request against a single Solr instance. It may be a request to delete a document, add a new document, change a document, delete all documents matching a query, etc. Updates are handled synchronously within an individual Solr instance.
-|Optimization |A process that compacts the index and merges segments in order to improve query performance. Optimization should only be run on the master nodes. An optimized index may give query performance gains compared to an index that has become fragmented over a period of time with many updates. Distributing an optimized index requires a much longer time than the distribution of new segments to an un-optimized index.
-|Segments |A self contained subset of an index consisting of some documents and data structures related to the inverted index of terms in those documents.
-|mergeFactor |A parameter that controls the number of segments in an index. For example, when mergeFactor is set to 3, Solr will fill one segment with documents until the limit maxBufferedDocs is met, then it will start a new segment. When the number of segments specified by mergeFactor is reached (in this example, 3) then Solr will merge all the segments into a single index file, then begin writing new documents to a new segment.
-|Snapshot |A directory containing hard links to the data files of an index. Snapshots are distributed from the master nodes when the slaves pull them, "smart copying" any segments the slave node does not have in snapshot directory that contains the hard links to the most recent index data files.
-|===
+Index::
+A Lucene index is a directory of files. These files make up the searchable and returnable data of a Solr Core.
+
+Distribution::
+The copying of an index from the master server to all slaves. The distribution process takes advantage of Lucene's index file structure.
+
+Inserts and Deletes::
+As inserts and deletes occur in the index, the directory remains unchanged. Documents are always inserted into newly created files. Documents that are deleted are not removed from the files; instead, they are flagged in the file as deletable, and are not physically removed until the index is optimized.
+
+Master and Slave::
+A Solr replication master is a single node which receives all updates initially and keeps everything organized. Solr replication slave nodes receive no updates directly; instead, all changes (such as inserts, updates, deletes, etc.) are made against the single master node. Changes made on the master are distributed to all the slave nodes, which service all query requests from the clients.
+
+Update::
+An update is a single change request against a single Solr instance. It may be a request to delete a document, add a new document, change a document, delete all documents matching a query, etc. Updates are handled synchronously within an individual Solr instance.
+
+Optimization::
+A process that compacts the index and merges segments in order to improve query performance. Optimization should only be run on the master nodes. An optimized index may give query performance gains compared to an index that has become fragmented over a period of time with many updates. Distributing an optimized index requires a much longer time than the distribution of new segments to an un-optimized index.
+
+Segments::
+A self contained subset of an index consisting of some documents and data structures related to the inverted index of terms in those documents.
+
+mergeFactor::
+A parameter that controls the number of segments in an index. For example, when mergeFactor is set to 3, Solr will fill one segment with documents until the limit maxBufferedDocs is met, then it will start a new segment. When the number of segments specified by mergeFactor is reached (in this example, 3) then Solr will merge all the segments into a single index file, then begin writing new documents to a new segment.
+
+Snapshot::
+A directory containing hard links to the data files of an index. Snapshots are distributed from the master nodes when the slaves pull them, "smart copying" any segments the slave node does not already have from the snapshot directory that contains the hard links to the most recent index data files.
+
 
 [[IndexReplication-ConfiguringtheReplicationHandler]]
 == Configuring the ReplicationHandler
@@ -80,17 +92,20 @@ In addition to `ReplicationHandler` configuration options specific to the master
 
 Before running a replication, you should set the following parameters on initialization of the handler:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`replicateAfter`::
+String specifying the action after which replication should occur. Valid values are `commit`, `optimize`, or `startup`. There can be multiple values for this parameter. If you use `startup`, you also need a `commit` and/or `optimize` entry if you want to trigger replication on future commits or optimizes.
 
-[cols="30,70",options="header"]
-|===
-|Name |Description
-|replicateAfter |String specifying action after which replication should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. If you use "startup", you need to have a "commit" and/or "optimize" entry also if you want to trigger replication on future commits or optimizes.
-|backupAfter |String specifying action after which a backup should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. It is not required for replication, it just makes a backup.
-|maxNumberOfBackups |Integer specifying how many backups to keep. This can be used to delete all but the most recent N backups.
-|confFiles |The configuration files to replicate, separated by a comma.
-|commitReserveDuration |If your commits are very frequent and your network is slow, you can tweak this parameter to increase the amount of time taken to download 5Mb from the master to a slave. The default is 10 seconds.
-|===
+`backupAfter`::
+String specifying the action after which a backup should occur. Valid values are `commit`, `optimize`, or `startup`. There can be multiple values for this parameter. It is not required for replication; it just makes a backup.
+
+`maxNumberOfBackups`::
+Integer specifying how many backups to keep. This can be used to delete all but the most recent N backups.
+
+`confFiles`::
+The configuration files to replicate, separated by a comma.
+
+`commitReserveDuration`::
+If your commits are very frequent and your network is slow, you can tweak this parameter to increase the amount of time taken to download 5Mb from the master to a slave. The default is 10 seconds.
 
 The example below shows a possible 'master' configuration for the `ReplicationHandler`, including a fixed number of backups and an invariant setting for the `maxWriteMBPerSec` request parameter to prevent slaves from saturating its network interface
 
@@ -203,17 +218,13 @@ Here is an example of a ReplicationHandler configuration for a repeater:
 
 When a commit or optimize operation is performed on the master, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the `replicateAfter` parameter in the configuration to decide which types of events should trigger replication.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+These operations are supported:
 
-[cols="30,70",options="header"]
-|===
-|Setting on the Master |Description
-|commit |Triggers replication whenever a commit is performed on the master index.
-|optimize |Triggers replication whenever the master index is optimized.
-|startup |Triggers replication whenever the master index starts up.
-|===
+* `commit`: Triggers replication whenever a commit is performed on the master index.
+* `optimize`: Triggers replication whenever the master index is optimized.
+* `startup`: Triggers replication whenever the master index starts up.
 
-The replicateAfter parameter can accept multiple arguments. For example:
+The `replicateAfter` parameter can accept multiple arguments. For example:
 
 [source,xml]
 ----
@@ -262,36 +273,87 @@ To correct this problem, the slave then copies all the index files from master t
 
 You can use the HTTP commands below to control the ReplicationHandler's operations.
 
-[width="100%",options="header",]
-|===
-|Command |Description
-|http://_master_host:port_/solr/_core_name_/replication?command=enablereplication |Enables replication on the master for all its slaves.
-|http://_master_host:port_/solr/_core_name_/replication?command=disablereplication |Disables replication on the master for all its slaves.
-|http://_host:port_/solr/_core_name_/replication?command=indexversion |Returns the version of the latest replicatable index on the specified master or slave.
-|http://_slave_host:port_/solr/_core_name_/replication?command=fetchindex |Forces the specified slave to fetch a copy of the index from its master. If you like, you can pass an extra attribute such as masterUrl or compression (or any other parameter which is specified in the `<lst name="slave">` tag) to do a one time replication from a master. This obviates the need for hard-coding the master in the slave.
-|http://_slave_host:port_/solr/_core_name_/replication?command=abortfetch |Aborts copying an index from a master to the specified slave.
-|http://_slave_host:port_/solr/_core_name_/replication?command=enablepoll |Enables the specified slave to poll for changes on the master.
-|http://_slave_host:port_/solr/_core_name_/replication?command=disablepoll |Disables the specified slave from polling for changes on the master.
-|http://_slave_host:port_/solr/_core_name_/replication?command=details |Retrieves configuration details and current status.
-|http://_host:port_/solr/_core_name_/replication?command=filelist&generation=<_generation-number_> |Retrieves a list of Lucene files present in the specified host's index. You can discover the generation number of the index by running the `indexversion` command.
-|http://_master_host:port_/solr/_core_name_/replication?command=backup a|
-Creates a backup on master if there are committed index data in the server; otherwise, does nothing. This command is useful for making periodic backups.
-
-supported request parameters:
-
-* `numberToKeep:` request parameter can be used with the backup command unless the `maxNumberOfBackups` initialization parameter has been specified on the handler – in which case `maxNumberOfBackups` is always used and attempts to use the `numberToKeep` request parameter will cause an error.
-* `name` : (optional) Backup name . The snapshot will be created in a directory called snapshot.<name> within the data directory of the core . By default the name is generated using date in `yyyyMMddHHmmssSSS` format. If `location` parameter is passed , that would be used instead of the data directory
-* `location`: Backup location
+`enablereplication`::
+Enable replication on the "master" for all its slaves.
++
+[source,bash]
+http://_master_host:port_/solr/_core_name_/replication?command=enablereplication
+
+`disablereplication`::
+Disable replication on the master for all its slaves.
++
+[source,bash]
+http://_master_host:port_/solr/_core_name_/replication?command=disablereplication
+
+`indexversion`::
+Return the version of the latest replicatable index on the specified master or slave.
++
+[source,bash]
+http://_host:port_/solr/_core_name_/replication?command=indexversion
+
+`fetchindex`::
+Force the specified slave to fetch a copy of the index from its master.
++
+[source,bash]
+http://_slave_host:port_/solr/_core_name_/replication?command=fetchindex
++
+If you like, you can pass an extra attribute such as `masterUrl` or `compression` (or any other parameter which is specified in the `<lst name="slave">` tag) to do a one time replication from a master. This obviates the need for hard-coding the master in the slave.
+
+`abortfetch`::
+Abort copying an index from a master to the specified slave.
++
+[source,bash]
+http://_slave_host:port_/solr/_core_name_/replication?command=abortfetch
+
+`enablepoll`::
+Enable the specified slave to poll for changes on the master.
++
+[source,bash]
+http://_slave_host:port_/solr/_core_name_/replication?command=enablepoll
+
+`disablepoll`::
+Disable the specified slave from polling for changes on the master.
++
+[source,bash]
+http://_slave_host:port_/solr/_core_name_/replication?command=disablepoll
+
+`details`::
+Retrieve configuration details and current status.
++
+[source,bash]
+http://_slave_host:port_/solr/_core_name_/replication?command=details
+
+`filelist`::
+Retrieve a list of Lucene files present in the specified host's index.
++
+[source,bash]
+http://_host:port_/solr/_core_name_/replication?command=filelist&generation=<_generation-number_>
++
+You can discover the generation number of the index by running the `indexversion` command.
+
+`backup`::
+Create a backup on the master if there is committed index data in the server; otherwise, the command does nothing.
++
+[source,bash]
+http://_master_host:port_/solr/_core_name_/replication?command=backup
++
+This command is useful for making periodic backups. There are several supported request parameters:
++
+* `numberToKeep`: This can be used with the backup command unless the `maxNumberOfBackups` initialization parameter has been specified on the handler – in which case `maxNumberOfBackups` is always used and attempts to use the `numberToKeep` request parameter will cause an error.
+* `name`: (optional) Backup name. The snapshot will be created in a directory called `snapshot.<name>` within the data directory of the core. By default, the name is generated using the date in `yyyyMMddHHmmssSSS` format. If the `location` parameter is passed, that location would be used instead of the data directory.
+* `location`: Backup location.
+
+`deletebackup`::
+Delete any backup created using the `backup` command.
++
+[source,bash]
+http://_master_host:port_/solr/_core_name_/replication?command=deletebackup
++
+There are two supported parameters:
+
+* `name`: The name of the snapshot. A snapshot with the name `snapshot._name_` must exist. If not, an error is thrown.
+* `location`: Location where the snapshot is created.
 
-|http://_master_host:port_ /solr/_core_name_/replication?command=deletebackup a|
-Delete any backup created using the `backup` command .
-
-Request parameters:
-
-* name: The name of the snapshot . A snapshot with the name snapshot.<name> must exist .If not, an error is thrown
-* location: Location where the snapshot is created
-
-|===
 
 [[IndexReplication-DistributionandOptimization]]
 == Distribution and Optimization
@@ -302,7 +364,9 @@ The time required to optimize a master index can vary dramatically. A small inde
 
 Distributing a newly optimized index may take only a few minutes or up to an hour or more, again depending on the size of the index and the performance capabilities of network connections and disks. During optimization the machine is under load and does not process queries very well. Given a schedule of updates being driven a few times an hour to the slaves, we cannot run an optimize with every committed snapshot.
 
-Copying an optimized index means that the *entire* index will need to be transferred during the next snappull. This is a large expense, but not nearly as huge as running the optimize everywhere. Consider this example: on a three-slave one-master configuration, distributing a newly-optimized index takes approximately 80 seconds _total_. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each slave node being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that the _following_ the optimize, snappull would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of indexes instead of each being a perfect copy of the master.
+Copying an optimized index means that the *entire* index will need to be transferred during the next `snappull`. This is a large expense, but not nearly as huge as running the optimize everywhere.
+
+Consider this example: on a three-slave one-master configuration, distributing a newly-optimized index takes approximately 80 seconds _total_. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each slave node being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that, _following_ the optimize, `snappull` would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of indexes instead of each being a perfect copy of the master.
 
 Optimizing on the master allows for a straight-forward optimization operation. No query slaves need to be taken out of service. The optimized index can be distributed in the background as queries are being normally serviced. The optimization can occur at any time convenient to the application providing index updates.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
index ce32503..63ab26d 100644
--- a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
@@ -192,15 +192,12 @@ The maximum time to wait for a write lock on an IndexWriter. The default is 1000
 
 There are a few other parameters that may be important to configure for your implementation. These settings affect how or when updates are made to an index.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Setting |Description
-|reopenReaders |Controls if IndexReaders will be re-opened, instead of closed and then opened, which is often less efficient. The default is true.
-|deletionPolicy |Controls how commits are retained in case of rollback. The default is `SolrDeletionPolicy`, which has sub-parameters for the maximum number of commits to keep (`maxCommitsToKeep`), the maximum number of optimized commits to keep (`maxOptimizedCommitsToKeep`), and the maximum age of any commit to keep (`maxCommitAge`), which supports `DateMathParser` syntax.
-|infoStream |The InfoStream setting instructs the underlying Lucene classes to write detailed debug information from the indexing process as Solr log messages.
-|===
+`reopenReaders`:: Controls if IndexReaders will be re-opened, instead of closed and then opened, which is often less efficient. The default is true.
+
+`deletionPolicy`:: Controls how commits are retained in case of rollback. The default is `SolrDeletionPolicy`, which has sub-parameters for the maximum number of commits to keep (`maxCommitsToKeep`), the maximum number of optimized commits to keep (`maxOptimizedCommitsToKeep`), and the maximum age of any commit to keep (`maxCommitAge`), which supports `DateMathParser` syntax.
+
+`infoStream`:: The InfoStream setting instructs the underlying Lucene classes to write detailed debug information from the indexing process as Solr log messages.
+
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
index 126e96b..ac409ff 100644
--- a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
@@ -44,22 +44,16 @@ This sets the default search field ("df") to be "_text_" for all of the request
 
 The syntax and semantics are similar to that of a `<requestHandler>` . The following are the attributes
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`path`::
+A comma-separated list of paths which will use the parameters. Wildcards can be used in paths to define nested paths, as described below.
 
-[cols="30,70",options="header"]
-|===
-|Property |Description
-|path |A comma-separated list of paths which will use the parameters. Wildcards can be used in paths to define nested paths, as described below.
-|name a|
+`name`::
 The name of this set of parameters. The name can be used directly in a requestHandler definition if a path is not explicitly named. If you give your `<initParams>` a name, you can refer to the params in a `<requestHandler>` that is not defined as a path.
-
++
 For example, if an `<initParams>` section has the name "myParams", you can call the name when defining your request handler:
-
++
 [source,xml]
-----
 <requestHandler name="/dump1" class="DumpRequestHandler" initParams="myParams"/>
-----
-|===
 
 [[InitParamsinSolrConfig-Wildcards]]
 == Wildcards

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
index 3962433..da96316 100644
--- a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
@@ -232,19 +232,26 @@ The main properties we are concerned with are the `keyTab` and `principal` prope
 
 While starting up Solr, the following host-specific parameters need to be passed. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="35,10,55",options="header"]
-|===
-|Parameter Name |Required |Description
-|`solr.kerberos.name.rules` |No |Used to map Kerberos principals to short names. Default value is `DEFAULT`. Example of a name rule: `RULE:[1:$1@$0](.\*EXAMPLE.COM)s/@.*//`
-|`solr.kerberos.cookie.domain` |Yes |Used to issue cookies and should have the hostname of the Solr node.
-|`solr.kerberos.cookie.portaware` |No |When set to true, cookies are differentiated based on host and port, as opposed to standard cookies which are not port aware. This should be set if more than one Solr node is hosted on the same host. The default is false.
-|`solr.kerberos.principal` |Yes |The service principal.
-|`solr.kerberos.keytab` |Yes |Keytab file path containing service principal credentials.
-|`solr.kerberos.jaas.appname` |No |The app name (section name) within the JAAS configuration file which is required for internode communication. Default is `Client`, which is used for ZooKeeper authentication as well. If different users are used for ZooKeeper and Solr, they will need to have separate sections in the JAAS configuration file.
-|`java.security.auth.login.config` |Yes |Path to the JAAS configuration file for configuring a Solr client for internode communication.
-|===
+`solr.kerberos.name.rules`::
+Used to map Kerberos principals to short names. Default value is `DEFAULT`. Example of a name rule: `RULE:[1:$1@$0](.\*EXAMPLE.COM)s/@.*//`.
+
+`solr.kerberos.cookie.domain`:: Used to issue cookies and should have the hostname of the Solr node. This parameter is required.
+
+`solr.kerberos.cookie.portaware`::
+When set to `true`, cookies are differentiated based on host and port, as opposed to standard cookies which are not port aware. This should be set if more than one Solr node is hosted on the same host. The default is `false`.
+
+`solr.kerberos.principal`::
+The service principal. This parameter is required.
+
+`solr.kerberos.keytab`::
+Keytab file path containing service principal credentials. This parameter is required.
+
+`solr.kerberos.jaas.appname`::
+The app name (section name) within the JAAS configuration file which is required for internode communication. Default is `Client`, which is used for ZooKeeper authentication as well. If different users are used for ZooKeeper and Solr, they will need to have separate sections in the JAAS configuration file.
+
+`java.security.auth.login.config`::
+Path to the JAAS configuration file for configuring a Solr client for internode communication. This parameter is required.
+
 
 Here is an example that could be added to `bin/solr.in.sh`. Make sure to change this example to use the right hostname and the keytab file path.
 
@@ -279,18 +286,23 @@ There are a few use cases for Solr where this might be helpful:
 
 To enable delegation tokens, several parameters must be defined. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="40,10,50",options="header"]
-|===
-|Parameter Name |Required |Description
-|`solr.kerberos.delegation.token.enabled` |Yes, to enable tokens |False by default, set to true to enable delegation tokens.
-|`solr.kerberos.delegation.token.kind` |No |Type of delegation tokens. By default this is `solr-dt`. Likely this does not need to change. No other option is available at this time.
-|`solr.kerberos.delegation.token.validity` |No |Time, in seconds, for which delegation tokens are valid. The default is 36000 seconds.
-|`solr.kerberos.delegation.token.signer.secret.provider` |No |Where delegation token information is stored internally. The default is `zookeeper` which must be the location for delegation tokens to work across Solr servers (when running in SolrCloud mode). No other option is available at this time.
-|`solr.kerberos.delegation.token.signer.secret.provider.zookeper.path` |No |The ZooKeeper path where the secret provider information is stored. This is in the form of the path + /security/token. The path can include the chroot or the chroot can be omitted if you are not using it. This example includes the chroot: `server1:9983,server2:9983,server3:9983/solr/security/token`.
-|`solr.kerberos.delegation.token.secret.manager.znode.working.path` |No |The ZooKeeper path where token information is stored. This is in the form of the path + /security/zkdtsm. The path can include the chroot or the chroot can be omitted if you are not using it. This example includes the chroot: `server1:9983,server2:9983,server3:9983/solr/security/zkdtsm`.
-|===
+`solr.kerberos.delegation.token.enabled`::
+This is `false` by default; set to `true` to enable delegation tokens. This parameter is required if you want to enable tokens.
+
+`solr.kerberos.delegation.token.kind`::
+The type of delegation tokens. By default this is `solr-dt`. Likely this does not need to change. No other option is available at this time.
+
+`solr.kerberos.delegation.token.validity`::
+Time, in seconds, for which delegation tokens are valid. The default is 36000 seconds.
+
+`solr.kerberos.delegation.token.signer.secret.provider`::
+Where delegation token information is stored internally. The default is `zookeeper` which must be the location for delegation tokens to work across Solr servers (when running in SolrCloud mode). No other option is available at this time.
+
+`solr.kerberos.delegation.token.signer.secret.provider.zookeper.path`::
+The ZooKeeper path where the secret provider information is stored. This is in the form of the path + /security/token. The path can include the chroot or the chroot can be omitted if you are not using it. This example includes the chroot: `server1:9983,server2:9983,server3:9983/solr/security/token`.
+
+`solr.kerberos.delegation.token.secret.manager.znode.working.path`::
+The ZooKeeper path where token information is stored. This is in the form of the path + /security/zkdtsm. The path can include the chroot or the chroot can be omitted if you are not using it. This example includes the chroot: `server1:9983,server2:9983,server3:9983/solr/security/zkdtsm`.
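+
+As a sketch of how these might be set, delegation tokens could be enabled by adding properties along the following lines to `bin/solr.in.sh`. The `SOLR_OPTS` variable and the ZooKeeper connection string are assumptions for illustration; adjust them for your environment:
+
+[source,bash]
+----
+# Enable delegation tokens; token state is kept in ZooKeeper by default
+SOLR_OPTS="$SOLR_OPTS -Dsolr.kerberos.delegation.token.enabled=true"
+
+# Explicit ZooKeeper paths for the secret provider and the token manager
+SOLR_OPTS="$SOLR_OPTS -Dsolr.kerberos.delegation.token.signer.secret.provider.zookeper.path=server1:9983,server2:9983,server3:9983/solr/security/token"
+SOLR_OPTS="$SOLR_OPTS -Dsolr.kerberos.delegation.token.secret.manager.znode.working.path=server1:9983,server2:9983,server3:9983/solr/security/zkdtsm"
+----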
 
 [[KerberosAuthenticationPlugin-StartSolr]]
 === Start Solr