Posted to commits@lucene.apache.org by ct...@apache.org on 2018/02/27 22:18:25 UTC

[2/2] lucene-solr:branch_7x: Ref Guide: Copy editing changes committed for 7.3 & fixing typos Removed stream-evaluator-reference.adoc to remove crazy merge conflicts

Ref Guide: Copy editing changes committed for 7.3 & fixing typos
Removed stream-evaluator-reference.adoc to remove crazy merge conflicts


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/601c7350
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/601c7350
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/601c7350

Branch: refs/heads/branch_7x
Commit: 601c7350ce459f60b7e9fea4dfa46793f254f7c8
Parents: 9b0127f
Author: Cassandra Targett <ct...@apache.org>
Authored: Tue Feb 27 16:04:36 2018 -0600
Committer: Cassandra Targett <ct...@apache.org>
Committed: Tue Feb 27 16:18:14 2018 -0600

----------------------------------------------------------------------
 solr/solr-ref-guide/src/about-this-guide.adoc   |   2 +-
 solr/solr-ref-guide/src/analyzers.adoc          |   2 +-
 .../src/collapse-and-expand-results.adoc        |   2 +-
 solr/solr-ref-guide/src/collections-api.adoc    |  69 ++++++++-----
 .../src/command-line-utilities.adoc             |   2 +-
 .../src/common-query-parameters.adoc            |   8 +-
 .../detecting-languages-during-indexing.adoc    |   8 +-
 solr/solr-ref-guide/src/documents-screen.adoc   |   6 +-
 solr/solr-ref-guide/src/enabling-ssl.adoc       |   4 +-
 .../src/field-types-included-with-solr.adoc     |   2 +-
 .../solr-ref-guide/src/filter-descriptions.adoc |   4 +-
 .../src/getting-started-with-solrcloud.adoc     |   4 +-
 solr/solr-ref-guide/src/graph-traversal.adoc    |   4 +-
 solr/solr-ref-guide/src/json-facet-api.adoc     |   4 +-
 solr/solr-ref-guide/src/language-analysis.adoc  |   4 +-
 solr/solr-ref-guide/src/learning-to-rank.adoc   |   2 +-
 .../src/major-changes-in-solr-7.adoc            |   2 +-
 solr/solr-ref-guide/src/merging-indexes.adoc    |   2 +-
 solr/solr-ref-guide/src/meta-docs/publish.adoc  |   2 +-
 solr/solr-ref-guide/src/metrics-reporting.adoc  |  32 +++---
 .../src/near-real-time-searching.adoc           |   4 +-
 solr/solr-ref-guide/src/other-parsers.adoc      |   4 +-
 .../src/other-schema-elements.adoc              |   2 +-
 .../src/pagination-of-results.adoc              |   2 +-
 solr/solr-ref-guide/src/post-tool.adoc          |   4 +-
 .../src/running-solr-on-hdfs.adoc               |   2 +-
 .../src/solr-jdbc-apache-zeppelin.adoc          |   6 +-
 solr/solr-ref-guide/src/solr-tutorial.adoc      |   2 +-
 .../solrcloud-autoscaling-trigger-actions.adoc  |  26 ++---
 .../src/solrcloud-autoscaling-triggers.adoc     | 103 +++++++++++--------
 solr/solr-ref-guide/src/spell-checking.adoc     |   2 +-
 .../src/stream-decorator-reference.adoc         |   4 +-
 .../src/taking-solr-to-production.adoc          |  17 ++-
 .../src/the-dismax-query-parser.adoc            |   2 +-
 .../src/update-request-processors.adoc          |   2 +-
 .../src/uploading-data-with-index-handlers.adoc |   2 +-
 36 files changed, 200 insertions(+), 149 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/about-this-guide.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/about-this-guide.adoc b/solr/solr-ref-guide/src/about-this-guide.adoc
index 5b8502f..3d7fc24 100644
--- a/solr/solr-ref-guide/src/about-this-guide.adoc
+++ b/solr/solr-ref-guide/src/about-this-guide.adoc
@@ -46,7 +46,7 @@ Path information is given relative to `solr.home`, which is the location under t
 
 In many cases, this is in the `server/solr` directory of your installation. However, there can be exceptions, particularly if your installation has customized this.
 
-In several cases of this Guide, our examples are built from the the "techproducts" example (i.e., you have started solr with the command `bin/solr -e techproducts`). In this case, `solr.home` will be a sub-directory of the `example/` directory created for you automatically.
+In several cases of this Guide, our examples are built from the "techproducts" example (i.e., you have started Solr with the command `bin/solr -e techproducts`). In this case, `solr.home` will be a sub-directory of the `example/` directory created for you automatically.
 
 See also the section <<solr-configuration-files.adoc#solr-home,Solr Home>> for further details on what is contained in this directory.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/analyzers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/analyzers.adoc b/solr/solr-ref-guide/src/analyzers.adoc
index 343fd30..2edfe9c 100644
--- a/solr/solr-ref-guide/src/analyzers.adoc
+++ b/solr/solr-ref-guide/src/analyzers.adoc
@@ -88,7 +88,7 @@ At query time, the only normalization that happens is to convert the query terms
 
 === Analysis for Multi-Term Expansion
 
-In some types of queries (i.e., Prefix, Wildcard, Regex, etc...) the input provided by the user is not natural language intended for Analysis. Things like Synonyms or Stop word filtering do not work in a logical way in these types of Queries.
+In some types of queries (i.e., Prefix, Wildcard, Regex, etc.) the input provided by the user is not natural language intended for Analysis. Things like Synonyms or Stop word filtering do not work in a logical way in these types of Queries.
 
 The analysis factories that _can_ work in these types of queries (such as Lowercasing, or Normalizing Factories) are known as {lucene-javadocs}/analyzers-common/org/apache/lucene/analysis/util/MultiTermAwareComponent.html[`MultiTermAwareComponents`]. When Solr needs to perform analysis for a query that results in Multi-Term expansion, only the `MultiTermAwareComponents` used in the `query` analyzer are used; any Factory that is not Multi-Term aware will be skipped.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index 67a2980..d967b93 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -27,7 +27,7 @@ In order to use these features with SolrCloud, the documents must be located on
 
 == Collapsing Query Parser
 
-The `CollapsingQParser` is really a _post filter_ that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high. This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc...) will work with the collapsed result set.
+The `CollapsingQParser` is really a _post filter_ that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high. This parser collapses the result set to a single document per group before it forwards the result set to the rest of the search components. So all downstream components (faceting, highlighting, etc.) will work with the collapsed result set.
 
 The CollapsingQParser accepts the following local parameters:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/collections-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collections-api.adoc b/solr/solr-ref-guide/src/collections-api.adoc
index b2d3cc4..dee5443 100644
--- a/solr/solr-ref-guide/src/collections-api.adoc
+++ b/solr/solr-ref-guide/src/collections-api.adoc
@@ -519,7 +519,7 @@ http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&c
 ----
 
 [[createroutedalias]]
-== CREATEROUTEDALIAS: Create an alias that partitions data
+== CREATEROUTEDALIAS: Create an Alias that Partitions Data
 
 CREATEROUTEDALIAS will create a special type of alias that automates the partitioning of data across a series of
 collections. This feature allows for indefinite indexing of data without degradation of performance otherwise
@@ -528,26 +528,31 @@ the document is then potentially re-routed to another collection. The underlying
 can be queried independently but more likely the alias created by this command will be used. These collections are created
 automatically on the fly as new data arrives based on the parameters supplied in this command.
 
-*NOTE* Presently only partitioning of time based data is available, though other schemes may become available in
+NOTE: Presently only partitioning of time-based data is available, though other schemes may become available in
 the future.
+
 [source,text]
 ----
-localhost:8983/solr/admin/collections?action=CREATEROUTEDALIAS&name=timedata&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2
+admin/collections?action=CREATEROUTEDALIAS&name=timedata&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2
 ----
 
-If run on Jan 15, 2018 The above will create an alias named timedata, that contains collections with names such as
+If run on Jan 15, 2018, the above will create an alias named "timedata" that contains collections with names prefixed by
 `timedata`, starting with an initial collection named `timedata_2018_01_15`. Updates sent to this alias with a (required) value
 in `evt_dt` that is before or after 2018-01-15 will be rejected, until the last 60 minutes of 2018-01-15. After
-2018-01-15T23:00:00 documents for either 2018-01-15 or 2018-01-16 will be accepted. As soon as the system receives a
+2018-01-15T23:00:00, documents for either 2018-01-15 or 2018-01-16 will be accepted.
+
+As soon as the system receives a
 document for an allowable time window for which there is no collection, it will automatically create the next required
-collection (and potentially any intervening collections if router.interval is smaller than router.maxFutureMs). Both
+collection (and potentially any intervening collections if `router.interval` is smaller than `router.maxFutureMs`). Both
 the initial collection and any subsequent collections will be created using the specified configset. All Collection
 creation parameters other than `name` are allowed, prefixed by `create-collection.`
 
-This means that one could (for example) partition their collections by day, and within each daily collection route
+This means that one could, for example, partition their collections by day, and within each daily collection route
 the data to shards based on customer id. Such shards can be of any type (NRT, PULL or TLOG), and rule based replica
-placement strategies may also be used. The values supplied in this command for collection creation will be retained
-in alias metadata, and can be verified by inspecting aliases.json in zookeeper.
+placement strategies may also be used.
+
+The values supplied in this command for collection creation will be retained
+in alias metadata, and can be verified by inspecting `aliases.json` in ZooKeeper.
 
 === CREATEROUTEDALIAS Parameters
 
@@ -557,49 +562,61 @@ dependent collections that will be created. It must therefore adhere to normal r
 naming.
 
 `router.start`::
-The start date/time of data for this time routed alias in Solr's standard date/time format (ISO-8601 or "NOW"
-optionally with "date math").
+The start date/time of data for this time routed alias in Solr's standard date/time format (i.e., ISO-8601 or "NOW"
+optionally with <<working-with-dates.adoc#date-math,date math>>).
++
 The first collection created for the alias will be internally named after this value.
-If a document is submitted with an earlier value for router.field then the earliest collection the alias points to then
+If a document is submitted with an earlier value for `router.field` than the earliest collection the alias points to, then
 it will yield an error since it can't be routed.
++
 This date/time MUST NOT have a milliseconds component other than 0.
 Particularly, this means `NOW` will fail 999 times out of 1000, though `NOW/SECOND`, `NOW/MINUTE`, etc. will work just fine.
-This param is required.
++
+This parameter is required.
 
 `TZ`::
-The timezone to be used when evaluating any date math in router.start or router.interval.  This is equivalent to the
+The timezone to be used when evaluating any date math in `router.start` or `router.interval`. This is equivalent to the
 same parameter supplied to search queries, but understand in this case it's persisted with most of the other parameters
 as alias metadata.
++
 If GMT-4 is supplied for this value then a document dated 2018-01-14T21:00:00:01.2345Z would be stored in the
-myAlias_2018-01-15_01 collection (assumming an interval of +1HOUR). The default timezone is UTC.
+`myAlias_2018-01-15_01` collection (assuming an interval of `+1HOUR`).
++
+The default timezone is UTC.
 
 `router.field`::
 The date field to inspect to determine which underlying collection an incoming document should be routed to.
 This field is required on all incoming documents.
 
 `router.name`::
-The type of routing to use. Presently only `time` is valid.  This param is required.
+The type of routing to use. Presently only `time` is valid. This parameter is required.
 
 `router.interval`::
 A date math expression that will be appended to a timestamp to determine the next collection in the series.
 Any date math expression that can be evaluated if appended to a timestamp of the form 2018-01-15T16:17:18 will
-work here. This param is required.
+work here.
++
+This parameter is required.
 
 `router.maxFutureMs`::
 The maximum milliseconds into the future that a document is allowed to have in `router.field` for it to be accepted
 without error. If there were no limit, then an erroneous value could trigger many collections to be created.
-The default is 10 minutes worth.
++
+The default is 10 minutes.
 
 `router.autoDeleteAge`::
 A date math expression that results in the oldest collections getting deleted automatically.
++
 The date math is relative to the timestamp of a newly created collection (typically close to the current time),
 and thus this must produce an earlier time via rounding and/or subtracting.
 Collections to be deleted must have a time range that is entirely before the computed age.
 Collections are considered for deletion immediately prior to new collections getting created.
-Example: `/DAY-90DAYS`.  The default is not to delete.
+Example: `/DAY-90DAYS`.
++
+The default is not to delete.
 
 `create-collection.*`::
-The * can be replaced with any parameter from the <<create,CREATE>> command except `name`. All other fields
+The * wildcard can be replaced with any parameter from the <<create,CREATE>> command except `name`. All other fields
 are identical in requirements and naming except that we insist that the configset be explicitly specified.
 The configset must be created beforehand, either uploaded or copied and modified.
 It's probably a bad idea to use "data driven" mode, as schema mutations might happen concurrently, leading to errors.
@@ -612,7 +629,7 @@ Request ID to track this action which will be <<Asynchronous Calls,processed asy
 The output will simply be a responseHeader with details of the time it took to process the request. To confirm the
 creation of the alias and the values of the associated metadata, you can look in the Solr Admin UI, under the Cloud
 section and find the `aliases.json` file. The initial collection should also be visible in various parts
-of the admin UI.
+of the Admin UI.
 
 === Examples using CREATEROUTEDALIAS
 
@@ -625,7 +642,7 @@ partition is to be rejected and collections are created using a config set named
 
 [source,text]
 ----
-localhost:8983/solr/admin/collections?action=CREATEROUTEDALIAS&name=myTimeData&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2
+http://localhost:8983/solr/admin/collections?action=CREATEROUTEDALIAS&name=myTimeData&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&create-collection.collection.configName=myConfig&create-collection.numShards=2
 ----
 
 *Output*
@@ -641,7 +658,7 @@ localhost:8983/solr/admin/collections?action=CREATEROUTEDALIAS&name=myTimeData&r
 ----
 
 A somewhat contrived example demonstrating the <<v2-api.adoc#top-v2-api,V2 API>> usage and additional collection creation options.
-Notice that the collection creation fields follow the v2 api naming convention, not the v1 naming conventions.
+Notice that the collection creation parameters follow the v2 API naming convention, not the v1 naming convention.
 
 *Input*
 
@@ -955,7 +972,7 @@ If `shard` is not specified, then `\_route_` must be.
 `\_route_`::
 If the exact shard name is not known, users may pass the `\_route_` value and the system would identify the name of the shard.
 +
-Ignored if the `shard` param is also specified.
+Ignored if the `shard` parameter is also specified.
 
 `node`::
 The name of the node where the replica should be created.
@@ -1989,7 +2006,7 @@ WARNING: This is an expert level command, and should be invoked only when regula
 [[migratestateformat]]
 == MIGRATESTATEFORMAT: Migrate Cluster State
 
-A expert level utility API to move a collection from shared `clusterstate.json` zookeeper node (created with `stateFormat=1`, the default in all Solr releases prior to 5.0) to the per-collection `state.json` stored in ZooKeeper (created with `stateFormat=2`, the current default) seamlessly without any application down-time.
+An expert-level utility API to move a collection from the shared `clusterstate.json` ZooKeeper node (created with `stateFormat=1`, the default in all Solr releases prior to 5.0) to the per-collection `state.json` stored in ZooKeeper (created with `stateFormat=2`, the current default) seamlessly without any application downtime.
 
 `/admin/collections?action=MIGRATESTATEFORMAT&collection=<collection_name>`
 
@@ -2104,7 +2121,7 @@ For source replicas that are also shard leaders the operation will wait for the
 The source node from which the replicas need to be copied. This parameter is required.
 
 `targetNode`::
-The target node where replicas will be copied. If this parameter is not provided, Solr would identify nodes automatically based on policies or no:of cores in each node
+The target node where replicas will be copied. If this parameter is not provided, Solr will identify nodes automatically based on policies or the number of cores in each node.
 
 `parallel`::
 If this flag is set to `true`, all replicas are created in separate threads. Keep in mind that this can lead to very high network and disk I/O if the replicas have very large indices. The default is `false`.
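
Taken together, the CREATEROUTEDALIAS parameters documented above might be exercised like this; a hedged sketch, where the hostname, the `myConfig` configset, and the `evt_dt` field are the placeholders from the examples above, and `router.autoDeleteAge` is included purely for illustration:

[source,bash]
----
# Sketch only: a time-routed alias that rolls over daily, accepts up to one
# hour of future-dated documents, and deletes collections older than 90 days.
curl "http://localhost:8983/solr/admin/collections?action=CREATEROUTEDALIAS&name=timedata&router.start=NOW/DAY&router.field=evt_dt&router.name=time&router.interval=%2B1DAY&router.maxFutureMs=3600000&router.autoDeleteAge=/DAY-90DAYS&create-collection.collection.configName=myConfig&create-collection.numShards=2"
----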

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/command-line-utilities.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/command-line-utilities.adoc b/solr/solr-ref-guide/src/command-line-utilities.adoc
index ae88327..1e588e4 100644
--- a/solr/solr-ref-guide/src/command-line-utilities.adoc
+++ b/solr/solr-ref-guide/src/command-line-utilities.adoc
@@ -111,7 +111,7 @@ If you are on a Windows machine, simply replace `zkcli.sh` with `zkcli.bat` in the
 .Bootstrap with chroot
 [NOTE]
 ====
-Using the boostrap command with a zookeeper chroot in the `-zkhost` parameter, e.g., `-zkhost 127.0.0.1:2181/solr`, will automatically create the chroot path before uploading the configs.
+Using the bootstrap command with a ZooKeeper chroot in the `-zkhost` parameter, e.g., `-zkhost 127.0.0.1:2181/solr`, will automatically create the chroot path before uploading the configs.
 ====
 
 === Put Arbitrary Data into a New ZooKeeper file
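
A minimal sketch of the bootstrap-with-chroot invocation the note above describes; the script path matches a standard Solr install layout, while the ZooKeeper address and `-solrhome` value are assumptions:

[source,bash]
----
# Sketch: the /solr chroot in the -zkhost value is created automatically
# before the configs are uploaded (paths are illustrative).
server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181/solr -cmd bootstrap -solrhome server/solr
----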

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/common-query-parameters.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/common-query-parameters.adoc b/solr/solr-ref-guide/src/common-query-parameters.adoc
index 0e2f4f0..83ea667 100644
--- a/solr/solr-ref-guide/src/common-query-parameters.adoc
+++ b/solr/solr-ref-guide/src/common-query-parameters.adoc
@@ -26,7 +26,7 @@ The defType parameter selects the query parser that Solr should use to process t
 
 `defType=dismax`
 
-If no defType param is specified, then by default, the <<the-standard-query-parser.adoc#the-standard-query-parser,The Standard Query Parser>> is used. (e.g., `defType=lucene`)
+If no `defType` parameter is specified, then by default the <<the-standard-query-parser.adoc#the-standard-query-parser,Standard Query Parser>> is used (e.g., `defType=lucene`).
 
 == sort Parameter
 
@@ -36,12 +36,12 @@ Solr can sort query responses according to:
 
 * Document scores
 * <<function-queries.adoc#sort-by-function,Function results>>
-* The value of any primative field (numerics, string, boolean, dates, etc...) which has `docValues="true"` (or `multiValued="false"` and `indexed="true"` in which case the indexed terms will used to build DocValue like structures on the fly at runtime)
+* The value of any primitive field (numerics, string, boolean, dates, etc.) which has `docValues="true"` (or `multiValued="false"` and `indexed="true"`, in which case the indexed terms will be used to build DocValue-like structures on the fly at runtime)
 * A SortableTextField which implicitly uses `docValues="true"` by default to allow sorting on the original input string regardless of the analyzers used for Searching.
-* A single-valued TextField that uses an analyzer (such as the KeywordTokenizer) that produces only a single term per document.  TextField does not support docValues="true", but a DocValue like structure will be built on the fly at runtime.
+* A single-valued TextField that uses an analyzer (such as the KeywordTokenizer) that produces only a single term per document. TextField does not support `docValues="true"`, but a DocValue-like structure will be built on the fly at runtime.
 ** *NOTE:* If you want to be able to sort on a field whose contents you want to tokenize to facilitate searching, <<copying-fields.adoc#copying-fields,use a `copyField` directive>> in the Schema to clone the field. Then search on the field and sort on its clone.
 
-In the case of primative fields, or SortableTextFields, that are `multiValued="true"` the representantive value used for each doc when sorting depends on the sort direction: The minimum value in each document is used for ascending (`asc`) sorting, while the maximal value in each document is used for descending (`desc`) sorting.  This default behavior is equivilent to explicitly sorting using the 2 argument `<<function-queries.adoc#field-function,field()>>` function: `sort=field(name,min) asc` and `sort=field(name,max) desc`
+In the case of primitive fields, or SortableTextFields, that are `multiValued="true"`, the representative value used for each doc when sorting depends on the sort direction: the minimum value in each document is used for ascending (`asc`) sorting, while the maximum value in each document is used for descending (`desc`) sorting. This default behavior is equivalent to explicitly sorting using the 2-argument `<<function-queries.adoc#field-function,field()>>` function: `sort=field(name,min) asc` and `sort=field(name,max) desc`.
 
 The table below explains how Solr responds to various settings of the `sort` parameter.
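
As a quick illustration of the default versus explicit behavior described above, a hedged sketch against the "techproducts" example; the multiValued `price_history` field is hypothetical:

[source,bash]
----
# Sketch: explicit 2-argument field() sorting on a hypothetical multiValued
# field, equivalent to the default ascending behavior described above.
curl "http://localhost:8983/solr/techproducts/select?q=*:*&sort=field(price_history,min)%20asc"
----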
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
index 7caccb7..61394f1 100644
--- a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
+++ b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
@@ -22,7 +22,7 @@ Solr supports three implementations of this feature:
 
 * Tika's language detection feature: http://tika.apache.org/0.10/detection.html
 * LangDetect language detection: https://github.com/shuyo/language-detection
-* OpenNLP language detection: http://opennlp.apache.org/docs/1.8.4/manual/opennlp.html#tools.langdetect 
+* OpenNLP language detection: http://opennlp.apache.org/docs/1.8.4/manual/opennlp.html#tools.langdetect
 
 You can see a comparison between the Tika and LangDetect implementations here: http://blog.mikemccandless.com/2011/10/accuracy-and-performance-of-googles.html. In general, the LangDetect implementation supports more languages with higher performance.
 
@@ -77,12 +77,12 @@ Here is an example of a minimal OpenNLP `langid` configuration in `solrconfig.xm
 </processor>
 ----
 
-==== OpenNLP-specific parameters 
+==== OpenNLP-specific Parameters
 
 `langid.model`::
-An OpenNLP language detection model. The OpenNLP project provides a pre-trained 103 language model on the http://opennlp.apache.org/models.html[OpenNLP site's model dowload page]. Model training instructions are provided on the http://opennlp.apache.org/docs/1.8.4/manual/opennlp.html#tools.langdetect[OpenNLP website]. This parameter is required. 
+An OpenNLP language detection model. The OpenNLP project provides a pre-trained 103-language model on the http://opennlp.apache.org/models.html[OpenNLP site's model download page]. Model training instructions are provided on the http://opennlp.apache.org/docs/1.8.4/manual/opennlp.html#tools.langdetect[OpenNLP website]. This parameter is required.
 
-==== OpenNLP language codes
+==== OpenNLP Language Codes
 
 `OpenNLPLangDetectUpdateProcessor` automatically converts the 3-letter ISO 639-3 codes detected by the OpenNLP model into 2-letter ISO 639-1 codes.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/documents-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-screen.adoc b/solr/solr-ref-guide/src/documents-screen.adoc
index 3274d40..66b0cd4 100644
--- a/solr/solr-ref-guide/src/documents-screen.adoc
+++ b/solr/solr-ref-guide/src/documents-screen.adoc
@@ -23,8 +23,8 @@ image::images/documents-screen/documents_add_screen.png[image,height=400]
 
 The screen allows you to:
 
-* Submit JSON, CSV or XML documents in solr-specific format to Solr
-* Upload documents (in JSON, CSV or XML) to Solr
+* Submit JSON, CSV or XML documents in Solr-specific format for indexing
+* Upload documents (in JSON, CSV or XML) for indexing
 * Construct documents by selecting fields and field values
 
 [TIP]
@@ -61,7 +61,7 @@ The Document Builder provides a wizard-like interface to enter fields of a docum
 
 The File Upload option allows choosing a prepared file and uploading it. If using `/update` for the Request-Handler option, you will be limited to XML, CSV, and JSON.
 
-Other document types (e.g Word, PDF etc) can be indexed using the ExtractingRequestHandler (aka Solr Cell). You must modify the Request-Handler to `/update/extract`, which must be defined in your `solrconfig.xml` file with your desired defaults. You should also add `&literal.id` shown in the "Extracting Request Handler Params" field so the file chosen is given a unique id.
+Other document types (e.g., Word, PDF, etc.) can be indexed using the ExtractingRequestHandler (aka Solr Cell). You must modify the RequestHandler to `/update/extract`, which must be defined in your `solrconfig.xml` file with your desired defaults. You should also add `&literal.id` as shown in the "Extracting Request Handler Params" field so the file chosen is given a unique id.
 More information can be found in the section <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>.
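
To make the `/update/extract` flow above concrete, a hedged curl sketch; the core name, document id, and file path are assumptions:

[source,bash]
----
# Sketch: index a PDF through the ExtractingRequestHandler, supplying
# literal.id so the file is given a unique id (names are illustrative).
curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=doc1&commit=true" -F "myfile=@example.pdf"
----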
 
 == Solr Command

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index fd6223e..b641bfd 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -134,7 +134,7 @@ This section describes how to run a two-node SolrCloud cluster with no initial c
 
 NOTE: ZooKeeper does not support encrypted communication with clients like Solr. There are several related JIRA tickets where SSL support is being planned/worked on: https://issues.apache.org/jira/browse/ZOOKEEPER-235[ZOOKEEPER-235]; https://issues.apache.org/jira/browse/ZOOKEEPER-236[ZOOKEEPER-236]; https://issues.apache.org/jira/browse/ZOOKEEPER-1000[ZOOKEEPER-1000]; and https://issues.apache.org/jira/browse/ZOOKEEPER-2120[ZOOKEEPER-2120].
 
-Before you start any SolrCloud nodes, you must configure your solr cluster properties in ZooKeeper, so that Solr nodes know to communicate via SSL.
+Before you start any SolrCloud nodes, you must configure your Solr cluster properties in ZooKeeper, so that Solr nodes know to communicate via SSL.
 
 This section assumes you have created and started a single-node external ZooKeeper on port 2181 on localhost - see <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>>.
 
@@ -230,7 +230,7 @@ bin\solr.cmd -cloud -s cloud\node2 -z localhost:2181 -p 7574
 ====
 curl on OS X Mavericks (10.9) has degraded SSL support. For more information and workarounds to allow one-way SSL, see http://curl.haxx.se/mail/archive-2013-10/0036.html. curl on OS X Yosemite (10.10) is improved - 2-way SSL is possible - see http://curl.haxx.se/mail/archive-2014-10/0053.html.
 
-The curl commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<Convert the Certificate and Key to PEM Format for Use with curl,above>> for instructions on creating this file):
+The curl commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` parameter must be in PKCS12 format, and the file supplied with the `--cacert` parameter must contain only the CA certificate, and no key (see <<Convert the Certificate and Key to PEM Format for Use with curl,above>> for instructions on creating this file):
 
 [source,bash]
 curl -E solr-ssl.keystore.p12:secret --cacert solr-ssl.cacert.pem ...

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
index 3c6259f..1e98d86 100644
--- a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
@@ -69,7 +69,7 @@ Configuration and usage of PreAnalyzedField is documented in the section  <<work
 
 |StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
 
-|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis -- the number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute.
+|SortableTextField |A specialized version of TextField that allows (and defaults to) `docValues="true"` for sorting on the first 1024 characters of the original string prior to analysis. The number of characters used for sorting can be overridden with the `maxCharsForDocValues` attribute.
 
 |TextField |Text, usually multiple words or tokens.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/filter-descriptions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/filter-descriptions.adoc b/solr/solr-ref-guide/src/filter-descriptions.adoc
index 267e53b..3985e08 100644
--- a/solr/solr-ref-guide/src/filter-descriptions.adoc
+++ b/solr/solr-ref-guide/src/filter-descriptions.adoc
@@ -831,7 +831,7 @@ This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Managed Synonym Filter, which produces incorrect graphs for multi-token synonyms.
 
-Note: although this filter produces correct token graphs, it cannot consume an input token graph correctly.
+NOTE: Although this filter produces correct token graphs, it cannot consume an input token graph correctly.
 
 *Arguments:*
 
@@ -1437,7 +1437,7 @@ This filter maps single- or multi-token synonyms, producing a fully correct grap
 
 If you use this filter during indexing, you must follow it with a Flatten Graph Filter to squash tokens on top of one another like the Synonym Filter, because the indexer can't directly consume a graph. To get fully correct positional queries when your synonym replacements are multiple tokens, you should instead apply synonyms using this filter at query time.
 
-Note: although this filter produces correct token graphs, it cannot consume an input token graph correctly.
+NOTE: Although this filter produces correct token graphs, it cannot consume an input token graph correctly.
 
 *Factory class:* `solr.SynonymGraphFilterFactory`
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
index 2ac319e5..36aff7d 100644
--- a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
@@ -124,7 +124,7 @@ You can see how your collection is deployed across the cluster by visiting the c
 bin/solr healthcheck -c gettingstarted
 ----
 
-The healthcheck command gathers basic information about each replica in a collection, such as number of docs, current status (active, down, etc), and address (where the replica lives in the cluster).
+The healthcheck command gathers basic information about each replica in a collection, such as number of docs, current status (active, down, etc.), and address (where the replica lives in the cluster).
 
 Documents can now be added to SolrCloud using the <<post-tool.adoc#post-tool,Post Tool>>.
 
@@ -168,7 +168,7 @@ Adding a node to an existing cluster is a bit advanced and involves a little mor
 
 [source,bash]
 ----
-mkdir <solr.home for new solr node>
+mkdir <solr.home for new Solr node>
 cp <existing solr.xml path> <new solr.home>
 bin/solr start -cloud -s solr.home/solr -p <port num> -z <zk hosts string>
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/graph-traversal.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/graph-traversal.adoc b/solr/solr-ref-guide/src/graph-traversal.adoc
index 29d2aef..9056f38 100644
--- a/solr/solr-ref-guide/src/graph-traversal.adoc
+++ b/solr/solr-ref-guide/src/graph-traversal.adoc
@@ -253,7 +253,7 @@ Look closely at step 2. In large graphs, step 2 can lead to a very large travers
 . A large traversal that visits millions of unique nodes is slow and takes a lot of memory because cycle detection is tracked in memory.
 . High frequency nodes are also not useful in determining users with similar tastes. The content that fewer people have viewed provides a more precise recommendation.
 
-The `nodes` function has the `maxDocFreq` param to allow for filtering out high frequency nodes. The sample code below shows steps 1 and 2 of the recommendation:
+The `nodes` function has the `maxDocFreq` parameter to allow for filtering out high frequency nodes. The sample code below shows steps 1 and 2 of the recommendation:
 
 [source,plain]
 ----
@@ -433,7 +433,7 @@ Let's break down the expression above step-by-step.
 There is a filter applied to pull back only records matching "action:read". It returns the `articleID` for each record found. In other words, this expression returns all the articles "user1" has read.
 . The inner `nodes` expression operates over the articleIDs returned from step 1. It takes each `articleID` found and searches it against the `articleID` field.
 +
-Note that it skips high frequency nodes using the `maxDocFreq` param to filter out articles that appear over 10,000 times in the logs. It gathers userIDs and aggregates the counts for each user. This step finds the users that have read the same articles that "user1" has read and counts how many of the same articles they have read.
+Note that it skips high frequency nodes using the `maxDocFreq` parameter to filter out articles that appear over 10,000 times in the logs. It gathers userIDs and aggregates the counts for each user. This step finds the users that have read the same articles that "user1" has read and counts how many of the same articles they have read.
 . The inner `top` expression ranks the users emitted from step 2. It will emit the top 30 users who have the most overlap with user1's reading list.
 . The outer `nodes` expression gathers the reading list for the users emitted from step 3. It counts the articleIDs that are gathered.
 +
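
A hedged sketch of how steps 2 and 3 of this walkthrough might be submitted as a streaming expression; the `logs` collection and field names come from the discussion above, but the expression itself is illustrative rather than the guide's verbatim sample:

[source,bash]
----
# Sketch: gather the users who read the same articles as user1, skipping
# high-frequency articles via maxDocFreq (collection and fields illustrative).
curl --data-urlencode 'expr=top(n="30", sort="count(*) desc",
        nodes(logs,
              search(logs, q="userID:user1 AND action:read", fl="articleID", sort="articleID asc"),
              walk="articleID->articleID",
              maxDocFreq="10000",
              gather="userID",
              count(*)))' \
     "http://localhost:8983/solr/logs/stream"
----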

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/json-facet-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/json-facet-api.adoc b/solr/solr-ref-guide/src/json-facet-api.adoc
index b823d53..0834d46 100644
--- a/solr/solr-ref-guide/src/json-facet-api.adoc
+++ b/solr/solr-ref-guide/src/json-facet-api.adoc
@@ -324,8 +324,8 @@ Example:
 ----
 The value of `filter` can be a single query to treat as a filter, or a list of filter queries.  Each one can be:
 
-* a string containing a query in solr query syntax
-* a reference to a request parameter containing solr query syntax, of the form: `{param : <request_param_name>}`
+* a string containing a query in Solr query syntax
+* a reference to a request parameter containing Solr query syntax, of the form: `{param : <request_param_name>}`
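
A hedged sketch showing both filter forms in a single request, using a facet `domain` filter as one place a `filter` can appear; the collection, fields, and the `fq1` parameter name are assumptions:

[source,bash]
----
# Sketch: a facet domain filter given both as a query string and as a
# reference to the request parameter fq1 (all names illustrative).
curl "http://localhost:8983/solr/techproducts/query?fq1=inStock:true" -d '
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "domain": { "filter": [ "popularity:[5 TO *]", { "param": "fq1" } ] }
    }
  }
}'
----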
 
 [[AggregationFunctions]]
 == Aggregation Functions

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/language-analysis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index 8d6f734..ef476d7 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -511,7 +511,7 @@ This filter replaces the text of each token with its lemma. Both a dictionary-ba
 
 Either `dictionary` or `lemmatizerModel` must be provided, and both may be provided - see the examples below:
 
-`dictionary`:: (optional) The path of a lemmatization dictionary file. This path may be an absolute path, or path relative to the Solr config directory. The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g. `wrote[tab]write[tab]VBD`.
+`dictionary`:: (optional) The path of a lemmatization dictionary file. This path may be an absolute path, or path relative to the Solr config directory. The dictionary file must be encoded as UTF-8, with one entry per line, in the form `word[tab]lemma[tab]part-of-speech`, e.g., `wrote[tab]write[tab]VBD`.
 
 `lemmatizerModel`:: (optional) The path of a language-specific OpenNLP lemmatizer model file. This path may be an absolute path, or path relative to the Solr config directory.
 
@@ -1772,4 +1772,4 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
 </analyzer>
 ----
 
-The Morfologik `dictionary` param value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
+The Morfologik `dictionary` parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index b44e85a..f0b3811 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -370,7 +370,7 @@ Read more about model evolution in the <<LTR Lifecycle>> section of this page.
 
 === Training Example
 
-Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the solr binary release).
+Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the Solr binary release).
 
 == Installation of LTR
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc b/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
index c351cfb..9689d27 100644
--- a/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
+++ b/solr/solr-ref-guide/src/major-changes-in-solr-7.adoc
@@ -168,7 +168,7 @@ The following changes were made in SolrJ.
 * `HttpClientUtil` now allows configuring `HttpClient` instances via `SolrHttpClientBuilder` rather than an `HttpClientConfigurer`. Use of the env variable `SOLR_AUTHENTICATION_CLIENT_CONFIGURER` no longer works; please use `SOLR_AUTHENTICATION_CLIENT_BUILDER`.
 * `SolrClient` implementations now use their own internal configuration for socket timeouts, connect timeouts, and allowing redirects rather than what is set as the default when building the `HttpClient` instance. Use the appropriate setters on the `SolrClient` instance.
 * `HttpSolrClient#setAllowCompression` has been removed and compression must be enabled as a constructor parameter.
-* `HttpSolrClient#setDefaultMaxConnectionsPerHost` and `HttpSolrClient#setMaxTotalConnections` have been removed. These now default very high and can only be changed via param when creating an HttpClient instance.
+* `HttpSolrClient#setDefaultMaxConnectionsPerHost` and `HttpSolrClient#setMaxTotalConnections` have been removed. These now default very high and can only be changed via a parameter when creating an HttpClient instance.
 
 === Other Deprecations and Removals
 * The `defaultOperator` parameter in the schema is no longer supported. Use the `q.op` parameter instead. This option had been deprecated for several releases. See the section <<the-standard-query-parser.adoc#standard-query-parser-parameters,Standard Query Parser Parameters>> for more information.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/merging-indexes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/merging-indexes.adoc b/solr/solr-ref-guide/src/merging-indexes.adoc
index 2bcd493..740ee33 100644
--- a/solr/solr-ref-guide/src/merging-indexes.adoc
+++ b/solr/solr-ref-guide/src/merging-indexes.adoc
@@ -38,7 +38,7 @@ java -cp $SOLR/server/solr-webapp/webapp/WEB-INF/lib/lucene-core-VERSION.jar:$SO
 ----
 +
 This will create a new index at `/path/to/newindex` that contains both index1 and index2.
-. Copy this new directory to the location of your application's solr index (move the old one aside first, of course) and start Solr.
+. Copy this new directory to the location of your application's Solr index (move the old one aside first, of course) and start Solr.
 
 == Using CoreAdmin
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/meta-docs/publish.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/meta-docs/publish.adoc b/solr/solr-ref-guide/src/meta-docs/publish.adoc
index d97754e..5ad35b2 100644
--- a/solr/solr-ref-guide/src/meta-docs/publish.adoc
+++ b/solr/solr-ref-guide/src/meta-docs/publish.adoc
@@ -200,7 +200,7 @@ Go to the checkout directory where you have built the Guide and push the documen
 [source,bash]
 svn -m "Add Ref Guide for Solr 6.5" import <checkoutroot>/solr/build/solr-ref-guide/html-site https://svn.apache.org/repos/infra/websites/production/lucene/content/solr/guide/6_5
 
-Confirm you can browse to these URLs manually, and especially that solr javadocs link back to lucene's correctly. Example:
+Confirm you can browse to these URLs manually, and especially that Solr javadocs link back to Lucene's correctly. Example:
 https://lucene.apache.org/solr/guide/6_5
 
 ==== Step 3: Push Staging extpaths.txt to Production

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/metrics-reporting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/metrics-reporting.adoc b/solr/solr-ref-guide/src/metrics-reporting.adoc
index d9b4c5f..936adcb 100644
--- a/solr/solr-ref-guide/src/metrics-reporting.adoc
+++ b/solr/solr-ref-guide/src/metrics-reporting.adoc
@@ -64,7 +64,7 @@ This registry is returned at `solr.node` and includes the following information.
 
 The <<Core Level Metrics,Core (SolrCore) Registry>> includes `solr.core.<collection>`, one for each core. When making requests with the <<Metrics API>>, you can specify `&group=core` to limit to only these metrics.
 
-* all common RequestHandler-s report: request timers / counters, timeouts, errors. Handlers that support
+* all common RequestHandlers report: request timers / counters, timeouts, errors. Handlers that
   process distributed shard requests also report `shardRequests` sub-counters for each type of distributed
   request.
 * <<Index Merge Metrics,index-level events>>: meters for minor / major merges, number of merged docs, number of deleted docs, gauges for currently running merges and their size.
@@ -77,7 +77,7 @@ This registry is returned at `solr.jetty` and includes the following information
 
 * threads and pools,
 * connection and request timers,
-* meters for responses by HTTP class (1xx, 2xx, etc)
+* meters for responses by HTTP class (1xx, 2xx, etc.)
 
 In the future, metrics will be added for shard leaders and cluster nodes, including aggregations from per-core metrics.
 
@@ -318,7 +318,7 @@ When `true` use multicast UDP communication, otherwise use UDP unicast. Default
 These two reporters can be used for aggregation of metrics reported from replicas to shard leader (the "shard" reporter),
 and from any local registry to the Overseer node.
 
-Metric reports from these reporters are periodically sent as batches of regular SolrInputDocument-s,
+Metric reports from these reporters are periodically sent as batches of regular SolrInputDocuments,
 so they can be processed by any Solr handler. By default they are sent to the `/admin/metrics/collector` handler
 (an instance of `MetricsCollectorHandler`) on a target node, which aggregates these reports and keeps them in
 additional local metric registries so that they can be accessed using the `/admin/metrics` handler,
@@ -514,18 +514,18 @@ The `admin/metrics` endpoint provides access to all the metrics for all metric g
 
 A few query parameters are available to limit your request to only certain metrics:
 
-group:: The metric group to retrieve. The default is `all` to retrieve all metrics for all groups. Other possible values are: `jvm`, `jetty`, `node`, and `core`. More than one group can be specified in a request; multiple group names should be separated by a comma.
+`group`:: The metric group to retrieve. The default is `all` to retrieve all metrics for all groups. Other possible values are: `jvm`, `jetty`, `node`, and `core`. More than one group can be specified in a request; multiple group names should be separated by a comma.
 
-type:: The type of metric to retrieve. The default is `all` to retrieve all metric types. Other possible values are `counter`, `gauge`, `histogram`, `meter`, and `timer`. More than one type can be specified in a request; multiple types should be separated by a comma.
+`type`:: The type of metric to retrieve. The default is `all` to retrieve all metric types. Other possible values are `counter`, `gauge`, `histogram`, `meter`, and `timer`. More than one type can be specified in a request; multiple types should be separated by a comma.
 
-prefix:: The first characters of metric name that will filter the metrics returned to those starting with the provided string. It can be combined with `group` and/or `type` parameters. More than one prefix can be specified in a request; multiple prefixes should be separated by a comma. Prefix matching is also case-sensitive.
+`prefix`:: The first characters of metric name that will filter the metrics returned to those starting with the provided string. It can be combined with `group` and/or `type` parameters. More than one prefix can be specified in a request; multiple prefixes should be separated by a comma. Prefix matching is also case-sensitive.
 
-regex:: A regular expression matching metric names. Note: dot separators in metric names must be escaped, eg.
+`regex`:: A regular expression matching metric names. Note that dot separators in metric names must be escaped, e.g.,
 `QUERY\./select\..*` is a valid regex that matches all metrics with the `QUERY./select.` prefix.
 
-property:: Allows requesting only this metric from any compound metric. Multiple `property` parameters can be combined to act as an OR request. For example, to only get the 99th and 999th percentile values from all metric types and groups, you can add `&property=p99_ms&property=p999_ms` to your request. This can be combined with `group`, `type`, and `prefix` as necessary.
+`property`:: Allows requesting only this metric from any compound metric. Multiple `property` parameters can be combined to act as an OR request. For example, to only get the 99th and 999th percentile values from all metric types and groups, you can add `&property=p99_ms&property=p999_ms` to your request. This can be combined with `group`, `type`, and `prefix` as necessary.
 
-key:: fully-qualified metric name, which specifies one concrete metric instance (parameter can be
+`key`:: The fully-qualified metric name, which specifies one concrete metric instance (the parameter can be
 specified multiple times to retrieve multiple concrete metrics). *NOTE: when this parameter is used, other
 selection methods listed above are ignored.* A fully-qualified name consists of the registry name, a colon, and the
 metric name, with an optional colon and metric property. Colons in names can be escaped using the backslash `\`
@@ -535,10 +535,11 @@ character. Examples:
 * `key=solr.core.collection1:QUERY./select.requestTimes:max_ms`
 * `key=solr.jvm:system.properties:user.name`
 
-compact:: When false, a more verbose format of the response will be returned. Instead of a response like this:
+`compact`:: When false, a more verbose format of the response will be returned. Instead of a response like this:
 +
 [source,json]
-  "metrics": [
+----
+{"metrics": [
     "solr.core.gettingstarted",
     {
       "CORE.aliases": {
@@ -560,12 +561,14 @@ compact:: When false, a more verbose format of the response will be returned. In
         "value": "2017-03-14T11:43:23.822Z"
       }
     }
-  ]
+  ]}
+----
 +
 The response will look like this:
 +
 [source,json]
-  "metrics": [
+----
+{"metrics": [
     "solr.core.gettingstarted",
     {
       "CORE.aliases": [
@@ -577,7 +580,8 @@ The response will look like this:
       "CORE.refCount": 1,
       "CORE.startTime": "2017-03-14T11:43:23.822Z"
     }
-  ]
+  ]}
+----
 
 Like other request handlers, the Metrics API can also take the `wt` parameter to define the output format.
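
Pulling the parameters above together, a couple of hedged request sketches; the hostname and core name are assumptions, while the `key` value is the one quoted in the examples above:

[source,bash]
----
# Sketch: only core-group timer metrics whose names start with QUERY.
curl "http://localhost:8983/solr/admin/metrics?group=core&type=timer&prefix=QUERY"

# Sketch: one fully-qualified metric, narrowed to a single property.
curl "http://localhost:8983/solr/admin/metrics?key=solr.core.collection1:QUERY./select.requestTimes:max_ms"
----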
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/near-real-time-searching.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/near-real-time-searching.adoc b/solr/solr-ref-guide/src/near-real-time-searching.adoc
index fe91094..00724c5 100644
--- a/solr/solr-ref-guide/src/near-real-time-searching.adoc
+++ b/solr/solr-ref-guide/src/near-real-time-searching.adoc
@@ -51,7 +51,7 @@ true|false, whether to make documents visible for search. For NRT applications t
 
 Transaction logs are a "rolling window" of updates since the last hard commit. The current transaction log is closed and a new one opened each time any variety of hard commit occurs. Soft commits have no effect on the transaction log.
 
-When tlogs are enabled, documents being added to the index are written to the tlog before the indexing call returns to the client. In the event of an un-graceful shutdown (power loss, JVM crash, `kill -9` etc) any documents written to the tlog but not yet committed with a hard commit when Solr was stopped are replayed on startup. Therefore the data is not lost.
+When tlogs are enabled, documents being added to the index are written to the tlog before the indexing call returns to the client. In the event of an un-graceful shutdown (power loss, JVM crash, `kill -9`, etc.) any documents written to the tlog but not yet committed with a hard commit when Solr was stopped are replayed on startup. Therefore the data is not lost.
 
 When Solr is shut down gracefully (using the `bin/solr stop` command) Solr will close the tlog file and index segments so no replay will be necessary on startup.
 
@@ -85,4 +85,4 @@ TIP: For extremely high bulk indexing, especially for the initial load if there
 
 == Advanced Commit Options
 
-All varieties of commits can be invoked from a SolrJ client or via a URL. The usual recommendation is to _not_ call commits externally. For those cases where it is desirable, see <<uploading-data-with-index-handlers.adoc#xml-update-commands,Update Commands>>. These options are listed for XML update commands that can be issued from a browser or curl etc and the equivalents are available from a SolrJ client.
+All varieties of commits can be invoked from a SolrJ client or via a URL. The usual recommendation is to _not_ call commits externally. For those cases where it is desirable, see <<uploading-data-with-index-handlers.adoc#xml-update-commands,Update Commands>>. These options are listed for XML update commands that can be issued from a browser or curl, etc., and the equivalents are available from a SolrJ client.
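
For completeness, a hedged sketch of what "issued from a browser or curl" looks like in practice; the core name is an assumption:

[source,bash]
----
# Sketch: trigger an explicit hard commit from the command line; use
# softCommit=true instead for a soft commit (core name illustrative).
curl "http://localhost:8983/solr/techproducts/update?commit=true"
----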

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/other-parsers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index 670aefd..14bed10 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -424,7 +424,7 @@ http://localhost:8983/solr/my_graph/query?fl=id&q={!graph+from=in_edge+to=out_ed
 }
 ----
 
-The examples shown so far have all used a query for a single document (`"id:A"`) as the root node for the graph traversal, but any query can be used to identify multiple documents to use as root nodes. The next example demonstrates using the `maxDepth` param to find all nodes that are at most one edge away from an root node with a value in the `foo` field less then or equal to 10:
+The examples shown so far have all used a query for a single document (`"id:A"`) as the root node for the graph traversal, but any query can be used to identify multiple documents to use as root nodes. The next example demonstrates using the `maxDepth` parameter to find all nodes that are at most one edge away from a root node with a value in the `foo` field less than or equal to 10:
 
 [source,text]
 ----
@@ -466,7 +466,7 @@ curl -H 'Content-Type: application/json' 'http://localhost:8983/solr/alt_graph/u
   ]'
 ----
 
-With this alternative document model, all of the same queries demonstrated above can still be executed, simply by changing the "```from```" param to replace the "```in_edge```" field with the "```id```" field:
+With this alternative document model, all of the same queries demonstrated above can still be executed, simply by changing the "```from```" parameter to replace the "```in_edge```" field with the "```id```" field:
 
 [source,text]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/other-schema-elements.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/other-schema-elements.adoc b/solr/solr-ref-guide/src/other-schema-elements.adoc
index d662dce..54224ec 100644
--- a/solr/solr-ref-guide/src/other-schema-elements.adoc
+++ b/solr/solr-ref-guide/src/other-schema-elements.adoc
@@ -92,4 +92,4 @@ In the example above `IBSimilarityFactory` (using the Information-Based model) w
 
 If `SchemaSimilarityFactory` is explicitly declared without configuring a `defaultSimFromFieldType`, then `BM25Similarity` is implicitly used as the default.
 
-In addition to the various factories mentioned on this page, there are several other similarity implementations that can be used such as the `SweetSpotSimilarityFactory`, `ClassicSimilarityFactory`, etc.... For details, see the Solr Javadocs for the {solr-javadocs}/solr-core/org/apache/solr/schema/SimilarityFactory.html[similarity factories].
+In addition to the various factories mentioned on this page, there are several other similarity implementations that can be used such as the `SweetSpotSimilarityFactory`, `ClassicSimilarityFactory`, etc. For details, see the Solr Javadocs for the {solr-javadocs}/solr-core/org/apache/solr/schema/SimilarityFactory.html[similarity factories].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/pagination-of-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/pagination-of-results.adoc b/solr/solr-ref-guide/src/pagination-of-results.adoc
index bb21e03..f3deac7 100644
--- a/solr/solr-ref-guide/src/pagination-of-results.adoc
+++ b/solr/solr-ref-guide/src/pagination-of-results.adoc
@@ -39,7 +39,7 @@ function fetch_solr_page($page_number, $rows_per_page) {
 
 === How Basic Pagination is Affected by Index Updates
 
-The `start` param specified in a request to Solr indicates an *absolute* "offset" in the complete sorted list of matches that the client wants Solr to use as the beginning of the current "page".
+The `start` parameter specified in a request to Solr indicates an *absolute* "offset" in the complete sorted list of matches that the client wants Solr to use as the beginning of the current "page".
 
 If an index modification (such as adding or removing documents) which affects the sequence of ordered documents matching a query occurs in between two requests from a client for subsequent pages of results, then it is possible that these modifications can result in the same document being returned on multiple pages, or documents being "skipped" as the result set shrinks or grows.
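
To make the absolute-offset behavior concrete, here is a sketch of two page requests (the collection name and sort are illustrative):

[source,bash]
----
# Page 1: ten rows starting at absolute offset 0
curl "http://localhost:8983/solr/my_collection/select?q=*:*&sort=id+asc&rows=10&start=0"

# Page 3: the same query at absolute offset 20; any documents added or removed
# between the two requests can shift which documents occupy offsets 20-29
curl "http://localhost:8983/solr/my_collection/select?q=*:*&sort=id+asc&rows=10&start=20"
----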
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/post-tool.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/post-tool.adoc b/solr/solr-ref-guide/src/post-tool.adoc
index c692995..3d736b5 100644
--- a/solr/solr-ref-guide/src/post-tool.adoc
+++ b/solr/solr-ref-guide/src/post-tool.adoc
@@ -125,7 +125,7 @@ Index all JSON files into `gettingstarted`.
 bin/post -c gettingstarted *.json
 ----
 
-=== Indexing Rich Documents (PDF, Word, HTML, etc)
+=== Indexing Rich Documents (PDF, Word, HTML, etc.)
 
 Index a PDF file into `gettingstarted`.
 
@@ -150,7 +150,7 @@ bin/post -c gettingstarted -filetypes ppt,html afolder/
 
 === Indexing to a Password Protected Solr (Basic Auth)
 
-Index a pdf as the user solr with password `SolrRocks`:
+Index a PDF as the user "solr" with password "SolrRocks":
 
 [source,bash]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
index 62c6d9e..1aee1a6 100644
--- a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
+++ b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
@@ -218,7 +218,7 @@ curl -X POST -H 'Content-type: application/json' -d '{"set-property": {"name":"a
 ====
 --
 
-Re-enable automatic addition of replicas (for those collections created with `autoAddReplica=true`) by unsetting the `autoAddReplicas` cluster property. When no `val` param is provided, the cluster property is unset:
+Re-enable automatic addition of replicas (for those collections created with `autoAddReplica=true`) by unsetting the `autoAddReplicas` cluster property. When no `val` parameter is provided, the cluster property is unset:
 
 [.dynamic-tabs]
 --

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
index a88d977..09d62fd 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
@@ -48,16 +48,16 @@ image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_5.png[image,width=839,
 
 == JDBC Interpreter Copy Sheet
 
-To facilitate easy copying the parameters mentioned in the screenshots, here is a consolidated list
+To facilitate easy copying of the parameters mentioned in the screenshots, here is a consolidated list:
 
-[source,text]
+[source,text,subs=attributes]
 ----
 Name : Solr
 Interpreter : jdbc
 default.url : jdbc:solr://SOLR_ZK_CONNECTION_STRING?collection=<collection_name>
 default.driver : org.apache.solr.client.solrj.io.sql.DriverImpl
 default.user : solr
-dependency : org.apache.solr:solr-solrj:-{solr-docs-version}.0
+dependency : org.apache.solr:solr-solrj:{solr-docs-version}.0
 ----
 
 == Query with the Notebook

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/solr-tutorial.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-tutorial.adoc b/solr/solr-ref-guide/src/solr-tutorial.adoc
index 089c5e5..4abb130 100644
--- a/solr/solr-ref-guide/src/solr-tutorial.adoc
+++ b/solr/solr-ref-guide/src/solr-tutorial.adoc
@@ -267,7 +267,7 @@ Solr has very powerful search options, and this tutorial won't be able to cover
 
 ==== Search for a Single Term
 
-To search for a term, enter it as the `q` param value in the Solr Admin UI Query screen, replacing `\*:*` with the term you want to find.
+To search for a term, enter it as the `q` parameter value in the Solr Admin UI Query screen, replacing `\*:*` with the term you want to find.
 
 Enter "foundation" and hit btn:[Execute Query] again.
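
The same search can also be issued directly over HTTP; a minimal sketch (the collection name is an assumption based on the tutorial's earlier steps):

[source,bash]
----
# Search the tutorial collection for the single term "foundation"
curl "http://localhost:8983/solr/techproducts/select?q=foundation"
----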
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc b/solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
index 77ab9c9..5571377 100644
--- a/solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
@@ -32,30 +32,32 @@ The following parameters are configurable:
 A comma-separated list of collection names. If this list is not empty then
 the computed operations will only calculate collection operations that affect
 listed collections and ignore any other collection operations for collections
-not listed here (please note that non-collection operations are not affected by this).
+not listed here. Note that non-collection operations are not affected by this.
 
 Example configuration:
 
 [source,json]
+----
 {
- 'set-trigger' : {
-  'name' : 'node_added_trigger',
-  'event' : 'nodeAdded',
-  'waitFor' : '1s',
-  'enabled' : true,
-  'actions' : [
+ "set-trigger" : {
+  "name" : "node_added_trigger",
+  "event" : "nodeAdded",
+  "waitFor" : "1s",
+  "enabled" : true,
+  "actions" : [
    {
-    'name' : 'compute_plan',
-    'class' : 'solr.ComputePlanAction',
-    'collections' : 'test1,test2',
+    "name" : "compute_plan",
+    "class" : "solr.ComputePlanAction",
+    "collections" : "test1,test2"
    },
    {
-    'name' : 'execute_plan',
-    'class' : 'solr.ExecutePlanAction',
+    "name" : "execute_plan",
+    "class" : "solr.ExecutePlanAction"
    }
   ]
  }
 }
+----
 
 In this example only collections `test1` and `test2` will be potentially
 replicated or moved to an added node; other collections will be ignored even
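
A configuration like the one above would typically be registered by POSTing it to the Autoscaling Write API; a hedged sketch (the local URL is an assumption, and `trigger.json` is a hypothetical file holding the `set-trigger` command shown above):

[source,bash]
----
# Register the trigger configuration saved locally as trigger.json
curl -X POST -H 'Content-type:application/json' \
  --data-binary @trigger.json \
  "http://localhost:8983/solr/admin/autoscaling"
----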

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/solrcloud-autoscaling-triggers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-autoscaling-triggers.adoc b/solr/solr-ref-guide/src/solrcloud-autoscaling-triggers.adoc
index a7a8d52..5ef4023 100644
--- a/solr/solr-ref-guide/src/solrcloud-autoscaling-triggers.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-autoscaling-triggers.adoc
@@ -32,23 +32,27 @@ currently at fixed interval of 1 second between each execution (not every execut
 == Event Types
 Currently the following event types (and corresponding trigger implementations) are defined:
 
-* `nodeAdded` - generated when a new node joins the cluster
-* `nodeLost` - generated when a node leaves the cluster
-* `metric` - generated when the configured metric crosses a configured lower or upper threshold value
-* `searchRate` - generated when the 1 min average search rate exceeds configured upper threshold
+* `nodeAdded`: generated when a new node joins the cluster
+* `nodeLost`: generated when a node leaves the cluster
+* `metric`: generated when the configured metric crosses a configured lower or upper threshold value
+* `searchRate`: generated when the 1-minute average search rate exceeds the configured upper threshold
 
 Events are not necessarily generated immediately after the corresponding state change occurred - the
 maximum rate of events is controlled by the `waitFor` configuration parameter (see below).
 
 The following properties are common to all event types:
 
-* `id` - (string) A unique time-based event id.
-* `eventType` - (string) The type of event.
-* `source` - (string) The name of the trigger that produced this event.
-* `eventTime` - (long) Unix time when the condition that caused this event occurred. For example, for a
+`id`:: (string) A unique time-based event id.
+
+`eventType`:: (string) The type of event.
+
+`source`:: (string) The name of the trigger that produced this event.
+
+`eventTime`:: (long) Unix time when the condition that caused this event occurred. For example, for a
 `nodeAdded` event this will be the time when the node was added and not when the event was actually
 generated, which may significantly differ due to the rate limits set by `waitFor`.
-* `properties` - (map, optional) Any additional properties. Currently includes `nodeName` property that
+
+`properties`:: (map, optional) Any additional properties. Currently includes `nodeName` property that
 indicates the node that was lost or added.
 
 == Auto Add Replicas Trigger
@@ -61,19 +65,25 @@ You can see the section <<solrcloud-autoscaling-auto-add-replicas.adoc#solrcloud
 
 == Metric Trigger
 
-The metric trigger can be used to monitor any metric exposed by the Metrics API. It supports lower and upper threshold configurations as well as optional filters to limit operation to specific collection, shards and nodes.
+The metric trigger can be used to monitor any metric exposed by the <<metrics-reporting.adoc#metrics-reporting,Metrics API>>. It supports lower and upper threshold configurations as well as optional filters to limit operation to specific collections, shards, and nodes.
 
 This trigger supports the following configuration:
 
-* `metric` - (string, required) The metric property name to be watched in the format metrics:group:prefix e.g. `metric:solr.node:CONTAINER.fs.coreRoot.usableSpace`
-* `below` - (double, optional) The lower threshold for the metric value. The trigger produces a metric breached event if the metric's value falls below this value
-* `above` - (double, optional) The upper threshold for the metric value. The trigger produces a metric breached event if the metric's value crosses above this value
-* `collection` - (string, optional) The collection used to limit the nodes on which the given metric is watched. When the metric is breached, trigger actions will limit operations to this collection only.
-* `shard` - (string, optional) The shard used to limit the nodes on which the given metric is watched. When the metric is breached, trigger actions will limit operations to this shard only.
-* `node` - (string, optional) The node on which the given metric is watched. Trigger actions will operate on this node only.
-* `preferredOperation` (string, optional, defaults to `MOVEREPLICA`) - The operation to be performed in response to an event generated by this trigger. By default, replicas will be moved from the hot node to others. The only other supported value is `ADDREPLICA` which adds more replicas if the metric is breached.
+`metric`:: (string, required) The metric property name to be watched in the format `metrics:group:prefix`, e.g., `metric:solr.node:CONTAINER.fs.coreRoot.usableSpace`.
+
+`below`:: (double, optional) The lower threshold for the metric value. The trigger produces a metric breached event if the metric's value falls below this value.
+
+`above`:: (double, optional) The upper threshold for the metric value. The trigger produces a metric breached event if the metric's value crosses above this value.
 
-.Example: Metric Trigger that fires when total usable space on a node having replicas of "mycollection" falls below 100GB
+`collection`:: (string, optional) The collection used to limit the nodes on which the given metric is watched. When the metric is breached, trigger actions will limit operations to this collection only.
+
+`shard`:: (string, optional) The shard used to limit the nodes on which the given metric is watched. When the metric is breached, trigger actions will limit operations to this shard only.
+
+`node`:: (string, optional) The node on which the given metric is watched. Trigger actions will operate on this node only.
+
+`preferredOperation`:: (string, optional, defaults to `MOVEREPLICA`) The operation to be performed in response to an event generated by this trigger. By default, replicas will be moved from the hot node to others. The only other supported value is `ADDREPLICA` which adds more replicas if the metric is breached.
+
+.Example: a metric trigger that fires when total usable space on a node having replicas of "mycollection" falls below 100GB
 [source,json]
 ----
 {
@@ -88,31 +98,35 @@ This trigger supports the following configuration:
 }
 ----
 
-== Search Rate trigger
+== Search Rate Trigger
 
-The search rate trigger can be used for monitoring 1-min average search rates in a selected
+The search rate trigger can be used for monitoring 1-minute average search rates in a selected
 collection, and to request that either replicas be moved to different nodes or new replicas be added
-to reduce the per-replica search rate for a collection / shard with search rate hot spots.
-(Note: future versions of Solr will also be able to automatically remove some replicas
+to reduce the per-replica search rate for a collection or shard with search rate hot spots.
+(Future versions of Solr will also be able to automatically remove some replicas
 when search rate falls below the configured lower threshold).
 
 This trigger supports the following configuration:
 
-* `collection` - (string, optional) collection name to monitor, or any collection if empty
-* `shard` - (string, optional) shard name within the collection (requires `collection` to be set), or any shard if empty
-* `node` - (string, optional) node name to monitor, or any if empty
-* `handler` - (string, optional) handler name whose request rate represents the search rate
+`collection`:: (string, optional) collection name to monitor, or any collection if empty.
+
+`shard`:: (string, optional) shard name within the collection (requires `collection` to be set), or any shard if empty.
+
+`node`:: (string, optional) node name to monitor, or any if empty.
+
+`handler`:: (string, optional) handler name whose request rate represents the search rate
 (default is `/select`). This name is used for creating the full metric key, in
-this case `solr.core.<coreName>:QUERY./select.requestTimes:1minRate`
-* `rate` - (double, required) the upper bound for the request rate metric value.
+this case `solr.core.<coreName>:QUERY./select.requestTimes:1minRate`.
+
+`rate`:: (double, required) the upper bound for the request rate metric value.
 
 If a rate is exceeded for a node (but not for individual replicas placed on this node) then
 the action requested by this event is to move one replica (with the highest rate) to another
-node. If a rate is exceeded for a collection / shard then the action requested is to add some
+node. If a rate is exceeded for a collection or shard then the action requested is to add some
 replicas - currently at least 1 and at most 3, in proportion to how far the current request rate
 exceeds the configured threshold rate.
 
-.Example: a trigger configuration that monitors collection "test" and adds new replicas if 1-min average request rate of "/select" handler exceeds 100 reqs/sec:
+.Example: a search rate trigger that monitors collection "test" and adds new replicas if the 1-minute average request rate of the "/select" handler exceeds 100 requests/sec
 [source,json]
 ----
 {
@@ -144,22 +158,29 @@ Trigger configurations are managed using the Autoscaling Write API and the comma
 
 Trigger configuration consists of the following properties:
 
-* `name` - (string, required) A unique trigger configuration name.
-* `event` - (string, required) One of the predefined event types (`nodeAdded` or `nodeLost`).
-* `actions` - (list of action configs, optional) An ordered list of actions to execute when event is fired.
-* `waitFor` - (string, optional) The time to wait between generating new events, as an integer number immediately followed by unit symbol, one of `s` (seconds), `m` (minutes), or `h` (hours). Default is `0s`.
-* `enabled` - (boolean, optional) When `true` the trigger is enabled. Default is `true`.
-* Additional implementation-specific properties may be provided.
+`name`:: (string, required) A unique trigger configuration name.
+
+`event`:: (string, required) One of the predefined event types (`nodeAdded` or `nodeLost`).
+
+`actions`:: (list of action configs, optional) An ordered list of actions to execute when the event is fired.
+
+`waitFor`:: (string, optional) The time to wait between generating new events, as an integer number immediately followed by a unit symbol, one of `s` (seconds), `m` (minutes), or `h` (hours). Default is `0s`.
+
+`enabled`:: (boolean, optional) When `true` the trigger is enabled. Default is `true`.
+
+Additional implementation-specific properties may be provided.
 
 Action configuration consists of the following properties:
 
-* `name` - (string, required) A unique name of the action configuration.
-* `class` - (string, required) The action implementation class.
-* Additional implementation-specific properties may be provided
+`name`:: (string, required) A unique name of the action configuration.
+
+`class`:: (string, required) The action implementation class.
+
+Additional implementation-specific properties may be provided.
 
-If the Action configuration is omitted, then by default, the `ComputePlanAction` and the `ExecutePlanAction` are automatically added to the trigger configuration.
+If the `actions` configuration is omitted, then by default, the `ComputePlanAction` and the `ExecutePlanAction` are automatically added to the trigger configuration.
 
-.Example: adding or updating a trigger for `nodeAdded` events 
+.Example: adding or updating a trigger for `nodeAdded` events
 [source,json]
 ----
 {

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/spell-checking.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
index 9936ece..7911b84 100644
--- a/solr/solr-ref-guide/src/spell-checking.adoc
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -192,7 +192,7 @@ If set to `true`, this parameter reloads the spellchecker. The results depend on
 This parameter specifies the maximum number of suggestions that the spellchecker should return for a term. If this parameter isn't set, the value defaults to `1`. If the parameter is set but not assigned a number, the value defaults to `5`. If the parameter is set to a positive integer, that number becomes the maximum number of suggestions returned by the spellchecker.
 
 `spellcheck.queryAnalyzerFieldtype`::
-This field type's analyzer is used by the QueryConverter to tokenize the value for "q" parameter. The field specified by this parameter should do minimal transformations, it's usually a best practice to avoid types that aggressively stem or ngram for instance.
+A field type from Solr's schema. The analyzer configured for the provided field type is used by the QueryConverter to tokenize the value of the `q` parameter. The field type specified by this parameter should do minimal transformations. It's usually a best practice to avoid types that aggressively stem or NGram, for instance, since those types of analysis can throw off spell checking.
 
 `spellcheck.onlyMorePopular`::
 If `true`, Solr will return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
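
A sketch of these parameters in a request (the handler name `/spell` and the collection are assumptions; your configuration may differ):

[source,bash]
----
# Ask the spellchecker for up to 5 suggestions for a misspelled term
curl "http://localhost:8983/solr/my_collection/spell?q=delll&spellcheck=true&spellcheck.count=5"
----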

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/stream-decorator-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-decorator-reference.adoc b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
index cfdfb11..61608c8 100644
--- a/solr/solr-ref-guide/src/stream-decorator-reference.adoc
+++ b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
@@ -549,7 +549,7 @@ topicQueryParams.put("q","hello");  // The query for the topic
 topicQueryParams.put("rows", "500"); // How many rows to fetch during each run
 topicQueryParams.put("fl", "id,title"); // The field list to return with the documents
 
-TopicStream topicStream = new TopicStream(zkHost,        // Host address for the zookeeper service housing the collections
+TopicStream topicStream = new TopicStream(zkHost,        // Host address for the ZooKeeper service housing the collections
                                          "checkpoints",  // The collection to store the topic checkpoints
                                          "topicData",    // The collection to query for the topic records
                                          "topicId",      // The id of the topic
@@ -1088,7 +1088,7 @@ See section in <<graph-traversal.adoc#using-the-scorenodes-function-to-make-a-re
 
 == select
 
-The `select` function wraps a streaming expression and outputs tuples containing a subset or modified set of fields from the incoming tuples. The list of fields included in the output tuple can contain aliases to effectively rename fields. The `select` stream supports both operations and evaluators. One can provide a list of operations and evaluators to perform on any fields, such as `replace, add, if`, etc....
+The `select` function wraps a streaming expression and outputs tuples containing a subset or modified set of fields from the incoming tuples. The list of fields included in the output tuple can contain aliases to effectively rename fields. The `select` stream supports both operations and evaluators. One can provide a list of operations and evaluators to perform on any fields, such as `replace`, `add`, `if`, etc.
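
As a hedged sketch of such an expression sent to the `/stream` handler (the collection, fields, and alias are illustrative):

[source,bash]
----
# Wrap a search in select(), keeping id and renaming title to name
curl --data-urlencode 'expr=select(search(my_collection, q="*:*", fl="id,title", sort="id asc"), id, title as name)' "http://localhost:8983/solr/my_collection/stream"
----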
 
 === select Parameters
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/taking-solr-to-production.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/taking-solr-to-production.adoc b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
index ca0f4eb..88f127c 100644
--- a/solr/solr-ref-guide/src/taking-solr-to-production.adoc
+++ b/solr/solr-ref-guide/src/taking-solr-to-production.adoc
@@ -248,20 +248,27 @@ SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
 
 === File Handles and Processes (ulimit settings)
 
-Two common settings that result in errors on *nix systems are file handles and user processes. It is common for the default limits for number of processes and file handles to default to values that are too low for a large Solr installation. The required number of each of these will increase based on a combination of the number of replicas hosted per node and the number of segments in the index for each replica. The usual recommendation is to make processes and file handles at least 65,000 each, unlimited if possible. On most *nix systems, this command:
+Two common settings that result in errors on *nix systems are file handles and user processes.
+
+It is common for the default limits for the number of processes and file handles to be too low for a large Solr installation. The required number of each of these will increase based on a combination of the number of replicas hosted per node and the number of segments in the index for each replica.
+
+The usual recommendation is to raise the limits for processes and file handles to at least 65,000 each, unlimited if possible. On most *nix systems, this command will show the currently defined limits:
 
 [source,bash]
 ----
 ulimit -a
 ----
-will show the currently-defined limits. It is strongly recommended that file handle and process limits be permanently raised as above. The exact form of the command will vary per operating system, and some systems require editing configuration files and restarting your server. Consult your system administrators for guidance in your particular environment.
 
-[TIP]
+It is strongly recommended that file handle and process limits be permanently raised as above. The exact form of the command will vary per operating system, and some systems require editing configuration files and restarting your server. Consult your system administrators for guidance in your particular environment.
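
On many Linux distributions the permanent change is made in `/etc/security/limits.conf`; a hedged sketch, assuming Solr runs as a user named "solr" (the file location and exact values vary by system):

[source,bash]
----
# Append raised limits for the solr user, then log in again for them to apply
cat <<'EOF' | sudo tee -a /etc/security/limits.conf
solr  soft  nofile  65000
solr  hard  nofile  65000
solr  soft  nproc   65000
solr  hard  nproc   65000
EOF

# Verify from a fresh shell for the solr user
ulimit -n   # max open file handles
ulimit -u   # max user processes
----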
+
+[WARNING]
 ====
-If these limits are exceeded, the problems reported by Solr vary depending on the specific operation responsible for exceeding the limit. Errors such as to "too many open files", "connection error", and "max processes exceeded" have been reported, as well as SolrCloud recovery failures. Since exceeding these limits can result in such varied symptoms it is _strongly_ recommended that these limits be permanently raised as recommended above.
+If these limits are exceeded, the problems reported by Solr vary depending on the specific operation responsible for exceeding the limit. Errors such as "too many open files", "connection error", and "max processes exceeded" have been reported, as well as SolrCloud recovery failures.
+
+Since exceeding these limits can result in such varied symptoms it is _strongly_ recommended that these limits be permanently raised as recommended above.
 ====
 
-== Running Multiple Solr Nodes Per Host
+== Running Multiple Solr Nodes per Host
 
 The `bin/solr` script is capable of running multiple instances on one machine, but for a *typical* installation, this is not a recommended setup. Extra CPU and memory resources are required for each additional instance. A single instance is easily capable of handling multiple indexes.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-dismax-query-parser.adoc b/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
index db89932..bb06ae1 100644
--- a/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-dismax-query-parser.adoc
@@ -135,7 +135,7 @@ The `bf` parameter specifies functions (with optional boosts) that will be used
 recip(rord(myfield),1,2,3)^1.5
 ----
 
-Specifying functions with the bf parameter is essentially just shorthand for using the `bq` param combined with the `{!func}` parser.
+Specifying functions with the `bf` parameter is essentially just shorthand for using the `bq` parameter combined with the `{!func}` parser.
 
 For example, if you want to show the most recent documents first, you could use either of the following:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/601c7350/solr/solr-ref-guide/src/update-request-processors.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/update-request-processors.adoc b/solr/solr-ref-guide/src/update-request-processors.adoc
index 921677a..1c14cd2 100644
--- a/solr/solr-ref-guide/src/update-request-processors.adoc
+++ b/solr/solr-ref-guide/src/update-request-processors.adoc
@@ -259,7 +259,7 @@ What follows are brief descriptions of the currently available update request pr
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactory.html[AddSchemaFieldsUpdateProcessorFactory]:: This processor will dynamically add fields to the schema if an input document contains one or more fields that don't match any field or dynamic field in the schema.
 
-{solr-javadocs}/solr-core/org/apache/solr/update/processor/AtomicUpdateRequestProcessorFactory.html[AtomicUpdateProcessorFactory]:: This processor will convert conventional field-value documents to atomic update documents. This processor can be used at runtime (without defining it in `solrconfig.xml`), see the section <<atomicupdateprocessorfactory>> below.
+{solr-javadocs}/solr-core/org/apache/solr/update/processor/AtomicUpdateProcessorFactory.html[AtomicUpdateProcessorFactory]:: This processor will convert conventional field-value documents to atomic update documents. This processor can be used at runtime (without defining it in `solrconfig.xml`); see the section <<atomicupdateprocessorfactory>> below.
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/ClassificationUpdateProcessorFactory.html[ClassificationUpdateProcessorFactory]:: This processor uses Lucene's classification module to provide simple document classification. See https://wiki.apache.org/solr/SolrClassification for more details on how to use this processor.
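
As a hedged sketch of the runtime usage mentioned for `AtomicUpdateProcessorFactory` above (the collection, field, and operation are illustrative):

[source,bash]
----
# Convert a conventional update into an atomic "add" on the skills field at runtime
curl -X POST -H 'Content-type:application/json' \
  --data-binary '[{"id":"1","skills":"search"}]' \
  "http://localhost:8983/solr/my_collection/update?processor=atomic&atomic.skills=add&commit=true"
----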