Posted to commits@lucene.apache.org by ct...@apache.org on 2018/11/13 14:18:21 UTC

[1/3] lucene-solr:branch_7x: SOLR-12927: Add upgrade notes for Solr 7.6

Repository: lucene-solr
Updated Branches:
  refs/heads/branch_7x 988462b9e -> 6c6b47a65


SOLR-12927: Add upgrade notes for Solr 7.6


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/aeab9de3
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/aeab9de3
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/aeab9de3

Branch: refs/heads/branch_7x
Commit: aeab9de3b4bdde044f23e5e1fdf0f32c1107023a
Parents: 988462b
Author: Cassandra Targett <ct...@apache.org>
Authored: Mon Nov 12 08:55:05 2018 -0600
Committer: Cassandra Targett <ct...@apache.org>
Committed: Tue Nov 13 08:10:35 2018 -0600

----------------------------------------------------------------------
 solr/solr-ref-guide/src/solr-upgrade-notes.adoc | 57 ++++++++++++++++++++
 1 file changed, 57 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/aeab9de3/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
index 40892b2..516598d 100644
--- a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
+++ b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
@@ -27,6 +27,63 @@ Detailed steps for upgrading a Solr cluster are in the section <<upgrading-a-sol
 
 == Upgrading to 7.x Releases
 
+=== Solr 7.6
+
+See the https://wiki.apache.org/solr/ReleaseNote76[7.6 Release Notes] for an overview of the main new features in Solr 7.6.
+
+When upgrading to Solr 7.6, users should be aware of the following major changes from v7.5:
+
+*Collections*
+
+* The JSON parameter to set cluster-wide default cluster properties with the <<collections-api.adoc#clusterprop,CLUSTERPROP>> command has changed.
++
+The old syntax nested the defaults into a property named `clusterDefaults`. The new syntax uses only `defaults`. The command to use is still `set-obj-property`.
++
+An example of the new syntax is:
++
+[source,json]
+----
+{
+  "set-obj-property": {
+    "defaults" : {
+      "collection": {
+        "numShards": 2,
+        "nrtReplicas": 1,
+        "tlogReplicas": 1,
+        "pullReplicas": 1
+      }
+    }
+  }
+}
+----
++
+The old syntax will be supported until at least Solr 9, but users are advised to begin using the new syntax as soon as possible.
+
+* The parameter `min_rf` has been deprecated and no longer needs to be provided in order to see the achieved replication factor. This information will now always be returned to the client with the response.
+
+*Autoscaling*
+
+* An autoscaling policy is now used as the default strategy for selecting nodes on which new replicas or replicas of new collections are created.
++
+A default policy is now in place for all users. It sorts nodes by the number of cores and by available freedisk, which means that by default the node with the fewest cores already on it and the most available freedisk will be selected for new core creation.
+
+* The change described above has two additional impacts on the `maxShardsPerNode` parameter:
+
+. It removes the restriction against using `maxShardsPerNode` when an autoscaling policy is in place. This parameter can now always be set when creating a collection.
+. It removes the default setting of `maxShardsPerNode=1` when an autoscaling policy is in place. It will be set correctly (if required) regardless of whether an autoscaling policy is in place or not.
++
+The default value of `maxShardsPerNode` is still `1`. It can be set to `-1` if the old behavior of unlimited `maxShardsPerNode` is desired.
+
+*DirectoryFactory*
+
+* Lucene has introduced the `ByteBuffersDirectory` as a replacement for the `RAMDirectoryFactory`, which will be removed in Solr 9.
++
+Most users are still encouraged to use the `NRTCachingDirectoryFactory`, which allows Lucene to select the best directory implementation to use. However, if you have explicitly configured Solr to use the `RAMDirectoryFactory`, you are encouraged to switch to the new implementation as soon as possible, before Solr 9 is released.
++
+For more information about the new directory factory, see the Jira issue https://issues.apache.org/jira/browse/LUCENE-8438[LUCENE-8438].
++
+For more information about the directory factory configuration in Solr, see the section <<datadir-and-directoryfactory-in-solrconfig.adoc#datadir-and-directoryfactory-in-solrconfig,DataDir and DirectoryFactory in SolrConfig>>.
+
 === Solr 7.5
 
 See the https://wiki.apache.org/solr/ReleaseNote75[7.5 Release Notes] for an overview of the main new features in Solr 7.5.

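The `clusterDefaults` to `defaults` rename described in the notes above amounts to a one-key change in the `set-obj-property` payload. As a rough sketch (Python purely for illustration; the key names come from the upgrade note, while the helper name and sample values are hypothetical), migrating an old-style payload might look like:

```python
import json

def migrate_cluster_defaults(payload):
    # Rewrite the deprecated "clusterDefaults" key to the new "defaults" key
    # inside a CLUSTERPROP set-obj-property command; payloads already using
    # "defaults" pass through unchanged.
    cmd = payload.get("set-obj-property", {})
    if "clusterDefaults" in cmd:
        cmd = dict(cmd)  # copy so the caller's payload stays intact
        cmd["defaults"] = cmd.pop("clusterDefaults")
        return {"set-obj-property": cmd}
    return payload

old = {"set-obj-property": {"clusterDefaults": {"collection": {"numShards": 2}}}}
print(json.dumps(migrate_cluster_defaults(old)))
```

Running this prints the payload in the new form, with the nested collection defaults untouched.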

[3/3] lucene-solr:branch_7x: Ref Guide: accidentally back-ported 8.0 changes to 7.x branches

Posted by ct...@apache.org.
Ref Guide: accidentally back-ported 8.0 changes to 7.x branches


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/6c6b47a6
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/6c6b47a6
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/6c6b47a6

Branch: refs/heads/branch_7x
Commit: 6c6b47a65ea5620d89296d356f944218ef0be33a
Parents: fda40a8
Author: Cassandra Targett <ct...@apache.org>
Authored: Tue Nov 13 08:17:52 2018 -0600
Committer: Cassandra Targett <ct...@apache.org>
Committed: Tue Nov 13 08:17:52 2018 -0600

----------------------------------------------------------------------
 ...g-data-with-solr-cell-using-apache-tika.adoc | 27 ++++++++------------
 1 file changed, 10 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/6c6b47a6/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
index 7acc709..af9e781 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
@@ -26,23 +26,16 @@ If you want to supply your own `ContentHandler` for Solr to use, you can extend
 
 When using the Solr Cell framework, it is helpful to keep the following in mind:
 
-* Tika will automatically attempt to determine the input document type (e.g., Word, PDF, HTML) and extract the content appropriately.
-If you like, you can explicitly specify a MIME type for Tika with the `stream.type` parameter.
-See http://tika.apache.org/{ivy-tika-version}/formats.html for the file types supported.
-* Briefly, Tika internally works by synthesizing an XHTML document from the core content of the parsed document which is passed to a configured http://www.saxproject.org/quickstart.html[SAX] ContentHandler provided by Solr Cell.
-Solr responds to Tika's SAX events to create one or more text fields from the content.
-Tika exposes document metadata as well (apart from the XHTML).
-* Tika produces metadata such as Title, Subject, and Author according to specifications such as the DublinCore.
-The metadata available is highly dependent on the file types and what they in turn contain.
-Solr Cell supplies some metadata of its own too.
-* Solr Cell concatenates text from the internal XHTML into a `content` field.
-You can configure which elements should be included/ignored, and which should map to another field.
-* Solr Cell maps each piece of metadata onto a field.
-By default it maps to the same name but several parameters control how this is done.
-* When Solr Cell finishes creating the internal `SolrInputDocument`, the rest of the Lucene/Solr indexing stack takes over.
-The next step after any update handler is the <<update-request-processors.adoc#update-request-processors,Update Request Processor>> chain.
-
-[NOTE]
+* Tika will automatically attempt to determine the input document type (Word, PDF, HTML) and extract the content appropriately. If you like, you can explicitly specify a MIME type for Tika with the `stream.type` parameter.
+* Tika works by producing an XHTML stream that it feeds to a SAX ContentHandler. SAX is a common interface implemented for many different XML parsers. For more information, see http://www.saxproject.org/quickstart.html.
+* Solr then responds to Tika's SAX events and creates the fields to index.
+* Tika produces metadata such as Title, Subject, and Author according to specifications such as the DublinCore. See http://tika.apache.org/{ivy-tika-version}/formats.html for the file types supported.
+* Tika adds all the extracted text to the `content` field.
+* You can map Tika's metadata fields to Solr fields.
+* You can pass in literals for field values. Literals will override Tika-parsed values, including fields in the Tika metadata object, the Tika content field, and any "captured content" fields.
+* You can apply an XPath expression to the Tika XHTML to restrict the content that is produced.
+
+[TIP]
 ====
 While Apache Tika is quite powerful, it is not perfect and fails on some files. PDF files are particularly problematic, mostly due to the PDF format itself. In case of a failure processing any file, the `ExtractingRequestHandler` does not have a secondary mechanism to try to extract some text from the file; it will throw an exception and fail.
 ====
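The rewritten Tika notes above describe Solr responding to SAX events from Tika's XHTML stream to build indexable text. As a rough, hypothetical illustration of that flow (not Solr's actual `ContentHandler`), a minimal text-collecting SAX handler can be sketched as:

```python
import xml.sax
from io import BytesIO

class TextCollector(xml.sax.ContentHandler):
    # Collects character data from SAX events, loosely mimicking how
    # Solr Cell concatenates the text of Tika's XHTML into one field.
    def __init__(self):
        super().__init__()
        self.parts = []

    def characters(self, content):
        self.parts.append(content)

    def text(self):
        return "".join(self.parts).strip()

# Stand-in for the XHTML stream Tika would synthesize from a parsed document.
xhtml = b"<html><body><h1>Title</h1><p>Hello world.</p></body></html>"
handler = TextCollector()
xml.sax.parse(BytesIO(xhtml), handler)
print(handler.text())
```

The real pipeline adds field mapping, metadata handling, and XPath restriction on top of this basic event-driven extraction.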


[2/3] lucene-solr:branch_7x: SOLR-12927: copy edits (i.e., e.g., capitalized titles, etc.)

Posted by ct...@apache.org.
SOLR-12927: copy edits (i.e., e.g., capitalized titles, etc.)


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/fda40a87
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/fda40a87
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/fda40a87

Branch: refs/heads/branch_7x
Commit: fda40a873a2511ff402716e706e811016a9cce36
Parents: aeab9de
Author: Cassandra Targett <ct...@apache.org>
Authored: Mon Nov 12 20:01:37 2018 -0600
Committer: Cassandra Targett <ct...@apache.org>
Committed: Tue Nov 13 08:13:16 2018 -0600

----------------------------------------------------------------------
 .../src/analytics-mapping-functions.adoc        |  4 +--
 solr/solr-ref-guide/src/cloud-screens.adoc      |  2 +-
 .../src/computational-geometry.adoc             |  3 +-
 solr/solr-ref-guide/src/coreadmin-api.adoc      |  2 +-
 .../field-type-definitions-and-properties.adoc  |  2 +-
 .../src/initparams-in-solrconfig.adoc           |  2 +-
 solr/solr-ref-guide/src/json-facet-api.adoc     |  2 +-
 solr/solr-ref-guide/src/json-request-api.adoc   |  2 +-
 .../major-changes-from-solr-5-to-solr-6.adoc    |  4 +--
 .../src/making-and-restoring-backups.adoc       |  6 ++--
 ...toring-solr-with-prometheus-and-grafana.adoc |  2 +-
 .../src/pagination-of-results.adoc              |  2 +-
 solr/solr-ref-guide/src/ping.adoc               |  2 +-
 .../src/probability-distributions.adoc          |  3 +-
 solr/solr-ref-guide/src/schema-api.adoc         |  6 ++--
 .../src/solr-control-script-reference.adoc      |  4 +--
 solr/solr-ref-guide/src/solr-tutorial.adoc      |  4 +--
 solr/solr-ref-guide/src/solr-upgrade-notes.adoc |  2 +-
 ...olrcloud-autoscaling-policy-preferences.adoc | 34 ++++++++++----------
 .../src/stream-decorator-reference.adoc         |  8 +++--
 .../src/the-extended-dismax-query-parser.adoc   |  4 +--
 .../transforming-and-indexing-custom-json.adoc  |  4 +--
 .../src/transforming-result-documents.adoc      |  2 +-
 .../src/uploading-data-with-index-handlers.adoc |  3 +-
 ...g-data-with-solr-cell-using-apache-tika.adoc | 29 ++++++++++-------
 ...store-data-with-the-data-import-handler.adoc |  2 +-
 solr/solr-ref-guide/src/v2-api.adoc             |  2 +-
 ...king-with-currencies-and-exchange-rates.adoc |  2 +-
 solr/solr-ref-guide/src/working-with-dates.adoc |  2 +-
 .../src/zookeeper-access-control.adoc           |  2 +-
 30 files changed, 77 insertions(+), 71 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/analytics-mapping-functions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/analytics-mapping-functions.adoc b/solr/solr-ref-guide/src/analytics-mapping-functions.adoc
index f6de0de..75afd7f 100644
--- a/solr/solr-ref-guide/src/analytics-mapping-functions.adoc
+++ b/solr/solr-ref-guide/src/analytics-mapping-functions.adoc
@@ -278,7 +278,7 @@ All parameters must be the same type after implicit casting is done.
 
 === Fill Missing
 If the 1^st^ expression does not have values, fill it with the values for the 2^nd^ expression.
-Both expressions must be of the same type and cardinality after implicit casting is done
+Both expressions must be of the same type and cardinality after implicit casting is done.
 
 `fill_missing(< T >, < T >)` \=> `< T >`::
     * `fill_missing([], 3)` \=> `[3]`
@@ -287,7 +287,7 @@ Both expressions must be of the same type and cardinality after implicit casting
 
 === Remove
 Remove all occurrences of the 2^nd^ expression's value from the values of the 1^st^ expression.
-Both expressions must be of the same type after implicit casting is done
+Both expressions must be of the same type after implicit casting is done.
 
 `remove(< T >, < _Single_ T >)` \=> `< T >`::
     * `remove([1,2,3,2], 2)` \=> `[1, 3]`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/cloud-screens.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cloud-screens.adoc b/solr/solr-ref-guide/src/cloud-screens.adoc
index 6e7c5a0..13f0810 100644
--- a/solr/solr-ref-guide/src/cloud-screens.adoc
+++ b/solr/solr-ref-guide/src/cloud-screens.adoc
@@ -43,7 +43,7 @@ image::images/cloud-screens/cloud-tree.png[image,width=487,height=250]
 As an aid to debugging, the data shown in the "Tree" view can be exported locally using the following command `bin/solr zk ls -r /`
 
 == ZK Status View
-The "ZK Status" view gives an overview over the Zookeepers used by Solr. It lists whether running in `standalone` or `ensemble` mode, shows how many zookeepers are configured, and then displays a table listing detailed monitoring status for each of the zookeepers, including who is the leader, configuration parameters and more.
+The "ZK Status" view gives an overview of the ZooKeeper servers or ensemble used by Solr. It lists whether running in `standalone` or `ensemble` mode, shows how many zookeepers are configured, and then displays a table listing detailed monitoring status for each of the zookeepers, including who is the leader, configuration parameters, and more.
 
 image::images/cloud-screens/cloud-zkstatus.png[image,width=512,height=509]
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/computational-geometry.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/computational-geometry.adoc b/solr/solr-ref-guide/src/computational-geometry.adoc
index abcdb08..e44c08e 100644
--- a/solr/solr-ref-guide/src/computational-geometry.adoc
+++ b/solr/solr-ref-guide/src/computational-geometry.adoc
@@ -17,8 +17,7 @@
 // under the License.
 
 
-This section of the math expressions user guide covers computational geometry
-functions.
+This section of the math expressions user guide covers computational geometry functions.
 
 == Convex Hull
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/coreadmin-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
index 49895fd..4690448 100644
--- a/solr/solr-ref-guide/src/coreadmin-api.adoc
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -235,7 +235,7 @@ Multi-valued, directories that would be merged.
 Multi-valued, source cores that would be merged.
 
 `async`::
-Request ID to track this action which will be processed asynchronously
+Request ID to track this action which will be processed asynchronously.
 
 
 [[coreadmin-split]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 854132f..c70d8b2 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -72,7 +72,7 @@ The properties that can be specified for a given field type fall into three majo
 
 === General Properties
 
-These are the general properties for fields
+These are the general properties for fields:
 
 `name`::
 The name of the fieldType. This value gets used in field definitions, in the "type" attribute. It is strongly recommended that names consist of alphanumeric or underscore characters only and not start with a digit. This is not currently strictly enforced.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
index c0b7fb3..0429783 100644
--- a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
@@ -40,7 +40,7 @@ For example, here is one of the `<initParams>` sections defined by default in th
 
 This sets the default search field ("df") to be "_text_" for all of the request handlers named in the path section. If we later want to change the `/query` request handler to search a different field by default, we could override the `<initParams>` by defining the parameter in the `<requestHandler>` section for `/query`.
 
-The syntax and semantics are similar to that of a `<requestHandler>`. The following are the attributes
+The syntax and semantics are similar to that of a `<requestHandler>`. The following are the attributes:
 
 `path`::
 A comma-separated list of paths which will use the parameters. Wildcards can be used in paths to define nested paths, as described below.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/json-facet-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/json-facet-api.adoc b/solr/solr-ref-guide/src/json-facet-api.adoc
index 1e52034..d842517 100644
--- a/solr/solr-ref-guide/src/json-facet-api.adoc
+++ b/solr/solr-ref-guide/src/json-facet-api.adoc
@@ -147,7 +147,7 @@ curl http://localhost:8983/solr/techproducts/query -d '
 
 === JSON Extensions
 
-The *Noggit* JSON parser that is used by Solr accepts a number of JSON extensions such as
+The *Noggit* JSON parser that is used by Solr accepts a number of JSON extensions such as:
 
 * bare words can be left unquoted
 * single line comments using either `//` or `#`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/json-request-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/json-request-api.adoc b/solr/solr-ref-guide/src/json-request-api.adoc
index 94ba84e..196a0cc 100644
--- a/solr/solr-ref-guide/src/json-request-api.adoc
+++ b/solr/solr-ref-guide/src/json-request-api.adoc
@@ -113,7 +113,7 @@ curl "http://localhost:8983/solr/techproducts/query?fl=name,price"-d '
   }
 }'
 
-Which is equivalent to
+Which is equivalent to:
 
 [source,bash]
 curl "http://localhost:8983/solr/techproducts/query?fl=name,price&q=memory&rows=1"

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
index ea4cba1..dbc2c91 100644
--- a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
+++ b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
@@ -78,11 +78,11 @@ Please review the <<schema-factory-definition-in-solrconfig.adoc#schema-factory-
 
 Solr's default behavior when a Schema does not explicitly define a global <<other-schema-elements.adoc#other-schema-elements,`<similarity/>`>> is now dependent on the `luceneMatchVersion` specified in the `solrconfig.xml`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarityFactory` will be used, otherwise an instance of `SchemaSimilarityFactory` will be used. Most notably this change means that users can take advantage of per Field Type similarity declarations, without needing to also explicitly declare a global usage of `SchemaSimilarityFactory`.
 
-Regardless of whether it is explicitly declared, or used as an implicit global default, `SchemaSimilarityFactory` 's implicit behavior when a Field Types do not declare an explicit `<similarity />` has also been changed to depend on the the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details
+Regardless of whether it is explicitly declared or used as an implicit global default, `SchemaSimilarityFactory`'s implicit behavior when Field Types do not declare an explicit `<similarity />` has also been changed to depend on the `luceneMatchVersion`. When `luceneMatchVersion < 6.0`, an instance of `ClassicSimilarity` will be used, otherwise an instance of `BM25Similarity` will be used. A `defaultSimFromFieldType` init option may be specified on the `SchemaSimilarityFactory` declaration to change this behavior. Please review the `SchemaSimilarityFactory` javadocs for more details.
 
 == Replica & Shard Delete Command Changes
 
-DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands
+DELETESHARD and DELETEREPLICA now default to deleting the instance directory, data directory, and index directory for any replica they delete. Please review the <<collections-api.adoc#collections-api,Collection API>> documentation for details on new request parameters to prevent this behavior if you wish to keep all data on disk when using these commands.
 
 == facet.date.* Parameters Removed
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
index 61576c7..8d4f33e 100644
--- a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
+++ b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
@@ -202,13 +202,13 @@ http://localhost:8983/solr/admin/cores?action=DELETESNAPSHOT&core=techproducts&c
 The delete snapshot request parameters are:
 
 `commitName`::
-Specify the commit name to be deleted
+Specify the commit name to be deleted.
 
 `core`::
-The name of the core whose snapshot we want to delete
+The name of the core whose snapshot we want to delete.
 
 `async`::
-Request ID to track this action which will be processed asynchronously
+Request ID to track this action which will be processed asynchronously.
 
 == Backup/Restore Storage Repositories
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc b/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
index 8796c74..79af524 100644
--- a/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
+++ b/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
@@ -83,7 +83,7 @@ $ ./bin/solr-exporter -p 9854 -z localhost:2181/solr -f ./conf/solr-exporter-con
 
 === Command Line Parameters
 
-The parameters in the example start commands shown above
+The parameters in the example start commands shown above:
 
 `h`, `--help`::
 Displays command line help and usage.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/pagination-of-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/pagination-of-results.adoc b/solr/solr-ref-guide/src/pagination-of-results.adoc
index f0b0fb6..7be5ac7 100644
--- a/solr/solr-ref-guide/src/pagination-of-results.adoc
+++ b/solr/solr-ref-guide/src/pagination-of-results.adoc
@@ -93,7 +93,7 @@ In addition to returning the top N sorted results (where you can control N using
 
 === Constraints when using Cursors
 
-There are a few important constraints to be aware of when using `cursorMark` parameter in a Solr request
+There are a few important constraints to be aware of when using the `cursorMark` parameter in a Solr request:
 
 . `cursorMark` and `start` are mutually exclusive parameters.
 * Your requests must either not include a `start` parameter, or it must be specified with a value of "```0```".

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/ping.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/ping.adoc b/solr/solr-ref-guide/src/ping.adoc
index a32152b..c1de95c 100644
--- a/solr/solr-ref-guide/src/ping.adoc
+++ b/solr/solr-ref-guide/src/ping.adoc
@@ -45,7 +45,7 @@ This command will ping the core name for a response.
 http://localhost:8983/solr/<collection-name>/admin/ping?distrib=true&wt=xml
 ----
 
-This command will ping all replicas of the given collection name for a response
+This command will ping all replicas of the given collection name for a response:
 
 *Sample Output*
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/probability-distributions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/probability-distributions.adoc b/solr/solr-ref-guide/src/probability-distributions.adoc
index 7482872..bcee553 100644
--- a/solr/solr-ref-guide/src/probability-distributions.adoc
+++ b/solr/solr-ref-guide/src/probability-distributions.adoc
@@ -16,8 +16,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-This section of the user guide covers the
-probability distribution
+This section of the user guide covers the probability distribution
 framework included in the math expressions library.
 
 == Probability Distribution Framework

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 798cc44..173391f 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -313,7 +313,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `add-field-type` command adds a new field type to your schema.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a json mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
+All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
 For example, to create a new field type named "myNewTxtField", you can POST a request as follows:
 
@@ -426,7 +426,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
 
 The `replace-field-type` command replaces a field type in your schema. Note that you must supply the full definition for a field type - this command will *not* partially modify a field type's definition. If the field type does not exist in the schema an error is thrown.
 
-All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a json mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
+All of the field type properties available when editing `schema.xml` by hand are available for use in a POST request. The structure of the command is a JSON mapping of the standard field type definition, including the name, class, index and query analyzer definitions, etc. Details of all of the available options are described in the section <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
 For example, to replace the definition of a field type named "myNewTxtField", you can make a POST request as follows:
 
@@ -1187,7 +1187,7 @@ The output will simply be the schema version in use.
 
 ==== Show Schema Version Example
 
-Get the schema version
+Get the schema version:
 
 [source,bash]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index 5d5808e..6710019 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -770,7 +770,7 @@ Copy a single file from ZooKeeper to local.
 
 === Remove a znode from ZooKeeper
 
-Use the `zk rm` command to remove a znode (and optionally all child nodes) from ZooKeeper
+Use the `zk rm` command to remove a znode (and optionally all child nodes) from ZooKeeper.
 
 ==== ZK Remove Parameters
 
@@ -806,7 +806,7 @@ Examples of this command with the parameters are:
 
 === Move One ZooKeeper znode to Another (Rename)
 
-Use the `zk mv` command to move (rename) a ZooKeeper znode
+Use the `zk mv` command to move (rename) a ZooKeeper znode.
 
 ==== ZK Move Parameters
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/solr-tutorial.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-tutorial.adoc b/solr/solr-ref-guide/src/solr-tutorial.adoc
index fb0acbc..99fee4b 100644
--- a/solr/solr-ref-guide/src/solr-tutorial.adoc
+++ b/solr/solr-ref-guide/src/solr-tutorial.adoc
@@ -585,7 +585,7 @@ First, we are using a "managed schema", which is configured to only be modified
 
 Second, we are using "field guessing", which is configured in the `solrconfig.xml` file (and includes most of Solr's various configuration settings). Field guessing is designed to allow us to start using Solr without having to define all the fields we think will be in our documents before trying to index them. This is why we call it "schemaless", because you can start quickly and let Solr create fields for you as it encounters them in documents.
 
-Sounds great! Well, not really, there are limitations. It's a bit brute force, and if it guesses wrong, you can't change much about a field after data has been indexed without having to reindex. If we only have a few thousand documents that might not be bad, but if you have millions and millions of documents, or, worse, don't have access to the original data anymore, this can be a real problem.
+Sounds great! Well, not really; there are limitations. It's a bit brute force, and if it guesses wrong, you can't change much about a field after data has been indexed without having to re-index. If we only have a few thousand documents, that might not be bad; but if you have millions and millions of documents, or, worse, don't have access to the original data anymore, this can be a real problem.
 
 For these reasons, the Solr community does not recommend going to production without a schema that you have defined yourself. By this we mean that the schemaless features are fine to start with, but you should still always make sure your schema matches your expectations for how you want your data indexed and how users are going to query it.
 
@@ -936,7 +936,7 @@ Go ahead and edit any of the existing example data files, change some of the dat
 
 === Deleting Data
 
-If you need to iterate a few times to get your schema right, you may want to delete documents to clear out the collection and try again. Note, however, that merely removing documents doesn't change the underlying field definitions. Essentially, this will allow you to reindex your data after making changes to fields for your needs.
+If you need to iterate a few times to get your schema right, you may want to delete documents to clear out the collection and try again. Note, however, that merely removing documents doesn't change the underlying field definitions. Essentially, this will allow you to re-index your data after making changes to fields for your needs.
 
 You can delete data by POSTing a delete command to the update URL and specifying the value of the document's unique key field, or a query that matches multiple documents (be careful with that one!). We can use `bin/post` to delete documents also if we structure the request properly.
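As a sketch, a delete-by-query command in JSON looks like the following (the match-all query here is only illustrative — `*:*` wipes the entire collection):

[source,json]
----
{ "delete": { "query": "*:*" } }
----

This would be POSTed to the collection's update endpoint (e.g., `/solr/<collection>/update?commit=true`) with a `Content-Type: application/json` header. To delete a single document instead, use `{ "delete": { "id": "..." } }` with the value of your unique key field.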
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
index 516598d..da84bc3 100644
--- a/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
+++ b/solr/solr-ref-guide/src/solr-upgrade-notes.adoc
@@ -122,7 +122,7 @@ When upgrading to Solr 7.4, users should be aware of the following major changes
 
 *Logging*
 
-* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details about Solr logging.
+* Solr now uses Log4j v2.11. The Log4j configuration is now in `log4j2.xml` rather than `log4j.properties` files. This is a server side change only and clients using SolrJ won't need any changes. Clients can still use any logging implementation which is compatible with SLF4J. We now let Log4j handle rotation of Solr logs at startup, and `bin/solr` start scripts will no longer attempt this nor move existing console or garbage collection logs into `logs/archived` either. See <<configuring-logging.adoc#configuring-logging,Configuring Logging>> for more details about Solr logging.
 
 * Configuring `slowQueryThresholdMillis` now logs slow requests to a separate file named `solr_slow_requests.log`. Previously they would get logged in the `solr.log` file.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc b/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
index 1cd72c6..a20ea7c 100644
--- a/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-autoscaling-policy-preferences.adoc
@@ -153,16 +153,16 @@ The `replica` attribute value can be specified in one of the following forms:
 
 <<Replica Count Constraint,Replica count constraints>> (`"replica":"..."`) and <<Core Count Constraint,core count constraints>> (`"cores":"..."`) allow specification of acceptable counts for replicas (cores tied to a collection) and cores (regardless of the collection to which they belong), respectively.
 
-You can specify one of the following as the value of a `replica` and `cores` policy rule attribute: 
+You can specify one of the following as the value of a `replica` and `cores` policy rule attribute:
 
-* an exact integer (e.g. `2`)
-* an exclusive lower integer bound (e.g. `>0`)
-* an exclusive upper integer bound (e.g. `<3`)
+* an exact integer (e.g., `2`)
+* an exclusive lower integer bound (e.g., `>0`)
+* an exclusive upper integer bound (e.g., `<3`)
 * a decimal value, interpreted as an acceptable range of core counts, from the floor of the value to the ceiling of the value, with the system preferring the rounded value (e.g., `1.6`: `1` or `2` is acceptable, and `2` is preferred)
-* a <<range-operator,range>> of acceptable replica/core counts, as inclusive lower and upper integer bounds separated by a hyphen (e.g. `3-5`)
-* a percentage (e.g. `33%`), which is multiplied at runtime either by the number of <<Replica Selector and Rule Evaluation Context,selected replicas>> (for a `replica` constraint) or the number of cores in the cluster (for a `cores` constraint). This value is then interpreted as described above for a literal decimal value.
+* a <<range-operator,range>> of acceptable replica/core counts, as inclusive lower and upper integer bounds separated by a hyphen (e.g., `3-5`)
+* a percentage (e.g., `33%`), which is multiplied at runtime either by the number of <<Replica Selector and Rule Evaluation Context,selected replicas>> (for a `replica` constraint) or the number of cores in the cluster (for a `cores` constraint). This value is then interpreted as described above for a literal decimal value.
 
-NOTE: Using an exact integer value for count constraints is of limited utility, since collection or cluster changes could quickly invalidate them.  For example, attempting to add a third replica to each shard of a collection on a two-node cluster with policy rule `{"replica":1, "shard":"#EACH", "node":"#ANY"}` would cause a violation, since at least one node would have to host more than one replica. Percentage rules are less brittle.  Rewriting the rule as `{"replica":"50%", "shard":"#EACH", "node":"#ANY"}` eliminates the violation: `50% of 3 replicas = 1.5 replicas per node`, meaning that it's acceptable for a node to host either one or two replicas of each shard. 
+NOTE: Using an exact integer value for count constraints is of limited utility, since collection or cluster changes could quickly invalidate them.  For example, attempting to add a third replica to each shard of a collection on a two-node cluster with policy rule `{"replica":1, "shard":"#EACH", "node":"#ANY"}` would cause a violation, since at least one node would have to host more than one replica. Percentage rules are less brittle.  Rewriting the rule as `{"replica":"50%", "shard":"#EACH", "node":"#ANY"}` eliminates the violation: `50% of 3 replicas = 1.5 replicas per node`, meaning that it's acceptable for a node to host either one or two replicas of each shard.
 
 === Policy Rule Attributes
 
@@ -213,7 +213,7 @@ The port of the node to which the rule should apply.  The <<not-operator,`!` (no
 
 [[freedisk-attribute]]
 `freedisk`::
-The free disk space in gigabytes of the node. This must be a positive 64-bit integer value, or a <<percentage-function,percentage>>. If a percentage is specified, either an upper or lower bound may also be specified using the `<` or `>` operators, respectively, e.g. `>50%`, `<25%`.
+The free disk space in gigabytes of the node. This must be a positive 64-bit integer value, or a <<percentage-function,percentage>>. If a percentage is specified, either an upper or lower bound may also be specified using the `<` or `>` operators, respectively, e.g., `>50%`, `<25%`.
 
 [[host-attribute]]
 `host`::
@@ -277,7 +277,7 @@ This supports values calculated at the time of execution.
 * [[all-function]]`#ALL`: Applies to the <<replica-attribute,`replica` attribute>> only. This means all replicas that meet the rule condition.
 * [[each-function]]`#EACH`: Applies to the <<shard-attribute,`shard` attribute>> (meaning the rule should be evaluated separately for each shard), and to the attributes used to define the buckets for the <<equal-function,#EQUAL function>> (meaning all possible values for the bucket-defining attribute).
 * [[equal-function]]`#EQUAL`: Applies to the <<replica-attribute,`replica`>> and <<cores-attribute,`cores`>> attributes only. This means an equal number of replicas/cores in each bucket. The buckets can be defined using the below attributes with a value that can either be <<each-function,`#EACH`>> or a list specified with the <<array-operator,array operator (`[]`)>>:
-** <<node-attribute,`node`>> \<- <<Rule Types,global rules>>, i.e. those with the <<cores-attribute,`cores` attribute>>, may only specify this attribute
+** <<node-attribute,`node`>> \<- <<Rule Types,global rules>>, i.e., those with the <<cores-attribute,`cores` attribute>>, may only specify this attribute
 ** <<sysprop-attribute,`sysprop.*`>>
 ** <<port-attribute,`port`>>
 ** <<diskType-attribute,`diskType`>>
@@ -413,7 +413,7 @@ To create a new named policy, use the <<solrcloud-autoscaling-api.adoc#create-an
 
 The above CREATE collection command will associate a policy named `policy1` with the collection named `coll1`. Only a single policy may be associated with a collection.
 
-== Example: Manual Collection Creation with a Policy 
+== Example: Manual Collection Creation with a Policy
 
 The starting state for this example is a Solr cluster with 3 nodes: "nodeA", "nodeB", and "nodeC".  An existing 2-shard `FirstCollection` with a `replicationFactor` of 1 has one replica on "nodeB" and one on "nodeC".  The default Autoscaling preferences are in effect:
 
@@ -422,9 +422,9 @@ The starting state for this example is a Solr cluster with 3 nodes: "nodeA", "no
 
 The configured policy rule allows at most 1 core per node:
 
-[source,json]    
+[source,json]
 [ {"cores": "<2", "node": "#ANY"} ]
-    
+
 We now issue a CREATE command for a `SecondCollection` with two shards and a `replicationFactor` of 1:
 
 [source,text]
@@ -433,13 +433,13 @@ http://localhost:8983/solr/admin/collections?action=CREATE&name=SecondCollection
 ----
 
 For each of the two replicas to be created, each Solr node is tested, in order from least to most loaded: would all policy rules be satisfied if a replica were placed there using an ADDREPLICA sub-command?
- 
+
 * ADDREPLICA for `shard1`: According to the Autoscaling preferences, the least loaded node is the one with the fewest cores: "nodeA", because it hosts no cores, while the other two nodes each host one core. The test to place a replica here succeeds, because doing so causes no policy violations, since the core count after adding the replica would not exceed the configured maximum of 1.  Because "nodeA" can host the first shard's replica, Solr skips testing of the other two nodes.
 * ADDREPLICA for `shard2`: After placing the `shard1` replica, all nodes would be equally loaded, since each would have one core. The test to place the `shard2` replica fails on each node, because placement would push the node over its maximum core count.  This causes a policy violation.
- 
-Since there is no node that can host a replica for `shard2` without causing a violation, the overall CREATE command fails.  Let's try again after increasing the maximum core count on all nodes to 2: 
 
-[source,json]    
+Since there is no node that can host a replica for `shard2` without causing a violation, the overall CREATE command fails.  Let's try again after increasing the maximum core count on all nodes to 2:
+
+[source,json]
 [ {"cores": "<3", "node": "#ANY"} ]
 
-After re-issuing the `SecondCollection` CREATE command, the replica for `shard1` will be placed on "nodeA": it's least loaded, so is tested first, and no policy violation will result from placement there.  The `shard2` replica could be placed on any of the 3 nodes, since they're all equally loaded, and the chosen node will remain below its maximum core count after placement.  The CREATE command succeeds. 
+After re-issuing the `SecondCollection` CREATE command, the replica for `shard1` will be placed on "nodeA": it's least loaded, so is tested first, and no policy violation will result from placement there.  The `shard2` replica could be placed on any of the 3 nodes, since they're all equally loaded, and the chosen node will remain below its maximum core count after placement.  The CREATE command succeeds.
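The relaxed rule above is a global (cores-based) rule, so it belongs in the cluster policy. A sketch of the corresponding Autoscaling API command body, POSTed to `/api/cluster/autoscaling` (this assumes no other cluster-policy rules are in effect, since `set-cluster-policy` replaces the whole list):

[source,json]
----
{
  "set-cluster-policy": [
    { "cores": "<3", "node": "#ANY" }
  ]
}
----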

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/stream-decorator-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-decorator-reference.adoc b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
index d9186c9..3d9f569 100644
--- a/solr/solr-ref-guide/src/stream-decorator-reference.adoc
+++ b/solr/solr-ref-guide/src/stream-decorator-reference.adoc
@@ -22,7 +22,8 @@
 
 The `cartesianProduct` function turns a single tuple with a multi-valued field (i.e., an array) into multiple tuples, one for each value in the array field. That is, given a single tuple containing an array of N values for fieldA, the `cartesianProduct` function will output N tuples, each with one value from the original tuple's array. In essence, you can flatten arrays for further processing.
 
-For example, using `cartesianProduct` you can turn this tuple
+For example, using `cartesianProduct` you can turn this tuple:
+
 [source,text]
 ----
 {
@@ -31,7 +32,8 @@ For example, using `cartesianProduct` you can turn this tuple
 }
 ----
 
-into the following 3 tuples
+into the following 3 tuples:
+
 [source,text]
 ----
 {
@@ -67,7 +69,7 @@ cartesianProduct(
 
 === cartesianProduct Examples
 
-The following examples show different outputs for this source tuple
+The following examples show different outputs for this source tuple:
 
 [source,text]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc b/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
index 08451e0..1a51421 100644
--- a/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-extended-dismax-query-parser.adoc
@@ -212,9 +212,9 @@ A document that contains "Hans Anderson" will match, but a document that contain
 
 Finally, in addition to the phrase fields (`pf`) parameter, `edismax` also supports the `pf2` and `pf3` parameters, for fields over which to create bigram and trigram phrase queries. The phrase slop for these parameters' queries can be specified using the `ps2` and `ps3` parameters, respectively. If you use `pf2`/`pf3` but not `ps2`/`ps3`, then the phrase slop for these parameters' queries will be taken from the `ps` parameter, if any.
 
-=== Synonyms expansion in phrase queries with slop
+=== Synonyms Expansion in Phrase Queries with Slop
 
-When a phrase query with slop (e.g. `pf` with `ps`) triggers synonym expansions, a separate clause will be generated for each combination of synonyms. For example, with configured synonyms `dog,canine` and `cat,feline`, the query `"dog chased cat"` will generate the following phrase query clauses:
+When a phrase query with slop (e.g., `pf` with `ps`) triggers synonym expansions, a separate clause will be generated for each combination of synonyms. For example, with configured synonyms `dog,canine` and `cat,feline`, the query `"dog chased cat"` will generate the following phrase query clauses:
 
 * `"dog chased cat"`
 * `"canine chased cat"`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
index 7f7e58b..26bd60b 100644
--- a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
+++ b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
@@ -416,8 +416,8 @@ There are two restrictions: wildcards can only be used at the end of the `json-p
 A single asterisk `\*` maps only to direct children, and a double asterisk `**` maps recursively to all descendants. The following are example wildcard path mappings:
 
 * `f=$FQN:/**`: maps all fields to the fully qualified name (`$FQN`) of the JSON field. The fully qualified name is obtained by concatenating all the keys in the hierarchy with a period (`.`) as a delimiter. This is the default behavior if no `f` path mappings are specified.
-* `f=/docs/*`: maps all the fields under docs and in the name as given in json
-* `f=/docs/**`: maps all the fields under docs and its children in the name as given in json
+* `f=/docs/*`: maps all the fields directly under `docs`, using the field name as given in the JSON
+* `f=/docs/**`: maps all the fields under `docs` and its descendants, using the field name as given in the JSON
 * `f=searchField:/docs/*`: maps all fields under /docs to a single field called ‘searchField’
 * `f=searchField:/docs/**`: maps all fields under /docs and its children to searchField
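To make the wildcard scopes concrete, here is a sketch (the field names are purely illustrative) of how the mappings differ for one nested input:

[source,json]
----
{ "docs": { "title": "Solr", "meta": { "author": "Doe" } } }
----

With `f=/docs/*`, only the direct child `title` is mapped; with `f=/docs/**`, the nested `author` is picked up as well; and with the default `f=$FQN:/**`, the fields are indexed under their fully qualified names, `docs.title` and `docs.meta.author`.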
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/transforming-result-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-result-documents.adoc b/solr/solr-ref-guide/src/transforming-result-documents.adoc
index 4577725..05406f8 100644
--- a/solr/solr-ref-guide/src/transforming-result-documents.adoc
+++ b/solr/solr-ref-guide/src/transforming-result-documents.adoc
@@ -292,7 +292,7 @@ To log substituted subquery request parameters, add the corresponding parameter
 
 ==== Cores and Collections in SolrCloud
 
-Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on the same node. This is what `{!join}` does for non-SolrCloud mode. But with SolrCloud, just (and only) explicitly specify its native parameters like `collection, shards` for subquery, e.g.:
+Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on the same node. This is what `{!join}` does for non-SolrCloud mode. With SolrCloud, however, you must (and may only) explicitly specify the subquery's native parameters, such as `collection` and `shards`, for example:
 
 [source,plain,subs="quotes"]
 q=\*:*&fl=\*,foo:[subquery]&foo.q=cloud&**foo.collection**=departments

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
index 93ffdc2..7b1ae65 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
@@ -568,7 +568,7 @@ Nested documents may be indexed via either the XML or JSON data syntax, and is a
  ** it may be infeasible to use `required`
  ** even child documents need a unique `id`
  * You must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#block-join-query-parsers,block join query parsers>>.
- * If you associate a child document as a field (e.g. comment), that field need not be defined in the schema, and probably
+ * If you associate a child document as a field (e.g., comment), that field need not be defined in the schema, and probably
    shouldn't be as it would be confusing.  There is no child document field type.
 
 === XML Examples
@@ -640,4 +640,3 @@ For the anonymous relationship, note the special `\_childDocuments_` key whose c
   }
 ]
 ----
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
index e47e00f..7acc709 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
@@ -26,16 +26,23 @@ If you want to supply your own `ContentHandler` for Solr to use, you can extend
 
 When using the Solr Cell framework, it is helpful to keep the following in mind:
 
-* Tika will automatically attempt to determine the input document type (Word, PDF, HTML) and extract the content appropriately. If you like, you can explicitly specify a MIME type for Tika with the `stream.type` parameter.
-* Tika works by producing an XHTML stream that it feeds to a SAX ContentHandler. SAX is a common interface implemented for many different XML parsers. For more information, see http://www.saxproject.org/quickstart.html.
-* Solr then responds to Tika's SAX events and creates the fields to index.
-* Tika produces metadata such as Title, Subject, and Author according to specifications such as the DublinCore. See http://tika.apache.org/{ivy-tika-version}/formats.html for the file types supported.
-* Tika adds all the extracted text to the `content` field.
-* You can map Tika's metadata fields to Solr fields.
-* You can pass in literals for field values. Literals will override Tika-parsed values, including fields in the Tika metadata object, the Tika content field, and any "captured content" fields.
-* You can apply an XPath expression to the Tika XHTML to restrict the content that is produced.
-
-[TIP]
+* Tika will automatically attempt to determine the input document type (e.g., Word, PDF, HTML) and extract the content appropriately.
+If you like, you can explicitly specify a MIME type for Tika with the `stream.type` parameter.
+See http://tika.apache.org/{ivy-tika-version}/formats.html for the file types supported.
+* Briefly, Tika works internally by synthesizing an XHTML document from the core content of the parsed document; this XHTML is passed to a configured http://www.saxproject.org/quickstart.html[SAX] ContentHandler provided by Solr Cell.
+Solr responds to Tika's SAX events to create one or more text fields from the content.
+Tika exposes document metadata as well (apart from the XHTML).
+* Tika produces metadata such as Title, Subject, and Author according to specifications such as the DublinCore.
+The metadata available is highly dependent on the file types and what they in turn contain.
+Solr Cell supplies some metadata of its own too.
+* Solr Cell concatenates text from the internal XHTML into a `content` field.
+You can configure which elements should be included/ignored, and which should map to another field.
+* Solr Cell maps each piece of metadata onto a field.
+By default it maps each metadata item to a field with the same name, but several parameters control how this is done.
+* When Solr Cell finishes creating the internal `SolrInputDocument`, the rest of the Lucene/Solr indexing stack takes over.
+The next step after any update handler is the <<update-request-processors.adoc#update-request-processors,Update Request Processor>> chain.
+
+[NOTE]
 ====
 While Apache Tika is quite powerful, it is not perfect and fails on some files. PDF files are particularly problematic, mostly due to the PDF format itself. In case of a failure processing any file, the `ExtractingRequestHandler` does not have a secondary mechanism to try to extract some text from the file; it will throw an exception and fail.
 ====
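To make the flow above concrete, a minimal extraction request might look like the following (the `techproducts` collection and the bundled sample PDF path are assumptions for illustration):

[source,bash]
----
curl 'http://localhost:8983/solr/techproducts/update/extract?literal.id=doc1&commit=true' \
  -F "myfile=@example/exampledocs/solr-word.pdf"
----

Tika detects that the stream is a PDF and synthesizes XHTML from it, Solr Cell concatenates the extracted text into the `content` field, and the `literal.id` parameter supplies the unique key that cannot be extracted from the file itself.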
@@ -138,7 +145,7 @@ Defines a file path and name for a file of file name to password mappings.
 Specifies the optional name of the file. Tika can use it as a hint for detecting a file's MIME type.
 
 `resource.password`::
-Defines a password to use for a password-protected PDF or OOXML file
+Defines a password to use for a password-protected PDF or OOXML file.
 
 `tika.config`::
 Defines a file path and name to a customized Tika configuration file. This is only required if you have customized your Tika implementation.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc b/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
index d724055..37c949a 100644
--- a/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
+++ b/solr/solr-ref-guide/src/uploading-structured-data-store-data-with-the-data-import-handler.adoc
@@ -820,7 +820,7 @@ timeout::
 The query timeout in seconds. The default is 5 minutes (300 seconds).
 
 cursorMark="true"::
-Use this to enable cursor for efficient result set scrolling
+Use this to enable a cursor for efficient result set scrolling.
 
 sort="id asc"::
 This should be used to specify a sort parameter referencing the uniqueKey field of the source Solr instance. See <<pagination-of-results.adoc#pagination-of-results,Pagination of Results>> for details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/v2-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/v2-api.adoc b/solr/solr-ref-guide/src/v2-api.adoc
index 589a684..8541d9f 100644
--- a/solr/solr-ref-guide/src/v2-api.adoc
+++ b/solr/solr-ref-guide/src/v2-api.adoc
@@ -146,7 +146,7 @@ Example of introspect for a POST API: `\http://localhost:8983/api/c/gettingstart
 }
 ----
 
-The `"commands"` section in the above example has one entry for each command supported at this endpoint. The key is the command name and the value is a json object describing the command structure using JSON schema (see http://json-schema.org/ for a description).
+The `"commands"` section in the above example has one entry for each command supported at this endpoint. The key is the command name and the value is a JSON object describing the command structure using JSON schema (see http://json-schema.org/ for a description).
 
 == Invocation Examples
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
index 4ee5711..f9ffb49 100644
--- a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
+++ b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
@@ -60,7 +60,7 @@ During query processing, range and point queries are both supported.
 
 === Sub-field Suffixes
 
-You must specify parameters `amountLongSuffix` and `codeStrSuffix`, corresponding to dynamic fields to be used for the raw amount and the currency dynamic sub-fields, e.g.:
+You must specify parameters `amountLongSuffix` and `codeStrSuffix`, corresponding to dynamic fields to be used for the raw amount and the currency dynamic sub-fields, for example:
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/working-with-dates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-dates.adoc b/solr/solr-ref-guide/src/working-with-dates.adoc
index d5f3203..7a42909 100644
--- a/solr/solr-ref-guide/src/working-with-dates.adoc
+++ b/solr/solr-ref-guide/src/working-with-dates.adoc
@@ -102,7 +102,7 @@ Note that while date math is most commonly used relative to `NOW` it can be appl
 
 The `NOW` parameter is used internally by Solr to ensure consistent date math expression parsing across multiple nodes in a distributed request. But it can be specified to instruct Solr to use an arbitrary moment in time (past or future) to override for all situations where the special value of "```NOW```" would impact date math expressions.
 
-It must be specified as a (long valued) milliseconds since epoch
+It must be specified as a (long-valued) number of milliseconds since the epoch.
 
 Example:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/fda40a87/solr/solr-ref-guide/src/zookeeper-access-control.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/zookeeper-access-control.adoc b/solr/solr-ref-guide/src/zookeeper-access-control.adoc
index c2b619a..588c786 100644
--- a/solr/solr-ref-guide/src/zookeeper-access-control.adoc
+++ b/solr/solr-ref-guide/src/zookeeper-access-control.adoc
@@ -30,7 +30,7 @@ Content stored in ZooKeeper is critical to the operation of a SolrCloud cluster.
 * Changing cluster state information into something wrong or inconsistent might very well make a SolrCloud cluster behave strangely.
 * Adding a delete-collection job to be carried out by the Overseer will cause data to be deleted from the cluster.
 
-You may want to enable ZooKeeper ACLs with Solr if you grant access to your ZooKeeper ensemble to entities you do not trust, or if you want to reduce risk of bad actions resulting from, e.g.:
+You may want to enable ZooKeeper ACLs with Solr if you grant access to your ZooKeeper ensemble to entities you do not trust, or if you want to reduce risk of bad actions resulting from, for example:
 
 * Malware that found its way into your system.
 * Other systems using the same ZooKeeper ensemble (a "bad thing" might be done by accident).