Posted to commits@lucene.apache.org by ct...@apache.org on 2017/04/20 19:39:14 UTC

[4/4] lucene-solr:jira/solr-10290: SOLR-10290: update raw content files

SOLR-10290: update raw content files


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/73148be0
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/73148be0
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/73148be0

Branch: refs/heads/jira/solr-10290
Commit: 73148be0baab123b93953f98d69d3b4517842f03
Parents: 201d238
Author: Cassandra Targett <ct...@apache.org>
Authored: Thu Apr 20 14:35:53 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Thu Apr 20 14:35:53 2017 -0500

----------------------------------------------------------------------
 ...uthentication-and-authorization-plugins.adoc |  46 +-
 .../src/basic-authentication-plugin.adoc        |  43 +-
 solr/solr-ref-guide/src/blob-store-api.adoc     |  18 +-
 .../src/collections-core-admin.adoc             |   7 +
 .../src/command-line-utilities.adoc             |  16 +-
 solr/solr-ref-guide/src/config-api.adoc         |   9 +-
 .../solr-ref-guide/src/configuring-logging.adoc |   6 +-
 solr/solr-ref-guide/src/copying-fields.adoc     |   8 +
 solr/solr-ref-guide/src/coreadmin-api.adoc      |   4 +-
 ...adir-and-directoryfactory-in-solrconfig.adoc |  16 +-
 solr/solr-ref-guide/src/defining-fields.adoc    |   3 +-
 solr/solr-ref-guide/src/documents-screen.adoc   |   2 +-
 solr/solr-ref-guide/src/docvalues.adoc          |  14 +-
 solr/solr-ref-guide/src/enabling-ssl.adoc       |   2 +-
 solr/solr-ref-guide/src/errata.adoc             |   8 +-
 solr/solr-ref-guide/src/faceting.adoc           |  18 +-
 .../src/field-properties-by-use-case.adoc       |   8 +-
 .../field-type-definitions-and-properties.adoc  |   4 +-
 .../src/field-types-included-with-solr.adoc     |  22 +-
 .../solr-ref-guide/src/filter-descriptions.adoc | 265 +++++++-
 .../src/hadoop-authentication-plugin.adoc       |  20 +-
 solr/solr-ref-guide/src/highlighting.adoc       |  13 +-
 .../a-quick-overview/sample-client-app-arch.png | Bin 48412 -> 52100 bytes
 .../images/analysis-screen/analysis_normal.png  | Bin 55013 -> 57653 bytes
 .../images/analysis-screen/analysis_verbose.png | Bin 79508 -> 66742 bytes
 .../src/images/cloud-screens/cloud-graph.png    | Bin 38740 -> 54929 bytes
 .../src/images/cloud-screens/cloud-radial.png   | Bin 40464 -> 62572 bytes
 .../src/images/cloud-screens/cloud-tree.png     | Bin 88824 -> 105371 bytes
 .../collection_dashboard.png                    | Bin 54879 -> 69978 bytes
 .../collections-core-admin/DeleteShard.png      | Bin 0 -> 161077 bytes
 .../collections-core-admin/collection-admin.png | Bin 53629 -> 60968 bytes
 .../worddav4101c16174820e932b44baa22abcfcd1.png | Bin 63101 -> 54328 bytes
 .../core-specific-tools/core_dashboard.png      | Bin 76861 -> 84253 bytes
 .../CDCR_arch.png                               | Bin 83660 -> 83216 bytes
 .../images/parallel-sql-interface/cluster.png   | Bin 202836 -> 3067133 bytes
 .../dbvisualizer_solrjdbc_1.png                 | Bin 81111 -> 171124 bytes
 .../dbvisualizer_solrjdbc_11.png                | Bin 70254 -> 54439 bytes
 .../dbvisualizer_solrjdbc_12.png                | Bin 67927 -> 130739 bytes
 .../dbvisualizer_solrjdbc_13.png                | Bin 87658 -> 82449 bytes
 .../dbvisualizer_solrjdbc_14.png                | Bin 46455 -> 75971 bytes
 .../dbvisualizer_solrjdbc_15.png                | Bin 81305 -> 118023 bytes
 .../dbvisualizer_solrjdbc_16.png                | Bin 98840 -> 162783 bytes
 .../dbvisualizer_solrjdbc_17.png                | Bin 72953 -> 122613 bytes
 .../dbvisualizer_solrjdbc_19.png                | Bin 103797 -> 84112 bytes
 .../dbvisualizer_solrjdbc_2.png                 | Bin 145587 -> 115345 bytes
 .../dbvisualizer_solrjdbc_20.png                | Bin 130057 -> 145134 bytes
 .../dbvisualizer_solrjdbc_3.png                 | Bin 138254 -> 106194 bytes
 .../dbvisualizer_solrjdbc_4.png                 | Bin 138292 -> 110362 bytes
 .../dbvisualizer_solrjdbc_5.png                 | Bin 73941 -> 95829 bytes
 .../dbvisualizer_solrjdbc_6.png                 | Bin 106787 -> 106536 bytes
 .../dbvisualizer_solrjdbc_7.png                 | Bin 101870 -> 111281 bytes
 .../dbvisualizer_solrjdbc_9.png                 | Bin 111291 -> 117209 bytes
 .../src/kerberos-authentication-plugin.adoc     |  30 +-
 solr/solr-ref-guide/src/language-analysis.adoc  |  12 +-
 solr/solr-ref-guide/src/learning-to-rank.adoc   |  28 +-
 solr/solr-ref-guide/src/managed-resources.adoc  |   2 +-
 solr/solr-ref-guide/src/managing-solr.adoc      |   4 +-
 solr/solr-ref-guide/src/metrics-reporting.adoc  |  51 +-
 solr/solr-ref-guide/src/other-parsers.adoc      |  16 +-
 .../src/pagination-of-results.adoc              |   1 +
 .../src/parallel-sql-interface.adoc             |  29 +-
 .../src/performance-statistics-reference.adoc   |   8 +-
 .../src/putting-the-pieces-together.adoc        |   4 +-
 .../src/query-settings-in-solrconfig.adoc       |   2 +
 solr/solr-ref-guide/src/result-grouping.adoc    |   2 +-
 .../src/rule-based-authorization-plugin.adoc    |  32 +-
 .../src/rule-based-replica-placement.adoc       |   2 +-
 solr/solr-ref-guide/src/schema-api.adoc         |  26 +-
 .../src/solr-control-script-reference.adoc      |   2 +-
 solr/solr-ref-guide/src/spatial-search.adoc     |  78 ++-
 .../src/streaming-expressions.adoc              | 678 +++++++++++++++++--
 .../src/the-extended-dismax-query-parser.adoc   |   9 +-
 .../src/the-standard-query-parser.adoc          |   5 +-
 .../solr-ref-guide/src/the-terms-component.adoc |  19 +-
 .../src/the-well-configured-solr-instance.adoc  |   2 +-
 solr/solr-ref-guide/src/tokenizers.adoc         |  48 ++
 .../src/transforming-result-documents.adoc      |   6 +-
 .../src/updating-parts-of-documents.adoc        |  98 ++-
 solr/solr-ref-guide/src/upgrading-solr.adoc     |  11 +-
 .../src/uploading-data-with-index-handlers.adoc |  28 +-
 ...g-data-with-solr-cell-using-apache-tika.adoc |  17 +-
 ...store-data-with-the-data-import-handler.adoc |  25 +-
 solr/solr-ref-guide/src/v2-api.adoc             | 167 +++++
 .../src/velocity-response-writer.adoc           |  16 +-
 solr/solr-ref-guide/src/working-with-dates.adoc |   2 +-
 85 files changed, 1584 insertions(+), 428 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc b/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
index 24583df..585360a 100644
--- a/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
+++ b/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
@@ -5,20 +5,14 @@
 
 Solr has security frameworks for supporting authentication and authorization of users. This allows for verifying a user's identity and for restricting access to resources in a Solr cluster. Solr includes some plugins out of the box, and additional plugins can be developed using the authentication and authorization frameworks described below.
 
-The plugin implementation will dictate if the plugin can be used with Solr running in SolrCloud mode only or also if running in standalone mode. If the plugin supports SolrCloud only, a `security.json` file must be created and uploaded to ZooKeeper before it can be used. If the plugin also supports standalone mode, a system property `-DauthenticationPlugin=<pluginClassName>` can be used instead of creating and managing `security.json` in ZooKeeper. Here is a list of the available plugins and the approach supported:
+All authentication and authorization plugins can work with Solr whether it is running in SolrCloud mode or standalone mode. All authentication and authorization configuration, including users and permission rules, is stored in a file named `security.json`. When using Solr in standalone mode, this file must be in the `$SOLR_HOME` directory (usually `server/solr`). When using SolrCloud, this file must be located in ZooKeeper.
 
-* <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic authentication>>: SolrCloud only.
-* <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos authentication>>: SolrCloud or standalone mode.
-* <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-based authorization>>: SolrCloud only.
-* <<hadoop-authentication-plugin.adoc#hadoop-authentication-plugin,Hadoop authentication>>: SolrCloud or standalone
-* <<AuthenticationandAuthorizationPlugins-PKI,PKI based authentication>>: SolrCloud only - for securing inter-node traffic
-
-The following section describes how to enable plugins with `security.json` in ZooKeeper when using Solr in SolrCloud mode.
+The following section describes how to enable plugins with `security.json` and place them in the proper locations for your mode of operation.
 
 [[AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json]]
 == Enable Plugins with security.json
 
-All of the information required to initialize either type of security plugin is stored in a `/security.json` file in ZooKeeper. This file contains 2 sections, one each for authentication and authorization.
+All of the information required to initialize either type of security plugin is stored in a `security.json` file. This file contains two sections, one each for authentication and authorization.
 
 *security.json*
 
@@ -34,7 +28,7 @@ All of the information required to initialize either type of security plugin is
 }
 ----
 
-The `/security.json` file needs to be in ZooKeeper before a Solr instance comes up so Solr starts with the security plugin enabled. See the section <<AuthenticationandAuthorizationPlugins-Addingsecurity.jsontoZooKeeper,Adding security.json to ZooKeeper>> below for information on how to do this.
+The `security.json` file needs to be in the proper location before a Solr instance comes up so Solr starts with the security plugin enabled. See the section <<AuthenticationandAuthorizationPlugins-Usingsecurity.jsonwithSolr,Using security.json with Solr>> below for information on how to do this.
 
 Depending on the plugin(s) in use, other information will be stored in `security.json`, such as user information or rules to create roles and permissions. This information is added through the APIs for each plugin provided by Solr, or, in the case of a custom plugin, through the approach you have designed.
 
@@ -55,10 +49,13 @@ Here is a more detailed `security.json` example. In this, the Basic authenticati
 }}
 ----
 
-[[AuthenticationandAuthorizationPlugins-Addingsecurity.jsontoZooKeeper]]
-=== Adding security.json to ZooKeeper
+[[AuthenticationandAuthorizationPlugins-Usingsecurity.jsonwithSolr]]
+== Using security.json with Solr
+
+[[AuthenticationandAuthorizationPlugins-InSolrCloudmode]]
+=== In SolrCloud mode
 
-While configuring Solr to use an authentication or authorization plugin, you will need to upload a `security.json` file to ZooKeeper as in the example below.
+While configuring Solr to use an authentication or authorization plugin, you will need to upload a `security.json` file to ZooKeeper. The following command writes the file as it uploads it; you could also upload a file that you have already created locally.
 
 [source,bash]
 ----
@@ -77,6 +74,17 @@ Depending on the authentication and authorization plugin that you use, you may h
 
 ====
 
+Once `security.json` has been uploaded to ZooKeeper, you should use the appropriate APIs for the plugins you're using to update it. You can edit it manually, but you must take care to remove any version data so it will be properly updated across all ZooKeeper nodes. The version data is found at the end of the `security.json` file, and appears as the letter "v" followed by a number, e.g., `{"v":138}`.
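If you do edit the file by hand, the version-stripping step can be sketched in Python. This is an illustrative sketch only: it assumes the version marker is stored as a `"v"` entry inside each top-level section, which may not match your file exactly, so inspect your own `security.json` before relying on it.

```python
import json

def strip_version_data(text):
    """Remove any "v" version entries (e.g. {"v":138}) from a hand-edited
    security.json before re-uploading it. The placement of the marker is an
    assumption here; check it against your own file."""
    doc = json.loads(text)
    for section in doc.values():
        if isinstance(section, dict):
            section.pop("v", None)
    return json.dumps(doc)
```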
+
+[[AuthenticationandAuthorizationPlugins-InStandaloneMode]]
+=== In Standalone Mode
+
+When running Solr in standalone mode, you need to create the `security.json` file and put it in the `$SOLR_HOME` directory for your installation (this is the same place you have located `solr.xml` and is usually `server/solr`).
+
+If you are using <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>, you will need to place `security.json` on each node of the cluster.
+
+You can use the authentication and authorization APIs, but if you are using the legacy scaling model, you will need to make the same API requests on each node separately. You can also edit `security.json` by hand if you prefer.
+
 [[AuthenticationandAuthorizationPlugins-Authentication]]
 == Authentication
 
@@ -152,14 +160,18 @@ Solr has one implementation of an authorization plugin:
 
 * <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>
 
-[[AuthenticationandAuthorizationPlugins-PKISecuringinter-noderequests]]
+[[AuthenticationandAuthorizationPlugins-PKISecuringInter-NodeRequests]]
 
 [[AuthenticationandAuthorizationPlugins-PKI]]
-== Securing inter-node requests
+== Securing Inter-Node Requests
 
-There are a lot of requests that originate from the Solr nodes itself. e.g: requests from overseer to nodes, recovery threads etc. Each Authentication plugin declares whether it is capable of securing inter-node requests or not. If not, Solr will fall back to using a special internode authentication mechanism where each Solr node is a super user and is fully trusted by other Solr nodes, described below.
+Many requests originate from the Solr nodes themselves, e.g., requests from the overseer to nodes, recovery threads, etc. Each authentication plugin declares whether or not it is capable of securing inter-node requests. If not, Solr will fall back to a special inter-node authentication mechanism, described below, in which each Solr node is a super user and is fully trusted by the other Solr nodes.
 
 [[AuthenticationandAuthorizationPlugins-PKIAuthenticationPlugin]]
 === PKIAuthenticationPlugin
 
-This kicks in when there is any request going on between 2 Solr nodes, and the configured Authentication plugin does not wish to handle inter-node security. For each outgoing request `PKIAuthenticationPlugin` adds a special header `'SolrAuth' `which carries the timestamp and principal encrypted using the private key of that node. The public key is exposed through an API so that any node can read it whenever it needs it. Any node who gets the request with that header, would get the public key from the sender and decrypt the information. if it is able to decrypt the data, the request trusted. It is invalid if the timestamp is more than 5 secs old. This assumes that the clocks of different nodes in the cluster are synchronized. The timeout is configurable through a system property called 'pkiauth.ttl'. For example , if you wish to bump up the ttl to 10 seconds (10000 milliseconds) , start each node with a property `'-Dpkiauth.ttl=10000'`.
+The PKIAuthenticationPlugin is used when there is any request going on between two Solr nodes, and the configured Authentication plugin does not wish to handle inter-node security.
+
+For each outgoing request, `PKIAuthenticationPlugin` adds a special header, `SolrAuth`, which carries the timestamp and principal encrypted using the private key of that node. The public key is exposed through an API so that any node can read it whenever it needs it. Any node that receives a request with that header gets the public key from the sender and decrypts the information. If it is able to decrypt the data, the request is trusted. The request is invalid if the timestamp is more than 5 seconds old. This assumes that the clocks of the different nodes in the cluster are synchronized.
+
+The timeout is configurable through a system property called `pkiauth.ttl`. For example, if you wish to bump up the time-to-live to 10 seconds (10000 milliseconds), start each node with the property `-Dpkiauth.ttl=10000`.
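The freshness check described above can be sketched as follows. This is an illustrative Python sketch of the timestamp validation, not Solr's actual plugin code; the 5-second default and the `pkiauth.ttl` override come from the text above.

```python
import time

DEFAULT_TTL_MS = 5000  # the 5-second default described above

def is_timestamp_fresh(header_timestamp_ms, now_ms=None, ttl_ms=DEFAULT_TTL_MS):
    """Accept a decrypted request timestamp only if it is within the TTL.
    Assumes sender and receiver clocks are synchronized, as the text notes."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return (now_ms - header_timestamp_ms) <= ttl_ms

# A request stamped 4 seconds ago passes the default TTL; one stamped
# 8 seconds ago fails it, but would pass with -Dpkiauth.ttl=10000.
```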

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
index fa87b40..4a3ed98 100644
--- a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
@@ -9,11 +9,11 @@ An authorization plugin is also available to configure Solr with permissions to
 [[BasicAuthenticationPlugin-EnableBasicAuthentication]]
 == Enable Basic Authentication
 
-To use Basic authentication, you must first create a `security.json` file and store it in ZooKeeper. This file and how to upload it to ZooKeeper is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
+To use Basic authentication, you must first create a `security.json` file. This file and where to put it is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
 
 For Basic authentication, the `security.json` file must have an `authentication` part which defines the class being used for authentication. Usernames and passwords (as a sha256(password+salt) hash) can be added when the file is created, or added later with the Basic authentication API, described below.
 
-The `authorization` part is not related to Basic authentication, but is a separate authorization plugin designed to support fine-grained user access control. For more information, see <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>.
+The `authorization` part is not related to Basic authentication, but is a separate authorization plugin designed to support fine-grained user access control. For more information, see the section <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>.
 
 An example `security.json` showing both sections appears below, to demonstrate how these plugins can work together:
 
@@ -33,25 +33,27 @@ An example `security.json` showing both sections is shown below to show how thes
 }}
 ----
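The sha256(password+salt) credential mentioned above can be illustrated with a short Python sketch. The exact encoding Solr's `Sha256AuthenticationProvider` expects (including whether the hash is applied once or twice, and the base64 layout) is an assumption here; use the Authentication API rather than hand-computing hashes for a real deployment.

```python
import base64
import hashlib
import os

def make_credentials(password, salt=None):
    """Illustrative sha256(password+salt) credential, formatted as
    "base64(hash) base64(salt)". The double hash and the layout are
    assumptions, not a guaranteed match for Solr's own provider."""
    if salt is None:
        salt = os.urandom(32)
    digest = hashlib.sha256(
        hashlib.sha256(salt + password.encode("utf-8")).digest()
    ).digest()
    return base64.b64encode(digest).decode() + " " + base64.b64encode(salt).decode()
```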
 
-Save the above JSON to a file called `security.json` locally.
-
-Run the following command to upload it to Zookeeper, ensuring that the ZooKeeper port is correct:
-
-[source,bash]
-----
-bin/solr zk cp file:path_to_local_security.json zk:/security.json -z localhost:9983
-----
-
 There are several things defined in this file:
 
 * Basic authentication and rule-based authorization plugins are enabled.
 * A user called 'solr', with a password of `'SolrRocks'`, has been defined.
-* `'blockUknown:true'` means that unauthenticated requests are not allowed to pass through
+* The parameter `"blockUnknown": true` means that unauthenticated requests are not allowed to pass through.
 * The 'admin' role has been defined, and it has permission to edit security settings.
 * The 'solr' user has been assigned the 'admin' role.
 
+Save your settings to a file called `security.json` locally. If you are using Solr in standalone mode, you should put this file in `$SOLR_HOME`.
+
+If `blockUnknown` does not appear in the `security.json` file, it will default to `false`. This has the effect of not requiring authentication at all. In some cases, you may want this; for example, if you want to have `security.json` in place but aren't ready to enable authentication. However, you will want to ensure that this parameter is set to `true` in order for authentication to be truly enabled in your system.
+
+If you are using SolrCloud, you must upload `security.json` to ZooKeeper. You can use this example command, ensuring that the ZooKeeper port is correct:
+
+[source,bash]
+----
+bin/solr zk cp file:path_to_local_security.json zk:/security.json -z localhost:9983
+----
+
 [[BasicAuthenticationPlugin-Caveats]]
-== Caveats
+=== Caveats
 
 There are a few things to keep in mind when using the Basic authentication plugin.
 
@@ -65,14 +67,14 @@ There are a few things to keep in mind when using the Basic authentication plugi
 An Authentication API allows modifying user IDs and passwords. The API provides an endpoint with specific commands to set user details or delete a user.
 
 [[BasicAuthenticationPlugin-APIEntryPoint]]
-==== API Entry Point
+=== API Entry Point
 
 `admin/authentication`
 
 This endpoint is not collection-specific, so users are created for the entire Solr cluster. If users need to be restricted to a specific collection, that can be done with the authorization rules.
 
 [[BasicAuthenticationPlugin-AddaUserorEditaPassword]]
-==== Add a User or Edit a Password
+=== Add a User or Edit a Password
 
 The `set-user` command allows you to add users and change their passwords. For example, the following defines two users and their passwords:
 
@@ -84,7 +86,7 @@ curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'C
 ----
 
 [[BasicAuthenticationPlugin-DeleteaUser]]
-==== Delete a User
+=== Delete a User
 
 The `delete-user` command allows you to remove a user. The user password does not need to be sent to remove a user. In the following example, we've asked that user IDs 'tom' and 'harry' be removed from the system.
 
@@ -108,7 +110,7 @@ curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'C
 [[BasicAuthenticationPlugin-UsingBasicAuthwithSolrJ]]
 === Using BasicAuth with SolrJ
 
-In SolrJ the basic authentication credentials need to be set for each request as in this example:
+In SolrJ, the basic authentication credentials need to be set for each request as in this example:
 
 [source,java]
 ----
@@ -126,12 +128,13 @@ req.setBasicAuthCredentials(userName, password);
 QueryResponse rsp = req.process(solrClient);
 ----
 
-[[BasicAuthenticationPlugin-UsingcommandlinescriptswithBasicAuth]]
-=== Using command line scripts with BasicAuth
+[[BasicAuthenticationPlugin-UsingCommandLinescriptswithBasicAuth]]
+=== Using Command Line scripts with BasicAuth
 
-Add the following line to the `solr.in.sh/solr.in.cmd` file. This example sets BasicAuth credentials for user-name "solr" and password "SolrRocks":
+Add the following lines to the `solr.in.sh` or `solr.in.cmd` file. This example tells the `bin/solr` command line to use "basic" as the type of authentication, and to pass credentials with the user-name "solr" and password "SolrRocks":
 
 [source,bash]
 ----
+SOLR_AUTH_TYPE="basic"
 SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
 ----
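Under the hood, `curl --user` and SolrJ's `setBasicAuthCredentials` both send a standard HTTP Basic `Authorization` header (RFC 7617). A minimal Python sketch of what gets sent for the 'solr'/'SolrRocks' user:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value for HTTP Basic authentication:
    the word "Basic" plus base64 of "username:password"."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("solr", "SolrRocks"))  # Basic c29scjpTb2xyUm9ja3M=
```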

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/blob-store-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blob-store-api.adoc b/solr/solr-ref-guide/src/blob-store-api.adoc
index af2c331..bf032cf 100644
--- a/solr/solr-ref-guide/src/blob-store-api.adoc
+++ b/solr/solr-ref-guide/src/blob-store-api.adoc
@@ -8,16 +8,16 @@ When using the blob store, note that the API does not delete or overwrite a prev
 
 *The blob store is only available when running in SolrCloud mode.* Solr in standalone mode does not support use of a blob store.
 
-The blob store API is implemented as a requestHandler. A special collection named ".system" must be created as the collection that contains the blob store index.
+The blob store API is implemented as a requestHandler. A special collection named ".system" is used to store the blobs. This collection can be created in advance, but if it does not exist it will be created automatically.
 
-[[BlobStoreAPI-Createa.systemCollection]]
-=== Create a .system Collection
+[[BlobStoreAPI-Aboutthe.systemCollection]]
+=== About the .system Collection
 
-Before using the blob store, a special collection must be created and it must be named `.system`.
+Before uploading blobs to the blob store, a special collection must be created and it must be named `.system`. Solr will automatically create this collection if it does not already exist, but you can also create it manually if you choose.
 
 The BlobHandler is automatically registered in the .system collection. The `solrconfig.xml`, Schema, and other configuration files for the collection are automatically provided by the system and don't need to be defined specifically.
 
-If you do not use the `-shards` or `-replicationFactor` options, then defaults of 1 shard and 1 replica will be used.
+If you do not use the `-shards` or `-replicationFactor` options, then defaults of numShards=1 and replicationFactor=3 (or the total number of nodes in the cluster, if fewer than 3) will be used.
 
 You can create the `.system` collection with the <<collections-api.adoc#collections-api,Collections API>>, as in this example:
 
@@ -26,8 +26,12 @@ You can create the `.system` collection with the <<collections-api.adoc#collecti
 curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2"
 ----
 
-image::images/icons/emoticons/warning.png[(warning)]
- Note that the `bin/solr` script cannot be used to create the `.system ` collection at this time. Also, please ensure that there is at least one collection created before creating the `.system` collection.
+[IMPORTANT]
+====
+
+The `bin/solr` script cannot be used to create the `.system` collection at this time.
+
+====
 
 [[BlobStoreAPI-UploadFilestoBlobStore]]
 === Upload Files to Blob Store

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/collections-core-admin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collections-core-admin.adoc b/solr/solr-ref-guide/src/collections-core-admin.adoc
index 4833766..6f35507 100644
--- a/solr/solr-ref-guide/src/collections-core-admin.adoc
+++ b/solr/solr-ref-guide/src/collections-core-admin.adoc
@@ -19,3 +19,10 @@ The buttons at the top of the screen let you make various collection level chang
 
 image::images/collections-core-admin/collection-admin.png[image,width=653,height=250]
 
+
+Replicas can be deleted by clicking the red "X" next to the replica name.
+
+If the shard is inactive, for example after a <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD action>>, an option to delete the shard will appear as a red "X" next to the shard name.
+
+image::images/collections-core-admin/DeleteShard.png[image,width=486,height=250]
+

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/command-line-utilities.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/command-line-utilities.adoc b/solr/solr-ref-guide/src/command-line-utilities.adoc
index 9cb0555..5a90930 100644
--- a/solr/solr-ref-guide/src/command-line-utilities.adoc
+++ b/solr/solr-ref-guide/src/command-line-utilities.adoc
@@ -2,7 +2,7 @@
 :page-shortname: command-line-utilities
 :page-permalink: command-line-utilities.html
 
-Solr's Administration page (found by default at `http://hostname:8983/solr/`), provides a section with menu items for monitoring indexing and performance statistics, information about index distribution and replication, and information on all threads running in the JVM at the time. There is also a section where you can run queries, and an assistance area.
+Solr's Administration page (found by default at `http://hostname:8983/solr/`) provides a section with menu items for monitoring indexing and performance statistics, information about index distribution and replication, and information on all threads running in the JVM at the time. There is also a section where you can run queries, and an assistance area.
 
 In addition, SolrCloud provides its own administration page (found at http://localhost:8983/solr/#/~cloud), as well as a few tools available via a ZooKeeper Command Line Utility (CLI). The CLI scripts found in `server/scripts/cloud-scripts` let you upload configuration information to ZooKeeper, in the same two ways that were shown in the examples in <<parameter-reference.adoc#parameter-reference,Parameter Reference>>. It also provides a few other commands that let you link collection sets to collections, make ZooKeeper paths or clear them, and download configurations from ZooKeeper to the local filesystem.
 
@@ -14,7 +14,7 @@ The `zkcli.sh` provided by Solr is not the same as the https://zookeeper.apache.
 
 ZooKeeper's `zkCli.sh` provides a completely general, application-agnostic shell for manipulating data in ZooKeeper. Solr's `zkcli.sh`, discussed in this section, is specific to Solr, and has command line arguments specific to dealing with Solr data in ZooKeeper.
 
-Many of the functions provided by the zkCli.sh script are also provided by the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>>, which may be more familiar as the start script Zookeeper maintenance commands are very similar to Unix commands.
+Many of the functions provided by the `zkcli.sh` script are also provided by the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>>, which may be more familiar, as the start script's ZooKeeper maintenance commands are very similar to Unix commands.
 
 ====
 
@@ -28,19 +28,21 @@ Both `zkcli.sh` (for Unix environments) and `zkcli.bat` (for Windows environment
 [width="100%",cols="34%,33%,33%",options="header",]
 |===
 |Short |Parameter Usage |Meaning
-| |`-cmd <arg>` |CLI Command to be executed: `bootstrap`, `upconfig`, `downconfig`, `linkconfig`, `makepath`, `get`, `getfile`, `put`, `putfile`, `list, ``clear `or` clusterprop`. This parameter is *mandatory*
+| |`-cmd <arg>` |CLI Command to be executed: `bootstrap`, `upconfig`, `downconfig`, `linkconfig`, `makepath`, `get`, `getfile`, `put`, `putfile`, `list`, `clear` or `clusterprop`. This parameter is **mandatory**.
 |`-z` |`-zkhost <locations>` |ZooKeeper host address. This parameter is *mandatory* for all CLI commands.
 |`-c` |`-collection <name>` |For `linkconfig`: name of the collection.
-|`-d` |`-confdir <path>` |For `upconfig`: a directory of configuration files. For downconfig: the destination of files pulled from Zookeeper
+|`-d` |`-confdir <path>` |For `upconfig`: a directory of configuration files. For `downconfig`: the destination of files pulled from ZooKeeper.
 |`-h` |`-help` |Display help text.
-|`-n` |`-confname <arg>` |For `upconfig`, `linkconfig, downconfig`: name of the configuration set.
+|`-n` |`-confname <arg>` |For `upconfig`, `linkconfig`, `downconfig`: name of the configuration set.
 |`-r` |`-runzk <port>` |Run ZooKeeper internally by passing the Solr run port; only for clusters on one machine.
 |`-s` |`-solrhome <path>` |For `bootstrap` or when using `-runzk`: the *mandatory* solrhome location.
-| |`-name <name>` |For `clusterprop`: the **mandatory** cluster property name.
+| |`-name <name>` |For `clusterprop`: the *mandatory* cluster property name.
 | |`-val <value>` |For `clusterprop`: the cluster property value. If not specified, *null* will be used as the value.
 |===
 
-The short form parameter options may be specified with a single dash (eg: `-c mycollection`). The long form parameter options may be specified using either a single dash (eg: `-collection mycollection`) or a double dash (eg: `--collection mycollection`)
+The short form parameter options may be specified with a single dash (e.g., `-c mycollection`).
+
+The long form parameter options may be specified using either a single dash (e.g., `-collection mycollection`) or a double dash (e.g., `--collection mycollection`).
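+For example, these invocations are equivalent (an illustrative sketch, assuming a ZooKeeper instance at `localhost:2181`):
+
+[source,bash]
+----
+server/scripts/cloud-scripts/zkcli.sh -z localhost:2181 -cmd list
+server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd list
+server/scripts/cloud-scripts/zkcli.sh --zkhost localhost:2181 --cmd list
+----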
 
 [[CommandLineUtilities-ZooKeeperCLIExamples]]
 == ZooKeeper CLI Examples

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/config-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/config-api.adoc b/solr/solr-ref-guide/src/config-api.adoc
index 86c31a5..aca36be 100644
--- a/solr/solr-ref-guide/src/config-api.adoc
+++ b/solr/solr-ref-guide/src/config-api.adoc
@@ -4,7 +4,7 @@
 
 The Config API enables manipulating various aspects of your `solrconfig.xml` using REST-like API calls. This feature is enabled by default and works similarly in both SolrCloud and standalone mode. Many commonly edited properties (such as cache sizes and commit settings) and request handler definitions can be changed with this API.
 
-When using this API, `solrconfig.xml` is is not changed. Instead, all edited configuration is stored in a file called `configoverlay.json`. The values in `configoverlay.json` override the values in `solrconfig.xml`.
+When using this API, `solrconfig.xml` is not changed. Instead, all edited configuration is stored in a file called `configoverlay.json`. The values in `configoverlay.json` override the values in `solrconfig.xml`.
 
 [[ConfigAPI-APIEntryPoints]]
 == API Entry Points
@@ -393,7 +393,7 @@ curl http://localhost:8983/solr/techproducts/config -H 'Content-type:application
     "class":"solr.DumpRequestHandler",
     "defaults":{ "x":"y" ,"a":"b", "wt":"json", "indent":true },
     "useParams":"x"
-  },
+  }
 }'
 ----
 
@@ -513,8 +513,3 @@ Any component can register a listener using:
 `SolrCore#addConfListener(Runnable listener)`
 
 to get notified for config changes. This is not very useful if the files modified result in core reloads (i.e., `configoverlay.xml` or Schema). Components can use this to reload the files they are interested in.
-
-[source,bash]
-----
-curl http://localhost:8983/solr/techproducts/config/requestHandler
-----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/configuring-logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/configuring-logging.adoc b/solr/solr-ref-guide/src/configuring-logging.adoc
index fbd359c..b9d187b 100644
--- a/solr/solr-ref-guide/src/configuring-logging.adoc
+++ b/solr/solr-ref-guide/src/configuring-logging.adoc
@@ -58,8 +58,8 @@ There is also a way of sending REST commands to the logging endpoint to do the s
 curl -s http://localhost:8983/solr/admin/info/logging --data-binary "set=root:WARN&wt=json"
 ----
 
-[[ConfiguringLogging-Choosingloglevelatstartup]]
-== Choosing log level at startup
+[[ConfiguringLogging-ChoosingLogLevelatStartup]]
+== Choosing Log Level at Startup
 
 You can temporarily choose a different logging level as you start Solr. There are two ways:
 
@@ -95,6 +95,8 @@ Java Garabage Collection logs are rotated by the JVM when size hits 20M, for a m
 
 On every startup of Solr, the start script will clean up old logs and rotate the main `solr.log` file. If you changed the `log4j.appender.file.MaxBackupIndex` setting in `log4j.properties`, you also need to change the corresponding setting `-rotate_solr_logs 9` in the start script.
 
+You can disable the automatic log rotation at startup by changing the setting `SOLR_LOG_PRESTART_ROTATION` found in `bin/solr.in.sh` or `bin/solr.in.cmd` to false.
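+For example, in `bin/solr.in.sh`:
+
+[source,bash]
+----
+SOLR_LOG_PRESTART_ROTATION=false
+----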
+
 [[ConfiguringLogging-LoggingSlowQueries]]
 == Logging Slow Queries
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/copying-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/copying-fields.adoc b/solr/solr-ref-guide/src/copying-fields.adoc
index 908a4cc..9746811 100644
--- a/solr/solr-ref-guide/src/copying-fields.adoc
+++ b/solr/solr-ref-guide/src/copying-fields.adoc
@@ -32,3 +32,11 @@ Both the source and the destination of `copyField` can contain either leading or
 The `copyField` command can use a wildcard (*) character in the `dest` parameter only if the `source` parameter contains one as well. `copyField` uses the matching glob from the source field for the `dest` field name into which the source content is copied.
 
 ====
+
+Copying is done at the stream source level and no copy feeds into another copy. This means that copy fields cannot be chained i.e. _you cannot_ copy from `here` to `there` and then from `there` to `elsewhere`. However the same source field can be copied to multiple destination fields:
+
+[source,xml]
+----
+<copyField source="here" dest="there"/>
+<copyField source="here" dest="elsewhere"/>
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/coreadmin-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
index 426a645..4c1821f 100644
--- a/solr/solr-ref-guide/src/coreadmin-api.adoc
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -46,7 +46,7 @@ Note that this command is the only one of the Core Admin API commands that *does
 
 Your CREATE call must be able to find a configuration, or it will not succeed.
 
-When you are running SolrCloud and create a new core for a collection, the configuration will be inherited from the collection \u2013 each collection is linked to a configName, which is stored in the zookeeper database. This satisfies the config requirement. There is something to note, though \u2013 if you're running SolrCloud, you should *NOT* be using the CoreAdmin API at all. Use the Collections API.
+When you are running SolrCloud and create a new core for a collection, the configuration will be inherited from the collection – each collection is linked to a configName, which is stored in the ZooKeeper database. This satisfies the config requirement. There is something to note, though – if you're running SolrCloud, you should *NOT* be using the CoreAdmin API at all. Use the Collections API.
 
 When you are not running SolrCloud, if you have <<config-sets.adoc#config-sets,Config Sets>> defined, you can use the configSet parameter as documented below. If there are no config sets, then the instanceDir specified in the CREATE call must already exist, and it must contain a conf directory which in turn must contain `solrconfig.xml` and your schema, which is usually named either `managed-schema` or `schema.xml`, as well as any files referenced by those configs. The config and schema filenames could be specified with the config and schema parameters, but these are expert options. One thing you COULD do to avoid creating the conf directory is use config and schema parameters that point at absolute paths, but this can lead to confusing configurations unless you fully understand what you are doing.
 
@@ -340,4 +340,4 @@ The `REQUESTRECOVERY` action supports one parameter, which is described in the t
 
 `http://localhost:8981/solr/admin/cores?action=REQUESTRECOVERY&core=gettingstarted_shard1_replica1`
 
-The core to specify can be found by expanding the appropriate Zookeeper node via the admin UI.
+The core to specify can be found by expanding the appropriate ZooKeeper node via the admin UI.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
index f7fc2d1..a4b5312 100644
--- a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
@@ -5,27 +5,31 @@
 [[DataDirandDirectoryFactoryinSolrConfig-SpecifyingaLocationforIndexDatawiththedataDirParameter]]
 == Specifying a Location for Index Data with the `dataDir` Parameter
 
-By default, Solr stores its index data in a directory called `/data` under the Solr home. If you would like to specify a different directory for storing index data, use the `<dataDir>` parameter in the `solrconfig.xml` file. You can specify another directory either with a full pathname or a pathname relative to the instance dir of the SolrCore. For example:
+By default, Solr stores its index data in a directory called `/data` under the core's instanceDir. If you would like to specify a different directory for storing index data, you can configure `dataDir` in the `core.properties` file for the core, or use the `<dataDir>` parameter in the `solrconfig.xml` file. You can specify another directory either with a full pathname or a pathname relative to the instanceDir of the SolrCore. For example:
 
 [source,xml]
 ----
-<dataDir>/var/data/solr/</dataDir>
+<dataDir>/solr/data/${solr.core.name}</dataDir>
 ----
 
+`$\{solr.core.name}` will cause the name of the current core to be substituted, which results in each core's data being kept in a separate subdirectory.
+
 If you are using replication to replicate the Solr index (as described in <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>), then the `<dataDir>` directory should correspond to the index directory used in the replication configuration.
 
 [[DataDirandDirectoryFactoryinSolrConfig-SpecifyingtheDirectoryFactoryForYourIndex]]
 == Specifying the DirectoryFactory For Your Index
 
-The default `solr.StandardDirectoryFactory` is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation by specifying `solr.MMapDirectoryFactory`, `solr.NIOFSDirectoryFactory`, or `solr.SimpleFSDirectoryFactory`.
+The default {solr-javadocs}/solr-core/org/apache/solr/core/StandardDirectoryFactory.html[`solr.StandardDirectoryFactory`] is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation and/or config options by specifying {solr-javadocs}/solr-core/org/apache/solr/core/MMapDirectoryFactory.html[`solr.MMapDirectoryFactory`], {solr-javadocs}/solr-core/org/apache/solr/core/NIOFSDirectoryFactory.html[`solr.NIOFSDirectoryFactory`], or {solr-javadocs}/solr-core/org/apache/solr/core/SimpleFSDirectoryFactory.html[`solr.SimpleFSDirectoryFactory`].
 
 [source,xml]
 ----
 <directoryFactory name="DirectoryFactory"
-                  class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>
+                  class="solr.MMapDirectoryFactory">
+  <bool name="preload">true</bool>
+</directoryFactory>
 ----
 
-The `solr.RAMDirectoryFactory` is memory based, not persistent, and does not work with replication. Use this DirectoryFactory to store your index in RAM.
+The {solr-javadocs}/solr-core/org/apache/solr/core/RAMDirectoryFactory.html[`solr.RAMDirectoryFactory`] is memory based, not persistent, and does not work with replication. Use this DirectoryFactory to store your index in RAM.
 
 [source,xml]
 ----
@@ -35,6 +39,6 @@ The `solr.RAMDirectoryFactory` is memory based, not persistent, and does not wor
 [IMPORTANT]
 ====
 
-If you are using Hadoop and would like to store your indexes in HDFS, you should use the `solr.HdfsDirectoryFactory` instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>.
+If you are using Hadoop and would like to store your indexes in HDFS, you should use the {solr-javadocs}/solr-core/org/apache/solr/core/HdfsDirectoryFactory.html[`solr.HdfsDirectoryFactory`] instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>.
 
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/defining-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-fields.adoc b/solr/solr-ref-guide/src/defining-fields.adoc
index 87fcc7e..b89eb5a 100644
--- a/solr/solr-ref-guide/src/defining-fields.adoc
+++ b/solr/solr-ref-guide/src/defining-fields.adoc
@@ -38,12 +38,13 @@ Fields can have many of the same properties as field types. Properties from the
 |docValues |If true, the value of the field will be put in a column-oriented https://cwiki.apache.org/confluence/display/solr/DocValues[DocValues] structure. |true or false |false
 |sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
 |multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
-|omitNorms |If true, omits the norms associated with this field (this disables length normalization and index-time boosting for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, data, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |true or false |*
+|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields typically need norms. |true or false |*
 |omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position that are issued on a field with this option will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
 |omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
 |termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
 |required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
 |useDocValuesAsStored |If the field has https://cwiki.apache.org/confluence/display/solr/DocValues[docValues] enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an https://cwiki.apache.org/confluence/display/solr/Common+Query+Parameters#CommonQueryParameters-Thefl(FieldList)Parameter[fl parameter]. |true or false |true
+|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
 |===
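+For example, a minimal sketch of a `large` field (the field name and type here are hypothetical; `large` requires `stored="true"` and `multiValued="false"`):
+
+[source,xml]
+----
+<field name="full_body" type="text_general" indexed="false" stored="true" multiValued="false" large="true"/>
+----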
 
 [[DefiningFields-RelatedTopics]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/documents-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-screen.adoc b/solr/solr-ref-guide/src/documents-screen.adoc
index dda091a..960e292 100644
--- a/solr/solr-ref-guide/src/documents-screen.adoc
+++ b/solr/solr-ref-guide/src/documents-screen.adoc
@@ -22,7 +22,7 @@ Then choose the Document Type to define the type of document to load. The remain
 
 When using the JSON document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper JSON format.
 
-Then you can choose when documents should be added to the index (Commit Within), whether existing documents should be overwritten with incoming documents with the same id (if this is not **true**, then the incoming documents will be dropped), and, finally, if a document boost should be applied.
+Then you can choose when documents should be added to the index (Commit Within) and whether existing documents should be overwritten with incoming documents with the same id (if this is not **true**, the incoming documents will be dropped).
 
 This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/docvalues.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/docvalues.adoc b/solr/solr-ref-guide/src/docvalues.adoc
index 8dd4d4b..a7a2e58 100644
--- a/solr/solr-ref-guide/src/docvalues.adoc
+++ b/solr/solr-ref-guide/src/docvalues.adoc
@@ -43,13 +43,13 @@ DocValues are only available for specific field types. The types chosen determin
 ** If the field is single-valued (i.e., multi-valued is false), Lucene will use the NUMERIC type.
 ** If the field is multi-valued, Lucene will use the SORTED_SET type.
 * Boolean fields
+* `[Int/Long/Float/Double/Date]PointField`
+** If the field is single-valued (i.e., multi-valued is false), Lucene will use the NUMERIC type.
+** If the field is multi-valued, Lucene will use the SORTED_NUMERIC type.
 
-These Lucene types are related to how the values are sorted and stored.
-
-There are two implications of multi-valued DocValues being stored as SORTED_SET types that should be kept in mind when combined with /export (and, by extension, <<streaming-expressions.adoc#streaming-expressions,Streaming Expression>>-based functionality):
-
-1.  Values are returned in sorted order rather than the original input order.
-2.  If multiple, identical entries are in the field in a _single_ document, only one will be returned for that document.
+These Lucene types are related to how the {lucene-javadocs}/core/org/apache/lucene/index/DocValuesType.html[values are sorted and stored].
 
 There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#FieldTypeDefinitionsandProperties-docValuesFormat,used by the field type>>. The default implementation employs a mixture of loading some things into memory and keeping some on disk. In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation]. For example, you could choose to keep everything in memory by specifying `docValuesFormat="Memory"` on a field type:
 
@@ -86,7 +86,7 @@ When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be ex
 
 In cases where the query is returning _only_ docValues fields, performance may improve since returning stored fields requires disk reads and decompression, whereas returning docValues fields in the `fl` list only requires memory access.
 
-When retrieving fields from their docValues form, two important differences between regular stored fields and docValues fields must be understood:
+When retrieving fields from their docValues form (using https://cwiki.apache.org/confluence/display/solr/Exporting+Result+Sets[/export handler], https://cwiki.apache.org/confluence/display/solr/Streaming+Expressions[streaming expressions] or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
 
 1.  Order is _not_ preserved. For simply retrieving stored fields, the insertion order is the return order. For docValues, it is the _sorted_ order.
 2.  Multiple identical entries are collapsed into a single value. Thus if I insert values 4, 5, 2, 4, 1, my return will be 1, 2, 4, 5.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index 16ce746..b796522 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -155,7 +155,7 @@ server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clusterprop -n
 server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
 ----
 
-If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot�for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g. `-zkhost localhost:2181/solr`.
+If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g., `-zkhost localhost:2181/solr`.
 
 [[EnablingSSL-RunSolrCloudwithSSL]]
 === Run SolrCloud with SSL

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/errata.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/errata.adoc b/solr/solr-ref-guide/src/errata.adoc
index 54a9dbb..a4f4731 100644
--- a/solr/solr-ref-guide/src/errata.adoc
+++ b/solr/solr-ref-guide/src/errata.adoc
@@ -3,19 +3,21 @@
 :page-permalink: errata.html
 
 [[Errata-ErrataForThisDocumentation]]
-= Errata For This Documentation
+== Errata For This Documentation
 
 Any mistakes found in this documentation after its release will be listed on the on-line version of this page:
 
 http://s.apache.org/errata[https://cwiki.apache.org/confluence/display/solr/Errata]
 
 [[Errata-ErrataForPastVersionsofThisDocumentation]]
-= Errata For Past Versions of This Documentation
+== Errata For Past Versions of This Documentation
 
 Any known mistakes in past releases of this documentation will be noted below.
 
+**v2 API Blob Store API path**: The 6.5 guide listed the Blob Store API path as `/v2/blob`, but the correct path is `/v2/c/.system/blob`.
+
 *Using copyField directives with suggester:* Previous versions of this guide advocated using copyField directives to accumulate the contents on multiple fields into a single field to be used with Solr suggester components. This will not work previous to Solr 5.1; attempting to build the suggester will result in errors being reported in the logs if the field is multiValued. As a work-around, indexing clients should accumulate all of the contents into the field before sending the documents to Solr, and any fields used with the suggesters should have multiValued="false".
 
 The *_variable_ facet.range.gap parameter* was included in documentation even though the patch was not committed. As of yet there is no ability to specify variable gaps via a comma-separated list for facet.range.gap. Some of this functionality can be achieved by interval faceting, see SOLR-6216.
 
-The *MaxIndexingThreads* parameter in *solrconfig.xml* is no longer supported from Solr 5.3, see LUCENE-6659
+The *MaxIndexingThreads* parameter in *solrconfig.xml* is no longer supported from Solr 5.3, see LUCENE-6659.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index 9c2f88c..8e978f7 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -57,8 +57,9 @@ The table below summarizes Solr's field value faceting parameters.
 |<<Faceting-Thefacet.mincountParameter,facet.mincount>> |Specifies the minimum counts required for a facet field to be included in the response.
 |<<Faceting-Thefacet.missingParameter,facet.missing>> |Controls whether Solr should compute a count of all matching results which have no value for the field, in addition to the term-based constraints of a facet field.
 |<<Faceting-Thefacet.methodParameter,facet.method>> |Selects the algorithm or method Solr should use when faceting a field.
-|<<Faceting-Thefacet.existsParameter,facet.exists>> |Caps facet counts by one. Available only for facet.method=enum as performance optimization.
-|<<Faceting-Thefacet.enum.cache.minDfParameter,facet.enum.cache.minDF>> |(Advanced) Specifies the minimum document frequency (the number of documents matching a term) for which the `filterCache` should be used when determining the constraint count for that term.
+|<<Faceting-Thefacet.existsParameter,facet.exists>> |Caps facet counts by one. Available only for `facet.method=enum` as a performance optimization.
+|<<Faceting-Thefacet.excludeTermsParameter,facet.excludeTerms>> |Removes specific terms from facet counts. This allows you to exclude certain terms from faceting, while maintaining the terms in the index for general queries.
+|<<Faceting-Thefacet.enum.cache.minDfParameter,facet.enum.cache.minDf>> |(Advanced) Specifies the minimum document frequency (the number of documents matching a term) for which the `filterCache` should be used when determining the constraint count for that term.
 |<<Faceting-Over-RequestParameters,facet.overrequest.count>> |(Advanced) A number of documents, beyond the effective `facet.limit` to request from each shard in a distributed search
 |<<Faceting-Over-RequestParameters,facet.overrequest.ratio>> |(Advanced) A multiplier of the effective `facet.limit` to request from each shard in a distributed search
 |<<Faceting-Thefacet.threadsParameter,facet.threads>> |(Advanced) Controls parallel execution of field faceting
@@ -177,14 +178,19 @@ A value greater than zero decreases the filterCache's memory usage, but increase
 
 The default value is 0, causing the filterCache to be used for all terms in the field.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.enum.cache.minDF`.
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.enum.cache.minDf`.
 
 [[Faceting-Thefacet.existsParameter]]
-=== The `facet.exists `Parameter
+=== The `facet.exists` Parameter
 
 To cap facet counts by 1, specify `facet.exists=true`. It can be used with `facet.method=enum` or when it's omitted. It can be used only on non-trie fields, i.e., strings. It may speed up facet counting on large indices and/or high-cardinality facet values.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.exists` or via local parameter` facet.field={!facet.method=enum facet.exists=true}size`
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.exists` or via the local parameter `facet.field={!facet.method=enum facet.exists=true}size`.
+
+[[Faceting-Thefacet.excludeTermsParameter]]
+=== The `facet.excludeTerms` Parameter
+
+If you want to remove terms from facet counts but keep them in the index, the `facet.excludeTerms` parameter allows you to do that.
 
 [[Faceting-Over-RequestParameters]]
 === Over-Request Parameters
@@ -714,7 +720,7 @@ The parameter setting above causes the field facet results for the "doctype" fie
 [[Faceting-Limitingfacetwithcertainterms]]
 === Limiting facet with certain terms
 
-To limit field facet with certain terms specify them comma separated with `terms` local parameter. Commas and quotes in terms can be escaped with backslash \,. In this case facet is calculated on a way similar to `facet.method=enum` , but ignores `facet.enum.cache.minDf`. For example:
+To limit field faceting to certain terms, specify them comma-separated with the `terms` local parameter. Commas and quotes in terms can be escaped with a backslash, as in `\,`. In this case the facet is calculated in a way similar to `facet.method=enum`, but ignores `facet.enum.cache.minDf`. For example:
 
 `  facet.field={!terms='alfa,betta,with\,with\',with space'}symbol  `
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/field-properties-by-use-case.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-properties-by-use-case.adoc b/solr/solr-ref-guide/src/field-properties-by-use-case.adoc
index 0c1f23a..90ee877 100644
--- a/solr/solr-ref-guide/src/field-properties-by-use-case.adoc
+++ b/solr/solr-ref-guide/src/field-properties-by-use-case.adoc
@@ -8,11 +8,9 @@ Here is a summary of common use cases, and the attributes the fields or field ty
 |===
 |Use Case |indexed |stored |multiValued |omitNorms |termVectors |termPositions |docValues
 |search within field |true | | | | | |
-|retrieve contents | |true | | | | |
+|retrieve contents | |true^8^ | | | | |true^8^
 |use as unique key |true | |false | | | |
 |sort on field |true^7^ | |false |true ^1^ | | |true^7^
-|use field boosts ^5^ | | | |false | | |
-|document boosts affect searches within field | | | |false | | |
 |highlighting |true ^4^ |true | | |true^2^ |true ^3^ |
 |faceting ^5^ |true^7^ | | | | | |true^7^
 |add multiple values, maintaining order | | |true | | | |
@@ -22,4 +20,6 @@ Here is a summary of common use cases, and the attributes the fields or field ty
 
 Notes:
 
-^1^ Recommended but not necessary. ^2^ Will be used if present, but not necessary. ^3^ (if termVectors=true) ^4^ A tokenizer must be defined for the field, but it doesn't need to be indexed. ^5^ Described in <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>. ^6^ Term vectors are not mandatory here. If not true, then a stored field is analyzed. So term vectors are recommended, but only required if `stored=false`.^7^ Either `indexed` or `docValues` must be true, but both are not required. <<docvalues.adoc#docvalues,DocValues>> can be more efficient in many cases.
+^1^ Recommended but not necessary. ^2^ Will be used if present, but not necessary. ^3^ (if termVectors=true) ^4^ A tokenizer must be defined for the field, but it doesn't need to be indexed. ^5^ Described in <<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>. ^6^ Term vectors are not mandatory here. If not true, then a stored field is analyzed. So term vectors are recommended, but only required if `stored=false`. ^7^ For most field types, either `indexed` or `docValues` must be true, but both are not required. <<docvalues.adoc#docvalues,DocValues>> can be more efficient in many cases. For `[Int/Long/Float/Double/Date]PointFields`, `docValues=true` is required.
+
+^8^ Stored content will be used by default, but docValues can alternatively be used. See <<docvalues.adoc#docvalues,DocValues>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 5e689ec..7f6efc6 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -65,6 +65,7 @@ The properties that can be specified for a given field type fall into three majo
 |class |The class name that gets used to store and index the data for this type. Note that you may prefix included class names with "solr." and Solr will automatically figure out which packages to search for the class - so "solr.TextField" will work. If you are using a third-party class, you will probably need to have a fully qualified class name. The fully qualified equivalent for "solr.TextField" is "org.apache.solr.schema.TextField". |
 |positionIncrementGap |For multivalued fields, specifies a distance between multiple values, which prevents spurious phrase matches |integer
 |autoGeneratePhraseQueries |For text fields. If true, Solr automatically generates phrase queries for adjacent terms. If false, terms must be enclosed in double-quotes to be treated as phrases. |true or false
+|enableGraphQueries |For text fields, applicable when querying with https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser#TheStandardQueryParser-StandardQueryParserParameters[`sow=false`]. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g. <<filter-descriptions.adoc#FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter>> and <<filter-descriptions.adoc#FilterDescriptions-WordDelimiterGraphFilter,Word Delimiter Graph Filter>>. Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g. <<filter-descriptions.adoc#FilterDescriptions-ShingleFilter,Shingle Filter>>. |true or false
 |[[FieldTypeDefinitionsandProperties-docValuesFormat]]docValuesFormat |Defines a custom `DocValuesFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory` has been configured in solrconfig.xml. |n/a
 |postingsFormat |Defines a custom `PostingsFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory` has been configured in solrconfig.xml. |n/a
 |===
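A sketch of how the new `enableGraphQueries` attribute might appear in a schema definition; the field type name and synonyms file here are illustrative, not part of the commit:

```xml
<!-- Hypothetical field type: enableGraphQueries=true (the default) is
     appropriate because the query analyzer includes a graph-aware filter
     (SynonymGraphFilter). Set it to false instead for analyzers whose
     filters can match with tokens missing, e.g. ShingleFilter. -->
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100"
           enableGraphQueries="true">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
  </analyzer>
</fieldType>
```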
@@ -89,12 +90,13 @@ These are properties that can be specified either on the field types, or on indi
 |docValues |If true, the value of the field will be put in a column-oriented <<docvalues.adoc#docvalues,DocValues>> structure. |true or false |false
 |sortMissingFirst sortMissingLast |Control the placement of documents when a sort field is not present. |true or false |false
 |multiValued |If true, indicates that a single document might contain multiple values for this field type. |true or false |false
-|omitNorms |If true, omits the norms associated with this field (this disables length normalization and index-time boosting for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, data, bool, and string.* Only full-text fields or fields that need an index-time boost need norms. |true or false |*
+|omitNorms |If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). *Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string.* Only full-text fields need norms. |true or false |*
 |omitTermFreqAndPositions |If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don't require that information. It also reduces the storage space required for the index. Queries that rely on position that are issued on a field with this option will silently fail to find documents. *This property defaults to true for all field types that are not text fields.* |true or false |*
 |omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
 |termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
 |required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
 |useDocValuesAsStored |If the field has <<docvalues.adoc#docvalues,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#CommonQueryParameters-Thefl_FieldList_Parameter,fl parameter>>. |true or false |true
+|large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
 |===
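A sketch of field definitions using two of the properties above, the new `large` option and `useDocValuesAsStored`; the field and type names are illustrative, not part of the commit:

```xml
<!-- large=true requires stored=true and multiValued=false; the value is
     lazily loaded and only cached when it is under 512KB. -->
<field name="body" type="string" indexed="false" stored="true"
       multiValued="false" large="true"/>

<!-- With useDocValuesAsStored=true (the default), this field is returned
     for fl=* from its docValues even though stored=false. -->
<field name="price" type="int" indexed="true" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
```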
 
 [[FieldTypeDefinitionsandProperties-FieldTypeSimilarity]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
index be05b50..5ff082f 100644
--- a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
@@ -15,18 +15,24 @@ The following table lists the field types that are available in Solr. The `org.a
 |ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>>.
 |EnumField |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>> for more information.
 |ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
-|LatLonType |<<spatial-search.adoc#spatial-search,Spatial Search>>: a latitude/longitude coordinate pair. The latitude is specified first in the pair.
-|PointType |<<spatial-search.adoc#spatial-search,Spatial Search>>: An arbitrary n-dimensional point, useful for searching sources such as blueprints or CAD drawings.
+|LatLonPointSpatialField |<<spatial-search.adoc#spatial-search,Spatial Search>>: a latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually specified in "lat,lon" order, separated by a comma.
+|LatLonType |(deprecated) <<spatial-search.adoc#spatial-search,Spatial Search>>: a single-valued latitude/longitude coordinate pair. Usually specified in "lat,lon" order, separated by a comma.
+|PointType |<<spatial-search.adoc#spatial-search,Spatial Search>>: A single-valued n-dimensional point. Useful for sorting spatial data that is _not_ lat-lon, and for some rarer use cases. (NOTE: this is _not_ related to the "Point"-based numeric fields.)
 |PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing. Configuration and usage of PreAnalyzedField is documented on the <<working-with-external-files-and-processes.adoc#WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType,Working with External Files and Processes>> page.
 |RandomSortField |Does not contain a value. Queries that sort on this field type will return results in random order. Use a dynamic field to use this feature.
 |SpatialRecursivePrefixTreeFieldType |(RPT for short) <<spatial-search.adoc#spatial-search,Spatial Search>>: Accepts latitude comma longitude strings or other shapes in WKT format.
 |StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.
 |TextField |Text, usually multiple words or tokens.
-|TrieDateField |Date field. Represents a point in time with millisecond precision. See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>>. `precisionStep="0"` enables efficient date sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
-|TrieDoubleField |Double field (64-bit IEEE floating point). `precisionStep="0"` enables efficient numeric sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
-|TrieField |If this field type is used, a "type" attribute must also be specified, valid values are: `integer`, `long`, `float`, `double`, `date`. Using this field is the same as using any of the Trie fields. `precisionStep="0"` enables efficient numeric sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
-|TrieFloatField |Floating point field (32-bit IEEE floating point) . `precisionStep="0"` enables efficient numeric sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
-|TrieIntField |Integer field (32-bit signed integer). `precisionStep="0"` enables efficient numeric sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
-|TrieLongField |Long field (64-bit signed integer). `precisionStep="0"` enables efficient numeric sorting and minimizes index size; `precisionStep="8"` (the default) enables efficient range queries.
+|TrieDateField |Date field. Represents a point in time with millisecond precision. See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>>. `precisionStep="0"` minimizes index size; `precisionStep="8"` (the default) enables more efficient range queries. For single valued fields, use `docValues="true"` for efficient sorting.
+|TrieDoubleField |Double field (64-bit IEEE floating point). `precisionStep="0"` minimizes index size; `precisionStep="8"` (the default) enables more efficient range queries. For single valued fields, use `docValues="true"` for efficient sorting.
+|TrieFloatField |Floating point field (32-bit IEEE floating point). `precisionStep="0"` minimizes index size; `precisionStep="8"` (the default) enables more efficient range queries. For single valued fields, use `docValues="true"` for efficient sorting.
+|TrieIntField |Integer field (32-bit signed integer). `precisionStep="0"` minimizes index size; `precisionStep="8"` (the default) enables more efficient range queries. For single valued fields, use `docValues="true"` for efficient sorting.
+|TrieLongField |Long field (64-bit signed integer). `precisionStep="0"` minimizes index size; `precisionStep="8"` (the default) enables more efficient range queries. For single valued fields, use `docValues="true"` for efficient sorting.
+|TrieField |If this field type is used, a "type" attribute must also be specified, valid values are: `integer`, `long`, `float`, `double`, `date`. Using this field is the same as using any of the Trie fields mentioned above.
+|DatePointField |Date field. Represents a point in time with millisecond precision. See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>>. This class functions similarly to TrieDateField, but using a "Dimensional Points" based data structure instead of indexed terms, and doesn't require configuration of a precision step. For single valued fields, `docValues="true"` must be used to enable sorting.
+|DoublePointField |Double field (64-bit IEEE floating point). This class functions similarly to TrieDoubleField, but using a "Dimensional Points" based data structure instead of indexed terms, and doesn't require configuration of a precision step. For single valued fields, `docValues="true"` must be used to enable sorting.
+|FloatPointField |Floating point field (32-bit IEEE floating point). This class functions similarly to TrieFloatField, but using a "Dimensional Points" based data structure instead of indexed terms, and doesn't require configuration of a precision step. For single valued fields, `docValues="true"` must be used to enable sorting.
+|IntPointField |Integer field (32-bit signed integer). This class functions similarly to TrieIntField, but using a "Dimensional Points" based data structure instead of indexed terms, and doesn't require configuration of a precision step. For single valued fields, `docValues="true"` must be used to enable sorting.
+|LongPointField |Long field (64-bit signed integer). This class functions similarly to TrieLongField, but using a "Dimensional Points" based data structure instead of indexed terms, and doesn't require configuration of a precision step. For single valued fields, `docValues="true"` must be used to enable sorting.
 |UUIDField |Universally Unique Identifier (UUID). Pass in a value of "NEW" and Solr will create a new UUID. **Note**: configuring a UUIDField instance with a default value of "NEW" is not advisable for most users when using SolrCloud (and not possible if the UUID value is configured as the unique key field) since the result will be that each replica of each document will get a unique UUID value. Using UUIDUpdateProcessorFactory to generate UUID values when documents are added is recommended instead.
 |===
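A sketch contrasting a Trie field type with its Point-based counterpart, as described above; the type and field names are illustrative, not part of the commit. Note `docValues="true"` on the Point type, since Point fields need it to enable sorting on single-valued fields:

```xml
<!-- Trie-based: precisionStep controls the indexed-terms structure. -->
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"/>

<!-- Point-based: uses the "Dimensional Points" data structure instead;
     no precisionStep, and docValues=true enables sorting. -->
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>

<field name="timestamp_l" type="plong" indexed="true" stored="true"/>
```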