Posted to commits@lucene.apache.org by ct...@apache.org on 2017/04/20 19:39:13 UTC

[3/4] lucene-solr:jira/solr-10290: SOLR-10290: update raw content files

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/filter-descriptions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/filter-descriptions.adoc b/solr/solr-ref-guide/src/filter-descriptions.adoc
index 0786295..bd7840d 100644
--- a/solr/solr-ref-guide/src/filter-descriptions.adoc
+++ b/solr/solr-ref-guide/src/filter-descriptions.adoc
@@ -56,14 +56,17 @@ This filter converts alphabetic, numeric, and symbolic Unicode characters which
 
 *Factory class:* `solr.ASCIIFoldingFilterFactory`
 
-*Arguments:* None
+*Arguments:*
+
+`preserveOriginal`: (boolean, default false) If true, the original token is preserved: "thé" -> "the", "thé"
 
 *Example:*
 
 [source,xml]
 ----
 <analyzer>
-  <filter class="solr.ASCIIFoldingFilterFactory"/>
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
 </analyzer>
 ----
 
@@ -323,7 +326,7 @@ This filter stems plural English words to their singular form.
 [source,xml]
 ----
 <analyzer type="index">
-  <tokenizer class="solr.StandardTokenizerFactory "/>
+  <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.EnglishMinimalStemFilterFactory"/>
 </analyzer>
 ----
@@ -334,6 +337,31 @@ This filter stems plural English words to their singular form.
 
 *Out:* "dog", "cat"
 
+[[FilterDescriptions-EnglishPossessiveFilter]]
+== English Possessive Filter
+
+This filter removes singular possessives (trailing **'s**) from words. Note that plural possessives, e.g. the *s'* in "divers' snorkels", are not removed by this filter.
+
+*Factory class:* `solr.EnglishPossessiveFilterFactory`
+
+*Arguments:* None
+
+*Example:*
+
+[source,xml]
+----
+<analyzer>
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.EnglishPossessiveFilterFactory"/>
+</analyzer>
+----
+
+*In:* "Man's dog bites dogs' man"
+
+*Tokenizer to Filter:* "Man's", "dog", "bites", "dogs'", "man"
+
+*Out:* "Man", "dog", "bites", "dogs'", "man"
+
 [[FilterDescriptions-FingerprintFilter]]
 == Fingerprint Filter
 
@@ -363,6 +391,17 @@ This filter outputs a single token which is a concatenation of the sorted and de
 
 *Out:* "brown_dog_fox_jumped_lazy_over_quick_the"
 
+[[FilterDescriptions-FlattenGraphFilter]]
+== Flatten Graph Filter
+
+This filter must be included in index-time analyzer specifications that include at least one graph-aware filter, such as Synonym Graph Filter or Word Delimiter Graph Filter.
+
+*Factory class:* `solr.FlattenGraphFilterFactory`
+
+*Arguments:* None
+
+See the examples on Synonym Graph Filter and Word Delimiter Graph Filter.
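+
+As a minimal index-time sketch (using Synonym Graph Filter as the graph-aware filter; the synonyms file name is illustrative):
+
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer class="solr.StandardTokenizerFactory"/>
+  <filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
+  <filter class="solr.FlattenGraphFilterFactory"/>
+</analyzer>
+----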
+
 [[FilterDescriptions-HunspellStemFilter]]
 == Hunspell Stem Filter
 
@@ -636,6 +675,102 @@ This filter passes tokens whose length falls within the min/max limit specified.
 
 *Out:* "turn", "right"
 
+[[FilterDescriptions-LimitTokenCountFilter]]
+== Limit Token Count Filter
+
+This filter limits the number of accepted tokens, typically useful for index analysis.
+
+By default, this filter ignores any tokens in the wrapped `TokenStream` once the limit has been reached, which can result in `reset()` being called prior to `incrementToken()` returning `false`. For most `TokenStream` implementations this should be acceptable, and faster than consuming the full stream. If you are wrapping a `TokenStream` which requires that the full stream of tokens be exhausted in order to function properly, use the `consumeAllTokens="true"` option.
+
+*Factory class:* `solr.LimitTokenCountFilterFactory`
+
+*Arguments:*
+
+`maxTokenCount`: (integer, required) Maximum token count. After this limit has been reached, tokens are discarded.
+
+`consumeAllTokens`: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum token count has been reached. See description above.
+
+*Example:*
+
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10"
+          consumeAllTokens="false" />
+</analyzer>
+----
+
+*In:* "1 2 3 4 5 6 7 8 9 10 11 12"
+
+*Tokenizer to Filter:* "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12"
+
+*Out:* "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"
+
+[[FilterDescriptions-LimitTokenOffsetFilter]]
+== Limit Token Offset Filter
+
+This filter limits tokens to those before a configured maximum start character offset. This can be useful to limit highlighting, for example.
+
+By default, this filter ignores any tokens in the wrapped `TokenStream` once the limit has been reached, which can result in `reset()` being called prior to `incrementToken()` returning `false`. For most `TokenStream` implementations this should be acceptable, and faster than consuming the full stream. If you are wrapping a `TokenStream` which requires that the full stream of tokens be exhausted in order to function properly, use the `consumeAllTokens="true"` option.
+
+*Factory class:* `solr.LimitTokenOffsetFilterFactory`
+
+*Arguments:*
+
+`maxStartOffset`: (integer, required) Maximum token start character offset. After this limit has been reached, tokens are discarded.
+
+`consumeAllTokens`: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum start offset has been reached. See description above.
+
+*Example:*
+
+[source,xml]
+----
+<analyzer>
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.LimitTokenOffsetFilterFactory" maxStartOffset="10"
+          consumeAllTokens="false" />
+</analyzer>
+----
+
+*In:* "0 2 4 6 8 A C E"
+
+*Tokenizer to Filter:* "0", "2", "4", "6", "8", "A", "C", "E"
+
+*Out:* "0", "2", "4", "6", "8", "A"
+
+[[FilterDescriptions-LimitTokenPositionFilter]]
+== Limit Token Position Filter
+
+This filter limits tokens to those before a configured maximum token position.
+
+By default, this filter ignores any tokens in the wrapped `TokenStream` once the limit has been reached, which can result in `reset()` being called prior to `incrementToken()` returning `false`. For most `TokenStream` implementations this should be acceptable, and faster than consuming the full stream. If you are wrapping a `TokenStream` which requires that the full stream of tokens be exhausted in order to function properly, use the `consumeAllTokens="true"` option.
+
+*Factory class:* `solr.LimitTokenPositionFilterFactory`
+
+*Arguments:*
+
+`maxTokenPosition`: (integer, required) Maximum token position. After this limit has been reached, tokens are discarded.
+
+`consumeAllTokens`: (boolean, defaults to false) Whether to consume (and discard) previous token filters' tokens after the maximum token position has been reached. See description above.
+
+*Example:*
+
+[source,xml]
+----
+<analyzer>
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.LimitTokenPositionFilterFactory" maxTokenPosition="3"
+          consumeAllTokens="false" />
+</analyzer>
+----
+
+*In:* "1 2 3 4 5"
+
+*Tokenizer to Filter:* "1", "2", "3", "4", "5"
+
+*Out:* "1", "2", "3"
+
 [[FilterDescriptions-LowerCaseFilter]]
 == Lower Case Filter
 
@@ -754,7 +889,7 @@ A range of 1 to 4.
 
 *Tokenizer to Filter:* "four", "score"
 
-*Out:* "f", "fo", "fou", "four", "s", "sc", "sco", "scor"
+*Out:* "f", "fo", "fou", "four", "o", "ou", "our", "u", "ur", "r", "s", "sc", "sco", "scor", "c", "co", "cor", "core", "o", "or", "ore", "r", "re", "e"
 
 *Example:*
 
@@ -971,7 +1106,7 @@ This filter applies the Porter Stemming Algorithm for English. The results are s
 [[FilterDescriptions-RemoveDuplicatesTokenFilter]]
 == Remove Duplicates Token Filter
 
-The filter removes duplicate tokens in the stream. Tokens are considered to be duplicates if they have the same text and position values.
+The filter removes duplicate tokens in the stream. Tokens are considered to be duplicates ONLY if they have the same text and position values. Because positions must be the same, this filter might not do what a user expects it to do based on its name. It is a very specialized filter that is only useful in very specific circumstances. A more accurate filter name would be extremely long and confusing, so the shorter "remove duplicates" name has been used, even though it is potentially misleading.
 
 *Factory class:* `solr.RemoveDuplicatesTokenFilterFactory`
 
@@ -979,7 +1114,9 @@ The filter removes duplicate tokens in the stream. Tokens are considered to be d
 
 *Example:*
 
-One example of where `RemoveDuplicatesTokenFilterFactory` is in situations where a synonym file is being used in conjuntion with a stemmer causes some synonyms to be reduced to the same stem. Consider the following entry from a `synonyms.txt` file:
+One example of where `RemoveDuplicatesTokenFilterFactory` is useful is in situations where a synonym file is used in conjunction with a stemmer. In these situations, both the stemmer and the synonym filter can cause completely identical terms with the same positions to end up in the stream, increasing index size with no benefit.
+
+Consider the following entry from a `synonyms.txt` file:
 
 [source,text]
 ----
@@ -990,9 +1127,9 @@ When used in the following configuration:
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="query">
   <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"/>
+  <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
   <filter class="solr.EnglishMinimalStemFilterFactory"/>
   <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
 </analyzer>
@@ -1053,15 +1190,15 @@ This filter constructs shingles, which are token n-grams, from the token stream.
 
 *Arguments:*
 
-`minShingleSize`: (integer, default 2) The minimum number of tokens per shingle.
+`minShingleSize`: (integer, must be >= 2, default 2) The minimum number of tokens per shingle.
 
-`maxShingleSize`: (integer, must be >= 2, default 2) The maximum number of tokens per shingle.
+`maxShingleSize`: (integer, must be >= `minShingleSize`, default 2) The maximum number of tokens per shingle.
 
-`outputUnigrams`: (true/false) If true (the default), then each individual token is also included at its original position.
+`outputUnigrams`: (boolean, default true) If true, then each individual token is also included at its original position.
 
-`outputUnigramsIfNoShingles`: (true/false) If false (the default), then individual tokens will be output if no shingles are possible.
+`outputUnigramsIfNoShingles`: (boolean, default false) If true, then individual tokens will be output if no shingles are possible.
 
-`tokenSeparator`: (string, default is " ") The default string to use when joining adjacent tokens to form a shingle.
+`tokenSeparator`: (string, default is " ") The string to use when joining adjacent tokens to form a shingle.
 
 *Example:*
 
@@ -1278,8 +1415,27 @@ Like <<FilterDescriptions-StopFilter,Stop Filter>>, this filter discards, or _st
 
 This filter does synonym mapping. Each token is looked up in the list of synonyms and if a match is found, then the synonym is emitted in place of the token. The position values of the new tokens are set such that they all occur at the same position as the original token.
 
+.Synonym Filter has been deprecated
+[WARNING]
+====
+
+Synonym Filter has been deprecated in favor of Synonym Graph Filter, which is required for multi-term synonym support.
+
+====
+
 *Factory class:* `solr.SynonymFilterFactory`
 
+For arguments and examples, see the Synonym Graph Filter below.
+
+[[FilterDescriptions-SynonymGraphFilter]]
+== Synonym Graph Filter
+
+This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Synonym Filter, which produces incorrect graphs for multi-token synonyms.
+
+If you use this filter during indexing, you must follow it with a Flatten Graph Filter to squash tokens on top of one another like the Synonym Filter, because the indexer can't directly consume a graph. To get fully correct positional queries when your synonym replacements are multiple tokens, you should instead apply synonyms using this filter at query time.
+
+*Factory class:* `solr.SynonymGraphFilterFactory`
+
 *Arguments:*
 
 `synonyms`: (required) The path of a file that contains a list of synonyms, one per line. In the (default) `solr` format - see the `format` argument below for alternatives - blank lines and lines that begin with "`#`" are ignored. This may be an absolute path, or path relative to the Solr config directory. There are two ways to specify synonym mappings:
@@ -1312,9 +1468,14 @@ small => tiny,teeny,weeny
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="index">
+  <tokenizer class="solr.StandardTokenizerFactory"/>
+  <filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
+  <filter class="solr.FlattenGraphFilterFactory"/> <!-- required on index analyzers after graph filters -->
+</analyzer>
+<analyzer type="query">
   <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.SynonymFilterFactory" synonyms="mysynonyms.txt"/>
+  <filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
 </analyzer>
 ----
 
@@ -1326,14 +1487,6 @@ small => tiny,teeny,weeny
 
 *Example:*
 
-[source,xml]
-----
-<analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory "/>
-  <filter class="solr.SynonymFilterFactory" synonyms="mysynonyms.txt"/>
-</analyzer>
-----
-
 *In:* "teh ginormous, humungous sofa"
 
 *Tokenizer to Filter:* "teh"(1), "ginormous"(2), "humungous"(3), "sofa"(4)
@@ -1446,7 +1599,30 @@ This filter blacklists or whitelists a specified list of token types, assuming t
 [[FilterDescriptions-WordDelimiterFilter]]
 == Word Delimiter Filter
 
-This filter splits tokens at word delimiters. The rules for determining delimiters are determined as follows:
+This filter splits tokens at word delimiters.
+
+.Word Delimiter Filter has been deprecated
+[WARNING]
+====
+
+Word Delimiter Filter has been deprecated in favor of Word Delimiter Graph Filter, which is required to produce a correct token graph so that e.g. phrase queries can work correctly.
+
+====
+
+*Factory class:* `solr.WordDelimiterFilterFactory`
+
+For a full description, including arguments and examples, see the Word Delimiter Graph Filter below.
+
+[[FilterDescriptions-WordDelimiterGraphFilter]]
+== Word Delimiter Graph Filter
+
+This filter splits tokens at word delimiters.
+
+If you use this filter during indexing, you must follow it with a Flatten Graph Filter to squash tokens on top of one another like the Word Delimiter Filter, because the indexer can't directly consume a graph. To get fully correct positional queries when tokens are split, you should instead use this filter at query time.
+
+Note: although this filter produces correct token graphs, it cannot consume an input token graph correctly.
+
+The rules for determining delimiters are as follows:
 
 * A change in case within a word: "CamelCase" *->* "Camel", "Case". This can be disabled by setting `splitOnCaseChange="0"`.
 
@@ -1458,7 +1634,7 @@ This filter splits tokens at word delimiters. The rules for determining delimite
 
 * Any leading or trailing delimiters are discarded: "--hot-spot--" *->* "hot", "spot"
 
-*Factory class:* `solr.WordDelimiterFilterFactory`
+*Factory class:* `solr.WordDelimiterGraphFilterFactory`
 
 *Arguments:*
 
@@ -1482,15 +1658,34 @@ This filter splits tokens at word delimiters. The rules for determining delimite
 
 `stemEnglishPossessive`: (integer, default 1) If 1, strips the possessive "'s" from each subword.
 
+`types`: (optional) The pathname of a file that contains *character => type* mappings, which enable customization of this filter's splitting behavior. Recognized character types: `LOWER`, `UPPER`, `ALPHA`, `DIGIT`, `ALPHANUM`, and `SUBWORD_DELIM`. The default for any character without a customized mapping is computed from Unicode character properties. Blank lines and comment lines starting with '#' are ignored. An example file:
+
+[source,text]
+----
+# Don't split numbers at '$', '.' or ','
+$ => DIGIT
+. => DIGIT
+\u002C => DIGIT
+
+# Don't split on ZWJ: http://en.wikipedia.org/wiki/Zero-width_joiner
+\u200D => ALPHANUM
+----
+
 *Example:*
 
 Default behavior. The whitespace tokenizer is used here to preserve non-alphanumeric characters.
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="index">
+  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory"/>
+  <filter class="solr.FlattenGraphFilterFactory"/> <!-- required on index analyzers after graph filters -->
+</analyzer>
+
+<analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterFilterFactory"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory"/>
 </analyzer>
 ----
 
@@ -1506,9 +1701,9 @@ Do not split on case changes, and do not generate number parts. Note that by not
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterFilterFactory" generateNumberParts="0" splitOnCaseChange="0"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory" generateNumberParts="0" splitOnCaseChange="0"/>
 </analyzer>
 ----
 
@@ -1524,9 +1719,9 @@ Concatenate word parts and number parts, but not word and number parts that occu
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterFilterFactory" catenateWords="1" catenateNumbers="1"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory" catenateWords="1" catenateNumbers="1"/>
 </analyzer>
 ----
 
@@ -1542,9 +1737,9 @@ Concatenate all. Word and/or number parts are joined together.
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterFilterFactory" catenateAll="1"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory" catenateAll="1"/>
 </analyzer>
 ----
 
@@ -1560,9 +1755,9 @@ Using a protected words list that contains "AstroBlaster" and "XL-5000" (among o
 
 [source,xml]
 ----
-<analyzer>
+<analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterFilterFactory" protected="protwords.txt"/>
+  <filter class="solr.WordDelimiterGraphFilterFactory" protected="protwords.txt"/>
 </analyzer>
 ----
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
index 17baa37..8c19635 100644
--- a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
@@ -30,7 +30,7 @@ For most SolrCloud or standalone Solr setups, the HadoopAuthPlugin should suffic
 |authConfigs |Yes |Configuration parameters required by the authentication scheme defined by the type property. See https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[configuration] options.
 |defaultConfigs |No |Default values for the configuration parameters specified by the `authConfigs` property. The default values are specified as a collection of key-value pairs (i.e., property-name : default_value).
 |enableDelegationToken |No |Enable (or disable) the delegation tokens functionality.
-|initKerberosZk |No |For enabling initialization of kerberos before connecting to Zookeeper.
+|initKerberosZk |No |For enabling initialization of Kerberos before connecting to Zookeeper (if applicable).
 |proxyUserConfigs |No |Configures proxy users for the underlying Hadoop authentication mechanism. This configuration is expressed as a collection of key-value pairs (i.e., property-name : value).
 |clientBuilderFactory |No |The HttpClientBuilderFactory implementation used for the Solr internal communication. Only applicable for ConfigurableInternodeAuthHadoopPlugin
 |===
@@ -41,14 +41,11 @@ For most SolrCloud or standalone Solr setups, the HadoopAuthPlugin should suffic
 [[HadoopAuthenticationPlugin-KerberosAuthenticationusingHadoopAuthenticationPlugin]]
 === Kerberos Authentication using Hadoop Authentication Plugin
 
-This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>>. After consulting the Hadoop authentication library's documentation, you can supply per host configuration parameters using the "solr." prefix. As an example, the Hadoop authentication library expects a parameter "kerberos.principal", which can be supplied as a system property named "solr.kerberos.principal" when starting a Solr node. Refer to the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> page for other typical configuration parameters. Please note that this example uses ConfigurableInternodeAuthHadoopPlugin, and hence you must provide the clientBuilderFactory implementation. As a result, all internode communication will use the Kerberos mechanism, instead of PKI authentication.
+This example lets you configure Solr to use Kerberos Authentication, similar to how you would use the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>>. After consulting the Hadoop authentication library's documentation, you can supply per-host configuration parameters using the "solr." prefix. As an example, the Hadoop authentication library expects a parameter "kerberos.principal", which can be supplied as a system property named "solr.kerberos.principal" when starting a Solr node. Refer to the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> page for other typical configuration parameters.
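+
+For example, such a system property can be passed when starting a Solr node (the principal value below is illustrative):
+
+[source,bash]
+----
+bin/solr start -c \
+  -Dsolr.kerberos.principal=HTTP/solr-host.example.com@EXAMPLE.COM
+----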
 
-To setup this plugin, use the following in your Zookeeper's /security.json znode.
+Please note that this example uses `ConfigurableInternodeAuthHadoopPlugin`, and hence you must provide the `clientBuilderFactory` implementation. As a result, all internode communication will use the Kerberos mechanism, instead of PKI authentication.
 
-// OLD_CONFLUENCE_ID: HadoopAuthenticationPlugin-/security.json
-
-[[HadoopAuthenticationPlugin-_security.json]]
-==== /security.json
+To set up this plugin, use the following in your `security.json` file.
 
 [source,bash]
 ----
@@ -73,14 +70,9 @@ To setup this plugin, use the following in your Zookeeper's /security.json znode
 [[HadoopAuthenticationPlugin-SimpleAuthenticationwithDelegationTokens]]
 === Simple Authentication with Delegation Tokens
 
-Similar to the previous example, this is an example of setting up Solr cluster that uses delegation tokens. Refer to the parameters in the Hadoop authentication library's https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[documentation] or refer to the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> page for some details. Please note that this example does not use Kerberos and the requests made to Solr must contain valid delegation tokens.
-
-To setup this plugin, use the following in your Zookeeper's /security.json znode.
-
-// OLD_CONFLUENCE_ID: HadoopAuthenticationPlugin-/security.json.1
+Similar to the previous example, this is an example of setting up a Solr cluster that uses delegation tokens. Refer to the parameters in the Hadoop authentication library's https://hadoop.apache.org/docs/stable/hadoop-auth/Configuration.html[documentation] or refer to the <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>> page for further details. Please note that this example does not use Kerberos, and the requests made to Solr must contain valid delegation tokens.
 
-[[HadoopAuthenticationPlugin-_security.json.1]]
-==== /security.json
+To set up this plugin, use the following in your `security.json` file.
 
 [source,bash]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/highlighting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/highlighting.adoc b/solr/solr-ref-guide/src/highlighting.adoc
index 63b768a..9b4c9f4 100644
--- a/solr/solr-ref-guide/src/highlighting.adoc
+++ b/solr/solr-ref-guide/src/highlighting.adoc
@@ -9,7 +9,9 @@ Highlighting in Solr allows fragments of documents that match the user's query t
 
 You only need to set the `hl` and often `hl.fl` parameters to get results. The following table documents these and some other supported parameters. Note that many highlighting parameters support per-field overrides, such as: `f.__title_txt__.hl.snippets`
 
-[cols=",,",options="header",]
+// TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
+
+[width="100%",cols="34%,33%,33%",options="header",]
 |===
 |Parameter |Default |Description
 |hl |false |Use this parameter to enable or disable highlighting.
@@ -17,7 +19,11 @@ You only need to set the `hl` and often `hl.fl` parameters to get results. The f
 |hl.fl |_(df=)_ |Specifies a list of fields to highlight. Accepts a comma- or space-delimited list of fields for which Solr should generate highlighted snippets. A wildcard of '`*`' (asterisk) can be used to match field globs, such as 'text_*' or even '*' to highlight on all fields where highlighting is possible. When using '*', consider adding `hl.requireFieldMatch=true`
 |hl.q |_(q=)_ |A query to use for highlighting. This parameter allows you to highlight different terms than those being used to retrieve documents.
 |hl.qparser |_(defType=)_ |The query parser to use for the `hl.q` query.
-|hl.requireFieldMatch |false |By default, **false**, all query terms will be highlighted for each field to be highlighted (`hl.fl`) no matter what fields the parsed query refer to. If set to **true**, only query terms aligning with the field being highlighted will in turn be highlighted.
+|hl.requireFieldMatch |false a|
+If set to **false** (the default), all query terms will be highlighted for each field to be highlighted (`hl.fl`) no matter what fields the parsed query refers to. If set to **true**, only query terms aligning with the field being highlighted will in turn be highlighted.
+
+Note: if the query references fields different from the field being highlighted, and they have different text analysis, the query may fail to highlight query terms it should have and vice versa. The analysis used is that of the field being highlighted (`hl.fl`), not the query fields.
+
 |hl.usePhraseHighlighter |true |If set to **true**, Solr will highlight phrase queries (and other advanced position-sensitive queries) accurately – as phrases. If **false**, the parts of the phrase will be highlighted everywhere instead of only when it forms the given phrase.
 |hl.highlightMultiTerm |true |If set to **true**, Solr will highlight wildcard queries (and other `MultiTermQuery` subclasses). If **false**, they won't be highlighted at all.
 |hl.snippets |1 |Specifies maximum number of highlighted snippets to generate per field. It is possible for any number of snippets from zero to this value to be generated.
@@ -117,7 +123,8 @@ The Unified Highlighter supports these following additional parameters to the on
 |hl.bs.language |_(blank)_ |Specifies the breakiterator language for dividing the document into passages.
 |hl.bs.country |_(blank)_ |Specifies the breakiterator country for dividing the document into passages.
 |hl.bs.variant |_(blank)_ |Specifies the breakiterator variant for dividing the document into passages.
-|hl.bs.type |SENTENCE |Specifies the breakiterator type for dividing the document into passages. Can be **SENTENCE**, **WORD**, **CHARACTER**, **LINE**, or **WHOLE**.
+|hl.bs.type |SENTENCE |Specifies the breakiterator type for dividing the document into passages. Can be **SEPARATOR**, **SENTENCE**, **WORD**, **CHARACTER**, **LINE**, or **WHOLE**. SEPARATOR is a special value that splits the text on a user-provided character given in `hl.bs.separator`.
+|hl.bs.separator |_(blank)_ |Indicates which character to break the text on. Requires `hl.bs.type=SEPARATOR`. This is useful when the text has already been manipulated in advance to have a special delineation character at desired highlight passage boundaries. This character will still appear in the text as the last character of a passage.
 |===
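+
+For example, assuming documents whose text was pre-processed to place a pipe character at the desired passage boundaries, the separator-based breakiterator could be requested with parameters like these (the `content` field name is illustrative; `%7C` is the URL-encoded pipe):
+
+[source,text]
+----
+hl=true&hl.fl=content&hl.bs.type=SEPARATOR&hl.bs.separator=%7C
+----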
 
 [[Highlighting-ThePostingsHighlighter]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/a-quick-overview/sample-client-app-arch.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/a-quick-overview/sample-client-app-arch.png b/solr/solr-ref-guide/src/images/a-quick-overview/sample-client-app-arch.png
index d4ad454..7c181b3 100644
Binary files a/solr/solr-ref-guide/src/images/a-quick-overview/sample-client-app-arch.png and b/solr/solr-ref-guide/src/images/a-quick-overview/sample-client-app-arch.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/analysis-screen/analysis_normal.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/analysis-screen/analysis_normal.png b/solr/solr-ref-guide/src/images/analysis-screen/analysis_normal.png
index 6e83572..f180ca9 100644
Binary files a/solr/solr-ref-guide/src/images/analysis-screen/analysis_normal.png and b/solr/solr-ref-guide/src/images/analysis-screen/analysis_normal.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/analysis-screen/analysis_verbose.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/analysis-screen/analysis_verbose.png b/solr/solr-ref-guide/src/images/analysis-screen/analysis_verbose.png
index 54f1e5c..13f8fc7 100644
Binary files a/solr/solr-ref-guide/src/images/analysis-screen/analysis_verbose.png and b/solr/solr-ref-guide/src/images/analysis-screen/analysis_verbose.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/cloud-screens/cloud-graph.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/cloud-screens/cloud-graph.png b/solr/solr-ref-guide/src/images/cloud-screens/cloud-graph.png
index a23d2e9..a1f81b2 100644
Binary files a/solr/solr-ref-guide/src/images/cloud-screens/cloud-graph.png and b/solr/solr-ref-guide/src/images/cloud-screens/cloud-graph.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/cloud-screens/cloud-radial.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/cloud-screens/cloud-radial.png b/solr/solr-ref-guide/src/images/cloud-screens/cloud-radial.png
index ef1c6bb..76f9e1e 100644
Binary files a/solr/solr-ref-guide/src/images/cloud-screens/cloud-radial.png and b/solr/solr-ref-guide/src/images/cloud-screens/cloud-radial.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/cloud-screens/cloud-tree.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/cloud-screens/cloud-tree.png b/solr/solr-ref-guide/src/images/cloud-screens/cloud-tree.png
index be0115b..127812a 100644
Binary files a/solr/solr-ref-guide/src/images/cloud-screens/cloud-tree.png and b/solr/solr-ref-guide/src/images/cloud-screens/cloud-tree.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/collection-specific-tools/collection_dashboard.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/collection-specific-tools/collection_dashboard.png b/solr/solr-ref-guide/src/images/collection-specific-tools/collection_dashboard.png
index d75b03c..66a31e2 100644
Binary files a/solr/solr-ref-guide/src/images/collection-specific-tools/collection_dashboard.png and b/solr/solr-ref-guide/src/images/collection-specific-tools/collection_dashboard.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/collections-core-admin/DeleteShard.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/collections-core-admin/DeleteShard.png b/solr/solr-ref-guide/src/images/collections-core-admin/DeleteShard.png
new file mode 100644
index 0000000..b7723e7
Binary files /dev/null and b/solr/solr-ref-guide/src/images/collections-core-admin/DeleteShard.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/collections-core-admin/collection-admin.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/collections-core-admin/collection-admin.png b/solr/solr-ref-guide/src/images/collections-core-admin/collection-admin.png
index 6f42d24..86c367f 100644
Binary files a/solr/solr-ref-guide/src/images/collections-core-admin/collection-admin.png and b/solr/solr-ref-guide/src/images/collections-core-admin/collection-admin.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png b/solr/solr-ref-guide/src/images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png
index 1b060eb..f0e4d89 100644
Binary files a/solr/solr-ref-guide/src/images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png and b/solr/solr-ref-guide/src/images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/core-specific-tools/core_dashboard.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/core-specific-tools/core_dashboard.png b/solr/solr-ref-guide/src/images/core-specific-tools/core_dashboard.png
index 2da9e98..b4e941a 100644
Binary files a/solr/solr-ref-guide/src/images/core-specific-tools/core_dashboard.png and b/solr/solr-ref-guide/src/images/core-specific-tools/core_dashboard.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/cross-data-center-replication-cdcr-/CDCR_arch.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/cross-data-center-replication-cdcr-/CDCR_arch.png b/solr/solr-ref-guide/src/images/cross-data-center-replication-cdcr-/CDCR_arch.png
index 17cc014..4dc6ee5 100644
Binary files a/solr/solr-ref-guide/src/images/cross-data-center-replication-cdcr-/CDCR_arch.png and b/solr/solr-ref-guide/src/images/cross-data-center-replication-cdcr-/CDCR_arch.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/parallel-sql-interface/cluster.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/parallel-sql-interface/cluster.png b/solr/solr-ref-guide/src/images/parallel-sql-interface/cluster.png
index eb862c4..10f134f 100644
Binary files a/solr/solr-ref-guide/src/images/parallel-sql-interface/cluster.png and b/solr/solr-ref-guide/src/images/parallel-sql-interface/cluster.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png
index 39e8a35..eb1d655 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png
index f5c3296..8d051a2 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png
index 880fe0a..69abcf7 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png
index a8b29fc..e7ca8fe 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png
index fb5b0aa..6bc3f4f 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png
index 1c0d077..d6696d9 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png
index eebc252..a07772e 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png
index abd0f98..b748a9a 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png
index 76b81b2..8d01bc0 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png
index b9ae300..eabb3f8 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png
index ff9663c..8680d2f 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png
index bec1091..ab24004 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png
index 22df737..1321f05 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png
index 3db7c66..76a5798 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png
index b6a368c..111de69 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png
index 7d814cd..bcedc59 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png
index bd9e4f4..f5de9c4 100644
Binary files a/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png and b/solr/solr-ref-guide/src/images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
index 81d42a4..c555f8c 100644
--- a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
@@ -2,9 +2,9 @@
 :page-shortname: kerberos-authentication-plugin
 :page-permalink: kerberos-authentication-plugin.html
 
-If you are using Kerberos to secure your network environment, the Kerberos authentication plugin can be used to secure a Solr cluster. This allows Solr to use a Kerberos service principal and keytab file to authenticate with ZooKeeper and between nodes of the Solr cluster. Users of the Admin UI and alll clients (such as <<using-solrj.adoc#using-solrj,SolrJ>>) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
+If you are using Kerberos to secure your network environment, the Kerberos authentication plugin can be used to secure a Solr cluster. This allows Solr to use a Kerberos service principal and keytab file to authenticate with ZooKeeper and between nodes of the Solr cluster (if applicable). Users of the Admin UI and all clients (such as <<using-solrj.adoc#using-solrj,SolrJ>>) would also need to have a valid ticket before being able to use the UI or send requests to Solr.
 
-Support for the Kerberos authentication plugin is only available in SolrCloud mode.
+Support for the Kerberos authentication plugin is available in SolrCloud mode or standalone mode.
 
 [TIP]
 ====
@@ -21,11 +21,7 @@ When setting up Solr to use Kerberos, configurations are put in place for Solr t
 [[KerberosAuthenticationPlugin-security.json]]
 === security.json
 
-The Solr authentication model uses a file called `/security.json` which is stored in ZooKeeper. A description of this file and how it is created and maintained is covered in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>, and can only be used when Solr is running in SolrCloud mode. If this file is created after an initial startup of Solr, a restart of the system on each node is required.
-
-Alternatively, the authentication plugin implementation can be specified during node startup using the system parameter: `-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin`. This parameter can be used with either SolrCloud mode or standalone mode. However, if you are using Solr in standalone mode, this system parameter is the only way to enable Kerberos.
-
-If you are using SolrCloud mode, the approach to use `security.json` is the best practice.
+The Solr authentication model uses a file called `security.json`. A description of this file and how it is created and maintained is covered in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>. If this file is created after an initial startup of Solr, a restart of each node of the system is required.
 
 [[KerberosAuthenticationPlugin-ServicePrincipalsandKeytabFiles]]
 === Service Principals and Keytab Files
@@ -47,7 +43,7 @@ Since a Solr cluster requires internode communication, each node must also be ab
 [[KerberosAuthenticationPlugin-KerberizedZooKeeper]]
 === Kerberized ZooKeeper
 
-When setting up a kerberized Solr cluster, it is recommended to enable Kerberos security for Zookeeper as well. In such a setup, the client principal used to authenticate requests with Zookeeper can be shared for internode communication as well. This has the benefit of not needing to renew the ticket granting tickets (TGTs) separately, since the Zookeeper client used by Solr takes care of this. To achieve this, a single JAAS configuration (with the app name as Client) can be used for the Kerberos plugin as well as for the Zookeeper client. See the configuration section below for an example of starting Zookeeper in Kerberos mode.
+When setting up a kerberized SolrCloud cluster, it is recommended to enable Kerberos security for ZooKeeper as well. In such a setup, the client principal used to authenticate requests with ZooKeeper can be shared for internode communication as well. This has the benefit of not needing to renew the ticket granting tickets (TGTs) separately, since the ZooKeeper client used by Solr takes care of this. To achieve this, a single JAAS configuration (with the app name as Client) can be used for the Kerberos plugin as well as for the ZooKeeper client. See the configuration section below for an example of starting ZooKeeper in Kerberos mode.
 
 [[KerberosAuthenticationPlugin-BrowserConfiguration]]
 === Browser Configuration
@@ -81,7 +77,7 @@ We'll walk through each of these steps below.
 [IMPORTANT]
 ====
 
-To use host names instead of IP addresses, use the SOLR_HOST config in http://solr.in[`bin/solr.in.sh`] or pass a `-Dhost=<hostname>` during Solr startup. This guide uses IP addresses . If you specify a hostname replace all the IP addresses in the guide with the solr hostname
+To use host names instead of IP addresses, use the `SOLR_HOST` configuration in `bin/solr.in.sh` or pass a `-Dhost=<hostname>` system parameter during Solr startup. This guide uses IP addresses. If you specify a hostname, replace all the IP addresses in the guide with the Solr hostname as appropriate.
 
 ====
 
@@ -142,7 +138,7 @@ Server {
   doNotPrompt=true
   useTicketCache=false
   debug=true
-  principal=\u201dzookeeper/host1\u201d;
+  principal="zookeeper/host1@EXAMPLE.COM";
 };
 ----
 
@@ -161,18 +157,20 @@ Once all of the pieces are in place, start ZooKeeper with the following paramete
 bin/zkServer.sh start -Djava.security.auth.login.config=/etc/zookeeper/conf/jaas-client.conf
 ----
 
-// OLD_CONFLUENCE_ID: KerberosAuthenticationPlugin-Create/security.json
+[[KerberosAuthenticationPlugin-Createsecurity.json]]
+=== Create security.json
 
-[[KerberosAuthenticationPlugin-Create_security.json]]
-=== Create /security.json
+Create the `security.json` file.
 
-Set up Solr to use the Kerberos plugin by uploading the `security.json` as follows:
+In SolrCloud mode, you can set up Solr to use the Kerberos plugin by creating the `security.json` file and uploading it to ZooKeeper in a single step, as follows:
 
 [source,bash]
 ----
 > server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /security.json '{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"}}'
 ----
 
+If you are using Solr in standalone mode, you need to create the `security.json` file and put it in your `$SOLR_HOME` directory.
+
 More details on how to use a `/security.json` file in Solr are available in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>.
 
 [IMPORTANT]
@@ -236,7 +234,7 @@ Here is an example that could be added to `bin/solr.in.sh`. Make sure to change
 
 [source,bash]
 ----
-SOLR_AUTHENTICATION_CLIENT_CONFIGURER=org.apache.solr.client.solrj.impl.Krb5HttpClientConfigurer
+SOLR_AUTH_TYPE="kerberos"
 SOLR_AUTHENTICATION_OPTS="-Djava.security.auth.login.config=/home/foo/jaas-client.conf -Dsolr.kerberos.cookie.domain=192.168.0.107 -Dsolr.kerberos.cookie.portaware=true -Dsolr.kerberos.principal=HTTP/192.168.0.107@EXAMPLE.COM -Dsolr.kerberos.keytab=/keytabs/107.keytab"
 ----
 
@@ -281,7 +279,7 @@ To enable delegation tokens, several parameters must be defined. These parameter
 [[KerberosAuthenticationPlugin-StartSolr]]
 === Start Solr
 
-Once the configuration is complete, you can start Solr with the `bin/solr` script, as in the example below. This example assumes you modified `bin/solr.in.sh` or `bin/solr.in.cmd`, with the proper values, but if you did not, you would pas the system parameters along with the start command. Note you also need to customize the `-z` property as appropriate for the location of your ZooKeeper nodes.
+Once the configuration is complete, you can start Solr with the `bin/solr` script, as in the example below, which applies to SolrCloud mode only. This example assumes you modified `bin/solr.in.sh` or `bin/solr.in.cmd` with the proper values, but if you did not, you would pass the system parameters along with the start command. Note that you also need to customize the `-z` property as appropriate for the location of your ZooKeeper nodes.
 
 [source,bash]
 ----

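As a sketch of the standalone-mode step described above (placing `security.json` in `$SOLR_HOME` instead of uploading it to ZooKeeper), the file the Kerberos plugin expects is a small JSON document. The target directory below is illustrative, not a real deployment path:

```python
import json
import os
import tempfile

# Minimal security.json enabling the Kerberos authentication plugin.
# In standalone mode the file lives in $SOLR_HOME rather than ZooKeeper.
security = {
    "authentication": {
        "class": "org.apache.solr.security.KerberosPlugin"
    }
}

# Illustrative stand-in for $SOLR_HOME; a real deployment would use its
# actual Solr home directory.
solr_home = tempfile.mkdtemp()
path = os.path.join(solr_home, "security.json")
with open(path, "w") as f:
    json.dump(security, f, indent=2)

with open(path) as f:
    print(f.read())
```

After writing the file, a restart of the Solr node is required for the plugin to take effect, as noted above.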
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/language-analysis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index 6cdc339..df098a3 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -1311,8 +1311,8 @@ Solr provides support for Polish stemming with the `solr.StempelPolishStemFilter
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
   <filter class="solr.MorfologikFilterFactory" dictionary="morfologik/stemming/polish/polish.dict"/>
+  <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
 
@@ -1324,7 +1324,9 @@ Solr provides support for Polish stemming with the `solr.StempelPolishStemFilter
 
 More information about the Stempel stemmer is available in {lucene-javadocs}/analyzers-stempel/index.html[the Lucene javadocs].
 
-The Morfologik dictionary param value is a constant specifying which dictionary to choose. The dictionary resource must be named `morfologik/stemming/__language__/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
+Note that the lower case filter is applied _after_ the Morfologik stemmer; this is because the Polish dictionary contains proper names, so correct term case may be important to resolve ambiguities (or even to look up the correct lemma at all).
+
+The Morfologik dictionary param value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
 
 <<main,Back to Top>>
 
@@ -1666,10 +1668,12 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
 ----
 <analyzer> 
   <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
   <filter class="solr.StopFilterFactory" words="org/apache/lucene/analysis/uk/stopwords.txt"/>
   <filter class="solr.MorfologikFilterFactory" dictionary="org/apache/lucene/analysis/uk/ukrainian.dict"/>
+  <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
 
-The Morfologik `dictionary` param value is a constant specifying which dictionary to choose. The dictionary resource must be named `morfologik/stemming/__language__/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
+Note that the lower case filter is applied _after_ the Morfologik stemmer; this is because the Ukrainian dictionary contains proper names, so correct term case may be important to resolve ambiguities (or even to look up the correct lemma at all).
+
+The Morfologik `dictionary` param value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/__language__.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
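The filter-ordering point above can be shown with a toy case-sensitive lemma table (invented entries, not the Morfologik dictionary itself): lowercasing *before* lookup collapses a proper name onto a different lemma.

```python
# Toy case-sensitive dictionary: the capitalized token is a proper name,
# the lowercase token is an ordinary adjective. Entries are invented for
# illustration and do not come from a real Morfologik dictionary.
lemmas = {
    "Polska": "Polska",   # proper name: "Poland"
    "polska": "polski",   # adjective: "Polish" (feminine form)
}

def stem_then_lower(token):
    # Order used in the analyzer examples above: dictionary lookup first,
    # lower-casing afterwards.
    return lemmas.get(token, token).lower()

def lower_then_stem(token):
    # The reversed (incorrect) order: case information is lost before lookup.
    t = token.lower()
    return lemmas.get(t, t)

print(stem_then_lower("Polska"))  # -> "polska" (proper-name lemma kept)
print(lower_then_stem("Polska"))  # -> "polski" (proper-name reading lost)
```

The same reasoning applies to the Ukrainian chain: only after the dictionary has resolved the lemma is it safe to normalize case.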

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index c65df18..0af8084 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -266,6 +266,12 @@ To view the features you just uploaded please open the following URL in a browse
 [
   {
     "store" : "myEfiFeatureStore",
+    "name" : "isPreferredManufacturer",
+    "class" : "org.apache.solr.ltr.feature.SolrFeature",
+    "params" : { "fq" : [ "{!field f=manu}${preferredManufacturer}" ] }
+  },
+  {
+    "store" : "myEfiFeatureStore",
     "name" : "userAnswerValue",
     "class" : "org.apache.solr.ltr.feature.ValueFeature",
     "params" : { "value" : "${answer:42}" }
@@ -292,8 +298,8 @@ As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"s
 
 To extract `myEfiFeatureStore` features as part of a query, add `efi.*` parameters to the `[features]` part of the `fl` parameter, for example:
 
-* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,score,[features store=myEfiFeatureStore efi.text=test efi.fromMobile=1]]
-* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.fromMobile=0%20efi.answer=13%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,score,[features store=myEfiFeatureStore efi.text=test efi.fromMobile=0 efi.answer=13]]
+* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]]
+* link:[] http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=0%20efi.answer=13%5D[http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]]
 
 [[LearningToRank-Uploadingamodel.1]]
 ==== Uploading a model
@@ -318,12 +324,14 @@ To view the model you just uploaded please open the following URL in a browser:
   "name" : "myEfiModel",
   "class" : "org.apache.solr.ltr.model.LinearModel",
   "features" : [
+    { "name" : "isPreferredManufacturer" },
     { "name" : "userAnswerValue" },
     { "name" : "userFromMobileValue" },
     { "name" : "userTextCat" }
   ],
   "params" : {
     "weights" : {
+      "isPreferredManufacturer" : 0.2,
       "userAnswerValue" : 1.0,
       "userFromMobileValue" : 1.0,
       "userTextCat" : 0.1
@@ -337,8 +345,8 @@ To view the model you just uploaded please open the following URL in a browser:
 
 To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter and `efi.*` parameters to the `rq` parameter, for example:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.fromMobile=1%7D&fl=id,cat,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.fromMobile=1}&fl=id,cat,score,[features]] link:[]
-* link:[]http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.fromMobile=0%20efi.answer=13%7D&fl=id,cat,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.fromMobile=0 efi.answer=13}&fl=id,cat,score,[features]]
+* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%7D&fl=id,cat,manu,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]] link:[]
+* link:[]http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myEfiModel%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=0%20efi.answer=13%7D&fl=id,cat,manu,score,%5Bfeatures%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]]
 
 Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` parameter.
 
@@ -347,7 +355,7 @@ Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` pa
 
 To extract features for `myEfiFeatureStore`'s features whilst still reranking with `myModel`:
 
-* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%7D&fl=id,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel}&fl=id,score,[features store=myEfiFeatureStore efi.text=test efi.fromMobile=1]] link:[]
+* http://localhost:8983/solr/techproducts/query?q=test&rq=%7B!ltr%20model=myModel%7D&fl=id,cat,manu,score,%5Bfeatures%20store=myEfiFeatureStore%20efi.text=test%20efi.preferredManufacturer=Apache%20efi.fromMobile=1%5D[http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]]
 
 Notice the absence of `efi.*` parameters in the `rq` parameter (because `myModel` does not use `efi` feature) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
 
@@ -356,12 +364,12 @@ Read more about model evolution in the <<LearningToRank-Lifecycle,Lifecycle>> se
 [[LearningToRank-Trainingexample]]
 === Training example
 
-Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com].
+Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository] which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the Solr binary release).
 
 [[LearningToRank-Installation]]
 == Installation
 
-The ltr contrib module requires `dist/solr-ltr-*.jar` and all JARs under `contrib/ltr/lib`.
+The ltr contrib module requires the `dist/solr-ltr-*.jar` JARs.
 
 [[LearningToRank-Configuration]]
 == Configuration
@@ -375,7 +383,6 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 
 [source,xml]
 ----
-<lib dir="${solr.install.dir:../../../..}/contrib/ltr/lib/" regex=".*\.jar" />
 <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
 ----
 
@@ -518,6 +525,11 @@ To delete the `currentFeatureStore` feature store:
 curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/currentFeatureStore'
 ----
 
+[[LearningToRank-Applyingchanges]]
+=== Applying changes
+
+The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
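+
+For example (a hypothetical sketch, assuming a collection named `techproducts`), a collection can be reloaded via the Collections API:
+
+[source,bash]
+----
+curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'
+----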
+
 [[LearningToRank-Examples]]
 === Examples
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/managed-resources.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
index ccf2f50..709f1cd 100644
--- a/solr/solr-ref-guide/src/managed-resources.adoc
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -182,7 +182,7 @@ Lastly, you can delete a mapping by sending a DELETE request to the managed endp
 [[ManagedResources-ApplyingChanges]]
 == Applying Changes
 
-Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded. For example:, after adding or deleting a stop word, you must reload the core/collection before changes become active.
+Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded. For example, after adding or deleting a stop word, you must reload the core/collection before changes become active; related APIs: <<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>> and <<collections-api.adoc#collections-api,Collections API>>.
 
 This approach is required when running in distributed mode so that we are assured changes are applied to all cores in a collection at the same time so that behavior is consistent and predictable. It goes without saying that you don't want one of your replicas working with a different set of stop words or synonyms than the others.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/managing-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managing-solr.adoc b/solr/solr-ref-guide/src/managing-solr.adoc
index d24a3e8..ad13113 100644
--- a/solr/solr-ref-guide/src/managing-solr.adoc
+++ b/solr/solr-ref-guide/src/managing-solr.adoc
@@ -1,7 +1,7 @@
 = Managing Solr
 :page-shortname: managing-solr
 :page-permalink: managing-solr.html
-:page-children: taking-solr-to-production, securing-solr, running-solr-on-hdfs, making-and-restoring-backups, configuring-logging, using-jmx-with-solr, mbean-request-handler, performance-statistics-reference, metrics-reporting
+:page-children: taking-solr-to-production, securing-solr, running-solr-on-hdfs, making-and-restoring-backups, configuring-logging, using-jmx-with-solr, mbean-request-handler, performance-statistics-reference, metrics-reporting, v2-api
 
 This section describes how to run Solr and how to look at Solr when it is running. It contains the following sections:
 
@@ -22,3 +22,5 @@ This section describes how to run Solr and how to look at Solr when it is runnin
 <<performance-statistics-reference.adoc#performance-statistics-reference,Performance Statistics Reference>>: Additional information on statistics returned from JMX.
 
 <<metrics-reporting.adoc#metrics-reporting,Metrics Reporting>>: Details of Solr's metrics registries and Metrics API.
+
+<<v2-api.adoc#v2-api,v2 API>>: Describes a redesigned API framework covering most existing Solr APIs.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/metrics-reporting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/metrics-reporting.adoc b/solr/solr-ref-guide/src/metrics-reporting.adoc
index 945bc15..38ca2f3 100644
--- a/solr/solr-ref-guide/src/metrics-reporting.adoc
+++ b/solr/solr-ref-guide/src/metrics-reporting.adoc
@@ -50,7 +50,6 @@ These are the major groups of metrics that are collected:
 
 * all common RequestHandler-s report: request timers / counters, timeouts, errors.
 * <<MetricsReporting-IndexMergeMetrics,index-level events>>: meters for minor / major merges, number of merged docs, number of deleted docs, gauges for currently running merges and their size.
-* <<MetricsReporting-DirectoryI_OMetrics,directory-level IO>>: total read / write meters, histograms for read / write operations and their size, optionally split per index file (e.g., field data, term dictionary, docValues, etc)
 * shard replication and transaction log replay on replicas (TBD, SOLR-9856)
 * TBD: caches, update handler details, and other relevant SolrInfoMBean-s
 
@@ -251,27 +250,6 @@ If the boolean flag `mergeDetails` is true then the following additional metrics
 * `INDEX.merge.major.docs` - meter for the number of documents merged in major merge operations
 * `INDEX.merge.major.deletedDocs` - meter for the number of deleted documents expunged in major merge operations
 
-// OLD_CONFLUENCE_ID: MetricsReporting-DirectoryI/OMetrics
-
-[[MetricsReporting-DirectoryI_OMetrics]]
-=== Directory I/O Metrics
-
-Index storage (represented in Lucene/Solr by `Directory` abstraction) is monitored for I/O throughput, which is optionally tracked per index file (see the previous section, `directoryDetails` argument). As with the index-level metrics, these metrics are also registered in per-core registries.
-
-The following metrics are collected:
-
-* `DIRECTORY.total.reads` - meter for total read bytes from the directory.
-* `DIRECTORY.total.writes` - meter for total written bytes to the directory.
-
-If `directoryDetails` is set to true the following additional metrics are collected (note: this can potentially produce a lot of metrics so it should not be used in production):
-
-* `DIRECTORY.total.readSizes` - histogram of read operation sizes (in byte units)
-* `DIRECTORY.total.writeSizes` - histogram of write operation sizes (in byte units)
-* `DIRECTORY.<file type>.reads` - meter for read bytes per "file type". File type is either `segments` for `segments_N` and `pending_segments_N`, or a file extension (e.g., `fdt`, `doc`, `tim`, etc). The number and type of these files vary depending on the type of Lucene codec used.
-* `DIRECTORY.<file type>.writes` - meter for written bytes per "file type".
-* `DIRECTORY.<file type>.readSizes` - histogram of write operation sizes per "file type" (in byte units).
-* `DIRECTORY.<file type>.writeSizes` - histogram of write operation sizes per "file type" (in byte units).
-
 [[MetricsReporting-MetricsAPI]]
 == Metrics API
 
@@ -292,31 +270,6 @@ Request only "counter" type metrics in the "core" group, returned in JSON:
 
 `http://localhost:8983/solr/admin/metrics?wt=json&type=counter&group=core`
 
-Request only "core" group metrics that start with "DIRECTORY", returned in JSON:
-
-`http://localhost:8983/solr/admin/metrics?wt=json&prefix=DIRECTORY&group=core`
-
-Sample output from the above request:
+Request only "core" group metrics that start with "INDEX", returned in JSON:
 
-[source,java]
-----
-{
-    "responseHeader": {
-        "status": 0,
-        "QTime": 0
-    },
-    "metrics": ["solr.core.test", 
-        ["DIRECTORY.total.reads", 
-            ["count", 142, 
-             "meanRate", 0.23106951540768358, 
-             "1minRate", 0.0011862666311920798, 
-             "5minRate", 3.7799942123292443, 
-             "15minRate", 14.500264968437852],
-        "DIRECTORY.total.writes", 
-            ["count", 71, 
-             "meanRate", 0.11553475490916319, 
-             "1minRate", 5.931333155960399E-4, 
-             "5minRate", 1.8899971061646221, 
-             "15minRate", 7.250132484218926]]]
-}
-----
+`http://localhost:8983/solr/admin/metrics?wt=json&prefix=INDEX&group=core`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/other-parsers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index 0001861..d8e4a96 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -663,7 +663,7 @@ This parser takes the following parameters:
 |q.operators a|
 Comma-separated list of names of parsing operators to enable. By default, all operations are enabled, and this parameter can be used to effectively disable specific operators as needed, by excluding them from the list. Passing an empty string with this parameter disables all operators.
 
-[cols=",,,",options="header",]
+[width="100%",cols="25%,25%,25%,25%",options="header",]
 |===
 |Name |Operator |Description |Example query
 |`AND` |`+` |Specifies AND |`token1+token2`
@@ -674,7 +674,17 @@ Comma-separated list of names of parsing operators to enable. By default, all op
 |`PRECEDENCE` |`( )` |Specifies precedence; tokens inside the parenthesis will be analyzed first. Otherwise, normal order is left to right. |`token1 + (token2 | token3)`
 |`ESCAPE` |`\` |Put it in front of operators to match them literally |`C\+\+`
 |`WHITESPACE` |space or `[\r\t\n]` |Delimits tokens on whitespace. If not enabled, whitespace splitting will not be performed prior to analysis – usually most desirable. Not splitting whitespace is a unique feature of this parser that enables multi-word synonyms to work. However, it probably actually won't unless synonyms are configured to normalize instead of expand to all that match a given synonym. Such a configuration requires normalizing synonyms at both index time and query time. Solr's analysis screen can help here. |`term1 term2`
-|`FUZZY` |`~N` |At the end of terms, specifies a fuzzy query |`term~1`
+|`FUZZY` a|
+`~`
+
+`~N`
+
+ a|
+At the end of terms, specifies a fuzzy query.
+
+"N" is optional and may be either "1" or "2" (the default)
+
+ |`term~1`
 |`NEAR` |`~N` |At the end of phrases, specifies a NEAR query |`"term1 term2"~5`
 |===
 
@@ -847,7 +857,7 @@ The XmlQParser implementation uses the {solr-javadocs}/solr-core/org/apache/solr
 [[OtherParsers-CustomizingXMLQueryParser]]
 === Customizing XML Query Parser
 
-You can configure your own custom query builders for additional XML elements. The custom builders need to extend the {solr-javadocs}/solr-core/org/apache/solr/search/SolrQueryBuilder.html[SolrQueryBuilder] class. Example solrconfig.xml snippet:
+You can configure your own custom query builders for additional XML elements. The custom builders need to extend the {solr-javadocs}/solr-core/org/apache/solr/search/SolrQueryBuilder.html[SolrQueryBuilder] or the {solr-javadocs}/solr-core/org/apache/solr/search/SolrSpanQueryBuilder.html[SolrSpanQueryBuilder] class. Example solrconfig.xml snippet:
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/pagination-of-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/pagination-of-results.adoc b/solr/solr-ref-guide/src/pagination-of-results.adoc
index 16e57cf..cf13629 100644
--- a/solr/solr-ref-guide/src/pagination-of-results.adoc
+++ b/solr/solr-ref-guide/src/pagination-of-results.adoc
@@ -89,6 +89,7 @@ There are a few important constraints to be aware of when using `cursorMark` par
 * Your requests must either not include a `start` parameter, or it must be specified with a value of "```0```".
 2.  `sort` clauses must include the uniqueKey field (either "```asc```" or "```desc```")
 * If `id` is your uniqueKey field, then sort params like `id asc` and `name asc, id desc` would both work fine, but `name asc` by itself would not
+3.  Sorts including <<working-with-dates.adoc#working-with-dates,Date Math>> based functions that involve calculations relative to `NOW` will cause confusing results, since every document will get a new sort value on every subsequent request. This can easily result in cursors that never end, and constantly return the same documents over and over – even if the documents are never updated. In this situation, choose & re-use a fixed value for the <<working-with-dates.adoc#WorkingwithDates-NOW,`NOW` request param>> in all of your cursor requests.
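+
+For example (a hypothetical sketch; `timestamp` is an assumed field name), a fixed `NOW` value in milliseconds since the epoch can be passed with every request of a cursor session:
+
+[source,text]
+----
+q=*:*&sort=timestamp asc,id asc&NOW=1484000000000&cursorMark=*
+----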
 
 Cursor mark values are computed based on the sort values of each document in the result, which means multiple documents with identical sort values will produce identical Cursor mark values if one of them is the last document on a page of results. In that situation, the subsequent request using that `cursorMark` would not know which of the documents with the identical mark values should be skipped. Requiring that the uniqueKey field be used as a clause in the sort criteria guarantees that a deterministic ordering will be returned, and that every `cursorMark` value will identify a unique point in the sequence of documents.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/73148be0/solr/solr-ref-guide/src/parallel-sql-interface.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/parallel-sql-interface.adoc b/solr/solr-ref-guide/src/parallel-sql-interface.adoc
index 44fdfb3..5f6552e 100644
--- a/solr/solr-ref-guide/src/parallel-sql-interface.adoc
+++ b/solr/solr-ref-guide/src/parallel-sql-interface.adoc
@@ -3,12 +3,12 @@
 :page-permalink: parallel-sql-interface.html
 :page-children: solr-jdbc-dbvisualizer, solr-jdbc-squirrel-sql, solr-jdbc-apache-zeppelin, solr-jdbc-python-jython, solr-jdbc-r
 
-Solr's Parallel SQL Interface brings the power of SQL to SolrCloud. The SQL interface seamlessly combines SQL with Solr's full-text search capabilities. Two implementations for aggregations allow using either MapReduce-like shuffling or the JSON Facet API, depending on performance needs. These features allow Solr's SQL interface to be used for a wide variety of use cases.
+Solr's Parallel SQL Interface brings the power of SQL to SolrCloud. The SQL interface seamlessly combines SQL with Solr's full-text search capabilities. Both MapReduce style and JSON Facet API aggregations are supported, which means the SQL interface can be used to support both *high query volume* and *high cardinality* use cases.
 
 [[ParallelSQLInterface-SQLArchitecture]]
 == SQL Architecture
 
-The SQL interface allows sending a SQL query to Solr and getting documents streamed back in response. Under the covers, Solr's SQL interface is powered by the https://prestodb.io/[Presto Project's] https://github.com/prestodb/presto/tree/master/presto-parser[SQL Parser], which translates SQL queries on the fly to <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>.
+The SQL interface allows sending a SQL query to Solr and getting documents streamed back in response. Under the covers, Solr's SQL interface uses the Apache Calcite SQL engine to translate SQL queries to physical query plans implemented as <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>.
 
 [[ParallelSQLInterface-SolrCollectionsandDBTables]]
 === Solr Collections and DB Tables
@@ -35,8 +35,8 @@ More information about how to structure SQL queries for Solr is included in the
 
 The SQL feature of Solr can work with aggregations (grouping of results) in two ways:
 
+* `facet`: This is the *default* aggregation mode, which uses the JSON Facet API or StatsComponent for aggregations. In this scenario the aggregations logic is pushed down into the search engine and only the aggregates are sent across the network. This is Solr's normal mode of operation. This is fast when the cardinality of GROUP BY fields is low to moderate. But it breaks down when you have high cardinality fields in the GROUP BY field.
 * `map_reduce`: This implementation shuffles tuples to worker nodes and performs the aggregation on the worker nodes. It involves sorting and partitioning the entire result set and sending it to worker nodes. In this approach the tuples arrive at the worker nodes sorted by the GROUP BY fields. The worker nodes can then rollup the aggregates one group at a time. This allows for unlimited cardinality aggregation, but you pay the price of sending the entire result set across the network to worker nodes.
-* `facet`: This uses the JSON Facet API or StatsComponent for aggregations. In this scenario the aggregations logic is pushed down into the search engine and only the aggregates are sent across the network. This is Solr's normal mode of operation. This is fast when the cardinality of GROUP BY fields is low to moderate. But it breaks down when you have high cardinality fields in the GROUP BY field.
 
 These modes are defined with the `aggregationMode` property when sending the request to Solr.
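 
 For example (a hypothetical sketch re-using the `tableA` and `fieldA` names from the examples below), the mode can be set as a request parameter on the `/sql` handler:
 
 [source,text]
 ----
 /sql?aggregationMode=facet&stmt=select fieldA, count(*) from tableA group by fieldA
 ----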
 
@@ -61,7 +61,7 @@ By default, the `/sql` request handler is configured as an implicit handler, mea
 [IMPORTANT]
 ====
 
-As described below in the section <<ParallelSQLInterface-BestPractices,Best Practices>>, you may want to set up a separate collection for parallelized SQL queries. If you have high cardinality fields and a large amount of data, please be sure to review that section and
+As described below in the section <<ParallelSQLInterface-BestPractices,Best Practices>>, you may want to set up a separate collection for parallelized SQL queries. If you have high cardinality fields and a large amount of data, please be sure to review that section and consider using a separate collection.
 
 ====
 
@@ -70,7 +70,7 @@ As described below in the section <<ParallelSQLInterface-BestPractices,Best Prac
 [[ParallelSQLInterface-_streamand_exportRequestHandlers]]
 === /stream and /export Request Handlers
 
-The Streaming API is an extensible parallel computing framework for SolrCloud. <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> provide a query language and a serialization format for the Streaming API. The Streaming API provides support for fast MapReduce allowing it to perform parallel relational algebra on extremely large data sets. Under the covers the SQL interface parses SQL queries using the Presto SQL Parser. It then translates the queries to the parallel query plan. The parallel query plan is expressed using the Streaming API and Streaming Expressions.
+The Streaming API is an extensible parallel computing framework for SolrCloud. <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> provide a query language and a serialization format for the Streaming API. The Streaming API provides support for fast MapReduce allowing it to perform parallel relational algebra on extremely large data sets. Under the covers the SQL interface parses SQL queries using the Apache Calcite SQL Parser. It then translates the queries to the parallel query plan. The parallel query plan is expressed using the Streaming API and Streaming Expressions.
 
 Like the `/sql` request handler, the `/stream` and `/export` request handlers are configured as implicit handlers, and no further configuration is required.
 
@@ -80,7 +80,7 @@ Like the `/sql` request handler, the `/stream` and `/export` request handlers ar
 In some cases, fields used in SQL queries must be configured as DocValues fields. If queries are unlimited, all fields must be DocValues fields. If queries are limited (with the `limit` clause) then fields do not have to have DocValues enabled.
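 
 For example (a hypothetical schema sketch; the field name is assumed), DocValues can be enabled on a field in the schema:
 
 [source,xml]
 ----
 <field name="fieldA" type="string" indexed="true" stored="true" docValues="true"/>
 ----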
 
 [[ParallelSQLInterface-SendingQueries]]
-== Sending Queries
+=== Sending Queries
 
 The SQL Interface provides a basic JDBC driver and an HTTP interface to perform queries.
 
@@ -157,6 +157,16 @@ The SQL parser being used by Solr to translate the SQL statements is case insens
 
 ====
 
+[[ParallelSQLInterface-EscapingReservedWords]]
+=== Escaping Reserved Words
+
+The SQL parser will return an error if a reserved word is used in the SQL query. Reserved words can be escaped and included in the query using the back tick. For example:
+
+[source,sql]
+----
+select `from` from emails
+----
+
 [[ParallelSQLInterface-SELECTStatements]]
 === SELECT Statements
 
@@ -308,7 +318,7 @@ Because these functions never require data to be shuffled, the aggregations are
 
 [source,java]
 ----
-SELECT count(fieldA) as count, sum(fieldB) as sum FROM tableA WHERE fieldC = 'Hello'
+SELECT count(*) as count, sum(fieldB) as sum FROM tableA WHERE fieldC = 'Hello'
 ----
 
 [[ParallelSQLInterface-GROUPBYAggregations]]
@@ -344,10 +354,7 @@ The Column Identifiers can contain both fields in the Solr index and aggregate f
 
 The non-function fields in the field list determine the fields to calculate the aggregations over.
 
-Column aliases are supported for both fields and functions and can be referenced in the GROUP BY, HAVING and ORDER BY clauses.
-
-[[ParallelSQLInterface-GROUPBYClause]]
-==== *GROUP BY Clause*
+*GROUP BY Clause*
 
 The `GROUP BY` clause can contain up to 4 fields in the Solr index. These fields should correspond with the non-function fields in the field list.
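 
 For example (a hypothetical sketch using the assumed `tableA` table and field names from earlier examples), grouping on two indexed fields:
 
 [source,sql]
 ----
 SELECT fieldA, fieldB, count(*) as count FROM tableA GROUP BY fieldA, fieldB
 ----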