Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/08 15:24:10 UTC

[1/2] lucene-solr:jira/solr-10290: SOLR-10296: conversion, remaining letter S minus solr-glossary

Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-10290 3f9dc3859 -> d05e3a406


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/spell-checking.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
index 60c71b0..e392194 100644
--- a/solr/solr-ref-guide/src/spell-checking.adoc
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -2,7 +2,9 @@
 :page-shortname: spell-checking
 :page-permalink: spell-checking.html
 
-The SpellCheck component is designed to provide inline query suggestions based on other, similar, terms. The basis for these suggestions can be terms in a field in Solr, externally created text files, or fields in other Lucene indexes.
+The SpellCheck component is designed to provide inline query suggestions based on other, similar, terms.
+
+The basis for these suggestions can be terms in a field in Solr, externally created text files, or fields in other Lucene indexes.
 
 [[SpellChecking-ConfiguringtheSpellCheckComponent]]
 == Configuring the SpellCheckComponent
@@ -17,7 +19,7 @@ The first step is to specify the source of terms in `solrconfig.xml`. There are
 
 The `IndexBasedSpellChecker` uses a Solr index as the basis for a parallel index used for spell checking. It requires defining a field as the basis for the index terms; a common practice is to copy terms from some fields (such as `title`, `body`, etc.) to another field created for spell checking. Here is a simple example of configuring `solrconfig.xml` with the `IndexBasedSpellChecker`:
 
-[source,java]
+[source,xml]
 ----
 <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
@@ -25,9 +27,9 @@ The `IndexBasedSpellChecker` uses a Solr index as the basis for a parallel index
     <str name="spellcheckIndexDir">./spellchecker</str>
     <str name="field">content</str>
     <str name="buildOnCommit">true</str>
-    <!-- optional elements with defaults 
+    <!-- optional elements with defaults
     <str name="distanceMeasure">org.apache.lucene.search.spell.LevensteinDistance</str>
-    <str name="accuracy">0.5</str> 
+    <str name="accuracy">0.5</str>
     -->
  </lst>
 </searchComponent>
@@ -44,7 +46,7 @@ Finally, _buildOnCommit_ defines whether to build the spell check index at every
 
 The `DirectSolrSpellChecker` uses terms from the Solr index without building a parallel index like the `IndexBasedSpellChecker`. This spell checker has the benefit of not having to be built regularly, meaning that the terms are always up-to-date with terms in the index. Here is how this might be configured in `solrconfig.xml`:
 
-[source,java]
+[source,xml]
 ----
 <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
@@ -78,7 +80,7 @@ At first, spellchecker analyses incoming query words by looking up them in the i
 
 The `FileBasedSpellChecker` uses an external file as a spelling dictionary. This can be useful if using Solr as a spelling server, or if spelling suggestions don't need to be based on actual terms in the index. In `solrconfig.xml`, you would define the searchComponent as follows:
 
-[source,java]
+[source,xml]
 ----
 <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
@@ -87,9 +89,9 @@ The `FileBasedSpellChecker` uses an external file as a spelling dictionary. This
     <str name="sourceLocation">spellings.txt</str>
     <str name="characterEncoding">UTF-8</str>
     <str name="spellcheckIndexDir">./spellcheckerFile</str>
-    <!-- optional elements with defaults 
+    <!-- optional elements with defaults
     <str name="distanceMeasure">org.apache.lucene.search.spell.LevensteinDistance</str>
-    <str name="accuracy">0.5</str> 
+    <str name="accuracy">0.5</str>
     -->
  </lst>
 </searchComponent>
@@ -97,11 +99,9 @@ The `FileBasedSpellChecker` uses an external file as a spelling dictionary. This
 
 The differences here are the use of the `sourceLocation` to define the location of the file of terms and the use of `characterEncoding` to define the encoding of the terms file.
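+
+The terms file itself is plain text with one term per line. As a sketch (the sample `techproducts` configset includes a small `spellings.txt` along these lines), the file might contain:
+
+[source,text]
+----
+pizza
+history
+----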
 
-[NOTE]
+[TIP]
 ====
-
 In the previous example, _name_ is used to name this specific definition of the spellchecker. Multiple definitions can co-exist in a single `solrconfig.xml`, and the _name_ helps to differentiate them. If only defining one spellchecker, no name is required.
-
 ====
 
 [[SpellChecking-WordBreakSolrSpellChecker]]
@@ -111,7 +111,7 @@ In the previous example, _name_ is used to name this specific definition of the
 
 Here is how it might be configured in `solrconfig.xml`:
 
-[source,java]
+[source,xml]
 ----
 <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
@@ -134,7 +134,7 @@ The spellchecker can be configured with a traditional checker (ie: `DirectSolrSp
 
 Queries will be sent to a <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,RequestHandler>>. If every request should generate a suggestion, then you would add the following to the `requestHandler` that you are using:
 
-[source,java]
+[source,xml]
 ----
 <str name="spellcheck">true</str>
 ----
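+
+The component must also be registered with the request handler. A minimal sketch (mirroring the `/spell` handler used by the techproducts example later in this section) is:
+
+[source,xml]
+----
+<requestHandler name="/spell" class="solr.SearchHandler">
+  <lst name="defaults">
+    <str name="spellcheck">true</str>
+  </lst>
+  <arr name="last-components">
+    <str>spellcheck</str>
+  </arr>
+</requestHandler>
+----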
@@ -143,7 +143,7 @@ One of the possible parameters is the `spellcheck.dictionary` to use, and multip
 
 Here is an example with multiple dictionaries:
 
-[source,java]
+[source,xml]
 ----
 <requestHandler name="spellCheckWithWordbreak" class="org.apache.solr.handler.component.SearchHandler">
   <lst name="defaults">
@@ -162,10 +162,10 @@ Here is an example with multiple dictionaries:
 
 The SpellCheck component accepts the parameters described in the table below.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
-|<<SpellChecking-ThespellcheckParameter,spellcheck>> |Turns on or off SpellCheck suggestions for the request. If **true**, then spelling suggestions will be generated.
+|<<SpellChecking-ThespellcheckParameter,spellcheck>> |Turns on or off SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
 |<<SpellChecking-Thespellcheck.qorqParameter,spellcheck.q or q>> |Selects the query to be spellchecked.
 |<<SpellChecking-Thespellcheck.buildParameter,spellcheck.build>> |Instructs Solr to build a dictionary for use in spellchecking.
 |<<SpellChecking-Thespellcheck.collateParameter,spellcheck.collate>> |Causes Solr to build a new query based on the best suggestion for each term in the submitted query.
@@ -183,30 +183,28 @@ The SpellCheck component accepts the parameters described in the table below.
 |<<SpellChecking-Thespellcheck.alternativeTermCountParameter,spellcheck.alternativeTermCount>> |The count of suggestions to return for each query term existing in the index and/or dictionary.
 |<<SpellChecking-Thespellcheck.reloadParameter,spellcheck.reload>> |Reloads the spellchecker.
 |<<SpellChecking-Thespellcheck.accuracyParameter,spellcheck.accuracy>> |Specifies an accuracy value to help decide whether a result is worthwhile.
-|link:#SpellChecking-Thespellcheck.%3CDICT_NAME%3E.keyParameter[spellcheck.<DICT_NAME>.key] |Specifies a key/value pair for the implementation handling a given dictionary.
+|<<SpellChecking-Thespellcheck.%3CDICT_NAME%3E.keyParameter,spellcheck.<DICT_NAME>.key>> |Specifies a key/value pair for the implementation handling a given dictionary.
 |===
 
 [[SpellChecking-ThespellcheckParameter]]
 === The `spellcheck` Parameter
 
-This parameter turns on SpellCheck suggestions for the request. If **true**, then spelling suggestions will be generated.
+This parameter turns on SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
 
 [[SpellChecking-Thespellcheck.qorqParameter]]
 === The `spellcheck.q` or `q` Parameter
 
 This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#Tokenizers-WhiteSpaceTokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
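+
+A minimal request using `spellcheck.q` (borrowing the `techproducts` example shown later in this section) might look like:
+
+[source,text]
+http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.q=delll+ultra+sharp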
 
-[IMPORTANT]
+[NOTE]
 ====
-
 The `SpellingQueryConverter` class does not deal properly with non-ASCII characters. In this case, you must either use `spellcheck.q` or implement your own QueryConverter.
-
 ====
 
 [[SpellChecking-Thespellcheck.buildParameter]]
 === The `spellcheck.build` Parameter
 
-If set to **true**, this parameter creates the dictionary that the SolrSpellChecker will use for spell-checking. In a typical search application, you will need to build the dictionary before using the SolrSpellChecker. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
+If set to *true*, this parameter creates the dictionary that the SolrSpellChecker will use for spell-checking. In a typical search application, you will need to build the dictionary before using the SolrSpellChecker. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
 
 The dictionary will take some time to build, so this parameter should not be sent with every request.
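+
+For example, a one-time build request (again borrowing the `techproducts` example shown later in this section) might be:
+
+[source,text]
+http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.build=true&spellcheck.q=delll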
 
@@ -223,7 +221,7 @@ This parameter specifies the maximum number of suggestions that the spellchecker
 [[SpellChecking-Thespellcheck.onlyMorePopularParameter]]
 === The `spellcheck.onlyMorePopular` Parameter
 
-If **true**, Solr will to return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
+If *true*, Solr will return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
 
 [[SpellChecking-Thespellcheck.maxResultsForSuggestParameter]]
 === The `spellcheck.maxResultsForSuggest` Parameter
@@ -243,21 +241,16 @@ This parameter causes to Solr to include additional information about the sugges
 [[SpellChecking-Thespellcheck.collateParameter]]
 === The `spellcheck.collate` Parameter
 
-If **true**, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions. For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
+If *true*, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions. For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
 
 The `spellcheck.collate` parameter only returns collations that are guaranteed to result in hits if re-queried, even when applying original `fq` parameters. This is especially helpful when there is more than one correction per query.
 
-[IMPORTANT]
-====
-
-This only returns a query to be used. It does not actually run the suggested query.
-
-====
+NOTE: This only returns a query to be used. It does not actually run the suggested query.
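+
+For example, a request asking for a collation (again borrowing the `techproducts` example shown later in this section) might be:
+
+[source,text]
+http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.q=delll+ultra+sharp&spellcheck.collate=true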
 
 [[SpellChecking-Thespellcheck.maxCollationsParameter]]
 === The `spellcheck.maxCollations` Parameter
 
-The maximum number of collations to return. The default is **1**. This parameter is ignored if `spellcheck.collate` is false.
+The maximum number of collations to return. The default is *1*. This parameter is ignored if `spellcheck.collate` is false.
 
 [[SpellChecking-Thespellcheck.maxCollationTriesParameter]]
 === The `spellcheck.maxCollationTries` Parameter
@@ -272,21 +265,21 @@ This parameter specifies the maximum number of word correction combinations to r
 [[SpellChecking-Thespellcheck.collateExtendedResultsParameter]]
 === The `spellcheck.collateExtendedResults` Parameter
 
-If **true**, this parameter returns an expanded response format detailing the collations Solr found. The default value is *false* and this is ignored if `spellcheck.collate` is false.
+If *true*, this parameter returns an expanded response format detailing the collations Solr found. The default value is *false* and this is ignored if `spellcheck.collate` is false.
 
 [[SpellChecking-Thespellcheck.collateMaxCollectDocsParameter]]
 === The `spellcheck.collateMaxCollectDocs` Parameter
 
 This parameter specifies the maximum number of documents that should be collected when testing potential collations against the index. A value of *0* indicates that all documents should be collected, resulting in exact hit-counts. Otherwise an estimation is provided as a performance optimization in cases where exact hit-counts are unnecessary; the higher the value specified, the more precise the estimation.
 
-The default value for this parameter is **0**, but when `spellcheck.collateExtendedResults` is **false**, the optimization is always used as if a *1* had been specified.
+The default value for this parameter is *0*, but when `spellcheck.collateExtendedResults` is *false*, the optimization is always used as if a *1* had been specified.
 
 // OLD_CONFLUENCE_ID: SpellChecking-Thespellcheck.collateParam.*ParameterPrefix
 
 [[SpellChecking-Thespellcheck.collateParam._ParameterPrefix]]
 === The `spellcheck.collateParam.*` Parameter Prefix
 
-This parameter prefix can be used to specify any additional parameters that you wish to the Spellchecker to use when internally validating collation queries. For example, even if your regular search results allow for loose matching of one or more query terms via parameters like `"q.op=OR`&`mm=20%`" you can specify override params such as "`spellcheck.collateParam.q.op=AND&spellcheck.collateParam.mm=100%`" to require that only collations consisting of words that are all found in at least one document may be returned.
+This parameter prefix can be used to specify any additional parameters that you wish the Spellchecker to use when internally validating collation queries. For example, even if your regular search results allow for loose matching of one or more query terms via parameters like `q.op=OR` and `mm=20%`, you can specify override params such as `spellcheck.collateParam.q.op=AND&spellcheck.collateParam.mm=100%` to require that only collations consisting of words that are all found in at least one document may be returned.
 
 [[SpellChecking-Thespellcheck.dictionaryParameter]]
 === The `spellcheck.dictionary` Parameter
@@ -310,9 +303,9 @@ For example, given a dictionary called `foo`, `spellcheck.foo.myKey=myValue` wou
 [[SpellChecking-Example]]
 === Example
 
-Using Solr's "`bin/solr -e techproducts`" example, this query shows the results of a simple request that defines a query using the `spellcheck.q` parameter, and forces the collations to require all input terms must match:
+Using Solr's `bin/solr -e techproducts` example, this query shows the results of a simple request that defines a query using the `spellcheck.q` parameter, and forces the collations to require that all input terms match:
 
-` http://localhost:8983/solr/techproducts/spell?df=text&spellcheck.q=delll+ultra+sharp&spellcheck=true&spellcheck.collateParam.q.op=AND `
+`\http://localhost:8983/solr/techproducts/spell?df=text&spellcheck.q=delll+ultra+sharp&spellcheck=true&spellcheck.collateParam.q.op=AND`
 
 Results:
 
@@ -364,13 +357,16 @@ Results:
 
 The `SpellCheckComponent` also supports spellchecking on distributed indexes. If you are using the SpellCheckComponent on a request handler other than "/select", you must provide the following two parameters:
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |shards |Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>
 |shards.qt |Specifies the request handler Solr uses for requests to shards. This parameter is not required for the `/select` request handler.
 |===
 
-For example: `http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.build=true&spellcheck.q=toyata&shards.qt=/spell&shards=solr-shard1:8983/solr/techproducts,solr-shard2:8983/solr/techproducts`
+For example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.build=true&spellcheck.q=toyata&shards.qt=/spell&shards=solr-shard1:8983/solr/techproducts,solr-shard2:8983/solr/techproducts
 
 In the case of a distributed request to the SpellCheckComponent, the shards are requested for at least five suggestions even if the `spellcheck.count` parameter value is less than five. Once the suggestions are collected, they are ranked by the configured distance measure (Levenshtein distance by default) and then by aggregate frequency.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/stream-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-screen.adoc b/solr/solr-ref-guide/src/stream-screen.adoc
index 8165d0a..a351b0a 100644
--- a/solr/solr-ref-guide/src/stream-screen.adoc
+++ b/solr/solr-ref-guide/src/stream-screen.adoc
@@ -8,5 +8,5 @@ The screen will insert everything up to the streaming expression itself, so you
 
 Under the input box, the Execute button will run the expression. An option "with explanation" will show the parts of the streaming expression that were executed. Under this, the streamed results are shown. A URL for viewing the output in a browser is also available.
 
+.Stream Screen with query and results
 image::images/stream-screen/StreamScreen.png[image,height=400]
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/streaming-expressions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/streaming-expressions.adoc b/solr/solr-ref-guide/src/streaming-expressions.adoc
index fed02af..96933a8 100644
--- a/solr/solr-ref-guide/src/streaming-expressions.adoc
+++ b/solr/solr-ref-guide/src/streaming-expressions.adoc
@@ -3,7 +3,9 @@
 :page-permalink: streaming-expressions.html
 :page-children: graph-traversal
 
-Streaming Expressions provide a simple yet powerful stream processing language for Solr Cloud. They are a suite of functions that can be combined to perform many different parallel computing tasks. These functions are the basis for the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>.
+Streaming Expressions provide a simple yet powerful stream processing language for Solr Cloud.
+
+Streaming expressions are a suite of functions that can be combined to perform many different parallel computing tasks. These functions are the basis for the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>.
 
 There is a growing library of functions that can be combined to implement:
 
@@ -25,9 +27,7 @@ Streams from outside systems can be joined with streams originating from Solr an
 
 [IMPORTANT]
 ====
-
 Both streaming expressions and the streaming API are considered experimental, and the APIs are subject to change.
-
 ====
 
 [[StreamingExpressions-StreamLanguageBasics]]
@@ -48,10 +48,10 @@ The `/stream` request handler takes one parameter, `expr`, which is used to spec
 
 [source,bash]
 ----
-curl --data-urlencode 'expr=search(enron_emails, 
-                                   q="from:1800flowers*", 
-                                   fl="from, to", 
-                                   sort="from asc", 
+curl --data-urlencode 'expr=search(enron_emails,
+                                   q="from:1800flowers*",
+                                   fl="from, to",
+                                   sort="from asc",
                                    qt="/export")' http://localhost:8983/solr/enron_emails/stream
 ----
 
@@ -59,7 +59,7 @@ Details of the parameters for each function are included below.
 
 For the above example the `/stream` handler responded with the following JSON response:
 
-[source,java]
+[source,json]
 ----
 {"result-set":{"docs":[
    {"from":"1800flowers.133139412@s2u2.com","to":"lcampbel@enron.com"},
@@ -92,7 +92,7 @@ StreamFactory streamFactory = new StreamFactory().withCollectionZkHost("collecti
     .withStreamFunction("top", RankStream.class)
     .withStreamFunction("group", ReducerStream.class)
     .withStreamFunction("parallel", ParallelStream.class);
- 
+
 ParallelStream pstream = (ParallelStream)streamFactory.constructStream("parallel(collection1, group(search(collection1, q=\"*:*\", fl=\"id,a_s,a_i,a_f\", sort=\"a_s asc,a_f asc\", partitionKeys=\"a_s\"), by=\"a_s asc\"), workers=\"2\", zkHost=\""+zkHost+"\", sort=\"a_s asc\")");
 ----
 
@@ -108,6 +108,7 @@ Stream sources originate streams.
 
 [[StreamingExpressions-echo]]
 === echo
+//TODO
 
 [[StreamingExpressions-search]]
 === search
@@ -131,20 +132,19 @@ This expression allows you to specify a request hander using the `qt` parameter.
 [[StreamingExpressions-Syntax]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-expr=search(collection1, 
+expr=search(collection1,
        zkHost="localhost:9983",
-       qt="/export", 
-       q="*:*", 
-       fl="id,a_s,a_i,a_f", 
-       sort="a_f asc, a_i asc") 
+       qt="/export",
+       q="*:*",
+       fl="id,a_s,a_i,a_f",
+       sort="a_f asc, a_i asc")
 ----
 
-// OLD_CONFLUENCE_ID: StreamingExpressions-shuffle(6.6)
 
-[[StreamingExpressions-shuffle_6.6_]]
-=== shuffle (6.6)
+[[StreamingExpressions-shuffle]]
+=== shuffle
+//TODO
 
 [[StreamingExpressions-jdbc]]
 === jdbc
@@ -172,7 +172,7 @@ When the JDBC stream is opened it will validate that a driver can be found for t
 
 Due to the inherent differences in datatypes across JDBC sources, the following datatypes are supported. The table indicates what Java type will be used for a given JDBC type. Types marked as requiring conversion will go through a conversion for each value of that type. For performance reasons the cell data types are only considered when the stream is opened, as this is when the converters are created.
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |JDBC Type |Java Type |Requires Conversion
 |String |String |No
@@ -189,11 +189,11 @@ Due to the inherent differences in datatypes across JDBC sources the following d
 
 A basic `jdbc` expression:
 
-[source,java]
+[source,text]
 ----
 jdbc(
-    connection="jdbc:hsqldb:mem:.", 
-    sql="select NAME, ADDRESS, EMAIL, AGE from PEOPLE where AGE > 25 order by AGE, NAME DESC", 
+    connection="jdbc:hsqldb:mem:.",
+    sql="select NAME, ADDRESS, EMAIL, AGE from PEOPLE where AGE > 25 order by AGE, NAME DESC",
     sort="AGE asc, NAME desc",
     driver="org.hsqldb.jdbcDriver"
 )
@@ -201,12 +201,12 @@ jdbc(
 
 A `jdbc` expression that passes a property to the driver:
 
-[source,java]
+[source,text]
 ----
 // get_column_name is a property to pass to the hsqldb driver
 jdbc(
-    connection="jdbc:hsqldb:mem:.", 
-    sql="select NAME as FIRST_NAME, ADDRESS, EMAIL, AGE from PEOPLE where AGE > 25 order by AGE, NAME DESC", 
+    connection="jdbc:hsqldb:mem:.",
+    sql="select NAME as FIRST_NAME, ADDRESS, EMAIL, AGE from PEOPLE where AGE > 25 order by AGE, NAME DESC",
     sort="AGE asc, NAME desc",
     driver="org.hsqldb.jdbcDriver",
     get_column_name="false"
@@ -233,21 +233,21 @@ The `facet` function provides aggregations that are rolled up over buckets. Unde
 
 Example 1:
 
-[source,java]
+[source,text]
 ----
-facet(collection1, 
-      q="*:*", 
+facet(collection1,
+      q="*:*",
       buckets="a_s",
       bucketSorts="sum(a_i) desc",
       bucketSizeLimit=100,
-      sum(a_i), 
-      sum(a_f), 
-      min(a_i), 
-      min(a_f), 
-      max(a_i), 
+      sum(a_i),
+      sum(a_f),
+      min(a_i),
+      min(a_f),
+      max(a_i),
       max(a_f),
-      avg(a_i), 
-      avg(a_f), 
+      avg(a_i),
+      avg(a_f),
       count(*))
 ----
 
@@ -255,21 +255,21 @@ The example above shows a facet function with rollups over a single bucket, wher
 
 Example 2:
 
-[source,java]
+[source,text]
 ----
-facet(collection1, 
-      q="*:*", 
+facet(collection1,
+      q="*:*",
       buckets="year_i, month_i, day_i",
       bucketSorts="year_i desc, month_i desc, day_i desc",
       bucketSizeLimit=100,
-      sum(a_i), 
-      sum(a_f), 
-      min(a_i), 
-      min(a_f), 
-      max(a_i), 
+      sum(a_i),
+      sum(a_f),
+      min(a_i),
+      min(a_f),
+      max(a_i),
       max(a_f),
-      avg(a_i), 
-      avg(a_f), 
+      avg(a_i),
+      avg(a_f),
       count(*))
 ----
 
@@ -278,7 +278,7 @@ The example above shows a facet function with rollups over three buckets, where
 [[StreamingExpressions-features]]
 === features
 
-The `features` function extracts the key terms from a text field in a classification training set stored in a SolrCloud collection. It uses an algorithm known as **Information Gain**, to select the important terms from the training set. The `features` function was designed to work specifically with the <<StreamingExpressions-train,train>> function, which uses the extracted features to train a text classifier.
+The `features` function extracts the key terms from a text field in a classification training set stored in a SolrCloud collection. It uses an algorithm known as *Information Gain* to select the important terms from the training set. The `features` function was designed to work specifically with the <<StreamingExpressions-train,train>> function, which uses the extracted features to train a text classifier.
 
 The `features` function is designed to work with a training set that provides both positive and negative examples of a class. It emits a tuple for each feature term that is extracted along with the inverse document frequency (IDF) for the term in the training set.
 
@@ -298,13 +298,13 @@ The `features` function uses a query to select the training set from a collectio
 [[StreamingExpressions-Syntax.3]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-features(collection1, 
-         q="*:*", 
-         featureSet="features1", 
-         field="body", 
-         outcome="out_i", 
+features(collection1,
+         q="*:*",
+         featureSet="features1",
+         field="body",
+         outcome="out_i",
          numTerms=250)
 ----
 
@@ -346,11 +346,11 @@ The storage format of the models in Solr is below. The `train` function outputs
 [[StreamingExpressions-Syntax.4]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-model(modelCollection, 
+model(modelCollection,
       id="myModel"
-      cacheMillis="200000") 
+      cacheMillis="200000")
 ----
 
 [[StreamingExpressions-random]]
@@ -370,12 +370,12 @@ The `random` function searches a SolrCloud collection and emits a pseudo-random
 [[StreamingExpressions-Syntax.5]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-random(baskets, 
-       q="productID:productX", 
-       rows="100", 
-       fl="basketID") 
+random(baskets,
+       q="productID:productX",
+       rows="100",
+       fl="basketID")
 ----
 
 In the example above, the `random` function is searching the baskets collection for all rows where "productID:productX". It will return 100 pseudo-random results. The field list returned is the basketID.
@@ -398,10 +398,10 @@ The `significantTerms` function queries a SolrCloud collection, but instead of r
 [[StreamingExpressions-Syntax.6]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-significantTerms(collection1, 
-                 q="body:Solr", 
+significantTerms(collection1,
+                 q="body:Solr",
                  minDocFreq="10",
                  maxDocFreq=".20",
                  minTermLength="5")
@@ -429,21 +429,21 @@ The `shortestPath` function is an implementation of a shortest path graph traver
 [[StreamingExpressions-Syntax.7]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-shortestPath(collection, 
-             from="john@company.com", 
+shortestPath(collection,
+             from="john@company.com",
              to="jane@company.com",
              edge="from_address=to_address",
              threads="6",
-             partitionSize="300", 
-             fq="limiting query", 
+             partitionSize="300",
+             fq="limiting query",
              maxDepth="4")
 ----
 
 The expression above performs a breadth-first search to find the shortest paths in an unweighted, directed graph.
 
-The search starts from the nodeID "john@company.com" in the `from_address` field and searches for the nodeID "jane@company.com" in the `to_address` field. This search is performed iteratively until the `maxDepth` has been reached. Each level in the traversal is implemented as a parallel partitioned nested loop join across the entire collection. The `threads` parameter controls the number of threads performing the join at each level, while the `partitionSize` parameter controls the of number of nodes in each join partition. The `maxDepth` parameter controls the number of levels to traverse. `fq` is a limiting query applied to each level in the traversal.
+The search starts from the nodeID "\john@company.com" in the `from_address` field and searches for the nodeID "\jane@company.com" in the `to_address` field. This search is performed iteratively until the `maxDepth` has been reached. Each level in the traversal is implemented as a parallel partitioned nested loop join across the entire collection. The `threads` parameter controls the number of threads performing the join at each level, while the `partitionSize` parameter controls the number of nodes in each join partition. The `maxDepth` parameter controls the number of levels to traverse. `fq` is a limiting query applied to each level in the traversal.
 
 [[StreamingExpressions-stats]]
 === stats
@@ -460,24 +460,26 @@ The `stats` function gathers simple aggregations for a search result set. The st
 [[StreamingExpressions-Syntax.8]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-stats(collection1, 
-      q=*:*, 
-      sum(a_i), 
-      sum(a_f), 
-      min(a_i), 
-      min(a_f), 
-      max(a_i), 
-      max(a_f), 
-      avg(a_i), 
-      avg(a_f), 
+stats(collection1,
+      q=*:*,
+      sum(a_i),
+      sum(a_f),
+      min(a_i),
+      min(a_f),
+      max(a_i),
+      max(a_f),
+      avg(a_i),
+      avg(a_f),
       count(*))
 ----
 
 [[StreamingExpressions-timeseries]]
 === timeseries
 
+//TODO
+
 [[StreamingExpressions-train]]
 === train
 
@@ -501,7 +503,7 @@ With each iteration the `train` function emits a tuple with the model. The model
 [[StreamingExpressions-Syntax.9]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 train(collection1,
       features(collection1, q="*:*", featureSet="first", field="body", outcome="out_i", numTerms=250),
@@ -519,9 +521,7 @@ The `topic` function provides publish/subscribe messaging capabilities built on
 
 [WARNING]
 ====
-
 The topic function should be considered in beta until https://issues.apache.org/jira/browse/SOLR-8709[SOLR-8709] is committed and released.
-
 ====
 
 [[StreamingExpressions-Parameters.10]]
@@ -537,13 +537,13 @@ The topic function should be considered in beta until https://issues.apache.org/
 [[StreamingExpressions-Syntax.10]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 topic(checkpointCollection,
       collection,
-      id="uniqueId", 
+      id="uniqueId",
       q="topic query",
-      fl="id, name, country") 
+      fl="id, name, country")
 ----
 
 [[StreamingExpressions-StreamDecorators]]
@@ -553,11 +553,11 @@ Stream decorators wrap other stream functions or perform operations on the strea
 
 // OLD_CONFLUENCE_ID: StreamingExpressions-cartesianProduct(6.6)
 
-[[StreamingExpressions-cartesianProduct_6.6_]]
-=== cartesianProduct (6.6)
+[[StreamingExpressions-cartesianProduct]]
+=== cartesianProduct
+//TODO
 
 [[StreamingExpressions-cell]]
 === cell
+//TODO
 
 [[StreamingExpressions-classify]]
 === classify
@@ -566,9 +566,9 @@ The `classify` function classifies tuples using a logistic regression text class
 
 Each tuple that is classified is assigned two scores:
 
-**probability_d**: A float between 0 and 1 which describes the probability that the tuple belongs to the class. This is useful in the classification use case.
+*probability_d*: A float between 0 and 1 which describes the probability that the tuple belongs to the class. This is useful in the classification use case.
 
-**score_d**: The score of the document that has not be squashed between 0 and 1. The score may be positive or negative. The higher the score the better the document fits the class. This un-squashed score will be useful in query re-ranking and recommendation use cases. This score is particularly useful when multiple high ranking documents have a probability_d score of 1, which won't provide a meaningful ranking between documents.
+*score_d*: The score of the document that has not been squashed between 0 and 1. The score may be positive or negative. The higher the score, the better the document fits the class. This un-squashed score will be useful in query re-ranking and recommendation use cases. This score is particularly useful when multiple high ranking documents have a probability_d score of 1, which won't provide a meaningful ranking between documents.
 
 [[StreamingExpressions-Parameters.11]]
 ==== Parameters
@@ -578,16 +578,16 @@ Each tuple that is classified is assigned two scores:
 * `analyzerField`: (Optional) Specifies a different field to find the analyzer from in the schema.
 
 [[StreamingExpressions-Syntax.11]]
-==== *Syntax*
+==== Syntax
 
-[source,java]
+[source,text]
 ----
-classify(model(modelCollection, 
-             id="model1", 
-             cacheMillis=5000), 
-         search(contentCollection, 
-             q="id:(a b c)", 
-             fl="text_t, id", 
+classify(model(modelCollection,
+             id="model1",
+             cacheMillis=5000),
+         search(contentCollection,
+             q="id:(a b c)",
+             fl="text_t, id",
              sort="id asc"),
              field="text_t")
 ----
@@ -612,14 +612,14 @@ The `commit` function wraps a single stream (A) and given a collection and batch
 [[StreamingExpressions-Syntax.12]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 commit(
-    destinationCollection, 
-    batchSize=2, 
+    destinationCollection,
+    batchSize=2,
     update(
-        destinationCollection, 
-        batchSize=5, 
+        destinationCollection,
+        batchSize=5,
         search(collection1, q=*:*, fl="id,a_s,a_i,a_f,s_multi,i_multi", sort="a_f asc, a_i asc")
     )
 )
@@ -640,14 +640,14 @@ The `complement` function wraps two streams (A and B) and emits tuples from A wh
 [[StreamingExpressions-Syntax.13]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 complement(
   search(collection1, q=a_s:(setA || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
   search(collection1, q=a_s:(setB || setAB), fl="id,a_s,a_i", sort="a_i asc"),
   on="a_i"
 )
- 
+
 complement(
   search(collection1, q=a_s:(setA || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
   search(collection1, q=a_s:(setB || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
@@ -661,7 +661,7 @@ complement(
 The `daemon` function wraps another function and runs it at intervals using an internal thread. The `daemon` function can be used to provide both continuous push and pull streaming.
 
 [[StreamingExpressions-Continuouspushstreaming]]
-==== Continuous push streaming
+==== Continuous Push Streaming
 
 With continuous push streaming the `daemon` function wraps another function and is then sent to the `/stream` handler for execution. The `/stream` handler recognizes the `daemon` function and keeps it resident in memory, so it can run its internal function at intervals.
 
@@ -670,17 +670,17 @@ In order to facilitate the pushing of tuples, the `daemon` function must wrap an
 [[StreamingExpressions-Syntax.14]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
-daemon(id="uniqueId", 
+daemon(id="uniqueId",
        runInterval="1000",
        terminate="true",
-       update(destinationCollection, 
-              batchSize=100, 
-              topic(checkpointCollection, 
-                    topicCollection, 
-                    q="topic query", 
-                    fl="id, title, abstract, text", 
+       update(destinationCollection,
+              batchSize=100,
+              topic(checkpointCollection,
+                    topicCollection,
+                    q="topic query",
+                    fl="id, title, abstract, text",
                     id="topicId",
                     initialCheckpoint=0)
                )
@@ -695,28 +695,28 @@ Push streaming can also be used for continuous background aggregation scenarios
 
 The `/stream` handler supports a small set of commands for listing and controlling daemon functions:
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/collection/stream?action=list
 ----
 
 This command will provide a listing of the daemons currently running on the specific node along with their current state.
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/collection/stream?action=stop&id=daemonId
 ----
 
 This command will stop a specific daemon function but leave it resident in memory.
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/collection/stream?action=start&id=daemonId
 ----
 
 This command will start a specific daemon function that has been stopped.
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/collection/stream?action=kill&id=daemonId
 ----
@@ -724,7 +724,7 @@ http://localhost:8983/collection/stream?action=kill&id=daemonId
 This command will stop a specific daemon function and remove it from memory.
 
 [[StreamingExpressions-ContinousPullStreaming]]
-==== Continous Pull Streaming
+==== Continuous Pull Streaming
 
 The {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/io/stream/DaemonStream.html[DaemonStream] Java class (part of the SolrJ libraries) can also be embedded in a Java application to provide continuous pull streaming. Sample code:
 
@@ -734,42 +734,42 @@ StreamContext context = new StreamContext()
 SolrClientCache cache = new SolrClientCache();
 context.setSolrClientCache(cache);
 
-Map topicQueryParams = new HashMap();  
+Map topicQueryParams = new HashMap();
 topicQueryParams.put("q","hello");  // The query for the topic
 topicQueryparams.put("rows", "500"); // How many rows to fetch during each run
-topicQueryparams.put("fl", "id, "title"); // The field list to return with the documents
+topicQueryparams.put("fl", "id", "title"); // The field list to return with the documents
 
-TopicStream topicStream = new TopicStream(zkHost,        // Host address for the zookeeper service housing the collections 
+TopicStream topicStream = new TopicStream(zkHost,        // Host address for the zookeeper service housing the collections
                                          "checkpoints",  // The collection to store the topic checkpoints
                                          "topicData",    // The collection to query for the topic records
                                          "topicId",      // The id of the topic
                                          -1,             // checkpoint every X tuples, if set -1 it will checkpoint after each run.
                                           topicQueryParams); // The query parameters for the TopicStream
 
-DaemonStream daemonStream = new DaemonStream(topicStream,             // The underlying stream to run. 
+DaemonStream daemonStream = new DaemonStream(topicStream,             // The underlying stream to run.
                                              "daemonId",              // The id of the daemon
                                              1000,                    // The interval at which to run the internal stream
                                              500);                    // The internal queue size for the daemon stream. Tuples will be placed in the queue
                                                                       // as they are read by the internal thread.
                                                                       // Calling read() on the daemon stream reads records from the internal queue.
-                                                                       
+
 daemonStream.setStreamContext(context);
 
 daemonStream.open();
- 
+
 //Read until it's time to shutdown the DaemonStream. You can define the shutdown criteria.
 while(!shutdown()) {
     Tuple tuple = daemonStream.read(); // This will block until tuples become available from the underlying stream (TopicStream)
                                       // The EOF tuple (signaling the end of the stream) will never occur until the DaemonStream has been shutdown.
     //Do something with the tuples
 }
- 
+
 // Shutdown the DaemonStream.
 daemonStream.shutdown();
- 
+
 //Read the DaemonStream until the EOF Tuple is found.
 //This allows the underlying stream to perform an orderly shutdown.
- 
+
 while(true) {
     Tuple tuple = daemonStream.read();
     if(tuple.EOF) {
@@ -785,6 +785,8 @@ daemonStream.close();
 [[StreamingExpressions-eval]]
 === eval
 
+//TODO
+
 [[StreamingExpressions-executor]]
 === executor
 
@@ -803,15 +805,15 @@ This model allows for asynchronous execution of jobs where the output is stored
 [[StreamingExpressions-Syntax.15]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 daemon(id="myDaemon",
        terminate="true",
-       executor(threads=10, 
+       executor(threads=10,
                 topic(checkpointCollection
                       storedExpressions,
-                      q="*:*", 
-                      fl="id, expr_s", 
+                      q="*:*",
+                      fl="id, expr_s",
                       initialCheckPoint=0,
                       id="myTopic")))
 ----
@@ -835,12 +837,12 @@ The `fetch` function iterates a stream and fetches additional fields and adds th
 [[StreamingExpressions-Syntax.16]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 fetch(addresses,
       search(people, q="*:*", fl="username, firstName, lastName", sort="username asc"),
       fl="streetAddress, city, state, country, zip",
-      on="username=userId") 
+      on="username=userId")
 ----
 
 The example above fetches addresses for users by matching the username in the tuple with the userId field in the addresses collection.
@@ -848,32 +850,32 @@ The example above fetches addresses for users by matching the username in the tu
 [[StreamingExpressions-having]]
 === having
 
-The `having` expression wraps a stream and applies a boolean operation to each tuple. It emits only tuples for which the boolean operation returns **true**.
+The `having` expression wraps a stream and applies a boolean operation to each tuple. It emits only tuples for which the boolean operation returns *true*.
 
 [[StreamingExpressions-Parameters.16]]
 ==== Parameters
 
 * `StreamExpression`: (Mandatory) The stream source for the having function.
-* `booleanEvaluator`: (Madatory) The following boolean operations are supported: *eq* (equals), *gt* (greater than), *lt* (less than), *gteq* (greater than or equal to), *lteq* (less than or equal to), **and**, *or, eor* (exclusive or), and **not**. Boolean evaluators can be nested with other evaluators to form complex boolean logic.
+* `booleanEvaluator`: (Mandatory) The following boolean operations are supported: *eq* (equals), *gt* (greater than), *lt* (less than), *gteq* (greater than or equal to), *lteq* (less than or equal to), *and*, *or*, *eor* (exclusive or), and *not*. Boolean evaluators can be nested with other evaluators to form complex boolean logic.
 
-The comparison evaluators compare the value in a specific field with a value, whether a string, number, or boolean. For example: **eq**(field1, 10), returns true if *field1* is equal to 10.
+The comparison evaluators compare the value in a specific field with a value, whether a string, number, or boolean. For example, *eq*(field1, 10) returns true if *field1* is equal to 10.
 
 [[StreamingExpressions-Syntax.17]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 having(rollup(over=a_s,
-              sum(a_i), 
-              search(collection1, 
-                     q=*:*, 
-                     fl="id,a_s,a_i,a_f", 
-                     sort="a_s asc")), 
+              sum(a_i),
+              search(collection1,
+                     q=*:*,
+                     fl="id,a_s,a_i,a_f",
+                     sort="a_s asc")),
        and(gt(sum(a_i), 100), lt(sum(a_i), 110)))
- 
+
 ----
 
-In this example, the `having` expression iterates the aggregated tuples from the `rollup` expression and emits all tuples where the field 'sum(a_i)' is greater then 100 and less then 110.
+In this example, the `having` expression iterates the aggregated tuples from the `rollup` expression and emits all tuples where the field `sum(a_i)` is greater than 100 and less than 110.
 
 [[StreamingExpressions-leftOuterJoin]]
 === leftOuterJoin
@@ -892,7 +894,7 @@ You can wrap the incoming streams with a `select` function to be specific about
 [[StreamingExpressions-Syntax.18]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 leftOuterJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
@@ -905,7 +907,7 @@ leftOuterJoin(
   search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
   on="personId=ownerId"
 )
- 
+
 leftOuterJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
   select(
@@ -936,7 +938,7 @@ The hashJoin function can be used when the tuples of Left and Right cannot be pu
 [[StreamingExpressions-Syntax.19]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 hashJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
@@ -949,7 +951,7 @@ hashJoin(
   hashed=search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
   on="personId=ownerId"
 )
- 
+
 hashJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
   hashed=select(
@@ -976,7 +978,7 @@ Wraps two streams Left and Right and for every tuple in Left which exists in Rig
 [[StreamingExpressions-Syntax.20]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 innerJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
@@ -989,7 +991,7 @@ innerJoin(
   search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
   on="personId=ownerId"
 )
- 
+
 innerJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
   select(
@@ -1016,14 +1018,14 @@ The `intersect` function wraps two streams, A and B, and emits tuples from A whi
 [[StreamingExpressions-Syntax.21]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 intersect(
   search(collection1, q=a_s:(setA || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
   search(collection1, q=a_s:(setB || setAB), fl="id,a_s,a_i", sort="a_i asc"),
   on="a_i"
 )
- 
+
 intersect(
   search(collection1, q=a_s:(setA || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
   search(collection1, q=a_s:(setB || setAB), fl="id,a_s,a_i", sort="a_i asc, a_s asc"),
@@ -1047,47 +1049,48 @@ The `merge` function merges two or more streaming expressions and maintains the
 [[StreamingExpressions-Syntax.22]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 # Merging two stream expressions together
 merge(
-      search(collection1, 
-             q="id:(0 3 4)", 
-             fl="id,a_s,a_i,a_f", 
+      search(collection1,
+             q="id:(0 3 4)",
+             fl="id,a_s,a_i,a_f",
              sort="a_f asc"),
-      search(collection1, 
-             q="id:(1)", 
-             fl="id,a_s,a_i,a_f", 
+      search(collection1,
+             q="id:(1)",
+             fl="id,a_s,a_i,a_f",
              sort="a_f asc"),
-      on="a_f asc") 
+      on="a_f asc")
 ----
 
-[source,py]
+[source,text]
 ----
-# Merging four stream expressions together. Notice that while the sorts of each stream are not identical they are 
+# Merging four stream expressions together. Notice that while the sorts of each stream are not identical they are
 # comparable. That is to say the first N fields in each stream's sort matches the N fields in the merge's on clause.
 merge(
-      search(collection1, 
-             q="id:(0 3 4)", 
-             fl="id,fieldA,fieldB,fieldC", 
+      search(collection1,
+             q="id:(0 3 4)",
+             fl="id,fieldA,fieldB,fieldC",
              sort="fieldA asc, fieldB desc"),
-      search(collection1, 
-             q="id:(1)", 
-             fl="id,fieldA", 
+      search(collection1,
+             q="id:(1)",
+             fl="id,fieldA",
              sort="fieldA asc"),
-      search(collection2, 
-             q="id:(10 11 13)", 
-             fl="id,fieldA,fieldC", 
+      search(collection2,
+             q="id:(10 11 13)",
+             fl="id,fieldA,fieldC",
              sort="fieldA asc"),
-      search(collection3, 
-             q="id:(987)", 
-             fl="id,fieldA,fieldC", 
+      search(collection3,
+             q="id:(987)",
+             fl="id,fieldA,fieldC",
              sort="fieldA asc"),
-      on="fieldA asc") 
+      on="fieldA asc")
 ----
 
 [[StreamingExpressions-list]]
 === list
+// TODO
 
 [[StreamingExpressions-null]]
 === null
@@ -1108,12 +1111,12 @@ The null expression can be wrapped by the parallel function and sent to worker n
 [[StreamingExpressions-Syntax.23]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
- parallel(workerCollection, 
+ parallel(workerCollection,
           null(search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_s desc", qt="/export", partitionKeys="a_s")),
-          workers="20", 
-          zkHost="localhost:9983", 
+          workers="20",
+          zkHost="localhost:9983",
           sort="a_s desc")
 ----
 
@@ -1138,7 +1141,7 @@ The outerHashJoin stream can be used when the tuples of Left and Right cannot be
 [[StreamingExpressions-Syntax.24]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 outerHashJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
@@ -1151,7 +1154,7 @@ outerHashJoin(
   hashed=search(pets, q=type:cat, fl="ownerId,petName", sort="ownerId asc"),
   on="personId=ownerId"
 )
- 
+
 outerHashJoin(
   search(people, q=*:*, fl="personId,name", sort="personId asc"),
   hashed=select(
@@ -1175,9 +1178,7 @@ The parallel function maintains the sort order of the tuples returned by the wor
 .Worker Collections
 [TIP]
 ====
-
 The worker nodes can be from the same collection as the data, or they can be a different collection entirely, even one that only exists for parallel streaming expressions. A worker collection can be any SolrCloud collection that has the `/stream` handler configured. Unlike normal SolrCloud collections, worker collections don't have to hold any data. Worker collections can be empty collections that exist only to execute streaming expressions.
-
 ====
 
 [[StreamingExpressions-Parameters.24]]
@@ -1192,14 +1193,14 @@ The worker nodes can be from the same collection as the data, or they can be a d
 [[StreamingExpressions-Syntax.25]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
- parallel(workerCollection, 
+ parallel(workerCollection,
           reduce(search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_s desc", partitionKeys="a_s"),
                  by="a_s",
                  group(sort="a_f desc", n="4")),
-          workers="20", 
-          zkHost="localhost:9983", 
+          workers="20",
+          zkHost="localhost:9983",
           sort="a_s desc")
 ----
 
@@ -1225,10 +1226,10 @@ The `priority` function will only emit a batch of tasks from one of the queues e
 [[StreamingExpressions-Syntax.26]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 daemon(id="myDaemon",
-       executor(threads=10, 
+       executor(threads=10,
                 priority(topic(checkpointCollection, storedExpressions, q="priority:high", fl="id, expr_s", initialCheckPoint=0,id="highPriorityTasks"),
                          topic(checkpointCollection, storedExpressions, q="priority:low", fl="id, expr_s", initialCheckPoint=0,id="lowPriorityTasks"))))
 ----
@@ -1244,9 +1245,7 @@ Each tuple group is operated on as a single block by a pluggable reduce operatio
 
 [IMPORTANT]
 ====
-
 The reduce function relies on the sort order of the underlying stream. Accordingly the sort order of the underlying stream must be aligned with the group by field.
-
 ====
 
 [[StreamingExpressions-Parameters.26]]
@@ -1259,7 +1258,7 @@ The reduce function relies on the sort order of the underlying stream. According
 [[StreamingExpressions-Syntax.27]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 reduce(search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_s asc, a_f asc"),
        by="a_s",
@@ -1284,7 +1283,7 @@ The rollup function also needs to process entire result sets in order to perform
 [[StreamingExpressions-Syntax.28]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 rollup(
    search(collection1, q=*:*, fl="a_s,a_i,a_f", qt="/export", sort="a_s asc"),
@@ -1325,7 +1324,7 @@ The `select` function wraps a streaming expression and outputs tuples containing
 [[StreamingExpressions-Syntax.29]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 // output tuples with fields teamName, wins, losses, and winPercentages where a null value for wins or losses is translated to the value of 0
 select(
@@ -1355,12 +1354,12 @@ The `sort` function wraps a streaming expression and re-orders the tuples. The s
 
 The expression below finds dog owners and orders the results by owner and pet name. Notice that it uses an efficient innerJoin by first ordering by the person/owner id and then re-ordering the final output by the owner and pet names.
 
-[source,java]
+[source,text]
 ----
 sort(
   innerJoin(
     search(people, q=*:*, fl="id,name", sort="id asc"),
-    search(pets, q=type:dog, fl="owner,petName", sort="owner asc"), 
+    search(pets, q=type:dog, fl="owner,petName", sort="owner asc"),
     on="id=owner"
   ),
   by="name asc, petName asc"
@@ -1384,13 +1383,13 @@ The `top` function wraps a streaming expression and re-orders the tuples. The to
 
 The expression below finds the top 3 results of the underlying search. Notice that it reverses the sort order. The top function re-orders the results of the underlying stream.
 
-[source,java]
+[source,text]
 ----
 top(n=3,
-     search(collection1, 
+     search(collection1,
             q="*:*",
-            qt="/export", 
-            fl="id,a_s,a_i,a_f", 
+            qt="/export",
+            fl="id,a_s,a_i,a_f",
             sort="a_f desc, a_i desc"),
       sort="a_f asc, a_i asc")
 ----
@@ -1411,7 +1410,7 @@ The unique function implements a non-co-located unique algorithm. This means tha
 [[StreamingExpressions-Syntax.32]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
 unique(
   search(collection1,
@@ -1437,15 +1436,15 @@ The `update` function wraps another functions and sends the tuples to a SolrClou
 [[StreamingExpressions-Syntax.33]]
 ==== Syntax
 
-[source,java]
+[source,text]
 ----
- update(destinationCollection, 
-        batchSize=500, 
-        search(collection1, 
-               q=*:*, 
-               fl="id,a_s,a_i,a_f,s_multi,i_multi", 
+ update(destinationCollection,
+        batchSize=500,
+        search(collection1,
+               q=*:*,
+               fl="id,a_s,a_i,a_f,s_multi,i_multi",
                sort="a_f asc, a_i asc"))
- 
+
 ----
 
 The example above sends the tuples returned by the `search` function to the `destinationCollection` to be indexed.
@@ -1479,7 +1478,7 @@ The `abs` function will return the absolute value of the provided single paramet
 
 The expressions below show the various ways in which you can use the `abs` evaluator. Only one parameter is accepted. Returns a numeric value.
 
-[source,java]
+[source,text]
 ----
 abs(1) // 1, not really a good use case for it
 abs(-1) // 1, not really a good use case for it
@@ -1505,7 +1504,7 @@ The `add` function will take 2 or more numeric values and add them together. The
 
 The expressions below show the various ways in which you can use the `add` evaluator. The number and order of these parameters do not matter and is not limited except that at least two parameters are required. Returns a numeric value.
 
-[source,java]
+[source,text]
 ----
 add(1,2,3,4) // 1 + 2 + 3 + 4 == 10
 add(1,fieldA) // 1 + value of fieldA
@@ -1531,7 +1530,7 @@ The `div` function will take two numeric values and divide them. The function wi
 
 The expressions below show the various ways in which you can use the `div` evaluator. The first value will be divided by the second and as such the second cannot be 0.
 
-[source,java]
+[source,text]
 ----
 div(1,2) // 1 / 2
 div(1,fieldA) // 1 / fieldA
@@ -1554,10 +1553,10 @@ The `log` function will return the natural log of the provided single parameter.
 
 The expressions below show the various ways in which you can use the `log` evaluator. Only one parameter is accepted. Returns a numeric value.
 
-[source,java]
+[source,text]
 ----
-log(100) 
-log(add(fieldA,fieldB)) 
+log(100)
+log(add(fieldA,fieldB))
 log(fieldA)
 ----
 
@@ -1579,7 +1578,7 @@ The `mult` function will take two or more numeric values and multiply them toget
 
 The expressions below show the various ways in which you can use the `mult` evaluator. The number and order of these parameters do not matter and is not limited except that at least two parameters are required. Returns a numeric value.
 
-[source,java]
+[source,text]
 ----
 mult(1,2,3,4) // 1 * 2 * 3 * 4
 mult(1,fieldA) // 1 * value of fieldA
@@ -1607,7 +1606,7 @@ The `sub` function will take 2 or more numeric values and subtract them, from le
 
 The expressions below show the various ways in which you can use the `sub` evaluator. The number of these parameters does not matter and is not limited except that at least two parameters are required. Returns a numeric value.
 
-[source,java]
+[source,text]
 ----
 sub(1,2,3,4) // 1 - 2 - 3 - 4
 sub(1,fieldA) // 1 - value of fieldA
@@ -1618,45 +1617,57 @@ if(gt(fieldA,fieldB),sub(fieldA,fieldB),sub(fieldB,fieldA)) // if fieldA > field
 ----
 
 [[StreamingExpressions-pow]]
-=== *pow*
+=== pow
+//TODO
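+
+A minimal sketch, assuming `pow` takes a base and an exponent, following the two-parameter pattern of the other numeric evaluators:
+
+[source,text]
+----
+pow(2,3) // 2 raised to the power of 3 == 8
+pow(fieldA,2) // the value of fieldA squared
+----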
 
 [[StreamingExpressions-mod]]
-=== *mod*
+=== mod
+//TODO
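+
+A minimal sketch, assuming `mod` returns the remainder of dividing the first parameter by the second:
+
+[source,text]
+----
+mod(10,3) // remainder of 10 / 3 == 1
+mod(fieldA,fieldB) // remainder of fieldA / fieldB
+----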
 
 [[StreamingExpressions-ceil]]
-=== *ceil*
+=== ceil
+//TODO
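+
+A hedged sketch, assuming `ceil` rounds a single numeric parameter up to the nearest integer:
+
+[source,text]
+----
+ceil(5.1) // 6
+ceil(fieldA) // smallest integer >= fieldA
+----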
 
 [[StreamingExpressions-floor]]
-=== *floor*
+=== floor
+//TODO
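+
+A hedged sketch, assuming `floor` rounds a single numeric parameter down to the nearest integer:
+
+[source,text]
+----
+floor(5.9) // 5
+floor(fieldA) // largest integer <= fieldA
+----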
 
 [[StreamingExpressions-sin]]
-=== *sin*
+=== sin
+//TODO
 
 [[StreamingExpressions-asin]]
-=== *asin*
+=== asin
+//TODO
 
 [[StreamingExpressions-sinh]]
-=== *sinh*
+=== sinh
+//TODO
 
 [[StreamingExpressions-cos]]
-=== *cos*
+=== cos
+//TODO
 
 [[StreamingExpressions-acos]]
-=== *acos*
+=== acos
+//TODO
 
 [[StreamingExpressions-atan]]
-=== *atan*
+=== atan
+//TODO
 
 [[StreamingExpressions-round]]
-=== *round*
+=== round
+//TODO
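+
+A hedged sketch, assuming `round` rounds a single numeric parameter to the nearest integer:
+
+[source,text]
+----
+round(5.4) // 5
+round(fieldA) // fieldA rounded to the nearest integer
+----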
 
 [[StreamingExpressions-sqrt]]
-=== *sqrt*
+=== sqrt
+//TODO
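+
+A minimal sketch, assuming `sqrt` returns the square root of a single numeric parameter:
+
+[source,text]
+----
+sqrt(9) // 3
+sqrt(fieldA) // square root of fieldA
+----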
 
 [[StreamingExpressions-cbrt]]
-=== *cbrt*
+=== cbrt
+//TODO
 
-*and*
+[[StreamingExpressions-and]]
+=== and
 
 The `and` function will return the logical AND of at least 2 boolean parameters. The function will fail to execute if any parameters are non-boolean or null. Returns a boolean value.
 
@@ -1673,7 +1684,7 @@ The `and` function will return the logical AND of at least 2 boolean parameters.
 
 The expressions below show the various ways in which you can use the `and` evaluator. At least two parameters are required, but there is no limit to how many you can use.
 
-[source,java]
+[source,text]
 ----
 and(true,fieldA) // true && fieldA
 and(fieldA,fieldB) // fieldA && fieldB
@@ -1699,7 +1710,7 @@ The `eq` function will return whether all the parameters are equal, as per Java'
 
 The expressions below show the various ways in which you can use the `eq` evaluator.
 
-[source,java]
+[source,text]
 ----
 eq(1,2) // 1 == 2
 eq(1,fieldA) // 1 == fieldA
@@ -1725,7 +1736,7 @@ The `eor` function will return the logical exclusive or of at least two boolean
 
 The expressions below show the various ways in which you can use the `eor` evaluator. At least two parameters are required, but there is no limit to how many you can use.
 
-[source,java]
+[source,text]
 ----
 eor(true,fieldA) // true iff fieldA is false
 eor(fieldA,fieldB) // true iff either fieldA or fieldB is true but not both
@@ -1748,7 +1759,7 @@ The `gteq` function will return whether the first parameter is greater than or e
 
 The expressions below show the various ways in which you can use the `gteq` evaluator.
 
-[source,java]
+[source,text]
 ----
 gteq(1,2) // 1 >= 2
 gteq(1,fieldA) // 1 >= fieldA
@@ -1772,7 +1783,7 @@ The `gt` function will return whether the first parameter is greater than the se
 
 The expressions below show the various ways in which you can use the `gt` evaluator.
 
-[source,java]
+[source,text]
 ----
 gt(1,2) // 1 > 2
 gt(1,fieldA) // 1 > fieldA
@@ -1797,7 +1808,7 @@ The `if` function works like a standard conditional if/then statement. If the fi
 
 The expressions below show the various ways in which you can use the `if` evaluator.
 
-[source,java]
+[source,text]
 ----
 if(fieldA,fieldB,fieldC) // if fieldA is true then fieldB else fieldC
 if(gt(fieldA,5), fieldA, 5) // if fieldA > 5 then fieldA else 5
@@ -1820,7 +1831,7 @@ The `lteq` function will return whether the first parameter is less than or equa
 
 The expressions below show the various ways in which you can use the `lteq` evaluator.
 
-[source,java]
+[source,text]
 ----
 lteq(1,2) // 1 <= 2
 lteq(1,fieldA) // 1 <= fieldA
@@ -1844,7 +1855,7 @@ The `lt` function will return whether the first parameter is less than the secon
 
 The expressions below show the various ways in which you can use the `lt` evaluator.
 
-[source,java]
+[source,text]
 ----
 lt(1,2) // 1 < 2
 lt(1,fieldA) // 1 < fieldA
@@ -1867,7 +1878,7 @@ The `not` function will return the logical NOT of a single boolean parameter. Th
 
 The expressions below show the various ways in which you can use the `not` evaluator. Only one parameter is allowed.
 
-[source,java]
+[source,text]
 ----
 not(true) // false
 not(fieldA) // true if fieldA is false else false
@@ -1892,7 +1903,7 @@ The `or` function will return the logical OR of at least 2 boolean parameters. T
 
 The expressions below show the various ways in which you can use the `or` evaluator. At least two parameters are required, but there is no limit to how many you can use.
 
-[source,java]
+[source,text]
 ----
 or(true,fieldA) // true || fieldA
 or(fieldA,fieldB) // fieldA || fieldB
@@ -1902,27 +1913,35 @@ or(fieldA,fieldB,fieldC,and(fieldD,fieldE),fieldF)
 
 [[StreamingExpressions-analyze]]
 === analyze
+//TODO
 
 [[StreamingExpressions-second]]
 === second
+//TODO
 
 [[StreamingExpressions-minute]]
 === minute
+//TODO
 
 [[StreamingExpressions-hour]]
 === hour
+//TODO
 
 [[StreamingExpressions-day]]
 === day
+//TODO
 
 [[StreamingExpressions-month]]
 === month
+//TODO
 
 [[StreamingExpressions-year]]
 === year
+//TODO
 
 [[StreamingExpressions-convert]]
 === convert
+//TODO
 
 [[StreamingExpressions-raw]]
 === raw
@@ -1939,14 +1958,15 @@ The `raw` function will return whatever raw value is the parameter. This is usef
 
 The expressions below show the various ways in which you can use the `raw` evaluator. Whatever is inside will be returned as-is. Internal evaluators are considered strings and are not evaluated.
 
-[source,java]
+[source,text]
 ----
 raw(foo) // "foo"
 raw(count(*)) // "count(*)"
 raw(45) // 45
 raw(true) // "true" (note: this returns the string "true" and not the boolean true)
-eq(raw(fieldA), fieldA) // true if the value of fieldA equals the string "fieldA" 
+eq(raw(fieldA), fieldA) // true if the value of fieldA equals the string "fieldA"
 ----
 
 [[StreamingExpressions-UUID]]
 === UUID
+//TODO

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/suggester.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/suggester.adoc b/solr/solr-ref-guide/src/suggester.adoc
index e80b637..c6a5056 100644
--- a/solr/solr-ref-guide/src/suggester.adoc
+++ b/solr/solr-ref-guide/src/suggester.adoc
@@ -2,9 +2,13 @@
 :page-shortname: suggester
 :page-permalink: suggester.html
 
-The SuggestComponent in Solr provides users with automatic suggestions for query terms. You can use this to implement a powerful auto-suggest feature in your search application.
+The SuggestComponent in Solr provides users with automatic suggestions for query terms.
 
-Although it is possible to use the <<spell-checking.adoc#spell-checking,Spell Checking>> functionality to power autosuggest behavior, Solr has a dedicated http://lucene.apache.org/solr/api/solr-core/org/apache/solr/handler/component/SuggestComponent.html[SuggestComponent] designed for this functionality. This approach utilizes Lucene's Suggester implementation and supports all of the lookup implementations available in Lucene.
+You can use this to implement a powerful auto-suggest feature in your search application.
+
+Although it is possible to use the <<spell-checking.adoc#spell-checking,Spell Checking>> functionality to power autosuggest behavior, Solr has a dedicated http://lucene.apache.org/solr/api/solr-core/org/apache/solr/handler/component/SuggestComponent.html[SuggestComponent] designed for this functionality.
+
+This approach utilizes Lucene's Suggester implementation and supports all of the lookup implementations available in Lucene.
 
 The main features of this Suggester are:
 
@@ -46,7 +50,7 @@ The Suggester search component takes several configuration parameters. The choic
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |searchComponent name |Arbitrary name for the search component.
@@ -66,7 +70,7 @@ To be used as the basis for a suggestion, the field must be stored. You may want
     <filter class="solr.StandardFilterFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
-</fieldType> 
+</fieldType>
 ----
 
 However, this minimal analysis is not required if you want more analysis to occur on terms. If using the AnalyzingLookupFactory as your lookupImpl, however, you have the option of defining the field type rules to use for index and query time analysis.
@@ -223,10 +227,9 @@ If using a dictionary file, it should be a plain text file in UTF-8 encoding. Yo
 
 This dictionary implementation takes one parameter in addition to parameters described for the Suggester generally and for the lookup implementation:
 
-* fieldDelimiter: Specify the delimiter to be used separating the entries, weights and payloads. The default is tab ('\t').
-
-*Example*
+fieldDelimiter:: Specify the delimiter used to separate entries, weights, and payloads. The default is tab ('\t').
 
+.Example File
 [source,text]
 ----
 acquire
@@ -237,7 +240,7 @@ accommodate 3.0
 [[Suggester-MultipleDictionaries]]
 ==== Multiple Dictionaries
 
-It is possible to include multiple dictionaryImpl definitions in a single SuggestComponent definition.
+It is possible to include multiple `dictionaryImpl` definitions in a single SuggestComponent definition.
 
 To do this, simply define separate suggesters, as in this example:
 
@@ -246,8 +249,8 @@ To do this, simply define separate suggesters, as in this example:
 <searchComponent name="suggest" class="solr.SuggestComponent">
   <lst name="suggester">
     <str name="name">mySuggester</str>
-    <str name="lookupImpl">FuzzyLookupFactory</str>      
-    <str name="dictionaryImpl">DocumentDictionaryFactory</str>      
+    <str name="lookupImpl">FuzzyLookupFactory</str>
+    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
     <str name="field">cat</str>
     <str name="weightField">price</str>
     <str name="suggestAnalyzerFieldType">string</str>
@@ -262,7 +265,7 @@ To do this, simply define separate suggesters, as in this example:
     <str name="sortField">price</str>
     <str name="storeDir">suggest_fuzzy_doc_expr_dict</str>
     <str name="suggestAnalyzerFieldType">text_en</str>
-  </lst>  
+  </lst>
 </searchComponent>
 ----
 
@@ -291,7 +294,7 @@ After adding the search component, a request handler must be added to `solrconfi
 
 The following parameters allow you to set defaults for the Suggest request handler:
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |suggest=true |This parameter should always be true, because we always want to run the Suggester for queries submitted to this handler.
@@ -310,9 +313,7 @@ These properties can also be overridden at query time, or not set in the request
 .Context Filtering
 [IMPORTANT]
 ====
-
 Context filtering (`suggest.cfq`) is currently only supported by AnalyzingInfixLookupFactory and BlendedInfixLookupFactory, and only when backed by a Document*Dictionary. All other implementations will return unfiltered matches as if filtering was not requested.
-
 ====
 
 [[Suggester-ExampleUsages]]
@@ -429,8 +430,7 @@ Context filtering lets you filter suggestions by a separate context field, such
 
 Add `contextField` to your suggester configuration. This example will suggest names and allow to filter by category:
 
-*solrconfig.xml*
-
+.solrconfig.xml
 [source,xml]
 ----
 <searchComponent name="suggest" class="solr.SuggestComponent">
@@ -451,7 +451,7 @@ Example context filtering suggest query:
 
 [source,text]
 ----
-http://localhost:8983/solr/techproducts/suggest?suggest=true&suggest.build=true& \ 
+http://localhost:8983/solr/techproducts/suggest?suggest=true&suggest.build=true& \
    suggest.dictionary=mySuggester&wt=json&suggest.q=c&suggest.cfq=memory
 ----
 


[2/2] lucene-solr:jira/solr-10290: SOLR-10296: conversion, remaining letter S minus solr-glossary

Posted by ct...@apache.org.
SOLR-10296: conversion, remaining letter S minus solr-glossary


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d05e3a40
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/d05e3a40
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/d05e3a40

Branch: refs/heads/jira/solr-10290
Commit: d05e3a4066fcf7446479a233f26a254c970fb95c
Parents: 3f9dc38
Author: Cassandra Targett <ct...@apache.org>
Authored: Mon May 8 10:23:43 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Mon May 8 10:23:43 2017 -0500

----------------------------------------------------------------------
 .../src/solr-control-script-reference.adoc      | 111 ++--
 .../src/solr-cores-and-solr-xml.adoc            |  12 +-
 solr/solr-ref-guide/src/solr-field-types.adoc   |   6 +-
 .../src/solr-jdbc-apache-zeppelin.adoc          |  27 +-
 .../src/solr-jdbc-dbvisualizer.adoc             |   3 +-
 .../src/solr-jdbc-python-jython.adoc            |  73 +--
 solr/solr-ref-guide/src/solr-jdbc-r.adoc        |  19 +-
 .../src/solr-jdbc-squirrel-sql.adoc             |  13 -
 ...lrcloud-with-legacy-configuration-files.adoc |  26 +-
 solr/solr-ref-guide/src/solrcloud.adoc          |   2 +-
 solr/solr-ref-guide/src/spatial-search.adoc     |  98 ++--
 solr/solr-ref-guide/src/spell-checking.adoc     |  72 ++-
 solr/solr-ref-guide/src/stream-screen.adoc      |   2 +-
 .../src/streaming-expressions.adoc              | 508 ++++++++++---------
 solr/solr-ref-guide/src/suggester.adoc          |  34 +-
 15 files changed, 502 insertions(+), 504 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index eeece8f..d150d39 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -2,11 +2,13 @@
 :page-shortname: solr-control-script-reference
 :page-permalink: solr-control-script-reference.html
 
-Solr includes a script known as "`bin/solr`" that allows you to start and stop Solr, create and delete collections or cores, perform operations on ZooKeeper and check the status of Solr and configured shards. You can find the script in the `bin/` directory of your Solr installation. The `bin/solr` script makes Solr easier to work with by providing simple commands and options to quickly accomplish common goals.
+Solr includes a script known as "`bin/solr`" that allows you to perform many common operations on your Solr installation or cluster.
 
-The headings below correspond to available commands. For each command, the available options are described with examples.
+You can start and stop Solr, create and delete collections or cores, perform operations on ZooKeeper and check the status of Solr and configured shards.
 
-More examples of bin/solr in use are available throughout the Solr Reference Guide, but particularly in the sections <<running-solr.adoc#running-solr,Running Solr>> and <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
+You can find the script in the `bin/` directory of your Solr installation. The `bin/solr` script makes Solr easier to work with by providing simple commands and options to quickly accomplish common goals.
+
+More examples of `bin/solr` in use are available throughout the Solr Reference Guide, but particularly in the sections <<running-solr.adoc#running-solr,Running Solr>> and <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
 
 [[SolrControlScriptReference-StartingandStopping]]
 == Starting and Stopping
@@ -14,9 +16,9 @@ More examples of bin/solr in use are available throughout the Solr Reference Gui
 [[SolrControlScriptReference-StartandRestart]]
 === Start and Restart
 
-The start command starts Solr. The restart command allows you to restart Solr while it is already running or if it has been stopped already.
+The `start` command starts Solr. The `restart` command allows you to restart Solr while it is already running or if it has been stopped already.
 
-The start and restart commands have several options to allow you to run in SolrCloud mode, use an example configuration set, start with a hostname or port that is not the default and point to a local ZooKeeper ensemble.
+The `start` and `restart` commands have several options to allow you to run in SolrCloud mode, use an example configuration set, start with a hostname or port that is not the default and point to a local ZooKeeper ensemble.
 
 `bin/solr start [options]`
 
@@ -26,16 +28,16 @@ The start and restart commands have several options to allow you to run in SolrC
 
 `bin/solr restart -help`
 
-When using the restart command, you must pass all of the parameters you initially passed when you started Solr. Behind the scenes, a stop request is initiated, so Solr will be stopped before being started again. If no nodes are already running, restart will skip the step to stop and proceed to starting Solr.
+When using the `restart` command, you must pass all of the parameters you initially passed when you started Solr. Behind the scenes, a stop request is initiated, so Solr will be stopped before being started again. If no nodes are already running, `restart` will skip the step to stop and proceed to starting Solr.
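+
+For example, a node originally started in cloud mode on a custom port must be restarted with the same flags (a hypothetical illustration):
+
+[source,bash]
+----
+bin/solr start -c -p 8984
+bin/solr restart -c -p 8984
+----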
 
 [[SolrControlScriptReference-AvailableParameters]]
 ==== Available Parameters
 
-The bin/solr script provides many options to allow you to customize the server in common ways, such as changing the listening port. However, most of the defaults are adequate for most Solr installations, especially when just getting started.
+The `bin/solr` script provides many options to allow you to customize the server in common ways, such as changing the listening port. However, most of the defaults are adequate for most Solr installations, especially when just getting started.
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-a "<string>" |Start Solr with additional JVM parameters, such as those starting with -X. If you are passing JVM parameters that begin with "-D", you can omit the -a option. |`bin/solr start -a "-Xdebug -Xrunjdwp:transport=dt_socket, server=y,suspend=n,address=1044"`
@@ -61,7 +63,6 @@ The available options are:
 * schemaless
 
 See the section <<SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> below for more details on the example configurations.
-
  |`bin/solr start -e schemaless`
 |-f |Start Solr in the foreground; you cannot use this option when running examples with the -e option. |`bin/solr start -f`
 |-h <hostname> |Start Solr with the defined hostname. If this is not specified, 'localhost' will be assumed. |`bin/solr start -h search.mysolr.com`
@@ -97,26 +98,30 @@ It is not necessary to define all of the options when starting if the defaults a
 [[SolrControlScriptReference-SettingJavaSystemProperties]]
 ==== Setting Java System Properties
 
-The bin/solr script will pass any additional parameters that begin with -D to the JVM, which allows you to set arbitrary Java system properties. For example, to set the auto soft-commit frequency to 3 seconds, you can do:
+The `bin/solr` script will pass any additional parameters that begin with `-D` to the JVM, which allows you to set arbitrary Java system properties.
+
+For example, to set the auto soft-commit frequency to 3 seconds, you can do:
 
 `bin/solr start -Dsolr.autoSoftCommit.maxTime=3000`
 
 [[SolrControlScriptReference-SolrCloudMode]]
 ==== SolrCloud Mode
 
-The -c and -cloud options are equivalent:
+The `-c` and `-cloud` options are equivalent:
 
 `bin/solr start -c`
 
 `bin/solr start -cloud`
 
-If you specify a ZooKeeper connection string, such as `-z 192.168.1.4:2181`, then Solr will connect to ZooKeeper and join the cluster. If you do not specify the -z option when starting Solr in cloud mode, then Solr will launch an embedded ZooKeeper server listening on the Solr port + 1000, i.e., if Solr is running on port 8983, then the embedded ZooKeeper will be listening on port 9983.
+If you specify a ZooKeeper connection string, such as `-z 192.168.1.4:2181`, then Solr will connect to ZooKeeper and join the cluster.
+
+If you do not specify the `-z` option when starting Solr in cloud mode, then Solr will launch an embedded ZooKeeper server listening on the Solr port + 1000, i.e., if Solr is running on port 8983, then the embedded ZooKeeper will be listening on port 9983.
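+
+For example, with Solr running on its default port of 8983:
+
+[source,bash]
+----
+bin/solr start -c -z 192.168.1.4:2181  # join an external ZooKeeper ensemble
+bin/solr start -c                      # embedded ZooKeeper listens on port 9983
+----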
 
 [IMPORTANT]
 ====
-
-IMPORTANT: If your ZooKeeper connection string uses a chroot, such as `localhost:2181/solr`, then you need to create the /solr znode before launching SolrCloud using the bin/solr script. To do this use the "mkroot" command outlined below, for example: `bin/solr zk mkroot /solr -z 192.168.1.4:2181`
-
+If your ZooKeeper connection string uses a chroot, such as `localhost:2181/solr`, then you need to create the `/solr` znode before launching SolrCloud using the `bin/solr` script.
+
+To do this, use the `mkroot` command outlined below, for example: `bin/solr zk mkroot /solr -z 192.168.1.4:2181`
 ====
 
 When starting in SolrCloud mode, the interactive script session will prompt you to choose a configset to use.
@@ -130,28 +135,26 @@ For more information about starting Solr in SolrCloud mode, see also the section
 
 The example configurations allow you to get started quickly with a configuration that mirrors what you hope to accomplish with Solr.
 
-Each example launches Solr in with a managed schema, which allows use of the <<schema-api.adoc#schema-api,Schema API>> to make schema edits, but does not allow manual editing of a Schema file If you would prefer to manually modify a `schema.xml` file directly, you can change this default as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>.
+Each example launches Solr with a managed schema, which allows use of the <<schema-api.adoc#schema-api,Schema API>> to make schema edits, but does not allow manual editing of a Schema file. If you would prefer to manually modify a `schema.xml` file directly, you can change this default as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>.
 
 Unless otherwise noted in the descriptions below, the examples do not enable <<solrcloud.adoc#solrcloud,SolrCloud>> nor <<schemaless-mode.adoc#schemaless-mode,schemaless mode>>.
 
 The following examples are provided:
 
-* **cloud**: This example starts a 1-4 node SolrCloud cluster on a single machine. When chosen, an interactive session will start to guide you through options to select the initial configset to use, the number of nodes for your example cluster, the ports to use, and name of the collection to be created. When using this example, you can choose from any of the available configsets found in `$SOLR_HOME/server/solr/configsets`.
-* **techproducts**: This example starts Solr in standalone mode with a schema designed for the sample documents included in the `$SOLR_HOME/example/exampledocs` directory. The configset used can be found in `$SOLR_HOME/server/solr/configsets/sample_techproducts_configs`.
-* **dih**: This example starts Solr in standalone mode with the DataImportHandler (DIH) enabled and several example `dataconfig.xml` files pre-configured for different types of data supported with DIH (such as, database contents, email, RSS feeds, etc.). The configset used is customized for DIH, and is found in `$SOLR_HOME/example/example-DIH/solr/conf`. For more information about DIH, see the section <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.
-* **schemaless**: This example starts Solr in standalone mode using a managed schema, as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, and provides a very minimal pre-defined schema. Solr will run in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents. The configset used can be found in `$SOLR_HOME/server/solr/configsets/data_driven_schema_configs`.
+* *cloud*: This example starts a 1-4 node SolrCloud cluster on a single machine. When chosen, an interactive session will start to guide you through options to select the initial configset to use, the number of nodes for your example cluster, the ports to use, and name of the collection to be created. When using this example, you can choose from any of the available configsets found in `$SOLR_HOME/server/solr/configsets`.
+* *techproducts*: This example starts Solr in standalone mode with a schema designed for the sample documents included in the `$SOLR_HOME/example/exampledocs` directory. The configset used can be found in `$SOLR_HOME/server/solr/configsets/sample_techproducts_configs`.
+* *dih*: This example starts Solr in standalone mode with the DataImportHandler (DIH) enabled and several example `dataconfig.xml` files pre-configured for different types of data supported with DIH (such as, database contents, email, RSS feeds, etc.). The configset used is customized for DIH, and is found in `$SOLR_HOME/example/example-DIH/solr/conf`. For more information about DIH, see the section <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.
+* *schemaless*: This example starts Solr in standalone mode using a managed schema, as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, and provides a very minimal pre-defined schema. Solr will run in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents. The configset used can be found in `$SOLR_HOME/server/solr/configsets/data_driven_schema_configs`.
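+
+For example, to launch the techproducts configuration described above:
+
+[source,bash]
+----
+bin/solr start -e techproducts
+----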
 
 [IMPORTANT]
 ====
-
-The run in-foreground option (-f) is not compatible with the -e option since the script needs to perform additional tasks after starting the Solr server.
-
+The run in-foreground option (`-f`) is not compatible with the `-e` option since the script needs to perform additional tasks after starting the Solr server.
 ====
 
 [[SolrControlScriptReference-Stop]]
 === Stop
 
-The stop command sends a STOP request to a running Solr node, which allows it to shutdown gracefully. The command will wait up to 5 seconds for Solr to stop gracefully and then will forcefully kill the process (kill -9).
+The `stop` command sends a STOP request to a running Solr node, which allows it to shut down gracefully. The command will wait up to 5 seconds for Solr to stop gracefully and then will forcefully kill the process (`kill -9`).
 
 `bin/solr stop [options]`
 
@@ -174,7 +177,7 @@ The stop command sends a STOP request to a running Solr node, which allows it to
 [[SolrControlScriptReference-Version]]
 === Version
 
-The version command simply returns the version of Solr currently installed and immediately exists.
+The `version` command simply returns the version of Solr currently installed and immediately exits.
 
 [source,plain]
 ----
@@ -185,7 +188,9 @@ X.Y.0
 [[SolrControlScriptReference-Status]]
 === Status
 
-The status command displays basic JSON-formatted information for any Solr nodes found running on the local system. The status command uses the SOLR_PID_DIR environment variable to locate Solr process ID files to find running Solr instances; the SOLR_PID_DIR variable defaults to the bin directory.
+The `status` command displays basic JSON-formatted information for any Solr nodes found running on the local system.
+
+The `status` command uses the `SOLR_PID_DIR` environment variable, which defaults to the `bin` directory, to locate Solr process ID files and find running Solr instances.
 
 `bin/solr status`
 
@@ -193,7 +198,7 @@ The output will include a status of each node of the cluster, as in this example
 
 [source,plain]
 ----
-Found 2 Solr nodes: 
+Found 2 Solr nodes:
 
 Solr process 39920 running on port 7574
 {
@@ -223,7 +228,7 @@ Solr process 39827 running on port 8865
 [[SolrControlScriptReference-Healthcheck]]
 === Healthcheck
 
-The healthcheck command generates a JSON-formatted health report for a collection when running in SolrCloud mode. The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
+The `healthcheck` command generates a JSON-formatted health report for a collection when running in SolrCloud mode. The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
 
 `bin/solr healthcheck [options]`
 
@@ -241,10 +246,10 @@ The healthcheck command generates a JSON-formatted health report for a collectio
 
 Below is an example healthcheck request and response using a non-standard ZooKeeper connect string, with 2 nodes running:
 
-[source,plain]
-----
-$ bin/solr healthcheck -c gettingstarted -z localhost:9865
+`$ bin/solr healthcheck -c gettingstarted -z localhost:9865`
 
+[source,json]
+----
 {
   "collection":"gettingstarted",
   "status":"healthy",
@@ -294,12 +299,12 @@ $ bin/solr healthcheck -c gettingstarted -z localhost:9865
 [[SolrControlScriptReference-CollectionsandCores]]
 == Collections and Cores
 
-The bin/solr script can also help you create new collections (in SolrCloud mode) or cores (in standalone mode), or delete collections.
+The `bin/solr` script can also help you create new collections (in SolrCloud mode) or cores (in standalone mode), or delete collections.
 
 [[SolrControlScriptReference-Create]]
 === Create
 
-The create command detects the mode that Solr is running in (standalone or SolrCloud) and then creates a core or collection depending on the mode.
+The `create` command detects the mode that Solr is running in (standalone or SolrCloud) and then creates a core or collection depending on the mode.
 
 `bin/solr create [options]`
 
@@ -310,7 +315,7 @@ The create command detects the mode that Solr is running in (standalone or SolrC
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-c <name> |Name of the core or collection to create (required). |`bin/solr create -c mycollection`
@@ -345,7 +350,9 @@ a|
 [[SolrControlScriptReference-ConfigurationDirectoriesandSolrCloud]]
 ==== Configuration Directories and SolrCloud
 
-Before creating a collection in SolrCloud, the configuration directory used by the collection must be uploaded to ZooKeeper. The create command supports several use cases for how collections and configuration directories work. The main decision you need to make is whether a configuration directory in ZooKeeper should be shared across multiple collections. Let's work through a few examples to illustrate how configuration directories work in SolrCloud.
+Before creating a collection in SolrCloud, the configuration directory used by the collection must be uploaded to ZooKeeper. The `create` command supports several use cases for how collections and configuration directories work. The main decision you need to make is whether a configuration directory in ZooKeeper should be shared across multiple collections.
+
+Let's work through a few examples to illustrate how configuration directories work in SolrCloud.
 
 First, if you don't provide the `-d` or `-n` options, then the default configuration (`$SOLR_HOME/server/solr/configsets/data_driven_schema_configs/conf`) is uploaded to ZooKeeper using the same name as the collection. For example, the following command will result in the *data_driven_schema_configs* configuration being uploaded to `/configs/contacts` in ZooKeeper: `bin/solr create -c contacts`. If you create another collection, by doing `bin/solr create -c contacts2`, then another copy of the `data_driven_schema_configs` directory will be uploaded to ZooKeeper under `/configs/contacts2`. Any changes you make to the configuration for the contacts collection will not affect the contacts2 collection. Put simply, the default behavior creates a unique copy of the configuration directory for each collection you create.
 
@@ -363,7 +370,7 @@ The `data_driven_schema_configs` schema can mutate as data is indexed. Consequen
 [[SolrControlScriptReference-Delete]]
 === Delete
 
-The delete command detects the mode that Solr is running in (standalone or SolrCloud) and then deletes the specified core (standalone) or collection (SolrCloud) as appropriate.
+The `delete` command detects the mode that Solr is running in (standalone or SolrCloud) and then deletes the specified core (standalone) or collection (SolrCloud) as appropriate.
 
 `bin/solr delete [options]`
 
@@ -376,7 +383,7 @@ If running in SolrCloud mode, the delete command checks if the configuration dir
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-c <name> |Name of the core / collection to delete (required). |`bin/solr delete -c mycoll`
@@ -397,7 +404,7 @@ This option is useful if you are running multiple standalone Solr instances on t
 [[SolrControlScriptReference-ZooKeeperOperations]]
 == ZooKeeper Operations
 
-The bin/solr script allows certain operations affecting ZooKeeper. These operations are for SolrCloud mode only. The operations are available as sub-commands, which each have their own set of options.
+The `bin/solr` script allows certain operations affecting ZooKeeper. These operations are for SolrCloud mode only. The operations are available as sub-commands, which each have their own set of options.
 
 `bin/solr zk [sub-command] [options]`
 
@@ -417,7 +424,7 @@ Use the `zk upconfig` command to upload one of the pre-configured configuration
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-n <name> a|
@@ -448,9 +455,7 @@ An example of this command with these parameters is:
 .Reload Collections When Changing Configurations
 [WARNING]
 ====
-
 This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that uses this configuration set.
-
 ====
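+
+For example, a sketch of reloading a collection via the Collections API after uploading a changed configuration set (the collection name is hypothetical):
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
+----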
 
 [[SolrControlScriptReference-DownloadaConfigurationSet]]
@@ -465,7 +470,7 @@ Use the `zk downconfig` command to download a configuration set from ZooKeeper t
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-n <name> |Name of config set in ZooKeeper to download. The Admin UI Cloud -> Tree -> configs node lists all available configuration sets. |`-n myconfig`
@@ -494,7 +499,7 @@ Use the `zk cp` command for transferring files and directories between ZooKeeper
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-r |Optional. Do a recursive copy. The command will fail if the <src> has children unless '-r' is specified. |`-r`
@@ -504,7 +509,7 @@ Use the `zk cp` command for transferring files and directories between ZooKeeper
 `file:/Users/apache/configs/src`
 
 |<dest> |The file or path to copy to. If prefixed with `zk:` then the source is presumed to be ZooKeeper. If no prefix or the prefix is 'file:' this is the local drive. At least one of <src> or <dest> must be prefixed by `zk:` or the command will fail. If <dest> ends in a slash character it names a directory. |`zk:/configs/myconfigs/solrconfig.xml` `file:/Users/apache/configs/src`
-|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181 `
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
 |===
 
 An example of this command with the parameters is:
@@ -527,7 +532,7 @@ Use the `zk rm` command to remove a znode (and optionally all child nodes) from
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-r |Optional. Do a recursive removal. The command will fail if the <path> has children unless '-r' is specified. |`-r`
@@ -545,7 +550,7 @@ The path is assumed to be a ZooKeeper node, no `zk:` prefix is necessary.
 
 `/configs/myconfigset/solrconfig.xml`
 
-|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181 `
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
 |===
 
 An example of this command with the parameters is:
@@ -564,12 +569,12 @@ Use the `zk mv` command to move (rename) a ZooKeeper znode
 [[SolrControlScriptReference-AvailableParameters.7]]
 ==== Available Parameters
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |<src> |The znode to rename. The `zk:` prefix is assumed. |`/configs/oldconfigset`
 |<dest> |The new name of the znode. The `zk:` prefix is assumed. |`/configs/newconfigset`
-|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181 `
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
 |===
 
 An example of this command is:
@@ -586,12 +591,12 @@ Use the `zk ls` command to see the children of a znode.
 [[SolrControlScriptReference-AvailableParameters.8]]
 ==== Available Parameters
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |-r |Optional. Recursively list all descendants of a znode. |`-r`
 |<path> |The path on ZooKeeper to list. |`/collections/mycollection`
-|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181 `
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
 |===
 
 An example of this command with the parameters is:
@@ -610,15 +615,15 @@ Use the `zk mkroot` command to create a znode. The primary use-case for this com
 [[SolrControlScriptReference-AvailableParameters.9]]
 ==== Available Parameters
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description |Example
 |<path> |The path on ZooKeeper to create. Intermediate znodes will be created if necessary. A leading slash is assumed even if not specified. |`/solr`
-|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181 `
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
 |===
 
 Examples of this command:
 
-`bin/solr zk mkroot /solr -z 123.321.23.43:2181 `
+`bin/solr zk mkroot /solr -z 123.321.23.43:2181`
 
 `bin/solr zk mkroot /solr/production`

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc b/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
index e3cfa77..2e44bab 100644
--- a/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
+++ b/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
@@ -3,7 +3,7 @@
 :page-permalink: solr-cores-and-solr-xml.html
 :page-children: format-of-solr-xml, defining-core-properties, coreadmin-api, config-sets
 
-In Solr, the term _core_ is used to refer to a single index and associated transaction log and configuration files (including the `solrconfig.xml` and Schema files, among others). Your Solr installation can have multiple cores if needed, which allows you to index data with different structures in the same server, and maintain more control over how your data is presented to different audiences. In SolrCloud mode you will be more familiar with the term __collection.__ Behind the scenes a collection consists of one or more cores.
+In Solr, the term _core_ is used to refer to a single index and associated transaction log and configuration files (including the `solrconfig.xml` and Schema files, among others). Your Solr installation can have multiple cores if needed, which allows you to index data with different structures in the same server, and maintain more control over how your data is presented to different audiences. In SolrCloud mode you will be more familiar with the term _collection._ Behind the scenes a collection consists of one or more cores.
 
 Cores can be created using `bin/solr` script or as part of SolrCloud collection creation using the APIs. Core-specific properties (such as the directories to use for the indexes or configuration files, the core name, and other options) are defined in a `core.properties` file. Any `core.properties` file in any directory of your Solr installation (or in a directory under where `solr_home` is defined) will be found by Solr and the defined properties will be used for the core named in the file.
 
@@ -11,14 +11,12 @@ In standalone mode, `solr.xml` must reside in `solr_home`. In SolrCloud mode, `s
 
 [NOTE]
 ====
-
 In older versions of Solr, cores had to be predefined as `<core>` tags in `solr.xml` in order for Solr to know about them. Now, however, Solr supports automatic discovery of cores and they no longer need to be explicitly defined. The recommended way is to dynamically create cores/collections using the APIs.
-
 ====
 
 The following sections describe these options in more detail.
 
-* **<<format-of-solr-xml.adoc#format-of-solr-xml,Format of solr.xml>>**: Details on how to define `solr.xml`, including the acceptable parameters for the `solr.xml` file
-* **<<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>**: Details on placement of `core.properties` and available property options.
-* **<<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>>**: Tools and commands for core administration using a REST API.
-* **<<config-sets.adoc#config-sets,Config Sets>>**: How to use configsets to avoid duplicating effort when defining a new core.
+* *<<format-of-solr-xml.adoc#format-of-solr-xml,Format of solr.xml>>*: Details on how to define `solr.xml`, including the acceptable parameters for the `solr.xml` file
+* *<<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>*: Details on placement of `core.properties` and available property options.
+* *<<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>>*: Tools and commands for core administration using a REST API.
+* *<<config-sets.adoc#config-sets,Config Sets>>*: How to use configsets to avoid duplicating effort when defining a new core.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-field-types.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-field-types.adoc b/solr/solr-ref-guide/src/solr-field-types.adoc
index f8f08a6..b9bf1da 100644
--- a/solr/solr-ref-guide/src/solr-field-types.adoc
+++ b/solr/solr-ref-guide/src/solr-field-types.adoc
@@ -21,8 +21,4 @@ Topics covered in this section:
 
 * <<field-properties-by-use-case.adoc#field-properties-by-use-case,Field Properties by Use Case>>
 
-[[SolrFieldTypes-RelatedTopics]]
-== Related Topics
-
-* http://wiki.apache.org/solr/SchemaXml#Data_Types[SchemaXML-DataTypes]
-* {solr-javadocs}/solr-core/org/apache/solr/schema/FieldType.html[FieldType Javadoc]
+TIP: See also the {solr-javadocs}/solr-core/org/apache/solr/schema/FieldType.html[FieldType Javadoc].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
index 858aae2..e78a936 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
@@ -2,59 +2,52 @@
 :page-shortname: solr-jdbc-apache-zeppelin
 :page-permalink: solr-jdbc-apache-zeppelin.html
 
-[IMPORTANT]
-====
+Solr's JDBC driver supports Apache Zeppelin.
 
-This requires Apache Zeppelin 0.6.0 or greater which contains the JDBC interpreter.
-
-====
+IMPORTANT: This requires Apache Zeppelin 0.6.0 or greater, which contains the JDBC interpreter.
 
-For http://zeppelin.apache.org[Apache Zeppelin], you will need to create a JDBC interpreter for Solr. This will add SolrJ to the interpreter classpath. Once the interpreter has been created, you can create a notebook to issue queries. The http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation] provides additional information about JDBC prefixes and other features.
+To use http://zeppelin.apache.org[Apache Zeppelin] with Solr, you will need to create a JDBC interpreter for Solr. This will add SolrJ to the interpreter classpath. Once the interpreter has been created, you can create a notebook to issue queries. The http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation] provides additional information about JDBC prefixes and other features.
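+
+A hedged sketch of the key interpreter settings (the property names follow the Zeppelin JDBC interpreter conventions; the driver class and connection URL match the examples later on this page):
+
+[source,text]
+----
+default.driver = org.apache.solr.client.solrj.io.sql.DriverImpl
+default.url    = jdbc:solr://localhost:9983?collection=test
+----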
 
 [[SolrJDBC-ApacheZeppelin-CreatetheApacheSolrJDBCInterpreter]]
 == Create the Apache Solr JDBC Interpreter
 
+.Click "Interpreter" in the top navigation
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_1.png[image,height=400]
 
-
+.Click "Create"
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_2.png[image,height=400]
 
-
+.Enter information about your Solr installation
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_3.png[image,height=400]
 
-
 [NOTE]
 ====
-
 For most installations, Apache Zeppelin configures PostgreSQL as the JDBC interpreter default driver. The default driver can either be replaced by the Solr driver as outlined above or you can add a separate JDBC interpreter prefix as outlined in the http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation].
-
 ====
 
 [[SolrJDBC-ApacheZeppelin-CreateaNotebook]]
 == Create a Notebook
 
+.Click Notebook -> Create new note
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_4.png[image,width=517,height=400]
 
-
+.Provide a name and click "Create Note"
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_5.png[image,width=839,height=400]
 
-
 [[SolrJDBC-ApacheZeppelin-QuerywiththeNotebook]]
 == Query with the Notebook
 
 [IMPORTANT]
 ====
-
 For some notebooks, the JDBC interpreter will not be bound to the notebook by default. Instructions on how to bind the JDBC interpreter to a notebook are available https://zeppelin.apache.org/docs/latest/interpreter/jdbc.html#bind-to-notebook[here].
-
 ====
 
+.Results of Solr query
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_6.png[image,width=481,height=400]
 
-
 The below code block assumes that the Apache Solr driver is setup as the default JDBC interpreter driver. If that is not the case, instructions for using a different prefix is available https://zeppelin.apache.org/docs/latest/interpreter/jdbc.html#how-to-use[here].
 
-[source,java]
+[source,sql]
 ----
 %jdbc
 select fielda, fieldb from test limit 10

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
index f0f341c..af70dfd 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
@@ -2,6 +2,8 @@
 :page-shortname: solr-jdbc-dbvisualizer
 :page-permalink: solr-jdbc-dbvisualizer.html
 
+Solr's JDBC driver supports DbVisualizer.
+
 For https://www.dbvis.com/[DbVisualizer], you will need to create a new driver for Solr using the DbVisualizer Driver Manager. This will add several SolrJ client .jars to the DbVisualizer classpath. The files required are:
 
 * all .jars found in `$SOLR_HOME/dist/solrj-lib`
@@ -116,4 +118,3 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png[image,width=57
 
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png[image,width=556,height=400]
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc b/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
index 37ae6ac..bb91425 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
@@ -2,35 +2,29 @@
 :page-shortname: solr-jdbc-python-jython
 :page-permalink: solr-jdbc-python-jython.html
 
-// OLD_CONFLUENCE_ID: SolrJDBC-Python/Jython-Python
+Solr's JDBC driver supports Python and Jython.
 
-[[SolrJDBC-Python_Jython-Python]]
 == Python
 
 Python supports accessing JDBC using the https://pypi.python.org/pypi/JayDeBeApi/[JayDeBeApi] library. The CLASSPATH variable must be configured to contain the solr-solrj jar and the supporting solrj-lib jars.
 
-// OLD_CONFLUENCE_ID: SolrJDBC-Python/Jython-JayDeBeApi
 
-[[SolrJDBC-Python_Jython-JayDeBeApi]]
 === JayDeBeApi
 
-*run.sh*
-
+.run.sh
 [source,bash]
 ----
 #!/usr/bin/env bash
- 
 # Java 8 must already be installed
- 
+
 pip install JayDeBeApi
- 
+
 export CLASSPATH="$(echo $(ls /opt/solr/dist/solr-solrj* /opt/solr/dist/solrj-lib/*) | tr ' ' ':')"
 
 python solr_jaydebeapi.py
 ----
 
-*solr_jaydebeapi.py*
-
+.solr_jaydebeapi.py
 [source,py]
 ----
 #!/usr/bin/env python
@@ -43,103 +37,90 @@ if __name__ == '__main__':
   jdbc_url = "jdbc:solr://localhost:9983?collection=test"
   driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
   statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
- 
+
   conn = jaydebeapi.connect(driverName, jdbc_url)
   curs = conn.cursor()
   curs.execute(statement)
   print(curs.fetchall())
-  
+
   conn.close()
-  
+
   sys.exit(0)
 ----
 
-// OLD_CONFLUENCE_ID: SolrJDBC-Python/Jython-Jython
-
-[[SolrJDBC-Python_Jython-Jython]]
 == Jython
 
 Jython supports accessing JDBC natively with Java interfaces or with the zxJDBC library. The CLASSPATH variable must be configured to contain the solr-solrj jar and the supporting solrj-lib jars.
 
-*run.sh*
-
+.run.sh
 [source,bash]
 ----
 #!/usr/bin/env bash
- 
 # Java 8 and Jython must already be installed
- 
+
 export CLASSPATH="$(echo $(ls /opt/solr/dist/solr-solrj* /opt/solr/dist/solrj-lib/*) | tr ' ' ':')"
- 
+
 jython [solr_java_native.py | solr_zxjdbc.py]
 ----
 
-// OLD_CONFLUENCE_ID: SolrJDBC-Python/Jython-JavaNative
-
-[[SolrJDBC-Python_Jython-JavaNative]]
 === Java Native
 
-*solr_java_native.py*
-
+.solr_java_native.py
 [source,py]
 ----
 #!/usr/bin/env jython
- 
+
 # http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html
 # https://wiki.python.org/jython/DatabaseExamples#SQLite_using_JDBC
- 
+
 import sys
- 
+
 from java.lang import Class
 from java.sql  import DriverManager, SQLException
- 
+
 if __name__ == '__main__':
   jdbc_url = "jdbc:solr://localhost:9983?collection=test"
   driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
   statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
-  
+
   dbConn = DriverManager.getConnection(jdbc_url)
   stmt = dbConn.createStatement()
-  
+
   resultSet = stmt.executeQuery(statement)
   while resultSet.next():
     print(resultSet.getString("fielda"))
-  
+
   resultSet.close()
   stmt.close()
   dbConn.close()
-  
+
   sys.exit(0)
 ----
 
-// OLD_CONFLUENCE_ID: SolrJDBC-Python/Jython-zxJDBC
-
-[[SolrJDBC-Python_Jython-zxJDBC]]
 === zxJDBC
 
-*solr_zxjdbc.py*
-
+.solr_zxjdbc.py
 [source,py]
 ----
 #!/usr/bin/env jython
- 
+
 # http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html
 # https://wiki.python.org/jython/DatabaseExamples#SQLite_using_ziclix
- 
+
 import sys
- 
+
 from com.ziclix.python.sql import zxJDBC
- 
+
 if __name__ == '__main__':
   jdbc_url = "jdbc:solr://localhost:9983?collection=test"
   driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
   statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
-  
+
   with zxJDBC.connect(jdbc_url, None, None, driverName) as conn:
     with conn:
       with conn.cursor() as c:
         c.execute(statement)
         print(c.fetchall())
-  
+
   sys.exit(0)
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-jdbc-r.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-r.adoc b/solr/solr-ref-guide/src/solr-jdbc-r.adoc
index bd38ffe..3dedbcc 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-r.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-r.adoc
@@ -4,31 +4,28 @@
 
 R supports accessing JDBC using the https://www.rforge.net/RJDBC/[RJDBC] library.
 
-[[SolrJDBC-R-RJDBC]]
-=== RJDBC
-
-*run.sh*
+== RJDBC
 
+.run.sh
 [source,bash]
 ----
 #!/usr/bin/env bash
- 
+
 # Java 8 must already be installed and R configured with `R CMD javareconf`
 
 Rscript -e 'install.packages("RJDBC", dep=TRUE)'
 Rscript solr_rjdbc.R
 ----
 
-*solr_rjdbc.R*
-
-[source,java]
+.solr_rjdbc.R
+[source,r]
 ----
 # https://www.rforge.net/RJDBC/
- 
+
 library("RJDBC")
- 
+
 solrCP <- c(list.files('/opt/solr/dist/solrj-lib', full.names=TRUE), list.files('/opt/solr/dist', pattern='solrj', full.names=TRUE, recursive = TRUE))
- 
+
 drv <- JDBC("org.apache.solr.client.solrj.io.sql.DriverImpl",
            solrCP,
            identifier.quote="`")

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc b/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
index 9031312..bac4cbd 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
@@ -9,22 +9,18 @@ For http://squirrel-sql.sourceforge.net[SQuirreL SQL], you will need to create a
 
 Once the driver has been created, you can create a connection to Solr with the connection string format outlined in the generic section and use the editor to issue queries.
 
-[[SolrJDBC-SQuirreLSQL-AddSolrJDBCDriver]]
 == Add Solr JDBC Driver
 
-[[SolrJDBC-SQuirreLSQL-OpenDrivers]]
 === Open Drivers
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_1.png[image,width=900,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-AddDriver]]
 === Add Driver
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_2.png[image,width=892,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-NametheDriver]]
 === Name the Driver
 
 Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_connection_string>/?collection=<collection>`. Do not fill in values for the variables `zk_connection_string` and `collection`; those will be defined later when the connection to Solr is configured.
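+
+For example, a filled-in connection string for a hypothetical local setup (matching the `jdbc:solr://localhost:9983?collection=test` URL used in the scripted JDBC client examples) would look like:
+
+[source,plain]
+----
+jdbc:solr://localhost:9983/?collection=test
+----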
@@ -32,7 +28,6 @@ Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_conn
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_3.png[image,width=467,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-AddSolrJDBCjarstoClasspath]]
 === Add Solr JDBC jars to Classpath
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_4.png[image,width=467,height=400]
@@ -47,7 +42,6 @@ image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_5.png[image,width=469,
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_7.png[image,width=467,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-AddtheSolrJDBCdriverclassname]]
 === Add the Solr JDBC driver class name
 
 After adding the .jars, you will also need to define the Class Name `org.apache.solr.client.solrj.io.sql.DriverImpl`.
@@ -55,39 +49,32 @@ After adding the .jars, you will need to additionally define the Class Name `org
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_11.png[image,width=470,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-CreateanAlias]]
 == Create an Alias
 
 To create a JDBC connection, you must define an alias.
 
-[[SolrJDBC-SQuirreLSQL-OpenAliases]]
 === Open Aliases
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_10.png[image,width=840,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-AddanAlias]]
 === Add an Alias
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_12.png[image,width=959,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-ConfiguretheAlias]]
 === Configure the Alias
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_14.png[image,width=470,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-ConnecttotheAlias]]
 === Connect to the Alias
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_13.png[image,width=522,height=400]
 
 
-[[SolrJDBC-SQuirreLSQL-Querying]]
 == Querying
 
 Once you've successfully connected to Solr, you can use the SQL interface to enter queries and work with data.
 
 image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_15.png[image,width=655,height=400]
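+
+For example, a query like the one used in the scripted JDBC examples above could be entered in the SQL editor (assuming a collection named `test` with these fields):
+
+[source,sql]
+----
+select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10
+----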
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
index 155271c..b528abe 100644
--- a/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
+++ b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
@@ -2,19 +2,21 @@
 :page-shortname: solrcloud-with-legacy-configuration-files
 :page-permalink: solrcloud-with-legacy-configuration-files.html
 
+If you are migrating from a non-SolrCloud environment to SolrCloud, this information may be helpful.
+
 All of the required configuration is already set up in the sample configurations shipped with Solr. You only need to add the following if you are migrating old configuration files. Do not remove these files and parameters from a new Solr instance if you intend to use Solr in SolrCloud mode.
 
 These properties exist in 3 files: `schema.xml`, `solrconfig.xml`, and `solr.xml`.
 
-\1. In `schema.xml`, you must have a `_version_` field defined:
-
+. In `schema.xml`, you must have a `_version_` field defined:
++
 [source,xml]
 ----
 <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
 ----
-
-\2. In `solrconfig.xml`, you must have an `UpdateLog` defined. This should be defined in the `updateHandler` section.
-
++
+. In `solrconfig.xml`, you must have an `UpdateLog` defined. This should be defined in the `updateHandler` section.
++
 [source,xml]
 ----
 <updateHandler>
@@ -25,9 +27,9 @@ These properties exist in 3 files: `schema.xml`, `solrconfig.xml`, and `solr.xml
   ...
 </updateHandler>
 ----
-
-\3. The http://wiki.apache.org/solr/UpdateRequestProcessor#Distributed_Updates[DistributedUpdateProcessor] is part of the default update chain and is automatically injected into any of your custom update chains, so you don't actually need to make any changes for this capability. However, should you wish to add it explicitly, you can still add it to the `solrconfig.xml` file as part of an `updateRequestProcessorChain`. For example:
-
++
+. The http://wiki.apache.org/solr/UpdateRequestProcessor#Distributed_Updates[DistributedUpdateProcessor] is part of the default update chain and is automatically injected into any of your custom update chains, so you don't actually need to make any changes for this capability. However, should you wish to add it explicitly, you can still add it to the `solrconfig.xml` file as part of an `updateRequestProcessorChain`. For example:
++
 [source,xml]
 ----
 <updateRequestProcessorChain name="sample">
@@ -37,17 +39,17 @@ These properties exist in 3 files: `schema.xml`, `solrconfig.xml`, and `solr.xml
   <processor class="solr.RunUpdateProcessorFactory" />
 </updateRequestProcessorChain>
 ----
-
++
 If you do not want the DistributedUpdateProcessorFactory auto-injected into your chain (for example, if you want to use SolrCloud functionality, but you want to distribute updates yourself) then specify the `NoOpDistributingUpdateProcessorFactory` update processor factory in your chain:
-
++
 [source,xml]
 ----
 <updateRequestProcessorChain name="sample">
   <processor class="solr.LogUpdateProcessorFactory" />
-  <processor class="solr.NoOpDistributingUpdateProcessorFactory"/>  
+  <processor class="solr.NoOpDistributingUpdateProcessorFactory"/>
   <processor class="my.package.MyDistributedUpdateFactory"/>
   <processor class="solr.RunUpdateProcessorFactory" />
 </updateRequestProcessorChain>
 ----
-
++
 In the update process, Solr skips updating processors that have already been run on other nodes.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud.adoc b/solr/solr-ref-guide/src/solrcloud.adoc
index b112597..644b143 100644
--- a/solr/solr-ref-guide/src/solrcloud.adoc
+++ b/solr/solr-ref-guide/src/solrcloud.adoc
@@ -3,7 +3,7 @@
 :page-permalink: solrcloud.html
 :page-children: getting-started-with-solrcloud, how-solrcloud-works, solrcloud-configuration-and-parameters, rule-based-replica-placement, cross-data-center-replication-cdcr-
 
-Apache Solr includes the ability to set up a cluster of Solr servers that combines fault tolerance and high availability. Called **SolrCloud**, these capabilities provide distributed indexing and search capabilities, supporting the following features:
+Apache Solr includes the ability to set up a cluster of Solr servers that combines fault tolerance and high availability. Called *SolrCloud*, these capabilities provide distributed indexing and search capabilities, supporting the following features:
 
 * Central configuration for the entire cluster
 * Automatic load balancing and fail-over for queries

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/d05e3a40/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index 2d37c6a..c6da376 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -2,7 +2,9 @@
 :page-shortname: spatial-search
 :page-permalink: spatial-search.html
 
-Solr supports location data for use in spatial/geospatial searches. Using spatial search, you can:
+Solr supports location data for use in spatial/geospatial searches.
+
+Using spatial search, you can:
 
 * Index points or other shapes
 * Filter search results by a bounding box or circle or by other shapes
@@ -12,24 +14,25 @@ Solr supports location data for use in spatial/geospatial searches. Using spatia
 There are four main field types available for spatial search:
 
 * `LatLonPointSpatialField`
-* `LatLonType` (now deprecated) and its non-geodetic twin PointType
+* `LatLonType` (now deprecated) and its non-geodetic twin `PointType`
 * `SpatialRecursivePrefixTreeFieldType` (RPT for short), including `RptWithGeometrySpatialField`, a derivative
 * `BBoxField`
 
-LatLonPointSpatialField is the ideal field type for the most common use-cases for lat-lon point data. It replaces LatLonType which still exists for backwards compatibility. RPT offers some more features for more advanced/custom use cases / options like polygons and heatmaps.
+`LatLonPointSpatialField` is the ideal field type for the most common use-cases for lat-lon point data. It replaces `LatLonType`, which still exists for backwards compatibility. RPT offers more features for advanced or custom use cases, such as polygons and heatmaps.
 
-RptWithGeometrySpatialField is for indexing and searching non-point data though it can do points too. It can't do sorting/boosting.
+`RptWithGeometrySpatialField` is for indexing and searching non-point data, though it can do points too. It can't do sorting/boosting.
 
-BBoxField is for indexing bounding boxes, querying by a box, specifying a search predicate (Intersects,Within,Contains,Disjoint,Equals), and a relevancy sort/boost like overlapRatio or simply the area.
+`BBoxField` is for indexing bounding boxes, querying by a box, specifying a search predicate (Intersects, Within, Contains, Disjoint, Equals), and a relevancy sort/boost like `overlapRatio` or simply the area.
 
 Some esoteric details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
 
 [[SpatialSearch-LatLonPointSpatialField]]
 == LatLonPointSpatialField
 
-Here's how LatLonPointSpatialField should usually be configured in the schema:
+Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the schema:
 
-`<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>`
+[source,xml]
+----
+<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>
+----
 
 LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLPSF internally uses a 2-dimensional Lucene "Points" (BKD tree) index when "indexed" is enabled (the default). When "docValues" is enabled, a latitude and longitude pair is bit-interleaved into 64 bits and put into Lucene DocValues. The accuracy of the docValues data is about a centimeter.
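+
+To complete the setup, define a field that uses the type (a minimal sketch; the field name and attribute values are illustrative, toggling the attributes listed above as needed):
+
+[source,xml]
+----
+<!-- hypothetical field name; attributes are illustrative -->
+<field name="location" type="location" indexed="true" stored="true"/>
+----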
 
@@ -38,7 +41,7 @@ LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLP
 
 For indexing geodetic points (latitude and longitude), supply the value in "lat,lon" order (comma-separated).
 
-For indexing non-geodetic points, it depends. Use "x y" (a space) if RPT. For PointType however, use "x,y" (a comma).
+For indexing non-geodetic points, the format depends on the field type: use `x y` (a space) for RPT, but `x,y` (a comma) for PointType.
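+
+A quick sketch of the accepted value formats, with hypothetical coordinates:
+
+[source,plain]
+----
+45.15,-93.85    lat,lon - geodetic point
+10 20           x y     - non-geodetic point, RPT
+10,20           x,y     - non-geodetic point, PointType
+----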
 
 If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. However, it's much bulkier than the raw coordinates for such simple data. (Neither is supported by the deprecated LatLonType or PointType.)
 
@@ -49,7 +52,7 @@ There are two spatial Solr "query parsers" for geospatial search: `geofilt` and
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |d |the radial distance, usually in kilometers. (RPT & BBoxField can set other units via the setting `distanceUnits`)
@@ -67,9 +70,7 @@ There are two spatial Solr "query parsers" for geospatial search: `geofilt` and
 
 [WARNING]
 ====
-
 Don't use this for indexed non-point shapes (e.g. polygons); the results will be erroneous. With RPT, it's only recommended for multi-valued point data, as the implementation doesn't scale very well; for single-valued fields, you should instead use a separate non-RPT field purely for distance sorting.
-
 ====
 
 When used with `BBoxField`, additional options are supported:
@@ -92,29 +93,41 @@ image::images/spatial-search/circle.png[image]
 [[SpatialSearch-bbox]]
 === `bbox`
 
-The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as geofilt. Here's a sample query: `&q=*:*&fq={!bbox sfield=store}&pt=45.15,-93.85&d=5`. The rectangular shape is faster to compute and so it's sometimes used as an alternative to geofilt when it's acceptable to return points outside of the radius. However, if the ideal goal is a circle but you want it to run faster, then instead consider using the RPT field and try a large "distErrPct" value like `0.1` (10% radius). This will return results outside the radius but it will do so somewhat uniformly around the shape.
+The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as `geofilt`.
+
+Here's a sample query:
+
+`&q=*:*&fq={!bbox sfield=store}&pt=45.15,-93.85&d=5`
+
+The rectangular shape is faster to compute and so it's sometimes used as an alternative to `geofilt` when it's acceptable to return points outside of the radius. However, if the ideal goal is a circle but you want it to run faster, then instead consider using the RPT field and try a large `distErrPct` value like `0.1` (10% radius). This will return results outside the radius but it will do so somewhat uniformly around the shape.
 
 image::images/spatial-search/bbox.png[image]
 
 
 [IMPORTANT]
 ====
-
-When a bounding box includes a pole, the bounding box ends up being a "bounding bowl" (a __spherical cap__) that includes all values north of the lowest latitude of the circle if it touches the north pole (or south of the highest latitude if it touches the south pole).
-
+When a bounding box includes a pole, the bounding box ends up being a "bounding bowl" (a _spherical cap_) that includes all values north of the lowest latitude of the circle if it touches the north pole (or south of the highest latitude if it touches the south pole).
 ====
 
 [[SpatialSearch-Filteringbyanarbitraryrectangle]]
-=== Filtering by an arbitrary rectangle
+=== Filtering by an Arbitrary Rectangle
+
+Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, `geofilt` and `bbox` won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range.
 
-Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range. Here's an example: `&q=*:*&fq=store:[45,-94 TO 46,-93]`. LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are non-geospatial coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
+Here's an example:
+
+`&q=*:*&fq=store:[45,-94 TO 46,-93]`
+
+LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are using non-geodetic coordinates (`geo="false"`), then you must quote the points due to the space, e.g. `"x y"`.
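+
+For instance, the same rectangle filter written against a hypothetical non-geodetic RPT field named `geo` would quote each corner point:
+
+[source,plain]
+----
+&q=*:*&fq=geo:["45 -94" TO "46 -93"]
+----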
 
 // OLD_CONFLUENCE_ID: SpatialSearch-Optimizing:CacheorNot
 
 [[SpatialSearch-Optimizing_CacheorNot]]
 === Optimizing: Cache or Not
 
-It's most common to put a spatial query into an "fq" parameter – a filter query. By default, Solr will cache the query in the filter cache. If you know the filter query (be it spatial or not) is fairly unique and not likely to get a cache hit then specify `cache="false"` as a local-param as seen in the following example. The only spatial types which stand to benefit from this technique are LatLonPointSpatialField and LatLonType (deprecated). Enable docValues on the field (if it isn't already). LatLonType (deprecated) additionally requires a `cost="100"` (or more) local-param.
+It's most common to put a spatial query into an `fq` parameter – a filter query. By default, Solr will cache the query in the filter cache.
+
+If you know the filter query (be it spatial or not) is fairly unique and not likely to get a cache hit, then specify `cache="false"` as a local-param, as seen in the following example. The only spatial types which stand to benefit from this technique are `LatLonPointSpatialField` and LatLonType (deprecated). Enable docValues on the field (if it isn't enabled already). LatLonType (deprecated) additionally requires a `cost="100"` (or more) local-param.
 
 `&q=...mykeywords...&fq=...someotherfilters...&fq={!geofilt cache=false}&sfield=store&pt=45.15,-93.85&d=5`
 
@@ -125,7 +138,14 @@ LLPSF does not support Solr's "PostFilter".
 [[SpatialSearch-DistanceSortingorBoosting_FunctionQueries_]]
 == Distance Sorting or Boosting (Function Queries)
 
-There are four distance function queries: `geodist`, see below, usually the most appropriate; http://wiki.apache.org/solr/FunctionQuery#dist[`dist`], to calculate the p-norm distance between multi-dimensional vectors; http://wiki.apache.org/solr/FunctionQuery#hsin.2C_ghhsin_-_Haversine_Formula[`hsin`], to calculate the distance between two points on a sphere; and https://wiki.apache.org/solr/FunctionQuery#sqedist_-_Squared_Euclidean_Distance[`sqedist`], to calculate the squared Euclidean distance between two points. For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
+There are four distance function queries:
+
+* `geodist`, see below, usually the most appropriate;
+* http://wiki.apache.org/solr/FunctionQuery#dist[`dist`], to calculate the p-norm distance between multi-dimensional vectors;
+* http://wiki.apache.org/solr/FunctionQuery#hsin.2C_ghhsin_-_Haversine_Formula[`hsin`], to calculate the distance between two points on a sphere;
+* https://wiki.apache.org/solr/FunctionQuery#sqedist_-_Squared_Euclidean_Distance[`sqedist`], to calculate the squared Euclidean distance between two points.
+
+For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
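+
+As a quick illustration (a sketch; `geodist` is covered in detail just below, and the `store` field and point are reused from the earlier filter examples), results can be sorted by distance like this:
+
+[source,plain]
+----
+&q=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=5&sort=geodist() asc
+----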
 
 [[SpatialSearch-geodist]]
 === `geodist`
@@ -169,16 +189,16 @@ Using the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> or <<t
 
 RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) or an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonPointSpatialField:
 
-* Non-geodetic – geo=false general x & y (__not__ latitude and longitude)
+* Non-geodetic – `geo="false"`, general x & y (_not_ latitude and longitude)
 * Query by polygons and other complex shapes, in addition to circles & rectangles
 * Ability to index non-point shapes (e.g. polygons) as well as points – see RptWithGeometrySpatialField
 * Heatmap grid faceting
 
-RPT _shares_ various features in common with LatLonPointSpatialField. Some are listed here:
+RPT _shares_ various features with `LatLonPointSpatialField`. Some are listed here:
 
 * Latitude/Longitude indexed point data; possibly multi-valued
-* Fast filtering with geofilt, bbox filters, and range query syntax (dateline crossing is supported)
-* Sort/boost via geodist
+* Fast filtering with `geofilt`, `bbox` filters, and range query syntax (dateline crossing is supported)
+* Sort/boost via `geodist`
 * Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too; see the indexing sketch just below this list. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
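+
+For instance, a polygon could be supplied as WKT in the document field at index time (a sketch, assuming a hypothetical RPT with Geometry field named `geo` and the JTS setup described below):
+
+[source,xml]
+----
+<!-- hypothetical field "geo"; polygon reused from the query example below -->
+<field name="geo">POLYGON((-10 30, -40 40, -10 -20, 40 20, 0 0, -10 30))</field>
+----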
 
 [[SpatialSearch-Schemaconfiguration]]
@@ -188,7 +208,7 @@ To use RPT, the field type must be registered and configured in `schema.xml`. Th
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Setting |Description
 |name |The name of the field type.
@@ -211,12 +231,16 @@ This is used to specify the units for distance measurements used throughout the
 |maxLevels |Sets the maximum grid depth for indexed data. It's usually more intuitive to compute an appropriate `maxLevels` by specifying `maxDistErr` instead.
 |===
 
-*_And there are others:_* `normWrapLongitude` _,_ `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about spatialContextFactory implementations referenced above, especially the link to the JTS based one.
+*_And there are others:_* `normWrapLongitude`, `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about `spatialContextFactory` implementations referenced above, especially the link to the JTS-based one.
 
 [[SpatialSearch-JTSandPolygons]]
 === JTS and Polygons
 
-As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon. All other shapes, including even line-strings, are supported without JTS. JTS stands for http://sourceforge.net/projects/jts-topo-suite/[JTS Topology Suite], which does not come with Solr due to its LGPL license. You must download it (a JAR file) and put that in a special location internal to Solr: `SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/`. You can readily download it here: https://repo1.maven.org/maven2/com/vividsolutions/jts-core/. It will not work if placed in other more typical Solr lib directories, unfortunately. When activated, there are additional configuration attributes available; see https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html[org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory] for the Javadocs, and remember to look at the superclass's options in as well. One option 
 in particular you should most likely enable is `autoIndex` (i.e., use JTS's PreparedGeometry) as it's been shown to be a major performance boost for non-trivial polygons.
+As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon.
+
+All other shapes, including even line-strings, are supported without JTS. JTS stands for http://sourceforge.net/projects/jts-topo-suite/[JTS Topology Suite], which does not come with Solr due to its LGPL license. You must download it (a JAR file) and put that in a special location internal to Solr: `SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/`. You can readily download it here: https://repo1.maven.org/maven2/com/vividsolutions/jts-core/. It will not work if placed in other more typical Solr lib directories, unfortunately.
+
+When activated, there are additional configuration attributes available; see https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html[org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory] for the Javadocs, and remember to look at the superclass's options as well. One option in particular you should most likely enable is `autoIndex` (i.e., use JTS's PreparedGeometry), as it's been shown to be a major performance boost for non-trivial polygons.
 
 [source,xml]
 ----
@@ -233,13 +257,12 @@ Once the field type has been defined, define a field that uses it.
 
 Here's an example polygon query for a field "geo" that can be either solr.SpatialRecursivePrefixTreeFieldType or RptWithGeometrySpatialField:
 
-....
+[source,plain]
+----
 &q=*:*&fq={!field f=geo}Intersects(POLYGON((-10 30, -40 40, -10 -20, 40 20, 0 0, -10 30)))
-....
+----
 
 Inside the parentheses following the search predicate is the shape definition. The format of that shape is governed by the `format` attribute on the field type, defaulting to WKT. If you prefer GeoJSON, you can specify that instead.
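+
+For example, to accept GeoJSON instead, the attribute can be added to the field type definition (a sketch; the name and class are illustrative, and `format` is the attribute described above):
+
+[source,xml]
+----
+<!-- a sketch; only the format attribute differs from a WKT-based definition -->
+<fieldType name="geo" class="solr.RptWithGeometrySpatialField" format="GeoJSON"/>
+----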
 
-*Beyond this reference guide and Spatila4j's docs, there are some details that remain at the Solr Wiki at* http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
+Beyond this Reference Guide and Spatial4j's docs, there are some details that remain at the Solr Wiki at http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4.
 
 [[SpatialSearch-RptWithGeometrySpatialField]]
 === RptWithGeometrySpatialField
@@ -267,7 +290,7 @@ The RPT field supports generating a 2D grid of facet counts for documents having
 
 The heatmap feature is accessed from Solr's faceting feature. As a part of faceting, it supports the `key` local parameter as well as excluding tagged filter queries, just like other types of faceting do. This allows multiple heatmaps to be returned on the same field with different filters.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |facet |Set to `true` to enable faceting
@@ -279,17 +302,16 @@ The heatmap feature is accessed from Solr's faceting feature. As a part of facet
 |facet.heatmap.format |The format, either `ints2D` (default) or `png`.
 |===
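+
+Putting a few of these together, a minimal heatmap request over a hypothetical RPT field named `geo` might look like this:
+
+[source,plain]
+----
+&q=*:*&facet=true&facet.heatmap=geo&facet.heatmap.format=ints2D
+----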
 
-.Tip
-[NOTE]
+[TIP]
 ====
 
 Experiment with different `distErrPct` values (probably 0.10 - 0.20) with various input geometries until the default size is what you're looking for. The specific details of how it's computed aren't important. For high-detail grids used in point-plotting (loosely one cell per pixel), set `distErr` to be the number of decimal-degrees of several pixels or so of the map being displayed. Also, you probably don't want to use a geohash-based grid because the cell orientation between grid levels flip-flops between being square and rectangle. Quad is consistent and has more levels, albeit at the expense of a larger index.
 
 ====
 
-Here's some sample output in JSON (with some ..... inserted for brevity):
+Here's some sample output in JSON (with "..." inserted for brevity):
 
-[source,java]
+[source,plain]
 ----
 {gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,
 counts_ints2D=[[0, 0, 2, 1, ....],[1, 1, 3, 2, ...],...]}
@@ -322,14 +344,14 @@ To index a box, add a field value to a bbox field that's a string in the WKT/CQL
 
 To search, you can use the `{!bbox}` query parser, or the range syntax e.g. `[10,-10 TO 15,20]`, or the ENVELOPE syntax wrapped in parentheses with a leading search predicate. The latter is the only way to choose a predicate other than Intersects. For example:
 
-....
+[source,plain]
+----
 &q={!field f=bbox}Contains(ENVELOPE(-10, 20, 15, 10))
-....
+----
+
 
 Now to sort the results by one of the relevancy modes, use it like this:
 
-....
+[source,plain]
+----
 &q={!field f=bbox score=overlapRatio}Intersects(ENVELOPE(-10, 20, 15, 10))
-....
+----
+
 
 The `score` local parameter can be one of `overlapRatio`, `area`, and `area2D`. `area` scores by the document area using surface-of-a-sphere (assuming `geo=true`) math, while `area2D` uses simple width * height. `overlapRatio` computes a [0-1] ranged score based on how much overlap exists relative to the document's area and the query area. The javadocs of {lucene-javadocs}/spatial-extras/org/apache/lucene/spatial/bbox/BBoxOverlapRatioValueSource.html[BBoxOverlapRatioValueSource] have more info on the formula. There is an additional parameter `queryTargetProportion` that allows you to weight the query side of the formula to the index (target) side of the formula. You can also use `&debug=results` to see useful score computation info.