Posted to commits@lucene.apache.org by is...@apache.org on 2017/07/29 21:59:43 UTC

[06/28] lucene-solr:jira/solr-6630: Merging master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/schemaless-mode.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schemaless-mode.adoc b/solr/solr-ref-guide/src/schemaless-mode.adoc
index 30e7d51..825c294 100644
--- a/solr/solr-ref-guide/src/schemaless-mode.adoc
+++ b/solr/solr-ref-guide/src/schemaless-mode.adoc
@@ -26,7 +26,6 @@ These Solr features, all controlled via `solrconfig.xml`, are:
 . Field value class guessing: Previously unseen fields are run through a cascading set of value-based parsers, which guess the Java class of field values - parsers for Boolean, Integer, Long, Float, Double, and Date are currently available.
 . Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
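The cascading value-based guessing described above can be sketched roughly as follows. This is an illustrative stand-in, not Solr's actual parser chain (the real chain is a set of update processor factories, listed later in this page, that also handle Integer/Float distinctions and Date parsing):

```python
def guess_type(value: str) -> str:
    """Cascade through parsers, most specific first (illustrative only).

    Solr's real chain distinguishes Boolean, Integer, Long, Float, Double,
    and Date; dates are omitted here to keep the sketch short.
    """
    if value.lower() in ("true", "false"):
        return "boolean"
    try:
        int(value)
        return "long"
    except ValueError:
        pass
    try:
        float(value)
        return "double"
    except ValueError:
        pass
    return "string"
```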
 
-[[SchemalessMode-UsingtheSchemalessExample]]
 == Using the Schemaless Example
 
 The three features of schemaless mode are pre-configured in the `_default` <<config-sets.adoc#config-sets,config set>> in the Solr distribution. To start an example instance of Solr using these configs, run the following command:
@@ -67,12 +66,10 @@ You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to co
       "uniqueKey":true}]}
 ----
 
-[[SchemalessMode-ConfiguringSchemalessMode]]
 == Configuring Schemaless Mode
 
 As described above, there are three configuration elements that need to be in place to use Solr in schemaless mode. In the `_default` config set included with Solr these are already configured. If, however, you would like to implement schemaless mode on your own, you should make the following changes.
 
-[[SchemalessMode-EnableManagedSchema]]
 === Enable Managed Schema
 
 As described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, Managed Schema support is enabled by default, unless your configuration specifies that `ClassicIndexSchemaFactory` should be used.
@@ -87,7 +84,6 @@ You can configure the `ManagedIndexSchemaFactory` (and control the resource file
 </schemaFactory>
 ----
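For reference, a typical `ManagedIndexSchemaFactory` declaration (the element whose closing tag appears in the hunk above) looks like the following sketch; the resource name shown is Solr's default:

```xml
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
```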
 
-[[SchemalessMode-DefineanUpdateRequestProcessorChain]]
 === Define an UpdateRequestProcessorChain
 
 The UpdateRequestProcessorChain allows Solr to guess field types, and you can define the default field type classes to use. To start, you should define it as follows (see the javadoc links below for update processor factory documentation):
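A hedged, abbreviated sketch of such a chain is shown below; the full chain shipped in `_default` also includes UUID, remove-blank, and field-name-mutating processors, and maps more value classes:

```xml
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema" default="true">
  <!-- Cascading value parsers, most specific first -->
  <processor class="solr.ParseBooleanFieldUpdateProcessorFactory"/>
  <processor class="solr.ParseLongFieldUpdateProcessorFactory"/>
  <processor class="solr.ParseDoubleFieldUpdateProcessorFactory"/>
  <processor class="solr.ParseDateFieldUpdateProcessorFactory"/>
  <!-- Adds unknown fields to the schema based on the guessed value class -->
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
    <str name="defaultFieldType">text_general</str>
    <lst name="typeMapping">
      <str name="valueClass">java.lang.Long</str>
      <str name="fieldType">plongs</str>
    </lst>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```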
@@ -174,7 +170,6 @@ Javadocs for update processor factories mentioned above:
 * {solr-javadocs}/solr-core/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.html[ParseDateFieldUpdateProcessorFactory]
 * {solr-javadocs}/solr-core/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactory.html[AddSchemaFieldsUpdateProcessorFactory]
 
-[[SchemalessMode-MaketheUpdateRequestProcessorChaintheDefaultfortheUpdateRequestHandler]]
 === Make the UpdateRequestProcessorChain the Default for the UpdateRequestHandler
 
 Once the UpdateRequestProcessorChain has been defined, you must instruct your UpdateRequestHandlers to use it when working with index updates (i.e., adding, removing, replacing documents). There are two ways to do this. The update chain shown above has a `default=true` attribute which will use it for any update handler. An alternative, more explicit way is to use <<initparams-in-solrconfig.adoc#initparams-in-solrconfig,InitParams>> to set the defaults on all `/update` request handlers:
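The InitParams approach can be sketched as follows; the chain name must match the `updateRequestProcessorChain` you defined:

```xml
<initParams path="/update/**">
  <lst name="defaults">
    <!-- name of the update chain to apply to all /update handlers -->
    <str name="update.chain">add-unknown-fields-to-the-schema</str>
  </lst>
</initParams>
```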
@@ -193,7 +188,6 @@ Once the UpdateRequestProcessorChain has been defined, you must instruct your Up
 After each of these changes has been made, Solr should be restarted (or you can reload the cores to load the new `solrconfig.xml` definitions).
 ====
 
-[[SchemalessMode-ExamplesofIndexedDocuments]]
 == Examples of Indexed Documents
 
 Once schemaless mode has been enabled (whether you configured it manually or are using `_default`), documents that include fields not already defined in your schema will be indexed using guessed field types, which are automatically added to the schema.
@@ -243,13 +237,14 @@ The fields now in the schema (output from `curl \http://localhost:8983/solr/gett
       "name":"Sold",
       "type":"plongs"},
     {
-      "name":"_root_" ...}
+      "name":"_root_", ...},
     {
-      "name":"_text_" ...}
+      "name":"_text_", ...},
     {
-      "name":"_version_" ...}
+      "name":"_version_", ...},
     {
-      "name":"id" ...}
+      "name":"id", ...}
+]}
 ----
 
 In addition, string versions of the text fields are indexed, using copyFields to a `*_str` dynamic field: (output from `curl \http://localhost:8983/solr/gettingstarted/schema/copyfields` ):
@@ -277,7 +272,7 @@ Even if you want to use schemaless mode for most fields, you can still use the <
 
 Internally, the Schema API and the Schemaless Update Processors both use the same <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Managed Schema>> functionality.
 
-Also, if you do not need the `*_str` version of a text field, you can simply remove the `copyField` definition from the auto-generated schema and it will not be re-added since the original field is now defined. 
+Also, if you do not need the `*_str` version of a text field, you can simply remove the `copyField` definition from the auto-generated schema and it will not be re-added since the original field is now defined.
 ====
 
 Once a field has been added to the schema, its field type is fixed. As a consequence, adding documents with field value(s) that conflict with the previously guessed field type will fail. For example, after adding the above document, the "```Sold```" field has the fieldType `plongs`, but the document below has a non-integral decimal value in this field:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/segments-info.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/segments-info.adoc b/solr/solr-ref-guide/src/segments-info.adoc
index c5a4395..b0d72fe 100644
--- a/solr/solr-ref-guide/src/segments-info.adoc
+++ b/solr/solr-ref-guide/src/segments-info.adoc
@@ -22,4 +22,4 @@ The Segments Info screen lets you see a visualization of the various segments in
 
 image::images/segments-info/segments_info.png[image,width=486,height=250]
 
-This information may be useful for people to help make decisions about the optimal <<indexconfig-in-solrconfig.adoc#IndexConfiginSolrConfig-MergingIndexSegments,merge settings>> for their data.
+This information may be useful for people to help make decisions about the optimal <<indexconfig-in-solrconfig.adoc#merging-index-segments,merge settings>> for their data.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index ab54836..d82ac29 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -40,7 +40,6 @@ For example, if you only have two ZooKeeper nodes and one goes down, 50% of avai
 
 More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup.
 
-[[SettingUpanExternalZooKeeperEnsemble-DownloadApacheZooKeeper]]
 == Download Apache ZooKeeper
 
 The first step in setting up Apache ZooKeeper is, of course, to download the software. It's available from http://zookeeper.apache.org/releases.html.
@@ -52,15 +51,12 @@ When using stand-alone ZooKeeper, you need to take care to keep your version of
 Solr currently uses Apache ZooKeeper v3.4.10.
 ====
 
-[[SettingUpanExternalZooKeeperEnsemble-SettingUpaSingleZooKeeper]]
 == Setting Up a Single ZooKeeper
 
-[[SettingUpanExternalZooKeeperEnsemble-Createtheinstance]]
-=== Create the instance
+=== Create the Instance
 Creating the instance is a simple matter of extracting the files into a specific target directory. The directory itself doesn't matter, as long as you know where it is and where you'd like ZooKeeper to store its internal data.
 
-[[SettingUpanExternalZooKeeperEnsemble-Configuretheinstance]]
-=== Configure the instance
+=== Configure the Instance
 The next step is to configure your ZooKeeper instance. To do that, create the following file: `<ZOOKEEPER_HOME>/conf/zoo.cfg`. To this file, add the following information:
 
 [source,bash]
@@ -80,15 +76,13 @@ The parameters are as follows:
 
 Once this file is in place, you're ready to start the ZooKeeper instance.
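A minimal `zoo.cfg` for a single instance, as a sketch (the `dataDir` path is a placeholder you should change):

```properties
# Length of a single tick, in milliseconds
tickTime=2000
# Where ZooKeeper stores its in-memory database snapshots
dataDir=/var/lib/zookeeper
# Port on which clients (such as Solr) connect
clientPort=2181
```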
 
-[[SettingUpanExternalZooKeeperEnsemble-Runtheinstance]]
-=== Run the instance
+=== Run the Instance
 
 To run the instance, you can simply use the `ZOOKEEPER_HOME/bin/zkServer.sh` script provided, as with this command: `zkServer.sh start`
 
 Again, ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of this tutorial. For more information, see the ZooKeeper http://zookeeper.apache.org/doc/r3.4.5/zookeeperStarted.html[Getting Started] page. For this example, however, the defaults are fine.
 
-[[SettingUpanExternalZooKeeperEnsemble-PointSolrattheinstance]]
-=== Point Solr at the instance
+=== Point Solr at the Instance
 
 Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the `-z` parameter when using the `bin/solr` script. For example, in order to point the Solr instance to the ZooKeeper you've started on port 2181, this is what you'd need to do:
 
@@ -108,12 +102,10 @@ bin/solr start -cloud -s <path to solr home for new node> -p 8987 -z localhost:2
 
 NOTE: When you are not using an example to start Solr, make sure you upload the configuration set to ZooKeeper before creating the collection.
 
-[[SettingUpanExternalZooKeeperEnsemble-ShutdownZooKeeper]]
-=== Shut down ZooKeeper
+=== Shut Down ZooKeeper
 
 To shut down ZooKeeper, use the zkServer script with the "stop" command: `zkServer.sh stop`.
 
-[[SettingUpanExternalZooKeeperEnsemble-SettingupaZooKeeperEnsemble]]
 == Setting up a ZooKeeper Ensemble
 
 With an external ZooKeeper ensemble, you need to set things up just a little more carefully as compared to the Getting Started example.
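As a sketch, the `zoo.cfg` for server 1 of a three-node ensemble might look like this (hosts, ports, and paths are placeholders); each server's `dataDir` must also contain a `myid` file holding that server's number:

```properties
tickTime=2000
dataDir=/var/lib/zookeeper/1
clientPort=2181
# Ticks allowed for followers to connect and sync to the leader
initLimit=5
# Ticks allowed for followers to sync with ZooKeeper
syncLimit=2
# server.N=host:quorum-port:election-port
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
```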
@@ -188,8 +180,7 @@ Once these servers are running, you can reference them from Solr just as you did
 bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt
 ----
 
-[[SettingUpanExternalZooKeeperEnsemble-SecuringtheZooKeeperconnection]]
-== Securing the ZooKeeper connection
+== Securing the ZooKeeper Connection
 
 You may also want to secure the communication between ZooKeeper and Solr.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
index d2dbcf7..3d0a87d 100644
--- a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -36,10 +36,9 @@ If a leader goes down, one of the other replicas is automatically elected as the
 
 When a document is sent to a Solr node for indexing, the system first determines which Shard that document belongs to, and then which node is currently hosting the leader for that shard. The document is then forwarded to the current leader for indexing, and the leader forwards the update to all of the other replicas.
 
-[[ShardsandIndexingDatainSolrCloud-DocumentRouting]]
 == Document Routing
 
-Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#CollectionsAPI-create,creating your collection>>.
+Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#create,creating your collection>>.
 
 If you use the (default) "```compositeId```" router, you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently. For example, if you wanted to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
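Extracting the routing prefix from a compositeId can be sketched like this; it is illustrative only, since Solr itself goes on to hash the prefix (with MurmurHash3) to pick the shard's hash range:

```python
def routing_prefix(doc_id):
    """Return the compositeId routing prefix of a document id, or None.

    "IBM!12345" -> "IBM"; a plain id like "12345" has no prefix.
    """
    prefix, sep, _rest = doc_id.partition("!")
    return prefix if sep else None
```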
 
@@ -55,16 +54,14 @@ If you do not want to influence how documents are stored, you don't need to spec
 
 If you created the collection and defined the "implicit" router at the time of creation, you can additionally define a `router.field` parameter to use a field from each document to identify a shard where the document belongs. If the field specified is missing in the document, however, the document will be rejected. You could also use the `\_route_` parameter to name a specific shard.
 
-[[ShardsandIndexingDatainSolrCloud-ShardSplitting]]
 == Shard Splitting
 
 When you create a collection in SolrCloud, you decide on the initial number of shards to be used. But it can be difficult to know in advance the number of shards that you need, particularly when organizational requirements can change at a moment's notice, and the cost of finding out later that you chose wrong can be high, involving creating new cores and re-indexing all of your data.
 
 The ability to split shards is in the Collections API. It currently allows splitting a shard into two pieces. The existing shard is left as-is, so the split action effectively makes two copies of the data as new shards. You can delete the old shard at a later time when you're ready.
 
-More details on how to use shard splitting is in the section on the Collection API's <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD command>>.
+More details on how to use shard splitting are in the section on the Collection API's <<collections-api.adoc#splitshard,SPLITSHARD command>>.
 
-[[ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud]]
 == Ignoring Commits from Client Applications in SolrCloud
 
 In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
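A `solrconfig.xml` sketch of that recommendation; the interval values are illustrative and should be tuned for your cluster:

```xml
<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit every 60s; flushes to stable storage -->
  <openSearcher>false</openSearcher>  <!-- do not open a new searcher on hard commit -->
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime>             <!-- soft commit every 5s; makes recent updates visible -->
</autoSoftCommit>
```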

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
index 45a9e80..5888671 100644
--- a/solr/solr-ref-guide/src/solr-control-script-reference.adoc
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -83,7 +83,7 @@ The available options are:
 * dih
 * schemaless
 +
-See the section <<SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> below for more details on the example configurations.
+See the section <<Running with Example Configurations>> below for more details on the example configurations.
 +
 *Example*: `bin/solr start -e schemaless`
 
@@ -185,7 +185,6 @@ When starting in SolrCloud mode, the interactive script session will prompt you
 
 For more information about starting Solr in SolrCloud mode, see also the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
 
-[[SolrControlScriptReference-RunningwithExampleConfigurations]]
 ==== Running with Example Configurations
 
 `bin/solr start -e <name>`
@@ -297,7 +296,23 @@ Solr process 39827 running on port 8865
     "collections":"2"}}
 ----
 
-[[SolrControlScriptReference-Healthcheck]]
+=== Assert
+
+The `assert` command sanity-checks common issues with Solr installations. These include checking the ownership and existence of particular directories, and ensuring Solr is available at the expected URL. The command can either output a specified error message or change its exit code to indicate errors.
+
+As an example:
+
+`bin/solr assert --exists /opt/bin/solr`
+
+Results in the output below:
+
+[source,plain]
+----
+ERROR: Directory /opt/bin/solr does not exist.
+----
+
 === Healthcheck
 
 The `healthcheck` command generates a JSON-formatted health report for a collection when running in SolrCloud mode. The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
@@ -306,7 +321,6 @@ The `healthcheck` command generates a JSON-formatted health report for a collect
 
 `bin/solr healthcheck -help`
 
-[[SolrControlScriptReference-AvailableParameters.2]]
 ==== Healthcheck Parameters
 
 `-c <collection>`::
@@ -371,7 +385,6 @@ Below is an example healthcheck request and response using a non-standard ZooKee
           "leader":true}]}]}
 ----
 
-[[SolrControlScriptReference-CollectionsandCores]]
 == Collections and Cores
 
 The `bin/solr` script can also help you create new collections (in SolrCloud mode) or cores (in standalone mode), or delete collections.
@@ -566,7 +579,6 @@ If the `-updateIncludeFileOnly` option is set to *true*, then only the settings
 
 If the `-updateIncludeFileOnly` option is set to *false*, then the settings in `bin/solr.in.sh` or `bin\solr.in.cmd` will be updated, and `security.json` will be removed. However, the `basicAuth.conf` file is not removed with either option.
 
-[[SolrControlScriptReference-ZooKeeperOperations]]
 == ZooKeeper Operations
 
 The `bin/solr` script allows certain operations affecting ZooKeeper. These operations are for SolrCloud mode only. The operations are available as sub-commands, which each have their own set of options.
@@ -577,7 +589,6 @@ The `bin/solr` script allows certain operations affecting ZooKeeper. These opera
 
 NOTE: Solr should have been started at least once before issuing these commands to initialize ZooKeeper with the znodes Solr expects. Once ZooKeeper is initialized, Solr doesn't need to be running on any node to use these commands.
 
-[[SolrControlScriptReference-UploadaConfigurationSet]]
 === Upload a Configuration Set
 
 Use the `zk upconfig` command to upload one of the pre-configured configuration sets or a customized configuration set to ZooKeeper.
@@ -618,10 +629,9 @@ bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configse
 .Reload Collections When Changing Configurations
 [WARNING]
 ====
-This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that uses this configuration set.
+This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collection API's <<collections-api.adoc#reload,RELOAD command>> to reload any collections that use this configuration set.
 ====
 
-[[SolrControlScriptReference-DownloadaConfigurationSet]]
 === Download a Configuration Set
 
 Use the `zk downconfig` command to download a configuration set from ZooKeeper to the local filesystem.
@@ -791,12 +801,10 @@ An example of this command with the parameters is:
 `bin/solr zk ls /collections`
 
 
-[[SolrControlScriptReference-Createaznode_supportschroot_]]
 === Create a znode (supports chroot)
 
 Use the `zk mkroot` command to create a znode. The primary use case for this command is to support ZooKeeper's "chroot" concept. However, it can also be used to create arbitrary paths.
 
-[[SolrControlScriptReference-AvailableParameters.9]]
 ==== Create znode Parameters
 
 `<path>`::

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/solr-glossary.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-glossary.adoc b/solr/solr-ref-guide/src/solr-glossary.adoc
index 1feed2f..de27081 100644
--- a/solr/solr-ref-guide/src/solr-glossary.adoc
+++ b/solr/solr-ref-guide/src/solr-glossary.adoc
@@ -33,7 +33,7 @@ Where possible, terms are linked to relevant parts of the Solr Reference Guide f
 [[SolrGlossary-A]]
 === A
 
-[[atomicupdates]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-AtomicUpdates,Atomic updates>>::
+[[atomicupdates]]<<updating-parts-of-documents.adoc#atomic-updates,Atomic updates>>::
 An approach to updating only one or more fields of a document, instead of reindexing the entire document.
 
 
@@ -120,7 +120,7 @@ A JVM instance running Solr. Also known as a Solr server.
 [[SolrGlossary-O]]
 === O
 
-[[optimisticconcurrency]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-OptimisticConcurrency,Optimistic concurrency>>::
+[[optimisticconcurrency]]<<updating-parts-of-documents.adoc#optimistic-concurrency,Optimistic concurrency>>::
 Also known as "optimistic locking", this is an approach that allows for updates to documents currently in the index while retaining locking or version control.
 
 [[overseer]]Overseer::

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
index 45877f2..82e92a8 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
@@ -24,7 +24,6 @@ IMPORTANT: This requires Apache Zeppelin 0.6.0 or greater which contains the JDB
 
 To use http://zeppelin.apache.org[Apache Zeppelin] with Solr, you will need to create a JDBC interpreter for Solr. This will add SolrJ to the interpreter classpath. Once the interpreter has been created, you can create a notebook to issue queries. The http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation] provides additional information about JDBC prefixes and other features.
 
-[[SolrJDBC-ApacheZeppelin-CreatetheApacheSolrJDBCInterpreter]]
 == Create the Apache Solr JDBC Interpreter
 
 .Click "Interpreter" in the top navigation
@@ -41,7 +40,6 @@ image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_3.png[image,height=400
 For most installations, Apache Zeppelin configures PostgreSQL as the JDBC interpreter default driver. The default driver can either be replaced by the Solr driver as outlined above or you can add a separate JDBC interpreter prefix as outlined in the http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation].
 ====
 
-[[SolrJDBC-ApacheZeppelin-CreateaNotebook]]
 == Create a Notebook
 
 .Click Notebook \-> Create new note
@@ -50,7 +48,6 @@ image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_4.png[image,width=517,
 .Provide a name and click "Create Note"
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_5.png[image,width=839,height=400]
 
-[[SolrJDBC-ApacheZeppelin-QuerywiththeNotebook]]
 == Query with the Notebook
 
 [IMPORTANT]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
index f3ecc86..8b9b2b2 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
@@ -27,10 +27,8 @@ For https://www.dbvis.com/[DbVisualizer], you will need to create a new driver f
 
 Once the driver has been created, you can create a connection to Solr with the connection string format outlined in the generic section and use the SQL Commander to issue queries.
 
-[[SolrJDBC-DbVisualizer-SetupDriver]]
 == Setup Driver
 
-[[SolrJDBC-DbVisualizer-OpenDriverManager]]
 === Open Driver Manager
 
 From the Tools menu, choose Driver Manager to add a driver.
@@ -38,21 +36,18 @@ From the Tools menu, choose Driver Manager to add a driver.
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png[image,width=673,height=400]
 
 
-[[SolrJDBC-DbVisualizer-CreateaNewDriver]]
 === Create a New Driver
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png[image,width=532,height=400]
 
 
-[[SolrJDBC-DbVisualizer-NametheDriver]]
-=== Name the Driver
+=== Name the Driver in Driver Manager
 
 Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_connection_string>/?collection=<collection>`. Do not fill in values for the variables "```zk_connection_string```" and "```collection```", those will be provided later when the connection to Solr is configured. The Driver Class will also be automatically added when the driver .jars are added.
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png[image,width=532,height=400]
 
 
-[[SolrJDBC-DbVisualizer-AddDriverFilestoClasspath]]
 === Add Driver Files to Classpath
 
 The driver files to be added are:
@@ -75,17 +70,14 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png[image,width=655
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png[image,width=651,height=400]
 
 
-[[SolrJDBC-DbVisualizer-ReviewandCloseDriverManager]]
 === Review and Close Driver Manager
 
 Once the driver files have been added, you can close the Driver Manager.
 
-[[SolrJDBC-DbVisualizer-CreateaConnection]]
 == Create a Connection
 
 Next, create a connection to Solr using the driver just created.
 
-[[SolrJDBC-DbVisualizer-UsetheConnectionWizard]]
 === Use the Connection Wizard
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png[image,width=763,height=400]
@@ -94,19 +86,16 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png[image,width=76
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png[image,width=807,height=400]
 
 
-[[SolrJDBC-DbVisualizer-NametheConnection]]
 === Name the Connection
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png[image,width=402,height=400]
 
 
-[[SolrJDBC-DbVisualizer-SelecttheSolrdriver]]
 === Select the Solr driver
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png[image,width=399,height=400]
 
 
-[[SolrJDBC-DbVisualizer-SpecifytheSolrURL]]
 === Specify the Solr URL
 
 Provide the Solr URL, using the ZooKeeper host and port and the collection. For example, `jdbc:solr://localhost:9983?collection=test`
@@ -114,7 +103,6 @@ Provide the Solr URL, using the ZooKeeper host and port and the collection. For
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png[image,width=401,height=400]
 
 
-[[SolrJDBC-DbVisualizer-OpenandConnecttoSolr]]
 == Open and Connect to Solr
 
 Once the connection has been created, double-click on it to open the connection details screen and connect to Solr.
@@ -125,7 +113,6 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png[image,width=62
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png[image,width=592,height=400]
 
 
-[[SolrJDBC-DbVisualizer-OpenSQLCommandertoEnterQueries]]
 == Open SQL Commander to Enter Queries
 
 When the connection is established, you can use the SQL Commander to issue queries and view data.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index 8b56c02..64d813f 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -42,7 +42,6 @@ There are four main field types available for spatial search:
 
 Some esoteric details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
 
-[[SpatialSearch-LatLonPointSpatialField]]
 == LatLonPointSpatialField
 
 Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the schema:
@@ -52,7 +51,6 @@ Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the
 
 LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLPSF internally uses a 2-dimensional Lucene "Points" (BKD tree) index when "indexed" is enabled (the default). When "docValues" is enabled, a latitude and longitude pair is bit-interleaved into 64 bits and put into Lucene DocValues. The accuracy of the docValues data is about a centimeter.
 
-[[SpatialSearch-IndexingPoints]]
 == Indexing Points
 
 For indexing geodetic points (latitude and longitude), supply the pair in "lat,lon" order (comma separated).
@@ -61,7 +59,6 @@ For indexing non-geodetic points, it depends. Use `x y` (a space) if RPT. For Po
 
 If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. However, it's much bulkier than the raw coordinates for such simple data. (Not supported by the deprecated LatLonType or PointType.)
 
-[[SpatialSearch-SearchingwithQueryParsers]]
 == Searching with Query Parsers
 
 There are two spatial Solr "query parsers" for geospatial search: `geofilt` and `bbox`. They take the following parameters:
@@ -100,7 +97,6 @@ When used with `BBoxField`, additional options are supported:
 (Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
 
 
-[[SpatialSearch-geofilt]]
 === geofilt
 
 The `geofilt` filter allows you to retrieve results based on the geospatial distance (AKA the "great circle distance") from a given point. Another way of looking at it is that it creates a circular shape filter. For example, to find all documents within five kilometers of a given lat/lon point, you could enter `&q=*:*&fq={!geofilt sfield=store}&pt=45.15,-93.85&d=5`. This filter returns all results within a circle of the given radius around the initial point:
@@ -108,7 +104,6 @@ The `geofilt` filter allows you to retrieve results based on the geospatial dist
 image::images/spatial-search/circle.png[5KM radius]
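The "great circle distance" that `geofilt` is based on can be illustrated with the standard haversine formula. This is a self-contained sketch of the underlying math, not Solr's implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))
```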
 
 
-[[SpatialSearch-bbox]]
 === bbox
 
 The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as geofilt.
@@ -126,7 +121,6 @@ image::images/spatial-search/bbox.png[Bounding box]
 When a bounding box includes a pole, the bounding box ends up being a "bounding bowl" (a _spherical cap_) that includes all values north of the lowest latitude of the circle if it touches the north pole (or south of the highest latitude if it touches the south pole).
 ====
 
-[[SpatialSearch-Filteringbyanarbitraryrectangle]]
 === Filtering by an Arbitrary Rectangle
 
 Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range.
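A minimal sketch of this range-query trick, assuming a geodetic `lat,lon` field named `store` and hypothetical corner values:

```python
# Rectangle filter via range-query syntax: lower-left corner TO upper-right.
# For non-geodetic "x y" points the corners would need quoting, e.g. "10 10".
def rect_filter(field, lower_left, upper_right):
    return f"{field}:[{lower_left} TO {upper_right}]"

fq = rect_filter("store", "45.0,-94.0", "46.0,-93.0")
# store:[45.0,-94.0 TO 46.0,-93.0]
```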
@@ -138,7 +132,6 @@ Here's an example:
 LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are using non-geodetic coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
 
 
-[[SpatialSearch-Optimizing_CacheorNot]]
 === Optimizing: Cache or Not
 
 It's most common to put a spatial query into an "fq" parameter – a filter query. By default, Solr will cache the query in the filter cache.
@@ -149,7 +142,6 @@ If you know the filter query (be it spatial or not) is fairly unique and not lik
 
 LLPSF (`LatLonPointSpatialField`) does not support Solr's "PostFilter".
 
-[[SpatialSearch-DistanceSortingorBoosting_FunctionQueries_]]
 == Distance Sorting or Boosting (Function Queries)
 
 There are four distance function queries:
@@ -161,7 +153,6 @@ There are four distance function queries:
 
 For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
 
-[[SpatialSearch-geodist]]
 === geodist
 
 `geodist` is a distance function that takes three optional parameters: `(sfield,latitude,longitude)`. You can use the `geodist` function to sort results by distance or to score results by distance.
@@ -170,19 +161,16 @@ For example, to sort your results by ascending distance, enter `...&q=*:*&fq={!g
 
 To return the distance as the document score, enter `...&q={!func}geodist()&sfield=store&pt=45.15,-93.85&sort=score+asc`.
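The two requests above can be sketched as parameter maps (illustrative only; the field and point values mirror the examples):

```python
from urllib.parse import urlencode

# Sort hits by ascending distance (first example above).
sort_params = urlencode({
    "q": "*:*", "fq": "{!geofilt}", "sfield": "store",
    "pt": "45.15,-93.85", "d": 50, "sort": "geodist() asc",
})

# Use the distance itself as the document score (second example above).
score_params = urlencode({
    "q": "{!func}geodist()", "sfield": "store",
    "pt": "45.15,-93.85", "sort": "score asc",
})
```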
 
-[[SpatialSearch-MoreExamples]]
-== More Examples
+== More Spatial Search Examples
 
 Here are a few more useful examples of what you can do with spatial search in Solr.
 
-[[SpatialSearch-UseasaSub-QuerytoExpandSearchResults]]
 === Use as a Sub-Query to Expand Search Results
 
 Here we will query for results in Jacksonville, Florida, or within 50 kilometers of 45.15,-93.85 (near Buffalo, Minnesota):
 
 `&q=*:*&fq=(state:"FL" AND city:"Jacksonville") OR {!geofilt}&sfield=store&pt=45.15,-93.85&d=50&sort=geodist()+asc`
 
-[[SpatialSearch-FacetbyDistance]]
 === Facet by Distance
 
 To facet by distance, you can use the Frange query parser:
@@ -191,14 +179,12 @@ To facet by distance, you can use the Frange query parser:
 
 There are other ways to do it too, like using a \{!geofilt} in each facet.query.
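As a sketch, a set of distance buckets for faceting could be generated like this (the bucket widths are hypothetical; each string would go in its own `facet.query`):

```python
# Distance buckets built with the frange query parser; each facet.query
# counts documents whose geodist() lies between l and u (in km).
def distance_bucket(lower_km, upper_km):
    return f"{{!frange l={lower_km} u={upper_km}}}geodist()"

buckets = [distance_bucket(lo, lo + 5) for lo in range(0, 15, 5)]
# ['{!frange l=0 u=5}geodist()', '{!frange l=5 u=10}geodist()', ...]
```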
 
-[[SpatialSearch-BoostNearestResults]]
 === Boost Nearest Results
 
 Using the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> or <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,Extended DisMax>>, you can combine spatial search with the boost function to boost the nearest results:
 
 `&q.alt=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=50&bf=recip(geodist(),2,200,20)&sort=score desc`
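The `recip` boost in the example decays with distance. As a quick check of the arithmetic (this is the function-query math, not Solr code), `recip(x,m,a,b)` computes `a / (m*x + b)`:

```python
# With the example's values the boost is 200 / (2*distance + 20):
# 10.0 at the center point, halved 10 km out, and so on.
def recip(x, m, a, b):
    return a / (m * x + b)

boost_at_center = recip(0, 2, 200, 20)   # 10.0
boost_at_40km = recip(40, 2, 200, 20)    # 2.0
```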
 
-[[SpatialSearch-RPT]]
 == RPT
 
 RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) or an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonPointSpatialField:
@@ -215,8 +201,7 @@ RPT _shares_ various features in common with `LatLonPointSpatialField`. Some are
 * Sort/boost via `geodist`
 * Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
 
-[[SpatialSearch-Schemaconfiguration]]
-=== Schema Configuration
+=== Schema Configuration for RPT
 
 To use RPT, the field type must be registered and configured in `schema.xml`. There are many options for this field type.
 
@@ -266,7 +251,6 @@ A third choice is `packedQuad`, which is generally more efficient than `quad`, p
 
 *_And there are others:_* `normWrapLongitude`, `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about `spatialContextFactory` implementations referenced above, especially the link to the JTS based one.
 
-[[SpatialSearch-JTSandPolygons]]
 === JTS and Polygons
 
 As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon.
@@ -297,7 +281,6 @@ Inside the parenthesis following the search predicate is the shape definition. T
 
 Beyond this Reference Guide and Spatial4j's docs, there are some details that remain at the Solr Wiki at http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4.
 
-[[SpatialSearch-RptWithGeometrySpatialField]]
 === RptWithGeometrySpatialField
 
 The `RptWithGeometrySpatialField` field type is a derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry internally in Lucene DocValues, which it uses to achieve accurate search. It can also be used for indexed point fields. The Intersects predicate (the default) is particularly fast, since many search results can be returned as an accurate hit without requiring a geometry check. This field type is configured just like RPT except that the default `distErrPct` is 0.15 (higher than 0.025) because the grid squares are purely for performance and not to fundamentally represent the shape.
@@ -316,7 +299,6 @@ An optional in-memory cache can be defined in `solrconfig.xml`, which should be
 
 When using this field type, you will likely _not_ want to mark the field as stored because it's redundant with the DocValues data and surely larger because of the formatting (be it WKT or GeoJSON). To retrieve the spatial data in search results from DocValues, use the `[geo]` transformer -- <<transforming-result-documents.adoc#transforming-result-documents,Transforming Result Documents>>.
 
-[[SpatialSearch-HeatmapFaceting]]
 === Heatmap Faceting
 
 The RPT field supports generating a 2D grid of facet counts for documents having spatial data in each grid cell. For high-detail grids, this can be used to plot points, and for lesser detail it can be used for heatmap generation. The grid cells are determined at index-time based on RPT's configuration. At facet counting time, the indexed cells in the region of interest are traversed and a grid of counters, one per cell, is incremented. Solr can return the data in a straightforward 2D array of integers or in a PNG, which compresses better for larger data sets but must be decoded.
@@ -365,7 +347,6 @@ The `counts_ints2D` key has a 2D array of integers. The initial outer level is i
 
 If `format=png` then the output key is `counts_png`. It's a base-64 encoded string of a 4-byte-per-pixel PNG. The PNG logically holds exactly the same data that the ints2D format does. Note that the alpha channel byte is flipped to make it easier to view the PNG for diagnostic purposes, since otherwise counts would have to exceed 2^24 before it becomes non-opaque. Thus counts greater than this value will become opaque.
 
-[[SpatialSearch-BBoxField]]
 == BBoxField
 
 The `BBoxField` field type indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. It supports most spatial search predicates, and it has enhanced relevancy modes based on the overlap or area between the search rectangle and the indexed rectangle. It's particularly useful for its relevancy modes. To configure it in the schema, use a configuration like this:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/spell-checking.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
index adb784a..20ec5e0 100644
--- a/solr/solr-ref-guide/src/spell-checking.adoc
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -22,15 +22,12 @@ The SpellCheck component is designed to provide inline query suggestions based o
 
 The basis for these suggestions can be terms in a field in Solr, externally created text files, or fields in other Lucene indexes.
 
-[[SpellChecking-ConfiguringtheSpellCheckComponent]]
 == Configuring the SpellCheckComponent
 
-[[SpellChecking-DefineSpellCheckinsolrconfig.xml]]
 === Define Spell Check in solrconfig.xml
 
 The first step is to specify the source of terms in `solrconfig.xml`. There are three approaches to spell checking in Solr, discussed below.
 
-[[SpellChecking-IndexBasedSpellChecker]]
 ==== IndexBasedSpellChecker
 
 The `IndexBasedSpellChecker` uses a Solr index as the basis for a parallel index used for spell checking. It requires defining a field as the basis for the index terms; a common practice is to copy terms from some fields (such as `title`, `body`, etc.) to another field created for spell checking. Here is a simple example of configuring `solrconfig.xml` with the `IndexBasedSpellChecker`:
@@ -57,7 +54,6 @@ The `spellcheckIndexDir` defines the location of the directory that holds the sp
 
 Finally, _buildOnCommit_ defines whether to build the spell check index at every commit (that is, every time new documents are added to the index). It is optional, and can be omitted if you would rather set it to `false`.
 
-[[SpellChecking-DirectSolrSpellChecker]]
 ==== DirectSolrSpellChecker
 
 The `DirectSolrSpellChecker` uses terms from the Solr index without building a parallel index like the `IndexBasedSpellChecker`. This spell checker has the benefit of not having to be built regularly, meaning that the terms are always up-to-date with terms in the index. Here is how this might be configured in `solrconfig.xml`
@@ -89,9 +85,8 @@ Because this spell checker is querying the main index, you may want to limit how
 
 The `maxInspections` parameter defines the maximum number of possible matches to review before returning results; the default is 5. `minQueryLength` defines how many characters must be in the query before suggestions are provided; the default is 4.
 
-At first, spellchecker analyses incoming query words by looking up them in the index. Only query words, which are absent in index or too rare ones (below `maxQueryFrequency` ) are considered as misspelled and used for finding suggestions. Words which are frequent than `maxQueryFrequency` bypass spellchecker unchanged. After suggestions for every misspelled word are found they are filtered for enough frequency with `thresholdTokenFrequency` as boundary value. These parameters (`maxQueryFrequency` and `thresholdTokenFrequency`) can be a percentage (such as .01, or 1%) or an absolute value (such as 4).
+At first, the spellchecker analyzes incoming query words by looking them up in the index. Only query words which are absent from the index, or too rare (below `maxQueryFrequency`), are considered misspelled and used for finding suggestions. Words more frequent than `maxQueryFrequency` bypass the spellchecker unchanged. After suggestions for every misspelled word are found, they are filtered for sufficient frequency, with `thresholdTokenFrequency` as the boundary value. These parameters (`maxQueryFrequency` and `thresholdTokenFrequency`) can be a percentage (such as .01, or 1%) or an absolute value (such as 4).
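The `maxQueryFrequency` filtering can be sketched as follows. This is illustrative Python, not Solr's implementation, and the names and example frequencies are made up:

```python
# Words at or above maxQueryFrequency bypass the checker unchanged; the
# rest are treated as misspelled. A fractional threshold (e.g. .01 == 1%)
# is resolved against the total document count first.
def misspelled_terms(terms, index_freqs, max_query_frequency, num_docs):
    threshold = max_query_frequency
    if threshold < 1.0:
        threshold *= num_docs
    return [t for t in terms if index_freqs.get(t, 0) < threshold]

freqs = {"java": 500, "jawa": 1}
suspects = misspelled_terms(["java", "jawa"], freqs, 0.01, 10_000)
# only 'jawa' falls below the 1% (100-doc) threshold
```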
 
-[[SpellChecking-FileBasedSpellChecker]]
 ==== FileBasedSpellChecker
 
 The `FileBasedSpellChecker` uses an external file as a spelling dictionary. This can be useful if using Solr as a spelling server, or if spelling suggestions don't need to be based on actual terms in the index. In `solrconfig.xml`, you would define the searchComponent as so:
@@ -120,7 +115,6 @@ The differences here are the use of the `sourceLocation` to define the location
 In the previous example, _name_ is used to name this specific definition of the spellchecker. Multiple definitions can co-exist in a single `solrconfig.xml`, and the _name_ helps to differentiate them. If only defining one spellchecker, no name is required.
 ====
 
-[[SpellChecking-WordBreakSolrSpellChecker]]
 ==== WordBreakSolrSpellChecker
 
 `WordBreakSolrSpellChecker` offers suggestions by combining adjacent query terms and/or breaking terms into multiple words. It is a `SpellCheckComponent` enhancement, leveraging Lucene's `WordBreakSpellChecker`. It can detect spelling errors resulting from misplaced whitespace without the use of shingle-based dictionaries and provides collation support for word-break errors, including cases where the user has a mix of single-word spelling errors and word-break errors in the same query. It also provides shard support.
@@ -145,7 +139,6 @@ Some of the parameters will be familiar from the discussion of the other spell c
 
 The spellchecker can be configured with a traditional checker (ie: `DirectSolrSpellChecker`). The results are combined and collations can contain a mix of corrections from both spellcheckers.
 
-[[SpellChecking-AddIttoaRequestHandler]]
 === Add It to a Request Handler
 
 Queries will be sent to a <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,RequestHandler>>. If every request should generate a suggestion, then you would add the following to the `requestHandler` that you are using:
@@ -173,151 +166,86 @@ Here is an example with multiple dictionaries:
 </requestHandler>
 ----
 
-[[SpellChecking-SpellCheckParameters]]
 == Spell Check Parameters
 
-The SpellCheck component accepts the parameters described in the table below.
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|<<SpellChecking-ThespellcheckParameter,spellcheck>> |Turns on or off SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
-|<<SpellChecking-Thespellcheck.qorqParameter,spellcheck.q or q>> |Selects the query to be spellchecked.
-|<<SpellChecking-Thespellcheck.buildParameter,spellcheck.build>> |Instructs Solr to build a dictionary for use in spellchecking.
-|<<SpellChecking-Thespellcheck.collateParameter,spellcheck.collate>> |Causes Solr to build a new query based on the best suggestion for each term in the submitted query.
-|<<SpellChecking-Thespellcheck.maxCollationsParameter,spellcheck.maxCollations>> |This parameter specifies the maximum number of collations to return.
-|<<SpellChecking-Thespellcheck.maxCollationTriesParameter,spellcheck.maxCollationTries>> |This parameter specifies the number of collation possibilities for Solr to try before giving up.
-|<<SpellChecking-Thespellcheck.maxCollationEvaluationsParameter,spellcheck.maxCollationEvaluations>> |This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index.
-|<<SpellChecking-Thespellcheck.collateExtendedResultsParameter,spellcheck.collateExtendedResults>> |If true, returns an expanded response detailing the collations found. If `spellcheck.collate` is false, this parameter will be ignored.
-|<<SpellChecking-Thespellcheck.collateMaxCollectDocsParameter,spellcheck.collateMaxCollectDocs>> |The maximum number of documents to collect when testing potential Collations
-|<<SpellChecking-Thespellcheck.collateParam._ParameterPrefix,spellcheck.collateParam.*>> |Specifies param=value pairs that can be used to override normal query params when validating collations
-|<<SpellChecking-Thespellcheck.countParameter,spellcheck.count>> |Specifies the maximum number of spelling suggestions to be returned.
-|<<SpellChecking-Thespellcheck.dictionaryParameter,spellcheck.dictionary>> |Specifies the dictionary that should be used for spellchecking.
-|<<SpellChecking-Thespellcheck.extendedResultsParameter,spellcheck.extendedResults>> |Causes Solr to return additional information about spellcheck results, such as the frequency of each original term in the index (origFreq) as well as the frequency of each suggestion in the index (frequency). Note that this result format differs from the non-extended one as the returned suggestion for a word is actually an array of lists, where each list holds the suggested term and its frequency.
-|<<SpellChecking-Thespellcheck.onlyMorePopularParameter,spellcheck.onlyMorePopular>> |Limits spellcheck responses to queries that are more popular than the original query.
-|<<SpellChecking-Thespellcheck.maxResultsForSuggestParameter,spellcheck.maxResultsForSuggest>> |The maximum number of hits the request can return in order to both generate spelling suggestions and set the "correctlySpelled" element to "false".
-|<<SpellChecking-Thespellcheck.alternativeTermCountParameter,spellcheck.alternativeTermCount>> |The count of suggestions to return for each query term existing in the index and/or dictionary.
-|<<SpellChecking-Thespellcheck.reloadParameter,spellcheck.reload>> |Reloads the spellchecker.
-|<<SpellChecking-Thespellcheck.accuracyParameter,spellcheck.accuracy>> |Specifies an accuracy value to help decide whether a result is worthwhile.
-|<<spellcheck_DICT_NAME,spellcheck.<DICT_NAME>.key>> |Specifies a key/value pair for the implementation handling a given dictionary.
-|===
-
-[[SpellChecking-ThespellcheckParameter]]
-=== The spellcheck Parameter
-
-This parameter turns on SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
-
-[[SpellChecking-Thespellcheck.qorqParameter]]
-=== The spellcheck.q or q Parameter
-
-This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#Tokenizers-WhiteSpaceTokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
-
-[NOTE]
-====
-The SpellingQueryConverter class does not deal properly with non-ASCII characters. In this case, you have either to use `spellcheck.q`, or implement your own QueryConverter.
-====
-
-[[SpellChecking-Thespellcheck.buildParameter]]
-=== The spellcheck.build Parameter
-
-If set to *true*, this parameter creates the dictionary that the SolrSpellChecker will use for spell-checking. In a typical search application, you will need to build the dictionary before using the SolrSpellChecker. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
-
-The dictionary will take some time to build, so this parameter should not be sent with every request.
+The SpellCheck component accepts the parameters described below.
 
-[[SpellChecking-Thespellcheck.reloadParameter]]
-=== The spellcheck.reload Parameter
+`spellcheck`::
+This parameter turns on SpellCheck suggestions for the request. If `true`, then spelling suggestions will be generated. This is required if spell checking is desired.
 
-If set to true, this parameter reloads the spellchecker. The results depend on the implementation of `SolrSpellChecker.reload()`. In a typical implementation, reloading the spellchecker means reloading the dictionary.
+`spellcheck.q` or `q`::
+This parameter specifies the query to spellcheck.
++
+If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#white-space-tokenizer,`WhitespaceTokenizer`>> is used.
++
+The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
 
-[[SpellChecking-Thespellcheck.countParameter]]
-=== The spellcheck.count Parameter
+NOTE: The `SpellingQueryConverter` class does not deal properly with non-ASCII characters. In this case, you have either to use `spellcheck.q`, or implement your own QueryConverter.
 
-This parameter specifies the maximum number of suggestions that the spellchecker should return for a term. If this parameter isn't set, the value defaults to 1. If the parameter is set but not assigned a number, the value defaults to 5. If the parameter is set to a positive integer, that number becomes the maximum number of suggestions returned by the spellchecker.
-
-[[SpellChecking-Thespellcheck.onlyMorePopularParameter]]
-=== The spellcheck.onlyMorePopular Parameter
-
-If *true*, Solr will to return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
-
-[[SpellChecking-Thespellcheck.maxResultsForSuggestParameter]]
-=== The spellcheck.maxResultsForSuggest Parameter
-
-For example, if this is set to 5 and the user's query returns 5 or fewer results, the spellchecker will report "correctlySpelled=false" and also offer suggestions (and collations if requested). Setting this greater than zero is useful for creating "did-you-mean?" suggestions for queries that return a low number of hits.
+`spellcheck.build`::
+If set to `true`, this parameter creates the dictionary to be used for spell-checking. In a typical search application, you will need to build the dictionary before using the spell check. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
++
+The dictionary will take some time to build, so this parameter should not be sent with every request.
 
-[[SpellChecking-Thespellcheck.alternativeTermCountParameter]]
-=== The spellcheck.alternativeTermCount Parameter
+`spellcheck.reload`::
+If set to `true`, this parameter reloads the spellchecker. The results depend on the implementation of `SolrSpellChecker.reload()`. In a typical implementation, reloading the spellchecker means reloading the dictionary.
 
-Specify the number of suggestions to return for each query term existing in the index and/or dictionary. Presumably, users will want fewer suggestions for words with docFrequency>0. Also setting this value turns "on" context-sensitive spell suggestions.
+`spellcheck.count`::
+This parameter specifies the maximum number of suggestions that the spellchecker should return for a term. If this parameter isn't set, the value defaults to `1`. If the parameter is set but not assigned a number, the value defaults to `5`. If the parameter is set to a positive integer, that number becomes the maximum number of suggestions returned by the spellchecker.
 
-[[SpellChecking-Thespellcheck.extendedResultsParameter]]
-=== The spellcheck.extendedResults Parameter
+`spellcheck.onlyMorePopular`::
+If `true`, Solr will return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
 
-This parameter causes to Solr to include additional information about the suggestion, such as the frequency in the index.
+`spellcheck.maxResultsForSuggest`::
+If, for example, this is set to `5` and the user's query returns 5 or fewer results, the spellchecker will report "correctlySpelled=false" and also offer suggestions (and collations if requested). Setting this greater than zero is useful for creating "did-you-mean?" suggestions for queries that return a low number of hits.
 
-[[SpellChecking-Thespellcheck.collateParameter]]
-=== The spellcheck.collate Parameter
+`spellcheck.alternativeTermCount`::
+Defines the number of suggestions to return for each query term existing in the index and/or dictionary. Presumably, users will want fewer suggestions for words with docFrequency>0. Also, setting this value enables context-sensitive spell suggestions.
 
-If *true*, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions. For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
+`spellcheck.extendedResults`::
+If `true`, this parameter causes Solr to return additional information about spellcheck results, such as the frequency of each original term in the index (`origFreq`) as well as the frequency of each suggestion in the index (`frequency`). Note that this result format differs from the non-extended one as the returned suggestion for a word is actually an array of lists, where each list holds the suggested term and its frequency.
 
-The spellcheck.collate parameter only returns collations that are guaranteed to result in hits if re-queried, even when applying original `fq` parameters. This is especially helpful when there is more than one correction per query.
+`spellcheck.collate`::
+If `true`, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions.
++
+For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
++
+The `spellcheck.collate` parameter only returns collations that are guaranteed to result in hits if re-queried, even when applying original `fq` parameters. This is especially helpful when there is more than one correction per query.
 
 NOTE: This only returns a query to be used. It does not actually run the suggested query.
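The collation idea can be sketched like this (illustrative only): substitute the best suggestion for each token that has one, and leave the rest alone:

```python
# "jawa class lording" -> "java class loading", per the example above.
def collate(query, best_suggestions):
    return " ".join(best_suggestions.get(tok, tok) for tok in query.split())

result = collate("jawa class lording", {"jawa": "java", "lording": "loading"})
```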
 
-[[SpellChecking-Thespellcheck.maxCollationsParameter]]
-=== The spellcheck.maxCollations Parameter
-
-The maximum number of collations to return. The default is *1*. This parameter is ignored if `spellcheck.collate` is false.
+`spellcheck.maxCollations`::
+The maximum number of collations to return. The default is `1`. This parameter is ignored if `spellcheck.collate` is false.
 
-[[SpellChecking-Thespellcheck.maxCollationTriesParameter]]
-=== The spellcheck.maxCollationTries Parameter
+`spellcheck.maxCollationTries`::
+This parameter specifies the number of collation possibilities for Solr to try before giving up. Lower values ensure better performance. Higher values may be necessary to find a collation that can return results. The default value is `0`, which is equivalent to not checking collations. This parameter is ignored if `spellcheck.collate` is false.
 
-This parameter specifies the number of collation possibilities for Solr to try before giving up. Lower values ensure better performance. Higher values may be necessary to find a collation that can return results. The default value is `0`, which maintains backwards-compatible (Solr 1.4) behavior (do not check collations). This parameter is ignored if `spellcheck.collate` is false.
+`spellcheck.maxCollationEvaluations`::
+This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index. This is a performance safety-net in case a user enters a query with many misspelled words. The default is `10000` combinations, which should work well in most situations.
 
-[[SpellChecking-Thespellcheck.maxCollationEvaluationsParameter]]
-=== The spellcheck.maxCollationEvaluations Parameter
+`spellcheck.collateExtendedResults`::
+If `true`, this parameter returns an expanded response format detailing the collations Solr found. The default value is `false` and this is ignored if `spellcheck.collate` is false.
 
-This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index. This is a performance safety-net in case a user enters a query with many misspelled words. The default is *10,000* combinations, which should work well in most situations.
-
-[[SpellChecking-Thespellcheck.collateExtendedResultsParameter]]
-=== The spellcheck.collateExtendedResults Parameter
-
-If *true*, this parameter returns an expanded response format detailing the collations Solr found. The default value is *false* and this is ignored if `spellcheck.collate` is false.
-
-[[SpellChecking-Thespellcheck.collateMaxCollectDocsParameter]]
-=== The spellcheck.collateMaxCollectDocs Parameter
-
-This parameter specifies the maximum number of documents that should be collect when testing potential collations against the index. A value of *0* indicates that all documents should be collected, resulting in exact hit-counts. Otherwise an estimation is provided as a performance optimization in cases where exact hit-counts are unnecessary – the higher the value specified, the more precise the estimation.
-
-The default value for this parameter is *0*, but when `spellcheck.collateExtendedResults` is *false*, the optimization is always used as if a *1* had been specified.
-
-
-[[SpellChecking-Thespellcheck.collateParam._ParameterPrefix]]
-=== The spellcheck.collateParam.* Parameter Prefix
+`spellcheck.collateMaxCollectDocs`::
+This parameter specifies the maximum number of documents that should be collected when testing potential collations against the index. A value of `0` indicates that all documents should be collected, resulting in exact hit-counts. Otherwise an estimation is provided as a performance optimization in cases where exact hit-counts are unnecessary – the higher the value specified, the more precise the estimation.
++
+The default value for this parameter is `0`, but when `spellcheck.collateExtendedResults` is false, the optimization is always used as if `1` had been specified.
 
+`spellcheck.collateParam.*` Prefix::
 This parameter prefix can be used to specify any additional parameters that you wish the Spellchecker to use when internally validating collation queries. For example, even if your regular search results allow for loose matching of one or more query terms via parameters like `q.op=OR` and `mm=20%`, you can specify override params such as `spellcheck.collateParam.q.op=AND&spellcheck.collateParam.mm=100%` to require that only collations consisting of words that are all found in at least one document may be returned.
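As a sketch (the prefix handling shown here is an assumption about the mechanism, not Solr source), the override parameters reduce to plain `q.op`/`mm` params for the internal validation query:

```python
overrides = {
    "spellcheck.collateParam.q.op": "AND",
    "spellcheck.collateParam.mm": "100%",
}
prefix = "spellcheck.collateParam."
# Strip the prefix to see what the internal collation test query receives.
applied = {k[len(prefix):]: v for k, v in overrides.items()
           if k.startswith(prefix)}
# {'q.op': 'AND', 'mm': '100%'}
```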
 
-[[SpellChecking-Thespellcheck.dictionaryParameter]]
-=== The spellcheck.dictionary Parameter
-
-This parameter causes Solr to use the dictionary named in the parameter's argument. The default setting is "default". This parameter can be used to invoke a specific spellchecker on a per request basis.
-
-[[SpellChecking-Thespellcheck.accuracyParameter]]
-=== The spellcheck.accuracy Parameter
+`spellcheck.dictionary`::
+This parameter causes Solr to use the dictionary named in the parameter's argument. The default setting is `default`. This parameter can be used to invoke a specific spellchecker on a per request basis.
 
+`spellcheck.accuracy`::
 Specifies an accuracy value to be used by the spell checking implementation to decide whether a result is worthwhile or not. The value is a float between 0 and 1. Defaults to `Float.MIN_VALUE`.
 
-
-[[spellcheck_DICT_NAME]]
-=== The spellcheck.<DICT_NAME>.key Parameter
-
-Specifies a key/value pair for the implementation handling a given dictionary. The value that is passed through is just `key=value` (`spellcheck.<DICT_NAME>.` is stripped off.
-
+`spellcheck.<DICT_NAME>.key`::
+Specifies a key/value pair for the implementation handling a given dictionary. The value that is passed through is just `key=value` (`spellcheck.<DICT_NAME>.` is stripped off).
++
 For example, given a dictionary called `foo`, `spellcheck.foo.myKey=myValue` would result in `myKey=myValue` being passed through to the implementation handling the dictionary `foo`.
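Continuing that illustration as a request (this assumes a spellchecker named `foo` is actually configured in `solrconfig.xml`; the key and value are hypothetical):

[source,text]
----
http://localhost:8983/solr/techproducts/spell?q=delll&spellcheck=true&spellcheck.dictionary=foo&spellcheck.foo.myKey=myValue
----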
 
-[[SpellChecking-Example]]
-=== Example
+=== Spell Check Example
 
 Using Solr's `bin/solr -e techproducts` example, this query shows the results of a simple request that defines a query using the `spellcheck.q` parameter, and forces the collations to require that all input terms match:
 
@@ -368,19 +296,15 @@ Results:
 </lst>
 ----
 
-[[SpellChecking-DistributedSpellCheck]]
 == Distributed SpellCheck
 
 The `SpellCheckComponent` also supports spellchecking on distributed indexes. If you are using the `SpellCheckComponent` on a request handler other than `/select`, you must provide the following two parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`shards`::
+Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>.
 
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|shards |Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>
-|shards.qt |Specifies the request handler Solr uses for requests to shards. This parameter is not required for the `/select` request handler.
-|===
+`shards.qt`::
+Specifies the request handler Solr uses for requests to shards. This parameter is not required for the `/select` request handler.
 
 For example:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/stream-decorators.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-decorators.adoc b/solr/solr-ref-guide/src/stream-decorators.adoc
index e65f18a..4db4a82 100644
--- a/solr/solr-ref-guide/src/stream-decorators.adoc
+++ b/solr/solr-ref-guide/src/stream-decorators.adoc
@@ -382,7 +382,7 @@ cartesianProduct(
 }
 ----
 
-As you can see in the examples above, the `cartesianProduct` function does support flattening tuples across multiple fields and/or evaluators. 
+As you can see in the examples above, the `cartesianProduct` function does support flattening tuples across multiple fields and/or evaluators.
 
 == classify
 
@@ -615,8 +615,6 @@ eval(expr)
 In the example above the `eval` expression reads the first tuple from the underlying expression. It then compiles and
 executes the string Streaming Expression in the `expr_s` field.
 
-
-[[StreamingExpressions-executor]]
 == executor
 
 The `executor` function wraps a stream source that contains streaming expressions, and executes the expressions in parallel. The `executor` function looks for the expression in the `expr_s` field in each tuple. The `executor` function has an internal thread pool that runs tasks that compile and run expressions in parallel on the same worker node. This function can also be parallelized across worker nodes by wrapping it in the <<parallel,`parallel`>> function to provide parallel execution of expressions across a cluster.
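For instance, a minimal sketch (the collection, field, and checkpoint names here are illustrative) that reads stored expressions from a `topic` and executes them with ten internal threads might look like:

[source,text]
----
executor(threads=10,
         topic(checkpointCollection,
               storedExpressions,
               q="*:*",
               fl="id, expr_s",
               initialCheckpoint=0,
               id="executorTopic"))
----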
@@ -984,7 +982,6 @@ The worker nodes can be from the same collection as the data, or they can be a d
 * `zkHost`: (Optional) The ZooKeeper connect string where the worker collection resides.
 * `sort`: The sort criteria for ordering tuples returned by the worker nodes.
 
-[[StreamingExpressions-Syntax.25]]
 === parallel Syntax
 
 [source,text]
@@ -1000,10 +997,9 @@ The worker nodes can be from the same collection as the data, or they can be a d
 
 The expression above shows a `parallel` function wrapping a `reduce` function. This will cause the `reduce` function to be run in parallel across 20 worker nodes.
 
-[[StreamingExpressions-priority]]
 == priority
 
-The `priority` function is a simple priority scheduler for the <<StreamingExpressions-executor,executor>> function. The executor function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from it's underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
+The `priority` function is a simple priority scheduler for the <<executor>> function. The `executor` function doesn't directly have a concept of task prioritization; instead it simply executes tasks in the order that they are read from its underlying stream. The `priority` function provides the ability to schedule a higher priority task ahead of lower priority tasks that were submitted earlier.
 
 The `priority` function wraps two <<stream-sources.adoc#topic,topics>> that are both emitting tuples that contain streaming expressions to execute. The first topic is considered the higher priority task queue.
 
@@ -1011,14 +1007,12 @@ Each time the `priority` function is called, it checks the higher priority task
 
 The `priority` function will only emit a batch of tasks from one of the queues each time it is called. This ensures that no lower priority tasks are executed until the higher priority queue has no tasks to run.
 
-[[StreamingExpressions-Parameters.25]]
-=== Parameters
+=== priority Parameters
 
 * `topic expression`: (Mandatory) the high priority task queue
 * `topic expression`: (Mandatory) the lower priority task queue
 
-[[StreamingExpressions-Syntax.26]]
-=== Syntax
+=== priority Syntax
 
 [source,text]
 ----
@@ -1092,7 +1086,7 @@ The example about shows the rollup function wrapping the search function. Notice
 
 == scoreNodes
 
-See section in <<graph-traversal.adoc#GraphTraversal-UsingthescoreNodesFunctiontoMakeaRecommendation,graph traversal>>.
+See section in <<graph-traversal.adoc#using-the-scorenodes-function-to-make-a-recommendation,graph traversal>>.
 
 == select
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/streaming-expressions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/streaming-expressions.adoc b/solr/solr-ref-guide/src/streaming-expressions.adoc
index 5ea3dd9..1474aaa 100644
--- a/solr/solr-ref-guide/src/streaming-expressions.adoc
+++ b/solr/solr-ref-guide/src/streaming-expressions.adoc
@@ -46,7 +46,6 @@ Streams from outside systems can be joined with streams originating from Solr an
 Both streaming expressions and the streaming API are considered experimental, and the APIs are subject to change.
 ====
 
-[[StreamingExpressions-StreamLanguageBasics]]
 == Stream Language Basics
 
 Streaming Expressions are composed of streaming functions which work with a Solr collection. They emit a stream of tuples (key/value Maps).
@@ -55,7 +54,6 @@ Many of the provided streaming functions are designed to work with entire result
 
 Some streaming functions act as stream sources to originate the stream flow. Other streaming functions act as stream decorators to wrap other stream functions and perform operations on the stream of tuples. Many stream functions can be parallelized across a worker collection. This can be particularly powerful for relational algebra functions.
 
-[[StreamingExpressions-StreamingRequestsandResponses]]
 === Streaming Requests and Responses
 
 Solr has a `/stream` request handler that takes streaming expression requests and returns the tuples as a JSON stream. This request handler is implicitly defined, meaning there is nothing that has to be defined in `solrconfig.xml` - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>.
@@ -112,7 +110,6 @@ StreamFactory streamFactory = new StreamFactory().withCollectionZkHost("collecti
 ParallelStream pstream = (ParallelStream)streamFactory.constructStream("parallel(collection1, group(search(collection1, q=\"*:*\", fl=\"id,a_s,a_i,a_f\", sort=\"a_s asc,a_f asc\", partitionKeys=\"a_s\"), by=\"a_s asc\"), workers=\"2\", zkHost=\""+zkHost+"\", sort=\"a_s asc\")");
 ----
 
-[[StreamingExpressions-DataRequirements]]
 === Data Requirements
 
 Because streaming expressions rely on the `/export` handler, many of the field and field type requirements to use `/export` are also requirements for `/stream`, particularly for `sort` and `fl` parameters. Please see the section <<exporting-result-sets.adoc#exporting-result-sets,Exporting Result Sets>> for details.