Posted to commits@lucene.apache.org by ab...@apache.org on 2017/07/13 15:38:30 UTC

[33/47] lucene-solr:jira/solr-11000: SOLR-11050: remove Confluence-style anchors and fix all incoming links

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
index bffa71f..50d4396 100644
--- a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
+++ b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
@@ -140,8 +140,6 @@ The CDCR replication logic requires modification to the maintenance logic of the
 
 If the communication with one of the target data centers is slow, the Updates Log on the source data center can grow to a substantial size. In such a scenario, it is necessary for the Updates Log to be able to efficiently find a given update operation by its identifier. Since the identifier is an incremental number, it is possible to implement an efficient search strategy. Each transaction log file contains as part of its filename the version number of the first element. This is used to quickly traverse all the transaction log files and find the transaction log file containing one specific version number.
 
-
-[[CrossDataCenterReplication_CDCR_-Monitoring]]
 === Monitoring
 
 CDCR provides the following monitoring capabilities over the replication operations:
@@ -155,24 +153,19 @@ Information about the lifecycle and statistics will be provided on a per-shard b
 
 The CDC Replicator is a background thread that is responsible for replicating updates from a Source data center to one or more target data centers. It is responsible for providing monitoring information on a per-shard basis. As there can be a large number of collections and shards in a cluster, we will use a fixed-size pool of CDC Replicator threads that will be shared across shards.
 
-
-[[CrossDataCenterReplication_CDCR_-Limitations]]
-=== Limitations
+=== CDCR Limitations
 
 The current design of CDCR has some limitations. CDCR will continue to evolve over time and many of these limitations will be addressed. Among them are:
 
 * CDCR is unlikely to be satisfactory for bulk-load situations where the update rate is high, especially if the bandwidth between the Source and target clusters is restricted. In this scenario, the initial bulk load should be performed, the Source and target data centers synchronized, and CDCR then used for incremental updates.
 * CDCR is currently only active-passive; data is pushed from the Source cluster to the target cluster. There is active work being done in this area in the 6x code line to remove this limitation.
 * CDCR works most robustly with the same number of shards in the Source and target collection. The shards in the two collections may have different numbers of replicas.
+* Running CDCR with the indexes on HDFS is not currently supported; see the https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS] JIRA issue.
 
-
-[[CrossDataCenterReplication_CDCR_-Configuration]]
-== Configuration
+== CDCR Configuration
 
 The source and target configurations differ in the case of the data centers being in separate clusters. "Cluster" here means separate ZooKeeper ensembles controlling disjoint Solr instances. Whether these data centers are physically separated or not is immaterial for this discussion.
 
-
-[[CrossDataCenterReplication_CDCR_-SourceConfiguration]]
 === Source Configuration
 
 Here is a sample of a source configuration file, a section in `solrconfig.xml`. The presence of the <replica> section causes CDCR to use this cluster as the Source and should not be present in the target collections in the cluster-to-cluster case. Details about each setting are after the two examples:
@@ -211,8 +204,6 @@ Here is a sample of a source configuration file, a section in `solrconfig.xml`.
 </updateHandler>
 ----
 
-
-[[CrossDataCenterReplication_CDCR_-TargetConfiguration]]
 === Target Configuration
 
 Here is a typical target configuration.
@@ -256,7 +247,6 @@ The configuration details, defaults and options are as follows:
 
 CDCR can be configured to forward update requests to one or more replicas. A replica is defined with a “replica” list as follows:
 
-
 `zkHost`::
 The host address for ZooKeeper of the target SolrCloud. Usually this is a comma-separated list of addresses to each node in the target ZooKeeper ensemble. This parameter is required.
 
@@ -303,41 +293,27 @@ Monitor actions are performed at a core level, i.e., by using the following base
 
 Currently, none of the CDCR API calls have parameters.
 
-
 === API Entry Points (Control)
 
-* `<collection>/cdcr?action=STATUS`: <<CrossDataCenterReplication_CDCR_-STATUS,Returns the current state>> of CDCR.
-* `<collection>/cdcr?action=START`: <<CrossDataCenterReplication_CDCR_-START,Starts CDCR>> replication
-* `<collection>/cdcr?action=STOP`: <<CrossDataCenterReplication_CDCR_-STOP,Stops CDCR>> replication.
-* `<collection>/cdcr?action=ENABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-ENABLEBUFFER,Enables the buffering>> of updates.
-* `<collection>/cdcr?action=DISABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-DISABLEBUFFER,Disables the buffering>> of updates.
-
+* `<collection>/cdcr?action=STATUS`: <<CDCR STATUS,Returns the current state>> of CDCR.
+* `<collection>/cdcr?action=START`: <<CDCR START,Starts CDCR>> replication.
+* `<collection>/cdcr?action=STOP`: <<CDCR STOP,Stops CDCR>> replication.
+* `<collection>/cdcr?action=ENABLEBUFFER`: <<ENABLEBUFFER,Enables the buffering>> of updates.
+* `<collection>/cdcr?action=DISABLEBUFFER`: <<DISABLEBUFFER,Disables the buffering>> of updates.
 
 === API Entry Points (Monitoring)
 
-* `core/cdcr?action=QUEUES`: <<CrossDataCenterReplication_CDCR_-QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
-* `core/cdcr?action=OPS`: <<CrossDataCenterReplication_CDCR_-OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
-* `core/cdcr?action=ERRORS`: <<CrossDataCenterReplication_CDCR_-ERRORS,Fetches statistics and other information about replication errors>> for each replica.
+* `core/cdcr?action=QUEUES`: <<QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
+* `core/cdcr?action=OPS`: <<OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
+* `core/cdcr?action=ERRORS`: <<ERRORS,Fetches statistics and other information about replication errors>> for each replica.
 
 === Control Commands
 
-[[CrossDataCenterReplication_CDCR_-STATUS]]
-==== STATUS
+==== CDCR STATUS
 
 `/collection/cdcr?action=STATUS`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command.
-
-===== Output
-
-*Output Content*
-
-The current state of the CDCR, which includes the state of the replication process and the state of the buffer.
-
-[[cdcr_examples]]
-===== Examples
+===== CDCR Status Example
 
 *Input*
 
@@ -362,22 +338,15 @@ The current state of the CDCR, which includes the state of the replication proce
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-ENABLEBUFFER]]
 ==== ENABLEBUFFER
 
 `/collection/cdcr?action=ENABLEBUFFER`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command.
-
-===== Output
+===== Enable Buffer Response
 
-*Output Content*
-
-The status of the process and an indication of whether the buffer is enabled
+The status of the process and an indication of whether the buffer is enabled.
 
-===== Examples
+===== Enable Buffer Example
 
 *Input*
 
@@ -402,20 +371,15 @@ The status of the process and an indication of whether the buffer is enabled
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-DISABLEBUFFER]]
 ==== DISABLEBUFFER
 
 `/collection/cdcr?action=DISABLEBUFFER`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command
-
-===== Output
+===== Disable Buffer Response
 
-*Output Content:* The status of CDCR and an indication that the buffer is disabled.
+The status of CDCR and an indication that the buffer is disabled.
 
-===== Examples
+===== Disable Buffer Example
 
 *Input*
 
@@ -440,20 +404,15 @@ http://host:8983/solr/<collection_name>/cdcr?action=DISABLEBUFFER
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-START]]
-==== START
+==== CDCR START
 
 `/collection/cdcr?action=START`
 
-===== Input
+===== CDCR Start Response
 
-*Query Parameters:* There are no parameters for this action
+Confirmation that CDCR is started and the status of buffering.
 
-===== Output
-
-*Output Content:* Confirmation that CDCR is started and the status of buffering
-
-===== Examples
+===== CDCR Start Examples
 
 *Input*
 
@@ -478,20 +437,15 @@ http://host:8983/solr/<collection_name>/cdcr?action=START
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-STOP]]
-==== STOP
+==== CDCR STOP
 
 `/collection/cdcr?action=STOP`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command.
-
-===== Output
+===== CDCR Stop Response
 
-*Output Content:* The status of CDCR, including the confirmation that CDCR is stopped
+The status of CDCR, including the confirmation that CDCR is stopped.
 
-===== Examples
+===== CDCR Stop Examples
 
 *Input*
 
@@ -517,19 +471,13 @@ http://host:8983/solr/<collection_name>/cdcr?action=START
 ----
 
 
-[[CrossDataCenterReplication_CDCR_-Monitoringcommands]]
-=== Monitoring commands
+=== CDCR Monitoring Commands
 
-[[CrossDataCenterReplication_CDCR_-QUEUES]]
 ==== QUEUES
 
 `/core/cdcr?action=QUEUES`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command
-
-===== Output
+===== QUEUES Response
 
 *Output Content*
 
@@ -537,7 +485,7 @@ The output is composed of a list “queues” which contains a list of (ZooKeepe
 
 The “queues” object also contains information about the updates log, such as the size (in bytes) of the updates log on disk (“tlogTotalSize”), the number of transaction log files (“tlogTotalCount”) and the status of the updates log synchronizer (“updateLogSynchronizer”).
 
-===== Examples
+===== QUEUES Examples
 
 *Input*
 
@@ -569,20 +517,15 @@ The “queues” object also contains information about the updates log, such as
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-OPS]]
 ==== OPS
 
 `/core/cdcr?action=OPS`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command.
-
-===== Output
+===== OPS Response
 
-*Output Content:* The output is composed of a list “operationsPerSecond” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the average number of processed operations per second since the start of the replication process is provided. The operations are further broken down into two groups: add and delete operations.
+The output is composed of `operationsPerSecond`, which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the average number of processed operations per second since the start of the replication process is provided. The operations are further broken down into two groups: add and delete operations.
 
-===== Examples
+===== OPS Examples
 
 *Input*
 
@@ -612,20 +555,15 @@ The “queues” object also contains information about the updates log, such as
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-ERRORS]]
 ==== ERRORS
 
 `/core/cdcr?action=ERRORS`
 
-===== Input
+===== ERRORS Response
 
-*Query Parameters:* There are no parameters for this command.
+The output is composed of a list “errors” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, information about errors encountered during the replication is provided, such as the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
 
-===== Output
-
-*Output Content:* The output is composed of a list “errors” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, information about errors encountered during the replication is provided, such as the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
-
-===== Examples
+===== ERRORS Examples
 
 *Input*
 
@@ -728,7 +666,6 @@ http://host:port/solr/collection_name/cdcr?action=DISABLEBUFFER
 +
 * Re-enable indexing
 
-[[CrossDataCenterReplication_CDCR_-Monitoring.1]]
 == Monitoring
 
 .  Network and disk space monitoring are essential. Ensure that the system has plenty of available storage to queue up changes if there is a disconnect between the Source and Target. A network outage between the two data centers can cause your disk usage to grow.
@@ -763,8 +700,3 @@ curl http://<Source>/solr/cloud1/update -H 'Content-type:application/json' -d '[
 #check the Target
 curl "http://<Target>:8983/solr/<collection_name>/select?q=SKU:ABC&wt=json&indent=true"
 ----
-
-[[CrossDataCenterReplication_CDCR_-Limitations.1]]
-== Limitations
-
-* Running CDCR with the indexes on HDFS is not currently supported, see: https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS].
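
The pattern applied throughout this commit can be read directly from the hunks above: each explicit Confluence-style `[[...]]` anchor is deleted, headings are renamed where a bare title would collide with another section on the page (e.g., STATUS becomes CDCR STATUS, and the duplicate Limitations section is folded into CDCR Limitations), and every cross-reference is rewritten against the section title, which the AsciiDoc toolchain resolves as an implicit anchor. A condensed before/after sketch of the STATUS entry, assembled from the removed and added lines in this file:

----
// Before: an explicit anchor, and a cross-reference that names it
[[CrossDataCenterReplication_CDCR_-STATUS]]
==== STATUS
...
* `<collection>/cdcr?action=STATUS`: <<CrossDataCenterReplication_CDCR_-STATUS,Returns the current state>> of CDCR.

// After: the renamed, unique section title serves as the link target
==== CDCR STATUS
...
* `<collection>/cdcr?action=STATUS`: <<CDCR STATUS,Returns the current state>> of CDCR.
----

The practical effect is that page-local links no longer depend on hand-maintained IDs; they follow the section title wherever it moves.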

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/defining-core-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-core-properties.adoc b/solr/solr-ref-guide/src/defining-core-properties.adoc
index a533098..1424327 100644
--- a/solr/solr-ref-guide/src/defining-core-properties.adoc
+++ b/solr/solr-ref-guide/src/defining-core-properties.adoc
@@ -29,7 +29,6 @@ A minimal `core.properties` file looks like the example below. However, it can a
 name=my_core_name
 ----
 
-[[Definingcore.properties-Placementofcore.properties]]
 == Placement of core.properties
 
 Solr cores are configured by placing a file named `core.properties` in a sub-directory under `solr.home`. There are no a-priori limits to the depth of the tree, nor are there limits to the number of cores that can be defined. Cores may be anywhere in the tree with the exception that cores may _not_ be defined under an existing core. That is, the following is not allowed:
@@ -61,11 +60,8 @@ Your `core.properties` file can be empty if necessary. Suppose `core.properties`
 You can run Solr without configuring any cores.
 ====
 
-[[Definingcore.properties-Definingcore.propertiesFiles]]
 == Defining core.properties Files
 
-[[Definingcore.properties-core.properties_files]]
-
 The minimal `core.properties` file is an empty file, in which case all of the properties are defaulted appropriately.
 
 Java properties files allow the hash (`#`) or bang (`!`) characters to specify comment-to-end-of-line.
@@ -98,4 +94,4 @@ The following properties are available:
 
 `roles`:: Future parameter for SolrCloud or a way for users to mark nodes for their own use.
 
-Additional user-defined properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles,Substituting Properties in Solr Config Files>>.
+Additional user-defined properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#substituting-properties-in-solr-config-files,Substituting Properties in Solr Config Files>>.
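
The same rewrite applies to inter-document links, as in the final hunk above: the old Confluence-derived fragment `#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles` becomes the ID that Asciidoctor generates from the target section's title. A minimal sketch, assuming the guide's headers set `:idprefix:` to empty and `:idseparator:` to `-` (which the lowercase, hyphenated IDs introduced by this commit suggest):

----
// In configuring-solrconfig-xml.adoc, the section title...
== Substituting Properties in Solr Config Files

// ...yields the implicit ID  substituting-properties-in-solr-config-files,
// so a reference from another page is written as:
<<configuring-solrconfig-xml.adoc#substituting-properties-in-solr-config-files,Substituting Properties in Solr Config Files>>
----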

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/distributed-requests.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-requests.adoc b/solr/solr-ref-guide/src/distributed-requests.adoc
index 9fc80a7..b9c3920 100644
--- a/solr/solr-ref-guide/src/distributed-requests.adoc
+++ b/solr/solr-ref-guide/src/distributed-requests.adoc
@@ -24,7 +24,7 @@ The chosen replica acts as an aggregator: it creates internal requests to random
 
 == Limiting Which Shards are Queried
 
-While one of the advantages of using SolrCloud is the ability to query very large collections distributed among various shards, in some cases <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,you may know that you are only interested in results from a subset of your shards>>. You have the option of searching over all of your data or just parts of it.
+While one of the advantages of using SolrCloud is the ability to query very large collections distributed among various shards, in some cases <<shards-and-indexing-data-in-solrcloud.adoc#document-routing,you may know that you are only interested in results from a subset of your shards>>. You have the option of searching over all of your data or just parts of it.
 
 Querying all shards for a collection should look familiar; it's as though SolrCloud didn't even come into play:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/documents-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-screen.adoc b/solr/solr-ref-guide/src/documents-screen.adoc
index 4605dd7..7c16ee9 100644
--- a/solr/solr-ref-guide/src/documents-screen.adoc
+++ b/solr/solr-ref-guide/src/documents-screen.adoc
@@ -42,28 +42,24 @@ The first step is to define the RequestHandler to use (aka, 'qt'). By default `/
 
 Then choose the Document Type to define the type of document to load. The remaining parameters will change depending on the document type selected.
 
-[[DocumentsScreen-JSON]]
-== JSON
+== JSON Documents
 
 When using the JSON document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can be input into the Document entry box. The document structure should still be in proper JSON format.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
+This option will only add or overwrite documents to the index; for other update tasks, see the <<Solr Command>> option.
 
-[[DocumentsScreen-CSV]]
-== CSV
+== CSV Documents
 
 When using the CSV document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can be input into the Document entry box. The document structure should still be in proper CSV format, with columns delimited and one row per document.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-[[DocumentsScreen-DocumentBuilder]]
 == Document Builder
 
 The Document Builder provides a wizard-like interface to enter fields of a document.
 
-[[DocumentsScreen-FileUpload]]
 == File Upload
 
 The File Upload option allows choosing a prepared file and uploading it. If using only `/update` for the Request-Handler option, you will be limited to XML, CSV, and JSON.
@@ -72,18 +68,16 @@ However, to use the ExtractingRequestHandler (aka Solr Cell), you can modify the
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-[[DocumentsScreen-SolrCommand]]
 == Solr Command
 
 The Solr Command option allows you to use XML or JSON to perform specific actions on documents, such as defining documents to be added or deleted, updating only certain fields of documents, or issuing commit and optimize commands on the index.
 
 The documents should be structured as they would be if using `/update` on the command line.
 
-[[DocumentsScreen-XML]]
-== XML
+== XML Documents
 
 When using the XML document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can be input into the Document entry box. The document structure should still be in proper Solr XML format, with each document separated by `<doc>` tags and each field defined.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not **true**, then the incoming documents will be dropped).
 
-This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
+This option will only add or overwrite documents to the index; for other update tasks, see the <<Solr Command>> option.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index a741565..5357ab1 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -154,7 +154,7 @@ server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clusterprop -n
 server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
 ----
 
-If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot for Solr>> , make sure you use the correct `zkhost` string with `zkcli`, e.g. `-zkhost localhost:2181/solr`.
+If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#zookeeper-chroot,chroot for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g. `-zkhost localhost:2181/solr`.
 
 === Run SolrCloud with SSL
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/errata.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/errata.adoc b/solr/solr-ref-guide/src/errata.adoc
index 9030ee3..7484c17 100644
--- a/solr/solr-ref-guide/src/errata.adoc
+++ b/solr/solr-ref-guide/src/errata.adoc
@@ -18,14 +18,12 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[Errata-ErrataForThisDocumentation]]
 == Errata For This Documentation
 
 Any mistakes found in this documentation after its release will be listed on the on-line version of this page:
 
 https://lucene.apache.org/solr/guide/{solr-docs-version}/errata.html
 
-[[Errata-ErrataForPastVersionsofThisDocumentation]]
 == Errata For Past Versions of This Documentation
 
 Any known mistakes in past releases of this documentation will be noted below.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index 44db506..4384a74 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -351,7 +351,7 @@ The `facet.mincount` parameter, the same one as used in field faceting is also a
 [NOTE]
 ====
 
-Range faceting on date fields is a common situation where the <<working-with-dates.adoc#WorkingwithDates-TZ,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
+Range faceting on date fields is a common situation where the <<working-with-dates.adoc#tz,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
 
 For more information, see the examples in the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 27c3222..695146b 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -90,9 +90,9 @@ For multivalued fields, specifies a distance between multiple values, which prev
 `autoGeneratePhraseQueries`:: For text fields. If `true`, Solr automatically generates phrase queries for adjacent terms. If `false`, terms must be enclosed in double-quotes to be treated as phrases.
 
 `enableGraphQueries`::
-For text fields, applicable when querying with <<the-standard-query-parser.adoc#TheStandardQueryParser-StandardQueryParserParameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g., <<filter-descriptions.adoc#FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter>> and <<filter-descriptions.adoc#FilterDescriptions-WordDelimiterGraphFilter,Word Delimiter Graph Filter>>.
+For text fields, applicable when querying with <<the-standard-query-parser.adoc#TheStandardQueryParser-StandardQueryParserParameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g., <<filter-descriptions.adoc#synonym-graph-filter,Synonym Graph Filter>> and <<filter-descriptions.adoc#word-delimiter-graph-filter,Word Delimiter Graph Filter>>.
 +
-Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#FilterDescriptions-ShingleFilter,Shingle Filter>>.
+Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#shingle-filter,Shingle Filter>>.
 
 [[FieldTypeDefinitionsandProperties-docValuesFormat]]
 `docValuesFormat`::
@@ -140,4 +140,4 @@ The default values for each property depend on the underlying `FieldType` class,
 
 A field type may optionally specify a `<similarity/>` that will be used when scoring documents that refer to fields with this type, as long as the "global" similarity for the collection allows it.
 
-By default, any field type which does not define a similarity, uses `BM25Similarity`. For more details, and examples of configuring both global & per-type Similarities, please see <<other-schema-elements.adoc#OtherSchemaElements-Similarity,Other Schema Elements>>.
+By default, any field type which does not define a similarity uses `BM25Similarity`. For more details, and examples of configuring both global & per-type Similarities, please see <<other-schema-elements.adoc#similarity,Other Schema Elements>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
index 5c82970..4ba0e45 100644
--- a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
@@ -27,17 +27,17 @@ The following table lists the field types that are available in Solr. The `org.a
 |Class |Description
 |BinaryField |Binary data.
 |BoolField |Contains either true or false. Values of "1", "t", or "T" in the first character are interpreted as true. Any other values in the first character are interpreted as false.
-|CollationField |Supports Unicode collation for sorting and range queries. ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+|CollationField |Supports Unicode collation for sorting and range queries. ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 |CurrencyField |Deprecated in favor of CurrencyFieldType.
 |CurrencyFieldType |Supports currencies and exchange rates. See the section <<working-with-currencies-and-exchange-rates.adoc#working-with-currencies-and-exchange-rates,Working with Currencies and Exchange Rates>>.
 |DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
 |ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>>.
 |EnumField |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>> for more information.
-|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 |LatLonPointSpatialField |<<spatial-search.adoc#spatial-search,Spatial Search>>: a latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma.
 |LatLonType |(deprecated) <<spatial-search.adoc#spatial-search,Spatial Search>>: a single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma.
 |PointType |<<spatial-search.adoc#spatial-search,Spatial Search>>: A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields)
-|PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing. Configuration and usage of PreAnalyzedField is documented on the <<working-with-external-files-and-processes.adoc#WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType,Working with External Files and Processes>> page.
+|PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing. Configuration and usage of PreAnalyzedField is documented on the <<working-with-external-files-and-processes.adoc#the-preanalyzedfield-type,Working with External Files and Processes>> page.
 |RandomSortField |Does not contain a value. Queries that sort on this field type will return results in random order. Use a dynamic field to use this feature.
 |SpatialRecursivePrefixTreeFieldType |(RPT for short) <<spatial-search.adoc#spatial-search,Spatial Search>>: Accepts latitude comma longitude strings or other shapes in WKT format.
 |StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/filter-descriptions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/filter-descriptions.adoc b/solr/solr-ref-guide/src/filter-descriptions.adoc
index f428678..4ced59e 100644
--- a/solr/solr-ref-guide/src/filter-descriptions.adoc
+++ b/solr/solr-ref-guide/src/filter-descriptions.adoc
@@ -50,7 +50,6 @@ The following sections describe the filter factories that are included in this r
 
 For user tips about Solr's filters, see http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
 
-[[FilterDescriptions-ASCIIFoldingFilter]]
 == ASCII Folding Filter
 
 This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the Basic Latin Unicode block (the first 127 ASCII characters) to their ASCII equivalents, if one exists. This filter converts characters from the following Unicode blocks:
@@ -92,10 +91,9 @@ This filter converts alphabetic, numeric, and symbolic Unicode characters which
 
 *Out:* "a" (ASCII character 97)
 
-[[FilterDescriptions-Beider-MorseFilter]]
 == Beider-Morse Filter
 
-Implements the Beider-Morse Phonetic Matching (BMPM) algorithm, which allows identification of similar names, even if they are spelled differently or in different languages. More information about how this works is available in the section on <<phonetic-matching.adoc#PhoneticMatching-Beider-MorsePhoneticMatching_BMPM_,Phonetic Matching>>.
+Implements the Beider-Morse Phonetic Matching (BMPM) algorithm, which allows identification of similar names, even if they are spelled differently or in different languages. More information about how this works is available in the section on <<phonetic-matching.adoc#beider-morse-phonetic-matching-bmpm,Phonetic Matching>>.
 
 [IMPORTANT]
 ====
@@ -125,10 +123,9 @@ BeiderMorseFilter changed its behavior in Solr 5.0 due to an update to version 3
 </analyzer>
 ----
 
-[[FilterDescriptions-ClassicFilter]]
 == Classic Filter
 
-This filter takes the output of the <<tokenizers.adoc#Tokenizers-ClassicTokenizer,Classic Tokenizer>> and strips periods from acronyms and "'s" from possessives.
+This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic Tokenizer>> and strips periods from acronyms and "'s" from possessives.
 
 *Factory class:* `solr.ClassicFilterFactory`
 
@@ -150,7 +147,6 @@ This filter takes the output of the <<tokenizers.adoc#Tokenizers-ClassicTokenize
 
 *Out:* "IBM", "cat", "can't"
 
-[[FilterDescriptions-CommonGramsFilter]]
 == Common Grams Filter
 
 This filter creates word shingles by combining common tokens such as stop words with regular tokens. This is useful for creating phrase queries containing common words, such as "the cat." Solr normally ignores stop words in queried phrases, so searching for "the cat" would return all matches for the word "cat."
@@ -181,12 +177,10 @@ This filter creates word shingles by combining common tokens such as stop words
 
 *Out:* "the_cat"
 
-[[FilterDescriptions-CollationKeyFilter]]
 == Collation Key Filter
 
-Collation allows sorting of text in a language-sensitive way. It is usually used for sorting, but can also be used with advanced searches. We've covered this in much more detail in the section on <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+Collation allows sorting of text in a language-sensitive way. It is usually used for sorting, but can also be used with advanced searches. We've covered this in much more detail in the section on <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 
-[[FilterDescriptions-Daitch-MokotoffSoundexFilter]]
 == Daitch-Mokotoff Soundex Filter
 
 Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of similar names, even if they are spelled differently. More information about how this works is available in the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
@@ -207,7 +201,6 @@ Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of
 </analyzer>
 ----
 
-[[FilterDescriptions-DoubleMetaphoneFilter]]
 == Double Metaphone Filter
 
 This filter creates tokens using the http://commons.apache.org/codec/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[`DoubleMetaphone`] encoding algorithm from commons-codec. For more information, see the <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>> section.
@@ -260,7 +253,6 @@ Discard original token (`inject="false"`).
 
 Note that "Kuczewski" has two encodings, which are added at the same position.
 
-[[FilterDescriptions-EdgeN-GramFilter]]
 == Edge N-Gram Filter
 
 This filter generates edge n-gram tokens of sizes within the given range.
@@ -327,7 +319,6 @@ A range of 4 to 6.
 
 *Out:* "four", "scor", "score", "twen", "twent", "twenty"
 
-[[FilterDescriptions-EnglishMinimalStemFilter]]
 == English Minimal Stem Filter
 
 This filter stems plural English words to their singular form.
@@ -352,7 +343,6 @@ This filter stems plural English words to their singular form.
 
 *Out:* "dog", "cat"
 
-[[FilterDescriptions-EnglishPossessiveFilter]]
 == English Possessive Filter
 
 This filter removes singular possessives (trailing *'s*) from words. Note that plural possessives, e.g. the *s'* in "divers' snorkels", are not removed by this filter.
@@ -377,7 +367,6 @@ This filter removes singular possessives (trailing *'s*) from words. Note that p
 
 *Out:* "Man", "dog", "bites", "dogs'", "man"
 
-[[FilterDescriptions-FingerprintFilter]]
 == Fingerprint Filter
 
 This filter outputs a single token which is a concatenation of the sorted and de-duplicated set of input tokens. This can be useful for clustering/linking use cases.
@@ -406,7 +395,6 @@ This filter outputs a single token which is a concatenation of the sorted and de
 
 *Out:* "brown_dog_fox_jumped_lazy_over_quick_the"
 
-[[FilterDescriptions-FlattenGraphFilter]]
 == Flatten Graph Filter
 
 This filter must be included on index-time analyzer specifications that include at least one graph-aware filter, including Synonym Graph Filter and Word Delimiter Graph Filter.
@@ -417,7 +405,6 @@ This filter must be included on index-time analyzer specifications that include
 
 See the examples below for <<Synonym Graph Filter>> and <<Word Delimiter Graph Filter>>.
 
-[[FilterDescriptions-HunspellStemFilter]]
 == Hunspell Stem Filter
 
 The `Hunspell Stem Filter` provides support for several languages. You must provide the dictionary (`.dic`) and rules (`.aff`) files for each language you wish to use with the Hunspell Stem Filter. You can download those language files http://wiki.services.openoffice.org/wiki/Dictionaries[here].
@@ -456,7 +443,6 @@ Be aware that your results will vary widely based on the quality of the provided
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-HyphenatedWordsFilter]]
 == Hyphenated Words Filter
 
 This filter reconstructs hyphenated words that have been tokenized as two tokens because of a line break or other intervening whitespace in the field text. If a token ends with a hyphen, it is joined with the following token and the hyphen is discarded.
@@ -483,10 +469,9 @@ Note that for this filter to work properly, the upstream tokenizer must not remo
 
 *Out:* "A", "hyphenated", "word"
 
-[[FilterDescriptions-ICUFoldingFilter]]
 == ICU Folding Filter
 
-This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode Technical Report 30] in addition to the `NFKC_Casefold` normalization form as described in <<FilterDescriptions-ICUNormalizer2Filter,ICU Normalizer 2 Filter>>. This filter is a better substitute for the combined behavior of the <<FilterDescriptions-ASCIIFoldingFilter,ASCII Folding Filter>>, <<FilterDescriptions-LowerCaseFilter,Lower Case Filter>>, and <<FilterDescriptions-ICUNormalizer2Filter,ICU Normalizer 2 Filter>>.
+This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode Technical Report 30] in addition to the `NFKC_Casefold` normalization form as described in <<ICU Normalizer 2 Filter>>. This filter is a better substitute for the combined behavior of the <<ASCII Folding Filter>>, <<Lower Case Filter>>, and <<ICU Normalizer 2 Filter>>.
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`. For more information about adding jars, see the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in Solrconfig>>.
 
@@ -506,7 +491,6 @@ To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructio
 
 For detailed information on this normalization form, see http://www.unicode.org/reports/tr30/tr30-4.html.
 
-[[FilterDescriptions-ICUNormalizer2Filter]]
 == ICU Normalizer 2 Filter
 
 This filter factory normalizes text according to one of five Unicode Normalization Forms as described in http://unicode.org/reports/tr15/[Unicode Standard Annex #15]:
@@ -539,7 +523,6 @@ For detailed information about these Unicode Normalization Forms, see http://uni
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-[[FilterDescriptions-ICUTransformFilter]]
 == ICU Transform Filter
 
 This filter applies http://userguide.icu-project.org/transforms/general[ICU Transforms] to text. This filter supports only ICU System Transforms. Custom rule sets are not supported.
@@ -564,7 +547,6 @@ For detailed information about ICU Transforms, see http://userguide.icu-project.
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-[[FilterDescriptions-KeepWordFilter]]
 == Keep Word Filter
 
 This filter discards all tokens except those that are listed in the given word list. This is the inverse of the Stop Words Filter. This filter can be useful for building specialized indices for a constrained set of terms.
@@ -638,7 +620,6 @@ Using LowerCaseFilterFactory before filtering for keep words, no `ignoreCase` fl
 
 *Out:* "happy", "funny"
 
-[[FilterDescriptions-KStemFilter]]
 == KStem Filter
 
 KStem is an alternative to the Porter Stem Filter for developers looking for a less aggressive stemmer. KStem was written by Bob Krovetz, ported to Lucene by Sergio Guzman-Lara (UMASS Amherst). This stemmer is only appropriate for English language text.
@@ -663,7 +644,6 @@ KStem is an alternative to the Porter Stem Filter for developers looking for a l
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-LengthFilter]]
 == Length Filter
 
 This filter passes tokens whose length falls within the min/max limit specified. All other tokens are discarded.
@@ -694,7 +674,6 @@ This filter passes tokens whose length falls within the min/max limit specified.
 
 *Out:* "turn", "right"
 
-[[FilterDescriptions-LimitTokenCountFilter]]
 == Limit Token Count Filter
 
 This filter limits the number of accepted tokens, typically useful for index analysis.
@@ -726,7 +705,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"
 
-[[FilterDescriptions-LimitTokenOffsetFilter]]
 == Limit Token Offset Filter
 
 This filter limits tokens to those before a configured maximum start character offset. This can be useful to limit highlighting, for example.
@@ -758,7 +736,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "0", "2", "4", "6", "8", "A"
 
-[[FilterDescriptions-LimitTokenPositionFilter]]
 == Limit Token Position Filter
 
 This filter limits tokens to those before a configured maximum token position.
@@ -790,7 +767,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "1", "2", "3"
 
-[[FilterDescriptions-LowerCaseFilter]]
 == Lower Case Filter
 
 Converts any uppercase letters in a token to the equivalent lowercase token. All other characters are left unchanged.
@@ -815,10 +791,9 @@ Converts any uppercase letters in a token to the equivalent lowercase token. All
 
 *Out:* "down", "with", "camelcase"
 
-[[FilterDescriptions-ManagedStopFilter]]
 == Managed Stop Filter
 
-This is specialized version of the <<FilterDescriptions-StopFilter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#managed-resources,managed from a REST API>>.
 
 *Arguments:*
 
@@ -836,12 +811,11 @@ With this configuration the set of words is named "english" and can be managed v
 </analyzer>
 ----
 
-See <<FilterDescriptions-StopFilter,Stop Filter>> for example input/output.
+See <<Stop Filter>> for example input/output.
 
-[[FilterDescriptions-ManagedSynonymFilter]]
 == Managed Synonym Filter
 
-This is specialized version of the <<FilterDescriptions-SynonymFilter,Synonym Filter Factory>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Synonym Filter>> that uses a mapping of synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API>>.
 
 .Managed Synonym Filter has been Deprecated
 [WARNING]
@@ -851,12 +825,11 @@ Managed Synonym Filter has been deprecated in favor of Managed Synonym Graph Fil
 
 *Factory class:* `solr.ManagedSynonymFilterFactory`
 
-For arguments and examples, see the Managed Synonym Graph Filter below.
+For arguments and examples, see the <<Managed Synonym Graph Filter>> below.
 
-[[FilterDescriptions-ManagedSynonymGraphFilter]]
 == Managed Synonym Graph Filter
 
-This is specialized version of the <<FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter Factory>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping of synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API>>.
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Managed Synonym Filter, which produces incorrect graphs for multi-token synonyms.
 
@@ -881,9 +854,8 @@ With this configuration the set of mappings is named "english" and can be manage
 </analyzer>
 ----
 
-See <<FilterDescriptions-ManagedSynonymFilter,Managed Synonym Filter>> for example input/output.
+See <<Managed Synonym Filter>> for example input/output.
 
-[[FilterDescriptions-N-GramFilter]]
 == N-Gram Filter
 
 Generates n-gram tokens of sizes in the given range. Note that tokens are ordered by position and then by gram size.
@@ -950,7 +922,6 @@ A range of 3 to 5.
 
 *Out:* "fou", "four", "our", "sco", "scor", "score", "cor", "core", "ore"
 
-[[FilterDescriptions-NumericPayloadTokenFilter]]
 == Numeric Payload Token Filter
 
 This filter adds a numeric floating point payload value to tokens that match a given type. Refer to the Javadoc for the `org.apache.lucene.analysis.Token` class for more information about token types and payloads.
@@ -979,7 +950,6 @@ This filter adds a numeric floating point payload value to tokens that match a g
 
 *Out:* "bing"[0.75], "bang"[0.75], "boom"[0.75]
 
-[[FilterDescriptions-PatternReplaceFilter]]
 == Pattern Replace Filter
 
 This filter applies a regular expression to each token and, for those that match, substitutes the given replacement string in place of the matched pattern. Tokens which do not match are passed through unchanged.
@@ -1048,7 +1018,6 @@ More complex pattern with capture group reference in the replacement. Tokens tha
 
 *Out:* "cat", "foo_1234", "9987", "blah1234foo"
 
-[[FilterDescriptions-PhoneticFilter]]
 == Phonetic Filter
 
 This filter creates tokens using one of the phonetic encoding algorithms in the `org.apache.commons.codec.language` package. For more information, see the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
@@ -1119,7 +1088,6 @@ Default Soundex encoder.
 
 *Out:* "four"(1), "F600"(1), "score"(2), "S600"(2), "and"(3), "A530"(3), "twenty"(4), "T530"(4)
 
-[[FilterDescriptions-PorterStemFilter]]
 == Porter Stem Filter
 
 This filter applies the Porter Stemming Algorithm for English. The results are similar to using the Snowball Porter Stemmer with the `language="English"` argument. But this stemmer is coded directly in Java and is not based on Snowball. It does not accept a list of protected words and is only appropriate for English language text. However, it has been benchmarked as http://markmail.org/thread/d2c443z63z37rwf6[four times faster] than the English Snowball stemmer, so can provide a performance enhancement.
@@ -1144,7 +1112,6 @@ This filter applies the Porter Stemming Algorithm for English. The results are s
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-RemoveDuplicatesTokenFilter]]
 == Remove Duplicates Token Filter
 
 The filter removes duplicate tokens in the stream. Tokens are considered to be duplicates ONLY if they have the same text and position values.
@@ -1223,7 +1190,6 @@ This filter reverses tokens to provide faster leading wildcard and prefix querie
 
 *Out:* "oof*", "rab*"
 
-[[FilterDescriptions-ShingleFilter]]
 == Shingle Filter
 
 This filter constructs shingles, which are token n-grams, from the token stream. It combines runs of tokens into a single token.
@@ -1278,7 +1244,6 @@ A shingle size of four, do not include original token.
 
 *Out:* "To be"(1), "To be or"(1), "To be or not"(1), "be or"(2), "be or not"(2), "be or not to"(2), "or not"(3), "or not to"(3), "or not to be"(3), "not to"(4), "not to be"(4), "to be"(5)
 
-[[FilterDescriptions-SnowballPorterStemmerFilter]]
 == Snowball Porter Stemmer Filter
 
 This filter factory instantiates a language-specific stemmer generated by Snowball. Snowball is a software package that generates pattern-based word stemmers. This type of stemmer is not as accurate as a table-based stemmer, but is faster and less complex. Table-driven stemmers are labor intensive to create and maintain and so are typically commercial products.
@@ -1349,7 +1314,6 @@ Spanish stemmer, Spanish words:
 
 *Out:* "cant", "cant"
 
-[[FilterDescriptions-StandardFilter]]
 == Standard Filter
 
 This filter removes dots from acronyms and the substring "'s" from the end of tokens. This filter depends on the tokens being tagged with the appropriate term-type to recognize acronyms and words with apostrophes.
@@ -1363,7 +1327,6 @@ This filter removes dots from acronyms and the substring "'s" from the end of to
 This filter is no longer operational in Solr when the `luceneMatchVersion` (in `solrconfig.xml`) is higher than "3.1".
 ====
 
-[[FilterDescriptions-StopFilter]]
 == Stop Filter
 
 This filter discards, or _stops_ analysis of, tokens that are on the given stop words list. A standard stop words list is included in the Solr `conf` directory, named `stopwords.txt`, which is appropriate for typical English language text.
@@ -1414,10 +1377,9 @@ Case-sensitive matching, capitalized words not stopped. Token positions skip sto
 
 *Out:* "what"(4)
 
-[[FilterDescriptions-SuggestStopFilter]]
 == Suggest Stop Filter
 
-Like <<FilterDescriptions-StopFilter,Stop Filter>>, this filter discards, or _stops_ analysis of, tokens that are on the given stop words list.
+Like <<Stop Filter>>, this filter discards, or _stops_ analysis of, tokens that are on the given stop words list.
 
 Suggest Stop Filter differs from Stop Filter in that it will not remove the last token unless it is followed by a token separator. For example, a query `"find the"` would preserve the `'the'` since it was not followed by a space, punctuation etc., and mark it as a `KEYWORD` so that following filters will not change or remove it.
 
@@ -1455,7 +1417,6 @@ By contrast, a query like "`find the popsicle`" would remove '`the`' as a stopwo
 
 *Out:* "the"(2)
 
-[[FilterDescriptions-SynonymFilter]]
 == Synonym Filter
 
 This filter does synonym mapping. Each token is looked up in the list of synonyms and if a match is found, then the synonym is emitted in place of the token. The position value of the new tokens is set such that they all occur at the same position as the original token.
@@ -1470,7 +1431,6 @@ Synonym Filter has been deprecated in favor of Synonym Graph Filter, which is re
 
 For arguments and examples, see the Synonym Graph Filter below.
 
-[[FilterDescriptions-SynonymGraphFilter]]
 == Synonym Graph Filter
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Synonym Filter, which produces incorrect graphs for multi-token synonyms.
@@ -1542,7 +1502,6 @@ small => tiny,teeny,weeny
 
 *Out:* "the"(1), "large"(2), "large"(3), "couch"(4), "sofa"(4), "divan"(4)
 
-[[FilterDescriptions-TokenOffsetPayloadFilter]]
 == Token Offset Payload Filter
 
 This filter adds the numeric character offsets of the token as a payload value for that token.
@@ -1567,7 +1526,6 @@ This filter adds the numeric character offsets of the token as a payload value f
 
 *Out:* "bing"[0,4], "bang"[5,9], "boom"[10,14]
 
-[[FilterDescriptions-TrimFilter]]
 == Trim Filter
 
 This filter trims leading and/or trailing whitespace from tokens. Most tokenizers break tokens at whitespace, so this filter is most often used for special situations.
@@ -1596,7 +1554,6 @@ The PatternTokenizerFactory configuration used here splits the input on simple c
 
 *Out:* "one", "two", "three", "four"
 
-[[FilterDescriptions-TypeAsPayloadFilter]]
 == Type As Payload Filter
 
 This filter adds the token's type, as an encoded byte sequence, as its payload.
@@ -1621,10 +1578,9 @@ This filter adds the token's type, as an encoded byte sequence, as its payload.
 
 *Out:* "Pay"[<ALPHANUM>], "Bob's"[<APOSTROPHE>], "I.O.U."[<ACRONYM>]
 
-[[FilterDescriptions-TypeTokenFilter]]
 == Type Token Filter
 
-This filter blacklists or whitelists a specified list of token types, assuming the tokens have type metadata associated with them. For example, the <<tokenizers.adoc#Tokenizers-UAX29URLEmailTokenizer,UAX29 URL Email Tokenizer>> emits "<URL>" and "<EMAIL>" typed tokens, as well as other types. This filter would allow you to pull out only e-mail addresses from text as tokens, if you wish.
+This filter blacklists or whitelists a specified list of token types, assuming the tokens have type metadata associated with them. For example, the <<tokenizers.adoc#uax29-url-email-tokenizer,UAX29 URL Email Tokenizer>> emits "<URL>" and "<EMAIL>" typed tokens, as well as other types. This filter would allow you to pull out only e-mail addresses from text as tokens, if you wish.
 
 *Factory class:* `solr.TypeTokenFilterFactory`
 
@@ -1645,7 +1601,6 @@ This filter blacklists or whitelists a specified list of token types, assuming t
 </analyzer>
 ----
 
-[[FilterDescriptions-WordDelimiterFilter]]
 == Word Delimiter Filter
 
 This filter splits tokens at word delimiters.
@@ -1660,7 +1615,6 @@ Word Delimiter Filter has been deprecated in favor of Word Delimiter Graph Filte
 
 For a full description, including arguments and examples, see the Word Delimiter Graph Filter below.
 
-[[FilterDescriptions-WordDelimiterGraphFilter]]
 == Word Delimiter Graph Filter
 
 This filter splits tokens at word delimiters.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/function-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/function-queries.adoc b/solr/solr-ref-guide/src/function-queries.adoc
index 29cca9c..5a9f6df 100644
--- a/solr/solr-ref-guide/src/function-queries.adoc
+++ b/solr/solr-ref-guide/src/function-queries.adoc
@@ -25,14 +25,13 @@ Function queries are supported by the <<the-dismax-query-parser.adoc#the-dismax-
 
 Function queries use _functions_. A function can be a constant (numeric or string literal), a field, another function, or a parameter substitution argument. You can use these functions to modify the ranking of results for users, for example changing the ranking based on a user's location or some other calculation.
 
-[[FunctionQueries-UsingFunctionQuery]]
 == Using Function Query
 
 Functions must be expressed as function calls (for example, `sum(a,b)` instead of simply `a+b`).
 
 There are several ways of using function queries in a Solr query:
 
-* Via an explicit QParser that expects function arguments, such <<other-parsers.adoc#OtherParsers-FunctionQueryParser,`func`>> or <<other-parsers.adoc#OtherParsers-FunctionRangeQueryParser,`frange`>> . For example:
+* Via an explicit QParser that expects function arguments, such as <<other-parsers.adoc#function-query-parser,`func`>> or <<other-parsers.adoc#function-range-query-parser,`frange`>>. For example:
 +
 [source,text]
 ----
@@ -76,7 +75,6 @@ q=_val_:mynumericfield _val_:"recip(rord(myfield),1,2,3)"
 
 Only functions with fast random access are recommended.
 
-[[FunctionQueries-AvailableFunctions]]
 == Available Functions
 
 The table below summarizes the functions available for function queries.
@@ -89,7 +87,7 @@ Returns the absolute value of the specified value or function.
 * `abs(x)` `abs(-5)`
 
 === childfield(field) Function
-Returns the value of the given field for one of the matched child docs when searching by <<other-parsers.adoc#OtherParsers-BlockJoinParentQueryParser,{!parent}>>. It can be used only in `sort` parameter.
+Returns the value of the given field for one of the matched child docs when searching by <<other-parsers.adoc#block-join-parent-query-parser,{!parent}>>. It can only be used in the `sort` parameter.
 
 *Syntax Examples*
 
@@ -149,7 +147,6 @@ You can quote the term if it's more complex, or do parameter substitution for th
 * `docfreq(text,'solr')`
 * `...&defType=func` `&q=docfreq(text,$myterm)&myterm=solr`
 
-[[FunctionQueries-field]]
 === field Function
 Returns the numeric docValues or indexed value of the field with the specified name. In its simplest (single argument) form, this function can only be used on single valued fields, and can be called using the name of the field as a string, or, for most conventional field names, by simply using the field name by itself without the `field(...)` syntax.
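 For illustration (the field names here are hypothetical), the following forms are all valid:

 [source,text]
 ----
 myfield
 field(myfield)
 field("a field name with unusual characters")
 field(sizes,min)
 field(sizes,max)
 ----

 The two-argument forms select the minimum or maximum value of a single multivalued field, as noted in the `max` and `min` function entries below.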
 
@@ -232,7 +229,7 @@ If the value of `x` does not fall between `min` and `max`, then either the value
 === max Function
 Returns the maximum numeric value of multiple nested functions or constants, which are specified as arguments: `max(x,y,...)`. The `max` function can also be useful for "bottoming out" another function or field at some specified constant.
 
-Use the `field(myfield,max)` syntax for <<FunctionQueries-field,selecting the maximum value of a single multivalued field>>.
+Use the `field(myfield,max)` syntax for <<field Function,selecting the maximum value of a single multivalued field>>.
 
 *Syntax Example*
 
@@ -248,7 +245,7 @@ Returns the number of documents in the index, including those that are marked as
 === min Function
 Returns the minimum numeric value of multiple nested functions or constants, which are specified as arguments: `min(x,y,...)`. The `min` function can also be useful for providing an "upper bound" on a function using a constant.
 
-Use the `field(myfield,min)` <<FunctionQueries-field,syntax for selecting the minimum value of a single multivalued field>>.
+Use the `field(myfield,min)` <<field Function,syntax for selecting the minimum value of a single multivalued field>>.
 
 *Syntax Example*
 
@@ -502,8 +499,6 @@ Returns `true` if any member of the field exists.
 *Syntax Example*
 * `if(lt(ms(mydatefield),315569259747),0.8,1)` translates to this pseudocode: `if mydatefield < 315569259747 then 0.8 else 1`
 
-
-[[FunctionQueries-ExampleFunctionQueries]]
 == Example Function Queries
 
 To give you a better understanding of how function queries can be used in Solr, suppose an index stores the dimensions in meters x,y,z of some hypothetical boxes with arbitrary names stored in field `boxname`. Suppose we want to search for boxes matching the name `findbox`, but ranked according to box volume. The query parameters would be:
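 A sketch of such a request, using the `_val_` hook shown earlier (the exact list of returned fields is illustrative):

 [source,text]
 ----
 http://localhost:8983/solr/collection_name/select?q=boxname:findbox _val_:"product(x,y,z)"&fl=boxname x y z score
 ----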
@@ -521,7 +516,6 @@ Suppose that you also have a field storing the weight of the box as `weight`. To
 http://localhost:8983/solr/collection_name/select?q=boxname:findbox _val_:"div(weight,product(x,y,z))"&fl=boxname x y z weight score
 ----
 
-[[FunctionQueries-SortByFunction]]
 == Sort By Function
 
 You can sort your query results by the output of a function. For example, to sort results by distance, you could enter:
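 A sketch of such a request (the `dist` function and the point fields `x` and `y` are illustrative):

 [source,text]
 ----
 http://localhost:8983/solr/collection_name/select?q=*:*&sort=dist(2,x,y,0,0) asc
 ----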

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/graph-traversal.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/graph-traversal.adoc b/solr/solr-ref-guide/src/graph-traversal.adoc
index 007019b..a23b32e 100644
--- a/solr/solr-ref-guide/src/graph-traversal.adoc
+++ b/solr/solr-ref-guide/src/graph-traversal.adoc
@@ -31,7 +31,6 @@ The `nodes` function can be combined with the `scoreNodes` function to provide r
 This document assumes a basic understanding of graph terminology and streaming expressions. You can begin exploring graph traversal concepts with this https://en.wikipedia.org/wiki/Graph_traversal[Wikipedia article]. More details about streaming expressions are available in this Guide, in the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>.
 ====
 
-[[GraphTraversal-BasicSyntax]]
 == Basic Syntax
 
 We'll start with the most basic syntax and slowly build up more complexity. The most basic syntax for `nodes` is:
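 In sketch form, assuming an `emails` collection with `from` and `to` fields (the root address is purely illustrative):

 [source,plain]
 ----
 nodes(emails,
       walk="johndoe@apache.org->from",
       gather="to")
 ----

 This reads: start from the root node "\johndoe@apache.org", walk to every email whose `from` field matches it, and gather the `to` field of each matched email as a new node.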
@@ -161,7 +160,6 @@ When scattering both branches and leaves the output would like this:
 
 Now the level 0 root node is included in the output.
 
-[[GraphTraversal-Aggregations]]
 == Aggregations
 
 `nodes` also supports aggregations. For example:
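 The following sketch (same illustrative collection and fields as above) adds a `count(*)` aggregation to count how often each gathered address appears:

 [source,plain]
 ----
 nodes(emails,
       walk="johndoe@apache.org->from",
       gather="to",
       count(*))
 ----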
@@ -182,8 +180,7 @@ Edges are uniqued as part of the traversal so the count will *not* reflect the n
 
 The aggregation functions supported are `count(*)`, `sum(field)`, `min(field)`, `max(field)`, and `avg(field)`. The fields being aggregated should be present in the edges collected during the traversal. Later examples (below) will show how aggregations can be a powerful tool for providing recommendations and limiting the scope of traversals.
 
-[[GraphTraversal-Nestingnodesfunctions]]
-== Nesting nodes functions
+== Nesting nodes Functions
 
 The `nodes` function can be nested to traverse deeper into the graph. For example:
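 A two-level sketch (illustrative names again); the inner expression emits its gathered addresses in the `node` field, which the outer expression walks:

 [source,plain]
 ----
 nodes(emails,
       nodes(emails,
             walk="johndoe@apache.org->from",
             gather="to"),
       walk="node->from",
       gather="to")
 ----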
 
@@ -207,14 +204,12 @@ Put more simply, the inner expression gathers all the people that "\johndoe@apac
 
 This construct of nesting `nodes` functions is the basic technique for doing a controlled traversal through the graph.
 
-[[GraphTraversal-CycleDetection]]
 == Cycle Detection
 
 The `nodes` function performs cycle detection across the entire traversal. This ensures that nodes that have already been visited are not traversed again. Cycle detection is important for both limiting the size of traversals and gathering accurate aggregations. Without cycle detection the size of the traversal could grow exponentially with each hop in the traversal. With cycle detection only new nodes encountered are traversed.
 
 Cycle detection *does not* cross collection boundaries. This is because internally the collection name is part of the node ID. For example, the node ID "\johndoe@apache.org" is really `emails/johndoe@apache.org`. When traversing to another collection, "\johndoe@apache.org" will be traversed again.
 
-[[GraphTraversal-FilteringtheTraversal]]
 == Filtering the Traversal
 
 Each level in the traversal can be filtered with a filter query. For example:
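 An illustrative sketch where the `fq` parameter restricts the level to emails whose body matches the filter query:

 [source,plain]
 ----
 nodes(emails,
       walk="johndoe@apache.org->from",
       fq="body:(solr rocks)",
       gather="to")
 ----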
@@ -229,7 +224,6 @@ nodes(emails,
 
 In the example above only emails that match the filter query will be included in the traversal. Any Solr query can be included here. So you can do fun things like <<spatial-search.adoc#spatial-search,geospatial queries>>, apply any of the available <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,query parsers>>, or even write custom query parsers to limit the traversal.
 
-[[GraphTraversal-RootStreams]]
 == Root Streams
 
 Any streaming expression can be used to provide the root nodes for a traversal. For example:
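 A sketch using a `search` expression to supply the root nodes (the query and field names are illustrative):

 [source,plain]
 ----
 nodes(emails,
       search(emails, q="body:(solr rocks)", fl="to", sort="score desc", rows="20"),
       walk="to->from",
       gather="to")
 ----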
@@ -246,7 +240,6 @@ The example above provides the root nodes through a search expression. You can a
 
 Notice that the `walk` parameter maps a field from the tuples generated by the inner stream. In this case it maps the `to` field from the inner stream to the `from` field.
 
-[[GraphTraversal-SkippingHighFrequencyNodes]]
 == Skipping High Frequency Nodes
 
 It's often desirable to skip traversing high frequency nodes in the graph. This is similar in nature to a search term stop list. The best way to describe this is through an example use case.
@@ -277,7 +270,6 @@ The `nodes` function has the `maxDocFreq` param to allow for filtering out high
 
 In the example above, the inner search expression searches the `logs` collection and returns all the articles viewed by "user1". The outer `nodes` expression takes all the articles emitted from the inner search expression and finds all the records in the logs collection for those articles. It then gathers and aggregates the users that have read the articles. The `maxDocFreq` parameter limits the articles returned to those that appear in no more than 10,000 log records (per shard). This guards against returning articles that have been viewed by millions of users.
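 To make this concrete, here is a sketch of such an expression (the user ID, collection, and field names are illustrative):

 [source,plain]
 ----
 nodes(logs,
       search(logs, q="userID:user1", fl="articleID", sort="articleID asc"),
       walk="articleID->articleID",
       gather="userID",
       maxDocFreq="10000",
       count(*))
 ----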
 
-[[GraphTraversal-TrackingtheTraversal]]
 == Tracking the Traversal
 
 By default the `nodes` function only tracks enough information to do cycle detection, which is sufficient to output the nodes and aggregations in the graph.
@@ -298,7 +290,6 @@ nodes(emails,
       gather="to")
 ----
 
-[[GraphTraversal-Cross-CollectionTraversals]]
 == Cross-Collection Traversals
 
 Nested `nodes` functions can operate on different SolrCloud collections. This allows traversals to "walk" from one collection to another to gather nodes. Cycle detection does not cross collection boundaries, so nodes collected in one collection will be traversed again in a different collection. This was done deliberately to support cross-collection traversals. Note that the output from a cross-collection traversal will likely contain duplicate nodes with different collection attributes.
@@ -320,7 +311,6 @@ nodes(logs,
 
 The example above finds all people who sent emails with a body that contains "solr rocks". It then finds all the people these people have emailed. Then it traverses to the logs collection and gathers all the content IDs that these people have edited.
 
-[[GraphTraversal-CombiningnodesWithOtherStreamingExpressions]]
 == Combining nodes With Other Streaming Expressions
 
 The `nodes` function can act as both a stream source and a stream decorator. The connection with the wider stream expression library provides tremendous power and flexibility when performing graph traversals. Here is an example of using the streaming expression library to intersect two friend networks:
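 A sketch of such an intersection (the addresses and field names are illustrative):

 [source,plain]
 ----
 intersect(sort(nodes(emails,
                      walk="johndoe@apache.org->from",
                      gather="to"),
                by="node asc"),
           sort(nodes(emails,
                      walk="janedoe@apache.org->from",
                      gather="to"),
                by="node asc"),
           on="node")
 ----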
@@ -348,10 +338,8 @@ The `nodes` function can act as both a stream source and a stream decorator. The
 
 The example above gathers two separate friend networks, one rooted with "\johndoe@apache.org" and another rooted with "\janedoe@apache.org". The friend networks are then sorted by the `node` field, and intersected. The resulting node set will be the intersection of the two friend networks.
 
-[[GraphTraversal-SampleUseCases]]
-== Sample Use Cases
+== Sample Use Cases for Graph Traversal
 
-[[GraphTraversal-CalculateMarketBasketCo-occurrence]]
 === Calculate Market Basket Co-occurrence
 
 It is often useful to know which products are most frequently purchased with a particular product. This example uses a simple market basket table (indexed in Solr) to store past shopping baskets. The schema for the table is very simple with each row containing a `basketID` and a `productID`. This can be seen as a graph with each row in the table representing an edge. And it can be traversed very quickly to calculate basket co-occurrence, even when the graph contains billions of edges.
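 A sketch of such a traversal (the collection name, sample size, and target product ID are illustrative):

 [source,plain]
 ----
 top(n="5",
     sort="count(*) desc",
     nodes(baskets,
           random(baskets, q="productID:ABC", fl="basketID", rows="500"),
           walk="basketID->basketID",
           fq="-productID:ABC",
           gather="productID",
           count(*)))
 ----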
@@ -378,15 +366,13 @@ Let's break down exactly what this traversal is doing.
 
 In a nutshell this expression finds the products that most frequently co-occur with product "ABC" in past shopping baskets.
 
-[[GraphTraversal-UsingthescoreNodesFunctiontoMakeaRecommendation]]
 === Using the scoreNodes Function to Make a Recommendation
 
-This use case builds on the market basket example <<GraphTraversal-CalculateMarketBasketCo-occurrence,above>> that calculates which products co-occur most frequently with productID:ABC. The ranked co-occurrence counts provide candidates for a recommendation. The `scoreNodes` function can be used to score the candidates to find the best recommendation.
+This use case builds on the market basket example <<Calculate Market Basket Co-occurrence,above>> that calculates which products co-occur most frequently with productID:ABC. The ranked co-occurrence counts provide candidates for a recommendation. The `scoreNodes` function can be used to score the candidates to find the best recommendation.
 
 Before diving into the syntax of the `scoreNodes` function it's useful to understand why the raw co-occurrence counts may not produce the best recommendation. The reason is that raw co-occurrence counts favor items that occur frequently across all baskets. A better recommendation would find the product that has the most significant relationship with productID ABC. The `scoreNodes` function uses a term frequency-inverse document frequency (TF-IDF) algorithm to find the most significant relationship.
 
-[[GraphTraversal-HowItWorks]]
-==== *How It Works*
+==== How scoreNodes Works
 
 The `scoreNodes` function assigns a score to each node emitted by the nodes expression. By default the `scoreNodes` function uses the `count(*)` aggregation, which is the co-occurrence count, as the TF value. The IDF value for each node is fetched from the collection where the node was gathered. Each node is then scored using the TF*IDF formula, which provides a boost to nodes with a lower frequency across all market baskets.
 
@@ -394,8 +380,7 @@ Combining the co-occurrence count with the IDF provides a score that shows how i
 
 The `scoreNodes` function adds the score to each node in the `nodeScore` field.
 
-[[GraphTraversal-ExampleSyntax]]
-==== *Example Syntax*
+==== Example scoreNodes Syntax
 
 [source,plain]
 ----
@@ -417,7 +402,6 @@ This example builds on the earlier example "Calculate market basket co-occurrenc
 . The `scoreNodes` function then assigns a score to the candidates based on the TF*IDF of each node.
 . The outer `top` expression selects the highest scoring node. This is the recommendation.
 
-[[GraphTraversal-RecommendContentBasedonCollaborativeFilter]]
 === Recommend Content Based on Collaborative Filter
 
 In this example we'll recommend content for a user based on a collaborative filter. This recommendation is made using log records that contain the `userID` and `articleID` and the action performed. In this scenario each log record can be viewed as an edge in a graph. The userID and articleID are the nodes and the action is an edge property used to filter the traversal.
@@ -458,7 +442,6 @@ Note that it skips high frequency nodes using the `maxDocFreq` param to filter o
 Any article selected in step 1 (user1's reading list) will not appear in this step due to cycle detection. So this step returns the articles read by the users with the most similar reading habits to "user1" that "user1" has not read yet. It also counts the number of times each article has been read across this user group.
 . The outer `top` expression takes the top articles emitted from step 4. This is the recommendation.
 
-[[GraphTraversal-ProteinPathwayTraversal]]
 === Protein Pathway Traversal
 
 In recent years, scientists have become increasingly able to rationally design drugs that target the mutated proteins, called oncogenes, responsible for some cancers. Proteins typically act through long chains of chemical interactions between multiple proteins, called pathways, and, while the oncogene in the pathway may not have a corresponding drug, another protein in the pathway may. Graph traversal on a protein collection that records protein interactions and drugs may yield possible candidates. (Thanks to Lewis Geer of the NCBI for providing this example.)
@@ -481,7 +464,6 @@ Let's break down exactly what this traversal is doing.
 . The outer `nodes` expression also works with the `proteins` collection. It gathers all the drugs that correspond to proteins emitted from step 1.
 . Using this stepwise approach you can gather the drugs along the pathway of interactions any number of steps away from the root protein.
 
-[[GraphTraversal-ExportingGraphMLtoSupportGraphVisualization]]
 == Exporting GraphML to Support Graph Visualization
 
 In the examples above, the `nodes` expression was sent to Solr's `/stream` handler like any other streaming expression. This approach outputs the nodes in the same JSON tuple format as other streaming expressions, so the results can be consumed anywhere a streaming expression's output can. You can use the `/stream` handler when you need to operate directly on the tuples, such as in the recommendation use cases above.
@@ -496,8 +478,7 @@ There are a few things to keep mind when exporting a graph in GraphML:
 . The `/graph` handler currently accepts an arbitrarily complex streaming expression which includes a `nodes` expression. If the streaming expression doesn't include a `nodes` expression, the `/graph` handler will not properly output GraphML.
 . The `/graph` handler currently accepts a single arbitrarily complex, nested `nodes` expression per request. This means you cannot send in a streaming expression that joins or intersects the node sets from multiple `nodes` expressions. The `/graph` handler does support any level of nesting within a single `nodes` expression. The `/stream` handler does support joining and intersecting node sets, but the `/graph` handler currently does not.
 
-[[GraphTraversal-SampleRequest]]
-=== Sample Request
+=== Sample GraphML Request
 
 [source,bash]
 ----
@@ -512,7 +493,6 @@ curl --data-urlencode 'expr=nodes(enron_emails,
                                   gather="to")' http://localhost:8983/solr/enron_emails/graph
 ----
 
-[[GraphTraversal-SampleGraphMLOutput]]
 === Sample GraphML Output
 
 [source,xml]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
index 7fc1943..7ed0a15 100644
--- a/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/hadoop-authentication-plugin.adoc
@@ -30,7 +30,7 @@ For some of the authentication schemes (e.g., Kerberos), Solr provides a native
 
 There are two plugin classes:
 
-* `HadoopAuthPlugin`: This can be used with standalone Solr as well as Solrcloud with <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-PKI,PKI authentication>> for internode communication.
+* `HadoopAuthPlugin`: This can be used with standalone Solr as well as SolrCloud with <<authentication-and-authorization-plugins.adoc#securing-inter-node-requests,PKI authentication>> for internode communication.
 * `ConfigurableInternodeAuthHadoopPlugin`: This is an extension of HadoopAuthPlugin that allows you to configure the authentication scheme for internode communication.
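 A minimal sketch of enabling the plugin in `security.json` is shown below; scheme-specific parameters (for example, Kerberos principal and keytab settings) must be added alongside `class` for a working configuration and are omitted here:

 [source,json]
 ----
 {
   "authentication": {
     "class": "solr.HadoopAuthPlugin"
   }
 }
 ----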
 
 [TIP]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
index 3c87f8c..c10d93b 100644
--- a/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
+++ b/solr/solr-ref-guide/src/implicit-requesthandlers.adoc
@@ -20,7 +20,6 @@
 
 Solr ships with many out-of-the-box RequestHandlers, which are called implicit because they are not configured in `solrconfig.xml`.
 
-[[ImplicitRequestHandlers-ListofImplicitlyAvailableEndpoints]]
 == List of Implicitly Available Endpoints
 
 // TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
@@ -44,19 +43,18 @@ Solr ships with many out-of-the-box RequestHandlers, which are called implicit b
 |`/debug/dump` |{solr-javadocs}/solr-core/org/apache/solr/handler/DumpRequestHandler.html[DumpRequestHandler] |`_DEBUG_DUMP` |Echo the request contents back to the client.
 |<<exporting-result-sets.adoc#exporting-result-sets,`/export`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/component/SearchHandler.html[SearchHandler] |`_EXPORT` |Export full sorted result sets.
 |<<realtime-get.adoc#realtime-get,`/get`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/RealTimeGetHandler.html[RealTimeGetHandler] |`_GET` |Real-time get: low-latency retrieval of the latest version of a document.
-|<<graph-traversal.adoc#GraphTraversal-ExportingGraphMLtoSupportGraphVisualization,`/graph`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/GraphHandler.html[GraphHandler] |`_ADMIN_GRAPH` |Return http://graphml.graphdrawing.org/[GraphML] formatted output from a <<graph-traversal.adoc#graph-traversal,`gather` `Nodes` streaming expression>>.
+|<<graph-traversal.adoc#exporting-graphml-to-support-graph-visualization,`/graph`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/GraphHandler.html[GraphHandler] |`_ADMIN_GRAPH` |Return http://graphml.graphdrawing.org/[GraphML] formatted output from a <<graph-traversal.adoc#graph-traversal,`gather` `Nodes` streaming expression>>.
 |<<index-replication.adoc#index-replication,`/replication`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/ReplicationHandler.html[ReplicationHandler] |`_REPLICATION` |Replicate indexes for SolrCloud recovery and Master/Slave index distribution.
 |<<schema-api.adoc#schema-api,`/schema`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/SchemaHandler.html[SchemaHandler] |`_SCHEMA` |Retrieve/modify Solr schema.
 |<<parallel-sql-interface.adoc#sql-request-handler,`/sql`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/SQLHandler.html[SQLHandler] |`_SQL` |Front end of the Parallel SQL interface.
-|<<streaming-expressions.adoc#StreamingExpressions-StreamingRequestsandResponses,`/stream`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/StreamHandler.html[StreamHandler] |`_STREAM` |Distributed stream processing.
-|<<the-terms-component.adoc#TheTermsComponent-UsingtheTermsComponentinaRequestHandler,`/terms`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/component/SearchHandler.html[SearchHandler] |`_TERMS` |Return a field's indexed terms and the number of documents containing each term.
+|<<streaming-expressions.adoc#streaming-requests-and-responses,`/stream`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/StreamHandler.html[StreamHandler] |`_STREAM` |Distributed stream processing.
+|<<the-terms-component.adoc#using-the-terms-component-in-a-request-handler,`/terms`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/component/SearchHandler.html[SearchHandler] |`_TERMS` |Return a field's indexed terms and the number of documents containing each term.
 |<<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,`/update`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE` |Add, delete and update indexed documents formatted as SolrXML, CSV, SolrJSON or javabin.
-|<<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-CSVUpdateConveniencePaths,`/update/csv`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_CSV` |Add and update CSV-formatted documents.
-|<<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-CSVUpdateConveniencePaths,`/update/json`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON` |Add, delete and update SolrJSON-formatted documents.
+|<<uploading-data-with-index-handlers.adoc#csv-update-convenience-paths,`/update/csv`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_CSV` |Add and update CSV-formatted documents.
+|<<uploading-data-with-index-handlers.adoc#csv-update-convenience-paths,`/update/json`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON` |Add, delete and update SolrJSON-formatted documents.
 |<<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,`/update/json/docs`>> |{solr-javadocs}/solr-core/org/apache/solr/handler/UpdateRequestHandler.html[UpdateRequestHandler] |`_UPDATE_JSON_DOCS` |Add and update custom JSON-formatted documents.
 |===
 
-[[ImplicitRequestHandlers-HowtoViewtheConfiguration]]
 == How to View the Configuration
 
 You can see the configuration for all request handlers, including the implicit request handlers, via the <<config-api.adoc#config-api,Config API>>. For example, for the `gettingstarted` collection:
@@ -71,7 +69,6 @@ To include the expanded paramset in the response, as well as the effective param
 
 `curl "http://localhost:8983/solr/gettingstarted/config/requestHandler?componentName=/export&expandParams=true"`
 
-[[ImplicitRequestHandlers-HowtoEdittheConfiguration]]
 == How to Edit the Configuration
 
 Because implicit request handlers are not present in `solrconfig.xml`, configuration of their associated `default`, `invariant` and `appends` parameters may be edited via the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>> using the paramset listed in the above table. However, other parameters, including SearchHandler components, may not be modified. The invariants and appends specified in the implicit configuration cannot be overridden.
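 For example, a sketch of defining the `_UPDATE_JSON_DOCS` paramset for the `gettingstarted` collection via the Request Parameters API (the `commitWithin` value is purely illustrative):

 [source,bash]
 ----
 curl http://localhost:8983/solr/gettingstarted/config/params -H 'Content-type:application/json' -d '{
   "set": {
     "_UPDATE_JSON_DOCS": {"commitWithin": 10000}
   }
 }'
 ----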