Posted to commits@lucene.apache.org by ct...@apache.org on 2017/07/14 18:35:01 UTC

[03/11] lucene-solr:branch_7_0: SOLR-11050: remove unneeded anchors for pages that have no incoming links from other pages

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/jvm-settings.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/jvm-settings.adoc b/solr/solr-ref-guide/src/jvm-settings.adoc
index 56560da..532e1a7 100644
--- a/solr/solr-ref-guide/src/jvm-settings.adoc
+++ b/solr/solr-ref-guide/src/jvm-settings.adoc
@@ -24,7 +24,6 @@ Configuring your JVM can be a complex topic and a full discussion is beyond the
 
 For more general information about improving Solr performance, see https://wiki.apache.org/solr/SolrPerformanceFactors.
 
-[[JVMSettings-ChoosingMemoryHeapSettings]]
 == Choosing Memory Heap Settings
 
 The most important JVM configuration settings are those that determine the amount of memory it is allowed to allocate. There are two primary command-line options that set memory limits for the JVM. These are `-Xms`, which sets the initial size of the JVM's memory heap, and `-Xmx`, which sets the maximum size to which the heap is allowed to grow.
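 
 As a sketch, a fixed heap can be requested when starting Solr (the `-m` option of `bin/solr` sets `-Xms` and `-Xmx` to the same value; the 4GB size shown here is only an example):
 
 [source,bash]
 ----
 # Start Solr with initial and maximum heap both set to 4GB
 bin/solr start -m 4g
 ----
 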
@@ -41,12 +40,10 @@ When setting the maximum heap size, be careful not to let the JVM consume all av
 
 On systems with many CPUs/cores, it can also be beneficial to tune the layout of the heap and/or the behavior of the garbage collector. Adjusting the relative sizes of the generational pools in the heap can affect how often GC sweeps occur and whether they run concurrently. Configuring the various settings of how the garbage collector should behave can greatly reduce the overall performance impact when it does run. There is a lot of good information on this topic available on Oracle's website. A good place to start is here: http://www.oracle.com/technetwork/java/javase/tech/index-jsp-140228.html[Oracle's Java HotSpot Garbage Collection].
 
-[[JVMSettings-UsetheServerHotSpotVM]]
 == Use the Server HotSpot VM
 
 If you are using Sun's JVM, add the `-server` command-line option when you start Solr. This tells the JVM that it should optimize for a long-running server process. If the Java runtime on your system is a JRE, rather than a full JDK distribution (including `javac` and other development tools), then it is possible that it may not support the `-server` JVM option. Test this by running `java -help` and looking for `-server` as an available option in the displayed usage message.
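 
 For example, the check might look like this (a sketch; the exact usage text varies by JVM vendor and version):
 
 [source,bash]
 ----
 # Look for -server in the launcher's usage message
 java -help 2>&1 | grep -- -server
 ----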
 
-[[JVMSettings-CheckingJVMSettings]]
 == Checking JVM Settings
 
 A great way to see what JVM settings your server is using, along with other useful information, is to use the admin RequestHandler, `solr/admin/system`. This request handler will display a wealth of server statistics and settings.
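 
 For example, with the `techproducts` example running locally, such a request might look like this sketch (the handler path is relative to the core):
 
 [source,bash]
 ----
 # Dump JVM, system, and Solr settings as JSON
 curl "http://localhost:8983/solr/techproducts/admin/system?wt=json"
 ----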

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
index da96316..a7fa9c1 100644
--- a/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/kerberos-authentication-plugin.adoc
@@ -29,17 +29,14 @@ Support for the Kerberos authentication plugin is available in SolrCloud mode or
 If you are using Solr with a Hadoop cluster secured with Kerberos and intend to store your Solr indexes in HDFS, also see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>> for additional steps to configure Solr for that purpose. The instructions on this page apply only to scenarios where Solr will be secured with Kerberos. If you only need to store your indexes in a Kerberized HDFS system, please see the other section referenced above.
 ====
 
-[[KerberosAuthenticationPlugin-HowSolrWorksWithKerberos]]
 == How Solr Works With Kerberos
 
 When setting up Solr to use Kerberos, configurations are put in place for Solr to use a _service principal_, or a Kerberos username, which is registered with the Key Distribution Center (KDC) to authenticate requests. The configurations define the service principal name and the location of the keytab file that contains the credentials.
 
-[[KerberosAuthenticationPlugin-security.json]]
 === security.json
 
 The Solr authentication model uses a file called `security.json`. A description of this file and how it is created and maintained is covered in the section <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Authentication and Authorization Plugins>>. If this file is created after an initial startup of Solr, a restart of each node of the system is required.
 
-[[KerberosAuthenticationPlugin-ServicePrincipalsandKeytabFiles]]
 === Service Principals and Keytab Files
 
 Each Solr node must have a service principal registered with the Key Distribution Center (KDC). The Kerberos plugin uses SPNego to negotiate authentication.
@@ -56,7 +53,6 @@ Along with the service principal, each Solr node needs a keytab file which shoul
 
 Since a Solr cluster requires internode communication, each node must also be able to make Kerberos enabled requests to other nodes. By default, Solr uses the same service principal and keytab as a 'client principal' for internode communication. You may configure a distinct client principal explicitly, but doing so is not recommended and is not covered in the examples below.
 
-[[KerberosAuthenticationPlugin-KerberizedZooKeeper]]
 === Kerberized ZooKeeper
 
 When setting up a kerberized SolrCloud cluster, it is recommended to enable Kerberos security for ZooKeeper as well.
@@ -65,15 +61,13 @@ In such a setup, the client principal used to authenticate requests with ZooKeep
 
 See the <<ZooKeeper Configuration>> section below for an example of starting ZooKeeper in Kerberos mode.
 
-[[KerberosAuthenticationPlugin-BrowserConfiguration]]
 === Browser Configuration
 
 In order for your browser to access the Solr Admin UI after enabling Kerberos authentication, it must be able to negotiate with the Kerberos authenticator service to allow you access. Each browser supports this differently, and some (like Chrome) do not support it at all. If you see 401 errors when trying to access the Solr Admin UI after enabling Kerberos authentication, it's likely your browser has not been configured properly to know how or where to negotiate the authentication request.
 
 Detailed information on how to set up your browser is beyond the scope of this documentation; please see your system administrators for Kerberos for details on how to configure your browser.
 
-[[KerberosAuthenticationPlugin-PluginConfiguration]]
-== Plugin Configuration
+== Kerberos Authentication Configuration
 
 .Consult Your Kerberos Admins!
 [WARNING]
@@ -97,7 +91,6 @@ We'll walk through each of these steps below.
 To use host names instead of IP addresses, use the `SOLR_HOST` configuration in `bin/solr.in.sh` or pass a `-Dhost=<hostname>` system parameter during Solr startup. This guide uses IP addresses. If you specify a hostname, replace all the IP addresses in the guide with the Solr hostname as appropriate.
 ====
 
-[[KerberosAuthenticationPlugin-GetServicePrincipalsandKeytabs]]
 === Get Service Principals and Keytabs
 
 Before configuring Solr, make sure you have a Kerberos service principal for each Solr host and ZooKeeper (if ZooKeeper has not already been configured) available in the KDC server, and generate a keytab file as shown below.
@@ -128,7 +121,6 @@ Copy the keytab file from the KDC server’s `/tmp/107.keytab` location to the S
 
 You might need to take similar steps to create a ZooKeeper service principal and keytab if it has not already been set up. In that case, the example below shows a different service principal for ZooKeeper, so the above might be repeated with `zookeeper/host1` as the service principal for one of the nodes.
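 
 With MIT Kerberos, creating a principal and exporting its keytab might look like the following sketch (run on the KDC host; the principal name and keytab path are illustrative):
 
 [source,bash]
 ----
 # Create a service principal with a random key, then export it to a keytab
 kadmin.local -q "addprinc -randkey HTTP/192.168.0.107@EXAMPLE.COM"
 kadmin.local -q "ktadd -k /tmp/107.keytab HTTP/192.168.0.107@EXAMPLE.COM"
 ----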
 
-[[KerberosAuthenticationPlugin-ZooKeeperConfiguration]]
 === ZooKeeper Configuration
 
 If you are using a ZooKeeper that has already been configured to use Kerberos, you can skip the ZooKeeper-related steps shown here.
@@ -173,7 +165,6 @@ Once all of the pieces are in place, start ZooKeeper with the following paramete
 bin/zkServer.sh start -Djava.security.auth.login.config=/etc/zookeeper/conf/jaas-client.conf
 ----
 
-[[KerberosAuthenticationPlugin-Createsecurity.json]]
 === Create security.json
 
 Create the `security.json` file.
@@ -194,7 +185,6 @@ More details on how to use a `/security.json` file in Solr are available in the
 If you already have a `/security.json` file in ZooKeeper, download the file, add or modify the authentication section and upload it back to ZooKeeper using the <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>> available in Solr.
 ====
 
-[[KerberosAuthenticationPlugin-DefineaJAASConfigurationFile]]
 === Define a JAAS Configuration File
 
 The JAAS configuration file defines the properties to use for authentication, such as the service principal and the location of the keytab file. Other properties can also be set to ensure ticket caching and other features.
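 
 A minimal JAAS file sketch (the section name, file path, keytab location, and principal are illustrative; the property names are those of `Krb5LoginModule`):
 
 [source,bash]
 ----
 # Write an illustrative JAAS config to disk
 cat > /etc/solr/jaas-client.conf <<'EOF'
 Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/keytabs/107.keytab"
   storeKey=true
   useTicketCache=true
   debug=true
   principal="HTTP/192.168.0.107@EXAMPLE.COM";
 };
 EOF
 ----
 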
@@ -227,7 +217,6 @@ The main properties we are concerned with are the `keyTab` and `principal` prope
 * `debug`: this boolean property will output debug messages for help in troubleshooting.
 * `principal`: the name of the service principal to be used.
 
-[[KerberosAuthenticationPlugin-SolrStartupParameters]]
 === Solr Startup Parameters
 
 While starting up Solr, the following host-specific parameters need to be passed. These parameters can be passed at the command line with the `bin/solr` start command (see <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>> for details on how to pass system parameters) or defined in `bin/solr.in.sh` or `bin/solr.in.cmd` as appropriate for your operating system.
@@ -252,7 +241,6 @@ The app name (section name) within the JAAS configuration file which is required
 `java.security.auth.login.config`::
 Path to the JAAS configuration file for configuring a Solr client for internode communication. This parameter is required.
 
-
 Here is an example that could be added to `bin/solr.in.sh`. Make sure to change this example to use the right hostname and the keytab file path.
 
 [source,bash]
@@ -273,7 +261,6 @@ For Java 1.8, this is available here: http://www.oracle.com/technetwork/java/jav
 Replace the `local_policy.jar` present in `JAVA_HOME/jre/lib/security/` with the new `local_policy.jar` from the downloaded package and restart the Solr node.
 ====
 
-[[KerberosAuthenticationPlugin-UsingDelegationTokens]]
 === Using Delegation Tokens
 
 The Kerberos plugin can be configured to use delegation tokens, which allow an application to reuse the authentication of an end-user or another application.
@@ -304,7 +291,6 @@ The ZooKeeper path where the secret provider information is stored. This is in t
 `solr.kerberos.delegation.token.secret.manager.znode.working.path`::
 The ZooKeeper path where token information is stored. This is in the form of the path + /security/zkdtsm. The path can include the chroot or the chroot can be omitted if you are not using it. This example includes the chroot: `server1:9983,server2:9983,server3:9983/solr/security/zkdtsm`.
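 
 As a sketch, delegation token support might be switched on in `bin/solr.in.sh` with the `enabled` flag alone, leaving the other token parameters at their defaults:
 
 [source,bash]
 ----
 # Enable delegation tokens for the Kerberos plugin
 SOLR_OPTS="$SOLR_OPTS -Dsolr.kerberos.delegation.token.enabled=true"
 ----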
 
-[[KerberosAuthenticationPlugin-StartSolr]]
 === Start Solr
 
 Once the configuration is complete, you can start Solr with the `bin/solr` script, as in the example below, which is for users in SolrCloud mode only. This example assumes you modified `bin/solr.in.sh` or `bin/solr.in.cmd`, with the proper values, but if you did not, you would pass the system parameters along with the start command. Note you also need to customize the `-z` property as appropriate for the location of your ZooKeeper nodes.
@@ -314,7 +300,6 @@ Once the configuration is complete, you can start Solr with the `bin/solr` scrip
 bin/solr -c -z server1:2181,server2:2181,server3:2181/solr
 ----
 
-[[KerberosAuthenticationPlugin-TesttheConfiguration]]
 === Test the Configuration
 
 . Do a `kinit` with your username. For example, `kinit \user@EXAMPLE.COM`.
@@ -325,7 +310,6 @@ bin/solr -c -z server1:2181,server2:2181,server3:2181/solr
 curl --negotiate -u : "http://192.168.0.107:8983/solr/"
 ----
 
-[[KerberosAuthenticationPlugin-UsingSolrJwithaKerberizedSolr]]
 == Using SolrJ with a Kerberized Solr
 
 To use Kerberos authentication in a SolrJ application, you need the following two lines before you create a SolrClient:
@@ -353,7 +337,6 @@ SolrJClient {
 };
 ----
 
-[[KerberosAuthenticationPlugin-DelegationTokenswithSolrJ]]
 === Delegation Tokens with SolrJ
 
 Delegation tokens are also supported with SolrJ, in the following ways:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
index 1ed8eea..e7becd7 100644
--- a/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
+++ b/solr/solr-ref-guide/src/local-parameters-in-queries.adoc
@@ -32,7 +32,6 @@ We can prefix this query string with local parameters to provide more informatio
 
 These local parameters would change the query to require a match on both "solr" and "rocks" while searching the "title" field by default.
 
-[[LocalParametersinQueries-BasicSyntaxofLocalParameters]]
 == Basic Syntax of Local Parameters
 
 To specify a local parameter, insert the following before the argument to be modified:
@@ -45,7 +44,6 @@ To specify a local parameter, insert the following before the argument to be mod
 
 You may specify only one local parameters prefix per argument. Values in the key-value pairs may be quoted via single or double quotes, and backslash escaping works within quoted strings.
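 
 For example, both quoting styles can appear in one prefix (an illustrative sketch): `q={!dismax qf="title body" v='solr rocks'}`.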
 
-[[LocalParametersinQueries-QueryTypeShortForm]]
 == Query Type Short Form
 
 If a local parameter value appears without a name, it is given the implicit name of "type". This allows short-form representation for the type of query parser to use when parsing a query string. Thus
@@ -74,7 +72,6 @@ is equivalent to
 
 `q={!type=dismax qf=myfield v='solr rocks'}`
 
-[[LocalParametersinQueries-ParameterDereferencing]]
 == Parameter Dereferencing
 
 Parameter dereferencing, or indirection, lets you use the value of another argument rather than specifying it directly. This can be used to simplify queries, decouple user input from query parameters, or decouple front-end GUI parameters from defaults set in `solrconfig.xml`.
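 
 For example (a sketch), `q={!dismax qf=myfield v=$qq}&qq=solr rocks` resolves `v` to the value of the separate `qq` parameter.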

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/logging.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/logging.adoc b/solr/solr-ref-guide/src/logging.adoc
index d44dcad..8b847f7 100644
--- a/solr/solr-ref-guide/src/logging.adoc
+++ b/solr/solr-ref-guide/src/logging.adoc
@@ -27,7 +27,6 @@ image::images/logging/logging.png[image,width=621,height=250]
 
 While this example shows logged messages for only one core, if you have multiple cores in a single instance, they will each be listed, with the level for each.
 
-[[Logging-SelectingaLoggingLevel]]
 == Selecting a Logging Level
 
 When you select the *Level* link on the left, you see the hierarchy of classpaths and classnames for your instance. A row highlighted in yellow indicates that the class has logging capabilities. Click on a highlighted row, and a menu will appear to allow you to change the log level for that class. Characters in boldface indicate that the class will not be affected by level changes to root.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/managed-resources.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
index 72b879a..14fcffd 100644
--- a/solr/solr-ref-guide/src/managed-resources.adoc
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -38,8 +38,7 @@ bin/solr -e techproducts
 
 Let's begin learning about managed resources by looking at a couple of examples provided by Solr for managing stop words and synonyms using a REST API. After reading this section, you'll be ready to dig into the details of how managed resources are implemented in Solr so you can start building your own implementation.
 
-[[ManagedResources-Stopwords]]
-=== Stop Words
+=== Managing Stop Words
 
 To begin, you need to define a field type that uses the <<filter-descriptions.adoc#FilterDescriptions-ManagedStopFilter,ManagedStopFilterFactory>>, such as:
 
@@ -134,8 +133,7 @@ curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/stopword
 
 NOTE: PUT/POST is used to add terms to an existing list instead of replacing the list entirely. This is because it is more common to add a term to an existing list than it is to replace a list altogether, so the API favors the more common approach of incrementally adding terms, especially since deleting individual terms is also supported.
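 
 For example, a term might be added with a PUT request (a sketch assuming the managed list is named `english`, as in the techproducts configset):
 
 [source,bash]
 ----
 # Add the term "foo" to the managed English stop word list
 curl -X PUT -H 'Content-type:application/json' --data-binary '["foo"]' \
  "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english"
 ----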
 
-[[ManagedResources-Synonyms]]
-=== Synonyms
+=== Managing Synonyms
 
 For the most part, the API for managing synonyms behaves similarly to the API for stop words, except instead of working with a list of words, it uses a map, where the value for each entry in the map is a set of synonyms for a term. As with stop words, the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> includes a pre-built set of synonym mappings suitable for the sample data that is activated by the following field type definition in schema.xml:
 
@@ -209,8 +207,7 @@ Note that the expansion is performed when processing the PUT request so the unde
 
 Lastly, you can delete a mapping by sending a DELETE request to the managed endpoint.
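 
 For example (a sketch assuming the managed map is named `english`, as in the techproducts configset):
 
 [source,bash]
 ----
 # Remove the mapping stored under the term "happy"
 curl -X DELETE "http://localhost:8983/solr/techproducts/schema/analysis/synonyms/english/happy"
 ----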
 
-[[ManagedResources-ApplyingChanges]]
-== Applying Changes
+== Applying Managed Resource Changes
 
 Changes made to managed resources via this REST API are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
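 
 For example, a reload might be triggered as follows (a sketch; the first form is for SolrCloud collections, the second for a standalone core):
 
 [source,bash]
 ----
 # SolrCloud: reload the collection
 curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts"
 # Standalone: reload the core
 curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=techproducts"
 ----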
 
@@ -227,7 +224,6 @@ However, the intent of this API implementation is that changes will be applied u
 Changing things like stop words and synonym mappings typically requires re-indexing existing documents if they are used by index-time analyzers. The RestManager framework does not guard you from this; it simply makes it possible to programmatically build up a set of stop words, synonyms, etc.
 ====
 
-[[ManagedResources-RestManagerEndpoint]]
 == RestManager Endpoint
 
 Metadata about registered ManagedResources is available using the `/schema/managed` endpoint for each collection.
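 
 For example (a sketch using the techproducts example):
 
 [source,bash]
 ----
 # List metadata about all registered managed resources
 curl "http://localhost:8983/solr/techproducts/schema/managed"
 ----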

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/mbean-request-handler.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/mbean-request-handler.adoc b/solr/solr-ref-guide/src/mbean-request-handler.adoc
index eebd082..8a3b918 100644
--- a/solr/solr-ref-guide/src/mbean-request-handler.adoc
+++ b/solr/solr-ref-guide/src/mbean-request-handler.adoc
@@ -34,8 +34,7 @@ Specifies whether statistics are returned with results. You can override the `st
 `wt`::
 The output format. This operates the same as the <<response-writers.adoc#response-writers,`wt` parameter in a query>>. The default is `xml`.
 
-[[MBeanRequestHandler-Examples]]
-== Examples
+== MBeanRequestHandler Examples
 
 The following examples assume you are running Solr's `techproducts` example configuration:
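 
 One such request might look like this sketch:
 
 [source,bash]
 ----
 # Return all MBeans with statistics, formatted as JSON
 curl "http://localhost:8983/solr/techproducts/admin/mbeans?stats=true&wt=json"
 ----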
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/merging-indexes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/merging-indexes.adoc b/solr/solr-ref-guide/src/merging-indexes.adoc
index 49afe4e..1c11851 100644
--- a/solr/solr-ref-guide/src/merging-indexes.adoc
+++ b/solr/solr-ref-guide/src/merging-indexes.adoc
@@ -27,7 +27,6 @@ To merge indexes, they must meet these requirements:
 
 Optimally, the two indexes should be built using the same schema.
 
-[[MergingIndexes-UsingIndexMergeTool]]
 == Using IndexMergeTool
 
 To merge the indexes, do the following:
@@ -43,7 +42,6 @@ java -cp $SOLR/server/solr-webapp/webapp/WEB-INF/lib/lucene-core-VERSION.jar:$SO
 This will create a new index at `/path/to/newindex` that contains both index1 and index2.
 . Copy this new directory to the location of your application's solr index (move the old one aside first, of course) and start Solr.
 
-[[MergingIndexes-UsingCoreAdmin]]
 == Using CoreAdmin
 
 The `MERGEINDEXES` command of the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
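 
 For example, merging two existing cores into a third might look like this sketch (core names are illustrative):
 
 [source,bash]
 ----
 # Merge the indexes of core1 and core2 into core0
 curl "http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=core0&srcCore=core1&srcCore=core2"
 ----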

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/morelikethis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/morelikethis.adoc b/solr/solr-ref-guide/src/morelikethis.adoc
index e0756cb..a5bdb4f 100644
--- a/solr/solr-ref-guide/src/morelikethis.adoc
+++ b/solr/solr-ref-guide/src/morelikethis.adoc
@@ -28,7 +28,6 @@ The second is to use it as a search component. This is less desirable since it p
 
 The final approach is to use it as a request handler but with externally supplied text. This case, also referred to as the MoreLikeThisHandler, will supply information about similar documents in the index based on the text of the input document.
 
-[[MoreLikeThis-HowMoreLikeThisWorks]]
 == How MoreLikeThis Works
 
 `MoreLikeThis` constructs a Lucene query based on terms in a document. It does this by pulling terms from the defined list of fields (see the `mlt.fl` parameter, below). For best results, the fields should have stored term vectors in `schema.xml`. For example:
@@ -42,7 +41,6 @@ If term vectors are not stored, `MoreLikeThis` will generate terms from stored f
 
 The next phase filters terms from the original document using thresholds defined with the MoreLikeThis parameters. Finally, a query is run with these terms and any other query parameters that have been defined (see the `mlt.qf` parameter, below), and a new document set is returned.
 
-[[MoreLikeThis-CommonParametersforMoreLikeThis]]
 == Common Parameters for MoreLikeThis
 
 The table below summarizes the `MoreLikeThis` parameters supported by Lucene/Solr. These parameters can be used with any of the three possible MoreLikeThis approaches.
@@ -77,8 +75,6 @@ Specifies if the query will be boosted by the interesting term relevance. It can
 `mlt.qf`::
 Query fields and their boosts using the same format as that used by the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax Query Parser>>. These fields must also be specified in `mlt.fl`.
 
-
-[[MoreLikeThis-ParametersfortheMoreLikeThisComponent]]
 == Parameters for the MoreLikeThisComponent
 
 Using MoreLikeThis as a search component returns similar documents for each document in the response set. In addition to the common parameters, these additional options are available:
@@ -89,8 +85,6 @@ If set to `true`, activates the `MoreLikeThis` component and enables Solr to ret
 `mlt.count`::
 Specifies the number of similar documents to be returned for each result. The default value is 5.
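 
 A hedged example using these options with the techproducts example (the `mlt` component is among the default search components, so `mlt=true` activates it on `/select`):
 
 [source,bash]
 ----
 # Return 3 similar documents for each result of the query
 curl "http://localhost:8983/solr/techproducts/select?q=apache&mlt=true&mlt.fl=manu,cat&mlt.count=3"
 ----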
 
-
-[[MoreLikeThis-ParametersfortheMoreLikeThisHandler]]
 == Parameters for the MoreLikeThisHandler
 
 The table below summarizes parameters accessible through the `MoreLikeThisHandler`. It supports faceting, paging, and filtering using common query parameters, but does not work well with alternate query parsers.
@@ -105,7 +99,6 @@ Specifies an offset into the main query search results to locate the document on
 Controls how the `MoreLikeThis` component presents the "interesting" terms (the top TF/IDF terms) for the query. It supports three settings: `list` lists the terms; `none` lists no terms; and `details` lists the terms along with the boost value used for each term. Unless `mlt.boost=true`, all terms will have `boost=1.0`.
 
 
-[[MoreLikeThis-MoreLikeThisQueryParser]]
-== More Like This Query Parser
+== MoreLikeThis Query Parser
 
 The `mlt` query parser provides a mechanism to retrieve documents similar to a given document, like the handler. More information on the usage of the mlt query parser can be found in the section <<other-parsers.adoc#other-parsers,Other Parsers>>.
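 
 For example (a sketch): `q={!mlt qf=name}SP2514N` returns documents similar to the document whose unique key is `SP2514N`.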

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/near-real-time-searching.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/near-real-time-searching.adoc b/solr/solr-ref-guide/src/near-real-time-searching.adoc
index fe0e449..8727387 100644
--- a/solr/solr-ref-guide/src/near-real-time-searching.adoc
+++ b/solr/solr-ref-guide/src/near-real-time-searching.adoc
@@ -26,7 +26,6 @@ With NRT, you can modify a `commit` command to be a *soft commit*, which avoids
 
 However, pay special attention to cache and autowarm settings as they can have a significant impact on NRT performance.
 
-[[NearRealTimeSearching-CommitsandOptimizing]]
 == Commits and Optimizing
 
 A commit operation makes index changes visible to new search requests. A *hard commit* uses the transaction log to get the id of the latest document changes, and also calls `fsync` on the index files to ensure they have been flushed to stable storage and no data loss will result from a power failure. The current transaction log is closed and a new one is opened. See the "transaction log" discussion below for data loss issues.
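 
 For example, an explicit hard commit can be attached to an update request (a sketch; `my_collection` is illustrative):
 
 [source,bash]
 ----
 # Issue an empty update that forces a hard commit
 curl "http://localhost:8983/solr/my_collection/update?commit=true"
 ----
 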
@@ -45,7 +44,6 @@ The number of milliseconds to wait before pushing documents to the index. It wor
 
 Use `maxDocs` and `maxTime` judiciously to fine-tune your commit strategies.
 
-[[NearRealTimeSearching-TransactionLogs]]
 === Transaction Logs (tlogs)
 
 Transaction logs are a "rolling window" of at least the last `N` (default 100) documents indexed. Tlogs are configured in solrconfig.xml, including the value of `N`. The current transaction log is closed and a new one opened each time any variety of hard commit occurs. Soft commits have no effect on the transaction log.
@@ -54,7 +52,6 @@ When tlogs are enabled, documents being added to the index are written to the tl
 
 When Solr is shut down gracefully (e.g., using the `bin/solr stop` command and the like), Solr will close the tlog file and index segments so no replay will be necessary on startup.
 
-[[NearRealTimeSearching-AutoCommits]]
 === AutoCommits
 
 An autocommit also uses the parameters `maxDocs` and `maxTime`. However, it's useful in many strategies to use both a hard `autocommit` and `autosoftcommit` to achieve more flexible commits.
@@ -72,7 +69,6 @@ For example:
 
 It's better to use `maxTime` rather than `maxDocs` to modify an `autoSoftCommit`, especially when indexing a large number of documents through the commit operation. It's also better to turn off `autoSoftCommit` for bulk indexing.
 
-[[NearRealTimeSearching-OptionalAttributesforcommitandoptimize]]
 === Optional Attributes for commit and optimize
 
 `waitSearcher`::
@@ -99,7 +95,6 @@ Example of `commit` and `optimize` with optional attributes:
 <optimize waitSearcher="false"/>
 ----
 
-[[NearRealTimeSearching-PassingcommitandcommitWithinparametersaspartoftheURL]]
 === Passing commit and commitWithin Parameters as Part of the URL
 
 Update handlers can also get `commit`-related parameters as part of the update URL. This example adds a small test document and causes an explicit commit to happen immediately afterwards:
@@ -132,8 +127,9 @@ curl http://localhost:8983/solr/my_collection/update?commitWithin=10000
  -H "Content-Type: text/xml" --data-binary '<add><doc><field name="id">testdoc</field></doc></add>'
 ----
 
-[[NearRealTimeSearching-ChangingdefaultcommitWithinBehavior]]
-=== Changing default commitWithin Behavior
+WARNING: While the `stream.body` feature is great for development and testing, it should normally not be enabled in production systems, as it lets a user with READ permissions post data that may alter the system state. The feature is disabled by default. See <<requestdispatcher-in-solrconfig.adoc#RequestDispatcherinSolrConfig-requestParsersElement,RequestDispatcher in SolrConfig>> for details.
+
+=== Changing Default commitWithin Behavior
 
 The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/post-tool.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/post-tool.adoc b/solr/solr-ref-guide/src/post-tool.adoc
index 80e74d4..e0391af 100644
--- a/solr/solr-ref-guide/src/post-tool.adoc
+++ b/solr/solr-ref-guide/src/post-tool.adoc
@@ -31,7 +31,6 @@ bin/post -c gettingstarted example/films/films.json
 
 This will contact the server at `localhost:8983`. Specifying the `collection/core name` is *mandatory*. The `-help` (or simply `-h`) option will output information on its usage (i.e., `bin/post -help`).
 
-
 == Using the bin/post Tool
 
 Specifying either the `collection/core name` or the full update `url` is *mandatory* when using `bin/post`.
@@ -74,8 +73,7 @@ OPTIONS
 ...
 ----
 
-[[bin_post_examples]]
-== Examples
+== Examples Using bin/post
 
 There are several ways to use `bin/post`. This section presents several examples.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/request-parameters-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/request-parameters-api.adoc b/solr/solr-ref-guide/src/request-parameters-api.adoc
index 45275d0..81646a7 100644
--- a/solr/solr-ref-guide/src/request-parameters-api.adoc
+++ b/solr/solr-ref-guide/src/request-parameters-api.adoc
@@ -33,12 +33,10 @@ When might you want to use this feature?
 * To mix and match parameter sets at request time.
 * To avoid a reload of your collection for small parameter changes.
 
-[[RequestParametersAPI-TheRequestParametersEndpoint]]
 == The Request Parameters Endpoint
 
 All requests are sent to the `/config/params` endpoint of the Config API.
 
-[[RequestParametersAPI-SettingRequestParameters]]
 == Setting Request Parameters
 
 The request to set, unset, or update request parameters is sent as a set of Maps with names. These objects can be directly used in a request or a request handler definition.
@@ -88,7 +86,6 @@ curl http://localhost:8983/solr/techproducts/config/params -H 'Content-type:appl
 }'
 ----
 
-[[RequestParametersAPI-UsingRequestParameterswithRequestHandlers]]
 == Using Request Parameters with RequestHandlers
 
 After creating the `my_handler_params` paramset in the above section, it is possible to define a request handler as follows:
@@ -119,12 +116,10 @@ It will be equivalent to a standard request handler definition such as this one:
 </requestHandler>
 ----
 
-[[RequestParametersAPI-ImplicitRequestHandlers]]
-=== Implicit RequestHandlers
+=== Implicit RequestHandlers with the Request Parameters API
 
 Solr ships with many out-of-the-box request handlers that may only be configured via the Request Parameters API, because their configuration is not present in `solrconfig.xml`. See <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for the paramset to use when configuring an implicit request handler.
 
-[[RequestParametersAPI-ViewingExpandedParamsetsandEffectiveParameterswithRequestHandlers]]
 === Viewing Expanded Paramsets and Effective Parameters with RequestHandlers
 
 To see the expanded paramset and the resulting effective parameters for a RequestHandler defined with `useParams`, use the `expandParams` request param. E.g. for the `/export` request handler:
@@ -134,7 +129,6 @@ To see the expanded paramset and the resulting effective parameters for a Reques
 curl http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/export&expandParams=true
 ----
 
-[[RequestParametersAPI-ViewingRequestParameters]]
 == Viewing Request Parameters
 
 To see the paramsets that have been created, you can use the `/config/params` endpoint to read the contents of `params.json`, or use the name in the request:
@@ -147,7 +141,6 @@ curl http://localhost:8983/solr/techproducts/config/params
 curl http://localhost:8983/solr/techproducts/config/params/myQueries
 ----
 
-[[RequestParametersAPI-TheuseParamsParameter]]
 == The useParams Parameter
 
 When making a request, the `useParams` parameter specifies the paramset (or comma-separated paramsets) to apply to the request. These are translated at request time into the actual parameters.
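 
 For example (a sketch using the `myQueries` paramset shown above): `q=*:*&useParams=myQueries`.
 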
@@ -192,12 +185,10 @@ To summarize, parameters are applied in this order:
 * parameter sets defined in `params.json` that have been defined in the request handler.
 * parameters defined in `<defaults>` in `solrconfig.xml`.
 
-[[RequestParametersAPI-PublicAPIs]]
 == Public APIs
 
 The RequestParams Object can be accessed using the method `SolrConfig#getRequestParams()`. Each paramset can be accessed by its name using the method `RequestParams#getRequestParams(String name)`.
 
-[[RequestParametersAPI-Examples]]
-== Examples
+== Examples Using the Request Parameters API
 
-The Solr "films" example demonstrates the use of the parameters API. See https://github.com/apache/lucene-solr/tree/master/solr/example/films for details.
+The Solr "films" example demonstrates the use of the parameters API. You can use this example in your Solr installation (in the `example/films` directory) or view the files in the Apache GitHub mirror at https://github.com/apache/lucene-solr/tree/master/solr/example/films.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/result-clustering.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-clustering.adoc b/solr/solr-ref-guide/src/result-clustering.adoc
index db9a43c..c9bdf63 100644
--- a/solr/solr-ref-guide/src/result-clustering.adoc
+++ b/solr/solr-ref-guide/src/result-clustering.adoc
@@ -28,8 +28,7 @@ image::images/result-clustering/carrot2.png[image,width=900]
 
 The query issued to the system was _Solr_. It seems clear that faceting could not yield a similar set of groups, although the goals of both techniques are similar—to let the user explore the set of search results and either rephrase the query or narrow the focus to a subset of current documents. Clustering is also similar to <<result-grouping.adoc#result-grouping,Result Grouping>> in that it can help to look deeper into search results, beyond the top few hits.
 
-[[ResultClustering-PreliminaryConcepts]]
-== Preliminary Concepts
+== Clustering Concepts
 
 Each *document* passed to the clustering component is composed of several logical parts:
 
@@ -39,12 +38,11 @@ Each *document* passed to the clustering component is composed of several logica
 * the main content,
 * a language code of the title and content.
 
-The identifier part is mandatory, everything else is optional but at least one of the text fields (title or content) will be required to make the clustering process reasonable. It is important to remember that logical document parts must be mapped to a particular schema and its fields. The content (text) for clustering can be sourced from either a stored text field or context-filtered using a highlighter, all these options are explained below in the <<ResultClustering-Configuration,configuration>> section.
+The identifier part is mandatory; everything else is optional, but at least one of the text fields (title or content) is required to make the clustering process reasonable. It is important to remember that logical document parts must be mapped to a particular schema and its fields. The content (text) for clustering can be sourced from either a stored text field or context-filtered using a highlighter; all these options are explained below in the <<Clustering Configuration,configuration>> section.
 
 A *clustering algorithm* is the actual logic (implementation) that discovers relationships among the documents in the search result and forms human-readable cluster labels. Depending on the choice of the algorithm, the clusters may (and probably will) vary. Solr comes with several algorithms implemented in the open source http://carrot2.org[Carrot2] project; commercial alternatives also exist.
 
-[[ResultClustering-QuickStartExample]]
-== Quick Start Example
+== Clustering Quick Start Example
 
 The "```techproducts```" example included with Solr is pre-configured with all the necessary components for result clustering -- but they are disabled by default.
 
@@ -137,16 +135,13 @@ There were a few clusters discovered for this query (`\*:*`), separating search
 
 Depending on the quality of input documents, some clusters may not make much sense. Some documents may be left out and not be clustered at all; these will be assigned to the synthetic _Other Topics_ group, marked with the `other-topics` property set to `true` (see the XML dump above for an example). The score of the other topics group is zero.
 
-[[ResultClustering-Installation]]
-== Installation
+== Installing the Clustering Contrib
 
 The clustering contrib extension requires `dist/solr-clustering-*.jar` and all JARs under `contrib/clustering/lib`.
 
-[[ResultClustering-Configuration]]
-== Configuration
+== Clustering Configuration
 
-[[ResultClustering-DeclarationoftheSearchComponentandRequestHandler]]
-=== Declaration of the Search Component and Request Handler
+=== Declaration of the Clustering Search Component and Request Handler
 
 The clustering extension is a search component and must be declared in `solrconfig.xml`. Such a component can then be appended to a request handler as the last component in the chain (because it requires search results which must be previously fetched by the search component).
 
@@ -205,8 +200,6 @@ An example configuration could look as shown below.
 </requestHandler>
 ----
 
-
-[[ResultClustering-ConfigurationParametersoftheClusteringComponent]]
 === Configuration Parameters of the Clustering Component
 
 The following parameters of each clustering engine or the entire clustering component (depending on where they are declared) are available.
@@ -237,7 +230,6 @@ If `true` and the algorithm supports hierarchical clustering, sub-clusters will
 `carrot.numDescriptions`::
 Maximum number of per-cluster labels to return (if the algorithm assigns more than one label to a cluster).
 
-
 The `carrot.algorithm` parameter should contain a fully qualified class name of an algorithm supported by the http://project.carrot2.org[Carrot2] framework. Currently, the following algorithms are available:
 
 * `org.carrot2.clustering.lingo.LingoClusteringAlgorithm` (open source)
@@ -253,7 +245,6 @@ For a comparison of characteristics of these algorithms see the following links:
 
 The question of which algorithm to choose depends on the amount of traffic (STC is faster than Lingo, but arguably produces less intuitive clusters; Lingo3G is the fastest algorithm but is not free or open source), expected result (Lingo3G provides hierarchical clusters, Lingo and STC provide flat clusters), and the input data (each algorithm will cluster the input slightly differently). There is no single answer as to which algorithm is "the best".
 
-[[ResultClustering-ContextualandFullFieldClustering]]
 === Contextual and Full Field Clustering
 
 The clustering engine can apply clustering to the full content of (stored) fields or it can run an internal highlighter pass to extract context-snippets before clustering. Highlighting is recommended when the logical snippet field contains a lot of content (this would affect clustering performance). Highlighting can also increase the quality of clustering because the content passed to the algorithm will be more focused around the query (it will be query-specific context). The following parameters control the internal highlighter.
@@ -266,10 +257,9 @@ The size, in characters, of the snippets (aka fragments) created by the highligh
 
 `carrot.summarySnippets`:: The number of summary snippets to generate for clustering. If not specified, the default highlighting snippet count (`hl.snippets`) will be used.
 
-[[ResultClustering-LogicaltoDocumentFieldMapping]]
 === Logical to Document Field Mapping
 
-As already mentioned in <<ResultClustering-PreliminaryConcepts,Preliminary Concepts>>, the clustering component clusters "documents" consisting of logical parts that need to be mapped onto physical schema of data stored in Solr. The field mapping attributes provide a connection between fields and logical document parts. Note that the content of title and snippet fields must be *stored* so that it can be retrieved at search time.
+As already mentioned in <<Clustering Concepts>>, the clustering component clusters "documents" consisting of logical parts that need to be mapped onto the physical schema of the data stored in Solr. The field mapping attributes provide a connection between fields and logical document parts. Note that the content of title and snippet fields must be *stored* so that it can be retrieved at search time.
 
 `carrot.title`::
 The field (alternatively comma- or space-separated list of fields) that should be mapped to the logical document's title. The clustering algorithms typically give more weight to the content of the title field compared to the content (snippet). For best results, the field should contain concise, noise-free content. If there is no clear title in your data, you can leave this parameter blank.
@@ -280,7 +270,6 @@ The field (alternatively comma- or space-separated list of fields) that should b
 `carrot.url`::
 The field that should be mapped to the logical document's content URL. Leave blank if not required.
 
-[[ResultClustering-ClusteringMultilingualContent]]
 === Clustering Multilingual Content
 
 The field mapping specification can include a `carrot.lang` parameter, which defines the field that stores http://www.loc.gov/standards/iso639-2/php/code_list.php[ISO 639-1] code of the language in which the title and content of the document are written. This information can be stored in the index based on apriori knowledge of the documents' source or a language detection filter applied at indexing time. All algorithms inside the Carrot2 framework will accept ISO codes of languages defined in https://github.com/carrot2/carrot2/blob/master/core/carrot2-core/src/org/carrot2/core/LanguageCode.java[LanguageCode enum].
@@ -295,15 +284,13 @@ A mapping of arbitrary strings into ISO 639 two-letter codes used by `carrot.lan
 
 The default language can also be set using Carrot2-specific algorithm attributes (in this case the http://doc.carrot2.org/#section.attribute.lingo.MultilingualClustering.defaultLanguage[MultilingualClustering.defaultLanguage] attribute).
 
-[[ResultClustering-TweakingAlgorithmSettings]]
 == Tweaking Algorithm Settings
 
 The algorithms that come with Solr use their default settings, which may be inadequate for some data sets. All algorithms have lexical resources and attributes (stop words, stemmers, parameters) that may require tweaking to produce better clusters (and cluster labels). For Carrot2-based algorithms it is probably best to refer to a dedicated tuning application called Carrot2 Workbench (screenshot below). From this application one can export a set of algorithm attributes as an XML file, which can then be placed under the location pointed to by `carrot.resourcesDir`.
 
 image::images/result-clustering/carrot2-workbench.png[image,scaledwidth=75.0%]
 
-[[ResultClustering-ProvidingDefaults]]
-=== Providing Defaults
+=== Providing Defaults for Clustering
 
 The default attributes for all engines (algorithms) declared in the clustering component are placed under `carrot.resourcesDir` and with an expected file name of `engineName-attributes.xml`. So for an engine named `lingo` and the default value of `carrot.resourcesDir`, the attributes would be read from a file in `conf/clustering/carrot2/lingo-attributes.xml`.
 
@@ -323,8 +310,7 @@ An example XML file changing the default language of documents to Polish is show
 </attribute-sets>
 ----
 
-[[ResultClustering-TweakingatQuery-Time]]
-=== Tweaking at Query-Time
+=== Tweaking Algorithms at Query-Time
 
 The clustering component and Carrot2 clustering algorithms can accept query-time attribute overrides. Note that certain things (for example lexical resources) can only be initialized once (at startup, via the XML configuration files).
 
@@ -332,8 +318,7 @@ An example query that changes the `LingoClusteringAlgorithm.desiredClusterCountB
 
 The clustering engine (the algorithm declared in `solrconfig.xml`) can also be changed at runtime by passing a `clustering.engine=name` request parameter: http://localhost:8983/solr/techproducts/clustering?q=*:*&rows=100&clustering.engine=kmeans
 
-[[ResultClustering-PerformanceConsiderations]]
-== Performance Considerations
+== Performance Considerations with Dynamic Clustering
 
 Dynamic clustering of search results comes with two major performance penalties:
 
@@ -349,7 +334,6 @@ For simple queries, the clustering time will usually dominate the fetch time. If
 
 Some of these techniques are described in _Apache SOLR and Carrot2 integration strategies_ document, available at http://carrot2.github.io/solr-integration-strategies. The topic of improving performance is also included in the Carrot2 manual at http://doc.carrot2.org/#section.advanced-topics.fine-tuning.performance.
 
-[[ResultClustering-AdditionalResources]]
 == Additional Resources
 
 The following resources provide additional information about the clustering component in Solr and its potential applications.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/result-grouping.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-grouping.adoc b/solr/solr-ref-guide/src/result-grouping.adoc
index 89b3c33..a0bb076 100644
--- a/solr/solr-ref-guide/src/result-grouping.adoc
+++ b/solr/solr-ref-guide/src/result-grouping.adoc
@@ -54,8 +54,7 @@ Object 3
 
 If you ask Solr to group these documents by "product_range", then the total number of groups is 2, but the facets for ppm are 2 for 62 and 1 for 65.
 
-[[ResultGrouping-RequestParameters]]
-== Request Parameters
+== Grouping Parameters
 
 Result Grouping takes the following request parameters. Any number of these request parameters can be included in a single request:
 
@@ -68,7 +67,7 @@ The name of the field by which to group results. The field must be single-valued
 `group.func`::
 Group based on the unique values of a function query.
 +
-NOTE: This option does not work with <<ResultGrouping-DistributedResultGroupingCaveats,distributed searches>>.
+NOTE: This option does not work with <<Distributed Result Grouping Caveats,distributed searches>>.
 
 `group.query`::
 Return a single group of documents that match the given query.
@@ -100,7 +99,7 @@ If `true`, the result of the first field grouping command is used as the main re
 `group.ngroups`::
 If `true`, Solr includes the number of groups that have matched the query in the results. The default value is false.
 +
-See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Result Grouping Caveats>> when using sharded indexes
+See below for <<Distributed Result Grouping Caveats>> when using sharded indexes.
 
 `group.truncate`::
 If `true`, facet counts are based on the most relevant document of each group matching the query. The default value is `false`.
@@ -110,7 +109,7 @@ Determines whether to compute grouped facets for the field facets specified in f
 +
 WARNING: There can be a heavy performance cost to this option.
 +
-See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Result Grouping Caveats>> when using sharded indexes.
+See below for <<Distributed Result Grouping Caveats>> when using sharded indexes.
 
 `group.cache.percent`::
 Setting this parameter to a number greater than 0 enables caching for result grouping. Result Grouping executes two searches; this option caches the second search. The default value is `0`. The maximum value is `100`.
@@ -119,12 +118,10 @@ Testing has shown that group caching only improves search time with Boolean, wil
 
 Any number of group commands (e.g., `group.field`, `group.func`, `group.query`, etc.) may be specified in a single request.
 
-[[ResultGrouping-Examples]]
-== Examples
+== Grouping Examples
 
 All of the following sample queries work with Solr's "`bin/solr -e techproducts`" example.
 
-[[ResultGrouping-GroupingResultsbyField]]
 === Grouping Results by Field
 
 In this example, we will group results based on the `manu_exact` field, which specifies the manufacturer of the items in the sample dataset.
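 
 The request might look like this sketch (output format and field list are illustrative):
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/techproducts/select?q=solr+memory&fl=id,name&group=true&group.field=manu_exact&wt=json"
 ----
 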
@@ -217,7 +214,6 @@ We can run the same query with the request parameter `group.main=true`. This wil
 }
 ----
 
-[[ResultGrouping-GroupingbyQuery]]
 === Grouping by Query
 
 In this example, we will use the `group.query` parameter to find the top three results for "memory" in two different price ranges: 0.00 to 99.99, and over 100.
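 
 Such a request might look like this sketch (note the two `group.query` parameters, one per price range):
 
 [source,bash]
 ----
 # POST the parameters form-encoded so the range queries need no manual escaping
 curl --data-urlencode "q=memory" --data-urlencode "fl=name,price" \
  --data-urlencode "group=true" \
  --data-urlencode "group.query=price:[0 TO 99.99]" \
  --data-urlencode "group.query=price:[100 TO *]" \
  "http://localhost:8983/solr/techproducts/select"
 ----
 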
@@ -267,7 +263,6 @@ In this example, we will use the `group.query` parameter to find the top three r
 
 In this case, Solr found five matches for "memory," but only returns four results grouped by price. This is because one result for "memory" did not have a price assigned to it.
 
-[[ResultGrouping-DistributedResultGroupingCaveats]]
 == Distributed Result Grouping Caveats
 
 Grouping is supported for <<solrcloud.adoc#solrcloud,distributed searches>>, with some caveats:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index ee2fd88..4ce41fe 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -26,7 +26,6 @@ The roles can be used with any of the authentication plugins or with a custom au
 
 Once defined through the API, roles are stored in `security.json`.
 
-[[Rule-BasedAuthorizationPlugin-EnabletheAuthorizationPlugin]]
 == Enable the Authorization Plugin
 
 The plugin must be enabled in `security.json`. This file, and where to put it in your system, is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
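 
 A minimal sketch of such a `security.json` (authorization section only; an authentication section would normally accompany it):
 
 [source,json]
 ----
 {
   "authorization": {
     "class": "solr.RuleBasedAuthorizationPlugin",
     "permissions": [{"name": "security-edit", "role": "admin"}],
     "user-role": {"solr": "admin"}
   }
 }
 ----
 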
@@ -61,14 +60,12 @@ There are several things defined in this example:
 * The 'admin' role has been defined, and it has permission to edit security settings.
 * The 'solr' user has been assigned the 'admin' role.
 
-[[Rule-BasedAuthorizationPlugin-PermissionAttributes]]
 == Permission Attributes
 
 Each role comprises one or more permissions which define what the user is allowed to do. Each permission is made up of several attributes that define the allowed activity. There are some pre-defined permissions which cannot be modified.
 
 The permissions are consulted in the order they appear in `security.json`. The first permission that matches is applied for each user, so the strictest permissions should be at the top of the list. The order of permissions can be controlled with a parameter of the Authorization API, as described below.
 
-[[Rule-BasedAuthorizationPlugin-PredefinedPermissions]]
 === Predefined Permissions
 
 There are several permissions that are pre-defined. These have fixed default values, which cannot be modified, and new attributes cannot be added. To use these attributes, simply define a role that includes this permission, and then assign a user to that role.
@@ -111,15 +108,12 @@ The pre-defined permissions are:
 * *read*: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, and `/sql`. This applies to all collections by default (`collection:"*"`).
 * *all*: Any requests coming to Solr.
 
-[[Rule-BasedAuthorizationPlugin-AuthorizationAPI]]
 == Authorization API
 
-[[Rule-BasedAuthorizationPlugin-APIEndpoint]]
-=== API Endpoint
+=== Authorization API Endpoint
 
 `/admin/authorization`: takes a set of commands to create permissions, map permissions to roles, and map roles to users.
 
-[[Rule-BasedAuthorizationPlugin-ManagePermissions]]
 === Manage Permissions
 
 Three commands control managing permissions:
@@ -195,7 +189,6 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "read", "role":"guest"}
 }' http://localhost:8983/solr/admin/authorization
 
-[[Rule-BasedAuthorizationPlugin-UpdateorDeletePermissions]]
 === Update or Delete Permissions
 
 Permissions can be accessed using their index in the list. Use the `/admin/authorization` API to see the existing permissions and their indices.
@@ -216,7 +209,6 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
 }' http://localhost:8983/solr/admin/authorization
 
 
-[[Rule-BasedAuthorizationPlugin-MapRolestoUsers]]
 === Map Roles to Users
 
 A single command allows roles to be mapped to users:
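 
 The request might look like this sketch (user and role names are illustrative):
 
 [source,bash]
 ----
 curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-user-role": {"jones": ["admin"]}
 }' http://localhost:8983/solr/admin/authorization
 ----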

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
index 30e15eb..deb7243 100644
--- a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
+++ b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
@@ -31,7 +31,6 @@ This feature is used in the following instances:
 * Replica creation
 * Shard splitting
 
-[[Rule-basedReplicaPlacement-CommonUseCases]]
 == Common Use Cases
 
 There are several situations where this functionality may be used. A few of the rules that could be implemented are listed below:
@@ -43,7 +42,6 @@ There are several situations where this functionality may be used. A few of the
 * Assign replica in nodes hosting less than 5 cores.
 * Assign replicas in nodes hosting the least number of cores.
 
-[[Rule-basedReplicaPlacement-RuleConditions]]
 == Rule Conditions
 
 A rule is a set of conditions that a node must satisfy before a replica core can be created there.
@@ -52,9 +50,8 @@ There are three possible conditions.
 
 * *shard*: this is the name of a shard or a wild card (`*` means all shards). If the shard is not specified, then the rule applies to the entire collection.
 * *replica*: this can be a number or a wild-card (* means any number zero to infinity).
-* *tag*: this is an attribute of a node in the cluster that can be used in a rule, e.g., “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Rule-basedReplicaPlacement-Snitches,Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
+* *tag*: this is an attribute of a node in the cluster that can be used in a rule, e.g., “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
 
-[[Rule-basedReplicaPlacement-RuleOperators]]
 === Rule Operators
 
 A condition can have one of the following operators to set the parameters for the rule.
@@ -64,25 +61,20 @@ A condition can have one of the following operators to set the parameters for th
 * *less than (<)*: `tag:<x` means the tag value is less than ‘x’; x must be a number
 * *not equal (!)*: `tag:!x` means the tag value MUST NOT be equal to ‘x’; the equality check is performed on the String value
 
-
-[[Rule-basedReplicaPlacement-FuzzyOperator_]]
 === Fuzzy Operator (~)
 
 This operator can be used as a suffix to any condition. Solr first tries to satisfy the rule strictly; if it can’t find enough nodes to match the criterion, it falls back to the next best match, which may not satisfy the criterion. For example, given the rule `freedisk:>200~`, Solr will try to assign replicas of this collection to nodes with more than 200GB of free disk space. If that is not possible, the node with the most free disk space will be chosen instead.
 
-[[Rule-basedReplicaPlacement-ChoosingAmongEquals]]
 === Choosing Among Equals
 
 Nodes are sorted using the rules, ensuring that even when many nodes satisfy a rule, the best nodes are picked for replica assignment. For example, with a rule such as `freedisk:>20`, nodes are sorted by free disk space in descending order, so the node with the most free disk space is picked first. Or, if the rule is `cores:<5`, nodes are sorted by the number of cores in ascending order, so the node with the fewest cores is picked first.
 
-[[Rule-basedReplicaPlacement-Rulesfornewshards]]
-== Rules for new shards
+== Rules for New Shards
 
 The rules are persisted along with the collection state. So, when a new replica is created, the system will assign replicas satisfying the rules. When a new shard is created as a result of using the Collection API's <<collections-api.adoc#CollectionsAPI-createshard,CREATESHARD command>>, ensure that you have created rules specific to that shard name. Rules can be altered using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION command>>. However, it is not required to do so if the rules do not specify explicit shard names. For example, a rule such as `shard:shard1,replica:*,ip_3:168` will not apply to any new shard created. But if your rule is `replica:*,ip_3:168`, then it will apply to any new shard created.
 
 The same is applicable to shard splitting. Shard splitting is treated exactly the same way as shard creation. Even though `shard1_1` and `shard1_2` may be created from `shard1`, the rules treat them as distinct, unrelated shards.
 
-[[Rule-basedReplicaPlacement-Snitches]]
 == Snitches
 
 Tag values come from a plugin called Snitch. If there is a tag named ‘rack’ in a rule, there must be a Snitch that provides the value of ‘rack’ for each node in the cluster. A snitch implements the Snitch interface. Solr, by default, provides a default snitch which provides the following tags:
@@ -96,7 +88,6 @@ Tag values come from a plugin called Snitch. If there is a tag named ‘rack’
 * *ip_1, ip_2, ip_3, ip_4*: These are IP fragments for each node. For example, for a host with IP `192.168.1.2`: `ip_1 = 2`, `ip_2 = 1`, `ip_3 = 168`, and `ip_4 = 192`
 * *sysprop.{PROPERTY_NAME}*: These are values available from system properties. `sysprop.key` means a value that is passed to the node as `-Dkey=keyValue` during the node startup. It is possible to use rules like `sysprop.key:expectedVal,shard:*`
 
-[[Rule-basedReplicaPlacement-HowSnitchesareConfigured]]
 === How Snitches are Configured
 
 It is possible to use one or more snitches for a set of rules. If the rules only need tags from the default snitch, it need not be explicitly configured. For example:
@@ -114,11 +105,8 @@ snitch=class:fqn.ClassName,key1:val1,key2:val2,key3:val3
 . After the Snitches are identified, they provide the tag values for each node in the cluster.
 . If the value for a tag is not obtained for a given node, it cannot participate in the assignment.
 
-[[Rule-basedReplicaPlacement-Examples]]
-== Examples
-
+== Replica Placement Examples
 
-[[Rule-basedReplicaPlacement-Keeplessthan2replicas_atmost1replica_ofthiscollectiononanynode]]
 === Keep less than 2 replicas (at most 1 replica) of this collection on any node
 
 For this rule, we define the `replica` condition with operators for "less than 2", and use a pre-defined tag named `node` to define nodes with any name.
@@ -129,8 +117,6 @@ replica:<2,node:*
 // this is equivalent to replica:<2,node:*,shard:**. We can omit shard:** because ** is the default value of shard
 ----
 
-
-[[Rule-basedReplicaPlacement-Foragivenshard_keeplessthan2replicasonanynode]]
 === For a given shard, keep less than 2 replicas on any node
 
 For this rule, we use the `shard` condition to define any shard, the `replica` condition with operators for "less than 2", and finally a pre-defined tag named `node` to define nodes with any name.
@@ -140,7 +126,6 @@ For this rule, we use the `shard` condition to define any shard , the `replica`
 shard:*,replica:<2,node:*
 ----
 
-[[Rule-basedReplicaPlacement-Assignallreplicasinshard1torack730]]
 === Assign all replicas in shard1 to rack 730
 
 This rule limits the `shard` condition to 'shard1', but allows any number of replicas. We're also referencing a custom tag named `rack`. Before defining this rule, we will need to configure a custom Snitch which provides values for the tag `rack`.
@@ -157,7 +142,6 @@ In this case, the default value of `replica` is * (or, all replicas). So, it can
 shard:shard1,rack:730
 ----
 
-[[Rule-basedReplicaPlacement-Createreplicasinnodeswithlessthan5coresonly]]
 === Create replicas in nodes with less than 5 cores only
 
 This rule uses the `replica` condition to define any number of replicas, but adds a pre-defined tag named `cores` and uses operators for "less than 5".
@@ -174,7 +158,6 @@ Again, we can simplify this to use the default value for `replica`, like so:
 cores:<5
 ----
 
-[[Rule-basedReplicaPlacement-Donotcreateanyreplicasinhost192.45.67.3]]
 === Do not create any replicas in host 192.45.67.3
 
 This rule uses only the pre-defined tag `host` to define an IP address where replicas should not be placed.
@@ -184,7 +167,6 @@ This rule uses only the pre-defined tag `host` to define an IP address where rep
 host:!192.45.67.3
 ----
 
-[[Rule-basedReplicaPlacement-DefiningRules]]
 == Defining Rules
 
 Rules are specified per collection during collection creation as request parameters. It is possible to specify multiple ‘rule’ and ‘snitch’ params as in this example:
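 
 As a sketch, a Collections API CREATE call might combine two ‘rule’ params; both rules below use tags from the default snitch, so no explicit ‘snitch’ param is needed (the collection name and rule values are illustrative):
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=coll1&numShards=2&replicationFactor=2&rule=shard:*,replica:<2,node:*&rule=freedisk:>100"
 ----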

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
index 9d0e60d..a16979b 100644
--- a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
@@ -31,7 +31,6 @@ Schemaless mode requires enabling the Managed Schema if it is not already, but f
 
 While the "read" features of the Schema API are supported for all schema types, support for making schema modifications programatically depends on the `<schemaFactory/>` in use.
 
-[[SchemaFactoryDefinitioninSolrConfig-SolrUsesManagedSchemabyDefault]]
 == Solr Uses Managed Schema by Default
 
 When a `<schemaFactory/>` is not explicitly declared in a `solrconfig.xml` file, Solr implicitly uses a `ManagedIndexSchemaFactory`, which is by default `"mutable"` and keeps schema information in a `managed-schema` file.
@@ -54,7 +53,6 @@ If you wish to explicitly configure `ManagedIndexSchemaFactory` the following op
 
 With the default configuration shown above, you can use the <<schema-api.adoc#schema-api,Schema API>> to modify the schema as much as you want, and then later change the value of `mutable` to *false* if you wish to "lock" the schema in place and prevent future changes.
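 
 For reference, the explicit declaration described above takes this shape (a sketch matching the defaults):
 
 [source,xml]
 ----
 <schemaFactory class="ManagedIndexSchemaFactory">
   <bool name="mutable">true</bool>
   <str name="managedSchemaResourceName">managed-schema</str>
 </schemaFactory>
 ----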
 
-[[SchemaFactoryDefinitioninSolrConfig-Classicschema.xml]]
 == Classic schema.xml
 
 An alternative to using a managed schema is to explicitly configure a `ClassicIndexSchemaFactory`. `ClassicIndexSchemaFactory` requires the use of a `schema.xml` configuration file, and disallows any programmatic changes to the Schema at run time. The `schema.xml` file must be edited manually and is loaded only when the collection is loaded.
@@ -64,7 +62,6 @@ An alternative to using a managed schema is to explicitly configure a `ClassicIn
   <schemaFactory class="ClassicIndexSchemaFactory"/>
 ----
 
-[[SchemaFactoryDefinitioninSolrConfig-Switchingfromschema.xmltoManagedSchema]]
 === Switching from schema.xml to Managed Schema
 
 If you have an existing Solr collection that uses `ClassicIndexSchemaFactory`, and you wish to convert to use a managed schema, you can simply modify the `solrconfig.xml` to specify the use of the `ManagedIndexSchemaFactory`.
@@ -78,7 +75,6 @@ Once Solr is restarted and it detects that a `schema.xml` file exists, but the `
 
 You are now free to use the <<schema-api.adoc#schema-api,Schema API>> as much as you want to make changes, and remove the `schema.xml.bak`.
 
-[[SchemaFactoryDefinitioninSolrConfig-SwitchingfromManagedSchematoManuallyEditedschema.xml]]
 === Switching from Managed Schema to Manually Edited schema.xml
 
 If you have started Solr with managed schema enabled and you would like to switch to manually editing a `schema.xml` file, you should take the following steps:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/schemaless-mode.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schemaless-mode.adoc b/solr/solr-ref-guide/src/schemaless-mode.adoc
index 30e7d51..825c294 100644
--- a/solr/solr-ref-guide/src/schemaless-mode.adoc
+++ b/solr/solr-ref-guide/src/schemaless-mode.adoc
@@ -26,7 +26,6 @@ These Solr features, all controlled via `solrconfig.xml`, are:
 . Field value class guessing: Previously unseen fields are run through a cascading set of value-based parsers, which guess the Java class of field values - parsers for Boolean, Integer, Long, Float, Double, and Date are currently available.
 . Automatic schema field addition, based on field value class(es): Previously unseen fields are added to the schema, based on field value Java classes, which are mapped to schema field types - see <<solr-field-types.adoc#solr-field-types,Solr Field Types>>.
 
-[[SchemalessMode-UsingtheSchemalessExample]]
 == Using the Schemaless Example
 
 The three features of schemaless mode are pre-configured in the `_default` <<config-sets.adoc#config-sets,config set>> in the Solr distribution. To start an example instance of Solr using these configs, run the following command:
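 
 A typical invocation (this launches the bundled schemaless example):
 
 [source,bash]
 ----
 bin/solr start -e schemaless
 ----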
@@ -67,12 +66,10 @@ You can use the `/schema/fields` <<schema-api.adoc#schema-api,Schema API>> to co
       "uniqueKey":true}]}
 ----
 
-[[SchemalessMode-ConfiguringSchemalessMode]]
 == Configuring Schemaless Mode
 
 As described above, there are three configuration elements that need to be in place to use Solr in schemaless mode. In the `_default` config set included with Solr these are already configured. If, however, you would like to implement schemaless on your own, you should make the following changes.
 
-[[SchemalessMode-EnableManagedSchema]]
 === Enable Managed Schema
 
 As described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, Managed Schema support is enabled by default, unless your configuration specifies that `ClassicIndexSchemaFactory` should be used.
@@ -87,7 +84,6 @@ You can configure the `ManagedIndexSchemaFactory` (and control the resource file
 </schemaFactory>
 ----
 
-[[SchemalessMode-DefineanUpdateRequestProcessorChain]]
 === Define an UpdateRequestProcessorChain
 
 The UpdateRequestProcessorChain allows Solr to guess field types, and you can define the default field type classes to use. To start, you should define it as follows (see the javadoc links below for update processor factory documentation):
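 
 An abridged sketch of such a chain follows; the full version shipped in the `_default` config set defines more parsers and type mappings (the date formats and mappings below are illustrative):
 
 [source,xml]
 ----
 <updateRequestProcessorChain name="add-unknown-fields-to-the-schema" default="true">
   <!-- Drop empty field values so they don't skew type guessing -->
   <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
   <!-- Cascading value-based parsers that guess the Java class of each value -->
   <processor class="solr.ParseBooleanFieldUpdateProcessorFactory"/>
   <processor class="solr.ParseLongFieldUpdateProcessorFactory"/>
   <processor class="solr.ParseDoubleFieldUpdateProcessorFactory"/>
   <processor class="solr.ParseDateFieldUpdateProcessorFactory">
     <arr name="format">
       <str>yyyy-MM-dd'T'HH:mm:ss.SSSZ</str>
       <str>yyyy-MM-dd</str>
     </arr>
   </processor>
   <!-- Add previously unseen fields to the schema, mapping value classes to field types -->
   <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
     <str name="defaultFieldType">text_general</str>
     <lst name="typeMapping">
       <str name="valueClass">java.lang.Boolean</str>
       <str name="fieldType">booleans</str>
     </lst>
     <lst name="typeMapping">
       <str name="valueClass">java.lang.Long</str>
       <str name="fieldType">plongs</str>
     </lst>
   </processor>
   <processor class="solr.LogUpdateProcessorFactory"/>
   <processor class="solr.RunUpdateProcessorFactory"/>
 </updateRequestProcessorChain>
 ----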
@@ -174,7 +170,6 @@ Javadocs for update processor factories mentioned above:
 * {solr-javadocs}/solr-core/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.html[ParseDateFieldUpdateProcessorFactory]
 * {solr-javadocs}/solr-core/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactory.html[AddSchemaFieldsUpdateProcessorFactory]
 
-[[SchemalessMode-MaketheUpdateRequestProcessorChaintheDefaultfortheUpdateRequestHandler]]
 === Make the UpdateRequestProcessorChain the Default for the UpdateRequestHandler
 
 Once the UpdateRequestProcessorChain has been defined, you must instruct your UpdateRequestHandlers to use it when working with index updates (i.e., adding, removing, replacing documents). There are two ways to do this. The update chain shown above has a `default=true` attribute, which makes it the default for all update handlers. An alternative, more explicit way is to use <<initparams-in-solrconfig.adoc#initparams-in-solrconfig,InitParams>> to set the defaults on all `/update` request handlers:
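 
 A minimal sketch of that InitParams approach, assuming the chain name used above:
 
 [source,xml]
 ----
 <initParams path="/update/**">
   <lst name="defaults">
     <str name="update.chain">add-unknown-fields-to-the-schema</str>
   </lst>
 </initParams>
 ----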
@@ -193,7 +188,6 @@ Once the UpdateRequestProcessorChain has been defined, you must instruct your Up
 After each of these changes has been made, Solr should be restarted (or, you can reload the cores to load the new `solrconfig.xml` definitions).
 ====
 
-[[SchemalessMode-ExamplesofIndexedDocuments]]
 == Examples of Indexed Documents
 
 Once the schemaless mode has been enabled (whether you configured it manually or are using `_default`), documents that include fields that are not defined in your schema will be indexed, using the guessed field types which are automatically added to the schema.
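 
 For example, a document containing a field not yet in the schema can be posted directly; this sketch reuses the `gettingstarted` collection and the `Sold` field discussed below:
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/gettingstarted/update?commit=true" \
   -H "Content-type:application/json" \
   -d '[{"id": "1", "Sold": 5}]'
 ----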
@@ -243,13 +237,14 @@ The fields now in the schema (output from `curl \http://localhost:8983/solr/gett
       "name":"Sold",
       "type":"plongs"},
     {
-      "name":"_root_" ...}
+      "name":"_root_", ...},
     {
-      "name":"_text_" ...}
+      "name":"_text_", ...},
     {
-      "name":"_version_" ...}
+      "name":"_version_", ...},
     {
-      "name":"id" ...}
+      "name":"id", ...}
+]}
 ----
 
 In addition, string versions of the text fields are indexed, using copyFields to a `*_str` dynamic field (output from `curl \http://localhost:8983/solr/gettingstarted/schema/copyfields`):
@@ -277,7 +272,7 @@ Even if you want to use schemaless mode for most fields, you can still use the <
 
 Internally, the Schema API and the Schemaless Update Processors both use the same <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Managed Schema>> functionality.
 
-Also, if you do not need the `*_str` version of a text field, you can simply remove the `copyField` definition from the auto-generated schema and it will not be re-added since the original field is now defined. 
+Also, if you do not need the `*_str` version of a text field, you can simply remove the `copyField` definition from the auto-generated schema and it will not be re-added since the original field is now defined.
 ====
 
 Once a field has been added to the schema, its field type is fixed. As a consequence, adding documents with field value(s) that conflict with the previously guessed field type will fail. For example, after adding the above document, the "```Sold```" field has the fieldType `plongs`, but the document below has a non-integral decimal value in this field:
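 
 A request along these lines would therefore be rejected with an error, since `7.5` cannot be parsed as a long (the id value is illustrative):
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/gettingstarted/update?commit=true" \
   -H "Content-type:application/json" \
   -d '[{"id": "2", "Sold": 7.5}]'
 ----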

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
index ab54836..d82ac29 100644
--- a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -40,7 +40,6 @@ For example, if you only have two ZooKeeper nodes and one goes down, 50% of avai
 
 More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup.
 
-[[SettingUpanExternalZooKeeperEnsemble-DownloadApacheZooKeeper]]
 == Download Apache ZooKeeper
 
 The first step in setting up Apache ZooKeeper is, of course, to download the software. It's available from http://zookeeper.apache.org/releases.html.
@@ -52,15 +51,12 @@ When using stand-alone ZooKeeper, you need to take care to keep your version of
 Solr currently uses Apache ZooKeeper v3.4.10.
 ====
 
-[[SettingUpanExternalZooKeeperEnsemble-SettingUpaSingleZooKeeper]]
 == Setting Up a Single ZooKeeper
 
-[[SettingUpanExternalZooKeeperEnsemble-Createtheinstance]]
-=== Create the instance
+=== Create the Instance
 Creating the instance is a simple matter of extracting the files into a specific target directory. The actual directory itself doesn't matter, as long as you know where it is, and where you'd like to have ZooKeeper store its internal data.
 
-[[SettingUpanExternalZooKeeperEnsemble-Configuretheinstance]]
-=== Configure the instance
+=== Configure the Instance
 The next step is to configure your ZooKeeper instance. To do that, create the following file: `<ZOOKEEPER_HOME>/conf/zoo.cfg`. To this file, add the following information:
 
 [source,bash]
@@ -80,15 +76,13 @@ The parameters are as follows:
 
 Once this file is in place, you're ready to start the ZooKeeper instance.
 
-[[SettingUpanExternalZooKeeperEnsemble-Runtheinstance]]
-=== Run the instance
+=== Run the Instance
 
 To run the instance, you can simply use the `ZOOKEEPER_HOME/bin/zkServer.sh` script provided, as with this command: `zkServer.sh start`.
 
 Again, ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of this tutorial. For more information, see the ZooKeeper http://zookeeper.apache.org/doc/r3.4.5/zookeeperStarted.html[Getting Started] page. For this example, however, the defaults are fine.
 
-[[SettingUpanExternalZooKeeperEnsemble-PointSolrattheinstance]]
-=== Point Solr at the instance
+=== Point Solr at the Instance
 
 Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the `-z` parameter when using the bin/solr script. For example, in order to point the Solr instance to the ZooKeeper you've started on port 2181, this is what you'd need to do:
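 
 A representative invocation (substitute your own Solr home path):
 
 [source,bash]
 ----
 bin/solr start -cloud -s <path to solr home> -p 8983 -z localhost:2181
 ----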
 
@@ -108,12 +102,10 @@ bin/solr start -cloud -s <path to solr home for new node> -p 8987 -z localhost:2
 
 NOTE: When you are not using an example to start Solr, make sure you upload the configuration set to ZooKeeper before creating the collection.
 
-[[SettingUpanExternalZooKeeperEnsemble-ShutdownZooKeeper]]
-=== Shut down ZooKeeper
+=== Shut Down ZooKeeper
 
 To shut down ZooKeeper, use the zkServer script with the "stop" command: `zkServer.sh stop`.
 
-[[SettingUpanExternalZooKeeperEnsemble-SettingupaZooKeeperEnsemble]]
 == Setting up a ZooKeeper Ensemble
 
 With an external ZooKeeper ensemble, you need to set things up just a little more carefully as compared to the Getting Started example.
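 
 The essential additions to each instance's `zoo.cfg` are the ensemble membership entries; a sketch for a three-node ensemble running on one machine (each instance also needs a `myid` file in its `dataDir` containing its server number):
 
 [source,bash]
 ----
 tickTime=2000
 dataDir=/var/lib/zookeeperdata/1
 clientPort=2181
 initLimit=5
 syncLimit=2
 server.1=localhost:2888:3888
 server.2=localhost:2889:3889
 server.3=localhost:2890:3890
 ----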
@@ -188,8 +180,7 @@ Once these servers are running, you can reference them from Solr just as you did
 bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt
 ----
 
-[[SettingUpanExternalZooKeeperEnsemble-SecuringtheZooKeeperconnection]]
-== Securing the ZooKeeper connection
+== Securing the ZooKeeper Connection
 
 You may also want to secure the communication between ZooKeeper and Solr.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
index 45877f2..82e92a8 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
@@ -24,7 +24,6 @@ IMPORTANT: This requires Apache Zeppelin 0.6.0 or greater which contains the JDB
 
 To use http://zeppelin.apache.org[Apache Zeppelin] with Solr, you will need to create a JDBC interpreter for Solr. This will add SolrJ to the interpreter classpath. Once the interpreter has been created, you can create a notebook to issue queries. The http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation] provides additional information about JDBC prefixes and other features.
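 
 In essence, the interpreter needs the SolrJ artifact on its classpath plus a driver class and connection URL pointing at Solr. A sketch of the relevant settings (the ZooKeeper port, collection name, and SolrJ version are assumptions, not defaults):
 
 [source,text]
 ----
 default.driver = org.apache.solr.client.solrj.io.sql.DriverImpl
 default.url    = jdbc:solr://localhost:9983?collection=test
 default.user   = solr
 # dependency artifact: org.apache.solr:solr-solrj:7.0.0
 ----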
 
-[[SolrJDBC-ApacheZeppelin-CreatetheApacheSolrJDBCInterpreter]]
 == Create the Apache Solr JDBC Interpreter
 
 .Click "Interpreter" in the top navigation
@@ -41,7 +40,6 @@ image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_3.png[image,height=400
 For most installations, Apache Zeppelin configures PostgreSQL as the JDBC interpreter default driver. The default driver can either be replaced by the Solr driver as outlined above or you can add a separate JDBC interpreter prefix as outlined in the http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation].
 ====
 
-[[SolrJDBC-ApacheZeppelin-CreateaNotebook]]
 == Create a Notebook
 
 .Click Notebook \-> Create new note
@@ -50,7 +48,6 @@ image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_4.png[image,width=517,
 .Provide a name and click "Create Note"
 image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_5.png[image,width=839,height=400]
 
-[[SolrJDBC-ApacheZeppelin-QuerywiththeNotebook]]
 == Query with the Notebook
 
 [IMPORTANT]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/333906f8/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
index f3ecc86..8b9b2b2 100644
--- a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
+++ b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
@@ -27,10 +27,8 @@ For https://www.dbvis.com/[DbVisualizer], you will need to create a new driver f
 
 Once the driver has been created, you can create a connection to Solr with the connection string format outlined in the generic section and use the SQL Commander to issue queries.
 
-[[SolrJDBC-DbVisualizer-SetupDriver]]
 == Setup Driver
 
-[[SolrJDBC-DbVisualizer-OpenDriverManager]]
 === Open Driver Manager
 
 From the Tools menu, choose Driver Manager to add a driver.
@@ -38,21 +36,18 @@ From the Tools menu, choose Driver Manager to add a driver.
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png[image,width=673,height=400]
 
 
-[[SolrJDBC-DbVisualizer-CreateaNewDriver]]
 === Create a New Driver
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png[image,width=532,height=400]
 
 
-[[SolrJDBC-DbVisualizer-NametheDriver]]
-=== Name the Driver
+=== Name the Driver in Driver Manager
 
 Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_connection_string>/?collection=<collection>`. Do not fill in values for the variables "```zk_connection_string```" and "```collection```"; those will be provided later when the connection to Solr is configured. The Driver Class will also be automatically added when the driver .jars are added.
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png[image,width=532,height=400]
 
 
-[[SolrJDBC-DbVisualizer-AddDriverFilestoClasspath]]
 === Add Driver Files to Classpath
 
 The driver files to be added are:
@@ -75,17 +70,14 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png[image,width=655
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png[image,width=651,height=400]
 
 
-[[SolrJDBC-DbVisualizer-ReviewandCloseDriverManager]]
 === Review and Close Driver Manager
 
 Once the driver files have been added, you can close the Driver Manager.
 
-[[SolrJDBC-DbVisualizer-CreateaConnection]]
 == Create a Connection
 
 Next, create a connection to Solr using the driver just created.
 
-[[SolrJDBC-DbVisualizer-UsetheConnectionWizard]]
 === Use the Connection Wizard
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png[image,width=763,height=400]
@@ -94,19 +86,16 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png[image,width=76
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png[image,width=807,height=400]
 
 
-[[SolrJDBC-DbVisualizer-NametheConnection]]
 === Name the Connection
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png[image,width=402,height=400]
 
 
-[[SolrJDBC-DbVisualizer-SelecttheSolrdriver]]
 === Select the Solr Driver
 
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png[image,width=399,height=400]
 
 
-[[SolrJDBC-DbVisualizer-SpecifytheSolrURL]]
 === Specify the Solr URL
 
 Provide the Solr URL, using the ZooKeeper host and port and the collection. For example, `jdbc:solr://localhost:9983?collection=test`
@@ -114,7 +103,6 @@ Provide the Solr URL, using the ZooKeeper host and port and the collection. For
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png[image,width=401,height=400]
 
 
-[[SolrJDBC-DbVisualizer-OpenandConnecttoSolr]]
 == Open and Connect to Solr
 
 Once the connection has been created, double-click on it to open the connection details screen and connect to Solr.
@@ -125,7 +113,6 @@ image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png[image,width=62
 image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png[image,width=592,height=400]
 
 
-[[SolrJDBC-DbVisualizer-OpenSQLCommandertoEnterQueries]]
 == Open SQL Commander to Enter Queries
 
 When the connection is established, you can use the SQL Commander to issue queries and view data.