Posted to commits@lucene.apache.org by is...@apache.org on 2017/07/29 21:59:44 UTC

[07/28] lucene-solr:jira/solr-6630: Merging master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
index 947c760..9f9f041 100644
--- a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
+++ b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
@@ -22,14 +22,12 @@ SolrCloud supports elasticity, high availability, and fault tolerance in reads a
 
 In practice, this means that when you have a large cluster, you can always make requests to the cluster: reads will return results whenever possible, even if some nodes are down, and writes will be acknowledged only if they are durable; i.e., you won't lose data.
 
-[[ReadandWriteSideFaultTolerance-ReadSideFaultTolerance]]
 == Read Side Fault Tolerance
 
 In a SolrCloud cluster each individual node load balances read requests across all the replicas in a collection. You still need a load balancer on the 'outside' that talks to the cluster, or you need a smart client which understands how to read and interact with Solr's metadata in ZooKeeper and only requires the ZooKeeper ensemble's address to start discovering the nodes it should send requests to. (Solr provides a smart Java SolrJ client called {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html[CloudSolrClient].)
 
 Even if some nodes in the cluster are offline or unreachable, a Solr node will be able to correctly respond to a search request as long as it can communicate with at least one replica of every shard, or one replica of every _relevant_ shard if the user limited the search via the `shards` or `\_route_` parameters. The more replicas there are of every shard, the more likely it is that the Solr cluster will be able to return search results in the event of node failures.
 
-[[ReadandWriteSideFaultTolerance-zkConnected]]
 === zkConnected
 
 A Solr node will return the results of a search request as long as it can communicate with at least one replica of every shard that it knows about, even if it can _not_ communicate with ZooKeeper at the time it receives the request. This is normally the preferred behavior from a fault tolerance standpoint, but it may result in stale or incorrect results if there have been major changes to the collection structure that the node has not been informed of via ZooKeeper (i.e., shards may have been added or removed, or split into sub-shards).
@@ -56,7 +54,6 @@ A `zkConnected` header is included in every search response indicating if the no
 }
 ----
 
-[[ReadandWriteSideFaultTolerance-shards.tolerant]]
 === shards.tolerant
 
 In the event that one or more shards queried are completely unavailable, Solr's default behavior is to fail the request. However, there are many use cases where partial results are acceptable, so Solr provides a boolean `shards.tolerant` parameter (default `false`).
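 For example, a request of roughly this shape (assuming the "techproducts" example used elsewhere in this guide) returns whatever results the reachable shards can provide instead of failing outright:

 [source,bash]
 ----
 # Ask Solr to tolerate unavailable shards and return partial results
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&shards.tolerant=true"
 ----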
@@ -89,12 +86,10 @@ Example response with `partialResults` flag set to 'true':
 }
 ----
 
-[[ReadandWriteSideFaultTolerance-WriteSideFaultTolerance]]
 == Write Side Fault Tolerance
 
 SolrCloud is designed to replicate documents to ensure redundancy for your data, and enable you to send update requests to any node in the cluster. That node will determine if it hosts the leader for the appropriate shard, and if not it will forward the request to the leader, which will then forward it to all existing replicas, using versioning to make sure every replica has the most up-to-date version. If the leader goes down, another replica can take its place. This architecture enables you to be certain that your data can be recovered in the event of a disaster, even if you are using <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>.
 
-[[ReadandWriteSideFaultTolerance-Recovery]]
 === Recovery
 
 A Transaction Log is created for each node so that every change to content or organization is noted. The log is used to determine which content in the node should be included in a replica. When a new replica is created, it refers to the Leader and the Transaction Log to know which content to include. If it fails, it retries.
@@ -105,7 +100,6 @@ If a leader goes down, it may have sent requests to some replicas and not others
 
 If an update fails because cores are reloading schemas and some have finished but others have not, the leader tells the nodes that the update failed and starts the recovery procedure.
 
-[[ReadandWriteSideFaultTolerance-AchievedReplicationFactor]]
 === Achieved Replication Factor
 
 When using a replication factor greater than one, an update request may succeed on the shard leader but fail on one or more of the replicas. For instance, consider a collection with one shard and a replication factor of three. In this case, you have a shard leader and two additional replicas. If an update request succeeds on the leader but fails on both replicas, for whatever reason, the update request is still considered successful from the perspective of the client. The replicas that missed the update will sync with the leader when they recover.
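 For illustration, a client can ask Solr to report the _achieved_ replication factor for an update via the optional `min_rf` parameter; a minimal sketch, assuming a hypothetical collection named `test_collection`:

 [source,bash]
 ----
 # Ask for at least 2 replicas to acknowledge; the response header reports the achieved "rf"
 curl "http://localhost:8983/solr/test_collection/update?min_rf=2" \
   -H 'Content-type:application/json' \
   -d '[{"id":"1"}]'
 ----

 The `rf` value in the response header then tells the client how many replicas actually acknowledged the update.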

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/realtime-get.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/realtime-get.adoc b/solr/solr-ref-guide/src/realtime-get.adoc
index 0573e05..4d97455 100644
--- a/solr/solr-ref-guide/src/realtime-get.adoc
+++ b/solr/solr-ref-guide/src/realtime-get.adoc
@@ -38,8 +38,6 @@ Real Time Get requests can be performed using the `/get` handler which exists im
 <requestHandler name="/get" class="solr.RealTimeGetHandler">
   <lst name="defaults">
     <str name="omitHeader">true</str>
-    <str name="wt">json</str>
-    <str name="indent">true</str>
   </lst>
 </requestHandler>
 ----
@@ -94,7 +92,7 @@ http://localhost:8983/solr/techproducts/get?id=mydoc&id=IW-02
 }
 ----
 
-Real Time Get requests can also be combined with filter queries, specified with an <<common-query-parameters.adoc#CommonQueryParameters-Thefq_FilterQuery_Parameter,`fq` parameter>>, just like search requests:
+Real Time Get requests can also be combined with filter queries, specified with an <<common-query-parameters.adoc#fq-filter-query-parameter,`fq` parameter>>, just like search requests:
 
 [source,text]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/request-parameters-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/request-parameters-api.adoc b/solr/solr-ref-guide/src/request-parameters-api.adoc
index 45275d0..b01e0f6 100644
--- a/solr/solr-ref-guide/src/request-parameters-api.adoc
+++ b/solr/solr-ref-guide/src/request-parameters-api.adoc
@@ -33,12 +33,10 @@ When might you want to use this feature?
 * To mix and match parameter sets at request time.
 * To avoid a reload of your collection for small parameter changes.
 
-[[RequestParametersAPI-TheRequestParametersEndpoint]]
 == The Request Parameters Endpoint
 
 All requests are sent to the `/config/params` endpoint of the Config API.
 
-[[RequestParametersAPI-SettingRequestParameters]]
 == Setting Request Parameters
 
 The request to set, unset, or update request parameters is sent as a set of named Maps. These objects can be used directly in a request or in a request handler definition.
@@ -80,7 +78,6 @@ curl http://localhost:8983/solr/techproducts/config/params -H 'Content-type:appl
       "facet.limit":5,
       "_invariants_": {
         "facet":true,
-        "wt":"json"
        },
       "_appends_":{"facet.field":["field1","field2"]
      }
@@ -88,7 +85,6 @@ curl http://localhost:8983/solr/techproducts/config/params -H 'Content-type:appl
 }'
 ----
 
-[[RequestParametersAPI-UsingRequestParameterswithRequestHandlers]]
 == Using Request Parameters with RequestHandlers
 
 After creating the `my_handler_params` paramset in the above section, it is possible to define a request handler as follows:
@@ -107,7 +103,6 @@ It will be equivalent to a standard request handler definition such as this one:
     <int name="facet.limit">5</int>
   </lst>
   <lst name="invariants">
-    <str name="wt">json</str>
     <bool name="facet">true</bool>
   </lst>
   <lst name="appends">
@@ -119,12 +114,10 @@ It will be equivalent to a standard request handler definition such as this one:
 </requestHandler>
 ----
 
-[[RequestParametersAPI-ImplicitRequestHandlers]]
-=== Implicit RequestHandlers
+=== Implicit RequestHandlers with the Request Parameters API
 
 Solr ships with many out-of-the-box request handlers that may only be configured via the Request Parameters API, because their configuration is not present in `solrconfig.xml`. See <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for the paramset to use when configuring an implicit request handler.
 
-[[RequestParametersAPI-ViewingExpandedParamsetsandEffectiveParameterswithRequestHandlers]]
 === Viewing Expanded Paramsets and Effective Parameters with RequestHandlers
 
 To see the expanded paramset and the resulting effective parameters for a RequestHandler defined with `useParams`, use the `expandParams` request param. For example, for the `/export` request handler:
@@ -134,7 +127,6 @@ To see the expanded paramset and the resulting effective parameters for a Reques
 curl "http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/export&expandParams=true"
 ----
 
-[[RequestParametersAPI-ViewingRequestParameters]]
 == Viewing Request Parameters
 
 To see the paramsets that have been created, you can use the `/config/params` endpoint to read the contents of `params.json`, or use the name in the request:
@@ -147,7 +139,6 @@ curl http://localhost:8983/solr/techproducts/config/params
 curl http://localhost:8983/solr/techproducts/config/params/myQueries
 ----
 
-[[RequestParametersAPI-TheuseParamsParameter]]
 == The useParams Parameter
 
 When making a request, the `useParams` parameter applies one or more named parameter sets to the request. The names are resolved to the actual parameters at request time.
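 For example, a request along these lines applies the `myQueries` paramset shown above as if its parameters had been sent with the request:

 [source,bash]
 ----
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&useParams=myQueries"
 ----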
@@ -192,12 +183,10 @@ To summarize, parameters are applied in this order:
 * parameter sets defined in `params.json` that have been defined in the request handler.
 * parameters defined in `<defaults>` in `solrconfig.xml`.
 
-[[RequestParametersAPI-PublicAPIs]]
 == Public APIs
 
 The RequestParams object can be accessed using the method `SolrConfig#getRequestParams()`. Each paramset can be accessed by its name using the method `RequestParams#getRequestParams(String name)`.
 
-[[RequestParametersAPI-Examples]]
-== Examples
+== Examples Using the Request Parameters API
 
-The Solr "films" example demonstrates the use of the parameters API. See https://github.com/apache/lucene-solr/tree/master/solr/example/films for details.
+The Solr "films" example demonstrates the use of the parameters API. You can use this example in your Solr installation (in the `example/films` directory) or view the files in the Apache GitHub mirror at https://github.com/apache/lucene-solr/tree/master/solr/example/films.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
index e20b55c..6271cb6 100644
--- a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
@@ -22,7 +22,6 @@ The `requestDispatcher` element of `solrconfig.xml` controls the way the Solr HT
 
 Included are parameters for defining whether it should handle `/select` URLs (for Solr 1.1 compatibility), whether it will support remote streaming, the maximum size of file uploads, and how it will respond to HTTP cache headers in requests.
 
-[[RequestDispatcherinSolrConfig-handleSelectElement]]
 == handleSelect Element
 
 [IMPORTANT]
@@ -41,7 +40,6 @@ In recent versions of Solr, a `/select` requestHandler is defined by default, so
 </requestDispatcher>
 ----
 
-[[RequestDispatcherinSolrConfig-requestParsersElement]]
 == requestParsers Element
 
 The `<requestParsers>` sub-element controls values related to parsing requests. This is an empty XML element that doesn't have any content, only attributes.
@@ -67,7 +65,7 @@ The attribute `addHttpRequestToContext` can be used to indicate that the origina
                 addHttpRequestToContext="false" />
 ----
 
-The below command is an example of how to enable RemoteStreaming and BodyStreaming through <<config-api.adoc#ConfigAPI-CreatingandUpdatingCommonProperties,Config API>>:
+The command below is an example of how to enable RemoteStreaming and BodyStreaming through the <<config-api.adoc#creating-and-updating-common-properties,Config API>>:
 
 [source,bash]
 ----
@@ -77,7 +75,6 @@ curl http://localhost:8983/solr/gettingstarted/config -H 'Content-type:applicati
 }'
 ----
 
-[[RequestDispatcherinSolrConfig-httpCachingElement]]
 == httpCaching Element
 
 The `<httpCaching>` element controls HTTP cache control headers. Do not confuse these settings with Solr's internal cache configuration. This element controls caching of HTTP responses as defined by the W3C HTTP specifications.
@@ -102,7 +99,6 @@ This value of this attribute is sent as the value of the `ETag` header. Changing
 </httpCaching>
 ----
 
-[[RequestDispatcherinSolrConfig-cacheControlElement]]
 === cacheControl Element
 
 In addition to these attributes, `<httpCaching>` accepts one child element: `<cacheControl>`. The content of this element will be sent as the value of the Cache-Control header on HTTP responses. This header is used to modify the default caching behavior of the requesting client. The possible values for the Cache-Control header are defined by the HTTP 1.1 specification in http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9[Section 14.9].
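 For example, if a value such as `max-age=30, public` were configured, the resulting header can be inspected directly (a sketch, assuming the "techproducts" example):

 [source,bash]
 ----
 # Dump only the response headers to check the Cache-Control value
 curl -s -D - -o /dev/null "http://localhost:8983/solr/techproducts/select?q=*:*"
 ----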

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
index 46d9c9e..f043e84 100644
--- a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
@@ -26,7 +26,6 @@ A _search component_ is a feature of search, such as highlighting or faceting. T
 
 These are often referred to as "requestHandler" and "searchComponent", which is how they are defined in `solrconfig.xml`.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-RequestHandlers]]
 == Request Handlers
 
 Every request handler is defined with a name and a class. The name of the request handler is referenced with the request to Solr, typically as a path. For example, if Solr is installed at `http://localhost:8983/solr/` and you have a collection named "```gettingstarted```", you can make a request using URLs like this:
@@ -44,7 +43,6 @@ Request handlers can also process requests for nested paths of their names, for
 
 It is also possible to configure defaults for request handlers with a section called `initParams`. These defaults can be used when you want to have common properties that will be used by each separate handler. For example, if you intend to create several request handlers that will all request the same list of fields in the response, you can configure an `initParams` section with your list of fields. For more information about `initParams`, see the section <<initparams-in-solrconfig.adoc#initparams-in-solrconfig,InitParams in SolrConfig>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers]]
 === SearchHandlers
 
 The primary request handler defined with Solr by default is the "SearchHandler", which handles search queries. The request handler is defined, and then a list of defaults for the handler is defined with a `defaults` list.
@@ -67,7 +65,7 @@ All of the parameters described in the section  <<searching.adoc#searching,Searc
 
 Besides `defaults`, there are other options for the SearchHandler, which are:
 
-* `appends`: This allows definition of parameters that are added to the user query. These might be <<common-query-parameters.adoc#CommonQueryParameters-Thefq_FilterQuery_Parameter,filter queries>>, or other query rules that should be added to each query. There is no mechanism in Solr to allow a client to override these additions, so you should be absolutely sure you always want these parameters applied to queries.
+* `appends`: This allows definition of parameters that are added to the user query. These might be <<common-query-parameters.adoc#fq-filter-query-parameter,filter queries>>, or other query rules that should be added to each query. There is no mechanism in Solr to allow a client to override these additions, so you should be absolutely sure you always want these parameters applied to queries.
 +
 [source,xml]
 ----
@@ -91,33 +89,28 @@ In this example, the filter query "inStock:true" will always be added to every q
 +
 In this example, facet fields have been defined which limit the facets that will be returned by Solr. If the client requests facets, the facets defined with a configuration like this are the only facets they will see.
 
-The final section of a request handler definition is `components`, which defines a list of search components that can be used with a request handler. They are only registered with the request handler. How to define a search component is discussed further on in the section on <<RequestHandlersandSearchComponentsinSolrConfig-SearchComponents,Search Components>>. The `components` element can only be used with a request handler that is a SearchHandler.
+The final section of a request handler definition is `components`, which defines a list of search components that can be used with a request handler. They are only registered with the request handler. How to define a search component is discussed in the section <<Search Components>> below. The `components` element can only be used with a request handler that is a SearchHandler.
 
 The `solrconfig.xml` file includes many other examples of SearchHandlers that can be used or modified as needed.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers]]
 === UpdateRequestHandlers
 
 The UpdateRequestHandlers are request handlers which process updates to the index.
 
 In this guide, we've covered these handlers in detail in the section <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-ShardHandlers]]
 === ShardHandlers
 
 It is possible to configure a request handler to search across the shards of a cluster; this is used with distributed search. More information about distributed search and how to configure the shardHandler is in the section <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-ImplicitRequestHandlers]]
 === Implicit Request Handlers
 
 Solr includes many out-of-the-box request handlers that are not configured in `solrconfig.xml`, and so are referred to as "implicit" - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-SearchComponents]]
 == Search Components
 
 Search components define the logic that is used by the SearchHandler to perform queries for users.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-DefaultComponents]]
 === Default Components
 
 There are several default search components that work with all SearchHandlers without any additional configuration. If no components are defined (with the exception of `first-components` and `last-components` - see below), these are executed by default, in the following order:
@@ -132,13 +125,12 @@ There are several default search components that work with all SearchHandlers wi
 |mlt |`solr.MoreLikeThisComponent` |Described in the section <<morelikethis.adoc#morelikethis,MoreLikeThis>>.
 |highlight |`solr.HighlightComponent` |Described in the section <<highlighting.adoc#highlighting,Highlighting>>.
 |stats |`solr.StatsComponent` |Described in the section <<the-stats-component.adoc#the-stats-component,The Stats Component>>.
-|debug |`solr.DebugComponent` |Described in the section on <<common-query-parameters.adoc#CommonQueryParameters-ThedebugParameter,Common Query Parameters>>.
+|debug |`solr.DebugComponent` |Described in the section on <<common-query-parameters.adoc#debug-parameter,Common Query Parameters>>.
 |expand |`solr.ExpandComponent` |Described in the section <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand Results>>.
 |===
 
 If you register a new search component with one of these default names, the newly defined component will be used instead of the default.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-First-ComponentsandLast-Components]]
 === First-Components and Last-Components
 
 It's possible to define some components as being used before (with `first-components`) or after (with `last-components`) the default components listed above.
@@ -158,7 +150,6 @@ It's possible to define some components as being used before (with `first-compon
 </arr>
 ----
 
-[[RequestHandlersandSearchComponentsinSolrConfig-Components]]
 === Components
 
 If you define `components`, the default components (see above) will not be executed, and `first-components` and `last-components` are disallowed:
@@ -172,7 +163,6 @@ If you define `components`, the default components (see above) will not be execu
 </arr>
 ----
 
-[[RequestHandlersandSearchComponentsinSolrConfig-OtherUsefulComponents]]
 === Other Useful Components
 
 Many of the other useful components are described in sections of this Guide for the features they support. These are:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/response-writers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/response-writers.adoc b/solr/solr-ref-guide/src/response-writers.adoc
index 947c8ea..bc8657a 100644
--- a/solr/solr-ref-guide/src/response-writers.adoc
+++ b/solr/solr-ref-guide/src/response-writers.adoc
@@ -25,95 +25,25 @@ Solr supports a variety of Response Writers to ensure that query responses can b
 
 The `wt` parameter selects the Response Writer to be used. The list below shows the most common settings for the `wt` parameter, with links to further sections that discuss them in more detail.
 
-* <<ResponseWriters-CSVResponseWriter,csv>>
-* <<ResponseWriters-GeoJSONResponseWriter,geojson>>
-* <<ResponseWriters-BinaryResponseWriter,javabin>>
-* <<ResponseWriters-JSONResponseWriter,json>>
-* <<ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter,php>>
-* <<ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter,phps>>
-* <<ResponseWriters-PythonResponseWriter,python>>
-* <<ResponseWriters-RubyResponseWriter,ruby>>
-* <<ResponseWriters-SmileResponseWriter,smile>>
-* <<ResponseWriters-VelocityResponseWriter,velocity>>
-* <<ResponseWriters-XLSXResponseWriter,xlsx>>
-* <<ResponseWriters-TheStandardXMLResponseWriter,xml>>
-* <<ResponseWriters-TheXSLTResponseWriter,xslt>>
-
-
-[[ResponseWriters-TheStandardXMLResponseWriter]]
-== The Standard XML Response Writer
+* <<CSV Response Writer,csv>>
+* <<GeoJSON Response Writer,geojson>>
+* <<Binary Response Writer,javabin>>
+* <<JSON Response Writer,json>>
+* <<php-writer,php>>
+* <<php-writer,phps>>
+* <<Python Response Writer,python>>
+* <<Ruby Response Writer,ruby>>
+* <<Smile Response Writer,smile>>
+* <<Velocity Response Writer,velocity>>
+* <<XLSX Response Writer,xlsx>>
+* <<Standard XML Response Writer,xml>>
+* <<XSLT Response Writer,xslt>>
 
-The XML Response Writer is the most general purpose and reusable Response Writer currently included with Solr. It is the format used in most discussions and documentation about the response of Solr queries.
-
-Note that the XSLT Response Writer can be used to convert the XML produced by this writer to other vocabularies or text-based formats.
-
-The behavior of the XML Response Writer can be driven by the following query parameters.
-
-[[ResponseWriters-TheversionParameter]]
-=== The version Parameter
-
-The `version` parameter determines the XML protocol used in the response. Clients are strongly encouraged to _always_ specify the protocol version, so as to ensure that the format of the response they receive does not change unexpectedly if the Solr server is upgraded and a new default format is introduced.
-
-The only currently supported version value is `2.2`. The format of the `responseHeader` changed to use the same `<lst>` structure as the rest of the response.
-
-The default value is the latest supported.
-
-[[ResponseWriters-ThestylesheetParameter]]
-=== The stylesheet Parameter
-
-The `stylesheet` parameter can be used to direct Solr to include a `<?xml-stylesheet type="text/xsl" href="..."?>` declaration in the XML response it returns.
-
-The default behavior is not to return any stylesheet declaration at all.
-
-[IMPORTANT]
-====
-Use of the `stylesheet` parameter is discouraged, as there is currently no way to specify external stylesheets, and no stylesheets are provided in the Solr distributions. This is a legacy parameter, which may be developed further in a future release.
-====
-
-[[ResponseWriters-TheindentParameter]]
-=== The indent Parameter
-
-If the `indent` parameter is used, and has a non-blank value, then Solr will make some attempts at indenting its XML response to make it more readable by humans.
-
-The default behavior is not to indent.
-
-[[ResponseWriters-TheXSLTResponseWriter]]
-== The XSLT Response Writer
-
-The XSLT Response Writer applies an XML stylesheet to output. It can be used for tasks such as formatting results for an RSS feed.
-
-[[ResponseWriters-trParameter]]
-=== tr Parameter
-
-The XSLT Response Writer accepts one parameter: the `tr` parameter, which identifies the XML transformation to use. The transformation must be found in the Solr `conf/xslt` directory.
-
-The Content-Type of the response is set according to the `<xsl:output>` statement in the XSLT transform, for example: `<xsl:output media-type="text/html"/>`
-
-[[ResponseWriters-Configuration]]
-=== Configuration
-
-The example below, from the `sample_techproducts_configs` <<response-writers.adoc#response-writers,config set>> in the Solr distribution, shows how the XSLT Response Writer is configured.
-
-[source,xml]
-----
-<!--
-  Changes to XSLT transforms are taken into account
-  every xsltCacheLifetimeSeconds at most.
--->
-<queryResponseWriter name="xslt"
-                     class="org.apache.solr.request.XSLTResponseWriter">
-  <int name="xsltCacheLifetimeSeconds">5</int>
-</queryResponseWriter>
-----
-
-A value of 5 for `xsltCacheLifetimeSeconds` is good for development, to see XSLT changes quickly. For production you probably want a much higher value.
-
-[[ResponseWriters-JSONResponseWriter]]
 == JSON Response Writer
 
-A very commonly used Response Writer is the `JsonResponseWriter`, which formats output in JavaScript Object Notation (JSON), a lightweight data interchange format specified in specified in RFC 4627. Setting the `wt` parameter to `json` invokes this Response Writer.
+The default Solr Response Writer is the `JsonResponseWriter`, which formats output in JavaScript Object Notation (JSON), a lightweight data interchange format specified in RFC 4627. If you do not set the `wt` parameter in your request, you will get JSON by default.
 
-Here is a sample response for a simple query like `q=id:VS1GB400C3&wt=json`:
+Here is a sample response for a simple query like `q=id:VS1GB400C3`:
 
 [source,json]
 ----
@@ -123,9 +53,7 @@ Here is a sample response for a simple query like `q=id:VS1GB400C3&wt=json`:
     "status":0,
     "QTime":7,
     "params":{
-      "q":"id:VS1GB400C3",
-      "indent":"on",
-      "wt":"json"}},
+      "q":"id:VS1GB400C3"}},
   "response":{"numFound":1,"start":0,"maxScore":2.3025851,"docs":[
       {
         "id":"VS1GB400C3",
@@ -158,10 +86,8 @@ The default mime type for the JSON writer is `application/json`, however this ca
 </queryResponseWriter>
 ----
 
-[[ResponseWriters-JSON-SpecificParameters]]
 === JSON-Specific Parameters
 
-[[ResponseWriters-json.nl]]
 ==== json.nl
 
 This parameter controls the output format of NamedLists, where order is more important than access by name. NamedList is currently used for field faceting data.
@@ -196,7 +122,6 @@ NamedList is represented as an array of Name Type Value JSON objects.
 +
 With input of `NamedList("a"=1, "bar"="foo", null=3, null=null)`, the output would be `[{"name":"a","type":"int","value":1}, {"name":"bar","type":"str","value":"foo"}, {"name":null,"type":"int","value":3}, {"name":null,"type":"null","value":null}]`.
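 For example, the NamedList style for facet output can be chosen per request (a sketch, assuming the "techproducts" example):

 [source,bash]
 ----
 # Render field facet NamedLists as a JSON object instead of a flat array
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.field=cat&json.nl=map"
 ----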
 
-[[ResponseWriters-json.wrf]]
 ==== json.wrf
 
 `json.wrf=function` adds a wrapper-function around the JSON response, useful in AJAX with dynamic script tags for specifying a JavaScript callback function.
@@ -204,17 +129,76 @@ With input of `NamedList("a"=1, "bar"="foo", null=3, null=null)`, the output wou
 * http://www.xml.com/pub/a/2005/12/21/json-dynamic-script-tag.html
 * http://www.theurer.cc/blog/2005/12/15/web-services-json-dump-your-proxy/
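 As an illustration, a hypothetical callback named `handleSolr` could be requested like this; the JSON body is then wrapped as `handleSolr({...})`:

 [source,bash]
 ----
 # "handleSolr" is an arbitrary example name for the client-side callback
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&json.wrf=handleSolr"
 ----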
 
-[[ResponseWriters-BinaryResponseWriter]]
+
+== Standard XML Response Writer
+
+The XML Response Writer is the most general purpose and reusable Response Writer currently included with Solr. It is the format used in most discussions and documentation about the response of Solr queries.
+
+Note that the XSLT Response Writer can be used to convert the XML produced by this writer to other vocabularies or text-based formats.
+
+The behavior of the XML Response Writer can be driven by the following query parameters.
+
+=== The version Parameter
+
+The `version` parameter determines the XML protocol used in the response. Clients are strongly encouraged to _always_ specify the protocol version, so as to ensure that the format of the response they receive does not change unexpectedly if the Solr server is upgraded and a new default format is introduced.
+
+The only currently supported version value is `2.2`. The format of the `responseHeader` changed to use the same `<lst>` structure as the rest of the response.
+
+The default value is the latest supported.
+
+=== stylesheet Parameter
+
+The `stylesheet` parameter can be used to direct Solr to include a `<?xml-stylesheet type="text/xsl" href="..."?>` declaration in the XML response it returns.
+
+The default behavior is not to return any stylesheet declaration at all.
+
+[IMPORTANT]
+====
+Use of the `stylesheet` parameter is discouraged, as there is currently no way to specify external stylesheets, and no stylesheets are provided in the Solr distributions. This is a legacy parameter, which may be developed further in a future release.
+====
+
+=== indent Parameter
+
+If the `indent` parameter is used, and has a non-blank value, then Solr will make some attempts at indenting its XML response to make it more readable by humans.
+
+The default behavior is not to indent.
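+
+For example, an indented XML response pinned to protocol version 2.2 could be requested like this (a sketch, assuming the "techproducts" example):
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=xml&indent=true&version=2.2"
+----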
+
+== XSLT Response Writer
+
+The XSLT Response Writer applies an XML stylesheet to output. It can be used for tasks such as formatting results for an RSS feed.
+
+=== tr Parameter
+
+The XSLT Response Writer accepts one parameter: the `tr` parameter, which identifies the XML transformation to use. The transformation must be found in the Solr `conf/xslt` directory.
+
+The Content-Type of the response is set according to the `<xsl:output>` statement in the XSLT transform, for example: `<xsl:output media-type="text/html"/>`
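+
+For example, assuming a transform file named `example.xsl` exists under `conf/xslt`, it could be applied like this:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=xslt&tr=example.xsl"
+----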
+
+=== XSLT Configuration
+
+The example below, from the `sample_techproducts_configs` <<config-sets.adoc#config-sets,config set>> in the Solr distribution, shows how the XSLT Response Writer is configured.
+
+[source,xml]
+----
+<!--
+  Changes to XSLT transforms are taken into account
+  every xsltCacheLifetimeSeconds at most.
+-->
+<queryResponseWriter name="xslt"
+                     class="org.apache.solr.request.XSLTResponseWriter">
+  <int name="xsltCacheLifetimeSeconds">5</int>
+</queryResponseWriter>
+----
+
+A value of 5 for `xsltCacheLifetimeSeconds` is good for development, to see XSLT changes quickly. For production you probably want a much higher value.
+
 == Binary Response Writer
 
 This is a custom binary format used by Solr for inter-node communication as well as client-server communication. SolrJ uses this as the default for indexing as well as querying. See <<client-apis.adoc#client-apis,Client APIs>> for more details.
 
-[[ResponseWriters-GeoJSONResponseWriter]]
 == GeoJSON Response Writer
 
 Returns Solr results in http://geojson.org[GeoJSON] augmented with Solr-specific JSON. To use this, set `wt=geojson` and `geojson.field` to the name of a spatial Solr field. Not all spatial field types are supported, and you'll get an error if you use an unsupported one.
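 For example, the "techproducts" sample data includes a spatial `store` field, so a GeoJSON response could be requested like this (a sketch):

 [source,bash]
 ----
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=geojson&geojson.field=store"
 ----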
 
-[[ResponseWriters-PythonResponseWriter]]
 == Python Response Writer
 
 Solr has an optional Python response format that extends its JSON output in the following ways to allow the response to be safely evaluated by the Python interpreter:
@@ -225,7 +209,7 @@ Solr has an optional Python response format that extends its JSON output in the
 * newlines are escaped
 * null changed to None
 
-[[ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter]]
+[[php-writer]]
 == PHP Response Writer and PHP Serialized Response Writer
 
 Solr has a PHP response format that outputs an array (as PHP code) which can be evaluated. Setting the `wt` parameter to `php` invokes the PHP Response Writer.
@@ -250,7 +234,6 @@ $result = unserialize($serializedResult);
 print_r($result);
 ----
 
-[[ResponseWriters-RubyResponseWriter]]
 == Ruby Response Writer
 
 Solr has an optional Ruby response format that extends its JSON output in the following ways to allow the response to be safely evaluated by Ruby's interpreter:
@@ -274,14 +257,12 @@ puts 'number of matches = ' + rsp['response']['numFound'].to_s
 rsp['response']['docs'].each { |doc| puts 'name field = ' + doc['name'\] }
 ----
 
-[[ResponseWriters-CSVResponseWriter]]
 == CSV Response Writer
 
 The CSV response writer returns a list of documents in comma-separated values (CSV) format. Other information that would normally be included in a response, such as facet information, is excluded.
 
 The CSV response writer supports multi-valued fields, as well as <<transforming-result-documents.adoc#transforming-result-documents,pseudo-fields>>, and the output of this CSV format is compatible with Solr's https://wiki.apache.org/solr/UpdateCSV[CSV update format].
 
-[[ResponseWriters-CSVParameters]]
 === CSV Parameters
 
 These parameters specify the CSV format that will be returned. You can accept the default values or specify your own.
@@ -297,7 +278,6 @@ These parameters specify the CSV format that will be returned. You can accept th
 |csv.null |Defaults to a zero length string. Use this parameter when a document has no value for a particular field.
 |===
 
-[[ResponseWriters-Multi-ValuedFieldCSVParameters]]
 === Multi-Valued Field CSV Parameters
 
 These parameters specify how multi-valued fields are encoded. Per-field overrides for these values can be done using `f.<fieldname>.csv.separator=|`.
@@ -310,8 +290,7 @@ These parameters specify how multi-valued fields are encoded. Per-field override
 |csv.mv.separator |Defaults to the `csv.separator` value.
 |===
 
-[[ResponseWriters-Example]]
-=== Example
+=== CSV Writer Example
 
 `\http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=csv` returns:
 
@@ -323,19 +302,17 @@ F8V7067-APL-KIT,"electronics,connector",Belkin Mobile Power Cord for iPod w/ Doc
 MA147LL/A,"electronics,music",Apple 60 GB iPod with Video Playback Black,10,399.0,0.2446348
 ----
 
-[[ResponseWriters-VelocityResponseWriter]]
+[[velocity-writer]]
 == Velocity Response Writer
 
 The `VelocityResponseWriter` processes the Solr response and request context through Apache Velocity templating.
 
-See <<velocity-response-writer.adoc#velocity-response-writer,Velocity Response Writer>> section for details.
+See the <<velocity-response-writer.adoc#velocity-response-writer,Velocity Response Writer>> section for details.
 
-[[ResponseWriters-SmileResponseWriter]]
 == Smile Response Writer
 
 The Smile format is a JSON-compatible binary format, described in detail here: http://wiki.fasterxml.com/SmileFormat.
 
-[[ResponseWriters-XLSXResponseWriter]]
 == XLSX Response Writer
 
 Use this to get the response as a spreadsheet in the .xlsx (Microsoft Excel) format. It accepts parameters in the form `colwidth.<field-name>` and `colname.<field-name>` which help you customize the column widths and column names.
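 For example (a sketch, assuming the "techproducts" example), the following saves a spreadsheet with a custom width and header for the `id` column:

 [source,bash]
 ----
 curl -o results.xlsx "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,name,price&wt=xlsx&colwidth.id=15&colname.id=Product+ID"
 ----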

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/result-clustering.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-clustering.adoc b/solr/solr-ref-guide/src/result-clustering.adoc
index db9a43c..c9bdf63 100644
--- a/solr/solr-ref-guide/src/result-clustering.adoc
+++ b/solr/solr-ref-guide/src/result-clustering.adoc
@@ -28,8 +28,7 @@ image::images/result-clustering/carrot2.png[image,width=900]
 
 The query issued to the system was _Solr_. It seems clear that faceting could not yield a similar set of groups, although the goals of both techniques are similar—to let the user explore the set of search results and either rephrase the query or narrow the focus to a subset of current documents. Clustering is also similar to <<result-grouping.adoc#result-grouping,Result Grouping>> in that it can help to look deeper into search results, beyond the top few hits.
 
-[[ResultClustering-PreliminaryConcepts]]
-== Preliminary Concepts
+== Clustering Concepts
 
 Each *document* passed to the clustering component is composed of several logical parts:
 
@@ -39,12 +38,11 @@ Each *document* passed to the clustering component is composed of several logica
 * the main content,
 * a language code of the title and content.
 
-The identifier part is mandatory, everything else is optional but at least one of the text fields (title or content) will be required to make the clustering process reasonable. It is important to remember that logical document parts must be mapped to a particular schema and its fields. The content (text) for clustering can be sourced from either a stored text field or context-filtered using a highlighter, all these options are explained below in the <<ResultClustering-Configuration,configuration>> section.
+The identifier part is mandatory; everything else is optional, but at least one of the text fields (title or content) is required to make the clustering process reasonable. It is important to remember that logical document parts must be mapped to a particular schema and its fields. The content (text) for clustering can be sourced from either a stored text field or context-filtered using a highlighter; all of these options are explained below in the <<Clustering Configuration,configuration>> section.
 
 A *clustering algorithm* is the actual logic (implementation) that discovers relationships among the documents in the search result and forms human-readable cluster labels. Depending on the choice of the algorithm, the clusters may (and probably will) vary. Solr comes with several algorithms implemented in the open source http://carrot2.org[Carrot2] project; commercial alternatives also exist.
 
-[[ResultClustering-QuickStartExample]]
-== Quick Start Example
+== Clustering Quick Start Example
 
 The "```techproducts```" example included with Solr is pre-configured with all the necessary components for result clustering -- but they are disabled by default.
 
@@ -137,16 +135,13 @@ There were a few clusters discovered for this query (`\*:*`), separating search
 
 Depending on the quality of input documents, some clusters may not make much sense. Some documents may be left out and not be clustered at all; these will be assigned to the synthetic _Other Topics_ group, marked with the `other-topics` property set to `true` (see the XML dump above for an example). The score of the other topics group is zero.
 
-[[ResultClustering-Installation]]
-== Installation
+== Installing the Clustering Contrib
 
 The clustering contrib extension requires `dist/solr-clustering-*.jar` and all JARs under `contrib/clustering/lib`.
 
-[[ResultClustering-Configuration]]
-== Configuration
+== Clustering Configuration
 
-[[ResultClustering-DeclarationoftheSearchComponentandRequestHandler]]
-=== Declaration of the Search Component and Request Handler
+=== Declaration of the Clustering Search Component and Request Handler
 
 The clustering extension is a search component and must be declared in `solrconfig.xml`. Such a component can then be appended to a request handler as the last component in the chain (because it requires search results, which must previously have been fetched by the search component).
 
@@ -205,8 +200,6 @@ An example configuration could look as shown below.
 </requestHandler>
 ----
 
-
-[[ResultClustering-ConfigurationParametersoftheClusteringComponent]]
 === Configuration Parameters of the Clustering Component
 
 The following parameters of each clustering engine or the entire clustering component (depending on where they are declared) are available.
@@ -237,7 +230,6 @@ If `true` and the algorithm supports hierarchical clustering, sub-clusters will
 `carrot.numDescriptions`::
 Maximum number of per-cluster labels to return (if the algorithm assigns more than one label to a cluster).
 
-
 The `carrot.algorithm` parameter should contain a fully qualified class name of an algorithm supported by the http://project.carrot2.org[Carrot2] framework. Currently, the following algorithms are available:
 
 * `org.carrot2.clustering.lingo.LingoClusteringAlgorithm` (open source)
@@ -253,7 +245,6 @@ For a comparison of characteristics of these algorithms see the following links:
 
 The question of which algorithm to choose depends on the amount of traffic (STC is faster than Lingo, but arguably produces less intuitive clusters; Lingo3G is the fastest algorithm but is not free or open source), the expected result (Lingo3G provides hierarchical clusters, Lingo and STC provide flat clusters), and the input data (each algorithm will cluster the input slightly differently). There is no single answer as to which algorithm is "the best".
 
-[[ResultClustering-ContextualandFullFieldClustering]]
 === Contextual and Full Field Clustering
 
 The clustering engine can apply clustering to the full content of (stored) fields or it can run an internal highlighter pass to extract context-snippets before clustering. Highlighting is recommended when the logical snippet field contains a lot of content (this would affect clustering performance). Highlighting can also increase the quality of clustering because the content passed to the algorithm will be more focused around the query (it will be query-specific context). The following parameters control the internal highlighter.
@@ -266,10 +257,9 @@ The size, in characters, of the snippets (aka fragments) created by the highligh
 
 `carrot.summarySnippets`:: The number of summary snippets to generate for clustering. If not specified, the default highlighting snippet count (`hl.snippets`) will be used.
 
-[[ResultClustering-LogicaltoDocumentFieldMapping]]
 === Logical to Document Field Mapping
 
-As already mentioned in <<ResultClustering-PreliminaryConcepts,Preliminary Concepts>>, the clustering component clusters "documents" consisting of logical parts that need to be mapped onto physical schema of data stored in Solr. The field mapping attributes provide a connection between fields and logical document parts. Note that the content of title and snippet fields must be *stored* so that it can be retrieved at search time.
+As already mentioned in <<Clustering Concepts>>, the clustering component clusters "documents" consisting of logical parts that need to be mapped onto the physical schema of the data stored in Solr. The field mapping attributes provide a connection between fields and logical document parts. Note that the content of title and snippet fields must be *stored* so that it can be retrieved at search time.
 
 `carrot.title`::
 The field (alternatively a comma- or space-separated list of fields) that should be mapped to the logical document's title. The clustering algorithms typically give more weight to the content of the title field compared to the content (snippet). For best results, the field should contain concise, noise-free content. If there is no clear title in your data, you can leave this parameter blank.
@@ -280,7 +270,6 @@ The field (alternatively comma- or space-separated list of fields) that should b
 `carrot.url`::
 The field that should be mapped to the logical document's content URL. Leave blank if not required.
 
-[[ResultClustering-ClusteringMultilingualContent]]
 === Clustering Multilingual Content
 
 The field mapping specification can include a `carrot.lang` parameter, which defines the field that stores the http://www.loc.gov/standards/iso639-2/php/code_list.php[ISO 639-1] code of the language in which the title and content of the document are written. This information can be stored in the index based on a priori knowledge of the documents' source or by a language detection filter applied at indexing time. All algorithms inside the Carrot2 framework will accept ISO codes of languages defined in the https://github.com/carrot2/carrot2/blob/master/core/carrot2-core/src/org/carrot2/core/LanguageCode.java[LanguageCode enum].
@@ -295,15 +284,13 @@ A mapping of arbitrary strings into ISO 639 two-letter codes used by `carrot.lan
 
 The default language can also be set using Carrot2-specific algorithm attributes (in this case the http://doc.carrot2.org/#section.attribute.lingo.MultilingualClustering.defaultLanguage[MultilingualClustering.defaultLanguage] attribute).
 
-[[ResultClustering-TweakingAlgorithmSettings]]
 == Tweaking Algorithm Settings
 
 The algorithms that come with Solr use their default settings, which may be inadequate for all data sets. All algorithms have lexical and other resources (stop words, stemmers, parameters) that may require tweaking to get better clusters (and cluster labels). For Carrot2-based algorithms it is probably best to refer to a dedicated tuning application called Carrot2 Workbench (screenshot below). From this application one can export a set of algorithm attributes as an XML file, which can then be placed under the location pointed to by `carrot.resourcesDir`.
 
 image::images/result-clustering/carrot2-workbench.png[image,scaledwidth=75.0%]
 
-[[ResultClustering-ProvidingDefaults]]
-=== Providing Defaults
+=== Providing Defaults for Clustering
 
 The default attributes for all engines (algorithms) declared in the clustering component are placed under `carrot.resourcesDir`, with an expected file name of `engineName-attributes.xml`. So for an engine named `lingo` and the default value of `carrot.resourcesDir`, the attributes would be read from a file in `conf/clustering/carrot2/lingo-attributes.xml`.
 
@@ -323,8 +310,7 @@ An example XML file changing the default language of documents to Polish is show
 </attribute-sets>
 ----
 
-[[ResultClustering-TweakingatQuery-Time]]
-=== Tweaking at Query-Time
+=== Tweaking Algorithms at Query-Time
 
 The clustering component and Carrot2 clustering algorithms can accept query-time attribute overrides. Note that certain things (for example lexical resources) can only be initialized once (at startup, via the XML configuration files).
 
@@ -332,8 +318,7 @@ An example query that changes the `LingoClusteringAlgorithm.desiredClusterCountB
 
 The clustering engine (the algorithm declared in `solrconfig.xml`) can also be changed at runtime by passing a `clustering.engine=name` request parameter: http://localhost:8983/solr/techproducts/clustering?q=*:*&rows=100&clustering.engine=kmeans
 
-[[ResultClustering-PerformanceConsiderations]]
-== Performance Considerations
+== Performance Considerations with Dynamic Clustering
 
 Dynamic clustering of search results comes with two major performance penalties:
 
@@ -349,7 +334,6 @@ For simple queries, the clustering time will usually dominate the fetch time. If
 
 Some of these techniques are described in the _Apache SOLR and Carrot2 integration strategies_ document, available at http://carrot2.github.io/solr-integration-strategies. The topic of improving performance is also included in the Carrot2 manual at http://doc.carrot2.org/#section.advanced-topics.fine-tuning.performance.
 
-[[ResultClustering-AdditionalResources]]
 == Additional Resources
 
 The following resources provide additional information about the clustering component in Solr and its potential applications.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/result-grouping.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-grouping.adoc b/solr/solr-ref-guide/src/result-grouping.adoc
index 89b3c33..8a51e12 100644
--- a/solr/solr-ref-guide/src/result-grouping.adoc
+++ b/solr/solr-ref-guide/src/result-grouping.adoc
@@ -54,8 +54,7 @@ Object 3
 
 If you ask Solr to group these documents by "product_range", then the total number of groups is 2, but the facets for ppm are 2 for 62 and 1 for 65.
 
-[[ResultGrouping-RequestParameters]]
-== Request Parameters
+== Grouping Parameters
 
 Result Grouping takes the following request parameters. Any number of these request parameters can be included in a single request:
 
@@ -68,7 +67,7 @@ The name of the field by which to group results. The field must be single-valued
 `group.func`::
 Group based on the unique values of a function query.
 +
-NOTE: This option does not work with <<ResultGrouping-DistributedResultGroupingCaveats,distributed searches>>.
+NOTE: This option does not work with <<Distributed Result Grouping Caveats,distributed searches>>.
 
 `group.query`::
 Return a single group of documents that match the given query.
@@ -100,7 +99,7 @@ If `true`, the result of the first field grouping command is used as the main re
 `group.ngroups`::
 If `true`, Solr includes the number of groups that have matched the query in the results. The default value is `false`.
 +
-See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Result Grouping Caveats>> when using sharded indexes
+See below for <<Distributed Result Grouping Caveats>> when using sharded indexes.
 
 `group.truncate`::
 If `true`, facet counts are based on the most relevant document of each group matching the query. The default value is `false`.
@@ -110,7 +109,7 @@ Determines whether to compute grouped facets for the field facets specified in f
 +
 WARNING: There can be a heavy performance cost to this option.
 +
-See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Result Grouping Caveats>> when using sharded indexes.
+See below for <<Distributed Result Grouping Caveats>> when using sharded indexes.
 
 `group.cache.percent`::
 Setting this parameter to a number greater than 0 enables caching for result grouping. Result Grouping executes two searches; this option caches the second search. The default value is `0`. The maximum value is `100`.
@@ -119,17 +118,15 @@ Testing has shown that group caching only improves search time with Boolean, wil
 
 Any number of group commands (e.g., `group.field`, `group.func`, `group.query`, etc.) may be specified in a single request.
 
-[[ResultGrouping-Examples]]
-== Examples
+== Grouping Examples
 
 All of the following sample queries work with Solr's "`bin/solr -e techproducts`" example.
 
-[[ResultGrouping-GroupingResultsbyField]]
 === Grouping Results by Field
 
 In this example, we will group results based on the `manu_exact` field, which specifies the manufacturer of the items in the sample dataset.
 
-`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name&q=solr+memory&group=true&group.field=manu_exact`
+`\http://localhost:8983/solr/techproducts/select?fl=id,name&q=solr+memory&group=true&group.field=manu_exact`
 
 [source,json]
 ----
@@ -180,7 +177,7 @@ The response indicates that there are six total matches for our query. For each
 
 We can run the same query with the request parameter `group.main=true`. This will format the results as a single flat document list. This flat format does not include as much information as the normal result grouping query results – notably the `numFound` in each group – but it may be easier for existing Solr clients to parse.
 
-`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name,manufacturer&q=solr+memory&group=true&group.field=manu_exact&group.main=true`
+`\http://localhost:8983/solr/techproducts/select?fl=id,name,manufacturer&q=solr+memory&group=true&group.field=manu_exact&group.main=true`
 
 [source,json]
 ----
@@ -194,8 +191,7 @@ We can run the same query with the request parameter `group.main=true`. This wil
       "q":"solr memory",
       "group.field":"manu_exact",
       "group.main":"true",
-      "group":"true",
-      "wt":"json"}},
+      "group":"true"}},
   "grouped":{},
   "response":{"numFound":6,"start":0,"docs":[
       {
@@ -217,12 +213,11 @@ We can run the same query with the request parameter `group.main=true`. This wil
 }
 ----
 
-[[ResultGrouping-GroupingbyQuery]]
 === Grouping by Query
 
 In this example, we will use the `group.query` parameter to find the top three results for "memory" in two different price ranges: 0.00 to 99.99, and over 100.
 
-`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=name,price&q=memory&group=true&group.query=price:[0+TO+99.99]&group.query=price:[100+TO+*]&group.limit=3`
+`\http://localhost:8983/solr/techproducts/select?indent=true&fl=name,price&q=memory&group=true&group.query=price:[0+TO+99.99]&group.query=price:[100+TO+*]&group.limit=3`
 
 [source,json]
 ----
@@ -237,8 +232,7 @@ In this example, we will use the `group.query` parameter to find the top three r
       "group.limit":"3",
       "group.query":["price:[0 TO 99.99]",
       "price:[100 TO *]"],
-      "group":"true",
-      "wt":"json"}},
+      "group":"true"}},
   "grouped":{
     "price:[0 TO 99.99]":{
       "matches":5,
@@ -267,7 +261,6 @@ In this example, we will use the `group.query` parameter to find the top three r
 
 In this case, Solr found five matches for "memory," but only returns four results grouped by price. This is because one result for "memory" did not have a price assigned to it.
 
-[[ResultGrouping-DistributedResultGroupingCaveats]]
 == Distributed Result Grouping Caveats
 
 Grouping is supported for <<solrcloud.adoc#solrcloud,distributed searches>>, with some caveats:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index ee2fd88..3b84dc6 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -26,10 +26,9 @@ The roles can be used with any of the authentication plugins or with a custom au
 
 Once defined through the API, roles are stored in `security.json`.
 
-[[Rule-BasedAuthorizationPlugin-EnabletheAuthorizationPlugin]]
 == Enable the Authorization Plugin
 
-The plugin must be enabled in `security.json`. This file and where to put it in your system is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
+The plugin must be enabled in `security.json`. This file and where to put it in your system is described in detail in the section <<authentication-and-authorization-plugins.adoc#enable-plugins-with-security-json,Enable Plugins with security.json>>.
 
 This file has two parts, the `authentication` part and the `authorization` part. The `authentication` part stores information about the class being used for authentication.
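 
 For example, a minimal `security.json` that enables Basic authentication together with this plugin might look like the following sketch. The credentials hash is the reference guide's sample for the user 'solr' with the password `SolrRocks`, and the upload command assumes a SolrCloud instance with embedded ZooKeeper on `localhost:9983`; adjust both for your installation.
 
 [source,bash]
 ----
 # a minimal security.json (sketch): Basic authentication plus
 # rule-based authorization, with the 'solr' user in the 'admin' role
 cat > security.json <<'EOF'
 {
   "authentication": {
     "class": "solr.BasicAuthPlugin",
     "credentials": {
       "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
     }
   },
   "authorization": {
     "class": "solr.RuleBasedAuthorizationPlugin",
     "permissions": [{"name": "security-edit", "role": "admin"}],
     "user-role": {"solr": "admin"}
   }
 }
 EOF
 # upload it to ZooKeeper so every node sees the same configuration
 server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
 ----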
 
@@ -61,14 +60,12 @@ There are several things defined in this example:
 * The 'admin' role has been defined, and it has permission to edit security settings.
 * The 'solr' user has been assigned the 'admin' role.
 
-[[Rule-BasedAuthorizationPlugin-PermissionAttributes]]
 == Permission Attributes
 
 Each role comprises one or more permissions, which define what the user is allowed to do. Each permission is made up of several attributes that define the allowed activity. There are some pre-defined permissions which cannot be modified.
 
 The permissions are consulted in the order they appear in `security.json`. The first permission that matches is applied for each user, so the strictest permissions should be at the top of the list. The order of permissions can be controlled with a parameter of the Authorization API, as described below.
 
-[[Rule-BasedAuthorizationPlugin-PredefinedPermissions]]
 === Predefined Permissions
 
 There are several permissions that are pre-defined. These have fixed default values, which cannot be modified, and new attributes cannot be added. To use one of these permissions, simply define a role that includes it, and then assign a user to that role.
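 
 As a sketch, granting a hypothetical 'monitoring' role the predefined 'collection-admin-read' permission and assigning a hypothetical 'metrics-user' to it could look like this (credentials and host are illustrative):
 
 [source,bash]
 ----
 # grant a predefined permission to a role, then map a user to that role
 curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "collection-admin-read", "role": "monitoring"},
   "set-user-role": {"metrics-user": ["monitoring"]}
 }' http://localhost:8983/solr/admin/authorization
 ----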
@@ -107,19 +104,16 @@ The pre-defined permissions are:
 ** OVERSEERSTATUS
 ** CLUSTERSTATUS
 ** REQUESTSTATUS
-* *update*: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
-* *read*: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`. This applies to all collections by default ( `collection:"*"` ).
+* *update*: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#updaterequesthandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
+* *read*: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#searchhandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, and `/sql`. This applies to all collections by default (`collection:"*"`).
 * *all*: Any requests coming to Solr.
 
-[[Rule-BasedAuthorizationPlugin-AuthorizationAPI]]
 == Authorization API
 
-[[Rule-BasedAuthorizationPlugin-APIEndpoint]]
-=== API Endpoint
+=== Authorization API Endpoint
 
 `/admin/authorization`: takes a set of commands to create permissions, map permissions to roles, and map roles to users.
 
-[[Rule-BasedAuthorizationPlugin-ManagePermissions]]
 === Manage Permissions
 
 Three commands control managing permissions:
@@ -195,7 +189,6 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "read", "role":"guest"}
 }' http://localhost:8983/solr/admin/authorization
 
-[[Rule-BasedAuthorizationPlugin-UpdateorDeletePermissions]]
 === Update or Delete Permissions
 
 Permissions can be accessed using their index in the list. Use the `/admin/authorization` API to see the existing permissions and their indices.
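 
 As a sketch, the permission list can be inspected with a GET request, and a permission can then be updated or deleted by its index (the index value 3 below is illustrative):
 
 [source,bash]
 ----
 # list the current permissions and their indices
 curl --user solr:SolrRocks http://localhost:8983/solr/admin/authorization
 
 # change the role of the permission at index 3
 curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "update-permission": {"index": 3, "role": "admin"}
 }' http://localhost:8983/solr/admin/authorization
 
 # or remove it entirely
 curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "delete-permission": 3
 }' http://localhost:8983/solr/admin/authorization
 ----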
@@ -216,7 +209,6 @@ curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
 }' http://localhost:8983/solr/admin/authorization
 
 
-[[Rule-BasedAuthorizationPlugin-MapRolestoUsers]]
 === Map Roles to Users
 
 A single command allows roles to be mapped to users:
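 
 As a sketch, the `set-user-role` command maps a user to one or more roles, and assigning `null` removes all of a user's roles (the user and role names below are illustrative):
 
 [source,bash]
 ----
 # give 'tom' the 'admin' and 'dev' roles, and strip all roles from 'harry'
 curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-user-role": {"tom": ["admin","dev"], "harry": null}
 }' http://localhost:8983/solr/admin/authorization
 ----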

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
index 30e15eb..2464606 100644
--- a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
+++ b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
@@ -31,7 +31,6 @@ This feature is used in the following instances:
 * Replica creation
 * Shard splitting
 
-[[Rule-basedReplicaPlacement-CommonUseCases]]
 == Common Use Cases
 
 There are several situations where this functionality may be used. A few of the rules that could be implemented are listed below:
@@ -43,7 +42,6 @@ There are several situations where this functionality may be used. A few of the
 * Assign replica in nodes hosting less than 5 cores.
 * Assign replicas in nodes hosting the least number of cores.
 
-[[Rule-basedReplicaPlacement-RuleConditions]]
 == Rule Conditions
 
 A rule is a set of conditions that a node must satisfy before a replica core can be created there.
@@ -52,9 +50,8 @@ There are three possible conditions.
 
 * *shard*: this is the name of a shard, or a wildcard (* means all shards). If shard is not specified, then the rule applies to the entire collection.
 * *replica*: this can be a number, or a wildcard (* means any number from zero to infinity).
-* *tag*: this is an attribute of a node in the cluster that can be used in a rule, e.g., “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Rule-basedReplicaPlacement-Snitches,Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
+* *tag*: this is an attribute of a node in the cluster that can be used in a rule, e.g., “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
 
-[[Rule-basedReplicaPlacement-RuleOperators]]
 === Rule Operators
 
 A condition can have one of the following operators to set the parameters for the rule.
@@ -64,25 +61,20 @@ A condition can have one of the following operators to set the parameters for th
 * *less than (<)*: `tag:<x` means the tag value is less than ‘x’; x must be a number.
 * *not equal (!)*: `tag:!x` means the tag value MUST NOT be equal to ‘x’. The equality check is performed on the String value.
 
-
-[[Rule-basedReplicaPlacement-FuzzyOperator_]]
 === Fuzzy Operator (~)
 
 This can be used as a suffix to any condition. Solr will first try to satisfy the rule strictly; if it can’t find enough nodes to match the criterion, it tries to find the next best match, which may not satisfy the criterion. For example, if we have a rule such as `freedisk:>200~`, Solr will try to assign replicas of this collection on nodes with more than 200GB of free disk space. If that is not possible, the node which has the most free disk space will be chosen instead.
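 
 As a sketch, such a fuzzy rule can be passed when creating a collection (the collection name and sizing are illustrative):
 
 [source,bash]
 ----
 # prefer nodes with more than 200GB free disk, but fall back to the
 # node with the most free disk space if none qualifies
 curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&rule=freedisk:>200~'
 ----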
 
-[[Rule-basedReplicaPlacement-ChoosingAmongEquals]]
 === Choosing Among Equals
 
 The nodes are sorted using the rules, which ensures that even if many nodes match a rule, the best nodes are picked for the assignment. For example, if there is a rule such as `freedisk:>20`, nodes are sorted on disk space descending and the node with the most disk space is picked first. Or, if the rule is `cores:<5`, nodes are sorted by number of cores ascending and the node with the fewest cores is picked first.
 
-[[Rule-basedReplicaPlacement-Rulesfornewshards]]
-== Rules for new shards
+== Rules for New Shards
 
-The rules are persisted along with collection state. So, when a new replica is created, the system will assign replicas satisfying the rules. When a new shard is created as a result of using the Collection API's <<collections-api.adoc#CollectionsAPI-createshard,CREATESHARD command>>, ensure that you have created rules specific for that shard name. Rules can be altered using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION command>>. However, it is not required to do so if the rules do not specify explicit shard names. For example, a rule such as `shard:shard1,replica:*,ip_3:168:`, will not apply to any new shard created. But, if your rule is `replica:*,ip_3:168`, then it will apply to any new shard created.
+The rules are persisted along with collection state. So, when a new replica is created, the system will assign replicas satisfying the rules. When a new shard is created as a result of using the Collection API's <<collections-api.adoc#createshard,CREATESHARD command>>, ensure that you have created rules specific to that shard name. Rules can be altered using the <<collections-api.adoc#modifycollection,MODIFYCOLLECTION command>>. However, it is not required to do so if the rules do not specify explicit shard names. For example, a rule such as `shard:shard1,replica:*,ip_3:168` will not apply to any new shard created. But if your rule is `replica:*,ip_3:168`, then it will apply to any new shard created.
 
 The same is applicable to shard splitting. Shard splitting is treated exactly the same way as shard creation. Even though `shard1_1` and `shard1_2` may be created from `shard1`, the rules treat them as distinct, unrelated shards.
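 
 As a sketch, a rule for a newly created or split shard can be added with MODIFYCOLLECTION (the collection, shard, and rack values are illustrative):
 
 [source,bash]
 ----
 # add a placement rule that names the new sub-shard explicitly
 curl 'http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=mycollection&rule=shard:shard1_1,replica:*,rack:730'
 ----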
 
-[[Rule-basedReplicaPlacement-Snitches]]
 == Snitches
 
 Tag values come from a plugin called Snitch. If there is a tag named ‘rack’ in a rule, there must be a Snitch that provides the value of ‘rack’ for each node in the cluster. A snitch implements the Snitch interface. By default, Solr provides a snitch that supplies the following tags:
@@ -96,7 +88,6 @@ Tag values come from a plugin called Snitch. If there is a tag named ‘rack’
 * *ip_1, ip_2, ip_3, ip_4*: These are ip fragments for each node. For example, in a host with ip `192.168.1.2`, `ip_1 = 2`, `ip_2 = 1`, `ip_3 = 168`, and `ip_4 = 192`
 * *sysprop.{PROPERTY_NAME}*: These are values available from system properties. `sysprop.key` means a value that is passed to the node as `-Dkey=keyValue` during the node startup. It is possible to use rules like `sysprop.key:expectedVal,shard:*`
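 
 As a sketch, the sysprop tag ties placement to how each node was started (the property name, value, and collection name are illustrative):
 
 [source,bash]
 ----
 # start a node with a custom system property...
 bin/solr start -c -Drack=730
 # ...then place all replicas of a new collection on nodes started that way
 curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&rule=sysprop.rack:730,shard:*'
 ----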
 
-[[Rule-basedReplicaPlacement-HowSnitchesareConfigured]]
 === How Snitches are Configured
 
 It is possible to use one or more snitches for a set of rules. If the rules only need tags from the default snitch, it need not be explicitly configured. For example:
@@ -114,11 +105,8 @@ snitch=class:fqn.ClassName,key1:val1,key2:val2,key3:val3
 . Once the snitches are identified, they provide the tag values for each node in the cluster.
 . If the value for a tag is not obtained for a given node, it cannot participate in the assignment.
 
-[[Rule-basedReplicaPlacement-Examples]]
-== Examples
-
+== Replica Placement Examples
 
-[[Rule-basedReplicaPlacement-Keeplessthan2replicas_atmost1replica_ofthiscollectiononanynode]]
 === Keep less than 2 replicas (at most 1 replica) of this collection on any node
 
 For this rule, we define the `replica` condition with operators for "less than 2", and use a pre-defined tag named `node` to define nodes with any name.
@@ -129,8 +117,6 @@ replica:<2,node:*
 // this is equivalent to replica:<2,node:*,shard:**. We can omit shard:** because ** is the default value of shard
 ----
 
-
-[[Rule-basedReplicaPlacement-Foragivenshard_keeplessthan2replicasonanynode]]
 === For a given shard, keep less than 2 replicas on any node
 
 For this rule, we use the `shard` condition to define any shard, the `replica` condition with operators for "less than 2", and finally a pre-defined tag named `node` to define nodes with any name.
@@ -140,7 +126,6 @@ For this rule, we use the `shard` condition to define any shard , the `replica`
 shard:*,replica:<2,node:*
 ----
 
-[[Rule-basedReplicaPlacement-Assignallreplicasinshard1torack730]]
 === Assign all replicas in shard1 to rack 730
 
 This rule limits the `shard` condition to 'shard1', but allows any number of replicas. We're also referencing a custom tag named `rack`. Before defining this rule, we will need to configure a custom Snitch that provides values for the tag `rack`.
@@ -157,7 +142,6 @@ In this case, the default value of `replica` is * (or, all replicas). So, it can
 shard:shard1,rack:730
 ----
 
-[[Rule-basedReplicaPlacement-Createreplicasinnodeswithlessthan5coresonly]]
 === Create replicas in nodes with less than 5 cores only
 
 This rule uses the `replica` condition to define any number of replicas, but adds a pre-defined tag named `cores` and uses operators for "less than 5".
@@ -174,7 +158,6 @@ Again, we can simplify this to use the default value for `replica`, like so:
 cores:<5
 ----
 
-[[Rule-basedReplicaPlacement-Donotcreateanyreplicasinhost192.45.67.3]]
 === Do not create any replicas in host 192.45.67.3
 
 This rule uses only the pre-defined tag `host` to define an IP address where replicas should not be placed.
@@ -184,7 +167,6 @@ This rule uses only the pre-defined tag `host` to define an IP address where rep
 host:!192.45.67.3
 ----
 
-[[Rule-basedReplicaPlacement-DefiningRules]]
 == Defining Rules
 
 Rules are specified per collection during collection creation as request parameters. It is possible to specify multiple ‘rule’ and ‘snitch’ params as in this example:
@@ -194,4 +176,4 @@ Rules are specified per collection during collection creation as request paramet
 snitch=class:EC2Snitch&rule=shard:*,replica:1,dc:dc1&rule=shard:*,replica:<2,dc:dc3
 ----
 
-These rules are persisted in `clusterstate.json` in ZooKeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION>> API.
+These rules are persisted in `clusterstate.json` in ZooKeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#modifycollection,MODIFYCOLLECTION>> API.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
index 9f8e2dc..6ca5670 100644
--- a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
+++ b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
@@ -28,13 +28,11 @@ To use HDFS rather than a local filesystem, you must be using Hadoop 2.x and you
 * Modify `solr.in.sh` (or `solr.in.cmd` on Windows) to pass the JVM arguments automatically when using `bin/solr` without having to set them manually.
 * Define the properties in `solrconfig.xml`. These configuration changes would need to be repeated for every collection, so this is a good option if you only want some of your collections stored in HDFS.
 
-[[RunningSolronHDFS-StartingSolronHDFS]]
 == Starting Solr on HDFS
 
-[[RunningSolronHDFS-StandaloneSolrInstances]]
 === Standalone Solr Instances
 
-For standalone Solr instances, there are a few parameters you should be sure to modify before starting Solr. These can be set in `solrconfig.xml`(more on that <<RunningSolronHDFS-HdfsDirectoryFactoryParameters,below>>), or passed to the `bin/solr` script at startup.
+For standalone Solr instances, there are a few parameters you should be sure to modify before starting Solr. These can be set in `solrconfig.xml` (more on that <<HdfsDirectoryFactory Parameters,below>>), or passed to the `bin/solr` script at startup.
 
 * You need to use an `HdfsDirectoryFactory` and a data dir of the form `hdfs://host:port/path`
 * You need to specify an UpdateLog location of the form `hdfs://host:port/path`
@@ -50,9 +48,8 @@ bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.updatelog=hdfs://host:port/path
 ----
 
-This example will start Solr in standalone mode, using the defined JVM properties (explained in more detail <<RunningSolronHDFS-HdfsDirectoryFactoryParameters,below>>).
+This example will start Solr in standalone mode, using the defined JVM properties (explained in more detail <<HdfsDirectoryFactory Parameters,below>>).
 
-[[RunningSolronHDFS-SolrCloudInstances]]
 === SolrCloud Instances
 
 In SolrCloud mode, it's best to leave the data and update log directories as the defaults Solr comes with and simply specify the `solr.hdfs.home`. All dynamically created collections will create the appropriate directories automatically under the `solr.hdfs.home` root directory.
@@ -70,7 +67,6 @@ bin/solr start -c -Dsolr.directoryFactory=HdfsDirectoryFactory
 This command starts Solr in SolrCloud mode, using the defined JVM properties.
 
 
-[[RunningSolronHDFS-Modifyingsolr.in.sh_nix_orsolr.in.cmd_Windows_]]
 === Modifying solr.in.sh (*nix) or solr.in.cmd (Windows)
 
 The examples above assume you will pass JVM arguments as part of the start command every time you use `bin/solr` to start Solr. However, `bin/solr` looks for an include file named `solr.in.sh` (`solr.in.cmd` on Windows) to set environment variables. By default, this file is found in the `bin` directory, and you can modify it to permanently add the `HdfsDirectoryFactory` settings and ensure they are used every time Solr is started.
@@ -85,7 +81,6 @@ For example, to set JVM arguments to always use HDFS when running in SolrCloud m
 -Dsolr.hdfs.home=hdfs://host:port/path \
 ----
 
-[[RunningSolronHDFS-TheBlockCache]]
 == The Block Cache
 
 For performance, the HdfsDirectoryFactory uses a Directory implementation that caches HDFS blocks. This caching mechanism replaces the standard file system cache that Solr otherwise relies on heavily. By default, this cache is allocated off heap. It will often need to be quite large, and you may need to raise the off-heap memory limit for the JVM in which you are running Solr. For the Oracle/OpenJDK JVMs, the following is an example command-line parameter that you can use to raise the limit when starting Solr:
@@ -95,18 +90,15 @@ For performance, the HdfsDirectoryFactory uses a Directory that will cache HDFS
 -XX:MaxDirectMemorySize=20g
 ----
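 
 If you manage JVM options through the include file described above, one way to apply this permanently (a sketch; size the limit to your block cache configuration) is:
 
 [source,bash]
 ----
 # in solr.in.sh: raise the off-heap limit used by the HDFS block cache
 SOLR_OPTS="$SOLR_OPTS -XX:MaxDirectMemorySize=20g"
 ----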
 
-[[RunningSolronHDFS-HdfsDirectoryFactoryParameters]]
 == HdfsDirectoryFactory Parameters
 
 The `HdfsDirectoryFactory` has a number of settings that are defined as part of the `directoryFactory` configuration.
 
-[[RunningSolronHDFS-SolrHDFSSettings]]
 === Solr HDFS Settings
 
 `solr.hdfs.home`::
 A root location in HDFS for Solr to write collection data to. Rather than specifying an HDFS location for the data directory or update log directory, use this to specify one root location and have everything automatically created within this HDFS location. The structure of this parameter is `hdfs://host:port/path/solr`.
 
-[[RunningSolronHDFS-BlockCacheSettings]]
 === Block Cache Settings
 
 `solr.hdfs.blockcache.enabled`::
@@ -124,7 +116,6 @@ Number of memory slabs to allocate. Each slab is 128 MB in size. The default is
 `solr.hdfs.blockcache.global`::
 Enable/Disable using one global cache for all SolrCores. The settings used will be from the first HdfsDirectoryFactory created. The default is `true`.
 
-[[RunningSolronHDFS-NRTCachingDirectorySettings]]
 === NRTCachingDirectory Settings
 
 `solr.hdfs.nrtcachingdirectory.enable`::
 Enable the use of NRTCachingDirectory. The default is `true`.
 
 `solr.hdfs.nrtcachingdirectory.maxmergesizemb`::
 NRTCachingDirectory max segment size for merges. The default is `16`.
 `solr.hdfs.nrtcachingdirectory.maxcachedmb`::
 NRTCachingDirectory max cache size. The default is `192`.
 
-[[RunningSolronHDFS-HDFSClientConfigurationSettings]]
 === HDFS Client Configuration Settings
 
 `solr.hdfs.confdir`::
 Pass the location of HDFS client configuration files; this is needed, for example, for HDFS HA.
 
-[[RunningSolronHDFS-KerberosAuthenticationSettings]]
 === Kerberos Authentication Settings
 
 Hadoop can be configured to use the Kerberos protocol to verify user identity when trying to access core services like HDFS. If your HDFS directories are protected using Kerberos, then you need to configure Solr's HdfsDirectoryFactory to authenticate using Kerberos in order to read and write to HDFS. To enable Kerberos authentication from Solr, you need to set the following parameters:
@@ -157,8 +146,7 @@ This file will need to be present on all Solr servers at the same path provided
 `solr.hdfs.security.kerberos.principal`::
 The Kerberos principal that Solr should use to authenticate to secure Hadoop; the format of a typical Kerberos V5 principal is: `primary/instance@realm`.
 
-[[RunningSolronHDFS-Example]]
-== Example
+== Example solrconfig.xml for HDFS
 
 Here is a sample `solrconfig.xml` configuration for storing Solr indexes on HDFS:
 
@@ -189,7 +177,6 @@ If using Kerberos, you will need to add the three Kerberos related properties to
 </directoryFactory>
 ----
 
-[[RunningSolronHDFS-AutomaticallyAddReplicasinSolrCloud]]
 == Automatically Add Replicas in SolrCloud
 
 One benefit to running Solr in HDFS is the ability to automatically add new replicas when the Overseer notices that a shard has gone down. Because the "gone" index shards are stored in HDFS, a new core will be created and will point to the existing indexes in HDFS.
@@ -205,7 +192,6 @@ The minimum time (in ms) to wait for initiating replacement of a replica after f
 `autoReplicaFailoverBadNodeExpiration`::
 The delay (in ms) after which a replica marked as down would be unmarked. The default is `60000`.
 
-[[RunningSolronHDFS-TemporarilydisableautoAddReplicasfortheentirecluster]]
 === Temporarily Disable autoAddReplicas for the Entire Cluster
 
 When doing offline maintenance on the cluster and for various other use cases where an admin would like to temporarily disable auto addition of replicas, the following APIs will disable and re-enable autoAddReplicas for *all collections in the cluster*:
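 
 As a sketch, these are Collections API CLUSTERPROP calls (host and port are illustrative):
 
 [source,bash]
 ----
 # temporarily disable automatic replica addition cluster-wide
 curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=autoAddReplicas&val=false'
 # re-enable it after maintenance
 curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=autoAddReplicas&val=true'
 ----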

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/running-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr.adoc b/solr/solr-ref-guide/src/running-solr.adoc
index ecc4112..f18183e 100644
--- a/solr/solr-ref-guide/src/running-solr.adoc
+++ b/solr/solr-ref-guide/src/running-solr.adoc
@@ -114,7 +114,7 @@ Solr also provides a number of useful examples to help you learn about key featu
 bin/solr -e techproducts
 ----
 
-Currently, the available examples you can run are: techproducts, dih, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> for details on each example.
+Currently, the available examples you can run are: techproducts, dih, schemaless, and cloud. See the section <<solr-control-script-reference.adoc#running-with-example-configurations,Running with Example Configurations>> for details on each example.
 
 .Getting Started with SolrCloud
 [NOTE]
@@ -171,7 +171,7 @@ You may want to add a few sample documents before trying to index your own conte
 
 In the `bin/` directory is the post script, a command line tool which can be used to index different types of documents. Do not worry too much about the details for now. The <<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>> section has all the details on indexing.
 
-To see some information about the usage of `bin/post`, use the `-help` option. Windows users, see the section for <<post-tool.adoc#PostTool-WindowsSupport,Post Tool on Windows>>.
+To see some information about the usage of `bin/post`, use the `-help` option. Windows users, see the section for <<post-tool.adoc#post-tool-windows-support,Post Tool on Windows>>.
 
 `bin/post` can post various types of content to Solr, including files in Solr's native XML and JSON formats, CSV files, a directory tree of rich documents, or even a simple short web crawl. See the examples at the end of `bin/post -help` for various commands to easily get started posting your content into Solr.
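 
 For example, assuming a collection named 'gettingstarted' exists, a couple of typical invocations look like this sketch:
 
 [source,bash]
 ----
 # index all of the sample XML documents shipped with Solr
 bin/post -c gettingstarted example/exampledocs/*.xml
 # index a single JSON file
 bin/post -c gettingstarted example/exampledocs/books.json
 ----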
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/schema-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 893936f..c120e0a 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -52,7 +52,7 @@ The base address for the API is `\http://<host>:<port>/solr/<collection_name>`.
 bin/solr -e cloud -noprompt
 ----
 
-== API Entry Points
+== Schema API Entry Points
 
 * `/schema`: <<Retrieve the Entire Schema,retrieve>> the schema, or <<Modify the Schema,modify>> the schema to add, remove, or replace fields, dynamic fields, copy fields, or field types
 * `/schema/fields`: <<List Fields,retrieve information>> about all defined fields or a specific named field
@@ -408,21 +408,19 @@ The query parameters should be added to the API request after '?'.
 `wt`::
 Defines the format of the response. The options are *json*, *xml*, or *schema.xml*. If not specified, JSON will be returned by default.
 
-[[SchemaAPI-OUTPUT]]
 ==== Retrieve Schema Response
 
 *Output Content*
 
 The output will include all fields, field types, dynamic rules and copy field rules, in the format requested (JSON or XML). The schema name and version are also included.
 
-[[SchemaAPI-EXAMPLES]]
 ==== Retrieve Schema Examples
 
 Get the entire schema in JSON.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema
 ----
 
 [source,json]
@@ -611,7 +609,7 @@ Get a list of all fields.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/fields?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/fields
 ----
 
 The sample output below has been truncated to only show a few fields.
@@ -684,7 +682,7 @@ Get a list of all dynamic field declarations:
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/dynamicfields?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/dynamicfields
 ----
 
 The sample output below has been truncated.
@@ -767,7 +765,7 @@ Get a list of all field types.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/fieldtypes?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/fieldtypes
 ----
 
 The sample output below has been truncated to show a few different field types from different parts of the list.
@@ -855,7 +853,7 @@ Get a list of all copyFields.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/copyfields?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/copyfields
 ----
 
 The sample output below has been truncated to the first few copy definitions.
@@ -916,7 +914,7 @@ Get the schema name.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/name?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/name
 ----
 
 [source,json]
@@ -956,7 +954,7 @@ Get the schema version
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/version?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/version
 ----
 
 [source,json]
@@ -997,7 +995,7 @@ List the uniqueKey.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/uniquekey?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/uniquekey
 ----
 
 [source,json]
@@ -1037,7 +1035,7 @@ Get the similarity implementation.
 
 [source,bash]
 ----
-curl http://localhost:8983/solr/gettingstarted/schema/similarity?wt=json
+curl http://localhost:8983/solr/gettingstarted/schema/similarity
 ----
 
 [source,json]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
index 9d0e60d..4f26591 100644
--- a/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/schema-factory-definition-in-solrconfig.adoc
@@ -31,7 +31,6 @@ Schemaless mode requires enabling the Managed Schema if it is not already, but f
 
 While the "read" features of the Schema API are supported for all schema types, support for making schema modifications programmatically depends on the `<schemaFactory/>` in use.
 
-[[SchemaFactoryDefinitioninSolrConfig-SolrUsesManagedSchemabyDefault]]
 == Solr Uses Managed Schema by Default
 
 When a `<schemaFactory/>` is not explicitly declared in a `solrconfig.xml` file, Solr implicitly uses a `ManagedIndexSchemaFactory`, which is by default `"mutable"` and keeps schema information in a `managed-schema` file.
@@ -54,7 +53,6 @@ If you wish to explicitly configure `ManagedIndexSchemaFactory` the following op
 
 With the default configuration shown above, you can use the <<schema-api.adoc#schema-api,Schema API>> to modify the schema as much as you want, and then later change the value of `mutable` to *false* if you wish to "lock" the schema in place and prevent future changes.
 
-[[SchemaFactoryDefinitioninSolrConfig-Classicschema.xml]]
 == Classic schema.xml
 
 An alternative to using a managed schema is to explicitly configure a `ClassicIndexSchemaFactory`. `ClassicIndexSchemaFactory` requires the use of a `schema.xml` configuration file, and disallows any programmatic changes to the Schema at run time. The `schema.xml` file must be edited manually, and is only loaded when the collection is loaded.
@@ -64,7 +62,6 @@ An alternative to using a managed schema is to explicitly configure a `ClassicIn
   <schemaFactory class="ClassicIndexSchemaFactory"/>
 ----
 
-[[SchemaFactoryDefinitioninSolrConfig-Switchingfromschema.xmltoManagedSchema]]
 === Switching from schema.xml to Managed Schema
 
 If you have an existing Solr collection that uses `ClassicIndexSchemaFactory`, and you wish to convert to use a managed schema, you can simply modify the `solrconfig.xml` to specify the use of the `ManagedIndexSchemaFactory`.
@@ -78,7 +75,6 @@ Once Solr is restarted and it detects that a `schema.xml` file exists, but the `
 
 You are now free to use the <<schema-api.adoc#schema-api,Schema API>> as much as you want to make changes, and remove the `schema.xml.bak`.
 
-[[SchemaFactoryDefinitioninSolrConfig-SwitchingfromManagedSchematoManuallyEditedschema.xml]]
 === Switching from Managed Schema to Manually Edited schema.xml
 
 If you have started Solr with managed schema enabled and you would like to switch to manually editing a `schema.xml` file, you should take the following steps:
@@ -89,7 +85,7 @@ If you have started Solr with managed schema enabled and you would like to switc
 .. Add a `ClassicIndexSchemaFactory` definition as shown above
 . Reload the core(s).
 
-If you are using SolrCloud, you may need to modify the files via ZooKeeper. The `bin/solr` script provides an easy way to download the files from ZooKeeper and upload them back after edits. See the section <<solr-control-script-reference.adoc#SolrControlScriptReference-ZooKeeperOperations,ZooKeeper Operations>> for more information.
+If you are using SolrCloud, you may need to modify the files via ZooKeeper. The `bin/solr` script provides an easy way to download the files from ZooKeeper and upload them back after edits. See the section <<solr-control-script-reference.adoc#zookeeper-operations,ZooKeeper Operations>> for more information.
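 
 As a sketch, downloading the file for editing and uploading it back might look like this (the configset name and ZooKeeper address are illustrative):
 
 [source,bash]
 ----
 # download the managed schema from ZooKeeper for manual editing
 bin/solr zk cp zk:/configs/myconfig/managed-schema file:schema.xml -z localhost:9983
 # after editing, upload it back as schema.xml
 bin/solr zk cp file:schema.xml zk:/configs/myconfig/schema.xml -z localhost:9983
 ----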
 
 [TIP]
 ====