Posted to commits@lucene.apache.org by ct...@apache.org on 2017/07/13 01:01:37 UTC

[03/10] lucene-solr:master: SOLR-11050: remove Confluence-style anchors and fix all incoming links

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/other-parsers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/other-parsers.adoc b/solr/solr-ref-guide/src/other-parsers.adoc
index 271c33b..db48419 100644
--- a/solr/solr-ref-guide/src/other-parsers.adoc
+++ b/solr/solr-ref-guide/src/other-parsers.adoc
@@ -24,7 +24,6 @@ This section details the other parsers, and gives examples for how they might be
 
 Many of these parsers are expressed the same way as <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters in Queries>>.
 
-[[OtherParsers-BlockJoinQueryParsers]]
 == Block Join Query Parsers
 
 There are two query parsers that support block joins. These parsers allow indexing and searching for relational content that has been <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,indexed as nested documents>>.
@@ -55,7 +54,6 @@ The example usage of the query parsers below assumes these two documents and eac
 </add>
 ----
 
-[[OtherParsers-BlockJoinChildrenQueryParser]]
 === Block Join Children Query Parser
 
 This parser takes a query that matches some parent documents and returns their children.
@@ -80,16 +78,16 @@ Using the example documents above, we can construct a query such as `q={!child o
 
 Note that the query for `someParents` should match only parent documents passed by `allParents` or you may get an exception:
 
-....
+[literal]
 Parent query must not match any docs besides parent filter. Combine them as must (+) and must-not (-) clauses to find a problem doc.
-....
+
In older versions the error is:
-....
+
+[literal]
 Parent query yields document which is not matched by parents filter.
-....
+
 You can search for `q=+(someParents) -(allParents)` to find a cause.
 
-[[OtherParsers-BlockJoinParentQueryParser]]
 === Block Join Parent Query Parser
 
 This parser takes a query that matches child documents and returns their parents.
@@ -101,13 +99,15 @@ The parameter `allParents` is a filter that matches *only parent documents*; her
 The parameter `someChildren` is a query that matches some or all of the child documents.
 
 Note that the query for `someChildren` should match only child documents or you may get an exception:
-....
+
+[literal]
 Child query must not match same docs with parent filter. Combine them as must clauses (+) to find a problem doc.
-....
-In older version it's:
-....
+
+In older versions the error is:
+
+[literal]
 child query must only match non-parent docs.
-....
+
 You can search for `q=+(parentFilter) +(someChildren)` to find a cause.
 
 Again using the example documents above, we can construct a query such as `q={!parent which="content_type:parentDocument"}comments:SolrCloud`. We get this document in response:
@@ -133,20 +133,17 @@ A common mistake is to try to filter parents with a `which` filter, as in this b
 Instead, you should use a sibling mandatory clause as a filter:
 
 `q= *+title:join* +{!parent which="*content_type:parentDocument*"}comments:SolrCloud`
-
 ====
 
-[[OtherParsers-Scoring]]
-=== Scoring
+=== Scoring with the Block Join Parent Query Parser
 
 You can optionally use the `score` local parameter to return scores of the subordinate query. The values to use for this parameter define the type of aggregation: `avg` (average), `max` (maximum), `min` (minimum), or `total` (sum). The implicit default is `none`, which returns `0.0`.
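 
 For illustration, a sketch of a scored parent query (reusing the example documents above) might look like:
 
 [source,text]
 ----
 q={!parent which="content_type:parentDocument" score=max}comments:SolrCloud
 ----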
 
-[[OtherParsers-BoostQueryParser]]
 == Boost Query Parser
 
 `BoostQParser` extends the `QParserPlugin` and creates a boosted query from the input value. The main value is the query to be boosted. Parameter `b` is the function query to use as the boost. The query to be boosted may be of any type.
 
-Examples:
+=== Boost Query Parser Examples
 
 Creates a query "foo" which is boosted (scores are multiplied) by the function query `log(popularity)`:
 
@@ -162,7 +159,7 @@ Creates a query "foo" which is boosted by the date boosting function referenced
 {!boost b=recip(ms(NOW,mydatefield),3.16e-11,1,1)}foo
 ----
 
-[[OtherParsers-CollapsingQueryParser]]
+[[other-collapsing]]
 == Collapsing Query Parser
 
 The `CollapsingQParser` is really a _post filter_ that provides more performant field collapsing than Solr's standard approach when the number of distinct groups in the result set is high.
@@ -171,7 +168,6 @@ This parser collapses the result set to a single document per group before it fo
 
 Details about using the `CollapsingQParser` can be found in the section <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand Results>>.
 
-[[OtherParsers-ComplexPhraseQueryParser]]
 == Complex Phrase Query Parser
 
 The `ComplexPhraseQParser` provides support for wildcards, ORs, etc., inside phrase queries using Lucene's {lucene-javadocs}/queryparser/org/apache/lucene/queryparser/complexPhrase/ComplexPhraseQueryParser.html[`ComplexPhraseQueryParser`].
@@ -204,15 +200,13 @@ A mix of ordered and unordered complex phrase queries:
 +_query_:"{!complexphrase inOrder=true}manu:\"a* c*\"" +_query_:"{!complexphrase inOrder=false df=name}\"bla* pla*\""
 ----
 
-[[OtherParsers-Limitations]]
-=== Limitations
+=== Complex Phrase Parser Limitations
 
 Performance is sensitive to the number of unique terms that are associated with a pattern. For instance, searching for "a*" will form a large OR clause (technically a SpanOr with many terms) for all of the terms in your index for the indicated field that start with the single letter 'a'. It may be prudent to restrict wildcards to at least two or preferably three letters as a prefix. Allowing very short prefixes may result in too many low-quality documents being returned.
 
 Notice that it also supports leading wildcards such as "*a", with consequent performance implications. Applying <<filter-descriptions.adoc#reversed-wildcard-filter,ReversedWildcardFilterFactory>> in index-time analysis is usually a good idea.
 
-[[OtherParsers-MaxBooleanClauses]]
-==== MaxBooleanClauses
+==== MaxBooleanClauses with Complex Phrase Parser
 
 You may need to increase MaxBooleanClauses in `solrconfig.xml` as a result of the term expansion above:
 
@@ -221,10 +215,9 @@ You may need to increase MaxBooleanClauses in `solrconfig.xml` as a result of th
 <maxBooleanClauses>4096</maxBooleanClauses>
 ----
 
-This property is described in more detail in the section <<query-settings-in-solrconfig.adoc#QuerySettingsinSolrConfig-QuerySizingandWarming,Query Sizing and Warming>>.
+This property is described in more detail in the section <<query-settings-in-solrconfig.adoc#query-sizing-and-warming,Query Sizing and Warming>>.
 
-[[OtherParsers-Stopwords]]
-==== Stopwords
+==== Stopwords with Complex Phrase Parser
 
 It is recommended not to use stopword elimination with this query parser.
 
@@ -246,12 +239,10 @@ the document is returned. The next query that _does_ use the Complex Phrase Quer
 
 does _not_ return that document because SpanNearQuery has no good way to handle stopwords in a way analogous to PhraseQuery. If you must remove stopwords for your use case, use a custom filter factory or perhaps a customized synonyms filter that reduces given stopwords to some impossible token.
 
-[[OtherParsers-Escaping]]
-==== Escaping
+==== Escaping with Complex Phrase Parser
 
 Special care has to be given when escaping: clauses between double quotes (usually the whole query) are parsed twice, so these parts have to be escaped twice, e.g., `"foo\\: bar\\^"`.
 
-[[OtherParsers-FieldQueryParser]]
 == Field Query Parser
 
 The `FieldQParser` extends the `QParserPlugin` and creates a field query from the input value, applying text analysis and constructing a phrase query if appropriate. The parameter `f` is the field to be queried.
@@ -265,7 +256,6 @@ Example:
 
 This example creates a phrase query with "foo" followed by "bar" (assuming `myfield` is a text field whose analyzer splits on whitespace and lowercases terms). This is generally equivalent to the Lucene query parser expression `myfield:"Foo Bar"`.
 
-[[OtherParsers-FunctionQueryParser]]
 == Function Query Parser
 
 The `FunctionQParser` extends the `QParserPlugin` and creates a function query from the input value. This is only one way to use function queries in Solr; for another, more integrated, approach, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
@@ -277,7 +267,6 @@ Example:
 {!func}log(foo)
 ----
 
-[[OtherParsers-FunctionRangeQueryParser]]
 == Function Range Query Parser
 
 The `FunctionRangeQParser` extends the `QParserPlugin` and creates a range query over a function. This is also referred to as `frange`, as seen in the examples below.
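 
 As a minimal sketch (the numeric fields `user_ranking` and `editor_ranking` are hypothetical), a filter restricting results to function values between 0 and 2.2 could be written as:
 
 [source,text]
 ----
 fq={!frange l=0 u=2.2}sum(user_ranking,editor_ranking)
 ----
 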
@@ -312,15 +301,13 @@ Both of these examples restrict the results by a range of values found in a decl
 
 For more information about range queries over functions, see Yonik Seeley's introductory blog post https://lucidworks.com/2009/07/06/ranges-over-functions-in-solr-14/[Ranges over Functions in Solr 1.4].
 
-[[OtherParsers-GraphQueryParser]]
 == Graph Query Parser
 
 The `graph` query parser does a breadth-first, cyclic-aware graph traversal of all documents that are "reachable" from a starting set of root documents identified by a wrapped query.
 
 The graph is built according to linkages between documents based on the terms found in `from` and `to` fields that you specify as part of the query.
 
-[[OtherParsers-Parameters]]
-=== Parameters
+=== Graph Query Parameters
 
 `to`::
 The field name of matching documents to inspect to identify outgoing edges for graph traversal. Defaults to `edge_ids`.
@@ -342,17 +329,15 @@ Boolean that indicates if the results of the query should be filtered so that on
 
 `useAutn`:: Boolean that indicates if Automatons should be compiled for each iteration of the breadth-first search, which may be faster for some graphs. Defaults to `false`.
 
-[[OtherParsers-Limitations.1]]
-=== Limitations
+=== Graph Query Limitations
 
 The `graph` parser only works in single node Solr installations, or with <<solrcloud.adoc#solrcloud,SolrCloud>> collections that use exactly 1 shard.
 
-[[OtherParsers-Examples]]
-=== Examples
+=== Graph Query Examples
 
 To understand how the graph parser works, consider the following Directed Cyclic Graph, containing 8 nodes (A to H) and 9 edges (1 to 9):
 
-image::images/other-parsers/graph_qparser_example.png[image,height=200]
+image::images/other-parsers/graph_qparser_example.png[image,height=100]
 
 One way to model this graph as Solr documents would be to create one document per node, with multivalued fields identifying the incoming and outgoing edges for each node:
 
@@ -426,7 +411,6 @@ http://localhost:8983/solr/my_graph/query?fl=id&q={!graph+from=in_edge+to=out_ed
 }
 ----
 
-[[OtherParsers-SimplifiedModels]]
 === Simplified Models
 
 The Document & Field modeling used in the above examples enumerated all of the outgoing and incoming edges for each node explicitly, to help demonstrate exactly how the "from" and "to" params work, and to give you an idea of what is possible. With multiple sets of fields like these for identifying incoming and outgoing edges, it's possible to model many independent Directed Graphs that contain some or all of the documents in your collection.
@@ -469,7 +453,6 @@ http://localhost:8983/solr/alt_graph/query?fl=id&q={!graph+from=id+to=out_edge+m
 }
 ----
 
-[[OtherParsers-JoinQueryParser]]
 == Join Query Parser
 
 `JoinQParser` extends the `QParserPlugin`. It allows normalizing relationships between documents with a join operation. This is different from the concept of a join in a relational database because no information is being truly joined. An appropriate SQL analogy would be an "inner query".
@@ -493,8 +476,7 @@ fq = price:[* TO 12]
 
 The join operation is done on a term basis, so the "from" and "to" fields must use compatible field types. For example, joining between a `StrField` and a `TrieIntField` will not work; likewise, joining between a `StrField` and a `TextField` that uses `LowerCaseFilterFactory` will only work for values that are already lowercased in the string field.
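 
 As a sketch of the local-parameter syntax (the field names here are hypothetical), a join that maps values of a `manu_id` field onto the `id` field might look like:
 
 [source,text]
 ----
 fq={!join from=manu_id to=id}title:ipod
 ----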
 
-[[OtherParsers-Scoring.1]]
-=== Scoring
+=== Join Parser Scoring
 
 You can optionally use the `score` parameter to return scores of the subordinate query. The values to use for this parameter define the type of aggregation: `avg` (average), `max` (maximum), `min` (minimum), `total`, or `none`.
 
@@ -504,7 +486,6 @@ You can optionally use the `score` parameter to return scores of the subordinate
 Specifying the `score` local parameter switches the join algorithm. This might have performance implications on large indices but, more importantly, this algorithm will not work for single-valued numeric fields starting from 7.0. Users are encouraged to change field types to string and rebuild indexes during migration.
 ====
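 
 For illustration, a scored join using hypothetical string-typed fields could be expressed as:
 
 [source,text]
 ----
 q={!join from=manu_id_s to=id score=max}title:ipod
 ----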
 
-[[OtherParsers-JoiningAcrossCollections]]
 === Joining Across Collections
 
 You can also specify a `fromIndex` parameter to join with a field from another core or collection. If running in SolrCloud mode, then the collection specified in the `fromIndex` parameter must have a single shard and a replica on all Solr nodes where the collection you're joining to has a replica.
@@ -548,7 +529,6 @@ At query time, the `JoinQParser` will access the local replica of the *movie_dir
 
 For more information about join queries, see the Solr Wiki page on http://wiki.apache.org/solr/Join[Joins]. Erick Erickson has also written a blog post about join performance titled https://lucidworks.com/2012/06/20/solr-and-joins/[Solr and Joins].
 
-[[OtherParsers-LuceneQueryParser]]
 == Lucene Query Parser
 
 The `LuceneQParser` extends the `QParserPlugin` by parsing Solr's variant on the Lucene QueryParser syntax. This is effectively the same query parser that is used in Lucene. It uses the parameters `q.op`, the default operator ("OR" or "AND"), and `df`, the default field name.
@@ -562,7 +542,6 @@ Example:
 
 For more information about the syntax for the Lucene Query Parser, see the {lucene-javadocs}/queryparser/org/apache/lucene/queryparser/classic/package-summary.html[Classic QueryParser javadocs].
 
-[[OtherParsers-LearningToRankQueryParser]]
 == Learning To Rank Query Parser
 
 The `LTRQParserPlugin` is a special-purpose parser for reranking the top results of a simple query using a more complex ranking query based on a machine-learnt model.
@@ -576,7 +555,6 @@ Example:
 
 Details about using the `LTRQParserPlugin` can be found in the <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>> section.
 
-[[OtherParsers-MaxScoreQueryParser]]
 == Max Score Query Parser
 
 The `MaxScoreQParser` extends the `LuceneQParser` but returns the maximum score from the clauses. It does this by wrapping all `SHOULD` clauses in a `DisjunctionMaxQuery` with tie=1.0. Any `MUST` or `PROHIBITED` clauses are passed through as-is. Non-boolean queries, e.g., NumericRange, fall through to the `LuceneQParser` parser behavior.
@@ -588,7 +566,6 @@ Example:
 {!maxscore tie=0.01}C OR (D AND E)
 ----
 
-[[OtherParsers-MoreLikeThisQueryParser]]
 == More Like This Query Parser
 
 `MLTQParser` enables retrieving documents that are similar to a given document. It uses Lucene's existing `MoreLikeThis` logic and also works in SolrCloud mode. The document identifier used here is the unique id value and not the Lucene internal document id. The list of returned documents excludes the queried document.
@@ -638,7 +615,6 @@ Adding more constraints to what qualifies as similar using mintf and mindf.
 {!mlt qf=name mintf=2 mindf=3}1
 ----
 
-[[OtherParsers-NestedQueryParser]]
 == Nested Query Parser
 
 The `NestedParser` extends the `QParserPlugin` and creates a nested query, with the ability for that query to redefine its type via local parameters. This is useful in specifying defaults in configuration and letting clients indirectly reference them.
@@ -662,7 +638,6 @@ If the `q1` parameter is price, then the query would be a function query on the
 For more information about the possibilities of nested queries, see Yonik Seeley's blog post https://lucidworks.com/2009/03/31/nested-queries-in-solr/[Nested Queries in Solr].
 
 
-[[OtherParsers-PayloadQueryParsers]]
 == Payload Query Parsers
 
 These query parsers utilize payloads encoded on terms during indexing.
@@ -672,7 +647,6 @@ The main query, for both of these parsers, is parsed straightforwardly from the
 * `PayloadScoreQParser`
 * `PayloadCheckQParser`
 
-[[OtherParsers-PayloadScoreParser]]
 === Payload Score Parser
 
 `PayloadScoreQParser` incorporates each matching term's numeric (integer or float) payloads into the scores.
@@ -695,7 +669,6 @@ If `true`, multiples computed payload factor by the score of the original query.
 {!payload_score f=my_field_dpf v=some_term func=max}
 ----
 
-[[OtherParsers-PayloadCheckParser]]
 === Payload Check Parser
 
 `PayloadCheckQParser` only matches when the matching terms also have the specified payloads.
@@ -719,7 +692,6 @@ Each specified payload will be encoded using the encoder determined from the fie
 {!payload_check f=words_dps payloads="VERB NOUN"}searching stuff
 ----
 
-[[OtherParsers-PrefixQueryParser]]
 == Prefix Query Parser
 
 `PrefixQParser` extends the `QParserPlugin` by creating a prefix query from the input value. Currently no analysis or value transformation is done to create this prefix query.
@@ -735,7 +707,6 @@ Example:
 
 This would be generally equivalent to the Lucene query parser expression `myfield:foo*`.
 
-[[OtherParsers-RawQueryParser]]
 == Raw Query Parser
 
 `RawQParser` extends the `QParserPlugin` by creating a term query from the input value without any text analysis or transformation. This is useful in debugging, or when raw terms are returned from the terms component (this is not the default).
@@ -751,18 +722,16 @@ Example:
 
 This example constructs the query: `TermQuery(Term("myfield","Foo Bar"))`.
 
-For easy filter construction to drill down in faceting, the <<OtherParsers-TermQueryParser,TermQParserPlugin>> is recommended.
+For easy filter construction to drill down in faceting, the <<Term Query Parser,TermQParserPlugin>> is recommended.
 
-For full analysis on all fields, including text fields, you may want to use the <<OtherParsers-FieldQueryParser,FieldQParserPlugin>>.
+For full analysis on all fields, including text fields, you may want to use the <<Field Query Parser,FieldQParserPlugin>>.
 
-[[OtherParsers-Re-RankingQueryParser]]
 == Re-Ranking Query Parser
 
 The `ReRankQParserPlugin` is a special-purpose parser for re-ranking the top results of a simple query using a more complex ranking query.
 
 Details about using the `ReRankQParserPlugin` can be found in the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> section.
 
-[[OtherParsers-SimpleQueryParser]]
 == Simple Query Parser
 
 The Simple query parser in Solr is based on Lucene's SimpleQueryParser. This query parser is designed to allow users to enter queries however they want, and it will do its best to interpret the query and return results.
@@ -811,14 +780,12 @@ Defines the default field if none is defined in the Schema, or overrides the def
 
 Any errors in syntax are ignored and the query parser will interpret queries as best it can. However, this can lead to odd results in some cases.
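 
 A minimal sketch of invoking this parser (the `title` field is hypothetical):
 
 [source,text]
 ----
 q={!simple df=title}foo +bar -baz
 ----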
 
-[[OtherParsers-SpatialQueryParsers]]
 == Spatial Query Parsers
 
 There are two spatial QParsers in Solr: `geofilt` and `bbox`. But there are other ways to query spatially: using the `frange` parser with a distance function, using the standard (lucene) query parser with the range syntax to pick the corners of a rectangle, or with RPT and BBoxField you can use the standard query parser but use a special syntax within quotes that allows you to pick the spatial predicate.
 
 All these options are documented further in the section <<spatial-search.adoc#spatial-search,Spatial Search>>.
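 
 As a quick sketch, a `geofilt` filter (assuming a spatial field named `store`) might look like:
 
 [source,text]
 ----
 fq={!geofilt pt=45.15,-93.85 sfield=store d=5}
 ----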
 
-[[OtherParsers-SurroundQueryParser]]
 == Surround Query Parser
 
 The `SurroundQParser` enables the Surround query syntax, which provides proximity search functionality. There are two positional operators: `w` creates an ordered span query and `n` creates an unordered one. Both operators take a numeric value to indicate distance between two terms. The default is 1, and the maximum is 99.
@@ -838,7 +805,6 @@ This query parser will also accept boolean operators (`AND`, `OR`, and `NOT`, in
 
 The non-unary operators (everything but `NOT`) support both infix `(a AND b AND c)` and prefix `AND(a, b, c)` notation.
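 
 For illustration, an ordered proximity query finding "apache" within 4 positions before "solr" might be written as:
 
 [source,text]
 ----
 q={!surround}4w(apache, solr)
 ----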
 
-[[OtherParsers-SwitchQueryParser]]
 == Switch Query Parser
 
 `SwitchQParser` is a `QParserPlugin` that acts like a "switch" or "case" statement.
@@ -895,7 +861,6 @@ Using the example configuration below, clients can optionally specify the custom
 </requestHandler>
 ----
 
-[[OtherParsers-TermQueryParser]]
 == Term Query Parser
 
 `TermQParser` extends the `QParserPlugin` by creating a single term query from the input value equivalent to `readableToIndexed()`. This is useful for generating filter queries from the external human readable terms returned by the faceting or terms components. The only parameter is `f`, for the field.
@@ -907,14 +872,13 @@ Example:
 {!term f=weight}1.5
 ----
 
-For text fields, no analysis is done since raw terms are already returned from the faceting and terms components. To apply analysis to text fields as well, see the <<OtherParsers-FieldQueryParser,Field Query Parser>>, above.
+For text fields, no analysis is done since raw terms are already returned from the faceting and terms components. To apply analysis to text fields as well, see the <<Field Query Parser>>, above.
 
-If no analysis or transformation is desired for any type of field, see the <<OtherParsers-RawQueryParser,Raw Query Parser>>, above.
+If no analysis or transformation is desired for any type of field, see the <<Raw Query Parser>>, above.
 
-[[OtherParsers-TermsQueryParser]]
 == Terms Query Parser
 
-`TermsQParser` functions similarly to the <<OtherParsers-TermQueryParser,Term Query Parser>> but takes in multiple values separated by commas and returns documents matching any of the specified values.
+`TermsQParser` functions similarly to the <<Term Query Parser>> but takes in multiple values separated by commas and returns documents matching any of the specified values.
 
 This can be useful for generating filter queries from the external human-readable terms returned by the faceting or terms components, and may be more efficient in some cases than using the <<the-standard-query-parser.adoc#the-standard-query-parser,Standard Query Parser>> to generate a boolean query, since the default `method` implementation avoids scoring.
 
@@ -929,7 +893,6 @@ Separator to use when parsing the input. If set to " " (a single blank space), w
 `method`::
 The internal query-building implementation: `termsFilter`, `booleanQuery`, `automaton`, or `docValuesTermsFilter`. Defaults to `termsFilter`.
 
-
 *Examples*
 
 [source,text]
@@ -942,7 +905,6 @@ The internal query-building implementation: `termsFilter`, `booleanQuery`, `auto
 {!terms f=categoryId method=booleanQuery separator=" "}8 6 7 5309
 ----
 
-[[OtherParsers-XMLQueryParser]]
 == XML Query Parser
 
 The {solr-javadocs}/solr-core/org/apache/solr/search/XmlQParserPlugin.html[XmlQParserPlugin] extends the {solr-javadocs}/solr-core/org/apache/solr/search/QParserPlugin.html[QParserPlugin] and supports the creation of queries from XML. Example:
@@ -1002,7 +964,6 @@ The XmlQParser implementation uses the {solr-javadocs}/solr-core/org/apache/solr
 |<LegacyNumericRangeQuery> |LegacyNumericRangeQuery(Builder) is deprecated
 |===
 
-[[OtherParsers-CustomizingXMLQueryParser]]
 === Customizing XML Query Parser
 
 You can configure your own custom query builders for additional XML elements. The custom builders need to extend the {solr-javadocs}/solr-core/org/apache/solr/search/SolrQueryBuilder.html[SolrQueryBuilder] or the {solr-javadocs}/solr-core/org/apache/solr/search/SolrSpanQueryBuilder.html[SolrSpanQueryBuilder] class. Example solrconfig.xml snippet:
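 
 A hypothetical sketch (the plugin and builder class names are placeholders):
 
 [source,xml]
 ----
 <queryParser name="xmlparser" class="com.mycompany.MyCustomXmlQueryParserPlugin">
   <str name="MyCustomQuery">com.mycompany.MyCustomQueryBuilder</str>
 </queryParser>
 ----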

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/other-schema-elements.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/other-schema-elements.adoc b/solr/solr-ref-guide/src/other-schema-elements.adoc
index 029cd64..cd39401 100644
--- a/solr/solr-ref-guide/src/other-schema-elements.adoc
+++ b/solr/solr-ref-guide/src/other-schema-elements.adoc
@@ -20,7 +20,6 @@
 
 This section describes several other important elements of `schema.xml` not covered in earlier sections.
 
-[[OtherSchemaElements-UniqueKey]]
 == Unique Key
 
 The `uniqueKey` element specifies which field is a unique identifier for documents. Although `uniqueKey` is not required, it is nearly always warranted by your application design. For example, `uniqueKey` should be used if you will ever update a document in the index.
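 
 For example, a schema that uses an `id` field as the unique identifier declares:
 
 [source,xml]
 ----
 <uniqueKey>id</uniqueKey>
 ----
 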
@@ -37,7 +36,6 @@ Schema defaults and `copyFields` cannot be used to populate the `uniqueKey` fiel
 Further, the operation will fail if the `uniqueKey` field is used but is multivalued (or inherits multivalued-ness from the `fieldType`). However, `uniqueKey` will continue to work, as long as the field is used properly.
 
 
-[[OtherSchemaElements-Similarity]]
 == Similarity
 
 Similarity is a Lucene class used to score a document in searching.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/overview-of-searching-in-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/overview-of-searching-in-solr.adoc b/solr/solr-ref-guide/src/overview-of-searching-in-solr.adoc
index 60ae891..4389c5f 100644
--- a/solr/solr-ref-guide/src/overview-of-searching-in-solr.adoc
+++ b/solr/solr-ref-guide/src/overview-of-searching-in-solr.adoc
@@ -54,7 +54,7 @@ Faceting makes use of fields defined when the search applications were indexed.
 
 Solr also supports a feature called <<morelikethis.adoc#morelikethis,MoreLikeThis>>, which enables users to submit new queries that focus on particular terms returned in an earlier query. MoreLikeThis queries can make use of faceting or clustering to provide additional aid to users.
 
-A Solr component called a <<response-writers.adoc#response-writers,*response writer*>> manages the final presentation of the query response. Solr includes a variety of response writers, including an <<response-writers.adoc#ResponseWriters-TheStandardXMLResponseWriter,XML Response Writer>> and a <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JSON Response Writer>>.
+A Solr component called a <<response-writers.adoc#response-writers,*response writer*>> manages the final presentation of the query response. Solr includes a variety of response writers, including an <<response-writers.adoc#standard-xml-response-writer,XML Response Writer>> and a <<response-writers.adoc#json-response-writer,JSON Response Writer>>.
 
 The diagram below summarizes some key elements of the search process.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/pagination-of-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/pagination-of-results.adoc b/solr/solr-ref-guide/src/pagination-of-results.adoc
index a9c8368..130a6c7 100644
--- a/solr/solr-ref-guide/src/pagination-of-results.adoc
+++ b/solr/solr-ref-guide/src/pagination-of-results.adoc
@@ -24,7 +24,7 @@ In most search applications, the "top" matching results (sorted by score, or som
 In many applications the UI for these sorted results is displayed to the user in "pages" containing a fixed number of matching results, and users don't typically look past the first few pages' worth of results.
 
 == Basic Pagination
-In Solr, this basic paginated searching is supported using the `start` and `rows` parameters, and performance of this common behaviour can be tuned by utilizing the <<query-settings-in-solrconfig.adoc#QuerySettingsinSolrConfig-queryResultCache,`queryResultCache`>> and adjusting the <<query-settings-in-solrconfig.adoc#QuerySettingsinSolrConfig-queryResultWindowSize,`queryResultWindowSize`>> configuration options based on your expected page sizes.
+In Solr, this basic paginated searching is supported using the `start` and `rows` parameters, and performance of this common behaviour can be tuned by utilizing the <<query-settings-in-solrconfig.adoc#queryresultcache,`queryResultCache`>> and adjusting the <<query-settings-in-solrconfig.adoc#queryresultwindowsize,`queryResultWindowSize`>> configuration options based on your expected page sizes.
 
 === Basic Pagination Examples
 
@@ -103,7 +103,7 @@ There are a few important constraints to be aware of when using `cursorMark` par
 * If `id` is your uniqueKey field, then sort params like `id asc` and `name asc, id desc` would both work fine, but `name asc` by itself would not
 . Sorts including <<working-with-dates.adoc#working-with-dates,Date Math>> based functions that involve calculations relative to `NOW` will cause confusing results, since every document will get a new sort value on every subsequent request. This can easily result in cursors that never end, and constantly return the same documents over and over – even if the documents are never updated.
 +
-In this situation, choose & re-use a fixed value for the <<working-with-dates.adoc#WorkingwithDates-NOW,`NOW` request param>> in all of your cursor requests.
+In this situation, choose & re-use a fixed value for the <<working-with-dates.adoc#now,`NOW` request param>> in all of your cursor requests.
 
 Cursor mark values are computed based on the sort values of each document in the result, which means multiple documents with identical sort values will produce identical Cursor mark values if one of them is the last document on a page of results. In that situation, the subsequent request using that `cursorMark` would not know which of the documents with the identical mark values should be skipped. Requiring that the uniqueKey field be used as a clause in the sort criteria guarantees that a deterministic ordering will be returned, and that every `cursorMark` value will identify a unique point in the sequence of documents.
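 
 As a sketch, a first cursor request (the `timestamp` field is hypothetical; note the sort includes the uniqueKey field `id`, as required) might look like:
 
 [source,text]
 ----
 q=*:*&rows=10&sort=timestamp asc,id asc&cursorMark=*
 ----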
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/performance-statistics-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/performance-statistics-reference.adoc b/solr/solr-ref-guide/src/performance-statistics-reference.adoc
index 50bc601..9987850 100644
--- a/solr/solr-ref-guide/src/performance-statistics-reference.adoc
+++ b/solr/solr-ref-guide/src/performance-statistics-reference.adoc
@@ -24,7 +24,7 @@ The same statistics are also exposed via the <<mbean-request-handler.adoc#mbean-
 
 These statistics are per core. When you are running in SolrCloud mode, these statistics correspond to the performance of an individual replica.
 
-== Request Handlers
+== Request Handler Statistics
 
 === Update Request Handler
 
@@ -93,7 +93,7 @@ Both Update Request Handler and Search Request Handler along with handlers like
 |transaction_logs_total_size |Total size of all the TLogs created so far from the beginning of the Solr instance.
 |===
 
-== Caches
+== Cache Statistics
 
 === Document Cache
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/phonetic-matching.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/phonetic-matching.adoc b/solr/solr-ref-guide/src/phonetic-matching.adoc
index 0e7c81a..6cd419c 100644
--- a/solr/solr-ref-guide/src/phonetic-matching.adoc
+++ b/solr/solr-ref-guide/src/phonetic-matching.adoc
@@ -22,11 +22,9 @@ Phonetic matching algorithms may be used to encode tokens so that two different
 
 For overviews of and comparisons between algorithms, see http://en.wikipedia.org/wiki/Phonetic_algorithm and http://ntz-develop.blogspot.com/2011/03/phonetic-algorithms.html
 
-
-[[PhoneticMatching-Beider-MorsePhoneticMatching_BMPM_]]
 == Beider-Morse Phonetic Matching (BMPM)
 
-For examples of how to use this encoding in your analyzer, see <<filter-descriptions.adoc#FilterDescriptions-Beider-MorseFilter,Beider Morse Filter>> in the Filter Descriptions section.
+For examples of how to use this encoding in your analyzer, see <<filter-descriptions.adoc#beider-morse-filter,Beider Morse Filter>> in the Filter Descriptions section.
 
 Beider-Morse Phonetic Matching (BMPM) is a "soundalike" tool that lets you search using a new phonetic matching system. BMPM helps you search for personal names (or just surnames) in a Solr/Lucene index, and is far superior to the existing phonetic codecs, such as regular soundex, metaphone, caverphone, etc.
 
@@ -59,7 +57,7 @@ For more information, see here: http://stevemorse.org/phoneticinfo.htm and http:
 
 == Daitch-Mokotoff Soundex
 
-To use this encoding in your analyzer, see <<filter-descriptions.adoc#FilterDescriptions-Daitch-MokotoffSoundexFilter,Daitch-Mokotoff Soundex Filter>> in the Filter Descriptions section.
+To use this encoding in your analyzer, see <<filter-descriptions.adoc#daitch-mokotoff-soundex-filter,Daitch-Mokotoff Soundex Filter>> in the Filter Descriptions section.
 
 The Daitch-Mokotoff Soundex algorithm is a refinement of the Russel and American Soundex algorithms, yielding greater accuracy in matching especially Slavic and Yiddish surnames with similar pronunciation but differences in spelling.
 
@@ -76,13 +74,13 @@ For more information, see http://en.wikipedia.org/wiki/Daitch%E2%80%93Mokotoff_S
 
 == Double Metaphone
 
-To use this encoding in your analyzer, see <<filter-descriptions.adoc#FilterDescriptions-DoubleMetaphoneFilter,Double Metaphone Filter>> in the Filter Descriptions section. Alternatively, you may specify `encoding="DoubleMetaphone"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>, but note that the Phonetic Filter version will *not* provide the second ("alternate") encoding that is generated by the Double Metaphone Filter for some tokens.
+To use this encoding in your analyzer, see <<filter-descriptions.adoc#double-metaphone-filter,Double Metaphone Filter>> in the Filter Descriptions section. Alternatively, you may specify `encoding="DoubleMetaphone"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>, but note that the Phonetic Filter version will *not* provide the second ("alternate") encoding that is generated by the Double Metaphone Filter for some tokens.
 
 Encodes tokens using the double metaphone algorithm by Lawrence Philips. See the original article at http://www.drdobbs.com/the-double-metaphone-search-algorithm/184401251?pgno=2
 
 == Metaphone
 
-To use this encoding in your analyzer, specify `encoding="Metaphone"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="Metaphone"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 Encodes tokens using the Metaphone algorithm by Lawrence Philips, described in "Hanging on the Metaphone" in Computer Language, Dec. 1990.
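 
 A minimal analyzer sketch using this encoder (the field type name is hypothetical):
 
 [source,xml]
 ----
 <fieldType name="text_metaphone" class="solr.TextField">
   <analyzer>
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <!-- inject="true" keeps the original token alongside the phonetic code -->
     <filter class="solr.PhoneticFilterFactory" encoder="Metaphone" inject="true"/>
   </analyzer>
 </fieldType>
 ----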
 
@@ -91,7 +89,7 @@ Another reference for more information is http://www.drdobbs.com/the-double-meta
 
 == Soundex
 
-To use this encoding in your analyzer, specify `encoding="Soundex"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="Soundex"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 Encodes tokens using the Soundex algorithm, which is used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes.
 
@@ -99,7 +97,7 @@ See also http://en.wikipedia.org/wiki/Soundex.
 
 == Refined Soundex
 
-To use this encoding in your analyzer, specify `encoding="RefinedSoundex"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="RefinedSoundex"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 Encodes tokens using an improved version of the Soundex algorithm.
 
@@ -107,7 +105,7 @@ See http://en.wikipedia.org/wiki/Soundex.
 
 == Caverphone
 
-To use this encoding in your analyzer, specify `encoding="Caverphone"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="Caverphone"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 Caverphone is an algorithm created by the Caversham Project at the University of Otago. The algorithm is optimised for accents present in the southern part of the city of Dunedin, New Zealand.
 
@@ -115,7 +113,7 @@ See http://en.wikipedia.org/wiki/Caverphone and the Caverphone 2.0 specification
 
 == Kölner Phonetik a.k.a. Cologne Phonetic
 
-To use this encoding in your analyzer, specify `encoding="ColognePhonetic"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="ColognePhonetic"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 The Kölner Phonetik, an algorithm published by Hans Joachim Postel in 1969, is optimized for the German language.
 
@@ -123,7 +121,7 @@ See http://de.wikipedia.org/wiki/K%C3%B6lner_Phonetik
 
 == NYSIIS
 
-To use this encoding in your analyzer, specify `encoding="Nysiis"` with the <<filter-descriptions.adoc#FilterDescriptions-PhoneticFilter,Phonetic Filter>>.
+To use this encoding in your analyzer, specify `encoding="Nysiis"` with the <<filter-descriptions.adoc#phonetic-filter,Phonetic Filter>>.
 
 NYSIIS is an encoding used to relate similar names, but can also be used as a general purpose scheme to find words with similar phonemes.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/post-tool.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/post-tool.adoc b/solr/solr-ref-guide/src/post-tool.adoc
index e0391af..1cbaa92 100644
--- a/solr/solr-ref-guide/src/post-tool.adoc
+++ b/solr/solr-ref-guide/src/post-tool.adoc
@@ -20,7 +20,7 @@
 
 Solr includes a simple command line tool for POSTing various types of content to a Solr server.
 
-The tool is `bin/post`. The bin/post tool is a Unix shell script; for Windows (non-Cygwin) usage, see the <<PostTool-WindowsSupport,Windows section>> below.
+The tool is `bin/post`, a Unix shell script; for Windows (non-Cygwin) usage, see the section <<Post Tool Windows Support>> below.
 
 To run it, open a window and enter:
 
@@ -116,7 +116,7 @@ Index a tab-separated file into `gettingstarted`:
 bin/post -c signals -params "separator=%09" -type text/csv data.tsv
 ----
 
-The content type (`-type`) parameter is required to treat the file as the proper type, otherwise it will be ignored and a WARNING logged as it does not know what type of content a .tsv file is. The <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-CSVFormattedIndexUpdates,CSV handler>> supports the `separator` parameter, and is passed through using the `-params` setting.
+The content type (`-type`) parameter is required to treat the file as the proper type; otherwise it will be ignored and a WARNING logged, as the tool does not know what type of content a .tsv file is. The <<uploading-data-with-index-handlers.adoc#csv-formatted-index-updates,CSV handler>> supports the `separator` parameter, which is passed through using the `-params` setting.
 
 === Indexing JSON
 
@@ -159,8 +159,7 @@ Index a pdf as the user solr with password `SolrRocks`:
 bin/post -u solr:SolrRocks -c gettingstarted a.pdf
 ----
 
-[[PostTool-WindowsSupport]]
-== Windows Support
+== Post Tool Windows Support
 
 `bin/post` currently exists only as a Unix shell script; however, it delegates its work to a cross-platform capable Java program. The <<SimplePostTool>> can be run directly in supported environments, including Windows.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
index 1a6b315..09a8f0a 100644
--- a/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/query-settings-in-solrconfig.adoc
@@ -29,7 +29,6 @@ These settings are all configured in child elements of the `<query>` element in
 </query>
 ----
 
-[[QuerySettingsinSolrConfig-Caches]]
 == Caches
 
 Solr caches are associated with a specific instance of an Index Searcher, a specific view of an index that doesn't change during the lifetime of that searcher. As long as that Index Searcher is being used, any items in its cache will be valid and available for reuse. Caching in Solr differs from caching in many other applications in that cached Solr objects do not expire after a time interval; instead, they remain valid for the lifetime of the Index Searcher.
@@ -54,7 +53,6 @@ FastLRUCache and LFUCache support `showItems` attribute. This is the number of c
 
 Details of each cache are described below.
 
-[[QuerySettingsinSolrConfig-filterCache]]
 === filterCache
 
 This cache is used by `SolrIndexSearcher` for filters (DocSets), which are unordered sets of all documents that match a query. The numeric attributes control the number of entries in the cache.
@@ -71,7 +69,6 @@ Solr also uses this cache for faceting when the configuration parameter `facet.m
              autowarmCount="128"/>
 ----
 
-[[QuerySettingsinSolrConfig-queryResultCache]]
 === queryResultCache
 
 This cache holds the results of previous searches: ordered lists of document IDs (DocList) based on a query, a sort, and the range of documents requested.
@@ -87,7 +84,6 @@ The `queryResultCache` has an additional (optional) setting to limit the maximum
                   maxRamMB="1000"/>
 ----
 
-[[QuerySettingsinSolrConfig-documentCache]]
 === documentCache
 
 This cache holds Lucene Document objects (the stored fields for each document). Since Lucene internal document IDs are transient, this cache is not auto-warmed. The size for the `documentCache` should always be greater than `max_results` times the `max_concurrent_queries`, to ensure that Solr does not need to refetch a document during a request. The more fields you store in your documents, the higher the memory usage of this cache will be.
@@ -100,7 +96,6 @@ This cache holds Lucene Document objects (the stored fields for each document).
                autowarmCount="0"/>
 ----
 
-[[QuerySettingsinSolrConfig-UserDefinedCaches]]
 === User Defined Caches
 
 You can also define named caches for your own application code to use. You can locate and use your cache object by name by calling the `SolrIndexSearcher` methods `getCache()`, `cacheLookup()` and `cacheInsert()`.
@@ -116,10 +111,8 @@ You can also define named caches for your own application code to use. You can l
 
 If you want auto-warming of your cache, include a `regenerator` attribute with the fully qualified name of a class that implements `solr.search.CacheRegenerator`. You can also use the `NoOpRegenerator`, which simply repopulates the cache with old items. Define it with the `regenerator` parameter as: `regenerator="solr.NoOpRegenerator"`.
 
-[[QuerySettingsinSolrConfig-QuerySizingandWarming]]
 == Query Sizing and Warming
 
-[[QuerySettingsinSolrConfig-maxBooleanClauses]]
 === maxBooleanClauses
 
 This sets the maximum number of clauses allowed in a boolean query. This can affect range or prefix queries that expand to a query with a large number of boolean terms. If this limit is exceeded, an exception is thrown.
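 
 For example, the stock configuration sets:
 
 [source,xml]
 ----
 <maxBooleanClauses>1024</maxBooleanClauses>
 ----
 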
@@ -134,7 +127,6 @@ This sets the maximum number of clauses allowed in a boolean query. This can aff
 This option modifies a global property that affects all Solr cores. If multiple `solrconfig.xml` files disagree on this property, the value at any point in time will be based on the last Solr core that was initialized.
 ====
 
-[[QuerySettingsinSolrConfig-enableLazyFieldLoading]]
 === enableLazyFieldLoading
 
 If this parameter is set to true, then fields that are not directly requested will be loaded lazily as needed. This can boost performance if the most common queries only need a small subset of fields, especially if infrequently accessed fields are large in size.
@@ -144,7 +136,6 @@ If this parameter is set to true, then fields that are not directly requested wi
 <enableLazyFieldLoading>true</enableLazyFieldLoading>
 ----
 
-[[QuerySettingsinSolrConfig-useFilterForSortedQuery]]
 === useFilterForSortedQuery
 
 This parameter configures Solr to use a filter to satisfy a search. If the requested sort does not include "score", the `filterCache` will be checked for a filter matching the query. For most situations, this is only useful if the same search is requested often with different sort options and none of them ever use "score".
@@ -154,7 +145,6 @@ This parameter configures Solr to use a filter to satisfy a search. If the reque
 <useFilterForSortedQuery>true</useFilterForSortedQuery>
 ----
 
-[[QuerySettingsinSolrConfig-queryResultWindowSize]]
 === queryResultWindowSize
 
 Used with the `queryResultCache`, this will cache a superset of the requested number of document IDs. For example, if a search in response to a particular query requests documents 10 through 19, and `queryResultWindowSize` is 50, documents 0 through 49 will be cached.
@@ -164,7 +154,6 @@ Used with the `queryResultCache`, this will cache a superset of the requested nu
 <queryResultWindowSize>20</queryResultWindowSize>
 ----
 
-[[QuerySettingsinSolrConfig-queryResultMaxDocsCached]]
 === queryResultMaxDocsCached
 
 This parameter sets the maximum number of documents to cache for any entry in the `queryResultCache`.
@@ -174,7 +163,6 @@ This parameter sets the maximum number of documents to cache for any entry in th
 <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
 ----
 
-[[QuerySettingsinSolrConfig-useColdSearcher]]
 === useColdSearcher
 
 This setting controls whether search requests for which there is not a currently registered searcher should wait for a new searcher to warm up (false) or proceed immediately (true). When set to "false", requests will block until the searcher has warmed its caches.
@@ -184,7 +172,6 @@ This setting controls whether search requests for which there is not a currently
 <useColdSearcher>false</useColdSearcher>
 ----
 
-[[QuerySettingsinSolrConfig-maxWarmingSearchers]]
 === maxWarmingSearchers
 
 This parameter sets the maximum number of searchers that may be warming up in the background at any given time. Exceeding this limit will raise an error. For read-only slaves, a value of two is reasonable. Masters should probably be set a little higher.
@@ -194,10 +181,9 @@ This parameter sets the maximum number of searchers that may be warming up in th
 <maxWarmingSearchers>2</maxWarmingSearchers>
 ----
 
-[[QuerySettingsinSolrConfig-Query-RelatedListeners]]
 == Query-Related Listeners
 
-As described in the section on <<QuerySettingsinSolrConfig-Caches,Caches>>, new Index Searchers are cached. It's possible to use the triggers for listeners to perform query-related tasks. The most common use of this is to define queries to further "warm" the Index Searchers while they are starting. One benefit of this approach is that field caches are pre-populated for faster sorting.
+As described in the section on <<Caches>>, new Index Searchers are cached. It's possible to use the triggers for listeners to perform query-related tasks. The most common use of this is to define queries to further "warm" the Index Searchers while they are starting. One benefit of this approach is that field caches are pre-populated for faster sorting.
 
 Good query selection is key with this type of listener. It's best to choose your most common and/or heaviest queries and include not just the keywords used, but any other parameters such as sorting or filtering requests.
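 
 A sketch of a `newSearcher` warming listener (the query values are placeholders):
 
 [source,xml]
 ----
 <listener event="newSearcher" class="solr.QuerySenderListener">
   <arr name="queries">
     <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
   </arr>
 </listener>
 ----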
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
index 947c760..9f9f041 100644
--- a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
+++ b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
@@ -22,14 +22,12 @@ SolrCloud supports elasticity, high availability, and fault tolerance in reads a
 
 What this means, basically, is that when you have a large cluster, you can always make requests to the cluster: Reads will return results whenever possible, even if some nodes are down, and Writes will be acknowledged only if they are durable; i.e., you won't lose data.
 
-[[ReadandWriteSideFaultTolerance-ReadSideFaultTolerance]]
 == Read Side Fault Tolerance
 
 In a SolrCloud cluster each individual node load balances read requests across all the replicas in a collection. You still need a load balancer on the 'outside' that talks to the cluster, or you need a smart client that understands how to read and interact with Solr's metadata in ZooKeeper, needing only the ZooKeeper ensemble's address to discover which nodes it should send requests to. (Solr provides a smart Java SolrJ client called {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html[CloudSolrClient].)
 
 Even if some nodes in the cluster are offline or unreachable, a Solr node will be able to correctly respond to a search request as long as it can communicate with at least one replica of every shard, or one replica of every _relevant_ shard if the user limited the search via the `shards` or `\_route_` parameters. The more replicas there are of every shard, the more likely the Solr cluster will be able to handle search requests in the event of node failures.
 
-[[ReadandWriteSideFaultTolerance-zkConnected]]
 === zkConnected
 
 A Solr node will return the results of a search request as long as it can communicate with at least one replica of every shard that it knows about, even if it can _not_ communicate with ZooKeeper at the time it receives the request. This is normally the preferred behavior from a fault tolerance standpoint, but may result in stale or incorrect results if there have been major changes to the collection structure that the node has not been informed of via ZooKeeper (i.e., shards may have been added or removed, or split into sub-shards).
@@ -56,7 +54,6 @@ A `zkConnected` header is included in every search response indicating if the no
 }
 ----
 
-[[ReadandWriteSideFaultTolerance-shards.tolerant]]
 === shards.tolerant
 
 If one or more queried shards are completely unavailable, Solr's default behavior is to fail the request. However, there are many use cases where partial results are acceptable, so Solr provides a boolean `shards.tolerant` parameter (default `false`).
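 
 For example, to accept partial results on a hypothetical collection:
 
 [source,text]
 ----
 http://localhost:8983/solr/gettingstarted/select?q=*:*&shards.tolerant=true
 ----
 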
@@ -89,12 +86,10 @@ Example response with `partialResults` flag set to 'true':
 }
 ----
 
-[[ReadandWriteSideFaultTolerance-WriteSideFaultTolerance]]
 == Write Side Fault Tolerance
 
 SolrCloud is designed to replicate documents to ensure redundancy for your data, and enable you to send update requests to any node in the cluster. That node will determine if it hosts the leader for the appropriate shard, and if not it will forward the request to the leader, which will then forward it to all existing replicas, using versioning to make sure every replica has the most up-to-date version. If the leader goes down, another replica can take its place. This architecture enables you to be certain that your data can be recovered in the event of a disaster, even if you are using <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>.
 
-[[ReadandWriteSideFaultTolerance-Recovery]]
 === Recovery
 
 A Transaction Log is created for each node so that every change to content or organization is noted. The log is used to determine which content in the node should be included in a replica. When a new replica is created, it refers to the Leader and the Transaction Log to know which content to include. If it fails, it retries.
@@ -105,7 +100,6 @@ If a leader goes down, it may have sent requests to some replicas and not others
 
 If an update fails because cores are reloading schemas and some have finished but others have not, the leader tells the nodes that the update failed and starts the recovery procedure.
 
-[[ReadandWriteSideFaultTolerance-AchievedReplicationFactor]]
 === Achieved Replication Factor
 
 When using a replication factor greater than one, an update request may succeed on the shard leader but fail on one or more of the replicas. For instance, consider a collection with one shard and a replication factor of three. In this case, you have a shard leader and two additional replicas. If an update request succeeds on the leader but fails on both replicas, for whatever reason, the update request is still considered successful from the perspective of the client. The replicas that missed the update will sync with the leader when they recover.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
index e20b55c..6271cb6 100644
--- a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
@@ -22,7 +22,6 @@ The `requestDispatcher` element of `solrconfig.xml` controls the way the Solr HT
 
 Included are parameters for defining whether it should handle `/select` URLs (for Solr 1.1 compatibility), whether it will support remote streaming, the maximum size of file uploads, and how it will respond to HTTP cache headers in requests.
 
-[[RequestDispatcherinSolrConfig-handleSelectElement]]
 == handleSelect Element
 
 [IMPORTANT]
@@ -41,7 +40,6 @@ In recent versions of Solr, a `/select` requestHandler is defined by default, so
 </requestDispatcher>
 ----
 
-[[RequestDispatcherinSolrConfig-requestParsersElement]]
 == requestParsers Element
 
 The `<requestParsers>` sub-element controls values related to parsing requests. This is an empty XML element that doesn't have any content, only attributes.
@@ -67,7 +65,7 @@ The attribute `addHttpRequestToContext` can be used to indicate that the origina
                 addHttpRequestToContext="false" />
 ----
 
-The below command is an example of how to enable RemoteStreaming and BodyStreaming through <<config-api.adoc#ConfigAPI-CreatingandUpdatingCommonProperties,Config API>>:
+The command below is an example of how to enable RemoteStreaming and BodyStreaming through the <<config-api.adoc#creating-and-updating-common-properties,Config API>>:
 
 [source,bash]
 ----
@@ -77,7 +75,6 @@ curl http://localhost:8983/solr/gettingstarted/config -H 'Content-type:applicati
 }'
 ----
 
-[[RequestDispatcherinSolrConfig-httpCachingElement]]
 == httpCaching Element
 
 The `<httpCaching>` element controls HTTP cache control headers. Do not confuse these settings with Solr's internal cache configuration. This element controls caching of HTTP responses as defined by the W3C HTTP specifications.
@@ -102,7 +99,6 @@ This value of this attribute is sent as the value of the `ETag` header. Changing
 </httpCaching>
 ----
 
-[[RequestDispatcherinSolrConfig-cacheControlElement]]
 === cacheControl Element
 
 In addition to these attributes, `<httpCaching>` accepts one child element: `<cacheControl>`. The content of this element will be sent as the value of the Cache-Control header on HTTP responses. This header is used to modify the default caching behavior of the requesting client. The possible values for the Cache-Control header are defined by the HTTP 1.1 specification in http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9[Section 14.9].
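 
 For example, a directive that lets clients cache responses for 30 seconds might look like:
 
 [source,xml]
 ----
 <httpCaching never304="false">
   <cacheControl>max-age=30, public</cacheControl>
 </httpCaching>
 ----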

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
index 46d9c9e..10fabab 100644
--- a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
@@ -26,7 +26,6 @@ A _search component_ is a feature of search, such as highlighting or faceting. T
 
 These are often referred to as "requestHandler" and "searchComponent", which is how they are defined in `solrconfig.xml`.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-RequestHandlers]]
 == Request Handlers
 
 Every request handler is defined with a name and a class. The name of the request handler is referenced with the request to Solr, typically as a path. For example, if Solr is installed at `http://localhost:8983/solr/` and you have a collection named "```gettingstarted```", you can make a request using URLs like this:
@@ -44,7 +43,6 @@ Request handlers can also process requests for nested paths of their names, for
 
 It is also possible to configure defaults for request handlers with a section called `initParams`. These defaults can be used when you want to have common properties that will be used by each separate handler. For example, if you intend to create several request handlers that will all request the same list of fields in the response, you can configure an `initParams` section with your list of fields. For more information about `initParams`, see the section <<initparams-in-solrconfig.adoc#initparams-in-solrconfig,InitParams in SolrConfig>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers]]
 === SearchHandlers
 
 The primary request handler defined with Solr by default is the "SearchHandler", which handles search queries. The request handler is defined, and then a list of defaults for the handler is defined with a `defaults` list.
@@ -91,33 +89,28 @@ In this example, the filter query "inStock:true" will always be added to every q
 +
 In this example, facet fields have been defined, which limits the facets that will be returned by Solr. If the client requests facets, the facets defined with a configuration like this are the only facets they will see.
 
-The final section of a request handler definition is `components`, which defines a list of search components that can be used with a request handler. They are only registered with the request handler. How to define a search component is discussed further on in the section on <<RequestHandlersandSearchComponentsinSolrConfig-SearchComponents,Search Components>>. The `components` element can only be used with a request handler that is a SearchHandler.
+The final section of a request handler definition is `components`, which defines a list of search components that can be used with a request handler. They are only registered with the request handler. How to define a search component is discussed in the <<Search Components>> section below. The `components` element can only be used with a request handler that is a SearchHandler.
 
 The `solrconfig.xml` file includes many other examples of SearchHandlers that can be used or modified as needed.
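 
 To see which values a handler's `defaults` list contributed to a request, `echoParams=all` can be added to the query; a minimal sketch, assuming the "gettingstarted" collection from above:
 
 [source,bash]
 ----
 # echoParams=all echoes back both the parameters sent with the request
 # and those the handler applied from its configuration, making it easy
 # to verify a SearchHandler's defaults
 curl "http://localhost:8983/solr/gettingstarted/select?q=solr&echoParams=all"
 ----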
 
-[[RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers]]
 === UpdateRequestHandlers
 
 The UpdateRequestHandlers are request handlers which process updates to the index.
 
 In this guide, we've covered these handlers in detail in the section <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-ShardHandlers]]
 === ShardHandlers
 
 It is possible to configure a request handler to search across the shards of a cluster; this is used with distributed search. More information about distributed search and how to configure the shardHandler is in the section <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-ImplicitRequestHandlers]]
 === Implicit Request Handlers
 
 Solr includes many out-of-the-box request handlers that are not configured in `solrconfig.xml`, and so are referred to as "implicit" - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>>.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-SearchComponents]]
 == Search Components
 
 Search components define the logic that is used by the SearchHandler to perform queries for users.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-DefaultComponents]]
 === Default Components
 
 There are several default search components that work with all SearchHandlers without any additional configuration. If no components are defined (with the exception of `first-components` and `last-components` - see below), these are executed by default, in the following order:
@@ -138,7 +131,6 @@ There are several default search components that work with all SearchHandlers wi
 
 If you register a new search component with one of these default names, the newly defined component will be used instead of the default.
 
-[[RequestHandlersandSearchComponentsinSolrConfig-First-ComponentsandLast-Components]]
 === First-Components and Last-Components
 
 It's possible to define some components as being used before (with `first-components`) or after (with `last-components`) the default components listed above.
@@ -158,7 +150,6 @@ It's possible to define some components as being used before (with `first-compon
 </arr>
 ----
 
-[[RequestHandlersandSearchComponentsinSolrConfig-Components]]
 === Components
 
 If you define `components`, the default components (see above) will not be executed, and `first-components` and `last-components` are disallowed:
@@ -172,7 +163,6 @@ If you define `components`, the default components (see above) will not be execu
 </arr>
 ----
 
-[[RequestHandlersandSearchComponentsinSolrConfig-OtherUsefulComponents]]
 === Other Useful Components
 
 Many of the other useful components are described in sections of this Guide for the features they support. These are:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/response-writers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/response-writers.adoc b/solr/solr-ref-guide/src/response-writers.adoc
index 947c8ea..2c6113b 100644
--- a/solr/solr-ref-guide/src/response-writers.adoc
+++ b/solr/solr-ref-guide/src/response-writers.adoc
@@ -25,23 +25,22 @@ Solr supports a variety of Response Writers to ensure that query responses can b
 
 The `wt` parameter selects the Response Writer to be used. The list below shows the most common settings for the `wt` parameter, with links to further sections that discuss them in more detail; a usage example follows the list.
 
-* <<ResponseWriters-CSVResponseWriter,csv>>
-* <<ResponseWriters-GeoJSONResponseWriter,geojson>>
-* <<ResponseWriters-BinaryResponseWriter,javabin>>
-* <<ResponseWriters-JSONResponseWriter,json>>
-* <<ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter,php>>
-* <<ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter,phps>>
-* <<ResponseWriters-PythonResponseWriter,python>>
-* <<ResponseWriters-RubyResponseWriter,ruby>>
-* <<ResponseWriters-SmileResponseWriter,smile>>
-* <<ResponseWriters-VelocityResponseWriter,velocity>>
-* <<ResponseWriters-XLSXResponseWriter,xlsx>>
-* <<ResponseWriters-TheStandardXMLResponseWriter,xml>>
-* <<ResponseWriters-TheXSLTResponseWriter,xslt>>
-
-
-[[ResponseWriters-TheStandardXMLResponseWriter]]
-== The Standard XML Response Writer
+* <<CSV Response Writer,csv>>
+* <<GeoJSON Response Writer,geojson>>
+* <<Binary Response Writer,javabin>>
+* <<JSON Response Writer,json>>
+* <<php-writer,php>>
+* <<php-writer,phps>>
+* <<Python Response Writer,python>>
+* <<Ruby Response Writer,ruby>>
+* <<Smile Response Writer,smile>>
+* <<Velocity Response Writer,velocity>>
+* <<XLSX Response Writer,xlsx>>
+* <<Standard XML Response Writer,xml>>
+* <<XSLT Response Writer,xslt>>
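+
+For instance, switching formats is just a matter of changing `wt` on the request; two quick examples, assuming the `techproducts` sample collection:
+
+[source,bash]
+----
+# the same query rendered in two different response formats
+curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=json"
+curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=csv&fl=id,name,price"
+----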
+
+
+== Standard XML Response Writer
 
 The XML Response Writer is the most general purpose and reusable Response Writer currently included with Solr. It is the format used in most discussions and documentation about the response of Solr queries.
 
@@ -49,7 +48,6 @@ Note that the XSLT Response Writer can be used to convert the XML produced by th
 
 The behavior of the XML Response Writer can be driven by the following query parameters.
 
-[[ResponseWriters-TheversionParameter]]
 === The version Parameter
 
 The `version` parameter determines the XML protocol used in the response. Clients are strongly encouraged to _always_ specify the protocol version, so as to ensure that the format of the response they receive does not change unexpectedly if the Solr server is upgraded and a new default format is introduced.
@@ -58,8 +56,7 @@ The only currently supported version value is `2.2`. The format of the `response
 
 The default value is the latest supported.
 
-[[ResponseWriters-ThestylesheetParameter]]
-=== The stylesheet Parameter
+=== stylesheet Parameter
 
 The `stylesheet` parameter can be used to direct Solr to include a `<?xml-stylesheet type="text/xsl" href="..."?>` declaration in the XML response it returns.
 
@@ -70,27 +67,23 @@ The default behavior is not to return any stylesheet declaration at all.
 Use of the `stylesheet` parameter is discouraged, as there is currently no way to specify external stylesheets, and no stylesheets are provided in the Solr distributions. This is a legacy parameter, which may be developed further in a future release.
 ====
 
-[[ResponseWriters-TheindentParameter]]
-=== The indent Parameter
+=== indent Parameter
 
 If the `indent` parameter is used, and has a non-blank value, then Solr will make some attempts at indenting its XML response to make it more readable by humans.
 
 The default behavior is not to indent.
 
-[[ResponseWriters-TheXSLTResponseWriter]]
-== The XSLT Response Writer
+== XSLT Response Writer
 
 The XSLT Response Writer applies an XML stylesheet to output. It can be used for tasks such as formatting results for an RSS feed.
 
-[[ResponseWriters-trParameter]]
 === tr Parameter
 
 The XSLT Response Writer accepts one parameter: the `tr` parameter, which identifies the XML transformation to use. The transformation must be found in the Solr `conf/xslt` directory.
 
 The Content-Type of the response is set according to the `<xsl:output>` statement in the XSLT transform, for example: `<xsl:output media-type="text/html"/>`
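 
 A request might look like the following sketch, assuming the `techproducts` configset, which ships an `example.xsl` in `conf/xslt`:
 
 [source,bash]
 ----
 # tr names a stylesheet relative to the core's conf/xslt directory
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=xslt&tr=example.xsl"
 ----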
 
-[[ResponseWriters-Configuration]]
-=== Configuration
+=== XSLT Configuration
 
 The example below, from the `sample_techproducts_configs` <<response-writers.adoc#response-writers,config set>> in the Solr distribution, shows how the XSLT Response Writer is configured.
 
@@ -108,7 +101,6 @@ The example below, from the `sample_techproducts_configs` <<response-writers.ado
 
 A value of 5 for `xsltCacheLifetimeSeconds` is good for development, to see XSLT changes quickly. For production you probably want a much higher value.
 
-[[ResponseWriters-JSONResponseWriter]]
 == JSON Response Writer
 
 A very commonly used Response Writer is the `JsonResponseWriter`, which formats output in JavaScript Object Notation (JSON), a lightweight data interchange format specified in RFC 4627. Setting the `wt` parameter to `json` invokes this Response Writer.
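 
 A simple request, with `indent=true` added for readability and assuming the `techproducts` example:
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=json&indent=true"
 ----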
@@ -158,10 +150,8 @@ The default mime type for the JSON writer is `application/json`, however this ca
 </queryResponseWriter>
 ----
 
-[[ResponseWriters-JSON-SpecificParameters]]
 === JSON-Specific Parameters
 
-[[ResponseWriters-json.nl]]
 ==== json.nl
 
 This parameter controls the output format of NamedLists, where order is more important than access by name. NamedList is currently used for field faceting data.
@@ -196,7 +186,6 @@ NamedList is represented as an array of Name Type Value JSON objects.
 +
 With input of `NamedList("a"=1, "bar"="foo", null=3, null=null)`, the output would be `[{"name":"a","type":"int","value":1}, {"name":"bar","type":"str","value":"foo"}, {"name":null,"type":"int","value":3}, {"name":null,"type":"null","value":null}]`.
 
-[[ResponseWriters-json.wrf]]
 ==== json.wrf
 
 `json.wrf=function` adds a wrapper-function around the JSON response, useful in AJAX with dynamic script tags for specifying a JavaScript callback function.
@@ -204,17 +193,14 @@ With input of `NamedList("a"=1, "bar"="foo", null=3, null=null)`, the output wou
 * http://www.xml.com/pub/a/2005/12/21/json-dynamic-script-tag.html
 * http://www.theurer.cc/blog/2005/12/15/web-services-json-dump-your-proxy/
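 
 For example, in the sketch below the callback name `handleResponse` is a hypothetical client-side function; the response body arrives wrapped as `handleResponse({...})`:
 
 [source,bash]
 ----
 # json.wrf wraps the JSON payload in the named JavaScript function call
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=json&json.wrf=handleResponse"
 ----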
 
-[[ResponseWriters-BinaryResponseWriter]]
 == Binary Response Writer
 
 This is a custom binary format used by Solr for inter-node communication as well as client-server communication. SolrJ uses this as the default for indexing as well as querying. See <<client-apis.adoc#client-apis,Client APIs>> for more details.
 
-[[ResponseWriters-GeoJSONResponseWriter]]
 == GeoJSON Response Writer
 
 Returns Solr results in http://geojson.org[GeoJSON] augmented with Solr-specific JSON. To use this, set `wt=geojson` and `geojson.field` to the name of a spatial Solr field. Not all spatial field types are supported, and you'll get an error if you use an unsupported one.
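 
 A sketch of such a request, assuming a spatial field named `store` (as in the techproducts sample schema):
 
 [source,bash]
 ----
 # geojson.field must name a spatial field; unsupported field types
 # produce an error rather than GeoJSON output
 curl "http://localhost:8983/solr/techproducts/select?q=*:*&wt=geojson&geojson.field=store"
 ----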
 
-[[ResponseWriters-PythonResponseWriter]]
 == Python Response Writer
 
 Solr has an optional Python response format that extends its JSON output in the following ways to allow the response to be safely evaluated by the Python interpreter (a sample request follows the list):
@@ -225,7 +211,7 @@ Solr has an optional Python response format that extends its JSON output in the
 * newlines are escaped
 * null changed to None
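 
 A sample request, assuming the techproducts collection; the body can then be evaluated on the Python side:
 
 [source,bash]
 ----
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&wt=python"
 ----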
 
-[[ResponseWriters-PHPResponseWriterandPHPSerializedResponseWriter]]
+[[php-writer]]
 == PHP Response Writer and PHP Serialized Response Writer
 
 Solr has a PHP response format that outputs an array (as PHP code) which can be evaluated. Setting the `wt` parameter to `php` invokes the PHP Response Writer.
@@ -250,7 +236,6 @@ $result = unserialize($serializedResult);
 print_r($result);
 ----
 
-[[ResponseWriters-RubyResponseWriter]]
 == Ruby Response Writer
 
 Solr has an optional Ruby response format that extends its JSON output in the following ways to allow the response to be safely evaluated by Ruby's interpreter:
@@ -274,14 +259,12 @@ puts 'number of matches = ' + rsp['response']['numFound'].to_s
 rsp['response']['docs'].each { |doc| puts 'name field = ' + doc['name'] }
 ----
 
-[[ResponseWriters-CSVResponseWriter]]
 == CSV Response Writer
 
 The CSV response writer returns a list of documents in comma-separated values (CSV) format. Other information that would normally be included in a response, such as facet information, is excluded.
 
 The CSV response writer supports multi-valued fields, as well as <<transforming-result-documents.adoc#transforming-result-documents,pseudo-fields>>, and the output of this CSV format is compatible with Solr's https://wiki.apache.org/solr/UpdateCSV[CSV update format].
 
-[[ResponseWriters-CSVParameters]]
 === CSV Parameters
 
 These parameters specify the CSV format that will be returned. You can accept the default values or specify your own.
@@ -297,7 +280,6 @@ These parameters specify the CSV format that will be returned. You can accept th
 |csv.null |Defaults to a zero-length string. Use this parameter when a document has no value for a particular field.
 |===
 
-[[ResponseWriters-Multi-ValuedFieldCSVParameters]]
 === Multi-Valued Field CSV Parameters
 
 These parameters specify how multi-valued fields are encoded. Per-field overrides for these values can be done using `f.<fieldname>.csv.separator=|`.
@@ -310,8 +292,7 @@ These parameters specify how multi-valued fields are encoded. Per-field override
 |csv.mv.separator |Defaults to the `csv.separator` value.
 |===
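 
 For example, to render the multi-valued `cat` field with a pipe separator (the `%7C` below is simply a URL-encoded `|`), assuming the techproducts collection:
 
 [source,bash]
 ----
 # per-field override: only cat's multiple values are joined with |
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name&wt=csv&f.cat.csv.separator=%7C"
 ----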
 
-[[ResponseWriters-Example]]
-=== Example
+=== CSV Writer Example
 
 `\http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=csv` returns:
 
@@ -323,19 +304,17 @@ F8V7067-APL-KIT,"electronics,connector",Belkin Mobile Power Cord for iPod w/ Doc
 MA147LL/A,"electronics,music",Apple 60 GB iPod with Video Playback Black,10,399.0,0.2446348
 ----
 
-[[ResponseWriters-VelocityResponseWriter]]
+[[velocity-writer]]
 == Velocity Response Writer
 
 The `VelocityResponseWriter` processes the Solr response and request context through Apache Velocity templating.
 
-See <<velocity-response-writer.adoc#velocity-response-writer,Velocity Response Writer>> section for details.
+See the <<velocity-response-writer.adoc#velocity-response-writer,Velocity Response Writer>> section for details.
 
-[[ResponseWriters-SmileResponseWriter]]
 == Smile Response Writer
 
 The Smile format is a JSON-compatible binary format, described in detail here: http://wiki.fasterxml.com/SmileFormat.
 
-[[ResponseWriters-XLSXResponseWriter]]
 == XLSX Response Writer
 
 Use this to get the response as a spreadsheet in the .xlsx (Microsoft Excel) format. It accepts parameters in the form `colwidth.<field-name>` and `colname.<field-name>`, which let you customize the column widths and column names.
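 
 A hedged example, assuming the techproducts collection; the column parameters here are purely illustrative:
 
 [source,bash]
 ----
 # rename the price column and widen it, saving the spreadsheet locally
 curl "http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,name,price&wt=xlsx&colname.price=Price&colwidth.price=10" \
      -o results.xlsx
 ----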

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index 4ce41fe..3b84dc6 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -28,7 +28,7 @@ Once defined through the API, roles are stored in `security.json`.
 
 == Enable the Authorization Plugin
 
-The plugin must be enabled in `security.json`. This file and where to put it in your system is described in detail in the section <<authentication-and-authorization-plugins.adoc#AuthenticationandAuthorizationPlugins-EnablePluginswithsecurity.json,Enable Plugins with security.json>>.
+The plugin must be enabled in `security.json`. This file and where to put it in your system is described in detail in the section <<authentication-and-authorization-plugins.adoc#enable-plugins-with-security-json,Enable Plugins with security.json>>.
 
 This file has two parts, the `authentication` part and the `authorization` part. The `authentication` part stores information about the class being used for authentication.
 
@@ -104,8 +104,8 @@ The pre-defined permissions are:
 ** OVERSEERSTATUS
 ** CLUSTERSTATUS
 ** REQUESTSTATUS
-* *update*: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
-* *read*: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`. This applies to all collections by default ( `collection:"*"` ).
+* *update*: this permission allows any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#updaterequesthandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
+* *read*: this permission allows any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#searchhandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, and `/sql`. This applies to all collections by default (`collection:"*"`).
 * *all*: Any requests coming to Solr.
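 
 As a sketch of how a pre-defined permission might be assigned to a role using the Authorization API described below (the `solr:SolrRocks` credentials are the well-known Basic Authentication defaults and are an assumption here):
 
 [source,bash]
 ----
 # grant the pre-defined "read" permission to a role named "guest"
 curl -u solr:SolrRocks http://localhost:8983/solr/admin/authorization \
      -H 'Content-type:application/json' \
      -d '{"set-permission": {"name":"read", "role":"guest"}}'
 ----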
 
 == Authorization API

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
index deb7243..2464606 100644
--- a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
+++ b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
@@ -71,7 +71,7 @@ The nodes are sorted first and the rules are used to sort them. This ensures tha
 
 == Rules for New Shards
 
-The rules are persisted along with collection state. So, when a new replica is created, the system will assign replicas satisfying the rules. When a new shard is created as a result of using the Collection API's <<collections-api.adoc#CollectionsAPI-createshard,CREATESHARD command>>, ensure that you have created rules specific for that shard name. Rules can be altered using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION command>>. However, it is not required to do so if the rules do not specify explicit shard names. For example, a rule such as `shard:shard1,replica:*,ip_3:168:`, will not apply to any new shard created. But, if your rule is `replica:*,ip_3:168`, then it will apply to any new shard created.
+The rules are persisted along with collection state. So, when a new replica is created, the system will assign replicas satisfying the rules. When a new shard is created as a result of using the Collection API's <<collections-api.adoc#createshard,CREATESHARD command>>, ensure that you have created rules specific for that shard name. Rules can be altered using the <<collections-api.adoc#modifycollection,MODIFYCOLLECTION command>>. However, it is not required to do so if the rules do not specify explicit shard names. For example, a rule such as `shard:shard1,replica:*,ip_3:168` will not apply to any new shard created. But if your rule is `replica:*,ip_3:168`, then it will apply to any new shard created.
 
 The same is applicable to shard splitting. Shard splitting is treated exactly the same way as shard creation. Even though `shard1_1` and `shard1_2` may be created from `shard1`, the rules treat them as distinct, unrelated shards.
 
@@ -176,4 +176,4 @@ Rules are specified per collection during collection creation as request paramet
 snitch=class:EC2Snitch&rule=shard:*,replica:1,dc:dc1&rule=shard:*,replica:<2,dc:dc3
 ----
 
-These rules are persisted in `clusterstate.json` in ZooKeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#CollectionsAPI-modifycollection,MODIFYCOLLECTION>> API.
+These rules are persisted in `clusterstate.json` in ZooKeeper and are available throughout the lifetime of the collection. This enables the system to perform any future node allocation without direct user interaction. The rules added during collection creation can be modified later using the <<collections-api.adoc#modifycollection,MODIFYCOLLECTION>> API.
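+
+For instance, a later rule change might look like this sketch (the rule value is illustrative; `%3C` is a URL-encoded `<`):
+
+[source,bash]
+----
+# replace the collection's placement rules after creation
+curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=gettingstarted&rule=shard:*,replica:%3C2,node:*"
+----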

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
index 9f8e2dc..6ca5670 100644
--- a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
+++ b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
@@ -28,13 +28,11 @@ To use HDFS rather than a local filesystem, you must be using Hadoop 2.x and you
 * Modify `solr.in.sh` (or `solr.in.cmd` on Windows) to pass the JVM arguments automatically when using `bin/solr` without having to set them manually.
 * Define the properties in `solrconfig.xml`. These configuration changes would need to be repeated for every collection, so this is a good option if you only want some of your collections stored in HDFS.
 
-[[RunningSolronHDFS-StartingSolronHDFS]]
 == Starting Solr on HDFS
 
-[[RunningSolronHDFS-StandaloneSolrInstances]]
 === Standalone Solr Instances
 
-For standalone Solr instances, there are a few parameters you should be sure to modify before starting Solr. These can be set in `solrconfig.xml`(more on that <<RunningSolronHDFS-HdfsDirectoryFactoryParameters,below>>), or passed to the `bin/solr` script at startup.
+For standalone Solr instances, there are a few parameters you should be sure to modify before starting Solr. These can be set in `solrconfig.xml` (more on that <<HdfsDirectoryFactory Parameters,below>>), or passed to the `bin/solr` script at startup.
 
 * You need to use an `HdfsDirectoryFactory` and a data dir of the form `hdfs://host:port/path`
 * You need to specify an UpdateLog location of the form `hdfs://host:port/path`
@@ -50,9 +48,8 @@ bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.updatelog=hdfs://host:port/path
 ----
 
-This example will start Solr in standalone mode, using the defined JVM properties (explained in more detail <<RunningSolronHDFS-HdfsDirectoryFactoryParameters,below>>).
+This example will start Solr in standalone mode, using the defined JVM properties (explained in more detail <<HdfsDirectoryFactory Parameters,below>>).
 
-[[RunningSolronHDFS-SolrCloudInstances]]
 === SolrCloud Instances
 
 In SolrCloud mode, it's best to leave the data and update log directories as the defaults Solr comes with and simply specify the `solr.hdfs.home`. All dynamically created collections will create the appropriate directories automatically under the `solr.hdfs.home` root directory.
@@ -70,7 +67,6 @@ bin/solr start -c -Dsolr.directoryFactory=HdfsDirectoryFactory
 This command starts Solr in SolrCloud mode, using the defined JVM properties.
 
 
-[[RunningSolronHDFS-Modifyingsolr.in.sh_nix_orsolr.in.cmd_Windows_]]
 === Modifying solr.in.sh (*nix) or solr.in.cmd (Windows)
 
 The examples above assume you will pass JVM arguments as part of the start command every time you use `bin/solr` to start Solr. However, `bin/solr` looks for an include file named `solr.in.sh` (`solr.in.cmd` on Windows) to set environment variables. By default, this file is found in the `bin` directory, and you can modify it to permanently add the `HdfsDirectoryFactory` settings and ensure they are used every time Solr is started.
@@ -85,7 +81,6 @@ For example, to set JVM arguments to always use HDFS when running in SolrCloud m
 -Dsolr.hdfs.home=hdfs://host:port/path \
 ----
 
-[[RunningSolronHDFS-TheBlockCache]]
 == The Block Cache
 
 For performance, the HdfsDirectoryFactory uses a Directory that will cache HDFS blocks. This caching mechanism replaces the standard file system cache that Solr otherwise relies on heavily. By default, this cache is allocated off heap. This cache will often need to be quite large and you may need to raise the off-heap memory limit for the specific JVM you are running Solr in. For the Oracle/OpenJDK JVMs, the following is an example command-line parameter that you can use to raise the limit when starting Solr:
@@ -95,18 +90,15 @@ For performance, the HdfsDirectoryFactory uses a Directory that will cache HDFS
 -XX:MaxDirectMemorySize=20g
 ----
 
-[[RunningSolronHDFS-HdfsDirectoryFactoryParameters]]
 == HdfsDirectoryFactory Parameters
 
 The `HdfsDirectoryFactory` has a number of settings that are defined as part of the `directoryFactory` configuration.
 
-[[RunningSolronHDFS-SolrHDFSSettings]]
 === Solr HDFS Settings
 
 `solr.hdfs.home`::
 A root location in HDFS for Solr to write collection data to. Rather than specifying an HDFS location for the data directory or update log directory, use this to specify one root location and have everything automatically created within this HDFS location. The structure of this parameter is `hdfs://host:port/path/solr`.
 
-[[RunningSolronHDFS-BlockCacheSettings]]
 === Block Cache Settings
 
 `solr.hdfs.blockcache.enabled`::
@@ -124,7 +116,6 @@ Number of memory slabs to allocate. Each slab is 128 MB in size. The default is
 `solr.hdfs.blockcache.global`::
 Enable/Disable using one global cache for all SolrCores. The settings used will be from the first HdfsDirectoryFactory created. The default is `true`.
 
-[[RunningSolronHDFS-NRTCachingDirectorySettings]]
 === NRTCachingDirectory Settings
 
 `solr.hdfs.nrtcachingdirectory.enable`:: Whether to enable the NRTCachingDirectory. The default is `true`.
@@ -136,13 +127,11 @@ NRTCachingDirectory max segment size for merges. The default is `16`.
 `solr.hdfs.nrtcachingdirectory.maxcachedmb`::
 NRTCachingDirectory max cache size. The default is `192`.
 
-[[RunningSolronHDFS-HDFSClientConfigurationSettings]]
 === HDFS Client Configuration Settings
 
 `solr.hdfs.confdir`::
 Pass the location of HDFS client configuration files; this is needed, for example, for HDFS HA.
 
-[[RunningSolronHDFS-KerberosAuthenticationSettings]]
 === Kerberos Authentication Settings
 
 Hadoop can be configured to use the Kerberos protocol to verify user identity when trying to access core services like HDFS. If your HDFS directories are protected using Kerberos, then you need to configure Solr's HdfsDirectoryFactory to authenticate using Kerberos in order to read and write to HDFS. To enable Kerberos authentication from Solr, you need to set the following parameters:
@@ -157,8 +146,7 @@ This file will need to be present on all Solr servers at the same path provided
 `solr.hdfs.security.kerberos.principal`::
 The Kerberos principal that Solr should use to authenticate to secure Hadoop; the format of a typical Kerberos V5 principal is: `primary/instance@realm`.
 
-[[RunningSolronHDFS-Example]]
-== Example
+== Example solrconfig.xml for HDFS
 
 Here is a sample `solrconfig.xml` configuration for storing Solr indexes on HDFS:
 
@@ -189,7 +177,6 @@ If using Kerberos, you will need to add the three Kerberos related properties to
 </directoryFactory>
 ----
 
-[[RunningSolronHDFS-AutomaticallyAddReplicasinSolrCloud]]
 == Automatically Add Replicas in SolrCloud
 
 One benefit to running Solr in HDFS is the ability to automatically add new replicas when the Overseer notices that a shard has gone down. Because the "gone" index shards are stored in HDFS, a new core will be created and will point to the existing indexes in HDFS.
@@ -205,7 +192,6 @@ The minimum time (in ms) to wait for initiating replacement of a replica after f
 `autoReplicaFailoverBadNodeExpiration`::
 The delay (in ms) after which a replica marked as down would be unmarked. The default is `60000`.
 
-[[RunningSolronHDFS-TemporarilydisableautoAddReplicasfortheentirecluster]]
 === Temporarily Disable autoAddReplicas for the Entire Cluster
 
 When doing offline maintenance on the cluster and for various other use cases where an admin would like to temporarily disable auto addition of replicas, the following APIs will disable and re-enable autoAddReplicas for *all collections in the cluster*: