Posted to commits@lucene.apache.org by is...@apache.org on 2017/07/29 21:59:48 UTC

[11/28] lucene-solr:jira/solr-6630: Merging master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
index b0a79c0..4b8ce46 100644
--- a/solr/solr-ref-guide/src/faceting.adoc
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -21,32 +21,26 @@
 
 Faceting is the arrangement of search results into categories based on indexed terms.
 
-Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found were each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
+Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found for each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
 
-[[Faceting-GeneralParameters]]
-== General Parameters
+== General Facet Parameters
 
 There are two general parameters for controlling faceting.
 
-[[Faceting-ThefacetParameter]]
-=== The facet Parameter
-
-If set to *true*, this parameter enables facet counts in the query response. If set to *false*, a blank or missing value, this parameter disables faceting. None of the other parameters listed below will have any effect unless this parameter is set to *true*. The default value is blank (false).
-
-[[Faceting-Thefacet.queryParameter]]
-=== The facet.query Parameter
+`facet`::
+If set to `true`, this parameter enables facet counts in the query response. If set to `false`, blank, or missing, this parameter disables faceting. None of the other parameters listed below will have any effect unless this parameter is set to `true`. The default value is blank (false).
 
+`facet.query`::
 This parameter allows you to specify an arbitrary query in the Lucene default syntax to generate a facet count.
-
++
 By default, Solr's faceting feature automatically determines the unique terms for a field and returns a count for each of those terms. Using `facet.query`, you can override this default behavior and select exactly which terms or expressions you would like to see counted. In a typical implementation of faceting, you will specify a number of `facet.query` parameters. This parameter can be particularly useful for numeric-range-based facets or prefix-based facets.
-
++
 You can set the `facet.query` parameter multiple times to indicate that multiple queries should be used as separate facet constraints.
-
++
 To use facet queries in a syntax other than the default syntax, prefix the facet query with the name of the query notation. For example, to use the hypothetical `myfunc` query parser, you could set the `facet.query` parameter like so:
-
++
 `facet.query={!myfunc}name~fred`
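As an illustrative sketch (the host, collection name, and `price` field are assumptions, not taken from this page), repeated `facet.query` parameters can be assembled into a request URL with standard URL encoding:

```python
from urllib.parse import urlencode

# Assumed local Solr host and collection; adjust for your deployment.
base = "http://localhost:8983/solr/techproducts/select"

# Each repeated facet.query becomes a separate facet constraint.
params = [
    ("q", "*:*"),
    ("facet", "true"),
    ("facet.query", "price:[0 TO 100]"),
    ("facet.query", "price:[100 TO *]"),
]
# urlencode escapes the brackets and spaces in the range queries.
url = base + "?" + urlencode(params)
```

Passing a list of tuples (rather than a dict) is what allows the same parameter name to appear more than once in the query string.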
 
-[[Faceting-Field-ValueFacetingParameters]]
 == Field-Value Faceting Parameters
 
 Several parameters can be used to trigger faceting based on the indexed terms in a field.
@@ -55,335 +49,217 @@ When using these parameters, it is important to remember that "term" is a very s
 
 If you want Solr to perform both analysis (for searching) and faceting on the full literal strings, use the `copyField` directive in your Schema to create two versions of the field: one Text and one String. Make sure both are `indexed="true"`. (For more information about the `copyField` directive, see <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>.)
 
-The table below summarizes Solr's field value faceting parameters.
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|<<Faceting-Thefacet.fieldParameter,facet.field>> |Identifies a field to be treated as a facet.
-|<<Faceting-Thefacet.prefixParameter,facet.prefix>> |Limits the terms used for faceting to those that begin with the specified prefix.
-|<<Faceting-Thefacet.containsParameter,facet.contains>> |Limits the terms used for faceting to those that contain the specified substring.
-|<<Faceting-Thefacet.contains.ignoreCaseParameter,facet.contains.ignoreCase>> |If facet.contains is used, ignore case when searching for the specified substring.
-|<<Faceting-Thefacet.sortParameter,facet.sort>> |Controls how faceted results are sorted.
-|<<Faceting-Thefacet.limitParameter,facet.limit>> |Controls how many constraints should be returned for each facet.
-|<<Faceting-Thefacet.offsetParameter,facet.offset>> |Specifies an offset into the facet results at which to begin displaying facets.
-|<<Faceting-Thefacet.mincountParameter,facet.mincount>> |Specifies the minimum counts required for a facet field to be included in the response.
-|<<Faceting-Thefacet.missingParameter,facet.missing>> |Controls whether Solr should compute a count of all matching results which have no value for the field, in addition to the term-based constraints of a facet field.
-|<<Faceting-Thefacet.methodParameter,facet.method>> |Selects the algorithm or method Solr should use when faceting a field.
-|<<Faceting-Thefacet.existsParameter,facet.exists>> |Caps facet counts by one. Available only for `facet.method=enum` as performance optimization.
-|<<Faceting-Thefacet.excludeTermsParameter,facet.excludeTerms>> |Removes specific terms from facet counts. This allows you to exclude certain terms from faceting, while maintaining the terms in the index for general queries.
-|<<Faceting-Thefacet.enum.cache.minDfParameter,facet.enum.cache.minDf>> |(Advanced) Specifies the minimum document frequency (the number of documents matching a term) for which the `filterCache` should be used when determining the constraint count for that term.
-|<<Faceting-Over-RequestParameters,facet.overrequest.count>> |(Advanced) A number of documents, beyond the effective `facet.limit` to request from each shard in a distributed search
-|<<Faceting-Over-RequestParameters,facet.overrequest.ratio>> |(Advanced) A multiplier of the effective `facet.limit` to request from each shard in a distributed search
-|<<Faceting-Thefacet.threadsParameter,facet.threads>> |(Advanced) Controls parallel execution of field faceting
-|===
-
-These parameters are described in the sections below.
-
-[[Faceting-Thefacet.fieldParameter]]
-=== The facet.field Parameter
+Unless otherwise specified, all of the parameters below can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.<parameter>`.
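The per-field override syntax can be generated mechanically; a small sketch (the `cat` field name is hypothetical):

```python
def per_field_param(field: str, param: str, value) -> tuple:
    """Build an f.<fieldname>.facet.<parameter> override pair."""
    return (f"f.{field}.facet.{param}", str(value))

# Global limit of 100 constraints, but only 5 for the cat field.
params = [("facet.limit", "100"), per_field_param("cat", "limit", 5)]
```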
 
+`facet.field`::
 The `facet.field` parameter identifies a field that should be treated as a facet. It iterates over each Term in the field and generates a facet count using that Term as the constraint. This parameter can be specified multiple times in a query to select multiple facet fields.
++
+IMPORTANT: If you do not set this parameter to at least one field in the schema, none of the other parameters described in this section will have any effect.
 
-[IMPORTANT]
-====
-If you do not set this parameter to at least one field in the schema, none of the other parameters described in this section will have any effect.
-====
-
-[[Faceting-Thefacet.prefixParameter]]
-=== The facet.prefix Parameter
-
+`facet.prefix`::
 The `facet.prefix` parameter limits the terms on which to facet to those starting with the given string prefix. This does not limit the query in any way, only the facets that would be returned in response to the query.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.prefix`.
-
-[[Faceting-Thefacet.containsParameter]]
-=== The facet.contains Parameter
-
+`facet.contains`::
 The `facet.contains` parameter limits the terms on which to facet to those containing the given substring. This does not limit the query in any way, only the facets that would be returned in response to the query.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains`.
-
-[[Faceting-Thefacet.contains.ignoreCaseParameter]]
-=== The facet.contains.ignoreCase Parameter
+`facet.contains.ignoreCase`::
 
 If `facet.contains` is used, the `facet.contains.ignoreCase` parameter causes case to be ignored when matching the given substring against candidate facet terms.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains.ignoreCase`.
-
-[[Faceting-Thefacet.sortParameter]]
-=== The facet.sort Parameter
-
+`facet.sort`::
 This parameter determines the ordering of the facet field constraints.
-
++
 There are two options for this parameter.
-
-count:: Sort the constraints by count (highest count first).
-index:: Return the constraints sorted in their index order (lexicographic by indexed term). For terms in the ASCII range, this will be alphabetically sorted.
-
++
+--
+`count`::: Sort the constraints by count (highest count first).
+`index`::: Return the constraints sorted in their index order (lexicographic by indexed term). For terms in the ASCII range, this will be alphabetically sorted.
+--
++
 The default is `count` if `facet.limit` is greater than 0, otherwise, the default is `index`.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.sort`.
-
-[[Faceting-Thefacet.limitParameter]]
-=== The facet.limit Parameter
-
+`facet.limit`::
 This parameter specifies the maximum number of constraint counts (essentially, the number of facets for a field that are returned) that should be returned for the facet fields. A negative value means that Solr will return an unlimited number of constraint counts.
++
+The default value is `100`.
 
-The default value is 100.
-
-This parameter can be specified on a per-field basis to apply a distinct limit to each field with the syntax of `f.<fieldname>.facet.limit`.
-
-[[Faceting-Thefacet.offsetParameter]]
-=== The facet.offset Parameter
+`facet.offset`::
 
 The `facet.offset` parameter indicates an offset into the list of constraints to allow paging.
++
+The default value is `0`.
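Together with `facet.limit`, this supports simple paging through facet constraints; a minimal sketch:

```python
def facet_page_params(page: int, page_size: int) -> list:
    """Parameters for page `page` (0-based) of facet constraints."""
    return [
        ("facet.limit", str(page_size)),
        ("facet.offset", str(page * page_size)),
    ]

# Third page of 10 constraints: limit 10, offset 20.
params = facet_page_params(2, 10)
```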
 
-The default value is 0.
-
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.offset`.
-
-[[Faceting-Thefacet.mincountParameter]]
-=== The facet.mincount Parameter
+`facet.mincount`::
 
 The `facet.mincount` parameter specifies the minimum counts required for a facet field to be included in the response. If a field's counts are below the minimum, the field's facet is not returned.
++
+The default value is `0`.
 
-The default value is 0.
-
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.mincount`.
-
-[[Faceting-Thefacet.missingParameter]]
-=== The facet.missing Parameter
-
-If set to true, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
-
-The default value is false.
-
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.missing`.
-
-[[Faceting-Thefacet.methodParameter]]
-=== The facet.method Parameter
-
-The facet.method parameter selects the type of algorithm or method Solr should use when faceting a field.
+`facet.missing`::
+If set to `true`, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
++
+The default value is `false`.
 
+`facet.method`::
+The `facet.method` parameter selects the type of algorithm or method Solr should use when faceting a field.
++
 The following methods are available.
-
-enum:: Enumerates all terms in a field, calculating the set intersection of documents that match the term with documents that match the query.
++
+--
+`enum`::: Enumerates all terms in a field, calculating the set intersection of documents that match the term with documents that match the query.
 +
 This method is recommended for faceting multi-valued fields that have only a few distinct values. The average number of values per document does not matter.
 +
 For example, faceting on a field with U.S. States such as `Alabama, Alaska, ... Wyoming` would lead to fifty cached filters which would be used over and over again. The `filterCache` should be large enough to hold all the cached filters.
 
-fc:: Calculates facet counts by iterating over documents that match the query and summing the terms that appear in each document.
+`fc`::: Calculates facet counts by iterating over documents that match the query and summing the terms that appear in each document.
 +
 This is currently implemented using an `UnInvertedField` cache if the field either is multi-valued or is tokenized (according to `FieldType.isTokenized()`). Each document is looked up in the cache to see what terms/values it contains, and a tally is incremented for each value.
 +
 This method is excellent for situations where the number of indexed values for the field is high, but the number of values per document is low. For multi-valued fields, a hybrid approach is used that uses term filters from the `filterCache` for terms that match many documents. The letters `fc` stand for field cache.
 
-fcs:: Per-segment field faceting for single-valued string fields. Enable with `facet.method=fcs` and control the number of threads used with the `threads` local parameter. This parameter allows faceting to be faster in the presence of rapid index changes.
-
+`fcs`::: Per-segment field faceting for single-valued string fields. Enable with `facet.method=fcs` and control the number of threads used with the `threads` local parameter. This parameter allows faceting to be faster in the presence of rapid index changes.
+--
++
 The default value is `fc` (except for fields using the `BoolField` field type and when `facet.exists=true` is requested) since it tends to use less memory and is faster when a field has many unique terms in the index.
 
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.method`.
-
-[[Faceting-Thefacet.enum.cache.minDfParameter]]
-=== The facet.enum.cache.minDf Parameter
-
+`facet.enum.cache.minDf`::
 This parameter indicates the minimum document frequency (the number of documents matching a term) for which the filterCache should be used when determining the constraint count for that term. This is only used with the `facet.method=enum` method of faceting.
++
+A value greater than zero decreases the filterCache's memory usage, but increases the time required for the query to be processed. If you are faceting on a field with a very large number of terms, and you wish to decrease memory usage, try setting this parameter to a value between `25` and `50`, and run a few tests. Then, optimize the parameter setting as necessary.
++
+The default value is `0`, causing the filterCache to be used for all terms in the field.
 
-A value greater than zero decreases the filterCache's memory usage, but increases the time required for the query to be processed. If you are faceting on a field with a very large number of terms, and you wish to decrease memory usage, try setting this parameter to a value between 25 and 50, and run a few tests. Then, optimize the parameter setting as necessary.
-
-The default value is 0, causing the filterCache to be used for all terms in the field.
-
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.enum.cache.minDf`.
-
-[[Faceting-Thefacet.existsParameter]]
-=== The facet.exists Parameter
-
-To cap facet counts by 1, specify `facet.exists=true`. It can be used with `facet.method=enum` or when it's omitted. It can be used only on non-trie fields (such as strings). It may speed up facet counting on large indices and/or high-cardinality facet values..
-
-This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.exists` or via local parameter` facet.field={!facet.method=enum facet.exists=true}size`.
+`facet.exists`::
+To cap facet counts by 1, specify `facet.exists=true`. This parameter can be used with `facet.method=enum` or when it's omitted. It can be used only on non-trie fields (such as strings). It may speed up facet counting on large indices and/or high-cardinality facet values.
 
-[[Faceting-Thefacet.excludeTermsParameter]]
-=== The facet.excludeTerms Parameter
+`facet.excludeTerms`::
 
 If you want to remove terms from facet counts but keep them in the index, the `facet.excludeTerms` parameter allows you to do that.
 
-[[Faceting-Over-RequestParameters]]
-=== Over-Request Parameters
-
-In some situations, the accuracy in selecting the "top" constraints returned for a facet in a distributed Solr query can be improved by "Over Requesting" the number of desired constraints (ie: `facet.limit`) from each of the individual Shards. In these situations, each shard is by default asked for the top "`10 + (1.5 * facet.limit)`" constraints.
-
-In some situations, depending on how your docs are partitioned across your shards, and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does. This can be achieved by setting the `facet.overrequest.count` (defaults to 10) and `facet.overrequest.ratio` (defaults to 1.5) parameters.
-
-[[Faceting-Thefacet.threadsParameter]]
-=== The facet.threads Parameter
+`facet.overrequest.count` and `facet.overrequest.ratio`::
+In some situations, the accuracy in selecting the "top" constraints returned for a facet in a distributed Solr query can be improved by "over requesting" the number of desired constraints (i.e., `facet.limit`) from each of the individual shards. In these situations, each shard is by default asked for the top `10 + (1.5 * facet.limit)` constraints.
++
+In some situations, depending on how your docs are partitioned across your shards and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does. This can be achieved by setting the `facet.overrequest.count` (defaults to `10`) and `facet.overrequest.ratio` (defaults to `1.5`) parameters.
 
-This param will cause loading the underlying fields used in faceting to be executed in parallel with the number of threads specified. Specify as `facet.threads=N` where `N` is the maximum number of threads used. Omitting this parameter or specifying the thread count as 0 will not spawn any threads, and only the main request thread will be used. Specifying a negative number of threads will create up to Integer.MAX_VALUE threads.
+`facet.threads`::
+This parameter causes the underlying fields used in faceting to be loaded in parallel, using the number of threads specified. Specify it as `facet.threads=N`, where `N` is the maximum number of threads to use.
++
+Omitting this parameter or specifying the thread count as `0` will not spawn any threads, and only the main request thread will be used. Specifying a negative number of threads will create up to `Integer.MAX_VALUE` threads.
 
-[[Faceting-RangeFaceting]]
 == Range Faceting
 
 You can use Range Faceting on any date field or any numeric field that supports range queries. This is particularly useful for stitching together a series of range queries (as facet by query) for things like prices.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|<<Faceting-Thefacet.rangeParameter,facet.range>> |Specifies the field to facet by range.
-|<<Faceting-Thefacet.range.startParameter,facet.range.start>> |Specifies the start of the facet range.
-|<<Faceting-Thefacet.range.endParameter,facet.range.end>> |Specifies the end of the facet range.
-|<<Faceting-Thefacet.range.gapParameter,facet.range.gap>> |Specifies the span of the range as a value to be added to the lower bound.
-|<<Faceting-Thefacet.range.hardendParameter,facet.range.hardend>> |A boolean parameter that specifies how Solr handles a range gap that cannot be evenly divided between the range start and end values. If true, the last range constraint will have the `facet.range.end` value an upper bound. If false, the last range will have the smallest possible upper bound greater then `facet.range.end` such that the range is the exact width of the specified range gap. The default value for this parameter is false.
-|<<Faceting-Thefacet.range.includeParameter,facet.range.include>> |Specifies inclusion and exclusion preferences for the upper and lower bounds of the range. See the `facet.range.include` topic for more detailed information.
-|<<Faceting-Thefacet.range.otherParameter,facet.range.other>> |Specifies counts for Solr to compute in addition to the counts for each facet range constraint.
-|<<Faceting-Thefacet.range.methodParameter,facet.range.method>> |Specifies the algorithm or method to use for calculating facets.
-|===
-
-[[Faceting-Thefacet.rangeParameter]]
-=== The facet.range Parameter
-
+`facet.range`::
 The `facet.range` parameter defines the field for which Solr should create range facets. For example:
-
++
 `facet.range=price&facet.range=age`
-
++
 `facet.range=lastModified_dt`
 
-[[Faceting-Thefacet.range.startParameter]]
-=== The facet.range.start Parameter
-
+`facet.range.start`::
 The `facet.range.start` parameter specifies the lower bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.start`. For example:
-
++
 `f.price.facet.range.start=0.0&f.age.facet.range.start=10`
-
++
 `f.lastModified_dt.facet.range.start=NOW/DAY-30DAYS`
 
-[[Faceting-Thefacet.range.endParameter]]
-=== The facet.range.end Parameter
-
-The facet.range.end specifies the upper bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.end`. For example:
-
+`facet.range.end`::
+The `facet.range.end` parameter specifies the upper bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.end`. For example:
++
 `f.price.facet.range.end=1000.0&f.age.facet.range.end=99`
-
++
 `f.lastModified_dt.facet.range.end=NOW/DAY+30DAYS`
 
-[[Faceting-Thefacet.range.gapParameter]]
-=== The facet.range.gap Parameter
-
+`facet.range.gap`::
 The span of each range expressed as a value to be added to the lower bound. For date fields, this should be expressed using the {solr-javadocs}/solr-core/org/apache/solr/util/DateMathParser.html[`DateMathParser` syntax] (such as `facet.range.gap=%2B1DAY`, the URL-encoded form of `+1DAY`). You can specify this parameter on a per-field basis with the syntax of `f.<fieldname>.facet.range.gap`. For example:
-
++
 `f.price.facet.range.gap=100&f.age.facet.range.gap=10`
-
++
 `f.lastModified_dt.facet.range.gap=+1DAY`
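Putting start, end, and gap together, a range facet request over a hypothetical `price` field might be built like this (a sketch; the field name and bounds are illustrative):

```python
from urllib.parse import urlencode

# Hypothetical price field: $0-$1000 in $100 buckets.
params = [
    ("facet", "true"),
    ("facet.range", "price"),
    ("f.price.facet.range.start", "0.0"),
    ("f.price.facet.range.end", "1000.0"),
    ("f.price.facet.range.gap", "100"),
]
query = urlencode(params)

# Date gaps need their '+' URL-encoded; urlencode does this automatically,
# turning +1DAY into %2B1DAY.
date_gap = urlencode([("facet.range.gap", "+1DAY")])
```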
 
-[[Faceting-Thefacet.range.hardendParameter]]
-=== The facet.range.hardend Parameter
-
+`facet.range.hardend`::
 The `facet.range.hardend` parameter is a Boolean parameter that specifies how Solr should handle cases where the `facet.range.gap` does not divide evenly between `facet.range.start` and `facet.range.end`.
-
-If *true*, the last range constraint will have the `facet.range.end` value as an upper bound. If *false*, the last range will have the smallest possible upper bound greater then `facet.range.end` such that the range is the exact width of the specified range gap. The default value for this parameter is false.
-
++
+If `true`, the last range constraint will have the `facet.range.end` value as an upper bound. If `false`, the last range will have the smallest possible upper bound greater than `facet.range.end` such that the range is the exact width of the specified range gap. The default value for this parameter is `false`.
++
 This parameter can be specified on a per field basis with the syntax `f.<fieldname>.facet.range.hardend`.
 
-[[Faceting-Thefacet.range.includeParameter]]
-=== The facet.range.include Parameter
-
+`facet.range.include`::
 By default, the ranges used to compute range faceting between `facet.range.start` and `facet.range.end` are inclusive of their lower bounds and exclusive of the upper bounds. The "before" range defined with the `facet.range.other` parameter is exclusive and the "after" range is inclusive. This default, equivalent to "lower" below, will not result in double counting at the boundaries. You can use the `facet.range.include` parameter to modify this behavior using the following options:
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Option |Description
-|lower |All gap-based ranges include their lower bound.
-|upper |All gap-based ranges include their upper bound.
-|edge |The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one) even if the corresponding upper/lower option is not specified.
-|outer |The "before" and "after" ranges will be inclusive of their bounds, even if the first or last ranges already include those boundaries.
-|all |Includes all options: lower, upper, edge, outer.
-|===
-
++
+--
+* `lower`: All gap-based ranges include their lower bound.
+* `upper`: All gap-based ranges include their upper bound.
+* `edge`: The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one) even if the corresponding upper/lower option is not specified.
+* `outer`: The "before" and "after" ranges will be inclusive of their bounds, even if the first or last ranges already include those boundaries.
+* `all`: Includes all options: `lower`, `upper`, `edge`, and `outer`.
+--
++
 You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.include`, and you can specify it multiple times to indicate multiple choices.
++
+NOTE: To ensure you avoid double-counting, do not choose both `lower` and `upper`, do not choose `outer`, and do not choose `all`.
 
-[NOTE]
-====
-To ensure you avoid double-counting, do not choose both `lower` and `upper`, do not choose `outer`, and do not choose `all`.
-====
-
-[[Faceting-Thefacet.range.otherParameter]]
-=== The facet.range.other Parameter
-
+`facet.range.other`::
 The `facet.range.other` parameter specifies that in addition to the counts for each range constraint between `facet.range.start` and `facet.range.end`, counts should also be computed for these options:
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Option |Description
-|before |All records with field values lower then lower bound of the first range.
-|after |All records with field values greater then the upper bound of the last range.
-|between |All records with field values between the start and end bounds of all ranges.
-|none |Do not compute any counts.
-|all |Compute counts for before, between, and after.
-|===
-
++
+--
+* `before`: All records with field values lower than the lower bound of the first range.
+* `after`: All records with field values greater than the upper bound of the last range.
+* `between`: All records with field values between the start and end bounds of all ranges.
+* `none`: Do not compute any counts.
+* `all`: Compute counts for `before`, `between`, and `after`.
+--
++
 This parameter can be specified on a per field basis with the syntax of `f.<fieldname>.facet.range.other`. In addition to the `all` option, this parameter can be specified multiple times to indicate multiple choices, but `none` will override all other options.
 
-[[Faceting-Thefacet.range.methodParameter]]
-=== The facet.range.method Parameter
-
+`facet.range.method`::
 The `facet.range.method` parameter selects the type of algorithm or method Solr should use for range faceting. Both methods produce the same results, but performance may vary.
++
+--
`filter`::: This method generates the ranges based on the other `facet.range` parameters, and for each of them executes a filter that is later intersected with the main query result set to get the count. It makes use of the filterCache, so it benefits from a cache large enough to contain all of the ranges.
++
`dv`::: This method iterates over the documents that match the main query, and for each of them finds the correct range for the value. This method will make use of <<docvalues.adoc#docvalues,docValues>> (if enabled for the field) or fieldCache. The `dv` method is not supported for the `DateRangeField` field type or when using <<result-grouping.adoc#result-grouping,group.facets>>.
+--
++
+The default value for this parameter is `filter`.
 
-filter:: This method generates the ranges based on other facet.range parameters, and for each of them executes a filter that later intersects with the main query resultset to get the count. It will make use of the filterCache, so it will benefit of a cache large enough to contain all ranges.
-
-dv:: This method iterates the documents that match the main query, and for each of them finds the correct range for the value. This method will make use of <<docvalues.adoc#docvalues,docValues>> (if enabled for the field) or fieldCache. The `dv` method is not supported for field type DateRangeField or when using <<result-grouping.adoc#result-grouping,group.facets>>.
-
-Default value for this parameter is "filter".
-
-[[Faceting-Thefacet.mincountParameterinRangeFaceting]]
-=== The facet.mincount Parameter in Range Faceting
 
-The `facet.mincount` parameter, the same one as used in field faceting is also applied to range faceting. When used, no ranges with a count below the minimum will be included in the response.
 
 .Date Ranges & Time Zones
 [NOTE]
 ====
-
-Range faceting on date fields is a common situation where the <<working-with-dates.adoc#WorkingwithDates-TZ,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
+Range faceting on date fields is a common situation where the <<working-with-dates.adoc#tz,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
 
 For more information, see the examples in the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
-
 ====
 
+=== facet.mincount in Range Faceting
+
+The `facet.mincount` parameter, the same one used in field faceting, is also applied to range faceting. When used, no ranges with a count below the minimum will be included in the response.
 
-[[Faceting-Pivot_DecisionTree_Faceting]]
 == Pivot (Decision Tree) Faceting
 
 Pivoting is a summarization tool that lets you automatically sort, count, total or average data stored in a table. The results are typically displayed in a second table showing the summarized data. Pivot faceting lets you create a summary table of the results from faceting documents by multiple fields.
 
 Another way to look at it is that the query produces a Decision Tree, in that Solr tells you "for facet A, the constraints/counts are X/N, Y/M, etc. If you were to constrain A by X, then the constraint counts for B would be S/P, T/Q, etc.". In other words, it tells you in advance what the "next" set of facet results would be for a field if you apply a constraint from the current facet results.
 
-[[Faceting-facet.pivot]]
-=== facet.pivot
-
+`facet.pivot`::
 The `facet.pivot` parameter defines the fields to use for the pivot. Multiple `facet.pivot` values will create multiple "facet_pivot" sections in the response. Separate each list of fields with a comma.
 
-[[Faceting-facet.pivot.mincount]]
-=== facet.pivot.mincount
-
+`facet.pivot.mincount`::
 The `facet.pivot.mincount` parameter defines the minimum number of documents that need to match in order for the facet to be included in results. The default is 1.
-
++
 Using the "`bin/solr -e techproducts`" example, a query URL like this one will return the data below, with the pivot faceting results found in the section "facet_pivot":
 
 [source,text]
 ----
 http://localhost:8983/solr/techproducts/select?q=*:*&facet.pivot=cat,popularity,inStock
-   &facet.pivot=popularity,cat&facet=true&facet.field=cat&facet.limit=5
-   &rows=0&wt=json&indent=true&facet.pivot.mincount=2
+   &facet.pivot=popularity,cat&facet=true&facet.field=cat&facet.limit=5&rows=0&facet.pivot.mincount=2
 ----
-
++
 [source,json]
 ----
 {  "facet_counts":{
@@ -413,10 +289,9 @@ http://localhost:8983/solr/techproducts/select?q=*:*&facet.pivot=cat,popularity,
 }]}}}
 ----
 
-[[Faceting-CombiningStatsComponentWithPivots]]
 === Combining Stats Component With Pivots
 
-In addition to some of the <<Faceting-LocalParametersforFaceting,general local parameters>> supported by other types of faceting, a `stats` local parameters can be used with `facet.pivot` to refer to <<the-stats-component.adoc#the-stats-component,`stats.field`>> instances (by tag) that you would like to have computed for each Pivot Constraint.
+In addition to some of the <<Local Parameters for Faceting,general local parameters>> supported by other types of faceting, a `stats` local parameter can be used with `facet.pivot` to refer to <<the-stats-component.adoc#the-stats-component,`stats.field`>> instances (by tag) that you would like to have computed for each Pivot Constraint.
 
 In the example below, two different (overlapping) sets of statistics are computed for each of the facet.pivot result hierarchies:
 
@@ -503,7 +378,6 @@ Results:
       "..."}]}}}}]}]}}
 ----
 
-[[Faceting-CombiningFacetQueriesAndFacetRangesWithPivotFacets]]
 === Combining Facet Queries And Facet Ranges With Pivot Facets
 
 A `query` local parameter can be used with `facet.pivot` to refer to `facet.query` instances (by tag) that should be computed for each pivot constraint. Similarly, a `range` local parameter can be used with `facet.pivot` to refer to `facet.range` instances.
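As a sketch of how these local parameters pair up, the request below tags a `facet.range` and references it from `facet.pivot` (the field names `price`, `cat`, and `inStock` follow the techproducts example; assembling the query string in Python is purely illustrative, not a required client API):

```python
from urllib.parse import urlencode, parse_qsl

# Tag the range facet as "r1", then ask the pivot to compute that
# range under each cat/inStock constraint via {!range=r1}.
params = [
    ("q", "*:*"),
    ("facet", "true"),
    ("facet.range", "{!tag=r1}price"),
    ("facet.range.start", "0"),
    ("facet.range.end", "1000"),
    ("facet.range.gap", "100"),
    ("facet.pivot", "{!range=r1}cat,inStock"),
    ("rows", "0"),
]
query_string = urlencode(params)  # ready to append after /select?
```

The same pattern works with `facet.query` instances: tag them and reference the tag with the `query` local parameter on `facet.pivot`.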
@@ -630,10 +504,9 @@ facet.pivot={!range=r1}cat,inStock
                   "..."]}]}}}
 ----
 
-[[Faceting-AdditionalPivotParameters]]
 === Additional Pivot Parameters
 
-Although `facet.pivot.mincount` deviates in name from the `facet.mincount` parameter used by field faceting, many other Field faceting parameters described above can also be used with pivot faceting:
+Although `facet.pivot.mincount` deviates in name from the `facet.mincount` parameter used by field faceting, many of the faceting parameters described above can also be used with pivot faceting:
 
 * `facet.limit`
 * `facet.offset`
@@ -641,7 +514,6 @@ Although `facet.pivot.mincount` deviates in name from the `facet.mincount` param
 * `facet.overrequest.count`
 * `facet.overrequest.ratio`
 
-[[Faceting-IntervalFaceting]]
 == Interval Faceting
 
 Another supported form of faceting is interval faceting. This sounds similar to range faceting, but the functionality is really closer to doing facet queries with range queries. Interval faceting allows you to set variable intervals and count the number of documents that have values within those intervals in the specified field.
@@ -652,23 +524,21 @@ If you are concerned about the performance of your searches you should test with
 
 This method will use <<docvalues.adoc#docvalues,docValues>> if they are enabled for the field, and fieldCache otherwise.
 
-[[Faceting-Thefacet.intervalparameter]]
-=== The facet.interval parameter
+Use these parameters for interval faceting:
 
-This parameter Indicates the field where interval faceting must be applied. It can be used multiple times in the same request to indicate multiple fields.
+`facet.interval`::
 
+This parameter indicates the field where interval faceting must be applied. It can be used multiple times in the same request to indicate multiple fields.
++
 `facet.interval=price&facet.interval=size`
 
-[[Faceting-Thefacet.interval.setparameter]]
-=== The facet.interval.set parameter
-
+`facet.interval.set`::
 This parameter is used to set the intervals for the field; it can be specified multiple times to indicate multiple intervals. This parameter is global, which means that it will be used for all fields indicated with `facet.interval` unless there is an override for a specific field. To override this parameter on a specific field you can use: `f.<fieldname>.facet.interval.set`, for example:
-
++
 [source,text]
 f.price.facet.interval.set=[0,10]&f.price.facet.interval.set=(10,100]
 
 
-[[Faceting-IntervalSyntax]]
 === Interval Syntax
 
 Intervals must begin with either '(' or '[', be followed by the start value, then a comma (','), the end value, and finally a closing ')' or ']'.
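The bracket grammar just described can be mimicked with a small validator. This regex is only an illustration of the open/close bracket rules, not Solr's actual parser (which additionally handles `*` endpoints and character escaping):

```python
import re

# '[' / ']' mean inclusive endpoints, '(' / ')' exclusive endpoints;
# the two values are separated by a single comma.
INTERVAL = re.compile(r"^[\[(]([^,]*),([^,]*)[\])]$")

def looks_like_interval(s: str) -> bool:
    """Rough structural check of an interval string like [0,10] or (10,100]."""
    return INTERVAL.match(s) is not None
```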
@@ -699,12 +569,10 @@ Interval faceting supports output key replacement described below. Output keys c
 &facet=true
 ----
 
-[[Faceting-LocalParametersforFaceting]]
 == Local Parameters for Faceting
 
 The <<local-parameters-in-queries.adoc#local-parameters-in-queries,LocalParams syntax>> allows overriding global settings. It can also provide a method of adding metadata to other parameter values, much like XML attributes.
 
-[[Faceting-TaggingandExcludingFilters]]
 === Tagging and Excluding Filters
 
 You can tag specific filters and exclude those filters when faceting. This is useful when doing multi-select faceting.
@@ -732,7 +600,6 @@ To return counts for doctype values that are currently not selected, tag filters
 
 Filter exclusion is supported for all types of facets. Both the `tag` and `ex` local parameters may specify multiple values by separating them with commas.
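For instance, a multi-select request might tag the `doctype` filter and exclude it when computing the `doctype` facet. The Python query-string assembly below is an illustrative sketch (the field names and query values are placeholders), not a required client API:

```python
from urllib.parse import urlencode, parse_qsl

# The fq is tagged "dt"; facet.field excludes that tag, so the doctype
# counts ignore the doctype filter while keeping all other filters.
params = [
    ("q", "mainquery"),
    ("fq", "status:public"),
    ("fq", "{!tag=dt}doctype:pdf"),
    ("facet", "true"),
    ("facet.field", "{!ex=dt}doctype"),
]
query_string = urlencode(params)
```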
 
-[[Faceting-ChangingtheOutputKey]]
 === Changing the Output Key
 
 To change the output key for a faceting command, specify a new name with the `key` local parameter. For example:
@@ -741,14 +608,12 @@ To change the output key for a faceting command, specify a new name with the `ke
 
 The parameter setting above causes the field facet results for the "doctype" field to be returned using the key "mylabel" rather than "doctype" in the response. This can be helpful when faceting on the same field multiple times with different exclusions.
 
-[[Faceting-Limitingfacetwithcertainterms]]
 === Limiting Facet with Certain Terms
 
 To limit a field facet to certain terms, specify them comma-separated with the `terms` local parameter. Commas and quotes in terms can be escaped with a backslash, as in `\,`. In this case the facet is calculated in a way similar to `facet.method=enum`, but ignores `facet.enum.cache.minDf`. For example:
 
 `facet.field={!terms='alfa,betta,with\,with\',with space'}symbol`
 
-[[Faceting-RelatedTopics]]
 == Related Topics
 
-* <<spatial-search.adoc#spatial-search,Heatmap Faceting (Spatial)>>
+See also <<spatial-search.adoc#spatial-search,Heatmap Faceting (Spatial)>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
index 89b8e90..c3c1b5d 100644
--- a/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
+++ b/solr/solr-ref-guide/src/field-type-definitions-and-properties.adoc
@@ -27,7 +27,6 @@ A field type definition can include four types of information:
 * If the field type is `TextField`, a description of the field analysis for the field type.
 * Field type properties - depending on the implementation class, some properties may be mandatory.
 
-[[FieldTypeDefinitionsandProperties-FieldTypeDefinitionsinschema.xml]]
 == Field Type Definitions in schema.xml
 
 Field types are defined in `schema.xml`. Each field type is defined between `fieldType` elements. They can optionally be grouped within a `types` element. Here is an example of a field type definition for a type called `text_general`:
@@ -91,11 +90,11 @@ For multivalued fields, specifies a distance between multiple values, which prev
 `autoGeneratePhraseQueries`:: For text fields. If `true`, Solr automatically generates phrase queries for adjacent terms. If `false`, terms must be enclosed in double-quotes to be treated as phrases.
 
 `enableGraphQueries`::
-For text fields, applicable when querying with <<the-standard-query-parser.adoc#TheStandardQueryParser-StandardQueryParserParameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g., <<filter-descriptions.adoc#FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter>> and <<filter-descriptions.adoc#FilterDescriptions-WordDelimiterGraphFilter,Word Delimiter Graph Filter>>.
+For text fields, applicable when querying with <<the-standard-query-parser.adoc#standard-query-parser-parameters,`sow=false`>>. Use `true` (the default) for field types with query analyzers including graph-aware filters, e.g., <<filter-descriptions.adoc#synonym-graph-filter,Synonym Graph Filter>> and <<filter-descriptions.adoc#word-delimiter-graph-filter,Word Delimiter Graph Filter>>.
 +
-Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#FilterDescriptions-ShingleFilter,Shingle Filter>>.
+Use `false` for field types with query analyzers including filters that can match docs when some tokens are missing, e.g., <<filter-descriptions.adoc#shingle-filter,Shingle Filter>>.
 
-[[FieldTypeDefinitionsandProperties-docValuesFormat]]
+[[docvaluesformat]]
 `docValuesFormat`::
 Defines a custom `DocValuesFormat` to use for fields of this type. This requires that a schema-aware codec, such as the `SchemaCodecFactory`, has been configured in solrconfig.xml.
 
@@ -131,15 +130,14 @@ The default values for each property depend on the underlying `FieldType` class,
 |omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
 |termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
 |required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
-|useDocValuesAsStored |If the field has <<docvalues.adoc#docvalues,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#CommonQueryParameters-Thefl_FieldList_Parameter,fl parameter>>. |true or false |true
+|useDocValuesAsStored |If the field has <<docvalues.adoc#docvalues,docValues>> enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
 |large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
 |===
 
 // TODO: SOLR-10655 END
 
-[[FieldTypeDefinitionsandProperties-FieldTypeSimilarity]]
 == Field Type Similarity
 
 A field type may optionally specify a `<similarity/>` that will be used when scoring documents that refer to fields with this type, as long as the "global" similarity for the collection allows it.
 
-By default, any field type which does not define a similarity, uses `BM25Similarity`. For more details, and examples of configuring both global & per-type Similarities, please see <<other-schema-elements.adoc#OtherSchemaElements-Similarity,Other Schema Elements>>.
+By default, any field type which does not define a similarity uses `BM25Similarity`. For more details, and examples of configuring both global & per-type Similarities, please see <<other-schema-elements.adoc#similarity,Other Schema Elements>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
index 5c82970..4ba0e45 100644
--- a/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
+++ b/solr/solr-ref-guide/src/field-types-included-with-solr.adoc
@@ -27,17 +27,17 @@ The following table lists the field types that are available in Solr. The `org.a
 |Class |Description
 |BinaryField |Binary data.
 |BoolField |Contains either true or false. Values of "1", "t", or "T" in the first character are interpreted as true. Any other values in the first character are interpreted as false.
-|CollationField |Supports Unicode collation for sorting and range queries. ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+|CollationField |Supports Unicode collation for sorting and range queries. ICUCollationField is a better choice if you can use ICU4J. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 |CurrencyField |Deprecated in favor of CurrencyFieldType.
 |CurrencyFieldType |Supports currencies and exchange rates. See the section <<working-with-currencies-and-exchange-rates.adoc#working-with-currencies-and-exchange-rates,Working with Currencies and Exchange Rates>>.
 |DateRangeField |Supports indexing date ranges, to include point in time date instances as well (single-millisecond durations). See the section <<working-with-dates.adoc#working-with-dates,Working with Dates>> for more detail on using this field type. Consider using this field type even if it's just for date instances, particularly when the queries typically fall on UTC year/month/day/hour, etc., boundaries.
 |ExternalFileField |Pulls values from a file on disk. See the section <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>>.
 |EnumField |Allows defining an enumerated set of values which may not be easily sorted by either alphabetic or numeric order (such as a list of severities, for example). This field type takes a configuration file, which lists the proper order of the field values. See the section <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>> for more information.
-|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+|ICUCollationField |Supports Unicode collation for sorting and range queries. See the section <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 |LatLonPointSpatialField |<<spatial-search.adoc#spatial-search,Spatial Search>>: a latitude/longitude coordinate pair; possibly multi-valued for multiple points. Usually it's specified as "lat,lon" order with a comma.
 |LatLonType |(deprecated) <<spatial-search.adoc#spatial-search,Spatial Search>>: a single-valued latitude/longitude coordinate pair. Usually it's specified as "lat,lon" order with a comma.
 |PointType |<<spatial-search.adoc#spatial-search,Spatial Search>>: A single-valued n-dimensional point. It's both for sorting spatial data that is _not_ lat-lon, and for some more rare use-cases. (NOTE: this is _not_ related to the "Point" based numeric fields)
-|PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing. Configuration and usage of PreAnalyzedField is documented on the <<working-with-external-files-and-processes.adoc#WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType,Working with External Files and Processes>> page.
+|PreAnalyzedField |Provides a way to send to Solr serialized token streams, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing. Configuration and usage of PreAnalyzedField is documented on the <<working-with-external-files-and-processes.adoc#the-preanalyzedfield-type,Working with External Files and Processes>> page.
 |RandomSortField |Does not contain a value. Queries that sort on this field type will return results in random order. Use a dynamic field to use this feature.
 |SpatialRecursivePrefixTreeFieldType |(RPT for short) <<spatial-search.adoc#spatial-search,Spatial Search>>: Accepts latitude comma longitude strings or other shapes in WKT format.
 |StrField |String (UTF-8 encoded string or Unicode). Strings are intended for small fields and are _not_ tokenized or analyzed in any way. They have a hard limit of slightly less than 32K.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/filter-descriptions.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/filter-descriptions.adoc b/solr/solr-ref-guide/src/filter-descriptions.adoc
index f428678..4ced59e 100644
--- a/solr/solr-ref-guide/src/filter-descriptions.adoc
+++ b/solr/solr-ref-guide/src/filter-descriptions.adoc
@@ -50,7 +50,6 @@ The following sections describe the filter factories that are included in this r
 
 For user tips about Solr's filters, see http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
 
-[[FilterDescriptions-ASCIIFoldingFilter]]
 == ASCII Folding Filter
 
 This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the Basic Latin Unicode block (the first 127 ASCII characters) to their ASCII equivalents, if one exists. This filter converts characters from the following Unicode blocks:
@@ -92,10 +91,9 @@ This filter converts alphabetic, numeric, and symbolic Unicode characters which
 
 *Out:* "a" (ASCII character 97)
 
-[[FilterDescriptions-Beider-MorseFilter]]
 == Beider-Morse Filter
 
-Implements the Beider-Morse Phonetic Matching (BMPM) algorithm, which allows identification of similar names, even if they are spelled differently or in different languages. More information about how this works is available in the section on <<phonetic-matching.adoc#PhoneticMatching-Beider-MorsePhoneticMatching_BMPM_,Phonetic Matching>>.
+Implements the Beider-Morse Phonetic Matching (BMPM) algorithm, which allows identification of similar names, even if they are spelled differently or in different languages. More information about how this works is available in the section on <<phonetic-matching.adoc#beider-morse-phonetic-matching-bmpm,Phonetic Matching>>.
 
 [IMPORTANT]
 ====
@@ -125,10 +123,9 @@ BeiderMorseFilter changed its behavior in Solr 5.0 due to an update to version 3
 </analyzer>
 ----
 
-[[FilterDescriptions-ClassicFilter]]
 == Classic Filter
 
-This filter takes the output of the <<tokenizers.adoc#Tokenizers-ClassicTokenizer,Classic Tokenizer>> and strips periods from acronyms and "'s" from possessives.
+This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic Tokenizer>> and strips periods from acronyms and "'s" from possessives.
 
 *Factory class:* `solr.ClassicFilterFactory`
 
@@ -150,7 +147,6 @@ This filter takes the output of the <<tokenizers.adoc#Tokenizers-ClassicTokenize
 
 *Out:* "IBM", "cat", "can't"
 
-[[FilterDescriptions-CommonGramsFilter]]
 == Common Grams Filter
 
 This filter creates word shingles by combining common tokens such as stop words with regular tokens. This is useful for creating phrase queries containing common words, such as "the cat." Solr normally ignores stop words in queried phrases, so searching for "the cat" would return all matches for the word "cat."
@@ -181,12 +177,10 @@ This filter creates word shingles by combining common tokens such as stop words
 
 *Out:* "the_cat"
 
-[[FilterDescriptions-CollationKeyFilter]]
 == Collation Key Filter
 
-Collation allows sorting of text in a language-sensitive way. It is usually used for sorting, but can also be used with advanced searches. We've covered this in much more detail in the section on <<language-analysis.adoc#LanguageAnalysis-UnicodeCollation,Unicode Collation>>.
+Collation allows sorting of text in a language-sensitive way. It is usually used for sorting, but can also be used with advanced searches. We've covered this in much more detail in the section on <<language-analysis.adoc#unicode-collation,Unicode Collation>>.
 
-[[FilterDescriptions-Daitch-MokotoffSoundexFilter]]
 == Daitch-Mokotoff Soundex Filter
 
 Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of similar names, even if they are spelled differently. More information about how this works is available in the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
@@ -207,7 +201,6 @@ Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of
 </analyzer>
 ----
 
-[[FilterDescriptions-DoubleMetaphoneFilter]]
 == Double Metaphone Filter
 
 This filter creates tokens using the http://commons.apache.org/codec/apidocs/org/apache/commons/codec/language/DoubleMetaphone.html[`DoubleMetaphone`] encoding algorithm from commons-codec. For more information, see the <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>> section.
@@ -260,7 +253,6 @@ Discard original token (`inject="false"`).
 
 Note that "Kuczewski" has two encodings, which are added at the same position.
 
-[[FilterDescriptions-EdgeN-GramFilter]]
 == Edge N-Gram Filter
 
 This filter generates edge n-gram tokens of sizes within the given range.
@@ -327,7 +319,6 @@ A range of 4 to 6.
 
 *Out:* "four", "scor", "score", "twen", "twent", "twenty"
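The 4-to-6 output above can be reproduced in a few lines. This is a behavioral sketch of edge n-gramming (prefixes only), assuming the input tokens were "four", "score", "and", "twenty"; note that tokens shorter than the minimum gram size are dropped entirely:

```python
def edge_ngrams(tokens, min_size, max_size):
    out = []
    for tok in tokens:
        # emit prefixes from min_size up to max_size, capped at token length
        for n in range(min_size, min(max_size, len(tok)) + 1):
            out.append(tok[:n])
    return out

grams = edge_ngrams(["four", "score", "and", "twenty"], 4, 6)
# grams == ["four", "scor", "score", "twen", "twent", "twenty"]
```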
 
-[[FilterDescriptions-EnglishMinimalStemFilter]]
 == English Minimal Stem Filter
 
 This filter stems plural English words to their singular form.
@@ -352,7 +343,6 @@ This filter stems plural English words to their singular form.
 
 *Out:* "dog", "cat"
 
-[[FilterDescriptions-EnglishPossessiveFilter]]
 == English Possessive Filter
 
 This filter removes singular possessives (trailing *'s*) from words. Note that plural possessives, e.g. the *s'* in "divers' snorkels", are not removed by this filter.
@@ -377,7 +367,6 @@ This filter removes singular possessives (trailing *'s*) from words. Note that p
 
 *Out:* "Man", "dog", "bites", "dogs'", "man"
 
-[[FilterDescriptions-FingerprintFilter]]
 == Fingerprint Filter
 
 This filter outputs a single token which is a concatenation of the sorted and de-duplicated set of input tokens. This can be useful for clustering/linking use cases.
@@ -406,7 +395,6 @@ This filter outputs a single token which is a concatenation of the sorted and de
 
 *Out:* "brown_dog_fox_jumped_lazy_over_quick_the"
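The single-token output above is essentially de-dup + sort + join. The sketch below assumes an input like the classic "the quick brown fox jumped over the lazy dog", already lowercased upstream, and `_` configured as the separator (the filter's separator character is configurable; this is the described behavior, not Solr's implementation):

```python
def fingerprint(tokens, separator="_"):
    # de-duplicate, sort, and concatenate into a single token
    return separator.join(sorted(set(tokens)))

tokens = ["the", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog"]
print(fingerprint(tokens))  # brown_dog_fox_jumped_lazy_over_quick_the
```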
 
-[[FilterDescriptions-FlattenGraphFilter]]
 == Flatten Graph Filter
 
 This filter must be included on index-time analyzer specifications that include at least one graph-aware filter, including Synonym Graph Filter and Word Delimiter Graph Filter.
@@ -417,7 +405,6 @@ This filter must be included on index-time analyzer specifications that include
 
 See the examples below for <<Synonym Graph Filter>> and <<Word Delimiter Graph Filter>>.
 
-[[FilterDescriptions-HunspellStemFilter]]
 == Hunspell Stem Filter
 
 The `Hunspell Stem Filter` provides support for several languages. You must provide the dictionary (`.dic`) and rules (`.aff`) files for each language you wish to use with the Hunspell Stem Filter. You can download those language files http://wiki.services.openoffice.org/wiki/Dictionaries[here].
@@ -456,7 +443,6 @@ Be aware that your results will vary widely based on the quality of the provided
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-HyphenatedWordsFilter]]
 == Hyphenated Words Filter
 
 This filter reconstructs hyphenated words that have been tokenized as two tokens because of a line break or other intervening whitespace in the field text. If a token ends with a hyphen, it is joined with the following token and the hyphen is discarded.
@@ -483,10 +469,9 @@ Note that for this filter to work properly, the upstream tokenizer must not remo
 
 *Out:* "A", "hyphenated", "word"
 
-[[FilterDescriptions-ICUFoldingFilter]]
 == ICU Folding Filter
 
-This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode Technical Report 30] in addition to the `NFKC_Casefold` normalization form as described in <<FilterDescriptions-ICUNormalizer2Filter,ICU Normalizer 2 Filter>>. This filter is a better substitute for the combined behavior of the <<FilterDescriptions-ASCIIFoldingFilter,ASCII Folding Filter>>, <<FilterDescriptions-LowerCaseFilter,Lower Case Filter>>, and <<FilterDescriptions-ICUNormalizer2Filter,ICU Normalizer 2 Filter>>.
+This filter is a custom Unicode normalization form that applies the foldings specified in http://www.unicode.org/reports/tr30/tr30-4.html[Unicode Technical Report 30] in addition to the `NFKC_Casefold` normalization form as described in <<ICU Normalizer 2 Filter>>. This filter is a better substitute for the combined behavior of the <<ASCII Folding Filter>>, <<Lower Case Filter>>, and <<ICU Normalizer 2 Filter>>.
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`. For more information about adding jars, see the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in Solrconfig>>.
 
@@ -506,7 +491,6 @@ To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructio
 
 For detailed information on this normalization form, see http://www.unicode.org/reports/tr30/tr30-4.html.
 
-[[FilterDescriptions-ICUNormalizer2Filter]]
 == ICU Normalizer 2 Filter
 
 This filter factory normalizes text according to one of five Unicode Normalization Forms as described in http://unicode.org/reports/tr15/[Unicode Standard Annex #15]:
@@ -539,7 +523,6 @@ For detailed information about these Unicode Normalization Forms, see http://uni
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-[[FilterDescriptions-ICUTransformFilter]]
 == ICU Transform Filter
 
 This filter applies http://userguide.icu-project.org/transforms/general[ICU Transforms] to text. This filter supports only ICU System Transforms. Custom rule sets are not supported.
@@ -564,7 +547,6 @@ For detailed information about ICU Transforms, see http://userguide.icu-project.
 
 To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-[[FilterDescriptions-KeepWordFilter]]
 == Keep Word Filter
 
 This filter discards all tokens except those that are listed in the given word list. This is the inverse of the Stop Words Filter. This filter can be useful for building specialized indices for a constrained set of terms.
@@ -638,7 +620,6 @@ Using LowerCaseFilterFactory before filtering for keep words, no `ignoreCase` fl
 
 *Out:* "happy", "funny"
 
-[[FilterDescriptions-KStemFilter]]
 == KStem Filter
 
 KStem is an alternative to the Porter Stem Filter for developers looking for a less aggressive stemmer. KStem was written by Bob Krovetz, ported to Lucene by Sergio Guzman-Lara (UMASS Amherst). This stemmer is only appropriate for English language text.
@@ -663,7 +644,6 @@ KStem is an alternative to the Porter Stem Filter for developers looking for a l
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-LengthFilter]]
 == Length Filter
 
 This filter passes tokens whose length falls within the min/max limit specified. All other tokens are discarded.
@@ -694,7 +674,6 @@ This filter passes tokens whose length falls within the min/max limit specified.
 
 *Out:* "turn", "right"
 
-[[FilterDescriptions-LimitTokenCountFilter]]
 == Limit Token Count Filter
 
 This filter limits the number of accepted tokens, typically useful for index analysis.
@@ -726,7 +705,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"
 
-[[FilterDescriptions-LimitTokenOffsetFilter]]
 == Limit Token Offset Filter
 
 This filter limits tokens to those before a configured maximum start character offset. This can be useful to limit highlighting, for example.
@@ -758,7 +736,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "0", "2", "4", "6", "8", "A"
 
-[[FilterDescriptions-LimitTokenPositionFilter]]
 == Limit Token Position Filter
 
 This filter limits tokens to those before a configured maximum token position.
@@ -790,7 +767,6 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Out:* "1", "2", "3"
 
-[[FilterDescriptions-LowerCaseFilter]]
 == Lower Case Filter
 
 Converts any uppercase letters in a token to the equivalent lowercase token. All other characters are left unchanged.
@@ -815,10 +791,9 @@ Converts any uppercase letters in a token to the equivalent lowercase token. All
 
 *Out:* "down", "with", "camelcase"
 
-[[FilterDescriptions-ManagedStopFilter]]
 == Managed Stop Filter
 
-This is specialized version of the <<FilterDescriptions-StopFilter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Stop Filter,Stop Words Filter Factory>> that uses a set of stop words that are <<managed-resources.adoc#managed-resources,managed from a REST API>>.
 
 *Arguments:*
 
@@ -836,12 +811,11 @@ With this configuration the set of words is named "english" and can be managed v
 </analyzer>
 ----
 
-See <<FilterDescriptions-StopFilter,Stop Filter>> for example input/output.
+See <<Stop Filter>> for example input/output.
 
-[[FilterDescriptions-ManagedSynonymFilter]]
 == Managed Synonym Filter
 
-This is specialized version of the <<FilterDescriptions-SynonymFilter,Synonym Filter Factory>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Synonym Filter>> that uses a mapping of synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API>>.
 
 .Managed Synonym Filter has been Deprecated
 [WARNING]
@@ -851,12 +825,11 @@ Managed Synonym Filter has been deprecated in favor of Managed Synonym Graph Fil
 
 *Factory class:* `solr.ManagedSynonymFilterFactory`
 
-For arguments and examples, see the Managed Synonym Graph Filter below.
+For arguments and examples, see the <<Managed Synonym Graph Filter>> below.
 
-[[FilterDescriptions-ManagedSynonymGraphFilter]]
 == Managed Synonym Graph Filter
 
-This is specialized version of the <<FilterDescriptions-SynonymGraphFilter,Synonym Graph Filter Factory>> that uses a mapping on synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
+This is a specialized version of the <<Synonym Graph Filter>> that uses a mapping of synonyms that is <<managed-resources.adoc#managed-resources,managed from a REST API.>>
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Managed Synonym Filter, which produces incorrect graphs for multi-token synonyms.
 
@@ -881,9 +854,8 @@ With this configuration the set of mappings is named "english" and can be manage
 </analyzer>
 ----
 
-See <<FilterDescriptions-ManagedSynonymFilter,Managed Synonym Filter>> for example input/output.
+See <<Managed Synonym Filter>> for example input/output.
 
-[[FilterDescriptions-N-GramFilter]]
 == N-Gram Filter
 
 Generates n-gram tokens of sizes in the given range. Note that tokens are ordered by position and then by gram size.
@@ -950,7 +922,6 @@ A range of 3 to 5.
 
 *Out:* "fou", "four", "our", "sco", "scor", "score", "cor", "core", "ore"
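The ordering shown above (each start offset in turn, then ascending gram sizes) can be sketched in Python. This is an illustrative re-implementation of the behavior, not the Lucene code itself:

```python
def ngrams(token, min_size, max_size):
    """Emit character n-grams ordered by start offset, then by gram size."""
    grams = []
    for start in range(len(token)):
        for size in range(min_size, max_size + 1):
            if start + size <= len(token):
                grams.append(token[start:start + size])
    return grams

# "four score" with minGramSize=3 and maxGramSize=5
print(ngrams("four", 3, 5))   # ['fou', 'four', 'our']
print(ngrams("score", 3, 5))  # ['sco', 'scor', 'score', 'cor', 'core', 'ore']
```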
 
-[[FilterDescriptions-NumericPayloadTokenFilter]]
 == Numeric Payload Token Filter
 
 This filter adds a numeric floating point payload value to tokens that match a given type. Refer to the Javadoc for the `org.apache.lucene.analysis.Token` class for more information about token types and payloads.
@@ -979,7 +950,6 @@ This filter adds a numeric floating point payload value to tokens that match a g
 
 *Out:* "bing"[0.75], "bang"[0.75], "boom"[0.75]
 
-[[FilterDescriptions-PatternReplaceFilter]]
 == Pattern Replace Filter
 
 This filter applies a regular expression to each token and, for those that match, substitutes the given replacement string in place of the matched pattern. Tokens that do not match are passed through unchanged.
@@ -1048,7 +1018,6 @@ More complex pattern with capture group reference in the replacement. Tokens tha
 
 *Out:* "cat", "foo_1234", "9987", "blah1234foo"
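The example's behavior can be sketched with Python's `re` module. The pattern below is an assumption inferred from the in/out lists above, not copied from the reference configuration:

```python
import re

# Tokens that start with non-digits and end with digits get an underscore
# inserted before the trailing digits; all other tokens pass through.
pattern = re.compile(r'^(\D+)(\d+)$')

def replace(token):
    return pattern.sub(r'\1_\2', token)

tokens = ["cat", "foo1234", "9987", "blah1234foo"]
print([replace(t) for t in tokens])  # ['cat', 'foo_1234', '9987', 'blah1234foo']
```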
 
-[[FilterDescriptions-PhoneticFilter]]
 == Phonetic Filter
 
 This filter creates tokens using one of the phonetic encoding algorithms in the `org.apache.commons.codec.language` package. For more information, see the section on <<phonetic-matching.adoc#phonetic-matching,Phonetic Matching>>.
@@ -1119,7 +1088,6 @@ Default Soundex encoder.
 
 *Out:* "four"(1), "F600"(1), "score"(2), "S600"(2), "and"(3), "A530"(3), "twenty"(4), "T530"(4)
 
-[[FilterDescriptions-PorterStemFilter]]
 == Porter Stem Filter
 
 This filter applies the Porter Stemming Algorithm for English. The results are similar to using the Snowball Porter Stemmer with the `language="English"` argument. But this stemmer is coded directly in Java and is not based on Snowball. It does not accept a list of protected words and is only appropriate for English language text. However, it has been benchmarked as http://markmail.org/thread/d2c443z63z37rwf6[four times faster] than the English Snowball stemmer, so can provide a performance enhancement.
@@ -1144,7 +1112,6 @@ This filter applies the Porter Stemming Algorithm for English. The results are s
 
 *Out:* "jump", "jump", "jump"
 
-[[FilterDescriptions-RemoveDuplicatesTokenFilter]]
 == Remove Duplicates Token Filter
 
 The filter removes duplicate tokens in the stream. Tokens are considered to be duplicates ONLY if they have the same text and position values.
@@ -1223,7 +1190,6 @@ This filter reverses tokens to provide faster leading wildcard and prefix querie
 
 *Out:* "oof*", "rab*"
 
-[[FilterDescriptions-ShingleFilter]]
 == Shingle Filter
 
 This filter constructs shingles, which are token n-grams, from the token stream. It combines runs of tokens into a single token.
@@ -1278,7 +1244,6 @@ A shingle size of four, do not include original token.
 
 *Out:* "To be"(1), "To be or"(1), "To be or not"(1), "be or"(2), "be or not"(2), "be or not to"(2), "or not"(3), "or not to"(3), "or not to be"(3), "not to"(4), "not to be"(4), "to be"(5)
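The shingling in this example (minimum shingle size 2, maximum size 4, original unigrams dropped) can be sketched as:

```python
def shingles(tokens, min_size=2, max_size=4):
    """Combine runs of adjacent tokens into space-joined shingles."""
    out = []
    for i in range(len(tokens)):
        for size in range(min_size, max_size + 1):
            if i + size <= len(tokens):
                out.append(" ".join(tokens[i:i + size]))
    return out

print(shingles("To be or not to be".split()))
# ['To be', 'To be or', 'To be or not', 'be or', 'be or not',
#  'be or not to', 'or not', 'or not to', 'or not to be',
#  'not to', 'not to be', 'to be']
```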
 
-[[FilterDescriptions-SnowballPorterStemmerFilter]]
 == Snowball Porter Stemmer Filter
 
 This filter factory instantiates a language-specific stemmer generated by Snowball. Snowball is a software package that generates pattern-based word stemmers. This type of stemmer is not as accurate as a table-based stemmer, but is faster and less complex. Table-driven stemmers are labor intensive to create and maintain and so are typically commercial products.
@@ -1349,7 +1314,6 @@ Spanish stemmer, Spanish words:
 
 *Out:* "cant", "cant"
 
-[[FilterDescriptions-StandardFilter]]
 == Standard Filter
 
 This filter removes dots from acronyms and the substring "'s" from the end of tokens. This filter depends on the tokens being tagged with the appropriate term-type to recognize acronyms and words with apostrophes.
@@ -1363,7 +1327,6 @@ This filter removes dots from acronyms and the substring "'s" from the end of to
 This filter is no longer operational in Solr when the `luceneMatchVersion` (in `solrconfig.xml`) is higher than "3.1".
 ====
 
-[[FilterDescriptions-StopFilter]]
 == Stop Filter
 
 This filter discards, or _stops_ analysis of, tokens that are on the given stop words list. A standard stop words list is included in the Solr `conf` directory, named `stopwords.txt`, which is appropriate for typical English language text.
@@ -1414,10 +1377,9 @@ Case-sensitive matching, capitalized words not stopped. Token positions skip sto
 
 *Out:* "what"(4)
 
-[[FilterDescriptions-SuggestStopFilter]]
 == Suggest Stop Filter
 
-Like <<FilterDescriptions-StopFilter,Stop Filter>>, this filter discards, or _stops_ analysis of, tokens that are on the given stop words list.
+Like <<Stop Filter>>, this filter discards, or _stops_ analysis of, tokens that are on the given stop words list.
 
 Suggest Stop Filter differs from Stop Filter in that it will not remove the last token unless it is followed by a token separator. For example, a query `"find the"` would preserve the `'the'` since it was not followed by a space, punctuation etc., and mark it as a `KEYWORD` so that following filters will not change or remove it.
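A rough Python sketch of that last-token rule, assuming that a trailing non-alphanumeric character (space, punctuation) marks the token separator:

```python
def suggest_stop(query, stopwords):
    """Drop stopwords, but keep a trailing stopword the user is still typing."""
    tokens = query.split()
    # A separator after the last token means the token is complete.
    ends_with_separator = bool(query) and not query[-1].isalnum()
    kept = []
    for i, token in enumerate(tokens):
        is_partial_last = (i == len(tokens) - 1) and not ends_with_separator
        if token.lower() in stopwords and not is_partial_last:
            continue
        kept.append(token)
    return kept

print(suggest_stop("find the", {"the"}))           # ['find', 'the']
print(suggest_stop("find the popsicle", {"the"}))  # ['find', 'popsicle']
```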
 
@@ -1455,7 +1417,6 @@ By contrast, a query like "`find the popsicle`" would remove '`the`' as a stopwo
 
 *Out:* "the"(2)
 
-[[FilterDescriptions-SynonymFilter]]
 == Synonym Filter
 
 This filter does synonym mapping. Each token is looked up in the list of synonyms and, if a match is found, the synonym is emitted in place of the token. The position values of the new tokens are set such that they all occur at the same position as the original token.
@@ -1470,7 +1431,6 @@ Synonym Filter has been deprecated in favor of Synonym Graph Filter, which is re
 
 For arguments and examples, see the Synonym Graph Filter below.
 
-[[FilterDescriptions-SynonymGraphFilter]]
 == Synonym Graph Filter
 
 This filter maps single- or multi-token synonyms, producing a fully correct graph output. This filter is a replacement for the Synonym Filter, which produces incorrect graphs for multi-token synonyms.
@@ -1542,7 +1502,6 @@ small => tiny,teeny,weeny
 
 *Out:* "the"(1), "large"(2), "large"(3), "couch"(4), "sofa"(4), "divan"(4)
 
-[[FilterDescriptions-TokenOffsetPayloadFilter]]
 == Token Offset Payload Filter
 
 This filter adds the numeric character offsets of the token as a payload value for that token.
@@ -1567,7 +1526,6 @@ This filter adds the numeric character offsets of the token as a payload value f
 
 *Out:* "bing"[0,4], "bang"[5,9], "boom"[10,14]
 
-[[FilterDescriptions-TrimFilter]]
 == Trim Filter
 
 This filter trims leading and/or trailing whitespace from tokens. Most tokenizers break tokens at whitespace, so this filter is most often used for special situations.
@@ -1596,7 +1554,6 @@ The PatternTokenizerFactory configuration used here splits the input on simple c
 
 *Out:* "one", "two", "three", "four"
 
-[[FilterDescriptions-TypeAsPayloadFilter]]
 == Type As Payload Filter
 
 This filter adds the token's type, as an encoded byte sequence, as its payload.
@@ -1621,10 +1578,9 @@ This filter adds the token's type, as an encoded byte sequence, as its payload.
 
 *Out:* "Pay"[<ALPHANUM>], "Bob's"[<APOSTROPHE>], "I.O.U."[<ACRONYM>]
 
-[[FilterDescriptions-TypeTokenFilter]]
 == Type Token Filter
 
-This filter blacklists or whitelists a specified list of token types, assuming the tokens have type metadata associated with them. For example, the <<tokenizers.adoc#Tokenizers-UAX29URLEmailTokenizer,UAX29 URL Email Tokenizer>> emits "<URL>" and "<EMAIL>" typed tokens, as well as other types. This filter would allow you to pull out only e-mail addresses from text as tokens, if you wish.
+This filter blacklists or whitelists a specified list of token types, assuming the tokens have type metadata associated with them. For example, the <<tokenizers.adoc#uax29-url-email-tokenizer,UAX29 URL Email Tokenizer>> emits "<URL>" and "<EMAIL>" typed tokens, as well as other types. This filter would allow you to pull out only e-mail addresses from text as tokens, if you wish.
 
 *Factory class:* `solr.TypeTokenFilterFactory`
 
@@ -1645,7 +1601,6 @@ This filter blacklists or whitelists a specified list of token types, assuming t
 </analyzer>
 ----
 
-[[FilterDescriptions-WordDelimiterFilter]]
 == Word Delimiter Filter
 
 This filter splits tokens at word delimiters.
@@ -1660,7 +1615,6 @@ Word Delimiter Filter has been deprecated in favor of Word Delimiter Graph Filte
 
 For a full description, including arguments and examples, see the Word Delimiter Graph Filter below.
 
-[[FilterDescriptions-WordDelimiterGraphFilter]]
 == Word Delimiter Graph Filter
 
 This filter splits tokens at word delimiters.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/function-queries.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/function-queries.adoc b/solr/solr-ref-guide/src/function-queries.adoc
index 29cca9c..11dfb08 100644
--- a/solr/solr-ref-guide/src/function-queries.adoc
+++ b/solr/solr-ref-guide/src/function-queries.adoc
@@ -25,14 +25,13 @@ Function queries are supported by the <<the-dismax-query-parser.adoc#the-dismax-
 
 Function queries use _functions_. The functions can be a constant (numeric or string literal), a field, another function or a parameter substitution argument. You can use these functions to modify the ranking of results for users. These could be used to change the ranking of results based on a user's location, or some other calculation.
 
-[[FunctionQueries-UsingFunctionQuery]]
 == Using Function Query
 
 Functions must be expressed as function calls (for example, `sum(a,b)` instead of simply `a+b`).
 
 There are several ways of using function queries in a Solr query:
 
-* Via an explicit QParser that expects function arguments, such <<other-parsers.adoc#OtherParsers-FunctionQueryParser,`func`>> or <<other-parsers.adoc#OtherParsers-FunctionRangeQueryParser,`frange`>> . For example:
+* Via an explicit QParser that expects function arguments, such as <<other-parsers.adoc#function-query-parser,`func`>> or <<other-parsers.adoc#function-range-query-parser,`frange`>>. For example:
 +
 [source,text]
 ----
@@ -61,7 +60,7 @@ the output would be:
 <float name="score">0.343</float>
 ...
 ----
-* Use in a parameter that is explicitly for specifying functions, such as the EDisMax query parser's <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,`boost`>> param, or DisMax query parser's <<the-dismax-query-parser.adoc#TheDisMaxQueryParser-Thebf_BoostFunctions_Parameter,`bf` (boost function) parameter>>. (Note that the `bf` parameter actually takes a list of function queries separated by white space and each with an optional boost. Make sure you eliminate any internal white space in single function queries when using `bf`). For example:
+* Use in a parameter that is explicitly for specifying functions, such as the EDisMax query parser's <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,`boost`>> param, or DisMax query parser's <<the-dismax-query-parser.adoc#bf-boost-functions-parameter,`bf` (boost function) parameter>>. (Note that the `bf` parameter actually takes a list of function queries separated by white space and each with an optional boost. Make sure you eliminate any internal white space in single function queries when using `bf`). For example:
 +
 [source,text]
 ----
@@ -76,7 +75,6 @@ q=_val_:mynumericfield _val_:"recip(rord(myfield),1,2,3)"
 
 Only functions with fast random access are recommended.
 
-[[FunctionQueries-AvailableFunctions]]
 == Available Functions
 
 The table below summarizes the functions available for function queries.
@@ -89,7 +87,7 @@ Returns the absolute value of the specified value or function.
 * `abs(x)` `abs(-5)`
 
 === childfield(field) Function
-Returns the value of the given field for one of the matched child docs when searching by <<other-parsers.adoc#OtherParsers-BlockJoinParentQueryParser,{!parent}>>. It can be used only in `sort` parameter.
+Returns the value of the given field for one of the matched child docs when searching by <<other-parsers.adoc#block-join-parent-query-parser,{!parent}>>. It can be used only in the `sort` parameter.
 
 *Syntax Examples*
 
@@ -149,7 +147,6 @@ You can quote the term if it's more complex, or do parameter substitution for th
 * `docfreq(text,'solr')`
 * `...&defType=func` `&q=docfreq(text,$myterm)&myterm=solr`
 
-[[FunctionQueries-field]]
 === field Function
 Returns the numeric docValues or indexed value of the field with the specified name. In its simplest (single argument) form, this function can only be used on single valued fields, and can be called using the name of the field as a string, or for most conventional field names simply use the field name by itself without using the `field(...)` syntax.
 
@@ -232,7 +229,7 @@ If the value of `x` does not fall between `min` and `max`, then either the value
 === max Function
 Returns the maximum numeric value of multiple nested functions or constants, which are specified as arguments: `max(x,y,...)`. The `max` function can also be useful for "bottoming out" another function or field at some specified constant.
 
-Use the `field(myfield,max)` syntax for <<FunctionQueries-field,selecting the maximum value of a single multivalued field>>.
+Use the `field(myfield,max)` syntax for <<field Function,selecting the maximum value of a single multivalued field>>.
 
 *Syntax Example*
 
@@ -248,7 +245,7 @@ Returns the number of documents in the index, including those that are marked as
 === min Function
 Returns the minimum numeric value of multiple nested functions or constants, which are specified as arguments: `min(x,y,...)`. The `min` function can also be useful for providing an "upper bound" on a function using a constant.
 
-Use the `field(myfield,min)` <<FunctionQueries-field,syntax for selecting the minimum value of a single multivalued field>>.
+Use the `field(myfield,min)` <<field Function,syntax for selecting the minimum value of a single multivalued field>>.
 
 *Syntax Example*
 
@@ -502,8 +499,6 @@ Returns `true` if any member of the field exists.
 *Syntax Example*
 * `if(lt(ms(mydatefield),315569259747),0.8,1)` translates to this pseudocode: `if mydatefield < 315569259747 then 0.8 else 1`
 
-
-[[FunctionQueries-ExampleFunctionQueries]]
 == Example Function Queries
 
 To give you a better understanding of how function queries can be used in Solr, suppose an index stores the dimensions in meters x,y,z of some hypothetical boxes with arbitrary names stored in field `boxname`. Suppose we want to search for a box matching the name `findbox`, but ranked according to the volumes of the boxes. The query parameters would be:
@@ -521,7 +516,6 @@ Suppose that you also have a field storing the weight of the box as `weight`. To
 http://localhost:8983/solr/collection_name/select?q=boxname:findbox _val_:"div(weight,product(x,y,z))"&fl=boxname x y z weight score
 ----
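The ranking value that `div(weight,product(x,y,z))` contributes per box can be sketched in Python as a density-style score, weight divided by volume:

```python
def box_rank(weight, x, y, z):
    """Mirror of div(weight, product(x, y, z)): heavier-per-volume boxes score higher."""
    return weight / (x * y * z)

print(box_rank(12.0, 2.0, 3.0, 2.0))  # 1.0
```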
 
-[[FunctionQueries-SortByFunction]]
 == Sort By Function
 
 You can sort your query results by the output of a function. For example, to sort results by distance, you could enter:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
index d512660..30dd9b1 100644
--- a/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
+++ b/solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc
@@ -33,10 +33,8 @@ In this section you will learn how to start a SolrCloud cluster using startup sc
 This tutorial assumes that you're already familiar with the basics of using Solr. If you need a refresher, please see the <<getting-started.adoc#getting-started,Getting Started section>> to get a grounding in Solr concepts. If you load documents as part of that exercise, you should start over with a fresh Solr installation for these SolrCloud tutorials.
 ====
 
-[[GettingStartedwithSolrCloud-SolrCloudExample]]
 == SolrCloud Example
 
-[[GettingStartedwithSolrCloud-InteractiveStartup]]
 === Interactive Startup
 
 The `bin/solr` script makes it easy to get started with SolrCloud as it walks you through the process of launching Solr nodes in cloud mode and adding a collection. To get started, simply do:
@@ -120,7 +118,6 @@ To stop Solr in SolrCloud mode, you would use the `bin/solr` script and issue th
 bin/solr stop -all
 ----
 
-[[GettingStartedwithSolrCloud-Startingwith-noprompt]]
 === Starting with -noprompt
 
 You can also get SolrCloud started with all the defaults instead of the interactive session using the following command:
@@ -130,7 +127,6 @@ You can also get SolrCloud started with all the defaults instead of the interact
 bin/solr -e cloud -noprompt
 ----
 
-[[GettingStartedwithSolrCloud-RestartingNodes]]
 === Restarting Nodes
 
 You can restart your SolrCloud nodes using the `bin/solr` script. For instance, to restart node1 running on port 8983 (with an embedded ZooKeeper server), you would do:
@@ -149,7 +145,6 @@ bin/solr restart -c -p 7574 -z localhost:9983 -s example/cloud/node2/solr
 
 Notice that you need to specify the ZooKeeper address (`-z localhost:9983`) when starting node2 so that it can join the cluster with node1.
 
-[[GettingStartedwithSolrCloud-Addinganodetoacluster]]
 === Adding a node to a cluster
 
 Adding a node to an existing cluster is a bit advanced and involves a little more understanding of Solr. Once you start up a SolrCloud cluster using the startup scripts, you can add a new node to it by: