Posted to commits@lucene.apache.org by da...@apache.org on 2017/07/13 07:18:44 UTC

[36/41] lucene-solr:feature/autoscaling: SOLR-11050: remove unneeded anchors for pages that have no incoming links from other pages

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index 8b56c02..64d813f 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -42,7 +42,6 @@ There are four main field types available for spatial search:
 
 Some esoteric details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
 
-[[SpatialSearch-LatLonPointSpatialField]]
 == LatLonPointSpatialField
 
 Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the schema:
@@ -52,7 +51,6 @@ Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the
 
 LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLPSF internally uses a 2-dimensional Lucene "Points" (BKD tree) index when "indexed" is enabled (the default). When "docValues" is enabled, a latitude and longitude pair is bit-interleaved into 64 bits and put into Lucene DocValues. The accuracy of the docValues data is about a centimeter.
 
-[[SpatialSearch-IndexingPoints]]
 == Indexing Points
 
 For indexing geodetic points (latitude and longitude), supply it in "lat,lon" order (comma separated).
@@ -61,7 +59,6 @@ For indexing non-geodetic points, it depends. Use `x y` (a space) if RPT. For Po
 
 If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. However, these formats are much bulkier than the raw coordinates for such simple data. (They are not supported by the deprecated LatLonType or PointType.)
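 
 For example, a simple point can be indexed with an ordinary update request. This is a minimal sketch; it assumes the "techproducts" example, whose `store` field is a location field:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/update?commit=true' \
   -H 'Content-type:application/json' \
   -d '[{"id": "store-demo-1", "store": "45.15,-93.85"}]'
 ----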
 
-[[SpatialSearch-SearchingwithQueryParsers]]
 == Searching with Query Parsers
 
 There are two spatial Solr "query parsers" for geospatial search: `geofilt` and `bbox`. They take the following parameters:
@@ -100,7 +97,6 @@ When used with `BBoxField`, additional options are supported:
 (Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
 
 
-[[SpatialSearch-geofilt]]
 === geofilt
 
 The `geofilt` filter allows you to retrieve results based on the geospatial distance (AKA the "great circle distance") from a given point. Another way of looking at it is that it creates a circular shape filter. For example, to find all documents within five kilometers of a given lat/lon point, you could enter `&q=*:*&fq={!geofilt sfield=store}&pt=45.15,-93.85&d=5`. This filter returns all results within a circle of the given radius around the initial point:
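 
 As a complete request, the same filter might look like the following sketch, which assumes the "techproducts" example and its `store` location field:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'fq={!geofilt sfield=store}' \
   --data-urlencode 'pt=45.15,-93.85' \
   --data-urlencode 'd=5'
 ----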
@@ -108,7 +104,6 @@ The `geofilt` filter allows you to retrieve results based on the geospatial dist
 image::images/spatial-search/circle.png[5KM radius]
 
 
-[[SpatialSearch-bbox]]
 === bbox
 
 The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as geofilt.
@@ -126,7 +121,6 @@ image::images/spatial-search/bbox.png[Bounding box]
 When a bounding box includes a pole, the bounding box ends up being a "bounding bowl" (a _spherical cap_) that includes all values north of the lowest latitude of the circle if it touches the north pole (or south of the highest latitude if it touches the south pole).
 ====
 
-[[SpatialSearch-Filteringbyanarbitraryrectangle]]
 === Filtering by an Arbitrary Rectangle
 
 Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range.
@@ -138,7 +132,6 @@ Here's an example:
 LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are using non-geospatial coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
 
 
-[[SpatialSearch-Optimizing_CacheorNot]]
 === Optimizing: Cache or Not
 
 It's most common to put a spatial query into an "fq" parameter – a filter query. By default, Solr will cache the query in the filter cache.
@@ -149,7 +142,6 @@ If you know the filter query (be it spatial or not) is fairly unique and not lik
 
 LLPSF does not support Solr's "PostFilter".
 
-[[SpatialSearch-DistanceSortingorBoosting_FunctionQueries_]]
 == Distance Sorting or Boosting (Function Queries)
 
 There are four distance function queries:
@@ -161,7 +153,6 @@ There are four distance function queries:
 
 For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
 
-[[SpatialSearch-geodist]]
 === geodist
 
 `geodist` is a distance function that takes three optional parameters: `(sfield,latitude,longitude)`. You can use the `geodist` function to sort results by distance or to return the distance as the score.
@@ -170,19 +161,16 @@ For example, to sort your results by ascending distance, enter `...&q=*:*&fq={!g
 
 To return the distance as the document score, enter `...&q={!func}geodist()&sfield=store&pt=45.15,-93.85&sort=score+asc`.
 
-[[SpatialSearch-MoreExamples]]
-== More Examples
+== More Spatial Search Examples
 
 Here are a few more useful examples of what you can do with spatial search in Solr.
 
-[[SpatialSearch-UseasaSub-QuerytoExpandSearchResults]]
 === Use as a Sub-Query to Expand Search Results
 
 Here we will query for results in Jacksonville, Florida, or within 50 kilometers of 45.15,-93.85 (near Buffalo, Minnesota):
 
 `&q=*:*&fq=(state:"FL" AND city:"Jacksonville") OR {!geofilt}&sfield=store&pt=45.15,-93.85&d=50&sort=geodist()+asc`
 
-[[SpatialSearch-FacetbyDistance]]
 === Facet by Distance
 
 To facet by distance, you can use the Frange query parser:
@@ -191,14 +179,12 @@ To facet by distance, you can use the Frange query parser:
 
 There are other ways to do it too, like using a \{!geofilt} in each facet.query.
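 
 As a concrete sketch of the `frange` approach (the field name and point below are illustrative assumptions), distance rings can be expressed as facet queries over `geodist()`:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'sfield=store' \
   --data-urlencode 'pt=45.15,-93.85' \
   --data-urlencode 'facet=true' \
   --data-urlencode 'facet.query={!frange l=0 u=5}geodist()' \
   --data-urlencode 'facet.query={!frange l=5.001 u=3000}geodist()'
 ----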
 
-[[SpatialSearch-BoostNearestResults]]
 === Boost Nearest Results
 
 Using the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> or <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,Extended DisMax>>, you can combine spatial search with the boost function to boost the nearest results:
 
 `&q.alt=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=50&bf=recip(geodist(),2,200,20)&sort=score desc`
 
-[[SpatialSearch-RPT]]
 == RPT
 
 RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) or an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonPointSpatialField:
@@ -215,8 +201,7 @@ RPT _shares_ various features in common with `LatLonPointSpatialField`. Some are
 * Sort/boost via `geodist`
 * Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
 
-[[SpatialSearch-Schemaconfiguration]]
-=== Schema Configuration
+=== Schema Configuration for RPT
 
 To use RPT, the field type must be registered and configured in `schema.xml`. There are many options for this field type.
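 
 For instance, an RPT field type could also be registered through the Schema API instead of editing `schema.xml` directly. The following is a sketch only; the type name, collection, and attribute values are assumptions, not recommendations:
 
 [source,bash]
 ----
 curl -X POST -H 'Content-type:application/json' \
   'http://localhost:8983/solr/techproducts/schema' \
   --data-binary '{
     "add-field-type": {
       "name": "location_rpt",
       "class": "solr.SpatialRecursivePrefixTreeFieldType",
       "geo": "true",
       "distErrPct": "0.025",
       "maxDistErr": "0.001",
       "distanceUnits": "kilometers"
     }
   }'
 ----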
 
@@ -266,7 +251,6 @@ A third choice is `packedQuad`, which is generally more efficient than `quad`, p
 
 *_And there are others:_* `normWrapLongitude`, `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about `spatialContextFactory` implementations referenced above, especially the link to the JTS based one.
 
-[[SpatialSearch-JTSandPolygons]]
 === JTS and Polygons
 
 As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon.
@@ -297,7 +281,6 @@ Inside the parenthesis following the search predicate is the shape definition. T
 
 Beyond this Reference Guide and Spatial4j's docs, there are some details that remain at the Solr Wiki at http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4.
 
-[[SpatialSearch-RptWithGeometrySpatialField]]
 === RptWithGeometrySpatialField
 
 The `RptWithGeometrySpatialField` field type is a derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry internally in Lucene DocValues, which it uses to achieve accurate search. It can also be used for indexed point fields. The Intersects predicate (the default) is particularly fast, since many search results can be returned as an accurate hit without requiring a geometry check. This field type is configured just like RPT except that the default `distErrPct` is 0.15 (higher than 0.025) because the grid squares are purely for performance and not to fundamentally represent the shape.
@@ -316,7 +299,6 @@ An optional in-memory cache can be defined in `solrconfig.xml`, which should be
 
 When using this field type, you will likely _not_ want to mark the field as stored because it's redundant with the DocValues data and surely larger because of the formatting (be it WKT or GeoJSON). To retrieve the spatial data in search results from DocValues, use the `[geo]` transformer -- <<transforming-result-documents.adoc#transforming-result-documents,Transforming Result Documents>>.
 
-[[SpatialSearch-HeatmapFaceting]]
 === Heatmap Faceting
 
 The RPT field supports generating a 2D grid of facet counts for documents having spatial data in each grid cell. For high-detail grids, this can be used to plot points, and for lesser detail it can be used for heatmap generation. The grid cells are determined at index-time based on RPT's configuration. At facet counting time, the indexed cells in the region of interest are traversed and a grid of counters corresponding to each cell are incremented. Solr can return the data in a straight-forward 2D array of integers or in a PNG which compresses better for larger data sets but must be decoded.
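 
 A heatmap facet request might look like the following sketch, which assumes an RPT field named `store_rpt`; `facet.heatmap.geom` bounds the region of interest and `facet.heatmap.distErrPct` controls the grid coarseness:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'rows=0' \
   --data-urlencode 'facet=true' \
   --data-urlencode 'facet.heatmap=store_rpt' \
   --data-urlencode 'facet.heatmap.geom=["-94 45" TO "-93 46"]' \
   --data-urlencode 'facet.heatmap.distErrPct=0.05'
 ----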
@@ -365,7 +347,6 @@ The `counts_ints2D` key has a 2D array of integers. The initial outer level is i
 
 If `format=png` then the output key is `counts_png`. It's a base-64 encoded string of a 4-byte PNG. The PNG logically holds exactly the same data that the ints2D format does. Note that the alpha channel byte is flipped to make it easier to view the PNG for diagnostic purposes, since otherwise counts would have to exceed 2^24 before they become non-opaque. Thus counts greater than this value will become opaque.
 
-[[SpatialSearch-BBoxField]]
 == BBoxField
 
 The `BBoxField` field type indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. It supports most spatial search predicates, and it has enhanced relevancy modes based on the overlap or area between the search rectangle and the indexed rectangle. It's particularly useful for its relevancy modes. To configure it in the schema, use a configuration like this:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/suggester.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/suggester.adoc b/solr/solr-ref-guide/src/suggester.adoc
index 834d992..1950fc7 100644
--- a/solr/solr-ref-guide/src/suggester.adoc
+++ b/solr/solr-ref-guide/src/suggester.adoc
@@ -36,7 +36,6 @@ The `solrconfig.xml` found in Solr's "```techproducts```" example has a Suggeste
 
 The "```techproducts```" example `solrconfig.xml` has a `suggest` search component and a `/suggest` request handler already configured. You can use that as the basis for your configuration, or create it from scratch, as detailed below.
 
-[[Suggester-AddingtheSuggestSearchComponent]]
 == Adding the Suggest Search Component
 
 The first step is to add a search component to `solrconfig.xml` and tell it to use the SuggestComponent. Here is some sample code that could be used.
@@ -56,7 +55,6 @@ The first step is to add a search component to `solrconfig.xml` and tell it to u
 </searchComponent>
 ----
 
-[[Suggester-SuggesterSearchComponentParameters]]
 === Suggester Search Component Parameters
 
 The Suggester search component takes several configuration parameters.
@@ -72,10 +70,10 @@ Arbitrary name for the search component.
 A symbolic name for this suggester. You can refer to this name in the URL parameters and in the SearchHandler configuration. It is possible to have multiples of these in one `solrconfig.xml` file.
 
 `lookupImpl`::
-Lookup implementation. There are several possible implementations, described below in the section <<Suggester-LookupImplementations,Lookup Implementations>>. If not set, the default lookup is `JaspellLookupFactory`.
+Lookup implementation. There are several possible implementations, described below in the section <<Lookup Implementations>>. If not set, the default lookup is `JaspellLookupFactory`.
 
 `dictionaryImpl`::
-The dictionary implementation to use. There are several possible implementations, described below in the section <<Suggester-DictionaryImplementations,Dictionary Implementations>>.
+The dictionary implementation to use. There are several possible implementations, described below in the section <<Dictionary Implementations>>.
 +
 If not set, the default dictionary implementation is `HighFrequencyDictionaryFactory`. However, if a `sourceLocation` is used, the dictionary implementation will be `FileDictionaryFactory`.
 
@@ -113,12 +111,10 @@ If `true,` then the lookup data structure will be built when Solr starts or when
 +
 Setting this to `true` could lead to the core taking longer to load (or reload), as the suggester data structure needs to be built, which can sometimes take a long time. It’s usually preferred to leave this setting at `false`, the default, and build suggesters manually by issuing requests with `suggest.build=true`.
 
-[[Suggester-LookupImplementations]]
 === Lookup Implementations
 
 The `lookupImpl` parameter defines the algorithms used to look up terms in the suggest index. There are several possible implementations to choose from, and some require additional parameters to be configured.
 
-[[Suggester-AnalyzingLookupFactory]]
 ==== AnalyzingLookupFactory
 
 A lookup that first analyzes the incoming text and adds the analyzed form to a weighted FST, and then does the same thing at lookup time.
@@ -137,7 +133,6 @@ If `true`, the default, then a separator between tokens is preserved. This means
 `preservePositionIncrements`::
 If `true`, the suggester will preserve position increments. This means that when token filters leave gaps (for example, when StopFilter matches a stopword), those positions will be respected when building the suggester. The default is `false`.
 
-[[Suggester-FuzzyLookupFactory]]
 ==== FuzzyLookupFactory
 
 This is a suggester which is an extension of the AnalyzingSuggester but is fuzzy in nature. The similarity is measured by the Levenshtein algorithm.
@@ -174,7 +169,6 @@ The minimum length of query before which any string edits will be allowed. The d
 `unicodeAware`::
 If `true`, the `maxEdits`, `minFuzzyLength`, `transpositions` and `nonFuzzyPrefix` parameters will be measured in unicode code points (actual letters) instead of bytes. The default is `false`.
 
-[[Suggester-AnalyzingInfixLookupFactory]]
 ==== AnalyzingInfixLookupFactory
 
 Analyzes the input text and then suggests matches based on prefix matches to any tokens in the indexed text. This uses a Lucene index for its dictionary.
@@ -193,9 +187,8 @@ Boolean option for multiple terms. The default is `true`, all terms will be requ
 `highlight`::
 Highlight suggest terms. Default is `true`.
 
-This implementation supports <<Suggester-ContextFiltering,Context Filtering>>.
+This implementation supports <<Context Filtering>>.
 
-[[Suggester-BlendedInfixLookupFactory]]
 ==== BlendedInfixLookupFactory
 
 An extension of the `AnalyzingInfixSuggester` which provides additional functionality to weight prefix matches across the matched documents. You can tell it to score higher if a hit is closer to the start of the suggestion or vice versa.
@@ -220,9 +213,8 @@ When using `BlendedInfixSuggester` you can provide your own path where the index
 `minPrefixChars`::
 Minimum number of leading characters before PrefixQuery is used (the default is `4`). Prefixes shorter than this are indexed as character ngrams (increasing index size but making lookups faster).
 
-This implementation supports <<Suggester-ContextFiltering,Context Filtering>> .
+This implementation supports <<Context Filtering>> .
 
-[[Suggester-FreeTextLookupFactory]]
 ==== FreeTextLookupFactory
 
 It looks at the last tokens plus the prefix of whatever final token the user is typing, if present, to predict the most likely next token. The number of previous tokens that need to be considered can also be specified. This suggester would only be used as a fallback, when the primary suggester fails to find any suggestions.
@@ -235,7 +227,6 @@ The analyzer used at "query-time" and "build-time" to analyze suggestions. This
 `ngrams`::
 The maximum number of tokens out of which singles will be made into the dictionary. The default value is `2`. Increasing this means you want more than the previous 2 tokens to be taken into consideration when making suggestions.
 
-[[Suggester-FSTLookupFactory]]
 ==== FSTLookupFactory
 
 An automaton-based lookup. This implementation is slower to build, but provides the lowest memory cost. We recommend using this implementation unless you need more sophisticated matching results, in which case you should use the Jaspell implementation.
@@ -248,29 +239,24 @@ If `true`, the default, exact suggestions are returned first, even if they are p
 `weightBuckets`::
 The number of separate buckets for weights which the suggester will use while building its dictionary.
 
-[[Suggester-TSTLookupFactory]]
 ==== TSTLookupFactory
 
 A simple compact ternary trie based lookup.
 
-[[Suggester-WFSTLookupFactory]]
 ==== WFSTLookupFactory
 
 A weighted automaton representation which is an alternative to `FSTLookup` for more fine-grained ranking. `WFSTLookup` does not use buckets, but instead a shortest path algorithm.
 
 Note that it expects weights to be whole numbers. If weight is missing it's assumed to be `1.0`. Weights affect the sorting of matching suggestions when `spellcheck.onlyMorePopular=true` is selected: weights are treated as "popularity" score, with higher weights preferred over suggestions with lower weights.
 
-[[Suggester-JaspellLookupFactory]]
 ==== JaspellLookupFactory
 
 A more complex lookup based on a ternary trie from the http://jaspell.sourceforge.net/[JaSpell] project. Use this implementation if you need more sophisticated matching results.
 
-[[Suggester-DictionaryImplementations]]
 === Dictionary Implementations
 
 The dictionary implementations define how terms are stored. There are several options, and multiple dictionaries can be used in a single request if necessary.
 
-[[Suggester-DocumentDictionaryFactory]]
 ==== DocumentDictionaryFactory
 
 A dictionary with terms, weights, and an optional payload taken from the index.
@@ -286,7 +272,6 @@ The `payloadField` should be a field that is stored. This parameter is optional.
 `contextField`::
 Field to be used for context filtering. Note that only some lookup implementations support filtering.
 
-[[Suggester-DocumentExpressionDictionaryFactory]]
 ==== DocumentExpressionDictionaryFactory
 
 This dictionary implementation is the same as the `DocumentDictionaryFactory` but allows users to specify an arbitrary expression into the `weightExpression` tag.
@@ -302,7 +287,6 @@ An arbitrary expression used for scoring the suggestions. The fields used must b
 `contextField`::
 Field to be used for context filtering. Note that only some lookup implementations support filtering.
 
-[[Suggester-HighFrequencyDictionaryFactory]]
 ==== HighFrequencyDictionaryFactory
 
 This dictionary implementation allows adding a threshold to prune out less frequent terms in cases where very common terms may overwhelm other terms.
@@ -312,7 +296,6 @@ This dictionary implementation takes one parameter in addition to parameters des
 `threshold`::
 A value between zero and one representing the minimum fraction of the total documents where a term should appear in order to be added to the lookup dictionary.
 
-[[Suggester-FileDictionaryFactory]]
 ==== FileDictionaryFactory
 
 This dictionary implementation allows using an external file that contains suggest entries. Weights and payloads can also be used.
@@ -332,7 +315,6 @@ accidentally    2.0
 accommodate 3.0
 ----
 
-[[Suggester-MultipleDictionaries]]
 === Multiple Dictionaries
 
 It is possible to include multiple `dictionaryImpl` definitions in a single SuggestComponent definition.
@@ -364,9 +346,8 @@ To do this, simply define separate suggesters, as in this example:
 </searchComponent>
 ----
 
-When using these Suggesters in a query, you would define multiple `suggest.dictionary` parameters in the request, referring to the names given for each Suggester in the search component definition. The response will include the terms in sections for each Suggester. See the <<Suggester-ExampleUsages,Examples>> section below for an example request and response.
+When using these Suggesters in a query, you would define multiple `suggest.dictionary` parameters in the request, referring to the names given for each Suggester in the search component definition. The response will include the terms in sections for each Suggester. See the <<Example Usages>> section below for an example request and response.
 
-[[Suggester-AddingtheSuggestRequestHandler]]
 == Adding the Suggest Request Handler
 
 After adding the search component, a request handler must be added to `solrconfig.xml`. This request handler works the <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,same as any other request handler>>, and allows you to configure default parameters for serving suggestion requests. The request handler definition must incorporate the "suggest" search component defined previously.
@@ -384,7 +365,6 @@ After adding the search component, a request handler must be added to `solrconfi
 </requestHandler>
 ----
 
-[[Suggester-SuggestRequestHandlerParameters]]
 === Suggest Request Handler Parameters
 
 The following parameters allow you to set defaults for the Suggest request handler:
@@ -424,10 +404,8 @@ These properties can also be overridden at query time, or not set in the request
 Context filtering (`suggest.cfq`) is currently only supported by `AnalyzingInfixLookupFactory` and `BlendedInfixLookupFactory`, and only when backed by a `Document*Dictionary`. All other implementations will return unfiltered matches as if filtering was not requested.
 ====
 
-[[Suggester-ExampleUsages]]
 == Example Usages
 
-[[Suggester-GetSuggestionswithWeights]]
 === Get Suggestions with Weights
 
 This is a basic suggestion using a single dictionary and a single Solr core.
@@ -478,8 +456,7 @@ Example response:
 }
 ----
 
-[[Suggester-MultipleDictionaries.1]]
-=== Multiple Dictionaries
+=== Using Multiple Dictionaries
 
 If you have defined multiple dictionaries, you can use them in queries.
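 
 A request using two dictionaries might look like this sketch (the suggester names are assumptions standing in for whatever names were configured above):
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/suggest' \
   --data-urlencode 'suggest=true' \
   --data-urlencode 'suggest.q=elec' \
   --data-urlencode 'suggest.dictionary=mySuggester' \
   --data-urlencode 'suggest.dictionary=altSuggester'
 ----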
 
@@ -531,7 +508,6 @@ Example response:
 }
 ----
 
-[[Suggester-ContextFiltering]]
 === Context Filtering
 
 Context filtering lets you filter suggestions by a separate context field, such as category, department or any other token. The `AnalyzingInfixLookupFactory` and `BlendedInfixLookupFactory` currently support this feature, when backed by `DocumentDictionaryFactory`.
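 
 Assuming a suggester whose `contextField` points at a category-like field, filtering is a matter of adding `suggest.cfq` to the request. The sketch below uses an assumed dictionary name and context value:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/suggest' \
   --data-urlencode 'suggest=true' \
   --data-urlencode 'suggest.q=c' \
   --data-urlencode 'suggest.cfq=memory' \
   --data-urlencode 'suggest.dictionary=mySuggester'
 ----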

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/the-query-elevation-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-query-elevation-component.adoc b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
index dcd3c7e..638aa81 100644
--- a/solr/solr-ref-guide/src/the-query-elevation-component.adoc
+++ b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
@@ -31,7 +31,6 @@ All of the sample configuration and queries used in this section assume you are
 bin/solr -e techproducts
 ----
 
-[[TheQueryElevationComponent-ConfiguringtheQueryElevationComponent]]
 == Configuring the Query Elevation Component
 
 You can configure the Query Elevation Component in the `solrconfig.xml` file. Search components like `QueryElevationComponent` may be added to any request handler; a dedicated request handler is used here for brevity.
@@ -72,7 +71,6 @@ Path to the file that defines query elevation. This file must exist in `<instanc
 `forceElevation`::
 By default, this component respects the requested `sort` parameter: if the request asks to sort by date, it will order the results by date. If `forceElevation=true`, results will first return the boosted docs, then order by date.
 
-[[TheQueryElevationComponent-elevate.xml]]
 === elevate.xml
 
 Elevated query results are configured in an external XML file specified in the `config-file` argument. An `elevate.xml` file might look like this:
@@ -95,10 +93,8 @@ Elevated query results are configured in an external XML file specified in the `
 
 In this example, the query "foo bar" would first return documents 1, 2 and 3, then whatever normally appears for the same query. For the query "ipod", it would first return "MA147LL/A", and would make sure that "IW-02" is not in the result set.
 
-[[TheQueryElevationComponent-UsingtheQueryElevationComponent]]
 == Using the Query Elevation Component
 
-[[TheQueryElevationComponent-TheenableElevationParameter]]
 === The enableElevation Parameter
 
 For debugging it may be useful to see results with and without the elevated docs. To hide results, use `enableElevation=false`:
@@ -107,21 +103,18 @@ For debugging it may be useful to see results with and without the elevated docs
 
 `\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&enableElevation=false`
 
-[[TheQueryElevationComponent-TheforceElevationParameter]]
 === The forceElevation Parameter
 
 You can force elevation during runtime by adding `forceElevation=true` to the query URL:
 
 `\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&enableElevation=true&forceElevation=true`
 
-[[TheQueryElevationComponent-TheexclusiveParameter]]
 === The exclusive Parameter
 
 You can force Solr to return only the results specified in the elevation file by adding `exclusive=true` to the URL:
 
 `\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&debugQuery=true&exclusive=true`
 
-[[TheQueryElevationComponent-DocumentTransformersandthemarkExcludesParameter]]
 === Document Transformers and the markExcludes Parameter
 
 The `[elevated]` <<transforming-result-documents.adoc#transforming-result-documents,Document Transformer>> can be used to annotate each document with information about whether or not it was elevated:
@@ -132,7 +125,6 @@ Likewise, it can be helpful when troubleshooting to see all matching documents 
 
 `\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&markExcludes=true&fl=id,[elevated],[excluded]`
 
-[[TheQueryElevationComponent-TheelevateIdsandexcludeIdsParameters]]
 === The elevateIds and excludeIds Parameters
 
 When the elevation component is in use, the pre-configured list of elevations for a query can be overridden at request time to use the unique keys specified in these request parameters.
@@ -147,7 +139,6 @@ For example, in the request below documents IW-02 and F8V7067-APL-KIT will be el
 
 `\http://localhost:8983/solr/techproducts/elevate?q=ipod&df=text&elevateIds=IW-02,F8V7067-APL-KIT`
 
-[[TheQueryElevationComponent-ThefqParameter]]
-=== The fq Parameter
+=== The fq Parameter with Elevation
 
 Query elevation respects the standard filter query (`fq`) parameter. That is, if the query contains the `fq` parameter, all results will be within that filter even if `elevate.xml` adds other documents to the result set.
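 
 For example, combining elevation with a stock filter might look like this sketch against the "techproducts" example:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/elevate' \
   --data-urlencode 'q=ipod' \
   --data-urlencode 'df=text' \
   --data-urlencode 'fq=inStock:true'
 ----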

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/the-stats-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-stats-component.adoc b/solr/solr-ref-guide/src/the-stats-component.adoc
index a5eb334..ada56a8 100644
--- a/solr/solr-ref-guide/src/the-stats-component.adoc
+++ b/solr/solr-ref-guide/src/the-stats-component.adoc
@@ -27,7 +27,6 @@ The sample queries in this section assume you are running the "```techproducts``
 bin/solr -e techproducts
 ----
 
-[[TheStatsComponent-StatsComponentParameters]]
 == Stats Component Parameters
 
 The Stats Component accepts the following parameters:
@@ -41,8 +40,7 @@ Specifies a field for which statistics should be generated. This parameter may b
 <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters>> may be used to indicate which subset of the supported statistics should be computed, and/or that statistics should be computed over the results of an arbitrary numeric function (or query) instead of a simple field name. See the examples below.
 
 
-[[TheStatsComponent-Example]]
-=== Example
+=== Stats Component Example
 
 The query below demonstrates computing stats against two different numeric fields, as well as stats over the results of a `termfreq()` function call using the `text` field:
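 
 A request along those lines might be sketched as follows; the field names follow the "techproducts" example and are illustrative, not prescriptive:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'rows=0' \
   --data-urlencode 'stats=true' \
   --data-urlencode 'stats.field=price' \
   --data-urlencode 'stats.field=popularity' \
   --data-urlencode 'stats.field={!func}termfreq(text,memory)'
 ----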
 
@@ -89,10 +87,9 @@ The query below demonstrates computing stats against two different fields numeri
 </lst>
 ----
 
-[[TheStatsComponent-StatisticsSupported]]
 == Statistics Supported
 
-The table below explains the statistics supported by the Stats component. Not all statistics are supported for all field types, and not all statistics are computed by default (see <<TheStatsComponent-LocalParameters,Local Parameters>> below for details)
+The table below explains the statistics supported by the Stats component. Not all statistics are supported for all field types, and not all statistics are computed by default (see <<Local Parameters with the Stats Component>> below for details).
 
 `min`::
 The minimum value of the field/function in all documents in the set. This statistic is computed for all field types and is computed by default.
@@ -134,14 +131,13 @@ Input for this option can be floating point number between `0.0` and `1.0` indic
 +
 This statistic is computed for all field types but is not computed by default.
 
-[[TheStatsComponent-LocalParameters]]
-== Local Parameters
+== Local Parameters with the Stats Component
 
 Similar to the <<faceting.adoc#faceting,Facet Component>>, the `stats.field` parameter supports local parameters for:
 
 * Tagging & Excluding Filters: `stats.field={!ex=filterA}price`
 * Changing the Output Key: `stats.field={!key=my_price_stats}price`
-* Tagging stats for <<TheStatsComponent-TheStatsComponentandFaceting,use with `facet.pivot`>>: `stats.field={!tag=my_pivot_stats}price`
+* Tagging stats for <<The Stats Component and Faceting,use with `facet.pivot`>>: `stats.field={!tag=my_pivot_stats}price`
 
 Local parameters can also be used to specify individual statistics by name, overriding the set of statistics computed by default, eg: `stats.field={!min=true max=true percentiles='99,99.9,99.99'}price`
 
@@ -159,8 +155,7 @@ Additional "Expert" local params are supported in some cases for affecting the b
 ** `hllLog2m` - an integer value specifying an explicit "log2m" value to use, overriding the heuristic value determined by the cardinality local param and the field type – see the https://github.com/aggregateknowledge/java-hll/[java-hll] documentation for more details
 ** `hllRegwidth` - an integer value specifying an explicit "regwidth" value to use, overriding the heuristic value determined by the cardinality local param and the field type – see the https://github.com/aggregateknowledge/java-hll/[java-hll] documentation for more details
 
-[[TheStatsComponent-Examples]]
-=== Examples
+=== Examples with Local Parameters
 
 Here we compute some statistics for the price field. The min, max, mean, 90th, and 99th percentile price values are computed against all products that are in stock (`q=*:*` and `fq=inStock:true`), and independently all of the default statistics are computed against all products regardless of whether they are in stock or not (by excluding that filter).
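 
 A request following that description might be sketched as below; the tag and key names are arbitrary placeholders:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/techproducts/select' \
   --data-urlencode 'q=*:*' \
   --data-urlencode 'rows=0' \
   --data-urlencode 'fq={!tag=stock_check}inStock:true' \
   --data-urlencode 'stats=true' \
   --data-urlencode 'stats.field={!key=instock_prices min=true max=true mean=true percentiles="90,99"}price' \
   --data-urlencode 'stats.field={!key=all_prices ex=stock_check}price'
 ----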
 
@@ -193,7 +188,6 @@ Here we compute some statistics for the price field. The min, max, mean, 90th, a
 </lst>
 ----
 
-[[TheStatsComponent-TheStatsComponentandFaceting]]
 == The Stats Component and Faceting
 
 Sets of `stats.field` parameters can be referenced by `'tag'` when using Pivot Faceting to compute multiple statistics at every level (i.e.: field) in the tree of pivot constraints.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/the-term-vector-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-term-vector-component.adoc b/solr/solr-ref-guide/src/the-term-vector-component.adoc
index 218d553..fc679b7 100644
--- a/solr/solr-ref-guide/src/the-term-vector-component.adoc
+++ b/solr/solr-ref-guide/src/the-term-vector-component.adoc
@@ -22,8 +22,7 @@ The TermVectorComponent is a search component designed to return additional info
 
 For each document in the response, the TermVectorComponent can return the term vector, the term frequency, inverse document frequency, position, and offset information.
 
-[[TheTermVectorComponent-Configuration]]
-== Configuration
+== Term Vector Component Configuration
 
 The TermVectorComponent is not enabled implicitly in Solr - it must be explicitly configured in your `solrconfig.xml` file. The examples on this page show how it is configured in Solr's "```techproducts```" example:
 
@@ -67,7 +66,6 @@ Once your handler is defined, you may use in conjunction with any schema (that h
        termOffsets="true" />
 ----
 
-[[TheTermVectorComponent-InvokingtheTermVectorComponent]]
 == Invoking the Term Vector Component
 
 The example below shows an invocation of this component using the above configuration:
@@ -124,8 +122,7 @@ The example below shows an invocation of this component using the above configur
 </lst>
 ----
 
-[[TheTermVectorComponent-RequestParameters]]
-=== Request Parameters
+=== Term Vector Request Parameters
 
 The example below shows some of the available request parameters for this component:
 
@@ -168,7 +165,6 @@ To learn more about TermVector component output, see the Wiki page: http://wiki.
 
 For schema requirements, see also the section  <<field-properties-by-use-case.adoc#field-properties-by-use-case, Field Properties by Use Case>>.
 
-[[TheTermVectorComponent-SolrJandtheTermVectorComponent]]
 == SolrJ and the Term Vector Component
 
 Neither the `SolrQuery` class nor the `QueryResponse` class offers specific method calls to set Term Vector Component parameters or get the "termVectors" output. However, there is a patch for it: https://issues.apache.org/jira/browse/SOLR-949[SOLR-949].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc b/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
index a6883ec..29f829e 100644
--- a/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
+++ b/solr/solr-ref-guide/src/the-well-configured-solr-instance.adoc
@@ -37,7 +37,5 @@ This section covers the following topics:
 
 [IMPORTANT]
 ====
-
 The focus of this section is generally on configuring a single Solr instance, but for those interested in scaling a Solr implementation in a cluster environment, see also the section <<solrcloud.adoc#solrcloud,SolrCloud>>. There are also options to scale through sharding or replication, described in the section <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
index a3ea40e..30e32a7 100644
--- a/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
+++ b/solr/solr-ref-guide/src/transforming-and-indexing-custom-json.adoc
@@ -20,16 +20,29 @@
 
 If you have JSON documents that you would like to index without transforming them into Solr's structure, you can add them to Solr by including some parameters with the update request. These parameters provide information on how to split a single JSON file into multiple Solr documents and how to map fields to Solr's schema. One or more valid JSON documents can be sent to the `/update/json/docs` path with the configuration params.
 
-[[TransformingandIndexingCustomJSON-MappingParameters]]
 == Mapping Parameters
 
 These parameters allow you to define how a JSON file should be read for multiple Solr documents.
 
-* **split**: Defines the path at which to split the input JSON into multiple Solr documents and is required if you have multiple documents in a single JSON file. If the entire JSON makes a single solr document, the path must be “`/`”. It is possible to pass multiple split paths by separating them with a pipe `(|)` example : `split=/|/foo|/foo/bar` . If one path is a child of another, they automatically become a child document **f**: This is a multivalued mapping parameter. The format of the parameter is` target-field-name:json-path`. The `json-path` is required. The `target-field-name` is the Solr document field name, and is optional. If not specified, it is automatically derived from the input JSON.The default target field name is the fully qualified name of the field. Wildcards can be used here, see the <<TransformingandIndexingCustomJSON-Wildcards,Wildcards>> below for more information.
-* *mapUniqueKeyOnly* (boolean): This parameter is particularly convenient when the fields in the input JSON are not available in the schema and <<schemaless-mode.adoc#schemaless-mode,schemaless mode>> is not enabled. This will index all the fields into the default search field (using the `df` parameter, below) and only the `uniqueKey` field is mapped to the corresponding field in the schema. If the input JSON does not have a value for the `uniqueKey` field then a UUID is generated for the same.
-* **df**: If the `mapUniqueKeyOnly` flag is used, the update handler needs a field where the data should be indexed to. This is the same field that other handlers use as a default search field.
-* **srcField**: This is the name of the field to which the JSON source will be stored into. This can only be used if `split=/` (i.e., you want your JSON input file to be indexed as a single Solr document). Note that atomic updates will cause the field to be out-of-sync with the document.
-* **echo**: This is for debugging purpose only. Set it to true if you want the docs to be returned as a response. Nothing will be indexed.
+split::
+Defines the path at which to split the input JSON into multiple Solr documents and is required if you have multiple documents in a single JSON file. If the entire JSON makes a single Solr document, the path must be "`/`". It is possible to pass multiple split paths by separating them with a pipe (`|`), for example: `split=/|/foo|/foo/bar`. If one path is a child of another, they automatically become a child document.
+
+f::
+A multivalued mapping parameter. The format of the parameter is `target-field-name:json-path`. The `json-path` is required. The `target-field-name` is the Solr document field name, and is optional. If not specified, it is automatically derived from the input JSON. The default target field name is the fully qualified name of the field.
++
+Wildcards can be used here, see <<Using Wildcards for Field Names>> below for more information.
+
+mapUniqueKeyOnly::
+(boolean) This parameter is particularly convenient when the fields in the input JSON are not available in the schema and <<schemaless-mode.adoc#schemaless-mode,schemaless mode>> is not enabled. This will index all the fields into the default search field (using the `df` parameter, below) and only the `uniqueKey` field is mapped to the corresponding field in the schema. If the input JSON does not have a value for the `uniqueKey` field, a UUID is generated for it.
+
+df::
+If the `mapUniqueKeyOnly` flag is used, the update handler needs a field to index the data into. This is the same field that other handlers use as a default search field.
+
+srcField::
+This is the name of the field in which the JSON source will be stored. This can only be used if `split=/` (i.e., you want your JSON input file to be indexed as a single Solr document). Note that atomic updates will cause the field to be out-of-sync with the document.
+
+echo::
+This is for debugging purposes only. Set it to `true` if you want the docs to be returned as a response. Nothing will be indexed.
 
 For example, if we have a JSON file that includes two documents, we could define an update request like this:
 
@@ -152,15 +165,16 @@ In this example, we simply named the field paths (such as `/exams/test`). Solr w
 
 [TIP]
 ====
+Documents WILL get rejected if the fields do not exist in the schema before indexing. So, if you are NOT using schemaless mode, pre-create those fields. If you are working in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>, fields that don't exist will be created on the fly with Solr's best guess for the field type.
+====
 
-Documents WILL get rejected if the fields do not exist in the schema before indexing. So, if you are NOT using schemaless mode, pre-create those fields. If you are working in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>, fields that don't exist will be created on the fly with Solr's best guess for the field type. 
+== Using Wildcards for Field Names
 
-====
+Instead of specifying all the field names explicitly, it is possible to specify wildcards to map fields automatically.
 
-[[TransformingandIndexingCustomJSON-Wildcards]]
-== Wildcards
+There are two restrictions: wildcards can only be used at the end of the `json-path`, and the split path cannot use wildcards.
 
-Instead of specifying all the field names explicitly, it is possible to specify wildcards to map fields automatically. There are two restrictions: wildcards can only be used at the end of the `json-path`, and the split path cannot use wildcards. A single asterisk `\*` maps only to direct children, and a double asterisk `\*\*` maps recursively to all descendants. The following are example wildcard path mappings:
+A single asterisk `\*` maps only to direct children, and a double asterisk `\*\*` maps recursively to all descendants. The following are example wildcard path mappings:
 
 * `f=$FQN:/**`: maps all fields to the fully qualified name (`$FQN`) of the JSON field. The fully qualified name is obtained by concatenating all the keys in the hierarchy with a period (`.`) as a delimiter. This is the default behavior if no `f` path mappings are specified.
 * `f=/docs/*`: maps all the fields under `docs`, using the field names as they appear in the JSON
@@ -217,7 +231,7 @@ curl 'http://localhost:8983/solr/my_collection/update/json/docs'\
       "test"   : "term1",
       "marks"  : 86}
   ]
-}' 
+}'
 ----
 
 In the above example, we've said all of the fields should be added to a field in Solr named 'txt'. This will add multiple fields to a single field, so whatever field you choose should be multi-valued.
@@ -247,7 +261,7 @@ curl 'http://localhost:8983/solr/my_collection/update/json/docs?split=/exams'\
 
 The indexed documents would be added to the index with fields that look like this:
 
-[source,bash]
+[source,json]
 ----
 {
   "first":"John",
@@ -265,8 +279,7 @@ The indexed documents would be added to the index with fields that look like thi
   "exams.marks":86}
 ----
 
-[[TransformingandIndexingCustomJSON-MultipledocumentsinaSinglePayload]]
-== Multiple documents in a Single Payload
+== Multiple Documents in a Single Payload
 
 This functionality supports documents in the http://jsonlines.org/[JSON Lines] format (`.jsonl`), which specifies one document per line.
 
@@ -288,7 +301,6 @@ curl 'http://localhost:8983/solr/my_collection/update/json/docs' -H 'Content-typ
 { "first":"Steve", "last":"Woz", "grade":1, "subject": "Calculus", "test"   : "term1", "marks"  : 86}]'
 ----
 
-[[TransformingandIndexingCustomJSON-IndexingNestedDocuments]]
 == Indexing Nested Documents
 
 The following is an example of indexing nested documents:
@@ -332,14 +344,12 @@ With this example, the documents indexed would be, as follows:
       "zip":95014}]}
 ----
 
-[[TransformingandIndexingCustomJSON-TipsforCustomJSONIndexing]]
 == Tips for Custom JSON Indexing
 
-1.  Schemaless mode: This handles field creation automatically. The field guessing may not be exactly as you expect, but it works. The best thing to do is to setup a local server in schemaless mode, index a few sample docs and create those fields in your real setup with proper field types before indexing
-2.  Pre-created Schema : Post your docs to the `/update/json/docs` endpoint with `echo=true`. This gives you the list of field names you need to create. Create the fields before you actually index
-3.  No schema, only full-text search : All you need to do is to do full-text search on your JSON. Set the configuration as given in the Setting JSON Defaults section.
+. Schemaless mode: This handles field creation automatically. The field guessing may not be exactly as you expect, but it works. The best thing to do is to set up a local server in schemaless mode, index a few sample docs, and create those fields in your real setup with proper field types before indexing.
+. Pre-created Schema: Post your docs to the `/update/json/docs` endpoint with `echo=true`. This gives you the list of field names you need to create. Create the fields before you actually index.
+. No schema, only full-text search: All you need to do is full-text search on your JSON. Set the configuration as given in the Setting JSON Defaults section.
 
-[[TransformingandIndexingCustomJSON-SettingJSONDefaults]]
 == Setting JSON Defaults
 
 It is possible to send any JSON to the `/update/json/docs` endpoint, and the default configuration of the component is as follows:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/transforming-result-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/transforming-result-documents.adoc b/solr/solr-ref-guide/src/transforming-result-documents.adoc
index feb6931..9e1d4ad 100644
--- a/solr/solr-ref-guide/src/transforming-result-documents.adoc
+++ b/solr/solr-ref-guide/src/transforming-result-documents.adoc
@@ -20,7 +20,6 @@
 
 Document Transformers can be used to modify the information returned about each document in the results of a query.
 
-[[TransformingResultDocuments-UsingDocumentTransformers]]
 == Using Document Transformers
 
 When executing a request, a document transformer can be used by including it in the `fl` parameter using square brackets, for example:
@@ -46,11 +45,9 @@ fl=id,name,score,my_val_a:[value v=42 t=int],my_val_b:[value v=7 t=float]
 
 The sections below discuss exactly what these various transformers do.
 
-[[TransformingResultDocuments-AvailableTransformers]]
 == Available Transformers
 
 
-[[TransformingResultDocuments-_value_-ValueAugmenterFactory]]
 === [value] - ValueAugmenterFactory
 
 Modifies every document to include the exact same value, as if it were a stored field in every document:
@@ -94,7 +91,6 @@ In addition to using these request parameters, you can configure additional name
 The "```value```" option forces an explicit value to always be used, while the "```defaultValue```" option provides a default that can still be overridden using the "```v```" and "```t```" local parameters.
 
 
-[[TransformingResultDocuments-_explain_-ExplainAugmenterFactory]]
 === [explain] - ExplainAugmenterFactory
 
 Augments each document with an inline explanation of its score exactly like the information available about each document in the debug section:
@@ -128,8 +124,6 @@ A default style can be configured by specifying an "args" parameter in your conf
 </transformer>
 ----
 
-
-[[TransformingResultDocuments-_child_-ChildDocTransformerFactory]]
 === [child] - ChildDocTransformerFactory
 
 This transformer returns all <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,descendant documents>> of each parent document matching your query in a flat list nested inside the matching parent document. This is useful when you have indexed nested child documents and want to retrieve the child documents for the relevant parent documents for any type of search query.
@@ -147,7 +141,6 @@ When using this transformer, the `parentFilter` parameter must be specified, and
 * `limit` - the maximum number of child documents to be returned per parent document (default: 10)
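 
 Put together, a request using this transformer might be sketched like this; the collection name and `doc_type` values are assumptions about how the nested documents were indexed:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/my_collection/select' \
   --data-urlencode 'q=doc_type:book' \
   --data-urlencode 'fl=id,[child parentFilter=doc_type:book childFilter=doc_type:chapter limit=100]'
 ----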
 
 
-[[TransformingResultDocuments-_shard_-ShardAugmenterFactory]]
 === [shard] - ShardAugmenterFactory
 
 This transformer adds information about what shard each individual document came from in a distributed request.
@@ -155,7 +148,6 @@ This transformer adds information about what shard each individual document came
 ShardAugmenterFactory does not support any request parameters, or configuration options.
 
 
-[[TransformingResultDocuments-_docid_-DocIdAugmenterFactory]]
 === [docid] - DocIdAugmenterFactory
 
 This transformer adds the internal Lucene document id to each document – this is primarily only useful for debugging purposes.
@@ -163,7 +155,6 @@ This transformer adds the internal Lucene document id to each document – this
 DocIdAugmenterFactory does not support any request parameters, or configuration options.
 
 
-[[TransformingResultDocuments-_elevated_and_excluded_]]
 === [elevated] and [excluded]
 
 These transformers are available only when using the <<the-query-elevation-component.adoc#the-query-elevation-component,Query Elevation Component>>.
@@ -195,7 +186,6 @@ fl=id,[elevated],[excluded]&excludeIds=GB18030TEST&elevateIds=6H500F0&markExclud
 ----
 
 
-[[TransformingResultDocuments-_json_xml_]]
 === [json] / [xml]
 
 These transformers replace a field value containing a string representation of a valid XML or JSON structure with the actual raw XML or JSON structure rather than just the string value. Each applies only to the specific writer, such that `[json]` only applies to `wt=json` and `[xml]` only applies to `wt=xml`.
@@ -206,7 +196,6 @@ fl=id,source_s:[json]&wt=json
 ----
 
 
-[[TransformingResultDocuments-_subquery_]]
 === [subquery]
 
 This transformer executes a separate query for each transformed document, passing document fields as input for the subquery parameters. It is usually used with the `{!join}` and `{!parent}` query parsers, and is intended to be an improvement over `[child]`.
@@ -261,8 +250,7 @@ Here is how it looks like in various formats:
  SolrDocumentList subResults = (SolrDocumentList)doc.getFieldValue("children");
 ----
 
-[[TransformingResultDocuments-Subqueryresultfields]]
-==== Subquery result fields
+==== Subquery Result Fields
 
 To appear in the subquery document list, a field should be specified in both `fl` parameters: the main `fl` (even though the main result documents do not have this field) and the subquery's, e.g. `foo.fl`. Of course, you can use a wildcard in either or both of these parameters. For example, if the field `title` should appear in the `categories` subquery, it can be done in either of these ways:
 
@@ -274,14 +262,12 @@ fl=...*,categories:[subquery]&categories.fl=*&categories.q=...
 fl=...*,categories:[subquery]&categories.fl=*&categories.q=...
 ----
 
-[[TransformingResultDocuments-SubqueryParametersShift]]
 ==== Subquery Parameters Shift
 
 If the subquery is declared as `fl=*,foo:[subquery]`, subquery parameters are prefixed with the given name and a period. For example:
 
 `q=*:*&fl=*,**foo**:[subquery]&**foo.**q=to be continued&**foo.**rows=10&**foo.**sort=id desc`
 
-[[TransformingResultDocuments-DocumentFieldasanInputforSubqueryParameters]]
 ==== Document Field as an Input for Subquery Parameters
 
 It is often necessary to pass document field values as parameters for the subquery. This is supported via the implicit *`row.__fieldname__`* parameter, which can be referenced via local parameters syntax (among other ways): `q=name:john&fl=name,id,depts:[subquery]&depts.q={!terms f=id **v=$row.dept_id**}&depts.rows=10`
@@ -292,7 +278,6 @@ Note, when document field has multiple values they are concatenated with comma b
 
 To log substituted subquery request parameters, add the corresponding parameter names, as in `depts.logParamsList=q,fl,rows,**row.dept_id**`
 
-[[TransformingResultDocuments-CoresandCollectionsinSolrCloud]]
 ==== Cores and Collections in SolrCloud
 
 Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on the same node, it's what *`{!join}`* does for non-SolrCloud mode. But in case of SolrCloud just (and only) explicitly specify its' native parameters like `collection, shards` for subquery, eg:
@@ -301,13 +286,10 @@ Use `foo:[subquery fromIndex=departments]` to invoke subquery on another core on
 
 [IMPORTANT]
 ====
-
 If the subquery collection has a different unique key field name (say, `foo_id` in contrast to `id` in the primary collection), add the following parameters to accommodate this difference: `foo.fl=id:foo_id&foo.distrib.singlePass=true`. Otherwise you'll get a `NullPointerException` from `QueryComponent.mergeIds`.
-
 ====
 
 
-[[TransformingResultDocuments-_geo_-Geospatialformatter]]
 === [geo] - Geospatial formatter
 
 Formats spatial data from a spatial field using a designated format type name. Two inner parameters are required: `f` for the field name, and `w` for the format name. Example: `geojson:[geo f=mySpatialField w=GeoJSON]`.
@@ -317,7 +299,6 @@ Normally you'll simply be consistent in choosing the format type you want by set
 In addition, this feature is very useful with the `RptWithGeometrySpatialField` to avoid double-storage of the potentially large vector geometry. This transformer will detect that field type and fetch the geometry from an internal compact binary representation on disk (in docValues), and then format it as desired. As such, you needn't mark the field as stored, which would be redundant. In a sense this double-storage between docValues and stored-value storage isn't unique to spatial but with polygonal geometry it can be a lot of data, and furthermore you'd like to avoid storing it in a verbose format (like GeoJSON or WKT).
 
 
-[[TransformingResultDocuments-_features_-LTRFeatureLoggerTransformerFactory]]
 === [features] - LTRFeatureLoggerTransformerFactory
 
 The "LTR" prefix stands for <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>>. This transformer returns the values of features and it can be used for feature extraction and feature logging.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/uima-integration.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uima-integration.adoc b/solr/solr-ref-guide/src/uima-integration.adoc
index c7f9725..9255205 100644
--- a/solr/solr-ref-guide/src/uima-integration.adoc
+++ b/solr/solr-ref-guide/src/uima-integration.adoc
@@ -20,7 +20,6 @@
 
 You can integrate the Apache Unstructured Information Management Architecture (https://uima.apache.org/[UIMA]) with Solr. UIMA lets you define custom pipelines of Analysis Engines that incrementally add metadata to your documents as annotations.
 
-[[UIMAIntegration-ConfiguringUIMA]]
 == Configuring UIMA
 
 The SolrUIMA UpdateRequestProcessor is a custom update request processor that takes documents being indexed, sends them to a UIMA pipeline, and then returns the documents enriched with the specified metadata. To configure UIMA for Solr, follow these steps:
@@ -123,4 +122,3 @@ The SolrUIMA UpdateRequestProcessor is a custom update request processor that ta
 Once you are done with the configuration, your documents will be automatically enriched with the specified fields when you index them.
 
 For more information about Solr UIMA integration, see https://wiki.apache.org/solr/SolrUIMA.
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/understanding-analyzers-tokenizers-and-filters.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/understanding-analyzers-tokenizers-and-filters.adoc b/solr/solr-ref-guide/src/understanding-analyzers-tokenizers-and-filters.adoc
index 511a5e9..345634d 100644
--- a/solr/solr-ref-guide/src/understanding-analyzers-tokenizers-and-filters.adoc
+++ b/solr/solr-ref-guide/src/understanding-analyzers-tokenizers-and-filters.adoc
@@ -25,16 +25,12 @@ The following sections describe how Solr breaks down and works with textual data
 * <<about-tokenizers.adoc#about-tokenizers,Tokenizers>> break field data into lexical units, or _tokens_.
 * <<about-filters.adoc#about-filters,Filters>> examine a stream of tokens and keep them, transform or discard them, or create new ones. Tokenizers and filters may be combined to form pipelines, or _chains_, where the output of one is input to the next. Such a sequence of tokenizers and filters is called an _analyzer_ and the resulting output of an analyzer is used to match query results or build indices.
 
-
-[[UnderstandingAnalyzers_Tokenizers_andFilters-UsingAnalyzers_Tokenizers_andFilters]]
 == Using Analyzers, Tokenizers, and Filters
 
 Although the analysis process is used for both indexing and querying, the same analysis process need not be used for both operations. For indexing, you often want to simplify, or normalize, words. For example, setting all letters to lowercase, eliminating punctuation and accents, mapping words to their stems, and so on. Doing so can increase recall because, for example, "ram", "Ram" and "RAM" would all match a query for "ram". To increase query-time precision, a filter could be employed to narrow the matches by, for example, ignoring all-cap acronyms if you're interested in male sheep, but not Random Access Memory.
 
 The tokens output by the analysis process define the values, or _terms_, of that field and are used either to build an index of those terms when a new document is added, or to identify which documents contain the terms you are querying for.
 
-
-[[UnderstandingAnalyzers_Tokenizers_andFilters-ForMoreInformation]]
 === For More Information
 
 These sections will show you how to configure field analyzers and serve as a reference for the details of configuring each of the available tokenizer and filter classes. They also serve as a guide so that you can configure your own analysis classes if you have special needs that cannot be met with the included filters or tokenizers.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/update-request-processors.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/update-request-processors.adoc b/solr/solr-ref-guide/src/update-request-processors.adoc
index 37cdebb..cbc6013 100644
--- a/solr/solr-ref-guide/src/update-request-processors.adoc
+++ b/solr/solr-ref-guide/src/update-request-processors.adoc
@@ -22,8 +22,7 @@ Every update request received by Solr is run through a chain of plugins known as
 
 This can be useful, for example, to add a field to the document being indexed; to change the value of a particular field; or to drop an update if the incoming document doesn't fulfill certain criteria. In fact, a surprisingly large number of features in Solr are implemented as Update Processors, so it is necessary to understand how such plugins work and where they are configured.
 
-[[UpdateRequestProcessors-AnatomyandLifecycle]]
-== Anatomy and Lifecycle
+== URP Anatomy and Lifecycle
 
 An Update Request Processor is created as part of a {solr-javadocs}/solr-core/org/apache/solr/update/processor/UpdateRequestProcessorChain.html[chain] of one or more update processors. Solr creates a default update request processor chain comprising a few update request processors which enable essential Solr features. This default chain is used to process every update request unless a user chooses to configure and specify a different custom update request processor chain.
 
@@ -38,14 +37,12 @@ When an update request is received by Solr, it looks up the update chain to be u
 
 NOTE: A single update request may contain a batch of multiple new documents or deletes and therefore the corresponding processXXX methods of an UpdateRequestProcessor will be invoked multiple times for every individual update. However, it is guaranteed that a single thread will serially invoke these methods.
 
-[[UpdateRequestProcessors-Configuration]]
-== Configuration
+== Update Request Processor Configuration
 
 Update request processor chains can be created either by defining the whole chain directly in `solrconfig.xml`, or by configuring individual update processors in `solrconfig.xml` and then composing the chain dynamically at run time by specifying all processors via request parameters.
 
 However, before we understand how to configure update processor chains, we must learn about the default update processor chain because it provides essential features which are needed in most custom request processor chains as well.
 
-[[UpdateRequestProcessors-DefaultUpdateRequestProcessorChain]]
 === Default Update Request Processor Chain
 
 If no update processor chains are configured in `solrconfig.xml`, Solr will automatically create a default update processor chain which will be used for all update requests. This default update processor chain consists of the following processors (in order):
@@ -56,7 +53,6 @@ In case no update processor chains are configured in `solrconfig.xml`, Solr will
 
 Each of these performs an essential function, and as such any custom chain usually contains all of these processors. The `RunUpdateProcessorFactory` is usually the last update processor in any custom chain.
 
-[[UpdateRequestProcessors-CustomUpdateRequestProcessorChain]]
 === Custom Update Request Processor Chain
 
 The following example demonstrates how a custom chain can be configured inside `solrconfig.xml`.
@@ -85,7 +81,6 @@ In the above example, a new update processor chain named "dedupe" is created wit
 Do not forget to add `RunUpdateProcessorFactory` at the end of any chains you define in `solrconfig.xml`. Otherwise update requests processed by that chain will not actually affect the indexed data.
 ====
 
-[[UpdateRequestProcessors-ConfiguringIndividualProcessorsasTop-LevelPlugins]]
 === Configuring Individual Processors as Top-Level Plugins
 
 Update request processors can also be configured independently of a chain in `solrconfig.xml`.
@@ -113,7 +108,6 @@ In this case, an instance of `SignatureUpdateProcessorFactory` is configured wit
 </updateProcessorChain>
 ----
 
-[[UpdateRequestProcessors-UpdateProcessorsinSolrCloud]]
 == Update Processors in SolrCloud
 
 In a single-node, stand-alone Solr installation, each update is run through all the update processors in a chain exactly once. But the behavior of update request processors in SolrCloud deserves special consideration.
@@ -158,10 +152,8 @@ If the `AtomicUpdateProcessorFactory` is in the update chain before the `Distrib
 Because `DistributedUpdateProcessor` is responsible for processing <<updating-parts-of-documents.adoc#updating-parts-of-documents,Atomic Updates>> into full documents on the leader node, pre-processors which are executed only on the forwarding nodes can only operate on the partial document. If you have a processor which must process a full document, then the only choice is to specify it as a post-processor.
 
 
-[[UpdateRequestProcessors-UsingCustomChains]]
 == Using Custom Chains
 
-[[UpdateRequestProcessors-update.chainRequestParameter]]
 === update.chain Request Parameter
 
 The `update.chain` parameter can be used in any update request to choose a custom chain which has been configured in `solrconfig.xml`. For example, in order to choose the "dedupe" chain described in a previous section, one can issue the following request:
@@ -187,7 +179,6 @@ curl "http://localhost:8983/solr/gettingstarted/update/json?update.chain=dedupe&
 The above should dedupe the two identical documents and index only one of them.
 
 
-[[UpdateRequestProcessors-Processor_Post-ProcessorRequestParameters]]
 === Processor & Post-Processor Request Parameters
 
 We can dynamically construct a custom update request processor chain using the `processor` and `post-processor` request parameters. Multiple processors can be specified as a comma-separated value for these two parameters. For example:
@@ -232,7 +223,6 @@ curl "http://localhost:8983/solr/gettingstarted/update/json?processor=remove_bla
 
 In the first example, Solr will dynamically create a chain which has "signature" and "remove_blanks" as pre-processors to be executed only on the forwarding node, whereas in the second example, "remove_blanks" will be executed as a pre-processor and "signature" will be executed on the leader and replicas as a post-processor.
 
-[[UpdateRequestProcessors-ConfiguringaCustomChainasaDefault]]
 === Configuring a Custom Chain as a Default
 
 We can also specify a custom chain to be used by default for all requests sent to specific update handlers instead of specifying the names in request parameters for each request.
@@ -263,12 +253,10 @@ Alternately, one can achieve a similar effect using the "defaults" as shown in t
 </requestHandler>
 ----
 
-[[UpdateRequestProcessors-UpdateRequestProcessorFactories]]
 == Update Request Processor Factories
 
 What follows are brief descriptions of the currently available update request processors. An `UpdateRequestProcessorFactory` can be integrated into an update chain in `solrconfig.xml` as necessary. You are strongly urged to examine the Javadocs for these classes; these descriptions are abridged snippets taken for the most part from the Javadocs.
 
-[[UpdateRequestProcessors-GeneralUseUpdateProcessorFactories]]
 === General Use UpdateProcessorFactories
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/AddSchemaFieldsUpdateProcessorFactory.html[AddSchemaFieldsUpdateProcessorFactory]:: This processor will dynamically add fields to the schema if an input document contains one or more fields that don't match any field or dynamic field in the schema.
@@ -300,7 +288,6 @@ What follows are brief descriptions of the currently available update request pr
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/UUIDUpdateProcessorFactory.html[UUIDUpdateProcessorFactory]:: An update processor that adds a newly generated UUID value to any document being added that does not already have a value in the specified field.
 
-[[UpdateRequestProcessors-FieldMutatingUpdateProcessorFactoryDerivedFactories]]
 === FieldMutatingUpdateProcessorFactory Derived Factories
 
 These factories all provide functionality to _modify_ fields in a document as they're being indexed. When using any of these factories, please consult the {solr-javadocs}/solr-core/org/apache/solr/update/processor/FieldMutatingUpdateProcessorFactory.html[FieldMutatingUpdateProcessorFactory javadocs] for details on the common options they all support for configuring which fields are modified.
@@ -349,7 +336,6 @@ These factories all provide functionality to _modify_ fields in a document as th
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/UniqFieldsUpdateProcessorFactory.html[UniqFieldsUpdateProcessorFactory]:: Removes duplicate values found in fields matching the specified conditions.
 
-[[UpdateRequestProcessors-UpdateProcessorFactoriesThatCanBeLoadedasPlugins]]
 === Update Processor Factories That Can Be Loaded as Plugins
 
 These processors are included in Solr releases as "contribs", and require additional jars loaded at runtime. See the README files associated with each contrib for details:
@@ -364,7 +350,6 @@ The {solr-javadocs}/solr-uima/index.html[`uima`] contrib provides::
 
 {solr-javadocs}/solr-uima/org/apache/solr/uima/processor/UIMAUpdateRequestProcessorFactory.html[UIMAUpdateRequestProcessorFactory]::: Update document(s) to be indexed with UIMA extracted information.
 
-[[UpdateRequestProcessors-UpdateProcessorFactoriesYouShouldNotModifyorRemove]]
 === Update Processor Factories You Should _Not_ Modify or Remove
 
 These are listed for completeness, but are part of the Solr infrastructure, particularly SolrCloud. Other than ensuring you do _not_ remove them when modifying the update request handlers (or any copies you make), you will rarely, if ever, need to change these.
@@ -377,11 +362,9 @@ These are listed for completeness, but are part of the Solr infrastructure, part
 
 {solr-javadocs}/solr-core/org/apache/solr/update/processor/RunUpdateProcessorFactory.html[RunUpdateProcessorFactory]:: Executes the update commands using the underlying UpdateHandler. Almost all processor chains should end with an instance of `RunUpdateProcessorFactory` unless the user is explicitly executing the update commands in an alternative custom `UpdateRequestProcessorFactory`.
 
-[[UpdateRequestProcessors-UpdateProcessorsThatCanBeUsedatRuntime]]
 === Update Processors That Can Be Used at Runtime
 These update processors do not need any configuration in your `solrconfig.xml`. They are automatically initialized when their name is added to the `processor` parameter. Multiple processors can be used by appending multiple processor names (comma separated).
 
-[[UpdateRequestProcessors-TemplateUpdateProcessorFactory]]
 ==== TemplateUpdateProcessorFactory
 
 The `TemplateUpdateProcessorFactory` can be used to add new fields to documents based on a template pattern.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
index 00b825a..24a7ac9 100644
--- a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
+++ b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
@@ -28,7 +28,6 @@ The steps outlined on this page assume you use the default service name of "```s
 
 ====
 
-[[UpgradingaSolrCluster-PlanningYourUpgrade]]
 == Planning Your Upgrade
 
 Here is a checklist of things you need to prepare before starting the upgrade process:
@@ -49,19 +48,16 @@ If you are upgrading from an installation of Solr 5.x or later, these values can
 
 You should now be ready to upgrade your cluster. Please verify this process in a test / staging cluster before doing it in production.
 
-[[UpgradingaSolrCluster-UpgradeProcess]]
 == Upgrade Process
 
 The approach we recommend is to perform the upgrade of each Solr node, one-by-one. In other words, you will need to stop a node, upgrade it to the new version of Solr, and restart it before moving on to the next node. This means that for a short period of time, there will be a mix of "Old Solr" and "New Solr" nodes running in your cluster. We also assume that you will point the new Solr node to your existing Solr home directory where the Lucene index files are managed for each collection on the node. This means that you won't need to move any index files around to perform the upgrade.
 
 
-[[UpgradingaSolrCluster-Step1_StopSolr]]
 === Step 1: Stop Solr
 
 Begin by stopping the Solr node you want to upgrade. After stopping the node, if using replication (i.e., collections with replicationFactor > 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the <<cloud-screens.adoc#cloud-screens,Cloud panel in the Solr Admin UI>>. If not using replication, then any collections with shards hosted on the downed node will be temporarily off-line.
 
 
-[[UpgradingaSolrCluster-Step2_InstallSolrasaService]]
 === Step 2: Install Solr as a Service
 
 Please follow the instructions to install Solr as a Service on Linux documented at <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>. Use the `-n` parameter to avoid automatic start of Solr by the installer script. You need to update the `/etc/default/solr.in.sh` include file in the next step to complete the upgrade process.
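 
 For example, extracting the installer from the new release and installing without auto-start might look like this (a sketch only; the archive name depends on the version you are installing):
 
 [source,bash]
 ----
 tar xzf solr-6.6.0.tgz solr-6.6.0/bin/install_solr_service.sh --strip-components=2
 sudo bash ./install_solr_service.sh solr-6.6.0.tgz -n
 ----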
@@ -74,7 +70,6 @@ If you have a `/var/solr/solr.in.sh` file for your existing Solr install, runnin
 ====
 
 
-[[UpgradingaSolrCluster-Step3_SetEnvironmentVariableOverrides]]
 === Step 3: Set Environment Variable Overrides
 
 Open `/etc/default/solr.in.sh` with a text editor and verify that the following variables are set correctly, or add them to the bottom of the include file as needed:
@@ -84,13 +79,10 @@ Open `/etc/default/solr.in.sh` with a text editor and verify that the following
 Make sure the user who will own the Solr process is also the owner of the `SOLR_HOME` directory. For instance, if you plan to run Solr as the "solr" user and `SOLR_HOME` is `/var/solr/data`, then you would do: `sudo chown -R solr: /var/solr/data`
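 
 As an illustration, the overrides in `/etc/default/solr.in.sh` might look something like this (the values shown are examples only; use the values you noted while planning your upgrade):
 
 [source,bash]
 ----
 SOLR_HOME=/var/solr/data
 SOLR_PORT=8983
 SOLR_HOST=solr1.example.com
 ZK_HOST=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
 ----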
 
 
-[[UpgradingaSolrCluster-Step4_StartSolr]]
 === Step 4: Start Solr
 
 You are now ready to start the upgraded Solr node by doing: `sudo service solr start`. The upgraded instance will join the existing cluster because you're using the same `SOLR_HOME`, `SOLR_PORT`, and `SOLR_HOST` settings used by the old Solr node; thus, the new server will look like the old node to the running cluster. Be sure to look in `/var/solr/logs/solr.log` for errors during startup.
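 
 For example, on the upgraded node:
 
 [source,bash]
 ----
 sudo service solr start
 # check the log for errors during startup
 tail -f /var/solr/logs/solr.log
 ----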
 
-
-[[UpgradingaSolrCluster-Step5_RunHealthcheck]]
 === Step 5: Run Healthcheck
 
 You should run the Solr *healthcheck* command for all collections that are hosted on the upgraded node before proceeding to upgrade the next node in your cluster. For instance, if the newly upgraded node hosts a replica for the *MyDocuments* collection, then you can run the following command (replace ZK_HOST with the ZooKeeper connection string):
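 
 A sketch of such a command using the `bin/solr` script (replace ZK_HOST with your ZooKeeper connection string):
 
 [source,bash]
 ----
 bin/solr healthcheck -c MyDocuments -z ZK_HOST
 ----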

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/upgrading-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/upgrading-solr.adoc b/solr/solr-ref-guide/src/upgrading-solr.adoc
index e41b93b..6a60b8d 100644
--- a/solr/solr-ref-guide/src/upgrading-solr.adoc
+++ b/solr/solr-ref-guide/src/upgrading-solr.adoc
@@ -20,7 +20,6 @@
 
 If you are already using Solr 6.5, Solr 6.6 should not present any major problems. However, you should review the {solr-javadocs}/changes/Changes.html[`CHANGES.txt`] file found in your Solr package for changes and updates that may affect your existing implementation. Detailed steps for upgrading a Solr cluster can be found in the appendix: <<upgrading-a-solr-cluster.adoc#upgrading-a-solr-cluster,Upgrading a Solr Cluster>>.
 
-[[UpgradingSolr-Upgradingfrom6.5.x]]
 == Upgrading from 6.5.x
 
 * Solr contribs map-reduce, morphlines-core and morphlines-cell have been removed.
@@ -29,7 +28,6 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 
 * ZooKeeper dependency has been upgraded from 3.4.6 to 3.4.10.
 
-[[UpgradingSolr-Upgradingfromearlier6.xversions]]
 == Upgrading from earlier 6.x versions
 
 * If you use historical dates, specifically on or before the year 1582, you should re-index after upgrading to this version.
@@ -52,7 +50,6 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 * Index-time boosts are now deprecated. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. These boosts will be removed in Solr 7.0.
 * Parallel SQL now uses Apache Calcite as its SQL framework. As part of this change the default aggregation mode has been changed to facet rather than map_reduce. There have also been changes to the SQL aggregate response and some SQL syntax changes. Consult the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>> documentation for full details.
 
-[[UpgradingSolr-Upgradingfrom5.5.x]]
 == Upgrading from 5.5.x
 
 * The deprecated `SolrServer` and subclasses have been removed, use <<using-solrj.adoc#using-solrj,`SolrClient`>> instead.
@@ -74,7 +71,6 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 * <<using-solrj.adoc#using-solrj,SolrJ>> no longer includes `DateUtil`. If for some reason you need to format or parse dates, simply use `Instant.format()` and `Instant.parse()`.
 * If you are using spatial4j, please upgrade to 0.6 and <<spatial-search.adoc#spatial-search,edit your `spatialContextFactory`>> to replace `com.spatial4j.core` with `org.locationtech.spatial4j` .
 
-[[UpgradingSolr-UpgradingfromOlderVersionsofSolr]]
 == Upgrading from Older Versions of Solr
 
 Users upgrading from older versions are strongly encouraged to consult {solr-javadocs}/changes/Changes.html[`CHANGES.txt`] for the details of _all_ changes since the version they are upgrading from.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/74ab1616/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
index cdd9539..1489d16 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
@@ -26,8 +26,7 @@ If you want to supply your own `ContentHandler` for Solr to use, you can extend
 
 For more information on Solr's Extracting Request Handler, see https://wiki.apache.org/solr/ExtractingRequestHandler.
 
-[[UploadingDatawithSolrCellusingApacheTika-KeyConcepts]]
-== Key Concepts
+== Key Solr Cell Concepts
 
 When using the Solr Cell framework, it is helpful to keep the following in mind:
 
@@ -42,12 +41,9 @@ When using the Solr Cell framework, it is helpful to keep the following in mind:
 
 [TIP]
 ====
-
 While Apache Tika is quite powerful, it is not perfect and fails on some files. PDF files are particularly problematic, mostly due to the PDF format itself. In case of a failure processing any file, the `ExtractingRequestHandler` does not have a secondary mechanism to try to extract some text from the file; it will throw an exception and fail.
-
 ====
 
-[[UploadingDatawithSolrCellusingApacheTika-TryingoutTikawiththeSolrtechproductsExample]]
 == Trying out Tika with the Solr techproducts Example
 
 You can try out the Tika framework using the `techproducts` example included in Solr.
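 
 For instance, start the example configuration (which includes the Extracting Request Handler) with:
 
 [source,bash]
 ----
 bin/solr -e techproducts
 ----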
@@ -96,8 +92,7 @@ In this command, the `uprefix=attr_` parameter causes all generated fields that
 
 This command allows you to query the document using an attribute, as in: `\http://localhost:8983/solr/techproducts/select?q=attr_meta:microsoft`.
 
-[[UploadingDatawithSolrCellusingApacheTika-InputParameters]]
-== Input Parameters
+== Solr Cell Input Parameters
 
 The table below describes the parameters accepted by the Extracting Request Handler.
 
@@ -158,8 +153,6 @@ Prefixes all fields that are not defined in the schema with the given prefix. Th
 `xpath`::
 When extracting, only return Tika XHTML content that satisfies the given XPath expression. See http://tika.apache.org/1.7/index.html for details on the format of Tika XHTML. See also http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
 
-
-[[UploadingDatawithSolrCellusingApacheTika-OrderofOperations]]
 == Order of Operations
 
 Here is the order in which the Solr Cell framework, using the Extracting Request Handler and Tika, processes its input.
@@ -169,7 +162,6 @@ Here is the order in which the Solr Cell framework, using the Extracting Request
 .  Tika applies the mapping rules specified by `fmap.__source__=__target__` parameters.
 .  If `uprefix` is specified, any unknown field names are prefixed with that value, else if `defaultField` is specified, any unknown fields are copied to the default field.
 
-[[UploadingDatawithSolrCellusingApacheTika-ConfiguringtheSolrExtractingRequestHandler]]
 == Configuring the Solr ExtractingRequestHandler
 
 If you are not working with the supplied `sample_techproducts_configs` or `_default` <<config-sets.adoc#config-sets,config set>>, you must configure your own `solrconfig.xml` to know about the jars containing the `ExtractingRequestHandler` and its dependencies:
@@ -216,7 +208,6 @@ The `tika.config` entry points to a file containing a Tika configuration. The `d
 * `EEEE, dd-MMM-yy HH:mm:ss zzz`
 * `EEE MMM d HH:mm:ss yyyy`
 
-[[UploadingDatawithSolrCellusingApacheTika-Parserspecificproperties]]
 === Parser-Specific Properties
 
 Parsers used by Tika may have specific properties to govern how data is extracted. For instance, when using the Tika library from a Java program, the `PDFParserConfig` class has a method `setSortByPosition(boolean)` that can extract vertically oriented text. To access that method via configuration with the `ExtractingRequestHandler`, one can add the `parseContext.config` property to the `solrconfig.xml` file (see above) and then set properties in Tika's `PDFParserConfig` as below. Consult the Tika Java API documentation for configuration parameters that can be set for any particular parsers that require this level of control.
@@ -232,14 +223,12 @@ Parsers used by Tika may have specific properties to govern how data is extracte
 </entries>
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-Multi-CoreConfiguration]]
 === Multi-Core Configuration
 
 For a multi-core configuration, you can specify `sharedLib='lib'` in the `<solr/>` section of `solr.xml` and place the necessary jar files there.
 
 For more information about Solr cores, see <<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>.
 
-[[UploadingDatawithSolrCellusingApacheTika-IndexingEncryptedDocumentswiththeExtractingUpdateRequestHandler]]
 == Indexing Encrypted Documents with the ExtractingUpdateRequestHandler
 
 The ExtractingRequestHandler will decrypt encrypted files and index their content if you supply a password in either `resource.password` on the request, or in a `passwordsFile` file.
@@ -254,11 +243,9 @@ myFileName = myPassword
 .*\.pdf$ = myPdfPassword
 ----
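 
 Alternatively, the password can be supplied per request via `resource.password`; a hypothetical example (the file name shown here is illustrative, not part of the shipped examples):
 
 [source,bash]
 ----
 bin/post -c techproducts -params "literal.id=doc7&resource.password=somePassword" example/exampledocs/encrypted-file.pdf
 ----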
 
-[[UploadingDatawithSolrCellusingApacheTika-Examples]]
-== Examples
+== Solr Cell Examples
 
-[[UploadingDatawithSolrCellusingApacheTika-Metadata]]
-=== Metadata
+=== Metadata Created by Tika
 
 As mentioned before, Tika produces metadata about the document. Metadata describes different aspects of a document, such as the author's name, the number of pages, the file size, and so on. The metadata produced depends on the type of document submitted. For instance, PDFs have different metadata than Word documents do.
 
@@ -277,17 +264,10 @@ The size of the stream in bytes.
 The content type of the stream, if available.
 
 
-[IMPORTANT]
-====
-
-We recommend that you try using the `extractOnly` option to discover which values Solr is setting for these metadata elements.
-
-====
+IMPORTANT: We recommend that you try using the `extractOnly` option to discover which values Solr is setting for these metadata elements.
 
-[[UploadingDatawithSolrCellusingApacheTika-ExamplesofUploadsUsingtheExtractingRequestHandler]]
 === Examples of Uploads Using the Extracting Request Handler
 
-[[UploadingDatawithSolrCellusingApacheTika-CaptureandMapping]]
 ==== Capture and Mapping
 
 The command below captures `<div>` tags separately, and then maps all the instances of that field to a dynamic field named `foo_t`.
@@ -297,18 +277,6 @@ The command below captures `<div>` tags separately, and then maps all the instan
 bin/post -c techproducts example/exampledocs/sample.html -params "literal.id=doc2&captureAttr=true&defaultField=_text_&fmap.div=foo_t&capture=div"
 ----
 
-
-[[UploadingDatawithSolrCellusingApacheTika-Capture_Mapping]]
-==== Capture & Mapping
-
-The command below captures `<div>` tags separately and maps the field to a dynamic field named `foo_t`.
-
-[source,bash]
-----
-bin/post -c techproducts example/exampledocs/sample.html -params "literal.id=doc3&captureAttr=true&defaultField=_text_&capture=div&fmap.div=foo_t"
-----
-
-[[UploadingDatawithSolrCellusingApacheTika-UsingLiteralstoDefineYourOwnMetadata]]
 ==== Using Literals to Define Your Own Metadata
 
 To add your own metadata, pass in the `literal` parameter along with the file:
@@ -318,8 +286,7 @@ To add in your own metadata, pass in the literal parameter along with the file:
 bin/post -c techproducts -params "literal.id=doc4&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&literal.blah_s=Bah" example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-XPath]]
-==== XPath
+==== XPath Expressions
 
 The example below passes in an XPath expression to restrict the XHTML returned by Tika:
 
@@ -328,7 +295,6 @@ The example below passes in an XPath expression to restrict the XHTML returned b
 bin/post -c techproducts -params "literal.id=doc5&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&xpath=/xhtml:html/xhtml:body/xhtml:div//node()" example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-ExtractingDatawithoutIndexingIt]]
 === Extracting Data without Indexing It
 
 Solr allows you to extract data without indexing. You might want to do this if you're using Solr solely as an extraction server or if you're interested in testing Solr extraction.
@@ -347,7 +313,6 @@ The output includes XML generated by Tika (and further escaped by Solr's XML) us
 bin/post -c techproducts -params "extractOnly=true&wt=ruby&indent=true" -out yes example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-SendingDocumentstoSolrwithaPOST]]
 == Sending Documents to Solr with a POST
 
 The example below streams the file as the body of the POST, which therefore does not provide Solr with any information about the name of the file.
@@ -357,7 +322,6 @@ The example below streams the file as the body of the POST, which does not, then
 curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=doc6&defaultField=text&commit=true" --data-binary @example/exampledocs/sample.html -H 'Content-type:text/html'
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-SendingDocumentstoSolrwithSolrCellandSolrJ]]
 == Sending Documents to Solr with Solr Cell and SolrJ
 
 SolrJ is a Java client that you can use to add documents to the index, update the index, or query the index. You'll find more information on SolrJ in <<client-apis.adoc#client-apis,Client APIs>>.