Posted to commits@lucene.apache.org by ct...@apache.org on 2017/07/14 18:35:06 UTC

[08/11] lucene-solr:branch_7_0: SOLR-11050: remove Confluence-style anchors and fix all incoming links

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/index-replication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/index-replication.adoc b/solr/solr-ref-guide/src/index-replication.adoc
index 774b78c..8c51341 100644
--- a/solr/solr-ref-guide/src/index-replication.adoc
+++ b/solr/solr-ref-guide/src/index-replication.adoc
@@ -26,7 +26,6 @@ The figure below shows a Solr configuration using index replication. The master
 image::images/index-replication/worddav2b7e14725d898b4104cdd9c502fc77cd.png[image,width=159,height=235]
 
 
-[[IndexReplication-IndexReplicationinSolr]]
 == Index Replication in Solr
 
 Solr includes a Java implementation of index replication that works over HTTP:
@@ -46,7 +45,6 @@ Although there is no explicit concept of "master/slave" nodes in a <<solrcloud.a
 When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path. Solr does this implicitly unless overridden explicitly in your `solrconfig.xml`, but if you wish to override the default behavior, make certain that you do not explicitly set any of the "master" or "slave" configuration options mentioned below, or they will interfere with normal SolrCloud operation.
 ====
 
-[[IndexReplication-ReplicationTerminology]]
 == Replication Terminology
 
 The table below defines the key terms associated with Solr replication.
@@ -79,15 +77,13 @@ Snapshot::
 A directory containing hard links to the data files of an index. Snapshots are distributed from the master nodes when the slaves pull them, "smart copying" any segments the slave node does not have into a snapshot directory that contains hard links to the most recent index data files.
 
 
-[[IndexReplication-ConfiguringtheReplicationHandler]]
 == Configuring the ReplicationHandler
 
 In addition to `ReplicationHandler` configuration options specific to the master/slave roles, there are a few special configuration options that are generally supported (even when using SolrCloud).
 
 * `maxNumberOfBackups`: an integer value dictating the maximum number of backups this node will keep on disk as it receives `backup` commands.
-* Similar to most other request handlers in Solr you may configure a set of <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,defaults, invariants, and/or appends>> parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<IndexReplication-HTTPAPICommandsfortheReplicationHandler,processing commands>>.
+* Similar to most other request handlers in Solr you may configure a set of <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#searchhandlers,defaults, invariants, and/or appends>> parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<HTTP API Commands for the ReplicationHandler,processing commands>>.
 
-[[IndexReplication-ConfiguringtheReplicationRequestHandleronaMasterServer]]
 === Configuring the Replication RequestHandler on a Master Server
 
 Before running a replication, you should set the following parameters on initialization of the handler:
@@ -125,7 +121,6 @@ The example below shows a possible 'master' configuration for the `ReplicationHa
 </requestHandler>
 ----
 
-[[IndexReplication-Replicatingsolrconfig.xml]]
 ==== Replicating solrconfig.xml
 
 In the configuration file on the master server, include a line like the following:
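For example, a line of the following shape renames the file as it is replicated (the file names here are illustrative):

[source,xml]
----
<str name="confFiles">solrconfig_slave.xml:solrconfig.xml,x.xml,y.xml</str>
----
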
@@ -139,7 +134,6 @@ This ensures that the local configuration `solrconfig_slave.xml` will be saved a
 
 On the master server, the file name of the slave configuration file can be anything, as long as the name is correctly identified in the `confFiles` string; then it will be saved as whatever file name appears after the colon ':'.
 
-[[IndexReplication-ConfiguringtheReplicationRequestHandleronaSlaveServer]]
 === Configuring the Replication RequestHandler on a Slave Server
 
 The code below shows how to configure a ReplicationHandler on a slave.
@@ -188,7 +182,6 @@ The code below shows how to configure a ReplicationHandler on a slave.
 </requestHandler>
 ----
 
-[[IndexReplication-SettingUpaRepeaterwiththeReplicationHandler]]
 == Setting Up a Repeater with the ReplicationHandler
 
 A master may be able to serve only so many slaves without affecting performance. Some organizations have deployed slave servers across multiple data centers. If each slave downloads the index from a remote data center, the resulting download may consume too much network bandwidth. To avoid performance degradation in cases like this, you can configure one or more slaves as repeaters. A repeater is simply a node that acts as both a master and a slave.
@@ -213,7 +206,6 @@ Here is an example of a ReplicationHandler configuration for a repeater:
 </requestHandler>
 ----
 
-[[IndexReplication-CommitandOptimizeOperations]]
 == Commit and Optimize Operations
 
 When a commit or optimize operation is performed on the master, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the `replicateAfter` parameter in the configuration to decide which types of events should trigger replication.
@@ -233,7 +225,6 @@ The `replicateAfter` parameter can accept multiple arguments. For example:
 <str name="replicateAfter">optimize</str>
 ----
 
-[[IndexReplication-SlaveReplication]]
 == Slave Replication
 
 The master is totally unaware of the slaves.
@@ -246,7 +237,6 @@ The slave continuously keeps polling the master (depending on the `pollInterval`
 * After the download completes, all the new files are moved to the live index directory and each file's timestamp is the same as its counterpart's on the master.
 * A commit command is issued on the slave by the Slave's ReplicationHandler and the new index is loaded.
 
-[[IndexReplication-ReplicatingConfigurationFiles]]
 === Replicating Configuration Files
 
 To replicate configuration files, list them using the `confFiles` parameter. Only files found in the `conf` directory of the master's Solr instance will be replicated.
@@ -259,7 +249,6 @@ As a precaution when replicating configuration files, Solr copies configuration
 
 If a replication involves downloading at least one configuration file, the ReplicationHandler issues a core-reload command instead of a commit command.
 
-[[IndexReplication-ResolvingCorruptionIssuesonSlaveServers]]
 === Resolving Corruption Issues on Slave Servers
 
 If documents are added to the slave, then the slave is no longer in sync with its master. However, the slave will not undertake any action to put itself in sync until the master has new index data.
@@ -268,7 +257,6 @@ When a commit operation takes place on the master, the index version of the mast
 
 To correct this problem, the slave then copies all the index files from the master to a new index directory and asks the core to load the fresh index from the new directory.
 
-[[IndexReplication-HTTPAPICommandsfortheReplicationHandler]]
 == HTTP API Commands for the ReplicationHandler
 
 You can use the HTTP commands below to control the ReplicationHandler's operations.
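For example, a request of the following shape asks a node for its current index version (host, port, and core name are illustrative):

[source,text]
----
http://localhost:8983/solr/techproducts/replication?command=indexversion
----
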
@@ -355,7 +343,6 @@ There are two supported parameters:
 * `location`: Location where the snapshot is created.
 
 
-[[IndexReplication-DistributionandOptimization]]
 == Distribution and Optimization
 
 Optimizing an index is not something most users should generally worry about, but users should be aware of the impact of optimizing an index when using the `ReplicationHandler`.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
index 63ab26d..a592a2d 100644
--- a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
@@ -29,10 +29,8 @@ By default, the settings are commented out in the sample `solrconfig.xml` includ
 </indexConfig>
 ----
 
-[[IndexConfiginSolrConfig-WritingNewSegments]]
 == Writing New Segments
 
-[[IndexConfiginSolrConfig-ramBufferSizeMB]]
 === ramBufferSizeMB
 
 Once accumulated document updates exceed this much memory space (defined in megabytes), the pending updates are flushed. This can also create new segments or trigger a merge. Using this setting is generally preferable to `maxBufferedDocs`. If both `maxBufferedDocs` and `ramBufferSizeMB` are set in `solrconfig.xml`, then a flush will occur when either limit is reached. The default is 100 MB.
@@ -42,7 +40,6 @@ Once accumulated document updates exceed this much memory space (defined in mega
 <ramBufferSizeMB>100</ramBufferSizeMB>
 ----
 
-[[IndexConfiginSolrConfig-maxBufferedDocs]]
 === maxBufferedDocs
 
 Sets the number of document updates to buffer in memory before they are flushed as a new segment. This may also trigger a merge. The default Solr configuration flushes by RAM usage (`ramBufferSizeMB`).
@@ -52,20 +49,17 @@ Sets the number of document updates to buffer in memory before they are flushed
 <maxBufferedDocs>1000</maxBufferedDocs>
 ----
 
-[[IndexConfiginSolrConfig-useCompoundFile]]
 === useCompoundFile
 
-Controls whether newly written (and not yet merged) index segments should use the <<IndexConfiginSolrConfig-CompoundFileSegments,Compound File Segment>> format. The default is false.
+Controls whether newly written (and not yet merged) index segments should use the <<Compound File Segments>> format. The default is false.
 
 [source,xml]
 ----
 <useCompoundFile>false</useCompoundFile>
 ----
 
-[[IndexConfiginSolrConfig-MergingIndexSegments]]
 == Merging Index Segments
 
-[[IndexConfiginSolrConfig-mergePolicyFactory]]
 === mergePolicyFactory
 
 Defines how merging segments is done.
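A sketch of such a configuration using the `TieredMergePolicyFactory` (the values shown are illustrative, not prescriptive):

[source,xml]
----
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
</mergePolicyFactory>
----
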
@@ -99,7 +93,6 @@ Choosing the best merge factors is generally a trade-off of indexing speed vs. s
 
 Conversely, keeping more segments can accelerate indexing, because merges happen less often, making an update less likely to trigger a merge. But searches become more computationally expensive and will likely be slower, because search terms must be looked up in more index segments. Faster index updates also mean shorter commit turnaround times, which means more timely search results.
 
-[[IndexConfiginSolrConfig-CustomizingMergePolicies]]
 === Customizing Merge Policies
 
 If the configuration options for the built-in merge policies do not fully suit your use case, you can customize them: either by creating a custom merge policy factory that you specify in your configuration, or by configuring a {solr-javadocs}/solr-core/org/apache/solr/index/WrapperMergePolicyFactory.html[merge policy wrapper] which uses a `wrapped.prefix` configuration option to control how the factory it wraps will be configured:
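Reconstructed from the description below, such a wrapper configuration looks roughly like this:

[source,xml]
----
<mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
  <str name="sort">timestamp desc</str>
  <str name="wrapped.prefix">inner</str>
  <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
  <int name="inner.maxMergeAtOnce">10</int>
  <int name="inner.segmentsPerTier">10</int>
</mergePolicyFactory>
----
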
@@ -117,7 +110,6 @@ If the configuration options for the built-in merge policies do not fully suit y
 
 The example above shows Solr's {solr-javadocs}/solr-core/org/apache/solr/index/SortingMergePolicyFactory.html[`SortingMergePolicyFactory`] being configured to sort documents in merged segments by `"timestamp desc"`, and wrapped around a `TieredMergePolicyFactory` configured to use the values `maxMergeAtOnce=10` and `segmentsPerTier=10` via the `inner` prefix defined by `SortingMergePolicyFactory`'s `wrapped.prefix` option. For more information on using `SortingMergePolicyFactory`, see <<common-query-parameters.adoc#CommonQueryParameters-ThesegmentTerminateEarlyParameter,the segmentTerminateEarly parameter>>.
 
-[[IndexConfiginSolrConfig-mergeScheduler]]
 === mergeScheduler
 
 The merge scheduler controls how merges are performed. The default `ConcurrentMergeScheduler` performs merges in the background using separate threads. The alternative, `SerialMergeScheduler`, does not perform merges with separate threads.
@@ -127,7 +119,6 @@ The merge scheduler controls how merges are performed. The default `ConcurrentMe
 <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
 ----
 
-[[IndexConfiginSolrConfig-mergedSegmentWarmer]]
 === mergedSegmentWarmer
 
 When using Solr for <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, a merged segment warmer can be configured to warm the reader on the newly merged segment, before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
@@ -137,7 +128,6 @@ When using Solr in for <<near-real-time-searching.adoc#near-real-time-searching,
 <mergedSegmentWarmer class="org.apache.lucene.index.SimpleMergedSegmentWarmer"/>
 ----
 
-[[IndexConfiginSolrConfig-CompoundFileSegments]]
 == Compound File Segments
 
 Each Lucene segment typically comprises a dozen or so files. Lucene can be configured to bundle all of the files for a segment into a single compound file using a file extension of `.cfs`; it's an abbreviation for Compound File Segment.
@@ -149,16 +139,14 @@ On systems where the number of open files allowed per process is limited, CFS ma
 .CFS: New Segments vs Merged Segments
 [NOTE]
 ====
-To configure whether _newly written segments_ should use CFS, see the <<IndexConfiginSolrConfig-useCompoundFile,`useCompoundFile`>> setting described above. To configure whether _merged segments_ use CFS, review the Javadocs for your <<IndexConfiginSolrConfig-mergePolicyFactory,`mergePolicyFactory`>> .
+To configure whether _newly written segments_ should use CFS, see the <<useCompoundFile,`useCompoundFile`>> setting described above. To configure whether _merged segments_ use CFS, review the Javadocs for your <<mergePolicyFactory,`mergePolicyFactory`>> .
 
-Many <<IndexConfiginSolrConfig-MergingIndexSegments,Merge Policy>> implementations support `noCFSRatio` and `maxCFSSegmentSizeMB` settings with default values that prevent compound files from being used for large segments, but do use compound files for small segments.
+Many <<Merging Index Segments,Merge Policy>> implementations support `noCFSRatio` and `maxCFSSegmentSizeMB` settings with default values that prevent compound files from being used for large segments, but do use compound files for small segments.
 
 ====
 
-[[IndexConfiginSolrConfig-IndexLocks]]
 == Index Locks
 
-[[IndexConfiginSolrConfig-lockType]]
 === lockType
 
 The LockFactory options specify the locking implementation to use.
@@ -177,7 +165,6 @@ For more information on the nuances of each LockFactory, see http://wiki.apache.
 <lockType>native</lockType>
 ----
 
-[[IndexConfiginSolrConfig-writeLockTimeout]]
 === writeLockTimeout
 
 The maximum time to wait for a write lock on an IndexWriter. The default is 1000, expressed in milliseconds.
@@ -187,7 +174,6 @@ The maximum time to wait for a write lock on an IndexWriter. The default is 1000
 <writeLockTimeout>1000</writeLockTimeout>
 ----
 
-[[IndexConfiginSolrConfig-OtherIndexingSettings]]
 == Other Indexing Settings
 
 There are a few other parameters that may be important to configure for your implementation. These settings affect how or when updates are made to an index.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/language-analysis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index c2b02ff..a6d04da 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -26,7 +26,6 @@ In other languages the tokenization rules are often not so simple. Some European
 
 For information about language detection at index time, see <<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>.
 
-[[LanguageAnalysis-KeywordMarkerFilterFactory]]
 == KeywordMarkerFilterFactory
 
 Protects words from being modified by stemmers. A customized protected word list may be specified with the "protected" attribute in the schema. Any words in the protected word list will not be modified by any stemmer in Solr.
@@ -44,7 +43,6 @@ A sample Solr `protwords.txt` with comments can be found in the `sample_techprod
 </fieldtype>
 ----
 
-[[LanguageAnalysis-KeywordRepeatFilterFactory]]
 == KeywordRepeatFilterFactory
 
 Emits each token twice, once with the `KEYWORD` attribute and once without.
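A minimal sketch of such a configuration, pairing the filter with a stemmer and a duplicate remover (the field type name is illustrative):

[source,xml]
----
<fieldType name="text_keyword" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.KeywordRepeatFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
----

The `RemoveDuplicatesTokenFilterFactory` at the end drops the identical duplicate emitted when the stemmer leaves a term unchanged.
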
@@ -69,8 +67,6 @@ A sample fieldType configuration could look like this:
 
 IMPORTANT: When the same token is added twice, it will also score twice (double), so you may have to re-tune your ranking rules.
 
-
-[[LanguageAnalysis-StemmerOverrideFilterFactory]]
 == StemmerOverrideFilterFactory
 
 Overrides stemming algorithms by applying a custom mapping, then protecting these terms from being modified by stemmers.
@@ -90,7 +86,6 @@ A sample http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-fil
 </fieldtype>
 ----
 
-[[LanguageAnalysis-DictionaryCompoundWordTokenFilter]]
 == Dictionary Compound Word Token Filter
 
 This filter splits, or _decompounds_, compound words into individual words using a dictionary of the component words. Each input token is passed through unchanged. If it can also be decompounded into subwords, each subword is also added to the stream at the same logical position.
@@ -129,7 +124,6 @@ Assume that `germanwords.txt` contains at least the following words: `dumm kopf
 
 *Out:* "Donaudampfschiff"(1), "Donau"(1), "dampf"(1), "schiff"(1), "dummkopf"(2), "dumm"(2), "kopf"(2)
 
-[[LanguageAnalysis-UnicodeCollation]]
 == Unicode Collation
 
 Unicode Collation is a language-sensitive method of sorting text that can also be used for advanced search purposes.
@@ -175,7 +169,6 @@ Expert options:
 
 `variableTop`:: Single character or contraction. Controls what is variable for `alternate`.
 
-[[LanguageAnalysis-SortingTextforaSpecificLanguage]]
 === Sorting Text for a Specific Language
 
 In this example, text is sorted according to the default German rules provided by ICU4J.
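A sketch of the corresponding field type, using `solr.ICUCollationField` with the German locale (the field type name is illustrative):

[source,xml]
----
<fieldType name="collatedGERMAN" class="solr.ICUCollationField"
           locale="de"
           strength="primary" />
----
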
@@ -223,7 +216,6 @@ An example using the "city_sort" field to sort:
 q=*:*&fl=city&sort=city_sort+asc
 ----
 
-[[LanguageAnalysis-SortingTextforMultipleLanguages]]
 === Sorting Text for Multiple Languages
 
 There are two approaches to supporting multiple languages: if there is a small list of languages you wish to support, consider defining collated fields for each language and using `copyField`. However, adding a large number of sort fields can increase disk and indexing costs. An alternative approach is to use the Unicode `default` collator.
@@ -237,7 +229,6 @@ The Unicode `default` or `ROOT` locale has rules that are designed to work well
            strength="primary" />
 ----
 
-[[LanguageAnalysis-SortingTextwithCustomRules]]
 === Sorting Text with Custom Rules
 
 You can define your own set of sorting rules. It's easiest to take existing rules that are close to what you want and customize them.
@@ -277,7 +268,6 @@ This rule set can now be used for custom collation in Solr:
            strength="primary" />
 ----
 
-[[LanguageAnalysis-JDKCollation]]
 === JDK Collation
 
 As mentioned above, ICU Unicode Collation is better in several ways than JDK Collation, but if you cannot use ICU4J for some reason, you can use `solr.CollationField`.
@@ -321,7 +311,6 @@ Using a Tailored ruleset:
 
 == ASCII & Decimal Folding Filters
 
-[[LanguageAnalysis-AsciiFolding]]
 === ASCII Folding
 
 This filter converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists. Only those characters with reasonable ASCII alternatives are converted.
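A minimal analyzer sketch applying this filter:

[source,xml]
----
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.ASCIIFoldingFilterFactory"/>
</analyzer>
----
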
@@ -348,7 +337,6 @@ This can increase recall by causing more matches. On the other hand, it can redu
 
 *Out:* "Bjorn", "Angstrom"
 
-[[LanguageAnalysis-DecimalDigitFolding]]
 === Decimal Digit Folding
 
 This filter converts any character in the Unicode "Decimal Number" general category (`Nd`) into its equivalent Basic Latin digit (0-9).
@@ -369,7 +357,6 @@ This can increase recall by causing more matches. On the other hand, it can redu
 </analyzer>
 ----
 
-[[LanguageAnalysis-Language-SpecificFactories]]
 == Language-Specific Factories
 
 These factories are each designed to work with specific languages. The languages covered here are:
@@ -380,8 +367,8 @@ These factories are each designed to work with specific languages. The languages
 * <<Catalan>>
 * <<Traditional Chinese>>
 * <<Simplified Chinese>>
-* <<LanguageAnalysis-Czech,Czech>>
-* <<LanguageAnalysis-Danish,Danish>>
+* <<Czech>>
+* <<Danish>>
 
 * <<Dutch>>
 * <<Finnish>>
@@ -389,7 +376,7 @@ These factories are each designed to work with specific languages. The languages
 * <<Galician>>
 * <<German>>
 * <<Greek>>
-* <<LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer,Hebrew, Lao, Myanmar, Khmer>>
+* <<hebrew-lao-myanmar-khmer,Hebrew, Lao, Myanmar, Khmer>>
 * <<Hindi>>
 * <<Indonesian>>
 * <<Italian>>
@@ -410,7 +397,6 @@ These factories are each designed to work with specific languages. The languages
 * <<Turkish>>
 * <<Ukrainian>>
 
-[[LanguageAnalysis-Arabic]]
 === Arabic
 
 Solr provides support for the http://www.mtholyoke.edu/~lballest/Pubs/arab_stem05.pdf[Light-10] (PDF) stemming algorithm, and Lucene includes an example stopword list.
@@ -432,7 +418,6 @@ This algorithm defines both character normalization and stemming, so these are s
 </analyzer>
 ----
 
-[[LanguageAnalysis-BrazilianPortuguese]]
 === Brazilian Portuguese
 
 This is a Java filter written specifically for stemming the Brazilian dialect of the Portuguese language. It uses the Lucene class `org.apache.lucene.analysis.br.BrazilianStemmer`. Although that stemmer can be configured to use a list of protected words (which should not be stemmed), this factory does not accept any arguments to specify such a list.
@@ -457,7 +442,6 @@ This is a Java filter written specifically for stemming the Brazilian dialect of
 
 *Out:* "pra", "pra"
 
-[[LanguageAnalysis-Bulgarian]]
 === Bulgarian
 
 Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/jacques.savoy/Papers/BUIR.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
@@ -477,7 +461,6 @@ Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/j
 </analyzer>
 ----
 
-[[LanguageAnalysis-Catalan]]
 === Catalan
 
 Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `language="Catalan"`. Solr includes a set of contractions for Catalan, which can be stripped using `solr.ElisionFilterFactory`.
@@ -507,14 +490,13 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 
 *Out:* "llengu"(1), "llengu"(2)
 
-[[LanguageAnalysis-TraditionalChinese]]
 === Traditional Chinese
 
-The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is suitable for Traditional Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
-<<tokenizers.adoc#Tokenizers-StandardTokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text.  Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character.  When combined with <<LanguageAnalysis-CJKBigramFilter,CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
+<<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> can also be used to tokenize Traditional Chinese text.  Following the Word Break rules from the Unicode Text Segmentation algorithm, it produces one token per Chinese character.  When combined with <<CJK Bigram Filter>>, overlapping bigrams of Chinese characters are formed.
 
-<<LanguageAnalysis-CJKWidthFilter,CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms.
+<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms.
 
 *Examples:*
 
@@ -537,10 +519,9 @@ The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU T
 </analyzer>
 ----
 
-[[LanguageAnalysis-CJKBigramFilter]]
 === CJK Bigram Filter
 
-Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#Tokenizers-StandardTokenizer,Standard Tokenizer>> or <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>>.
+Forms bigrams (overlapping 2-character sequences) of CJK characters that are generated from <<tokenizers.adoc#standard-tokenizer,Standard Tokenizer>> or <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>.
 
 By default, all CJK characters produce bigrams, but finer grained control is available by specifying orthographic type arguments `han`, `hiragana`, `katakana`, and `hangul`.  When set to `false`, characters of the corresponding type will be passed through as unigrams, and will not be included in any bigrams.
 
@@ -560,18 +541,17 @@ In all cases, all non-CJK input is passed through unmodified.
 
 `outputUnigrams`:: (true/false) If true, in addition to forming bigrams, all characters are also passed through as unigrams. Default is false.
 
-See the example under <<LanguageAnalysis-TraditionalChinese,Traditional Chinese>>.
+See the example under <<Traditional Chinese>>.
 
-[[LanguageAnalysis-SimplifiedChinese]]
 === Simplified Chinese
 
-For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<LanguageAnalysis-HMMChineseTokenizer,HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the <<HMM Chinese Tokenizer>>. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
-The default configuration of the <<tokenizers.adoc#Tokenizers-ICUTokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
+The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>> is also suitable for Simplified Chinese text.  It follows the Word Break rules from the Unicode Text Segmentation algorithm for non-Chinese text, and uses a dictionary to segment Chinese words.  To use this tokenizer, you must add additional .jars to Solr's classpath (as described in the section <<lib-directives-in-solrconfig.adoc#lib-directives-in-solrconfig,Lib Directives in SolrConfig>>). See the `solr/contrib/analysis-extras/README.txt` for information on which jars you need to add to your `SOLR_HOME/lib`.
 
 Also useful for Chinese analysis:
 
-<<LanguageAnalysis-CJKWidthFilter,CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
+<<CJK Width Filter>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
 
 *Examples:*
 
@@ -598,7 +578,6 @@ Also useful for Chinese analysis:
 </analyzer>
 ----
 
-[[LanguageAnalysis-HMMChineseTokenizer]]
 === HMM Chinese Tokenizer
 
 For Simplified Chinese, Solr provides support for Chinese sentence and word segmentation with the `solr.HMMChineseTokenizerFactory` in the `analysis-extras` contrib module. This component includes a large dictionary and segments Chinese text into words with the Hidden Markov Model. To use this tokenizer, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
@@ -613,9 +592,8 @@ To use the default setup with fallback to English Porter stemmer for English wor
 
 `<analyzer class="org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"/>`
 
-Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactory` along with your custom filter setup.  See an example of this in the <<LanguageAnalysis-SimplifiedChinese,Simplified Chinese>> section.
+Or to configure your own analysis setup, use the `solr.HMMChineseTokenizerFactory` along with your custom filter setup.  See an example of this in the <<Simplified Chinese>> section.
 
-[[LanguageAnalysis-Czech]]
 === Czech
 
 Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.cfm?id=1598600[this algorithm], and Lucene includes an example stopword list.
@@ -641,12 +619,11 @@ Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.c
 
 *Out:* "preziden", "preziden", "preziden"
 
-[[LanguageAnalysis-Danish]]
 === Danish
 
 Solr can stem Danish using the Snowball Porter Stemmer with an argument of `language="Danish"`.
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Factory class:* `solr.SnowballPorterFilterFactory`
 
@@ -671,8 +648,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "undersøg"(1), "undersøg"(2)
 
-
-[[LanguageAnalysis-Dutch]]
 === Dutch
 
 Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `language="Dutch"`.
@@ -700,7 +675,6 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "kanal", "kanal"
 
-[[LanguageAnalysis-Finnish]]
 === Finnish
 
 Solr includes support for stemming Finnish, and Lucene includes an example stopword list.
@@ -726,10 +700,8 @@ Solr includes support for stemming Finnish, and Lucene includes an example stopw
 *Out:* "kala", "kala"
 
 
-[[LanguageAnalysis-French]]
 === French
 
-[[LanguageAnalysis-ElisionFilter]]
 ==== Elision Filter
 
 Removes article elisions from a token stream. This filter can be useful for languages such as French, Catalan, Italian, and Irish.
@@ -760,7 +732,6 @@ Removes article elisions from a token stream. This filter can be useful for lang
 
 *Out:* "histoire", "art"
 
-[[LanguageAnalysis-FrenchLightStemFilter]]
 ==== French Light Stem Filter
 
 Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFactory`, a lighter stemmer called `solr.FrenchLightStemFilterFactory`, and an even less aggressive stemmer called `solr.FrenchMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -800,7 +771,6 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 *Out:* "le", "chat", "le", "chat"
 
 
-[[LanguageAnalysis-Galician]]
 === Galician
 
 Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua/stemming.jsp[this algorithm], and Lucene includes an example stopword list.
@@ -826,8 +796,6 @@ Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua
 
 *Out:* "feliz", "luz"
 
-
-[[LanguageAnalysis-German]]
 === German
 
 Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFactory language="German"`, a stemmer called `solr.GermanStemFilterFactory`, a lighter stemmer called `solr.GermanLightStemFilterFactory`, and an even less aggressive stemmer called `solr.GermanMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -868,8 +836,6 @@ Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "haus", "haus"
 
-
-[[LanguageAnalysis-Greek]]
 === Greek
 
 This filter converts uppercase letters in the Greek character set to the equivalent lowercase character.
@@ -893,7 +859,6 @@ Use of custom charsets is no longer supported as of Solr 3.1. If you need to ind
 </analyzer>
 ----
 
-[[LanguageAnalysis-Hindi]]
 === Hindi
 
 Solr includes support for stemming Hindi following http://computing.open.ac.uk/Sites/EACLSouthAsia/Papers/p6-Ramanathan.pdf[this algorithm] (PDF), support for common spelling differences through the `solr.HindiNormalizationFilterFactory`, support for encoding differences through the `solr.IndicNormalizationFilterFactory` following http://ldc.upenn.edu/myl/IndianScriptsUnicode.html[this algorithm], and Lucene includes an example stopword list.
@@ -914,8 +879,6 @@ Solr includes support for stemming Hindi following http://computing.open.ac.uk/S
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Indonesian]]
 === Indonesian
 
 Solr includes support for stemming Indonesian (Bahasa Indonesia) following http://www.illc.uva.nl/Publications/ResearchReports/MoL-2003-02.text.pdf[this algorithm] (PDF), and Lucene includes an example stopword list.
@@ -941,7 +904,6 @@ Solr includes support for stemming Indonesian (Bahasa Indonesia) following http:
 
 *Out:* "bagai", "bagai"
 
-[[LanguageAnalysis-Italian]]
 === Italian
 
 Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFactory language="Italian"`, and a lighter stemmer called `solr.ItalianLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -969,7 +931,6 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
 
 *Out:* "propag", "propag", "propag"
 
-[[LanguageAnalysis-Irish]]
 === Irish
 
 Solr can stem Irish using the Snowball Porter Stemmer with an argument of `language="Irish"`. Solr includes `solr.IrishLowerCaseFilterFactory`, which can handle Irish-specific constructs. Solr also includes a set of contractions for Irish which can be stripped using `solr.ElisionFilterFactory`.
@@ -999,22 +960,20 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
 
 *Out:* "siopadóir", "síceapaite", "fearr", "athair"
 
-[[LanguageAnalysis-Japanese]]
 === Japanese
 
 Solr includes support for analyzing Japanese, via the Lucene Kuromoji morphological analyzer, which includes several analysis components - more details on each below:
 
-* <<LanguageAnalysis-JapaneseIterationMarkCharFilter,`JapaneseIterationMarkCharFilter`>> normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
-* <<LanguageAnalysis-JapaneseTokenizer,`JapaneseTokenizer`>> tokenizes Japanese using morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
-* <<LanguageAnalysis-JapaneseBaseFormFilter,`JapaneseBaseFormFilter`>> replaces original terms with their base forms (a.k.a. lemmas).
-* <<LanguageAnalysis-JapanesePartOfSpeechStopFilter,`JapanesePartOfSpeechStopFilter`>> removes terms that have one of the configured parts-of-speech.
-* <<LanguageAnalysis-JapaneseKatakanaStemFilter,`JapaneseKatakanaStemFilter`>> normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
+* <<Japanese Iteration Mark CharFilter,`JapaneseIterationMarkCharFilter`>> normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
+* <<Japanese Tokenizer,`JapaneseTokenizer`>> tokenizes Japanese using morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
+* <<Japanese Base Form Filter,`JapaneseBaseFormFilter`>> replaces original terms with their base forms (a.k.a. lemmas).
+* <<Japanese Part Of Speech Stop Filter,`JapanesePartOfSpeechStopFilter`>> removes terms that have one of the configured parts-of-speech.
+* <<Japanese Katakana Stem Filter,`JapaneseKatakanaStemFilter`>> normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
 
 Also useful for Japanese analysis, from lucene-analyzers-common:
 
-* <<LanguageAnalysis-CJKWidthFilter,`CJKWidthFilter`>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
+* <<CJK Width Filter,`CJKWidthFilter`>> folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
 
-[[LanguageAnalysis-JapaneseIterationMarkCharFilter]]
 ==== Japanese Iteration Mark CharFilter
 
 Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form. Vertical iteration marks are not supported.
@@ -1027,7 +986,6 @@ Normalizes horizontal Japanese iteration marks (odoriji) to their expanded form.
 
 `normalizeKana`:: set to `false` to not normalize kana iteration marks (default is `true`)
 
-[[LanguageAnalysis-JapaneseTokenizer]]
 ==== Japanese Tokenizer
 
 Tokenizer for Japanese that uses morphological analysis, and annotates each term with part-of-speech, base form (a.k.a. lemma), reading and pronunciation.
@@ -1052,7 +1010,6 @@ For some applications it might be good to use `search` mode for indexing and `no
 
 `discardPunctuation`:: set to `false` to keep punctuation, `true` to discard (the default)
 
-[[LanguageAnalysis-JapaneseBaseFormFilter]]
 ==== Japanese Base Form Filter
 
 Replaces original terms' text with the corresponding base form (lemma). (`JapaneseTokenizer` annotates each term with its base form.)
@@ -1061,7 +1018,6 @@ Replaces original terms' text with the corresponding base form (lemma). (`Japane
 
 (no arguments)
 
-[[LanguageAnalysis-JapanesePartOfSpeechStopFilter]]
 ==== Japanese Part Of Speech Stop Filter
 
 Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` annotates terms with parts-of-speech.
@@ -1074,12 +1030,11 @@ Removes terms with one of the configured parts-of-speech. `JapaneseTokenizer` an
 
 `enablePositionIncrements`:: if `luceneMatchVersion` is `4.3` or earlier and `enablePositionIncrements="false"`, no position holes will be left by this filter when it removes tokens. *This argument is invalid if `luceneMatchVersion` is `5.0` or later.*
 
-[[LanguageAnalysis-JapaneseKatakanaStemFilter]]
 ==== Japanese Katakana Stem Filter
 
 Normalizes common katakana spelling variations ending in a long sound character (U+30FC) by removing the long sound character.
 
-<<LanguageAnalysis-CJKWidthFilter,`solr.CJKWidthFilterFactory`>> should be specified prior to this filter to normalize half-width katakana to full-width.
+<<CJK Width Filter,`solr.CJKWidthFilterFactory`>> should be specified prior to this filter to normalize half-width katakana to full-width.
 
 *Factory class:* `JapaneseKatakanaStemFilterFactory`
 
@@ -1087,7 +1042,6 @@ Normalizes common katakana spelling variations ending in a long sound character
 
 `minimumLength`:: terms below this length will not be stemmed. Default is 4, value must be 2 or more.
 
-[[LanguageAnalysis-CJKWidthFilter]]
 ==== CJK Width Filter
 
 Folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds halfwidth Katakana variants into their equivalent fullwidth forms.
@@ -1115,14 +1069,13 @@ Example:
 </fieldType>
 ----
 
-[[LanguageAnalysis-Hebrew_Lao_Myanmar_Khmer]]
+[[hebrew-lao-myanmar-khmer]]
 === Hebrew, Lao, Myanmar, Khmer
 
 Lucene provides support, in addition to UAX#29 word break rules, for Hebrew's use of the double and single quote characters, and for segmenting Lao, Myanmar, and Khmer into syllables with the `solr.ICUTokenizerFactory` in the `analysis-extras` contrib module. To use this tokenizer, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
 
-See <<tokenizers.adoc#Tokenizers-ICUTokenizer,the ICUTokenizer>> for more information.
+See <<tokenizers.adoc#icu-tokenizer,the ICUTokenizer>> for more information.
 
-[[LanguageAnalysis-Latvian]]
 === Latvian
 
 Solr includes support for stemming Latvian, and Lucene includes an example stopword list.
@@ -1150,16 +1103,14 @@ Solr includes support for stemming Latvian, and Lucene includes an example stopw
 
 *Out:* "tirg", "tirg"
 
-[[LanguageAnalysis-Norwegian]]
 === Norwegian
 
 Solr includes two classes for stemming Norwegian, `NorwegianLightStemFilterFactory` and `NorwegianMinimalStemFilterFactory`. Lucene includes an example stopword list.
 
 Another option is to use the Snowball Porter Stemmer with an argument of language="Norwegian".
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
-[[LanguageAnalysis-NorwegianLightStemmer]]
 ==== Norwegian Light Stemmer
 
 The `NorwegianLightStemFilterFactory` requires a "two-pass" sort for the -dom and -het endings. This means that in the first pass the word "kristendom" is stemmed to "kristen", and then all the general rules apply so it will be further stemmed to "krist". The effect of this is that "kristen," "kristendom," "kristendommen," and "kristendommens" will all be stemmed to "krist."
@@ -1209,7 +1160,6 @@ The second pass is to pick up -dom and -het endings. Consider this example:
 
 *Out:* "forelske"
 
-[[LanguageAnalysis-NorwegianMinimalStemmer]]
 ==== Norwegian Minimal Stemmer
 
 The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns only.
@@ -1244,10 +1194,8 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 
 *Out:* "bil"
 
-[[LanguageAnalysis-Persian]]
 === Persian
 
-[[LanguageAnalysis-PersianFilterFactories]]
 ==== Persian Filter Factories
 
 Solr includes support for normalizing Persian, and Lucene includes an example stopword list.
@@ -1267,7 +1215,6 @@ Solr includes support for normalizing Persian, and Lucene includes an example st
 </analyzer>
 ----
 
-[[LanguageAnalysis-Polish]]
 === Polish
 
 Solr provides support for Polish stemming with the `solr.StempelPolishStemFilterFactory`, and `solr.MorphologikFilterFactory` for lemmatization, in the `contrib/analysis-extras` module. The `solr.StempelPolishStemFilterFactory` component includes an algorithmic stemmer with tables for Polish. To use either of these filters, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.
@@ -1308,7 +1255,6 @@ Note the lower case filter is applied _after_ the Morfologik stemmer; this is be
 
 The Morfologik dictionary parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
 
-[[LanguageAnalysis-Portuguese]]
 === Portuguese
 
 Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilterFactory`, an alternative stemmer called `solr.PortugueseStemFilterFactory`, a lighter stemmer called `solr.PortugueseLightStemFilterFactory`, and an even less aggressive stemmer called `solr.PortugueseMinimalStemFilterFactory`. Lucene includes an example stopword list.
@@ -1352,8 +1298,6 @@ Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilte
 
 *Out:* "pra", "pra"
 
-
-[[LanguageAnalysis-Romanian]]
 === Romanian
 
 Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `language="Romanian"`.
@@ -1375,11 +1319,8 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Russian]]
 === Russian
 
-[[LanguageAnalysis-RussianStemFilter]]
 ==== Russian Stem Filter
 
 Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFactory language="Russian"`, and a lighter stemmer called `solr.RussianLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -1399,11 +1340,9 @@ Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFac
 </analyzer>
 ----
 
-
-[[LanguageAnalysis-Scandinavian]]
 === Scandinavian
 
-Scandinavian is a language group spanning three languages <<LanguageAnalysis-Norwegian,Norwegian>>, <<LanguageAnalysis-Swedish,Swedish>> and <<LanguageAnalysis-Danish,Danish>> which are very similar.
+Scandinavian is a language group spanning three languages, <<Norwegian>>, <<Swedish>> and <<Danish>>, which are very similar.
 
 Swedish å, ä, ö are in fact the same letters as Norwegian and Danish å, æ, ø and thus interchangeable when used between these languages. They are, however, folded differently when people type them on a keyboard lacking these characters.
 
@@ -1413,7 +1352,6 @@ There are two filters for helping with normalization between Scandinavian langua
 
 See also each language section for other relevant filters.
 
-[[LanguageAnalysis-ScandinavianNormalizationFilter]]
 ==== Scandinavian Normalization Filter
 
 This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
@@ -1441,7 +1379,6 @@ It's a semantically less destructive solution than `ScandinavianFoldingFilter`,
 
 *Out:* "blåbærsyltetøj", "blåbærsyltetøj", "blåbærsyltetøj", "blabarsyltetoj"
 
-[[LanguageAnalysis-ScandinavianFoldingFilter]]
 ==== Scandinavian Folding Filter
 
 This filter folds Scandinavian characters åÅäæÄÆ\->a and öÖøØ\->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one.
@@ -1469,10 +1406,8 @@ It's a semantically more destructive solution than `ScandinavianNormalizationFil
 
 *Out:* "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj", "blabarsyltetoj"
 
-[[LanguageAnalysis-Serbian]]
 === Serbian
 
-[[LanguageAnalysis-SerbianNormalizationFilter]]
 ==== Serbian Normalization Filter
 
 Solr includes a filter that normalizes Serbian Cyrillic and Latin characters. Note that this filter only works with lowercased input.
@@ -1499,7 +1434,6 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
 </analyzer>
 ----
 
-[[LanguageAnalysis-Spanish]]
 === Spanish
 
 Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFactory language="Spanish"`, and a lighter stemmer called `solr.SpanishLightStemFilterFactory`. Lucene includes an example stopword list.
@@ -1526,15 +1460,13 @@ Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFac
 *Out:* "tor", "tor", "tor"
 
 
-[[LanguageAnalysis-Swedish]]
 === Swedish
 
-[[LanguageAnalysis-SwedishStemFilter]]
 ==== Swedish Stem Filter
 
 Solr includes two stemmers for Swedish: one in the `solr.SnowballPorterFilterFactory language="Swedish"`, and a lighter stemmer called `solr.SwedishLightStemFilterFactory`. Lucene includes an example stopword list.
 
-Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization filters>>.
+Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Factory class:* `solr.SwedishStemFilterFactory`
 
@@ -1557,8 +1489,6 @@ Also relevant are the <<LanguageAnalysis-Scandinavian,Scandinavian normalization
 
 *Out:* "klok", "klok", "klok"
 
-
-[[LanguageAnalysis-Thai]]
 === Thai
 
 This filter converts sequences of Thai characters into individual Thai words. Unlike European languages, Thai does not use whitespace to delimit words.
@@ -1577,7 +1507,6 @@ This filter converts sequences of Thai characters into individual Thai words. Un
 </analyzer>
 ----
 
-[[LanguageAnalysis-Turkish]]
 === Turkish
 
 Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFactory`; support for case-insensitive search with the `solr.TurkishLowerCaseFilterFactory`; support for stripping apostrophes and following suffixes with `solr.ApostropheFilterFactory` (see http://www.ipcsit.com/vol57/015-ICNI2012-M021.pdf[Role of Apostrophes in Turkish Information Retrieval]); support for a form of stemming that truncates tokens at a configurable maximum length through the `solr.TruncateTokenFilterFactory` (see http://www.users.muohio.edu/canf/papers/JASIST2008offPrint.pdf[Information Retrieval on Turkish Texts]); and Lucene includes an example stopword list.
@@ -1613,10 +1542,6 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
 </analyzer>
 ----
 
-[[LanguageAnalysis-BacktoTop#main]]
-===
-
-[[LanguageAnalysis-Ukrainian]]
 === Ukrainian
 
 Solr provides support for Ukrainian lemmatization with the `solr.MorphologikFilterFactory`, in the `contrib/analysis-extras` module. To use this filter, see `solr/contrib/analysis-extras/README.txt` for instructions on which jars you need to add to your `solr_home/lib`.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/learning-to-rank.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/learning-to-rank.adoc b/solr/solr-ref-guide/src/learning-to-rank.adoc
index 64a461b..d2687c1 100644
--- a/solr/solr-ref-guide/src/learning-to-rank.adoc
+++ b/solr/solr-ref-guide/src/learning-to-rank.adoc
@@ -22,21 +22,17 @@ With the *Learning To Rank* (or *LTR* for short) contrib module you can configur
 
 The module also supports feature extraction inside Solr. The only thing you need to do outside Solr is train your own ranking model.
 
-[[LearningToRank-Concepts]]
-== Concepts
+== Learning to Rank Concepts
 
-[[LearningToRank-Re-Ranking]]
 === Re-Ranking
 
-Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, complex query. This page describes the use of *LTR* complex queries, information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
+Re-Ranking allows you to run a simple query for matching documents and then re-rank the top N documents using the scores from a different, more complex query. This page describes the use of *LTR* complex queries, information on other rank queries included in the Solr distribution can be found on the <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>> page.
 
-[[LearningToRank-LearningToRank]]
-=== Learning To Rank
+=== Learning To Rank Models
 
 In information retrieval systems, https://en.wikipedia.org/wiki/Learning_to_rank[Learning to Rank] is used to re-rank the top N retrieved documents using trained machine learning models. The hope is that such sophisticated models can make more nuanced ranking decisions than standard ranking functions like https://en.wikipedia.org/wiki/Tf%E2%80%93idf[TF-IDF] or https://en.wikipedia.org/wiki/Okapi_BM25[BM25].
 
-[[LearningToRank-Model]]
-==== Model
+==== Ranking Model
 
 A ranking model computes the scores used to rerank documents. Irrespective of any particular algorithm or implementation, a ranking model's computation can use three types of inputs:
 
@@ -44,27 +40,23 @@ A ranking model computes the scores used to rerank documents. Irrespective of an
 * features that represent the document being scored
 * features that represent the query for which the document is being scored
 
-[[LearningToRank-Feature]]
 ==== Feature
 
 A feature is a value, a number, that represents some quantity or quality of the document being scored or of the query for which documents are being scored. For example, documents often have a 'recency' quality, and 'number of past purchases' might be a quantity that is passed to Solr as part of the search query.
 
-[[LearningToRank-Normalizer]]
 ==== Normalizer
 
 Some ranking models expect features on a particular scale. A normalizer can be used to translate arbitrary feature values into normalized values, e.g. on a 0..1 or 0..100 scale.
 
-[[LearningToRank-Training]]
-=== Training
+=== Training Models
 
-[[LearningToRank-Featureengineering]]
-==== Feature engineering
+==== Feature Engineering
 
 The LTR contrib module includes several feature classes as well as support for custom features. Each feature class's javadocs contain an example to illustrate use of that class. The process of https://en.wikipedia.org/wiki/Feature_engineering[feature engineering] itself is then entirely up to your domain expertise and creativity.
 
 [cols=",,,",options="header",]
 |===
-|Feature |Class |Example parameters |<<LearningToRank-ExternalFeatureInformation,External Feature Information>>
+|Feature |Class |Example parameters |<<External Feature Information>>
 |field length |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldLengthFeature.html[FieldLengthFeature] |`{"field":"title"}` |not (yet) supported
 |field value |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/FieldValueFeature.html[FieldValueFeature] |`{"field":"hits"}` |not (yet) supported
 |original score |{solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/OriginalScoreFeature.html[OriginalScoreFeature] |`{}` |not applicable
@@ -84,12 +76,10 @@ The LTR contrib module includes several feature classes as well as support for c
 |(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/norm/Normalizer.html[Normalizer]) |
 |===
 
-[[LearningToRank-Featureextraction]]
 ==== Feature Extraction
 
 The ltr contrib module includes a <<transforming-result-documents.adoc#transforming-result-documents,[features>> transformer] to support the calculation and return of feature values for https://en.wikipedia.org/wiki/Feature_extraction[feature extraction] purposes, including and especially when you do not yet have an actual reranking model.
 
-[[LearningToRank-Featureselectionandmodeltraining]]
 ==== Feature Selection and Model Training
 
 Feature selection and model training take place offline and outside Solr. The ltr contrib module supports two generalized forms of models as well as custom models. Each model class's javadocs contain an example to illustrate configuration of that class. In the form of JSON files, your trained model or models (e.g. different models for different customer geographies) can then be directly uploaded into Solr using provided REST APIs.
@@ -102,8 +92,7 @@ Feature selection and model training take place offline and outside Solr. The lt
 |(custom) |(custom class extending {solr-javadocs}/solr-ltr/org/apache/solr/ltr/model/LTRScoringModel.html[LTRScoringModel]) |(not applicable)
 |===
 
-[[LearningToRank-QuickStartExample]]
-== Quick Start Example
+== Quick Start with LTR
 
 The `"techproducts"` example included with Solr is pre-configured with the plugins required for learning-to-rank, but they are disabled by default.
 
@@ -114,7 +103,6 @@ To enable the plugins, please specify the `solr.ltr.enabled` JVM System Property
 bin/solr start -e techproducts -Dsolr.ltr.enabled=true
 ----
 
-[[LearningToRank-Uploadingfeatures]]
 === Uploading Features
 
 To upload features in a `/path/myFeatures.json` file, please run:
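A sketch of the upload command, following the feature-store REST endpoint (host and port are illustrative):

[source,bash]
----
curl -XPUT 'http://localhost:8983/solr/techproducts/schema/feature-store' --data-binary "@/path/myFeatures.json" -H 'Content-type:application/json'
----
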
@@ -154,7 +142,6 @@ To view the features you just uploaded please open the following URL in a browse
 ]
 ----
 
-[[LearningToRank-Extractingfeatures]]
 === Extracting Features
 
 To extract features as part of a query, add `[features]` to the `fl` parameter, for example:
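 
 A minimal query of this shape (the field list is illustrative) would be:
 
 [source,text]
 http://localhost:8983/solr/techproducts/query?q=test&fl=id,score,[features]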
@@ -184,7 +171,6 @@ The output XML will include feature values as a comma-separated list, resembling
   }}
 ----
 
-[[LearningToRank-Uploadingamodel]]
 === Uploading a Model
 
 To upload the model in a `/path/myModel.json` file, please run:
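 
 A sketch of the upload, assuming a `model-store` endpoint analogous to the feature store endpoint used above:
 
 [source,bash]
 ----
 # PUT the model definition into the managed model store
 curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' \
   --data-binary "@/path/myModel.json" \
   -H 'Content-type:application/json'
 ----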
@@ -219,7 +205,6 @@ To view the model you just uploaded please open the following URL in a browser:
 }
 ----
 
-[[LearningToRank-Runningarerankquery]]
 === Running a Rerank Query
 
 To rerank the results of a query, add the `rq` parameter to your search, for example:
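 
 A sketch of such a query; the `reRankDocs` parameter, which caps how many of the top documents are reranked, is illustrative:
 
 [source,text]
 http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel reRankDocs=100}&fl=id,score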
@@ -258,12 +243,10 @@ The output XML will include feature values as a comma-separated list, resembling
   }}
 ----
 
-[[LearningToRank-ExternalFeatureInformation]]
 === External Feature Information
 
 The {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/ValueFeature.html[ValueFeature] and {solr-javadocs}/solr-ltr/org/apache/solr/ltr/feature/SolrFeature.html[SolrFeature] classes support the use of external feature information, `efi` for short.
 
-[[LearningToRank-Uploadingfeatures.1]]
 ==== Uploading Features
 
 To upload features in a `/path/myEfiFeatures.json` file, please run:
@@ -308,9 +291,8 @@ To view the features you just uploaded please open the following URL in a browse
 ]
 ----
 
-As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes: read more about feature `store` in the <<Lifecycle>> section of this page.
+As an aside, you may have noticed that the `myEfiFeatures.json` example uses `"store":"myEfiFeatureStore"` attributes; read more about feature `store` in the <<LTR Lifecycle>> section of this page.
 
-[[LearningToRank-Extractingfeatures.1]]
 ==== Extracting Features
 
 To extract `myEfiFeatureStore` features as part of a query, add `efi.*` parameters to the `[features]` part of the `fl` parameter, for example:
@@ -321,7 +303,6 @@ http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[featu
 [source,text]
 http://localhost:8983/solr/techproducts/query?q=test&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13]
 
-[[LearningToRank-Uploadingamodel.1]]
 ==== Uploading a Model
 
 To upload the model in a `/path/myEfiModel.json` file, please run:
@@ -359,7 +340,6 @@ To view the model you just uploaded please open the following URL in a browser:
 }
 ----
 
-[[LearningToRank-Runningarerankquery.1]]
 ==== Running a Rerank Query
 
 To obtain the feature values computed during reranking, add `[features]` to the `fl` parameter and `efi.*` parameters to the `rq` parameter, for example:
@@ -368,39 +348,34 @@ To obtain the feature values computed during reranking, add `[features]` to the
 http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1}&fl=id,cat,manu,score,[features]
 
 [source,text]
-http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myEfiModel efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=0 efi.answer=13}&fl=id,cat,manu,score,[features]
 
 Notice the absence of `efi.*` parameters in the `[features]` part of the `fl` parameter.
 
-[[LearningToRank-Extractingfeatureswhilstreranking]]
 ==== Extracting Features While Reranking
 
 To extract `myEfiFeatureStore` features while still reranking with `myModel`:
 
 [source,text]
-http://localhost:8983/solr/techproducts/query?q=test&rq=\{!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]] link:[]
+http://localhost:8983/solr/techproducts/query?q=test&rq={!ltr model=myModel}&fl=id,cat,manu,score,[features store=myEfiFeatureStore efi.text=test efi.preferredManufacturer=Apache efi.fromMobile=1]
 
-Notice the absence of `efi.*` parameters in the `rq` parameter (because `myModel` does not use `efi` feature) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
+Notice the absence of `efi.\*` parameters in the `rq` parameter (because `myModel` does not use `efi` features) and the presence of `efi.*` parameters in the `[features]` part of the `fl` parameter (because `myEfiFeatureStore` contains `efi` features).
 
-Read more about model evolution in the <<Lifecycle>> section of this page.
+Read more about model evolution in the <<LTR Lifecycle>> section of this page.
 
-[[LearningToRank-Trainingexample]]
 === Training Example
 
 Example training data and a demo 'train and upload model' script can be found in the `solr/contrib/ltr/example` folder in the https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git[Apache lucene-solr git repository], which is mirrored on https://github.com/apache/lucene-solr/tree/releases/lucene-solr/6.4.0/solr/contrib/ltr/example[github.com] (the `solr/contrib/ltr/example` folder is not shipped in the Solr binary release).
 
-[[LearningToRank-Installation]]
-== Installation
+== Installation of LTR
 
 The ltr contrib module requires the `dist/solr-ltr-*.jar` JARs.
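 
 One way to include them is a `<lib>` directive in `solrconfig.xml`; a sketch, assuming the JARs live under `dist/` relative to `$solr.install.dir`:
 
 [source,xml]
 ----
 <!-- load all solr-ltr JARs from the dist/ folder of the Solr installation -->
 <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
 ----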
 
-[[LearningToRank-Configuration]]
-== Configuration
+== LTR Configuration
 
 Learning-To-Rank is a contrib module and therefore its plugins must be configured in `solrconfig.xml`.
 
-[[LearningToRank-Minimumrequirements]]
-=== Minimum requirements
+=== Minimum Requirements
 
 * Include the required contrib JARs. Note that by default paths are relative to the Solr core, so they may need to be adjusted for your configuration, or `$solr.install.dir` may need to be specified explicitly.
 +
@@ -437,15 +412,12 @@ Learning-To-Rank is a contrib module and therefore its plugins must be configure
 </transformer>
 ----
 
-[[LearningToRank-Advancedoptions]]
 === Advanced Options
 
-[[LearningToRank-LTRThreadModule]]
 ==== LTRThreadModule
 
 A thread module can be configured for the query parser and/or the transformer to parallelize the creation of feature weights. For details, please refer to the {solr-javadocs}/solr-ltr/org/apache/solr/ltr/LTRThreadModule.html[LTRThreadModule] javadocs.
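 
 A sketch of such a configuration on the query parser, with illustrative pool sizes (parameter names per the LTRThreadModule javadocs):
 
 [source,xml]
 ----
 <queryParser name="ltr" class="org.apache.solr.ltr.search.LTRQParserPlugin">
   <!-- total threads shared across all requests -->
   <int name="threadModule.totalPoolThreads">10</int>
   <!-- maximum threads used by any single request -->
   <int name="threadModule.numThreadsPerRequest">5</int>
 </queryParser>
 ----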
 
-[[LearningToRank-Featurevectorcustomization]]
 ==== Feature Vector Customization
 
 The features transformer returns dense CSV values such as `featureA=0.1,featureB=0.2,featureC=0.3,featureD=0.0`.
@@ -462,7 +434,6 @@ For sparse CSV output such as `featureA:0.1 featureB:0.2 featureC:0.3` you can c
 </transformer>
 ----
 
-[[LearningToRank-Implementationandcontributions]]
 ==== Implementation and Contributions
 
 .How does Solr Learning-To-Rank work under the hood?
@@ -481,10 +452,8 @@ Contributions for further models, features and normalizers are welcome. Related
 * http://wiki.apache.org/lucene-java/HowToContribute
 ====
 
-[[LearningToRank-Lifecycle]]
-== Lifecycle
+== LTR Lifecycle
 
-[[LearningToRank-Featurestores]]
 === Feature Stores
 
 It is recommended that you organize all your features into stores, which are akin to namespaces:
@@ -501,7 +470,6 @@ To inspect the content of the `commonFeatureStore` feature store:
 
 `\http://localhost:8983/solr/techproducts/schema/feature-store/commonFeatureStore`
 
-[[LearningToRank-Models]]
 === Models
 
 * A model uses features from exactly one feature store.
@@ -537,13 +505,11 @@ To delete the `currentFeatureStore` feature store:
 curl -XDELETE 'http://localhost:8983/solr/techproducts/schema/feature-store/currentFeatureStore'
 ----
 
-[[LearningToRank-Applyingchanges]]
 === Applying Changes
 
 The feature store and the model store are both <<managed-resources.adoc#managed-resources,Managed Resources>>. Changes made to managed resources are not applied to the active Solr components until the Solr collection (or Solr core in single server mode) is reloaded.
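 
 For example, a reload of the `techproducts` collection could be triggered via the Collections API:
 
 [source,bash]
 ----
 # reload the collection so feature store and model store changes take effect
 curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'
 ----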
 
-[[LearningToRank-Examples]]
-=== Examples
+=== LTR Examples
 
 ==== One Feature Store, Multiple Ranking Models
 
@@ -628,7 +594,6 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-[[LearningToRank-Modelevolution]]
 ==== Model Evolution
 
 * `linearModel201701` uses features from `featureStore201701`
@@ -752,8 +717,7 @@ The feature store and the model store are both <<managed-resources.adoc#managed-
 }
 ----
 
-[[LearningToRank-AdditionalResources]]
-== Additional Resources
+== Additional LTR Resources
 
 * "Learning to Rank in Solr" presentation at Lucene/Solr Revolution 2015 in Austin:
 ** Slides: http://www.slideshare.net/lucidworks/learning-to-rank-in-solr-presented-by-michael-nilsson-diego-ceccarelli-bloomberg-lp

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
index 6810e4b..9ec44d8 100644
--- a/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
+++ b/solr/solr-ref-guide/src/major-changes-from-solr-5-to-solr-6.adoc
@@ -46,9 +46,9 @@ Built on streaming expressions, new in Solr 6 is a <<parallel-sql-interface.adoc
 
 Replication across data centers is now possible with <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication>>. Using an active-passive model, a SolrCloud cluster can be replicated to another data center, and monitored with a new API.
 
-=== Graph Query Parser
+=== Graph QueryParser
 
-A new <<other-parsers.adoc#OtherParsers-GraphQueryParser,`graph` query parser>> makes it possible to to graph traversal queries of Directed (Cyclic) Graphs modelled using Solr documents.
+A new <<other-parsers.adoc#graph-query-parser,`graph` query parser>> makes it possible to perform graph traversal queries of Directed (Cyclic) Graphs modelled using Solr documents.
 
 [[major-5-6-docvalues]]
 === DocValues

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
index 6f3383c..38da729 100644
--- a/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
+++ b/solr/solr-ref-guide/src/making-and-restoring-backups.adoc
@@ -28,12 +28,12 @@ Support for backups when running SolrCloud is provided with the <<collections-ap
 
 Two commands are available:
 
-* `action=BACKUP`: This command backs up Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-backup,Backup Collection>>.
-* `action=RESTORE`: This command restores Solr indexes and configurations. More information is available in the section <<collections-api.adoc#CollectionsAPI-restore,Restore Collection>>.
+* `action=BACKUP`: This command backs up Solr indexes and configurations. More information is available in the section <<collections-api.adoc#backup,Backup Collection>>.
+* `action=RESTORE`: This command restores Solr indexes and configurations. More information is available in the section <<collections-api.adoc#restore,Restore Collection>>.
 
 == Standalone Mode Backups
 
-Backups and restoration uses Solr's replication handler. Out of the box, Solr includes implicit support for replication so this API can be used. Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml` . For details on configuring the replication handler, see the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+Backups and restoration use Solr's replication handler. Out of the box, Solr includes implicit support for replication so this API can be used. Configuration of the replication handler can, however, be customized by defining your own replication handler in `solrconfig.xml`. For details on configuring the replication handler, see the section <<index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
 
 === Backup API
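 
 A backup can be triggered with an HTTP request to the replication handler; a sketch, assuming a core named `gettingstarted` (optional parameters such as `location` are described below):
 
 [source,text]
 http://localhost:8983/solr/gettingstarted/replication?command=backup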
 
@@ -58,7 +58,7 @@ The path where the backup will be created. If the path is not absolute then the
 |name |The snapshot will be created in a directory called `snapshot.<name>`. If a name is not specified then the directory name will have the following format: `snapshot.<yyyyMMddHHmmssSSS>`.
 
 `numberToKeep`::
-The number of backups to keep. If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error. Also, this parameter is not taken into consideration if the backup name is specified. More information about `maxNumberOfBackups` can be found in the section <<index-replication.adoc#IndexReplication-ConfiguringtheReplicationHandler,Configuring the ReplicationHandler>>.
+The number of backups to keep. If `maxNumberOfBackups` has been specified on the replication handler in `solrconfig.xml`, `maxNumberOfBackups` is always used and attempts to use `numberToKeep` will cause an error. Also, this parameter is not taken into consideration if the backup name is specified. More information about `maxNumberOfBackups` can be found in the section <<index-replication.adoc#configuring-the-replicationhandler,Configuring the ReplicationHandler>>.
 
 `repository`::
 The name of the repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/managed-resources.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/managed-resources.adoc b/solr/solr-ref-guide/src/managed-resources.adoc
index 14fcffd..deb10cc 100644
--- a/solr/solr-ref-guide/src/managed-resources.adoc
+++ b/solr/solr-ref-guide/src/managed-resources.adoc
@@ -33,14 +33,13 @@ All of the examples in this section assume you are running the "techproducts" So
 bin/solr -e techproducts
 ----
 
-[[ManagedResources-Overview]]
-== Overview
+== Managed Resources Overview
 
 Let's begin learning about managed resources by looking at a couple of examples provided by Solr for managing stop words and synonyms using a REST API. After reading this section, you'll be ready to dig into the details of how managed resources are implemented in Solr so you can start building your own implementation.
 
 === Managing Stop Words
 
-To begin, you need to define a field type that uses the <<filter-descriptions.adoc#FilterDescriptions-ManagedStopFilter,ManagedStopFilterFactory>>, such as:
+To begin, you need to define a field type that uses the <<filter-descriptions.adoc#managed-stop-filter,ManagedStopFilterFactory>>, such as:
 
 [source,xml,subs="verbatim,callouts"]
 ----
@@ -55,7 +54,7 @@ To begin, you need to define a field type that uses the <<filter-descriptions.ad
 
 There are two important things to notice about this field type definition:
 
-<1> The filter implementation class is `solr.ManagedStopFilterFactory`. This is a special implementation of the <<filter-descriptions.adoc#FilterDescriptions-StopFilter,StopFilterFactory>> that uses a set of stop words that are managed from a REST API.
+<1> The filter implementation class is `solr.ManagedStopFilterFactory`. This is a special implementation of the <<filter-descriptions.adoc#stop-filter,StopFilterFactory>> that uses a set of stop words that are managed from a REST API.
 
 <2> The `managed="english"` attribute gives a name to the set of managed stop words, in this case indicating the stop words are for English text.
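 
 Once the field type is in place, the managed set is exposed over the schema REST API; for example, to list the English stop words (assuming the `techproducts` example):
 
 [source,bash]
 ----
 # list the currently managed English stop words
 curl "http://localhost:8983/solr/techproducts/schema/analysis/stopwords/english"
 ----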
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/merging-indexes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/merging-indexes.adoc b/solr/solr-ref-guide/src/merging-indexes.adoc
index 1c11851..cf1cd37 100644
--- a/solr/solr-ref-guide/src/merging-indexes.adoc
+++ b/solr/solr-ref-guide/src/merging-indexes.adoc
@@ -44,6 +44,6 @@ This will create a new index at `/path/to/newindex` that contains both index1 an
 
 == Using CoreAdmin
 
-The `MERGEINDEXES` command of the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
+The `MERGEINDEXES` command of the <<coreadmin-api.adoc#coreadmin-mergeindexes,CoreAdminHandler>> can be used to merge indexes into a new core – either from one or more arbitrary `indexDir` directories or by merging from one or more existing `srcCore` core names.
 
-See the <<coreadmin-api.adoc#CoreAdminAPI-MERGEINDEXES,CoreAdminHandler>> section for details.
+See the <<coreadmin-api.adoc#coreadmin-mergeindexes,CoreAdminHandler>> section for details.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c6771499/solr/solr-ref-guide/src/near-real-time-searching.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/near-real-time-searching.adoc b/solr/solr-ref-guide/src/near-real-time-searching.adoc
index 8727387..2b5ad5a 100644
--- a/solr/solr-ref-guide/src/near-real-time-searching.adoc
+++ b/solr/solr-ref-guide/src/near-real-time-searching.adoc
@@ -127,14 +127,9 @@ curl http://localhost:8983/solr/my_collection/update?commitWithin=10000
   -H "Content-Type: text/xml" --data-binary '<add><doc><field name="id">testdoc</field></doc></add>'
 ----
 
-<<<<<<< HEAD
-[[NearRealTimeSearching-ChangingdefaultcommitWithinBehavior]]
-=== Changing default commitWithin Behavior
-=======
-WARNING: While the `stream.body` feature is great for development and testing, it should normally not be enabled in production systems, as it lets a user with READ permissions post data that may alter the system state. The feature is disabled by default. See <<requestdispatcher-in-solrconfig.adoc#RequestDispatcherinSolrConfig-requestParsersElement,RequestDispatcher in SolrConfig>> for details.
+WARNING: While the `stream.body` feature is great for development and testing, it should normally not be enabled in production systems, as it lets a user with READ permissions post data that may alter the system state. The feature is disabled by default. See <<requestdispatcher-in-solrconfig.adoc#requestparsers-element,RequestDispatcher in SolrConfig>> for details.
 
 === Changing Default commitWithin Behavior
->>>>>>> 74ab16168c... SOLR-11050: remove unneeded anchors for pages that have no incoming links from other pages
 
 The `commitWithin` settings allow forcing document commits to happen in a defined time period. This is used most frequently with <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, and for that reason the default is to perform a soft commit. This does not, however, replicate new documents to slave servers in a master/slave environment. If that's a requirement for your implementation, you can force a hard commit by adding a parameter, as in this example:
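 
 A sketch of the relevant `solrconfig.xml` element; its placement inside the update handler configuration is assumed here:
 
 [source,xml]
 ----
 <commitWithin>
   <!-- make commitWithin trigger hard commits instead of soft commits -->
   <softCommit>false</softCommit>
 </commitWithin>
 ----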