Posted to commits@lucene.apache.org by to...@apache.org on 2019/08/31 16:33:30 UTC

[lucene-solr] branch master updated: SOLR-13691: Add example field type configurations using name attributes to Ref Guide

This is an automated email from the ASF dual-hosted git repository.

tomoko pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/lucene-solr.git


The following commit(s) were added to refs/heads/master by this push:
     new 66d7dff  SOLR-13691: Add example field type configurations using name attributes to Ref Guide
66d7dff is described below

commit 66d7dffc790bac158209ead76b68327d165d67d3
Author: Tomoko Uchida <to...@apache.org>
AuthorDate: Sun Sep 1 01:32:10 2019 +0900

    SOLR-13691: Add example field type configurations using name attributes to Ref Guide
---
 solr/CHANGES.txt                                 |    2 +-
 solr/solr-ref-guide/src/about-filters.adoc       |   21 +
 solr/solr-ref-guide/src/about-tokenizers.adoc    |   19 +
 solr/solr-ref-guide/src/analyzers.adoc           |   64 +-
 solr/solr-ref-guide/src/charfilterfactories.adoc |   77 ++
 solr/solr-ref-guide/src/filter-descriptions.adoc |  959 ++++++++++++++++++--
 solr/solr-ref-guide/src/language-analysis.adoc   | 1025 ++++++++++++++++++++--
 solr/solr-ref-guide/src/schema-api.adoc          |   24 +-
 solr/solr-ref-guide/src/tokenizers.adoc          |  368 +++++++-
 9 files changed, 2420 insertions(+), 139 deletions(-)
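For context on what the documentation changes below demonstrate: the name-based syntax (introduced by SOLR-13593/SOLR-13690) lets field types reference analysis factories by their SPI names instead of factory class names. A minimal sketch combining the char filter, tokenizer, and filter names that appear in the patch; the field type name `text_names` is illustrative, not taken from the diff:

```xml
<!-- Hypothetical field type: each analysis component is looked up by its
     SPI name rather than by factory class (the legacy style). -->
<fieldType name="text_names" class="solr.TextField">
  <analyzer>
    <charFilter name="htmlStrip"/>               <!-- solr.HTMLStripCharFilterFactory -->
    <tokenizer name="standard"/>                 <!-- solr.StandardTokenizerFactory -->
    <filter name="lowercase"/>                   <!-- solr.LowerCaseFilterFactory -->
    <filter name="keepWord" words="keepwords.txt"/> <!-- solr.KeepWordFilterFactory -->
  </analyzer>
</fieldType>
```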

diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index a74eb91..bacb637 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -64,7 +64,7 @@ Upgrade Notes
 * SOLR-11266: default Content-Type override for JSONResponseWriter from _default configSet is removed. Example has been
   provided in sample_techproducts_configs to override content-type. (Ishan Chattopadhyaya, Munendra S N, Gus Heck)
 
-* SOLR-13593 SOLR-13690: Allow to look up analyzer components by their SPI names in field type configuration. (Tomoko Uchida)
+* SOLR-13593 SOLR-13690 SOLR-13691: Allow looking up analyzer components by their SPI names in field type configuration. (Tomoko Uchida)
 
 Other Changes
 ----------------------
diff --git a/solr/solr-ref-guide/src/about-filters.adoc b/solr/solr-ref-guide/src/about-filters.adoc
index dbb10a6..a0577b9 100644
--- a/solr/solr-ref-guide/src/about-filters.adoc
+++ b/solr/solr-ref-guide/src/about-filters.adoc
@@ -22,6 +22,25 @@ A filter may also do more complex analysis by looking ahead to consider multiple
 
 Because filters consume one `TokenStream` and produce a new `TokenStream`, they can be chained one after another indefinitely. Each filter in the chain in turn processes the tokens produced by its predecessor. The order in which you specify the filters is therefore significant. Typically, the most general filtering is done first, and later filtering stages are more specialized.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filterexample]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="englishPorter"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-filterexample]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text" class="solr.TextField">
@@ -32,6 +51,8 @@ Because filters consume one `TokenStream` and produce a new `TokenStream`, they
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 This example starts with Solr's standard tokenizer, which breaks the field's text into tokens. All the tokens are then set to lowercase, which will facilitate case-insensitive matching at query time.
 
diff --git a/solr/solr-ref-guide/src/about-tokenizers.adoc b/solr/solr-ref-guide/src/about-tokenizers.adoc
index 77898d7..f94ef5e 100644
--- a/solr/solr-ref-guide/src/about-tokenizers.adoc
+++ b/solr/solr-ref-guide/src/about-tokenizers.adoc
@@ -20,6 +20,23 @@ The job of a <<tokenizers.adoc#tokenizers,tokenizer>> is to break up a stream of
 
 Characters in the input stream may be discarded, such as whitespace or other delimiters. They may also be added to or replaced, such as mapping aliases or abbreviations to normalized forms. A token contains various metadata in addition to its text value, such as the location at which the token occurs in the field. Because a tokenizer may produce tokens that diverge from the input text, you should not assume that the text of the token is the same text that occurs in the field, or that its [...]
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tok]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="standard"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-tok]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text" class="solr.TextField">
@@ -28,6 +45,8 @@ Characters in the input stream may be discarded, such as whitespace or other del
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 The class named in the tokenizer element is not the actual tokenizer, but rather a class that implements the `TokenizerFactory` API. This factory class will be called upon to create new tokenizer instances as needed. Objects created by the factory must derive from `Tokenizer`, which indicates that they produce sequences of tokens. If the tokenizer produces tokens that are usable as is, it may be the only component of the analyzer. Otherwise, the tokenizer's output tokens will serve as in [...]
 
diff --git a/solr/solr-ref-guide/src/analyzers.adoc b/solr/solr-ref-guide/src/analyzers.adoc
index 998f50f..36ca4c5 100644
--- a/solr/solr-ref-guide/src/analyzers.adoc
+++ b/solr/solr-ref-guide/src/analyzers.adoc
@@ -35,6 +35,27 @@ Even the most complex analysis requirements can usually be decomposed into a ser
 
 For example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="nametext" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="stop"/>
+    <filter name="englishPorter"/>
+  </analyzer>
+</fieldType>
+----
+Tokenizer and filter factory classes are referred to by their symbolic names (SPI names). Here, `name="standard"` refers to `org.apache.lucene.analysis.standard.StandardTokenizerFactory`.
+====
+[example.tab-pane#byclass]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="nametext" class="solr.TextField">
@@ -46,8 +67,9 @@ For example:
   </analyzer>
 </fieldType>
 ----
-
-Note that classes in the `org.apache.solr.analysis` package may be referred to here with the shorthand `solr.` prefix.
+Note that classes in the `org.apache.lucene.analysis` package may be referred to here with the shorthand `solr.` prefix.
+====
+--
 
 In this case, no Analyzer class was specified on the `<analyzer>` element. Rather, a sequence of more specialized classes are wired together and collectively act as the Analyzer for the field. The text of the field is passed to the first item in the list (`solr.StandardTokenizerFactory`), and the tokens that emerge from the last one (`solr.EnglishPorterFilterFactory`) are the terms that are used for indexing or querying any fields that use the "nametext" `fieldType`.
 
@@ -65,6 +87,30 @@ In many cases, the same analysis should be applied to both phases. This is desir
 
 If you provide a simple `<analyzer>` definition for a field type, as in the examples above, then it will be used for both indexing and queries. If you want distinct analyzers for each phase, you may include two `<analyzer>` definitions distinguished with a type attribute. For example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-phases]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="nametext" class="solr.TextField">
+  <analyzer type="index">
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="keepWord" words="keepwords.txt"/>
+    <filter name="synonym" synonyms="syns.txt"/>
+  </analyzer>
+  <analyzer type="query">
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-phases]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="nametext" class="solr.TextField">
@@ -80,6 +126,8 @@ If you provide a simple `<analyzer>` definition for a field type, as in the exam
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 In this theoretical example, at index time the text is tokenized, the tokens are set to lowercase, any that are not listed in `keepwords.txt` are discarded and those that remain are mapped to alternate values as defined by the synonym rules in the file `syns.txt`. This essentially builds an index from a restricted set of possible values and then normalizes them to values that may not even occur in the original text.
 
@@ -103,14 +151,14 @@ For most use cases, this provides the best possible behavior, but if you wish fo
 ----
 <fieldType name="nametext" class="solr.TextField">
   <analyzer type="index">
-    <tokenizer class="solr.StandardTokenizerFactory"/>
-    <filter class="solr.LowerCaseFilterFactory"/>
-    <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"/>
-    <filter class="solr.SynonymFilterFactory" synonyms="syns.txt"/>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="keepWord" words="keepwords.txt"/>
+    <filter name="synonym" synonyms="syns.txt"/>
   </analyzer>
   <analyzer type="query">
-    <tokenizer class="solr.StandardTokenizerFactory"/>
-    <filter class="solr.LowerCaseFilterFactory"/>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
   </analyzer>
   <!-- No analysis at all when doing queries that involved Multi-Term expansion -->
   <analyzer type="multiterm">
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
index cd81b3e..031706c 100644
--- a/solr/solr-ref-guide/src/charfilterfactories.adoc
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -28,6 +28,23 @@ This filter requires specifying a `mapping` argument, which is the path and name
 
 Example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-charfilter]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <charFilter name="mapping" mapping="mapping-FoldToASCII.txt"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+====
+[example.tab-pane#byclass-charfilter]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -36,6 +53,8 @@ Example:
   [...]
 </analyzer>
 ----
+====
+--
 
 Mapping file syntax:
 
@@ -101,6 +120,23 @@ The table below presents examples of HTML stripping.
 
 Example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-charfilter-htmlstrip]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <charFilter name="htmlStrip"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+====
+[example.tab-pane#byclass-charfilter-htmlstrip]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -109,6 +145,8 @@ Example:
   [...]
 </analyzer>
 ----
+====
+--
 
 == solr.ICUNormalizer2CharFilterFactory
 
@@ -124,6 +162,23 @@ Arguments:
 
 Example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-charfilter-icunormalizer2]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <charFilter name="icuNormalizer2"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+====
+[example.tab-pane#byclass-charfilter-icunormalizer2]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -132,6 +187,8 @@ Example:
   [...]
 </analyzer>
 ----
+====
+--
 
 == solr.PatternReplaceCharFilterFactory
 
@@ -145,6 +202,24 @@ Arguments:
 
 You can configure this filter in `schema.xml` like this:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-charfilter-patternreplace]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <charFilter name="patternReplace"
+             pattern="([nN][oO]\.)\s*(\d+)" replacement="$1$2"/>
+  <tokenizer ...>
+  [...]
+</analyzer>
+----
+====
+[example.tab-pane#byclass-charfilter-patternreplace]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -154,6 +229,8 @@ You can configure this filter in `schema.xml` like this:
   [...]
 </analyzer>
 ----
+====
+--
 
 The table below presents examples of regex-based pattern replacement:
 
diff --git a/solr/solr-ref-guide/src/filter-descriptions.adoc b/solr/solr-ref-guide/src/filter-descriptions.adoc
index eedfbe9..f59a366 100644
--- a/solr/solr-ref-guide/src/filter-descriptions.adoc
+++ b/solr/solr-ref-guide/src/filter-descriptions.adoc
@@ -20,20 +20,58 @@ Filters examine a stream of tokens and keep them, transform them or discard them
 
 You configure each filter with a `<filter>` element in `schema.xml` as a child of `<analyzer>`, following the `<tokenizer>` element. Filter definitions should follow a tokenizer or another filter definition because they take a `TokenStream` as input. For example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text" class="solr.TextField">
+  <analyzer type="index">
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-filter]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text" class="solr.TextField">
   <analyzer type="index">
     <tokenizer class="solr.StandardTokenizerFactory"/>
-    <filter class="solr.LowerCaseFilterFactory"/>...
+    <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
 </fieldType>
 ----
+====
+--
 
-The class attribute names a factory class that will instantiate a filter object as needed. Filter factory classes must implement the `org.apache.solr.analysis.TokenFilterFactory` interface. Like tokenizers, filters are also instances of TokenStream and thus are producers of tokens. Unlike tokenizers, filters also consume tokens from a TokenStream. This allows you to mix and match filters, in any order you prefer, downstream of a tokenizer.
+The name/class attribute names a factory class that will instantiate a filter object as needed. Filter factory classes must implement the `org.apache.lucene.analysis.util.TokenFilterFactory` interface. Like tokenizers, filters are also instances of TokenStream and thus are producers of tokens. Unlike tokenizers, filters also consume tokens from a TokenStream. This allows you to mix and match filters, in any order you prefer, downstream of a tokenizer.
 
 Arguments may be passed to tokenizer factories to modify their behavior by setting attributes on the `<filter>` element. For example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter2]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="semicolonDelimited" class="solr.TextField">
+  <analyzer type="query">
+    <tokenizer name="pattern" pattern="; " />
+    <filter name="length" min="2" max="7"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-filter-2]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="semicolonDelimited" class="solr.TextField">
@@ -43,6 +81,8 @@ Arguments may be passed to tokenizer factories to modify their behavior by setti
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 The following sections describe the filter factories that are included in this release of Solr.
 
@@ -77,6 +117,22 @@ This filter converts alphabetic, numeric, and symbolic Unicode characters which
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-asciifolding]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="asciiFolding" preserveOriginal="false" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-asciifolding]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -84,6 +140,8 @@ This filter converts alphabetic, numeric, and symbolic Unicode characters which
   <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
 </analyzer>
 ----
+====
+--
 
 *In:* "á" (Unicode character 00E1)
 
@@ -112,6 +170,23 @@ BeiderMorseFilter changed its behavior in Solr 5.0 due to an update to version 3
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-beidermorse]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="beiderMorse" nameType="GENERIC" ruleType="APPROX" concat="true" languageSet="auto">
+  </filter>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-beidermorse]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -120,6 +195,8 @@ BeiderMorseFilter changed its behavior in Solr 5.0 due to an update to version 3
   </filter>
 </analyzer>
 ----
+====
+--
 
 == Classic Filter
 
@@ -131,6 +208,22 @@ This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-classic]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="classic"/>
+  <filter name="classic"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-classic]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -138,6 +231,8 @@ This filter takes the output of the <<tokenizers.adoc#classic-tokenizer,Classic
   <filter class="solr.ClassicFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "I.B.M. cat's can't"
 
@@ -161,6 +256,22 @@ This filter creates word shingles by combining common tokens such as stop words
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-commongrams]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="commonGrams" words="stopwords.txt" ignoreCase="true"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-commongrams]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -168,6 +279,8 @@ This filter creates word shingles by combining common tokens such as stop words
   <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt" ignoreCase="true"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "the Cat"
 
@@ -191,6 +304,22 @@ Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-daitchmokotoffsoundex]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="daitchMokotoffSoundex" inject="true"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-daitchmokotoffsoundex]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -198,6 +327,8 @@ Implements the Daitch-Mokotoff Soundex algorithm, which allows identification of
   <filter class="solr.DaitchMokotoffSoundexFilterFactory" inject="true"/>
 </analyzer>
 ----
+====
+--
 
 == Double Metaphone Filter
 
@@ -215,6 +346,22 @@ This filter creates tokens using the http://commons.apache.org/proper/commons-co
 
 Default behavior for inject (true): keep the original token and add phonetic token(s) at the same position.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-doublemetaphone]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="doubleMetaphone"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-doublemetaphone]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -222,6 +369,8 @@ Default behavior for inject (true): keep the original token and add phonetic tok
   <filter class="solr.DoubleMetaphoneFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "four score and Kuczewski"
 
@@ -238,8 +387,8 @@ Discard original token (`inject="false"`).
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
+  <tokenizer name="standard"/>
+  <filter name="doubleMetaphone" inject="false"/>
 </analyzer>
 ----
 
@@ -267,6 +416,22 @@ This filter generates edge n-gram tokens of sizes within the given range.
 
 Default behavior.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-edgengram]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="edgeNGram"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-edgengram]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -274,6 +439,8 @@ Default behavior.
   <filter class="solr.EdgeNGramFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "four score and twenty"
 
@@ -288,8 +455,8 @@ A range of 1 to 4.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="4"/>
+  <tokenizer name="standard"/>
+  <filter name="edgeNGram" minGramSize="1" maxGramSize="4"/>
 </analyzer>
 ----
 
@@ -306,8 +473,8 @@ A range of 4 to 6.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.EdgeNGramFilterFactory" minGramSize="4" maxGramSize="6"/>
+  <tokenizer name="standard"/>
+  <filter name="edgeNGram" minGramSize="4" maxGramSize="6"/>
 </analyzer>
 ----
 
@@ -327,6 +494,22 @@ This filter stems plural English words to their singular form.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-englishminimalstem]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="englishMinimalStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-englishminimalstem]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -334,6 +517,8 @@ This filter stems plural English words to their singular form.
   <filter class="solr.EnglishMinimalStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "dogs cats"
 
@@ -351,6 +536,22 @@ This filter removes singular possessives (trailing *'s*) from words. Note that p
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-englishpossessive]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="englishPossessive"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-englishpossessive]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -358,6 +559,8 @@ This filter removes singular possessives (trailing *'s*) from words. Note that p
   <filter class="solr.EnglishPossessiveFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Man's dog bites dogs' man"
 
@@ -379,6 +582,22 @@ This filter outputs a single token which is a concatenation of the sorted and de
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-fingerprint]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="fingerprint" separator="_" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-fingerprint]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -386,6 +605,8 @@ This filter outputs a single token which is a concatenation of the sorted and de
   <filter class="solr.FingerprintFilterFactory" separator="_" />
 </analyzer>
 ----
+====
+--
 
 *In:* "the quick brown fox jumped over the lazy dog"
 
@@ -423,6 +644,26 @@ Be aware that your results will vary widely based on the quality of the provided
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-hunspellstem]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="hunspellStem"
+    dictionary="en_GB.dic"
+    affix="en_GB.aff"
+    ignoreCase="true"
+    strictAffixParsing="true" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-hunspellstem]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -434,6 +675,8 @@ Be aware that your results will vary widely based on the quality of the provided
     strictAffixParsing="true" />
 </analyzer>
 ----
+====
+--
 
 *In:* "jump jumping jumped"
 
@@ -453,6 +696,22 @@ Note that for this filter to work properly, the upstream tokenizer must not remo
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-hyphenatedwords]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="hyphenatedWords"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-hyphenatedwords]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -460,6 +719,8 @@ Note that for this filter to work properly, the upstream tokenizer must not remo
   <filter class="solr.HyphenatedWordsFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "A hyphen- ated word"
 
@@ -481,6 +742,22 @@ To use this filter, you must add additional .jars to Solr's classpath (as descri
 
 *Example without a filter:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-icufolding]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="icuFolding"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-icufolding]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -488,14 +765,16 @@ To use this filter, you must add additional .jars to Solr's classpath (as descri
   <filter class="solr.ICUFoldingFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *Example with a filter to exclude Swedish/Finnish characters:*
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.ICUFoldingFilterFactory" filter="[^åäöÅÄÖ]"/>
+  <tokenizer name="standard"/>
+  <filter name="icuFolding" filter="[^åäöÅÄÖ]"/>
 </analyzer>
 ----
 
@@ -523,6 +802,22 @@ This filter factory normalizes text according to one of five Unicode Normalizati
 
 *Example with NFKC_Casefold:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-icunormalizer2]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="icuNormalizer2" form="nfkc_cf" mode="compose"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-icunormalizer2]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -530,14 +825,16 @@ This filter factory normalizes text according to one of five Unicode Normalizati
   <filter class="solr.ICUNormalizer2FilterFactory" form="nfkc_cf" mode="compose"/>
 </analyzer>
 ----
+====
+--
 
 *Example with a filter to exclude Swedish/Finnish characters:*
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.ICUNormalizer2FilterFactory" form="nfkc_cf" mode="compose" filter="[^åäöÅÄÖ]"/>
+  <tokenizer name="standard"/>
+  <filter name="icuNormalizer2" form="nfkc_cf" mode="compose" filter="[^åäöÅÄÖ]"/>
 </analyzer>
 ----
 
@@ -557,6 +854,22 @@ This filter applies http://userguide.icu-project.org/transforms/general[ICU Tran
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-icutransform]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="icuTransform" id="Traditional-Simplified"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-icutransform]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -564,6 +877,8 @@ This filter applies http://userguide.icu-project.org/transforms/general[ICU Tran
   <filter class="solr.ICUTransformFilterFactory" id="Traditional-Simplified"/>
 </analyzer>
 ----
+====
+--
 
 For detailed information about ICU Transforms, see http://userguide.icu-project.org/transforms/general.
 
@@ -589,6 +904,22 @@ Where `keepwords.txt` contains:
 
 `happy funny silly`
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-keepword]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="keepWord" words="keepwords.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-keepword]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -596,6 +927,8 @@ Where `keepwords.txt` contains:
   <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Happy, sad or funny"
 
@@ -610,8 +943,8 @@ Same `keepwords.txt`, case insensitive:
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.KeepWordFilterFactory" words="keepwords.txt" ignoreCase="true"/>
+  <tokenizer name="standard"/>
+  <filter name="keepWord" words="keepwords.txt" ignoreCase="true"/>
 </analyzer>
 ----
 
@@ -628,9 +961,9 @@ Using LowerCaseFilterFactory before filtering for keep words, no `ignoreCase` fl
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"/>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="keepWord" words="keepwords.txt"/>
 </analyzer>
 ----
 
@@ -652,13 +985,31 @@ KStem is an alternative to the Porter Stem Filter for developers looking for a l
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-kstem]
+====
+[.tab-label]*With name*
 [source,xml]
 ----
 <analyzer type="index">
-  <tokenizer class="solr.StandardTokenizerFactory "/>
+  <tokenizer name="standard"/>
+  <filter name="kStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-kstem]
+====
+[.tab-label]*With class name (legacy)*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.KStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "jump jumping jumped"
 
@@ -682,6 +1033,22 @@ This filter passes tokens whose length falls within the min/max limit specified.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-length]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="length" min="3" max="7"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-length]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -689,6 +1056,8 @@ This filter passes tokens whose length falls within the min/max limit specified.
   <filter class="solr.LengthFilterFactory" min="3" max="7"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "turn right at Albuquerque"
 
@@ -712,6 +1081,23 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-limittokencount]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="limitTokenCount" maxTokenCount="10"
+          consumeAllTokens="false" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-limittokencount]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -720,6 +1106,8 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
           consumeAllTokens="false" />
 </analyzer>
 ----
+====
+--
 
 *In:* "1 2 3 4 5 6 7 8 9 10 11 12"
 
@@ -743,6 +1131,23 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-limittokenoffset]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="limitTokenOffset" maxStartOffset="10"
+          consumeAllTokens="false" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-limittokenoffset]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -751,6 +1156,8 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
           consumeAllTokens="false" />
 </analyzer>
 ----
+====
+--
 
 *In:* "0 2 4 6 8 A C E"
 
@@ -774,6 +1181,23 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-limittokenposition]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="limitTokenPosition" maxTokenPosition="3"
+          consumeAllTokens="false" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-limittokenposition]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -782,6 +1206,8 @@ By default, this filter ignores any tokens in the wrapped `TokenStream` once the
           consumeAllTokens="false" />
 </analyzer>
 ----
+====
+--
 
 *In:* "1 2 3 4 5"
 
@@ -799,6 +1225,22 @@ Converts any uppercase letters in a token to the equivalent lowercase token. All
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-lowercase]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-lowercase]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -806,6 +1248,8 @@ Converts any uppercase letters in a token to the equivalent lowercase token. All
   <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Down With CamelCase"
 
@@ -825,6 +1269,22 @@ This is specialized version of the <<Stop Filter,Stop Words Filter Factory>> tha
 //TODO: make this show an actual API call.
 With this configuration the set of words is named "english" and can be managed via `/solr/collection_name/schema/analysis/stopwords/english`
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-managedstop]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="managedStop" managed="english"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-managedstop]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -832,6 +1292,8 @@ With this configuration the set of words is named "english" and can be managed v
   <filter class="solr.ManagedStopFilterFactory" managed="english"/>
 </analyzer>
 ----
+====
+--
 
 See <<Stop Filter>> for example input/output.
 
@@ -865,6 +1327,27 @@ NOTE: Although this filter produces correct token graphs, it cannot consume an i
 //TODO: make this show an actual API call
 With this configuration the set of mappings is named "english" and can be managed via `/solr/collection_name/schema/analysis/synonyms/english`
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-managedsynonymgraph]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="managedSynonymGraph" managed="english"/>
+  <filter name="flattenGraph"/> <!-- required on index analyzers after graph filters -->
+</analyzer>
+<analyzer type="query">
+  <tokenizer name="standard"/>
+  <filter name="managedSynonymGraph" managed="english"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-managedsynonymgraph]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -877,6 +1360,8 @@ With this configuration the set of mappings is named "english" and can be manage
   <filter class="solr.ManagedSynonymGraphFilterFactory" managed="english"/>
 </analyzer>
 ----
+====
+--
 
 See <<Synonym Graph Filter>> below for example input/output.
 
@@ -896,6 +1381,22 @@ Generates n-gram tokens of sizes in the given range. Note that tokens are ordere
 
 Default behavior.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-ngram]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="nGram"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-ngram]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -903,6 +1404,8 @@ Default behavior.
   <filter class="solr.NGramFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "four score"
 
@@ -917,8 +1420,8 @@ A range of 1 to 4.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.NGramFilterFactory" minGramSize="1" maxGramSize="4"/>
+  <tokenizer name="standard"/>
+  <filter name="nGram" minGramSize="1" maxGramSize="4"/>
 </analyzer>
 ----
 
@@ -935,8 +1438,8 @@ A range of 3 to 5.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="5"/>
+  <tokenizer name="standard"/>
+  <filter name="nGram" minGramSize="3" maxGramSize="5"/>
 </analyzer>
 ----
 
@@ -960,6 +1463,22 @@ This filter adds a numeric floating point payload value to tokens that match a g
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-numericpayload]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="numericPayload" payload="0.75" typeMatch="word"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-numericpayload]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -967,6 +1486,8 @@ This filter adds a numeric floating point payload value to tokens that match a g
   <filter class="solr.NumericPayloadTokenFilterFactory" payload="0.75" typeMatch="word"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "bing bang boom"
 
@@ -992,6 +1513,22 @@ This filter applies a regular expression to each token and, for those that match
 
 Simple string replace:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-patternreplace]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="patternReplace" pattern="cat" replacement="dog"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-patternreplace]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -999,6 +1536,8 @@ Simple string replace:
   <filter class="solr.PatternReplaceFilterFactory" pattern="cat" replacement="dog"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "cat concatenate catycat"
 
@@ -1013,8 +1552,8 @@ String replacement, first occurrence only:
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.PatternReplaceFilterFactory" pattern="cat" replacement="dog" replace="first"/>
+  <tokenizer name="standard"/>
+  <filter name="patternReplace" pattern="cat" replacement="dog" replace="first"/>
 </analyzer>
 ----
 
@@ -1031,8 +1570,8 @@ More complex pattern with capture group reference in the replacement. Tokens tha
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.PatternReplaceFilterFactory" pattern="(\D+)(\d+)$" replacement="$1_$2"/>
+  <tokenizer name="standard"/>
+  <filter name="patternReplace" pattern="(\D+)(\d+)$" replacement="$1_$2"/>
 </analyzer>
 ----
 
@@ -1060,6 +1599,22 @@ This filter creates tokens using one of the phonetic encoding algorithms in the
 
 Default behavior for DoubleMetaphone encoding.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-phonetic]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="phonetic" encoder="DoubleMetaphone"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-phonetic]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1067,6 +1622,8 @@ Default behavior for DoubleMetaphone encoding.
   <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "four score and twenty"
 
@@ -1083,8 +1640,8 @@ Discard original token.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone" inject="false"/>
+  <tokenizer name="standard"/>
+  <filter name="phonetic" encoder="DoubleMetaphone" inject="false"/>
 </analyzer>
 ----
 
@@ -1101,8 +1658,8 @@ Default Soundex encoder.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.PhoneticFilterFactory" encoder="Soundex"/>
+  <tokenizer name="standard"/>
+  <filter name="phonetic" encoder="Soundex"/>
 </analyzer>
 ----
 
@@ -1122,6 +1679,22 @@ This filter applies the Porter Stemming Algorithm for English. The results are s
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-porterstem]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="porterStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-porterstem]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1129,6 +1702,8 @@ This filter applies the Porter Stemming Algorithm for English. The results are s
   <filter class="solr.PorterStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "jump jumping jumped"
 
@@ -1154,6 +1729,25 @@ This filter enables a form of conditional filtering: it only applies its wrapped
 
 All terms except those in `protectedTerms.txt` are truncated at 4 characters and lowercased:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-protectedterm]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="protectedTerm"
+          ignoreCase="true" protected="protectedTerms.txt"
+          wrappedFilters="truncate,lowercase"
+          truncate.prefixLength="4"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-protectedterm]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1164,6 +1758,8 @@ All terms except those in `protectedTerms.txt` are truncated at 4 characters and
           truncate.prefixLength="4"/>
 </analyzer>
 ----
+====
+--
 
 *Example:*
 
@@ -1174,8 +1770,8 @@ For all terms except those in `protectedTerms.txt`, synonyms are added, terms ar
 [source,xml]
 ----
 <analyzer type="query">
-  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.ProtectedTermFilterFactory"
+  <tokenizer name="whitespace"/>
+  <filter name="protectedTerm"
           ignoreCase="true" protected="protectedTerms.txt"
           wrappedFilters="SynonymGraph-fwd,ReverseString,SynonymGraph-rev"
           synonymgraph-FWD.synonyms="fwd-syns.txt"
@@ -1206,6 +1802,24 @@ Consider the following entry from a `synonyms.txt` file:
 
 When used in the following configuration:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-removeduplicates]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="query">
+  <tokenizer name="standard"/>
+  <filter name="synonymGraph" synonyms="synonyms.txt"/>
+  <filter name="englishMinimalStem"/>
+  <filter name="removeDuplicates"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-removeduplicates]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="query">
@@ -1215,6 +1829,8 @@ When used in the following configuration:
   <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Watch TV"
 
@@ -1246,6 +1862,23 @@ This filter reverses tokens to provide faster leading wildcard and prefix querie
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-reversedwildcard]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="reversedWildcard" withOriginal="true"
+    maxPosAsterisk="2" maxPosQuestion="1" minTrailing="2" maxFractionAsterisk="0"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-reversedwildcard]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1254,6 +1887,8 @@ This filter reverses tokens to provide faster leading wildcard and prefix querie
     maxPosAsterisk="2" maxPosQuestion="1" minTrailing="2" maxFractionAsterisk="0"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "*foo *bar"
 
@@ -1283,6 +1918,22 @@ This filter constructs shingles, which are token n-grams, from the token stream.
 
 Default behavior.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-shingle]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="shingle"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-shingle]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1290,6 +1941,8 @@ Default behavior.
   <filter class="solr.ShingleFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "To be, or what?"
 
@@ -1304,8 +1957,8 @@ A shingle size of four, do not include original token.
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.ShingleFilterFactory" maxShingleSize="4" outputUnigrams="false"/>
+  <tokenizer name="standard"/>
+  <filter name="shingle" maxShingleSize="4" outputUnigrams="false"/>
 </analyzer>
 ----
 
@@ -1335,6 +1988,22 @@ Solr contains Snowball stemmers for Armenian, Basque, Catalan, Danish, Dutch, En
 
 Default behavior:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-snowball]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="snowballPorter"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-snowball]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1342,6 +2011,8 @@ Default behavior:
   <filter class="solr.SnowballPorterFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "flip flipped flipping"
 
@@ -1356,8 +2027,8 @@ French stemmer, English words:
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.SnowballPorterFilterFactory" language="French"/>
+  <tokenizer name="standard"/>
+  <filter name="snowballPorter" language="French"/>
 </analyzer>
 ----
 
@@ -1374,8 +2045,8 @@ Spanish stemmer, Spanish words:
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/>
+  <tokenizer name="standard"/>
+  <filter name="snowballPorter" language="Spanish"/>
 </analyzer>
 ----
 
@@ -1405,6 +2076,22 @@ This filter discards, or _stops_ analysis of, tokens that are on the given stop
 
 Case-sensitive matching, capitalized words not stopped. Token positions skip stopped words.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-stop]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="stop" words="stopwords.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-stop]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1412,6 +2099,8 @@ Case-sensitive matching, capitalized words not stopped. Token positions skip sto
   <filter class="solr.StopFilterFactory" words="stopwords.txt"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "To be or what?"
 
@@ -1424,8 +2113,8 @@ Case-sensitive matching, capitalized words not stopped. Token positions skip sto
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
+  <tokenizer name="standard"/>
+  <filter name="stop" words="stopwords.txt" ignoreCase="true"/>
 </analyzer>
 ----
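+
+The referenced `stopwords.txt` uses the default "wordset" format: one stop word per line, with blank lines and lines beginning with `#` ignored. A minimal sketch of such a file (words chosen for illustration only):
+
+[source,text]
+----
+# hypothetical stopwords.txt in "wordset" format
+a
+an
+the
+to
+----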
 
@@ -1459,6 +2148,24 @@ By contrast, a query like "`find the popsicle`" would remove '`the`' as a stopwo
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-suggeststop]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="query">
+  <tokenizer name="whitespace"/>
+  <filter name="lowercase"/>
+  <filter name="suggestStop" ignoreCase="true"
+          words="stopwords.txt" format="wordset"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-suggeststop]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="query">
@@ -1468,6 +2175,8 @@ By contrast, a query like "`find the popsicle`" would remove '`the`' as a stopwo
           words="stopwords.txt" format="wordset"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "The The"
 
@@ -1535,6 +2244,27 @@ small => tiny,teeny,weeny
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-stop-synonymgraph]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="synonymGraph" synonyms="mysynonyms.txt"/>
+  <filter name="flattenGraph"/> <!-- required on index analyzers after graph filters -->
+</analyzer>
+<analyzer type="query">
+  <tokenizer name="standard"/>
+  <filter name="synonymGraph" synonyms="mysynonyms.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-stop-synonymgraph]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1547,6 +2277,8 @@ small => tiny,teeny,weeny
   <filter class="solr.SynonymGraphFilterFactory" synonyms="mysynonyms.txt"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "teh small couch"
 
@@ -1572,6 +2304,22 @@ This filter adds the numeric character offsets of the token as a payload value f
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-stop-tokenoffsetpayload]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="tokenOffsetPayload"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-stop-tokenoffsetpayload]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1579,6 +2327,8 @@ This filter adds the numeric character offsets of the token as a payload value f
   <filter class="solr.TokenOffsetPayloadTokenFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "bing bang boom"
 
@@ -1600,6 +2350,22 @@ This filter trims leading and/or trailing whitespace from tokens. Most tokenizer
 
 The PatternTokenizerFactory configuration used here splits the input on simple commas; it does not remove whitespace.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-trim]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="pattern" pattern=","/>
+  <filter name="trim"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-trim]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1607,6 +2373,8 @@ The PatternTokenizerFactory configuration used here splits the input on simple c
   <filter class="solr.TrimFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "one, two , three ,four "
 
@@ -1624,6 +2392,22 @@ This filter adds the token's type, as an encoded byte sequence, as its payload.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-typeaspayload]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace"/>
+  <filter name="typeAsPayload"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-typeaspayload]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1631,6 +2415,8 @@ This filter adds the token's type, as an encoded byte sequence, as its payload.
   <filter class="solr.TypeAsPayloadTokenFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Pay Bob's I.O.U."
 
@@ -1652,6 +2438,22 @@ This filter adds the token's type, as a token at the same position as the token,
 
 With the example below, each token's type will be emitted verbatim at the same position:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-typeassynonym]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="typeAsSynonym"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-typeassynonym]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1659,9 +2461,27 @@ With the example below, each token's type will be emitted verbatim at the same p
   <filter class="solr.TypeAsSynonymFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 With the example below, for a token "example.com" with type `<URL>`, the token emitted at the same position will be "\_type_<URL>":
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-typeassynonym-args]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="uax29URLEmail"/>
+  <filter name="typeAsSynonym" prefix="_type_"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-typeassynonym-args]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1669,6 +2489,8 @@ With the example below, for a token "example.com" with type `<URL>`, the token e
   <filter class="solr.TypeAsSynonymFilterFactory" prefix="_type_"/>
 </analyzer>
 ----
+====
+--
 
 == Type Token Filter
 
@@ -1686,12 +2508,29 @@ This filter blacklists or whitelists a specified list of token types, assuming t
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-typetoken]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <filter name="typeToken" types="stoptypes.txt" useWhitelist="true"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-typetoken]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <filter class="solr.TypeTokenFilterFactory" types="stoptypes.txt" useWhitelist="true"/>
 </analyzer>
 ----
+====
+--
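+
+The `stoptypes.txt` file lists one token type per line. A minimal sketch, assuming the type names emitted by the Standard Tokenizer:
+
+[source,text]
+----
+<NUM>
+<ALPHANUM>
+----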
 
 == Word Delimiter Filter
 
@@ -1770,6 +2609,27 @@ $ => DIGIT
 
 Default behavior. The whitespace tokenizer is used here to preserve non-alphanumeric characters.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-worddelimitergraph]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph"/>
+  <filter name="flattenGraph"/> <!-- required on index analyzers after graph filters -->
+</analyzer>
+<analyzer type="query">
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-worddelimitergraph]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1777,12 +2637,13 @@ Default behavior. The whitespace tokenizer is used here to preserve non-alphanum
   <filter class="solr.WordDelimiterGraphFilterFactory"/>
   <filter class="solr.FlattenGraphFilterFactory"/> <!-- required on index analyzers after graph filters -->
 </analyzer>
-
 <analyzer type="query">
   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
   <filter class="solr.WordDelimiterGraphFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "hot-spot RoboBlaster/9000 100XL"
 
@@ -1797,8 +2658,8 @@ Do not split on case changes, and do not generate number parts. Note that by not
 [source,xml]
 ----
 <analyzer type="query">
-  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterGraphFilterFactory" generateNumberParts="0" splitOnCaseChange="0"/>
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph" generateNumberParts="0" splitOnCaseChange="0"/>
 </analyzer>
 ----
 
@@ -1815,8 +2676,8 @@ Concatenate word parts and number parts, but not word and number parts that occu
 [source,xml]
 ----
 <analyzer type="query">
-  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterGraphFilterFactory" catenateWords="1" catenateNumbers="1"/>
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph" catenateWords="1" catenateNumbers="1"/>
 </analyzer>
 ----
 
@@ -1833,8 +2694,8 @@ Concatenate all. Word and/or number parts are joined together.
 [source,xml]
 ----
 <analyzer type="query">
-  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterGraphFilterFactory" catenateAll="1"/>
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph" catenateAll="1"/>
 </analyzer>
 ----
 
@@ -1851,8 +2712,8 @@ Using a protected words list that contains "AstroBlaster" and "XL-5000" (among o
 [source,xml]
 ----
 <analyzer type="query">
-  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
-  <filter class="solr.WordDelimiterGraphFilterFactory" protected="protwords.txt"/>
+  <tokenizer name="whitespace"/>
+  <filter name="wordDelimiterGraph" protected="protwords.txt"/>
 </analyzer>
 ----
 
diff --git a/solr/solr-ref-guide/src/language-analysis.adoc b/solr/solr-ref-guide/src/language-analysis.adoc
index b32e6be..d71b8f5 100644
--- a/solr/solr-ref-guide/src/language-analysis.adoc
+++ b/solr/solr-ref-guide/src/language-analysis.adoc
@@ -31,6 +31,25 @@ Protects words from being modified by stemmers. A customized protected word list
 
 A sample Solr `protwords.txt` with comments can be found in the `sample_techproducts_configs` <<config-sets.adoc#config-sets,configset>> directory:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-keywordmarker]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldtype name="myfieldtype" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="whitespace"/>
+    <filter name="keywordMarker" protected="protwords.txt" />
+    <filter name="porterStem" />
+  </analyzer>
+</fieldtype>
+----
+====
+[example.tab-pane#byclass-filter-keywordmarker]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldtype name="myfieldtype" class="solr.TextField">
@@ -41,6 +60,8 @@ A sample Solr `protwords.txt` with comments can be found in the `sample_techprod
   </analyzer>
 </fieldtype>
 ----
+====
+--
 
 == KeywordRepeatFilterFactory
 
@@ -52,6 +73,26 @@ To configure, add the `KeywordRepeatFilterFactory` early in the analysis chain.
 
 A sample fieldType configuration could look like this:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-keywordrepeat]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldtype name="english_stem_preserve_original" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="standard"/>
+    <filter name="keywordRepeat" />
+    <filter name="porterStem" />
+    <filter name="removeDuplicates" />
+  </analyzer>
+</fieldtype>
+----
+====
+[example.tab-pane#byclass-filter-keywordrepeat]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldtype name="english_stem_preserve_original" class="solr.TextField">
@@ -63,6 +104,8 @@ A sample fieldType configuration could look like this:
   </analyzer>
 </fieldtype>
 ----
+====
+--
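+
+A sketch of the combined effect of this chain (indicative, not verified analyzer output, assuming the Porter stemmer maps "running" to "run"): `keywordRepeat` emits a keyword-marked copy alongside each stemmable token, `porterStem` stems only the unmarked copies, and `removeDuplicates` collapses positions where both copies end up identical.
+
+*In:* "running run"
+
+*Tokenizer to Filter:* "running", "run"
+
+*Out:* "running", "run", "run"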
 
 IMPORTANT: When adding the same token twice, it will also score twice (double), so you may have to re-tune your ranking rules.
 
@@ -72,7 +115,25 @@ Overrides stemming algorithms by applying a custom mapping, then protecting thes
 
 A customized mapping of words to stems, in a tab-separated file, can be specified to the `dictionary` attribute in the schema. Words in this mapping will be stemmed to the stems from the file, and will not be further changed by any stemmer.
 
-
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-stemmeroverride]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldtype name="myfieldtype" class="solr.TextField">
+  <analyzer>
+    <tokenizer name="whitespace"/>
+    <filter name="stemmerOverride" dictionary="stemdict.txt" />
+    <filter name="porterStem" />
+  </analyzer>
+</fieldtype>
+----
+====
+[example.tab-pane#byclass-filter-stemmeroverride]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldtype name="myfieldtype" class="solr.TextField">
@@ -83,6 +144,8 @@ A customized mapping of words to stems, in a tab-separated file, can be specifie
   </analyzer>
 </fieldtype>
 ----
+====
+--
 
 A sample `stemdict.txt` file is shown below:
 
@@ -117,6 +180,22 @@ Compound words are most commonly found in Germanic languages.
 
 Assume that `germanwords.txt` contains at least the following words: `dumm kopf donau dampf schiff`
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-dictionarycompoundword]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="dictionaryCompoundWord" dictionary="germanwords.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-dictionarycompoundword]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -124,6 +203,8 @@ Assume that `germanwords.txt` contains at least the following words: `dumm kopf
   <filter class="solr.DictionaryCompoundWordTokenFilterFactory" dictionary="germanwords.txt"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Donaudampfschiff dummkopf"
 
@@ -330,6 +411,22 @@ This can increase recall by causing more matches. On the other hand, it can redu
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-lang-asciifolding]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="asciiFolding"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-lang-asciifolding]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -337,6 +434,8 @@ This can increase recall by causing more matches. On the other hand, it can redu
   <filter class="solr.ASCIIFoldingFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Björn Ångström"
 
@@ -356,6 +455,22 @@ This can increase recall by causing more matches. On the other hand, it can redu
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-decimaldigit]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="decimalDigit"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-decimaldigit]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -363,6 +478,8 @@ This can increase recall by causing more matches. On the other hand, it can redu
   <filter class="solr.DecimalDigitFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 == OpenNLP Integration
 
@@ -386,6 +503,23 @@ The OpenNLP Tokenizer takes two language-specific binary model files as paramete
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-opennlp]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="openNLP"
+             sentenceModel="en-sent.bin"
+             tokenizerModel="en-tokenizer.bin"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-opennlp]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -394,6 +528,8 @@ The OpenNLP Tokenizer takes two language-specific binary model files as paramete
              tokenizerModel="en-tokenizer.bin"/>
 </analyzer>
 ----
+====
+--
 
 === OpenNLP Part-Of-Speech Filter
 
@@ -427,6 +563,26 @@ $
 
 Index the POS for each token as a payload:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-opennlppos]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="openNLP"
+             sentenceModel="en-sent.bin"
+             tokenizerModel="en-tokenizer.bin"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="typeAsPayload"/>
+  <filter name="type" types="stop.pos.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-opennlppos]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -438,18 +594,20 @@ Index the POS for each token as a payload:
   <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
 </analyzer>
 ----
+====
+--
 
 Index the POS for each token as a synonym, after prefixing the POS with "@" (see the <<filter-descriptions.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.OpenNLPTokenizerFactory"
+  <tokenizer name="openNLP"
              sentenceModel="en-sent.bin"
              tokenizerModel="en-tokenizer.bin"/>
-  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
-  <filter class="solr.TypeAsSynonymFilterFactory" prefix="@"/>
-  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="typeAsSynonym" prefix="@"/>
+  <filter name="type" types="stop.pos.txt"/>
 </analyzer>
 ----
 
@@ -458,11 +616,11 @@ Only index nouns - the `keep.pos.txt` file contains lines `NN`, `NNS`, `NNP` and
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.OpenNLPTokenizerFactory"
+  <tokenizer name="openNLP"
              sentenceModel="en-sent.bin"
              tokenizerModel="en-tokenizer.bin"/>
-  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
-  <filter class="solr.TypeTokenFilterFactory" types="keep.pos.txt" useWhitelist="true"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="type" types="keep.pos.txt" useWhitelist="true"/>
 </analyzer>
 ----
 
@@ -484,6 +642,26 @@ NOTE: Lucene currently does not index token types, so if you want to keep this i
 
 Index the phrase chunk label for each token as a payload:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-opennlpchunker]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="openNLP"
+             sentenceModel="en-sent.bin"
+             tokenizerModel="en-tokenizer.bin"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="openNLPChunker" chunkerModel="en-chunker.bin"/>
+  <filter name="typeAsPayload"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-opennlpchunker]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -495,18 +673,20 @@ Index the phrase chunk label for each token as a payload:
   <filter class="solr.TypeAsPayloadFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 Index the phrase chunk label for each token as a synonym, after prefixing it with "#" (see the <<filter-descriptions.adoc#type-as-synonym-filter,TypeAsSynonymFilter description>>):
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.OpenNLPTokenizerFactory"
+  <tokenizer name="openNLP"
              sentenceModel="en-sent.bin"
              tokenizerModel="en-tokenizer.bin"/>
-  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
-  <filter class="solr.OpenNLPChunkerFactory" chunkerModel="en-chunker.bin"/>
-  <filter class="solr.TypeAsSynonymFilterFactory" prefix="#"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="openNLPChunker" chunkerModel="en-chunker.bin"/>
+  <filter name="typeAsSynonym" prefix="#"/>
 </analyzer>
 ----
 
@@ -528,6 +708,28 @@ Either `dictionary` or `lemmatizerModel` must be provided, and both may be provi
 
 Perform dictionary-based lemmatization, and fall back to model-based lemmatization for out-of-vocabulary tokens (see the <<OpenNLP Part-Of-Speech Filter>> section above for information about using `TypeTokenFilter` to avoid indexing punctuation):
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-filter-opennlplemmatizer]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="openNLP"
+             sentenceModel="en-sent.bin"
+             tokenizerModel="en-tokenizer.bin"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="openNLPLemmatizer"
+          dictionary="lemmas.txt"
+          lemmatizerModel="en-lemmatizer.bin"/>
+  <filter name="type" types="stop.pos.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-filter-opennlplemmatizer]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -541,18 +743,20 @@ Perform dictionary-based lemmatization, and fall back to model-based lemmatizati
   <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
 </analyzer>
 ----
+====
+--
 
 Perform dictionary-based lemmatization only:
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.OpenNLPTokenizerFactory"
+  <tokenizer name="openNLP"
              sentenceModel="en-sent.bin"
              tokenizerModel="en-tokenizer.bin"/>
-  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
-  <filter class="solr.OpenNLPLemmatizerFilterFactory" dictionary="lemmas.txt"/>
-  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="openNLPLemmatizer" dictionary="lemmas.txt"/>
+  <filter name="type" types="stop.pos.txt"/>
 </analyzer>
 ----
 
@@ -561,14 +765,14 @@ Perform model-based lemmatization only, preserving the original token and emitti
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.OpenNLPTokenizerFactory"
+  <tokenizer name="openNLP"
              sentenceModel="en-sent.bin"
              tokenizerModel="en-tokenizer.bin"/>
-  <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
-  <filter class="solr.KeywordRepeatFilterFactory"/>
-  <filter class="solr.OpenNLPLemmatizerFilterFactory" lemmatizerModel="en-lemmatizer.bin"/>
-  <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
-  <filter class="solr.TypeTokenFilterFactory" types="stop.pos.txt"/>
+  <filter name="openNLPPOS" posTaggerModel="en-pos-maxent.bin"/>
+  <filter name="keywordRepeat"/>
+  <filter name="openNLPLemmatizer" lemmatizerModel="en-lemmatizer.bin"/>
+  <filter name="removeDuplicates"/>
+  <filter name="type" types="stop.pos.txt"/>
 </analyzer>
 ----
 
@@ -626,6 +830,23 @@ This algorithm defines both character normalization and stemming, so these are s
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-arabic]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="arabicNormalization"/>
+  <filter name="arabicStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-arabic]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -634,6 +855,8 @@ This algorithm defines both character normalization and stemming, so these are s
   <filter class="solr.ArabicStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Bengali
 
@@ -645,6 +868,23 @@ There are two filters written specifically for dealing with Bengali language. Th
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-bengali]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="bengaliNormalization"/>
+  <filter name="bengaliStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-bengali]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer> 
@@ -652,8 +892,9 @@ There are two filters written specifically for dealing with Bengali language. Th
   <filter class="solr.BengaliNormalizationFilterFactory"/> 
   <filter class="solr.BengaliStemFilterFactory"/>       
 </analyzer>
-
 ----
+====
+--
 
 *Normalisation* - `মানুষ` \-> `মানুস`
 
@@ -670,6 +911,22 @@ This is a Java filter written specifically for stemming the Brazilian dialect of
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-brazilian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="brazilianStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-brazilian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -677,6 +934,8 @@ This is a Java filter written specifically for stemming the Brazilian dialect of
   <filter class="solr.BrazilianStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "praia praias"
 
@@ -694,6 +953,23 @@ Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/j
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-bulgarian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="bulgarianStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-bulgarian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -702,6 +978,8 @@ Solr includes a light stemmer for Bulgarian, following http://members.unine.ch/j
   <filter class="solr.BulgarianStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Catalan
 
@@ -715,6 +993,25 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-catalan]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="elision"
+          articles="lang/contractions_ca.txt"/>
+  <filter name="snowballPorter" language="Catalan" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-catalan]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -725,6 +1022,8 @@ Solr can stem Catalan using the Snowball Porter Stemmer with an argument of `lan
   <filter class="solr.SnowballPorterFilterFactory" language="Catalan" />
 </analyzer>
 ----
+====
+--
 
 *In:* "llengües llengua"
 
@@ -742,6 +1041,23 @@ The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>
 
 *Examples:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-trad-chinese]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="icu"/>
+  <filter name="cjkWidth"/>
+  <filter name="lowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-trad-chinese]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -750,14 +1066,16 @@ The default configuration of the <<tokenizers.adoc#icu-tokenizer,ICU Tokenizer>>
   <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.CJKBigramFilterFactory"/>
-  <filter class="solr.CJKWidthFilterFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="cjkBigram"/>
+  <filter name="cjkWidth"/>
+  <filter name="lowercase"/>
 </analyzer>
 ----
 
@@ -797,6 +1115,26 @@ Also useful for Chinese analysis:
 
 *Examples:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-chinese]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="hmmChinese"/>
+  <filter name="cjkWidth"/>
+  <filter name="stop"
+          words="org/apache/lucene/analysis/cn/smart/stopwords.txt"/>
+  <filter name="porterStem"/>
+  <filter name="lowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-chinese]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -808,15 +1146,17 @@ Also useful for Chinese analysis:
   <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.ICUTokenizerFactory"/>
-  <filter class="solr.CJKWidthFilterFactory"/>
-  <filter class="solr.StopFilterFactory"
+  <tokenizer name="icu"/>
+  <filter name="cjkWidth"/>
+  <filter name="stop"
           words="org/apache/lucene/analysis/cn/smart/stopwords.txt"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
+  <filter name="lowercase"/>
 </analyzer>
 ----
 
@@ -846,6 +1186,23 @@ Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.c
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-czech]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="czechStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-czech]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -854,6 +1211,8 @@ Solr includes a light stemmer for Czech, following https://dl.acm.org/citation.c
   <filter class="solr.CzechStemFilterFactory"/>
 <analyzer>
 ----
+====
+--
 
 *In:* "prezidenští, prezidenta, prezidentského"
 
@@ -875,6 +1234,23 @@ Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-danish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="snowballPorter" language="Danish" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-danish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -883,6 +1259,8 @@ Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
   <filter class="solr.SnowballPorterFilterFactory" language="Danish" />
 </analyzer>
 ----
+====
+--
 
 *In:* "undersøg undersøgelse"
 
@@ -902,6 +1280,23 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-dutch]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="snowballPorter" language="Dutch"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-dutch]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -910,6 +1305,8 @@ Solr can stem Dutch using the Snowball Porter Stemmer with an argument of `langu
   <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "kanaal kanalen"
 
@@ -929,6 +1326,23 @@ Solr can stem Estonian using the Snowball Porter Stemmer with an argument of `la
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-estonian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="snowballPorter" language="Estonian"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-estonian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -937,6 +1351,8 @@ Solr can stem Estonian using the Snowball Porter Stemmer with an argument of `la
   <filter class="solr.SnowballPorterFilterFactory" language="Estonian"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Taevani tõustes"
 
@@ -954,6 +1370,22 @@ Solr includes support for stemming Finnish, and Lucene includes an example stopw
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-finnish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="finnishLightStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-finnish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -961,6 +1393,8 @@ Solr includes support for stemming Finnish, and Lucene includes an example stopw
   <filter class="solr.FinnishLightStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "kala kalat"
 
@@ -985,6 +1419,24 @@ Removes article elisions from a token stream. This filter can be useful for lang
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-french]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="elision"
+          ignoreCase="true"
+          articles="lang/contractions_fr.txt"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-french]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -994,6 +1446,8 @@ Removes article elisions from a token stream. This filter can be useful for lang
           articles="lang/contractions_fr.txt"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "L'histoire d'art"
 
@@ -1014,22 +1468,22 @@ Solr includes three stemmers for French: one in the `solr.SnowballPorterFilterFa
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory"
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="elision"
           articles="lang/contractions_fr.txt"/>
-  <filter class="solr.FrenchLightStemFilterFactory"/>
+  <filter name="frenchLightStem"/>
 </analyzer>
 ----
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ElisionFilterFactory"
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="elision"
           articles="lang/contractions_fr.txt"/>
-  <filter class="solr.FrenchMinimalStemFilterFactory"/>
+  <filter name="frenchMinimalStem"/>
 </analyzer>
 ----
 
@@ -1050,6 +1504,23 @@ Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-galician]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="galicianStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-galician]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1058,6 +1529,8 @@ Solr includes a stemmer for Galician following http://bvg.udc.es/recursos_lingua
   <filter class="solr.GalicianStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "felizmente Luzes"
 
@@ -1075,27 +1548,45 @@ Solr includes four stemmers for German: one in the `solr.SnowballPorterFilterFac
 
 *Examples:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-german]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="germanStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-german]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
-  <tokenizer class="solr.StandardTokenizerFactory "/>
+  <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.GermanStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 [source,xml]
 ----
 <analyzer type="index">
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.GermanLightStemFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="germanLightStem"/>
 </analyzer>
 ----
 
 [source,xml]
 ----
 <analyzer type="index">
-  <tokenizer class="solr.StandardTokenizerFactory "/>
-  <filter class="solr.GermanMinimalStemFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="germanMinimalStem"/>
 </analyzer>
 ----
 
@@ -1120,6 +1611,22 @@ Use of custom charsets is no longer supported as of Solr 3.1. If you need to ind
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-greek]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="greekLowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-greek]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1127,6 +1634,8 @@ Use of custom charsets is no longer supported as of Solr 3.1. If you need to ind
   <filter class="solr.GreekLowerCaseFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Hindi
 
@@ -1138,6 +1647,24 @@ Solr includes support for stemming Hindi following http://computing.open.ac.uk/S
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-hindi]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="indicNormalization"/>
+  <filter name="hindiNormalization"/>
+  <filter name="hindiStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-hindi]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1147,6 +1674,8 @@ Solr includes support for stemming Hindi following http://computing.open.ac.uk/S
   <filter class="solr.HindiStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Indonesian
 
@@ -1158,6 +1687,23 @@ Solr includes support for stemming Indonesian (Bahasa Indonesia) following http:
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-indonesian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="indonesianStem" stemDerivational="true" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-indonesian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1166,6 +1712,8 @@ Solr includes support for stemming Indonesian (Bahasa Indonesia) following http:
   <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true" />
 </analyzer>
 ----
+====
+--
 
 *In:* "sebagai sebagainya"
 
@@ -1183,6 +1731,25 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-italian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="elision"
+          articles="lang/contractions_it.txt"/>
+  <filter name="italianLightStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-italian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1193,6 +1760,8 @@ Solr includes two stemmers for Italian: one in the `solr.SnowballPorterFilterFac
   <filter class="solr.ItalianLightStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "propaga propagare propagamento"
 
@@ -1212,6 +1781,25 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-irish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="elision"
+          articles="lang/contractions_ga.txt"/>
+  <filter name="irishLowercase"/>
+  <filter name="snowballPorter" language="Irish" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-irish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1222,6 +1810,8 @@ Solr can stem Irish using the Snowball Porter Stemmer with an argument of `langu
   <filter class="solr.SnowballPorterFilterFactory" language="Irish" />
 </analyzer>
 ----
+====
+--
 
 *In:* "siopadóireacht síceapatacha b'fhearr m'athair"
 
@@ -1321,6 +1911,31 @@ Folds fullwidth ASCII variants into the equivalent Basic Latin forms, and folds
 
 Example:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-japanese]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text_ja" positionIncrementGap="100" autoGeneratePhraseQueries="false">
+  <analyzer>
+    <!-- Uncomment if you need to handle iteration marks: -->
+    <!-- <charFilter name="japaneseIterationMark" /> -->
+    <tokenizer name="japanese" mode="search" userDictionary="lang/userdict_ja.txt"/>
+    <filter name="japaneseBaseForm"/>
+    <filter name="japanesePartOfSpeechStop" tags="lang/stoptags_ja.txt"/>
+    <filter name="cjkWidth"/>
+    <filter name="stop" ignoreCase="true" words="lang/stopwords_ja.txt"/>
+    <filter name="japaneseKatakanaStem" minimumLength="4"/>
+    <filter name="lowercase"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-lang-japanese]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text_ja" positionIncrementGap="100" autoGeneratePhraseQueries="false">
@@ -1337,6 +1952,8 @@ Example:
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 [[hebrew-lao-myanmar-khmer]]
 === Hebrew, Lao, Myanmar, Khmer
@@ -1355,6 +1972,25 @@ Solr includes support for stemming Latvian, and Lucene includes an example stopw
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-latvian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text_lvstem" class="solr.TextField" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="latvianStem"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-lang-latvian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text_lvstem" class="solr.TextField" positionIncrementGap="100">
@@ -1365,6 +2001,8 @@ Solr includes support for stemming Latvian, and Lucene includes an example stopw
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 *In:* "tirgiem tirgus"
 
@@ -1411,6 +2049,26 @@ The second pass is to pick up -dom and -het endings. Consider this example:
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-norwegian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
+    <filter name="norwegianLightStem"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-lang-norwegian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
@@ -1422,6 +2080,8 @@ The second pass is to pick up -dom and -het endings. Consider this example:
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 *In:* "Forelskelsen"
 
@@ -1449,10 +2109,10 @@ The `NorwegianMinimalStemFilterFactory` stems plural forms of Norwegian nouns on
 ----
 <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
-    <tokenizer class="solr.StandardTokenizerFactory"/>
-    <filter class="solr.LowerCaseFilterFactory"/>
-    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
-    <filter class="solr.NorwegianMinimalStemFilterFactory"/>
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+    <filter name="stop" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball"/>
+    <filter name="norwegianMinimalStem"/>
   </analyzer>
 </fieldType>
 ----
@@ -1475,14 +2135,33 @@ Solr includes support for normalizing Persian, and Lucene includes an example st
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-persian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="arabicNormalization"/>
+  <filter name="persianNormalization"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-persian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
   <filter class="solr.ArabicNormalizationFilterFactory"/>
-  <filter class="solr.PersianNormalizationFilterFactory">
+  <filter class="solr.PersianNormalizationFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Polish
 
@@ -1494,6 +2173,23 @@ Solr provides support for Polish stemming with the `solr.StempelPolishStemFilter
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-polish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="stempelPolishStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-polish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1502,13 +2198,15 @@ Solr provides support for Polish stemming with the `solr.StempelPolishStemFilter
   <filter class="solr.StempelPolishStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.MorfologikFilterFactory" dictionary="morfologik/stemming/polish/polish.dict"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="morfologik" dictionary="morfologik/stemming/polish/polish.dict"/>
+  <filter name="lowercase"/>
 </analyzer>
 ----
 
@@ -1534,6 +2232,23 @@ Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilte
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-portuguese]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="portugueseStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-portuguese]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1542,22 +2257,24 @@ Solr includes four stemmers for Portuguese: one in the `solr.SnowballPorterFilte
   <filter class="solr.PortugueseStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.PortugueseLightStemFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="portugueseLightStem"/>
 </analyzer>
 ----
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.PortugueseMinimalStemFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="portugueseMinimalStem"/>
 </analyzer>
 ----
 
@@ -1579,6 +2296,23 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-romanian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="snowballPorter" language="Romanian" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-romanian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1587,6 +2321,8 @@ Solr can stem Romanian using the Snowball Porter Stemmer with an argument of `la
   <filter class="solr.SnowballPorterFilterFactory" language="Romanian" />
 </analyzer>
 ----
+====
+--
 
 === Russian
 
@@ -1600,6 +2336,23 @@ Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFac
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-russian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="russianLightStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-russian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1608,6 +2361,8 @@ Solr includes two stemmers for Russian: one in the `solr.SnowballPorterFilterFac
   <filter class="solr.RussianLightStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Scandinavian
 
@@ -1633,6 +2388,23 @@ It's a semantically less destructive solution than `ScandinavianFoldingFilter`,
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-scandinavian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="scandinavianNormalization"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-scandinavian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1641,6 +2413,8 @@ It's a semantically less destructive solution than `ScandinavianFoldingFilter`,
   <filter class="solr.ScandinavianNormalizationFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "blåbærsyltetøj blåbärsyltetöj blaabaarsyltetoej blabarsyltetoj"
 
@@ -1663,9 +2437,9 @@ It's a semantically more destructive solution than `ScandinavianNormalizationFil
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.LowerCaseFilterFactory"/>
-  <filter class="solr.ScandinavianFoldingFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="scandinavianFolding"/>
 </analyzer>
 ----
 
@@ -1694,6 +2468,23 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-serbian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="serbianNormalization" haircut="bald"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-serbian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1702,6 +2493,8 @@ See the Solr wiki for tips & advice on using this filter: https://wiki.apache.or
   <filter class="solr.SerbianNormalizationFilterFactory" haircut="bald"/>
 </analyzer>
 ----
+====
+--
 
 === Spanish
 
@@ -1713,6 +2506,23 @@ Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFac
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-spanish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="spanishLightStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-spanish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1721,6 +2531,8 @@ Solr includes two stemmers for Spanish: one in the `solr.SnowballPorterFilterFac
   <filter class="solr.SpanishLightStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "torear toreara torearlo"
 
@@ -1743,6 +2555,23 @@ Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-swedish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="lowercase"/>
+  <filter name="swedishLightStem"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-swedish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1751,6 +2580,8 @@ Also relevant are the <<Scandinavian,Scandinavian normalization filters>>.
   <filter class="solr.SwedishLightStemFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "kloke klokhet klokheten"
 
@@ -1768,6 +2599,22 @@ This filter converts sequences of Thai characters into individual Thai words. Un
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-thai]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer type="index">
+  <tokenizer name="thai"/>
+  <filter name="lowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-thai]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer type="index">
@@ -1775,6 +2622,8 @@ This filter converts sequences of Thai characters into individual Thai words. Un
   <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
 ----
+====
+--
 
 === Turkish
 
@@ -1786,6 +2635,24 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-turkish]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="apostrophe"/>
+  <filter name="turkishLowercase"/>
+  <filter name="snowballPorter" language="Turkish"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-turkish]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1795,19 +2662,21 @@ Solr includes support for stemming Turkish with the `solr.SnowballPorterFilterFa
   <filter class="solr.SnowballPorterFilterFactory" language="Turkish"/>
 </analyzer>
 ----
+====
+--
 
 *Another example, illustrating diacritics-insensitive search:*
 
 [source,xml]
 ----
 <analyzer>
-  <tokenizer class="solr.StandardTokenizerFactory"/>
-  <filter class="solr.ApostropheFilterFactory"/>
-  <filter class="solr.TurkishLowerCaseFilterFactory"/>
-  <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="true"/>
-  <filter class="solr.KeywordRepeatFilterFactory"/>
-  <filter class="solr.TruncateTokenFilterFactory" prefixLength="5"/>
-  <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+  <tokenizer name="standard"/>
+  <filter name="apostrophe"/>
+  <filter name="turkishLowercase"/>
+  <filter name="asciiFolding" preserveOriginal="true"/>
+  <filter name="keywordRepeat"/>
+  <filter name="truncate" prefixLength="5"/>
+  <filter name="removeDuplicates"/>
 </analyzer>
 ----
 
@@ -1825,6 +2694,24 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-lang-ukranian]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+  <filter name="stop" words="org/apache/lucene/analysis/uk/stopwords.txt"/>
+  <filter name="lowercase"/>
+  <filter name="morfologik" dictionary="org/apache/lucene/analysis/uk/ukrainian.dict"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-lang-ukranian]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -1834,5 +2721,7 @@ Lucene also includes an example Ukrainian stopword list, in the `lucene-analyzer
   <filter class="solr.MorfologikFilterFactory" dictionary="org/apache/lucene/analysis/uk/ukrainian.dict"/>
 </analyzer>
 ----
+====
+--
 
 The Morfologik `dictionary` parameter value is a constant specifying which dictionary to choose. The dictionary resource must be named `path/to/_language_.dict` and have an associated `.info` metadata file. See http://morfologik.blogspot.com/[the Morfologik project] for details. If the dictionary attribute is not provided, the Polish dictionary is loaded and used by default.
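As described above, omitting the `dictionary` attribute falls back to the Polish dictionary; a minimal sketch of a configuration relying on that default (illustrative, not one of the Ref Guide's examples):

```xml
<analyzer>
  <tokenizer name="standard"/>
  <filter name="lowercase"/>
  <!-- no dictionary attribute: the Polish dictionary is loaded by default -->
  <filter name="morfologik"/>
</analyzer>
```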
diff --git a/solr/solr-ref-guide/src/schema-api.adoc b/solr/solr-ref-guide/src/schema-api.adoc
index 68f865a..96de55e 100644
--- a/solr/solr-ref-guide/src/schema-api.adoc
+++ b/solr/solr-ref-guide/src/schema-api.adoc
@@ -333,13 +333,13 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "positionIncrementGap":"100",
      "analyzer" : {
         "charFilters":[{
-           "class":"solr.PatternReplaceCharFilterFactory",
+           "name":"patternReplace",
            "replacement":"$1$1",
            "pattern":"([a-zA-Z])\\\\1+" }],
         "tokenizer":{
-           "class":"solr.WhitespaceTokenizerFactory" },
+           "name":"whitespace" },
         "filters":[{
-           "class":"solr.WordDelimiterFilterFactory",
+           "name":"wordDelimiter",
            "preserveOriginal":"0" }]}}
 }' http://localhost:8983/solr/gettingstarted/schema
 ----
@@ -361,11 +361,11 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "class":"solr.TextField",
      "indexAnalyzer":{
         "tokenizer":{
-           "class":"solr.PathHierarchyTokenizerFactory",
+           "name":"pathHierarchy",
            "delimiter":"/" }},
      "queryAnalyzer":{
         "tokenizer":{
-           "class":"solr.KeywordTokenizerFactory" }}}
+           "name":"keyword" }}}
 }' http://localhost:8983/solr/gettingstarted/schema
 ----
 ====
@@ -383,11 +383,11 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "class":"solr.TextField",
      "indexAnalyzer":{
         "tokenizer":{
-           "class":"solr.PathHierarchyTokenizerFactory",
+           "name":"pathHierarchy",
            "delimiter":"/" }},
      "queryAnalyzer":{
         "tokenizer":{
-           "class":"solr.KeywordTokenizerFactory" }}}
+           "name":"keyword" }}}
 }' http://localhost:8983/api/cores/gettingstarted/schema
 ----
 ====
@@ -446,7 +446,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "positionIncrementGap":"100",
      "analyzer":{
         "tokenizer":{
-           "class":"solr.StandardTokenizerFactory" }}}
+           "name":"standard" }}}
 }' http://localhost:8983/solr/gettingstarted/schema
 ----
 ====
@@ -463,7 +463,7 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "positionIncrementGap":"100",
      "analyzer":{
         "tokenizer":{
-           "class":"solr.StandardTokenizerFactory" }}}
+           "name":"standard" }}}
 }' http://localhost:8983/api/cores/gettingstarted/schema
 ----
 ====
@@ -565,13 +565,13 @@ curl -X POST -H 'Content-type:application/json' --data-binary '{
      "positionIncrementGap":"100",
      "analyzer":{
         "charFilters":[{
-           "class":"solr.PatternReplaceCharFilterFactory",
+           "name":"patternReplace",
            "replacement":"$1$1",
            "pattern":"([a-zA-Z])\\\\1+" }],
         "tokenizer":{
-           "class":"solr.WhitespaceTokenizerFactory" },
+           "name":"whitespace" },
         "filters":[{
-           "class":"solr.WordDelimiterFilterFactory",
+           "name":"wordDelimiter",
            "preserveOriginal":"0" }]}},
    "add-field" : {
       "name":"sell_by",
diff --git a/solr/solr-ref-guide/src/tokenizers.adoc b/solr/solr-ref-guide/src/tokenizers.adoc
index db32c78..c883342 100644
--- a/solr/solr-ref-guide/src/tokenizers.adoc
+++ b/solr/solr-ref-guide/src/tokenizers.adoc
@@ -20,6 +20,24 @@ Tokenizers are responsible for breaking field data into lexical units, or _token
 
 You configure the tokenizer for a text field type in `schema.xml` with a `<tokenizer>` element, as a child of `<analyzer>`:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text" class="solr.TextField">
+  <analyzer type="index">
+    <tokenizer name="standard"/>
+    <filter name="lowercase"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-tokenizer]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text" class="solr.TextField">
@@ -29,11 +47,30 @@ You configure the tokenizer for a text field type in `schema.xml` with a `<token
   </analyzer>
 </fieldType>
 ----
+====
+--
 
-The class attribute names a factory class that will instantiate a tokenizer object when needed. Tokenizer factory classes implement the `org.apache.solr.analysis.TokenizerFactory`. A TokenizerFactory's `create()` method accepts a Reader and returns a TokenStream. When Solr creates the tokenizer it passes a Reader object that provides the content of the text field.
+The `name`/`class` attribute names a factory class that will instantiate a tokenizer object when needed. Tokenizer factory classes implement `org.apache.lucene.analysis.util.TokenizerFactory`. A TokenizerFactory's `create()` method accepts a Reader and returns a TokenStream. When Solr creates the tokenizer, it passes a Reader object that provides the content of the text field.
 
 Arguments may be passed to tokenizer factories by setting attributes on the `<tokenizer>` element.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-args]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="semicolonDelimited" class="solr.TextField">
+  <analyzer type="query">
+    <tokenizer name="pattern" pattern="; "/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-tokenizer-args]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="semicolonDelimited" class="solr.TextField">
@@ -42,6 +79,8 @@ Arguments may be passed to tokenizer factories by setting attributes on the `<to
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 The following sections describe the tokenizer factory classes included in this release of Solr.
 
@@ -66,12 +105,29 @@ The Standard Tokenizer supports http://unicode.org/reports/tr29/#Word_Boundaries
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-standard]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="standard"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-standard]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.StandardTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Please, email john.doe@foo.com by 03-09, re: m37-xq."
 
@@ -95,12 +151,29 @@ The Classic Tokenizer preserves the same behavior as the Standard Tokenizer of S
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-classic]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="classic"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-classic]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.ClassicTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Please, email john.doe@foo.com by 03-09, re: m37-xq."
 
@@ -116,12 +189,29 @@ This tokenizer treats the entire text field as a single token.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-keyword]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="keyword"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-keyword]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.KeywordTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Please, email john.doe@foo.com by 03-09, re: m37-xq."
 
@@ -137,12 +227,29 @@ This tokenizer creates tokens from strings of contiguous letters, discarding all
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-letter]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="letter"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-letter]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.LetterTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "I can't."
 
@@ -158,12 +265,29 @@ Tokenizes the input stream by delimiting at non-letters and then converting all
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-lowercase]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="lowercase"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-lowercase]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.LowerCaseTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "I just \*LOVE* my iPhone!"
 
@@ -185,12 +309,29 @@ Reads the field text and generates n-gram tokens of sizes in the given range.
 
 Default behavior. Note that this tokenizer operates over the whole field. It does not break the field at whitespace. As a result, the space character is included in the encoding.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-ngram]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="nGram"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-ngram]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "hey man"
 
@@ -200,12 +341,29 @@ Default behavior. Note that this tokenizer operates over the whole field. It doe
 
 With an n-gram size range of 4 to 5:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-ngram-args]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="nGram" minGramSize="4" maxGramSize="5"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-ngram-args]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.NGramTokenizerFactory" minGramSize="4" maxGramSize="5"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "bicycle"
 
@@ -227,12 +385,29 @@ Reads the field text and generates edge n-gram tokens of sizes in the given rang
 
 Default behavior (min and max default to 1):
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-edgengram]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="edgeNGram"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-edgengram]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.EdgeNGramTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "babaloo"
 
@@ -242,12 +417,29 @@ Default behavior (min and max default to 1):
 
 Edge n-gram range of 2 to 5
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-edgengram-args]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="edgeNGram" minGramSize="2" maxGramSize="5"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-edgengram-args]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.EdgeNGramTokenizerFactory" minGramSize="2" maxGramSize="5"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "babaloo"
 
@@ -269,6 +461,22 @@ The default configuration for `solr.ICUTokenizerFactory` provides UAX#29 word br
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-icu]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <!-- no customization -->
+  <tokenizer name="icu"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-icu]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -276,7 +484,25 @@ The default configuration for `solr.ICUTokenizerFactory` provides UAX#29 word br
   <tokenizer class="solr.ICUTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-icu-rule]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="icu"
+             rulefiles="Latn:my.Latin.rules.rbbi,Cyrl:my.Cyrillic.rules.rbbi"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-icu-rule]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
@@ -284,6 +510,8 @@ The default configuration for `solr.ICUTokenizerFactory` provides UAX#29 word br
              rulefiles="Latn:my.Latin.rules.rbbi,Cyrl:my.Cyrillic.rules.rbbi"/>
 </analyzer>
 ----
+====
+--
 
 [IMPORTANT]
 ====
@@ -306,6 +534,23 @@ This tokenizer creates synonyms from file path hierarchies.
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-pathhierarchy]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<fieldType name="text_path" class="solr.TextField" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer name="pathHierarchy" delimiter="\" replace="/"/>
+  </analyzer>
+</fieldType>
+----
+====
+[example.tab-pane#byclass-tokenizer-pathhierarchy]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <fieldType name="text_path" class="solr.TextField" positionIncrementGap="100">
@@ -314,6 +559,8 @@ This tokenizer creates synonyms from file path hierarchies.
   </analyzer>
 </fieldType>
 ----
+====
+--
 
 *In:* "c:\usr\local\apache"
 
@@ -337,12 +584,29 @@ See {java-javadocs}java/util/regex/Pattern.html[the Javadocs for `java.util.rege
 
 A comma separated list. Tokens are separated by a sequence of zero or more spaces, a comma, and zero or more spaces.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-pattern]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="pattern" pattern="\s*,\s*"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-pattern]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.PatternTokenizerFactory" pattern="\s*,\s*"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "fee,fie, foe , fum, foo"
 
@@ -352,12 +616,29 @@ A comma separated list. Tokens are separated by a sequence of zero or more space
 
 Extract simple, capitalized words. A sequence of at least one capital letter followed by zero or more letters of either case is extracted as a token.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-pattern-words]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="pattern" pattern="[A-Z][A-Za-z]*" group="0"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-pattern-words]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.PatternTokenizerFactory" pattern="[A-Z][A-Za-z]*" group="0"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Hello. My name is Inigo Montoya. You killed my father. Prepare to die."
 
@@ -367,12 +648,29 @@ Extract simple, capitalized words. A sequence of at least one capital letter fol
 
 Extract part numbers which are preceded by "SKU", "Part" or "Part Number", case sensitive, with an optional semi-colon separator. Part numbers must be all numeric digits, with an optional hyphen. Regex capture groups are numbered by counting left parentheses from left to right. Group 3 is the subexpression "[0-9-]+", which matches one or more digits or hyphens.
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-pattern-sku]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="pattern" pattern="(SKU|Part(\sNumber)?):?\s(\[0-9-\]+)" group="3"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-pattern-sku]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.PatternTokenizerFactory" pattern="(SKU|Part(\sNumber)?):?\s(\[0-9-\]+)" group="3"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "SKU: 1234, Part Number 5678, Part: 126-987"
 
@@ -394,12 +692,29 @@ This tokenizer is similar to the `PatternTokenizerFactory` described above, but
 
 To match tokens delimited by simple whitespace characters:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-simplepattern]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="simplePattern" pattern="[^ \t\r\n]+"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-simplepattern]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.SimplePatternTokenizerFactory" pattern="[^ \t\r\n]+"/>
 </analyzer>
 ----
+====
+--
 
 == Simplified Regular Expression Pattern Splitting Tokenizer
 
@@ -417,12 +732,29 @@ This tokenizer is similar to the `SimplePatternTokenizerFactory` described above
 
 To match tokens delimited by simple whitespace characters:
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-simplepatternsplit]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="simplePatternSplit" pattern="[ \t\r\n]+"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-simplepatternsplit]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.SimplePatternSplitTokenizerFactory" pattern="[ \t\r\n]+"/>
 </analyzer>
 ----
+====
+--
 
 == UAX29 URL Email Tokenizer
 
@@ -448,12 +780,29 @@ The UAX29 URL Email Tokenizer supports http://unicode.org/reports/tr29/#Word_Bou
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-uax29urlemail]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="uax29URLEmail"/>
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-uax29urlemail]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
 </analyzer>
 ----
+====
+--
 
 *In:* "Visit http://accarol.com/contact.htm?from=external&a=10 or e-mail bob.cratchet@accarol.com"
 
@@ -475,12 +824,29 @@ Specifies how to define whitespace for the purpose of tokenization. Valid values
 
 *Example:*
 
+[.dynamic-tabs]
+--
+[example.tab-pane#byname-tokenizer-whitespace]
+====
+[.tab-label]*With name*
+[source,xml]
+----
+<analyzer>
+  <tokenizer name="whitespace" rule="java" />
+</analyzer>
+----
+====
+[example.tab-pane#byclass-tokenizer-whitespace]
+====
+[.tab-label]*With class name (legacy)*
 [source,xml]
 ----
 <analyzer>
   <tokenizer class="solr.WhitespaceTokenizerFactory" rule="java" />
 </analyzer>
 ----
+====
+--
 
 *In:* "To be, or what?"
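
A sketch of the same analyzer using the other value of `rule` (hedged example; `unicode` is assumed to be the alternate documented value, delimiting on Unicode's WHITESPACE property rather than `Character.isWhitespace()`):

```xml
<analyzer>
  <!-- rule="unicode" defines whitespace via the Unicode WHITESPACE property -->
  <tokenizer name="whitespace" rule="unicode"/>
</analyzer>
```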