Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/05 20:52:58 UTC

[2/2] lucene-solr:jira/solr-10290: SOLR-10290: content conversion: B's and some C's

SOLR-10290: content conversion: B's and some C's


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/bbe60af2
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/bbe60af2
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/bbe60af2

Branch: refs/heads/jira/solr-10290
Commit: bbe60af21393f6c8b59bdcae9c79a28a10545aa0
Parents: c7361af
Author: Cassandra Targett <ct...@apache.org>
Authored: Fri May 5 15:52:23 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Fri May 5 15:52:50 2017 -0500

----------------------------------------------------------------------
 ...uthentication-and-authorization-plugins.adoc |   2 +-
 .../src/basic-authentication-plugin.adoc        |  12 +-
 solr/solr-ref-guide/src/blob-store-api.adoc     |  17 +-
 solr/solr-ref-guide/src/blockjoin-faceting.adoc |  27 +-
 .../solr-ref-guide/src/charfilterfactories.adoc |  32 +--
 solr/solr-ref-guide/src/cloud-screens.adoc      |  11 +-
 .../src/collapse-and-expand-results.adoc        |  52 ++--
 .../src/collection-specific-tools.adoc          |   5 +-
 solr/solr-ref-guide/src/collections-api.adoc    | 270 +++++--------------
 .../src/collections-core-admin.adoc             |   3 -
 .../combining-distribution-and-replication.adoc |   5 +-
 .../src/command-line-utilities.adoc             |  45 ++--
 .../src/common-query-parameters.adoc            |  70 +++--
 solr/solr-ref-guide/src/defining-fields.adoc    |  10 +-
 solr/solr-ref-guide/src/docvalues.adoc          |   6 +-
 15 files changed, 197 insertions(+), 370 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc b/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
index 7afc2b7..950ae42 100644
--- a/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
+++ b/solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc
@@ -45,7 +45,7 @@ Here is a more detailed `security.json` example. In this, the Basic authenticati
 "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
-      "role":"admin"}]
+      "role":"admin"}],
    "user-role":{"solr":"admin"}
 }}
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
index 4a3ed98..592da1b 100644
--- a/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
+++ b/solr/solr-ref-guide/src/basic-authentication-plugin.adoc
@@ -80,8 +80,8 @@ The `set-user` command allows you to add users and change their passwords. For e
 
 [source,bash]
 ----
-curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'Content-type:application/json' -d '{ 
-  "set-user": {"tom" : "TomIsCool" , 
+curl --user solr:SolrRocks http://localhost:8983/solr/admin/authentication -H 'Content-type:application/json' -d '{
+  "set-user": {"tom" : "TomIsCool" ,
                "harry":"HarrysSecret"}}'
 ----
 
@@ -114,8 +114,8 @@ In SolrJ, the basic authentication credentials need to be set for each request a
 
 [source,java]
 ----
-SolrRequest req ;//create a new request object 
-req.setBasicAuthCredentials(userName, password); 
+SolrRequest req ;//create a new request object
+req.setBasicAuthCredentials(userName, password);
 solrClient.request(req);
 ----
 
@@ -124,14 +124,14 @@ Query example:
 [source,java]
 ----
 QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
-req.setBasicAuthCredentials(userName, password); 
+req.setBasicAuthCredentials(userName, password);
 QueryResponse rsp = req.process(solrClient);
 ----
 
 [[BasicAuthenticationPlugin-UsingCommandLinescriptswithBasicAuth]]
 === Using Command Line scripts with BasicAuth
 
-Add the following line to the `solr.in.sh/solr.in.cmd` file. This example tells the `bin/solr` command line to to use "basic" as the type of authentication, and to pass credentials with the user-name "solr" and password "SolrRocks":
+Add the following line to the `solr.in.sh` or `solr.in.cmd` file. This example tells the `bin/solr` command line to use "basic" as the type of authentication, and to pass credentials with the user-name "solr" and password "SolrRocks":
 
 [source,bash]
 ----

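[Editor's sketch] The `solr.in.sh` snippet itself is cut off by the diff above; based on Solr's documented startup options it typically looks like the following (the property names here are assumptions, not part of this commit):

```shell
# Tell the bin/solr command line tools to authenticate with HTTP Basic auth
# (property names assumed from Solr's startup scripts, not taken from this diff)
SOLR_AUTH_TYPE="basic"
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
```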
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/blob-store-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blob-store-api.adoc b/solr/solr-ref-guide/src/blob-store-api.adoc
index bf032cf..1326e06 100644
--- a/solr/solr-ref-guide/src/blob-store-api.adoc
+++ b/solr/solr-ref-guide/src/blob-store-api.adoc
@@ -2,7 +2,9 @@
 :page-shortname: blob-store-api
 :page-permalink: blob-store-api.html
 
-The Blob Store REST API provides REST methods to store, retrieve or list files in a Lucene index. This can be used to upload a jar file which contains standard solr components such as RequestHandlers, SearchComponents, or other custom code you have written for Solr. Schema components _do not_ yet support the Blob Store.
+The Blob Store REST API provides REST methods to store, retrieve or list files in a Lucene index.
+
+It can be used to upload a jar file which contains standard Solr components such as RequestHandlers, SearchComponents, or other custom code you have written for Solr. Schema components _do not_ yet support the Blob Store.
 
 When using the blob store, note that the API does not delete or overwrite a previous object if a new one is uploaded with the same name. It always adds a new version of the blob to the index. Deletes can be performed with standard REST delete commands.
 
@@ -23,15 +25,10 @@ You can create the `.system` collection with the <<collections-api.adoc#collecti
 
 [source,bash]
 ----
-curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2"
+curl http://localhost:8983/solr/admin/collections?action=CREATE&name=.system&replicationFactor=2
 ----
 
-[IMPORTANT]
-====
-
-The `bin/solr` script cannot be used to create the `.system ` collection at this time.
-
-====
+IMPORTANT: The `bin/solr` script cannot be used to create the `.system` collection.
 
 [[BlobStoreAPI-UploadFilestoBlobStore]]
 === Upload Files to Blob Store
@@ -124,8 +121,8 @@ curl http://localhost:8983/solr/.system/blob/{blobname}?wt=filestream > {outputf
 
 To use the blob as the class for a request handler or search component, you create a request handler in `solrconfig.xml` as usual. You will need to define the following parameters:
 
-* `class`: the fully qualified class name. For example, if you created a new request handler class called CRUDHandler, you would enter `org.apache.solr.core.CRUDHandler`.
-* `runtimeLib`: Set to true to require that this component should be loaded from the classloader that loads the runtime jars.
+`class`:: the fully qualified class name. For example, if you created a new request handler class called CRUDHandler, you would enter `org.apache.solr.core.CRUDHandler`.
+`runtimeLib`:: Set to true to indicate that this component should be loaded from the classloader that loads the runtime jars.
 
 For example, to use a blob named test, you would configure `solrconfig.xml` like this:
 

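[Editor's sketch] The `solrconfig.xml` example referenced just above is elided by the diff. A minimal sketch, reusing the hypothetical `org.apache.solr.core.CRUDHandler` class from the parameter descriptions and assuming a blob named `test` uploaded as version 1, might look like:

```xml
<!-- Hypothetical request handler whose class is loaded from the 'test' blob;
     handler name and version are assumptions for illustration only. -->
<requestHandler name="/crud" class="org.apache.solr.core.CRUDHandler"
                runtimeLib="true" version="1">
</requestHandler>
```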
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/blockjoin-faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/blockjoin-faceting.adoc b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
index 4244708..b57979c 100644
--- a/solr/solr-ref-guide/src/blockjoin-faceting.adoc
+++ b/solr/solr-ref-guide/src/blockjoin-faceting.adoc
@@ -2,13 +2,13 @@
 :page-shortname: blockjoin-faceting
 :page-permalink: blockjoin-faceting.html
 
-It's a common requirement to aggregate children facet counts by their parents, i.e., if a parent document has several children documents, all of them need to increment facet value count only once. This functionality is provided by `BlockJoinDocSetFacetComponent`, and `BlockJoinFacetComponent` just an alias for compatibility.
+BlockJoin facets allow you to aggregate children facet counts by their parents.
 
-This component is considered experimental, and must be explicitly enabled for a request handler in `solrconfig.xml`, in the same way as any other <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,search component>>.
+It is a common requirement that if a parent document has several child documents, all of them should increment a facet value count only once. This functionality is provided by `BlockJoinDocSetFacetComponent`; `BlockJoinFacetComponent` is just an alias kept for compatibility.
 
-This example shows how you could add this search components to `solrconfig.xml` and define it in request handler:
+CAUTION: This component is considered experimental, and must be explicitly enabled for a request handler in `solrconfig.xml`, in the same way as any other <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,search component>>.
 
-*solrconfig.xml*
+This example shows how you could add this search component to `solrconfig.xml` and define it in a request handler:
 
 [source,xml]
 ----
@@ -28,8 +28,7 @@ This component can be added into any search request handler. This component work
 
 Documents should be added in children-parent blocks as described in <<uploading-data-with-index-handlers.adoc#UploadingDatawithIndexHandlers-NestedChildDocuments,indexing nested child documents>>. Examples:
 
-*document sample*
-
+.Sample document
 [source,xml]
 ----
 <add>
@@ -37,19 +36,19 @@ Documents should be added in children-parent blocks as described in <<uploading-
     <field name="id">1</field>
     <field name="type_s">parent</field>
     <doc>
-      <field name="id">11</field> 
+      <field name="id">11</field>
       <field name="COLOR_s">Red</field>
       <field name="SIZE_s">XL</field>
       <field name="PRICE_i">6</field>
     </doc>
     <doc>
-      <field name="id">12</field> 
+      <field name="id">12</field>
       <field name="COLOR_s">Red</field>
       <field name="SIZE_s">XL</field>
       <field name="PRICE_i">7</field>
     </doc>
     <doc>
-      <field name="id">13</field> 
+      <field name="id">13</field>
       <field name="COLOR_s">Blue</field>
       <field name="SIZE_s">L</field>
       <field name="PRICE_i">5</field>
@@ -59,19 +58,19 @@ Documents should be added in children-parent blocks as described in <<uploading-
     <field name="id">2</field>
     <field name="type_s">parent</field>
     <doc>
-      <field name="id">21</field> 
+      <field name="id">21</field>
       <field name="COLOR_s">Blue</field>
       <field name="SIZE_s">XL</field>
       <field name="PRICE_i">6</field>
     </doc>
     <doc>
-      <field name="id">22</field> 
+      <field name="id">22</field>
       <field name="COLOR_s">Blue</field>
       <field name="SIZE_s">XL</field>
       <field name="PRICE_i">7</field>
     </doc>
     <doc>
-      <field name="id">23</field> 
+      <field name="id">23</field>
       <field name="COLOR_s">Red</field>
       <field name="SIZE_s">L</field>
       <field name="PRICE_i">5</field>
@@ -82,7 +81,7 @@ Documents should be added in children-parent blocks as described in <<uploading-
 
 Queries are constructed the same way as for a <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,Parent Block Join query>>. For example:
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/bjqfacet?q={!parent which=type_s:parent}SIZE_s:XL&child.facet.field=COLOR_s
 ----
@@ -91,7 +90,7 @@ As a result we should have facets for Red(1) and Blue(1), because matches on chi
 
 [cols=",",options="header",]
 |===
-|url part |meaning
+|URL Part |Meaning
 |`/bjqfacet` |The name of the request handler that has been defined with one of block join facet components enabled.
 |`q={!parent ...}..` |The mandatory parent query as a main query. The parent query could also be a subordinate clause in a more complex query.
 |`child.facet.field=...` |The child document field, which might be repeated many times with several fields, as necessary.

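[Editor's sketch] The `solrconfig.xml` snippet this section refers to is also elided by the diff. It would be along these lines — the handler name `/bjqfacet` is taken from the query examples above, while the component name is an assumption:

```xml
<!-- Register the experimental block join facet component and attach it
     to a request handler (component name assumed for illustration). -->
<searchComponent name="bjqFacetComponent"
                 class="org.apache.solr.search.join.BlockJoinDocSetFacetComponent"/>

<requestHandler name="/bjqfacet" class="solr.SearchHandler">
  <arr name="last-components">
    <str>bjqFacetComponent</str>
  </arr>
</requestHandler>
```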
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/charfilterfactories.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/charfilterfactories.adoc b/solr/solr-ref-guide/src/charfilterfactories.adoc
index a527cbb..8ecfe3d 100644
--- a/solr/solr-ref-guide/src/charfilterfactories.adoc
+++ b/solr/solr-ref-guide/src/charfilterfactories.adoc
@@ -2,7 +2,9 @@
 :page-shortname: charfilterfactories
 :page-permalink: charfilterfactories.html
 
-Char Filter is a component that pre-processes input characters. Char Filters can be chained like Token Filters and placed in front of a Tokenizer. Char Filters can add, change, or remove characters while preserving the original character offsets to support features like highlighting.
+CharFilter is a component that pre-processes input characters.
+
+CharFilters can be chained like Token Filters and placed in front of a Tokenizer. CharFilters can add, change, or remove characters while preserving the original character offsets to support features like highlighting.
 
 [[CharFilterFactories-solr.MappingCharFilterFactory]]
 == solr.MappingCharFilterFactory
@@ -33,7 +35,7 @@ Mapping file syntax:
 +
 [cols=",,,",options="header",]
 |===
-|Escapesequence |Resulting character (http://www.ecma-international.org/publications/standards/Ecma-048.htm[ECMA-48] alias) |Unicode character |Example mapping line
+|Escape Sequence |Resulting Character (http://www.ecma-international.org/publications/standards/Ecma-048.htm[ECMA-48] alias) |Unicode Character |Example Mapping Line
 |`\\` |`\` |U+005C |`"\\" => "/"`
 |`\"` |`"` |U+0022 |`"\"and\"" => "'and'"`
 |`\b` |backspace (BS) |U+0008 |`"\b" => " "`
@@ -48,7 +50,7 @@ Mapping file syntax:
 [[CharFilterFactories-solr.HTMLStripCharFilterFactory]]
 == solr.HTMLStripCharFilterFactory
 
-This filter creates `org.apache.solr.analysis.HTMLStripCharFilter`. This Char Filter strips HTML from the input stream and passes the result to another Char Filter or a Tokenizer.
+This filter creates `org.apache.solr.analysis.HTMLStripCharFilter`. This CharFilter strips HTML from the input stream and passes the result to another CharFilter or a Tokenizer.
 
 This filter:
 
@@ -68,12 +70,7 @@ This filter:
 * Inline tags, such as `<b>`, `<i>`, or `<span>` will be removed.
 * Uppercase character entities like `quot`, `gt`, `lt` and `amp` are recognized and handled as lowercase.
 
-[TIP]
-====
-
-The input need not be an HTML document. The filter removes only constructs that look like HTML. If the input doesn't include anything that looks like HTML, the filter won't remove any input.
-
-====
+TIP: The input need not be an HTML document. The filter removes only constructs that look like HTML. If the input doesn't include anything that looks like HTML, the filter won't remove any input.
 
 The table below presents examples of HTML stripping.
 
@@ -106,11 +103,11 @@ This filter performs pre-tokenization Unicode normalization using http://site.ic
 
 Arguments:
 
-`name`: A http://unicode.org/reports/tr15/[Unicode Normalization Form], one of `nfc`, `nfkc`, `nfkc_cf`. Default is `nfkc_cf`.
+`name`:: A http://unicode.org/reports/tr15/[Unicode Normalization Form], one of `nfc`, `nfkc`, `nfkc_cf`. Default is `nfkc_cf`.
 
-`mode`: Either `compose` or `decompose`. Default is `compose`. Use `decompose` with `name="nfc"` or `name="nfkc"` to get NFD or NFKD, respectively.
+`mode`:: Either `compose` or `decompose`. Default is `compose`. Use `decompose` with `name="nfc"` or `name="nfkc"` to get NFD or NFKD, respectively.
 
-`filter`: A http://www.icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet] pattern. Codepoints outside the set are always left unchanged. Default is `[]` (the null set, no filtering - all codepoints are subject to normalization).
+`filter`:: A http://www.icu-project.org/apiref/icu4j/com/ibm/icu/text/UnicodeSet.html[UnicodeSet] pattern. Codepoints outside the set are always left unchanged. Default is `[]` (the null set, no filtering - all codepoints are subject to normalization).
 
 Example:
 
@@ -130,9 +127,9 @@ This filter uses http://www.regular-expressions.info/reference.html[regular expr
 
 Arguments:
 
-`pattern`: the regular expression pattern to apply to the incoming text.
+`pattern`:: the regular expression pattern to apply to the incoming text.
 
-`replacement`: the text to use to replace matching patterns.
+`replacement`:: the text to use to replace matching patterns.
 
 You can configure this filter in `schema.xml` like this:
 
@@ -150,14 +147,9 @@ The table below presents examples of regex-based pattern replacement:
 
 [width="100%",cols="20%,20%,20%,20%,20%",options="header",]
 |===
-|Input |pattern |replacement |Output |Description
+|Input |Pattern |Replacement |Output |Description
 |see-ing looking |`(\w+)(ing)` |`$1` |see-ing look |Removes "ing" from the end of word.
 |see-ing looking |`(\w+)ing` |`$1` |see-ing look |Same as above. 2nd parentheses can be omitted.
 |No.1 NO. no. 543 |`[nN][oO]\.\s*(\d+)` |`#$1` |#1 NO. #543 |Replace some string literals
 |abc=1234=5678 |`(\w+)=(\d+)=(\d+)` |`$3=$1=$2` |5678=abc=1234 |Change the order of the groups.
 |===
-
-[[CharFilterFactories-RelatedTopics]]
-== Related Topics
-
-* http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#CharFilterFactories[CharFilterFactories]

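[Editor's note] The regex-replacement semantics in the table above can be checked with plain `java.util.regex` — a standalone illustration of the same pattern/replacement pairs, not the CharFilter implementation itself:

```java
import java.util.regex.Pattern;

public class PatternReplaceDemo {
    // Applies a PatternReplaceCharFilterFactory-style pattern/replacement
    // to a plain string using the JDK regex engine.
    static String replace(String input, String pattern, String replacement) {
        return Pattern.compile(pattern).matcher(input).replaceAll(replacement);
    }

    public static void main(String[] args) {
        // Rows from the table above:
        System.out.println(replace("see-ing looking", "(\\w+)(ing)", "$1"));              // see-ing look
        System.out.println(replace("No.1 NO. no. 543", "[nN][oO]\\.\\s*(\\d+)", "#$1"));  // #1 NO. #543
        System.out.println(replace("abc=1234=5678", "(\\w+)=(\\d+)=(\\d+)", "$3=$1=$2")); // 5678=abc=1234
    }
}
```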
http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/cloud-screens.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cloud-screens.adoc b/solr/solr-ref-guide/src/cloud-screens.adoc
index a74e2f2..927faed 100644
--- a/solr/solr-ref-guide/src/cloud-screens.adoc
+++ b/solr/solr-ref-guide/src/cloud-screens.adoc
@@ -2,31 +2,28 @@
 :page-shortname: cloud-screens
 :page-permalink: cloud-screens.html
 
-When running in <<solrcloud.adoc#solrcloud,SolrCloud>> mode, a "Cloud" option will appear in the Admin UI between <<logging.adoc#logging,Logging>> and <<collections-core-admin.adoc#collections-core-admin,Collections/Core Admin>> which provides status information about each collection & node in your cluster, as well as access to the low level data being stored in <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,Zookeeper>>.
+When running in <<solrcloud.adoc#solrcloud,SolrCloud>> mode, a "Cloud" option will appear in the Admin UI between <<logging.adoc#logging,Logging>> and <<collections-core-admin.adoc#collections-core-admin,Collections/Core Admin>>.
+
+This screen provides status information about each collection and node in your cluster, as well as access to the low-level data being stored in <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,Zookeeper>>.
 
 .Only Visible When using SolrCloud
 [NOTE]
 ====
-
 The "Cloud" menu option is only available on Solr instances running in <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud mode>>. Single node or master/slave replication instances of Solr will not display this option.
-
 ====
 
 Click on the Cloud option in the left-hand navigation, and a small sub-menu appears with options called "Tree", "Graph", "Graph (Radial)" and "Dump". The default view ("Graph") shows a graph of each collection, the shards that make up those collections, and the addresses of each replica for each shard.
 
-This example shows the very simple two-node cluster created using the "`bin/solr -e cloud -noprompt`" example command. In addition to the 2 shard, 2 replica "gettingstarted" collection, there is an additional "films" collection consisting of a single shard/replica:
+This example shows the very simple two-node cluster created using the `bin/solr -e cloud -noprompt` example command. In addition to the 2 shard, 2 replica "gettingstarted" collection, there is an additional "films" collection consisting of a single shard/replica:
 
 image::images/cloud-screens/cloud-graph.png[image,width=512,height=250]
 
-
 The "Graph (Radial)" option provides a different visual view of each node. Using the same example cluster, the radial graph view looks like:
 
 image::images/cloud-screens/cloud-radial.png[image,width=478,height=250]
 
-
 The "Tree" option shows a directory structure of the data in ZooKeeper, including cluster wide information regarding the `live_nodes` and `overseer` status, as well as collection specific information such as the `state.json`, current shard leaders, and configuration files in use. In this example, we see the `state.json` file definition for the "films" collection:
 
 image::images/cloud-screens/cloud-tree.png[image,width=487,height=250]
 
-
 The final option is "Dump", which returns a JSON document containing all nodes, their contents and their children (recursively). This can be used to export a snapshot of all the data that Solr has kept inside ZooKeeper and can aid in debugging SolrCloud problems.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
index a9c0ce5..8dec65d 100644
--- a/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
+++ b/solr/solr-ref-guide/src/collapse-and-expand-results.adoc
@@ -2,13 +2,13 @@
 :page-shortname: collapse-and-expand-results
 :page-permalink: collapse-and-expand-results.html
 
-The Collapsing query parser and the Expand component combine to form an approach to grouping documents for field collapsing in search results. The Collapsing query parser groups documents (collapsing the result set) according to your parameters, while the Expand component provides access to documents in the collapsed group for use in results display or other processing by a client application. Collapse & Expand can together do what the older <<result-grouping.adoc#result-grouping,Result Grouping>> (`group=true`) does for _most_ use-cases but not all. Generally, you should prefer Collapse & Expand.
+The Collapsing query parser and the Expand component combine to form an approach to grouping documents for field collapsing in search results.
+
+The Collapsing query parser groups documents (collapsing the result set) according to your parameters, while the Expand component provides access to documents in the collapsed group for use in results display or other processing by a client application. Collapse & Expand can together do what the older <<result-grouping.adoc#result-grouping,Result Grouping>> (`group=true`) does for _most_ use-cases but not all. Generally, you should prefer Collapse & Expand.
 
 [IMPORTANT]
 ====
-
 In order to use these features with SolrCloud, the documents must be located on the same shard. To ensure document co-location, you can define the `router.name` parameter as `compositeId` when creating the collection. For more information on this option, see the section <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,Document Routing>>.
-
 ====
 
 [[CollapseandExpandResults-CollapsingQueryParser]]
@@ -24,76 +24,72 @@ The CollapsingQParser accepts the following local parameters:
 |===
 |Parameter |Description |Default
 |field |The field that is being collapsed on. The field must be a single valued String, Int or Float |none
-|min | max a|
+|min \| max a|
 Selects the group head document for each group based on which document has the min or max value of the specified numeric field or <<function-queries.adoc#function-queries,function query>>.
 
 At most only one of the min, max, or sort (see below) parameters may be specified.
 
-If none are specified, the group head document of each group will be selected based on the highest scoring document in that group.
-
- |none
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
 |sort a|
 Selects the group head document for each group based on which document comes first according to the specified <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sort string>>.
 
 At most only one of the min, max, (see above) or sort parameters may be specified.
 
-If none are specified, the group head document of each group will be selected based on the highest scoring document in that group.
-
- |none
+If none are specified, the group head document of each group will be selected based on the highest scoring document in that group. |none
 |nullPolicy a|
 There are three null policies:
 
-* **ignore**: removes documents with a null value in the collapse field. This is the default.
-* **expand**: treats each document with a null value in the collapse field as a separate group.
-* **collapse**: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
+* *ignore*: removes documents with a null value in the collapse field. This is the default.
+* *expand*: treats each document with a null value in the collapse field as a separate group.
+* *collapse*: collapses all documents with a null value into a single group using either highest score, or minimum/maximum.
 
  |ignore
-|hint |Currently there is only one hint available "```top_fc```", which stands for top level FieldCache. The top_fc hint is only available when collapsing on String fields. top_fc usually provides the best query time speed but takes the longest to warm on startup or following a commit. top_fc will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting. For very high cardinality (high distinct count) fields, top_fc may not fare so well. |none
-|size |Sets the initial size of the collapse data structures when collapsing on a **numeric field only**. The data structures used for collapsing grow dynamically when collapsing on numeric fields. Setting the size above the number of results expected in the result set will eliminate the resizing cost. |100,000
+|hint |Currently there is only one hint available: `top_fc`, which stands for top level FieldCache. The `top_fc` hint is only available when collapsing on String fields. `top_fc` usually provides the best query time speed but takes the longest to warm on startup or following a commit. `top_fc` will also result in having the collapsed field cached in memory twice if it's used for faceting or sorting. For very high cardinality (high distinct count) fields, `top_fc` may not fare so well. |none
+|size |Sets the initial size of the collapse data structures when collapsing on a *numeric field only*. The data structures used for collapsing grow dynamically when collapsing on numeric fields. Setting the size above the number of results expected in the result set will eliminate the resizing cost. |100,000
 |===
 
 *Sample Syntax:*
 
 Collapse on `group_field` selecting the document in each group with the highest scoring document:
 
-[source,java]
+[source]
 ----
 fq={!collapse field=group_field}
 ----
 
 Collapse on `group_field` selecting the document in each group with the minimum value of `numeric_field`:
 
-[source,java]
+[source]
 ----
-fq={!collapse field=group_field min=numeric_field} 
+fq={!collapse field=group_field min=numeric_field}
 ----
 
 Collapse on `group_field` selecting the document in each group with the maximum value of `numeric_field`:
 
-[source,java]
+[source]
 ----
-fq={!collapse field=group_field max=numeric_field} 
+fq={!collapse field=group_field max=numeric_field}
 ----
 
 Collapse on `group_field` selecting the document in each group with the maximum value of a function. Note that the *cscore()* function can be used with the min/max options to use the score of the current document being collapsed.
 
-[source,java]
+[source]
 ----
-fq={!collapse field=group_field max=sum(cscore(),numeric_field)} 
+fq={!collapse field=group_field max=sum(cscore(),numeric_field)}
 ----
 
 Collapse on `group_field` with a null policy so that all docs that do not have a value in the `group_field` will be treated as a single group. For each group, the selected document will be based first on a `numeric_field`, but ties will be broken by score:
 
-[source,java]
+[source]
 ----
-fq={!collapse field=group_field nullPolicy=collapse sort='numeric_field asc, score desc'} 
+fq={!collapse field=group_field nullPolicy=collapse sort='numeric_field asc, score desc'}
 ----
 
 Collapse on `group_field` with a hint to use the top level field cache:
 
-[source,java]
+[source]
 ----
-fq={!collapse field=group_field hint=top_fc} 
+fq={!collapse field=group_field hint=top_fc}
 ----
 
 The CollapsingQParserPlugin fully supports the QueryElevationComponent.
@@ -105,7 +101,7 @@ The ExpandComponent can be used to expand the groups that were collapsed by the
 
 Example usage with the CollapsingQParserPlugin:
 
-[source,java]
+[source]
 ----
 q=foo&fq={!collapse field=ISBN}
 ----
@@ -114,7 +110,7 @@ In the query above, the CollapsingQParserPlugin will collapse the search results
 
 The ExpandComponent can now be used to expand the results so you can see the documents grouped by ISBN. For example:
 
-[source,java]
+[source]
 ----
 q=foo&fq={!collapse field=ISBN}&expand=true
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/collection-specific-tools.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collection-specific-tools.adoc b/solr/solr-ref-guide/src/collection-specific-tools.adoc
index ef847ef..47188e5 100644
--- a/solr/solr-ref-guide/src/collection-specific-tools.adoc
+++ b/solr/solr-ref-guide/src/collection-specific-tools.adoc
@@ -5,21 +5,18 @@
 
 In the left-hand navigation bar, you will see a pull-down menu titled "Collection Selector" that can be used to access collection specific administration screens.
 
-.Only Visible When using SolrCloud
+.Only Visible When Using SolrCloud
 [NOTE]
 ====
-
 The "Collection Selector" pull-down menu is only available on Solr instances running in <<solrcloud.adoc#solrcloud,SolrCloud mode>>.
 
 Single node or master/slave replication instances of Solr will not display this menu, instead the Collection specific UI pages described in this section will be available in the <<core-specific-tools.adoc#core-specific-tools,Core Selector pull-down menu>>.
-
 ====
 
 Clicking on the Collection Selector pull-down menu will show a list of the collections in your Solr cluster, with a search box that can be used to find a specific collection by name. When you select a collection from the pull-down, the main display of the page will display some basic metadata about the collection, and a secondary menu will appear in the left nav with links to additional collection specific administration screens.
 
 image::images/collection-specific-tools/collection_dashboard.png[image,width=482,height=250]
 
-
 The collection-specific UI screens are listed below, with a link to the section of this guide to find out more:
 
 * <<analysis-screen.adoc#analysis-screen,Analysis>> - lets you analyze the data found in specific fields.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/collections-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collections-api.adoc b/solr/solr-ref-guide/src/collections-api.adoc
index ea04d1e..e5917b1 100644
--- a/solr/solr-ref-guide/src/collections-api.adoc
+++ b/solr/solr-ref-guide/src/collections-api.adoc
@@ -4,14 +4,10 @@
 
 The Collections API enables you to create, remove, or reload collections; in the context of SolrCloud, you can also use it to create collections with a specific number of shards and replicas.
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-CREATE:CreateaCollection
-
-[[CollectionsAPI-CREATE_CreateaCollection]]
-
 [[CollectionsAPI-create]]
 == CREATE: Create a Collection
 
-`/admin/collections?action=CREATE&name=__name__&numShards=__number__&replicationFactor=__number__&maxShardsPerNode=__number__&createNodeSet=__nodelist__&collection.configName=__configname__`
+`/admin/collections?action=CREATE&name=_name_&numShards=_number_&replicationFactor=_number_&maxShardsPerNode=_number_&createNodeSet=_nodelist_&collection.configName=_configname_`
 
 [[CollectionsAPI-Input]]
 === Input
@@ -37,7 +33,7 @@ Ignored if createNodeSet is not also specified.
 
 |collection.configName |string |No |empty |Defines the name of the configurations (which must already be stored in ZooKeeper) to use for this collection. If not provided, Solr will default to the collection name as the configuration name.
 |router.field |string |No |empty |If this field is specified, the router will look at the value of the field in an input document to compute the hash and identify a shard instead of looking at the `uniqueKey` field. If the field specified is null in the document, the document will be rejected. Please note that <<realtime-get.adoc#realtime-get,RealTime Get>> or retrieval by id would also require the parameter `_route_` (or `shard.keys`) to avoid a distributed search.
-|property.__name__=__value__ |string |No | |Set core property _name_ to __value__. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
+|property._name_=_value_ |string |No | |Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
 |autoAddReplicas |boolean |No |false |When set to true, enables auto addition of replicas on shared file systems. See the section <<running-solr-on-hdfs.adoc#RunningSolronHDFS-autoAddReplicasSettings,autoAddReplicas Settings>> for more details on settings and overrides.
 |async |string |No | |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>.
 |rule |string |No | |Replica placement rules. See the section <<rule-based-replica-placement.adoc#rule-based-replica-placement,Rule-based Replica Placement>> for details.
@@ -54,7 +50,7 @@ The response will include the status of the request and the new core names. If t
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&numShards=2&replicationFactor=1
 ----
@@ -87,14 +83,10 @@ http://localhost:8983/solr/admin/collections?action=CREATE&name=newCollection&nu
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-MODIFYCOLLECTION:ModifyAttributesofaCollection
-
-[[CollectionsAPI-MODIFYCOLLECTION_ModifyAttributesofaCollection]]
-
 [[CollectionsAPI-modifycollection]]
 == MODIFYCOLLECTION: Modify Attributes of a Collection
 
-`/admin/collections?action=MODIFYCOLLECTION&collection=<collection-name>&<attribute-name>=` `__<attribute-value>&<another-attribute-name>=<another-value>__`
+`/admin/collections?action=MODIFYCOLLECTION&collection=<collection-name>&<attribute-name>=` `_<attribute-value>&<another-attribute-name>=<another-value>_`
 
 It's possible to edit multiple attributes at a time. Changing these values only updates the z-node on ZooKeeper; they do not change the topology of the collection. For instance, increasing `replicationFactor` will _not_ automatically add more replicas to the collection but _will_ allow more ADDREPLICA commands to succeed.
 
@@ -122,14 +114,10 @@ See the <<CollectionsAPI-api1,CREATE>> section above for details on these attrib
 
 |===
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-RELOAD:ReloadaCollection
-
-[[CollectionsAPI-RELOAD_ReloadaCollection]]
-
 [[CollectionsAPI-reload]]
 == RELOAD: Reload a Collection
 
-`/admin/collections?action=RELOAD&name=__name__`
+`/admin/collections?action=RELOAD&name=_name_`
 
 The RELOAD action is used when you have changed a configuration in ZooKeeper.
 
@@ -155,7 +143,7 @@ The response will include the status of the request and the cores that were relo
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=RELOAD&name=newCollection
 ----
@@ -186,24 +174,20 @@ http://localhost:8983/solr/admin/collections?action=RELOAD&name=newCollection
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-SPLITSHARD:SplitaShard
-
-[[CollectionsAPI-SPLITSHARD_SplitaShard]]
-
 [[CollectionsAPI-splitshard]]
 == SPLITSHARD: Split a Shard
 
-`/admin/collections?action=SPLITSHARD&collection=__name__&shard=__shardID__`
+`/admin/collections?action=SPLITSHARD&collection=_name_&shard=_shardID_`
 
 Splitting a shard will take an existing shard and break it into two pieces which are written to disk as two (new) shards. The original shard will continue to contain the same data as-is but it will start re-routing requests to the new shards. The new shards will have as many replicas as the original shard. A soft commit is automatically issued after splitting a shard so that documents are made visible on sub-shards. An explicit commit (hard or soft) is not necessary after a split operation because the index is automatically persisted to disk during the split operation.
 
-This command allows for seamless splitting and requires no downtime. A shard being split will continue to accept query and indexing requests and will automatically start routing them to the new shards once this operation is complete. This command can only be used for SolrCloud collections created with "numShards" parameter, meaning collections which rely on Solr's hash-based routing mechanism.
+This command allows for seamless splitting and requires no downtime. A shard being split will continue to accept query and indexing requests and will automatically start routing them to the new shards once this operation is complete. This command can only be used for SolrCloud collections created with the `numShards` parameter, meaning collections which rely on Solr's hash-based routing mechanism.
 
 The split is performed by dividing the original shard's hash range into two equal partitions and dividing up the documents in the original shard according to the new sub-ranges.
 
-One can also specify an optional 'ranges' parameter to divide the original shard's hash range into arbitrary hash range intervals specified in hexadecimal. For example, if the original hash range is 0-1500 then adding the parameter: ranges=0-1f4,1f5-3e8,3e9-5dc will divide the original shard into three shards with hash range 0-500, 501-1000 and 1001-1500 respectively.
+One can also specify an optional `ranges` parameter to divide the original shard's hash range into arbitrary hash range intervals specified in hexadecimal. For example, if the original hash range is 0-1500 then adding the parameter `ranges=0-1f4,1f5-3e8,3e9-5dc` will divide the original shard into three shards with hash ranges 0-500, 501-1000 and 1001-1500 respectively.
 
-Another optional parameter 'split.key' can be used to split a shard using a route key such that all documents of the specified route key end up in a single dedicated sub-shard. Providing the 'shard' parameter is not required in this case because the route key is enough to figure out the right shard. A route key which spans more than one shard is not supported. For example, suppose split.key=A! hashes to the range 12-15 and belongs to shard 'shard1' with range 0-20 then splitting by this route key would yield three sub-shards with ranges 0-11, 12-15 and 16-20. Note that the sub-shard with the hash range of the route key may also contain documents for other route keys whose hash ranges overlap.
+Another optional parameter, `split.key`, can be used to split a shard using a route key such that all documents of the specified route key end up in a single dedicated sub-shard. Providing the `shard` parameter is not required in this case because the route key is enough to figure out the right shard. A route key which spans more than one shard is not supported. For example, suppose `split.key=A!` hashes to the range 12-15 and belongs to shard `shard1` with range 0-20; then splitting by this route key would yield three sub-shards with ranges 0-11, 12-15 and 16-20. Note that the sub-shard with the hash range of the route key may also contain documents for other route keys whose hash ranges overlap.
 
 Shard splitting can be a long running process. In order to avoid timeouts, you should run this as an <<CollectionsAPI-AsynchronousCalls,asynchronous call>>.
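As a quick sanity check on the hexadecimal arithmetic above, the following Python sketch (illustrative only, not a Solr API) decodes a `ranges` value into the decimal intervals it covers, using the example from this section:

```python
# Decode a SPLITSHARD-style "ranges" value (hex intervals) into decimal tuples.
def parse_ranges(ranges: str):
    out = []
    for part in ranges.split(","):
        lo, hi = part.split("-")
        out.append((int(lo, 16), int(hi, 16)))  # hex -> int
    return out

print(parse_ranges("0-1f4,1f5-3e8,3e9-5dc"))
# [(0, 500), (501, 1000), (1001, 1500)]
```

This confirms that `0-1f4,1f5-3e8,3e9-5dc` corresponds to the decimal ranges 0-500, 501-1000, and 1001-1500 described above.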
 
@@ -217,9 +201,9 @@ Shard splitting can be a long running process. In order to avoid timeouts, you s
 |Key |Type |Required |Description
 |collection |string |Yes |The name of the collection that includes the shard to be split.
 |shard |string |Yes |The name of the shard to be split.
-|ranges |string |No |>A comma-separated list of hash ranges in hexadecimal, such as `ranges=0-1f4,1f5-3e8,3e9-5dc`.
-|>split.key |string |No |The key to use for splitting the index.
-|property.__name__=__value__ |string |No |Set core property _name_ to __value__. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
+|ranges |string |No |A comma-separated list of hash ranges in hexadecimal, such as `ranges=0-1f4,1f5-3e8,3e9-5dc`.
+|split.key |string |No |The key to use for splitting the index.
+|property._name_=_value_ |string |No |Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
 |async |string |No |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>
 |===
 
@@ -235,7 +219,7 @@ The output will include the status of the request and the new shard names, which
 
 Split shard1 of the "anotherCollection" collection.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anotherCollection&shard=shard1
 ----
@@ -302,16 +286,12 @@ http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=anothe
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-CREATESHARD:CreateaShard
-
-[[CollectionsAPI-CREATESHARD_CreateaShard]]
-
 [[CollectionsAPI-createshard]]
 == CREATESHARD: Create a Shard
 
 Shards can only be created with this API for collections that use the 'implicit' router. Use SPLITSHARD for collections using the 'compositeId' router. A new shard with a name can be created for an existing 'implicit' collection.
 
-`/admin/collections?action=CREATESHARD&shard=__shardName__&collection=__name__`
+`/admin/collections?action=CREATESHARD&shard=_shardName_&collection=_name_`
 
 [[CollectionsAPI-Input.3]]
 === Input
@@ -324,7 +304,7 @@ Shards can only created with this API for collections that use the 'implicit' ro
 |collection |string |Yes |The name of the collection that includes the shard that will be split.
 |shard |string |Yes |The name of the shard to be created.
 |createNodeSet |string |No |Allows defining the nodes to spread the new collection across. If not provided, the CREATE operation will create the shard-replica spread across all live Solr nodes. The format is a comma-separated list of node_names, such as `localhost:8983_solr,localhost:8984_solr,localhost:8985_solr`.
-|property.__name__=__value__ |string |No |Set core property _name_ to __value__. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
+|property._name_=_value_ |string |No |Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
 |async |string |No |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>.
 |===
 
@@ -340,7 +320,7 @@ The output will include the status of the request. If the status is anything oth
 
 Create 'shard-z' for the "anImplicitCollection" collection.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=CREATESHARD&collection=anImplicitCollection&shard=shard-z
 ----
@@ -357,16 +337,12 @@ http://localhost:8983/solr/admin/collections?action=CREATESHARD&collection=anImp
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETESHARD:DeleteaShard
-
-[[CollectionsAPI-DELETESHARD_DeleteaShard]]
-
 [[CollectionsAPI-deleteshard]]
 == DELETESHARD: Delete a Shard
 
 Deleting a shard will unload all replicas of the shard, remove them from `clusterstate.json`, and (by default) delete the instanceDir and dataDir for each replica. It will only remove shards that are inactive, or which have no range given for custom sharding.
 
-`/admin/collections?action=DELETESHARD&shard=__shardID__&collection=__name__`
+`/admin/collections?action=DELETESHARD&shard=_shardID_&collection=_name_`
 
 [[CollectionsAPI-Input.4]]
 === Input
@@ -396,7 +372,7 @@ The output will include the status of the request. If the status is anything oth
 
 Delete 'shard1' of the "anotherCollection" collection.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=anotherCollection&shard=shard1
 ----
@@ -421,16 +397,12 @@ http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=anoth
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-CREATEALIAS:CreateorModifyanAliasforaCollection
-
-[[CollectionsAPI-CREATEALIAS_CreateorModifyanAliasforaCollection]]
-
 [[CollectionsAPI-createalias]]
 == CREATEALIAS: Create or Modify an Alias for a Collection
 
 The `CREATEALIAS` action will create a new alias pointing to one or more collections. If an alias by the same name already exists, this action will replace the existing alias, effectively acting like an atomic "MOVE" command.
 
-`/admin/collections?action=CREATEALIAS&name=__name__&collections=__collectionlist__`
+`/admin/collections?action=CREATEALIAS&name=_name_&collections=_collectionlist_`
 
 [[CollectionsAPI-Input.5]]
 === Input
@@ -457,7 +429,7 @@ The output will simply be a responseHeader with details of the time it took to p
 
 Create an alias named "testalias" and link it to the collections named "anotherCollection" and "testCollection".
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&collections=anotherCollection,testCollection
 ----
@@ -474,14 +446,10 @@ http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=testalias&c
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETEALIAS:DeleteaCollectionAlias
-
-[[CollectionsAPI-DELETEALIAS_DeleteaCollectionAlias]]
-
 [[CollectionsAPI-deletealias]]
 == DELETEALIAS: Delete a Collection Alias
 
-`/admin/collections?action=DELETEALIAS&name=__name__`
+`/admin/collections?action=DELETEALIAS&name=_name_`
 
 [[CollectionsAPI-Input.6]]
 === Input
@@ -507,7 +475,7 @@ The output will simply be a responseHeader with details of the time it took to p
 
 Remove the alias named "testalias".
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETEALIAS&name=testalias
 ----
@@ -524,14 +492,10 @@ http://localhost:8983/solr/admin/collections?action=DELETEALIAS&name=testalias
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETE:DeleteaCollection
-
-[[CollectionsAPI-DELETE_DeleteaCollection]]
-
 [[CollectionsAPI-delete]]
 == DELETE: Delete a Collection
 
-`/admin/collections?action=DELETE&name=__collection__`
+`/admin/collections?action=DELETE&name=_collection_`
 
 [[CollectionsAPI-Input.7]]
 === Input
@@ -557,7 +521,7 @@ The response will include the status of the request and the cores that were dele
 
 Delete the collection named "newCollection".
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETE&name=newCollection
 ----
@@ -588,16 +552,12 @@ http://localhost:8983/solr/admin/collections?action=DELETE&name=newCollection
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETEREPLICA:DeleteaReplica
-
-[[CollectionsAPI-DELETEREPLICA_DeleteaReplica]]
-
 [[CollectionsAPI-deletereplica]]
 == DELETEREPLICA: Delete a Replica
 
 Delete a named replica from the specified collection and shard. If the corresponding core is up and running, the core is unloaded, the entry is removed from the clusterstate, and (by default) the instanceDir and dataDir are deleted. If the node/core is down, the entry is taken off the clusterstate and if the core comes up later it is automatically unregistered.
 
-`/admin/collections?action=DELETEREPLICA&collection=__collection__&shard=__shard__&replica=__replica__`
+`/admin/collections?action=DELETEREPLICA&collection=_collection_&shard=_shard_&replica=_replica_`
 
 [[CollectionsAPI-Input.8]]
 === Input
@@ -623,7 +583,7 @@ Delete a named replica from the specified collection and shard. If the correspon
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test2&shard=shard2&replica=core_node3
 ----
@@ -640,16 +600,12 @@ http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=tes
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-ADDREPLICA:AddReplica
-
-[[CollectionsAPI-ADDREPLICA_AddReplica]]
-
 [[CollectionsAPI-addreplica]]
 == ADDREPLICA: Add Replica
 
 Add a replica to a shard in a collection. The node name can be specified if the replica is to be created on a specific node.
 
-`/admin/collections?action=ADDREPLICA&collection=__collection__&shard=__shard__&node=__nodeName__`
+`/admin/collections?action=ADDREPLICA&collection=_collection_&shard=_shard_&node=_nodeName_`
 
 [[CollectionsAPI-Input.9]]
 === Input
@@ -675,7 +631,7 @@ Ignored if the shard param is also specified.
 |node |string |No |The name of the node where the replica should be created
 |instanceDir |string |No |The instanceDir for the core that will be created
 |dataDir |string |No |The directory in which the core should be created
-|property.__name__=__value__ |string |No |Set core property _name_ to __value__. See <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>.
+|property._name_=_value_ |string |No |Set core property _name_ to _value_. See <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>.
 |async |string |No |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>
 |===
 
@@ -684,7 +640,7 @@ Ignored if the shard param is also specified.
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=test2&shard=shard2&node=192.167.1.2:8983_solr
 ----
@@ -710,16 +666,12 @@ http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=test2&
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-CLUSTERPROP:ClusterProperties
-
-[[CollectionsAPI-CLUSTERPROP_ClusterProperties]]
-
 [[CollectionsAPI-clusterprop]]
 == CLUSTERPROP: Cluster Properties
 
 Add, edit or delete a cluster-wide property.
 
-`/admin/collections?action=CLUSTERPROP&name=__propertyName__&val=__propertyValue__`
+`/admin/collections?action=CLUSTERPROP&name=_propertyName_&val=_propertyValue_`
 
 [[CollectionsAPI-Input.10]]
 === Input
@@ -743,7 +695,7 @@ The response will include the status of the request and the properties that were
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
 ----
@@ -760,18 +712,14 @@ http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&v
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-MIGRATE:MigrateDocumentstoAnotherCollection
-
-[[CollectionsAPI-MIGRATE_MigrateDocumentstoAnotherCollection]]
-
 [[CollectionsAPI-migrate]]
 == MIGRATE: Migrate Documents to Another Collection
 
-`/admin/collections?action=MIGRATE&collection=__name__&split.key=__key1!__&target.collection=__target_collection__&forward.timeout=60`
+`/admin/collections?action=MIGRATE&collection=_name_&split.key=_key1!_&target.collection=_target_collection_&forward.timeout=60`
 
 The MIGRATE command is used to migrate all documents having the given routing key to another collection. The source collection will continue to have the same data as-is but it will start re-routing write requests to the target collection for the number of seconds specified by the `forward.timeout` parameter. It is the responsibility of the user to switch to the target collection for reads and writes after the MIGRATE command completes.
 
-The routing key specified by the ‘split.key’ parameter may span multiple shards on both the source and the target collections. The migration is performed shard-by-shard in a single thread. One or more temporary collections may be created by this command during the ‘migrate’ process but they are cleaned up at the end automatically.
+The routing key specified by the `split.key` parameter may span multiple shards on both the source and the target collections. The migration is performed shard-by-shard in a single thread. One or more temporary collections may be created by this command during the migration process but they are cleaned up at the end automatically.
 
 This is a long running operation and therefore using the `async` parameter is highly recommended. If the async parameter is not specified then the operation is synchronous by default and keeping a large read timeout on the invocation is advised. Even with a large read timeout, the request may still timeout due to inherent limitations of the Collection APIs but that doesn’t necessarily mean that the operation has failed. Users should check logs, cluster state, source and target collections before invoking the operation again.
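One hypothetical client-side pattern for the asynchronous approach recommended above is sketched below in Python. The host, collection names, and request ID are placeholders; the point is that the client picks the `async` request ID and later polls REQUESTSTATUS with the same ID:

```python
# Build an async MIGRATE call and the matching REQUESTSTATUS poll URL.
from urllib.parse import urlencode

base = "http://localhost:8983/solr/admin/collections"

migrate_url = base + "?" + urlencode({
    "action": "MIGRATE",
    "collection": "test1",
    "split.key": "a!",
    "target.collection": "test2",
    "async": "migrate-1000",   # client-chosen request ID, polled below
})

# Poll for completion with REQUESTSTATUS using the same request ID.
status_url = base + "?" + urlencode({
    "action": "REQUESTSTATUS",
    "requestid": "migrate-1000",
})
```

Because the MIGRATE call returns immediately when `async` is set, the client avoids the read-timeout problem described above and can retry the status poll until the state reports completion or failure.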
 
@@ -791,7 +739,7 @@ Please note that the migrate API does not perform any de-duplication on the docu
 |target.collection |string |Yes |The name of the target collection to which documents will be migrated.
 |split.key |string |Yes |The routing key prefix. For example, if uniqueKey is a!123, then you would use `split.key=a!`.
 |forward.timeout |int |No |The timeout, in seconds, until which write requests made to the source collection for the given `split.key` will be forwarded to the target shard. The default is 60 seconds.
-|property.__name__=__value__ |string |No |Set core property _name_ to __value__. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
+|property._name_=_value_ |string |No |Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
 |async |string |No |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>.
 |===
 
@@ -805,7 +753,7 @@ The response will include the status of the request.
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=MIGRATE&collection=test1&split.key=a!&target.collection=test2
 ----
@@ -956,14 +904,10 @@ http://localhost:8983/solr/admin/collections?action=MIGRATE&collection=test1&spl
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-ADDROLE:AddaRole
-
-[[CollectionsAPI-ADDROLE_AddaRole]]
-
 [[CollectionsAPI-addrole]]
 == ADDROLE: Add a Role
 
-`/admin/collections?action=ADDROLE&role=__roleName__&node=__nodeName__`
+`/admin/collections?action=ADDROLE&role=_roleName_&node=_nodeName_`
 
 Assign a role to a given node in the cluster. The only supported role as of 4.7 is 'overseer'. Use this API to dedicate a particular node as Overseer. Invoke it multiple times to add more nodes. This is useful in large clusters where an Overseer is likely to get overloaded. If available, one among the list of nodes which are assigned the 'overseer' role would become the overseer. The system would assign the role to any other node if none of the designated nodes are up and running.
 
@@ -975,7 +919,7 @@ Assign a role to a given node in the cluster. The only supported role as of 4.7
 [width="100%",cols="25%,25%,25%,25%",options="header",]
 |===
 |Key |Type |Required |Description
-|role |string |Yes |The name of the role. The only supported role as of now is __overseer__.
+|role |string |Yes |The name of the role. The only supported role as of now is _overseer_.
 |node |string |Yes |The name of the node. It is possible to assign a role even before that node is started.
 |===
 
@@ -989,7 +933,7 @@ The response will include the status of the request and the properties that were
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=192.167.1.2:8983_solr
 ----
@@ -1006,16 +950,12 @@ http://localhost:8983/solr/admin/collections?action=ADDROLE&role=overseer&node=1
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-REMOVEROLE:RemoveRole
-
-[[CollectionsAPI-REMOVEROLE_RemoveRole]]
-
 [[CollectionsAPI-removerole]]
 == REMOVEROLE: Remove Role
 
 Remove an assigned role. This API is used to undo the roles assigned using the ADDROLE operation.
 
-`/admin/collections?action=REMOVEROLE&role=__roleName__&node=__nodeName__`
+`/admin/collections?action=REMOVEROLE&role=_roleName_&node=_nodeName_`
 
 [[CollectionsAPI-Input.13]]
 === Input
@@ -1025,7 +965,7 @@ Remove an assigned role. This API is used to undo the roles assigned using ADDRO
 [width="100%",cols="25%,25%,25%,25%",options="header",]
 |===
 |Key |Type |Required |Description
-|role |string |Yes |The name of the role. The only supported role as of now is __overseer__.
+|role |string |Yes |The name of the role. The only supported role as of now is _overseer_.
 |node |string |Yes |The name of the node.
 |===
 
@@ -1039,7 +979,7 @@ The response will include the status of the request and the properties that were
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=REMOVEROLE&role=overseer&node=192.167.1.2:8983_solr
 ----
@@ -1056,10 +996,6 @@ http://localhost:8983/solr/admin/collections?action=REMOVEROLE&role=overseer&nod
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-OVERSEERSTATUS:OverseerStatusandStatistics
-
-[[CollectionsAPI-OVERSEERSTATUS_OverseerStatusandStatistics]]
-
 [[CollectionsAPI-overseerstatus]]
 == OVERSEERSTATUS: Overseer Status and Statistics
 
@@ -1101,7 +1037,7 @@ http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS&wt=json
       "99thPcRequestTime":0.519016,
       "999thPcRequestTime":0.519016},
     "removeshard",{
-      ...
+      "..."
   }],
   "collection_operations":[
     "splitshard",{
@@ -1125,19 +1061,16 @@ http://localhost:8983/solr/admin/collections?action=OVERSEERSTATUS&wt=json
       "75thPcRequestTime":5904.384052,
       "95thPcRequestTime":5904.384052,
       "99thPcRequestTime":5904.384052,
-      "999thPcRequestTime":5904.384052}, 
-    ...
+      "999thPcRequestTime":5904.384052},
+    "..."
   ],
   "overseer_queue":[
-    ...
+    "..."
   ],
-  ...
+  "..."
+ }
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-CLUSTERSTATUS:ClusterStatus
-
-[[CollectionsAPI-CLUSTERSTATUS_ClusterStatus]]
-
 [[CollectionsAPI-clusterstatus]]
 == CLUSTERSTATUS: Cluster Status
 
@@ -1168,7 +1101,7 @@ The response will include the status of the request and the status of the cluste
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=clusterstatus&wt=json
 ----
@@ -1224,7 +1157,7 @@ http://localhost:8983/solr/admin/collections?action=clusterstatus&wt=json
         "aliases":["both_collections"]
       },
       "collection2":{
-        ...
+        "..."
       }
     },
     "aliases":{ "both_collections":"collection1,collection2" },
@@ -1242,16 +1175,12 @@ http://localhost:8983/solr/admin/collections?action=clusterstatus&wt=json
 }
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-REQUESTSTATUS:RequestStatusofanAsyncCall
-
-[[CollectionsAPI-REQUESTSTATUS_RequestStatusofanAsyncCall]]
-
 [[CollectionsAPI-requeststatus]]
 == REQUESTSTATUS: Request Status of an Async Call
 
 Request the status and response of an already submitted <<CollectionsAPI-AsynchronousCalls,Asynchronous Collection API>> (below) call. This call is also used to clear up the stored statuses.
 
-`/admin/collections?action=REQUESTSTATUS&requestid=__request-id__`
+`/admin/collections?action=REQUESTSTATUS&requestid=_request-id_`
 
 [[CollectionsAPI-Input.15]]
 === Input
@@ -1269,7 +1198,7 @@ Request the status and response of an already submitted <<CollectionsAPI-Asynchr
 
 *Input: Valid Request Status*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000
 ----
@@ -1292,7 +1221,7 @@ http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000
 
 *Input: Invalid RequestId*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1004
 ----
@@ -1313,16 +1242,12 @@ http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1004
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETESTATUS:DeleteStatus
-
-[[CollectionsAPI-DELETESTATUS_DeleteStatus]]
-
 [[CollectionsAPI-deletestatus]]
 == DELETESTATUS: Delete Status
 
 Delete the stored response of an already failed or completed <<CollectionsAPI-AsynchronousCalls,Asynchronous Collection API>> call.
 
-`/admin/collections?action=DELETESTATUS&requestid=__request-id__`
+`/admin/collections?action=DELETESTATUS&requestid=_request-id_`
 
 [[CollectionsAPI-Input.16]]
 === Input
@@ -1341,7 +1266,7 @@ Delete the stored response of an already failed or completed <<CollectionsAPI-As
 
 *Input: Valid Request Status*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=foo
 ----
@@ -1379,9 +1304,9 @@ http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=bar
 </response>
 ----
 
-*Input: Clearing up all the stored statuses*
+*Input: Clear all the stored statuses*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETESTATUS&flush=true
 ----
@@ -1399,10 +1324,6 @@ http://localhost:8983/solr/admin/collections?action=DELETESTATUS&flush=true
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-LIST:ListCollections
-
-[[CollectionsAPI-LIST_ListCollections]]
-
 [[CollectionsAPI-list]]
 == LIST: List Collections
 
@@ -1415,7 +1336,7 @@ Fetch the names of the collections in the cluster.
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=LIST&wt=json
 ----
@@ -1433,10 +1354,6 @@ http://localhost:8983/solr/admin/collections?action=LIST&wt=json
     "example2"]}
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-ADDREPLICAPROP:AddReplicaProperty
-
-[[CollectionsAPI-ADDREPLICAPROP_AddReplicaProperty]]
-
 [[CollectionsAPI-addreplicaprop]]
 == ADDREPLICAPROP: Add Replica Property
 
@@ -1457,7 +1374,7 @@ Assign an arbitrary property to a particular replica and give it the value speci
 |collection |string |Yes |The name of the collection this replica belongs to.
 |shard |string |Yes |The name of the shard the replica belongs to.
 |replica |string |Yes |The replica, e.g. core_node1.
-|property (1) |string |Yes a|
+|property |string |Yes a|
 The property to add. Note: this will have the literal 'property.' prepended to distinguish it from system-maintained properties. So these two forms are equivalent:
 
 `property=special`
@@ -1466,12 +1383,11 @@ and
 
 `property=property.special`
 
+There is one pre-defined property, "preferredLeader", for which shardUnique is forced to 'true' and an error is returned if shardUnique is explicitly set to 'false'. PreferredLeader is a boolean property; any value assigned that is not equal (case insensitive) to 'true' will be interpreted as 'false' for preferredLeader.
 |property.value |string |Yes |The value to assign to the property.
-|shardUnique (1) |Boolean |No |default: false. If true, then setting this property in one replica will remove the property from all other replicas in that shard.
+|shardUnique |Boolean |No |Default: false. If true, then setting this property in one replica will remove the property from all other replicas in that shard.
 |===
 
-\(1) There is one pre-defined property "preferredLeader" for which shardUnique is forced to 'true' and an error returned if shardUnique is explicitly set to 'false'. PreferredLeader is a boolean property, any value assigned that is not equal (case insensitive) to 'true' will be interpreted as 'false' for preferredLeader.
-
 [[CollectionsAPI-Output.13]]
 === Output
 
@@ -1484,7 +1400,7 @@ The response will include the status of the request. If the status is anything o
 
 This command would set the preferredLeader (`property.preferredLeader`) to true on core_node1, and remove that property from any other replica in the shard.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=preferredLeader&property.value=true
 ----
@@ -1505,7 +1421,7 @@ http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&
 
 This pair of commands will set the "testprop" (`property.testprop`) to 'value1' and 'value2' respectively for two nodes in the same shard.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=testprop&property.value=value1
 
@@ -1516,23 +1432,19 @@ http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&
 
 This pair of commands would result in core_node_3 having the testprop (`property.testprop`) value set because the second command specifies `shardUnique=true`, which would cause the property to be removed from core_node_1.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=testprop&property.value=value1
 
 http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&shard=shard1&collection=collection1&replica=core_node3&property=testprop&property.value=value2&shardUnique=true
 ----
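After setting a replica property, you can confirm where it landed with the CLUSTERSTATUS action. A sketch, assuming a default local Solr and the collection and property names used above (the grep pattern reflects that replica properties are stored with the `property.` prefix):

```shell
SOLR=http://localhost:8983/solr

# Inspect the cluster state for collection1; the stored property should
# appear on exactly one replica per shard when shardUnique=true was used.
URL="$SOLR/admin/collections?action=CLUSTERSTATUS&collection=collection1&wt=json"
echo "$URL"
# curl -s "$URL" | grep -o '"property.testprop":"[^"]*"'   # run against a live cluster
```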
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETEREPLICAPROP:DeleteReplicaProperty
-
-[[CollectionsAPI-DELETEREPLICAPROP_DeleteReplicaProperty]]
-
 [[CollectionsAPI-deletereplicaprop]]
 == DELETEREPLICAPROP: Delete Replica Property
 
 Deletes an arbitrary property from a particular replica.
 
-`/admin/collections?action=DELETEREPLICAPROP&collection=collectionName&shard=__shardName__&replica=__replicaName__&property=__propertyName__`
+`/admin/collections?action=DELETEREPLICAPROP&collection=collectionName&shard=_shardName_&replica=_replicaName_&property=_propertyName_`
 
 [[CollectionsAPI-Input.18]]
 === Input
@@ -1570,7 +1482,7 @@ The response will include the status of the request. If the status is anything o
 
 This command would delete the preferredLeader (`property.preferredLeader`) from core_node1.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=DELETEREPLICAPROP&shard=shard1&collection=collection1&replica=core_node1&property=preferredLeader
 ----
@@ -1587,14 +1499,10 @@ http://localhost:8983/solr/admin/collections?action=DELETEREPLICAPROP&shard=shar
 </response>
 ----
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-BALANCESHARDUNIQUE:BalanceaPropertyAcrossNodes
-
-[[CollectionsAPI-BALANCESHARDUNIQUE_BalanceaPropertyAcrossNodes]]
-
 [[CollectionsAPI-balanceshardunique]]
 == BALANCESHARDUNIQUE: Balance a Property Across Nodes
 
-`/admin/collections?action=BALANCESHARDUNIQUE&collection=__collectionName__&property=__propertyName__`
+`/admin/collections?action=BALANCESHARDUNIQUE&collection=_collectionName_&property=_propertyName_`
 
-Insures that a particular property is distributed evenly amongst the physical nodes that make up a collection. If the property already exists on a replica, every effort is made to leave it there. If the property is *not* on any replica on a shard, one is chosen and the property is added.
+Ensures that a particular property is distributed evenly amongst the physical nodes that make up a collection. If the property already exists on a replica, every effort is made to leave it there. If the property is *not* on any replica on a shard, one is chosen and the property is added.
 
@@ -1624,7 +1532,7 @@ The response will include the status of the request. If the status is anything o
 
 Either of these commands would put the "preferredLeader" property on one replica in every shard in the "collection1" collection.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=collection1&property=preferredLeader
 
@@ -1645,10 +1553,6 @@ http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collectio
 
 Examining the clusterstate after issuing this call should show exactly one replica in each shard that has this property.
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-REBALANCELEADERS:RebalanceLeaders
-
-[[CollectionsAPI-REBALANCELEADERS_RebalanceLeaders]]
-
 [[CollectionsAPI-rebalanceleaders]]
 == REBALANCELEADERS: Rebalance Leaders
 
@@ -1683,7 +1587,7 @@ The response will include the status of the request. If the status is anything o
 
 Either of these commands would cause all the active replicas that had the "preferredLeader" property set and were _not_ already the preferred leader to become leaders.
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=collection1
 http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=collection1&maxAtOnce=5&maxWaitSeconds=30
@@ -1745,7 +1649,7 @@ In this example, two replicas in the "alreadyLeaders" section already had the le
 </response>
 ----
 
-Examining the clusterstate after issuing this call should show that every live node that has the "preferredLeader" property should also have the "leader" property set to __true__.
+Examining the clusterstate after issuing this call should show that every live node that has the "preferredLeader" property should also have the "leader" property set to _true_.
 
 // OLD_CONFLUENCE_ID: CollectionsAPI-FORCELEADER:ForceShardLeader
 
@@ -1756,9 +1660,7 @@ Examining the clusterstate after issuing this call should show that every live n
 
-In the unlikely event of a shard losing its leader, this command can be invoked to force the election of a new leader
+In the unlikely event of a shard losing its leader, this command can be invoked to force the election of a new leader.
 
-....
-/admin/collections?action=FORCELEADER&collection=<collectionName>&shard=<shardName>
-....
+`/admin/collections?action=FORCELEADER&collection=<collectionName>&shard=<shardName>`
 
 [[CollectionsAPI-Input.21]]
 === Input
@@ -1774,19 +1676,13 @@ In the unlikely event of a shard losing its leader, this command can be invoked
 
 [IMPORTANT]
 ====
-
 This is an expert level command, and should be invoked only when regular leader election is not working. This may potentially lead to loss of data in the event that the new leader doesn't have certain updates, possibly recent ones, which were acknowledged by the old leader before going down.
-
 ====
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-MIGRATESTATEFORMAT:MigrateClusterState
-
-[[CollectionsAPI-MIGRATESTATEFORMAT_MigrateClusterState]]
-
 [[CollectionsAPI-migratestateformat]]
 == MIGRATESTATEFORMAT: Migrate Cluster State
 
-A Expert level utility API to move a collection from shared `clusterstate.json` zookeeper node (created with `stateFormat=1`, the default in all Solr releases prior to 5.0) to the per-collection `state.json` stored in ZooKeeper (created with `stateFormat=2`, the current default) seamlessly without any application down-time.
+An expert-level utility API to move a collection from the shared `clusterstate.json` ZooKeeper node (created with `stateFormat=1`, the default in all Solr releases prior to 5.0) to the per-collection `state.json` stored in ZooKeeper (created with `stateFormat=2`, the current default) seamlessly, without any application down-time.
 
 `/admin/collections?action=MIGRATESTATEFORMAT&collection=<collection_name>`
 
@@ -1799,10 +1695,6 @@ A Expert level utility API to move a collection from shared `clusterstate.json`
 
 This API is useful in migrating any collections created prior to Solr 5.0 to the more scalable cluster state format now used by default. If a collection was created in any Solr 5.x version or higher, then executing this command is not necessary.
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-BACKUP:BackupCollection
-
-[[CollectionsAPI-BACKUP_BackupCollection]]
-
 [[CollectionsAPI-backup]]
 == BACKUP: Backup Collection
 
@@ -1826,10 +1718,6 @@ The backup command will backup Solr indexes and configurations for a specified c
 |repository |string |No |The name of the repository to be used for the backup. If no repository is specified then the local filesystem repository will be used automatically.
 |===
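Like the other commands, BACKUP is invoked as an HTTP request. A sketch with placeholder values (the collection name, backup name, and location below are assumptions; `location` must be a path accessible to the node handling the request):

```shell
SOLR=http://localhost:8983/solr

# Back up the 'techproducts' collection under the name 'myBackup'
URL="$SOLR/admin/collections?action=BACKUP&name=myBackup&collection=techproducts&location=/var/backups/solr"
echo "$URL"
# curl -s "$URL"   # run against a live cluster
```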
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-RESTORE:RestoreCollection
-
-[[CollectionsAPI-RESTORE_RestoreCollection]]
-
 [[CollectionsAPI-restore]]
 == RESTORE: Restore Collection
 
@@ -1868,13 +1756,9 @@ Additionally, there are several parameters that can be overridden:
 |replicationFactor |Integer |No |The number of replicas to be created for each shard.
 |maxShardsPerNode |Integer |No |When creating collections, the shards and/or replicas are spread across all available (i.e., live) nodes, and two replicas of the same shard will never be on the same node. If a node is not live when the CREATE operation is called, it will not get any parts of the new collection, which could lead to too many replicas being created on a single live node. Defining `maxShardsPerNode` sets a limit on the number of replicas CREATE will spread to each node. If the entire collection can not be fit into the live nodes, no collection will be created at all.
 |autoAddReplicas |Boolean |No |When set to true, enables auto addition of replicas on shared file systems. See the section <<running-solr-on-hdfs.adoc#RunningSolronHDFS-AutomaticallyAddReplicasinSolrCloud,Automatically Add Replicas in SolrCloud>> for more details on settings and overrides.
-|property.__name__=__value__ |String |No |Set core property _name_ to __value__. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
+|property._name_=_value_ |String |No |Set core property _name_ to _value_. See the section <<defining-core-properties.adoc#defining-core-properties,Defining core.properties>> for details on supported properties and values.
 |===
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-DELETENODE:DeleteReplicasinaNode
-
-[[CollectionsAPI-DELETENODE_DeleteReplicasinaNode]]
-
 [[CollectionsAPI-deletenode]]
 == DELETENODE: Delete Replicas in a Node
 
@@ -1894,16 +1778,12 @@ Deletes all replicas of all collections in that node. Please note that the node
 |async |string |No |Request ID to track this action which will be <<CollectionsAPI-AsynchronousCalls,processed asynchronously>>.
 |===
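An example invocation, assuming the node to be emptied is identified as `localhost:8984_solr` (node names take the `host:port_context` form shown in the cluster state; adjust for your deployment):

```shell
SOLR=http://localhost:8983/solr

# Delete every replica hosted on the node localhost:8984_solr
URL="$SOLR/admin/collections?action=DELETENODE&node=localhost:8984_solr"
echo "$URL"
# curl -s "$URL"   # run against a live cluster
```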
 
-// OLD_CONFLUENCE_ID: CollectionsAPI-REPLACENODE:MoveAllReplicasinaNodetoAnother
-
-[[CollectionsAPI-REPLACENODE_MoveAllReplicasinaNodetoAnother]]
-
 [[CollectionsAPI-replacenode]]
 == REPLACENODE: Move All Replicas in a Node to Another
 
-This command recreates replicas in the source node to the target node. After each replica is copied, the replicas in the source node are deleted.
+This command recreates the replicas of the source node on the target node. After each replica is copied, the replicas on the source node are deleted.
 
-`/admin/collections?action=REPLACENODE&source=<source-node>&target=<target-node>`
+`/admin/collections?action=REPLACENODE&source=_source-node_&target=_target-node_`
 
 [[CollectionsAPI-Input.25]]
 === Input
@@ -1921,9 +1801,7 @@ This command recreates replicas in the source node to the target node. After eac
 
 [IMPORTANT]
 ====
-
-This operation does not hold necessary locks on the replicas that belong to on the source node. So don't perform other collection operations in this period.
+This operation does not hold necessary locks on the replicas that belong to the source node, so do not perform other collection operations during this period.
-
 ====
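An example invocation, using the `source` and `target` parameters shown above (the node names here are placeholders in the `host:port_context` form):

```shell
SOLR=http://localhost:8983/solr

# Move all replicas from localhost:8984_solr to localhost:8985_solr
URL="$SOLR/admin/collections?action=REPLACENODE&source=localhost:8984_solr&target=localhost:8985_solr"
echo "$URL"
# curl -s "$URL"   # run against a live cluster
```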
 
 [[CollectionsAPI-AsynchronousCalls]]
@@ -1940,7 +1818,7 @@ As of now, REQUESTSTATUS does not automatically clean up the tracking data struc
 
 *Input*
 
-[source,java]
+[source]
 ----
 http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1&async=1000
 ----
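The full asynchronous workflow — submit with `async`, poll with REQUESTSTATUS, then clean up with DELETESTATUS since tracking data is not removed automatically — can be scripted. A rough skeleton, assuming a local cluster and the request id 1000 used above (the live curl call is commented out so the sequence can be read on its own):

```shell
SOLR=http://localhost:8983/solr
REQID=1000

# 1. Submit the long-running call with an async request id
SUBMIT="$SOLR/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1&async=$REQID"

# 2. Poll until the request reports a terminal state
POLL="$SOLR/admin/collections?action=REQUESTSTATUS&requestid=$REQID"
# while ! curl -s "$POLL" | grep -Eq 'completed|failed'; do sleep 5; done

# 3. Clear the stored status once it has been read
CLEANUP="$SOLR/admin/collections?action=DELETESTATUS&requestid=$REQID"

printf '%s\n%s\n%s\n' "$SUBMIT" "$POLL" "$CLEANUP"
```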

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/collections-core-admin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/collections-core-admin.adoc b/solr/solr-ref-guide/src/collections-core-admin.adoc
index 6f35507..fc75354 100644
--- a/solr/solr-ref-guide/src/collections-core-admin.adoc
+++ b/solr/solr-ref-guide/src/collections-core-admin.adoc
@@ -6,11 +6,9 @@ The Collections screen provides some basic functionality for managing your Colle
 
 [NOTE]
 ====
-
 If you are running a single node Solr instance, you will not see a Collections option in the left nav menu of the Admin UI.
 
 You will instead see a "Core Admin" screen that supports some comparable Core level information & manipulation via the <<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>> instead.
-
 ====
 
 The main display of this page provides a list of collections that exist in your cluster. Clicking on a collection name provides some basic metadata about how the collection is defined, and its current shards & replicas, with options for adding and deleting individual replicas.
@@ -25,4 +23,3 @@ Replicas can be deleted by clicking the red "X" next to the replica name.
 If the shard is inactive, for example after a <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD action>>, an option to delete the shard will appear as a red "X" next to the shard name.
 
 image::images/collections-core-admin/DeleteShard.png[image,width=486,height=250]
-

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc b/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
index 8e6eb7c..cff045d 100644
--- a/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
+++ b/solr/solr-ref-guide/src/combining-distribution-and-replication.adoc
@@ -4,15 +4,14 @@
 
 When your index is too large for a single machine and you have a query volume that single shards cannot keep up with, it's time to replicate each shard in your distributed search setup.
 
-The idea is to combine distributed search with replication. As shown in the figure below, a combined distributed-replication configuration features a master server for each shard and then 1-__n__ slaves that are replicated from the master. As in a standard replicated configuration, the master server handles updates and optimizations without adversely affecting query handling performance.
+The idea is to combine distributed search with replication. As shown in the figure below, a combined distributed-replication configuration features a master server for each shard and then 1-_n_ slaves that are replicated from the master. As in a standard replicated configuration, the master server handles updates and optimizations without adversely affecting query handling performance.
 
 Query requests should be load balanced across each of the shard slaves. This gives you both increased query handling capacity and fail-over backup if a server goes down.
 
+.A Solr configuration combining both replication and master-slave distribution.
 image::images/combining-distribution-and-replication/worddav4101c16174820e932b44baa22abcfcd1.png[image,width=312,height=344]
 
 
-_A Solr configuration combining both replication and master-slave distribution._
-
 None of the master shards in this configuration know about each other. You index to each master, the index is replicated to each slave, and then searches are distributed across the slaves, using one slave from each master/slave shard.
 
 For high availability you can use a load balancer to set up a virtual IP for each shard's set of slaves. If you are new to load balancing, HAProxy (http://haproxy.1wt.eu/) is a good open source software load-balancer. If a slave server goes down, a good load-balancer will detect the failure using some technique (generally a heartbeat system), and forward all requests to the remaining live slaves that served with the failed slave. A single virtual IP should then be set up so that requests can hit a single IP, and get load balanced to each of the virtual IPs for the search slaves.
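A distributed query against such a setup names one slave (or one virtual IP) per shard in the `shards` parameter; Solr also accepts pipe-separated alternatives within a shard entry for failover. A sketch with placeholder host and core names:

```shell
# One comma-separated entry per logical shard; '|' separates alternative
# replicas Solr may fail over between for that shard.
SHARDS='slave1a:8983/solr/core1|slave1b:8983/solr/core1,slave2a:8983/solr/core1|slave2b:8983/solr/core1'
URL="http://any-node:8983/solr/core1/select?q=*:*&shards=$SHARDS"
echo "$URL"
```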

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bbe60af2/solr/solr-ref-guide/src/command-line-utilities.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/command-line-utilities.adoc b/solr/solr-ref-guide/src/command-line-utilities.adoc
index 5a90930..f12c5bb 100644
--- a/solr/solr-ref-guide/src/command-line-utilities.adoc
+++ b/solr/solr-ref-guide/src/command-line-utilities.adoc
@@ -9,23 +9,19 @@ In addition, SolrCloud provides its own administration page (found at http://loc
 .Solr's zkcli.sh vs ZooKeeper's zkCli.sh vs Solr Start Script
 [IMPORTANT]
 ====
-
 The `zkcli.sh` provided by Solr is not the same as the https://zookeeper.apache.org/doc/trunk/zookeeperStarted.html#sc_ConnectingToZooKeeper[`zkCli.sh` included in ZooKeeper distributions].
 
 ZooKeeper's `zkCli.sh` provides a completely general, application-agnostic shell for manipulating data in ZooKeeper. Solr's `zkcli.sh` – discussed in this section – is specific to Solr, and has command line arguments specific to dealing with Solr data in ZooKeeper.
 
 Many of the functions provided by the zkCli.sh script are also provided by the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script Reference>>, which may be more familiar as the start script ZooKeeper maintenance commands are very similar to Unix commands.
-
 ====
 
-// OLD_CONFLUENCE_ID: CommandLineUtilities-UsingSolr'sZooKeeperCLI
-
 [[CommandLineUtilities-UsingSolr_sZooKeeperCLI]]
 == Using Solr's ZooKeeper CLI
 
 Both `zkcli.sh` (for Unix environments) and `zkcli.bat` (for Windows environments) support the following command line options:
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header"]
 |===
 |Short |Parameter Usage |Meaning
 | |`-cmd <arg>` |CLI Command to be executed: `bootstrap`, `upconfig`, `downconfig`, `linkconfig`, `makepath`, `get`, `getfile`, `put`, `putfile`, `list`, `clear` or `clusterprop`. This parameter is **mandatory**.
@@ -54,63 +50,55 @@ If you are on Windows machine, simply replace `zkcli.sh` with `zkcli.bat` in the
 [[CommandLineUtilities-Uploadaconfigurationdirectory]]
 === Upload a configuration directory
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 \
-   -cmd upconfig -confname my_new_config -confdir server/solr/configsets/basic_configs/conf
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd upconfig -confname my_new_config -confdir server/solr/configsets/basic_configs/conf
 ----
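The inverse operation, `downconfig` (listed in the command table above), pulls a configuration out of ZooKeeper into a local directory. A sketch using the same configset name (the target directory is a placeholder; run the command from the Solr installation directory):

```shell
# Download the configset 'my_new_config' into a local directory
CMD="./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd downconfig -confname my_new_config -confdir /tmp/my_new_config"
echo "$CMD"
# eval "$CMD"   # run against a live ZooKeeper
```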
 
 [[CommandLineUtilities-BootstrapZooKeeperfromexistingSOLR_HOME]]
 === Bootstrap ZooKeeper from existing SOLR_HOME
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 \
-   -cmd bootstrap -solrhome /var/solr/data
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 -cmd bootstrap -solrhome /var/solr/data
 ----
 
 .Bootstrap with chroot
 [NOTE]
 ====
-
-Using the boostrap command with a zookeeper chroot in the -zkhost parameter, e.g. `-zkhost 127.0.0.1:2181/solr`, will automatically create the chroot path before uploading the configs.
+Using the bootstrap command with a ZooKeeper chroot in the `-zkhost` parameter, e.g. `-zkhost 127.0.0.1:2181/solr`, will automatically create the chroot path before uploading the configs.
-
 ====
 
 [[CommandLineUtilities-PutarbitrarydataintoanewZooKeeperfile]]
 === Put arbitrary data into a new ZooKeeper file
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 \
-   -cmd put /my_zk_file.txt 'some data'
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd put /my_zk_file.txt 'some data'
 ----
 
 [[CommandLineUtilities-PutalocalfileintoanewZooKeeperfile]]
 === Put a local file into a new ZooKeeper file
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 \
-   -cmd putfile /my_zk_file.txt /tmp/my_local_file.txt
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd putfile /my_zk_file.txt /tmp/my_local_file.txt
 ----
 
 [[CommandLineUtilities-Linkacollectiontoaconfigurationset]]
 === Link a collection to a configuration set
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 \
-   -cmd linkconfig -collection gettingstarted -confname my_new_config
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd linkconfig -collection gettingstarted -confname my_new_config
 ----
 
 [[CommandLineUtilities-CreateanewZooKeeperpath]]
 === Create a new ZooKeeper path
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 \
-   -cmd makepath /solr
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 -cmd makepath /solr
 ----
 
 This can be useful to create a chroot path in ZooKeeper before first cluster start.
@@ -118,10 +106,9 @@ This can be useful to create a chroot path in ZooKeeper before first cluster sta
 [[CommandLineUtilities-Setaclusterproperty]]
 === Set a cluster property
 
-This command will add or modify a single cluster property in `/clusterprops.json`. Use this command instead of the usual getfile -> edit -> putfile cycle. Unlike the CLUSTERPROP REST API, this command does *not* require a running Solr cluster.
+This command will add or modify a single cluster property in `clusterprops.json`. Use this command instead of the usual getfile -> edit -> putfile cycle. Unlike the CLUSTERPROP REST API, this command does *not* require a running Solr cluster.
 
-[source,java]
+[source,bash]
 ----
-./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 \
-   -cmd clusterprop -name urlScheme -val https
+./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 -cmd clusterprop -name urlScheme -val https
 ----
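To confirm the property was written, the file can be read back with the `get` command from the table above:

```shell
# Read /clusterprops.json back out of ZooKeeper
CMD="./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:2181 -cmd get /clusterprops.json"
echo "$CMD"
# eval "$CMD"   # run against a live ZooKeeper
```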