Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/07 23:51:09 UTC

[1/3] lucene-solr:jira/solr-10290: SOLR-10296: conversion, letter R

Repository: lucene-solr
Updated Branches:
  refs/heads/jira/solr-10290 e53c64a35 -> ff9fdcf1f


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/running-your-analyzer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-your-analyzer.adoc b/solr/solr-ref-guide/src/running-your-analyzer.adoc
index 12c15d1..c143d7f 100644
--- a/solr/solr-ref-guide/src/running-your-analyzer.adoc
+++ b/solr/solr-ref-guide/src/running-your-analyzer.adoc
@@ -2,18 +2,19 @@
 :page-shortname: running-your-analyzer
 :page-permalink: running-your-analyzer.html
 
-Once you've <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,defined a field type in your Schema>>, and specified the analysis steps that you want applied to it, you should test it out to make sure that it behaves the way you expect it to. Luckily, there is a very handy page in the Solr <<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,admin interface>> that lets you do just that. You can invoke the analyzer for any text field, provide sample input, and display the resulting token stream.
+Once you've <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,defined a field type in your Schema>>, and specified the analysis steps that you want applied to it, you should test it out to make sure that it behaves the way you expect it to.
 
-For example, let's look at some of the "Text" field types available in the "`bin/solr -e techproducts`" example configuration, and use the <<analysis-screen.adoc#analysis-screen,Analysis Screen>> (http://localhost:8983/solr/#/techproducts/analysis) to compare how the tokens produced at index time for the sentence "`Running an Analyzer`" match up with a slightly different query text of "`run my analyzer`"
+Luckily, there is a very handy page in the Solr <<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,admin interface>> that lets you do just that. You can invoke the analyzer for any text field, provide sample input, and display the resulting token stream.
+
+For example, let's look at some of the "Text" field types available in the `bin/solr -e techproducts` example configuration, and use the <<analysis-screen.adoc#analysis-screen,Analysis Screen>> (`\http://localhost:8983/solr/#/techproducts/analysis`) to compare how the tokens produced at index time for the sentence "Running an Analyzer" match up with a slightly different query text of "run my analyzer"
 
 We can begin with "```text_ws```" -- one of the simplest Text field types available:
 
 image::images/running-your-analyzer/analysis_compare_0.png[image]
 
-
 By looking at the start and end positions for each term, we can see that the only thing this field type does is tokenize text on whitespace. Notice in this image that the term "Running" has a start position of 0 and an end position of 7, while "an" has a start position of 8 and an end position of 10, and "Analyzer" starts at 11 and ends at 19. These offsets are character positions in the original 19-character input, so the whitespace between the terms is accounted for in the offsets even though it is not included in any token.
 
-Note also that the indexed terms and the query terms are still very different. "Running" doesn't match "run", "Analyzer" doesn't match "analyzer" (to a computer), and obviously "an" and "my" are totally different words. If our objective is to allow queries like "`run my analyzer`" to match indexed text like "`Running an Analyzer`" then we will evidently need to pick a different field type with index & query time text analysis that does more processing of the inputs.
+Note also that the indexed terms and the query terms are still very different. "Running" doesn't match "run", "Analyzer" doesn't match "analyzer" (to a computer), and obviously "an" and "my" are totally different words. If our objective is to allow queries like "run my analyzer" to match indexed text like "Running an Analyzer" then we will evidently need to pick a different field type with index & query time text analysis that does more processing of the inputs.
 
 In particular we will want:
 
@@ -25,17 +26,14 @@ For our next attempt, let's try the "```text_general```" field type:
 
 image::images/running-your-analyzer/analysis_compare_1.png[image]
 
-
 With the verbose output enabled, we can see how each stage of our new analyzers modifies the tokens it receives before passing them on to the next stage. As we scroll down to the final output, we can see that we do start to get a match on "analyzer" from each input string, thanks to the "LCF" stage -- which, if you hover over it with your mouse, you'll see is the "```LowerCaseFilter```":
 
 image::images/running-your-analyzer/analysis_compare_2.png[image]
 
-
 The "```text_general```" field type is designed to be generally useful for any language, and it has definitely gotten us closer to our objective than "```text_ws```" from our first example by solving the problem of case sensitivity. It's still not quite what we are looking for because we don't see stemming or stopword rules being applied. So now let us try the "```text_en```" field type:
 
 image::images/running-your-analyzer/analysis_compare_3.png[image]
 
-
 Now we can see the "SF" (`StopFilter`) stage of the analyzers solving the problem of removing Stop Words ("an"), and as we scroll down, we also see the "PSF" (`PorterStemFilter`) stage apply stemming rules suitable for our English language input, such that the terms produced by our "index analyzer" and the terms produced by our "query analyzer" match the way we expect.
 
 image::images/running-your-analyzer/analysis_compare_4.png[image]
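
The same comparison can be scripted against the Field Analysis API that backs this screen, which is handy for regression-testing a schema change. A minimal sketch, assuming the `techproducts` example is running on the default port (the `/analysis/field` handler is registered implicitly):

[source,bash]
----
# Analyze index-time text and query-time text against the text_en field type;
# analysis.showmatch=true flags tokens that match between the two streams.
curl "http://localhost:8983/solr/techproducts/analysis/field?analysis.fieldtype=text_en&analysis.fieldvalue=Running+an+Analyzer&analysis.query=run+my+analyzer&analysis.showmatch=true&wt=json"
----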


[2/3] lucene-solr:jira/solr-10290: SOLR-10296: conversion, letter R

Posted by ct...@apache.org.
SOLR-10296: conversion, letter R


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/f7859d7f
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/f7859d7f
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/f7859d7f

Branch: refs/heads/jira/solr-10290
Commit: f7859d7f32df43308b1d82914523d90723b415a4
Parents: e53c64a
Author: Cassandra Targett <ct...@apache.org>
Authored: Sun May 7 18:46:57 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Sun May 7 18:46:57 2017 -0500

----------------------------------------------------------------------
 .../read-and-write-side-fault-tolerance.adoc    | 27 ++++++-----
 solr/solr-ref-guide/src/realtime-get.adoc       | 20 ++++----
 solr/solr-ref-guide/src/replication-screen.adoc |  9 ++--
 .../src/request-parameters-api.adoc             | 26 +++++-----
 .../src/requestdispatcher-in-solrconfig.adoc    |  8 ++--
 ...lers-and-searchcomponents-in-solrconfig.adoc | 10 ++--
 solr/solr-ref-guide/src/response-writers.adoc   | 34 +++++++------
 solr/solr-ref-guide/src/result-clustering.adoc  | 37 +++++++--------
 solr/solr-ref-guide/src/result-grouping.adoc    | 31 ++++++------
 .../src/rule-based-authorization-plugin.adoc    | 50 ++++++++++----------
 .../src/rule-based-replica-placement.adoc       | 47 +++++++++---------
 .../src/running-solr-on-hdfs.adoc               | 46 +++++++++---------
 solr/solr-ref-guide/src/running-solr.adoc       | 22 ++++-----
 .../src/running-your-analyzer.adoc              | 12 ++---
 14 files changed, 185 insertions(+), 194 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
index 00d0008..91d94bb 100644
--- a/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
+++ b/solr/solr-ref-guide/src/read-and-write-side-fault-tolerance.adoc
@@ -2,25 +2,26 @@
 :page-shortname: read-and-write-side-fault-tolerance
 :page-permalink: read-and-write-side-fault-tolerance.html
 
-SolrCloud supports elasticity, high availability, and fault tolerance in reads and writes. What this means, basically, is that when you have a large cluster, you can always make requests to the cluster: Reads will return results whenever possible, even if some nodes are down, and Writes will be acknowledged only if they are durable; i.e., you won't lose data.
+SolrCloud supports elasticity, high availability, and fault tolerance in reads and writes.
+
+What this means is that when you have a large cluster, you can always make requests to the cluster: reads will return results whenever possible, even if some nodes are down, and writes will be acknowledged only if they are durable; i.e., you won't lose data.
 
 [[ReadandWriteSideFaultTolerance-ReadSideFaultTolerance]]
 == Read Side Fault Tolerance
 
 In a SolrCloud cluster each individual node load balances read requests across all the replicas in a collection. You still need a load balancer on the 'outside' that talks to the cluster, or you need a smart client which understands how to read and interact with Solr's metadata in ZooKeeper and only needs the ZooKeeper ensemble's address to start discovering the nodes to which it should send requests. (Solr provides a smart Java SolrJ client called {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html[CloudSolrClient].)
 
-Even if some nodes in the cluster are offline or unreachable, a Solr node will be able to correctly respond to a search request as long as it can communicate with at least one replica of every shard, or one replica of every _relevant_ shard if the user limited the search via the '`shards`' or '`_route_`' parameters. The more replicas there are of every shard, the more likely that the Solr cluster will be able to handle search results in the event of node failures.
+Even if some nodes in the cluster are offline or unreachable, a Solr node will be able to correctly respond to a search request as long as it can communicate with at least one replica of every shard, or one replica of every _relevant_ shard if the user limited the search via the `shards` or `\_route_` parameters. The more replicas there are of every shard, the more likely it is that the Solr cluster will be able to return search results in the event of node failures.
 
 [[ReadandWriteSideFaultTolerance-zkConnected]]
 === `zkConnected`
 
-A Solr node will return the results of a search request as long as it can communicate with at least one replica of every shard that it knows about, even if it can _not_ communicate with ZooKeeper at the time it receives the request. This is normally the preferred behavior from a fault tolerance standpoint, but may result in stale or incorrect results if there have been major changes to the collection structure that the node has not been informed of via ZooKeeper (ie: shards may have been added or removed, or split into sub-shards)
+A Solr node will return the results of a search request as long as it can communicate with at least one replica of every shard that it knows about, even if it can _not_ communicate with ZooKeeper at the time it receives the request. This is normally the preferred behavior from a fault tolerance standpoint, but may result in stale or incorrect results if there have been major changes to the collection structure that the node has not been informed of via ZooKeeper (i.e., shards may have been added or removed, or split into sub-shards).
 
 A `zkConnected` header is included in every search response indicating if the node that processed the request was connected with ZooKeeper at the time:
 
-*Solr Response with partialResults*
-
-[source,text]
+.Solr Response with partialResults
+[source,json]
 ----
 {
   "responseHeader": {
@@ -34,7 +35,7 @@ A `zkConnected` header is included in every search response indicating if the no
   "response": {
     "numFound": 107,
     "start": 0,
-    "docs": [ ... ]
+    "docs": [ "..." ]
   }
 }
 ----
@@ -42,13 +43,17 @@ A `zkConnected` header is included in every search response indicating if the no
 [[ReadandWriteSideFaultTolerance-shards.tolerant]]
 === `shards.tolerant`
 
-In the event that one or more shards queried are completely unavailable, then Solr's default behavior is to fail the request. However, there are many use-cases where partial results are acceptable and so Solr provides a boolean `shards.tolerant` parameter (default '`false`'). If `shards.tolerant=true` then partial results may be returned. If the returned response does not contain results from all the appropriate shards then the response header contains a special flag called '`partialResults`'. The client can specify '<<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,`shards.info`>>' along with the '`shards.tolerant`' parameter to retrieve more fine-grained details.
+In the event that one or more shards queried are completely unavailable, then Solr's default behavior is to fail the request. However, there are many use-cases where partial results are acceptable and so Solr provides a boolean `shards.tolerant` parameter (default `false`).
+
+If `shards.tolerant=true` then partial results may be returned. If the returned response does not contain results from all the appropriate shards then the response header contains a special flag called `partialResults`.
+
+The client can specify <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,`shards.info`>> along with the `shards.tolerant` parameter to retrieve more fine-grained details.
 
 Example response with `partialResults` flag set to 'true':
 
 .Solr Response with partialResults
-[source,text]
+[source,json]
 ----
 {
   "responseHeader": {
@@ -63,7 +68,7 @@ Example response with `partialResults` flag set to 'true':
   "response": {
     "numFound": 77,
     "start": 0,
-    "docs": [ ... ]
+    "docs": [ "..." ]
   }
 }
 ----
@@ -89,6 +94,6 @@ If an update fails because cores are reloading schemas and some have finished bu
 
 When using a replication factor greater than one, an update request may succeed on the shard leader but fail on one or more of the replicas. For instance, consider a collection with one shard and a replication factor of three. In this case, you have a shard leader and two additional replicas. If an update request succeeds on the leader but fails on both replicas, for whatever reason, the update request is still considered successful from the perspective of the client. The replicas that missed the update will sync with the leader when they recover.
 
-Behind the scenes, this means that Solr has accepted updates that are only on one of the nodes (the current leader). Solr supports the optional `min_rf` parameter on update requests that cause the server to return the achieved replication factor for an update request in the response. For the example scenario described above, if the client application included min_rf >= 1, then Solr would return rf=1 in the Solr response header because the request only succeeded on the leader. The update request will still be accepted as the `min_rf` parameter only tells Solr that the client application wishes to know what the achieved replication factor was for the update request. In other words, min_rf does not mean Solr will enforce a minimum replication factor as Solr does not support rolling back updates that succeed on a subset of replicas.
+Behind the scenes, this means that Solr has accepted updates that are only on one of the nodes (the current leader). Solr supports an optional `min_rf` parameter on update requests, which causes the server to return the achieved replication factor for the update request in the response. For the example scenario described above, if the client application included `min_rf >= 1`, then Solr would return `rf=1` in the Solr response header because the request only succeeded on the leader. The update request will still be accepted, as the `min_rf` parameter only tells Solr that the client application wishes to know what the achieved replication factor was for the update request. In other words, `min_rf` does not mean Solr will enforce a minimum replication factor, as Solr does not support rolling back updates that succeed on a subset of replicas.
 
 On the client side, if the achieved replication factor is less than the acceptable level, then the client application can take additional measures to handle the degraded state. For instance, a client application may want to keep a log of which update requests were sent while the state of the collection was degraded and then resend the updates once the problem has been resolved. In short, `min_rf` is an optional mechanism for a client application to be warned that an update request was accepted while the collection is in a degraded state.
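
Both sides of this behavior can be exercised from the command line. A minimal sketch of the read-side and write-side parameters, assuming the `techproducts` example on the default port (the document ID is illustrative):

[source,bash]
----
# Read side: tolerate unavailable shards; inspect zkConnected and partialResults
# in the responseHeader to detect a degraded response.
curl "http://localhost:8983/solr/techproducts/select?q=*:*&shards.tolerant=true&shards.info=true&wt=json"

# Write side: ask Solr to report the achieved replication factor (rf) for this update.
curl "http://localhost:8983/solr/techproducts/update?min_rf=2" \
  -H 'Content-type:application/json' -d '[{"id":"mydoc"}]'
----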

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/realtime-get.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/realtime-get.adoc b/solr/solr-ref-guide/src/realtime-get.adoc
index b540d03..2bf9868 100644
--- a/solr/solr-ref-guide/src/realtime-get.adoc
+++ b/solr/solr-ref-guide/src/realtime-get.adoc
@@ -2,7 +2,9 @@
 :page-shortname: realtime-get
 :page-permalink: realtime-get.html
 
-For index updates to be visible (searchable), some kind of commit must reopen a searcher to a new point-in-time view of the index. The *realtime get* feature allows retrieval (by `unique-key`) of the latest version of any documents without the associated cost of reopening a searcher. This is primarily useful when using Solr as a NoSQL data store and not just a search index.
+For index updates to be visible (searchable), some kind of commit must reopen a searcher to a new point-in-time view of the index.
+
+The *realtime get* feature allows retrieval (by `unique-key`) of the latest version of any documents without the associated cost of reopening a searcher. This is primarily useful when using Solr as a NoSQL data store and not just a search index.
 
 Real Time Get relies on the update log feature, which is enabled by default and can be configured in `solrconfig.xml`:
 
@@ -28,7 +30,7 @@ Real Time Get requests can be performed using the `/get` handler which exists im
 
 For example, if you started Solr using the `bin/solr -e techproducts` example command, you could then index a new document (without committing it) like so:
 
-[source,bash]
+[source,text]
 ----
 curl 'http://localhost:8983/solr/techproducts/update/json?commitWithin=10000000'
   -H 'Content-type:application/json' -d '[{"id":"mydoc","name":"realtime-get test!"}]'
@@ -36,7 +38,7 @@ curl 'http://localhost:8983/solr/techproducts/update/json?commitWithin=10000000'
 
 If you do a normal search, this document should not be found yet:
 
-[source,xml]
+[source,text]
 ----
 http://localhost:8983/solr/techproducts/query?q=id:mydoc
 ...
@@ -46,18 +48,18 @@ http://localhost:8983/solr/techproducts/query?q=id:mydoc
 
 However if you use the Real Time Get handler exposed at `/get`, you can still retrieve that document:
 
-[source,xml]
+[source,text]
 ----
 http://localhost:8983/solr/techproducts/get?id=mydoc
 ...
 {"doc":{"id":"mydoc","name":"realtime-get test!", "_version_":1487137811571146752}}
 ----
 
-You can also specify multiple documents at once via the *ids* parameter and a comma separated list of ids, or by using multiple *id* parameters. If you specify multiple ids, or use the *ids* parameter, the response will mimic a normal query response to make it easier for existing clients to parse.
+You can also specify multiple documents at once via the `ids` parameter and a comma separated list of ids, or by using multiple `id` parameters. If you specify multiple ids, or use the `ids` parameter, the response will mimic a normal query response to make it easier for existing clients to parse.
 
 For example:
 
-[source,xml]
+[source,text]
 ----
 http://localhost:8983/solr/techproducts/get?ids=mydoc,IW-02
 http://localhost:8983/solr/techproducts/get?id=mydoc&id=IW-02
@@ -78,7 +80,7 @@ http://localhost:8983/solr/techproducts/get?id=mydoc&id=IW-02
 
 Real Time Get requests can also be combined with filter queries, specified with an <<common-query-parameters.adoc#CommonQueryParameters-Thefq_FilterQuery_Parameter,`fq` parameter>>, just like search requests:
 
-[source,xml]
+[source,text]
 ----
 http://localhost:8983/solr/techproducts/get?id=mydoc&id=IW-02&fq=name:realtime-get
 ...
@@ -94,7 +96,7 @@ http://localhost:8983/solr/techproducts/get?id=mydoc&id=IW-02&fq=name:realtime-g
 
 [IMPORTANT]
 ====
+Do *NOT* disable the realtime get handler at `/get` if you are using SolrCloud; otherwise, any leader election will cause a full sync in *ALL* replicas for the shard in question.
 
-Do *NOT* disable the realtime get handler at `/get` if you are using SolrCloud otherwise any leader election will cause a full sync in *ALL* replicas for the shard in question. Similarly, a replica recovery will also always fetch the complete index from the leader because a partial sync will not be possible in the absence of this handler.
-
+Similarly, a replica recovery will also always fetch the complete index from the leader because a partial sync will not be possible in the absence of this handler.
 ====
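
Following on from the `mydoc` example above, once a commit opens a new searcher the document becomes visible to normal queries as well. A minimal sketch, assuming the same `techproducts` instance:

[source,bash]
----
# Issue an explicit commit (empty update with commit=true), after which
# the standard /query handler can find the document too.
curl "http://localhost:8983/solr/techproducts/update?commit=true" \
  -H 'Content-type:application/json' -d '[]'
curl "http://localhost:8983/solr/techproducts/query?q=id:mydoc"
----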

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/replication-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/replication-screen.adoc b/solr/solr-ref-guide/src/replication-screen.adoc
index 73ad8f4..d8c8e3f 100644
--- a/solr/solr-ref-guide/src/replication-screen.adoc
+++ b/solr/solr-ref-guide/src/replication-screen.adoc
@@ -4,18 +4,17 @@
 
 The Replication screen shows you the current replication state for the core you have specified. <<solrcloud.adoc#solrcloud,SolrCloud>> has supplanted much of this functionality, but if you are still using Master-Slave index replication, you can use this screen to:
 
-1.  View the replicatable index state. (on a master node)
-2.  View the current replication status (on a slave node)
-3.  Disable replication. (on a master node)
+. View the replicatable index state (on a master node)
+. View the current replication status (on a slave node)
+. Disable replication (on a master node)
 
 .Caution When Using SolrCloud
 [IMPORTANT]
 ====
-
 When using <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,SolrCloud>>, do not attempt to disable replication via this screen.
-
 ====
 
+.Sample Replication Screen
 image::images/replication-screen/replication.png[image,width=412,height=250]
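
The same operations the screen performs are exposed over HTTP by the replication handler. A minimal sketch, assuming a core named `core1` (the core name is illustrative):

[source,bash]
----
# View the replicatable index state on the master, then disable replication.
curl "http://localhost:8983/solr/core1/replication?command=details&wt=json"
curl "http://localhost:8983/solr/core1/replication?command=disablereplication"
----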
 
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/request-parameters-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/request-parameters-api.adoc b/solr/solr-ref-guide/src/request-parameters-api.adoc
index 75ce8a6..c4c6e8d 100644
--- a/solr/solr-ref-guide/src/request-parameters-api.adoc
+++ b/solr/solr-ref-guide/src/request-parameters-api.adoc
@@ -2,7 +2,9 @@
 :page-shortname: request-parameters-api
 :page-permalink: request-parameters-api.html
 
-The Request Parameters API allows creating parameter sets, a.k.a. paramsets, that can override or take the place of parameters defined in `solrconfig.xml`. The parameter sets defined with this API can be used in requests to Solr, or referenced directly in `solrconfig.xml` request handler definitions.
+The Request Parameters API allows creating parameter sets, a.k.a. paramsets, that can override or take the place of parameters defined in `solrconfig.xml`.
+
+The parameter sets defined with this API can be used in requests to Solr, or referenced directly in `solrconfig.xml` request handler definitions.
 
 It is really another endpoint of the <<config-api.adoc#config-api,Config API>> instead of a separate API, and has distinct commands. It does not replace or modify any sections of `solrconfig.xml`, but instead provides another approach to handling parameters used in requests. It behaves in the same way as the Config API, by storing parameters in another file that will be used at runtime. In this case, the parameters are stored in a file named `params.json`. This file is kept in ZooKeeper or in the `conf` directory of a standalone Solr instance.
 
@@ -43,7 +45,7 @@ curl http://localhost:8983/solr/techproducts/config/params -H 'Content-type:appl
   "set":{
     "myFacets":{
       "facet":"true",
-      "facet.limit":5}}, 
+      "facet.limit":5}},
   "set":{
     "myQueries":{
       "defType":"edismax",
@@ -59,7 +61,7 @@ In the above example all the parameters are equivalent to the "defaults" in `sol
 curl http://localhost:8983/solr/techproducts/config/params -H 'Content-type:application/json'  -d '{
   "set":{
     "my_handler_params":{
-      "facet.limit":5,      
+      "facet.limit":5,
       "_invariants_": {
         "facet":true,
         "wt":"json"
@@ -88,9 +90,9 @@ It will be equivalent to a standard request handler definition such as this one:
   <lst name="defaults">
     <int name="facet.limit">5</int>
   </lst>
-  <lst name="invariants>
-    <str name="wt">json</>
-    <bool name="facet">true<bool>
+  <lst name="invariants">
+    <str name="wt">json</str>
+    <bool name="facet">true</bool>
   </lst>
   <lst name="appends">
     <arr name="facet.field">
@@ -113,7 +115,7 @@ To see the expanded paramset and the resulting effective parameters for a Reques
 
 [source,bash]
 ----
-curl "http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/export&expandParams=true"
+curl "http://localhost:8983/solr/techproducts/config/requestHandler?componentName=/export&expandParams=true"
 ----
 
 [[RequestParametersAPI-ViewingRequestParameters]]
@@ -124,7 +126,7 @@ To see the paramsets that have been created, you can use the `/config/params` en
 [source,bash]
 ----
 curl http://localhost:8983/solr/techproducts/config/params
- 
+
 #Or use the paramset name
 curl http://localhost:8983/solr/techproducts/config/params/myQueries
 ----
@@ -148,9 +150,9 @@ It is possible to pass more than one parameter set in the same request. For exam
 http://localhost/solr/techproducts/select?useParams=myFacets,myQueries
 ----
 
-In the above example the param set 'myQueries' is applied on top of 'myFacets'. So, values in 'myQueries' take precedence over values in 'myFacets'. Additionally, any values passed in the request take precedence over 'useParams'parameters. This acts like the "defaults" specified in the '`<requestHandler>`' definition in `solrconfig.xml`.
+In the above example the param set `myQueries` is applied on top of `myFacets`. So, values in `myQueries` take precedence over values in `myFacets`. Additionally, any values passed in the request take precedence over `useParams` parameters. This acts like the "defaults" specified in the `<requestHandler>` definition in `solrconfig.xml`.
 
-The parameter sets can be used directly in a request handler definition as follows. Please note that the 'useParams' specified is always applied even if the request contains `useParams`.
+The parameter sets can be used directly in a request handler definition as follows. Please note that the `useParams` specified is always applied even if the request contains `useParams`.
 
 [source,xml]
 ----
@@ -158,7 +160,7 @@ The parameter sets can be used directly in a request handler definition as follo
   <lst name="defaults">
     <bool name="terms">true</bool>
     <bool name="distrib">false</bool>
-  </lst>     
+  </lst>
   <arr name="components">
     <str>terms</str>
   </arr>
@@ -168,7 +170,7 @@ The parameter sets can be used directly in a request handler definition as follo
 To summarize, parameters are applied in this order:
 
 * parameters defined in `<invariants>` in `solrconfig.xml`.
-* parameters applied in _invariants_ in params.json and that is specified in the requesthandler definition or even in request
+* parameters defined in `_invariants_` in `params.json`, whether specified in the request handler definition or in the request itself
 * parameters defined in the request directly.
 * parameter sets defined in the request, in the order they have been listed with `useParams`.
 * parameter sets defined in `params.json` that have been defined in the request handler.
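
As a quick check of this ordering, a paramset can be exercised directly from the command line. A minimal sketch, assuming the `myFacets` and `myQueries` paramsets created above:

[source,bash]
----
# myQueries is applied on top of myFacets; rows=5 in the request itself
# takes precedence over anything either paramset defines for rows.
curl "http://localhost:8983/solr/techproducts/select?q=memory&useParams=myFacets,myQueries&rows=5"
----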

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
index d4306af..82b8534 100644
--- a/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requestdispatcher-in-solrconfig.adoc
@@ -2,16 +2,16 @@
 :page-shortname: requestdispatcher-in-solrconfig
 :page-permalink: requestdispatcher-in-solrconfig.html
 
-The `requestDispatcher` element of `solrconfig.xml` controls the way the Solr HTTP `RequestDispatcher` implementation responds to requests. Included are parameters for defining if it should handle `/select` urls (for Solr 1.1 compatibility), if it will support remote streaming, the maximum size of file uploads and how it will respond to HTTP cache headers in requests.
+The `requestDispatcher` element of `solrconfig.xml` controls the way the Solr HTTP `RequestDispatcher` implementation responds to requests.
+
+Included are parameters for defining if it should handle `/select` urls (for Solr 1.1 compatibility), if it will support remote streaming, the maximum size of file uploads and how it will respond to HTTP cache headers in requests.
 
 [[RequestDispatcherinSolrConfig-handleSelectElement]]
 == `handleSelect` Element
 
 [IMPORTANT]
 ====
-
 `handleSelect` is for legacy back-compatibility; those new to Solr do not need to change anything about the way this is configured by default.
-
 ====
 
 The first configurable item is the `handleSelect` attribute on the `<requestDispatcher>` element itself. This attribute can be set to one of two values, either "true" or "false". It governs how Solr responds to requests such as `/select?qt=XXX`. The default value "false" will ignore requests to `/select` if a requestHandler is not explicitly registered with the name `/select`. A value of "true" will route query requests to the parser defined with the `qt` value.
@@ -42,7 +42,7 @@ The attribute `addHttpRequestToContext` can be used to indicate that the origina
 
 [source,xml]
 ----
-<requestParsers enableRemoteStreaming="true" 
+<requestParsers enableRemoteStreaming="true"
                 multipartUploadLimitInKB="2048000"
                 formdataUploadLimitInKB="2048"
                 addHttpRequestToContext="false" />
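
When `enableRemoteStreaming` is `true` as above, the body of a request can be pulled from a URL or a local file via the `stream.url` or `stream.file` parameters. A minimal sketch (the file path is illustrative, and must be readable by the Solr server process):

[source,bash]
----
# Index documents from a local file instead of the request body.
curl "http://localhost:8983/solr/techproducts/update?stream.file=/tmp/docs.json&stream.contentType=application/json;charset=utf-8&commit=true"
----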

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
index 4e1ef81..f01ac7e 100644
--- a/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/requesthandlers-and-searchcomponents-in-solrconfig.adoc
@@ -15,12 +15,12 @@ These are often referred to as "requestHandler" and "searchComponent", which is
 
 Every request handler is defined with a name and a class. The name of the request handler is referenced with the request to Solr, typically as a path. For example, if Solr is installed at `http://localhost:8983/solr/` and you have a collection named "```gettingstarted```", you can make a request using URLs like this:
 
-[source,xml]
+[source,text]
 ----
 http://localhost:8983/solr/gettingstarted/select?q=solr
 ----
 
-This query will be processed by the request handler with the name "`/select`". We've only used the "q" parameter here, which includes our query term, a simple keyword of "solr". If the request handler has more parameters defined, those will be used with any query we send to this request handler unless they are over-ridden by the client (or user) in the query itself.
+This query will be processed by the request handler with the name `/select`. We've only used the "q" parameter here, which includes our query term, a simple keyword of "solr". If the request handler has more parameters defined, those will be used with any query we send to this request handler unless they are over-ridden by the client (or user) in the query itself.
 
 If you have another request handler defined, you would send your request with that name. For example, `/update` is a request handler that handles index updates (i.e., sending new documents to the index). By default, `/select` is a request handler that handles query requests.
 
@@ -47,7 +47,7 @@ For example, in the default `solrconfig.xml`, the first request handler defined
 
 This example sets the `rows` parameter, which determines how many search results to return, to "10". The `echoParams` parameter defines that the parameters defined in the query should be returned when debug information is returned. Note also that the way the defaults are defined in the list varies if the parameter is a string, an integer, or another type.
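
One easy way to confirm which handler-defined parameters were applied to a request is to ask for them back. A minimal sketch against the `gettingstarted` collection used above:

[source,bash]
----
# echoParams=all returns both the request's own parameters and the
# defaults/appends/invariants contributed by the request handler definition.
curl "http://localhost:8983/solr/gettingstarted/select?q=solr&echoParams=all&wt=json"
----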
 
-All of the parameters described in the section on <<searching.adoc#searching,searching>> can be defined as defaults for any of the SearchHandlers.
+All of the parameters described in the section <<searching.adoc#searching,Searching>> can be defined as defaults for any of the SearchHandlers.
 
 Besides `defaults`, there are other options for the SearchHandler, which are:
 
@@ -106,7 +106,7 @@ Search components define the logic that is used by the SearchHandler to perform
 
 There are several default search components that work with all SearchHandlers without any additional configuration. If no components are defined (with the exception of `first-components` and `last-components` - see below), these are executed by default, in the following order:
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Component Name |Class Name |More Information
 |query |solr.QueryComponent |Described in the section <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,Query Syntax and Parsing>>.
@@ -127,9 +127,7 @@ It's possible to define some components as being used before (with `first-compon
 
 [IMPORTANT]
 ====
-
 `first-components` and/or `last-components` may only be used in conjunction with the default components. If you define your own `components`, the default components will not be executed, and `first-components` and `last-components` are disallowed.
-
 ====
 
 [source,xml]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/response-writers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/response-writers.adoc b/solr/solr-ref-guide/src/response-writers.adoc
index 6607d6c..4a333b4 100644
--- a/solr/solr-ref-guide/src/response-writers.adoc
+++ b/solr/solr-ref-guide/src/response-writers.adoc
@@ -41,7 +41,7 @@ The `version` parameter determines the XML protocol used in the response. Client
 
 Currently supported version values are:
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |XML Version |Notes
 |2.2 |The format of the responseHeader changed to use the same `<lst>` structure as the rest of the response.
@@ -58,9 +58,7 @@ The default behavior is not to return any stylesheet declaration at all.
 
 [IMPORTANT]
 ====
-
 Use of the `stylesheet` parameter is discouraged, as there is currently no way to specify external stylesheets, and no stylesheets are provided in the Solr distributions. This is a legacy parameter, which may be developed further in a future release.
-
 ====
 
 [[ResponseWriters-TheindentParameter]]
@@ -108,7 +106,7 @@ A very commonly used Response Writer is the `JsonResponseWriter`, which formats
 
 Here is a sample response for a simple query like `q=id:VS1GB400C3&wt=json`:
 
-[source,xml]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -161,12 +159,12 @@ This parameter controls the output format of NamedLists, where order is more imp
 
 [cols=",,",options="header",]
 |===
-|json.nl Parameter setting |Example output forNamedList("a"=1, "bar"="foo", null=3, null=null) |Description
-|flat _(the default)_ |["a",1, "bar","foo", null,3, null,null] |NamedList is represented as a flat array, alternating names and values.
-|map |\{"a":1, "bar":"foo", "":3, "":null} |NamedList is represented as a JSON object. Although this is the simplest mapping, a NamedList can have optional keys, repeated keys, and preserves order. Using a JSON object (essentially a map or hash) for a NamedList results in the loss of some information.
-|arrarr |[["a",1], ["bar","foo"], [null,3], [null,null]] |NamedList is represented as an array of two element arrays.
-|arrmap |[\{"a":1}, \{"b":2}, 3, null] |NamedList is represented as an array of JSON objects.
-|arrntv |[\{"name":"a","type":"int","value":1}, \{"name":"bar","type":"str","value":"foo"}, \{"name":null,"type":"int","value":3}, \{"name":null,"type":"null","value":null}] |NamedList is represented as an array of Name Type Value JSON objects.
+|json.nl Parameter setting |Example output for `NamedList("a"=1, "bar"="foo", null=3, null=null)` |Description
+|flat _(the default)_ |`["a",1, "bar","foo", null,3, null,null]` |NamedList is represented as a flat array, alternating names and values.
+|map |`{"a":1, "bar":"foo", "":3, "":null}` |NamedList is represented as a JSON object. Although this is the simplest mapping, a NamedList can have optional keys, repeated keys, and preserves order. Using a JSON object (essentially a map or hash) for a NamedList results in the loss of some information.
+|arrarr |`[["a",1], ["bar","foo"], [null,3], [null,null]]` |NamedList is represented as an array of two element arrays.
+|arrmap |`[{"a":1}, {"b":2}, 3, null]` |NamedList is represented as an array of JSON objects.
+|arrntv |`[{"name":"a","type":"int","value":1}, {"name":"bar","type":"str","value":"foo"}, {"name":null,"type":"int","value":3}, {"name":null,"type":"null","value":null}]` |NamedList is represented as an array of Name Type Value JSON objects.
 |===
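
The effect of `json.nl` is easy to see against a facet response, since facet counts are returned as a NamedList. A minimal sketch using the `techproducts` example (swap `map` for any value in the table above to compare output shapes):

[source,bash]
----
# Compare the shape of facet_fields in the output as json.nl varies
# between flat (the default), map, arrarr, arrmap, and arrntv.
curl "http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.field=cat&wt=json&json.nl=map"
----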
 
 [[ResponseWriters-json.wrf]]
@@ -205,7 +203,7 @@ Solr has a PHP response format that outputs an array (as PHP code) which can be
 
 Example usage:
 
-[source,java]
+[source,php]
 ----
 $code = file_get_contents('http://localhost:8983/solr/techproducts/select?q=iPod&wt=php');
 eval("$result = " . $code . ";");
@@ -216,7 +214,7 @@ Solr also includes a PHP Serialized Response Writer that formats output in a ser
 
 Example usage:
 
-[source,java]
+[source,php]
 ----
 $serializedResult = file_get_contents('http://localhost:8983/solr/techproducts/select?q=iPod&wt=phps');
 $result = unserialize($serializedResult);
@@ -236,7 +234,7 @@ Solr has an optional Ruby response format that extends its JSON output in the fo
 
 Here is a simple example of how one may query Solr using the Ruby response format:
 
-[source,java]
+[source,ruby]
 ----
 require 'net/http'
 h = Net::HTTP.new('localhost', 8983)
 The CSV response writer supports multi-valued fields, as well as <<transforming-r
 
 These parameters specify the CSV format that will be returned. You can accept the default values or specify your own.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default Value
 |csv.encapsulator |"
@@ -275,7 +273,7 @@ These parameters specify the CSV format that will be returned. You can accept th
 
 These parameters specify how multi-valued fields are encoded. Per-field overrides for these values can be done using `f.<fieldname>.csv.separator=|`.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default Value
 |csv.mv.encapsulator |None
@@ -286,9 +284,9 @@ These parameters specify how multi-valued fields are encoded. Per-field override
 [[ResponseWriters-Example]]
 === Example
 
-`http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=csv` returns:
+`\http://localhost:8983/solr/techproducts/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=csv` returns:
 
-[source,java]
+[source,text]
 ----
 id,cat,name,popularity,price,score
 IW-02,"electronics,connector",iPod & iPod Mini USB 2.0 Cable,1,11.5,0.98867977
@@ -315,7 +313,7 @@ Use this to get the response as a spreadsheet in the .xlsx (Microsoft Excel) for
 
 This response writer has been added as part of the extraction library, and will only work if the extraction contrib is present in the server classpath. Defining the classpath with the `lib` directive is not sufficient. Instead, you will need to copy the necessary .jars to the Solr webapp's `lib` directory manually. You can run these commands from your `$SOLR_INSTALL` directory:
 
-[source,java]
+[source,text]
 ----
 cp contrib/extraction/lib/*.jar server/solr-webapp/webapp/WEB-INF/lib/
 cp dist/solr-cell-6.3.0.jar server/solr-webapp/webapp/WEB-INF/lib/

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/result-clustering.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-clustering.adoc b/solr/solr-ref-guide/src/result-clustering.adoc
index 01d64d0..d442961 100644
--- a/solr/solr-ref-guide/src/result-clustering.adoc
+++ b/solr/solr-ref-guide/src/result-clustering.adoc
@@ -2,14 +2,15 @@
 :page-shortname: result-clustering
 :page-permalink: result-clustering.html
 
-The *clustering* (or **cluster analysis**) plugin attempts to automatically discover groups of related search hits (documents) and assign human-readable labels to these groups. By default in Solr, the clustering algorithm is applied to the search result of each single query—this is called an _on-line_ clustering. While Solr contains an extension for full-index clustering (__off-line__ clustering) this section will focus on discussing on-line clustering only.
+The *clustering* (or *cluster analysis*) plugin attempts to automatically discover groups of related search hits (documents) and assign human-readable labels to these groups.
 
-Clusters discovered for a given query can be perceived as __dynamic facets__. This is beneficial when regular faceting is difficult (field values are not known in advance) or when the queries are exploratory in nature. Take a look at the http://search.carrot2.org/stable/search?query=solr&results=100&source=web&view=foamtree[Carrot2] project's demo page to see an example of search results clustering in action (the groups in the visualization have been discovered automatically in search results to the right, there is no external information involved).
+By default in Solr, the clustering algorithm is applied to the search result of each single query -- this is called an _on-line_ clustering. While Solr contains an extension for full-index clustering (_off-line_ clustering), this section will focus on on-line clustering only.
 
-image::images/result-clustering/carrot2.png[image,width=900]
+Clusters discovered for a given query can be perceived as _dynamic facets_. This is beneficial when regular faceting is difficult (field values are not known in advance) or when the queries are exploratory in nature. Take a look at the http://search.carrot2.org/stable/search?query=solr&results=100&source=web&view=foamtree[Carrot2] project's demo page to see an example of search results clustering in action (the groups in the visualization have been discovered automatically in search results to the right, there is no external information involved).
 
+image::images/result-clustering/carrot2.png[image,width=900]
 
-The query issued to the system was __Solr__. It seems clear that faceting could not yield a similar set of groups, although the goals of both techniques are similar—to let the user explore the set of search results and either rephrase the query or narrow the focus to a subset of current documents. Clustering is also similar to <<result-grouping.adoc#result-grouping,Result Grouping>> in that it can help to look deeper into search results, beyond the top few hits.
+The query issued to the system was _Solr_. It seems clear that faceting could not yield a similar set of groups, although the goals of both techniques are similar—to let the user explore the set of search results and either rephrase the query or narrow the focus to a subset of current documents. Clustering is also similar to <<result-grouping.adoc#result-grouping,Result Grouping>> in that it can help to look deeper into search results, beyond the top few hits.
 
 [[ResultClustering-PreliminaryConcepts]]
 == Preliminary Concepts
@@ -29,7 +30,7 @@ A *clustering algorithm* is the actual logic (implementation) that discovers rel
 [[ResultClustering-QuickStartExample]]
 == Quick Start Example
 
-The "```techproducts```" example included with Solr is pre-configured with all the necessary components for result clustering - but they are disabled by default.
+The "```techproducts```" example included with Solr is pre-configured with all the necessary components for result clustering -- but they are disabled by default.
 
 To enable the clustering component contrib and a dedicated search handler configured to use it, specify a JVM System Property when running the example:
 
@@ -40,7 +41,7 @@ bin/solr start -e techproducts -Dsolr.clustering.enabled=true
 
 You can now try out the clustering handler by opening the following URL in a browser:
 
-* `http://localhost:8983/solr/techproducts/clustering?q=*:*&rows=100`
+`\http://localhost:8983/solr/techproducts/clustering?q=*:*&rows=100`
 
 The output XML should include search hits and an array of automatically discovered clusters at the end, resembling the output shown here:
 
@@ -116,7 +117,7 @@ The output XML should include search hits and an array of automatically discover
 </response>
 ----
 
-There were a few clusters discovered for this query (`*:*`), separating search hits into various categories: DDR, iPod, Hard Drive, etc. Each cluster has a label and score that indicates the "goodness" of the cluster. The score is algorithm-specific and is meaningful only in relation to the scores of other clusters in the same set. In other words, if cluster _A_ has a higher score than cluster __B__, cluster _A_ should be of better quality (have a better label and/or more coherent document set). Each cluster has an array of identifiers of documents belonging to it. These identifiers correspond to the `uniqueKey` field declared in the schema.
+There were a few clusters discovered for this query (`\*:*`), separating search hits into various categories: DDR, iPod, Hard Drive, etc. Each cluster has a label and score that indicates the "goodness" of the cluster. The score is algorithm-specific and is meaningful only in relation to the scores of other clusters in the same set. In other words, if cluster _A_ has a higher score than cluster _B_, cluster _A_ should be of better quality (have a better label and/or more coherent document set). Each cluster has an array of identifiers of documents belonging to it. These identifiers correspond to the `uniqueKey` field declared in the schema.
 
 Depending on the quality of input documents, some clusters may not make much sense. Some documents may be left out and not be clustered at all; these will be assigned to the synthetic _Other Topics_ group, marked with the `other-topics` property set to `true` (see the XML dump above for an example). The score of the other topics group is zero.
 
@@ -135,15 +136,14 @@ Clustering extension is a search component and must be declared in `solrconfig.x
 
 An example configuration could look as shown below.
 
-________________________________________________________________________________________________________________________________________________________________________________________________________________
-1.  Include the required contrib JARs. Note that by default paths are relative to the Solr core so they may need adjustments to your configuration, or an explicit specification of the `$solr.install.dir`.
+. Include the required contrib JARs. Note that by default paths are relative to the Solr core so they may need adjustments to your configuration, or an explicit specification of the `$solr.install.dir`.
 +
 [source,xml]
 ----
 <lib dir="${solr.install.dir:../../..}/contrib/clustering/lib/" regex=".*\.jar" />
 <lib dir="${solr.install.dir:../../..}/dist/" regex="solr-clustering-\d.*\.jar" />
 ----
-2.  Declaration of the search component. Each component can also declare multiple clustering pipelines ("engines"), which can be selected at runtime by passing `clustering.engine=(engine name)` URL parameter.
+. Declaration of the search component. Each component can also declare multiple clustering pipelines ("engines"), which can be selected at runtime by passing the `clustering.engine=(engine name)` URL parameter.
 +
 [source,xml]
 ----
@@ -161,7 +161,7 @@ ________________________________________________________________________________
   </lst>
 </searchComponent>
 ----
-3.  A request handler to which we append the clustering component declared above.
+. A request handler to which we append the clustering component declared above.
 +
 [source,xml]
 ----
@@ -188,14 +188,14 @@ ________________________________________________________________________________
   </arr>
 </requestHandler>
 ----
-________________________________________________________________________________________________________________________________________________________________________________________________________________
+
 
 [[ResultClustering-ConfigurationParametersoftheClusteringComponent]]
 === Configuration Parameters of the Clustering Component
 
 The table below summarizes parameters of each clustering engine or the entire clustering component (depending where they are declared).
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |`clustering` |When `true`, clustering component is enabled.
@@ -206,7 +206,7 @@ The table below summarizes parameters of each clustering engine or the entire cl
 
 At the engine declaration level, the following parameters are supported.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |`carrot.algorithm` |The algorithm class.
@@ -235,7 +235,7 @@ The question of which algorithm to choose depends on the amount of traffic (STC
 
 The clustering engine can apply clustering to the full content of (stored) fields or it can run an internal highlighter pass to extract context-snippets before clustering. Highlighting is recommended when the logical snippet field contains a lot of content (this would affect clustering performance). Highlighting can also increase the quality of clustering because the content passed to the algorithm will be more focused around the query (it will be query-specific context). The following parameters control the internal highlighter.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |`carrot.produceSummary` |When `true` the clustering component will run a highlighter pass on the content of logical fields pointed to by `carrot.title` and `carrot.snippet`. Otherwise full content of those fields will be clustered.
@@ -248,7 +248,7 @@ The clustering engine can apply clustering to the full content of (stored) field
 
 As already mentioned in <<ResultClustering-PreliminaryConcepts,Preliminary Concepts>>, the clustering component clusters "documents" consisting of logical parts that need to be mapped onto physical schema of data stored in Solr. The field mapping attributes provide a connection between fields and logical document parts. Note that the content of title and snippet fields must be *stored* so that it can be retrieved at search time.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |`carrot.title` |The field (alternatively comma- or space-separated list of fields) that should be mapped to the logical document's title. The clustering algorithms typically give more weight to the content of the title field compared to the content (snippet). For best results, the field should contain concise, noise-free content. If there is no clear title in your data, you can leave this parameter blank.
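
These mapping parameters can be set in the engine declaration or supplied per request. A minimal sketch against the `techproducts` clustering handler enabled earlier, assuming the engine accepts these overrides at query time (`name` and `features` are stored fields in the techproducts schema):

[source,bash]
----
# Map the logical title and snippet onto stored techproducts fields and
# cluster query-specific snippets rather than full field content.
curl "http://localhost:8983/solr/techproducts/clustering?q=memory&rows=100&carrot.title=name&carrot.snippet=features&carrot.produceSummary=true"
----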
@@ -263,7 +263,7 @@ The field mapping specification can include a `carrot.lang` parameter, which def
 
 The language hint makes it easier for clustering algorithms to separate documents from different languages on input and to pick the right language resources for clustering. If you do have multi-lingual query results (or query results in a language different than English), it is strongly advised to map the language field appropriately.
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Description
 |`carrot.lang` |The field that stores ISO 639-1 code of the language of the document's text fields.
@@ -279,7 +279,6 @@ The algorithms that come with Solr are using their default settings which may be
 
 image::images/result-clustering/carrot2-workbench.png[image,scaledwidth=75.0%]
 
-
 [[ResultClustering-ProvidingDefaults]]
 === Providing Defaults
 
@@ -298,7 +297,7 @@ An example XML file changing the default language of documents to Polish is show
       </attribute>
     </value-set>
   </attribute-set>
-</attribute-sets> 
+</attribute-sets>
 ----
 
 [[ResultClustering-TweakingatQuery-Time]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/result-grouping.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/result-grouping.adoc b/solr/solr-ref-guide/src/result-grouping.adoc
index 81ea51a..3d448d6 100644
--- a/solr/solr-ref-guide/src/result-grouping.adoc
+++ b/solr/solr-ref-guide/src/result-grouping.adoc
@@ -2,21 +2,21 @@
 :page-shortname: result-grouping
 :page-permalink: result-grouping.html
 
-Result Grouping groups documents with a common field value into groups and returns the top documents for each group. For example, if you searched for "DVD" on an electronic retailer's e-commerce site, you might be returned three categories such as "TV and Video," "Movies," and "Computers," with three results per category. In this case, the query term "DVD" appeared in all three categories, so Solr groups them together in order to increase relevancy for the user.
+Result Grouping groups documents with a common field value into groups and returns the top documents for each group.
+
+For example, if you searched for "DVD" on an electronics retailer's e-commerce site, you might be returned three categories such as "TV and Video", "Movies", and "Computers" with three results per category. In this case, the query term "DVD" appeared in all three categories, so Solr groups them together in order to increase relevancy for the user.
 
 .Prefer Collapse & Expand instead
 [NOTE]
 ====
-
 Solr's <<collapse-and-expand-results.adoc#collapse-and-expand-results,Collapse and Expand>> feature is newer and mostly overlaps with Result Grouping. There are features unique to both, and they have different performance characteristics. That said, in most cases Collapse and Expand is preferable to Result Grouping.
-
 ====
 
 Result Grouping is separate from <<faceting.adoc#faceting,Faceting>>. Though it is conceptually similar, faceting returns all relevant results and allows the user to refine the results based on the facet category. For example, if you search for "shoes" on a footwear retailer's e-commerce site, Solr would return all results for that query term, along with selectable facets such as "size," "color," "brand," and so on.
 
 You can however combine grouping with faceting. Grouped faceting supports `facet.field` and `facet.range` but currently doesn't support date and pivot faceting. The facet counts are computed based on the first `group.field` parameter, and other `group.field` parameters are ignored.
 
-Grouped faceting differs from non grouped facets (sum of all facets) == (total of products with that property) as shown in the following example:
+Grouped faceting differs from non-grouped facets, where `(sum of all facets) == (total of products with that property)`, as shown in the following example:
 
 Object 1
 
@@ -45,7 +45,7 @@ Result Grouping takes the following request parameters. Any number of these requ
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Type |Description
 |group |Boolean |If true, query results will be grouped.
@@ -73,7 +73,7 @@ See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Resu
 |group.facet |Boolean a|
 Determines whether to compute grouped facets for the field facets specified in `facet.field` parameters. Grouped facets are computed based on the first specified group. As with normal field faceting, fields shouldn't be tokenized (otherwise counts are computed for each token). Grouped faceting supports single and multivalued fields. Default is false.
 
-**Warning**: There can be a heavy performance cost to this option.
+*Warning*: There can be a heavy performance cost to this option.
 
 See below for <<ResultGrouping-DistributedResultGroupingCaveats,Distributed Result Grouping Caveats>> when using sharded indexes.
 
@@ -92,12 +92,12 @@ All of the following sample queries work with Solr's "`bin/solr -e techproducts`
 
 In this example, we will group results based on the `manu_exact` field, which specifies the manufacturer of the items in the sample dataset.
 
-`http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name&q=solr+memory&group=true&group.field=manu_exact`
+`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name&q=solr+memory&group=true&group.field=manu_exact`
 
-[source,java]
+[source,json]
 ----
 {
-...
+"..."
 "grouped":{
   "manu_exact":{
     "matches":6,
@@ -136,19 +136,16 @@ In this example, we will group results based on the `manu_exact` field, which sp
               "id":"EN7800GTX/2DHTV/256M",
               "name":"ASUS Extreme N7800GTX/2DHTV (256 MB)"}]
         }
-      }
-    ]
-  }
-}
+      }]}}}
 ----
 
 The response indicates that there are six total matches for our query. For each of the five unique values of `group.field`, Solr returns a `docList` for that `groupValue` such that the `numFound` indicates the total number of documents in that group, and the top documents are returned according to the implicit default `group.limit=1` and `group.sort=score desc` parameters. The resulting groups are then sorted by the score of the top document within each group based on the implicit `sort=score desc`, and the number of groups returned is limited to the implicit `rows=10`.
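+
+As an aside, those implicit defaults could be overridden explicitly (the parameter values here are illustrative):
+
+`\http://localhost:8983/solr/techproducts/select?q=solr+memory&group=true&group.field=manu_exact&group.limit=2&group.sort=price+asc&rows=5`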
 
 We can run the same query with the request parameter `group.main=true`. This will format the results as a single flat document list. This flat format does not include as much information as the normal result grouping query results – notably the `numFound` in each group – but it may be easier for existing Solr clients to parse.
 
-`http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name,manufacturer&q=solr+memory&group=true&group.field=manu_exact&group.main=true`
+`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=id,name,manufacturer&q=solr+memory&group=true&group.field=manu_exact&group.main=true`
 
-[source,java]
+[source,json]
 ----
 {
   "responseHeader":{
@@ -188,9 +185,9 @@ We can run the same query with the request parameter `group.main=true`. This wil
 
 In this example, we will use the `group.query` parameter to find the top three results for "memory" in two different price ranges: 0.00 to 99.99, and over 100.
 
-http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=name,price&q=memory&group=true&group.query=price:%5B0+TO+99.99%5D&group.query=price:%5B100+TO+*%5D&group.limit=3[`http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=name,price&q=memory&group=true&group.query=price:[0+TO+99.99]&group.query=price:[100+TO+*]&group.limit=3`]
+`\http://localhost:8983/solr/techproducts/select?wt=json&indent=true&fl=name,price&q=memory&group=true&group.query=price:[0+TO+99.99]&group.query=price:[100+TO+*]&group.limit=3`
 
-[source,java]
+[source,json]
 ----
 {
   "responseHeader":{

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
index 2366143..9ff0999 100644
--- a/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
+++ b/solr/solr-ref-guide/src/rule-based-authorization-plugin.adoc
@@ -2,7 +2,9 @@
 :page-shortname: rule-based-authorization-plugin
 :page-permalink: rule-based-authorization-plugin.html
 
-Solr allows configuring roles to control user access to the system. This is accomplished through rule-based permission definitions which are assigned to users. The roles are fully customizable, and provide the ability to limit access to specific collections, request handlers, request parameters, and request methods.
+Solr allows configuring roles to control user access to the system.
+
+This is accomplished through rule-based permission definitions which are assigned to users. The roles are fully customizable, and provide the ability to limit access to specific collections, request handlers, request parameters, and request methods.
 
 The roles can be used with any of the authentication plugins or with a custom authentication plugin if you have created one. You will only need to ensure that you configure the role-to-user mappings with the proper user IDs that your authentication system provides.
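+
+As a sketch, a role-to-user mapping in `security.json` might look like this (the user names are illustrative):
+
+[source,json]
+----
+"authorization": {
+  "class": "solr.RuleBasedAuthorizationPlugin",
+  "user-role": {
+    "jane": "admin",
+    "john": ["dev", "guest"]
+  }
+}
+----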
 
@@ -58,14 +60,14 @@ There are several permissions that are pre-defined. These have fixed default val
 The pre-defined permissions are:
 
 * *security-edit*: this permission is allowed to edit the security configuration, meaning any update action that modifies `security.json` through the APIs will be allowed.
-* **security-read**: this permission is allowed to read the security configuration, meaning any action that reads `security.json` settings through the APIs will be allowed.
-* **schema-edit**: this permission is allowed to edit a collection's schema using the <<schema-api.adoc#schema-api,Schema API>>. Note that this allows schema edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created.
-* **schema-read**: this permission is allowed to read a collection's schema using the <<schema-api.adoc#schema-api,Schema API>>. Note that this allows schema read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created.
-* **config-edit**: this permission is allowed to edit a collection's configuration using the <<config-api.adoc#config-api,Config API>>, the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>>, and other APIs which modify `configoverlay.json`. Note that this allows configuration edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created.
+* *security-read*: this permission is allowed to read the security configuration, meaning any action that reads `security.json` settings through the APIs will be allowed.
+* *schema-edit*: this permission is allowed to edit a collection's schema using the <<schema-api.adoc#schema-api,Schema API>>. Note that this allows schema edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created.
+* *schema-read*: this permission is allowed to read a collection's schema using the <<schema-api.adoc#schema-api,Schema API>>. Note that this allows schema read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created.
+* *config-edit*: this permission is allowed to edit a collection's configuration using the <<config-api.adoc#config-api,Config API>>, the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>>, and other APIs which modify `configoverlay.json`. Note that this allows configuration edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created.
 * *core-admin-read*: Read operations on the Core Admin API.
-* **core-admin-edit**: Core admin commands that can mutate the system state.
-* **config-read**: this permission is allowed to read a collection's configuration using the <<config-api.adoc#config-api,Config API>>, the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>>, and other APIs which modify `configoverlay.json`. Note that this allows configuration read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created.
-* **collection-admin-edit**: this permission is allowed to edit a collection's configuration using the <<collections-api.adoc#collections-api,Collections API>>. Note that this allows configuration edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created. Specifically, the following actions of the Collections API would be allowed:
+* *core-admin-edit*: Core admin commands that can mutate the system state.
+* *config-read*: this permission is allowed to read a collection's configuration using the <<config-api.adoc#config-api,Config API>>, the <<request-parameters-api.adoc#request-parameters-api,Request Parameters API>>, and other APIs which modify `configoverlay.json`. Note that this allows configuration read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created.
+* *collection-admin-edit*: this permission is allowed to edit a collection's configuration using the <<collections-api.adoc#collections-api,Collections API>>. Note that this allows configuration edit permissions for _all_ collections. If edit permissions should only be applied to specific collections, a custom permission would need to be created. Specifically, the following actions of the Collections API would be allowed:
 ** CREATE
 ** RELOAD
 ** SPLITSHARD
@@ -84,14 +86,14 @@ The pre-defined permissions are:
 ** DELETEREPLICAPROP
 ** BALANCESHARDUNIQUE
 ** REBALANCELEADERS
-* **collection-admin-read**: this permission is allowed to read a collection's configuration using the <<collections-api.adoc#collections-api,Collections API>>. Note that this allows configuration read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created. Specifically, the following actions of the Collections API would be allowed:
+* *collection-admin-read*: this permission is allowed to read a collection's configuration using the <<collections-api.adoc#collections-api,Collections API>>. Note that this allows configuration read permissions for _all_ collections. If read permissions should only be applied to specific collections, a custom permission would need to be created. Specifically, the following actions of the Collections API would be allowed:
 ** LIST
 ** OVERSEERSTATUS
 ** CLUSTERSTATUS
 ** REQUESTSTATUS
-* **update**: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
-* **read**: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, `/clustering`, and `/sql`. This applies to all collections by default ( `collection:"*"` ).
-* **all**: Any requests coming to Solr.
+* *update*: this permission is allowed to perform any update action on any collection. This includes sending documents for indexing (using an <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-UpdateRequestHandlers,update request handler>>). This applies to all collections by default (`collection:"*"`).
+* *read*: this permission is allowed to perform any read action on any collection. This includes querying using search handlers (using <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,request handlers>>) such as `/select`, `/get`, `/browse`, `/tvrh`, `/terms`, `/clustering`, `/elevate`, `/export`, `/spell`, and `/sql`. This applies to all collections by default (`collection:"*"`).
+* *all*: Any requests coming to Solr.
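+
+For example, any of the pre-defined permissions above can be granted to a role in the `permissions` list of `security.json`; a minimal sketch (the role name is illustrative):
+
+[source,json]
+----
+"permissions": [
+  { "name": "security-edit", "role": "admin" }
+]
+----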
 
 [[Rule-BasedAuthorizationPlugin-AuthorizationAPI]]
 == Authorization API
@@ -116,7 +118,7 @@ Several properties can be used to define your custom permission.
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="50%,50%",options="header",]
+[width="100%",options="header",]
 |===
 |Property |Description
 |name |The name of the permission. This is required only if it is a predefined permission.
@@ -135,11 +137,11 @@ For example, this property could be used to limit the actions a role is allowed
 [source,json]
 ----
 "params": {
-   "action": [LIST, CLUSTERSTATUS]
+   "action": ["LIST", "CLUSTERSTATUS"]
 }
 ----
 
-The value of the parameter can be a simple string or it could be a regular expression. use the prefix `REGEX:` to use a regular expression match instead of a string identity match
+The value of the parameter can be a simple string or a regular expression. Use the prefix `REGEX:` to use a regular expression match instead of a string identity match.
 
 To match the commands LIST and CLUSTERSTATUS regardless of case, the above example would be written as follows:
 
@@ -158,23 +160,23 @@ The following creates a new permission named "collection-mgr" that is allowed to
 
 [source,bash]
 ----
-curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{ 
+curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-permission": {"collection": null,
                      "path":"/admin/collections",
                      "params":{"action":[LIST, CREATE]},
                      "before: 3,
                      "role": "admin"}
-}' http://localhost:8983/solr/admin/authorization 
+}' http://localhost:8983/solr/admin/authorization
 ----
 
-Apply an update permission on all collections to a role called '`dev`' and read permissions to a role called '`guest`':
+Apply an update permission on all collections to a role called `dev` and read permissions to a role called `guest`:
 
 [source,bash]
 ----
-curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{ 
+curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "set-permission": {"name": "update, "role":"dev"},
   "set-permission": {"name": "read, "role":"guest"},
-}' http://localhost:8983/solr/admin/authorization 
+}' http://localhost:8983/solr/admin/authorization
 ----
 
 [[Rule-BasedAuthorizationPlugin-UpdateorDeletePermissions]]
@@ -186,19 +188,19 @@ The following example updates the '`role`' attribute of permission at index '`3`
 
 [source,bash]
 ----
-curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{ 
+curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "update-permission": {"index": 3,
                        "role": ["admin", "dev"]}
-}' http://localhost:8983/solr/admin/authorization 
+}' http://localhost:8983/solr/admin/authorization
 ----
 
 The following example deletes permission at index '`3`':
 
 [source,bash]
 ----
-curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{ 
+curl --user solr:SolrRocks -H 'Content-type:application/json' -d '{
   "delete-permission": 3
-}' http://localhost:8983/solr/admin/authorization 
+}' http://localhost:8983/solr/admin/authorization
 ----
 
 [[Rule-BasedAuthorizationPlugin-MapRolestoUsers]]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
index eebf665..59e8300 100644
--- a/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
+++ b/solr/solr-ref-guide/src/rule-based-replica-placement.adoc
@@ -2,7 +2,9 @@
 :page-shortname: rule-based-replica-placement
 :page-permalink: rule-based-replica-placement.html
 
-When Solr needs to assign nodes to collections, it can either automatically assign them randomly or the user can specify a set of nodes where it should create the replicas. With very large clusters, it is hard to specify exact node names and it still does not give you fine grained control over how nodes are chosen for a shard. The user should be in complete control of where the nodes are allocated for each collection, shard and replica. This helps to optimally allocate hardware resources across the cluster.
+When Solr needs to assign nodes to collections, it can either automatically assign them randomly or the user can specify a set of nodes where it should create the replicas.
+
+With very large clusters, it is hard to specify exact node names and it still does not give you fine grained control over how nodes are chosen for a shard. The user should be in complete control of where the nodes are allocated for each collection, shard and replica. This helps to optimally allocate hardware resources across the cluster.
 
 Rule-based replica assignment allows the creation of rules to determine the placement of replicas in the cluster. In the future, this feature will help to automatically add or remove replicas when systems go down, or when higher throughput is required. This enables a more hands-off approach to administration of the cluster.
 
@@ -30,24 +32,21 @@ There are several situations where this functionality may be used. A few of the
 
 A rule is a set of conditions that a node must satisfy before a replica core can be created there.
 
-[[Rule-basedReplicaPlacement-RuleConditions.1]]
-=== Rule Conditions
-
 There are three possible conditions.
 
-* **shard**: this is the name of a shard or a wild card (* means for all shards). If shard is not specified, then the rule applies to the entire collection.
-* **replica**: this can be a number or a wild-card (* means any number zero to infinity).
-* **tag**: this is an attribute of a node in the cluster that can be used in a rule, e.g. “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Rule-basedReplicaPlacement-Snitches,Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
+* *shard*: this is the name of a shard or a wildcard (* means all shards). If shard is not specified, then the rule applies to the entire collection.
+* *replica*: this can be a number or a wildcard (* means any number from zero to infinity).
+* *tag*: this is an attribute of a node in the cluster that can be used in a rule, e.g., “freedisk”, “cores”, “rack”, “dc”, etc. The tag name can be a custom string. If creating a custom tag, a snitch is responsible for providing tags and values. The section <<Rule-basedReplicaPlacement-Snitches,Snitches>> below describes how to add a custom tag, and defines six pre-defined tags (cores, freedisk, host, port, node, and sysprop).
 
 [[Rule-basedReplicaPlacement-RuleOperators]]
 === Rule Operators
 
 A condition can have one of the following operators to set the parameters for the rule.
 
-* **equals (no operator required)**: tag:x means tag value must be equal to ‘x’
-* **greater than (>)**: tag:>x means tag value greater than ‘x’. x must be a number
-* **less than (<)**: tag:<x means tag value less than ‘x’. x must be a number
-* **not equal (!)**: tag:!x means tag value MUST NOT be equal to ‘x’. The equals check is performed on String value
+* *equals (no operator required)*: `tag:x` means the tag value must be equal to ‘x’.
+* *greater than (>)*: `tag:>x` means the tag value must be greater than ‘x’; x must be a number.
+* *less than (<)*: `tag:<x` means the tag value must be less than ‘x’; x must be a number.
+* *not equal (!)*: `tag:!x` means the tag value MUST NOT be equal to ‘x’; the equals check is performed on the String value.
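+
+Putting conditions and operators together, a complete rule might look like the following (the tag name and values are illustrative):
+
+[source,text]
+----
+shard:shard1,replica:<2,rack:730
+----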
 
 // OLD_CONFLUENCE_ID: Rule-basedReplicaPlacement-FuzzyOperator(~)
 
@@ -73,14 +72,14 @@ The same is applicable to shard splitting. Shard splitting is treated exactly th
 
 Tag values come from a plugin called Snitch. If there is a tag named ‘rack’ in a rule, there must be a Snitch which provides the value for ‘rack’ for each node in the cluster. A snitch implements the Snitch interface. Solr provides a default snitch which supplies the following tags:
 
-* **cores**: Number of cores in the node
-* **freedisk**: Disk space available in the node
-* **host**: host name of the node
-* **port**: port of the node
-* **node**: node name
+* *cores*: Number of cores in the node
+* *freedisk*: Disk space available in the node
+* *host*: host name of the node
+* *port*: port of the node
+* *node*: node name
 * *role*: The role of the node. The only supported role is 'overseer'.
-* **ip_1, ip_2, ip_3, ip_4**: These are ip fragments for each node. For example, in a host with ip `192.168.1.2`, `ip_1 = 2`, `ip_2 =1`, `ip_3 = 168` and` ip_4 = 192`
-* **sysprop.\{PROPERTY_NAME}**: These are values available from system properties. `sysprop.key` means a value that is passed to the node as `-Dkey=keyValue` during the node startup. It is possible to use rules like `sysprop.key:expectedVal,shard:*`
+* *ip_1, ip_2, ip_3, ip_4*: These are IP fragments for each node. For example, in a host with IP `192.168.1.2`, `ip_1 = 2`, `ip_2 = 1`, `ip_3 = 168` and `ip_4 = 192`.
+* *sysprop.{PROPERTY_NAME}*: These are values available from system properties. `sysprop.key` means a value that is passed to the node as `-Dkey=keyValue` during the node startup. It is possible to use rules like `sysprop.key:expectedVal,shard:*`
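+
+For instance, a rule built on one of these default tags might be (the threshold is illustrative):
+
+[source,text]
+----
+freedisk:>100
+----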
 
 [[Rule-basedReplicaPlacement-HowSnitchesareConfigured]]
 === How Snitches are Configured
@@ -94,11 +93,11 @@ snitch=class:fqn.ClassName,key1:val1,key2:val2,key3:val3
 
 *How Tag Values are Collected*
 
-1.  Identify the set of tags in the rules
-2.  Create instances of Snitches specified. The default snitch is always created.
-3.  Ask each Snitch if it can provide values for the any of the tags. If even one tag does not have a snitch, the assignment fails.
-4.  After identifying the Snitches, they provide the tag values for each node in the cluster.
-5.  If the value for a tag is not obtained for a given node, it cannot participate in the assignment.
+. Identify the set of tags specified in the rules.
+. Create instances of the specified Snitches. The default snitch is always created.
+. Ask each Snitch if it can provide values for any of the tags. If even one tag does not have a snitch, the assignment fails.
+. Once the Snitches are identified, they provide the tag values for each node in the cluster.
+. If the value for a tag is not obtained for a given node, that node cannot participate in the assignment.
 
 [[Rule-basedReplicaPlacement-Examples]]
 == Examples
@@ -110,7 +109,7 @@ snitch=class:fqn.ClassName,key1:val1,key2:val2,key3:val3
 
 For this rule, we define the `replica` condition with operators for "less than 2", and use a pre-defined tag named `node` to define nodes with any name.
 
-[source,bash]
+[source,text]
 ----
 replica:<2,node:*
 // this is equivalent to replica:<2,node:*,shard:**. We can omit shard:** because ** is the default value of shard

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
index a82aa99..0e2016a 100644
--- a/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
+++ b/solr/solr-ref-guide/src/running-solr-on-hdfs.adoc
@@ -2,7 +2,9 @@
 :page-shortname: running-solr-on-hdfs
 :page-permalink: running-solr-on-hdfs.html
 
-Solr has support for writing and reading its index and transaction log files to the HDFS distributed filesystem. This does not use Hadoop MapReduce to process Solr data, rather it only uses the HDFS filesystem for index and transaction log file storage. To use Hadoop MapReduce to process Solr data, see the MapReduceIndexerTool in the Solr contrib area.
+Solr has support for writing and reading its index and transaction log files to the HDFS distributed filesystem.
+
+This does not use Hadoop MapReduce to process Solr data; rather, it only uses the HDFS filesystem for index and transaction log file storage. To use Hadoop MapReduce to process Solr data, see the MapReduceIndexerTool in the Solr contrib area.
 
 To use HDFS rather than a local filesystem, you must be using Hadoop 2.x and you will need to instruct Solr to use the `HdfsDirectoryFactory`. There are also several additional parameters to define. These can be set in one of three ways:
 
@@ -18,18 +20,18 @@ To use HDFS rather than a local filesystem, you must be using Hadoop 2.x and you
 
 For standalone Solr instances, there are a few parameters you should be sure to modify before starting Solr. These can be set in `solrconfig.xml` (more on that <<RunningSolronHDFS-Settings,below>>), or passed to the `bin/solr` script at startup.
 
-* You need to use an HdfsDirectoryFactory and a data dir of the form `hdfs://host:port/path`
+* You need to use an `HdfsDirectoryFactory` and a data dir of the form `hdfs://host:port/path`
 * You need to specify an UpdateLog location of the form `hdfs://host:port/path`
 * You should specify a lock factory type of `hdfs` or none.
 
 If you do not modify `solrconfig.xml`, you can instead start Solr on HDFS with the following command:
 
-[source,java]
+[source,bash]
 ----
 bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.lock.type=hdfs
      -Dsolr.data.dir=hdfs://host:port/path
-     -Dsolr.updatelog=hdfs://host:port/path 
+     -Dsolr.updatelog=hdfs://host:port/path
 ----
 
 This example will start Solr in standalone mode, using the defined JVM properties (explained in more detail <<RunningSolronHDFS-Settings,below>>).
@@ -42,11 +44,11 @@ In SolrCloud mode, it's best to leave the data and update log directories as the
 * Set `solr.hdfs.home` in the form `hdfs://host:port/path`
 * You should specify a lock factory type of `hdfs` or none.
 
-[source,java]
+[source,bash]
 ----
 bin/solr start -c -Dsolr.directoryFactory=HdfsDirectoryFactory
      -Dsolr.lock.type=hdfs
-     -Dsolr.hdfs.home=hdfs://host:port/path 
+     -Dsolr.hdfs.home=hdfs://host:port/path
 ----
 
 This command starts Solr in SolrCloud mode, using the defined JVM properties.
@@ -60,7 +62,7 @@ The examples above assume you will pass JVM arguments as part of the start comma
 
 For example, to set JVM arguments to always use HDFS when running in SolrCloud mode (as shown above), you would add a section such as this:
 
-[source,java]
+[source,bash]
 ----
 # Set HDFS DirectoryFactory & Settings
 -Dsolr.directoryFactory=HdfsDirectoryFactory \
@@ -73,7 +75,7 @@ For example, to set JVM arguments to always use HDFS when running in SolrCloud m
 
 For performance, the `HdfsDirectoryFactory` uses a Directory implementation that will cache HDFS blocks. This caching mechanism replaces the standard file system cache that Solr otherwise relies on heavily. By default, this cache is allocated off heap. This cache will often need to be quite large, and you may need to raise the off-heap memory limit for the specific JVM in which you are running Solr. For the Oracle/OpenJDK JVMs, the following is an example command line parameter that you can use to raise the limit when starting Solr:
 
-[source,java]
+[source,bash]
 ----
 -XX:MaxDirectMemorySize=20g
 ----
@@ -86,7 +88,7 @@ The `HdfsDirectoryFactory` has a number of settings that are defined as part of
 [[RunningSolronHDFS-SolrHDFSSettings]]
 === Solr HDFS Settings
 
-[width="100%",cols="25%,25%,25%,25%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Example Value |Default |Description
 |`solr.hdfs.home` |`hdfs://host:port/path/solr` |N/A |A root location in HDFS for Solr to write collection data to. Rather than specifying an HDFS location for the data directory or update log directory, use this to specify one root location and have everything automatically created within this HDFS location.
@@ -97,25 +99,21 @@ The `HdfsDirectoryFactory` has a number of settings that are defined as part of
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default |Description
 |`solr.hdfs.blockcache.enabled` |true |Enable the blockcache
 |`solr.hdfs.blockcache.read.enabled` |true |Enable the read cache
 |`solr.hdfs.blockcache.direct.memory.allocation` |true |Enable direct memory allocation. If this is false, heap is used
 |`solr.hdfs.blockcache.slab.count` |1 |Number of memory slabs to allocate. Each slab is 128 MB in size.
-a|
-....
-solr.hdfs.blockcache.global
-....
-
- |true |Enable/Disable using one global cache for all SolrCores. The settings used will be from the first HdfsDirectoryFactory created.
+|`solr.hdfs.blockcache.global`
+|true |Enable/Disable using one global cache for all SolrCores. The settings used will be from the first HdfsDirectoryFactory created.
 |===
 
 [[RunningSolronHDFS-NRTCachingDirectorySettings]]
 === NRTCachingDirectory Settings
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default |Description
 |`solr.hdfs.nrtcachingdirectory.enable` |true |Enable the use of NRTCachingDirectory
@@ -128,7 +126,7 @@ solr.hdfs.blockcache.global
 
 The `solr.hdfs.confdir` parameter passes the location of HDFS client configuration files; this is needed for HDFS HA, for example.
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default |Description
 |`solr.hdfs.confdir` |N/A |Pass the location of HDFS client configuration files; this is needed for HDFS HA, for example.
@@ -141,7 +139,7 @@ Hadoop can be configured to use the Kerberos protocol to verify user identity wh
 
 // TODO: This table has cells that won't work with PDF: https://github.com/ctargett/refguide-asciidoc-poc/issues/13
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Parameter |Default |Description
 |`solr.hdfs.security.kerberos.enabled` |false |Set to true to enable Kerberos authentication
@@ -158,7 +156,7 @@ This file will need to be present on all Solr servers at the same path provided
 
 Here is a sample `solrconfig.xml` configuration for storing Solr indexes on HDFS:
 
-[source,java]
+[source,xml]
 ----
 <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
   <str name="solr.hdfs.home">hdfs://host:port/solr</str>
@@ -175,7 +173,7 @@ Here is a sample `solrconfig.xml` configuration for storing Solr indexes on HDFS
 
 If using Kerberos, you will need to add the three Kerberos-related properties to the `<directoryFactory>` element in `solrconfig.xml`, such as:
 
-[source,java]
+[source,xml]
 ----
 <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
    ...
@@ -192,7 +190,7 @@ One benefit to running Solr in HDFS is the ability to automatically add new repl
 
 Collections created using `autoAddReplicas=true` on a shared file system have automatic addition of replicas enabled. The following settings can be used to override the defaults in the `<solrcloud>` section of `solr.xml`.
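+
+An override of one of the settings listed below might look like this in `solr.xml`; a minimal sketch (the value shown is illustrative):
+
+[source,xml]
+----
+<solrcloud>
+  <int name="autoReplicaFailoverWorkLoopDelay">10000</int>
+</solrcloud>
+----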
 
-[width="100%",cols="34%,33%,33%",options="header",]
+[width="100%",options="header",]
 |===
 |Param |Default |Description
 |autoReplicaFailoverWorkLoopDelay |10000 |The time (in ms) between clusterstate inspections by the Overseer to detect and possibly act on creation of a replacement replica.
@@ -207,14 +205,14 @@ When doing offline maintenance on the cluster and for various other use cases wh
 
 Disable auto addition of replicas cluster wide by setting the cluster property `autoAddReplicas` to `false`:
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=autoAddReplicas&val=false
 ----
 
 Re-enable auto addition of replicas (for those collections created with `autoAddReplicas=true`) by unsetting the `autoAddReplicas` cluster property (when no `val` param is provided, the cluster property is unset):
 
-[source,java]
+[source,text]
 ----
 http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=autoAddReplicas
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/f7859d7f/solr/solr-ref-guide/src/running-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/running-solr.adoc b/solr/solr-ref-guide/src/running-solr.adoc
index aed0d12..9014cf7 100644
--- a/solr/solr-ref-guide/src/running-solr.adoc
+++ b/solr/solr-ref-guide/src/running-solr.adoc
@@ -103,9 +103,7 @@ Currently, the available examples you can run are: techproducts, dih, schemaless
 .Getting Started with SolrCloud
 [NOTE]
 ====
-
 Running the `cloud` example starts Solr in <<solrcloud.adoc#solrcloud,SolrCloud>> mode. For more information on starting Solr in cloud mode, see the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
-
 ====
 
 [[RunningSolr-CheckifSolrisRunning]]
@@ -122,13 +120,11 @@ This will search for running Solr instances on your computer and then gather bas
 
 That's it! Solr is running. If you need convincing, use a Web browser to see the Admin Console.
 
-` http://localhost:8983/solr/ `
+`\http://localhost:8983/solr/`
 
+.The Solr Admin interface.
 image::images/running-solr/SolrAdminDashboard.png[image,width=900,height=456]
 
-
-_The Solr Admin interface._
-
 If Solr is not running, your browser will complain that it cannot connect to the server. Check your port number and try again.
 
 [[RunningSolr-CreateaCore]]
@@ -199,7 +195,7 @@ Now that you have indexed documents, you can perform queries. The simplest way i
 
 For example, the following query searches all document fields for "video":
 
-`http://localhost:8983/solr/gettingstarted/select?q=video`
+`\http://localhost:8983/solr/gettingstarted/select?q=video`
 
 Notice how the URL includes the host name (`localhost`), the port number where the server is listening (`8983`), the application name (`solr`), the request handler for queries (`select`), and finally, the query itself (`q=video`).
 
@@ -207,18 +203,16 @@ The results are contained in an XML document, which you can examine directly by
 
 Just in case you are not running Solr as you read, the following screenshot shows the result of a query (the next example, actually) as viewed in Mozilla Firefox. The top-level response contains a `lst` named `responseHeader` and a result element named `response`. Inside `response`, you can see the three docs that represent the search results.
 
+.An XML response to a query.
 image::images/running-solr/solr34_responseHeader.png[image,width=600,height=634]
 
-
-_An XML response to a query._
-
 Once you have mastered the basic idea of a query, it is easy to add enhancements to explore the query syntax. The following query is the same as before, but the results only contain the ID, name, and price for each returned document. If you don't specify which fields you want, all of them are returned.
 
-`http://localhost:8983/solr/gettingstarted/select?q=video&fl=id,name,price`
+`\http://localhost:8983/solr/gettingstarted/select?q=video&fl=id,name,price`
 
 Here is another example which searches for "black" in the `name` field only. If you do not tell Solr which field to search, it will search the default fields, as specified in the schema.
 
-`http://localhost:8983/solr/gettingstarted/select?q=name:black`
+`\http://localhost:8983/solr/gettingstarted/select?q=name:black`
 
 You can provide ranges for fields. The following query finds every document whose price is between $0 and $400.
 
@@ -228,7 +222,7 @@ You can provide ranges for fields. The following query finds every document whos
 
 Faceting information is returned as a third part of Solr's query response. To get a taste of this power, take a look at the following query. It adds `facet=true` and `facet.field=cat`.
 
-`http://localhost:8983/solr/gettingstarted/select?q=price:[0%20TO%20400]&fl=id,name,price&facet=true&facet.field=cat`
+`\http://localhost:8983/solr/gettingstarted/select?q=price:[0%20TO%20400]&fl=id,name,price&facet=true&facet.field=cat`
 
 In addition to the familiar `responseHeader` and `response` from Solr, a `facet_counts` element is also present. Here is a view with the `responseHeader` and `response` collapsed so you can see the faceting information clearly.
 
@@ -276,4 +270,4 @@ In addition to the familiar `responseHeader` and response from Solr, a `facet_co
 
 The facet information shows how many of the query results have each possible value of the `cat` field. You could easily use this information to provide users with a quick way to narrow their query results. You can filter results by adding one or more filter queries to the Solr request. The following request constrains the results to documents with a category of "software".
 
-`http://localhost:8983/solr/gettingstarted/select?q=price:0%20TO%20400&fl=id,name,price&facet=true&facet.field=cat&fq=cat:software`
+`\http://localhost:8983/solr/gettingstarted/select?q=price:0%20TO%20400&fl=id,name,price&facet=true&facet.field=cat&fq=cat:software`


[3/3] lucene-solr:jira/solr-10290: SOLR-10296: add glossary attribute to CDCR Glossary

Posted by ct...@apache.org.
SOLR-10296: add glossary attribute to CDCR Glossary


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/ff9fdcf1
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/ff9fdcf1
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/ff9fdcf1

Branch: refs/heads/jira/solr-10290
Commit: ff9fdcf1fcfb68008064e902f7cca8c776516323
Parents: f7859d7
Author: Cassandra Targett <ct...@apache.org>
Authored: Sun May 7 18:50:42 2017 -0500
Committer: Cassandra Targett <ct...@apache.org>
Committed: Sun May 7 18:50:42 2017 -0500

----------------------------------------------------------------------
 .../cross-data-center-replication-cdcr-.adoc    | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ff9fdcf1/solr/solr-ref-guide/src/cross-data-center-replication-cdcr-.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr-.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr-.adoc
index 0b590a5..72af0ad 100644
--- a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr-.adoc
+++ b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr-.adoc
@@ -28,19 +28,21 @@ CDCR is configured to replicate from collections in the source cluster to collec
 
 CDCR can be configured to replicate from one collection to a second collection _within the same cluster_. That is a specialized scenario not covered in this document.
 
+[glossary]
 == CDCR Glossary
 
 Terms used in this document include:
 
-* *Node*: A JVM instance running Solr; a server.
-* *Cluster*: A set of Solr nodes managed as a single unit by a ZooKeeper ensemble, hosting one or more Collections.
-* *Data Center:* A group of networked servers hosting a Solr cluster. In this document, the terms _Cluster_ and _Data Center_ are interchangeable as we assume that each Solr cluster is hosted in a different group of networked servers.
-* *Shard*: A sub-index of a single logical collection. This may be spread across multiple nodes of the cluster. Each shard can have as many replicas as needed.
-* *Leader*: Each shard has one node identified as its leader. All the writes for documents belonging to a shard are routed through the leader.
-* *Replica*: A copy of a shard for use in failover or load balancing. Replicas comprising a shard can either be leaders or non-leaders.
-* *Follower:* A convenience term for a replica that is _not_ the leader of a shard.
-* *Collection*: Multiple documents that make up one logical index. A cluster can have multiple collections.
-* *Updates Log*: An append-only log of write operations maintained by each node.
+[glossary]
+Node:: A JVM instance running Solr; a server.
+Cluster:: A set of Solr nodes managed as a single unit by a ZooKeeper ensemble, hosting one or more Collections.
+Data Center:: A group of networked servers hosting a Solr cluster. In this document, the terms _Cluster_ and _Data Center_ are interchangeable as we assume that each Solr cluster is hosted in a different group of networked servers.
+Shard:: A sub-index of a single logical collection. This may be spread across multiple nodes of the cluster. Each shard can have as many replicas as needed.
+Leader:: Each shard has one node identified as its leader. All the writes for documents belonging to a shard are routed through the leader.
+Replica:: A copy of a shard for use in failover or load balancing. Replicas comprising a shard can either be leaders or non-leaders.
+Follower:: A convenience term for a replica that is _not_ the leader of a shard.
+Collection:: Multiple documents that make up one logical index. A cluster can have multiple collections.
+Updates Log:: An append-only log of write operations maintained by each node.
 
 == CDCR Architecture