Posted to commits@lucene.apache.org by is...@apache.org on 2017/07/29 21:59:40 UTC

[03/28] lucene-solr:jira/solr-6630: Merging master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
index 00b825a..24a7ac9 100644
--- a/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
+++ b/solr/solr-ref-guide/src/upgrading-a-solr-cluster.adoc
@@ -28,7 +28,6 @@ The steps outlined on this page assume you use the default service name of "```s
 
 ====
 
-[[UpgradingaSolrCluster-PlanningYourUpgrade]]
 == Planning Your Upgrade
 
 Here is a checklist of things you need to prepare before starting the upgrade process:
@@ -49,19 +48,16 @@ If you are upgrading from an installation of Solr 5.x or later, these values can
 
 You should now be ready to upgrade your cluster. Please verify this process in a test or staging cluster before doing it in production.
 
-[[UpgradingaSolrCluster-UpgradeProcess]]
 == Upgrade Process
 
 The approach we recommend is to perform the upgrade of each Solr node, one-by-one. In other words, you will need to stop a node, upgrade it to the new version of Solr, and restart it before moving on to the next node. This means that for a short period of time, there will be a mix of "Old Solr" and "New Solr" nodes running in your cluster. We also assume that you will point the new Solr node to your existing Solr home directory where the Lucene index files are managed for each collection on the node. This means that you won't need to move any index files around to perform the upgrade.
 
 
-[[UpgradingaSolrCluster-Step1_StopSolr]]
 === Step 1: Stop Solr
 
 Begin by stopping the Solr node you want to upgrade. After stopping the node, if using replication (i.e., collections with `replicationFactor` > 1), verify that all leaders hosted on the downed node have successfully migrated to other replicas; you can do this by visiting the <<cloud-screens.adoc#cloud-screens,Cloud panel in the Solr Admin UI>>. If not using replication, then any collections with shards hosted on the downed node will be temporarily offline.
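 
 Assuming the default service name of "solr" noted at the top of this page, stopping the node is, for example:
 
 [source,bash]
 ----
 sudo service solr stop
 ----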
 
 
-[[UpgradingaSolrCluster-Step2_InstallSolrasaService]]
 === Step 2: Install Solr as a Service
 
 Please follow the instructions to install Solr as a Service on Linux documented at <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>. Use the `-n` parameter to avoid automatic start of Solr by the installer script. You need to update the `/etc/default/solr.in.sh` include file in the next step to complete the upgrade process.
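 
 As a sketch, the installation command might look like the following (the archive name is an example; substitute the version you are installing):
 
 [source,bash]
 ----
 sudo bash ./install_solr_service.sh solr-6.6.0.tgz -n
 ----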
@@ -74,7 +70,6 @@ If you have a `/var/solr/solr.in.sh` file for your existing Solr install, runnin
 ====
 
 
-[[UpgradingaSolrCluster-Step3_SetEnvironmentVariableOverrides]]
 === Step 3: Set Environment Variable Overrides
 
 Open `/etc/default/solr.in.sh` with a text editor and verify that the following variables are set correctly, or add them at the bottom of the include file as needed:
@@ -84,13 +79,10 @@ Open `/etc/default/solr.in.sh` with a text editor and verify that the following
 Make sure the user that will own the Solr process is the owner of the `SOLR_HOME` directory. For instance, if you plan to run Solr as the "solr" user and `SOLR_HOME` is `/var/solr/data`, then you would do: `sudo chown -R solr: /var/solr/data`
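 
 As a sketch, the overrides discussed in this step might look like the following in `/etc/default/solr.in.sh` (the hostname, port, and ZooKeeper connection string are examples; adjust for your environment):
 
 [source,bash]
 ----
 SOLR_HOME=/var/solr/data
 SOLR_PORT=8983
 SOLR_HOST=solr1.example.com
 ZK_HOST=zk1:2181,zk2:2181,zk3:2181
 ----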
 
 
-[[UpgradingaSolrCluster-Step4_StartSolr]]
 === Step 4: Start Solr
 
 You are now ready to start the upgraded Solr node by doing: `sudo service solr start`. The upgraded instance will join the existing cluster because you're using the same `SOLR_HOME`, `SOLR_PORT`, and `SOLR_HOST` settings used by the old Solr node; thus, the new server will look like the old node to the running cluster. Be sure to look in `/var/solr/logs/solr.log` for errors during startup.
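 
 For example:
 
 [source,bash]
 ----
 sudo service solr start
 tail -f /var/solr/logs/solr.log
 ----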
 
-
-[[UpgradingaSolrCluster-Step5_RunHealthcheck]]
 === Step 5: Run Healthcheck
 
 You should run the Solr *healthcheck* command for all collections that are hosted on the upgraded node before proceeding to upgrade the next node in your cluster. For instance, if the newly upgraded node hosts a replica for the *MyDocuments* collection, then you can run the following command (replace ZK_HOST with the ZooKeeper connection string):
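 
 A sketch of the invocation (the collection name is an example):
 
 [source,bash]
 ----
 bin/solr healthcheck -c MyDocuments -z ZK_HOST
 ----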

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/upgrading-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/upgrading-solr.adoc b/solr/solr-ref-guide/src/upgrading-solr.adoc
index e41b93b..a1db074 100644
--- a/solr/solr-ref-guide/src/upgrading-solr.adoc
+++ b/solr/solr-ref-guide/src/upgrading-solr.adoc
@@ -20,7 +20,6 @@
 
 If you are already using Solr 6.5, Solr 6.6 should not present any major problems. However, you should review the {solr-javadocs}/changes/Changes.html[`CHANGES.txt`] file found in your Solr package for changes and updates that may affect your existing implementation. Detailed steps for upgrading a Solr cluster can be found in the appendix: <<upgrading-a-solr-cluster.adoc#upgrading-a-solr-cluster,Upgrading a Solr Cluster>>.
 
-[[UpgradingSolr-Upgradingfrom6.5.x]]
 == Upgrading from 6.5.x
 
 * Solr contribs map-reduce, morphlines-core and morphlines-cell have been removed.
@@ -29,7 +28,6 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 
 * ZooKeeper dependency has been upgraded from 3.4.6 to 3.4.10.
 
-[[UpgradingSolr-Upgradingfromearlier6.xversions]]
 == Upgrading from earlier 6.x versions
 
 * If you use historical dates, specifically on or before the year 1582, you should re-index after upgrading to this version.
@@ -47,12 +45,11 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 ** The metrics "avgRequestsPerMinute", "5minRateRequestsPerMinute" and "15minRateRequestsPerMinute" have been replaced by corresponding per-second rates viz. "avgRequestsPerSecond", "5minRateRequestsPerSecond" and "15minRateRequestsPerSecond" for consistency with stats output in other parts of Solr.
 * A new highlighter named UnifiedHighlighter has been added. You are encouraged to try out the UnifiedHighlighter by setting `hl.method=unified` and to report feedback. It might become the default in 7.0. It's more efficient and faster than the other highlighters, especially compared to the original Highlighter. That said, some options aren't supported yet. It will get more features in time, especially with your input. See HighlightParams.java for a listing of highlight parameters annotated with which highlighters use them. `hl.useFastVectorHighlighter` is now considered deprecated in favor of `hl.method=fastVector`.
 * The <<query-settings-in-solrconfig.adoc#query-settings-in-solrconfig,`maxWarmingSearchers` parameter>> now defaults to 1, and, more importantly, commits will now block if this limit is exceeded instead of throwing an exception (a good thing). Consequently, there is no longer a risk of overlapping commits. Nonetheless, users should continue to avoid excessive committing. Users are advised to remove any pre-existing `maxWarmingSearchers` entries from their `solrconfig.xml` files.
-* The <<other-parsers.adoc#OtherParsers-ComplexPhraseQueryParser,Complex Phrase query parser>> now supports leading wildcards. Beware of its possible heaviness, users are encouraged to use ReversedWildcardFilter in index time analysis.
+* The <<other-parsers.adoc#complex-phrase-query-parser,Complex Phrase query parser>> now supports leading wildcards. Beware of its possible heaviness; users are encouraged to use ReversedWildcardFilter in index-time analysis.
 * The JMX metric "avgTimePerRequest" (and the corresponding metric in the metrics API for each handler) used to be a simple non-decaying average based on total cumulative time and the number of requests. The new Codahale Metrics implementation applies exponential decay to this value, which heavily biases the average towards the last 5 minutes.
 * Index-time boosts are now deprecated. As a replacement, index-time scoring factors should be indexed in a separate field and combined with the query score using a function query. These boosts will be removed in Solr 7.0.
 * Parallel SQL now uses Apache Calcite as its SQL framework. As part of this change the default aggregation mode has been changed to facet rather than map_reduce. There have also been changes to the SQL aggregate response and some SQL syntax changes. Consult the <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>> documentation for full details.
 
-[[UpgradingSolr-Upgradingfrom5.5.x]]
 == Upgrading from 5.5.x
 
 * The deprecated `SolrServer` and subclasses have been removed, use <<using-solrj.adoc#using-solrj,`SolrClient`>> instead.
@@ -60,7 +57,7 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 * `SolrClient.shutdown()` has been removed, use {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/SolrClient.html[`SolrClient.close()`] instead.
 * The deprecated `zkCredientialsProvider` element in the `solrcloud` section of `solr.xml` is now removed. Use the correct spelling (<<zookeeper-access-control.adoc#zookeeper-access-control,`zkCredentialsProvider`>>) instead.
 * Internal/expert - `ResultContext` was significantly changed and expanded to allow for multiple full query results (`DocLists`) per Solr request. `TransformContext` was rendered redundant and was removed. See https://issues.apache.org/jira/browse/SOLR-7957[SOLR-7957] for details.
-* Several changes have been made regarding the "<<other-schema-elements.adoc#OtherSchemaElements-Similarity,`Similarity`>>" used in Solr, in order to provide better default behavior for new users. There are 3 key impacts of these changes on existing users who upgrade:
+* Several changes have been made regarding the "<<other-schema-elements.adoc#similarity,`Similarity`>>" used in Solr, in order to provide better default behavior for new users. There are 3 key impacts of these changes on existing users who upgrade:
 ** `DefaultSimilarityFactory` has been removed. If you currently have `DefaultSimilarityFactory` explicitly referenced in your `schema.xml`, edit your config to use the functionally identical `ClassicSimilarityFactory`. See https://issues.apache.org/jira/browse/SOLR-8239[SOLR-8239] for more details.
 ** The implicit default Similarity used when no `<similarity/>` is configured in `schema.xml` has been changed to `SchemaSimilarityFactory`. Users who wish to preserve back-compatible behavior should either explicitly configure `ClassicSimilarityFactory`, or ensure that the `luceneMatchVersion` for the collection is less than 6.0. See https://issues.apache.org/jira/browse/SOLR-8270[SOLR-8270] + https://issues.apache.org/jira/browse/SOLR-8271[SOLR-8271] for details.
 ** `SchemaSimilarityFactory` has been modified to use `BM25Similarity` as the default for `fieldTypes` that do not explicitly declare a Similarity. The legacy behavior of using `ClassicSimilarity` as the default will occur if the `luceneMatchVersion` for the collection is less than 6.0, or the `defaultSimFromFieldType` configuration option may be used to specify any default of your choosing. See https://issues.apache.org/jira/browse/SOLR-8261[SOLR-8261] + https://issues.apache.org/jira/browse/SOLR-8329[SOLR-8329] for more details.
@@ -74,7 +71,6 @@ If you are already using Solr 6.5, Solr 6.6 should not present any major problem
 * <<using-solrj.adoc#using-solrj,SolrJ>> no longer includes `DateUtil`. If for some reason you need to format or parse dates, simply use `Instant.format()` and `Instant.parse()`.
 * If you are using spatial4j, please upgrade to 0.6 and <<spatial-search.adoc#spatial-search,edit your `spatialContextFactory`>> to replace `com.spatial4j.core` with `org.locationtech.spatial4j`.
 
-[[UpgradingSolr-UpgradingfromOlderVersionsofSolr]]
 == Upgrading from Older Versions of Solr
 
 Users upgrading from older versions are strongly encouraged to consult {solr-javadocs}/changes/Changes.html[`CHANGES.txt`] for the details of _all_ changes since the version they are upgrading from.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
index 6a8ad99..ff59d61 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
@@ -25,7 +25,6 @@ The recommended way to configure and use request handlers is with path based nam
 
 A single unified update request handler supports XML, CSV, JSON, and javabin update requests, delegating to the appropriate `ContentStreamLoader` based on the `Content-Type` of the <<content-streams.adoc#content-streams,ContentStream>>.
 
-[[UploadingDatawithIndexHandlers-UpdateRequestHandlerConfiguration]]
 == UpdateRequestHandler Configuration
 
 The default configuration file has the update request handler configured by default.
@@ -35,12 +34,10 @@ The default configuration file has the update request handler configured by defa
 <requestHandler name="/update" class="solr.UpdateRequestHandler" />
 ----
 
-[[UploadingDatawithIndexHandlers-XMLFormattedIndexUpdates]]
 == XML Formatted Index Updates
 
 Index update commands can be sent as XML messages to the update handler using `Content-type: application/xml` or `Content-type: text/xml`.
 
-[[UploadingDatawithIndexHandlers-AddingDocuments]]
 === Adding Documents
 
 The XML schema recognized by the update handler for adding documents is very straightforward:
@@ -84,11 +81,9 @@ If the document schema defines a unique key, then by default an `/update` operat
 
 If you have a unique key field, but you feel confident that you can safely bypass the uniqueness check (e.g., you build your indexes in batch, and your indexing code guarantees it never adds the same document more than once) you can specify the `overwrite="false"` option when adding your documents.
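 
 For illustration, an add message that bypasses the uniqueness check might look like this (the document ID is hypothetical):
 
 [source,bash]
 ----
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" \
   --data-binary '<add overwrite="false"><doc><field name="id">doc42</field></doc></add>'
 ----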
 
-[[UploadingDatawithIndexHandlers-XMLUpdateCommands]]
 === XML Update Commands
 
-[[UploadingDatawithIndexHandlers-CommitandOptimizeOperations]]
-==== Commit and Optimize Operations
+==== Commit and Optimize During Updates
 
 The `<commit>` operation writes all documents loaded since the last commit to one or more segment files on the disk. Before a commit has been issued, newly indexed content is not visible to searches. The commit operation opens a new searcher, and triggers any event listeners that have been configured.
 
@@ -114,7 +109,6 @@ Here are examples of <commit> and <optimize> using optional attributes:
 <optimize waitSearcher="false"/>
 ----
 
-[[UploadingDatawithIndexHandlers-DeleteOperations]]
 ==== Delete Operations
 
 Documents can be deleted from the index in two ways. "Delete by ID" deletes the document with the specified ID, and can be used only if a UniqueID field has been defined in the schema. "Delete by Query" deletes all documents matching a specified query, although `commitWithin` is ignored for a Delete by Query. A single delete message can contain multiple delete operations.
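 
 As an illustrative sketch (the ID and query are hypothetical), a single message combining both delete forms could be posted with curl:
 
 [source,bash]
 ----
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" \
   --data-binary '<delete><id>0002</id><query>category:discontinued</query></delete>'
 ----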
@@ -136,12 +130,10 @@ When using the Join query parser in a Delete By Query, you should use the `score
 
 ====
 
-[[UploadingDatawithIndexHandlers-RollbackOperations]]
 ==== Rollback Operations
 
 The rollback command rolls back all adds and deletes made to the index since the last commit. It neither calls any event listeners nor creates a new searcher. Its syntax is simple: `<rollback/>`.
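 
 For example, using curl:
 
 [source,bash]
 ----
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" --data-binary "<rollback/>"
 ----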
 
-[[UploadingDatawithIndexHandlers-UsingcurltoPerformUpdates]]
 === Using curl to Perform Updates
 
 You can use the `curl` utility to perform any of the above commands, using its `--data-binary` option to append the XML message to the `curl` command, generating an HTTP POST request. For example:
@@ -168,7 +160,7 @@ For posting XML messages contained in a file, you can use the alternative form:
 curl http://localhost:8983/solr/my_collection/update -H "Content-Type: text/xml" --data-binary @myfile.xml
 ----
 
-Short requests can also be sent using a HTTP GET command, if enabled in <<requestdispatcher-in-solrconfig.adoc#RequestDispatcherinSolrConfig-requestParsersElement,RequestDispatcher in SolrConfig>> element, URL-encoding the request, as in the following. Note the escaping of "<" and ">":
+Short requests can also be sent using an HTTP GET command, if enabled in the <<requestdispatcher-in-solrconfig.adoc#requestparsers-element,RequestDispatcher in SolrConfig>> element, by URL-encoding the request, as in the following. Note the escaping of "<" and ">":
 
 [source,bash]
 ----
@@ -189,7 +181,6 @@ Responses from Solr take the form shown here:
 
 The status field will be non-zero in case of failure.
 
-[[UploadingDatawithIndexHandlers-UsingXSLTtoTransformXMLIndexUpdates]]
 === Using XSLT to Transform XML Index Updates
 
 The UpdateRequestHandler allows you to index any arbitrary XML using the `<tr>` parameter to apply an https://en.wikipedia.org/wiki/XSLT[XSL transformation]. You must have an XSLT stylesheet in the `conf/xslt` directory of your <<config-sets.adoc#config-sets,config set>> that can transform the incoming data to the expected `<add><doc/></add>` format, and use the `tr` parameter to specify the name of that stylesheet.
@@ -250,23 +241,20 @@ You can also use the stylesheet in `XsltUpdateRequestHandler` to transform an in
 curl "http://localhost:8983/solr/my_collection/update?commit=true&tr=updateXml.xsl" -H "Content-Type: text/xml" --data-binary @myexporteddata.xml
 ----
 
-[[UploadingDatawithIndexHandlers-JSONFormattedIndexUpdates]]
 == JSON Formatted Index Updates
 
 Solr can accept JSON that conforms to a defined structure, or can accept arbitrary JSON-formatted documents. If sending arbitrarily formatted JSON, there are some additional parameters that need to be sent with the update request, described below in the section <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>.
 
-[[UploadingDatawithIndexHandlers-Solr-StyleJSON]]
 === Solr-Style JSON
 
 JSON formatted update requests may be sent to Solr's `/update` handler using `Content-Type: application/json` or `Content-Type: text/json`.
 
 JSON formatted updates can take 3 basic forms, described in depth below:
 
-* <<UploadingDatawithIndexHandlers-AddingaSingleJSONDocument,A single document to add>>, expressed as a top level JSON Object. To differentiate this from a set of commands, the `json.command=false` request parameter is required.
-* <<UploadingDatawithIndexHandlers-AddingMultipleJSONDocuments,A list of documents to add>>, expressed as a top level JSON Array containing a JSON Object per document.
-* <<UploadingDatawithIndexHandlers-SendingJSONUpdateCommands,A sequence of update commands>>, expressed as a top level JSON Object (aka: Map).
+* <<Adding a Single JSON Document,A single document to add>>, expressed as a top level JSON Object. To differentiate this from a set of commands, the `json.command=false` request parameter is required.
+* <<Adding Multiple JSON Documents,A list of documents to add>>, expressed as a top level JSON Array containing a JSON Object per document.
+* <<Sending JSON Update Commands,A sequence of update commands>>, expressed as a top level JSON Object (aka: Map).
 
-[[UploadingDatawithIndexHandlers-AddingaSingleJSONDocument]]
 ==== Adding a Single JSON Document
 
 The simplest way to add Documents via JSON is to send each document individually as a JSON Object, using the `/update/json/docs` path:
@@ -280,7 +268,6 @@ curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/my_
 }'
 ----
 
-[[UploadingDatawithIndexHandlers-AddingMultipleJSONDocuments]]
 ==== Adding Multiple JSON Documents
 
 Adding multiple documents at one time via JSON can be done via a JSON Array of JSON Objects, where each object represents a document:
@@ -307,7 +294,6 @@ A sample JSON file is provided at `example/exampledocs/books.json` and contains
 curl 'http://localhost:8983/solr/techproducts/update?commit=true' --data-binary @example/exampledocs/books.json -H 'Content-type:application/json'
 ----
 
-[[UploadingDatawithIndexHandlers-SendingJSONUpdateCommands]]
 ==== Sending JSON Update Commands
 
 In general, the JSON update syntax supports all of the update commands that the XML update handler supports, through a straightforward mapping. Multiple commands, adding and deleting documents, may be contained in one message:
@@ -377,7 +363,6 @@ You can also specify `\_version_` with each "delete":
 
 You can specify the version of deletes in the body of the update request as well.
 
-[[UploadingDatawithIndexHandlers-JSONUpdateConveniencePaths]]
 === JSON Update Convenience Paths
 
 In addition to the `/update` handler, there are a few additional JSON-specific request handler paths available by default in Solr that implicitly override the behavior of some request parameters:
@@ -395,13 +380,11 @@ In addition to the `/update` handler, there are a few additional JSON specific r
 
 The `/update/json` path may be useful for clients sending in JSON formatted update commands from applications where setting the Content-Type proves difficult, while the `/update/json/docs` path can be particularly convenient for clients that always want to send in documents – either individually or as a list – without needing to worry about the full JSON command syntax.
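 
 As a sketch, a command body sent to the `/update/json` path is parsed as JSON even without an explicit Content-Type header (the document fields are hypothetical):
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/my_collection/update/json?commit=true' \
   --data-binary '{"add": {"doc": {"id": "doc7", "title_s": "A JSON update"}}}'
 ----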
 
-[[UploadingDatawithIndexHandlers-CustomJSONDocuments]]
 === Custom JSON Documents
 
 Solr can support custom JSON. This is covered in the section <<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>.
 
 
-[[UploadingDatawithIndexHandlers-CSVFormattedIndexUpdates]]
 == CSV Formatted Index Updates
 
 CSV formatted update requests may be sent to Solr's `/update` handler using `Content-Type: application/csv` or `Content-Type: text/csv`.
@@ -413,7 +396,6 @@ A sample CSV file is provided at `example/exampledocs/books.csv` that you can us
 curl 'http://localhost:8983/solr/my_collection/update?commit=true' --data-binary @example/exampledocs/books.csv -H 'Content-type:application/csv'
 ----
 
-[[UploadingDatawithIndexHandlers-CSVUpdateParameters]]
 === CSV Update Parameters
 
 The CSV handler allows the specification of many parameters in the URL in the form: `f._parameter_._optional_fieldname_=_value_`.
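 
 For instance, assuming a hypothetical multivalued `tags` field in the CSV, per-field `split` and `separator` parameters could be supplied like this:
 
 [source,bash]
 ----
 curl 'http://localhost:8983/solr/my_collection/update?commit=true&f.tags.split=true&f.tags.separator=%7C' \
   --data-binary @/tmp/data.csv -H 'Content-type:application/csv'
 ----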
@@ -498,7 +480,6 @@ Add the given offset (as an integer) to the `rowid` before adding it to the docu
 +
 Example: `rowidOffset=10`
 
-[[UploadingDatawithIndexHandlers-IndexingTab-Delimitedfiles]]
 === Indexing Tab-Delimited files
 
 The same feature used to index CSV documents can also be easily used to index tab-delimited files (TSV files) and even handle backslash escaping rather than CSV encapsulation.
@@ -517,7 +498,6 @@ This file could then be imported into Solr by setting the `separator` to tab (%0
 curl 'http://localhost:8983/solr/my_collection/update/csv?commit=true&separator=%09&escape=%5c' --data-binary @/tmp/result.txt
 ----
 
-[[UploadingDatawithIndexHandlers-CSVUpdateConveniencePaths]]
 === CSV Update Convenience Paths
 
 In addition to the `/update` handler, there is an additional CSV-specific request handler path available by default in Solr that implicitly overrides the behavior of some request parameters:
@@ -530,16 +510,14 @@ In addition to the `/update` handler, there is an additional CSV specific reques
 
 The `/update/csv` path may be useful for clients sending in CSV formatted update commands from applications where setting the Content-Type proves difficult.
 
-[[UploadingDatawithIndexHandlers-NestedChildDocuments]]
 == Nested Child Documents
 
-Solr indexes nested documents in blocks as a way to model documents containing other documents, such as a blog post parent document and comments as child documents -- or products as parent documents and sizes, colors, or other variations as child documents. At query time, the <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,Block Join Query Parsers>> can search these relationships. In terms of performance, indexing the relationships between documents may be more efficient than attempting to do joins only at query time, since the relationships are already stored in the index and do not need to be computed.
+Solr indexes nested documents in blocks as a way to model documents containing other documents, such as a blog post parent document and comments as child documents -- or products as parent documents and sizes, colors, or other variations as child documents. At query time, the <<other-parsers.adoc#block-join-query-parsers,Block Join Query Parsers>> can search these relationships. In terms of performance, indexing the relationships between documents may be more efficient than attempting to do joins only at query time, since the relationships are already stored in the index and do not need to be computed.
 
-Nested documents may be indexed via either the XML or JSON data syntax (or using <<using-solrj.adoc#using-solrj,SolrJ)>> - but regardless of syntax, you must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#OtherParsers-BlockJoinQueryParsers,block join query parsers>>.
+Nested documents may be indexed via either the XML or JSON data syntax (or using <<using-solrj.adoc#using-solrj,SolrJ>>) - but regardless of syntax, you must include a field that identifies the parent document as a parent; it can be any field that suits this purpose, and it will be used as input for the <<other-parsers.adoc#block-join-query-parsers,block join query parsers>>.
 
 To support nested documents, the schema must include an indexed/non-stored field `\_root_`. The value of that field is populated automatically and is the same for all documents in the block, regardless of the inheritance depth.
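 
 In `schema.xml` terms, that declaration typically looks like the following sketch:
 
 [source,xml]
 ----
 <field name="_root_" type="string" indexed="true" stored="false"/>
 ----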
 
-[[UploadingDatawithIndexHandlers-XMLExamples]]
 === XML Examples
 
 For example, here are two documents and their child documents:
@@ -570,7 +548,6 @@ For example, here are two documents and their child documents:
 
 In this example, we have indexed the parent documents with the field `content_type`, which has the value "parentDocument". We could have also used a boolean field, such as `isParent`, with a value of "true", or any other similar approach.
 
-[[UploadingDatawithIndexHandlers-JSONExamples]]
 === JSON Examples
 
 This example is equivalent to the XML example above; note the special `\_childDocuments_` key needed to indicate the nested documents in JSON.
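 
 A sketch of the shape (field names and values are illustrative):
 
 [source,bash]
 ----
 curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/my_collection/update' --data-binary '[
   {
     "id": "1",
     "title": "Solr adds block join support",
     "content_type": "parentDocument",
     "_childDocuments_": [
       { "id": "2", "comments": "SolrCloud supports it too!" }
     ]
   }
 ]'
 ----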

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
index cdd9539..1489d16 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
@@ -26,8 +26,7 @@ If you want to supply your own `ContentHandler` for Solr to use, you can extend
 
 For more information on Solr's Extracting Request Handler, see https://wiki.apache.org/solr/ExtractingRequestHandler.
 
-[[UploadingDatawithSolrCellusingApacheTika-KeyConcepts]]
-== Key Concepts
+== Key Solr Cell Concepts
 
 When using the Solr Cell framework, it is helpful to keep the following in mind:
 
@@ -42,12 +41,9 @@ When using the Solr Cell framework, it is helpful to keep the following in mind:
 
 [TIP]
 ====
-
 While Apache Tika is quite powerful, it is not perfect and fails on some files. PDF files are particularly problematic, mostly due to the PDF format itself. In case of a failure processing any file, the `ExtractingRequestHandler` does not have a secondary mechanism to try to extract some text from the file; it will throw an exception and fail.
-
 ====
 
-[[UploadingDatawithSolrCellusingApacheTika-TryingoutTikawiththeSolrtechproductsExample]]
 == Trying out Tika with the Solr techproducts Example
 
 You can try out the Tika framework using the `techproducts` example included in Solr.
@@ -96,8 +92,7 @@ In this command, the `uprefix=attr_` parameter causes all generated fields that
 
 This command allows you to query the document using an attribute, as in: `\http://localhost:8983/solr/techproducts/select?q=attr_meta:microsoft`.
 
-[[UploadingDatawithSolrCellusingApacheTika-InputParameters]]
-== Input Parameters
+== Solr Cell Input Parameters
 
 The table below describes the parameters accepted by the Extracting Request Handler.
 
@@ -158,8 +153,6 @@ Prefixes all fields that are not defined in the schema with the given prefix. Th
 `xpath`::
 When extracting, only return Tika XHTML content that satisfies the given XPath expression. See http://tika.apache.org/1.7/index.html for details on the format of Tika XHTML. See also http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
 
-
-[[UploadingDatawithSolrCellusingApacheTika-OrderofOperations]]
 == Order of Operations
 
 Here is the order in which the Solr Cell framework, using the Extracting Request Handler and Tika, processes its input.
@@ -169,7 +162,6 @@ Here is the order in which the Solr Cell framework, using the Extracting Request
 .  Tika applies the mapping rules specified by `fmap.__source__=__target__` parameters.
 .  If `uprefix` is specified, any unknown field names are prefixed with that value, else if `defaultField` is specified, any unknown fields are copied to the default field.
 
-[[UploadingDatawithSolrCellusingApacheTika-ConfiguringtheSolrExtractingRequestHandler]]
 == Configuring the Solr ExtractingRequestHandler
 
 If you are not working with the supplied `sample_techproducts_configs` or `_default` <<config-sets.adoc#config-sets,config set>>, you must configure your own `solrconfig.xml` to know about the JARs containing the `ExtractingRequestHandler` and its dependencies:
@@ -216,7 +208,6 @@ The `tika.config` entry points to a file containing a Tika configuration. The `d
 * `EEEE, dd-MMM-yy HH:mm:ss zzz`
 * `EEE MMM d HH:mm:ss yyyy`
 
-[[UploadingDatawithSolrCellusingApacheTika-Parserspecificproperties]]
 === Parser-Specific Properties
 
 Parsers used by Tika may have specific properties to govern how data is extracted. For instance, when using the Tika library from a Java program, the `PDFParserConfig` class has a method `setSortByPosition(boolean)` that can extract vertically oriented text. To access that method via configuration with the `ExtractingRequestHandler`, one can add the `parseContext.config` property to the `solrconfig.xml` file (see above) and then set properties in Tika's `PDFParserConfig` as below. Consult the Tika Java API documentation for configuration parameters that can be set for any particular parsers that require this level of control.
@@ -232,14 +223,12 @@ Parsers used by Tika may have specific properties to govern how data is extracte
 </entries>
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-Multi-CoreConfiguration]]
 === Multi-Core Configuration
 
 For a multi-core configuration, you can specify `sharedLib='lib'` in the `<solr/>` section of `solr.xml` and place the necessary jar files there.
 
 For more information about Solr cores, see <<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>.
 
-[[UploadingDatawithSolrCellusingApacheTika-IndexingEncryptedDocumentswiththeExtractingUpdateRequestHandler]]
 == Indexing Encrypted Documents with the ExtractingUpdateRequestHandler
 
 The ExtractingRequestHandler will decrypt encrypted files and index their content if you supply a password in either `resource.password` on the request, or in a `passwordsFile` file.
@@ -254,11 +243,9 @@ myFileName = myPassword
 .*\.pdf$ = myPdfPassword
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-Examples]]
-== Examples
+== Solr Cell Examples
 
-[[UploadingDatawithSolrCellusingApacheTika-Metadata]]
-=== Metadata
+=== Metadata Created by Tika
 
 As mentioned before, Tika produces metadata about the document. Metadata describes different aspects of a document, such as the author's name, the number of pages, the file size, and so on. The metadata produced depends on the type of document submitted. For instance, PDFs have different metadata than Word documents do.
 
@@ -277,17 +264,10 @@ The size of the stream in bytes.
 The content type of the stream, if available.
 
 
-[IMPORTANT]
-====
-
-We recommend that you try using the `extractOnly` option to discover which values Solr is setting for these metadata elements.
-
-====
+IMPORTANT: We recommend that you try using the `extractOnly` option to discover which values Solr is setting for these metadata elements.
 
-[[UploadingDatawithSolrCellusingApacheTika-ExamplesofUploadsUsingtheExtractingRequestHandler]]
 === Examples of Uploads Using the Extracting Request Handler
 
-[[UploadingDatawithSolrCellusingApacheTika-CaptureandMapping]]
 ==== Capture and Mapping
 
 The command below captures `<div>` tags separately, and then maps all the instances of that field to a dynamic field named `foo_t`.
@@ -297,18 +277,6 @@ The command below captures `<div>` tags separately, and then maps all the instan
 bin/post -c techproducts example/exampledocs/sample.html -params "literal.id=doc2&captureAttr=true&defaultField=_text_&fmap.div=foo_t&capture=div"
 ----
 
-
-[[UploadingDatawithSolrCellusingApacheTika-Capture_Mapping]]
-==== Capture & Mapping
-
-The command below captures `<div>` tags separately and maps the field to a dynamic field named `foo_t`.
-
-[source,bash]
-----
-bin/post -c techproducts example/exampledocs/sample.html -params "literal.id=doc3&captureAttr=true&defaultField=_text_&capture=div&fmap.div=foo_t"
-----
-
-[[UploadingDatawithSolrCellusingApacheTika-UsingLiteralstoDefineYourOwnMetadata]]
 ==== Using Literals to Define Your Own Metadata
 
 To add in your own metadata, pass in the literal parameter along with the file:
@@ -318,8 +286,7 @@ To add in your own metadata, pass in the literal parameter along with the file:
 bin/post -c techproducts -params "literal.id=doc4&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&literal.blah_s=Bah" example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-XPath]]
-==== XPath
+==== XPath Expressions
 
 The example below passes in an XPath expression to restrict the XHTML returned by Tika:
 
@@ -328,7 +295,6 @@ The example below passes in an XPath expression to restrict the XHTML returned b
 bin/post -c techproducts -params "literal.id=doc5&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&xpath=/xhtml:html/xhtml:body/xhtml:div//node()" example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-ExtractingDatawithoutIndexingIt]]
 === Extracting Data without Indexing It
 
 Solr allows you to extract data without indexing. You might want to do this if you're using Solr solely as an extraction server or if you're interested in testing Solr extraction.
@@ -347,7 +313,6 @@ The output includes XML generated by Tika (and further escaped by Solr's XML) us
 bin/post -c techproducts -params "extractOnly=true&wt=ruby&indent=true" -out yes example/exampledocs/sample.html
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-SendingDocumentstoSolrwithaPOST]]
 == Sending Documents to Solr with a POST
 
 The example below streams the file as the body of the POST, which therefore does not provide Solr with the name of the file.
@@ -357,7 +322,6 @@ The example below streams the file as the body of the POST, which does not, then
 curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=doc6&defaultField=text&commit=true" --data-binary @example/exampledocs/sample.html -H 'Content-type:text/html'
 ----
 
-[[UploadingDatawithSolrCellusingApacheTika-SendingDocumentstoSolrwithSolrCellandSolrJ]]
 == Sending Documents to Solr with Solr Cell and SolrJ
 
 SolrJ is a Java client that you can use to add documents to the index, update the index, or query the index. You'll find more information on SolrJ in <<client-apis.adoc#client-apis,Client APIs>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-javascript.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-javascript.adoc b/solr/solr-ref-guide/src/using-javascript.adoc
index 25aabf8..62c681f 100644
--- a/solr/solr-ref-guide/src/using-javascript.adoc
+++ b/solr/solr-ref-guide/src/using-javascript.adoc
@@ -22,7 +22,7 @@ Using Solr from JavaScript clients is so straightforward that it deserves a spec
 
 HTTP requests can be sent to Solr using the standard `XMLHttpRequest` mechanism.
 
-Out of the box, Solr can send <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JavaScript Object Notation (JSON) responses>>, which are easily interpreted in JavaScript. Just add `wt=json` to the request URL to have responses sent as JSON.
+By default, Solr sends <<response-writers.adoc#json-response-writer,JavaScript Object Notation (JSON) responses>>, which are easily interpreted in JavaScript. You don't need to add anything to the request URL to have responses sent as JSON.
 
 For more information and an excellent example, read the SolJSON page on the Solr Wiki:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-jmx-with-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-jmx-with-solr.adoc b/solr/solr-ref-guide/src/using-jmx-with-solr.adoc
index 241b30b..77fd0ca 100644
--- a/solr/solr-ref-guide/src/using-jmx-with-solr.adoc
+++ b/solr/solr-ref-guide/src/using-jmx-with-solr.adoc
@@ -22,7 +22,6 @@ http://www.oracle.com/technetwork/java/javase/tech/javamanagement-140525.html[Ja
 
 Solr, like any other good citizen of the Java universe, can be controlled via a JMX interface. You can enable JMX support by adding lines to `solrconfig.xml`. You can use a JMX client, like jconsole, to connect with Solr. Check out the Wiki page http://wiki.apache.org/solr/SolrJmx for more information. You may also find the following overview of JMX to be useful: http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html.
 
-[[UsingJMXwithSolr-ConfiguringJMX]]
 == Configuring JMX
 
 JMX configuration is provided in `solrconfig.xml`. Please see the http://www.oracle.com/technetwork/java/javase/tech/javamanagement-140525.html[JMX Technology Home Page] for more details.
@@ -36,7 +35,6 @@ Enabling/disabling JMX and securing access to MBeanServers is left up to the use
 
 ====
 
-[[UsingJMXwithSolr-ConfiguringanExistingMBeanServer]]
 === Configuring an Existing MBeanServer
 
 The command:
@@ -48,7 +46,6 @@ The command:
 
 enables JMX support in Solr if and only if an existing MBeanServer is found. Use this if you want to configure JMX with JVM parameters. Remove this to disable exposing Solr configuration and statistics to JMX. If this is specified, Solr will try to list all available MBeanServers and use the first one to register MBeans.
 
-[[UsingJMXwithSolr-ConfiguringanExistingMBeanServerwithagentId]]
 === Configuring an Existing MBeanServer with agentId
 
 The command:
@@ -60,7 +57,6 @@ The command:
 
 enables JMX support in Solr if and only if an existing MBeanServer is found matching the given agentId. If multiple servers are found, the first one is used. If none is found, an exception is raised and depending on the configuration, Solr may refuse to start.
 
-[[UsingJMXwithSolr-ConfiguringaNewMBeanServer]]
 === Configuring a New MBeanServer
 
 The command:
@@ -72,8 +68,7 @@ The command:
 
 creates a new MBeanServer exposed for remote monitoring at the specific service URL. If the JMXConnectorServer can't be started (probably because the serviceUrl is bad), an exception is thrown.
 
-[[UsingJMXwithSolr-Example]]
-==== Example
+==== MBean Server Example
 
 Solr's `sample_techproducts_configs` config set uses the simple `<jmx />` configuration option. If you start the example with the necessary JVM system properties to launch an internal MBeanServer, Solr will register with it and you can connect using a tool like `jconsole`:
 
@@ -87,7 +82,6 @@ bin/solr -e techproducts -Dcom.sun.management.jmxremote
 3.  Connect to the "`start.jar`" shown in the list of local processes.
 4.  Switch to the "MBeans" tab. You should be able to see "`solr/techproducts`" listed there, at which point you can drill down and see details of every Solr plugin.
 
-[[UsingJMXwithSolr-ConfiguringaRemoteConnectiontoSolrJMX]]
 === Configuring a Remote Connection to Solr JMX
 
 If you need to attach a JMX-enabled Java profiling tool, such as JConsole or VisualVM, to a remote Solr server, then you need to enable remote JMX access when starting the Solr server. Simply change the `ENABLE_REMOTE_JMX_OPTS` property in the include file to true. You’ll also need to choose a port for the JMX RMI connector to bind to, such as 18983. For example, if your Solr include script sets:
@@ -118,7 +112,5 @@ http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html
 
 [IMPORTANT]
 ====
-
 Making JMX connections into machines running behind NATs (e.g. Amazon's EC2 service) is not a simple task. The `java.rmi.server.hostname` system property may help, but running `jconsole` on the server itself and using a remote desktop is often the simplest solution. See http://web.archive.org/web/20130525022506/http://jmsbrdy.com/monitoring-java-applications-running-on-ec2-i.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-python.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-python.adoc b/solr/solr-ref-guide/src/using-python.adoc
index 1e8045f..2c51486 100644
--- a/solr/solr-ref-guide/src/using-python.adoc
+++ b/solr/solr-ref-guide/src/using-python.adoc
@@ -18,9 +18,8 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes an output format specifically for <<response-writers.adoc#ResponseWriters-PythonResponseWriter,Python>>, but <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JSON output>> is a little more robust.
+Solr includes an output format specifically for <<response-writers.adoc#python-response-writer,Python>>, but <<response-writers.adoc#json-response-writer,JSON output>> is a little more robust.
 
-[[UsingPython-SimplePython]]
 == Simple Python
 
 Making a query is a simple matter. First, tell Python you will need to make HTTP connections.
@@ -50,7 +49,6 @@ for document in response['response']['docs']:
   print "  Name =", document['name']
 ----
 
-[[UsingPython-PythonwithJSON]]
 == Python with JSON
 
 JSON is a more robust response format, but you will need to add a Python package in order to use it. At a command line, install the simplejson package like this:
@@ -60,7 +58,7 @@ JSON is a more robust response format, but you will need to add a Python package
 sudo easy_install simplejson
 ----
 
-Once that is done, making a query is nearly the same as before. However, notice that the wt query parameter is now json, and the response is now digested by `simplejson.load()`.
+Once that is done, making a query is nearly the same as before. However, notice that the wt query parameter is now json (which is also the default if no `wt` parameter is specified), and the response is now digested by `simplejson.load()`.
 
 [source,python]
 ----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-solr-from-ruby.adoc b/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
index ef5454c..0b70336 100644
--- a/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
+++ b/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
@@ -18,7 +18,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr has an optional Ruby response format that extends the <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JSON output>> to allow the response to be safely eval'd by Ruby's interpreter
+Solr has an optional Ruby response format that extends the <<response-writers.adoc#json-response-writer,JSON output>> to allow the response to be safely eval'd by Ruby's interpreter.
 
 This Ruby response format differs from JSON in the following ways:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-solrj.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-solrj.adoc b/solr/solr-ref-guide/src/using-solrj.adoc
index 4788ea2..9ac5acc 100644
--- a/solr/solr-ref-guide/src/using-solrj.adoc
+++ b/solr/solr-ref-guide/src/using-solrj.adoc
@@ -45,7 +45,6 @@ SolrClient solr = new CloudSolrClient.Builder().withSolrUrl("http://localhost:89
 
 Once you have a `SolrClient`, you can use it by calling methods like `query()`, `add()`, and `commit()`.
 
-[[UsingSolrJ-BuildingandRunningSolrJApplications]]
 == Building and Running SolrJ Applications
 
 The SolrJ API is included with Solr, so you do not have to download or install anything else. However, in order to build and run applications that use SolrJ, you have to add some libraries to the classpath.
@@ -69,7 +68,6 @@ You can sidestep a lot of the messing around with the JAR files by using Maven i
 
 If you are worried about the SolrJ libraries expanding the size of your client application, you can use a code obfuscator like http://proguard.sourceforge.net/[ProGuard] to remove APIs that you are not using.
 
-[[UsingSolrJ-SpecifyingSolrUrl]]
 == Specifying Solr Base URLs
 
 Most `SolrClient` implementations (with the notable exception of `CloudSolrClient`) require users to specify one or more Solr base URLs, which the client then uses to send HTTP requests to Solr. The path users include on the base URL they provide has an effect on the behavior of the created client from that point on.
@@ -77,7 +75,6 @@ Most `SolrClient` implementations (with the notable exception of `CloudSolrClien
 . A URL with a path pointing to a specific core or collection (e.g. `http://hostname:8983/solr/core1`). When a core or collection is specified in the base URL, subsequent requests made with that client are not required to re-specify the affected collection. However, the client is limited to sending requests to that core/collection, and cannot send requests to any others.
 . A URL with a generic path pointing to the root Solr path (e.g. `http://hostname:8983/solr`). When no core or collection is specified in the base URL, requests can be made to any core/collection, but the affected core/collection must be specified on all requests.
 
-[[UsingSolrJ-SettingXMLResponseParser]]
 == Setting XMLResponseParser
 
 SolrJ uses a binary format, rather than XML, as its default response format. If you are trying to mix Solr and SolrJ versions where one is version 1.x and the other is 3.x or later, then you MUST use the XML response parser. The binary format changed in 3.x, and the two javabin versions are entirely incompatible. The following code will make this change:
@@ -87,7 +84,6 @@ SolrJ uses a binary format, rather than XML, as its default response format. If
 solr.setParser(new XMLResponseParser());
 ----
 
-[[UsingSolrJ-PerformingQueries]]
 == Performing Queries
 
 Use `query()` to have Solr search for results. You have to pass a `SolrQuery` object that describes the query, and you will get back a QueryResponse (from the `org.apache.solr.client.solrj.response` package).
@@ -132,7 +128,6 @@ The `QueryResponse` is a collection of documents that satisfy the query paramete
 SolrDocumentList list = response.getResults();
 ----
 
-[[UsingSolrJ-IndexingDocuments]]
 == Indexing Documents
 
 Other operations are just as simple. To index (add) a document, all you need to do is create a `SolrInputDocument` and pass it along to the `SolrClient`'s `add()` method. This example assumes that the SolrClient object called 'solr' is already created based on the examples shown earlier.
@@ -150,7 +145,6 @@ UpdateResponse response = solr.add(document);
 solr.commit();
 ----
 
-[[UsingSolrJ-UploadingContentinXMLorBinaryFormats]]
 === Uploading Content in XML or Binary Formats
 
 SolrJ lets you upload content in binary format instead of the default XML format. Use the following code to upload using binary format, which is the same format SolrJ uses to fetch results. If you are trying to mix Solr and SolrJ versions where one is version 1.x and the other is 3.x or later, then you MUST stick with the XML request writer. The binary format changed in 3.x, and the two javabin versions are entirely incompatible.
@@ -160,12 +154,10 @@ SolrJ lets you upload content in binary format instead of the default XML format
 solr.setRequestWriter(new BinaryRequestWriter());
 ----
 
-[[UsingSolrJ-UsingtheConcurrentUpdateSolrClient]]
 === Using the ConcurrentUpdateSolrClient
 
 When implementing Java applications that will be bulk loading a lot of documents at once, {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/ConcurrentUpdateSolrClient.html[`ConcurrentUpdateSolrClient`] is an alternative to consider instead of using `HttpSolrClient`. The `ConcurrentUpdateSolrClient` buffers all added documents and writes them into open HTTP connections. This class is thread safe. Although any SolrClient request can be made with this implementation, it is only recommended to use the `ConcurrentUpdateSolrClient` for `/update` requests.
 
-[[UsingSolrJ-EmbeddedSolrServer]]
 == EmbeddedSolrServer
 
 The {solr-javadocs}/solr-core/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.html[`EmbeddedSolrServer`] class provides an implementation of the `SolrClient` client API talking directly to a micro-instance of Solr running in your Java application. This embedded approach is not recommended in most cases and is fairly limited in the set of features it supports – in particular it cannot be used with <<solrcloud.adoc#solrcloud,SolrCloud>> or <<index-replication.adoc#index-replication,Index Replication>>. `EmbeddedSolrServer` exists primarily to help facilitate testing.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc b/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
index 3166e1c..31b49f2 100644
--- a/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
+++ b/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
@@ -26,7 +26,6 @@ These files are uploaded in either of the following cases:
 * When you create a collection using the `bin/solr` script.
 * When you explicitly upload a configuration set to ZooKeeper.
 
-[[UsingZooKeepertoManageConfigurationFiles-StartupBootstrap]]
 == Startup Bootstrap
 
 When you try SolrCloud for the first time using `bin/solr -e cloud`, the related configset gets uploaded to ZooKeeper automatically and is linked with the newly created collection.
@@ -49,15 +48,9 @@ The create command will upload a copy of the `_default` configuration directory
 
 Once a configuration directory has been uploaded to ZooKeeper, you can update it using the <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>>.
 
-[IMPORTANT]
-====
+IMPORTANT: It's a good idea to keep these files under version control.
 
-It's a good idea to keep these files under version control.
 
-====
-
-
-[[UsingZooKeepertoManageConfigurationFiles-UploadingConfigurationFilesusingbin_solrorSolrJ]]
 == Uploading Configuration Files using bin/solr or SolrJ
 
 In production situations, <<config-sets.adoc#config-sets,Config Sets>> can also be uploaded to ZooKeeper independent of collection creation using either Solr's <<solr-control-script-reference.adoc#solr-control-script-reference,Solr Control Script>> or the {solr-javadocs}/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrClient.html[CloudSolrClient.uploadConfig] java method.
@@ -71,21 +64,19 @@ bin/solr zk upconfig -n <name for configset> -d <path to directory with configse
 
 It is strongly recommended that the configurations be kept in a version control system such as Git, SVN, or similar.
 
-[[UsingZooKeepertoManageConfigurationFiles-ManagingYourSolrCloudConfigurationFiles]]
 == Managing Your SolrCloud Configuration Files
 
 To update or change your SolrCloud configuration files, follow these steps (a command sketch follows the list):
 
-1.  Download the latest configuration files from ZooKeeper, using the source control checkout process.
-2.  Make your changes.
-3.  Commit your changed file to source control.
-4.  Push the changes back to ZooKeeper.
-5.  Reload the collection so that the changes will be in effect.
+. Download the latest configuration files from ZooKeeper, using the source control checkout process.
+. Make your changes.
+. Commit your changed file to source control.
+. Push the changes back to ZooKeeper.
+. Reload the collection so that the changes will be in effect.
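 
 A sketch of that cycle with hypothetical names (`myconfig`, `mycollection`):
 
 [source,bash]
 ----
 # step 1: pull the current config out of ZooKeeper
 bin/solr zk downconfig -n myconfig -d /path/to/myconfig -z localhost:2181
 # steps 2-3: edit the files and commit them to source control
 # step 4: push the changed config back to ZooKeeper
 bin/solr zk upconfig -n myconfig -d /path/to/myconfig -z localhost:2181
 # step 5: reload the collection so the changes take effect
 curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'
 ----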
 
-[[UsingZooKeepertoManageConfigurationFiles-PreparingZooKeeperbeforefirstclusterstart]]
-== Preparing ZooKeeper before first cluster start
+== Preparing ZooKeeper before First Cluster Start
 
-If you will share the same ZooKeeper instance with other applications you should use a _chroot_ in ZooKeeper. Please see <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,ZooKeeper chroot>> for instructions.
+If you will share the same ZooKeeper instance with other applications, you should use a _chroot_ in ZooKeeper. Please see <<taking-solr-to-production.adoc#zookeeper-chroot,ZooKeeper chroot>> for instructions.
 
 There are certain configuration files containing cluster-wide configuration. Since some of these are crucial for the cluster to function properly, you may need to upload such files to ZooKeeper before starting your Solr cluster for the first time. Examples of such configuration files (not exhaustive) are `solr.xml`, `security.json` and `clusterprops.json`.
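 
 For illustration, a file such as `solr.xml` could be uploaded before the first start with the Solr Control Script's `zk cp` command (paths and connection string are examples):
 
 [source,bash]
 ----
 bin/solr zk cp file:/path/to/solr.xml zk:/solr.xml -z localhost:2181
 ----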
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/v2-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/v2-api.adoc b/solr/solr-ref-guide/src/v2-api.adoc
index 6906b1c..15e1fda 100644
--- a/solr/solr-ref-guide/src/v2-api.adoc
+++ b/solr/solr-ref-guide/src/v2-api.adoc
@@ -34,7 +34,6 @@ The old API and the v2 API differ in three principal ways:
 .  Endpoint structure: The v2 API endpoint structure has been rationalized and regularized.
 .  Documentation: The v2 APIs are self-documenting: append `/_introspect` to any valid v2 API path and the API specification will be returned in JSON format.
 
-[[v2API-v2APIPathPrefixes]]
 == v2 API Path Prefixes
 
 Following are some v2 API URL paths and path prefixes, along with some of the operations that are supported at these paths and their sub-paths.
@@ -57,7 +56,6 @@ Following are some v2 API URL paths and path prefixes, along with some of the op
 |`/v2/c/.system/blob` |Upload and download blobs and metadata.
 |===
 
-[[v2API-Introspect]]
 == Introspect
 
 Append `/_introspect` to any valid v2 API path and the API specification will be returned in JSON format.
@@ -72,7 +70,6 @@ Most endpoints support commands provided in a body sent via POST. To limit the i
 
 `\http://localhost:8983/v2/c/gettingstarted/_introspect?method=POST&command=modify`
 
-[[v2API-InterpretingtheIntrospectOutput]]
 === Interpreting the Introspect Output
 
 Example: `\http://localhost:8983/v2/c/gettingstarted/get/_introspect`
@@ -81,7 +78,7 @@ Example : `\http://localhost:8983/v2/c/gettingstarted/get/_introspect`
 ----
 {
   "spec":[{
-      "documentation":"https://cwiki.apache.org/confluence/display/solr/RealTime+Get",
+      "documentation":"https://lucene.apache.org/solr/guide/real-time-get.html",
       "description":"RealTime Get allows retrieving documents by ID before the documents have been committed to the index. It is useful when you need access to documents as soon as they are indexed but your commit times are high for other reasons.",
       "methods":["GET"],
       "url":{
@@ -97,7 +94,8 @@ Example : `\http://localhost:8983/v2/c/gettingstarted/get/_introspect`
             "type":"string",
             "description":"An optional filter query to add to the query. One use case for this is security filtering, in case users or groups should not be able to retrieve the document ID requested."}}}}],
   "WARNING":"This response format is experimental.  It is likely to change in the future.",
-  "availableSubPaths":{}}
+  "availableSubPaths":{}
+}
 ----
 
 Description of some of the keys in the above example:
@@ -115,29 +113,29 @@ Example of introspect for a POST API: `\http://localhost:8983/v2/c/gettingstarte
 ----
 {
   "spec":[{
-      "documentation":"https://cwiki.apache.org/confluence/display/solr/Collections+API",
+      "documentation":"https://lucene.apache.org/solr/guide/collections-api.html",
       "description":"Several collection-level operations are supported with this endpoint: modify collection attributes; reload a collection; migrate documents to a different collection; rebalance collection leaders; balance properties across shards; and add or delete a replica property.",
       "methods":["POST"],
       "url":{"paths":["/collections/{collection}",
           "/c/{collection}"]},
       "commands":{"modify":{
-          "documentation":"https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-modifycoll",
+          "documentation":"https://lucene.apache.org/solr/guide/collections-api.html#modifycollection",
           "description":"Modifies specific attributes of a collection. Multiple attributes can be changed at one time.",
           "type":"object",
           "properties":{
             "rule":{
               "type":"array",
-              "documentation":"https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement",
+              "documentation":"https://lucene.apache.org/solr/guide/rule-based-replica-placement.html",
               "description":"Modifies the rules for where replicas should be located in a cluster.",
               "items":{"type":"string"}},
             "snitch":{
               "type":"array",
-              "documentation":"https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement",
+              "documentation":"https://lucene.apache.org/solr/guide/rule-based-replica-placement.html",
               "description":"Details of the snitch provider",
               "items":{"type":"string"}},
             "autoAddReplicas":{
               "type":"boolean",
-              "description":"When set to true, enables auto addition of replicas on shared file systems (such as HDFS). See https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS for more details on settings and overrides."},
+              "description":"When set to true, enables auto addition of replicas on shared file systems (such as HDFS). See https://lucene.apache.org/solr/guide/running-solr-on-hdfs.html for more details on settings and overrides."},
             "replicationFactor":{
               "type":"string",
               "description":"The number of replicas to be created for each shard. Replicas are physical copies of each shard, acting as failover for the shard. Note that changing this value on an existing collection does not automatically add more replicas to the collection. However, it will allow add-replica commands to succeed."},
@@ -151,16 +149,12 @@ Example of introspect for a POST API: `\http://localhost:8983/v2/c/gettingstarte
     "/c/gettingstarted/schema":["POST", "GET"],
     "/c/gettingstarted/export":["POST", "GET"],
     "/c/gettingstarted/admin/ping":["POST", "GET"],
-    "/c/gettingstarted/update":["POST"]},
-
-[... more sub-paths ...]
-
+    "/c/gettingstarted/update":["POST"]}
 }
 ----
 
 The `"commands"` section in the above example has one entry for each command supported at this endpoint. The key is the command name and the value is a JSON object describing the command structure using JSON schema (see http://json-schema.org/ for a description).
 
-[[v2API-InvocationExamples]]
 == Invocation Examples
 
 For the "gettingstarted" collection, set the replication factor and whether to automatically add replicas (see above for the introspect output for the `"modify"` command used here):

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/velocity-response-writer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/velocity-response-writer.adoc b/solr/solr-ref-guide/src/velocity-response-writer.adoc
index 424a033..0101030 100644
--- a/solr/solr-ref-guide/src/velocity-response-writer.adoc
+++ b/solr/solr-ref-guide/src/velocity-response-writer.adoc
@@ -42,7 +42,6 @@ The above example shows the optional initialization and custom tool parameters u
 
 == Configuration & Usage
 
-[[VelocityResponseWriter-VelocityResponseWriterinitializationparameters]]
 === VelocityResponseWriter Initialization Parameters
 
 `template.base.dir`::
 External "tools" can be specified as a list of string name/value pairs (tool name / class name).
 +
 A custom registered tool can override the built-in context objects with the same name, except for `$request`, `$response`, `$page`, and `$debug` (these tools are designed to not be overridden).
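
As a sketch of how a custom tool might be registered (`com.example.MyCustomTool` is a hypothetical class, not part of Solr), the writer definition in `solrconfig.xml` could look like this:

[source,xml]
----
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
  <!-- each entry registers one tool: the key is the name templates use
       (here $mytool), the value is the implementing class (hypothetical) -->
  <lst name="tools">
    <str name="mytool">com.example.MyCustomTool</str>
  </lst>
</queryResponseWriter>
----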
 
-[[VelocityResponseWriter-VelocityResponseWriterrequestparameters]]
 === VelocityResponseWriter Request Parameters
 
 `v.template`::
@@ -102,7 +100,6 @@ Resource bundles can be added by providing a JAR file visible by the SolrResourc
 `v.template._template_name_`:: When the "params" resource loader is enabled, templates can be specified as part of the Solr request.
 
 
-[[VelocityResponseWriter-VelocityResponseWritercontextobjects]]
 === VelocityResponseWriter Context Objects
 
 // TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/velocity-search-ui.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/velocity-search-ui.adoc b/solr/solr-ref-guide/src/velocity-search-ui.adoc
index 0cb4697..cc2fb47 100644
--- a/solr/solr-ref-guide/src/velocity-search-ui.adoc
+++ b/solr/solr-ref-guide/src/velocity-search-ui.adoc
@@ -18,11 +18,11 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes a sample search UI based on the <<response-writers.adoc#ResponseWriters-VelocityResponseWriter,VelocityResponseWriter>> (also known as Solritas) that demonstrates several useful features, such as searching, faceting, highlighting, autocomplete, and geospatial searching.
+Solr includes a sample search UI based on the <<response-writers.adoc#velocity-writer,VelocityResponseWriter>> (also known as Solritas) that demonstrates several useful features, such as searching, faceting, highlighting, autocomplete, and geospatial searching.
 
 When using the `sample_techproducts_configs` config set, you can access the Velocity sample Search UI: `\http://localhost:8983/solr/techproducts/browse`
 
 .The Velocity Search UI
 image::images/velocity-search-ui/techproducts_browse.png[image,width=500]
 
-For more information about the Velocity Response Writer, see the <<response-writers.adoc#ResponseWriters-VelocityResponseWriter,Response Writer page>>.
+For more information about the Velocity Response Writer, see the <<response-writers.adoc#velocity-writer,Response Writer page>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
index 5ed4a56..9208775 100644
--- a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
+++ b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
@@ -27,7 +27,6 @@ The `currency` FieldType provides support for monetary values to Solr/Lucene wit
 * Currency parsing by either currency code or symbol
 * Symmetric & asymmetric exchange rates (asymmetric exchange rates are useful if there are fees associated with exchanging the currency)
 
-[[WorkingwithCurrenciesandExchangeRates-ConfiguringCurrencies]]
 == Configuring Currencies
 
 .CurrencyField has been Deprecated
@@ -40,12 +39,12 @@ The `currency` field type is defined in `schema.xml`. This is the default config
 
 [source,xml]
 ----
-<fieldType name="currency" class="solr.CurrencyFieldType" 
+<fieldType name="currency" class="solr.CurrencyFieldType"
            amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
            defaultCurrency="USD" currencyConfig="currency.xml" />
 ----
 
-In this example, we have defined the name and class of the field type, and defined the `defaultCurrency` as "USD", for U.S. Dollars. We have also defined a `currencyConfig` to use a file called "currency.xml". This is a file of exchange rates between our default currency to other currencies. There is an alternate implementation that would allow regular downloading of currency data. See <<WorkingwithCurrenciesandExchangeRates-ExchangeRates,Exchange Rates>> below for more.
+In this example, we have defined the name and class of the field type, and defined the `defaultCurrency` as "USD", for U.S. Dollars. We have also defined a `currencyConfig` to use a file called "currency.xml". This is a file of exchange rates between our default currency and other currencies. There is an alternate implementation that allows regular downloading of currency data. See <<Exchange Rates>> below for more information.
 
 Many of the example schemas that ship with Solr include a <<dynamic-fields.adoc#dynamic-fields,dynamic field>> that uses this type, such as this example:
 
@@ -60,10 +59,9 @@ At indexing time, money fields can be indexed in a native currency. For example,
 
 During query processing, range and point queries are both supported.
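
For illustration, assuming a dynamic field `*_c` of this currency type, a document could be indexed with a value carrying both the amount and the ISO currency code (a sketch using Solr's XML update format):

[source,xml]
----
<add>
  <doc>
    <field name="id">1</field>
    <!-- amount and currency code, separated by a comma -->
    <field name="price_c">10.00,USD</field>
  </doc>
</add>
----

A range query such as `price_c:[10.00,USD TO 50.00,USD]` would then match documents whose value, after any necessary exchange-rate conversion, falls within that range.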
 
-[[WorkingwithCurrenciesandExchangeRates-Sub-fieldSuffixes]]
 === Sub-field Suffixes
 
-You must specify parameters `amountLongSuffix` and `codeStrSuffix`, corresponding to dynamic fields to be used for the raw amount and the currency dynamic sub-fields, e.g.: 
+You must specify the `amountLongSuffix` and `codeStrSuffix` parameters, corresponding to the dynamic fields to be used for the raw amount and currency code sub-fields, e.g.:
 
 [source,xml]
 ----
@@ -77,15 +75,13 @@ In the above example, the raw amount field will use the `"*_l_ns"` dynamic field
 .Atomic Updates won't work if dynamic sub-fields are stored
 [NOTE]
 ====
-As noted on <<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-FieldStorage,Updating Parts of Documents>>, stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates. To avoid this problem, specify `stored="false"` on those dynamic fields.
+As noted on <<updating-parts-of-documents.adoc#field-storage,Updating Parts of Documents>>, stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates. To avoid this problem, specify `stored="false"` on those dynamic fields.
 ====
 
-[[WorkingwithCurrenciesandExchangeRates-ExchangeRates]]
 == Exchange Rates
 
 You configure exchange rates by specifying a provider. Natively, two provider types are supported: `FileExchangeRateProvider` and `OpenExchangeRatesOrgProvider`.
 
-[[WorkingwithCurrenciesandExchangeRates-FileExchangeRateProvider]]
 === FileExchangeRateProvider
 
 This provider expects you to supply a file of exchange rates. It is the default, meaning that to use this provider you only need to specify the file path and name as the value of `currencyConfig` in the definition for this type.
@@ -103,9 +99,9 @@ There is a sample `currency.xml` file included with Solr, found in the same dire
     <rate from="USD" to="CAD" rate="1.030815" comment="CANADA Dollar" />
 
     <!-- Cross-rates for some common currencies -->
-    <rate from="EUR" to="GBP" rate="0.869914" />  
-    <rate from="EUR" to="NOK" rate="7.800095" />  
-    <rate from="GBP" to="NOK" rate="8.966508" />  
+    <rate from="EUR" to="GBP" rate="0.869914" />
+    <rate from="EUR" to="NOK" rate="7.800095" />
+    <rate from="GBP" to="NOK" rate="8.966508" />
 
     <!-- Asymmetrical rates -->
     <rate from="EUR" to="USD" rate="0.5" />
@@ -113,7 +109,6 @@ There is a sample `currency.xml` file included with Solr, found in the same dire
 </currencyConfig>
 ----
 
-[[WorkingwithCurrenciesandExchangeRates-OpenExchangeRatesOrgProvider]]
 === OpenExchangeRatesOrgProvider
 
 You can configure Solr to download exchange rates from http://www.OpenExchangeRates.Org[OpenExchangeRates.Org], which updates rates between USD and 170 currencies hourly. These rates are symmetrical only.
@@ -122,10 +117,10 @@ In this case, you need to specify the `providerClass` in the definitions for the
 
 [source,xml]
 ----
-<fieldType name="currency" class="solr.CurrencyFieldType" 
+<fieldType name="currency" class="solr.CurrencyFieldType"
            amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
            providerClass="solr.OpenExchangeRatesOrgProvider"
-           refreshInterval="60" 
+           refreshInterval="60"
            ratesFileLocation="http://www.openexchangerates.org/api/latest.json?app_id=yourPersonalAppIdKey"/>
 ----
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/working-with-dates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-dates.adoc b/solr/solr-ref-guide/src/working-with-dates.adoc
index 31d0f1f..5f28f61 100644
--- a/solr/solr-ref-guide/src/working-with-dates.adoc
+++ b/solr/solr-ref-guide/src/working-with-dates.adoc
@@ -18,7 +18,6 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[WorkingwithDates-DateFormatting]]
 == Date Formatting
 
 Solr's date fields (`TrieDateField`, `DatePointField` and `DateRangeField`) represent "dates" as a point in time with millisecond precision. The format used is a restricted form of the canonical representation of dateTime in the http://www.w3.org/TR/xmlschema-2/#dateTime[XML Schema specification] – a restricted subset of https://en.wikipedia.org/wiki/ISO_8601[ISO-8601]. For those familiar with Java 8, Solr uses https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT[DateTimeFormatter.ISO_INSTANT] for both formatting and parsing, with "leniency".
@@ -48,7 +47,6 @@ There must be a leading `'-'` for dates prior to year 0000, and Solr will format
 .Query escaping may be required
 [WARNING]
 ====
-
 As you can see, the date format includes colon characters separating the hours, minutes, and seconds. Because the colon is a special character to Solr's most common query parsers, escaping is sometimes required, depending on exactly what you are trying to do.
 
 This is normally an invalid query: `datefield:1972-05-20T17:33:18.772Z`
@@ -57,10 +55,8 @@ These are valid queries: +
 `datefield:1972-05-20T17\:33\:18.772Z` +
 `datefield:"1972-05-20T17:33:18.772Z"` +
 `datefield:[1972-05-20T17:33:18.772Z TO *]`
-
 ====
 
-[[WorkingwithDates-DateRangeFormatting]]
 === Date Range Formatting
 
 Solr's `DateRangeField` supports the same point-in-time date syntax described above (with _date math_ described below) and more, to express date ranges. One class of examples is truncated dates, which represent the entire date span to the precision indicated. The other class uses the range syntax (`[ TO ]`). Here are some examples:
@@ -74,12 +70,10 @@ Solr's `DateRangeField` supports the same point in time date syntax described ab
 
 Limitations: The range syntax doesn't support embedded date math. If you specify a date instance supported by TrieDateField with date math truncating it, like `NOW/DAY`, you still get the first millisecond of that day, not the entire day's range. Exclusive ranges (using `{` & `}`) work in _queries_ but not for _indexing_ ranges.
 
-[[WorkingwithDates-DateMath]]
 == Date Math
 
 Solr's date field types also support _date math_ expressions, which make it easy to create times relative to fixed moments in time, including the current time, which can be represented using the special value of "```NOW```".
 
-[[WorkingwithDates-DateMathSyntax]]
 === Date Math Syntax
 
 Date math expressions consist of either adding some quantity of time in a specified unit, or rounding the current time to a specified unit. Expressions can be chained and are evaluated left to right.
@@ -104,10 +98,8 @@ Note that while date math is most commonly used relative to `NOW` it can be appl
 
 `1972-05-20T17:33:18.772Z+6MONTHS+3DAYS/DAY`
 
-[[WorkingwithDates-RequestParametersThatAffectDateMath]]
 === Request Parameters That Affect Date Math
 
-[[WorkingwithDates-NOW]]
 ==== NOW
 
 The `NOW` parameter is used internally by Solr to ensure consistent date math expression parsing across multiple nodes in a distributed request. But it can be specified to instruct Solr to use an arbitrary moment in time (past or future) for all situations where the special value of "```NOW```" would impact date math expressions.
@@ -118,7 +110,6 @@ Example:
 
 `q=solr&fq=start_date:[* TO NOW]&NOW=1384387200000`
 
-[[WorkingwithDates-TZ]]
 ==== TZ
 
 By default, all date math expressions are evaluated relative to the UTC TimeZone, but the `TZ` parameter can be specified to override this behaviour by forcing all date-based addition and rounding to be relative to the specified http://docs.oracle.com/javase/8/docs/api/java/util/TimeZone.html[time zone].
@@ -161,7 +152,6 @@ http://localhost:8983/solr/my_collection/select?q=*:*&facet.range=my_date_field&
 ...
 ----
 
-[[WorkingwithDates-MoreDateRangeFieldDetails]]
 == More DateRangeField Details
 
 `DateRangeField` is almost a drop-in replacement for places where `TrieDateField` is used. The only difference is that Solr's XML or SolrJ response formats will expose the stored data as a String instead of a Date. The underlying index data for this field will be a bit larger. Queries that align to units of time of a second or greater should be faster than TrieDateField, especially if it's in UTC. But the main point of DateRangeField, as its name suggests, is to allow indexing date ranges. To do that, simply supply strings in the format shown above. It also supports specifying three different relational predicates between the indexed data and the query range: `Intersects` (default), `Contains`, `Within`. You can specify the predicate by querying using the `op` local-params parameter like so:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/working-with-enum-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-enum-fields.adoc b/solr/solr-ref-guide/src/working-with-enum-fields.adoc
index 8931543..205b735 100644
--- a/solr/solr-ref-guide/src/working-with-enum-fields.adoc
+++ b/solr/solr-ref-guide/src/working-with-enum-fields.adoc
@@ -20,7 +20,6 @@
 
 The EnumField type allows defining a field whose values are a closed set, and whose sort order is pre-determined but neither alphabetic nor numeric. Examples of this are severity lists or risk definitions.
 
-[[WorkingwithEnumFields-DefininganEnumFieldinschema.xml]]
 == Defining an EnumField in schema.xml
 
 The EnumField type definition is quite simple, as in this example defining field types for "priorityLevel" and "riskLevel" enumerations:
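
A sketch of such a definition (the type names and configuration file name here are illustrative) could look like:

[source,xml]
----
<!-- each type points at a named <enum/> list in the enumsConfig file -->
<fieldType name="priorityLevel" class="solr.EnumField"
           enumsConfig="enumsConfig.xml" enumName="priority"/>
<fieldType name="riskLevel" class="solr.EnumField"
           enumsConfig="enumsConfig.xml" enumName="risk"/>
----
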
@@ -33,11 +32,10 @@ The EnumField type definition is quite simple, as in this example defining field
 
 Besides the `name` and the `class`, which are common to all field types, this type also takes two additional parameters:
 
-* `enumsConfig`: the name of a configuration file that contains the `<enum/>` list of field values and their order that you wish to use with this field type. If a path to the file is not defined specified, the file should be in the `conf` directory for the collection.
-* `enumName`: the name of the specific enumeration in the `enumsConfig` file to use for this type.
+`enumsConfig`:: the name of a configuration file that contains the `<enum/>` list of field values and their order that you wish to use with this field type. If a path to the file is not specified, the file should be in the `conf` directory for the collection.
+`enumName`:: the name of the specific enumeration in the `enumsConfig` file to use for this type.
 
-[[WorkingwithEnumFields-DefiningtheEnumFieldconfigurationfile]]
-== Defining the EnumField configuration file
+== Defining the EnumField Configuration File
 
 The file named with the `enumsConfig` parameter can contain multiple enumeration value lists with different names if there are multiple uses for enumerations in your Solr schema.
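
A sketch of such a file, defining two illustrative lists (the order of the `<value/>` elements determines the sort order):

[source,xml]
----
<?xml version="1.0" ?>
<enumsConfig>
  <enum name="priority">
    <value>Low</value>
    <value>Medium</value>
    <value>High</value>
    <value>Urgent</value>
  </enum>
  <enum name="risk">
    <value>Unknown</value>
    <value>Low</value>
    <value>Medium</value>
    <value>High</value>
    <value>Critical</value>
  </enum>
</enumsConfig>
----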
 
@@ -68,9 +66,7 @@ In this example, there are two value lists defined. Each list is between `enum`
 .Changing Values
 [IMPORTANT]
 ====
-
 You cannot change the order of, or remove, existing values in an `<enum/>` without reindexing.
 
 You can, however, add new values to the end.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc b/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
index 3aa0195..ac42636 100644
--- a/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
+++ b/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
@@ -18,7 +18,6 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[WorkingwithExternalFilesandProcesses-TheExternalFileFieldType]]
 == The ExternalFileField Type
 
 The `ExternalFileField` type makes it possible to specify the values for a field in a file outside the Solr index. For such a field, the file contains mappings from a key field to the field value. Another way to think of this is that, instead of specifying the field in documents as they are indexed, Solr finds values for this field in the external file.
@@ -41,7 +40,6 @@ The `keyField` attribute defines the key that will be defined in the external fi
 
 The `valType` attribute specifies the actual type of values that will be found in the file. The type specified must be a float field type, so valid values for this attribute are `pfloat`, `float` or `tfloat`. This attribute can be omitted.
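
Putting these attributes together, a field type definition might look like the following sketch (the `entryRankFile` name matches the example file name referenced below; `keyField` here assumes the unique key field is `id`):

[source,xml]
----
<!-- field values come from an external file rather than the index -->
<fieldType name="entryRankFile" class="solr.ExternalFileField"
           keyField="id" defVal="0" valType="pfloat"
           stored="false" indexed="false"/>
----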
 
-[[WorkingwithExternalFilesandProcesses-FormatoftheExternalFile]]
 === Format of the External File
 
 The file itself is located in Solr's index directory, which by default is `$SOLR_HOME/data`. The name of the file should be `external___fieldname__` or `external___fieldname__.*`. For the example above, then, the file could be named `external_entryRankFile` or `external_entryRankFile.txt`.
@@ -62,10 +60,9 @@ doc40=42
 
 The keys listed in this file do not need to be unique. The file does not need to be sorted, but Solr will be able to perform the lookup faster if it is.
 
-[[WorkingwithExternalFilesandProcesses-ReloadinganExternalFile]]
 === Reloading an External File
 
-It's possible to define an event listener to reload an external file when either a searcher is reloaded or when a new searcher is started. See the section <<query-settings-in-solrconfig.adoc#QuerySettingsinSolrConfig-Query-RelatedListeners,Query-Related Listeners>> for more information, but a sample definition in `solrconfig.xml` might look like this:
+It's possible to define an event listener to reload an external file either when a searcher is reloaded or when a new searcher is started. See the section <<query-settings-in-solrconfig.adoc#query-related-listeners,Query-Related Listeners>> for more information, but a sample definition in `solrconfig.xml` might look like this:
 
 [source,xml]
 ----
@@ -73,15 +70,14 @@ It's possible to define an event listener to reload an external file when either
 <listener event="firstSearcher" class="org.apache.solr.schema.ExternalFileFieldReloader"/>
 ----
 
-[[WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType]]
 == The PreAnalyzedField Type
 
 The `PreAnalyzedField` type provides a way to send serialized token streams to Solr, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing applied in Solr. This is useful if a user wants to submit field content that was already processed by some existing external text processing pipeline (e.g., it has been tokenized, annotated, stemmed, synonyms inserted, etc.), while using all the rich attributes that Lucene's TokenStream provides (per-token attributes).
 
 The serialization format is pluggable using implementations of the PreAnalyzedParser interface. There are two out-of-the-box implementations:
 
-* <<WorkingwithExternalFilesandProcesses-JsonPreAnalyzedParser,JsonPreAnalyzedParser>>: as the name suggests, it parses content that uses JSON to represent field's content. This is the default parser to use if the field type is not configured otherwise.
-* <<WorkingwithExternalFilesandProcesses-SimplePreAnalyzedParser,SimplePreAnalyzedParser>>: uses a simple strict plain text format, which in some situations may be easier to create than JSON.
+* <<JsonPreAnalyzedParser>>: as the name suggests, it parses content that uses JSON to represent a field's content. This is the default parser to use if the field type is not configured otherwise.
+* <<SimplePreAnalyzedParser>>: uses a simple, strict plain-text format, which in some situations may be easier to create than JSON.
 
 There is only one configuration parameter, `parserImpl`. The value of this parameter should be the fully qualified name of a class that implements the PreAnalyzedParser interface. The default value of this parameter is `org.apache.solr.schema.JsonPreAnalyzedParser`.
 
@@ -97,7 +93,6 @@ By default, the query-time analyzer for fields of this type will be the same as
 </fieldType>
 ----
 
-[[WorkingwithExternalFilesandProcesses-JsonPreAnalyzedParser]]
 === JsonPreAnalyzedParser
 
 This is the default serialization format used by the PreAnalyzedField type. It uses a top-level JSON map with the following keys:
@@ -115,8 +110,7 @@ This is the default serialization format used by PreAnalyzedField type. It uses
 
 Any other top-level key is silently ignored.
 
-[[WorkingwithExternalFilesandProcesses-Tokenstreamserialization]]
-==== Token stream serialization
+==== Token Stream Serialization
 
 The token stream is expressed as a JSON list of JSON maps. The map for each token consists of the following keys and values:
 
@@ -136,8 +130,7 @@ The token stream is expressed as a JSON list of JSON maps. The map for each toke
 
 Any other key is silently ignored.
 
-[[WorkingwithExternalFilesandProcesses-Example]]
-==== Example
+==== JsonPreAnalyzedParser Example
 
 [source,json]
 ----
@@ -152,13 +145,11 @@ Any other key is silently ignored.
 }
 ----
 
-[[WorkingwithExternalFilesandProcesses-SimplePreAnalyzedParser]]
 === SimplePreAnalyzedParser
 
 The fully qualified class name to use when specifying this format via the `parserImpl` configuration parameter is `org.apache.solr.schema.SimplePreAnalyzedParser`.
 
-[[WorkingwithExternalFilesandProcesses-Syntax]]
-==== Syntax
+==== SimplePreAnalyzedParser Syntax
 
 The serialization format supported by this parser is as follows:
 
@@ -192,8 +183,7 @@ Special characters in "text" values can be escaped using the escape character `\
 
 Please note that Unicode sequences (e.g. `\u0001`) are not supported.
 
-[[WorkingwithExternalFilesandProcesses-Supportedattributenames]]
-==== Supported attribute names
+==== Supported Attributes
 
 The following token attributes are supported, and identified with short symbolic names:
 
@@ -212,8 +202,7 @@ The following token attributes are supported, and identified with short symbolic
 
 Token positions are tracked and implicitly added to the token stream - the start and end offsets consider only the term text and whitespace, and exclude the space taken by token attributes.
 
-[[WorkingwithExternalFilesandProcesses-Exampletokenstreams]]
-==== Example token streams
+==== Example Token Streams
 
 // TODO: in cwiki each of these examples was in its own "panel" ... do we want something like that here?
 // TODO: these examples match what was in cwiki, but I'm honestly not sure if the formatting there was correct to start?