Posted to commits@lucene.apache.org by is...@apache.org on 2017/07/29 21:59:49 UTC

[12/28] lucene-solr:jira/solr-6630: Merging master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/coreadmin-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
index a4a8ea9..6758701 100644
--- a/solr/solr-ref-guide/src/coreadmin-api.adoc
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -29,7 +29,7 @@ CoreAdmin actions can be executed via HTTP requests that specify an `action`
 
 All action names are uppercase, and are defined in depth in the sections below.
 
-[[CoreAdminAPI-STATUS]]
+[[coreadmin-status]]
 == STATUS
 
 The `STATUS` action returns the status of all running Solr cores, or status for only the named core.
@@ -44,7 +44,7 @@ The name of a core, as listed in the "name" attribute of a `<core>` element in `
 `indexInfo`::
 If `false`, information about the index will not be returned with a core STATUS request. In Solr implementations with a large number of cores (i.e., more than hundreds), retrieving the index information for each core can take a lot of time and isn't always required. The default is `true`.
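
For illustration, here is a sketch of both forms of the call (the host, port, and the core name `my_core` are assumptions):

[source,bash]
----
# Status of all running cores, skipping per-core index details for speed
curl "http://localhost:8983/solr/admin/cores?action=STATUS&indexInfo=false"

# Status of a single named core
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=my_core"
----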
 
-[[CoreAdminAPI-CREATE]]
+[[coreadmin-create]]
 == CREATE
 
 The `CREATE` action creates a new core and registers it.
@@ -102,7 +102,7 @@ WARNING: While it's possible to create a core for a non-existent collection, thi
 The shard id this core represents. Normally you want the shard id to be auto-assigned.
 
 `property._name_=_value_`::
-Sets the core property _name_ to _value_. See the section on defining <<defining-core-properties.adoc#Definingcore.properties-core.properties_files,core.properties file contents>>.
+Sets the core property _name_ to _value_. See the section on defining <<defining-core-properties.adoc#defining-core-properties-files,core.properties file contents>>.
 
 `async`::
 Request ID to track this action which will be processed asynchronously.
@@ -115,7 +115,7 @@ Use `collection.configName=_configname_` to point to the config for a new collec
 http://localhost:8983/solr/admin/cores?action=CREATE&name=my_core&collection=my_collection&shard=shard2
 
 
-[[CoreAdminAPI-RELOAD]]
+[[coreadmin-reload]]
 == RELOAD
 
 The RELOAD action loads a new core from the configuration of an existing, registered Solr core. While the new core is initializing, the existing one will continue to handle requests. When the new Solr core is ready, it takes over and the old core is unloaded.
@@ -134,7 +134,7 @@ RELOAD performs "live" reloads of SolrCore, reusing some existing objects. Some
 `core`::
 The name of the core, as listed in the "name" attribute of a `<core>` element in `solr.xml`. This parameter is required.
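
A minimal sketch of a reload call, assuming a local node and a core named `my_core`:

[source,bash]
----
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=my_core"
----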
 
-[[CoreAdminAPI-RENAME]]
+[[coreadmin-rename]]
 == RENAME
 
 The `RENAME` action changes the name of a Solr core.
@@ -153,7 +153,7 @@ The new name for the Solr core. If the persistent attribute of `<solr>` is `true
 Request ID to track this action which will be processed asynchronously.
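
As a sketch, renaming a core from `my_core` to `my_core_v2` (both names hypothetical; the new name is passed in the `other` parameter):

[source,bash]
----
curl "http://localhost:8983/solr/admin/cores?action=RENAME&core=my_core&other=my_core_v2"
----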
 
 
-[[CoreAdminAPI-SWAP]]
+[[coreadmin-swap]]
 == SWAP
 
 `SWAP` atomically swaps the names used to access two existing Solr cores. This can be used to swap new content into production. The prior core remains available and can be swapped back, if necessary. Each core will be known by the name of the other, after the swap.
@@ -162,9 +162,7 @@ Request ID to track this action which will be processed asynchronously.
 
 [IMPORTANT]
 ====
-
 Do not use `SWAP` with a SolrCloud node. It is not supported and can result in the core being unusable.
-
 ====
 
 === SWAP Parameters
@@ -179,7 +177,7 @@ The name of one of the cores to be swapped. This parameter is required.
 Request ID to track this action which will be processed asynchronously.
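
For example, swapping a freshly built staging core into the production name (both core names hypothetical):

[source,bash]
----
curl "http://localhost:8983/solr/admin/cores?action=SWAP&core=production&other=staging"
----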
 
 
-[[CoreAdminAPI-UNLOAD]]
+[[coreadmin-unload]]
 == UNLOAD
 
 The `UNLOAD` action removes a core from Solr. Active requests will continue to be processed, but no new requests will be sent to the named core. If a core is registered under more than one name, only the given name is removed.
@@ -210,8 +208,7 @@ If `true`, removes everything related to the core, including the index directory
 `async`::
 Request ID to track this action which will be processed asynchronously.
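
Two hedged examples, assuming a core named `my_core`:

[source,bash]
----
# Unload the core but leave its files on disk
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=my_core"

# Unload the core and remove everything related to it
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=my_core&deleteInstanceDir=true"
----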
 
-
-[[CoreAdminAPI-MERGEINDEXES]]
+[[coreadmin-mergeindexes]]
 == MERGEINDEXES
 
 The `MERGEINDEXES` action merges one or more indexes to another index. The indexes must have completed commits, and should be locked against writes until the merge is complete; otherwise the resulting merged index may become corrupted. The target core index must already exist and have a compatible schema with the one or more indexes that will be merged to it. Another commit on the target core should also be performed after the merge is complete.
@@ -243,7 +240,7 @@ Multi-valued, source cores that would be merged.
 Request ID to track this action which will be processed asynchronously.
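
A sketch of the `srcCore` form, followed by the commit on the target core that the text above recommends (core names hypothetical):

[source,bash]
----
curl "http://localhost:8983/solr/admin/cores?action=MERGEINDEXES&core=targetCore&srcCore=core1&srcCore=core2"

# Commit on the target core once the merge is complete
curl "http://localhost:8983/solr/targetCore/update?commit=true"
----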
 
 
-[[CoreAdminAPI-SPLIT]]
+[[coreadmin-split]]
 == SPLIT
 
 The `SPLIT` action splits an index into two or more indexes. The index being split can continue to handle requests. The split pieces can be placed into a specified directory on the server's filesystem, or they can be merged into running Solr cores.
@@ -270,7 +267,6 @@ The key to be used for splitting the index. If this parameter is used, `ranges`
 `async`::
 Request ID to track this action which will be processed asynchronously.
 
-
 === SPLIT Examples
 
 The `core` index will be split into as many pieces as the number of `path` or `targetCore` parameters.
@@ -305,9 +301,9 @@ This example uses the `ranges` parameter with hash ranges 0-500, 501-1000 and 10
 
 The `targetCore` must already exist and must have a compatible schema with the `core` index. A commit is automatically called on the `core` index before it is split.
 
-This command is used as part of the <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD>> command but it can be used for non-cloud Solr cores as well. When used against a non-cloud core without `split.key` parameter, this action will split the source index and distribute its documents alternately so that each split piece contains an equal number of documents. If the `split.key` parameter is specified then only documents having the same route key will be split from the source index.
+This command is used as part of the <<collections-api.adoc#splitshard,SPLITSHARD>> command, but it can be used for non-cloud Solr cores as well. When used against a non-cloud core without the `split.key` parameter, this action will split the source index and distribute its documents alternately so that each split piece contains an equal number of documents. If the `split.key` parameter is specified, then only documents having the same route key will be split from the source index.
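
As a sketch of the `split.key` usage described above (core names and route key hypothetical):

[source,bash]
----
# Split out only the documents routed with key "A!" into an existing target core
curl "http://localhost:8983/solr/admin/cores?action=SPLIT&core=core1&targetCore=core2&split.key=A!"
----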
 
-[[CoreAdminAPI-REQUESTSTATUS]]
+[[coreadmin-requeststatus]]
 == REQUESTSTATUS
 
 Request the status of an already submitted asynchronous CoreAdmin API call.
@@ -326,7 +322,7 @@ The call below will return the status of an already submitted asynchronous CoreA
 [source,bash]
 http://localhost:8983/solr/admin/cores?action=REQUESTSTATUS&requestid=1
 
-[[CoreAdminAPI-REQUESTRECOVERY]]
+[[coreadmin-requestrecovery]]
 == REQUESTRECOVERY
 
 The `REQUESTRECOVERY` action manually asks a core to recover by syncing with the leader. This should be considered an "expert" level command and should be used in situations where the node (SolrCloud replica) is unable to become active automatically.
@@ -338,7 +334,6 @@ The `REQUESTRECOVERY` action manually asks a core to recover by synching with th
 `core`::
 The name of the core to re-sync. This parameter is required.
 
-[[CoreAdminAPI-Examples.1]]
 === REQUESTRECOVERY Examples
 
 [source,bash]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
index bffa71f..77332d3 100644
--- a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
+++ b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
@@ -140,8 +140,6 @@ The CDCR replication logic requires modification to the maintenance logic of the
 
 If the communication with one of the target data centers is slow, the Updates Log on the source data center can grow to a substantial size. In such a scenario, it is necessary for the Updates Log to be able to efficiently find an update operation by its identifier. Because identifiers are assigned incrementally, an efficient search strategy can be implemented. Each transaction log file contains as part of its filename the version number of the first element. This is used to quickly traverse all the transaction log files and find the transaction log file containing one specific version number.
 
-
-[[CrossDataCenterReplication_CDCR_-Monitoring]]
 === Monitoring
 
 CDCR provides the following monitoring capabilities over the replication operations:
@@ -155,24 +153,19 @@ Information about the lifecycle and statistics will be provided on a per-shard b
 
 The CDC Replicator is a background thread that is responsible for replicating updates from a Source data center to one or more target data centers. It is responsible for providing monitoring information on a per-shard basis. As there can be a large number of collections and shards in a cluster, we will use a fixed-size pool of CDC Replicator threads that will be shared across shards.
 
-
-[[CrossDataCenterReplication_CDCR_-Limitations]]
-=== Limitations
+=== CDCR Limitations
 
 The current design of CDCR has some limitations. CDCR will continue to evolve over time and many of these limitations will be addressed. Among them are:
 
 * CDCR is unlikely to be satisfactory for bulk-load situations where the update rate is high, especially if the bandwidth between the Source and target clusters is restricted. In this scenario, the initial bulk load should be performed, the Source and target data centers synchronized, and CDCR then used for incremental updates.
 * CDCR is currently only active-passive; data is pushed from the Source cluster to the target cluster. There is active work being done in this area in the 6x code line to remove this limitation.
 * CDCR works most robustly with the same number of shards in the Source and target collection. The shards in the two collections may have different numbers of replicas.
+* Running CDCR with the indexes on HDFS is not currently supported; see the https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS] JIRA issue.
 
-
-[[CrossDataCenterReplication_CDCR_-Configuration]]
-== Configuration
+== CDCR Configuration
 
 The source and target configurations differ in the case of the data centers being in separate clusters. "Cluster" here means separate ZooKeeper ensembles controlling disjoint Solr instances. Whether these data centers are physically separated or not is immaterial for this discussion.
 
-
-[[CrossDataCenterReplication_CDCR_-SourceConfiguration]]
 === Source Configuration
 
 Here is a sample of a source configuration file, a section in `solrconfig.xml`. The presence of the <replica> section causes CDCR to use this cluster as the Source; this section should not be present in the target collections in the cluster-to-cluster case. Details about each setting are after the two examples:
@@ -211,8 +204,6 @@ Here is a sample of a source configuration file, a section in `solrconfig.xml`.
 </updateHandler>
 ----
 
-
-[[CrossDataCenterReplication_CDCR_-TargetConfiguration]]
 === Target Configuration
 
 Here is a typical target configuration.
@@ -256,7 +247,6 @@ The configuration details, defaults and options are as follows:
 
 CDCR can be configured to forward update requests to one or more replicas. A replica is defined with a “replica” list as follows:
 
-
 `zkHost`::
 The host address for ZooKeeper of the target SolrCloud. Usually this is a comma-separated list of addresses to each node in the target ZooKeeper ensemble. This parameter is required.
 
@@ -303,41 +293,27 @@ Monitor actions are performed at a core level, i.e., by using the following base
 
 Currently, none of the CDCR API calls have parameters.
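
In curl terms, both flavors are plain GET requests; the placeholders below follow the style of the examples later in this section:

[source,bash]
----
# Control actions are addressed to a collection
curl "http://host:8983/solr/<collection_name>/cdcr?action=STATUS"

# Monitor actions are addressed to an individual core
curl "http://host:8983/solr/<core_name>/cdcr?action=QUEUES"
----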
 
-
 === API Entry Points (Control)
 
-* `<collection>/cdcr?action=STATUS`: <<CrossDataCenterReplication_CDCR_-STATUS,Returns the current state>> of CDCR.
-* `<collection>/cdcr?action=START`: <<CrossDataCenterReplication_CDCR_-START,Starts CDCR>> replication
-* `<collection>/cdcr?action=STOP`: <<CrossDataCenterReplication_CDCR_-STOP,Stops CDCR>> replication.
-* `<collection>/cdcr?action=ENABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-ENABLEBUFFER,Enables the buffering>> of updates.
-* `<collection>/cdcr?action=DISABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-DISABLEBUFFER,Disables the buffering>> of updates.
-
+* `<collection>/cdcr?action=STATUS`: <<CDCR STATUS,Returns the current state>> of CDCR.
+* `<collection>/cdcr?action=START`: <<CDCR START,Starts CDCR>> replication
+* `<collection>/cdcr?action=STOP`: <<CDCR STOP,Stops CDCR>> replication.
+* `<collection>/cdcr?action=ENABLEBUFFER`: <<ENABLEBUFFER,Enables the buffering>> of updates.
+* `<collection>/cdcr?action=DISABLEBUFFER`: <<DISABLEBUFFER,Disables the buffering>> of updates.
 
 === API Entry Points (Monitoring)
 
-* `core/cdcr?action=QUEUES`: <<CrossDataCenterReplication_CDCR_-QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
-* `core/cdcr?action=OPS`: <<CrossDataCenterReplication_CDCR_-OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
-* `core/cdcr?action=ERRORS`: <<CrossDataCenterReplication_CDCR_-ERRORS,Fetches statistics and other information about replication errors>> for each replica.
+* `core/cdcr?action=QUEUES`: <<QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
+* `core/cdcr?action=OPS`: <<OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
+* `core/cdcr?action=ERRORS`: <<ERRORS,Fetches statistics and other information about replication errors>> for each replica.
 
 === Control Commands
 
-[[CrossDataCenterReplication_CDCR_-STATUS]]
-==== STATUS
+==== CDCR STATUS
 
 `/collection/cdcr?action=STATUS`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command.
-
-===== Output
-
-*Output Content*
-
-The current state of the CDCR, which includes the state of the replication process and the state of the buffer.
-
-[[cdcr_examples]]
-===== Examples
+===== CDCR Status Example
 
 *Input*
 
@@ -362,22 +338,15 @@ The current state of the CDCR, which includes the state of the replication proce
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-ENABLEBUFFER]]
 ==== ENABLEBUFFER
 
 `/collection/cdcr?action=ENABLEBUFFER`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command.
-
-===== Output
+===== Enable Buffer Response
 
-*Output Content*
-
-The status of the process and an indication of whether the buffer is enabled
+The status of the process and an indication of whether the buffer is enabled.
 
-===== Examples
+===== Enable Buffer Example
 
 *Input*
 
@@ -402,20 +371,15 @@ The status of the process and an indication of whether the buffer is enabled
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-DISABLEBUFFER]]
 ==== DISABLEBUFFER
 
 `/collection/cdcr?action=DISABLEBUFFER`
 
-===== Input
-
-*Query Parameters:* There are no parameters to this command
-
-===== Output
+===== Disable Buffer Response
 
-*Output Content:* The status of CDCR and an indication that the buffer is disabled.
+The status of CDCR and an indication that the buffer is disabled.
 
-===== Examples
+===== Disable Buffer Example
 
 *Input*
 
@@ -440,20 +404,15 @@ http://host:8983/solr/<collection_name>/cdcr?action=DISABLEBUFFER
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-START]]
-==== START
+==== CDCR START
 
 `/collection/cdcr?action=START`
 
-===== Input
+===== CDCR Start Response
 
-*Query Parameters:* There are no parameters for this action
+Confirmation that CDCR is started and the status of buffering.
 
-===== Output
-
-*Output Content:* Confirmation that CDCR is started and the status of buffering
-
-===== Examples
+===== CDCR Start Examples
 
 *Input*
 
@@ -478,20 +437,15 @@ http://host:8983/solr/<collection_name>/cdcr?action=START
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-STOP]]
-==== STOP
+==== CDCR STOP
 
 `/collection/cdcr?action=STOP`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command.
-
-===== Output
+===== CDCR Stop Response
 
-*Output Content:* The status of CDCR, including the confirmation that CDCR is stopped
+The status of CDCR, including the confirmation that CDCR is stopped.
 
-===== Examples
+===== CDCR Stop Examples
 
 *Input*
 
@@ -517,19 +471,13 @@ http://host:8983/solr/<collection_name>/cdcr?action=START
 ----
 
 
-[[CrossDataCenterReplication_CDCR_-Monitoringcommands]]
-=== Monitoring commands
+=== CDCR Monitoring Commands
 
-[[CrossDataCenterReplication_CDCR_-QUEUES]]
 ==== QUEUES
 
 `/core/cdcr?action=QUEUES`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command
-
-===== Output
+===== QUEUES Response
 
 *Output Content*
 
@@ -537,7 +485,7 @@ The output is composed of a list “queues” which contains a list of (ZooKeepe
 
 The “queues” object also contains information about the updates log, such as the size (in bytes) of the updates log on disk (“tlogTotalSize”), the number of transaction log files (“tlogTotalCount”) and the status of the updates log synchronizer (“updateLogSynchronizer”).
 
-===== Examples
+===== QUEUES Examples
 
 *Input*
 
@@ -569,20 +517,15 @@ The “queues” object also contains information about the updates log, such as
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-OPS]]
 ==== OPS
 
 `/core/cdcr?action=OPS`
 
-===== Input
-
-*Query Parameters:* There are no parameters for this command.
-
-===== Output
+===== OPS Response
 
-*Output Content:* The output is composed of a list “operationsPerSecond” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the average number of processed operations per second since the start of the replication process is provided. The operations are further broken down into two groups: add and delete operations.
+The output is composed of `operationsPerSecond` which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the average number of processed operations per second since the start of the replication process is provided. The operations are further broken down into two groups: add and delete operations.
 
-===== Examples
+===== OPS Examples
 
 *Input*
 
@@ -612,20 +555,15 @@ The “queues” object also contains information about the updates log, such as
 }
 ----
 
-[[CrossDataCenterReplication_CDCR_-ERRORS]]
 ==== ERRORS
 
 `/core/cdcr?action=ERRORS`
 
-===== Input
+===== ERRORS Response
 
-*Query Parameters:* There are no parameters for this command.
+The output is composed of a list “errors” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, information about errors encountered during the replication is provided, such as the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
 
-===== Output
-
-*Output Content:* The output is composed of a list “errors” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, information about errors encountered during the replication is provided, such as the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
-
-===== Examples
+===== ERRORS Examples
 
 *Input*
 
@@ -728,7 +666,6 @@ http://host:port/solr/collection_name/cdcr?action=DISABLEBUFFER
 +
 * Re-enable indexing
 
-[[CrossDataCenterReplication_CDCR_-Monitoring.1]]
 == Monitoring
 
 .  Network and disk space monitoring are essential. Ensure that the system has plenty of available storage to queue up changes if there is a disconnect between the Source and Target. A network outage between the two data centers can cause your disk usage to grow.
@@ -761,10 +698,5 @@ When rolling in upgrades to your indexer or application, you should shutdown the
 curl http://<Source>/solr/cloud1/update -H 'Content-type:application/json' -d '[{"SKU":"ABC"}]'
 
 #check the Target
-curl "http://<Target>:8983/solr/<collection_name>/select?q=SKU:ABC&wt=json&indent=true"
+curl "http://<Target>:8983/solr/<collection_name>/select?q=SKU:ABC&indent=true"
 ----
-
-[[CrossDataCenterReplication_CDCR_-Limitations.1]]
-== Limitations
-
-* Running CDCR with the indexes on HDFS is not currently supported, see: https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
index c68a3ad..d36d781 100644
--- a/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc
@@ -33,9 +33,9 @@ The `${solr.core.name}` substitution will cause the name of the current core to
 
 If you are using replication to replicate the Solr index (as described in <<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>), then the `<dataDir>` directory should correspond to the index directory used in the replication configuration.
 
-NOTE: If the environment variable `SOLR_DATA_HOME` if defined, or if `solr.data.home` is configured for your DirectoryFactory, the location of data directory will be `<SOLR_DATA_HOME>/<instance_name>/data`.
+NOTE: If the environment variable `SOLR_DATA_HOME` is defined, or if `solr.data.home` is configured for your DirectoryFactory, or if `solr.xml` contains an
+element `<solrDataHome>`, then the location of the data directory will be `<SOLR_DATA_HOME>/<instance_name>/data`.
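
A hedged illustration of the environment-variable route (the path is hypothetical):

[source,bash]
----
# In bin/solr.in.sh, or exported in the environment before starting Solr
export SOLR_DATA_HOME=/var/solr/data
----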
 
-[[DataDirandDirectoryFactoryinSolrConfig-SpecifyingtheDirectoryFactoryForYourIndex]]
 == Specifying the DirectoryFactory For Your Index
 
 The default {solr-javadocs}/solr-core/org/apache/solr/core/StandardDirectoryFactory.html[`solr.StandardDirectoryFactory`] is filesystem based, and tries to pick the best implementation for the current JVM and platform. You can force a particular implementation and/or config options by specifying {solr-javadocs}/solr-core/org/apache/solr/core/MMapDirectoryFactory.html[`solr.MMapDirectoryFactory`], {solr-javadocs}/solr-core/org/apache/solr/core/NIOFSDirectoryFactory.html[`solr.NIOFSDirectoryFactory`], or {solr-javadocs}/solr-core/org/apache/solr/core/SimpleFSDirectoryFactory.html[`solr.SimpleFSDirectoryFactory`].
@@ -57,7 +57,5 @@ The {solr-javadocs}/solr-core/org/apache/solr/core/RAMDirectoryFactory.html[`sol
 
 [NOTE]
 ====
-
 If you are using Hadoop and would like to store your indexes in HDFS, you should use the {solr-javadocs}/solr-core/org/apache/solr/core/HdfsDirectoryFactory.html[`solr.HdfsDirectoryFactory`] instead of either of the above implementations. For more details, see the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>>.
-
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/dataimport-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/dataimport-screen.adoc b/solr/solr-ref-guide/src/dataimport-screen.adoc
index 363a2bd..9f3cb43 100644
--- a/solr/solr-ref-guide/src/dataimport-screen.adoc
+++ b/solr/solr-ref-guide/src/dataimport-screen.adoc
@@ -23,7 +23,6 @@ The Dataimport screen shows the configuration of the DataImportHandler (DIH) and
 .The Dataimport Screen
 image::images/dataimport-screen/dataimport.png[image,width=485,height=250]
 
-
 This screen also lets you adjust various options to control how the data is imported to Solr, and view the data import configuration file that controls the import.
 
 For more information about data importing with DIH, see the section on <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/de-duplication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/de-duplication.adoc b/solr/solr-ref-guide/src/de-duplication.adoc
index 3e9cd46..67f8d8c 100644
--- a/solr/solr-ref-guide/src/de-duplication.adoc
+++ b/solr/solr-ref-guide/src/de-duplication.adoc
@@ -26,7 +26,6 @@ Preventing duplicate or near duplicate documents from entering an index or taggi
 * Lookup3Signature: 64-bit hash used for exact duplicate detection. This is much faster than MD5 and smaller to index.
 * http://wiki.apache.org/solr/TextProfileSignature[TextProfileSignature]: Fuzzy hashing implementation from Apache Nutch for near duplicate detection. It's tunable but works best on longer text.
 
-
 Other, more sophisticated algorithms for fuzzy/near hashing can be added later.
 
 [IMPORTANT]
@@ -36,12 +35,10 @@ Adding in the de-duplication process will change the `allowDups` setting so that
 Of course the `signatureField` could be the unique field, but generally you want the unique field to be unique. When a document is added, a signature will automatically be generated and attached to the document in the specified `signatureField`.
 ====
 
-[[De-Duplication-ConfigurationOptions]]
 == Configuration Options
 
 There are two places in Solr to configure de-duplication: in `solrconfig.xml` and in `schema.xml`.
 
-[[De-Duplication-Insolrconfig.xml]]
 === In solrconfig.xml
 
 The `SignatureUpdateProcessorFactory` has to be registered in `solrconfig.xml` as part of an <<update-request-processors.adoc#update-request-processors,Update Request Processor Chain>>, as in this example:
@@ -84,8 +81,6 @@ Set to *false* to disable de-duplication processing. The default is *true*.
 overwriteDupes::
 If true (the default), when a document already exists that matches this signature, it will be overwritten.
 
-
-[[De-Duplication-Inschema.xml]]
 === In schema.xml
 
 If you are using a separate field for storing the signature, you must have it indexed:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/defining-core-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-core-properties.adoc b/solr/solr-ref-guide/src/defining-core-properties.adoc
index a533098..1424327 100644
--- a/solr/solr-ref-guide/src/defining-core-properties.adoc
+++ b/solr/solr-ref-guide/src/defining-core-properties.adoc
@@ -29,7 +29,6 @@ A minimal `core.properties` file looks like the example below. However, it can a
 name=my_core_name
 ----
 
-[[Definingcore.properties-Placementofcore.properties]]
 == Placement of core.properties
 
 Solr cores are configured by placing a file named `core.properties` in a sub-directory under `solr.home`. There are no a-priori limits to the depth of the tree, nor are there limits to the number of cores that can be defined. Cores may be anywhere in the tree with the exception that cores may _not_ be defined under an existing core. That is, the following is not allowed:
@@ -61,11 +60,8 @@ Your `core.properties` file can be empty if necessary. Suppose `core.properties`
 You can run Solr without configuring any cores.
 ====
 
-[[Definingcore.properties-Definingcore.propertiesFiles]]
 == Defining core.properties Files
 
-[[Definingcore.properties-core.properties_files]]
-
 The minimal `core.properties` file is an empty file, in which case all of the properties are defaulted appropriately.
 
 Java properties files allow the hash (`#`) or bang (`!`) characters to specify comment-to-end-of-line.
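
For illustration, a small commented `core.properties` could be created from a shell like this (the path, core name, and `loadOnStartup` value are assumptions):

[source,bash]
----
mkdir -p /var/solr/data/my_core
cat > /var/solr/data/my_core/core.properties <<'EOF'
# comments may start with a hash
! or with a bang
name=my_core
loadOnStartup=true
EOF
----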
@@ -98,4 +94,4 @@ The following properties are available:
 
 `roles`:: Future parameter for SolrCloud or a way for users to mark nodes for their own use.
 
-Additional user-defined properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#Configuringsolrconfig.xml-SubstitutingPropertiesinSolrConfigFiles,Substituting Properties in Solr Config Files>>.
+Additional user-defined properties may be specified for use as variables. For more information on how to define local properties, see the section <<configuring-solrconfig-xml.adoc#substituting-properties-in-solr-config-files,Substituting Properties in Solr Config Files>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/defining-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/defining-fields.adoc b/solr/solr-ref-guide/src/defining-fields.adoc
index 8e6de9c..82f0345 100644
--- a/solr/solr-ref-guide/src/defining-fields.adoc
+++ b/solr/solr-ref-guide/src/defining-fields.adoc
@@ -20,8 +20,7 @@
 
 Fields are defined in the fields element of `schema.xml`. Once you have the field types set up, defining the fields themselves is simple.
 
-[[DefiningFields-Example]]
-== Example
+== Example Field Definition
 
 The following example defines a field named `price` with a type named `float` and a default value of `0.0`; the `indexed` and `stored` properties are explicitly set to `true`, while any other properties specified on the `float` field type are inherited.
 
@@ -30,7 +29,6 @@ The following example defines a field named `price` with a type named `float` an
 <field name="price" type="float" default="0.0" indexed="true" stored="true"/>
 ----
 
-[[DefiningFields-FieldProperties]]
 == Field Properties
 
 Field definitions can have the following properties:
@@ -44,7 +42,6 @@ The name of the `fieldType` for this field. This will be found in the `name` att
 `default`::
 A default value that will be added automatically to any document that does not have a value in this field when it is indexed. If this property is not specified, there is no default.
 
-[[DefiningFields-OptionalFieldTypeOverrideProperties]]
 == Optional Field Type Override Properties
 
 Fields can have many of the same properties as field types. Properties from the table below which are specified on an individual field will override any explicit value for that property specified on the `fieldType` of the field, or any implicit default property value provided by the underlying `fieldType` implementation. The table below is reproduced from <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>>, which has more details:
@@ -66,7 +63,7 @@ Fields can have many of the same properties as field types. Properties from the
 |omitPositions |Similar to `omitTermFreqAndPositions` but preserves term frequency information. |true or false |*
 |termVectors termPositions termOffsets termPayloads |These options instruct Solr to maintain full term vectors for each document, optionally including position, offset and payload information for each term occurrence in those vectors. These can be used to accelerate highlighting and other ancillary functionality, but impose a substantial cost in terms of index size. They are not necessary for typical uses of Solr. |true or false |false
 |required |Instructs Solr to reject any attempts to add a document which does not have a value for this field. This property defaults to false. |true or false |false
-|useDocValuesAsStored |If the field has `<<docvalues.adoc#docvalues,docValues>>` enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#CommonQueryParameters-Thefl_FieldList_Parameter,fl parameter>>. |true or false |true
+|useDocValuesAsStored |If the field has `<<docvalues.adoc#docvalues,docValues>>` enabled, setting this to true would allow the field to be returned as if it were a stored field (even if it has `stored=false`) when matching "`*`" in an <<common-query-parameters.adoc#fl-field-list-parameter,fl parameter>>. |true or false |true
 |large |Large fields are always lazy loaded and will only take up space in the document cache if the actual value is < 512KB. This option requires `stored="true"` and `multiValued="false"`. It's intended for fields that might have very large values so that they don't get cached in memory. |true or false |false
 |===
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
index 4003f1a..392a0df 100644
--- a/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
+++ b/solr/solr-ref-guide/src/detecting-languages-during-indexing.adoc
@@ -31,12 +31,10 @@ For specific information on each of these language identification implementation
 
 For more information about language analysis in Solr, see <<language-analysis.adoc#language-analysis,Language Analysis>>.
 
-[[DetectingLanguagesDuringIndexing-ConfiguringLanguageDetection]]
 == Configuring Language Detection
 
 You can configure the `langid` UpdateRequestProcessor in `solrconfig.xml`. Both implementations take the same parameters, which are described in the following section. At a minimum, you must specify the fields for language identification and a field for the resulting language code.
 
-[[DetectingLanguagesDuringIndexing-ConfiguringTikaLanguageDetection]]
 === Configuring Tika Language Detection
 
 Here is an example of a minimal Tika `langid` configuration in `solrconfig.xml`:
@@ -51,7 +49,6 @@ Here is an example of a minimal Tika `langid` configuration in `solrconfig.xml`:
 </processor>
 ----
 
-[[DetectingLanguagesDuringIndexing-ConfiguringLangDetectLanguageDetection]]
 === Configuring LangDetect Language Detection
 
 Here is an example of a minimal LangDetect `langid` configuration in `solrconfig.xml`:
@@ -66,7 +63,6 @@ Here is an example of a minimal LangDetect `langid` configuration in `solrconfig
 </processor>
 ----
 
-[[DetectingLanguagesDuringIndexing-langidParameters]]
 == langid Parameters
 
 As previously mentioned, both implementations of the `langid` UpdateRequestProcessor take the same parameters.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/distributed-requests.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-requests.adoc b/solr/solr-ref-guide/src/distributed-requests.adoc
index 6d2c585..b9c3920 100644
--- a/solr/solr-ref-guide/src/distributed-requests.adoc
+++ b/solr/solr-ref-guide/src/distributed-requests.adoc
@@ -22,10 +22,9 @@ When a Solr node receives a search request, the request is routed behind the sce
 
 The chosen replica acts as an aggregator: it creates internal requests to randomly chosen replicas of every shard in the collection, coordinates the responses, issues any subsequent internal requests as needed (for example, to refine facets values, or request additional stored fields), and constructs the final response for the client.
 
-[[DistributedRequests-LimitingWhichShardsareQueried]]
 == Limiting Which Shards are Queried
 
-While one of the advantages of using SolrCloud is the ability to query very large collections distributed among various shards, in some cases <<shards-and-indexing-data-in-solrcloud.adoc#ShardsandIndexingDatainSolrCloud-DocumentRouting,you may know that you are only interested in results from a subset of your shards>>. You have the option of searching over all of your data or just parts of it.
+While one of the advantages of using SolrCloud is the ability to query very large collections distributed among various shards, in some cases <<shards-and-indexing-data-in-solrcloud.adoc#document-routing,you may know that you are only interested in results from a subset of your shards>>. You have the option of searching over all of your data or just parts of it.
 
 Querying all shards for a collection should look familiar; it's as though SolrCloud didn't even come into play:
 
@@ -71,7 +70,6 @@ And of course, you can specify a list of shards (separated by commas) each defin
 http://localhost:8983/solr/gettingstarted/select?q=*:*&shards=shard1,localhost:7574/solr/gettingstarted|localhost:7500/solr/gettingstarted
 ----
 
-[[DistributedRequests-ConfiguringtheShardHandlerFactory]]
 == Configuring the ShardHandlerFactory
 
 You can directly configure aspects of the concurrency and thread-pooling used within distributed search in Solr. This allows for finer grained control and you can tune it to target your own specific requirements. The default configuration favors throughput over latency.
@@ -118,7 +116,6 @@ If specified, the thread pool will use a backing queue instead of a direct hando
 `fairnessPolicy`::
 Chooses the JVM specifics dealing with fair policy queuing; if enabled, distributed searches will be handled in a first-in-first-out fashion at a cost to throughput. If disabled, throughput will be favored over latency. The default is `false`.
 
-[[DistributedRequests-ConfiguringstatsCache_DistributedIDF_]]
 == Configuring statsCache (Distributed IDF)
 
 Document and term statistics are needed in order to calculate relevancy. Solr provides four implementations out of the box when it comes to document stats calculation:
@@ -135,15 +132,13 @@ The implementation can be selected by setting `<statsCache>` in `solrconfig.xml`
 <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
 ----
 
-[[DistributedRequests-AvoidingDistributedDeadlock]]
 == Avoiding Distributed Deadlock
 
 Each shard serves top-level query requests and then makes sub-requests to all of the other shards. Care should be taken to ensure that the max number of threads serving HTTP requests is greater than the possible number of requests from both top-level clients and other shards. If this is not the case, the configuration may result in a distributed deadlock.
 
 For example, a deadlock might occur in the case of two shards, each with just a single thread to service HTTP requests. Both threads could receive a top-level request concurrently, and make sub-requests to each other. Because there are no more remaining threads to service requests, the incoming requests will be blocked until the other pending requests are finished, but they will not finish since they are waiting for the sub-requests. By ensuring that Solr is configured to handle a sufficient number of threads, you can avoid deadlock situations like this.
 
-[[DistributedRequests-PreferLocalShards]]
-== Prefer Local Shards
+== preferLocalShards Parameter
 
 Solr allows you to pass an optional boolean parameter named `preferLocalShards` to indicate that a distributed query should prefer local replicas of a shard when available. In other words, if a query includes `preferLocalShards=true`, then the query controller will look for local replicas to service the query instead of selecting replicas at random from across the cluster. This is useful when a query requests many fields or large fields to be returned per document because it avoids moving large amounts of data over the network when it is available locally. In addition, this feature can be useful for minimizing the impact of a problematic replica with degraded performance, as it reduces the likelihood that the degraded replica will be hit by other healthy replicas.
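
A minimal sketch, reusing the `gettingstarted` collection from the examples above:

[source,bash]
----
curl "http://localhost:8983/solr/gettingstarted/select?q=*:*&preferLocalShards=true"
----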
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
index b1ad8dc..0e6e7d8 100644
--- a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
+++ b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
@@ -26,14 +26,12 @@ Everything on this page is specific to legacy setup of distributed search. Users
 
 Updates may be reordered in transit (i.e., replica A may see update X then Y, and replica B may see update Y then X). *deleteByQuery* also handles reorders the same way, to ensure replicas are consistent. All replicas of a shard are consistent, even if the updates arrive in a different order on different replicas.
 
-[[DistributedSearchwithIndexSharding-DistributingDocumentsacrossShards]]
 == Distributing Documents across Shards
 
 When not using SolrCloud, it is up to you to get all your documents indexed on each shard of your server farm. Solr supports distributed indexing (routing) in its true form only in the SolrCloud mode.
 
 In the legacy distributed mode, Solr does not calculate universal term/doc frequencies. For most large-scale implementations, it is not likely to matter that Solr calculates TF/IDF at the shard level. However, if your collection is heavily skewed in its distribution across servers, you may find misleading relevancy results in your searches. In general, it is probably best to randomly distribute documents to your shards.
 
-[[DistributedSearchwithIndexSharding-ExecutingDistributedSearcheswiththeshardsParameter]]
 == Executing Distributed Searches with the shards Parameter
 
 If a query request includes the `shards` parameter, the Solr server distributes the request across all the shards listed as arguments to the parameter. The `shards` parameter uses this syntax:
@@ -63,7 +61,6 @@ The following components support distributed search:
 * The *Stats* component, which returns simple statistics for numeric fields within the DocSet.
 * The *Debug* component, which helps with debugging.
 
-[[DistributedSearchwithIndexSharding-LimitationstoDistributedSearch]]
 == Limitations to Distributed Search
 
 Distributed searching in Solr has the following limitations:
@@ -78,12 +75,10 @@ Distributed searching in Solr has the following limitations:
 
 Formerly a limitation was that TF/IDF relevancy computations only used shard-local statistics. This is still the case by default. If your data isn't randomly distributed, or if you want more exact statistics, then remember to configure the ExactStatsCache.
 
-[[DistributedSearchwithIndexSharding-AvoidingDistributedDeadlock]]
-== Avoiding Distributed Deadlock
+== Avoiding Distributed Deadlock with Distributed Search
 
 Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock. It can be avoided by following the instructions in the section <<distributed-requests.adoc#distributed-requests,Distributed Requests>>.
 
-[[DistributedSearchwithIndexSharding-TestingIndexShardingonTwoLocalServers]]
 == Testing Index Sharding on Two Local Servers
 
 For simple functional testing, it's easiest to just set up two local Solr servers on different ports. (In a production environment, of course, these servers would be deployed on separate machines.)

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/documents-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-screen.adoc b/solr/solr-ref-guide/src/documents-screen.adoc
index 4605dd7..7c16ee9 100644
--- a/solr/solr-ref-guide/src/documents-screen.adoc
+++ b/solr/solr-ref-guide/src/documents-screen.adoc
@@ -42,28 +42,24 @@ The first step is to define the RequestHandler to use (aka, 'qt'). By default `/
 
 Then choose the Document Type to define the type of document to load. The remaining parameters will change depending on the document type selected.
 
-[[DocumentsScreen-JSON]]
-== JSON
+== JSON Documents
 
 When using the JSON document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper JSON format.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
+This option will only add or overwrite documents to the index; for other update tasks, see the <<Solr Command>> option.
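
For comparison, the command-line equivalent of this screen is a plain update request (the core name and document are hypothetical):

[source,bash]
----
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/my_core/update?commitWithin=1000' \
  -d '[{"id":"1","title":"An example document"}]'
----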
 
-[[DocumentsScreen-CSV]]
-== CSV
+== CSV Documents
 
 When using the CSV document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper CSV format, with columns delimited and one row per document.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-[[DocumentsScreen-DocumentBuilder]]
 == Document Builder
 
 The Document Builder provides a wizard-like interface to enter fields of a document.
 
-[[DocumentsScreen-FileUpload]]
 == File Upload
 
 The File Upload option allows choosing a prepared file and uploading it. If using only `/update` for the Request-Handler option, you will be limited to XML, CSV, and JSON.
@@ -72,18 +68,16 @@ However, to use the ExtractingRequestHandler (aka Solr Cell), you can modify the
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
 
-[[DocumentsScreen-SolrCommand]]
 == Solr Command
 
 The Solr Command option allows you to use XML or JSON to perform specific actions on documents, such as defining documents to be added or deleted, updating only certain fields of documents, or issuing commit and optimize commands on the index.
 
 The documents should be structured as they would be if using `/update` on the command line.
 
-[[DocumentsScreen-XML]]
-== XML
+== XML Documents
 
 When using the XML document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper Solr XML format, with each document separated by `<doc>` tags and each field defined.
 
 Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not **true**, then the incoming documents will be dropped).
 
-This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
+This option will only add or overwrite documents to the index; for other update tasks, see the <<Solr Command>> option.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/docvalues.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/docvalues.adoc b/solr/solr-ref-guide/src/docvalues.adoc
index b2debda..c0b7c31 100644
--- a/solr/solr-ref-guide/src/docvalues.adoc
+++ b/solr/solr-ref-guide/src/docvalues.adoc
@@ -28,7 +28,6 @@ For other features that we now commonly associate with search, such as sorting,
 
 In Lucene 4.0, a new approach was introduced. DocValue fields are now column-oriented fields with a document-to-value mapping built at index time. This approach promises to relieve some of the memory requirements of the fieldCache and make lookups for faceting, sorting, and grouping much faster.
 
-[[DocValues-EnablingDocValues]]
 == Enabling DocValues
 
 To use docValues, you only need to enable it for a field that you will use it with. As with all schema design, you need to define a field type and then define fields of that type with docValues enabled. All of these actions are done in `schema.xml`.
@@ -58,7 +57,7 @@ DocValues are only available for specific field types. The types chosen determin
 
 These Lucene types are related to how the {lucene-javadocs}/core/org/apache/lucene/index/DocValuesType.html[values are sorted and stored].
 
-There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#FieldTypeDefinitionsandProperties-docValuesFormat,used by the field type>>. The default implementation employs a mixture of loading some things into memory and keeping some on disk. In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation]. For example, you could choose to keep everything in memory by specifying `docValuesFormat="Memory"` on a field type:
+There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#docvaluesformat,used by the field type>>. The default implementation employs a mixture of loading some things into memory and keeping some on disk. In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation]. For example, you could choose to keep everything in memory by specifying `docValuesFormat="Memory"` on a field type:
 
 [source,xml]
 ----
@@ -74,14 +73,13 @@ Lucene index back-compatibility is only supported for the default codec. If you
 
 === Sorting, Faceting & Functions
 
-If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sorting>>, <<faceting.adoc#faceting,faceting>> or <<function-queries.adoc#function-queries,function queries>>.
+If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#sort-parameter,sorting>>, <<faceting.adoc#faceting,faceting>> or <<function-queries.adoc#function-queries,function queries>>.
 
-[[DocValues-RetrievingDocValuesDuringSearch]]
 === Retrieving DocValues During Search
 
 Field values retrieved during search queries are typically returned from stored values. However, non-stored docValues fields will also be returned along with other stored fields when all fields (or pattern matching globs) are specified to be returned (e.g. "`fl=*`") for search queries, depending on the effective value of the `useDocValuesAsStored` parameter for each field. For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`. See <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>> & <<defining-fields.adoc#defining-fields,Defining Fields>> for more details.
 
-When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the <<common-query-parameters.adoc#CommonQueryParameters-Thefl_FieldList_Parameter,fl param>>, but will not match glob patterns (`"*"`). Note that returning DocValues along with "regular" stored fields at query time has performance implications that stored fields may not because DocValues are column-oriented and may therefore incur additional cost to retrieve for each returned document. Also note that while returning non-stored fields from DocValues, the values of a multi-valued field are returned in sorted order (and not insertion order). If you require the multi-valued fields to be returned in the original insertion order, then make your multi-valued field as stored (such a change requires re-indexing).
+When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the <<common-query-parameters.adoc#fl-field-list-parameter,fl param>>, but will not match glob patterns (`"*"`). Note that returning DocValues along with "regular" stored fields at query time has performance implications that stored fields may not, because DocValues are column-oriented and may therefore incur additional cost to retrieve for each returned document. Also note that while returning non-stored fields from DocValues, the values of a multi-valued field are returned in sorted order (and not insertion order). If you require the multi-valued fields to be returned in the original insertion order, then make your multi-valued field stored (such a change requires re-indexing).
 
 In cases where the query is returning _only_ docValues fields, performance may improve since returning stored fields requires disk reads and decompression, whereas returning docValues fields in the fl list only requires memory access.
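
A hedged example of requesting a non-stored docValues field explicitly by name (the field names are hypothetical):

[source,bash]
----
# "price" is returned only because it is named in fl; a bare fl=* would skip it
# when useDocValuesAsStored="false"
curl "http://localhost:8983/solr/my_core/select?q=*:*&fl=id,price"
----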
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
index be2025e..cbd8754 100644
--- a/solr/solr-ref-guide/src/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -24,10 +24,8 @@ This section describes enabling SSL using a self-signed certificate.
 
 For background on SSL certificates and keys, see http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/.
 
-[[EnablingSSL-BasicSSLSetup]]
 == Basic SSL Setup
 
-[[EnablingSSL-Generateaself-signedcertificateandakey]]
 === Generate a Self-Signed Certificate and a Key
 
 To generate a self-signed certificate and a single key that will be used to authenticate both the server and the client, we'll use the JDK https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html[`keytool`] command and create a separate keystore. This keystore will also be used as a truststore below. It's possible to use the keystore that comes with the JDK for these purposes, and to use a separate truststore, but those options aren't covered here.
@@ -45,7 +43,6 @@ keytool -genkeypair -alias solr-ssl -keyalg RSA -keysize 2048 -keypass secret -s
 
 The above command will create a keystore file named `solr-ssl.keystore.jks` in the current directory.
 
-[[EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL]]
 === Convert the Certificate and Key to PEM Format for Use with cURL
 
 cURL isn't capable of using JKS formatted keystores, so the JKS keystore needs to be converted to PEM format, which cURL understands.
@@ -73,7 +70,6 @@ If you want to use cURL on OS X Yosemite (10.10), you'll need to create a certif
 openssl pkcs12 -nokeys -in solr-ssl.keystore.p12 -out solr-ssl.cacert.pem
 ----
 
-[[EnablingSSL-SetcommonSSLrelatedsystemproperties]]
 === Set Common SSL-Related System Properties
 
 The Solr Control Script is already set up to pass SSL-related Java system properties to the JVM. To activate the SSL settings, uncomment and update the set of properties beginning with SOLR_SSL_* in `bin/solr.in.sh` (or `bin\solr.in.cmd` on Windows).
@@ -116,7 +112,6 @@ REM Enable clients to authenticate (but not require)
 set SOLR_SSL_WANT_CLIENT_AUTH=false
 ----
 
-[[EnablingSSL-RunSingleNodeSolrusingSSL]]
 === Run Single Node Solr using SSL
 
 Start Solr using the command shown below; by default clients will not be required to authenticate:
@@ -133,12 +128,10 @@ bin/solr -p 8984
 bin\solr.cmd -p 8984
 ----
 
-[[EnablingSSL-SolrCloud]]
 == SSL with SolrCloud
 
 This section describes how to run a two-node SolrCloud cluster with no initial collections and a single-node external ZooKeeper. The commands below assume you have already created the keystore described above.
 
-[[EnablingSSL-ConfigureZooKeeper]]
 === Configure ZooKeeper
 
 NOTE: ZooKeeper does not support encrypted communication with clients like Solr. There are several related JIRA tickets where SSL support is being planned/worked on: https://issues.apache.org/jira/browse/ZOOKEEPER-235[ZOOKEEPER-235]; https://issues.apache.org/jira/browse/ZOOKEEPER-236[ZOOKEEPER-236]; https://issues.apache.org/jira/browse/ZOOKEEPER-1000[ZOOKEEPER-1000]; and https://issues.apache.org/jira/browse/ZOOKEEPER-2120[ZOOKEEPER-2120].
@@ -161,12 +154,10 @@ server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clusterprop -n
 server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
 ----
 
-If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot for Solr>> , make sure you use the correct `zkhost` string with `zkcli`, e.g. `-zkhost localhost:2181/solr`.
+If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#zookeeper-chroot,chroot for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g., `-zkhost localhost:2181/solr`.
 
-[[EnablingSSL-RunSolrCloudwithSSL]]
 === Run SolrCloud with SSL
 
-[[EnablingSSL-CreateSolrhomedirectoriesfortwonodes]]
 ==== Create Solr Home Directories for Two Nodes
 
 Create two copies of the `server/solr/` directory which will serve as the Solr home directories for each of your two SolrCloud nodes:
@@ -187,7 +178,6 @@ xcopy /E server\solr cloud\node1\
 xcopy /E server\solr cloud\node2\
 ----
 
-[[EnablingSSL-StartthefirstSolrnode]]
 ==== Start the First Solr Node
 
 Next, start the first Solr node on port 8984. Be sure to stop the standalone server first if you started it when working through the previous section on this page.
@@ -220,7 +210,6 @@ bin/solr -cloud -s cloud/node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerNam
 bin\solr.cmd -cloud -s cloud\node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerName=false
 ----
 
-[[EnablingSSL-StartthesecondSolrnode]]
 ==== Start the Second Solr Node
 
 Finally, start the second Solr node on port 7574 - again, to skip hostname verification, add `-Dsolr.ssl.checkPeerName=false`:
@@ -237,14 +226,13 @@ bin/solr -cloud -s cloud/node2 -z localhost:2181 -p 7574
 bin\solr.cmd -cloud -s cloud\node2 -z localhost:2181 -p 7574
 ----
 
-[[EnablingSSL-ExampleClientActions]]
 == Example Client Actions
 
 [IMPORTANT]
 ====
 cURL on OS X Mavericks (10.9) has degraded SSL support. For more information and workarounds to allow one-way SSL, see http://curl.haxx.se/mail/archive-2013-10/0036.html. cURL on OS X Yosemite (10.10) is improved - 2-way SSL is possible - see http://curl.haxx.se/mail/archive-2014-10/0053.html .
 
-The cURL commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL,above>> for instructions on creating this file):
+The cURL commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<Convert the Certificate and Key to PEM Format for Use with cURL,above>> for instructions on creating this file):
 
 [source,bash]
 curl -E solr-ssl.keystore.p12:secret --cacert solr-ssl.cacert.pem ...
@@ -271,14 +259,13 @@ bin\solr.cmd create -c mycollection -shards 2
 
 The `create` action will pass the `SOLR_SSL_*` properties set in your include file to the SolrJ code used to create the collection.
 
-[[EnablingSSL-RetrieveSolrCloudclusterstatususingcURL]]
 === Retrieve SolrCloud Cluster Status using cURL
 
 To get the resulting cluster status (again, if you have not enabled client authentication, remove the `-E solr-ssl.pem:secret` option):
 
 [source,bash]
 ----
-curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=on"
+curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/admin/collections?action=CLUSTERSTATUS&indent=on"
 ----
 
 You should get a response that looks like this:
@@ -317,7 +304,6 @@ You should get a response that looks like this:
     "properties":{"urlScheme":"https"}}}
 ----
 
-[[EnablingSSL-Indexdocumentsusingpost.jar]]
 === Index Documents using post.jar
 
 Use `post.jar` to index some example documents to the SolrCloud collection created above:
@@ -329,18 +315,16 @@ cd example/exampledocs
 java -Djavax.net.ssl.keyStorePassword=secret -Djavax.net.ssl.keyStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStorePassword=secret -Durl=https://localhost:8984/solr/mycollection/update -jar post.jar *.xml
 ----
 
-[[EnablingSSL-QueryusingcURL]]
 === Query Using cURL
 
 Use cURL to query the SolrCloud collection created above, from a directory containing the PEM-formatted certificate and key created above (e.g., `example/etc/`). If you have not enabled client authentication (system property `-Djetty.ssl.clientAuth=true`), then you can remove the `-E solr-ssl.pem:secret` option:
 
 [source,bash]
 ----
-curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*&wt=json&indent=on"
+curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*"
 ----
 
-[[EnablingSSL-IndexadocumentusingCloudSolrClient]]
-=== Index a document using CloudSolrClient
+=== Index a Document using CloudSolrClient
 
 From a java client using SolrJ, index a document. In the code below, the `javax.net.ssl.*` system properties are set programmatically, but you could instead specify them on the java command line, as in the `post.jar` example above:
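 
 A minimal sketch of such a client might look like the following (assumes SolrJ is on the classpath; the ZooKeeper address, collection name, and document fields are illustrative):
 
 [source,java]
 ----
 import org.apache.solr.client.solrj.impl.CloudSolrClient;
 import org.apache.solr.common.SolrInputDocument;
 
 public class SslIndexingExample {
   public static void main(String[] args) throws Exception {
     // Set the SSL system properties programmatically, mirroring the post.jar example above.
     System.setProperty("javax.net.ssl.keyStore", "../../server/etc/solr-ssl.keystore.jks");
     System.setProperty("javax.net.ssl.keyStorePassword", "secret");
     System.setProperty("javax.net.ssl.trustStore", "../../server/etc/solr-ssl.keystore.jks");
     System.setProperty("javax.net.ssl.trustStorePassword", "secret");
 
     // Connect to the SolrCloud cluster via ZooKeeper and index a single document.
     try (CloudSolrClient client = new CloudSolrClient.Builder().withZkHost("localhost:2181").build()) {
       client.setDefaultCollection("mycollection");
       SolrInputDocument doc = new SolrInputDocument();
       doc.addField("id", "ssl-test-1");
       doc.addField("name", "SSL indexing test");
       client.add(doc);
       client.commit();
     }
   }
 }
 ----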
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/errata.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/errata.adoc b/solr/solr-ref-guide/src/errata.adoc
index 9030ee3..7484c17 100644
--- a/solr/solr-ref-guide/src/errata.adoc
+++ b/solr/solr-ref-guide/src/errata.adoc
@@ -18,14 +18,12 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[Errata-ErrataForThisDocumentation]]
 == Errata For This Documentation
 
 Any mistakes found in this documentation after its release will be listed on the online version of this page:
 
 https://lucene.apache.org/solr/guide/{solr-docs-version}/errata.html
 
-[[Errata-ErrataForPastVersionsofThisDocumentation]]
 == Errata For Past Versions of This Documentation
 
 Any known mistakes in past releases of this documentation will be noted below.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8d00e53b/solr/solr-ref-guide/src/exporting-result-sets.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/exporting-result-sets.adoc b/solr/solr-ref-guide/src/exporting-result-sets.adoc
index 33852fa..0f8866d 100644
--- a/solr/solr-ref-guide/src/exporting-result-sets.adoc
+++ b/solr/solr-ref-guide/src/exporting-result-sets.adoc
@@ -25,19 +25,16 @@ This feature uses a stream sorting technique that begins to send records within
 
 The cases where this functionality may be useful include: session analysis, distributed merge joins, time series roll-ups, aggregations on high cardinality fields, fully distributed field collapsing, and sort based stats.
 
-[[ExportingResultSets-FieldRequirements]]
 == Field Requirements
 
 All the fields being sorted and exported must have `docValues` set to `true`. For more information, see the section on <<docvalues.adoc#docvalues,DocValues>>.
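 
 For example, a field definition in the collection's schema that satisfies this requirement might look like the following (a sketch; the field name and type are illustrative):
 
 [source,xml]
 <field name="severity" type="string" indexed="true" stored="false" docValues="true"/>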
 
-[[ExportingResultSets-The_exportRequestHandler]]
 == The /export RequestHandler
 
 The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for more information.
 
 Note that this request handler's properties are defined as "invariants", which means they cannot be overridden by other properties passed at another time (such as at query time).
 
-[[ExportingResultSets-RequestingResultsExport]]
 == Requesting Results Export
 
 You can use `/export` to make requests to export the result set of a query.
@@ -53,19 +50,16 @@ Here is an example of an export request of some indexed log data:
 http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
 ----
 
-[[ExportingResultSets-SpecifyingtheSortCriteria]]
 === Specifying the Sort Criteria
 
 The `sort` property defines how documents will be sorted in the exported result set. Results can be sorted by any field that has a field type of int, long, float, double, or string. The sort fields must be single-valued.
 
 Up to four sort fields can be specified per request, with the 'asc' or 'desc' properties.
 
-[[ExportingResultSets-SpecifyingtheFieldList]]
 === Specifying the Field List
 
 The `fl` property defines the fields that will be exported with the result set. Any of the field types that can be sorted (i.e., int, long, float, double, string, date, boolean) can be used in the field list. The fields can be single or multi-valued. However, neither returning scores nor using wildcards in the field list is supported at this time.
 
-[[ExportingResultSets-DistributedSupport]]
 == Distributed Support
 
 See the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> for distributed support.