Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/12 14:35:41 UTC

[31/37] lucene-solr:branch_6_6: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/coreadmin-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/coreadmin-api.adoc b/solr/solr-ref-guide/src/coreadmin-api.adoc
new file mode 100644
index 0000000..4cd7147
--- /dev/null
+++ b/solr/solr-ref-guide/src/coreadmin-api.adoc
@@ -0,0 +1,353 @@
+= CoreAdmin API
+:page-shortname: coreadmin-api
+:page-permalink: coreadmin-api.html
+
+The Core Admin API is primarily used under the covers by the <<collections-api.adoc#collections-api,Collections API>> when running a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster.
+
+SolrCloud users should not typically use the CoreAdmin API directly, but the API may be useful for users of single-node or master/slave Solr installations for core maintenance operations.
+
+The CoreAdmin API is implemented by the CoreAdminHandler, which is a special purpose <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,request handler>> that is used to manage Solr cores. Unlike other request handlers, the CoreAdminHandler is not attached to a single core. Instead, there is a single instance of the CoreAdminHandler in each Solr node that manages all the cores running in that node and is accessible at the `/solr/admin/cores` path.
+
+CoreAdmin actions can be executed via HTTP requests that specify an `action` request parameter, with additional action-specific arguments provided as additional parameters.
+
+All action names are uppercase and are defined in depth in the sections below.
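Every CoreAdmin call follows this same shape: a GET against `/solr/admin/cores` with an `action` parameter plus action-specific parameters. As a minimal sketch (the helper name is hypothetical, not part of any Solr client library), such a request URL can be assembled like this:

```python
from urllib.parse import urlencode

def coreadmin_url(base_url, action, **params):
    # Hypothetical helper: every CoreAdmin call is a request with an
    # 'action' parameter plus action-specific parameters.
    query = urlencode({"action": action, **params})
    return f"{base_url}/solr/admin/cores?{query}"

url = coreadmin_url("http://localhost:8983", "STATUS", core="my_core")
# 'http://localhost:8983/solr/admin/cores?action=STATUS&core=my_core'
```

The same helper works for any of the actions described below by changing the `action` value and keyword parameters.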
+
+[[CoreAdminAPI-STATUS]]
+== STATUS
+
+The `STATUS` action returns the status of all running Solr cores, or status for only the named core.
+
+`admin/cores?action=STATUS&core=_core-name_`
+
+[[CoreAdminAPI-Input]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |No | |The name of a core, as listed in the "name" attribute of a `<core>` element in `solr.xml`.
+|indexInfo |boolean |No |true |If **false**, information about the index will not be returned with a core STATUS request. In Solr implementations with a large number of cores (i.e., hundreds or more), retrieving the index information for each core can take a long time and isn't always required.
+|===
+
+[[CoreAdminAPI-CREATE]]
+== CREATE
+
+The `CREATE` action creates a new core and registers it.
+
+If a Solr core with the given name already exists, it will continue to handle requests while the new core is initializing. When the new core is ready, it will take new requests and the old core will be unloaded.
+
+`admin/cores?action=CREATE&name=_core-name_&instanceDir=_path/to/dir_&config=solrconfig.xml&dataDir=data`
+
+Note that this command is the only one of the Core Admin API commands that *does not* support the `core` parameter. Instead, the `name` parameter is required, as shown below.
+
+.CREATE must be able to find a configuration!
+[WARNING]
+====
+Your CREATE call must be able to find a configuration, or it will not succeed.
+
+When you are running SolrCloud and create a new core for a collection, the configuration is inherited from the collection. Each collection is linked to a configName, which is stored in the ZooKeeper database. This satisfies the config requirement. Note, however, that if you are running SolrCloud you should *NOT* be using the CoreAdmin API at all; use the Collections API.
+
+When you are not running SolrCloud, if you have <<config-sets.adoc#config-sets,Config Sets>> defined, you can use the configSet parameter as documented below. If there are no config sets, then the instanceDir specified in the CREATE call must already exist, and it must contain a `conf` directory which in turn must contain `solrconfig.xml`, your schema, which is usually named either `managed-schema` or `schema.xml`, and any files referenced by those configs.
+
+The config and schema filenames can be specified with the `config` and `schema` parameters, but these are expert options. One way to avoid creating the `conf` directory is to use `config` and `schema` parameters that point to absolute paths, but this can lead to confusing configurations unless you fully understand what you are doing.
+====
+
+.CREATE and the core.properties file
+[IMPORTANT]
+====
+The core.properties file is built as part of the CREATE command. If you create a core.properties file yourself in a core directory and then try to use CREATE to add that core to Solr, you will get an error telling you that another core is already defined there. The core.properties file must NOT exist before calling the CoreAdmin API with the CREATE command.
+====
+
+[[CoreAdminAPI-Input.1]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|name |string |Yes |N/A |The name of the new core. Same as "name" on the `<core>` element.
+|instanceDir |string |No |The value specified for "name" parameter |The directory where files for this SolrCore should be stored. Same as `instanceDir` on the `<core>` element.
+|config |string |No | |Name of the config file (i.e., `solrconfig.xml`) relative to `instanceDir`.
+|schema |string |No | |Name of the schema file to use for the core. Please note that if you are using a "managed schema" (the default behavior) then any value for this property which does not match the effective `managedSchemaResourceName` will be read once, backed up, and converted for managed schema use. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for details.
+|dataDir |string |No | |Name of the data directory relative to `instanceDir`.
+|configSet |string |No | |Name of the configset to use for this core. For more information, see the section <<config-sets.adoc#config-sets,Config Sets>>.
+|collection |string |No | |The name of the collection to which this core belongs. The default is the name of the core. `collection.<param>=<value>` causes a property of `<param>=<value>` to be set if a new collection is being created. Use `collection.configName=<configname>` to point to the configuration for a new collection.
+|shard |string |No | |The shard id this core represents. Normally you want to be auto-assigned a shard id.
+|property.__name__=__value__ |string |No | |Sets the core property _name_ to __value__. See the section on defining <<defining-core-properties.adoc#Definingcore.properties-core.properties_files,core.properties file contents>>.
+|async |string |No | |Request ID to track this action, which will be processed asynchronously.
+|===
+
+
+[[CoreAdminAPI-Example]]
+=== Example
+
+`\http://localhost:8983/solr/admin/cores?action=CREATE&name=my_core&collection=my_collection&shard=shard2`
+
+[WARNING]
+====
+While it's possible to create a core for a non-existent collection, this approach is not supported and not recommended. Always create a collection using the <<collections-api.adoc#collections-api,Collections API>> before creating a core directly for it.
+====
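The CREATE example above can also be assembled programmatically. This sketch (the helper name is hypothetical) shows how dotted parameters such as `property.__name__=__value__` are passed alongside the ordinary ones; they are plain query parameters, so a dict with string keys works, assuming the target Solr node is reachable at the base URL:

```python
from urllib.parse import urlencode

def create_core_url(base_url, name, **params):
    # Hypothetical helper: 'name' is the only required parameter; everything
    # else (instanceDir, configSet, dotted property.* keys, etc.) passes
    # through unchanged as query parameters.
    query = urlencode({"action": "CREATE", "name": name, **params})
    return f"{base_url}/solr/admin/cores?{query}"

url = create_core_url("http://localhost:8983", "my_core",
                      instanceDir="my_core",
                      **{"property.dataDir": "/var/solr/my_core/data"})
```

Note that `urlencode` percent-encodes the path value, while the dotted parameter name is left intact.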
+
+[[CoreAdminAPI-RELOAD]]
+== RELOAD
+
+The RELOAD action loads a new core from the configuration of an existing, registered Solr core. While the new core is initializing, the existing one will continue to handle requests. When the new Solr core is ready, it takes over and the old core is unloaded.
+
+`admin/cores?action=RELOAD&core=_core-name_`
+
+This is useful when you've made changes to a Solr core's configuration on disk, such as adding new field definitions. Calling the RELOAD action lets you apply the new configuration without having to restart the Web container.
+
+[IMPORTANT]
+====
+RELOAD performs "live" reloads of SolrCore, reusing some existing objects. Some configuration options, such as the `dataDir` location and `IndexWriter`-related settings in `solrconfig.xml`, cannot be changed and made active with a simple RELOAD action.
+====
+
+[[CoreAdminAPI-Input.2]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes |N/A |The name of the core, as listed in the "name" attribute of a `<core>` element in `solr.xml`.
+|===
+
+[[CoreAdminAPI-RENAME]]
+== RENAME
+
+The `RENAME` action changes the name of a Solr core.
+
+`admin/cores?action=RENAME&core=_core-name_&other=_other-core-name_`
+
+[[CoreAdminAPI-Input.3]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of the Solr core to be renamed.
+|other |string |Yes | |The new name for the Solr core. If the persistent attribute of `<solr>` is `true`, the new name will be written to `solr.xml` as the `name` attribute of the `<core>` element.
+|async |string |No | |Request ID to track this action, which will be processed asynchronously.
+|===
+
+[[CoreAdminAPI-SWAP]]
+== SWAP
+
+`SWAP` atomically swaps the names used to access two existing Solr cores. This can be used to swap new content into production. The prior core remains available and can be swapped back, if necessary. Each core will be known by the name of the other, after the swap.
+
+`admin/cores?action=SWAP&core=_core-name_&other=_other-core-name_`
+
+[IMPORTANT]
+====
+Do not use `SWAP` with a SolrCloud node. It is not supported and can result in the core being unusable.
+====
+
+[[CoreAdminAPI-Input.4]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of one of the cores to be swapped.
+|other |string |Yes | |The name of one of the cores to be swapped.
+|async |string |No | |Request ID to track this action, which will be processed asynchronously.
+|===
+
+[[CoreAdminAPI-UNLOAD]]
+== UNLOAD
+
+The `UNLOAD` action removes a core from Solr. Active requests will continue to be processed, but no new requests will be sent to the named core. If a core is registered under more than one name, only the given name is removed.
+
+`admin/cores?action=UNLOAD&core=_core-name_`
+
+The `UNLOAD` action requires a parameter (`core`) identifying the core to be removed. If the persistent attribute of `<solr>` is set to `true`, the `<core>` element with this `name` attribute will be removed from `solr.xml`.
+
+[IMPORTANT]
+====
+Unloading all cores in a SolrCloud collection causes the removal of that collection's metadata from ZooKeeper.
+====
+
+[[CoreAdminAPI-Input.5]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of one of the cores to be removed.
+|deleteIndex |boolean |No |false |If true, will remove the index when unloading the core.
+|deleteDataDir |boolean |No |false |If true, removes the `data` directory and all sub-directories.
+|deleteInstanceDir |boolean |No |false |If true, removes everything related to the core, including the index directory, configuration files and other related files.
+|async |string |No | |Request ID to track this action, which will be processed asynchronously.
+|===
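One detail worth noting when building UNLOAD requests programmatically: Solr expects the literal lowercase strings `true`/`false` for the boolean delete flags, whereas naively passing Python booleans through `urlencode` would produce `True`/`False`. A minimal sketch (the helper name is hypothetical) that lowers them explicitly:

```python
from urllib.parse import urlencode

def unload_core_url(base_url, core, delete_index=False,
                    delete_data_dir=False, delete_instance_dir=False):
    # Convert Python booleans to the lowercase "true"/"false" strings Solr
    # expects, rather than letting them serialize as "True"/"False".
    params = {
        "action": "UNLOAD",
        "core": core,
        "deleteIndex": str(delete_index).lower(),
        "deleteDataDir": str(delete_data_dir).lower(),
        "deleteInstanceDir": str(delete_instance_dir).lower(),
    }
    return f"{base_url}/solr/admin/cores?{urlencode(params)}"

url = unload_core_url("http://localhost:8983", "my_core", delete_index=True)
```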
+
+[[CoreAdminAPI-MERGEINDEXES]]
+== MERGEINDEXES
+
+The `MERGEINDEXES` action merges one or more indexes into another index. The indexes must have completed commits, and should be locked against writes until the merge is complete, or the resulting merged index may become corrupted. The target core index must already exist and have a schema compatible with the one or more indexes that will be merged into it. Another commit should also be performed on the target core after the merge is complete.
+
+`admin/cores?action=MERGEINDEXES&core=_new-core-name_&indexDir=_path/to/core1/data/index_&indexDir=_path/to/core2/data/index_`
+
+In this example, we use the `indexDir` parameter to define the index locations of the source cores. The `core` parameter defines the target index. A benefit of this approach is that we can merge any Lucene-based index that may not be associated with a Solr core.
+
+Alternatively, we can instead use a `srcCore` parameter, as in this example:
+
+`admin/cores?action=mergeindexes&core=_new-core-name_&srcCore=_core1-name_&srcCore=_core2-name_`
+
+This approach allows us to define cores that may not have an index path on the same physical server as the target core. However, we can only use Solr cores as the source indexes. Another benefit of this approach is that there is less risk of index corruption if writes occur against the source indexes in parallel with the merge.
+
+We can make this call run asynchronously by specifying the `async` parameter and passing a request-id. This id can then be used to check the status of the already submitted task using the REQUESTSTATUS API.
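Because `indexDir` and `srcCore` are multi-valued, the request must repeat the parameter name once per value. With Python's standard library, passing a sequence of tuples to `urlencode` preserves such repeated pairs (a sketch, not a Solr client API):

```python
from urllib.parse import urlencode

# Multi-valued parameters such as srcCore are passed as repeated
# (name, value) pairs, which urlencode preserves in order.
params = [("action", "MERGEINDEXES"), ("core", "new_core"),
          ("srcCore", "core1"), ("srcCore", "core2")]
query = urlencode(params)
# 'action=MERGEINDEXES&core=new_core&srcCore=core1&srcCore=core2'
```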
+
+[[CoreAdminAPI-Input.6]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of the target core/index.
+|indexDir |string | | |Multi-valued; directories that will be merged.
+|srcCore |string | | |Multi-valued; source cores that will be merged.
+|async |string | | |Request ID to track this action, which will be processed asynchronously.
+|===
+
+[[CoreAdminAPI-SPLIT]]
+== SPLIT
+
+The `SPLIT` action splits an index into two or more indexes. The index being split can continue to handle requests. The split pieces can be placed into a specified directory on the server's filesystem, or they can be merged into running Solr cores.
+
+The `SPLIT` action supports the parameters described in the table below.
+
+[[CoreAdminAPI-Input.7]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of the core to be split.
+|path |string | | |Multi-valued; the directory path in which a piece of the index will be written.
+|targetCore |string | | |Multi-valued; the target Solr core to which a piece of the index will be merged.
+|ranges |string |No | |A comma-separated list of hash ranges in hexadecimal format.
+|split.key |string |No | |The key to be used for splitting the index.
+|async |string |No | |Request ID to track this action, which will be processed asynchronously.
+|===
+
+[IMPORTANT]
+====
+Either the `path` or the `targetCore` parameter must be specified, but not both. The `ranges` and `split.key` parameters are optional, and at most one of the two should be specified, if required at all.
+====
+
+[[CoreAdminAPI-Examples]]
+=== Examples
+
+The `core` index will be split into as many pieces as the number of `path` or `targetCore` parameters.
+
+==== Usage with two `targetCore` parameters:
+
+`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&targetCore=core2`
+
+Here the `core` index will be split into two pieces and merged into the two `targetCore` indexes.
+
+==== Usage with two `path` parameters:
+
+`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&path=/path/to/index/1&path=/path/to/index/2`
+
+The `core` index will be split into two pieces and written into the two directory paths specified.
+
+==== Usage with the `split.key` parameter:
+
+`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&split.key=A!`
+
+Here all documents having the same route key as the `split.key` value (i.e., 'A!') will be split from the `core` index and written to the `targetCore`.
+
+==== Usage with `ranges` parameter:
+
+`\http://localhost:8983/solr/admin/cores?action=SPLIT&core=core0&targetCore=core1&targetCore=core2&targetCore=core3&ranges=0-1f4,1f5-3e8,3e9-5dc`
+
+This example uses the `ranges` parameter with the hash ranges 0-500, 501-1000 and 1001-1500, specified in hexadecimal. Here the index will be split into three pieces, with each targetCore receiving documents matching the hash ranges specified: core1 will get documents with hash range 0-500, core2 will receive documents with hash range 501-1000 and, finally, core3 will receive documents with hash range 1001-1500. At least one hash range must be specified. Please note that using a single hash range equal to a route key's hash range is NOT equivalent to using the `split.key` parameter, because multiple route keys can hash to the same range.
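The decimal-to-hexadecimal conversion for the `ranges` parameter is easy to get wrong by hand. A short sketch (hypothetical helper, not part of Solr) that formats inclusive decimal hash ranges into the expected syntax:

```python
def ranges_param(decimal_ranges):
    # Format inclusive (start, end) decimal hash ranges as the
    # comma-separated hexadecimal syntax the ranges parameter expects.
    return ",".join(f"{start:x}-{end:x}" for start, end in decimal_ranges)

ranges_param([(0, 500), (501, 1000), (1001, 1500)])
# '0-1f4,1f5-3e8,3e9-5dc'
```

This reproduces the `ranges=0-1f4,1f5-3e8,3e9-5dc` value used in the example above.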
+
+The `targetCore` must already exist and must have a compatible schema with the `core` index. A commit is automatically called on the `core` index before it is split.
+
+This command is used as part of the <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD>> command but it can be used for non-cloud Solr cores as well. When used against a non-cloud core without `split.key` parameter, this action will split the source index and distribute its documents alternately so that each split piece contains an equal number of documents. If the `split.key` parameter is specified then only documents having the same route key will be split from the source index.
+
+[[CoreAdminAPI-REQUESTSTATUS]]
+== REQUESTSTATUS
+
+Request the status of an already submitted asynchronous CoreAdmin API call.
+
+`admin/cores?action=REQUESTSTATUS&requestid=_id_`
+
+[[CoreAdminAPI-Input.8]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|requestid |string |Yes | |The user-defined request ID for the asynchronous request.
+|===
+
+The call below will return the status of an already submitted asynchronous CoreAdmin call.
+
+`\http://localhost:8983/solr/admin/cores?action=REQUESTSTATUS&requestid=1`
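A common pattern is to poll REQUESTSTATUS until a submitted task finishes. The sketch below keeps the HTTP call abstract (the `fetch_status` callable and the `STATUS` field with values like `running`/`completed` are assumptions about the response shape, not verified here):

```python
import time

def wait_for_request(fetch_status, request_id, timeout_s=60, poll_s=1.0):
    """Poll until the asynchronous task leaves the 'running' state.

    fetch_status(request_id) should return the parsed REQUESTSTATUS
    response; the 'STATUS' field name and its values are assumptions.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = fetch_status(request_id).get("STATUS")
        if state != "running":
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"request {request_id} still running after {timeout_s}s")
```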
+
+[[CoreAdminAPI-REQUESTRECOVERY]]
+== REQUESTRECOVERY
+
+The `REQUESTRECOVERY` action manually asks a core to recover by syncing with the leader. This should be considered an "expert" level command and should be used in situations where the node (SolrCloud replica) is unable to become active automatically.
+
+`admin/cores?action=REQUESTRECOVERY&core=_core-name_`
+
+[[CoreAdminAPI-Input.9]]
+=== Input
+
+*Query Parameters*
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="15,10,10,10,55",options="header"]
+|===
+|Parameter |Type |Required |Default |Description
+|core |string |Yes | |The name of the core to re-sync.
+|===
+
+[[CoreAdminAPI-Examples.1]]
+=== Examples
+
+`\http://localhost:8981/solr/admin/cores?action=REQUESTRECOVERY&core=gettingstarted_shard1_replica1`
+
+The core to specify can be found by expanding the appropriate ZooKeeper node via the admin UI.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
new file mode 100644
index 0000000..3509884
--- /dev/null
+++ b/solr/solr-ref-guide/src/cross-data-center-replication-cdcr.adoc
@@ -0,0 +1,761 @@
+= Cross Data Center Replication (CDCR)
+:page-shortname: cross-data-center-replication-cdcr
+:page-permalink: cross-data-center-replication-cdcr.html
+
+Cross Data Center Replication (CDCR) allows you to create multiple SolrCloud data centers and keep them in sync in case they are needed at a future time.
+
+The <<solrcloud.adoc#solrcloud,SolrCloud>> architecture is not particularly well suited for situations where a single SolrCloud cluster consists of nodes in separate data centers connected by an expensive pipe. The root problem is that SolrCloud is designed to support <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>> by immediately forwarding updates between nodes in the cluster on a per-shard basis. "CDCR" features exist to help mitigate the risk of an entire data center outage.
+
+== What is CDCR?
+
+CDCR supports replicating data from one data center to multiple data centers. The initial version of the solution supports an active-passive scenario where data updates are replicated from a Source data center to one or more target data centers.
+
+The target data center(s) will not propagate updates such as adds, updates, or deletes to the source data center and updates should _not_ be sent to any of the target data center(s).
+
+Source and target data centers can serve search queries when CDCR is operating. The target data centers will have slightly stale views of the corpus due to propagation delays, but this is minimal (perhaps a few seconds).
+
+Data changes on the source data center are replicated to the target data center only after they are persisted to disk. The data changes can be replicated in near real-time (with a small delay) or could be scheduled to be sent in intervals to the target data center. This solution presupposes that the source and target data centers begin with the same documents indexed. Of course, the indexes may be empty to start.
+
+Each shard leader in the source data center will be responsible for replicating its updates to the corresponding leader in the target data center. When receiving updates from the source data center, shard leaders in the target data center will replicate the changes to their own replicas.
+
+This replication model is designed to tolerate some degradation in connectivity, accommodate limited bandwidth, and support batch updates to optimize communication.
+
+Replication supports both a new empty index and pre-built indexes. In the scenario where the replication is set up on a pre-built index, CDCR will ensure consistency of the replication of the updates, but cannot ensure consistency on the full index. Therefore any index created before CDCR was set up will have to be replicated by other means (described in the section <<Initial Startup>>) so source and target indexes are fully consistent.
+
+The active-passive nature of the initial implementation implies a "push" model from the source collection to the target collection. Therefore, the source configuration must be able to "see" the ZooKeeper ensemble in the target cluster. The ZooKeeper ensemble is configured in the Source's `solrconfig.xml` file.
+
+CDCR is configured to replicate from collections in the source cluster to collections in the target cluster on a collection-by-collection basis. Since CDCR is configured in `solrconfig.xml` (on both source and target clusters), the settings can be tailored for the needs of each collection.
+
+CDCR can be configured to replicate from one collection to a second collection _within the same cluster_. That is a specialized scenario not covered in this document.
+
+[glossary]
+== CDCR Glossary
+
+Terms used in this document include:
+
+[glossary]
+Node:: A JVM instance running Solr; a server.
+Cluster:: A set of Solr nodes managed as a single unit by a ZooKeeper ensemble, hosting one or more Collections.
+Data Center:: A group of networked servers hosting a Solr cluster. In this document, the terms _Cluster_ and _Data Center_ are interchangeable as we assume that each Solr cluster is hosted in a different group of networked servers.
+Shard:: A sub-index of a single logical collection. This may be spread across multiple nodes of the cluster. Each shard can have as many replicas as needed.
+Leader:: Each shard has one node identified as its leader. All the writes for documents belonging to a shard are routed through the leader.
+Replica:: A copy of a shard for use in failover or load balancing. Replicas comprising a shard can either be leaders or non-leaders.
+Follower:: A convenience term for a replica that is _not_ the leader of a shard.
+Collection:: Multiple documents that make up one logical index. A cluster can have multiple collections.
+Updates Log:: An append-only log of write operations maintained by each node.
+
+== CDCR Architecture
+
+Here is a picture of the data flow.
+
+.CDCR Data Flow
+image::images/cross-data-center-replication-cdcr-/CDCR_arch.png[image,width=700,height=525]
+
+Updates and deletes are first written to the Source cluster, then forwarded to the Target cluster. The data flow sequence is:
+
+. A shard leader receives a new data update that is processed by its update processor chain.
+. The data update is first applied to the local index.
+. Upon successful application of the data update on the local index, the data update is added to the Updates Log queue.
+. After the data update is persisted to disk, the data update is sent to the replicas within the data center.
+. After Step 4 is successful, CDCR reads the data update from the Updates Log and pushes it to the corresponding collection in the target data center. This is necessary in order to ensure consistency between the Source and target data centers.
+. The leader on the target data center writes the data locally and forwards it to all its followers.
+
+Steps 1, 2, 3 and 4 are performed synchronously by SolrCloud; Step 5 is performed asynchronously by a background thread. Given that CDCR replication is performed asynchronously, it becomes possible to push batch updates in order to minimize network communication overhead. Also, if CDCR is unable to push the update at a given time, for example, due to a degradation in connectivity, it can retry later without any impact on the source data center.
+
+One implication of the architecture is that the leaders in the source cluster must be able to "see" the leaders in the target cluster. Since leaders may change, this effectively means that all nodes in the source cluster must be able to "see" all Solr nodes in the target cluster so firewalls, ACL rules, etc. must be configured with care.
+
+The current design works most robustly if both the Source and target clusters have the same number of shards. There is no requirement that the shards in the Source and target collection have the same number of replicas.
+
+Having different numbers of shards on the Source and target cluster is possible, but it is also an "expert" configuration, as that option imposes certain constraints and is not recommended. Most scenarios in which differing numbers of shards are contemplated are better accomplished by hosting multiple shards on each target Solr instance.
+
+== Major Components of CDCR
+
+There are a number of key features and components in CDCR’s architecture:
+
+=== CDCR Configuration
+
+In order to configure CDCR, the Source data center requires the host address of the ZooKeeper cluster associated with the target data center. The ZooKeeper host address is the only information needed by CDCR to instantiate the communication with the target Solr cluster. The CDCR configuration file on the source cluster will therefore contain a list of ZooKeeper hosts. The CDCR configuration file might also contain secondary/optional configuration, such as the number of CDC Replicator threads, batch updates related settings, etc.
+
+=== CDCR Initialization
+
+CDCR supports incremental updates to either new or existing collections. CDCR may not be able to keep up with very high volume updates, especially if there are significant communications latencies due to a slow "pipe" between the data centers. Some scenarios:
+
+* There is an initial bulk load of a corpus followed by lower volume incremental updates. In this case, one can do the initial bulk load and then enable CDCR. See the section <<Initial Startup>> for more information.
+* The index is being built up from scratch, without a significant initial bulk load. CDCR can be set up on empty collections and keep them synchronized from the start.
+* The index is always being updated at a volume too high for CDCR to keep up. This is especially possible in situations where the connection between the Source and target data centers is poor. This scenario is unsuitable for CDCR in its current form.
+
+=== Inter-Data Center Communication
+
+Communication between data centers will be achieved through HTTP and the Solr REST API using the SolrJ client. The SolrJ client will be instantiated with the ZooKeeper host of the target data center. SolrJ will manage the shard leader discovery process.
+
+=== Updates Tracking & Pushing
+
+CDCR replicates data updates from the source to the target data center by leveraging the Updates Log.
+
+A background thread regularly checks the Updates Log for new entries, and then forwards them to the target data center. The thread therefore needs to keep a checkpoint in the form of a pointer to the last update successfully processed in the Updates Log. Upon acknowledgement from the target data center that updates have been successfully processed, the Updates Log pointer is updated to reflect the current checkpoint.
+
+This pointer must be synchronized across all the replicas. In the case where the leader goes down and a new leader is elected, the new leader will be able to resume replication from the last update by using this synchronized pointer. The strategy to synchronize such a pointer across replicas will be explained next.
+
+If for some reason, the target data center is offline or fails to process the updates, the thread will periodically try to contact the target data center and push the updates.
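The checkpoint-and-retry behavior described above can be sketched as a pure function: the pointer only advances when the target acknowledges a batch, so a failed push leaves the checkpoint unchanged and the same updates are retried later. All names here are illustrative, not CDCR internals:

```python
def push_pending(updates_log, checkpoint, send_batch, batch_size=10):
    """Forward updates past `checkpoint`, advancing it only on acknowledgement.

    send_batch(batch) returns True if the target data center acknowledged
    the batch; on failure, the checkpoint stays put so the same updates are
    retried on the next run.
    """
    while checkpoint < len(updates_log):
        batch = updates_log[checkpoint:checkpoint + batch_size]
        if not send_batch(batch):
            break  # target unreachable: retry later from the same checkpoint
        checkpoint += len(batch)
    return checkpoint
```

Batching here mirrors the design goal of minimizing network round trips over a slow inter-data-center pipe.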
+
+=== Synchronization of Update Checkpoints
+
+A reliable synchronization of the update checkpoints between the shard leader and shard replicas is critical to avoid introducing inconsistency between the Source and target data centers. Another important requirement is that the synchronization must be performed with minimal network traffic to maximize scalability.
+
+In order to achieve this, the strategy is to:
+
+* Uniquely identify each update operation. This unique identifier will serve as pointer.
+* Rely on two storages: an ephemeral storage on the Source shard leader, and a persistent storage on the target cluster.
+
+The shard leader in the source cluster will be in charge of generating a unique identifier for each update operation, and will keep a copy of the identifier of the last processed updates in memory. The identifier will be sent to the target cluster as part of the update request. On the target data center side, the shard leader will receive the update request, store it along with the unique identifier in the Updates Log, and replicate it to the other shards.
+
+SolrCloud already provides a unique identifier for each update operation, i.e., a “version” number. This version number is generated using a time-based Lamport clock which is incremented for each update operation sent. This provides a “happened-before” ordering of the update operations that will be leveraged in (1) the initialization of the update checkpoint on the source cluster, and in (2) the maintenance strategy of the Updates Log.
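To make the "time-based clock" idea concrete, here is a minimal illustrative sketch of such a version number: a millisecond timestamp in the high bits with a counter in the low bits, so values are monotonically increasing and roughly time-ordered. The bit layout is an assumption for illustration only, not Solr's actual internal implementation:

```python
import itertools
import time

_counter = itertools.count()

def next_version(now_ms=None):
    # High bits: wall-clock milliseconds; low 20 bits: a counter that
    # disambiguates updates issued within the same millisecond.
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return (now_ms << 20) | (next(_counter) & 0xFFFFF)
```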
+
+The persistent storage on the target cluster is used only during the election of a new shard leader on the Source cluster. If a shard leader goes down on the source cluster and a new leader is elected, the new leader will contact the target cluster to retrieve the last update checkpoint and instantiate its ephemeral pointer. On such a request, the target cluster will retrieve the latest identifier received across all the shards, and send it back to the source cluster. To retrieve the latest identifier, every shard leader will look up the identifier of the first entry in its Update Logs and send it back to a coordinator. The coordinator will have to select the highest among them.
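The retrieval step can be sketched as follows (an illustration with assumed data shapes, not Solr's code): each target shard leader reports the identifier of the first entry in its Updates Log, and the coordinator selects the highest among them.

```python
# Hypothetical sketch of checkpoint retrieval on the target cluster.
# Each shard leader reports the identifier (version) of the first entry
# in its Updates Log; the coordinator returns the highest of these,
# which becomes the new source leader's update checkpoint.
def retrieve_checkpoint(first_entry_versions):
    """first_entry_versions: one version number per target shard leader."""
    if not first_entry_versions:
        return None
    return max(first_entry_versions)

# Three target shards report the oldest version they still hold:
assert retrieve_checkpoint([10042, 10117, 10089]) == 10117
```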
+
+This strategy does not require any additional network traffic and ensures reliable pointer synchronization. Consistency is principally achieved by leveraging SolrCloud. The update workflow of SolrCloud ensures that every update is applied not only to the leader but also to the replicas. If the leader goes down, a new leader is elected. During the leader election, a synchronization is performed between the new leader and the other replicas. As a result, this ensures that the new leader's Updates Log is consistent with the previous leader's. Having a consistent Updates Log means that:
+
+* On the source cluster, the update checkpoint can be reused by the new leader.
+* On the target cluster, the update checkpoint will be consistent between the previous and new leader. This ensures the correctness of the update checkpoint sent by a newly elected leader from the target cluster.
+
+=== Maintenance of Updates Log
+
+The CDCR replication logic requires modification to the maintenance logic of the Updates Log on the source data center. Initially, the Updates Log acts as a fixed-size queue, limited to 100 update entries. In the CDCR scenario, the Updates Log must act as a queue of variable size, as it needs to keep track of all updates up through the last update processed by the target data center. Entries in the Updates Log are removed only when all pointers (one pointer per target data center) are after them.
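The purge rule can be sketched like this (a toy model with invented names, not the actual implementation): the safe boundary for removal is the minimum pointer across all target data centers.

```python
# Toy model of Updates Log maintenance: an entry may be removed only
# once every target data center's pointer has moved past it, so the
# purge boundary is the minimum pointer across targets.
def purgeable_entries(log_versions, target_pointers):
    """Return log entries that every target has already processed."""
    boundary = min(target_pointers)
    return [v for v in log_versions if v <= boundary]

log = [100, 101, 102, 103, 104]
pointers = {"dc2": 103, "dc3": 101}  # last processed version per target
assert purgeable_entries(log, pointers.values()) == [100, 101]
```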
+
+If the communication with one of the target data centers is slow, the Updates Log on the source data center can grow to a substantial size. In such a scenario, it is necessary for the Updates Log to be able to efficiently find a given update operation given its identifier. Since the identifier is an incremental number, it is possible to implement an efficient search strategy. Each transaction log file contains as part of its filename the version number of its first element. This makes it possible to quickly traverse the transaction log files and find the one containing a specific version number.
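The lookup can be sketched as follows (the filenames here are invented for illustration; Solr's actual transaction log naming may differ): since each filename encodes the version of its first entry, a binary search over the sorted start versions locates the file that would contain a given version.

```python
import bisect

# Sketch: each transaction log file encodes the version of its first
# entry in its filename, so a binary search over the sorted start
# versions finds the file that would contain a given version.
def find_tlog_start(filenames, version):
    starts = sorted(int(name.rsplit(".", 1)[1]) for name in filenames)
    idx = bisect.bisect_right(starts, version) - 1
    return None if idx < 0 else starts[idx]

files = ["tlog.0000000000000010042",
         "tlog.0000000000000010117",
         "tlog.0000000000000010300"]
assert find_tlog_start(files, 10150) == 10117  # falls in the second file
```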
+
+
+[[CrossDataCenterReplication_CDCR_-Monitoring]]
+=== Monitoring
+
+CDCR provides the following monitoring capabilities over the replication operations:
+
+* Monitoring of the outgoing and incoming replications, with information such as the Source and target nodes, their status, etc.
+* Statistics about the replication, with information such as operations (add/delete) per second, number of documents in the queue, etc.
+
+Information about the lifecycle and statistics will be provided on a per-shard basis by the CDC Replicator thread. The CDCR API can then aggregate this information at a collection level.
+
+=== CDC Replicator
+
+The CDC Replicator is a background thread that is responsible for replicating updates from a Source data center to one or more target data centers. It is also responsible for providing monitoring information on a per-shard basis. As there can be a large number of collections and shards in a cluster, a fixed-size pool of CDC Replicator threads is shared across shards.
+
+
+[[CrossDataCenterReplication_CDCR_-Limitations]]
+=== Limitations
+
+The current design of CDCR has some limitations. CDCR will continue to evolve over time and many of these limitations will be addressed. Among them are:
+
+* CDCR is unlikely to be satisfactory for bulk-load situations where the update rate is high, especially if the bandwidth between the Source and target clusters is restricted. In this scenario, the initial bulk load should be performed first, the Source and target data centers synchronized, and CDCR then used for incremental updates.
+* CDCR is currently only active-passive; data is pushed from the Source cluster to the target cluster. There is active work being done in this area in the 6x code line to remove this limitation.
+* CDCR works most robustly with the same number of shards in the Source and target collection. The shards in the two collections may have different numbers of replicas.
+
+
+[[CrossDataCenterReplication_CDCR_-Configuration]]
+== Configuration
+
+The source and target configurations differ in the case of the data centers being in separate clusters. "Cluster" here means separate ZooKeeper ensembles controlling disjoint Solr instances. Whether these data centers are physically separated or not is immaterial for this discussion.
+
+
+[[CrossDataCenterReplication_CDCR_-SourceConfiguration]]
+=== Source Configuration
+
+Here is a sample of a source configuration file, a section in `solrconfig.xml`. The presence of the `<replica>` section causes CDCR to use this cluster as the Source; it should not be present in the target collections in the cluster-to-cluster case. Details about each setting are after the two examples:
+
+[source,xml]
+----
+<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
+  <lst name="replica">
+    <str name="zkHost">10.240.18.211:2181</str>
+    <str name="source">collection1</str>
+    <str name="target">collection1</str>
+  </lst>
+
+  <lst name="replicator">
+    <str name="threadPoolSize">8</str>
+    <str name="schedule">1000</str>
+    <str name="batchSize">128</str>
+  </lst>
+
+  <lst name="updateLogSynchronizer">
+    <str name="schedule">1000</str>
+  </lst>
+</requestHandler>
+
+<!-- Modify the <updateLog> section of your existing <updateHandler>
+     in your config as below -->
+<updateHandler class="solr.DirectUpdateHandler2">
+  <updateLog class="solr.CdcrUpdateLog">
+    <str name="dir">${solr.ulog.dir:}</str>
+    <!--Any parameters from the original <updateLog> section -->
+  </updateLog>
+</updateHandler>
+----
+
+
+[[CrossDataCenterReplication_CDCR_-TargetConfiguration]]
+=== Target Configuration
+
+Here is a typical target configuration.
+
+The target instance must be configured with an update processor chain that is specific to CDCR. The update processor chain must include the *CdcrUpdateProcessorFactory*. The task of this processor is to ensure that the version numbers attached to update requests coming from a CDCR Source SolrCloud are reused and not overwritten by the target. A properly configured target looks similar to this:
+
+[source,xml]
+----
+<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
+  <lst name="buffer">
+    <str name="defaultState">disabled</str>
+  </lst>
+</requestHandler>
+
+<requestHandler name="/update" class="solr.UpdateRequestHandler">
+  <lst name="defaults">
+    <str name="update.chain">cdcr-processor-chain</str>
+  </lst>
+</requestHandler>
+
+<updateRequestProcessorChain name="cdcr-processor-chain">
+  <processor class="solr.CdcrUpdateProcessorFactory"/>
+  <processor class="solr.RunUpdateProcessorFactory"/>
+</updateRequestProcessorChain>
+
+<!-- Modify the <updateLog> section of your existing <updateHandler> in your
+    config as below -->
+<updateHandler class="solr.DirectUpdateHandler2">
+  <updateLog class="solr.CdcrUpdateLog">
+    <str name="dir">${solr.ulog.dir:}</str>
+    <!--Any parameters from the original <updateLog> section -->
+  </updateLog>
+</updateHandler>
+----
+
+=== Configuration Details
+
+The configuration details, defaults and options are as follows:
+
+==== The Replica Element
+
+CDCR can be configured to forward update requests to one or more replicas. A replica is defined with a “replica” list as follows:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,10,15,55",options="header"]
+|===
+|Parameter |Required |Default |Description
+|zkHost |Yes |none |The host address for ZooKeeper of the target SolrCloud. Usually this is a comma-separated list of addresses to each node in the target ZooKeeper ensemble.
+|source |Yes |none |The name of the collection on the Source SolrCloud to be replicated.
+|target |Yes |none |The name of the collection on the target SolrCloud to which updates will be forwarded.
+|===
+
+==== The Replicator Element
+
+The CDC Replicator is the component in charge of forwarding updates to the replicas. The replicator will monitor the update logs of the Source collection and will forward any new updates to the target collection.
+
+The replicator uses a fixed thread pool to forward updates to multiple replicas in parallel. If more than one replica is configured, one thread will forward a batch of updates to one replica at a time, in a round-robin fashion. The replicator can be configured with a “replicator” list as follows:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,10,15,55",options="header"]
+|===
+|Parameter |Required |Default |Description
+|threadPoolSize |No |2 |The number of threads to use for forwarding updates. One thread per replica is recommended.
+|schedule |No |10 |The delay in milliseconds for monitoring the update log(s).
+|batchSize |No |128 |The number of updates to send in one batch. The optimal size depends on the size of the documents. Large batches of large documents can increase your memory usage significantly.
+|===
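The round-robin forwarding described above can be sketched as follows (an illustrative model, not Solr's replicator code): with several replicas configured, one pass forwards at most one batch per replica in turn.

```python
from collections import deque

# Illustrative model of the replicator's round-robin forwarding: in one
# round, at most `batch_size` updates are taken from each replica's
# pending queue, one replica at a time.
def forward_one_round(replica_queues, batch_size):
    sent = []
    for name, queue in replica_queues.items():
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        if batch:
            sent.append((name, batch))
    return sent

queues = {"dc2": deque(range(5)), "dc3": deque(range(3))}
assert forward_one_round(queues, batch_size=2) == [("dc2", [0, 1]),
                                                   ("dc3", [0, 1])]
```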
+
+==== The updateLogSynchronizer Element
+
+Expert: Non-leader nodes need to synchronize their update logs with their leader node from time to time in order to clean deprecated transaction log files. By default, such a synchronization process is performed every minute. The schedule of the synchronization can be modified with an “updateLogSynchronizer” list as follows:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,10,15,55",options="header"]
+|===
+|Parameter |Required |Default |Description
+|schedule |No |60000 |The delay in milliseconds for synchronizing the updates log.
+|===
+
+==== The Buffer Element
+
+CDCR is configured by default to buffer any new incoming updates. When buffering updates, the updates log will store all the updates indefinitely. Replicas do not need to buffer updates, and it is recommended to disable the buffer on the target SolrCloud. The buffer can be disabled at startup with a “buffer” list and the parameter “defaultState” as follows:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,10,15,55",options="header"]
+|===
+|Parameter |Required |Default |Description
+|defaultState |No |enabled |The state of the buffer at startup.
+|===
+
+== CDCR API
+
+The CDCR API is used to control and monitor the replication process. Control actions are performed at a collection level, i.e., by using the following base URL for API calls: `\http://localhost:8983/solr/<collection>`.
+
+Monitor actions are performed at a core level, i.e., by using the following base URL for API calls: `\http://localhost:8983/solr/<core>`.
+
+Currently, none of the CDCR API calls have parameters.
+
+
+=== API Entry Points (Control)
+
+* `<collection>/cdcr?action=STATUS`: <<CrossDataCenterReplication_CDCR_-STATUS,Returns the current state>> of CDCR.
+* `<collection>/cdcr?action=START`: <<CrossDataCenterReplication_CDCR_-START,Starts CDCR>> replication
+* `<collection>/cdcr?action=STOP`: <<CrossDataCenterReplication_CDCR_-STOP,Stops CDCR>> replication.
+* `<collection>/cdcr?action=ENABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-ENABLEBUFFER,Enables the buffering>> of updates.
+* `<collection>/cdcr?action=DISABLEBUFFER`: <<CrossDataCenterReplication_CDCR_-DISABLEBUFFER,Disables the buffering>> of updates.
+
+
+=== API Entry Points (Monitoring)
+
+* `core/cdcr?action=QUEUES`: <<CrossDataCenterReplication_CDCR_-QUEUES,Fetches statistics about the queue>> for each replica and about the update logs.
+* `core/cdcr?action=OPS`: <<CrossDataCenterReplication_CDCR_-OPS,Fetches statistics about the replication performance>> (operations per second) for each replica.
+* `core/cdcr?action=ERRORS`: <<CrossDataCenterReplication_CDCR_-ERRORS,Fetches statistics and other information about replication errors>> for each replica.
+
+=== Control Commands
+
+[[CrossDataCenterReplication_CDCR_-STATUS]]
+==== STATUS
+
+`/collection/cdcr?action=STATUS`
+
+===== Input
+
+*Query Parameters:* There are no parameters to this command.
+
+===== Output
+
+*Output Content*
+
+The current state of the CDCR, which includes the state of the replication process and the state of the buffer.
+
+[[cdcr_examples]]
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=STATUS
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 0
+  },
+  "status": {
+    "process": "stopped",
+    "buffer": "enabled"
+  }
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-ENABLEBUFFER]]
+==== ENABLEBUFFER
+
+`/collection/cdcr?action=ENABLEBUFFER`
+
+===== Input
+
+*Query Parameters:* There are no parameters to this command.
+
+===== Output
+
+*Output Content*
+
+The status of the process and an indication of whether the buffer is enabled.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=ENABLEBUFFER
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 0
+  },
+  "status": {
+    "process": "started",
+    "buffer": "enabled"
+  }
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-DISABLEBUFFER]]
+==== DISABLEBUFFER
+
+`/collection/cdcr?action=DISABLEBUFFER`
+
+===== Input
+
+*Query Parameters:* There are no parameters to this command.
+
+===== Output
+
+*Output Content:* The status of CDCR and an indication that the buffer is disabled.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=DISABLEBUFFER
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 0
+  },
+  "status": {
+    "process": "started",
+    "buffer": "disabled"
+  }
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-START]]
+==== START
+
+`/collection/cdcr?action=START`
+
+===== Input
+
+*Query Parameters:* There are no parameters for this action.
+
+===== Output
+
+*Output Content:* Confirmation that CDCR is started and the status of buffering.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=START
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 0
+  },
+  "status": {
+    "process": "started",
+    "buffer": "enabled"
+  }
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-STOP]]
+==== STOP
+
+`/collection/cdcr?action=STOP`
+
+===== Input
+
+*Query Parameters:* There are no parameters for this command.
+
+===== Output
+
+*Output Content:* The status of CDCR, including the confirmation that CDCR is stopped.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=STOP
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+  "status": 0,
+  "QTime": 0
+  },
+  "status": {
+  "process": "stopped",
+  "buffer": "enabled"
+  }
+}
+----
+
+
+[[CrossDataCenterReplication_CDCR_-Monitoringcommands]]
+=== Monitoring commands
+
+[[CrossDataCenterReplication_CDCR_-QUEUES]]
+==== QUEUES
+
+`/core/cdcr?action=QUEUES`
+
+===== Input
+
+*Query Parameters:* There are no parameters for this command.
+
+===== Output
+
+*Output Content*
+
+The output is composed of a list “queues” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the current size of the queue and the timestamp of the last update operation successfully processed are provided. The timestamp of the update operation is the original timestamp, i.e., the time this operation was processed on the Source SolrCloud. This allows an estimate of the latency of the replication process.
+
+The “queues” object also contains information about the updates log, such as the size (in bytes) of the updates log on disk (“tlogTotalSize”), the number of transaction log files (“tlogTotalCount”) and the status of the updates log synchronizer (“updateLogSynchronizer”).
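For example, the replication latency mentioned above can be estimated by subtracting `lastTimestamp` from the current time. A minimal sketch (not part of Solr; the timestamp format is assumed to match the example output):

```python
from datetime import datetime, timezone

# Sketch: estimate replication lag from a QUEUES "lastTimestamp" value.
# The ISO-8601 "Z" format is assumed from the example output below.
def replication_lag_seconds(last_timestamp, now=None):
    processed = datetime.strptime(last_timestamp, "%Y-%m-%dT%H:%M:%S.%fZ")
    processed = processed.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (now - processed).total_seconds()

now = datetime(2014, 12, 2, 10, 32, 25, tzinfo=timezone.utc)
assert replication_lag_seconds("2014-12-02T10:32:15.879Z", now=now) == 9.121
```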
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<replica_name>/cdcr?action=QUEUES
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 1
+  },
+  "queues": {
+    "127.0.0.1:40342/solr": {
+      "Target_collection": {
+        "queueSize": 104,
+        "lastTimestamp": "2014-12-02T10:32:15.879Z"
+      }
+    }
+  },
+  "tlogTotalSize": 3817,
+  "tlogTotalCount": 1,
+  "updateLogSynchronizer": "stopped"
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-OPS]]
+==== OPS
+
+`/core/cdcr?action=OPS`
+
+===== Input
+
+*Query Parameters:* There are no parameters for this command.
+
+===== Output
+
+*Output Content:* The output is composed of a list “operationsPerSecond” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, the average number of processed operations per second since the start of the replication process is provided. The operations are further broken down into two groups: add and delete operations.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=OPS
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 1
+  },
+  "operationsPerSecond": {
+    "127.0.0.1:59661/solr": {
+      "Target_collection": {
+        "all": 297.102944952749052,
+        "adds": 297.102944952749052,
+        "deletes": 0.0
+      }
+    }
+  }
+}
+----
+
+[[CrossDataCenterReplication_CDCR_-ERRORS]]
+==== ERRORS
+
+`/core/cdcr?action=ERRORS`
+
+===== Input
+
+*Query Parameters:* There are no parameters for this command.
+
+===== Output
+
+*Output Content:* The output is composed of a list “errors” which contains a list of (ZooKeeper) target hosts, themselves containing a list of target collections. For each collection, information about errors encountered during the replication is provided, such as the number of consecutive errors encountered by the replicator thread, the number of bad requests or internal errors since the start of the replication process, and a list of the last errors encountered ordered by timestamp.
+
+===== Examples
+
+*Input*
+
+[source,text]
+----
+http://host:8983/solr/<collection_name>/cdcr?action=ERRORS
+----
+
+*Output*
+
+[source,json]
+----
+{
+  "responseHeader": {
+    "status": 0,
+    "QTime": 2
+  },
+  "errors": {
+    "127.0.0.1:36872/solr": {
+      "Target_collection": {
+        "consecutiveErrors": 3,
+        "bad_request": 0,
+        "internal": 3,
+        "last": {
+          "2014-12-02T11:04:42.523Z": "internal",
+          "2014-12-02T11:04:39.223Z": "internal",
+          "2014-12-02T11:04:38.22Z": "internal"
+        }
+      }
+    }
+  }
+}
+----
+
+== Initial Startup
+
+This is a general approach for initializing CDCR in a production environment based upon an approach taken by the initial working installation of CDCR and generously contributed to illustrate a "real world" scenario.
+
+* Customer uses the CDCR approach to keep a remote disaster-recovery instance available for production backup. This is an active-passive solution.
+* Customer has 26 clouds with 200 million assets per cloud (15GB indexes). Total document count is over 4.8 billion.
+** Source and target clouds were synched in 2-3 hour maintenance windows to establish the base index for the targets.
+
+As usual, it is good to start small. Sync a single cloud and monitor for a period of time before doing the others. You may need to adjust your settings several times before finding the right balance.
+
+* Before starting, stop or pause the indexers. This is best done during a small maintenance window.
+* Stop the SolrCloud instances at the Source
+Include the CDCR request handler configuration in `solrconfig.xml` as in the example below.
++
+[source,xml]
+----
+<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
+    <lst name="replica">
+      <str name="zkHost">${TargetZk}</str>
+      <str name="source">${SourceCollection}</str>
+      <str name="target">${TargetCollection}</str>
+    </lst>
+    <lst name="replicator">
+      <str name="threadPoolSize">8</str>
+      <str name="schedule">10</str>
+      <str name="batchSize">2000</str>
+    </lst>
+    <lst name="updateLogSynchronizer">
+      <str name="schedule">1000</str>
+    </lst>
+  </requestHandler>
+
+  <updateRequestProcessorChain name="cdcr-processor-chain">
+    <processor class="solr.CdcrUpdateProcessorFactory" />
+    <processor class="solr.RunUpdateProcessorFactory" />
+  </updateRequestProcessorChain>
+----
++
+* Upload the modified `solrconfig.xml` to ZooKeeper on both Source and Target
+* Sync the index directories from the Source collection to the target collection across the corresponding shard nodes. `rsync` works well for this.
++
+For example, if there are 2 shards on collection1 with 2 replicas for each shard, copy the corresponding index directories from
++
+[width="75%",cols="45,10,45"]
+|===
+|shard1replica1Source |to |shard1replica1Target
+|shard1replica2Source |to |shard1replica2Target
+|shard2replica1Source |to |shard2replica1Target
+|shard2replica2Source |to |shard2replica2Target
+|===
++
+* Start the ZooKeeper on the Target (DR) side
+* Start the SolrCloud on the Target (DR) side
+* Start the ZooKeeper on the Source side
+* Start the SolrCloud on the Source side. As a general rule, the Target (DR) side of the SolrCloud should be started before the Source side.
+* Activate CDCR on the Source instance using the CDCR API:
++
+[source,text]
+http://host:port/solr/<collection_name>/cdcr?action=START
++
+* There is no need to run the `/cdcr?action=START` command on the Target.
+* Disable the buffer on the Target
++
+[source,text]
+http://host:port/solr/<collection_name>/cdcr?action=DISABLEBUFFER
++
+* Re-enable indexing.
+
+[[CrossDataCenterReplication_CDCR_-Monitoring.1]]
+== Monitoring
+
+.  Network and disk space monitoring are essential. Ensure that the system has plenty of available storage to queue up changes if there is a disconnect between the Source and Target. A network outage between the two data centers can cause your disk usage to grow.
+..  Tip: Set a monitor for your disks to send alerts when the disk gets over a certain percentage (e.g., 70%)
+..  Tip: Run a test. With moderate indexing, how long can the system queue changes before you run out of disk space?
+.  Create a simple way to check the counts between the Source and the Target.
+..  Keep in mind that if indexing is running, the Source and Target may not match document for document. Set an alert to fire if the difference is greater than some percentage of the overall cloud size.
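A simple count check like the one described can be sketched as follows (the counts and the 1% threshold are made up for illustration; fetching the actual counts from each cloud is left out):

```python
# Illustrative alert check: flag when the Source/Target document counts
# diverge by more than a percentage of the Source cloud size.
def counts_diverged(source_count, target_count, threshold_pct=1.0):
    if source_count == 0:
        return target_count != 0
    diff_pct = abs(source_count - target_count) / source_count * 100
    return diff_pct > threshold_pct

assert not counts_diverged(200_000_000, 199_500_000)  # 0.25% drift: OK
assert counts_diverged(200_000_000, 190_000_000)      # 5% drift: alert
```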
+
+== ZooKeeper Settings
+
+With CDCR, the target ZooKeepers will have connections from the Target clouds and the Source clouds. You may need to increase the `maxClientCnxns` setting in `zoo.cfg`.
+
+[source,text]
+----
+## Set the maximum number of client connections.
+## maxClientCnxns=0 means no limit.
+maxClientCnxns=800
+----
+
+== Upgrading and Patching Production
+
+When rolling in upgrades to your indexer or application, you should shut down the Source (production) and the Target (DR). Depending on your setup, you may want to pause/stop indexing. Deploy the release or patch and re-enable indexing. Then start the Target (DR).
+
+* There is no need to reissue the DISABLEBUFFER or START commands. These are persisted.
+* After starting the Target, run a simple test. Add a test document to each of the Source clouds. Then check for it on the Target.
+
+[source,bash]
+----
+#send to the Source
+curl http://<Source>/solr/cloud1/update -H 'Content-type:application/json' -d '[{"SKU":"ABC"}]'
+
+#check the Target
+curl "http://<Target>:8983/solr/<collection_name>/select?q=SKU:ABC&wt=json&indent=true"
+----
+
+[[CrossDataCenterReplication_CDCR_-Limitations.1]]
+== Limitations
+
+* Running CDCR with the indexes on HDFS is not currently supported, see: https://issues.apache.org/jira/browse/SOLR-9861[Solr CDCR over HDFS].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/css/comments.css
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/css/comments.css b/solr/solr-ref-guide/src/css/comments.css
new file mode 100644
index 0000000..1292d23
--- /dev/null
+++ b/solr/solr-ref-guide/src/css/comments.css
@@ -0,0 +1,164 @@
+/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * comments.css
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */
+
+#comments_thread a:link {
+    color: #5A88B5;
+    background-color: inherit;
+}
+
+#comments_thread a:visited {
+    color: #5A88B5;
+    background-color: inherit;
+}
+
+#comments_thread a:link:hover,
+#comments_thread a:link:active,
+#comments_thread a:visited:hover,
+#comments_thread a:visited:active {
+    color: #0073c7;
+    background-color: #f0f0f0;
+}
+
+
+/* in general */
+#comments_thread p {
+    line-height: 1.3em;
+    color: #003;
+}
+
+#comments_thread h4 {
+   font-size: 14px;
+}
+
+.apaste_menu {
+        float: right;
+        margin-right: 10px;
+        width: 80px;
+}
+
+.apaste_comment {
+  background: #FEFEFE;
+  border: 1px solid #AAA;
+  border-radius: 2px;
+  display: block;
+  white-space: pre-wrap;
+  font-weight: normal;
+  padding-left: 20px;
+  padding-right: 20px;
+  padding-bottom: 16px;
+  padding-top: 5px;
+  margin: 15px;
+  font-size: 13px
+}
+.comment_header {
+    color: #000000;
+    border-radius: 3px;
+    border: 1px solid #999;
+    min-height: 24px;
+    text-indent: 5px;
+    font-size: 12pt;
+    background: #ffe9a3; /* Old browsers */
+    background: -moz-linear-gradient(top, #ffe9a3 0%, #ffd08a 32%, #ff9d57 69%, #ff833d 100%); /* FF3.6-15 */
+    background: -webkit-linear-gradient(top, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* Chrome10-25,Safari5.1-6 */
+    background: linear-gradient(to bottom, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
+}
+
+.comment_header_verified {
+    color: #000000;
+    border-radius: 3px;
+    border: 1px solid #999;
+    min-height: 24px;
+    text-indent: 5px;
+    font-size: 12pt;
+    background: #ffe9a3; /* Old browsers */
+    background: -moz-linear-gradient(top, #ffe9a3 0%, #ffd08a 32%, #ff9d57 69%, #ff833d 100%); /* FF3.6-15 */
+    background: -webkit-linear-gradient(top, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* Chrome10-25,Safari5.1-6 */
+    background: linear-gradient(to bottom, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
+}
+
+.comment_header_sticky {
+    color: #000000;
+    border-radius: 3px;
+    border: 1px solid #999;
+    min-height: 24px;
+    text-indent: 5px;
+    font-size: 12pt;
+    background: #ffe9a3; /* Old browsers */
+    background: -moz-linear-gradient(top, #ffe9a3 0%, #ffd08a 32%, #ff9d57 69%, #ff833d 100%); /* FF3.6-15 */
+    background: -webkit-linear-gradient(top, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* Chrome10-25,Safari5.1-6 */
+    background: linear-gradient(to bottom, #ffe9a3 0%,#ffd08a 32%,#ff9d57 69%,#ff833d 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */
+}
+
+.comment_header img {
+    padding-top: 3px;
+    padding-bottom: 2px;
+}
+
+.comment_header_verified img {
+    padding-top: 3px;
+    padding-bottom: 2px;
+}
+
+.comment_header_sticky img {
+    padding-top: 3px;
+    padding-bottom: 2px;
+}
+
+.apaste_comment img {
+/*    border-radius: 5px;*/
+    border: none;
+}
+
+.apaste_comment_selected {background: #F8F4E9;}
+.apaste_comment_notapproved {background: #F8E0E0;}
+.apaste_comment_resolved {background: #FAFCFA;}
+.apaste_comment_sticky {background: #FFFFF6;}
+.apaste_comment_verified {background: #FAFBFA;}
+
+.apaste_comment_invalid {
+  color: #999;
+  background: #F8F8F8;
+}
+
+
+.apaste_comment textarea {
+  width: 480px;
+  height: 180px;
+}
+
+#apaste {
+  margin: 5px;
+  font-weight: normal;
+  font-size: 14px;
+  color: #024;
+
+}
+#apaste .section {
+  padding: 20px;
+  padding-left: 80px;
+}
+
+.notapproved {
+  background-color: #FEE;
+  padding: 5px;
+}
+
+#comments_thread textarea{
+    background-color: #ffffff;
+    width: auto;
+    border: 1px solid #1c1c1c;
+    border-radius: 3px;
+    box-shadow: 0pt 1px 3px rgba(0, 0, 0, 0.16) inset;
+    position: relative;
+}
+
+.apaste_honeypot {
+  display: none;
+}
+
+/* Remove external link icons when they appear in comments */
+a[href^="http://"]:after,
+a[href^="https://"]:after {
+   content: none !important;
+}

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/css/customstyles.css
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/css/customstyles.css b/solr/solr-ref-guide/src/css/customstyles.css
new file mode 100755
index 0000000..057cd86
--- /dev/null
+++ b/solr/solr-ref-guide/src/css/customstyles.css
@@ -0,0 +1,869 @@
+
+.gi-2x{font-size: 2em;}
+.gi-3x{font-size: 3em;}
+.gi-4x{font-size: 4em;}
+.gi-5x{font-size: 5em;}
+
+.breadcrumb > .active {color: #777 !important;}
+
+.post-content img {
+    margin: 12px 0 3px 0;
+    width: auto;
+    height: auto;
+    max-width: 100%;
+    max-height: 100%;
+}
+
+.post-content ol li, .post-content ul li {
+    margin: 10px 0;
+}
+
+.pageSummary {
+    font-size:13px;
+    display:block;
+    margin-bottom:15px;
+    padding-left:20px;
+}
+
+.post-summary {
+    margin-bottom:12px;
+}
+
+.bs-example{
+    margin: 20px;
+}
+
+.breadcrumb li {
+    color: gray;
+}
+
+caption {
+    padding-top: 8px;
+    padding-bottom: 8px;
+    color: #777;
+    text-align: left;
+}
+
+p.external a {
+    text-align:right;
+    font-size:12px;
+    color: #0088cc;
+    display:inline;
+}
+
+#definition-box-container div a.active {
+    font-weight: bold;
+}
+p.post-meta {font-size: 80%; color: #777;}
+
+.entry-date{font-size:14px;font-size:0.875rem;line-height:1.71429;margin-bottom:0;text-transform:uppercase;}
+
+/* search area */
+#search-demo-container ul#results-container {
+    list-style: none;
+    font-size: 12px;
+    background-color: white;
+    position: absolute;
+    top: 40px; /* if you change anything about the nav, you'll prob. need to reset the top and left values here.*/
+    left: 20px;
+    z-index: -1;
+    width:223px;
+    border-left: 1px solid #dedede;
+    box-shadow: 2px 3px 2px #dedede;
+}
+
+/* make room for the nav bar */
+h1[id],
+h2[id],
+h3[id],
+h4[id],
+h5[id],
+h6[id],
+dt[id]{
+padding-top: 60px;
+margin-top: -40px
+}
+
+ul#results-container a {
+    background-color: transparent;
+}
+
+ul#results-container a:hover {
+    color: black;
+}
+
+
+#search-demo-container a:hover {
+    color: black;
+}
+#search-input {
+    padding: .5em;
+    margin-left:20px;
+    width:20em;
+    font-size: 0.8em;
+    -webkit-box-sizing: border-box;
+    -moz-box-sizing: border-box;
+    box-sizing: border-box;
+    float: right;
+    margin-top:10px;
+}
+/* end search */
+
+.filter-options {
+    margin-bottom: 20px;
+}
+.filter-options button {
+    margin: 3px;
+}
+
+
+
+li.dropdownActive a {
+    font-weight: bold;
+}
+
+
+.post-content a.fa-rss {
+    color: orange;
+}
+
+
+.navbar-inverse .navbar-nav > li > a {
+    background-color: transparent;
+    margin-top:10px;
+}
+
+.post-content .rssfeedLink {
+    color: #248EC2;
+}
+
+footer {
+    font-size: smaller;
+}
+
+/* FAQ page */
+#accordion .panel-heading {
+    font-size: 12px;
+}
+
+a.accordion-toggle, a.accordion-collapsed {
+    font-size: 14px;
+    text-decoration: none;
+}
+
+/* navgoco sidebar styles (customized) */
+.nav, .nav ul, .nav li {
+    list-style: none;
+}
+
+.nav ul {
+    padding: 0;
+    /*margin: 0 0 0 18px;*/
+    margin:0;
+}
+
+.nav {
+    /* padding: 4px;*/
+    padding:0;
+    margin: 0;
+}
+
+.nav > li {
+    margin: 1px 0;
+}
+
+.nav > li li {
+    margin: 2px 0;
+}
+
+.nav a {
+    color: #333;
+    display: block;
+    outline: none;
+    text-decoration: none;
+}
+
+.nav li > a > span {
+    float: right;
+    font-size: 19px;
+    font-weight: bolder;
+}
+
+
+.nav li > a > span:after {
+    content: '\25be';
+}
+.nav li.open > a > span:after {
+    content: '\25b4';
+}
+
+.nav a:hover, .nav a:focus, .nav li.active > a {
+    background-color: #8D8D8D;
+    color: #f5f5f5;
+}
+
+.nav > li.active > a {
+    background-color: #347DBE;
+}
+
+.nav li a {
+    line-height: 18px;
+    padding: 2px 10px;
+    background-color: #f1f1f1;
+}
+
+.nav > li > a {
+    line-height: 20px;
+    padding: 4px 10px;
+}
+
+ul#mysidebar {
+    border-radius:0;
+}
+
+ul.nav li ul {
+   font-size: 10pt;
+}
+
+.nav ul li a {
+    background-color: #FAFAFA;
+}
+
+.nav li a {
+    padding-right:10px;
+}
+
+.nav li a:hover {
+    background-color: #8D8D8D;
+}
+
+.nav ul li a {
+    border-top:1px solid whitesmoke;
+    padding-left:10px;
+}
+/* end sidebar */
+
+.navbar-inverse .navbar-nav > .active > a, .navbar-inverse .navbar-nav > .active > a:hover, .navbar-inverse .navbar-nav > .active > a:focus {
+    border-radius:5px;
+}
+
+.navbar-inverse .navbar-nav>.open>a, .navbar-inverse .navbar-nav>.open>a:focus, .navbar-inverse .navbar-nav>.open>a:hover {
+    border-radius: 5px;
+}
+
+.footer {
+    text-align: right;
+}
+
+.footerMeta {
+    background-color: whitesmoke;
+    padding: 10px;
+    max-width: 250px;
+    border-radius: 5px;
+    margin-top: 50px;
+    font-style:italic;
+    font-size:12px;
+}
+
+img.screenshotSmall {
+    max-width: 300px;
+}
+
+
+dl dt p {
+    margin-left:20px;
+}
+
+
+dl dd {
+    margin-top:10px;
+    margin-bottom:10px;
+}
+
+dl.dl-horizontal dd {
+    padding-top: 20px;
+}
+
+figcaption {
+    padding-bottom:12px;
+    padding-top:6px;
+    max-width: 90%;
+    margin-bottom:20px;
+    font-style: italic;
+    color: gray;
+}
+
+.testing {
+    color: orange;
+}
+
+.preference {
+    color: red;
+}
+
+
+table.dataTable thead {
+    background-color: #444;
+}
+
+section table tr.success {
+    background-color: #dff0d8 !important;
+}
+
+table tr.info {
+    background-color: #d9edf7 !important;
+}
+
+section table tr.warning, table tr.testing, table tr.testing > td.sorting_1  {
+    background-color: #fcf8e3 !important;
+}
+section table tr.danger, table tr.preference, table tr.preference > td.sorting_1  {
+    background-color: #f2dede !important;
+}
+
+.orange {
+    color: orange;
+}
+
+table.profile thead tr th {
+    background-color: #248ec2;
+}
+
+table.request thead tr th {
+    background-color: #ED1951;
+}
+
+.audienceLabel {
+    margin: 10px;
+    float: right;
+    border:1px solid #dedede;
+    padding:7px;
+}
+
+.prefaceAudienceLabel {
+    color: gray;
+    text-align: center;
+    margin:5px;
+}
+span.myLabel {
+    padding-left:10px;
+    padding-right:10px;
+}
+
+a.dropdown-toggle, .navbar-inverse .navbar-nav > li > a  {
+    margin-left: 10px;
+}
+
+hr.faded {
+    border: 0;
+    height: 1px;
+    background-image: -webkit-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
+    background-image:    -moz-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
+    background-image:     -ms-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
+    background-image:      -o-linear-gradient(left, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
+    background-image:         linear-gradient(to right, rgba(0,0,0,0), rgba(0,0,0,0.75), rgba(0,0,0,0));
+}
+
+hr.shaded {
+    height: 12px;
+    border: 0;
+    box-shadow: inset 0 6px 6px -6px rgba(0,0,0,0.5);
+    margin-top: 70px;
+    background: white;
+    width: 100%;
+    margin-bottom: 10px;
+}
+
+.fa-6x{font-size:900%;}
+.fa-7x{font-size:1100%;}
+.fa-8x{font-size:1300%;}
+.fa-9x{font-size:1500%;}
+.fa-10x{font-size:1700%;}
+
+i.border {
+    padding: 10px 20px;
+    background-color: whitesmoke;
+}
+
+a[data-toggle] {
+    color: #248EC2;
+}
+
+.summary {
+    font-size:120%;
+    color: #808080;
+    margin:20px 0 20px 0;
+    border-left: 5px solid #ED1951;
+    padding-left: 10px;
+
+}
+
+.summary:before {
+    content: "Summary: ";
+    font-weight: bold;
+}
+
+
+a.fa.fa-envelope-o.mailto {
+    font-weight: 600;
+}
+
+.nav-tabs > li.active > a, .nav-tabs > li.active > a:hover, .nav-tabs > li.active > a:focus {
+    background-color: #248ec2;
+    color: white;
+}
+
+ol li ol li {list-style-type: lower-alpha;}
+ol li ul li {list-style-type: disc;}
+
+li img {clear:both; }
+
+div#toc ul li ul li {
+    list-style-type: none;
+    margin: 5px 0 0 0;
+}
+
+.tab-content {
+    padding: 15px;
+    background-color: #FAFAFA;
+}
+
+span.tagTitle {font-weight: 500;}
+
+li.activeSeries {
+    font-weight: bold;
+}
+
+.seriesContext .dropdown-menu li.active {
+    font-weight: bold;
+    margin-left: 43px;
+    font-size:18px;
+}
+
+div.tags {padding: 10px 5px;}
+
+.tabLabel {
+    font-weight: normal;
+}
+
+hr {
+    border: 0;
+    border-bottom: 1px dashed #ccc;
+    background: #999;
+    margin: 30px 0;
+    width: 90%;
+    margin-left: auto;
+    margin-right: auto;
+}
+
+button.cursorNorm {
+    cursor: pointer;
+}
+
+span.otherProgrammingLanguages {
+    font-style: normal;
+}
+
+a[data-toggle="tooltip"] {
+    color: #649345;
+    font-style: italic;
+    cursor: default;
+}
+
+.seriesNext, .seriesContext {
+    margin-top: 15px;
+    margin-bottom: 15px;
+}
+
+.seriescontext ol li {
+    list-style-type: upper-roman;
+}
+
+ol.series li {
+    list-style-type: decimal;
+    margin-left: 40px;
+    padding-left: 0;
+}
+
+.siteTagline {
+    font-size: 200%;
+    font-weight: bold;
+    color: silver;
+    font-family: monospace;
+    text-align: center;
+    line-height: 10px;
+    margin: 20px 0;
+    display: block;
+}
+
+.versionTagline {
+    text-align: center;
+    margin-bottom: 20px;
+    font-family: courier;
+    color: #444;
+    display:block;
+}
+
+#mysidebar .nav ul {
+    background-color: #FAFAFA;
+}
+.nav ul.series li {
+    list-style: decimal;
+    font-size:12px;
+}
+
+.nav ul.series li a:hover {
+    background-color: gray;
+}
+.nav ul.series {
+    padding-left: 30px;
+    background-color: #FAFAFA;
+}
+
+/*
+a.dropdown-toggle.otherProgLangs {
+    color: #f7e68f !important;
+}
+*/
+
+span.muted {color: #666;}
+
+table code {background-color: transparent;}
+
+.highlight .err {
+    color: #a61717;
+    background-color: transparent !important;
+}
+
+#json-box-container pre {
+    margin: 0;
+}
+
+.video-js {
+    margin: 30px 0;
+}
+
+video {
+    display: block;
+    margin: 30px 0;
+    border: 1px solid #c0c0c0;
+}
+
+
+p.required, p.dataType {display: block; color: #c0c0c0; font-size: 80%; margin-left:4px;}
+
+dd {margin-left:20px;}
+
+.post-content img.inline {
+    margin:0;
+    margin-bottom:6px;
+}
+.panel-heading {
+    font-weight: bold;
+}
+
+a.accordion-toggle {
+    font-style: normal;
+}
+
+span.red {
+    color: red;
+    font-family: Monaco, Menlo, Consolas, "Courier New", monospace;
+}
+
+h3.codeExplanation {
+    font-size:18px;
+    font-style:normal;
+    color: black;
+    line-height: 24px;
+}
+
+span.soft {
+    color: #c0c0c0;
+}
+
+.githubEditButton {
+    margin-bottom:7px;
+}
+
+.endpoint {
+    padding: 15px;
+    background-color: #f0f0f0;
+    font-family: courier;
+    font-size: 110%;
+    margin: 20px 0;
+    color: #444;
+}
+
+.parameter {
+    font-family: courier;
+    color: red !important;
+}
+
+.formBoundary {
+    border: 1px solid gray;
+    padding: 15px;
+    margin: 15px 0;
+    background-color: whitesmoke;
+}
+
+@media (max-width: 767px) {
+    .navbar-inverse .navbar-nav .open .dropdown-menu > li > a {
+        color: #444;
+    }
+}
+
+@media (max-width: 990px) {
+    #mysidebar {
+        position: relative;
+    }
+}
+
+@media (min-width: 1000px) {
+
+    ul#mysidebar {
+        width: 225px;
+    }
+}
+
+@media (max-width: 900px) {
+
+    ul#mysidebar {
+        max-width: 100%;
+    }
+}
+
+.col-md-9 img {
+    max-width: 100%;
+    max-height: 100%;
+}
+
+.videoThumbs img {
+    float: left;
+    margin:15px 15px 15px 0;
+    box-shadow: 2px 2px 1px #f0f0f0;
+    border: 1px solid #dedede;
+}
+
+@media only screen and (min-width: 900px) {
+    .col-md-9 img {
+        max-width: 700px;
+        max-height: 700px;
+    }
+}
+
+@media only screen and (min-device-width: 900px) {
+    .col-md-9 img {
+        max-width: 700px;
+        max-height: 700px;
+    }
+}
+
+*:hover > .anchorjs-link {
+    transition: color .25s linear;
+    text-decoration: none;
+}
+
+.kbCaption {
+    color: white;
+    background-color: #444;
+    padding:10px;
+}
+
+.btn-default {
+    margin-bottom: 10px;
+}
+
+/* algolia search */
+
+.search {
+    text-align: left;
+}
+.search input {
+    font-size: 20px;
+    width: 300px;
+}
+.results {
+    margin: auto;
+    text-align: left;
+}
+.results ul {
+    list-style-type: none;
+    padding: 0;
+}
+
+/* algolia */
+
+div.results {
+    position: absolute;
+    background-color: white;
+    width: 100%;
+}
+
+.post-meta {
+    font-size: 14px;
+    color: #828282;
+}
+
+.post-link {
+    font-size: 22px;
+}
+
+.post-list p {
+    margin: 10px 0;
+}
+
+time {
+    margin-right: 10px;
+}
+
+p.post-meta time {
+    margin-right: 0;
+}
+
+span.label.label-default {
+    background-color: gray;
+}
+
+span.label.label-primary {
+    background-color: #f0ad4e;
+}
+.col-lg-12 .nav li a {background-color: white;}
+
+a code {
+    color: #2156a5;
+}
+
+table th code {
+    color: white;
+}
+
+ol li ul li ol li {
+    list-style: decimal;
+}
+
+ol li ul li ol li ul li{
+    list-style: disc;
+}
+
+
+.box {
+    padding: 10px;
+    border: 1px solid #888;
+    box-shadow: 2px 2px 4px #dedede;
+    width: 100px;
+    height: 80px;
+    background-color: #f5f5f5;
+    font-family: Arial;
+    font-size: 12px;
+    hyphens: auto;
+    float: left;
+}
+
+.box:hover {
+    background-color: #f0f0f0;
+}
+
+#userMap {
+    overflow-x: auto;
+    overflow-y: auto;
+    padding: 20px;
+    min-width: 770px;
+}
+
+#userMap .active {
+    background-color: #d6f5d6;
+    border:1px solid #555;
+    font-weight: bold;
+}
+
+h2.userMapTitle {
+    font-family: Arial;
+}
+
+#userMap a:hover {
+    text-decoration: none;
+}
+
+div.arrow {
+    max-width: 50px;
+    margin-left: 15px;
+    margin-right: 15px;
+    font-size: 20px;
+}
+
+#userMap div.arrow, #userMap div.content {
+    float: left;
+}
+
+.clearfix {
+    clear: both;
+}
+
+
+#userMap div.arrow {
+    position: relative;
+    top: 30px;
+}
+
+.box1 {
+    margin-left:0;
+}
+
+button.btn.btn-default.btn-lg.modalButton1 {
+    margin-left: -20px;
+}
+
+div.box.box1 {
+    margin-left: -20px;
+}
+
+#userMap .btn-lg {
+    width: 100px;
+    height: 80px;
+}
+
+#userMap .complexArrow {
+    font-size: 22px;
+    margin: 0 10px;
+}
+
+
+#userMap .btn-lg .active {
+    background-color: #d6f5d6;
+}
+
+#userMap .btn-lg {
+    white-space: pre-wrap;       /* css-3 */
+    white-space: -moz-pre-wrap;  /* Mozilla, since 1999 */
+    white-space: -pre-wrap;      /* Opera 4-6 */
+    white-space: -o-pre-wrap;    /* Opera 7 */
+    word-wrap: break-word;       /* Internet Explorer 5.5+ */
+    font-size: 14px;
+}
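+
+/* Note: current browsers ignore the vendor-prefixed white-space values used
+   above; a sketch of the modern equivalent (an untested assumption, left
+   commented out so the existing rules are unchanged) would be:
+
+#userMap .btn-lg {
+    white-space: pre-wrap;
+    overflow-wrap: break-word;
+}
+*/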
+
+/*
+ * Let's target IE to respect aspect ratios and sizes for img tags containing SVG files
+ *
+ * [1] IE9
+ * [2] IE10+
+ */
+/* 1 */
+.ie9 img[src$=".svg"] {
+    width: 100%;
+}
+/* 2 */
+@media screen and (-ms-high-contrast: active), (-ms-high-contrast: none) {
+    img[src$=".svg"] {
+        width: 100%;
+    }
+}