Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/12 14:05:26 UTC

[18/37] lucene-solr:branch_6x: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/index-replication.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/index-replication.adoc b/solr/solr-ref-guide/src/index-replication.adoc
new file mode 100644
index 0000000..b606b00
--- /dev/null
+++ b/solr/solr-ref-guide/src/index-replication.adoc
@@ -0,0 +1,293 @@
+= Index Replication
+:page-shortname: index-replication
+:page-permalink: index-replication.html
+
+Index Replication distributes complete copies of a master index to one or more slave servers. The master server continues to manage updates to the index. All querying is handled by the slaves. This division of labor enables Solr to scale to provide adequate responsiveness to queries against large search volumes.
+
+The figure below shows a Solr configuration using index replication. The master server's index is replicated on the slaves.
+
+.A Solr index can be replicated across multiple slave servers, which then process requests.
+image::images/index-replication/worddav2b7e14725d898b4104cdd9c502fc77cd.png[image,width=159,height=235]
+
+
+[[IndexReplication-IndexReplicationinSolr]]
+== Index Replication in Solr
+
+Solr includes a Java implementation of index replication that works over HTTP:
+
+* The configuration affecting replication is controlled by a single file, `solrconfig.xml`
+* Supports the replication of configuration files as well as index files
+* Works across platforms with same configuration
+* No reliance on OS-dependent file system features (e.g., hard links)
+* Tightly integrated with Solr; an admin page offers fine-grained control of each aspect of replication
+* The Java-based replication feature is implemented as a request handler. Configuring replication is therefore similar to any normal request handler.
+
+.Replication In SolrCloud
+[NOTE]
+====
+Although there is no explicit concept of "master/slave" nodes in a <<solrcloud.adoc#solrcloud,SolrCloud>> cluster, the `ReplicationHandler` discussed on this page is still used by SolrCloud as needed to support "shard recovery", but this is done in a peer-to-peer manner.
+
+When using SolrCloud, the `ReplicationHandler` must be available via the `/replication` path. Solr configures this implicitly unless overridden in your `solrconfig.xml`. If you do override the default behavior, make certain that you do not set any of the "master" or "slave" configuration options mentioned below, or they will interfere with normal SolrCloud operation.
+====
+
+[[IndexReplication-ReplicationTerminology]]
+== Replication Terminology
+
+The table below defines the key terms associated with Solr replication.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Term |Definition
+|Index |A Lucene index is a directory of files. These files make up the searchable and returnable data of a Solr Core.
+|Distribution |The copying of an index from the master server to all slaves. The distribution process takes advantage of Lucene's index file structure.
+|Inserts and Deletes |As inserts and deletes occur in the index, the directory remains unchanged. Documents are always inserted into newly created files. Documents that are deleted are not removed from the files; they are flagged in the file as deleted, and are not removed until the index is optimized.
+|Master and Slave |A Solr replication master is a single node which receives all updates initially and keeps everything organized. Solr replication slave nodes receive no updates directly, instead all changes (such as inserts, updates, deletes, etc.) are made against the single master node. Changes made on the master are distributed to all the slave nodes which service all query requests from the clients.
+|Update |An update is a single change request against a single Solr instance. It may be a request to delete a document, add a new document, change a document, delete all documents matching a query, etc. Updates are handled synchronously within an individual Solr instance.
+|Optimization |A process that compacts the index and merges segments in order to improve query performance. Optimization should only be run on the master nodes. An optimized index may give query performance gains compared to an index that has become fragmented over a period of time with many updates. Distributing an optimized index requires a much longer time than the distribution of new segments to an un-optimized index.
+|Segments |A self contained subset of an index consisting of some documents and data structures related to the inverted index of terms in those documents.
+|mergeFactor |A parameter that controls the number of segments in an index. For example, when mergeFactor is set to 3, Solr will fill one segment with documents until the limit maxBufferedDocs is met, then it will start a new segment. When the number of segments specified by mergeFactor is reached (in this example, 3), then Solr will merge all the segments into a single segment, then begin writing new documents to a new segment.
+|Snapshot |A directory containing hard links to the data files of an index. Snapshots are distributed from the master node when the slaves pull them, "smart copying" any segments that the slave node does not already have.
+|===
+
+[[IndexReplication-ConfiguringtheReplicationHandler]]
+== Configuring the ReplicationHandler
+
+In addition to `ReplicationHandler` configuration options specific to the master/slave roles, there are a few special configuration options that are generally supported (even when using SolrCloud).
+
+* `maxNumberOfBackups`: an integer value dictating the maximum number of backups this node will keep on disk as it receives `backup` commands.
+* Similar to most other request handlers in Solr you may configure a set of <<requesthandlers-and-searchcomponents-in-solrconfig.adoc#RequestHandlersandSearchComponentsinSolrConfig-SearchHandlers,defaults, invariants, and/or appends>> parameters corresponding with any request parameters supported by the `ReplicationHandler` when <<IndexReplication-HTTPAPICommandsfortheReplicationHandler,processing commands>>.
+
+[[IndexReplication-ConfiguringtheReplicationRequestHandleronaMasterServer]]
+=== Configuring the Replication RequestHandler on a Master Server
+
+Before running a replication, you should set the following parameters on initialization of the handler:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Name |Description
+|replicateAfter |String specifying action after which replication should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. If you use "startup", you also need a "commit" and/or "optimize" entry if you want to trigger replication on future commits or optimizes.
+|backupAfter |String specifying action after which a backup should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. It is not required for replication; it just makes a backup.
+|maxNumberOfBackups |Integer specifying how many backups to keep. This can be used to delete all but the most recent N backups.
+|confFiles |The configuration files to replicate, separated by a comma.
+|commitReserveDuration |The amount of time the master reserves each commit point so a slave can finish downloading it, roughly the time taken to download 5MB from the master to a slave. If your commits are very frequent and your network is slow, you can increase this value. The default is 10 seconds.
+|===
+
+The example below shows a possible 'master' configuration for the `ReplicationHandler`, including a fixed number of backups and an invariant setting for the `maxWriteMBPerSec` request parameter to prevent slaves from saturating the master's network interface:
+
+[source,xml]
+----
+<requestHandler name="/replication" class="solr.ReplicationHandler">
+  <lst name="master">
+    <str name="replicateAfter">optimize</str>
+    <str name="backupAfter">optimize</str>
+    <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str>
+    <str name="commitReserveDuration">00:00:10</str>
+  </lst>
+  <int name="maxNumberOfBackups">2</int>
+  <lst name="invariants">
+    <str name="maxWriteMBPerSec">16</str>
+  </lst>
+</requestHandler>
+----
+
+[[IndexReplication-Replicatingsolrconfig.xml]]
+==== Replicating `solrconfig.xml`
+
+In the configuration file on the master server, include a line like the following:
+
+[source,xml]
+----
+<str name="confFiles">solrconfig_slave.xml:solrconfig.xml,x.xml,y.xml</str>
+----
+
+This ensures that the local configuration `solrconfig_slave.xml` will be saved as `solrconfig.xml` on the slave. All other files will be saved with their original names.
+
+On the master server, the file name of the slave configuration file can be anything, as long as the name is correctly identified in the `confFiles` string; then it will be saved as whatever file name appears after the colon ':'.
+
+[[IndexReplication-ConfiguringtheReplicationRequestHandleronaSlaveServer]]
+=== Configuring the Replication RequestHandler on a Slave Server
+
+The code below shows how to configure a ReplicationHandler on a slave.
+
+[source,xml]
+----
+<requestHandler name="/replication" class="solr.ReplicationHandler">
+  <lst name="slave">
+
+    <!-- fully qualified url for the replication handler of master. It is
+         possible to pass on this as a request param for the fetchindex command -->
+    <str name="masterUrl">http://remote_host:port/solr/core_name/replication</str>
+
+    <!-- Interval in which the slave should poll master.  Format is HH:mm:ss .
+         If this is absent slave does not poll automatically.
+
+         But a fetchindex can be triggered from the admin or the http API -->
+
+    <str name="pollInterval">00:00:20</str>
+
+    <!-- THE FOLLOWING PARAMETERS ARE USUALLY NOT REQUIRED-->
+
+    <!-- To use compression while transferring the index files. The possible
+         values are internal|external.  If the value is 'external' make sure
+         that your master Solr has the settings to honor the accept-encoding header.
+         See here for details: http://wiki.apache.org/solr/SolrHttpCompression
+         If it is 'internal' everything will be taken care of automatically.
+         USE THIS ONLY IF YOUR BANDWIDTH IS LOW.
+         THIS CAN ACTUALLY SLOWDOWN REPLICATION IN A LAN -->
+    <str name="compression">internal</str>
+
+    <!-- The following values are used when the slave connects to the master to
+         download the index files.  Default values implicitly set as 5000ms and
+         10000ms respectively. The user DOES NOT need to specify these unless the
+         bandwidth is extremely low or if there is an extremely high latency -->
+
+    <str name="httpConnTimeout">5000</str>
+    <str name="httpReadTimeout">10000</str>
+
+    <!-- If HTTP Basic authentication is enabled on the master, then the slave
+         can be configured with the following -->
+
+    <str name="httpBasicAuthUser">username</str>
+    <str name="httpBasicAuthPassword">password</str>
+  </lst>
+</requestHandler>
+----
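+
+Instead of relying solely on polling, an index fetch can also be triggered on demand, optionally overriding `masterUrl` for a one-time replication (as noted under the `fetchindex` command below). A minimal sketch, assuming hypothetical `slave_host` and `master_host` nodes and a core named `core_name`:
+
+[source,bash]
+----
+# Trigger an immediate index fetch on the slave. The masterUrl parameter
+# overrides, for this request only, the master configured in solrconfig.xml.
+curl "http://slave_host:8983/solr/core_name/replication?command=fetchindex&masterUrl=http://master_host:8983/solr/core_name/replication"
+----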
+
+[[IndexReplication-SettingUpaRepeaterwiththeReplicationHandler]]
+== Setting Up a Repeater with the ReplicationHandler
+
+A master may be able to serve only so many slaves without affecting performance. Some organizations have deployed slave servers across multiple data centers. If each slave downloads the index from a remote data center, the resulting download may consume too much network bandwidth. To avoid performance degradation in cases like this, you can configure one or more slaves as repeaters. A repeater is simply a node that acts as both a master and a slave.
+
+* To configure a server as a repeater, the definition of the Replication `requestHandler` in the `solrconfig.xml` file must include both the master and slave configuration sections.
+* Be sure to set the `replicateAfter` parameter to commit, even if `replicateAfter` is set to optimize on the main master. This is because on a repeater (or any slave), a commit is called only after the index is downloaded. The optimize command is never called on slaves.
+* Optionally, one can configure the repeater to fetch compressed files from the master through the compression parameter to reduce the index download time.
+
+Here is an example of a ReplicationHandler configuration for a repeater:
+
+[source,xml]
+----
+<requestHandler name="/replication" class="solr.ReplicationHandler">
+  <lst name="master">
+    <str name="replicateAfter">commit</str>
+    <str name="confFiles">schema.xml,stopwords.txt,synonyms.txt</str>
+  </lst>
+  <lst name="slave">
+    <str name="masterUrl">http://master.solr.company.com:8983/solr/core_name/replication</str>
+    <str name="pollInterval">00:00:60</str>
+  </lst>
+</requestHandler>
+----
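+
+To verify that a repeater is acting in both roles, you can query the `details` command (described under <<IndexReplication-HTTPAPICommandsfortheReplicationHandler,HTTP API Commands for the ReplicationHandler>> below); its response should report both the master and slave state of the node. A sketch, assuming the repeater runs locally:
+
+[source,bash]
+----
+# On a repeater, the details response covers both the master and slave roles.
+curl "http://localhost:8983/solr/core_name/replication?command=details"
+----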
+
+[[IndexReplication-CommitandOptimizeOperations]]
+== Commit and Optimize Operations
+
+When a commit or optimize operation is performed on the master, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the `replicateAfter` parameter in the configuration to decide which types of events should trigger replication.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Setting on the Master |Description
+|commit |Triggers replication whenever a commit is performed on the master index.
+|optimize |Triggers replication whenever the master index is optimized.
+|startup |Triggers replication whenever the master index starts up.
+|===
+
+The `replicateAfter` parameter can accept multiple arguments. For example:
+
+[source,xml]
+----
+<str name="replicateAfter">startup</str>
+<str name="replicateAfter">commit</str>
+<str name="replicateAfter">optimize</str>
+----
+
+[[IndexReplication-SlaveReplication]]
+== Slave Replication
+
+The master is totally unaware of the slaves.
+
+The slave continuously keeps polling the master (depending on the `pollInterval` parameter) to check the current index version of the master. If the slave finds out that the master has a newer version of the index it initiates a replication process. The steps are as follows:
+
+* The slave issues a `filelist` command to get the list of the files. This command returns the names of the files as well as some metadata (for example, size, a lastmodified timestamp, an alias if any).
+* The slave checks with its own index if it has any of those files in the local index. It then runs the `filecontent` command to download the missing files. This uses a custom format (akin to HTTP chunked encoding) to download the full content or a part of each file. If the connection breaks in between, the download resumes from the point it failed. At any point, the slave tries 5 times before giving up on the replication altogether.
+* The files are downloaded into a temp directory, so that if either the slave or the master crashes during the download process, no files will be corrupted. Instead, the current replication will simply abort.
+* After the download completes, all the new files are moved to the live index directory and each file's timestamp is made the same as its counterpart on the master.
+* A commit command is issued on the slave by the slave's ReplicationHandler and the new index is loaded.
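+
+The polling cycle can also be checked by hand with the `indexversion` command described below. A sketch, assuming hypothetical `master_host` and `slave_host` nodes:
+
+[source,bash]
+----
+# If both responses report the same index version and generation,
+# the slave is in sync with its master.
+curl "http://master_host:8983/solr/core_name/replication?command=indexversion"
+curl "http://slave_host:8983/solr/core_name/replication?command=indexversion"
+----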
+
+[[IndexReplication-ReplicatingConfigurationFiles]]
+=== Replicating Configuration Files
+
+To replicate configuration files, list them using the `confFiles` parameter. Only files found in the `conf` directory of the master's Solr instance will be replicated.
+
+Solr replicates configuration files only when the index itself is replicated. That means even if a configuration file is changed on the master, that file will be replicated only after there is a new commit/optimize on the master's index.
+
+Unlike the index files, where the timestamp is good enough to figure out if they are identical, configuration files are compared against their checksum. The `schema.xml` files (on master and slave) are judged to be identical if their checksums are identical.
+
+As a precaution when replicating configuration files, Solr copies configuration files to a temporary directory before moving them into their ultimate location in the conf directory. The old configuration files are then renamed and kept in the same `conf/` directory. The ReplicationHandler does not automatically clean up these old files.
+
+If a replication involved downloading at least one configuration file, the ReplicationHandler issues a core-reload command instead of a commit command.
+
+[[IndexReplication-ResolvingCorruptionIssuesonSlaveServers]]
+=== Resolving Corruption Issues on Slave Servers
+
+If documents are added to the slave, then the slave is no longer in sync with its master. However, the slave will not undertake any action to put itself in sync, until the master has new index data.
+
+When a commit operation takes place on the master, the index version of the master becomes different from that of the slave. The slave then fetches the list of files and finds that some of the files present on the master are also present in the local index but with different sizes and timestamps. This means that the master and slave have incompatible indexes.
+
+To correct this problem, the slave then copies all the index files from the master to a new index directory and asks the core to load the fresh index from the new directory.
+
+[[IndexReplication-HTTPAPICommandsfortheReplicationHandler]]
+== HTTP API Commands for the ReplicationHandler
+
+You can use the HTTP commands below to control the ReplicationHandler's operations.
+
+[width="100%",options="header",]
+|===
+|Command |Description
+|http://_master_host:port_/solr/_core_name_/replication?command=enablereplication |Enables replication on the master for all its slaves.
+|http://_master_host:port_/solr/_core_name_/replication?command=disablereplication |Disables replication on the master for all its slaves.
+|http://_host:port_/solr/_core_name_/replication?command=indexversion |Returns the version of the latest replicatable index on the specified master or slave.
+|http://_slave_host:port_/solr/_core_name_/replication?command=fetchindex |Forces the specified slave to fetch a copy of the index from its master. If you like, you can pass an extra attribute such as masterUrl or compression (or any other parameter which is specified in the `<lst name="slave">` tag) to do a one time replication from a master. This obviates the need for hard-coding the master in the slave.
+|http://_slave_host:port_/solr/_core_name_/replication?command=abortfetch |Aborts copying an index from a master to the specified slave.
+|http://_slave_host:port_/solr/_core_name_/replication?command=enablepoll |Enables the specified slave to poll for changes on the master.
+|http://_slave_host:port_/solr/_core_name_/replication?command=disablepoll |Disables the specified slave from polling for changes on the master.
+|http://_slave_host:port_/solr/_core_name_/replication?command=details |Retrieves configuration details and current status.
+|http://_host:port_/solr/_core_name_/replication?command=filelist&generation=<_generation-number_> |Retrieves a list of Lucene files present in the specified host's index. You can discover the generation number of the index by running the `indexversion` command.
+|http://_master_host:port_/solr/_core_name_/replication?command=backup a|
+Creates a backup on the master if there is committed index data in the server; otherwise, does nothing. This command is useful for making periodic backups.
+
+Supported request parameters:
+
+* `numberToKeep`: This request parameter can be used with the backup command unless the `maxNumberOfBackups` initialization parameter has been specified on the handler, in which case `maxNumberOfBackups` is always used and attempts to use the `numberToKeep` request parameter will cause an error.
+* `name`: (optional) The backup name. The snapshot will be created in a directory called snapshot.<name> within the data directory of the core. By default, the name is generated using the date in `yyyyMMddHHmmssSSS` format. If the `location` parameter is passed, that location will be used instead of the data directory.
+* `location`: Backup location
+
+|http://_master_host:port_/solr/_core_name_/replication?command=deletebackup a|
+Deletes any backup created using the `backup` command.
+
+Request parameters:
+
+* `name`: The name of the snapshot. A snapshot with the name snapshot.<name> must exist; if not, an error is thrown.
+* `location`: The location where the snapshot was created.
+
+|===
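+
+For example, the following sketch (assuming a local master and a hypothetical backup name) creates a named backup and later deletes it:
+
+[source,bash]
+----
+# Create a backup in a directory named snapshot.weekly within the
+# core's data directory (or under the location parameter, if given).
+curl "http://localhost:8983/solr/core_name/replication?command=backup&name=weekly"
+
+# Delete that same backup once it is no longer needed.
+curl "http://localhost:8983/solr/core_name/replication?command=deletebackup&name=weekly"
+----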
+
+[[IndexReplication-DistributionandOptimization]]
+== Distribution and Optimization
+
+Optimizing an index is not something most users should generally worry about, but users should be aware of the impacts of optimizing an index when using the `ReplicationHandler`.
+
+The time required to optimize a master index can vary dramatically. A small index may be optimized in minutes. A very large index may take hours. The variables include the size of the index and the speed of the hardware.
+
+Distributing a newly optimized index may take only a few minutes or up to an hour or more, again depending on the size of the index and the performance capabilities of network connections and disks. During optimization the machine is under load and does not process queries very well. Given a schedule of updates being driven a few times an hour to the slaves, we cannot run an optimize with every committed snapshot.
+
+Copying an optimized index means that the *entire* index will need to be transferred during the next snappull. This is a large expense, but not nearly as huge as running the optimize everywhere. Consider this example: on a three-slave one-master configuration, distributing a newly-optimized index takes approximately 80 seconds _total_. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each slave node being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that, following the optimize, snappull would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of indexes instead of each being a perfect copy of the master.
+
+Optimizing on the master allows for a straightforward optimization operation. No query slaves need to be taken out of service. The optimized index can be distributed in the background as queries are being normally serviced. The optimization can occur at any time convenient to the application providing index updates.
+
+While optimizing may have some benefits in some situations, a rapidly changing index will not retain those benefits for long, and since optimization is an intensive process, it may be better to consider other options, such as lowering the merge factor (discussed in the section on <<indexconfig-in-solrconfig.adoc#merge-factors,Index Configuration>>).

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/index.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/index.adoc b/solr/solr-ref-guide/src/index.adoc
new file mode 100644
index 0000000..d1f0a84
--- /dev/null
+++ b/solr/solr-ref-guide/src/index.adoc
@@ -0,0 +1,30 @@
+= Apache Solr Reference Guide
+:page-shortname: index
+:page-permalink: index.html
+:page-children: about-this-guide, getting-started, upgrading-solr, using-the-solr-administration-user-interface, documents-fields-and-schema-design, understanding-analyzers-tokenizers-and-filters, indexing-and-basic-data-operations, searching, the-well-configured-solr-instance, managing-solr, solrcloud, legacy-scaling-and-distribution, client-apis, major-changes-from-solr-5-to-solr-6, upgrading-a-solr-cluster, further-assistance, solr-glossary, errata
+
+This reference guide describes Apache Solr, the open source solution for search. You can download Apache Solr from the Solr website at http://lucene.apache.org/solr/.
+
+This Guide contains the following sections:
+
+*<<getting-started.adoc#getting-started,Getting Started>>*: This section guides you through the installation and setup of Solr.
+
+*<<using-the-solr-administration-user-interface.adoc#using-the-solr-administration-user-interface,Using the Solr Administration User Interface>>*: This section introduces the Solr Web-based user interface. From your browser you can view configuration files, submit queries, view logfile settings and Java environment settings, and monitor and control distributed configurations.
+
+*<<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>*: This section describes how Solr organizes its data for indexing. It explains how a Solr schema defines the fields and field types which Solr uses to organize data within the document files it indexes.
+
+*<<understanding-analyzers-tokenizers-and-filters.adoc#understanding-analyzers-tokenizers-and-filters,Understanding Analyzers, Tokenizers, and Filters>>*: This section explains how Solr prepares text for indexing and searching. Analyzers parse text and produce a stream of tokens, lexical units used for indexing and searching. Tokenizers break field data down into tokens. Filters perform other transformational or selective work on token streams.
+
+*<<indexing-and-basic-data-operations.adoc#indexing-and-basic-data-operations,Indexing and Basic Data Operations>>*: This section describes the indexing process and basic index operations, such as commit, optimize, and rollback.
+
+*<<searching.adoc#searching,Searching>>*: This section presents an overview of the search process in Solr. It describes the main components used in searches, including request handlers, query parsers, and response writers. It lists the query parameters that can be passed to Solr, and it describes features such as boosting and faceting, which can be used to fine-tune search results.
+
+*<<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,The Well-Configured Solr Instance>>*: This section discusses performance tuning for Solr. It begins with an overview of the `solrconfig.xml` file, then tells you how to configure cores with `solr.xml`, how to configure the Lucene index writer, and more.
+
+*<<managing-solr.adoc#managing-solr,Managing Solr>>*: This section discusses important topics for running and monitoring Solr. Other topics include how to back up a Solr instance, and how to run Solr with Java Management Extensions (JMX).
+
+*<<solrcloud.adoc#solrcloud,SolrCloud>>*: This section describes SolrCloud, which provides comprehensive distributed capabilities for Solr.
+
+*<<legacy-scaling-and-distribution.adoc#legacy-scaling-and-distribution,Legacy Scaling and Distribution>>*: This section tells you how to grow a Solr distribution by dividing a large index into sections called shards, which are then distributed across multiple servers, or by replicating a single index across multiple servers.
+
+*<<client-apis.adoc#client-apis,Client APIs>>*: This section tells you how to access Solr through various client APIs, including JavaScript, JSON, and Ruby.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
new file mode 100644
index 0000000..97fe220
--- /dev/null
+++ b/solr/solr-ref-guide/src/indexconfig-in-solrconfig.adoc
@@ -0,0 +1,198 @@
+= IndexConfig in SolrConfig
+:page-shortname: indexconfig-in-solrconfig
+:page-permalink: indexconfig-in-solrconfig.html
+
+The `<indexConfig>` section of `solrconfig.xml` defines low-level behavior of the Lucene index writers.
+
+By default, the settings are commented out in the sample `solrconfig.xml` included with Solr, which means the defaults are used. In most cases, the defaults are fine.
+
+[source,xml]
+----
+<indexConfig>
+  ...
+</indexConfig>
+----
+
+[[IndexConfiginSolrConfig-WritingNewSegments]]
+== Writing New Segments
+
+[[IndexConfiginSolrConfig-ramBufferSizeMB]]
+=== `ramBufferSizeMB`
+
+Once accumulated document updates exceed this much memory space (defined in megabytes), the pending updates are flushed. This can also create new segments or trigger a merge. Using this setting is generally preferable to `maxBufferedDocs`. If both `maxBufferedDocs` and `ramBufferSizeMB` are set in `solrconfig.xml`, then a flush will occur when either limit is reached. The default is 100MB.
+
+[source,xml]
+----
+<ramBufferSizeMB>100</ramBufferSizeMB>
+----
+
+[[IndexConfiginSolrConfig-maxBufferedDocs]]
+=== `maxBufferedDocs`
+
+Sets the number of document updates to buffer in memory before they are flushed as a new segment. This may also trigger a merge. The default Solr configuration flushes based on RAM usage (`ramBufferSizeMB`) instead.
+
+[source,xml]
+----
+<maxBufferedDocs>1000</maxBufferedDocs>
+----
+
+[[IndexConfiginSolrConfig-useCompoundFile]]
+=== useCompoundFile
+
+Controls whether newly written (and not yet merged) index segments should use the <<IndexConfiginSolrConfig-CompoundFileSegments,Compound File Segment>> format. The default is false.
+
+[source,xml]
+----
+<useCompoundFile>false</useCompoundFile>
+----
+
+[[IndexConfiginSolrConfig-MergingIndexSegments]]
+== Merging Index Segments
+
+[[IndexConfiginSolrConfig-mergePolicyFactory]]
+=== `mergePolicyFactory`
+
+Defines how merging segments is done.
+
+The default in Solr is to use a `TieredMergePolicy`, which merges segments of approximately equal size, subject to an allowed number of segments per tier.
+
+Other policies available are the `LogByteSizeMergePolicy` and `LogDocMergePolicy`. For more information on these policies, please see {lucene-javadocs}/core/org/apache/lucene/index/MergePolicy.html[the MergePolicy javadocs].
+
+[source,xml]
+----
+<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
+  <int name="maxMergeAtOnce">10</int>
+  <int name="segmentsPerTier">10</int>
+</mergePolicyFactory>
+----
+
+[[merge-factors]]
+=== Controlling Segment Sizes: Merge Factors
+
+The most common adjustments users make to the configuration of TieredMergePolicy (or LogByteSizeMergePolicy) are the "merge factors", which change how many segments should be merged at one time.
+
+For TieredMergePolicy, this is controlled by setting the `<int name="maxMergeAtOnce">` and `<int name="segmentsPerTier">` options, while LogByteSizeMergePolicy has a single `<int name="mergeFactor">` option (all of which default to `10`).
+
+To understand why these options are important, consider what happens when an update is made to an index using LogByteSizeMergePolicy: Documents are always added to the most recently opened segment. When a segment fills up, a new segment is created and subsequent updates are placed there.
+
+If creating a new segment would cause the number of lowest-level segments to exceed the `mergeFactor` value, then all those segments are merged together to form a single large segment. Thus, if the merge factor is 10, each merge results in the creation of a single segment that is roughly ten times larger than each of its ten constituents. When there are 10 of these larger segments, then they in turn are merged into an even larger single segment. This process can continue indefinitely.
+
+When using TieredMergePolicy, the process is the same, but instead of a single `mergeFactor` value, the `segmentsPerTier` setting is used as the threshold to decide if a merge should happen, and the `maxMergeAtOnce` setting determines how many segments should be included in the merge.
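+
+As a sketch, an equivalent configuration based on `LogByteSizeMergePolicyFactory` (assuming that factory class and its single `mergeFactor` argument are available in your Solr version) might look like this:
+
+[source,xml]
+----
+<mergePolicyFactory class="org.apache.solr.index.LogByteSizeMergePolicyFactory">
+  <!-- Merge whenever ten lowest-level segments have accumulated. -->
+  <int name="mergeFactor">10</int>
+</mergePolicyFactory>
+----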
+
+Choosing the best merge factors is generally a trade-off of indexing speed vs. searching speed. Having fewer segments in the index generally accelerates searches, because there are fewer places to look. It can also result in fewer physical files on disk. But to keep the number of segments low, merges will occur more often, which can add load to the system and slow down updates to the index.
+
+Conversely, keeping more segments can accelerate indexing, because merges happen less often, making an update less likely to trigger a merge. But searches become more computationally expensive and will likely be slower, because search terms must be looked up in more index segments. Faster index updates also mean shorter commit turnaround times, which means more timely search results.
+
+[[IndexConfiginSolrConfig-CustomizingMergePolicies]]
+=== Customizing Merge Policies
+
+If the configuration options for the built-in merge policies do not fully suit your use case, you can customize them: either by creating a custom merge policy factory that you specify in your configuration, or by configuring a {solr-javadocs}/solr-core/org/apache/solr/index/WrapperMergePolicyFactory.html[merge policy wrapper] which uses a `wrapped.prefix` configuration option to control how the factory it wraps will be configured:
+
+[source,xml]
+----
+<mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
+  <str name="sort">timestamp desc</str>
+  <str name="wrapped.prefix">inner</str>
+  <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
+  <int name="inner.maxMergeAtOnce">10</int>
+  <int name="inner.segmentsPerTier">10</int>
+</mergePolicyFactory>
+----
+
+The example above shows Solr's {solr-javadocs}/solr-core/org/apache/solr/index/SortingMergePolicyFactory.html[`SortingMergePolicyFactory`] being configured to sort documents in merged segments by `"timestamp desc"`, and wrapped around a `TieredMergePolicyFactory` configured to use the values `maxMergeAtOnce=10` and `segmentsPerTier=10` via the `inner` prefix defined by `SortingMergePolicyFactory`'s `wrapped.prefix` option. For more information on using `SortingMergePolicyFactory`, see <<common-query-parameters.adoc#CommonQueryParameters-ThesegmentTerminateEarlyParameter,the segmentTerminateEarly parameter>>.
+
+[[IndexConfiginSolrConfig-mergeScheduler]]
+=== `mergeScheduler`
+
+The merge scheduler controls how merges are performed. The default `ConcurrentMergeScheduler` performs merges in the background using separate threads. The alternative, `SerialMergeScheduler`, does not perform merges with separate threads.
+
+[source,xml]
+----
+<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
+----
+
+[[IndexConfiginSolrConfig-mergedSegmentWarmer]]
+=== `mergedSegmentWarmer`
+
+When using Solr for <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>, a merged segment warmer can be configured to warm the reader on the newly merged segment before the merge commits. This is not required for near real-time search, but will reduce search latency on opening a new near real-time reader after a merge completes.
+
+[source,xml]
+----
+<mergedSegmentWarmer class="org.apache.lucene.index.SimpleMergedSegmentWarmer"/>
+----
+
+[[IndexConfiginSolrConfig-CompoundFileSegments]]
+== Compound File Segments
+
+Each Lucene segment typically comprises a dozen or so files. Lucene can be configured to bundle all of the files for a segment into a single compound file, using the file extension `.cfs`, an abbreviation for Compound File Segment.
+
+CFS segments may incur a minor performance hit for various reasons, depending on the runtime environment. For example, filesystem buffers are typically associated with open file descriptors, which may limit the total cache space available to each index.
+
+On systems where the number of open files allowed per process is limited, CFS may avoid hitting that limit. The open files limit might also be tunable for your OS with the Linux/Unix `ulimit` command, or something similar for other operating systems.
+
+.CFS: New Segments vs Merged Segments
+[NOTE]
+====
+To configure whether _newly written segments_ should use CFS, see the <<IndexConfiginSolrConfig-useCompoundFile,`useCompoundFile`>> setting described above. To configure whether _merged segments_ use CFS, review the Javadocs for your <<IndexConfiginSolrConfig-mergePolicyFactory,`mergePolicyFactory`>>.
+
+Many <<IndexConfiginSolrConfig-MergingIndexSegments,Merge Policy>> implementations support `noCFSRatio` and `maxCFSSegmentSizeMB` settings with default values that prevent compound files from being used for large segments, but do use compound files for small segments (see the sketch following this note).
+
+====
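+
+As a sketch of the settings mentioned in the note above (assuming your merge policy factory passes these arguments through to the underlying merge policy), compound-file behavior for merged segments might be tuned like this:
+
+[source,xml]
+----
+<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
+  <int name="maxMergeAtOnce">10</int>
+  <int name="segmentsPerTier">10</int>
+  <!-- Use CFS only for merged segments up to 10% of the total index size... -->
+  <double name="noCFSRatio">0.1</double>
+  <!-- ...and no larger than 512MB. -->
+  <double name="maxCFSSegmentSizeMB">512</double>
+</mergePolicyFactory>
+----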
+
+[[IndexConfiginSolrConfig-IndexLocks]]
+== Index Locks
+
+[[IndexConfiginSolrConfig-lockType]]
+=== `lockType`
+
+The LockFactory options specify the locking implementation to use.
+
+The set of valid lock type options depends on the <<datadir-and-directoryfactory-in-solrconfig.adoc#datadir-and-directoryfactory-in-solrconfig,DirectoryFactory>> you have configured. The values listed below are supported by `StandardDirectoryFactory` (the default):
+
+* `native` (default) uses NativeFSLockFactory to specify native OS file locking. If a second Solr process attempts to access the directory, it will fail. Do not use when multiple Solr web applications are attempting to share a single index.
+* `simple` uses SimpleFSLockFactory to specify a plain file for locking.
+* `single` (expert) uses SingleInstanceLockFactory. Use for special situations of a read-only index directory, or when there is no possibility of more than one process trying to modify the index (even sequentially). This type will protect against multiple cores within the _same_ JVM attempting to access the same index. WARNING! If multiple Solr instances in different JVMs modify an index, this type will _not_ protect against index corruption.
+* `hdfs` uses HdfsLockFactory to support reading and writing index and transaction log files to an HDFS filesystem. See the section <<running-solr-on-hdfs.adoc#running-solr-on-hdfs,Running Solr on HDFS>> for more details on using this feature.
+
+For more information on the nuances of each LockFactory, see http://wiki.apache.org/lucene-java/AvailableLockFactories.
+
+[source,xml]
+----
+<lockType>native</lockType>
+----
+
+[[IndexConfiginSolrConfig-writeLockTimeout]]
+=== `writeLockTimeout`
+
+The maximum time to wait for a write lock on an IndexWriter. The default is 1000, expressed in milliseconds.
+
+[source,xml]
+----
+<writeLockTimeout>1000</writeLockTimeout>
+----
+
+[[IndexConfiginSolrConfig-OtherIndexingSettings]]
+== Other Indexing Settings
+
+There are a few other parameters that may be important to configure for your implementation. These settings affect how or when updates are made to an index.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Setting |Description
+|reopenReaders |Controls whether IndexReaders will be re-opened rather than closed and then opened again, which is often less efficient. The default is true.
+|deletionPolicy |Controls how commits are retained in case of rollback. The default is `SolrDeletionPolicy`, which has sub-parameters for the maximum number of commits to keep (`maxCommitsToKeep`), the maximum number of optimized commits to keep (`maxOptimizedCommitsToKeep`), and the maximum age of any commit to keep (`maxCommitAge`), which supports `DateMathParser` syntax.
+|infoStream |The InfoStream setting instructs the underlying Lucene classes to write detailed debug information from the indexing process as Solr log messages.
+|===
+
+[source,xml]
+----
+<reopenReaders>true</reopenReaders>
+<deletionPolicy class="solr.SolrDeletionPolicy">
+  <str name="maxCommitsToKeep">1</str>
+  <str name="maxOptimizedCommitsToKeep">0</str>
+  <str name="maxCommitAge">1DAY</str>
+</deletionPolicy>
+<infoStream>false</infoStream>
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
new file mode 100644
index 0000000..ebd35e1
--- /dev/null
+++ b/solr/solr-ref-guide/src/indexing-and-basic-data-operations.adoc
@@ -0,0 +1,33 @@
+= Indexing and Basic Data Operations
+:page-shortname: indexing-and-basic-data-operations
+:page-permalink: indexing-and-basic-data-operations.html
+:page-children: introduction-to-solr-indexing, post-tool, uploading-data-with-index-handlers, uploading-data-with-solr-cell-using-apache-tika, uploading-structured-data-store-data-with-the-data-import-handler, updating-parts-of-documents, detecting-languages-during-indexing, de-duplication, content-streams, uima-integration
+
+This section describes how Solr adds data to its index. It covers the following topics:
+
+* *<<introduction-to-solr-indexing.adoc#introduction-to-solr-indexing,Introduction to Solr Indexing>>*: An overview of Solr's indexing process.
+
+* *<<post-tool.adoc#post-tool,Post Tool>>*: Information about using `post.jar` to quickly upload some content to your system.
+
+* *<<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>*: Information about using Solr's Index Handlers to upload XML/XSLT, JSON and CSV data.
+
+* *<<transforming-and-indexing-custom-json.adoc#transforming-and-indexing-custom-json,Transforming and Indexing Custom JSON>>*: Information about indexing any JSON of your choice.
+
+* *<<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>*: Information about using the Solr Cell framework to upload data for indexing.
+
+* *<<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>*: Information about uploading and indexing data from a structured data store.
+
+* *<<updating-parts-of-documents.adoc#updating-parts-of-documents,Updating Parts of Documents>>*: Information about how to use atomic updates and optimistic concurrency with Solr.
+
+* *<<detecting-languages-during-indexing.adoc#detecting-languages-during-indexing,Detecting Languages During Indexing>>*: Information about using language identification during the indexing process.
+
+* *<<de-duplication.adoc#de-duplication,De-Duplication>>*: Information about configuring Solr to mark duplicate documents as they are indexed.
+
+* *<<content-streams.adoc#content-streams,Content Streams>>*: Information about streaming content to Solr Request Handlers.
+
+* *<<uima-integration.adoc#uima-integration,UIMA Integration>>*: Information about integrating Solr with Apache's Unstructured Information Management Architecture (UIMA). UIMA lets you define custom pipelines of Analysis Engines that incrementally add metadata to your documents as annotations.
+
+[[IndexingandBasicDataOperations-IndexingUsingClientAPIs]]
+== Indexing Using Client APIs
+
+Using client APIs, such as <<using-solrj.adoc#using-solrj,SolrJ>>, from your applications is an important option for updating Solr indexes. See the <<client-apis.adoc#client-apis,Client APIs>> section for more information.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/indexupgrader-tool.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/indexupgrader-tool.adoc b/solr/solr-ref-guide/src/indexupgrader-tool.adoc
new file mode 100644
index 0000000..1c3db45
--- /dev/null
+++ b/solr/solr-ref-guide/src/indexupgrader-tool.adoc
@@ -0,0 +1,23 @@
+= IndexUpgrader Tool
+:page-shortname: indexupgrader-tool
+:page-permalink: indexupgrader-tool.html
+
+The Lucene distribution includes {lucene-javadocs}/core/org/apache/lucene/index/IndexUpgrader.html[a tool that upgrades] an index from previous Lucene versions to the current file format.
+
+The tool can be used from the command line, or it can be instantiated and executed in Java.
+
+In a Solr distribution, the Lucene files are located in `./server/solr-webapp/webapp/WEB-INF/lib`. You will need to include the `lucene-core-<version>.jar` and `lucene-backward-codecs-<version>.jar` on the classpath when running the tool.
+
+[source,bash]
+----
+java -cp lucene-core-6.0.0.jar:lucene-backward-codecs-6.0.0.jar org.apache.lucene.index.IndexUpgrader [-delete-prior-commits] [-verbose] /path/to/index
+----
+
+This tool keeps only the last commit in an index. For this reason, if the incoming index has more than one commit, the tool refuses to run by default. Specify `-delete-prior-commits` to override this, allowing the tool to delete all but the last commit.
+
+Upgrading large indexes may take a long time. As a rule of thumb, the upgrade processes about 1 GB per minute.
+
+[WARNING]
+====
+This tool may reorder documents if the index was partially upgraded before execution (e.g., documents were added). If your application relies on monotonicity of document IDs (which means that the order in which the documents were added to the index is preserved), do a full forceMerge instead.
+====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
new file mode 100644
index 0000000..5423801
--- /dev/null
+++ b/solr/solr-ref-guide/src/initparams-in-solrconfig.adoc
@@ -0,0 +1,106 @@
+= InitParams in SolrConfig
+:page-shortname: initparams-in-solrconfig
+:page-permalink: initparams-in-solrconfig.html
+
+The `<initParams>` section of `solrconfig.xml` allows you to define request handler parameters outside of the handler configuration.
+
+There are a couple of use cases where this might be desired:
+
+* Some handlers are implicitly defined in code - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> - and there should be a way to add/append/override some of the implicitly defined properties.
+* There are a few properties that are used across handlers. This helps you keep only a single definition of those properties and apply them over multiple handlers.
+
+For example, if you want several of your search handlers to return the same list of fields, you can create an `<initParams>` section without having to define the same set of parameters in each request handler definition. If you have a single request handler that should return different fields, you can define the overriding parameters in individual `<requestHandler>` sections as usual.
+
+The properties and configuration of an `<initParams>` section mirror the properties and configuration of a request handler. It can include sections for defaults, appends, and invariants, the same as any request handler.
+
+For example, here is one of the `<initParams>` sections defined by default in the `data_driven_schema_configs` example:
+
+[source,xml]
+----
+<initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
+  <lst name="defaults">
+    <str name="df">_text_</str>
+  </lst>
+</initParams>
+----
+
+This sets the default search field ("df") to be "_text_" for all of the request handlers named in the path section. If we later want to change the `/query` request handler to search a different field by default, we could override the `<initParams>` by defining the parameter in the `<requestHandler>` section for `/query`.
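+
+A minimal sketch of such an override (assuming a hypothetical `content` field in the schema) might look like this:
+
+[source,xml]
+----
+<requestHandler name="/query" class="solr.SearchHandler">
+  <lst name="defaults">
+    <!-- Overrides the df value inherited from the initParams section. -->
+    <str name="df">content</str>
+  </lst>
+</requestHandler>
+----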
+
+The syntax and semantics are similar to those of a `<requestHandler>`. The following attributes are supported:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Property |Description
+|path |A comma-separated list of paths which will use the parameters. Wildcards can be used in paths to define nested paths, as described below.
+|name a|
+The name of this set of parameters. The name can be used directly in a requestHandler definition if a path is not explicitly named. If you give your `<initParams>` a name, you can refer to the params in a `<requestHandler>` that is not defined as a path.
+
+For example, if an `<initParams>` section has the name "myParams", you can call the name when defining your request handler:
+
+[source,xml]
+----
+<requestHandler name="/dump1" class="DumpRequestHandler" initParams="myParams"/>
+----
+|===
+
+[[InitParamsinSolrConfig-Wildcards]]
+== Wildcards
+
+An `<initParams>` section can support wildcards to define nested paths that should use the parameters defined. A single asterisk (\*) denotes that a nested path one level deeper should use the parameters. Double asterisks (**) denote that all nested paths, no matter how deep, should use the parameters.
+
+For example, if we have an `<initParams>` that looks like this:
+
+[source,xml]
+----
+<initParams name="myParams" path="/myhandler,/root/*,/root1/**">
+  <lst name="defaults">
+    <str name="fl">_text_</str>
+  </lst>
+  <lst name="invariants">
+    <str name="rows">10</str>
+  </lst>
+  <lst name="appends">
+    <str name="df">title</str>
+  </lst>
+</initParams>
+----
+
+We've defined three paths with this section:
+
+* `/myhandler` declared as a direct path.
+* `/root/*` with a single asterisk to indicate the parameters should apply to paths that are one level deep.
+* `/root1/**` with double asterisks to indicate the parameters should apply to all nested paths, no matter how deep.
+
+When we define the request handlers, the wildcards will work in the following ways:
+
+[source,xml]
+----
+<requestHandler name="/myhandler" class="SearchHandler"/>
+----
+
+The `/myhandler` request handler was named as a path in the `<initParams>`, so this will use those parameters.
+
+Next we have a request handler named `/root/search5`:
+
+[source,xml]
+----
+<requestHandler name="/root/search5" class="SearchHandler"/>
+----
+
+We defined a wildcard for nested paths that are one level deeper than `/root`, so this request handler will use the parameters. This one, however, will not, because `/root/search5/test` is more than one level deep from `/root`:
+
+[source,xml]
+----
+<requestHandler name="/root/search5/test" class="SearchHandler"/>
+----
+
+If we want to define all levels of nested paths, we should use double asterisks, as in the example path `/root1/**`:
+
+[source,xml]
+----
+<requestHandler name="/root1/search/tests" class="SearchHandler"/>
+----
+
+Any path under `/root1`, whether explicitly defined in a request handler or not, will use the parameters defined in the matching `initParams` section.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/installing-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/installing-solr.adoc b/solr/solr-ref-guide/src/installing-solr.adoc
new file mode 100644
index 0000000..ab3fe5b
--- /dev/null
+++ b/solr/solr-ref-guide/src/installing-solr.adoc
@@ -0,0 +1,40 @@
+= Installing Solr
+:page-shortname: installing-solr
+:page-permalink: installing-solr.html
+
+This section describes how to install Solr.
+
+You can install Solr on any system where a suitable Java Runtime Environment (JRE) is available, as detailed below. Currently this includes Linux, OS X, and Microsoft Windows. The instructions in this section should work for any platform, with a few exceptions for Windows as noted.
+
+== Got Java?
+
+You will need the Java Runtime Environment (JRE) version 1.8 or higher. At a command line, check your Java version like this:
+
+[source,plain]
+----
+$ java -version
+java version "1.8.0_60"
+Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
+Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
+----
+
+The exact output will vary, but you need to make sure you meet the minimum version requirement. We also recommend choosing a version that is not end-of-life from its vendor. If you don't have the required version, or if the `java` command is not found, download and install the latest version from Oracle at http://www.oracle.com/technetwork/java/javase/downloads/index.html.
+
+[[install-command]]
+== Installing Solr
+
+Solr is available from the Solr website at http://lucene.apache.org/solr/.
+
+For Linux/Unix/OSX systems, download the `.tgz` file. For Microsoft Windows systems, download the `.zip` file.
+
+When getting started, all you need to do is extract the Solr distribution archive to a directory of your choosing. When you're ready to set up Solr for a production environment, please refer to the instructions provided on the <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>> page.
+
+To keep things simple for now, extract the Solr distribution archive to your local home directory, for instance on Linux, do:
+
+[source,bash]
+----
+cd ~/
+tar zxf solr-x.y.z.tgz
+----
+
+Once extracted, you are now ready to run Solr using the instructions provided in the <<running-solr.adoc#running-solr,Running Solr>> section.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/introduction-to-client-apis.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/introduction-to-client-apis.adoc b/solr/solr-ref-guide/src/introduction-to-client-apis.adoc
new file mode 100644
index 0000000..c32f3e2
--- /dev/null
+++ b/solr/solr-ref-guide/src/introduction-to-client-apis.adoc
@@ -0,0 +1,15 @@
+= Introduction to Client APIs
+:page-shortname: introduction-to-client-apis
+:page-permalink: introduction-to-client-apis.html
+
+At its heart, Solr is a Web application, but because it is built on open protocols, any type of client application can use Solr.
+
+HTTP is the fundamental protocol used between client applications and Solr. The client makes a request and Solr does some work and provides a response. Clients use requests to ask Solr to do things like perform queries or index documents.
+
+Client applications can reach Solr by creating HTTP requests and parsing the HTTP responses. Client APIs encapsulate much of the work of sending requests and parsing responses, which makes it much easier to write client applications.
+
+Client applications work with Solr through five fundamental operations: query, index, delete, commit, and optimize.
+
+Queries are executed by creating a URL that contains all the query parameters. Solr examines the request URL, performs the query, and returns the results. The other operations are similar, although in certain cases the HTTP request is a POST operation and contains information beyond whatever is included in the request URL. An index operation, for example, may contain a document in the body of the request.
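+
+For example, a query URL can be exercised directly. A sketch, assuming a local Solr instance with a core named `core_name`:
+
+[source,bash]
+----
+# Ask for documents matching "solr", with the response rendered as JSON.
+curl "http://localhost:8983/solr/core_name/select?q=solr&wt=json"
+----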
+
+Solr also features an EmbeddedSolrServer that offers a Java API without requiring an HTTP connection. For details, see <<using-solrj.adoc#using-solrj,Using SolrJ>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/introduction-to-scaling-and-distribution.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/introduction-to-scaling-and-distribution.adoc b/solr/solr-ref-guide/src/introduction-to-scaling-and-distribution.adoc
new file mode 100644
index 0000000..9eb4313
--- /dev/null
+++ b/solr/solr-ref-guide/src/introduction-to-scaling-and-distribution.adoc
@@ -0,0 +1,29 @@
+= Introduction to Scaling and Distribution
+:page-shortname: introduction-to-scaling-and-distribution
+:page-permalink: introduction-to-scaling-and-distribution.html
+
+Both Lucene and Solr were designed to scale to support large implementations with minimal custom coding.
+
+This section covers:
+
+* <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,distributing>> an index across multiple servers
+* <<index-replication.adoc#index-replication,replicating>> an index on multiple servers
+* <<merging-indexes.adoc#merging-indexes,merging indexes>>
+
+If you need full-scale distribution of indexes and queries, as well as replication, load balancing, and failover, you may want to use SolrCloud. Full details on configuring and using SolrCloud are available in the section <<solrcloud.adoc#solrcloud,SolrCloud>>.
+
+== What Problem Does Distribution Solve?
+
+If searches are taking too long or the index is approaching the physical limitations of its machine, you should consider distributing the index across two or more Solr servers.
+
+To distribute an index, you divide the index into partitions called shards, each of which runs on a separate machine. Solr then partitions each search into sub-searches that run on the individual shards and collates their results into a single response.
+
+The architectural details underlying index sharding are invisible to end users, who simply experience faster performance on queries against very large indexes.
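+
+For illustration, a distributed request can list its shards explicitly with the `shards` parameter (a sketch, assuming two hypothetical hosts, `solr1` and `solr2`, each serving one shard of a core named `core1`):
+
+[source,bash]
+----
+# The node receiving the request fans the query out to each listed shard
+# and merges the partial results into a single response
+curl 'http://solr1:8983/solr/core1/select?q=*:*&shards=solr1:8983/solr/core1,solr2:8983/solr/core1'
+----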
+
+== What Problem Does Replication Solve?
+
+Replicating an index is useful when:
+
+* You have a large search volume which one machine cannot handle, so you need to distribute searches across multiple read-only copies of the index.
+* There is a high volume/high rate of indexing which consumes machine resources and reduces search performance on the indexing machine, so you need to separate indexing and searching.
+* You want to make a backup of the index (see <<making-and-restoring-backups.adoc#making-and-restoring-backups,Making and Restoring Backups>>).

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc b/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
new file mode 100644
index 0000000..80f4cfd
--- /dev/null
+++ b/solr/solr-ref-guide/src/introduction-to-solr-indexing.adoc
@@ -0,0 +1,40 @@
+= Introduction to Solr Indexing
+:page-shortname: introduction-to-solr-indexing
+:page-permalink: introduction-to-solr-indexing.html
+
+This section describes the process of indexing: adding content to a Solr index and, if necessary, modifying that content or deleting it.
+
+By adding content to an index, we make it searchable by Solr.
+
+A Solr index can accept data from many different sources, including XML files, comma-separated value (CSV) files, data extracted from tables in a database, and files in common file formats such as Microsoft Word or PDF.
+
+Here are the three most common ways of loading data into a Solr index:
+
+* Using the <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Solr Cell>> framework built on Apache Tika for ingesting binary files or structured files such as Microsoft Office documents, PDF files, and other proprietary formats.
+
+* Uploading XML files by sending HTTP requests to the Solr server from any environment where such requests can be generated (see the `curl` sketch after this list).
+
+* Writing a custom Java application to ingest data through Solr's Java Client API (which is described in more detail in <<client-apis.adoc#client-apis,Client APIs>>). Using the Java API may be the best choice if you're working with an application, such as a Content Management System (CMS), that offers a Java API.
+
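+For example, an XML update can be posted with `curl` (a minimal sketch; the core name `techproducts` and the document are hypothetical):
+
+[source,bash]
+----
+# Add one document and commit so it becomes immediately searchable
+curl 'http://localhost:8983/solr/techproducts/update?commit=true' \
+     -H 'Content-Type: text/xml' \
+     --data-binary '<add><doc><field name="id">doc1</field><field name="name">Example Doc</field></doc></add>'
+----
+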
+Regardless of the method used to ingest data, there is a common basic data structure for data being fed into a Solr index: a _document_ containing multiple _fields_, each with a _name_ and _content_, which may be empty. One of the fields is usually designated as a unique ID field (analogous to a primary key in a database), although the use of a unique ID field is not strictly required by Solr.
+
+If the field name is defined in the Schema that is associated with the index, then the analysis steps associated with that field will be applied to its content when the content is tokenized. Fields that are not explicitly defined in the Schema will either be ignored or mapped to a dynamic field definition (see <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>), if one matching the field name exists.
+
+For more information on indexing in Solr, see the https://wiki.apache.org/solr/FrontPage[Solr Wiki].
+
+[[IntroductiontoSolrIndexing-TheSolrExampleDirectory]]
+== The Solr Example Directory
+
+When starting Solr with the `-e` option, the `example/` directory is used as the base directory for the example Solr instances that are created. This directory also includes an `example/exampledocs/` subdirectory containing sample documents in a variety of formats that you can use to experiment with indexing into the various examples.
+
+[[IntroductiontoSolrIndexing-ThecurlUtilityforTransferringFiles]]
+== The `curl` Utility for Transferring Files
+
+Many of the instructions and examples in this section make use of the `curl` utility for transferring content through a URL. `curl` posts and retrieves data over HTTP, FTP, and many other protocols. Most Linux distributions include a copy of `curl`. You'll find `curl` downloads for Linux, Windows, and many other operating systems at http://curl.haxx.se/download.html. Documentation for `curl` is available at http://curl.haxx.se/docs/manpage.html.
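+
+For example, one of the sample files can be posted with `curl` (a sketch, assuming the `techproducts` example core is running and the command is issued from the Solr installation directory):
+
+[source,bash]
+----
+# Post a sample CSV file and commit; adjust the core name and path to your setup
+curl 'http://localhost:8983/solr/techproducts/update?commit=true' \
+     -H 'Content-Type: text/csv' \
+     --data-binary @example/exampledocs/books.csv
+----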
+
+[IMPORTANT]
+====
+Using `curl` or other command-line tools for posting data is just fine for examples or tests, but it's not the recommended method for achieving the best performance for updates in production environments. You will achieve better performance with Solr Cell or the other methods described in this section.
+
+Instead of `curl`, you can use utilities such as GNU `wget` (http://www.gnu.org/software/wget/) or manage GETs and POSTs with Perl, although the command-line options will differ.
+====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/java-properties.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/java-properties.adoc b/solr/solr-ref-guide/src/java-properties.adoc
new file mode 100644
index 0000000..7b6553c
--- /dev/null
+++ b/solr/solr-ref-guide/src/java-properties.adoc
@@ -0,0 +1,8 @@
+= Java Properties
+:page-shortname: java-properties
+:page-permalink: java-properties.html
+
+The Java Properties screen provides easy access to one of the most essential components of a top-performing Solr system. On this screen you can see all the properties of the JVM running Solr, including the class paths, file encodings, JVM memory settings, operating system, and more.
+
+.Java Properties Screen
+image::images/java-properties/javaproperties.png[image,width=593,height=250]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/js/customscripts.js
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/js/customscripts.js b/solr/solr-ref-guide/src/js/customscripts.js
new file mode 100755
index 0000000..19156c5
--- /dev/null
+++ b/solr/solr-ref-guide/src/js/customscripts.js
@@ -0,0 +1,56 @@
+
+$('#mysidebar').height($(".nav").height());
+
+
+$( document ).ready(function() {
+
+    // This script says: if the height of the viewport is greater than 800px, insert
+    // the affix class, which makes the nav bar float in a fixed position as you
+    // scroll. If you have a lot of nav items, this height may not work for you.
+    // Commented out. To add back, uncomment the "var h" line below, then the three
+    // lines after the "console.log" comment.
+    //var h = $(window).height();
+    //console.log (h);
+    //if (h > 800) {
+    //     $( "#mysidebar" ).attr("class", "nav affix");
+    // }
+    // activate tooltips. although this is a bootstrap js function, it must be activated this way in your theme.
+    $('[data-toggle="tooltip"]').tooltip({
+        placement : 'top'
+    });
+
+    /**
+     * AnchorJS
+     */
+    anchors.add('h2,h3,h4,h5');
+
+});
+
+// needed for nav tabs on pages. See Formatting > Nav tabs for more details.
+// script from http://stackoverflow.com/questions/10523433/how-do-i-keep-the-current-tab-active-with-twitter-bootstrap-after-a-page-reload
+$(function() {
+    var json, tabsState;
+    $('a[data-toggle="pill"], a[data-toggle="tab"]').on('shown.bs.tab', function(e) {
+        var href, json, parentId, tabsState;
+
+        tabsState = localStorage.getItem("tabs-state");
+        json = JSON.parse(tabsState || "{}");
+        parentId = $(e.target).parents("ul.nav.nav-pills, ul.nav.nav-tabs").attr("id");
+        href = $(e.target).attr('href');
+        json[parentId] = href;
+
+        return localStorage.setItem("tabs-state", JSON.stringify(json));
+    });
+
+    tabsState = localStorage.getItem("tabs-state");
+    json = JSON.parse(tabsState || "{}");
+
+    $.each(json, function(containerId, href) {
+        return $("#" + containerId + " a[href=" + href + "]").tab('show');
+    });
+
+    $("ul.nav.nav-pills, ul.nav.nav-tabs").each(function() {
+        var $this = $(this);
+        if (!json[$this.attr("id")]) {
+            return $this.find("a[data-toggle=tab]:first, a[data-toggle=pill]:first").tab("show");
+        }
+    });
+});

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/js/jekyll-search.js
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/js/jekyll-search.js b/solr/solr-ref-guide/src/js/jekyll-search.js
new file mode 100755
index 0000000..04d6a0d
--- /dev/null
+++ b/solr/solr-ref-guide/src/js/jekyll-search.js
@@ -0,0 +1 @@
+!function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a="function"==typeof require&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);throw new Error("Cannot find module '"+o+"'")}var f=n[o]={exports:{}};t[o][0].call(f.exports,function(e){var n=t[o][1][e];return s(n?n:e)},f,f.exports,e,t,n,r)}return n[o].exports}for(var i="function"==typeof require&&require,o=0;o<r.length;o++)s(r[o]);return s}({1:[function(require,module){module.exports=function(){function receivedResponse(xhr){return 200==xhr.status&&4==xhr.readyState}function handleResponse(xhr,callback){xhr.onreadystatechange=function(){if(receivedResponse(xhr))try{callback(null,JSON.parse(xhr.responseText))}catch(err){callback(err,null)}}}var self=this;self.load=function(location,callback){var xhr=window.XMLHttpRequest?new XMLHttpRequest:new ActiveXObject("Microsoft.XMLHTTP");xhr.open("GET",location,!0),handleResponse(xhr,callback),xhr.send()}}},{}],2:[function(require,module){function FuzzySearchStrategy(){function creat
 eFuzzyRegExpFromString(string){return new RegExp(string.split("").join(".*?"),"gi")}var self=this;self.matches=function(string,crit){return"string"!=typeof string?!1:(string=string.trim(),!!string.match(createFuzzyRegExpFromString(crit)))}}module.exports=new FuzzySearchStrategy},{}],3:[function(require,module){function LiteralSearchStrategy(){function doMatch(string,crit){return string.toLowerCase().indexOf(crit.toLowerCase())>=0}var self=this;self.matches=function(string,crit){return"string"!=typeof string?!1:(string=string.trim(),doMatch(string,crit))}}module.exports=new LiteralSearchStrategy},{}],4:[function(require,module){module.exports=function(){function findMatches(store,crit,strategy){for(var data=store.get(),i=0;i<data.length&&matches.length<limit;i++)findMatchesInObject(data[i],crit,strategy);return matches}function findMatchesInObject(obj,crit,strategy){for(var key in obj)if(strategy.matches(obj[key],crit)){matches.push(obj);break}}function getSearchStrategy(){return fuz
 zy?fuzzySearchStrategy:literalSearchStrategy}var self=this,matches=[],fuzzy=!1,limit=10,fuzzySearchStrategy=require("./SearchStrategies/fuzzy"),literalSearchStrategy=require("./SearchStrategies/literal");self.setFuzzy=function(_fuzzy){fuzzy=!!_fuzzy},self.setLimit=function(_limit){limit=parseInt(_limit,10)||limit},self.search=function(data,crit){return crit?(matches.length=0,findMatches(data,crit,getSearchStrategy())):[]}}},{"./SearchStrategies/fuzzy":2,"./SearchStrategies/literal":3}],5:[function(require,module){module.exports=function(_store){function isObject(obj){return!!obj&&"[object Object]"==Object.prototype.toString.call(obj)}function isArray(obj){return!!obj&&"[object Array]"==Object.prototype.toString.call(obj)}function addObject(data){return store.push(data),data}function addArray(data){for(var added=[],i=0;i<data.length;i++)isObject(data[i])&&added.push(addObject(data[i]));return added}var self=this,store=[];isArray(_store)&&addArray(_store),self.clear=function(){return 
 store.length=0,store},self.get=function(){return store},self.put=function(data){return isObject(data)?addObject(data):isArray(data)?addArray(data):void 0}}},{}],6:[function(require,module){module.exports=function(){var self=this,templatePattern=/\{(.*?)\}/g;self.setTemplatePattern=function(newTemplatePattern){templatePattern=newTemplatePattern},self.render=function(t,data){return t.replace(templatePattern,function(match,prop){return data[prop]||match})}}},{}],7:[function(require){!function(window){"use strict";function SimpleJekyllSearch(){function initWithJSON(){store.put(opt.dataSource),registerInput()}function initWithURL(url){jsonLoader.load(url,function(err,json){err?throwError("failed to get JSON ("+url+")"):(store.put(json),registerInput())})}function throwError(message){throw new Error("SimpleJekyllSearch --- "+message)}function validateOptions(_opt){for(var i=0;i<requiredOptions.length;i++){var req=requiredOptions[i];_opt[req]||throwError("You must specify a "+req)}}functio
 n assignOptions(_opt){for(var option in opt)opt[option]=_opt[option]||opt[option]}function isJSON(json){try{return json instanceof Object&&JSON.parse(JSON.stringify(json))}catch(e){return!1}}function emptyResultsContainer(){opt.resultsContainer.innerHTML=""}function appendToResultsContainer(text){opt.resultsContainer.innerHTML+=text}function registerInput(){opt.searchInput.addEventListener("keyup",function(e){return 0==e.target.value.length?void emptyResultsContainer():void render(searcher.search(store,e.target.value))})}function render(results){if(emptyResultsContainer(),0==results.length)return appendToResultsContainer(opt.noResultsText);for(var i=0;i<results.length;i++)appendToResultsContainer(templater.render(opt.searchResultTemplate,results[i]))}var self=this,requiredOptions=["searchInput","resultsContainer","dataSource"],opt={searchInput:null,resultsContainer:null,dataSource:[],searchResultTemplate:'<li><a href="{url}" title="{desc}">{title}</a></li>',noResultsText:"No results
  found",limit:10,fuzzy:!1};self.init=function(_opt){validateOptions(_opt),assignOptions(_opt),isJSON(opt.dataSource)?initWithJSON(opt.dataSource):initWithURL(opt.dataSource)}}var Searcher=require("./Searcher"),Templater=require("./Templater"),Store=require("./Store"),JSONLoader=require("./JSONLoader"),searcher=new Searcher,templater=new Templater,store=new Store,jsonLoader=new JSONLoader;window.SimpleJekyllSearch=new SimpleJekyllSearch}(window,document)},{"./JSONLoader":1,"./Searcher":4,"./Store":5,"./Templater":6}]},{},[7]);
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/js/jquery.navgoco.min.js
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/js/jquery.navgoco.min.js b/solr/solr-ref-guide/src/js/jquery.navgoco.min.js
new file mode 100755
index 0000000..4ba4475
--- /dev/null
+++ b/solr/solr-ref-guide/src/js/jquery.navgoco.min.js
@@ -0,0 +1,8 @@
+/*
+ * jQuery Navgoco Menus Plugin v0.2.1 (2014-04-11)
+ * https://github.com/tefra/navgoco
+ *
+ * Copyright (c) 2014 Chris T (@tefra)
+ * BSD - https://github.com/tefra/navgoco/blob/master/LICENSE-BSD
+ */
+!function(a){"use strict";var b=function(b,c,d){return this.el=b,this.$el=a(b),this.options=c,this.uuid=this.$el.attr("id")?this.$el.attr("id"):d,this.state={},this.init(),this};b.prototype={init:function(){var b=this;b._load(),b.$el.find("ul").each(function(c){var d=a(this);d.attr("data-index",c),b.options.save&&b.state.hasOwnProperty(c)?(d.parent().addClass(b.options.openClass),d.show()):d.parent().hasClass(b.options.openClass)?(d.show(),b.state[c]=1):d.hide()});var c=a("<span></span>").prepend(b.options.caretHtml),d=b.$el.find("li > a");b._trigger(c,!1),b._trigger(d,!0),b.$el.find("li:has(ul) > a").prepend(c)},_trigger:function(b,c){var d=this;b.on("click",function(b){b.stopPropagation();var e=c?a(this).next():a(this).parent().next(),f=!1;if(c){var g=a(this).attr("href");f=void 0===g||""===g||"#"===g}if(e=e.length>0?e:!1,d.options.onClickBefore.call(this,b,e),!c||e&&f)b.preventDefault(),d._toggle(e,e.is(":hidden")),d._save();else if(d.options.accordion){var h=d.state=d._parents(a
 (this));d.$el.find("ul").filter(":visible").each(function(){var b=a(this),c=b.attr("data-index");h.hasOwnProperty(c)||d._toggle(b,!1)}),d._save()}d.options.onClickAfter.call(this,b,e)})},_toggle:function(b,c){var d=this,e=b.attr("data-index"),f=b.parent();if(d.options.onToggleBefore.call(this,b,c),c){if(f.addClass(d.options.openClass),b.slideDown(d.options.slide),d.state[e]=1,d.options.accordion){var g=d.state=d._parents(b);g[e]=d.state[e]=1,d.$el.find("ul").filter(":visible").each(function(){var b=a(this),c=b.attr("data-index");g.hasOwnProperty(c)||d._toggle(b,!1)})}}else f.removeClass(d.options.openClass),b.slideUp(d.options.slide),d.state[e]=0;d.options.onToggleAfter.call(this,b,c)},_parents:function(b,c){var d={},e=b.parent(),f=e.parents("ul");return f.each(function(){var b=a(this),e=b.attr("data-index");return e?void(d[e]=c?b:1):!1}),d},_save:function(){if(this.options.save){var b={};for(var d in this.state)1===this.state[d]&&(b[d]=1);c[this.uuid]=this.state=b,a.cookie(this.opt
 ions.cookie.name,JSON.stringify(c),this.options.cookie)}},_load:function(){if(this.options.save){if(null===c){var b=a.cookie(this.options.cookie.name);c=b?JSON.parse(b):{}}this.state=c.hasOwnProperty(this.uuid)?c[this.uuid]:{}}},toggle:function(b){var c=this,d=arguments.length;if(1>=d)c.$el.find("ul").each(function(){var d=a(this);c._toggle(d,b)});else{var e,f={},g=Array.prototype.slice.call(arguments,1);d--;for(var h=0;d>h;h++){e=g[h];var i=c.$el.find('ul[data-index="'+e+'"]').first();if(i&&(f[e]=i,b)){var j=c._parents(i,!0);for(var k in j)f.hasOwnProperty(k)||(f[k]=j[k])}}for(e in f)c._toggle(f[e],b)}c._save()},destroy:function(){a.removeData(this.$el),this.$el.find("li:has(ul) > a").unbind("click"),this.$el.find("li:has(ul) > a > span").unbind("click")}},a.fn.navgoco=function(c){if("string"==typeof c&&"_"!==c.charAt(0)&&"init"!==c)var d=!0,e=Array.prototype.slice.call(arguments,1);else c=a.extend({},a.fn.navgoco.defaults,c||{}),a.cookie||(c.save=!1);return this.each(function(f){v
 ar g=a(this),h=g.data("navgoco");h||(h=new b(this,d?a.fn.navgoco.defaults:c,f),g.data("navgoco",h)),d&&h[c].apply(h,e)})};var c=null;a.fn.navgoco.defaults={caretHtml:"",accordion:!1,openClass:"open",save:!0,cookie:{name:"navgoco",expires:!1,path:"/"},slide:{duration:400,easing:"swing"},onClickBefore:a.noop,onClickAfter:a.noop,onToggleBefore:a.noop,onToggleAfter:a.noop}}(jQuery);
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/js/toc.js
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/js/toc.js b/solr/solr-ref-guide/src/js/toc.js
new file mode 100755
index 0000000..9adff0d
--- /dev/null
+++ b/solr/solr-ref-guide/src/js/toc.js
@@ -0,0 +1,82 @@
+// https://github.com/ghiculescu/jekyll-table-of-contents
+(function($){
+  $.fn.toc = function(options) {
+    var defaults = {
+      noBackToTopLinks: false,
+      title: '',
+      minimumHeaders: 3,
+      headers: 'h1, h2, h3, h4',
+      listType: 'ol', // values: [ol|ul]
+      showEffect: 'none', // values: [show|slideDown|fadeIn|none]
+      showSpeed: '0' // set to 0 to deactivate effect
+    },
+    settings = $.extend(defaults, options);
+
+    var headers = $(settings.headers).filter(function() {
+      // get all headers with an ID
+      var previousSiblingName = $(this).prev().attr( "name" );
+      if (!this.id && previousSiblingName) {
+        // derive an id from the previous sibling's name attribute;
+        // assigning to the id property also sets the element's id attribute
+        this.id = previousSiblingName.replace(/\./g, "-");
+      }
+      return this.id;
+    }), output = $(this);
+    if (!headers.length || headers.length < settings.minimumHeaders || !output.length) {
+      return;
+    }
+
+    if (0 === settings.showSpeed) {
+      settings.showEffect = 'none';
+    }
+
+    var render = {
+      show: function() { output.hide().html(html).show(settings.showSpeed); },
+      slideDown: function() { output.hide().html(html).slideDown(settings.showSpeed); },
+      fadeIn: function() { output.hide().html(html).fadeIn(settings.showSpeed); },
+      none: function() { output.html(html); }
+    };
+
+    var get_level = function(ele) { return parseInt(ele.nodeName.replace("H", ""), 10); }
+    var highest_level = headers.map(function(_, ele) { return get_level(ele); }).get().sort()[0];
+    var return_to_top = '<i class="icon-arrow-up back-to-top"> </i>';
+
+    var level = get_level(headers[0]),
+      this_level,
+      html = settings.title + " <"+settings.listType+">";
+    headers.on('click', function() {
+      if (!settings.noBackToTopLinks) {
+        window.location.hash = this.id;
+      }
+    })
+    .addClass('clickable-header')
+    .each(function(_, header) {
+      this_level = get_level(header);
+      if (!settings.noBackToTopLinks && this_level === highest_level) {
+        $(header).addClass('top-level-header').after(return_to_top);
+      }
+      if (this_level === level) // same level as before; same indenting
+        html += "<li><a href='#" + header.id + "'>" + header.innerHTML + "</a>";
+      else if (this_level <= level) { // higher level than before; end parent ol
+        for (var i = this_level; i < level; i++) {
+          html += "</li></"+settings.listType+">"
+        }
+        html += "<li><a href='#" + header.id + "'>" + header.innerHTML + "</a>";
+      }
+      else if (this_level > level) { // lower level than before; expand the previous to contain an ol
+        for (var i = this_level; i > level; i--) {
+          html += "<"+settings.listType+"><li>"
+        }
+        html += "<a href='#" + header.id + "'>" + header.innerHTML + "</a>";
+      }
+      level = this_level; // update for the next one
+    });
+    html += "</"+settings.listType+">";
+    if (!settings.noBackToTopLinks) {
+      $(document).on('click', '.back-to-top', function() {
+        $(window).scrollTop(0);
+        window.location.hash = '';
+      });
+    }
+
+    render[settings.showEffect]();
+  };
+})(jQuery);

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/jvm-settings.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/jvm-settings.adoc b/solr/solr-ref-guide/src/jvm-settings.adoc
new file mode 100644
index 0000000..dbb0640
--- /dev/null
+++ b/solr/solr-ref-guide/src/jvm-settings.adoc
@@ -0,0 +1,38 @@
+= JVM Settings
+:page-shortname: jvm-settings
+:page-permalink: jvm-settings.html
+
+Optimizing the JVM can be a key factor in getting the most from your Solr installation.
+
+Configuring your JVM can be a complex topic and a full discussion is beyond the scope of this document. Luckily, most modern JVMs are quite good at making the best use of available resources with default settings. The following sections contain a few tips that may be helpful when the defaults are not optimal for your situation.
+
+For more general information about improving Solr performance, see https://wiki.apache.org/solr/SolrPerformanceFactors.
+
+[[JVMSettings-ChoosingMemoryHeapSettings]]
+== Choosing Memory Heap Settings
+
+The most important JVM configuration settings are those that determine the amount of memory it is allowed to allocate. There are two primary command-line options that set memory limits for the JVM. These are `-Xms`, which sets the initial size of the JVM's memory heap, and `-Xmx`, which sets the maximum size to which the heap is allowed to grow.
+
+If your Solr application requires more heap space than you specify with the `-Xms` option, the heap will grow automatically. It's quite reasonable to not specify an initial size and let the heap grow as needed. The only downside is a somewhat slower startup time since the application will take longer to initialize. Setting the initial heap size higher than the default may avoid a series of heap expansions, which often results in objects being shuffled around within the heap, as the application spins up.
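+
+For example (a sketch only; the sizes are illustrative), heap settings can be passed to the JVM when starting Solr:
+
+[source,bash]
+----
+# Pass explicit JVM flags: 512m initial heap, 2g maximum heap
+bin/solr start -a "-Xms512m -Xmx2g"
+
+# Or use the -m shortcut, which sets both -Xms and -Xmx to the same value
+bin/solr start -m 2g
+----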
+
+The maximum heap size, set with `-Xmx`, is more critical. If the memory heap grows to this size, object creation may begin to fail and throw `OutOfMemoryError`. Setting this limit too low can cause spurious errors in your application, but setting it too high can be detrimental as well.
+
+Reaching the maximum heap size does not always cause an error. Before an error is raised, the JVM will first try to reclaim any available space that already exists in the heap. Only if all garbage collection attempts fail will your application see an exception. As long as the maximum is big enough, your application will run without error, but it may run more slowly if forced garbage collection kicks in frequently.
+
+The larger the heap, the longer it takes to do garbage collection. This can mean minor, random pauses or, in extreme cases, "stop-the-world" pauses of a minute or more. As a practical matter, this can become a serious problem for heap sizes that exceed about two gigabytes, even if far more physical memory is available. On robust hardware, you may get better results running multiple JVMs, rather than just one with a large memory heap. Some specialized JVM implementations may have customized garbage collection algorithms that do better with large heaps. Consult your JVM vendor's documentation.
+
+When setting the maximum heap size, be careful not to let the JVM consume all available physical memory. If the JVM process space grows too large, the operating system will start swapping it, which will severely impact performance. In addition, the operating system uses memory space not allocated to processes for file system cache and other purposes. This is especially important for I/O-intensive applications, like Lucene/Solr. The larger your indexes, the more you will benefit from filesystem caching by the OS. It may require some experimentation to determine the optimal tradeoff between heap space for the JVM and memory space for the OS to use.
+
+On systems with many CPUs/cores, it can also be beneficial to tune the layout of the heap and/or the behavior of the garbage collector. Adjusting the relative sizes of the generational pools in the heap can affect how often GC sweeps occur and whether they run concurrently. Configuring the various settings of how the garbage collector should behave can greatly reduce the overall performance impact when it does run. There is a lot of good information on this topic available on Oracle's website. A good place to start is here: http://www.oracle.com/technetwork/java/javase/tech/index-jsp-140228.html[Oracle's Java HotSpot Garbage Collection].
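+
+For instance (a sketch only; appropriate flags depend heavily on your JVM version and workload), generational sizing and collector choice are controlled with `-XX` options:
+
+[source,bash]
+----
+# Hypothetical tuning flags: size the young generation relative to the old
+# generation and select the CMS collector
+bin/solr start -m 2g -a "-XX:NewRatio=3 -XX:+UseConcMarkSweepGC"
+----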
+
+[[JVMSettings-UsetheServerHotSpotVM]]
+== Use the Server HotSpot VM
+
+If you are using the Oracle (formerly Sun) JVM, add the `-server` command-line option when you start Solr. This tells the JVM that it should optimize for a long-running, server process. If the Java runtime on your system is a JRE, rather than a full JDK distribution (including `javac` and other development tools), then it is possible that it may not support the `-server` JVM option. Test this by running `java -help` and looking for `-server` as an available option in the displayed usage message.
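+
+One way to script that check (a sketch; the usage output varies by JVM vendor and version):
+
+[source,bash]
+----
+# Prints the -server line from the usage message if the option is supported
+java -help 2>&1 | grep -- '-server'
+----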
+
+[[JVMSettings-CheckingJVMSettings]]
+== Checking JVM Settings
+
+A great way to see what JVM settings your server is using, along with other useful information, is to use the admin RequestHandler, `solr/admin/system`. This request handler will display a wealth of server statistics and settings.
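+
+For example (a sketch using the path named above; the exact path may vary with your Solr version and configuration):
+
+[source,bash]
+----
+# Request server statistics and settings, including JVM settings, as JSON
+curl 'http://localhost:8983/solr/admin/system?wt=json'
+----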
+
+You can also use any of the tools that are compatible with the Java Management Extensions (JMX). See the section _Using JMX with Solr_ in <<managing-solr.adoc#managing-solr,Managing Solr>> for more information.