Posted to commits@lucene.apache.org by cp...@apache.org on 2017/05/12 13:43:04 UTC

[11/50] [abbrv] lucene-solr:jira/solr-8668: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/searching.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/searching.adoc b/solr/solr-ref-guide/src/searching.adoc
new file mode 100644
index 0000000..67cb851
--- /dev/null
+++ b/solr/solr-ref-guide/src/searching.adoc
@@ -0,0 +1,41 @@
+= Searching
+:page-shortname: searching
+:page-permalink: searching.html
+:page-children: overview-of-searching-in-solr, velocity-search-ui, relevance, query-syntax-and-parsing, faceting, highlighting, spell-checking, query-re-ranking, transforming-result-documents, suggester, morelikethis, pagination-of-results, collapse-and-expand-results, result-grouping, result-clustering, spatial-search, the-terms-component, the-term-vector-component, the-stats-component, the-query-elevation-component, response-writers, near-real-time-searching, realtime-get, exporting-result-sets, streaming-expressions, parallel-sql-interface
+
+This section describes how Solr works with search requests. It covers the following topics:
+
+* <<overview-of-searching-in-solr.adoc#overview-of-searching-in-solr,Overview of Searching in Solr>>: An introduction to searching with Solr.
+* <<velocity-search-ui.adoc#velocity-search-ui,Velocity Search UI>>: A simple search UI using the VelocityResponseWriter.
+* <<relevance.adoc#relevance,Relevance>>: Conceptual information about understanding relevance in search results.
+* <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,Query Syntax and Parsing>>: A brief conceptual overview of query syntax and parsing. It also contains the following sub-sections:
+** <<common-query-parameters.adoc#common-query-parameters,Common Query Parameters>>: No matter the query parser, there are several parameters that are common to all of them.
+** <<the-standard-query-parser.adoc#the-standard-query-parser,The Standard Query Parser>>: Detailed information about the standard Lucene query parser.
+** <<the-dismax-query-parser.adoc#the-dismax-query-parser,The DisMax Query Parser>>: Detailed information about Solr's DisMax query parser.
+** <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,The Extended DisMax Query Parser>>: Detailed information about Solr's Extended DisMax (eDisMax) Query Parser.
+** <<function-queries.adoc#function-queries,Function Queries>>: Detailed information about parameters for generating relevancy scores using values from one or more numeric fields.
+** <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters in Queries>>: How to add local arguments to queries.
+** <<other-parsers.adoc#other-parsers,Other Parsers>>: More parsers designed for use in specific situations.
+* <<faceting.adoc#faceting,Faceting>>: Detailed information about categorizing search results based on indexed terms.
+* <<highlighting.adoc#highlighting,Highlighting>>: Detailed information about Solr's highlighting capabilities, including multiple underlying highlighter implementations.
+* <<spell-checking.adoc#spell-checking,Spell Checking>>: Detailed information about Solr's spelling checker.
+* <<query-re-ranking.adoc#query-re-ranking,Query Re-Ranking>>: Detailed information about re-ranking top scoring documents from simple queries using more complex scores.
+** <<learning-to-rank.adoc#learning-to-rank,Learning To Rank>>: How to use LTR to run machine learned ranking models in Solr.
+
+* <<transforming-result-documents.adoc#transforming-result-documents,Transforming Result Documents>>: Detailed information about using `DocTransformers` to add computed information to individual documents
+* <<suggester.adoc#suggester,Suggester>>: Detailed information about Solr's powerful autosuggest component.
+* <<morelikethis.adoc#morelikethis,MoreLikeThis>>: Detailed information about Solr's similar results query component.
+* <<pagination-of-results.adoc#pagination-of-results,Pagination of Results>>: Detailed information about fetching paginated results for display in a UI, or for fetching all documents matching a query.
+* <<result-grouping.adoc#result-grouping,Result Grouping>>: Detailed information about grouping results based on common field values.
+* <<result-clustering.adoc#result-clustering,Result Clustering>>: Detailed information about grouping search results based on cluster analysis applied to text fields. A bit like "unsupervised" faceting.
+* <<spatial-search.adoc#spatial-search,Spatial Search>>: How to use Solr's spatial search capabilities.
+* <<the-terms-component.adoc#the-terms-component,The Terms Component>>: Detailed information about accessing indexed terms and the documents that include them.
+* <<the-term-vector-component.adoc#the-term-vector-component,The Term Vector Component>>: How to get term information about specific documents.
+* <<the-stats-component.adoc#the-stats-component,The Stats Component>>: How to return information from numeric fields within a document set.
+* <<the-query-elevation-component.adoc#the-query-elevation-component,The Query Elevation Component>>: How to force documents to the top of the results for certain queries.
+* <<response-writers.adoc#response-writers,Response Writers>>: Detailed information about configuring and using Solr's response writers.
+* <<near-real-time-searching.adoc#near-real-time-searching,Near Real Time Searching>>: How to include documents in search results nearly immediately after they are indexed.
+* <<realtime-get.adoc#realtime-get,RealTime Get>>: How to get the latest version of a document without opening a searcher.
+* <<exporting-result-sets.adoc#exporting-result-sets,Exporting Result Sets>>: Functionality to export large result sets out of Solr.
+* <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>>: A stream processing language for Solr, with a suite of functions to perform many types of queries and parallel execution tasks.
+* <<parallel-sql-interface.adoc#parallel-sql-interface,Parallel SQL Interface>>: An interface for sending SQL statements to Solr, and using advanced parallel query processing and relational algebra for complex data analysis.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/securing-solr.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/securing-solr.adoc b/solr/solr-ref-guide/src/securing-solr.adoc
new file mode 100644
index 0000000..e8a226b
--- /dev/null
+++ b/solr/solr-ref-guide/src/securing-solr.adoc
@@ -0,0 +1,19 @@
+= Securing Solr
+:page-shortname: securing-solr
+:page-permalink: securing-solr.html
+:page-children: authentication-and-authorization-plugins, enabling-ssl
+
+When planning how to secure Solr, you should consider which of the available features or approaches are right for you.
+
+* Authentication or authorization of users using:
+** <<kerberos-authentication-plugin.adoc#kerberos-authentication-plugin,Kerberos Authentication Plugin>>
+** <<basic-authentication-plugin.adoc#basic-authentication-plugin,Basic Authentication Plugin>>
+** <<rule-based-authorization-plugin.adoc#rule-based-authorization-plugin,Rule-Based Authorization Plugin>>
+** <<authentication-and-authorization-plugins.adoc#authentication-and-authorization-plugins,Custom authentication or authorization plugin>>
+* <<enabling-ssl.adoc#enabling-ssl,Enabling SSL>>
+* If using SolrCloud, <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>
+
+[WARNING]
+====
+No Solr API, including the Admin UI, is designed to be exposed to non-trusted parties. Tune your firewall so that only trusted computers and people are allowed access. Because of this, the project will not regard, for example, Admin UI XSS issues as security vulnerabilities. However, we still ask you to report such issues in JIRA.
+====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/segments-info.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/segments-info.adoc b/solr/solr-ref-guide/src/segments-info.adoc
new file mode 100644
index 0000000..1f11fb1
--- /dev/null
+++ b/solr/solr-ref-guide/src/segments-info.adoc
@@ -0,0 +1,9 @@
+= Segments Info
+:page-shortname: segments-info
+:page-permalink: segments-info.html
+
+The Segments Info screen lets you see a visualization of the various segments in the underlying Lucene index for this core, with information about the size of each segment – both in bytes and in number of documents – as well as other basic metadata about those segments. Most visible is the number of deleted documents, but you can hover your mouse over the segments to see additional numeric details.
+
+image::images/segments-info/segments_info.png[image,width=486,height=250]
+
+This information may be useful when making decisions about the optimal <<indexconfig-in-solrconfig.adoc#IndexConfiginSolrConfig-MergingIndexSegments,merge settings>> for your data.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
new file mode 100644
index 0000000..451c3f8
--- /dev/null
+++ b/solr/solr-ref-guide/src/setting-up-an-external-zookeeper-ensemble.adoc
@@ -0,0 +1,182 @@
+= Setting Up an External ZooKeeper Ensemble
+:page-shortname: setting-up-an-external-zookeeper-ensemble
+:page-permalink: setting-up-an-external-zookeeper-ensemble.html
+
+Although Solr comes bundled with http://zookeeper.apache.org[Apache ZooKeeper], using this internal ZooKeeper in production is discouraged.
+
+Shutting down a redundant Solr instance will also shut down its ZooKeeper server, which might not be quite so redundant. Because a ZooKeeper ensemble must have a quorum of more than half its servers running at any given time, this can be a problem.
+
+The solution to this problem is to set up an external ZooKeeper ensemble. Fortunately, while this process can seem intimidating due to the number of powerful options, setting up a simple ensemble is actually quite straightforward, as described below.
+
+.How Many ZooKeepers?
+[quote,ZooKeeper Administrator's Guide,http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html]
+____
+"For a ZooKeeper service to be active, there must be a majority of non-failing machines that can communicate with each other. *To create a deployment that can tolerate the failure of F machines, you should count on deploying 2xF+1 machines*. Thus, a deployment that consists of three machines can handle one failure, and a deployment of five machines can handle two failures. Note that a deployment of six machines can only handle two failures since three machines is not a majority.
+
+For this reason, ZooKeeper deployments are usually made up of an odd number of machines."
+____
+
+When planning how many ZooKeeper nodes to configure, keep in mind that the main principle for a ZooKeeper ensemble is maintaining a majority of servers to serve requests. This majority is also called a _quorum_.
+
+It is generally recommended to have an odd number of ZooKeeper servers in your ensemble, so a majority is maintained.
+
+For example, if you only have two ZooKeeper nodes and one goes down, 50% of available servers is not a majority, so ZooKeeper will no longer serve requests. However, if you have three ZooKeeper nodes and one goes down, 66% of your servers are still available, and ZooKeeper will continue normally while you repair the one down node. If you have 5 nodes, you could continue operating with two down nodes if necessary.
+
+More information on ZooKeeper clusters is available from the ZooKeeper documentation at http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html#sc_zkMulitServerSetup.
+
+[[SettingUpanExternalZooKeeperEnsemble-DownloadApacheZooKeeper]]
+== Download Apache ZooKeeper
+
+The first step in setting up Apache ZooKeeper is, of course, to download the software. It's available from http://zookeeper.apache.org/releases.html.
+
+[IMPORTANT]
+====
+When using stand-alone ZooKeeper, you need to take care to keep your version of ZooKeeper updated with the latest version distributed with Solr. Since you are using it as a stand-alone application, it does not get upgraded when you upgrade Solr.
+
+Solr currently uses Apache ZooKeeper v3.4.6.
+====
+
+[[SettingUpanExternalZooKeeperEnsemble-SettingUpaSingleZooKeeper]]
+== Setting Up a Single ZooKeeper
+
+[[SettingUpanExternalZooKeeperEnsemble-Createtheinstance]]
+=== Create the instance
+Creating the instance is a simple matter of extracting the files into a specific target directory. The actual directory itself doesn't matter, as long as you know where it is, and where you'd like to have ZooKeeper store its internal data.
+
+[[SettingUpanExternalZooKeeperEnsemble-Configuretheinstance]]
+=== Configure the instance
+The next step is to configure your ZooKeeper instance. To do that, create the following file: `<ZOOKEEPER_HOME>/conf/zoo.cfg`. To this file, add the following information:
+
+[source,bash]
+----
+tickTime=2000
+dataDir=/var/lib/zookeeper
+clientPort=2181
+----
+
+The parameters are as follows:
+
+`tickTime`:: Part of what ZooKeeper does is to determine which servers are up and running at any given time, and the minimum session timeout is defined as two "ticks". The `tickTime` parameter specifies, in milliseconds, how long each tick should be.
+
+`dataDir`:: This is the directory in which ZooKeeper will store data about the cluster. This directory should start out empty.
+
+`clientPort`:: This is the port on which Solr will access ZooKeeper.
+
+Once this file is in place, you're ready to start the ZooKeeper instance.
+
+[[SettingUpanExternalZooKeeperEnsemble-Runtheinstance]]
+=== Run the instance
+
+To run the instance, you can simply use the `<ZOOKEEPER_HOME>/bin/zkServer.sh` script provided, as with this command: `zkServer.sh start`
+
+Again, ZooKeeper provides a great deal of power through additional configurations, but delving into them is beyond the scope of this tutorial. For more information, see the ZooKeeper http://zookeeper.apache.org/doc/r3.4.5/zookeeperStarted.html[Getting Started] page. For this example, however, the defaults are fine.
+
+[[SettingUpanExternalZooKeeperEnsemble-PointSolrattheinstance]]
+=== Point Solr at the instance
+
+Pointing Solr at the ZooKeeper instance you've created is a simple matter of using the `-z` parameter when using the `bin/solr` script. For example, in order to point the Solr instance to the ZooKeeper you've started on port 2181, this is what you'd need to do:
+
+Starting the `cloud` example with ZooKeeper already running at port 2181 (with all other defaults):
+
+[source,bash]
+----
+bin/solr start -e cloud -z localhost:2181 -noprompt
+----
+
+Add a node pointing to an existing ZooKeeper at port 2181:
+
+[source,bash]
+----
+bin/solr start -cloud -s <path to solr home for new node> -p 8987 -z localhost:2181
+----
+
+NOTE: When you are not using an example to start Solr, make sure you upload the configuration set to ZooKeeper before creating the collection.
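+
+For example, a minimal sketch of that workflow (the configset path and the names `myconfig` and `mycollection` are illustrative):
+
+[source,bash]
+----
+# Upload a configuration set to ZooKeeper under the name "myconfig"
+bin/solr zk upconfig -z localhost:2181 -n myconfig -d /path/to/configset
+
+# Create a collection that uses the uploaded configuration
+bin/solr create -c mycollection -n myconfig
+----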
+
+[[SettingUpanExternalZooKeeperEnsemble-ShutdownZooKeeper]]
+=== Shut down ZooKeeper
+
+To shut down ZooKeeper, use the zkServer script with the "stop" command: `zkServer.sh stop`.
+
+[[SettingUpanExternalZooKeeperEnsemble-SettingupaZooKeeperEnsemble]]
+== Setting up a ZooKeeper Ensemble
+
+With an external ZooKeeper ensemble, you need to set things up just a little more carefully as compared to the Getting Started example.
+
+The difference is that rather than simply starting up the servers, you need to configure them to know about and talk to each other first. So your original `zoo.cfg` file might look like this:
+
+[source,bash]
+----
+tickTime=2000
+dataDir=/var/lib/zookeeperdata/1
+clientPort=2181
+initLimit=5
+syncLimit=2
+server.1=localhost:2888:3888
+server.2=localhost:2889:3889
+server.3=localhost:2890:3890
+----
+
+Here you see three new parameters:
+
+`initLimit`:: Amount of time, in ticks, to allow followers to connect and sync to a leader. In this case, you have 5 ticks, each of which is 2000 milliseconds long, so the server will wait as long as 10 seconds to connect and sync with the leader.
+
+`syncLimit`:: Amount of time, in ticks, to allow followers to sync with ZooKeeper. If followers fall too far behind a leader, they will be dropped.
+
+`server.X`:: These are the IDs and locations of all servers in the ensemble, and the ports on which they communicate with each other. The server ID must additionally be stored in the `<dataDir>/myid` file located in the `dataDir` of each ZooKeeper instance. The ID identifies each server, so in the case of this first instance, you would create the file `/var/lib/zookeeperdata/1/myid` with the content "1".
+
+Now, whereas with Solr you need to create entirely new directories to run multiple instances, all you need for a new ZooKeeper instance, even if it's on the same machine for testing purposes, is a new configuration file. To complete the example you'll create two more configuration files.
+
+The `<ZOOKEEPER_HOME>/conf/zoo2.cfg` file should have the content:
+
+[source,bash]
+----
+tickTime=2000
+dataDir=/var/lib/zookeeperdata/2
+clientPort=2182
+initLimit=5
+syncLimit=2
+server.1=localhost:2888:3888
+server.2=localhost:2889:3889
+server.3=localhost:2890:3890
+----
+
+You'll also need to create `<ZOOKEEPER_HOME>/conf/zoo3.cfg`:
+
+[source,bash]
+----
+tickTime=2000
+dataDir=/var/lib/zookeeperdata/3
+clientPort=2183
+initLimit=5
+syncLimit=2
+server.1=localhost:2888:3888
+server.2=localhost:2889:3889
+server.3=localhost:2890:3890
+----
+
+Finally, create your `myid` files in each of the `dataDir` directories so that each server knows which instance it is. The ID in the `myid` file on each machine must match the "server.X" definition. So, the ZooKeeper instance (or machine) named "server.1" in the above example must have a `myid` file containing the value "1". The value in the `myid` file can be any integer between 1 and 255, and must match the server IDs assigned in the `zoo.cfg` file.
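+
+A quick sketch of creating these files, assuming the `dataDir` paths used in the examples above:
+
+[source,bash]
+----
+# Each myid file holds only the ID of its own server
+echo "1" > /var/lib/zookeeperdata/1/myid
+echo "2" > /var/lib/zookeeperdata/2/myid
+echo "3" > /var/lib/zookeeperdata/3/myid
+----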
+
+To start the servers, you can simply reference the configuration files explicitly:
+
+[source,bash]
+----
+cd <ZOOKEEPER_HOME>
+bin/zkServer.sh start zoo.cfg
+bin/zkServer.sh start zoo2.cfg
+bin/zkServer.sh start zoo3.cfg
+----
+
+Once these servers are running, you can reference them from Solr just as you did before:
+
+[source,bash]
+----
+bin/solr start -e cloud -z localhost:2181,localhost:2182,localhost:2183 -noprompt
+----
+
+[[SettingUpanExternalZooKeeperEnsemble-SecuringtheZooKeeperconnection]]
+== Securing the ZooKeeper connection
+
+You may also want to secure the communication between ZooKeeper and Solr.
+
+To set up ACL protection of znodes, see <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>.
+
+For more information on getting the most power from your ZooKeeper installation, check out the http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html[ZooKeeper Administrator's Guide].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
new file mode 100644
index 0000000..930779c
--- /dev/null
+++ b/solr/solr-ref-guide/src/shards-and-indexing-data-in-solrcloud.adoc
@@ -0,0 +1,105 @@
+= Shards and Indexing Data in SolrCloud
+:page-shortname: shards-and-indexing-data-in-solrcloud
+:page-permalink: shards-and-indexing-data-in-solrcloud.html
+
+When your collection is too large for one node, you can break it up and store it in sections by creating multiple *shards*.
+
+A Shard is a logical partition of the collection, containing a subset of documents from the collection, such that every document in a collection is contained in exactly one Shard. Which shard contains each document in a collection depends on the overall "Sharding" strategy for that collection. For example, you might have a collection where the "country" field of each document determines which shard it is part of, so documents from the same country are co-located. A different collection might simply use a "hash" on the uniqueKey of each document to determine its Shard.
+
+Before SolrCloud, Solr supported Distributed Search, which allowed one query to be executed across multiple shards, so the query was executed against the entire Solr index and no documents would be missed from the search results. So splitting an index across shards is not exclusively a SolrCloud concept. There were, however, several problems with the distributed approach that necessitated improvement with SolrCloud:
+
+. Splitting an index into shards was somewhat manual.
+. There was no support for distributed indexing, which meant that you needed to explicitly send documents to a specific shard; Solr couldn't figure out on its own what shards to send documents to.
+. There was no load balancing or failover, so if you got a high number of queries, you needed to figure out where to send them and if one shard died it was just gone.
+
+SolrCloud fixes all those problems. There is support for distributing both the indexing process and the queries automatically, and ZooKeeper provides failover and load balancing. Additionally, every shard can also have multiple replicas for additional robustness.
+
+In SolrCloud there are no masters or slaves. Instead, every shard consists of at least one physical *replica*, exactly one of which is a *leader*. Leaders are automatically elected, initially on a first-come-first-served basis, and then based on the ZooKeeper process described at http://zookeeper.apache.org/doc/trunk/recipes.html#sc_leaderElection[].
+
+If a leader goes down, one of the other replicas is automatically elected as the new leader.
+
+When a document is sent to a Solr node for indexing, the system first determines which Shard that document belongs to, and then which node is currently hosting the leader for that shard. The document is then forwarded to the current leader for indexing, and the leader forwards the update to all of the other replicas.
+
+[[ShardsandIndexingDatainSolrCloud-DocumentRouting]]
+== Document Routing
+
+Solr offers the ability to specify the router implementation used by a collection by specifying the `router.name` parameter when <<collections-api.adoc#CollectionsAPI-create,creating your collection>>.
+
+If you use the (default) "```compositeId```" router, you can send documents with a prefix in the document ID which will be used to calculate the hash Solr uses to determine the shard a document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't have to be the shard name, for example), but it must be consistent so Solr behaves consistently. For example, if you wanted to co-locate documents for a customer, you could use the customer name or ID as the prefix. If your customer is "IBM", for example, with a document with the ID "12345", you would insert the prefix into the document id field: "IBM!12345". The exclamation mark ('!') is critical here, as it distinguishes the prefix used to determine which shard to direct the document to.
+
+Then at query time, you include the prefix(es) in your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM!`) to direct queries to specific shards. In some situations, this may improve query performance because it overcomes network latency when querying all the shards.
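+
+As a concrete sketch, indexing and querying with a routing prefix might look like this (the collection name `customers` and the field names are illustrative):
+
+[source,bash]
+----
+# Index a document whose ID carries the "IBM!" routing prefix
+curl 'http://localhost:8983/solr/customers/update?commit=true' \
+  -H 'Content-Type: application/json' \
+  -d '[{"id":"IBM!12345","name_s":"IBM"}]'
+
+# Direct the query to the shard(s) the "IBM!" prefix hashes to
+curl 'http://localhost:8983/solr/customers/select?q=solr&_route_=IBM!'
+----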
+
+[IMPORTANT]
+====
+The `\_route_` parameter replaces `shard.keys`, which has been deprecated and will be removed in a future Solr release.
+====
+
+The `compositeId` router supports prefixes containing up to 2 levels of routing. For example, a prefix routing first by region, then by customer: "USA!IBM!12345".
+
+Another use case could be if the customer "IBM" has a lot of documents and you want to spread it across multiple shards. The syntax for such a use case would be: "shard_key/num!document_id", where `/num` is the number of bits from the shard key to use in the composite hash.
+
+So "IBM/3!12345" will take 3 bits from the shard key and 29 bits from the unique doc id, spreading the tenant over 1/8th of the shards in the collection. Likewise if the num value was 2 it would spread the documents across 1/4th the number of shards. At query time, you include the prefix(es) along with the number of bits into your query with the `\_route_` parameter (i.e., `q=solr&_route_=IBM/3!`) to direct queries to specific shards.
+
+If you do not want to influence how documents are stored, you don't need to specify a prefix in your document ID.
+
+If you created the collection and defined the "implicit" router at the time of creation, you can additionally define a `router.field` parameter to use a field from each document to identify a shard where the document belongs. If the field specified is missing in the document, however, the document will be rejected. You could also use the `\_route_` parameter to name a specific shard.
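+
+A sketch of creating such a collection with the Collections API (the collection, shard, and field names here are illustrative):
+
+[source,bash]
+----
+# Create a collection with the "implicit" router; each document is routed to
+# the shard named in its "region_s" field
+curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=regions&router.name=implicit&shards=east,west&router.field=region_s&maxShardsPerNode=2'
+----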
+
+[[ShardsandIndexingDatainSolrCloud-ShardSplitting]]
+== Shard Splitting
+
+When you create a collection in SolrCloud, you decide on the initial number of shards to be used. But it can be difficult to know in advance the number of shards that you need, particularly when organizational requirements can change at a moment's notice, and the cost of finding out later that you chose wrong can be high, involving creating new cores and re-indexing all of your data.
+
+The ability to split shards is in the Collections API. It currently allows splitting a shard into two pieces. The existing shard is left as-is, so the split action effectively makes two copies of the data as new shards. You can delete the old shard at a later time when you're ready.
+
+More details on how to use shard splitting are in the section on the Collection API's <<collections-api.adoc#CollectionsAPI-splitshard,SPLITSHARD command>>.
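+
+For instance, splitting `shard1` of a collection might look like this (the collection name is illustrative):
+
+[source,bash]
+----
+# Split shard1 into two new sub-shards; the original shard remains until you delete it
+curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1'
+----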
+
+[[ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud]]
+== Ignoring Commits from Client Applications in SolrCloud
+
+In most cases, when running in SolrCloud mode, indexing client applications should not send explicit commit requests. Rather, you should configure auto commits with `openSearcher=false` and auto soft-commits to make recent updates visible in search requests. This ensures that auto commits occur on a regular schedule in the cluster.
+
+To enforce a policy where client applications should not send explicit commits, you should update all client applications that index data into SolrCloud. However, that is not always feasible, so Solr provides the `IgnoreCommitOptimizeUpdateProcessorFactory`, which allows you to ignore explicit commits and/or optimize requests from client applications without having to refactor your client application code.
+
+To activate this request processor you'll need to add the following to your `solrconfig.xml`:
+
+[source,xml]
+----
+<updateRequestProcessorChain name="ignore-commit-from-client" default="true">
+  <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
+    <int name="statusCode">200</int>
+  </processor>
+  <processor class="solr.LogUpdateProcessorFactory" />
+  <processor class="solr.DistributedUpdateProcessorFactory" />
+  <processor class="solr.RunUpdateProcessorFactory" />
+</updateRequestProcessorChain>
+----
+
+As shown in the example above, the processor will return 200 to the client but will ignore the commit / optimize request. Notice that you need to wire in the implicit processors needed by SolrCloud as well, since this custom chain is taking the place of the default chain.
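+
+With that chain in place, an explicit commit from a client still receives a success response; for example (the collection name is illustrative):
+
+[source,bash]
+----
+# The commit parameter is silently ignored, but the update itself is processed
+curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
+  -H 'Content-Type: application/json' \
+  -d '[{"id":"doc1"}]'
+----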
+
+In the following example, the processor will raise an exception with a 403 code with a customized error message:
+
+[source,xml]
+----
+<updateRequestProcessorChain name="ignore-commit-from-client" default="true">
+  <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
+    <int name="statusCode">403</int>
+    <str name="responseMessage">Thou shall not issue a commit!</str>
+  </processor>
+  <processor class="solr.LogUpdateProcessorFactory" />
+  <processor class="solr.DistributedUpdateProcessorFactory" />
+  <processor class="solr.RunUpdateProcessorFactory" />
+</updateRequestProcessorChain>
+----
+
+Lastly, you can also configure it to just ignore optimize and let commits pass through by doing:
+
+[source,xml]
+----
+<updateRequestProcessorChain name="ignore-optimize-only-from-client-403">
+  <processor class="solr.IgnoreCommitOptimizeUpdateProcessorFactory">
+    <str name="responseMessage">Thou shall not issue an optimize, but commits are OK!</str>
+    <bool name="ignoreOptimizeOnly">true</bool>
+  </processor>
+  <processor class="solr.RunUpdateProcessorFactory" />
+</updateRequestProcessorChain>
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/sitemap.xml
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/sitemap.xml b/solr/solr-ref-guide/src/sitemap.xml
new file mode 100755
index 0000000..d5fa97a
--- /dev/null
+++ b/solr/solr-ref-guide/src/sitemap.xml
@@ -0,0 +1,17 @@
+---
+layout: none
+search: exclude
+---
+
+<?xml version="1.0" encoding="UTF-8"?>
+<urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"
+        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
+  {% for page in site.pages %}
+  {% unless page.search == "exclude" %}
+  <url>
+    <loc>{{site.url}}{{page.url}}</loc>
+  </url>
+  {% endunless %}
+  {% endfor %}
+</urlset>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-control-script-reference.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-control-script-reference.adoc b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
new file mode 100644
index 0000000..3d5a7f7
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-control-script-reference.adoc
@@ -0,0 +1,634 @@
+= Solr Control Script Reference
+:page-shortname: solr-control-script-reference
+:page-permalink: solr-control-script-reference.html
+
+Solr includes a script known as "`bin/solr`" that allows you to perform many common operations on your Solr installation or cluster.
+
+You can start and stop Solr, create and delete collections or cores, perform operations on ZooKeeper and check the status of Solr and configured shards.
+
+You can find the script in the `bin/` directory of your Solr installation. The `bin/solr` script makes Solr easier to work with by providing simple commands and options to quickly accomplish common goals.
+
+More examples of `bin/solr` in use are available throughout the Solr Reference Guide, but particularly in the sections <<running-solr.adoc#running-solr,Running Solr>> and <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
+
+[[SolrControlScriptReference-StartingandStopping]]
+== Starting and Stopping
+
+[[SolrControlScriptReference-StartandRestart]]
+=== Start and Restart
+
+The `start` command starts Solr. The `restart` command allows you to restart Solr whether it is currently running or has already been stopped.
+
+The `start` and `restart` commands have several options to allow you to run in SolrCloud mode, use an example configuration set, start with a hostname or port that is not the default and point to a local ZooKeeper ensemble.
+
+`bin/solr start [options]`
+
+`bin/solr start -help`
+
+`bin/solr restart [options]`
+
+`bin/solr restart -help`
+
+When using the `restart` command, you must pass all of the parameters you initially passed when you started Solr. Behind the scenes, a stop request is initiated, so Solr will be stopped before being started again. If no nodes are already running, restart will skip the step to stop and proceed to starting Solr.
+
+[[SolrControlScriptReference-AvailableParameters]]
+==== Available Parameters
+
+The `bin/solr` script provides many options to allow you to customize the server in common ways, such as changing the listening port. However, most of the defaults are adequate for most Solr installations, especially when just getting started.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-a "<string>" |Start Solr with additional JVM parameters, such as those starting with -X. If you are passing JVM parameters that begin with "-D", you can omit the -a option. |`bin/solr start -a "-Xdebug -Xrunjdwp:transport=dt_socket, server=y,suspend=n,address=1044"`
+|-cloud a|
+Start Solr in SolrCloud mode, which will also launch the embedded ZooKeeper instance included with Solr.
+
+This option can be shortened to simply `-c`.
+
+If you are already running a ZooKeeper ensemble that you want to use instead of the embedded (single-node) ZooKeeper, you should also pass the -z parameter.
+
+For more details, see the section <<SolrControlScriptReference-SolrCloudMode,SolrCloud Mode>> below.
+
+ |`bin/solr start -c`
+|-d <dir> |Define a server directory, defaults to `server` (as in, `$SOLR_HOME/server`). It is uncommon to override this option. When running multiple instances of Solr on the same host, it is more common to use the same server directory for each instance and use a unique Solr home directory using the -s option. |`bin/solr start -d newServerDir`
+|-e <name> a|
+Start Solr with an example configuration. These examples are provided to help you get started faster with Solr generally, or just try a specific feature.
+
+The available options are:
+
+* cloud
+* techproducts
+* dih
+* schemaless
+
+See the section <<SolrControlScriptReference-RunningwithExampleConfigurations,Running with Example Configurations>> below for more details on the example configurations.
+ |`bin/solr start -e schemaless`
+|-f |Start Solr in the foreground; you cannot use this option when running examples with the -e option. |`bin/solr start -f`
+|-h <hostname> |Start Solr with the defined hostname. If this is not specified, 'localhost' will be assumed. |`bin/solr start -h search.mysolr.com`
+|-m <memory> |Start Solr with the defined value as the min (-Xms) and max (-Xmx) heap size for the JVM. |`bin/solr start -m 1g`
+|-noprompt a|
+Start Solr and suppress any prompts that may be seen with another option. This would have the side effect of accepting all defaults implicitly.
+
+For example, when using the "cloud" example, an interactive session guides you through several options for your SolrCloud cluster. If you want to accept all of the defaults, you can simply add the -noprompt option to your request.
+
+ |`bin/solr start -e cloud -noprompt`
+|-p <port> |Start Solr on the defined port. If this is not specified, '8983' will be used. |`bin/solr start -p 8655`
+|-s <dir> a|
+Sets the `solr.solr.home` system property; Solr will create core directories under this directory. This allows you to run multiple Solr instances on the same host while reusing the same server directory set using the -d parameter. If set, the specified directory should contain a `solr.xml` file, unless `solr.xml` exists in ZooKeeper. The default value is `server/solr`.
+
+This parameter is ignored when running examples (-e), as the `solr.solr.home` depends on which example is run.
+
+ |`bin/solr start -s newHome`
+|-v |Be more verbose. This changes the logging level of log4j from `INFO` to `DEBUG`, having the same effect as if you edited `log4j.properties` accordingly. |`bin/solr start -f -v`
+|-q |Be more quiet. This changes the logging level of log4j from `INFO` to `WARN`, having the same effect as if you edited `log4j.properties` accordingly. This can be useful in a production setting where you want to limit logging to warnings and errors. |`bin/solr start -f -q`
+|-V |Start Solr with verbose messages from the start script. |`bin/solr start -V`
+|-z <zkHost> |Start Solr with the defined ZooKeeper connection string. This option is only used with the -c option, to start Solr in SolrCloud mode. If this option is not provided, Solr will start the embedded ZooKeeper instance and use that instance for SolrCloud operations. |`bin/solr start -c -z server1:2181,server2:2181`
+|-force |If attempting to start Solr as the root user, the script will exit with a warning that running Solr as "root" can cause problems. It is possible to override this warning with the -force parameter. |`sudo bin/solr start -force`
+|===
+
+To emphasize how the default settings work, take a moment to understand that the following commands are equivalent:
+
+`bin/solr start`
+
+`bin/solr start -h localhost -p 8983 -d server -s solr -m 512m`
+
+It is not necessary to define all of the options when starting if the defaults are fine for your needs.
+
+[[SolrControlScriptReference-SettingJavaSystemProperties]]
+==== Setting Java System Properties
+
+The `bin/solr` script will pass any additional parameters that begin with `-D` to the JVM, which allows you to set arbitrary Java system properties.
+
+For example, to set the auto soft-commit frequency to 3 seconds, you can do:
+
+`bin/solr start -Dsolr.autoSoftCommit.maxTime=3000`
+
+[[SolrControlScriptReference-SolrCloudMode]]
+==== SolrCloud Mode
+
+The `-c` and `-cloud` options are equivalent:
+
+`bin/solr start -c`
+
+`bin/solr start -cloud`
+
+If you specify a ZooKeeper connection string, such as `-z 192.168.1.4:2181`, then Solr will connect to ZooKeeper and join the cluster.
+
+If you do not specify the `-z` option when starting Solr in cloud mode, then Solr will launch an embedded ZooKeeper server listening on the Solr port + 1000, i.e., if Solr is running on port 8983, then the embedded ZooKeeper will be listening on port 9983.
+
+[IMPORTANT]
+====
+If your ZooKeeper connection string uses a chroot, such as `localhost:2181/solr`, then you need to create the /solr znode before launching SolrCloud using the `bin/solr` script.
+
+To do this use the `mkroot` command outlined below, for example: `bin/solr zk mkroot /solr -z 192.168.1.4:2181`
+====
+
+When starting in SolrCloud mode, the interactive script session will prompt you to choose a configset to use.
+
+For more information about starting Solr in SolrCloud mode, see also the section <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>.
+
+[[SolrControlScriptReference-RunningwithExampleConfigurations]]
+==== Running with Example Configurations
+
+`bin/solr start -e <name>`
+
+The example configurations allow you to get started quickly with a configuration that mirrors what you hope to accomplish with Solr.
+
+Each example launches Solr with a managed schema, which allows use of the <<schema-api.adoc#schema-api,Schema API>> to make schema edits, but does not allow manual editing of a schema file. If you would prefer to manually modify a `schema.xml` file directly, you can change this default as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>.
+
+Unless otherwise noted in the descriptions below, the examples do not enable <<solrcloud.adoc#solrcloud,SolrCloud>> or <<schemaless-mode.adoc#schemaless-mode,schemaless mode>>.
+
+The following examples are provided:
+
+* *cloud*: This example starts a 1-4 node SolrCloud cluster on a single machine. When chosen, an interactive session will start to guide you through options to select the initial configset to use, the number of nodes for your example cluster, the ports to use, and name of the collection to be created. When using this example, you can choose from any of the available configsets found in `$SOLR_HOME/server/solr/configsets`.
+* *techproducts*: This example starts Solr in standalone mode with a schema designed for the sample documents included in the `$SOLR_HOME/example/exampledocs` directory. The configset used can be found in `$SOLR_HOME/server/solr/configsets/sample_techproducts_configs`.
+* *dih*: This example starts Solr in standalone mode with the DataImportHandler (DIH) enabled and several example `dataconfig.xml` files pre-configured for different types of data supported with DIH (such as database contents, email, RSS feeds, etc.). The configset used is customized for DIH, and is found in `$SOLR_HOME/example/example-DIH/solr/conf`. For more information about DIH, see the section <<uploading-structured-data-store-data-with-the-data-import-handler.adoc#uploading-structured-data-store-data-with-the-data-import-handler,Uploading Structured Data Store Data with the Data Import Handler>>.
+* *schemaless*: This example starts Solr in standalone mode using a managed schema, as described in the section <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>>, and provides a very minimal pre-defined schema. Solr will run in <<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>> with this configuration, where Solr will create fields in the schema on the fly and will guess field types used in incoming documents. The configset used can be found in `$SOLR_HOME/server/solr/configsets/data_driven_schema_configs`.
+
+[IMPORTANT]
+====
+The run in-foreground option (`-f`) is not compatible with the `-e` option since the script needs to perform additional tasks after starting the Solr server.
+====
+
+[[SolrControlScriptReference-Stop]]
+=== Stop
+
+The `stop` command sends a STOP request to a running Solr node, which allows it to shut down gracefully. The command will wait up to 5 seconds for Solr to stop gracefully and then will forcefully kill the process (kill -9).
+
+`bin/solr stop [options]`
+
+`bin/solr stop -help`
+
+[[SolrControlScriptReference-AvailableParameters.1]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-p <port> |Stop Solr running on the given port. If you are running more than one instance, or are running in SolrCloud mode, you either need to specify the ports in separate requests or use the -all option. |`bin/solr stop -p 8983`
+|-all |Stop all running Solr instances that have a valid PID. |`bin/solr stop -all`
+|-k <key> |Stop key used to protect from stopping Solr inadvertently; default is "solrrocks". |`bin/solr stop -k solrrocks`
+|===
+
+[[SolrControlScriptReference-SystemInformation]]
+== System Information
+
+[[SolrControlScriptReference-Version]]
+=== Version
+
+The `version` command simply returns the version of Solr currently installed and immediately exits.
+
+[source,plain]
+----
+$ bin/solr version
+X.Y.0
+----
+
+[[SolrControlScriptReference-Status]]
+=== Status
+
+The `status` command displays basic JSON-formatted information for any Solr nodes found running on the local system.
+
+The `status` command uses the `SOLR_PID_DIR` environment variable to locate Solr process ID files to find running Solr instances, which defaults to the `bin` directory.
+
+`bin/solr status`
+
+The output will include a status of each node of the cluster, as in this example:
+
+[source,plain]
+----
+Found 2 Solr nodes:
+
+Solr process 39920 running on port 7574
+{
+  "solr_home":"/Applications/Solr/example/cloud/node2/solr/",
+  "version":"X.Y.0",
+  "startTime":"2015-02-10T17:19:54.739Z",
+  "uptime":"1 days, 23 hours, 55 minutes, 48 seconds",
+  "memory":"77.2 MB (%15.7) of 490.7 MB",
+  "cloud":{
+    "ZooKeeper":"localhost:9865",
+    "liveNodes":"2",
+    "collections":"2"}}
+
+Solr process 39827 running on port 8865
+{
+  "solr_home":"/Applications/Solr/example/cloud/node1/solr/",
+  "version":"X.Y.0",
+  "startTime":"2015-02-10T17:19:49.057Z",
+  "uptime":"1 days, 23 hours, 55 minutes, 54 seconds",
+  "memory":"94.2 MB (%19.2) of 490.7 MB",
+  "cloud":{
+    "ZooKeeper":"localhost:9865",
+    "liveNodes":"2",
+    "collections":"2"}}
+----
+
+[[SolrControlScriptReference-Healthcheck]]
+=== Healthcheck
+
+The `healthcheck` command generates a JSON-formatted health report for a collection when running in SolrCloud mode. The health report provides information about the state of every replica for all shards in a collection, including the number of committed documents and its current state.
+
+`bin/solr healthcheck [options]`
+
+`bin/solr healthcheck -help`
+
+[[SolrControlScriptReference-AvailableParameters.2]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-c <collection> |Name of the collection to run a healthcheck against (required). |`bin/solr healthcheck -c gettingstarted`
+|-z <zkhost> |ZooKeeper connection string, defaults to localhost:9983. If you are running Solr on a port other than 8983, you will have to specify the ZooKeeper connection string. By default, this will be the Solr port + 1000. |`bin/solr healthcheck -z localhost:2181`
+|===
+
+Below is an example healthcheck request and response using a non-standard ZooKeeper connect string, with 2 nodes running:
+
+`$ bin/solr healthcheck -c gettingstarted -z localhost:9865`
+
+[source,json]
+----
+{
+  "collection":"gettingstarted",
+  "status":"healthy",
+  "numDocs":0,
+  "numShards":2,
+  "shards":[
+    {
+      "shard":"shard1",
+      "status":"healthy",
+      "replicas":[
+        {
+          "name":"core_node1",
+          "url":"http://10.0.1.10:8865/solr/gettingstarted_shard1_replica2/",
+          "numDocs":0,
+          "status":"active",
+          "uptime":"2 days, 1 hours, 18 minutes, 48 seconds",
+          "memory":"25.6 MB (%5.2) of 490.7 MB",
+          "leader":true},
+        {
+          "name":"core_node4",
+          "url":"http://10.0.1.10:7574/solr/gettingstarted_shard1_replica1/",
+          "numDocs":0,
+          "status":"active",
+          "uptime":"2 days, 1 hours, 18 minutes, 42 seconds",
+          "memory":"95.3 MB (%19.4) of 490.7 MB"}]},
+    {
+      "shard":"shard2",
+      "status":"healthy",
+      "replicas":[
+        {
+          "name":"core_node2",
+          "url":"http://10.0.1.10:8865/solr/gettingstarted_shard2_replica2/",
+          "numDocs":0,
+          "status":"active",
+          "uptime":"2 days, 1 hours, 18 minutes, 48 seconds",
+          "memory":"25.8 MB (%5.3) of 490.7 MB"},
+        {
+          "name":"core_node3",
+          "url":"http://10.0.1.10:7574/solr/gettingstarted_shard2_replica1/",
+          "numDocs":0,
+          "status":"active",
+          "uptime":"2 days, 1 hours, 18 minutes, 42 seconds",
+          "memory":"95.4 MB (%19.4) of 490.7 MB",
+          "leader":true}]}]}
+----
+
+[[SolrControlScriptReference-CollectionsandCores]]
+== Collections and Cores
+
+The `bin/solr` script can also help you create new collections (in SolrCloud mode) or cores (in standalone mode), or delete collections.
+
+[[SolrControlScriptReference-Create]]
+=== Create
+
+The `create` command detects the mode that Solr is running in (standalone or SolrCloud) and then creates a core or collection depending on the mode.
+
+`bin/solr create [options]`
+
+`bin/solr create -help`
+
+[[SolrControlScriptReference-AvailableParameters.3]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-c <name> |Name of the core or collection to create (required). |`bin/solr create -c mycollection`
+|-d <confdir> a|
+The configuration directory. This defaults to `data_driven_schema_configs`.
+
+See the section <<SolrControlScriptReference-ConfigurationDirectoriesandSolrCloud,Configuration Directories and SolrCloud>> below for more details about this option when running in SolrCloud mode.
+
+ |`bin/solr create -d basic_configs`
+|-n <configName> |The configuration name. This defaults to the same name as the core or collection. |`bin/solr create -n basic`
+|-p <port> a|
+Port of a local Solr instance to send the create command to; by default the script tries to detect the port by looking for running Solr instances.
+
+This option is useful if you are running multiple standalone Solr instances on the same host, thus requiring you to be specific about which instance to create the core in.
+
+ |`bin/solr create -p 8983`
+a|
+-s <shards>
+
+-shards
+
+ |Number of shards to split a collection into, default is 1; only applies when Solr is running in SolrCloud mode. |`bin/solr create -s 2`
+a|
+-rf <replicas>
+
+-replicationFactor
+
+ |Number of copies of each document in the collection. The default is 1 (no replication). |`bin/solr create -rf 2`
+|-force |If attempting to run create as "root" user, the script will exit with a warning that running Solr or actions against Solr as "root" can cause problems. It is possible to override this warning with the -force parameter. |`bin/solr create -c foo -force`
+|===
+
+[[SolrControlScriptReference-ConfigurationDirectoriesandSolrCloud]]
+==== Configuration Directories and SolrCloud
+
+Before creating a collection in SolrCloud, the configuration directory used by the collection must be uploaded to ZooKeeper. The create command supports several use cases for how collections and configuration directories work. The main decision you need to make is whether a configuration directory in ZooKeeper should be shared across multiple collections.
+
+Let's work through a few examples to illustrate how configuration directories work in SolrCloud.
+
+First, if you don't provide the `-d` or `-n` options, then the default configuration (`$SOLR_HOME/server/solr/configsets/data_driven_schema_configs/conf`) is uploaded to ZooKeeper using the same name as the collection. For example, the following command will result in the *data_driven_schema_configs* configuration being uploaded to `/configs/contacts` in ZooKeeper: `bin/solr create -c contacts`. If you create another collection, by doing `bin/solr create -c contacts2`, then another copy of the `data_driven_schema_configs` directory will be uploaded to ZooKeeper under `/configs/contacts2`. Any changes you make to the configuration for the contacts collection will not affect the contacts2 collection. Put simply, the default behavior creates a unique copy of the configuration directory for each collection you create.
+
+You can override the name given to the configuration directory in ZooKeeper by using the `-n` option. For instance, the command `bin/solr create -c logs -d basic_configs -n basic` will upload the `server/solr/configsets/basic_configs/conf` directory to ZooKeeper as `/configs/basic`.
+
+Notice that we used the `-d` option to specify a different configuration than the default. Solr provides several built-in configurations under `server/solr/configsets`. However you can also provide the path to your own configuration directory using the `-d` option. For instance, the command `bin/solr create -c mycoll -d /tmp/myconfigs` will upload `/tmp/myconfigs` into ZooKeeper under `/configs/mycoll`. To reiterate, the configuration directory is named after the collection unless you override it using the `-n` option.
+
+Other collections can share the same configuration by specifying the name of the shared configuration using the `-n` option. For instance, the following command will create a new collection that shares the basic configuration created previously: `bin/solr create -c logs2 -n basic`.
+
+[[SolrControlScriptReference-Data-drivenSchemaandSharedConfigurations]]
+==== Data-driven Schema and Shared Configurations
+
+The `data_driven_schema_configs` schema can mutate as data is indexed. Consequently, we recommend that you do not share data-driven configurations between collections unless you are certain that all collections should inherit the changes made when indexing data into one of the collections.
+
+[[SolrControlScriptReference-Delete]]
+=== Delete
+
+The `delete` command detects the mode that Solr is running in (standalone or SolrCloud) and then deletes the specified core (standalone) or collection (SolrCloud) as appropriate.
+
+`bin/solr delete [options]`
+
+`bin/solr delete -help`
+
+If running in SolrCloud mode, the delete command checks if the configuration directory used by the collection you are deleting is being used by other collections. If not, then the configuration directory is also deleted from ZooKeeper. For example, if you created a collection by doing `bin/solr create -c contacts`, then the delete command `bin/solr delete -c contacts` will check to see if the `/configs/contacts` configuration directory is being used by any other collections. If not, then the `/configs/contacts` directory is removed from ZooKeeper.
+
+[[SolrControlScriptReference-AvailableParameters.4]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-c <name> |Name of the core / collection to delete (required). |`bin/solr delete -c mycoll`
+|-deleteConfig <true|false> a|
+Delete the configuration directory from ZooKeeper. The default is true.
+
+If the configuration directory is being used by another collection, then it will not be deleted even if you pass `-deleteConfig` as true.
+
+ |`bin/solr delete -deleteConfig false`
+|-p <port> a|
+The port of a local Solr instance to send the delete command to. By default the script tries to detect the port by looking for running Solr instances.
+
+This option is useful if you are running multiple standalone Solr instances on the same host, thus requiring you to be specific about which instance to delete the core from.
+
+ |`bin/solr delete -p 8983`
+|===
+
+[[SolrControlScriptReference-ZooKeeperOperations]]
+== ZooKeeper Operations
+
+The `bin/solr` script allows certain operations affecting ZooKeeper. These operations are for SolrCloud mode only. The operations are available as sub-commands, which each have their own set of options.
+
+`bin/solr zk [sub-command] [options]`
+
+`bin/solr zk -help`
+
+NOTE: Solr should have been started at least once before issuing these commands to initialize ZooKeeper with the znodes Solr expects. Once ZooKeeper is initialized, Solr doesn't need to be running on any node to use these commands.
+
+[[SolrControlScriptReference-UploadaConfigurationSet]]
+=== Upload a Configuration Set
+
+Use the `zk upconfig` command to upload one of the pre-configured configuration sets or a customized configuration set to ZooKeeper.
+
+
+[[SolrControlScriptReference-AvailableParameters_allparametersarerequired_]]
+==== Available Parameters (all parameters are required)
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-n <name> a|
+Name of the configuration set in ZooKeeper. This command will upload the configuration set to the "configs" ZooKeeper node giving it the name specified.
+
+You can see all uploaded configuration sets in the Admin UI via the Cloud screens. Choose Cloud -> Tree -> configs to see them.
+
+If a pre-existing configuration set is specified, it will be overwritten in ZooKeeper.
+
+ |`-n myconfig`
+|-d <configset dir> a|
+The path of the configuration set to upload. It should have a "conf" directory immediately below it that in turn contains `solrconfig.xml`, etc.
+
+If just a name is supplied, `$SOLR_HOME/server/solr/configsets` will be checked for this name. An absolute path may be supplied instead.
+
+ a|
+`-d directory_under_configsets`
+
+`-d /path/to/configset/source`
+
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+An example of this command with these parameters is:
+
+`bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configset`
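+
+Because of the name-only lookup described above, the `-d` value can also be just the name of a directory under `$SOLR_HOME/server/solr/configsets`; for example, assuming the `sample_techproducts_configs` configset that ships with Solr is present:
+
+`bin/solr zk upconfig -z 111.222.333.444:2181 -n mynewconfig -d sample_techproducts_configs`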
+
+.Reload Collections When Changing Configurations
+[WARNING]
+====
+This command does *not* automatically make changes effective! It simply uploads the configuration sets to ZooKeeper. You can use the Collections API's <<collections-api.adoc#CollectionsAPI-reload,RELOAD command>> to reload any collections that use this configuration set.
+====
+
+[[SolrControlScriptReference-DownloadaConfigurationSet]]
+=== Download a Configuration Set
+
+Use the `zk downconfig` command to download a configuration set from ZooKeeper to the local filesystem.
+
+
+[[SolrControlScriptReference-AvailableParameters_allparametersarerequired_.1]]
+==== Available Parameters (all parameters are required)
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-n <name> |Name of config set in ZooKeeper to download. The Admin UI Cloud -> Tree -> configs node lists all available configuration sets. |`-n myconfig`
+|-d <configset dir> a|
+The path to write the downloaded configuration set into. If just a name is supplied, `$SOLR_HOME/server/solr/configsets` will be the parent. An absolute path may be supplied as well.
+
+In either case, _pre-existing configurations at the destination will be overwritten!_
+
+ a|
+`-d directory_under_configsets`
+
+`-d /path/to/configset/destination`
+
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+An example of this command is:
+
+`bin/solr zk downconfig -z 111.222.333.444:2181 -n mynewconfig -d /path/to/configset`
+
+A "best practice" is to keep your configuration sets in some form of version control as the system-of-record. In that scenario, `downconfig` should rarely be used.
+
+[[SolrControlScriptReference-CopybetweenLocalFilesandZooKeeperznodes]]
+=== Copy between Local Files and ZooKeeper znodes
+
+Use the `zk cp` command for transferring files and directories between ZooKeeper znodes and your local drive. This command will copy from the local drive to ZooKeeper, from ZooKeeper to the local drive, or from ZooKeeper to ZooKeeper.
+
+[[SolrControlScriptReference-AvailableParameters.5]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-r |Optional. Do a recursive copy. The command will fail if the <src> has children unless `-r` is specified. |`-r`
+|<src> |The file or path to copy from. If prefixed with `zk:`, the source is presumed to be ZooKeeper. If there is no prefix or the prefix is `file:`, the source is the local drive. At least one of <src> or <dest> must be prefixed by `zk:` or the command will fail. a|
+`zk:/configs/myconfigs/solrconfig.xml`
+
+`file:/Users/apache/configs/src`
+
+|<dest> |The file or path to copy to. If prefixed with `zk:`, the destination is presumed to be ZooKeeper. If there is no prefix or the prefix is `file:`, the destination is the local drive. At least one of <src> or <dest> must be prefixed by `zk:` or the command will fail. If <dest> ends in a slash character, it names a directory. a|
+`zk:/configs/myconfigs/solrconfig.xml`
+
+`file:/Users/apache/configs/dest`
+
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+Examples of this command:
+
+Recursively copy a directory from local to ZooKeeper.
+
+`bin/solr zk cp -r file:/apache/configs/whatever/conf zk:/configs/myconf -z 111.222.333.444:2181`
+
+Copy a single file from ZooKeeper to local.
+
+`bin/solr zk cp zk:/configs/myconf/managed_schema /configs/myconf/managed_schema -z 111.222.333.444:2181`
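+
+Copy an entire configuration within ZooKeeper, for example to take a backup before editing it (an illustrative invocation; both paths are znodes because both carry the `zk:` prefix):
+
+`bin/solr zk cp -r zk:/configs/myconf zk:/configs/myconfbackup -z 111.222.333.444:2181`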
+
+[[SolrControlScriptReference-RemoveaznodefromZooKeeper]]
+=== Remove a znode from ZooKeeper
+
+Use the `zk rm` command to remove a znode (and optionally all child nodes) from ZooKeeper.
+
+[[SolrControlScriptReference-AvailableParameters.6]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-r |Optional. Do a recursive removal. The command will fail if the <path> has children unless `-r` is specified. |`-r`
+|<path> a|
+The path to remove from ZooKeeper, either a parent or leaf node.
+
+There are limited safety checks: you cannot remove the `/` or `/zookeeper` nodes.
+
+The path is assumed to be a ZooKeeper node; no `zk:` prefix is necessary.
+
+ a|
+`/configs`
+
+`/configs/myconfigset`
+
+`/configs/myconfigset/solrconfig.xml`
+
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+Examples of this command:
+
+`bin/solr zk rm -r /configs`
+
+`bin/solr zk rm /configs/myconfigset/schema.xml`
+
+
+[[SolrControlScriptReference-MoveOneZooKeeperznodetoAnother_Rename_]]
+=== Move One ZooKeeper znode to Another (Rename)
+
+Use the `zk mv` command to move (rename) a ZooKeeper znode.
+
+[[SolrControlScriptReference-AvailableParameters.7]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|<src> |The znode to rename. The `zk:` prefix is assumed. |`/configs/oldconfigset`
+|<dest> |The new name of the znode. The `zk:` prefix is assumed. |`/configs/newconfigset`
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+An example of this command is:
+
+`bin/solr zk mv /configs/oldconfigset /configs/newconfigset`
+
+
+[[SolrControlScriptReference-ListaZooKeeperznode_sChildren]]
+=== List a ZooKeeper znode's Children
+
+Use the `zk ls` command to see the children of a znode.
+
+[[SolrControlScriptReference-AvailableParameters.8]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|-r |Optional. Recursively list all descendants of a znode. |`-r`
+|<path> |The path on ZooKeeper to list. |`/collections/mycollection`
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+Examples of this command:
+
+`bin/solr zk ls -r /collections/mycollection`
+
+`bin/solr zk ls /collections`
+
+
+[[SolrControlScriptReference-Createaznode_supportschroot_]]
+=== Create a znode (supports chroot)
+
+Use the `zk mkroot` command to create a znode. The primary use case for this command is to support ZooKeeper's "chroot" concept. However, it can also be used to create arbitrary paths.
+
+[[SolrControlScriptReference-AvailableParameters.9]]
+==== Available Parameters
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="20,40,40",options="header"]
+|===
+|Parameter |Description |Example
+|<path> |The path on ZooKeeper to create. Intermediate znodes will be created if necessary. A leading slash is assumed even if not specified. |`/solr`
+|-z <zkHost> |The ZooKeeper connection string. Unnecessary if ZK_HOST is defined in `solr.in.sh` or `solr.in.cmd`. |`-z 123.321.23.43:2181`
+|===
+
+Examples of this command:
+
+`bin/solr zk mkroot /solr -z 123.321.23.43:2181`
+
+`bin/solr zk mkroot /solr/production`
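+
+Once the chroot znode exists, point Solr at it by appending the path to the ZooKeeper connection string, for example by setting `ZK_HOST=111.222.333.444:2181/solr` in `solr.in.sh` (an illustrative value reusing the example host above).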

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc b/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
new file mode 100644
index 0000000..2e44bab
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-cores-and-solr-xml.adoc
@@ -0,0 +1,22 @@
+= Solr Cores and solr.xml
+:page-shortname: solr-cores-and-solr-xml
+:page-permalink: solr-cores-and-solr-xml.html
+:page-children: format-of-solr-xml, defining-core-properties, coreadmin-api, config-sets
+
+In Solr, the term _core_ is used to refer to a single index and associated transaction log and configuration files (including the `solrconfig.xml` and Schema files, among others). Your Solr installation can have multiple cores if needed, which allows you to index data with different structures in the same server, and maintain more control over how your data is presented to different audiences. In SolrCloud mode you will be more familiar with the term _collection._ Behind the scenes a collection consists of one or more cores.
+
+Cores can be created using the `bin/solr` script or as part of SolrCloud collection creation using the APIs. Core-specific properties (such as the directories to use for the indexes or configuration files, the core name, and other options) are defined in a `core.properties` file. Any `core.properties` file in any directory of your Solr installation (or in any directory below the location defined as `solr_home`) will be found by Solr, and the defined properties will be used for the core named in the file.
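+
+As a minimal sketch (the core name is illustrative), a `core.properties` file can be as simple as:
+
+[source,properties]
+----
+name=mycore
+----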
+
+In standalone mode, `solr.xml` must reside in `solr_home`. In SolrCloud mode, `solr.xml` will be loaded from ZooKeeper if it exists, with fallback to `solr_home`.
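+
+If you want `solr.xml` to live in ZooKeeper, one way to upload it is with the `bin/solr zk cp` command (an illustrative invocation; adjust the local path and connection string for your installation):
+
+`bin/solr zk cp file:/path/to/solr.xml zk:/solr.xml -z localhost:2181`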
+
+[NOTE]
+====
+In older versions of Solr, cores had to be predefined as `<core>` tags in `solr.xml` in order for Solr to know about them. Now, however, Solr supports automatic discovery of cores and they no longer need to be explicitly defined. The recommended way is to dynamically create cores/collections using the APIs.
+====
+
+The following sections describe these options in more detail.
+
+* *<<format-of-solr-xml.adoc#format-of-solr-xml,Format of solr.xml>>*: Details on how to define `solr.xml`, including the acceptable parameters for the `solr.xml` file
+* *<<defining-core-properties.adoc#defining-core-properties,Defining core.properties>>*: Details on placement of `core.properties` and available property options.
+* *<<coreadmin-api.adoc#coreadmin-api,CoreAdmin API>>*: Tools and commands for core administration using a REST API.
+* *<<config-sets.adoc#config-sets,Config Sets>>*: How to use configsets to avoid duplicating effort when defining a new core.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-field-types.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-field-types.adoc b/solr/solr-ref-guide/src/solr-field-types.adoc
new file mode 100644
index 0000000..b9bf1da
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-field-types.adoc
@@ -0,0 +1,24 @@
+= Solr Field Types
+:page-shortname: solr-field-types
+:page-permalink: solr-field-types.html
+:page-children: field-type-definitions-and-properties, field-types-included-with-solr, working-with-currencies-and-exchange-rates, working-with-dates, working-with-enum-fields, working-with-external-files-and-processes, field-properties-by-use-case
+
+The field type defines how Solr should interpret data in a field and how the field can be queried. There are many field types included with Solr by default, and they can also be defined locally.
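+
+For example, a simple general-purpose text type might be declared in the schema like this (a minimal sketch; the analyzer chain shown is illustrative):
+
+[source,xml]
+----
+<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
+  <analyzer>
+    <tokenizer class="solr.StandardTokenizerFactory"/>
+    <filter class="solr.LowerCaseFilterFactory"/>
+  </analyzer>
+</fieldType>
+----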
+
+Topics covered in this section:
+
+* <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>>
+
+* <<field-types-included-with-solr.adoc#field-types-included-with-solr,Field Types Included with Solr>>
+
+* <<working-with-currencies-and-exchange-rates.adoc#working-with-currencies-and-exchange-rates,Working with Currencies and Exchange Rates>>
+
+* <<working-with-dates.adoc#working-with-dates,Working with Dates>>
+
+* <<working-with-enum-fields.adoc#working-with-enum-fields,Working with Enum Fields>>
+
+* <<working-with-external-files-and-processes.adoc#working-with-external-files-and-processes,Working with External Files and Processes>>
+
+* <<field-properties-by-use-case.adoc#field-properties-by-use-case,Field Properties by Use Case>>
+
+TIP: See also the {solr-javadocs}/solr-core/org/apache/solr/schema/FieldType.html[FieldType Javadoc].

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-glossary.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-glossary.adoc b/solr/solr-ref-guide/src/solr-glossary.adoc
new file mode 100644
index 0000000..829293a
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-glossary.adoc
@@ -0,0 +1,188 @@
+= Solr Glossary
+:page-shortname: solr-glossary
+:page-permalink: solr-glossary.html
+:page-toc: false
+
+These are common terms used with Solr.
+
+== Solr Terms
+
+Where possible, terms are linked to relevant parts of the Solr Reference Guide for more information.
+
+*Jump to a letter:*
+
+<<SolrGlossary-A,A>> <<SolrGlossary-B,B>> <<SolrGlossary-C,C>> <<SolrGlossary-D,D>> <<SolrGlossary-E,E>> <<SolrGlossary-F,F>> G H <<SolrGlossary-I,I>> J K <<SolrGlossary-L,L>> <<SolrGlossary-M,M>> <<SolrGlossary-N,N>> <<SolrGlossary-O,O>> P <<SolrGlossary-Q,Q>> <<SolrGlossary-R,R>> <<SolrGlossary-S,S>> <<SolrGlossary-T,T>> U V <<SolrGlossary-W,W>> X Y <<SolrGlossary-Z,Z>>
+
+
+[[SolrGlossary-A]]
+=== A
+
+[[atomicupdates]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-AtomicUpdates,Atomic updates>>::
+An approach to updating only one or more fields of a document, instead of reindexing the entire document.
+
+
+[[SolrGlossary-B]]
+=== B
+
+[[booleanoperators]]Boolean operators::
+These control the inclusion or exclusion of keywords in a query by using operators such as AND, OR, and NOT.
+
+[[SolrGlossary-C]]
+=== C
+
+[[cluster]]Cluster::
+In Solr, a cluster is a set of Solr nodes operating in coordination with each other via <<zookeeper,ZooKeeper>>, and managed as a unit. A cluster may contain many collections. See also <<solrclouddef,SolrCloud>>.
+
+[[collection]]Collection::
+In Solr, one or more <<document,Documents>> grouped together in a single logical index using a single configuration and Schema.
++
+In <<solrclouddef,SolrCloud>> a collection may be divided up into multiple logical shards, which may in turn be distributed across many nodes; in a single-node Solr installation, a collection may be a single <<core,Core>>.
+
+[[commit]]Commit::
+To make document changes permanent in the index. In the case of added documents, they would be searchable after a _commit_.
+
+[[core]]Core::
+An individual Solr instance (representing a logical index). Multiple cores can run on a single node. See also <<solrclouddef,SolrCloud>>.
+
+[[corereload]]Core reload::
+To re-initialize a Solr core after changes to `schema.xml`, `solrconfig.xml` or other configuration files.
+
+[[SolrGlossary-D]]
+=== D
+
+[[distributedsearch]]Distributed search::
+Distributed search is one where queries are processed across more than one <<shard,Shard>>.
+
+[[document]]Document::
+A group of <<field,fields>> and their values. Documents are the basic unit of data in a <<collection,collection>>. Documents are assigned to <<shard,shards>> using standard hashing, or by specifically assigning a shard within the document ID. Documents are versioned after each write operation.
+
+[[SolrGlossary-E]]
+=== E
+
+[[ensemble]]Ensemble::
+A <<zookeeper,ZooKeeper>> term to indicate multiple ZooKeeper instances running simultaneously and in coordination with each other for fault tolerance.
+
+[[SolrGlossary-F]]
+=== F
+
+[[facet]]Facet::
+The arrangement of search results into categories based on indexed terms.
+
+[[field]]Field::
+The content to be indexed/searched along with metadata defining how the content should be processed by Solr.
+
+[[SolrGlossary-I]]
+=== I
+
+[[idf]]Inverse document frequency (IDF)::
+A measure of the general importance of a term. It is calculated as the total number of documents in the collection divided by the number of documents in which a particular term occurs. See http://en.wikipedia.org/wiki/Tf-idf and {lucene-javadocs}/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[the Lucene TFIDFSimilarity javadocs] for more info on TF-IDF based scoring and Lucene scoring in particular. See also <<termfrequency,Term frequency>>.
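++
+As a concrete illustration, Lucene's classic similarity computes IDF as roughly `log(numDocs / (docFreq + 1)) + 1`; the exact formula depends on the Similarity implementation in use.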
+
+[[invertedindex]]Inverted index::
+A way of creating a searchable index that lists every word and the documents that contain those words, similar to an index in the back of a book which lists words and the pages on which they can be found. When performing keyword searches, this method is considered more efficient than the alternative, which would be to create a list of documents paired with every word used in each document. Since users search using terms they expect to be in documents, finding the term before the document saves processing resources and time.
+
+[[SolrGlossary-L]]
+=== L
+
+[[leader]]Leader::
+A single <<replica,Replica>> for each <<shard,Shard>> that takes charge of coordinating index updates (document additions or deletions) to other replicas in the same shard. This is a transient responsibility assigned to a node via an election; if the current Shard Leader goes down, a new node will automatically be elected to take its place. See also <<solrclouddef,SolrCloud>>.
+
+[[SolrGlossary-M]]
+=== M
+
+[[metadata]]Metadata::
+Literally, _data about data_. Metadata is information about a document, such as its title, author, or location.
+
+[[SolrGlossary-N]]
+=== N
+
+[[naturallanguagequery]]Natural language query::
+A search that is entered as a user would normally speak or write, as in, "What is aspirin?"
+
+[[node]]Node::
+A JVM instance running Solr. Also known as a Solr server.
+
+[[SolrGlossary-O]]
+=== O
+
+[[optimisticconcurrency]]<<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-OptimisticConcurrency,Optimistic concurrency>>::
+Also known as "optimistic locking", this is an approach that allows for updates to documents currently in the index while retaining locking or version control.
+
+[[overseer]]Overseer::
+A single node in <<solrclouddef,SolrCloud>> that is responsible for processing and coordinating actions involving the entire cluster. It keeps track of the state of existing nodes, collections, shards, and replicas, and assigns new replicas to nodes. This is a transient responsibility assigned to a node via an election; if the current Overseer goes down, a new node will be automatically elected to take its place. See also <<solrclouddef,SolrCloud>>.
+
+[[SolrGlossary-Q]]
+=== Q
+
+[[query-parser]]Query parser::
+A query parser processes the terms entered by a user.
+
+[[SolrGlossary-R]]
+=== R
+
+[[recall]]Recall::
+The ability of a search engine to retrieve _all_ of the possible matches to a user's query.
+
+[[relevancedef]]Relevance::
+The appropriateness of a document to the search conducted by the user.
+
+[[replica]]Replica::
+A <<core,Core>> that acts as a physical copy of a <<shard,Shard>> in a <<solrclouddef,SolrCloud>> <<collection,Collection>>.
+
+[[replication]]<<index-replication.adoc#index-replication,Replication>>::
+A method of copying a master index from one server to one or more "slave" or "child" servers.
+
+[[requesthandler]]<<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,RequestHandler>>::
+Logic and configuration parameters that tell Solr how to handle incoming "requests", whether the requests are to return search results, to index documents, or to handle other custom situations.
+
+[[SolrGlossary-S]]
+=== S
+
+[[searchcomponent]]<<requesthandlers-and-searchcomponents-in-solrconfig.adoc#requesthandlers-and-searchcomponents-in-solrconfig,SearchComponent>>::
+Logic and configuration parameters used by request handlers to process query requests. Examples of search components include faceting, highlighting, and "more like this" functionality.
+
+[[shard]]Shard::
+In SolrCloud, a logical partition of a single <<collection,Collection>>. Every shard consists of at least one physical <<replica,Replica>>, but there may be multiple Replicas distributed across multiple <<node,Nodes>> for fault tolerance. See also <<solrclouddef,SolrCloud>>.
+
+[[solrclouddef]]<<solrcloud.adoc#solrcloud,SolrCloud>>::
+Umbrella term for a suite of functionality in Solr which allows managing a <<cluster,Cluster>> of Solr <<node,Nodes>> for scalability, fault tolerance, and high availability.
+
+[[schema]]<<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Solr Schema (managed-schema or schema.xml)>>::
+The Solr index Schema defines the fields to be indexed and the type for each field (text, integers, etc.). By default, schema data can be "managed" at run time using the <<schema-api.adoc#schema-api,Schema API>> and is typically kept in a file named `managed-schema`, which Solr modifies as needed. A collection may instead be configured to use a static Schema, which is only loaded on startup from a human-edited configuration file, typically named `schema.xml`. See <<schema-factory-definition-in-solrconfig.adoc#schema-factory-definition-in-solrconfig,Schema Factory Definition in SolrConfig>> for details.
+
+[[solrconfig]]<<the-well-configured-solr-instance.adoc#the-well-configured-solr-instance,SolrConfig (solrconfig.xml)>>::
+The Apache Solr configuration file. Defines indexing options, RequestHandlers, highlighting, spellchecking, and various other configurations. The `solrconfig.xml` file is located in the `conf` directory of the Solr home.
+
+[[spellcheck]]<<spell-checking.adoc#spell-checking,Spell Check>>::
+The ability to suggest alternative spellings of search terms to a user, as a check against spelling errors causing few or zero results.
+
+[[stopwords]]Stopwords::
+Generally, words that have little meaning to a user's search but which may have been entered as part of a <<naturallanguagequery,natural language>> query. Stopwords are generally very small pronouns, conjunctions, and prepositions (such as "the", "with", or "and").
+
+[[suggesterdef]]<<suggester.adoc#suggester,Suggester>>::
+Functionality in Solr that provides the ability to suggest possible query terms to users as they type.
+
+[[synonyms]]Synonyms::
+Synonyms generally are terms which are near to each other in meaning and may substitute for one another. In a search engine implementation, synonyms may be abbreviations as well as words, or terms that are not consistently hyphenated. Examples of synonyms in this context would be "Inc." and "Incorporated" or "iPod" and "i-pod".
+
+[[SolrGlossary-T]]
+=== T
+
+[[termfrequency]]Term frequency::
+The number of times a word occurs in a given document. See http://en.wikipedia.org/wiki/Tf-idf and {lucene-javadocs}/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[the Lucene TFIDFSimilarity javadocs] for more info on TF-IDF based scoring and Lucene scoring in particular. See also <<idf,Inverse document frequency (IDF)>>.
+
+[[transactionlog]]Transaction log::
+An append-only log of write operations maintained by each <<replica,Replica>>. This log is required with SolrCloud implementations and is created and managed automatically by Solr.
+
+[[SolrGlossary-W]]
+=== W
+
+[[wildcard]]Wildcard::
+A wildcard allows a substitution of one or more letters of a word to account for possible variations in spelling or tenses.
+
+[[SolrGlossary-Z]]
+=== Z
+
+[[zookeeper]]ZooKeeper::
+Also known as http://zookeeper.apache.org/[Apache ZooKeeper]. The system used by SolrCloud to keep track of configuration files and node names for a cluster. A ZooKeeper cluster is used as the central configuration store for the cluster, a coordinator for operations requiring distributed synchronization, and the system of record for cluster topology. See also <<solrclouddef,SolrCloud>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
new file mode 100644
index 0000000..dd81b0e
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-jdbc-apache-zeppelin.adoc
@@ -0,0 +1,54 @@
+= Solr JDBC - Apache Zeppelin
+:page-shortname: solr-jdbc-apache-zeppelin
+:page-permalink: solr-jdbc-apache-zeppelin.html
+
+Solr's JDBC driver supports Apache Zeppelin.
+
+IMPORTANT: This requires Apache Zeppelin 0.6.0 or greater, which contains the JDBC interpreter.
+
+To use http://zeppelin.apache.org[Apache Zeppelin] with Solr, you will need to create a JDBC interpreter for Solr. This will add SolrJ to the interpreter classpath. Once the interpreter has been created, you can create a notebook to issue queries. The http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation] provides additional information about JDBC prefixes and other features.
+
+[[SolrJDBC-ApacheZeppelin-CreatetheApacheSolrJDBCInterpreter]]
+== Create the Apache Solr JDBC Interpreter
+
+.Click "Interpreter" in the top navigation
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_1.png[image,height=400]
+
+.Click "Create"
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_2.png[image,height=400]
+
+.Enter information about your Solr installation
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_3.png[image,height=400]
+
+[NOTE]
+====
+For most installations, Apache Zeppelin configures PostgreSQL as the JDBC interpreter default driver. The default driver can either be replaced by the Solr driver as outlined above or you can add a separate JDBC interpreter prefix as outlined in the http://zeppelin.apache.org/docs/latest/interpreter/jdbc.html[Apache Zeppelin JDBC interpreter documentation].
+====
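+
+As a rough guide (the values are illustrative; the property names follow Zeppelin's JDBC interpreter conventions), the key interpreter settings look something like:
+
+[source,text]
+----
+default.url     jdbc:solr://localhost:9983/?collection=test
+default.driver  org.apache.solr.client.solrj.io.sql.DriverImpl
+----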
+
+[[SolrJDBC-ApacheZeppelin-CreateaNotebook]]
+== Create a Notebook
+
+.Click Notebook -> Create new note
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_4.png[image,width=517,height=400]
+
+.Provide a name and click "Create Note"
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_5.png[image,width=839,height=400]
+
+[[SolrJDBC-ApacheZeppelin-QuerywiththeNotebook]]
+== Query with the Notebook
+
+[IMPORTANT]
+====
+For some notebooks, the JDBC interpreter will not be bound to the notebook by default. Instructions on how to bind the JDBC interpreter to a notebook are available https://zeppelin.apache.org/docs/latest/interpreter/jdbc.html#bind-to-notebook[here].
+====
+
+.Results of Solr query
+image::images/solr-jdbc-apache-zeppelin/zeppelin_solrjdbc_6.png[image,width=481,height=400]
+
+The code block below assumes that the Apache Solr driver is set up as the default JDBC interpreter driver. If that is not the case, instructions for using a different prefix are available https://zeppelin.apache.org/docs/latest/interpreter/jdbc.html#how-to-use[here].
+
+[source,text]
+----
+%jdbc
+select fielda, fieldb from test limit 10
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95968c69/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
new file mode 100644
index 0000000..af70dfd
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-jdbc-dbvisualizer.adoc
@@ -0,0 +1,120 @@
+= Solr JDBC - DbVisualizer
+:page-shortname: solr-jdbc-dbvisualizer
+:page-permalink: solr-jdbc-dbvisualizer.html
+
+Solr's JDBC driver supports DbVisualizer for querying Solr.
+
+For https://www.dbvis.com/[DbVisualizer], you will need to create a new driver for Solr using the DbVisualizer Driver Manager. This will add several SolrJ client .jars to the DbVisualizer classpath. The files required are:
+
+* all .jars found in `$SOLR_HOME/dist/solrj-lib`
+* the SolrJ .jar found at `$SOLR_HOME/dist/solr-solrj-<version>.jar`
+
+Once the driver has been created, you can create a connection to Solr with the connection string format outlined in the generic section and use the SQL Commander to issue queries.
+
+[[SolrJDBC-DbVisualizer-SetupDriver]]
+== Setup Driver
+
+[[SolrJDBC-DbVisualizer-OpenDriverManager]]
+=== Open Driver Manager
+
+From the Tools menu, choose Driver Manager to add a driver.
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_1.png[image,width=673,height=400]
+
+
+[[SolrJDBC-DbVisualizer-CreateaNewDriver]]
+=== Create a New Driver
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_2.png[image,width=532,height=400]
+
+
+[[SolrJDBC-DbVisualizer-NametheDriver]]
+=== Name the Driver
+
+Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_connection_string>/?collection=<collection>`. Do not fill in values for the variables `zk_connection_string` and `collection`; those will be provided later when the connection to Solr is configured. The Driver Class will also be automatically added when the driver .jars are added.
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_3.png[image,width=532,height=400]
+
+
+[[SolrJDBC-DbVisualizer-AddDriverFilestoClasspath]]
+=== Add Driver Files to Classpath
+
+The driver files to be added are:
+
+* all .jars in `$SOLR_HOME/dist/solrj-lib`
+* the SolrJ .jar found in `$SOLR_HOME/dist/solr-solrj-<version>.jar`
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_4.png[image,width=535,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_5.png[image,width=664,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_6.png[image,width=653,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_7.png[image,width=655,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_9.png[image,width=651,height=400]
+
+
+[[SolrJDBC-DbVisualizer-ReviewandCloseDriverManager]]
+=== Review and Close Driver Manager
+
+Once the driver files have been added, you can close the Driver Manager.
+
+[[SolrJDBC-DbVisualizer-CreateaConnection]]
+== Create a Connection
+
+Next, create a connection to Solr using the driver just created.
+
+[[SolrJDBC-DbVisualizer-UsetheConnectionWizard]]
+=== Use the Connection Wizard
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_11.png[image,width=763,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_12.png[image,width=807,height=400]
+
+
+[[SolrJDBC-DbVisualizer-NametheConnection]]
+=== Name the Connection
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_13.png[image,width=402,height=400]
+
+
+[[SolrJDBC-DbVisualizer-SelecttheSolrdriver]]
+=== Select the Solr driver
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_14.png[image,width=399,height=400]
+
+
+[[SolrJDBC-DbVisualizer-SpecifytheSolrURL]]
+=== Specify the Solr URL
+
+Provide the Solr URL, using the ZooKeeper host and port and the collection. For example: `jdbc:solr://localhost:9983/?collection=test`.
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_15.png[image,width=401,height=400]
+
+
+[[SolrJDBC-DbVisualizer-OpenandConnecttoSolr]]
+== Open and Connect to Solr
+
+Once the connection has been created, double-click on it to open the connection details screen and connect to Solr.
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_16.png[image,width=625,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_17.png[image,width=592,height=400]
+
+
+[[SolrJDBC-DbVisualizer-OpenSQLCommandertoEnterQueries]]
+== Open SQL Commander to Enter Queries
+
+When the connection is established, you can use the SQL Commander to issue queries and view data.
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_19.png[image,width=577,height=400]
+
+
+image::images/solr-jdbc-dbvisualizer/dbvisualizer_solrjdbc_20.png[image,width=556,height=400]