Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/12 14:35:37 UTC

[27/37] lucene-solr:branch_6_6: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
new file mode 100644
index 0000000..7c7ede8
--- /dev/null
+++ b/solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc
@@ -0,0 +1,165 @@
+= Distributed Search with Index Sharding
+:page-shortname: distributed-search-with-index-sharding
+:page-permalink: distributed-search-with-index-sharding.html
+
+When using traditional index sharding, you will need to consider how to query your documents.
+
+It is highly recommended that you use <<solrcloud.adoc#solrcloud,SolrCloud>> when needing to scale up or scale out. The setup described below is legacy and was used prior to the existence of SolrCloud. SolrCloud provides for a truly distributed set of features with support for things like automatic routing, leader election, optimistic concurrency and other sanity checks that are expected out of a distributed system.
+
+Everything on this page is specific to the legacy setup of distributed search. Users trying out SolrCloud should not follow any of the steps or information below.
+
+Update reorders can occur (i.e., replica A may see update X then Y, while replica B sees update Y then X); *deleteByQuery* also handles reorders the same way, to ensure replicas are consistent. All replicas of a shard remain consistent, even if the updates arrive in a different order on different replicas.
+
+[[DistributedSearchwithIndexSharding-DistributingDocumentsacrossShards]]
+== Distributing Documents across Shards
+
+When not using SolrCloud, it is up to you to get all your documents indexed on each shard of your server farm. Solr supports distributed indexing (routing) in its true form only in SolrCloud mode.
+
+In the legacy distributed mode, Solr does not calculate universal term/doc frequencies. For most large-scale implementations, it is not likely to matter that Solr calculates TF/IDF at the shard level. However, if your collection is heavily skewed in its distribution across servers, you may find misleading relevancy results in your searches. In general, it is probably best to randomly distribute documents to your shards.
+
+[[DistributedSearchwithIndexSharding-ExecutingDistributedSearcheswiththeshardsParameter]]
+== Executing Distributed Searches with the `shards` Parameter
+
+If a query request includes the `shards` parameter, the Solr server distributes the request across all the shards listed as arguments to the parameter. The `shards` parameter uses this syntax:
+
+`host:port/base_url,host:port/base_url*`
+
+For example, the `shards` parameter below causes the search to be distributed across two Solr servers: *solr1* and *solr2*, both of which are running on port 8983:
+
+`\http://localhost:8983/solr/core1/select?shards=solr1:8983/solr/core1,solr2:8983/solr/core1&indent=true&q=ipod+solr`
+
+Rather than requiring users to include the `shards` parameter explicitly, it is usually preferred to configure this parameter as a default in the RequestHandler section of `solrconfig.xml`.
+
+[IMPORTANT]
+====
+Do not add the `shards` parameter to the standard request handler; doing so may cause search queries to enter an infinite loop. Instead, define a new request handler that uses the `shards` parameter, and pass distributed search requests to that handler.
+====
+
+In legacy mode, only query requests are distributed. This includes requests to the SearchHandler (or any handler extending from `org.apache.solr.handler.component.SearchHandler`) using standard components that support distributed search.
+
+As in SolrCloud mode, when `shards.info=true`, distributed responses will include information about the shard (where each shard represents a logically different index or physical location).
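+
+For example (a sketch reusing the two local cores from the testing example later on this page, so the hosts and core name are assumptions), a request with `shards.info=true` might look like:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/core1,localhost:8984/solr/core1&shards.info=true"
+----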
+
+The following components support distributed search:
+
+* The *Query* component, which returns documents matching a query
+* The *Facet* component, which processes `facet.query` and `facet.field` requests where facets are sorted by count (the default).
+* The *Highlighting* component, which enables Solr to include "highlighted" matches in field values.
+* The *Stats* component, which returns simple statistics for numeric fields within the DocSet.
+* The *Debug* component, which helps with debugging.
+
+[[DistributedSearchwithIndexSharding-LimitationstoDistributedSearch]]
+== Limitations to Distributed Search
+
+Distributed searching in Solr has the following limitations:
+
+* Each document indexed must have a unique key.
+* If Solr discovers duplicate document IDs, Solr selects the first document and discards subsequent ones.
+* The index for distributed searching may become momentarily out of sync if a commit happens between the first and second phase of the distributed search. This might cause a situation where a document that once matched a query and was subsequently changed may no longer match the query but will still be retrieved. This situation is expected to be quite rare, however, and is only possible for a single query request.
+* The number of shards is limited by the number of characters allowed for the GET method's URI; most Web servers generally support at least 4000 characters, but many servers limit URI length to reduce their vulnerability to Denial of Service (DoS) attacks.
+* Shard information can be returned with each document in a distributed search by including `fl=id,[shard]` in the search request (see the example following this list). This returns the shard URL.
+* In a distributed search, the data directory from the core descriptor overrides any data directory in `solrconfig.xml`.
+* Update commands may be sent to any server with distributed indexing configured correctly. Document adds and deletes are forwarded to the appropriate server/shard based on a hash of the unique document id. *commit* commands and *deleteByQuery* commands are sent to every server in `shards`.
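+
+For example, a request like the following (hypothetical hosts and core name, borrowed from the testing example later on this page) returns the shard URL alongside each document. Note curl's `-g` flag, which turns off URL globbing so the `[shard]` brackets are passed through literally:
+
+[source,bash]
+----
+curl -g "http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/core1,localhost:8984/solr/core1&fl=id,[shard]"
+----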
+
+Formerly, a limitation was that TF/IDF relevancy computations only used shard-local statistics; this is still the case by default. If your data isn't randomly distributed, or if you want more exact statistics, then remember to configure the ExactStatsCache (via a `<statsCache>` element in `solrconfig.xml`).
+
+[[DistributedSearchwithIndexSharding-AvoidingDistributedDeadlock]]
+== Avoiding Distributed Deadlock
+
+Like in SolrCloud mode, inter-shard requests could lead to a distributed deadlock. It can be avoided by following the instructions in the section  <<distributed-requests.adoc#distributed-requests,Distributed Requests>>.
+
+[[DistributedSearchwithIndexSharding-TestingIndexShardingonTwoLocalServers]]
+== Testing Index Sharding on Two Local Servers
+
+For simple functional testing, it's easiest to just set up two local Solr servers on different ports. (In a production environment, of course, these servers would be deployed on separate machines.)
+
+.  Make two Solr home directories and copy `solr.xml` into the new directories:
++
+[source,bash]
+----
+mkdir example/nodes
+mkdir example/nodes/node1
+# Copy solr.xml into this solr.home
+cp server/solr/solr.xml example/nodes/node1/.
+# Repeat the above steps for the second node
+mkdir example/nodes/node2
+cp server/solr/solr.xml example/nodes/node2/.
+----
+.  Start the two Solr instances
++
+[source,bash]
+----
+# Start first node on port 8983
+bin/solr start -s example/nodes/node1 -p 8983
+
+# Start second node on port 8984
+bin/solr start -s example/nodes/node2 -p 8984
+----
+.  Create a core on both nodes using the `sample_techproducts_configs` configset.
++
+[source,bash]
+----
+bin/solr create_core -c core1 -p 8983 -d sample_techproducts_configs
+# Create a core on the Solr node running on port 8984
+bin/solr create_core -c core1 -p 8984 -d sample_techproducts_configs
+----
+.  In a third window, index an example document to each of the servers:
++
+[source,bash]
+----
+bin/post -c core1 example/exampledocs/monitor.xml -port 8983
+
+bin/post -c core1 example/exampledocs/monitor2.xml -port 8984
+----
+.  Search on the node on port 8983:
++
+[source,bash]
+----
+curl "http://localhost:8983/solr/core1/select?q=*:*&wt=xml&indent=true"
+----
++
+This should bring back one document.
++
+Search on the node on port 8984:
++
+[source,bash]
+----
+curl "http://localhost:8984/solr/core1/select?q=*:*&wt=xml&indent=true"
+----
++
+This should also bring back a single document.
++
+Now do a distributed search across both servers with your browser or `curl`. In the example below, an extra parameter `fl` is passed to restrict the returned fields to `id` and `name`.
++
+[source,bash]
+----
+curl "http://localhost:8983/solr/core1/select?q=*:*&indent=true&shards=localhost:8983/solr/core1,localhost:8984/solr/core1&fl=id,name"
+----
++
+The response should contain both documents, as shown below:
++
+[source,xml]
+----
+<response>
+  <lst name="responseHeader">
+    <int name="status">0</int>
+    <int name="QTime">8</int>
+    <lst name="params">
+      <str name="q">*:*</str>
+      <str name="shards">localhost:8983/solr/core1,localhost:8984/solr/core1</str>
+      <str name="indent">true</str>
+      <str name="fl">id,name</str>
+      <str name="wt">xml</str>
+    </lst>
+  </lst>
+  <result name="response" numFound="2" start="0" maxScore="1.0">
+    <doc>
+      <str name="id">3007WFP</str>
+      <str name="name">Dell Widescreen UltraSharp 3007WFP</str>
+    </doc>
+    <doc>
+      <str name="id">VA902B</str>
+      <str name="name">ViewSonic VA902B - flat panel display - TFT - 19"</str>
+    </doc>
+  </result>
+</response>
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
new file mode 100644
index 0000000..15b2164
--- /dev/null
+++ b/solr/solr-ref-guide/src/documents-fields-and-schema-design.adoc
@@ -0,0 +1,28 @@
+= Documents, Fields, and Schema Design
+:page-shortname: documents-fields-and-schema-design
+:page-permalink: documents-fields-and-schema-design.html
+:page-children: overview-of-documents-fields-and-schema-design, solr-field-types, defining-fields, copying-fields, dynamic-fields, other-schema-elements, schema-api, putting-the-pieces-together, docvalues, schemaless-mode
+
+This section discusses how Solr organizes its data into documents and fields, as well as how to work with a schema in Solr.
+
+This section includes the following topics:
+
+<<overview-of-documents-fields-and-schema-design.adoc#overview-of-documents-fields-and-schema-design,Overview of Documents, Fields, and Schema Design>>: An introduction to the concepts covered in this section.
+
+<<solr-field-types.adoc#solr-field-types,Solr Field Types>>: Detailed information about field types in Solr, including the field types in the default Solr schema.
+
+<<defining-fields.adoc#defining-fields,Defining Fields>>: Describes how to define fields in Solr.
+
+<<copying-fields.adoc#copying-fields,Copying Fields>>: Describes how to populate fields with data copied from another field.
+
+<<dynamic-fields.adoc#dynamic-fields,Dynamic Fields>>: Information about using dynamic fields in order to catch and index fields that do not exactly conform to other field definitions in your schema.
+
+<<schema-api.adoc#schema-api,Schema API>>: Use curl commands to read various parts of a schema or create new fields and copyField rules.
+
+<<other-schema-elements.adoc#other-schema-elements,Other Schema Elements>>: Describes other important elements in the Solr schema.
+
+<<putting-the-pieces-together.adoc#putting-the-pieces-together,Putting the Pieces Together>>: A higher-level view of the Solr schema and how its elements work together.
+
+<<docvalues.adoc#docvalues,DocValues>>: Describes how to create a docValues index for faster lookups.
+
+<<schemaless-mode.adoc#schemaless-mode,Schemaless Mode>>: Automatically add previously unknown schema fields using value-based field type guessing.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/documents-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/documents-screen.adoc b/solr/solr-ref-guide/src/documents-screen.adoc
new file mode 100644
index 0000000..a885e97
--- /dev/null
+++ b/solr/solr-ref-guide/src/documents-screen.adoc
@@ -0,0 +1,73 @@
+= Documents Screen
+:page-shortname: documents-screen
+:page-permalink: documents-screen.html
+
+The Documents screen provides a simple form allowing you to execute various Solr indexing commands in a variety of formats directly from the browser.
+
+.The Documents Screen
+image::images/documents-screen/documents_add_screen.png[image,height=400]
+
+The screen allows you to:
+
+* Copy documents in JSON, CSV or XML and submit them to the index
+* Upload documents (in JSON, CSV or XML)
+* Construct documents by selecting fields and field values
+
+
+[TIP]
+====
+There are other ways to load data; see also these sections:
+
+* <<uploading-data-with-index-handlers.adoc#uploading-data-with-index-handlers,Uploading Data with Index Handlers>>
+* <<uploading-data-with-solr-cell-using-apache-tika.adoc#uploading-data-with-solr-cell-using-apache-tika,Uploading Data with Solr Cell using Apache Tika>>
+====
+
+The first step is to define the RequestHandler to use (aka 'qt'). By default, `/update` will be defined. To use Solr Cell, for example, change the request handler to `/update/extract`.
+
+Then choose the Document Type to define the type of document to load. The remaining parameters will change depending on the document type selected.
+
+[[DocumentsScreen-JSON]]
+== JSON
+
+When using the JSON document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper JSON format.
+
+Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
+
+This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.
+
+[[DocumentsScreen-CSV]]
+== CSV
+
+When using the CSV document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper CSV format, with columns delimited and one row per document.
+
+Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
+
+[[DocumentsScreen-DocumentBuilder]]
+== Document Builder
+
+The Document Builder provides a wizard-like interface to enter fields of a document.
+
+[[DocumentsScreen-FileUpload]]
+== File Upload
+
+The File Upload option allows choosing a prepared file and uploading it. If using only `/update` for the Request-Handler option, you will be limited to XML, CSV, and JSON.
+
+However, to use the ExtractingRequestHandler (aka Solr Cell), you can modify the Request-Handler to `/update/extract`. You must have this defined in your `solrconfig.xml` file, with your desired defaults. You should also update the `literal.id` parameter shown in the Extracting Req. Handler Params so the file chosen is given a unique id.
+
+Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
+
+[[DocumentsScreen-SolrCommand]]
+== Solr Command
+
+The Solr Command option allows you to use XML or JSON to perform specific actions on documents, such as defining documents to be added or deleted, updating only certain fields of documents, or issuing commit and optimize commands on the index.
+
+The documents should be structured as they would be if using `/update` on the command line.
+
+[[DocumentsScreen-XML]]
+== XML
+
+When using the XML document type, the functionality is similar to using a requestHandler on the command line. Instead of putting the documents in a curl command, they can instead be input into the Document entry box. The document structure should still be in proper Solr XML format, with each document separated by `<doc>` tags and each field defined.
+
+Then you can choose when documents should be added to the index (Commit Within), and whether existing documents should be overwritten with incoming documents with the same id (if this is not *true*, then the incoming documents will be dropped).
+
+This option will only add or overwrite documents to the index; for other update tasks, see the <<DocumentsScreen-SolrCommand,Solr Command>> option.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/docvalues.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/docvalues.adoc b/solr/solr-ref-guide/src/docvalues.adoc
new file mode 100644
index 0000000..d7ff9ad
--- /dev/null
+++ b/solr/solr-ref-guide/src/docvalues.adoc
@@ -0,0 +1,75 @@
+= DocValues
+:page-shortname: docvalues
+:page-permalink: docvalues.html
+
+DocValues are a way of recording field values internally that is more efficient for some purposes, such as sorting and faceting, than traditional indexing.
+
+== Why DocValues?
+
+The standard way that Solr builds the index is with an _inverted index_. This style builds a list of terms found in all the documents in the index, and next to each term is a list of the documents that the term appears in (as well as how many times the term appears in that document). This makes search very fast: since users search by terms, having a ready list of term-to-document values makes the query process faster.
+
+For other features that we now commonly associate with search, such as sorting, faceting, and highlighting, this approach is not very efficient. The faceting engine, for example, must look up each term that appears in each document that will make up the result set and pull the document IDs in order to build the facet list. In Solr, this is maintained in memory, and can be slow to load (depending on the number of documents, terms, etc.).
+
+In Lucene 4.0, a new approach was introduced. DocValue fields are now column-oriented fields with a document-to-value mapping built at index time. This approach promises to relieve some of the memory requirements of the fieldCache and make lookups for faceting, sorting, and grouping much faster.
+
+[[DocValues-EnablingDocValues]]
+== Enabling DocValues
+
+To use docValues, you only need to enable them for the fields where you will use them. As with all schema design, you need to define a field type and then define fields of that type with docValues enabled. All of these actions are done in `schema.xml`.
+
+Enabling a field for docValues only requires adding `docValues="true"` to the field (or field type) definition, as in this example from the `schema.xml` of Solr's `sample_techproducts_configs` <<config-sets.adoc#config-sets,config set>>:
+
+[source,xml]
+----
+<field name="manu_exact" type="string" indexed="false" stored="false" docValues="true" />
+----
+
+[IMPORTANT]
+If you have already indexed data into your Solr index, you will need to completely re-index your content after changing your field definitions in `schema.xml` in order to successfully use docValues.
+
+DocValues are only available for specific field types. The types chosen determine the underlying Lucene docValue type that will be used. The available Solr field types are:
+
+* `StrField` and `UUIDField`.
+** If the field is single-valued (i.e., multi-valued is false), Lucene will use the SORTED type.
+** If the field is multi-valued, Lucene will use the SORTED_SET type.
+* Any `Trie*` numeric fields, date fields and `EnumField`.
+** If the field is single-valued (i.e., multi-valued is false), Lucene will use the NUMERIC type.
+** If the field is multi-valued, Lucene will use the SORTED_SET type.
+* Boolean fields
+* Int|Long|Float|Double|Date PointField
+** If the field is single-valued (i.e., multi-valued is false), Lucene will use the NUMERIC type.
+** If the field is multi-valued, Lucene will use the SORTED_NUMERIC type.
+
+These Lucene types are related to how the {lucene-javadocs}/core/org/apache/lucene/index/DocValuesType.html[values are sorted and stored].
+
+There is an additional configuration option available, which is to modify the `docValuesFormat` <<field-type-definitions-and-properties.adoc#FieldTypeDefinitionsandProperties-docValuesFormat,used by the field type>>. The default implementation employs a mixture of loading some things into memory and keeping some on disk. In some cases, however, you may choose to specify an alternative {lucene-javadocs}/core/org/apache/lucene/codecs/DocValuesFormat.html[DocValuesFormat implementation]. For example, you could choose to keep everything in memory by specifying `docValuesFormat="Memory"` on a field type:
+
+[source,xml]
+----
+<fieldType name="string_in_mem_dv" class="solr.StrField" docValues="true" docValuesFormat="Memory" />
+----
+
+Please note that the `docValuesFormat` option may change in future releases.
+
+[NOTE]
+Lucene index back-compatibility is only supported for the default codec. If you choose to customize the `docValuesFormat` in your schema.xml, upgrading to a future version of Solr may require you to either switch back to the default codec and optimize your index to rewrite it into the default codec before upgrading, or re-build your entire index from scratch after upgrading.
+
+== Using DocValues
+
+=== Sorting, Faceting & Functions
+
+If `docValues="true"` for a field, then DocValues will automatically be used any time the field is used for <<common-query-parameters.adoc#CommonQueryParameters-ThesortParameter,sorting>>, <<faceting.adoc#faceting,faceting>> or <<function-queries.adoc#function-queries,function queries>>.
+
+[[DocValues-RetrievingDocValuesDuringSearch]]
+=== Retrieving DocValues During Search
+
+Field values retrieved during search queries are typically returned from stored values. However, non-stored docValues fields will also be returned along with other stored fields when all fields (or pattern-matching globs) are specified to be returned (e.g., `fl=*`) for search queries, depending on the effective value of the `useDocValuesAsStored` parameter for each field. For schema versions >= 1.6, the implicit default is `useDocValuesAsStored="true"`. See <<field-type-definitions-and-properties.adoc#field-type-definitions-and-properties,Field Type Definitions and Properties>> & <<defining-fields.adoc#defining-fields,Defining Fields>> for more details.
+
+When `useDocValuesAsStored="false"`, non-stored DocValues fields can still be explicitly requested by name in the <<common-query-parameters.adoc#CommonQueryParameters-Thefl_FieldList_Parameter,fl param>>, but will not match glob patterns (`"*"`). Note that returning DocValues along with "regular" stored fields at query time has performance implications that retrieving stored fields alone does not: DocValues are column-oriented, and may therefore incur additional cost to retrieve for each returned document. Also note that when returning non-stored fields from DocValues, the values of a multi-valued field are returned in sorted order (not insertion order). If you require multi-valued fields to be returned in the original insertion order, then make your multi-valued field stored (such a change requires re-indexing).
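+
+As a sketch (again assuming the `manu_exact` field and a hypothetical `techproducts` core), a non-stored docValues field can be requested explicitly by name:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&fl=id,manu_exact"
+----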
+
+In cases where the query is returning _only_ docValues fields, performance may improve, since returning stored fields requires disk reads and decompression, whereas returning docValues fields in the `fl` list only requires memory access.
+
+When retrieving fields from their docValues form (using the <<exporting-result-sets.adoc#exporting-result-sets,/export handler>>, <<streaming-expressions.adoc#streaming-expressions,streaming expressions>> or if the field is requested in the `fl` parameter), two important differences between regular stored fields and docValues fields must be understood:
+
+1.  Order is _not_ preserved. For simply retrieving stored fields, the insertion order is the return order. For docValues, it is the _sorted_ order.
+2.  Multiple identical entries are collapsed into a single value. Thus, if you insert the values 4, 5, 2, 4, 1, they will be returned as 1, 2, 4, 5.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/dynamic-fields.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/dynamic-fields.adoc b/solr/solr-ref-guide/src/dynamic-fields.adoc
new file mode 100644
index 0000000..32888cc
--- /dev/null
+++ b/solr/solr-ref-guide/src/dynamic-fields.adoc
@@ -0,0 +1,20 @@
+= Dynamic Fields
+:page-shortname: dynamic-fields
+:page-permalink: dynamic-fields.html
+
+_Dynamic fields_ allow Solr to index fields that you did not explicitly define in your schema.
+
+This is useful if you discover you have forgotten to define one or more fields. Dynamic fields can make your application less brittle by providing some flexibility in the documents you can add to Solr.
+
+A dynamic field is just like a regular field except it has a name with a wildcard in it. When you are indexing documents, a field that does not match any explicitly defined fields can be matched with a dynamic field.
+
+For example, suppose your schema includes a dynamic field with a name of `*_i`. If you attempt to index a document with a `cost_i` field, but no explicit `cost_i` field is defined in the schema, then the `cost_i` field will have the field type and analysis defined for `*_i`.
+
+Like regular fields, dynamic fields have a name, a field type, and options.
+
+[source,xml]
+----
+<dynamicField name="*_i" type="int" indexed="true"  stored="true"/>
+----
+
+It is recommended that you include basic dynamic field mappings (like that shown above) in your `schema.xml`. The mappings can be very useful.
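+
+For example, with the `*_i` mapping above in place, a document containing a `cost_i` field can be indexed without any further schema changes. This sketch assumes a core named `techproducts` and uses the `bin/post` tool:
+
+[source,bash]
+----
+bin/post -c techproducts -d '<add><doc><field name="id">sku-1</field><field name="cost_i">499</field></doc></add>'
+----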

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/enabling-ssl.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/enabling-ssl.adoc b/solr/solr-ref-guide/src/enabling-ssl.adoc
new file mode 100644
index 0000000..122f104
--- /dev/null
+++ b/solr/solr-ref-guide/src/enabling-ssl.adoc
@@ -0,0 +1,345 @@
+= Enabling SSL
+:page-shortname: enabling-ssl
+:page-permalink: enabling-ssl.html
+
+Solr can encrypt communications to and from clients, and between nodes in SolrCloud mode, with SSL.
+
+This section describes enabling SSL with the example Jetty server using a self-signed certificate.
+
+For background on SSL certificates and keys, see http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/.
+
+[[EnablingSSL-BasicSSLSetup]]
+== Basic SSL Setup
+
+[[EnablingSSL-Generateaself-signedcertificateandakey]]
+=== Generate a Self-Signed Certificate and a Key
+
+To generate a self-signed certificate and a single key that will be used to authenticate both the server and the client, we'll use the JDK https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html[`keytool`] command and create a separate keystore. This keystore will also be used as a truststore below. It's possible to use the keystore that comes with the JDK for these purposes, and to use a separate truststore, but those options aren't covered here.
+
+Run the commands below in the `server/etc/` directory in the binary Solr distribution. It's assumed that you have the JDK `keytool` utility on your `PATH`, and that `openssl` is also on your `PATH`. See https://www.openssl.org/related/binaries.html for OpenSSL binaries for Windows and Solaris.
+
+The `-ext SAN=...` `keytool` option allows you to specify all the DNS names and/or IP addresses that will be allowed during hostname verification (but see below for how to skip hostname verification between Solr nodes so that you don't have to specify all hosts here).
+
+In addition to `localhost` and `127.0.0.1`, this example includes a LAN IP address `192.168.1.3` for the machine the Solr nodes will be running on:
+
+[source,bash]
+----
+keytool -genkeypair -alias solr-ssl -keyalg RSA -keysize 2048 -keypass secret -storepass secret -validity 9999 -keystore solr-ssl.keystore.jks -ext SAN=DNS:localhost,IP:192.168.1.3,IP:127.0.0.1 -dname "CN=localhost, OU=Organizational Unit, O=Organization, L=Location, ST=State, C=Country"
+----
+
+The above command will create a keystore file named `solr-ssl.keystore.jks` in the current directory.
+
+[[EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL]]
+=== Convert the Certificate and Key to PEM Format for Use with cURL
+
+cURL isn't capable of using JKS formatted keystores, so the JKS keystore needs to be converted to PEM format, which cURL understands.
+
+First convert the JKS keystore into PKCS12 format using `keytool`:
+
+[source,bash]
+----
+keytool -importkeystore -srckeystore solr-ssl.keystore.jks -destkeystore solr-ssl.keystore.p12 -srcstoretype jks -deststoretype pkcs12
+----
+
+The keytool application will prompt you to create a destination keystore password and for the source keystore password, which was set when creating the keystore ("secret" in the example shown above).
+
+Next convert the PKCS12 format keystore, including both the certificate and the key, into PEM format using the http://www.openssl.org[`openssl`] command:
+
+[source,bash]
+----
+openssl pkcs12 -in solr-ssl.keystore.p12 -out solr-ssl.pem
+----
+
+If you want to use cURL on OS X Yosemite (10.10), you'll need to create a certificate-only version of the PEM format, as follows:
+
+[source,bash]
+----
+openssl pkcs12 -nokeys -in solr-ssl.keystore.p12 -out solr-ssl.cacert.pem
+----
+
+[[EnablingSSL-SetcommonSSLrelatedsystemproperties]]
+=== Set common SSL related system properties
+
+The Solr Control Script is already set up to pass SSL-related Java system properties to the JVM. To activate the SSL settings, uncomment and update the set of properties beginning with `SOLR_SSL_*` in `bin/solr.in.sh` (or `bin\solr.in.cmd` on Windows).
+
+NOTE: If you set up Solr as a service on Linux using the steps outlined in <<taking-solr-to-production.adoc#taking-solr-to-production,Taking Solr to Production>>, then make these changes in `/var/solr/solr.in.sh` instead.
+
+.bin/solr.in.sh example SOLR_SSL_* configuration
+[source,bash]
+----
+SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
+SOLR_SSL_KEY_STORE_PASSWORD=secret
+SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
+SOLR_SSL_TRUST_STORE_PASSWORD=secret
+# Require clients to authenticate
+SOLR_SSL_NEED_CLIENT_AUTH=false
+# Enable clients to authenticate (but not require)
+SOLR_SSL_WANT_CLIENT_AUTH=false
+# Define Key Store type if necessary
+SOLR_SSL_KEY_STORE_TYPE=JKS
+SOLR_SSL_TRUST_STORE_TYPE=JKS
+----
+
+When you start Solr, the `bin/solr` script includes the settings in `bin/solr.in.sh` and will pass these SSL-related system properties to the JVM.
+
+.Client Authentication Settings
+WARNING: Enable either SOLR_SSL_NEED_CLIENT_AUTH or SOLR_SSL_WANT_CLIENT_AUTH, but not both at the same time. They are mutually exclusive, and Jetty will select one of them, which may not be what you expect.
+
+Similarly, when you start Solr on Windows, the `bin\solr.cmd` script includes the settings in `bin\solr.in.cmd` - uncomment and update the set of properties beginning with `SOLR_SSL_*` to pass these SSL-related system properties to the JVM:
+
+.bin\solr.in.cmd example SOLR_SSL_* configuration
+[source,text]
+----
+set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
+set SOLR_SSL_KEY_STORE_PASSWORD=secret
+set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
+set SOLR_SSL_TRUST_STORE_PASSWORD=secret
+REM Require clients to authenticate
+set SOLR_SSL_NEED_CLIENT_AUTH=false
+REM Enable clients to authenticate (but not require)
+set SOLR_SSL_WANT_CLIENT_AUTH=false
+----
+
+[[EnablingSSL-RunSingleNodeSolrusingSSL]]
+=== Run Single Node Solr using SSL
+
+Start Solr using the command shown below; by default clients will not be required to authenticate:
+
+.*nix command
+[source,bash]
+----
+bin/solr -p 8984
+----
+
+.Windows command
+[source,text]
+----
+bin\solr.cmd -p 8984
+----
+
+[[EnablingSSL-SolrCloud]]
+== SolrCloud
+
+This section describes how to run a two-node SolrCloud cluster with no initial collections and a single-node external ZooKeeper. The commands below assume you have already created the keystore described above.
+
+[[EnablingSSL-ConfigureZooKeeper]]
+=== Configure ZooKeeper
+
+NOTE: ZooKeeper does not support encrypted communication with clients like Solr. There are several related JIRA tickets where SSL support is being planned/worked on: https://issues.apache.org/jira/browse/ZOOKEEPER-235[ZOOKEEPER-235]; https://issues.apache.org/jira/browse/ZOOKEEPER-236[ZOOKEEPER-236]; https://issues.apache.org/jira/browse/ZOOKEEPER-1000[ZOOKEEPER-1000]; and https://issues.apache.org/jira/browse/ZOOKEEPER-2120[ZOOKEEPER-2120].
+
+Before you start any SolrCloud nodes, you must configure your Solr cluster properties in ZooKeeper, so that Solr nodes know to communicate via SSL.
+
+This section assumes you have created and started a single-node external ZooKeeper on port 2181 on localhost - see <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>>.
+
+The `urlScheme` cluster-wide property needs to be set to `https` before any Solr node starts up. The example below uses the `zkcli` tool that comes with the binary Solr distribution to do this:
+
+.*nix command
+[source,bash]
+----
+server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
+----
+
+.Windows command
+[source,text]
+----
+server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val https
+----
+
+If you have set up your ZooKeeper cluster to use a <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,chroot for Solr>>, make sure you use the correct `zkhost` string with `zkcli`, e.g., `-zkhost localhost:2181/solr`.
+
+[[EnablingSSL-RunSolrCloudwithSSL]]
+=== Run SolrCloud with SSL
+
+[[EnablingSSL-CreateSolrhomedirectoriesfortwonodes]]
+==== Create Solr home directories for two nodes
+
+Create two copies of the `server/solr/` directory which will serve as the Solr home directories for each of your two SolrCloud nodes:
+
+.*nix commands
+[source,bash]
+----
+mkdir cloud
+cp -r server/solr cloud/node1
+cp -r server/solr cloud/node2
+----
+
+.Windows commands
+[source,text]
+----
+mkdir cloud
+xcopy /E server\solr cloud\node1\
+xcopy /E server\solr cloud\node2\
+----
+
+[[EnablingSSL-StartthefirstSolrnode]]
+==== Start the First Solr Node
+
+Next, start the first Solr node on port 8984. Be sure to stop the standalone server first if you started it when working through the previous section on this page.
+
+.*nix command
+[source,bash]
+----
+bin/solr -cloud -s cloud/node1 -z localhost:2181 -p 8984
+----
+
+.Windows command
+[source,text]
+----
+bin\solr.cmd -cloud -s cloud\node1 -z localhost:2181 -p 8984
+----
+
+Notice the use of the `-s` option to set the location of the Solr home directory for node1.
+
+If you created your SSL key without all DNS names/IP addresses on which Solr nodes will run, you can tell Solr to skip hostname verification for inter-Solr-node communications by setting the `solr.ssl.checkPeerName` system property to `false`:
+
+.*nix command
+[source,bash]
+----
+bin/solr -cloud -s cloud/node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerName=false
+----
+
+.Windows command
+[source,text]
+----
+bin\solr.cmd -cloud -s cloud\node1 -z localhost:2181 -p 8984 -Dsolr.ssl.checkPeerName=false
+----
+
+[[EnablingSSL-StartthesecondSolrnode]]
+==== Start the Second Solr Node
+
+Finally, start the second Solr node on port 7574 - again, to skip hostname verification, add `-Dsolr.ssl.checkPeerName=false`:
+
+.*nix command
+[source,text]
+----
+bin/solr -cloud -s cloud/node2 -z localhost:2181 -p 7574
+----
+
+.Windows command
+[source,text]
+----
+bin\solr.cmd -cloud -s cloud\node2 -z localhost:2181 -p 7574
+----
+
+[[EnablingSSL-ExampleClientActions]]
+== Example Client Actions
+
+[IMPORTANT]
+====
+cURL on OS X Mavericks (10.9) has degraded SSL support. For more information and workarounds to allow one-way SSL, see http://curl.haxx.se/mail/archive-2013-10/0036.html. cURL on OS X Yosemite (10.10) is improved - 2-way SSL is possible - see http://curl.haxx.se/mail/archive-2014-10/0053.html.
+
+The cURL commands in the following sections will not work with the system `curl` on OS X Yosemite (10.10). Instead, the certificate supplied with the `-E` param must be in PKCS12 format, and the file supplied with the `--cacert` param must contain only the CA certificate, and no key (see <<EnablingSSL-ConvertthecertificateandkeytoPEMformatforusewithcURL,above>> for instructions on creating this file):
+
+[source,bash]
+curl -E solr-ssl.keystore.p12:secret --cacert solr-ssl.cacert.pem ...
+
+====
+
+NOTE: If your operating system does not include cURL, you can download binaries here: http://curl.haxx.se/download.html
+
+=== Create a SolrCloud Collection using `bin/solr`
+
+Create a 2-shard, `replicationFactor=1` collection named `mycollection` using the default configset (`data_driven_schema_configs`):
+
+.*nix command
+[source,bash]
+----
+bin/solr create -c mycollection -shards 2
+----
+
+.Windows command
+[source,text]
+----
+bin\solr.cmd create -c mycollection -shards 2
+----
+
+The `create` action will pass the `SOLR_SSL_*` properties set in your include file to the SolrJ code used to create the collection.
+
+[[EnablingSSL-RetrieveSolrCloudclusterstatususingcURL]]
+=== Retrieve SolrCloud Cluster Status using cURL
+
+To get the resulting cluster status (again, if you have not enabled client authentication, remove the `-E solr-ssl.pem:secret` option):
+
+[source,bash]
+----
+curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/admin/collections?action=CLUSTERSTATUS&wt=json&indent=on"
+----
+
+You should get a response that looks like this:
+
+[source,json]
+----
+{
+  "responseHeader":{
+    "status":0,
+    "QTime":2041},
+  "cluster":{
+    "collections":{
+      "mycollection":{
+        "shards":{
+          "shard1":{
+            "range":"80000000-ffffffff",
+            "state":"active",
+            "replicas":{"core_node1":{
+                "state":"active",
+                "base_url":"https://127.0.0.1:8984/solr",
+                "core":"mycollection_shard1_replica1",
+                "node_name":"127.0.0.1:8984_solr",
+                "leader":"true"}}},
+          "shard2":{
+            "range":"0-7fffffff",
+            "state":"active",
+            "replicas":{"core_node2":{
+                "state":"active",
+                "base_url":"https://127.0.0.1:7574/solr",
+                "core":"mycollection_shard2_replica1",
+                "node_name":"127.0.0.1:7574_solr",
+                "leader":"true"}}}},
+        "maxShardsPerNode":"1",
+        "router":{"name":"compositeId"},
+        "replicationFactor":"1"}},
+    "properties":{"urlScheme":"https"}}}
+----
+
+[[EnablingSSL-Indexdocumentsusingpost.jar]]
+=== Index Documents using `post.jar`
+
+Use `post.jar` to index some example documents to the SolrCloud collection created above:
+
+[source,bash]
+----
+cd example/exampledocs
+
+java -Djavax.net.ssl.keyStorePassword=secret -Djavax.net.ssl.keyStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStore=../../server/etc/solr-ssl.keystore.jks -Djavax.net.ssl.trustStorePassword=secret -Durl=https://localhost:8984/solr/mycollection/update -jar post.jar *.xml
+----
+
+[[EnablingSSL-QueryusingcURL]]
+=== Query Using cURL
+
+Use cURL to query the SolrCloud collection created above, from a directory containing the PEM-formatted certificate and key created above (e.g., `example/etc/`) - if you have not enabled client authentication (system property `-Djetty.ssl.clientAuth=true`), then you can remove the `-E solr-ssl.pem:secret` option:
+
+[source,bash]
+----
+curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*&wt=json&indent=on"
+----
+
+[[EnablingSSL-IndexadocumentusingCloudSolrClient]]
+=== Index a document using `CloudSolrClient`
+
+From a Java client using SolrJ, index a document. In the code below, the `javax.net.ssl.*` system properties are set programmatically, but you could instead specify them on the java command line, as in the `post.jar` example above:
+
+[source,java]
+----
+System.setProperty("javax.net.ssl.keyStore", "/path/to/solr-ssl.keystore.jks");
+System.setProperty("javax.net.ssl.keyStorePassword", "secret");
+System.setProperty("javax.net.ssl.trustStore", "/path/to/solr-ssl.keystore.jks");
+System.setProperty("javax.net.ssl.trustStorePassword", "secret");
+String zkHost = "127.0.0.1:2181";
+CloudSolrClient client = new CloudSolrClient.Builder().withZkHost(zkHost).build();
+client.setDefaultCollection("mycollection");
+SolrInputDocument doc = new SolrInputDocument();
+doc.addField("id", "1234");
+doc.addField("name", "A lovely summer holiday");
+client.add(doc);
+client.commit();
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/errata.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/errata.adoc b/solr/solr-ref-guide/src/errata.adoc
new file mode 100644
index 0000000..4608992
--- /dev/null
+++ b/solr/solr-ref-guide/src/errata.adoc
@@ -0,0 +1,17 @@
+= Errata
+:page-shortname: errata
+:page-permalink: errata.html
+
+[[Errata-ErrataForThisDocumentation]]
+== Errata For This Documentation
+
+Any mistakes found in this documentation after its release will be listed on the online version of this page:
+
+https://lucene.apache.org/solr/guide/{solr-docs-version}/errata.html
+
+[[Errata-ErrataForPastVersionsofThisDocumentation]]
+== Errata For Past Versions of This Documentation
+
+Any known mistakes in past releases of this documentation will be noted below.
+
+**v2 API Blob store API path**: The 6.5 guide listed the Blob store API path as `/v2/blob`, but the correct path is `/v2/c/.system/blob`.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/exporting-result-sets.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/exporting-result-sets.adoc b/solr/solr-ref-guide/src/exporting-result-sets.adoc
new file mode 100644
index 0000000..51639f5
--- /dev/null
+++ b/solr/solr-ref-guide/src/exporting-result-sets.adoc
@@ -0,0 +1,55 @@
+= Exporting Result Sets
+:page-shortname: exporting-result-sets
+:page-permalink: exporting-result-sets.html
+
+
+It's possible to export fully sorted result sets using a special <<query-re-ranking.adoc#query-re-ranking,rank query parser>> and <<response-writers.adoc#response-writers,response writer>> specifically designed to work together to handle scenarios that involve sorting and exporting millions of records.
+
+This feature uses a stream sorting technique that begins to send records within milliseconds and continues to stream results until the entire result set has been sorted and exported.
+
+The cases where this functionality may be useful include: session analysis, distributed merge joins, time series roll-ups, aggregations on high cardinality fields, fully distributed field collapsing, and sort based stats.
+
+[[ExportingResultSets-FieldRequirements]]
+== Field Requirements
+
+All the fields being sorted and exported must have docValues set to true. For more information, see the section on <<docvalues.adoc#docvalues,DocValues>>.
+
+[[ExportingResultSets-The_exportRequestHandler]]
+== The `/export` RequestHandler
+
+The `/export` request handler with the appropriate configuration is one of Solr's out-of-the-box request handlers - see <<implicit-requesthandlers.adoc#implicit-requesthandlers,Implicit RequestHandlers>> for more information.
+
+Note that this request handler's properties are defined as "invariants", which means they cannot be overridden by other properties passed at another time (such as at query time).
+
+[[ExportingResultSets-RequestingResultsExport]]
+== Requesting Results Export
+
+You can use `/export` to make requests to export the result set of a query.
+
+All queries must include `sort` and `fl` parameters, or the query will return an error. Filter queries are also supported.
+
+The supported response writers are `json` and `javabin`. For backward compatibility reasons, `wt=xsort` is also supported as input, but `wt=xsort` behaves the same as `wt=json`. The default output format is `json`.
+
+Here is an example of an export request of some indexed log data:
+
+[source,text]
+----
+http://localhost:8983/solr/core_name/export?q=my-query&sort=severity+desc,timestamp+desc&fl=severity,timestamp,msg
+----
+
+[[ExportingResultSets-SpecifyingtheSortCriteria]]
+=== Specifying the Sort Criteria
+
+The `sort` property defines how documents will be sorted in the exported result set. Results can be sorted by any field that has a field type of int, long, float, double, or string. The sort fields must be single-valued fields.
+
+Up to four sort fields can be specified per request, with the `asc` or `desc` properties.
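+
+For instance, reusing the hypothetical `core_name` and log fields from the example above, a mixed ascending/descending sort might look like:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/core_name/export?q=*:*&sort=severity+desc,timestamp+asc&fl=severity,timestamp"
+----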
+
+[[ExportingResultSets-SpecifyingtheFieldList]]
+=== Specifying the Field List
+
+The `fl` property defines the fields that will be exported with the result set. Any of the field types that can be sorted (i.e., int, long, float, double, string, date, boolean) can be used in the field list. The fields can be single or multi-valued. However, returning scores and wildcards is not supported at this time.
+
+[[ExportingResultSets-DistributedSupport]]
+== Distributed Support
+
+See the section <<streaming-expressions.adoc#streaming-expressions,Streaming Expressions>> for distributed support.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/faceting.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/faceting.adoc b/solr/solr-ref-guide/src/faceting.adoc
new file mode 100644
index 0000000..26bfbe2
--- /dev/null
+++ b/solr/solr-ref-guide/src/faceting.adoc
@@ -0,0 +1,738 @@
+= Faceting
+:page-shortname: faceting
+:page-permalink: faceting.html
+:page-children: blockjoin-faceting
+
+Faceting is the arrangement of search results into categories based on indexed terms.
+
+Searchers are presented with the indexed terms, along with numerical counts of how many matching documents were found for each term. Faceting makes it easy for users to explore search results, narrowing in on exactly the results they are looking for.
+
+[[Faceting-GeneralParameters]]
+== General Parameters
+
+There are two general parameters for controlling faceting.
+
+[[Faceting-ThefacetParameter]]
+=== The `facet` Parameter
+
+If set to *true*, this parameter enables facet counts in the query response. If set to *false*, or if the value is blank or missing, faceting is disabled. None of the other parameters listed below will have any effect unless this parameter is set to *true*. The default value is blank (false).
+
+[[Faceting-Thefacet.queryParameter]]
+=== The `facet.query` Parameter
+
+This parameter allows you to specify an arbitrary query in the Lucene default syntax to generate a facet count.
+
+By default, Solr's faceting feature automatically determines the unique terms for a field and returns a count for each of those terms. Using `facet.query`, you can override this default behavior and select exactly which terms or expressions you would like to see counted. In a typical implementation of faceting, you will specify a number of `facet.query` parameters. This parameter can be particularly useful for numeric-range-based facets or prefix-based facets.
+
+You can set the `facet.query` parameter multiple times to indicate that multiple queries should be used as separate facet constraints.
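+
+For example (a sketch assuming the techproducts sample data, which includes a numeric `price` field), two range buckets can be counted in a single request. curl's `-g` flag turns off URL globbing so the range brackets are passed through literally:
+
+[source,bash]
+----
+curl -g "http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.query=price:[0+TO+100]&facet.query=price:[100+TO+*]"
+----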
+
+To use facet queries in a syntax other than the default syntax, prefix the facet query with the name of the query notation. For example, to use the hypothetical `myfunc` query parser, you could set the `facet.query` parameter like so:
+
+`facet.query={!myfunc}name~fred`
+
+[[Faceting-Field-ValueFacetingParameters]]
+== Field-Value Faceting Parameters
+
+Several parameters can be used to trigger faceting based on the indexed terms in a field.
+
+When using these parameters, it is important to remember that "term" is a very specific concept in Lucene: it relates to the literal field/value pairs that are indexed after any analysis occurs. For text fields that include stemming, lowercasing, or word splitting, the resulting terms may not be what you expect.
+
+If you want Solr to perform both analysis (for searching) and faceting on the full literal strings, use the `copyField` directive in your Schema to create two versions of the field: one Text and one String. Make sure both are `indexed="true"`. (For more information about the `copyField` directive, see <<documents-fields-and-schema-design.adoc#documents-fields-and-schema-design,Documents, Fields, and Schema Design>>.)
+
+The table below summarizes Solr's field value faceting parameters.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|<<Faceting-Thefacet.fieldParameter,facet.field>> |Identifies a field to be treated as a facet.
+|<<Faceting-Thefacet.prefixParameter,facet.prefix>> |Limits the terms used for faceting to those that begin with the specified prefix.
+|<<Faceting-Thefacet.containsParameter,facet.contains>> |Limits the terms used for faceting to those that contain the specified substring.
+|<<Faceting-Thefacet.contains.ignoreCaseParameter,facet.contains.ignoreCase>> |If facet.contains is used, ignore case when searching for the specified substring.
+|<<Faceting-Thefacet.sortParameter,facet.sort>> |Controls how faceted results are sorted.
+|<<Faceting-Thefacet.limitParameter,facet.limit>> |Controls how many constraints should be returned for each facet.
+|<<Faceting-Thefacet.offsetParameter,facet.offset>> |Specifies an offset into the facet results at which to begin displaying facets.
+|<<Faceting-Thefacet.mincountParameter,facet.mincount>> |Specifies the minimum counts required for a facet field to be included in the response.
+|<<Faceting-Thefacet.missingParameter,facet.missing>> |Controls whether Solr should compute a count of all matching results which have no value for the field, in addition to the term-based constraints of a facet field.
+|<<Faceting-Thefacet.methodParameter,facet.method>> |Selects the algorithm or method Solr should use when faceting a field.
+|<<Faceting-Thefacet.existsParameter,facet.exists>> |Caps facet counts by one. Available only for `facet.method=enum` as a performance optimization.
+|<<Faceting-Thefacet.excludeTermsParameter,facet.excludeTerms>> |Removes specific terms from facet counts. This allows you to exclude certain terms from faceting, while maintaining the terms in the index for general queries.
+|<<Faceting-Thefacet.enum.cache.minDfParameter,facet.enum.cache.minDf>> |(Advanced) Specifies the minimum document frequency (the number of documents matching a term) for which the `filterCache` should be used when determining the constraint count for that term.
+|<<Faceting-Over-RequestParameters,facet.overrequest.count>> |(Advanced) A number of documents, beyond the effective `facet.limit` to request from each shard in a distributed search
+|<<Faceting-Over-RequestParameters,facet.overrequest.ratio>> |(Advanced) A multiplier of the effective `facet.limit` to request from each shard in a distributed search
+|<<Faceting-Thefacet.threadsParameter,facet.threads>> |(Advanced) Controls parallel execution of field faceting
+|===
+
+These parameters are described in the sections below.
+
+[[Faceting-Thefacet.fieldParameter]]
+=== The `facet.field` Parameter
+
+The `facet.field` parameter identifies a field that should be treated as a facet. It iterates over each Term in the field and generates a facet count using that Term as the constraint. This parameter can be specified multiple times in a query to select multiple facet fields.
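+
+For example (assuming the techproducts sample data), the following request returns a count for each term in the `cat` field:
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.field=cat"
+----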
+
+[IMPORTANT]
+====
+If you do not set this parameter to at least one field in the schema, none of the other parameters described in this section will have any effect.
+====
+
+[[Faceting-Thefacet.prefixParameter]]
+=== The `facet.prefix` Parameter
+
+The `facet.prefix` parameter limits the terms on which to facet to those starting with the given string prefix. This does not limit the query in any way, only the facets that would be returned in response to the query.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.prefix`.
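+
+For example (a sketch assuming the techproducts sample data and its `cat` field), the per-field form restricts the returned facets for `cat` alone to terms beginning with "e":
+
+[source,bash]
+----
+curl "http://localhost:8983/solr/techproducts/select?q=*:*&rows=0&facet=true&facet.field=cat&f.cat.facet.prefix=e"
+----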
+
+[[Faceting-Thefacet.containsParameter]]
+=== The `facet.contains` Parameter
+
+The `facet.contains` parameter limits the terms on which to facet to those containing the given substring. This does not limit the query in any way, only the facets that would be returned in response to the query.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains`.
+
+[[Faceting-Thefacet.contains.ignoreCaseParameter]]
+=== The `facet.contains.ignoreCase` Parameter
+
+If `facet.contains` is used, the `facet.contains.ignoreCase` parameter causes case to be ignored when matching the given substring against candidate facet terms.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.contains.ignoreCase`.
+
+[[Faceting-Thefacet.sortParameter]]
+=== The `facet.sort` Parameter
+
+This parameter determines the ordering of the facet field constraints.
+
+There are two options for this parameter.
+
+count:: Sort the constraints by count (highest count first).
+index:: Return the constraints sorted in their index order (lexicographic by indexed term). For terms in the ASCII range, this will be alphabetically sorted.
+
+The default is `count` if `facet.limit` is greater than 0, otherwise, the default is `index`.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.sort`.
+
+[[Faceting-Thefacet.limitParameter]]
+=== The `facet.limit` Parameter
+
+This parameter specifies the maximum number of constraint counts (essentially, the number of facets for a field that are returned) that should be returned for the facet fields. A negative value means that Solr will return an unlimited number of constraint counts.
+
+The default value is 100.
+
+This parameter can be specified on a per-field basis to apply a distinct limit to each field with the syntax of `f.<fieldname>.facet.limit`.
+
+[[Faceting-Thefacet.offsetParameter]]
+=== The `facet.offset` Parameter
+
+The `facet.offset` parameter indicates an offset into the list of constraints to allow paging.
+
+The default value is 0.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.offset`.
+
+[[Faceting-Thefacet.mincountParameter]]
+=== The `facet.mincount` Parameter
+
+The `facet.mincount` parameter specifies the minimum counts required for a facet field to be included in the response. If a field's counts are below the minimum, the field's facet is not returned.
+
+The default value is 0.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.mincount`.
+
+[[Faceting-Thefacet.missingParameter]]
+=== The `facet.missing` Parameter
+
+If set to true, this parameter indicates that, in addition to the Term-based constraints of a facet field, a count of all results that match the query but which have no facet value for the field should be computed and returned in the response.
+
+The default value is false.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.missing`.
+
+[[Faceting-Thefacet.methodParameter]]
+=== The `facet.method` Parameter
+
+The `facet.method` parameter selects the type of algorithm or method Solr should use when faceting a field.
+
+The following methods are available.
+
+enum:: Enumerates all terms in a field, calculating the set intersection of documents that match the term with documents that match the query.
++
+This method is recommended for faceting multi-valued fields that have only a few distinct values. The average number of values per document does not matter.
++
+For example, faceting on a field with U.S. States such as `Alabama, Alaska, ... Wyoming` would lead to fifty cached filters which would be used over and over again. The `filterCache` should be large enough to hold all the cached filters.
+
+fc:: Calculates facet counts by iterating over documents that match the query and summing the terms that appear in each document.
++
+This is currently implemented using an `UnInvertedField` cache if the field either is multi-valued or is tokenized (according to `FieldType.isTokenized()`). Each document is looked up in the cache to see what terms/values it contains, and a tally is incremented for each value.
++
+This method is excellent for situations where the number of indexed values for the field is high, but the number of values per document is low. For multi-valued fields, a hybrid approach is used that uses term filters from the `filterCache` for terms that match many documents. The letters `fc` stand for field cache.
+
+fcs:: Per-segment field faceting for single-valued string fields. Enable with `facet.method=fcs` and control the number of threads used with the `threads` local parameter. This parameter allows faceting to be faster in the presence of rapid index changes.
+
+The default value is `fc` (except for fields using the `BoolField` field type and when `facet.exists=true` is requested) since it tends to use less memory and is faster when a field has many unique terms in the index.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.method`.
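+
+For example, assuming the `techproducts` sample data, the low-cardinality multi-valued `cat` field could be switched to the `enum` method while other faceted fields keep the default:
+
+`facet.field=cat&f.cat.facet.method=enum`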
+
+[[Faceting-Thefacet.enum.cache.minDfParameter]]
+=== The `facet.enum.cache.minDf` Parameter
+
+This parameter indicates the minimum document frequency (the number of documents matching a term) for which the filterCache should be used when determining the constraint count for that term. This is only used with the `facet.method=enum` method of faceting.
+
+A value greater than zero decreases the filterCache's memory usage, but increases the time required for the query to be processed. If you are faceting on a field with a very large number of terms, and you wish to decrease memory usage, try setting this parameter to a value between 25 and 50, and run a few tests. Then, optimize the parameter setting as necessary.
+
+The default value is 0, causing the filterCache to be used for all terms in the field.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.enum.cache.minDf`.
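+
+For example, the following sketch (assuming a high-cardinality `manu_exact` string field) only uses the filterCache for terms that match at least 25 documents:
+
+`facet.field=manu_exact&f.manu_exact.facet.method=enum&f.manu_exact.facet.enum.cache.minDf=25`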
+
+[[Faceting-Thefacet.existsParameter]]
+=== The `facet.exists` Parameter
+
+To cap facet counts at 1, specify `facet.exists=true`. It can be used with `facet.method=enum` or when that parameter is omitted. It can be used only on non-trie fields (such as strings). It may speed up facet counting on large indices and/or for high-cardinality facet values.
+
+This parameter can be specified on a per-field basis with the syntax of `f.<fieldname>.facet.exists` or via the local parameter `facet.field={!facet.method=enum facet.exists=true}size`.
+
+[[Faceting-Thefacet.excludeTermsParameter]]
+=== The `facet.excludeTerms` Parameter
+
+If you want to remove terms from facet counts but keep them in the index, the `facet.excludeTerms` parameter allows you to do that.
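+
+For example, assuming a `cat` field, the following sketch would omit the `electronics` and `currency` terms from the returned facet counts while leaving them in the index:
+
+`facet.field=cat&facet.excludeTerms=electronics,currency`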
+
+[[Faceting-Over-RequestParameters]]
+=== Over-Request Parameters
+
+In some situations, the accuracy in selecting the "top" constraints returned for a facet in a distributed Solr query can be improved by "over-requesting" the number of desired constraints (i.e., `facet.limit`) from each of the individual shards. In these situations, each shard is by default asked for the top "`10 + (1.5 * facet.limit)`" constraints.
+
+In some situations, depending on how your docs are partitioned across your shards, and what `facet.limit` value you used, you may find it advantageous to increase or decrease the amount of over-requesting Solr does. This can be achieved by setting the `facet.overrequest.count` (defaults to 10) and `facet.overrequest.ratio` (defaults to 1.5) parameters.
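+
+For example, with `facet.limit=20` each shard is asked for the top `10 + (1.5 * 20) = 40` constraints by default. The following sketch raises that to `20 + (2.0 * 20) = 60` constraints per shard:
+
+`facet.field=cat&facet.limit=20&facet.overrequest.count=20&facet.overrequest.ratio=2.0`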
+
+[[Faceting-Thefacet.threadsParameter]]
+=== The `facet.threads` Parameter
+
+This parameter causes the underlying fields used in faceting to be loaded in parallel, using up to the number of threads specified. Specify it as `facet.threads=N`, where `N` is the maximum number of threads to use. Omitting this parameter or specifying the thread count as 0 will not spawn any threads; only the main request thread will be used. Specifying a negative number of threads will create up to `Integer.MAX_VALUE` threads.
+
+[[Faceting-RangeFaceting]]
+== Range Faceting
+
+You can use Range Faceting on any date field or any numeric field that supports range queries. This is particularly useful as an alternative to stitching together a series of range queries (as facet by query) for things like prices.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|<<Faceting-Thefacet.rangeParameter,facet.range>> |Specifies the field to facet by range.
+|<<Faceting-Thefacet.range.startParameter,facet.range.start>> |Specifies the start of the facet range.
+|<<Faceting-Thefacet.range.endParameter,facet.range.end>> |Specifies the end of the facet range.
+|<<Faceting-Thefacet.range.gapParameter,facet.range.gap>> |Specifies the span of the range as a value to be added to the lower bound.
+|<<Faceting-Thefacet.range.hardendParameter,facet.range.hardend>> |A boolean parameter that specifies how Solr handles a range gap that cannot be evenly divided between the range start and end values. If true, the last range constraint will have the `facet.range.end` value as an upper bound. If false, the last range will have the smallest possible upper bound greater than `facet.range.end` such that the range is the exact width of the specified range gap. The default value for this parameter is false.
+|<<Faceting-Thefacet.range.includeParameter,facet.range.include>> |Specifies inclusion and exclusion preferences for the upper and lower bounds of the range. See the `facet.range.include` topic for more detailed information.
+|<<Faceting-Thefacet.range.otherParameter,facet.range.other>> |Specifies counts for Solr to compute in addition to the counts for each facet range constraint.
+|<<Faceting-Thefacet.range.methodParameter,facet.range.method>> |Specifies the algorithm or method to use for calculating facets.
+|===
+
+[[Faceting-Thefacet.rangeParameter]]
+=== The `facet.range` Parameter
+
+The `facet.range` parameter defines the field for which Solr should create range facets. For example:
+
+`facet.range=price&facet.range=age`
+
+`facet.range=lastModified_dt`
+
+[[Faceting-Thefacet.range.startParameter]]
+=== The `facet.range.start` Parameter
+
+The `facet.range.start` parameter specifies the lower bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.start`. For example:
+
+`f.price.facet.range.start=0.0&f.age.facet.range.start=10`
+
+`f.lastModified_dt.facet.range.start=NOW/DAY-30DAYS`
+
+[[Faceting-Thefacet.range.endParameter]]
+=== The `facet.range.end` Parameter
+
+The `facet.range.end` parameter specifies the upper bound of the ranges. You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.end`. For example:
+
+`f.price.facet.range.end=1000.0&f.age.facet.range.end=99`
+
+`f.lastModified_dt.facet.range.end=NOW/DAY+30DAYS`
+
+[[Faceting-Thefacet.range.gapParameter]]
+=== The `facet.range.gap` Parameter
+
+The span of each range, expressed as a value to be added to the lower bound. For date fields, this should be expressed using the {solr-javadocs}/solr-core/org/apache/solr/util/DateMathParser.html[`DateMathParser` syntax] (such as `facet.range.gap=%2B1DAY`, the URL-encoded form of `+1DAY`). You can specify this parameter on a per-field basis with the syntax of `f.<fieldname>.facet.range.gap`. For example:
+
+`f.price.facet.range.gap=100&f.age.facet.range.gap=10`
+
+`f.lastModified_dt.facet.range.gap=+1DAY`
+
+[[Faceting-Thefacet.range.hardendParameter]]
+=== The `facet.range.hardend` Parameter
+
+The `facet.range.hardend` parameter is a Boolean parameter that specifies how Solr should handle cases where the `facet.range.gap` does not divide evenly between `facet.range.start` and `facet.range.end`.
+
+If *true*, the last range constraint will have the `facet.range.end` value as an upper bound. If *false*, the last range will have the smallest possible upper bound greater than `facet.range.end` such that the range is the exact width of the specified range gap. The default value for this parameter is false.
+
+This parameter can be specified on a per field basis with the syntax `f.<fieldname>.facet.range.hardend`.
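+
+As a worked sketch, assume a numeric `price` field faceted with `facet.range.start=0`, `facet.range.end=950`, and `facet.range.gap=100`. The gap does not divide the interval evenly: with the default `facet.range.hardend=false` the last range is `[900,1000)`, while with `facet.range.hardend=true` it is capped at `[900,950)`:
+
+[source,text]
+----
+facet.range=price&facet.range.start=0&facet.range.end=950
+   &facet.range.gap=100&facet.range.hardend=true
+----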
+
+[[Faceting-Thefacet.range.includeParameter]]
+=== The `facet.range.include` Parameter
+
+By default, the ranges used to compute range faceting between `facet.range.start` and `facet.range.end` are inclusive of their lower bounds and exclusive of the upper bounds. The "before" range defined with the `facet.range.other` parameter is exclusive and the "after" range is inclusive. This default, equivalent to "lower" below, will not result in double counting at the boundaries. You can use the `facet.range.include` parameter to modify this behavior using the following options:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Option |Description
+|lower |All gap-based ranges include their lower bound.
+|upper |All gap-based ranges include their upper bound.
+|edge |The first and last gap ranges include their edge bounds (lower for the first one, upper for the last one) even if the corresponding upper/lower option is not specified.
+|outer |The "before" and "after" ranges will be inclusive of their bounds, even if the first or last ranges already include those boundaries.
+|all |Includes all options: lower, upper, edge, outer.
+|===
+
+You can specify this parameter on a per field basis with the syntax of `f.<fieldname>.facet.range.include`, and you can specify it multiple times to indicate multiple choices.
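+
+For example, the following sketch includes the lower bound of every gap-based range and additionally the upper edge of the final range, without double-counting any boundary value:
+
+`facet.range.include=lower&facet.range.include=edge`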
+
+[NOTE]
+====
+To ensure you avoid double-counting, do not choose both `lower` and `upper`, do not choose `outer`, and do not choose `all`.
+====
+
+[[Faceting-Thefacet.range.otherParameter]]
+=== The `facet.range.other` Parameter
+
+The `facet.range.other` parameter specifies that in addition to the counts for each range constraint between `facet.range.start` and `facet.range.end`, counts should also be computed for these options:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Option |Description
+|before |All records with field values lower than the lower bound of the first range.
+|after |All records with field values greater than the upper bound of the last range.
+|between |All records with field values between the start and end bounds of all ranges.
+|none |Do not compute any counts.
+|all |Compute counts for before, between, and after.
+|===
+
+This parameter can be specified on a per field basis with the syntax of `f.<fieldname>.facet.range.other`. In addition to the `all` option, this parameter can be specified multiple times to indicate multiple choices, but `none` will override all other options.
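+
+For example, assuming a `price` field, the following sketch computes counts for documents falling before and after the configured ranges, in addition to the ranges themselves:
+
+`f.price.facet.range.other=before&f.price.facet.range.other=after`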
+
+[[Faceting-Thefacet.range.methodParameter]]
+=== The `facet.range.method` Parameter
+
+The `facet.range.method` parameter selects the type of algorithm or method Solr should use for range faceting. Both methods produce the same results, but performance may vary.
+
+filter:: This method generates the ranges based on other `facet.range` parameters and, for each of them, executes a filter that is later intersected with the main query result set to get the count. It will make use of the filterCache, so it benefits from a cache large enough to contain all the ranges.
+
+dv:: This method iterates over the documents that match the main query, and for each of them finds the correct range for the value. This method will make use of <<docvalues.adoc#docvalues,docValues>> (if enabled for the field) or the fieldCache. The `dv` method is not supported for the `DateRangeField` field type or when using <<result-grouping.adoc#result-grouping,group.facets>>.
+
+The default value for this parameter is `filter`.
+
+[[Faceting-Thefacet.mincountParameterinRangeFaceting]]
+=== The `facet.mincount` Parameter in Range Faceting
+
+The `facet.mincount` parameter, the same one used in field faceting, is also applied to range faceting. When used, no ranges with a count below the minimum will be included in the response.
+
+.Date Ranges & Time Zones
+[NOTE]
+====
+
+Range faceting on date fields is a common situation where the <<working-with-dates.adoc#WorkingwithDates-TZ,`TZ`>> parameter can be useful to ensure that the "facet counts per day" or "facet counts per month" are based on a meaningful definition of when a given day/month "starts" relative to a particular TimeZone.
+
+For more information, see the examples in the <<working-with-dates.adoc#working-with-dates,Working with Dates>> section.
+
+====
+
+
+[[Faceting-Pivot_DecisionTree_Faceting]]
+== Pivot (Decision Tree) Faceting
+
+Pivoting is a summarization tool that lets you automatically sort, count, total or average data stored in a table. The results are typically displayed in a second table showing the summarized data. Pivot faceting lets you create a summary table of the results from faceting documents by multiple fields.
+
+Another way to look at it is that the query produces a Decision Tree, in that Solr tells you "for facet A, the constraints/counts are X/N, Y/M, etc. If you were to constrain A by X, then the constraint counts for B would be S/P, T/Q, etc.". In other words, it tells you in advance what the "next" set of facet results would be for a field if you apply a constraint from the current facet results.
+
+[[Faceting-facet.pivot]]
+=== facet.pivot
+
+The `facet.pivot` parameter defines the fields to use for the pivot. Multiple `facet.pivot` values will create multiple "facet_pivot" sections in the response. Separate each list of fields with a comma.
+
+[[Faceting-facet.pivot.mincount]]
+=== facet.pivot.mincount
+
+The `facet.pivot.mincount` parameter defines the minimum number of documents that need to match in order for the facet to be included in results. The default is 1.
+
+Using the "`bin/solr -e techproducts`" example, A query URL like this one will return the data below, with the pivot faceting results found in the section "facet_pivot":
+
+[source,text]
+----
+http://localhost:8983/solr/techproducts/select?q=*:*&facet.pivot=cat,popularity,inStock
+   &facet.pivot=popularity,cat&facet=true&facet.field=cat&facet.limit=5
+   &rows=0&wt=json&indent=true&facet.pivot.mincount=2
+----
+
+[source,json]
+----
+{  "facet_counts":{
+    "facet_queries":{},
+    "facet_fields":{
+      "cat":[
+        "electronics",14,
+        "currency",4,
+        "memory",3,
+        "connector",2,
+        "graphics card",2]},
+    "facet_dates":{},
+    "facet_ranges":{},
+    "facet_pivot":{
+      "cat,popularity,inStock":[{
+          "field":"cat",
+          "value":"electronics",
+          "count":14,
+          "pivot":[{
+              "field":"popularity",
+              "value":6,
+              "count":5,
+              "pivot":[{
+                  "field":"inStock",
+                  "value":true,
+                  "count":5}]}]
+}]}}}
+----
+
+[[Faceting-CombiningStatsComponentWithPivots]]
+=== Combining Stats Component With Pivots
+
+In addition to some of the <<Faceting-LocalParametersforFaceting,general local parameters>> supported by other types of faceting, a `stats` local parameter can be used with `facet.pivot` to refer to <<the-stats-component.adoc#the-stats-component,`stats.field`>> instances (by tag) that you would like to have computed for each Pivot Constraint.
+
+In the example below, two different (overlapping) sets of statistics are computed for each of the `facet.pivot` result hierarchies:
+
+[source,text]
+----
+stats=true
+stats.field={!tag=piv1,piv2 min=true max=true}price
+stats.field={!tag=piv2 mean=true}popularity
+facet=true
+facet.pivot={!stats=piv1}cat,inStock
+facet.pivot={!stats=piv2}manu,inStock
+----
+
+Results:
+
+[source,json]
+----
+{"facet_pivot":{
+  "cat,inStock":[{
+      "field":"cat",
+      "value":"electronics",
+      "count":12,
+      "pivot":[{
+          "field":"inStock",
+          "value":true,
+          "count":8,
+          "stats":{
+            "stats_fields":{
+              "price":{
+                "min":74.98999786376953,
+                "max":399.0}}}},
+        {
+          "field":"inStock",
+          "value":false,
+          "count":4,
+          "stats":{
+            "stats_fields":{
+              "price":{
+                "min":11.5,
+                "max":649.989990234375}}}}],
+      "stats":{
+        "stats_fields":{
+          "price":{
+            "min":11.5,
+            "max":649.989990234375}}}},
+    {
+      "field":"cat",
+      "value":"currency",
+      "count":4,
+      "pivot":[{
+          "field":"inStock",
+          "value":true,
+          "count":4,
+          "stats":{
+            "stats_fields":{
+              "price":{
+                "..."
+  "manu,inStock":[{
+      "field":"manu",
+      "value":"inc",
+      "count":8,
+      "pivot":[{
+          "field":"inStock",
+          "value":true,
+          "count":7,
+          "stats":{
+            "stats_fields":{
+              "price":{
+                "min":74.98999786376953,
+                "max":2199.0},
+              "popularity":{
+                "mean":5.857142857142857}}}},
+        {
+          "field":"inStock",
+          "value":false,
+          "count":1,
+          "stats":{
+            "stats_fields":{
+              "price":{
+                "min":479.95001220703125,
+                "max":479.95001220703125},
+              "popularity":{
+                "mean":7.0}}}}],
+      "..."}]}}}}]}]}}
+----
+
+[[Faceting-CombiningFacetQueriesAndFacetRangesWithPivotFacets]]
+=== Combining Facet Queries And Facet Ranges With Pivot Facets
+
+A `query` local parameter can be used with `facet.pivot` to refer to `facet.query` instances (by tag) that should be computed for each pivot constraint. Similarly, a `range` local parameter can be used with `facet.pivot` to refer to `facet.range` instances.
+
+In the example below, two query facets are computed for each of the `facet.pivot` result hierarchies:
+
+[source,text]
+----
+facet=true
+facet.query={!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z TO NOW]
+facet.query={!tag=q1}price:[0 TO 100]
+facet.pivot={!query=q1}cat,inStock
+----
+
+[source,json]
+----
+{"facet_counts": {
+    "facet_queries": {
+      "{!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z TO NOW]": 9,
+      "{!tag=q1}price:[0 TO 100]": 7
+    },
+    "facet_fields": {},
+    "facet_dates": {},
+    "facet_ranges": {},
+    "facet_intervals": {},
+    "facet_heatmaps": {},
+    "facet_pivot": {
+      "cat,inStock": [
+        {
+          "field": "cat",
+          "value": "electronics",
+          "count": 12,
+          "queries": {
+            "{!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z TO NOW]": 9,
+            "{!tag=q1}price:[0 TO 100]": 4
+          },
+          "pivot": [
+            {
+              "field": "inStock",
+              "value": true,
+              "count": 8,
+              "queries": {
+                "{!tag=q1}manufacturedate_dt:[2006-01-01T00:00:00Z TO NOW]": 6,
+                "{!tag=q1}price:[0 TO 100]": 2
+              }
+            },
+            "..."]}]}}}
+----
+
+In a similar way, in the example below, two range facets are computed for each of the `facet.pivot` result hierarchies:
+
+[source,text]
+----
+facet=true
+facet.range={!tag=r1}manufacturedate_dt
+facet.range.start=2006-01-01T00:00:00Z
+facet.range.end=NOW/YEAR
+facet.range.gap=+1YEAR
+facet.pivot={!range=r1}cat,inStock
+----
+
+[source,json]
+----
+{"facet_counts":{
+    "facet_queries":{},
+    "facet_fields":{},
+    "facet_dates":{},
+    "facet_ranges":{
+      "manufacturedate_dt":{
+        "counts":[
+          "2006-01-01T00:00:00Z",9,
+          "2007-01-01T00:00:00Z",0,
+          "2008-01-01T00:00:00Z",0,
+          "2009-01-01T00:00:00Z",0,
+          "2010-01-01T00:00:00Z",0,
+          "2011-01-01T00:00:00Z",0,
+          "2012-01-01T00:00:00Z",0,
+          "2013-01-01T00:00:00Z",0,
+          "2014-01-01T00:00:00Z",0],
+        "gap":"+1YEAR",
+        "start":"2006-01-01T00:00:00Z",
+        "end":"2015-01-01T00:00:00Z"}},
+    "facet_intervals":{},
+    "facet_heatmaps":{},
+    "facet_pivot":{
+      "cat,inStock":[{
+          "field":"cat",
+          "value":"electronics",
+          "count":12,
+          "ranges":{
+            "manufacturedate_dt":{
+              "counts":[
+                "2006-01-01T00:00:00Z",9,
+                "2007-01-01T00:00:00Z",0,
+                "2008-01-01T00:00:00Z",0,
+                "2009-01-01T00:00:00Z",0,
+                "2010-01-01T00:00:00Z",0,
+                "2011-01-01T00:00:00Z",0,
+                "2012-01-01T00:00:00Z",0,
+                "2013-01-01T00:00:00Z",0,
+                "2014-01-01T00:00:00Z",0],
+              "gap":"+1YEAR",
+              "start":"2006-01-01T00:00:00Z",
+              "end":"2015-01-01T00:00:00Z"}},
+          "pivot":[{
+              "field":"inStock",
+              "value":true,
+              "count":8,
+              "ranges":{
+                "manufacturedate_dt":{
+                  "counts":[
+                    "2006-01-01T00:00:00Z",6,
+                    "2007-01-01T00:00:00Z",0,
+                    "2008-01-01T00:00:00Z",0,
+                    "2009-01-01T00:00:00Z",0,
+                    "2010-01-01T00:00:00Z",0,
+                    "2011-01-01T00:00:00Z",0,
+                    "2012-01-01T00:00:00Z",0,
+                    "2013-01-01T00:00:00Z",0,
+                    "2014-01-01T00:00:00Z",0],
+                  "gap":"+1YEAR",
+                  "start":"2006-01-01T00:00:00Z",
+                  "end":"2015-01-01T00:00:00Z"}}},
+                  "..."]}]}}}
+----
+
+[[Faceting-AdditionalPivotParameters]]
+=== Additional Pivot Parameters
+
+Although `facet.pivot.mincount` deviates in name from the `facet.mincount` parameter used by field faceting, many of the other field faceting parameters described above can also be used with pivot faceting:
+
+* `facet.limit`
+* `facet.offset`
+* `facet.sort`
+* `facet.overrequest.count`
+* `facet.overrequest.ratio`
+
+[[Faceting-IntervalFaceting]]
+== Interval Faceting
+
+Another supported form of faceting is interval faceting. This sounds similar to range faceting, but the functionality is really closer to doing facet queries with range queries. Interval faceting allows you to set variable intervals and count the number of documents that have values within those intervals in the specified field.
+
+Even though the same functionality can be achieved by using a facet query with range queries, the implementation of these two methods is very different and will provide different performance depending on the context.
+
+If you are concerned about the performance of your searches, you should test with both options. Interval faceting tends to be better with multiple intervals for the same field, while facet queries tend to be better in environments where the filter cache is more effective (static indexes, for example).
+
+This method will use <<docvalues.adoc#docvalues,docValues>> if they are enabled for the field, and will use the fieldCache otherwise.
+
+[[Faceting-Thefacet.intervalparameter]]
+=== The `facet.interval` parameter
+
+This parameter indicates the field on which interval faceting must be applied. It can be used multiple times in the same request to indicate multiple fields.
+
+`facet.interval=price&facet.interval=size`
+
+[[Faceting-Thefacet.interval.setparameter]]
+=== The `facet.interval.set` parameter
+
+This parameter is used to set the intervals for the field; it can be specified multiple times to indicate multiple intervals. This parameter is global, which means that it will be used for all fields indicated with `facet.interval` unless there is an override for a specific field. To override this parameter on a specific field you can use `f.<fieldname>.facet.interval.set`, for example:
+
+[source,text]
+f.price.facet.interval.set=[0,10]&f.price.facet.interval.set=(10,100]
+
+
+[[Faceting-IntervalSyntax]]
+=== Interval Syntax
+
+Intervals must begin with either '(' or '[', be followed by the start value, then a comma (','), the end value, and finally a closing ')' or ']'.
+
+For example:
+
+* (1,10) -> will include values greater than 1 and lower than 10
+* [1,10) -> will include values greater or equal to 1 and lower than 10
+* [1,10] -> will include values greater or equal to 1 and lower or equal to 10
+
+The start and end values cannot be empty. If the interval needs to be unbounded, the special character `*` can be used for both the start and end limits.
+
+When using `\*`, `(` and `[` are treated as equal, as are `)` and `]`. `[*,*]` will include all documents with a value in the field.
+
+The interval limits may be strings but there is no need to add quotes. All the text until the comma will be treated as the start limit, and the text after that will be the end limit. For example: `[Buenos Aires,New York]`. Keep in mind that a string-like comparison will be done to match documents in string intervals (case-sensitive). The comparator can't be changed.
+
+Commas, brackets and square brackets can be escaped by using `\` in front of them. Whitespace before and after the values will be omitted.
+
+The start limit can't be greater than the end limit. Equal limits are allowed; this lets you indicate the specific values that you want to count, like `[A,A]`, `[B,B]` and `[C,Z]`.
+
+Interval faceting supports output key replacement, described below. Output keys can be replaced in both the `facet.interval` parameter and the `facet.interval.set` parameter. For example:
+
+[source,text]
+----
+&facet.interval={!key=popularity}some_field
+&facet.interval.set={!key=bad}[0,5]
+&facet.interval.set={!key=good}[5,*]
+&facet=true
+----
+
+[[Faceting-LocalParametersforFaceting]]
+== Local Parameters for Faceting
+
+The <<local-parameters-in-queries.adoc#local-parameters-in-queries,LocalParams syntax>> allows overriding global settings. It can also provide a method of adding metadata to other parameter values, much like XML attributes.
+
+[[Faceting-TaggingandExcludingFilters]]
+=== Tagging and Excluding Filters
+
+You can tag specific filters and exclude those filters when faceting. This is useful when doing multi-select faceting.
+
+Consider the following example query with faceting:
+
+`q=mainquery&fq=status:public&fq=doctype:pdf&facet=true&facet.field=doctype`
+
+Because everything is already constrained by the filter `doctype:pdf`, the `facet.field=doctype` facet command is currently redundant and will return 0 counts for everything except `doctype:pdf`.
+
+To implement a multi-select facet for doctype, a GUI may want to still display the other doctype values and their associated counts, as if the `doctype:pdf` constraint had not yet been applied. For example:
+
+[source,text]
+----
+=== Document Type ===
+  [ ] Word (42)
+  [x] PDF  (96)
+  [ ] Excel(11)
+  [ ] HTML (63)
+----
+
+To return counts for doctype values that are currently not selected, tag filters that directly constrain doctype, and exclude those filters when faceting on doctype.
+
+`q=mainquery&fq=status:public&fq={!tag=dt}doctype:pdf&facet=true&facet.field={!ex=dt}doctype`
+
+Filter exclusion is supported for all types of facets. Both the `tag` and `ex` local parameters may specify multiple values by separating them with commas.
+
+[[Faceting-ChangingtheOutputKey]]
+=== Changing the Output Key
+
+To change the output key for a faceting command, specify a new name with the `key` local parameter. For example:
+
+`facet.field={!ex=dt key=mylabel}doctype`
+
+The parameter setting above causes the field facet results for the "doctype" field to be returned using the key "mylabel" rather than "doctype" in the response. This can be helpful when faceting on the same field multiple times with different exclusions.
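+
+Building on the tagged `dt` filter from the previous example, the following sketch facets on the `doctype` field twice, once ignoring the filter and once respecting it, each under its own key:
+
+`facet.field={!ex=dt key=all_doctypes}doctype&facet.field={!key=selected_doctypes}doctype`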
+
+[[Faceting-Limitingfacetwithcertainterms]]
+=== Limiting Facet with Certain Terms
+
+To limit a field facet to certain terms, specify them comma-separated with the `terms` local parameter. Commas and quotes in terms can be escaped with a backslash, as in `\,`. In this case the facet is calculated in a way similar to `facet.method=enum`, but it ignores `facet.enum.cache.minDf`. For example:
+
+`facet.field={!terms='alfa,betta,with\,with\',with space'}symbol`
+
+[[Faceting-RelatedTopics]]
+== Related Topics
+
+* <<spatial-search.adoc#spatial-search,Heatmap Faceting (Spatial)>>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/c8c2aab8/solr/solr-ref-guide/src/feed.xml
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/feed.xml b/solr/solr-ref-guide/src/feed.xml
new file mode 100755
index 0000000..d9faae8
--- /dev/null
+++ b/solr/solr-ref-guide/src/feed.xml
@@ -0,0 +1,28 @@
+---
+search: exclude
+layout: none
+---
+
+<?xml version="1.0" encoding="UTF-8"?>
+<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
+    <channel>
+        <title>{{ site.site_title | xml_escape }}</title>
+        <description>{{ site.description | xml_escape }}</description>
+        <link>{{ site.url }}</link>
+        <atom:link href="{{ "/feed.xml" | prepend: site.url }}" rel="self" type="application/rss+xml"/>
+        <pubDate>{{ site.time | date_to_rfc822 }}</pubDate>
+        <lastBuildDate>{{ site.time | date_to_rfc822 }}</lastBuildDate>
+        <generator>Jekyll v{{ jekyll.version }}</generator>
+        {% for page in site.pages limit:10 %}
+        <item>
+            <title>{{ page.title | xml_escape }}</title>
+            <description>{{ page.content | xml_escape }}</description>
+            <link>{{ page.url | prepend: site.url }}</link>
+            <guid isPermaLink="true">{{ page.url | prepend: site.url }}</guid>
+            {% for tag in page.tags %}
+               <category>{{ tag | xml_escape }}</category>
+            {% endfor %}
+        </item>
+        {% endfor %}
+    </channel>
+</rss>