Posted to commits@lucene.apache.org by ct...@apache.org on 2017/05/12 14:05:17 UTC

[09/37] lucene-solr:branch_6x: squash merge jira/solr-10290 into master

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc b/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
new file mode 100644
index 0000000..4158bb0
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-jdbc-python-jython.adoc
@@ -0,0 +1,126 @@
+= Solr JDBC - Python/Jython
+:page-shortname: solr-jdbc-python-jython
+:page-permalink: solr-jdbc-python-jython.html
+
+Solr's JDBC driver supports Python and Jython.
+
+== Python
+
+Python supports accessing JDBC using the https://pypi.python.org/pypi/JayDeBeApi/[JayDeBeApi] library. The CLASSPATH variable must be configured to contain the solr-solrj jar and the supporting solrj-lib jars.
+
+
+=== JayDeBeApi
+
+.run.sh
+[source,bash]
+----
+#!/usr/bin/env bash
+# Java 8 must already be installed
+
+pip install JayDeBeApi
+
+export CLASSPATH="$(echo $(ls /opt/solr/dist/solr-solrj* /opt/solr/dist/solrj-lib/*) | tr ' ' ':')"
+
+python solr_jaydebeapi.py
+----
+
+.solr_jaydebeapi.py
+[source,python]
+----
+#!/usr/bin/env python
+
+# https://pypi.python.org/pypi/JayDeBeApi/
+
+import jaydebeapi
+import sys
+if __name__ == '__main__':
+  jdbc_url = "jdbc:solr://localhost:9983?collection=test"
+  driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
+  statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
+
+  conn = jaydebeapi.connect(driverName, jdbc_url)
+  curs = conn.cursor()
+  curs.execute(statement)
+  print(curs.fetchall())
+
+  conn.close()
+
+  sys.exit(0)
+----
+
+== Jython
+
+Jython supports accessing JDBC natively with Java interfaces or with the zxJDBC library. The CLASSPATH variable must be configured to contain the solr-solrj jar and the supporting solrj-lib jars.
+
+.run.sh
+[source,bash]
+----
+#!/usr/bin/env bash
+# Java 8 and Jython must already be installed
+
+export CLASSPATH="$(echo $(ls /opt/solr/dist/solr-solrj* /opt/solr/dist/solrj-lib/*) | tr ' ' ':')"
+
+jython [solr_java_native.py | solr_zxjdbc.py]
+----
+
+=== Java Native
+
+.solr_java_native.py
+[source,python]
+----
+#!/usr/bin/env jython
+
+# http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html
+# https://wiki.python.org/jython/DatabaseExamples#SQLite_using_JDBC
+
+import sys
+
+from java.lang import Class
+from java.sql  import DriverManager, SQLException
+
+if __name__ == '__main__':
+  jdbc_url = "jdbc:solr://localhost:9983?collection=test"
+  driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
+  statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
+
+  dbConn = DriverManager.getConnection(jdbc_url)
+  stmt = dbConn.createStatement()
+
+  resultSet = stmt.executeQuery(statement)
+  while resultSet.next():
+    print(resultSet.getString("fielda"))
+
+  resultSet.close()
+  stmt.close()
+  dbConn.close()
+
+  sys.exit(0)
+----
+
+=== zxJDBC
+
+.solr_zxjdbc.py
+[source,python]
+----
+#!/usr/bin/env jython
+
+# http://www.jython.org/jythonbook/en/1.0/DatabasesAndJython.html
+# https://wiki.python.org/jython/DatabaseExamples#SQLite_using_ziclix
+
+import sys
+
+from com.ziclix.python.sql import zxJDBC
+
+if __name__ == '__main__':
+  jdbc_url = "jdbc:solr://localhost:9983?collection=test"
+  driverName = "org.apache.solr.client.solrj.io.sql.DriverImpl"
+  statement = "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10"
+
+  with zxJDBC.connect(jdbc_url, None, None, driverName) as conn:
+    with conn:
+      with conn.cursor() as c:
+        c.execute(statement)
+        print(c.fetchall())
+
+  sys.exit(0)
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solr-jdbc-r.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-r.adoc b/solr/solr-ref-guide/src/solr-jdbc-r.adoc
new file mode 100644
index 0000000..3dedbcc
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-jdbc-r.adoc
@@ -0,0 +1,37 @@
+= Solr JDBC - R
+:page-shortname: solr-jdbc-r
+:page-permalink: solr-jdbc-r.html
+
+R supports accessing JDBC using the https://www.rforge.net/RJDBC/[RJDBC] library.
+
+== RJDBC
+
+.run.sh
+[source,bash]
+----
+#!/usr/bin/env bash
+
+# Java 8 must already be installed and R configured with `R CMD javareconf`
+
+Rscript -e 'install.packages("RJDBC", dep=TRUE)'
+Rscript solr_rjdbc.R
+----
+
+.solr_rjdbc.R
+[source,r]
+----
+# https://www.rforge.net/RJDBC/
+
+library("RJDBC")
+
+solrCP <- c(list.files('/opt/solr/dist/solrj-lib', full.names=TRUE), list.files('/opt/solr/dist', pattern='solrj', full.names=TRUE, recursive = TRUE))
+
+drv <- JDBC("org.apache.solr.client.solrj.io.sql.DriverImpl",
+           solrCP,
+           identifier.quote="`")
+conn <- dbConnect(drv, "jdbc:solr://localhost:9983?collection=test", "user", "pwd")
+
+dbGetQuery(conn, "select fielda, fieldb, fieldc, fieldd_s, fielde_i from test limit 10")
+
+dbDisconnect(conn)
+----

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc b/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
new file mode 100644
index 0000000..bac4cbd
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-jdbc-squirrel-sql.adoc
@@ -0,0 +1,80 @@
+= Solr JDBC - SQuirreL SQL
+:page-shortname: solr-jdbc-squirrel-sql
+:page-permalink: solr-jdbc-squirrel-sql.html
+
+For http://squirrel-sql.sourceforge.net[SQuirreL SQL], you will need to create a new driver for Solr. This will add several SolrJ client .jars to the SQuirreL SQL classpath. The files required are:
+
+* all .jars found in `$SOLR_HOME/dist/solrj-lib`
+* the SolrJ .jar found at `$SOLR_HOME/dist/solr-solrj-<version>.jar`
+
+Once the driver has been created, you can create a connection to Solr with the connection string format outlined in the generic section and use the editor to issue queries.
+
+== Add Solr JDBC Driver
+
+=== Open Drivers
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_1.png[image,width=900,height=400]
+
+
+=== Add Driver
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_2.png[image,width=892,height=400]
+
+
+=== Name the Driver
+
+Provide a name for the driver, and provide the URL format: `jdbc:solr://<zk_connection_string>/?collection=<collection>`. Do not fill in values for the variables "```zk_connection_string```" and "```collection```", those will be defined later when the connection to Solr is configured.
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_3.png[image,width=467,height=400]
+
+
+=== Add Solr JDBC jars to Classpath
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_4.png[image,width=467,height=400]
+
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_9.png[image,width=469,height=400]
+
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_5.png[image,width=469,height=400]
+
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_7.png[image,width=467,height=400]
+
+
+=== Add the Solr JDBC driver class name
+
+After adding the .jars, you will need to additionally define the Class Name `org.apache.solr.client.solrj.io.sql.DriverImpl`.
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_11.png[image,width=470,height=400]
+
+
+== Create an Alias
+
+To define a JDBC connection, you must define an alias.
+
+=== Open Aliases
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_10.png[image,width=840,height=400]
+
+
+=== Add an Alias
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_12.png[image,width=959,height=400]
+
+
+=== Configure the Alias
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_14.png[image,width=470,height=400]
+
+
+=== Connect to the Alias
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_13.png[image,width=522,height=400]
+
+
+== Querying
+
+Once you've successfully connected to Solr, you can use the SQL interface to enter queries and work with data.
+
+image::images/solr-jdbc-squirrel-sql/squirrelsql_solrjdbc_15.png[image,width=655,height=400]

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solr-plugins.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-plugins.adoc b/solr/solr-ref-guide/src/solr-plugins.adoc
new file mode 100644
index 0000000..05b264a
--- /dev/null
+++ b/solr/solr-ref-guide/src/solr-plugins.adoc
@@ -0,0 +1,10 @@
+= Solr Plugins
+:page-shortname: solr-plugins
+:page-permalink: solr-plugins.html
+:page-children: adding-custom-plugins-in-solrcloud-mode
+
+Solr allows you to load custom code to perform a variety of tasks within Solr, from custom Request Handlers to process your searches, to custom Analyzers and Token Filters for your text fields. You can even load custom Field Types. These pieces of custom code are called plugins.
+
+Not everyone will need to create plugins for their Solr instances; what's provided is usually enough for most applications. However, if there's something that you need, you may want to review the Solr Wiki documentation on plugins at http://wiki.apache.org/solr/SolrPlugins[SolrPlugins].
+
+If you have a plugin you would like to use, and you are running in SolrCloud mode, you can use the Blob Store API and the Config API to load the jars to Solr. The commands to use are described in the section <<adding-custom-plugins-in-solrcloud-mode.adoc#adding-custom-plugins-in-solrcloud-mode,Adding Custom Plugins in SolrCloud Mode>>.
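As a rough sketch of those two steps (illustrative only, not from this guide): the jar name `myplugin`, the collection name `mycollection`, and `localhost:8983` are hypothetical placeholders, and the requests are constructed but not sent.

```python
import json
from urllib import request

SOLR = "http://localhost:8983/solr"  # hypothetical Solr base URL

def blob_upload_request(blob_name, jar_bytes):
    # Blob Store API: POST the jar's bytes to the .system collection.
    return request.Request(
        "%s/.system/blob/%s" % (SOLR, blob_name),
        data=jar_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )

def add_runtimelib_request(collection, blob_name, version):
    # Config API: register the uploaded blob as a runtime lib for the collection.
    payload = json.dumps({"add-runtimelib": {"name": blob_name, "version": version}})
    return request.Request(
        "%s/%s/config" % (SOLR, collection),
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = add_runtimelib_request("mycollection", "myplugin", 1)
print(req.get_full_url())  # request.urlopen(req) would send it to a running Solr
```

The authoritative request formats are in the Adding Custom Plugins in SolrCloud Mode section linked above.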

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solr-sunOnly-small.png
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solr-sunOnly-small.png b/solr/solr-ref-guide/src/solr-sunOnly-small.png
new file mode 100644
index 0000000..366f1c8
Binary files /dev/null and b/solr/solr-ref-guide/src/solr-sunOnly-small.png differ

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solrcloud-configuration-and-parameters.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-configuration-and-parameters.adoc b/solr/solr-ref-guide/src/solrcloud-configuration-and-parameters.adoc
new file mode 100644
index 0000000..c4f3c55
--- /dev/null
+++ b/solr/solr-ref-guide/src/solrcloud-configuration-and-parameters.adoc
@@ -0,0 +1,18 @@
+= SolrCloud Configuration and Parameters
+:page-shortname: solrcloud-configuration-and-parameters
+:page-permalink: solrcloud-configuration-and-parameters.html
+:page-children: setting-up-an-external-zookeeper-ensemble, using-zookeeper-to-manage-configuration-files, zookeeper-access-control, collections-api, parameter-reference, command-line-utilities, solrcloud-with-legacy-configuration-files, configsets-api
+
+In this section, we'll cover the various configuration options for SolrCloud.
+
+The following sections cover these topics:
+
+* <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>>
+* <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,Using ZooKeeper to Manage Configuration Files>>
+* <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>
+* <<collections-api.adoc#collections-api,Collections API>>
+
+* <<parameter-reference.adoc#parameter-reference,Parameter Reference>>
+* <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>>
+* <<solrcloud-with-legacy-configuration-files.adoc#solrcloud-with-legacy-configuration-files,SolrCloud with Legacy Configuration Files>>
+* <<configsets-api.adoc#configsets-api,ConfigSets API>>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
new file mode 100644
index 0000000..6f79c6a
--- /dev/null
+++ b/solr/solr-ref-guide/src/solrcloud-with-legacy-configuration-files.adoc
@@ -0,0 +1,55 @@
+= SolrCloud with Legacy Configuration Files
+:page-shortname: solrcloud-with-legacy-configuration-files
+:page-permalink: solrcloud-with-legacy-configuration-files.html
+
+If you are migrating from a non-SolrCloud environment to SolrCloud, this information may be helpful.
+
+All of the required configuration is already set up in the sample configurations shipped with Solr. You only need to add the following if you are migrating old configuration files. Do not remove these files and parameters from a new Solr instance if you intend to use Solr in SolrCloud mode.
+
+These properties exist in 3 files: `schema.xml`, `solrconfig.xml`, and `solr.xml`.
+
+. In `schema.xml`, you must have a `\_version_` field defined:
++
+[source,xml]
+----
+<field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
+----
++
+. In `solrconfig.xml`, you must have an `updateLog` defined. This should be defined in the `updateHandler` section.
++
+[source,xml]
+----
+<updateHandler>
+  ...
+  <updateLog>
+    <str name="dir">${solr.data.dir:}</str>
+  </updateLog>
+  ...
+</updateHandler>
+----
++
+. The http://wiki.apache.org/solr/UpdateRequestProcessor#Distributed_Updates[DistributedUpdateProcessor] is part of the default update chain and is automatically injected into any of your custom update chains, so you don't actually need to make any changes for this capability. However, should you wish to add it explicitly, you can still add it to the `solrconfig.xml` file as part of an `updateRequestProcessorChain`. For example:
++
+[source,xml]
+----
+<updateRequestProcessorChain name="sample">
+  <processor class="solr.LogUpdateProcessorFactory" />
+  <processor class="solr.DistributedUpdateProcessorFactory"/>
+  <processor class="my.package.UpdateFactory"/>
+  <processor class="solr.RunUpdateProcessorFactory" />
+</updateRequestProcessorChain>
+----
++
+If you do not want the DistributedUpdateProcessorFactory auto-injected into your chain (for example, if you want to use SolrCloud functionality, but you want to distribute updates yourself) then specify the `NoOpDistributingUpdateProcessorFactory` update processor factory in your chain:
++
+[source,xml]
+----
+<updateRequestProcessorChain name="sample">
+  <processor class="solr.LogUpdateProcessorFactory" />
+  <processor class="solr.NoOpDistributingUpdateProcessorFactory"/>
+  <processor class="my.package.MyDistributedUpdateFactory"/>
+  <processor class="solr.RunUpdateProcessorFactory" />
+</updateRequestProcessorChain>
+----
++
+During the update process, Solr skips update processors that have already been run on other nodes.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/solrcloud.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/solrcloud.adoc b/solr/solr-ref-guide/src/solrcloud.adoc
new file mode 100644
index 0000000..b095862
--- /dev/null
+++ b/solr/solr-ref-guide/src/solrcloud.adoc
@@ -0,0 +1,31 @@
+= SolrCloud
+:page-shortname: solrcloud
+:page-permalink: solrcloud.html
+:page-children: getting-started-with-solrcloud, how-solrcloud-works, solrcloud-configuration-and-parameters, rule-based-replica-placement, cross-data-center-replication-cdcr
+
+Apache Solr includes the ability to set up a cluster of Solr servers that combines fault tolerance and high availability. Called *SolrCloud*, this mode provides distributed indexing and search, supporting the following features:
+
+* Central configuration for the entire cluster
+* Automatic load balancing and fail-over for queries
+* ZooKeeper integration for cluster coordination and configuration
+
+SolrCloud provides flexible distributed search and indexing without a master node to allocate nodes, shards, and replicas. Instead, Solr uses ZooKeeper to manage these locations, based on configuration files and schemas. Queries and updates can be sent to any server; Solr uses the information in the ZooKeeper database to figure out which servers need to handle the request.
+
+In this section, we'll cover everything you need to know about using Solr in SolrCloud mode. We've split up the details into the following topics:
+
+* <<getting-started-with-solrcloud.adoc#getting-started-with-solrcloud,Getting Started with SolrCloud>>
+* <<how-solrcloud-works.adoc#how-solrcloud-works,How SolrCloud Works>>
+** <<shards-and-indexing-data-in-solrcloud.adoc#shards-and-indexing-data-in-solrcloud,Shards and Indexing Data in SolrCloud>>
+** <<distributed-requests.adoc#distributed-requests,Distributed Requests>>
+** <<read-and-write-side-fault-tolerance.adoc#read-and-write-side-fault-tolerance,Read and Write Side Fault Tolerance>>
+* <<solrcloud-configuration-and-parameters.adoc#solrcloud-configuration-and-parameters,SolrCloud Configuration and Parameters>>
+** <<setting-up-an-external-zookeeper-ensemble.adoc#setting-up-an-external-zookeeper-ensemble,Setting Up an External ZooKeeper Ensemble>>
+** <<using-zookeeper-to-manage-configuration-files.adoc#using-zookeeper-to-manage-configuration-files,Using ZooKeeper to Manage Configuration Files>>
+** <<zookeeper-access-control.adoc#zookeeper-access-control,ZooKeeper Access Control>>
+** <<collections-api.adoc#collections-api,Collections API>>
+** <<parameter-reference.adoc#parameter-reference,Parameter Reference>>
+** <<command-line-utilities.adoc#command-line-utilities,Command Line Utilities>>
+** <<solrcloud-with-legacy-configuration-files.adoc#solrcloud-with-legacy-configuration-files,SolrCloud with Legacy Configuration Files>>
+** <<configsets-api.adoc#configsets-api,ConfigSets API>>
+* <<rule-based-replica-placement.adoc#rule-based-replica-placement,Rule-based Replica Placement>>
+* <<cross-data-center-replication-cdcr.adoc#cross-data-center-replication-cdcr,Cross Data Center Replication (CDCR)>>

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
new file mode 100644
index 0000000..af1d0ba
--- /dev/null
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -0,0 +1,355 @@
+= Spatial Search
+:page-shortname: spatial-search
+:page-permalink: spatial-search.html
+
+Solr supports location data for use in spatial/geospatial searches.
+
+Using spatial search, you can:
+
+* Index points or other shapes
+* Filter search results by a bounding box or circle or by other shapes
+* Sort or boost scoring by distance between points, or relative area between rectangles
+* Generate a 2D grid of facet count numbers for heatmap generation or point-plotting.
+
+There are four main field types available for spatial search:
+
+* `LatLonPointSpatialField`
+* `LatLonType` (now deprecated) and its non-geodetic twin `PointType`
+* `SpatialRecursivePrefixTreeFieldType` (RPT for short), including `RptWithGeometrySpatialField`, a derivative
+* `BBoxField`
+
+`LatLonPointSpatialField` is the ideal field type for the most common use-cases for lat-lon point data. It replaces LatLonType, which still exists for backwards compatibility. RPT offers some more features for more advanced/custom use cases and options like polygons and heatmaps.
+
+`RptWithGeometrySpatialField` is for indexing and searching non-point data though it can do points too. It can't do sorting/boosting.
+
+`BBoxField` is for indexing bounding boxes, querying by a box, specifying a search predicate (Intersects, Within, Contains, Disjoint, Equals), and a relevancy sort/boost like overlapRatio or simply the area.
+
+Some esoteric details that are not in this guide can be found at http://wiki.apache.org/solr/SpatialSearch.
+
+[[SpatialSearch-LatLonPointSpatialField]]
+== LatLonPointSpatialField
+
+Here's how `LatLonPointSpatialField` (LLPSF) should usually be configured in the schema:
+
+[source,xml]
+<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>
+
+LLPSF supports toggling `indexed`, `stored`, `docValues`, and `multiValued`. LLPSF internally uses a 2-dimensional Lucene "Points" (BKD tree) index when "indexed" is enabled (the default). When "docValues" is enabled, a latitude and longitude pair is bit-interleaved into 64 bits and put into Lucene DocValues. The accuracy of the docValues data is about a centimeter.
+
+[[SpatialSearch-IndexingPoints]]
+== Indexing Points
+
+For indexing geodetic points (latitude and longitude), supply the value in "lat,lon" order (comma separated).
+
+For indexing non-geodetic points, the format depends on the field type: use `x y` (a space) for RPT, but `x,y` (a comma) for PointType.
+
+If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. However, these are much bulkier than the raw coordinates for such simple data. (Neither is supported by the deprecated LatLonType or PointType.)
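To make the format differences concrete, here is a small illustrative sketch (a hypothetical helper, not part of Solr or SolrJ) that renders the same point for each field type:

```python
def format_point(x, y, field_type):
    """Render a point the way the given Solr field type expects it."""
    if field_type in ("LatLonPointSpatialField", "LatLonType"):
        return "%s,%s" % (x, y)  # geodetic: "lat,lon", comma separated
    if field_type == "PointType":
        return "%s,%s" % (x, y)  # non-geodetic: "x,y", comma separated
    if field_type == "SpatialRecursivePrefixTreeFieldType":
        return "%s %s" % (x, y)  # non-geodetic RPT: "x y", space separated
    raise ValueError("unknown field type: " + field_type)

print(format_point(45.15, -93.85, "LatLonPointSpatialField"))           # 45.15,-93.85
print(format_point(45.15, -93.85, "SpatialRecursivePrefixTreeFieldType"))  # 45.15 -93.85
```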
+
+[[SpatialSearch-SearchingwithQueryParsers]]
+== Searching with Query Parsers
+
+There are two spatial Solr "query parsers" for geospatial search: `geofilt` and `bbox`. They take the following parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|d |the radial distance, usually in kilometers. (RPT & BBoxField can set other units via the setting `distanceUnits`)
+|pt |the center point using the format "lat,lon" if latitude & longitude. Otherwise, "x,y" for PointType or "x y" for RPT field types.
+|sfield |a spatial indexed field
+|score a|
+(Advanced option; not supported by LatLonType (deprecated) or PointType) If the query is used in a scoring context (e.g. as the main query in `q`), this _<<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameter>>_ determines what scores will be produced. Valid values are:
+
+* `none` - A fixed score of 1.0. (the default)
+* `kilometers` - distance in kilometers between the field value and the specified center point
+* `miles` - distance in miles between the field value and the specified center point
+* `degrees` - distance in degrees between the field value and the specified center point
+* `distance` - distance between the field value and the specified center point in the `distanceUnits` configured for this field
+* `recipDistance` - 1 / the distance
+
+[WARNING]
+====
+Don't use this for indexed non-point shapes (e.g. polygons); the results will be erroneous. And with RPT, this option is only recommended for multi-valued point data, as the implementation doesn't scale very well; for single-valued fields, you should instead use a separate non-RPT field purely for distance sorting.
+====
+
+When used with `BBoxField`, additional options are supported:
+
+* `overlapRatio` - The relative overlap between the indexed shape & query shape.
+* `area` - haversine based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
+* `area2D` - cartesian coordinates based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
+
+|filter |(Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
+|===
+
+[[SpatialSearch-geofilt]]
+=== `geofilt`
+
+The `geofilt` filter allows you to retrieve results based on the geospatial distance (AKA the "great circle distance") from a given point. Another way of looking at it is that it creates a circular shape filter. For example, to find all documents within five kilometers of a given lat/lon point, you could enter `&q=*:*&fq={!geofilt sfield=store}&pt=45.15,-93.85&d=5`. This filter returns all results within a circle of the given radius around the initial point:
+
+image::images/spatial-search/circle.png[image]
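The "great circle distance" that `geofilt` computes can be sketched with the haversine formula (illustrative Python, not Solr's implementation; the 6371 km Earth radius is an approximation):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km (approximation)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A document at this point would pass {!geofilt sfield=store}&pt=45.15,-93.85&d=5
# only if its great-circle distance from the center is at most 5 km.
d = haversine_km(45.15, -93.85, 45.18, -93.88)
print(d <= 5.0)  # True
```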
+
+
+[[SpatialSearch-bbox]]
+=== `bbox`
+
+The `bbox` filter is very similar to `geofilt` except it uses the _bounding box_ of the calculated circle. See the blue box in the diagram below. It takes the same parameters as geofilt.
+
+Here's a sample query:
+
+`&q=*:*&fq={!bbox sfield=store}&pt=45.15,-93.85&d=5`
+
+The rectangular shape is faster to compute and so it's sometimes used as an alternative to `geofilt` when it's acceptable to return points outside of the radius. However, if the ideal goal is a circle but you want it to run faster, then instead consider using the RPT field and try a large `distErrPct` value like `0.1` (10% radius). This will return results outside the radius but it will do so somewhat uniformly around the shape.
+
+image::images/spatial-search/bbox.png[image]
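To see why `bbox` admits extra points, here is a sketch (illustrative, not Solr's code) of the lat/lon box around a geofilt circle; the box's corners lie farther than `d` from the center:

```python
import math

EARTH_RADIUS_KM = 6371.0  # approximation

def bounding_box(lat, lon, d_km):
    """Approximate lat/lon bounding box of a circle of radius d_km.

    Good enough away from the poles and the dateline; Solr handles
    those edge cases itself.
    """
    dlat = math.degrees(d_km / EARTH_RADIUS_KM)
    # Longitude degrees shrink with latitude, so the box is wider in degrees.
    dlon = math.degrees(d_km / (EARTH_RADIUS_KM * math.cos(math.radians(lat))))
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)

# The box around pt=45.15,-93.85 d=5, printed in range-query syntax:
lat1, lon1, lat2, lon2 = bounding_box(45.15, -93.85, 5)
print("store:[%.3f,%.3f TO %.3f,%.3f]" % (lat1, lon1, lat2, lon2))
```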
+
+
+[IMPORTANT]
+====
+When a bounding box includes a pole, the bounding box ends up being a "bounding bowl" (a _spherical cap_) that includes all values north of the lowest latitude of the circle if it touches the north pole (or south of the highest latitude if it touches the south pole).
+====
+
+[[SpatialSearch-Filteringbyanarbitraryrectangle]]
+=== Filtering by an Arbitrary Rectangle
+
+Sometimes the spatial search requirement calls for finding everything in a rectangular area, such as the area covered by a map the user is looking at. For this case, geofilt and bbox won't cut it. This is somewhat of a trick, but you can use Solr's range query syntax for this by supplying the lower-left corner as the start of the range and the upper-right corner as the end of the range.
+
+Here's an example:
+
+`&q=*:*&fq=store:[45,-94 TO 46,-93]`
+
+LatLonType (deprecated) does *not* support rectangles that cross the dateline. For RPT and BBoxField, if you are using non-geospatial coordinates (`geo="false"`) then you must quote the points due to the space, e.g. `"x y"`.
+
+
+[[SpatialSearch-Optimizing_CacheorNot]]
+=== Optimizing: Cache or Not
+
+It's most common to put a spatial query into an "fq" parameter – a filter query. By default, Solr will cache the query in the filter cache.
+
+If you know the filter query (be it spatial or not) is fairly unique and not likely to get a cache hit then specify `cache="false"` as a local-param as seen in the following example. The only spatial types which stand to benefit from this technique are LatLonPointSpatialField and LatLonType (deprecated). Enable docValues on the field (if it isn't already). LatLonType (deprecated) additionally requires a `cost="100"` (or more) local-param.
+
+`&q=...mykeywords...&fq=...someotherfilters...&fq={!geofilt cache=false}&sfield=store&pt=45.15,-93.85&d=5`
+
+LLPSF does not support Solr's "PostFilter".
+
+
+[[SpatialSearch-DistanceSortingorBoosting_FunctionQueries_]]
+== Distance Sorting or Boosting (Function Queries)
+
+There are four distance function queries:
+
+* `geodist`, see below, usually the most appropriate;
+* http://wiki.apache.org/solr/FunctionQuery#dist[`dist`], to calculate the p-norm distance between multi-dimensional vectors;
+* http://wiki.apache.org/solr/FunctionQuery#hsin.2C_ghhsin_-_Haversine_Formula[`hsin`], to calculate the distance between two points on a sphere;
+* https://wiki.apache.org/solr/FunctionQuery#sqedist_-_Squared_Euclidean_Distance[`sqedist`], to calculate the squared Euclidean distance between two points.
+
+For more information about these function queries, see the section on <<function-queries.adoc#function-queries,Function Queries>>.
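For intuition, `dist` and `sqedist` can be sketched in a few lines (illustrative Python; the real function queries operate on field values inside Solr):

```python
def dist(p, a, b):
    """p-norm distance between two equal-length vectors, like Solr's dist()."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def sqedist(a, b):
    """Squared Euclidean distance, like Solr's sqedist(): no sqrt, so cheaper
    when only relative ordering matters."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

print(dist(2, (1, 2), (4, 6)))  # Euclidean distance: 5.0
print(sqedist((1, 2), (4, 6)))  # 25
```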
+
+[[SpatialSearch-geodist]]
+=== `geodist`
+
+`geodist` is a distance function that takes three optional parameters: `(sfield,latitude,longitude)`. You can use the `geodist` function to sort results by distance or to return the distance as the score.
+
+For example, to sort your results by ascending distance, enter `...&q=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=50&sort=geodist() asc`.
+
+To return the distance as the document score, enter `...&q={!func}geodist()&sfield=store&pt=45.15,-93.85&sort=score+asc`.
+
+[[SpatialSearch-MoreExamples]]
+== More Examples
+
+Here are a few more useful examples of what you can do with spatial search in Solr.
+
+[[SpatialSearch-UseasaSub-QuerytoExpandSearchResults]]
+=== Use as a Sub-Query to Expand Search Results
+
+Here we will query for results in Jacksonville, Florida, or within 50 kilometers of 45.15,-93.85 (near Buffalo, Minnesota):
+
+`&q=*:*&fq=(state:"FL" AND city:"Jacksonville") OR {!geofilt}&sfield=store&pt=45.15,-93.85&d=50&sort=geodist()+asc`
+
+[[SpatialSearch-FacetbyDistance]]
+=== Facet by Distance
+
+To facet by distance, you can use the Frange query parser:
+
+`&q=*:*&sfield=store&pt=45.15,-93.85&facet.query={!frange l=0 u=5}geodist()&facet.query={!frange l=5.001 u=3000}geodist()`
+
+There are other ways to do it too, like using a \{!geofilt} in each facet.query.
+
+[[SpatialSearch-BoostNearestResults]]
+=== Boost Nearest Results
+
+Using the <<the-dismax-query-parser.adoc#the-dismax-query-parser,DisMax>> or <<the-extended-dismax-query-parser.adoc#the-extended-dismax-query-parser,Extended DisMax>>, you can combine spatial search with the boost function to boost the nearest results:
+
+`&q.alt=*:*&fq={!geofilt}&sfield=store&pt=45.15,-93.85&d=50&bf=recip(geodist(),2,200,20)&sort=score desc`
+
+[[SpatialSearch-RPT]]
+== RPT
+
+RPT refers to either `SpatialRecursivePrefixTreeFieldType` (aka simply RPT) or an extended version: `RptWithGeometrySpatialField` (aka RPT with Geometry). RPT offers several functional improvements over LatLonPointSpatialField:
+
+* Non-geodetic – geo=false general x & y (_not_ latitude and longitude)
+* Query by polygons and other complex shapes, in addition to circles & rectangles
+* Ability to index non-point shapes (e.g. polygons) as well as points – see RptWithGeometrySpatialField
+* Heatmap grid faceting
+
+RPT _shares_ various features in common with `LatLonPointSpatialField`. Some are listed here:
+
+* Latitude/Longitude indexed point data; possibly multi-valued
+* Fast filtering with `geofilt`, `bbox` filters, and range query syntax (dateline crossing is supported)
+* Sort/boost via `geodist`
+* Well-Known-Text (WKT) shape syntax (required for specifying polygons & other complex shapes), and GeoJSON too. In addition to indexing and searching, this works with the `wt=geojson` (GeoJSON Solr response-writer) and `[geo f=myfield]` (geo Solr document-transformer).
+
+[[SpatialSearch-Schemaconfiguration]]
+=== Schema Configuration
+
+To use RPT, the field type must be registered and configured in `schema.xml`. There are many options for this field type.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Setting |Description
+|name |The name of the field type.
+|class |This should be `solr.SpatialRecursivePrefixTreeFieldType`. But be aware that the Lucene spatial module includes some other so-called "spatial strategies" other than RPT, notably TermQueryPT*, BBox, PointVector*, and SerializedDV. Solr requires a field type to parallel these in order to use them. The asterisked ones have them.
+|spatialContextFactory |This is a Java class name to an internal extension point governing support for shape definitions & parsing. If you require polygon support, set this to `JTS` – an alias for `org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory`; otherwise it can be omitted. See important info below about JTS. (note: prior to Solr 6, the "org.locationtech.spatial4j" part was "com.spatial4j.core" and there used to be no convenience JTS alias)
+|geo |If **true**, the default, latitude and longitude coordinates will be used and the mathematical model will generally be a sphere. If false, the coordinates will be generic X & Y on a 2D plane using Euclidean/Cartesian geometry.
+|format |Defines the shape syntax/format to be used. Defaults to `WKT` but `GeoJSON` is another popular format. Spatial4j governs this feature and supports https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/io/package-frame.html[other formats]. If a given shape is parseable as "lat,lon" or "x y" then that is always supported.
+|distanceUnits a|
+This is used to specify the units for distance measurements used throughout the use of this field. This can be `degrees`, `kilometers` or `miles`. It is applied to nearly all distance measurements involving the field: `maxDistErr`, `distErr`, `d`, `geodist`, and the `score` when score is `distance`, `area`, or `area2d`. However, it doesn't affect distances embedded in WKT strings (e.g., "`BUFFER(POINT(200 10),0.2)`"), which are still in degrees.
+
+`distanceUnits` defaults to either "```kilometers```" if `geo` is "```true```", or "```degrees```" if `geo` is "```false```".
+
+`distanceUnits` replaces the `units` attribute, which is now deprecated and mutually exclusive with this attribute.
+
+|distErrPct |Defines the default precision of non-point shapes (both index & query), as a fraction between 0.0 (fully precise) and 0.5. The closer this number is to zero, the more accurate the shape will be. However, more precise indexed shapes use more disk space and take longer to index. Bigger distErrPct values will make queries faster but less accurate. At query time this can be overridden in the query syntax, such as to 0.0 so as to not approximate the search shape. The default for the RPT field is 0.025. Note: For `RptWithGeometrySpatialField` (see below), there's always complete accuracy with the serialized geometry, so this doesn't control accuracy so much as the trade-off of how big the index should be. distErrPct defaults to 0.15 for that field.
+|maxDistErr |Defines the highest level of detail required for indexed data. If left blank, the default is one meter – just a bit less than 0.000009 degrees. This setting is used internally to compute an appropriate maxLevels (see below).
+|worldBounds |Defines the valid numerical ranges for x and y, in the format of `ENVELOPE(minX, maxX, maxY, minY)`. If `geo="true"`, the standard lat-lon world boundaries are assumed. If `geo=false`, you should define your boundaries.
+|distCalculator |Defines the distance calculation algorithm. If `geo=true`, "haversine" is the default. If `geo=false`, "cartesian" will be the default. Other possible values are "lawOfCosines", "vincentySphere" and "cartesian^2".
+|prefixTree |Defines the spatial grid implementation. Since a PrefixTree (such as RecursivePrefixTree) maps the world as a grid, each grid cell is decomposed to another set of grid cells at the next level. If `geo=true` then the default prefix tree is "```geohash```", otherwise it's "```quad```". Geohash has 32 children at each level, quad has 4. Geohash can only be used for `geo=true` as it's strictly geospatial. A third choice is "```packedQuad```", which is generally more efficient than plain "quad", provided there are many levels -- perhaps 20 or more.
+|maxLevels |Sets the maximum grid depth for indexed data. It's usually more intuitive to let an appropriate maxLevels be computed by specifying `maxDistErr` instead.
+|===
+
+*_And there are others:_* `normWrapLongitude`, `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see the notes below about `spatialContextFactory` implementations, especially the link to the JTS-based one.
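+
+Pulling some of these options together, a minimal field type registration might look like the following sketch (the name and option values are illustrative, not required):
+
+[source,xml]
+----
+<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
+               geo="true" distanceUnits="kilometers" maxDistErr="0.001" distErrPct="0.025" />
+----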
+
+[[SpatialSearch-JTSandPolygons]]
+=== JTS and Polygons
+
+As indicated above, `spatialContextFactory` must be set to `JTS` for polygon support, including multi-polygon.
+
+All other shapes, including even line-strings, are supported without JTS. JTS stands for http://sourceforge.net/projects/jts-topo-suite/[JTS Topology Suite], which does not come with Solr due to its LGPL license. You must download it (a JAR file) and put that in a special location internal to Solr: `SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/`. You can readily download it here: https://repo1.maven.org/maven2/com/vividsolutions/jts-core/. It will not work if placed in other more typical Solr lib directories, unfortunately.
+
+When activated, there are additional configuration attributes available; see https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html[org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory] for the Javadocs, and remember to look at the superclass's options as well. One option in particular you should most likely enable is `autoIndex` (i.e., use JTS's PreparedGeometry), as it's been shown to be a major performance boost for non-trivial polygons.
+
+[source,xml]
+----
+<fieldType name="location_rpt"   class="solr.SpatialRecursivePrefixTreeFieldType"
+               spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
+               autoIndex="true"
+               validationRule="repairBuffer0"
+               distErrPct="0.025"
+               maxDistErr="0.001"
+               distanceUnits="kilometers" />
+----
+
+Once the field type has been defined, define a field that uses it.
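+
+A hypothetical field definition using that type (the field name "geo" is arbitrary):
+
+[source,xml]
+----
+<field name="geo" type="location_rpt" multiValued="true" />
+----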
+
+Here's an example polygon query for a field "geo" that can be either `solr.SpatialRecursivePrefixTreeFieldType` or `RptWithGeometrySpatialField`:
+
+[source,plain]
+&q=*:*&fq={!field f=geo}Intersects(POLYGON((-10 30, -40 40, -10 -20, 40 20, 0 0, -10 30)))
+
+Inside the parentheses following the search predicate is the shape definition. The format of that shape is governed by the `format` attribute on the field type, defaulting to WKT. If you prefer GeoJSON, you can specify that instead.
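+
+For instance, assuming the field type were configured with `format="GeoJSON"`, the same polygon could roughly be written as follows (a sketch; note GeoJSON coordinates are in "x y" order, i.e., longitude first):
+
+[source,plain]
+&q=*:*&fq={!field f=geo}Intersects({"type":"Polygon","coordinates":[[[-10,30],[-40,40],[-10,-20],[40,20],[0,0],[-10,30]]]})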
+
+Beyond this Reference Guide and Spatial4j's docs, there are some details that remain at the Solr Wiki at http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4.
+
+[[SpatialSearch-RptWithGeometrySpatialField]]
+=== RptWithGeometrySpatialField
+
+The `RptWithGeometrySpatialField` field type is a derivative of `SpatialRecursivePrefixTreeFieldType` that also stores the original geometry internally in Lucene DocValues, which it uses to achieve accurate search. It can also be used for indexed point fields. The Intersects predicate (the default) is particularly fast, since many search results can be returned as an accurate hit without requiring a geometry check. This field type is configured just like RPT except that the default `distErrPct` is 0.15 (higher than 0.025) because the grid squares are purely for performance and not to fundamentally represent the shape.
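+
+A schema registration might look like this sketch (the attribute values are illustrative; `spatialContextFactory="JTS"` is only needed if you also want polygon support):
+
+[source,xml]
+----
+<fieldType name="location_rpt" class="solr.RptWithGeometrySpatialField"
+               spatialContextFactory="JTS"
+               autoIndex="true"
+               distanceUnits="kilometers" />
+----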
+
+An optional in-memory cache can be defined in `solrconfig.xml`, which should be done when the data tends to have shapes with many vertices. Assuming you name your field "geom", you can configure an optional cache in solrconfig.xml by adding the following – notice the suffix of the cache name:
+
+[source,xml]
+----
+<cache name="perSegSpatialFieldCache_geom"
+           class="solr.LRUCache"
+           size="256"
+           initialSize="0"
+           autowarmCount="100%"
+           regenerator="solr.NoOpRegenerator"/>
+----
+
+When using this field type, you will likely _not_ want to mark the field as stored because it's redundant with the DocValues data and surely larger because of the formatting (be it WKT or GeoJSON). To retrieve the spatial data in search results from DocValues, use the `[geo]` transformer -- <<transforming-result-documents.adoc#transforming-result-documents,Transforming Result Documents>>.
+
+[[SpatialSearch-HeatmapFaceting]]
+=== Heatmap Faceting
+
+The RPT field supports generating a 2D grid of facet counts for documents having spatial data in each grid cell. For high-detail grids, this can be used to plot points, and for lesser detail it can be used for heatmap generation. The grid cells are determined at index-time based on RPT's configuration. At facet counting time, the indexed cells in the region of interest are traversed and a counter corresponding to each cell is incremented. Solr can return the data in a straightforward 2D array of integers or in a PNG, which compresses better for larger data sets but must be decoded.
+
+The heatmap feature is accessed from Solr's faceting feature. As a part of faceting, it supports the `key` local parameter as well as excluding tagged filter queries, just like other types of faceting do. This allows multiple heatmaps to be returned on the same field with different filters.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|facet |Set to `true` to enable faceting
+|facet.heatmap |The field name of type RPT
+|facet.heatmap.geom |The region to compute the heatmap on, specified using the rectangle-range syntax or WKT. It defaults to the world. For example: `["-180 -90" TO "180 90"]`
+|facet.heatmap.gridLevel |A specific grid level, which determines how big each grid cell is. Defaults to being computed via distErrPct (or distErr)
+|facet.heatmap.distErrPct |A fraction of the size of geom used to compute gridLevel. Defaults to 0.15. It's computed the same as a similarly named parameter for RPT.
+|facet.heatmap.distErr |A cell error distance used to pick the grid level indirectly. It's computed the same as a similarly named parameter for RPT.
+|facet.heatmap.format |The format, either `ints2D` (default) or `png`.
+|===
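+
+Putting these parameters together, a hypothetical heatmap request over a field named `geo` might look like:
+
+[source,plain]
+&q=*:*&facet=true&facet.heatmap=geo&facet.heatmap.geom=["-180 -90" TO "180 90"]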
+
+[TIP]
+====
+You'll need to experiment with different `distErrPct` values (probably 0.10 to 0.20) and various input geometries until the default size is what you're looking for. The specific details of how it's computed aren't important. For high-detail grids used in point-plotting (loosely one cell per pixel), set `distErr` to be the number of decimal-degrees of several pixels or so of the map being displayed. Also, you probably don't want to use a geohash-based grid because the cell orientation between grid levels flip-flops between square and rectangular. Quad is consistent and has more levels, albeit at the expense of a larger index.
+====
+
+Here's some sample output in JSON (with "..." inserted for brevity):
+
+[source,plain]
+----
+{gridLevel=6,columns=64,rows=64,minX=-180.0,maxX=180.0,minY=-90.0,maxY=90.0,
+counts_ints2D=[[0, 0, 2, 1, ....],[1, 1, 3, 2, ...],...]}
+----
+
+The output shows the `gridLevel`, which is interesting since it's often computed from the other parameters. If an interface being developed allows an explicit resolution increase/decrease feature, then subsequent requests can specify the gridLevel explicitly.
+
+The `minX`, `maxX`, `minY`, `maxY` values report the region where the counts are. This is the minimally enclosing bounding rectangle of the input `geom` at the target grid level. This may wrap the dateline. The `columns` and `rows` values are how many columns and rows the output rectangle is evenly divided into. Note: Don't divide an on-screen projected map rectangle evenly to plot these rectangles/points, since the cell data is in the coordinate space of decimal degrees if `geo=true`, or whatever units were given if `geo=false`. This could be arranged to be the same as an on-screen map but won't necessarily be.
+
+The `counts_ints2D` key has a 2D array of integers. The initial outer level is in row order (top-down), then the inner arrays are the columns (left-right). If any array would be all zeros, a null is returned instead for efficiency reasons. The entire value is null if there is no matching spatial data.
+
+If `format=png` then the output key is `counts_png`. It's a base-64 encoded string of a 4-byte PNG. The PNG logically holds exactly the same data that the ints2D format does. Note that the alpha channel byte is flipped to make it easier to view the PNG for diagnostic purposes, since otherwise counts would have to exceed 2^24 before it becomes non-opaque. Thus counts greater than this value will become opaque.
+
+[[SpatialSearch-BBoxField]]
+== BBoxField
+
+The `BBoxField` field type indexes a single rectangle (bounding box) per document field and supports searching via a bounding box. It supports most spatial search predicates, and it has enhanced relevancy modes based on the overlap or area between the search rectangle and the indexed rectangle. It's particularly useful for its relevancy modes. To configure it in the schema, use a configuration like this:
+
+[source,xml]
+----
+<field name="bbox" type="bbox" />
+<fieldType name="bbox" class="solr.BBoxField"
+        geo="true" units="kilometers" numberType="_bbox_coord" storeSubFields="false"/>
+<fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
+----
+
+BBoxField is actually based on 4 instances of another field type referred to by `numberType`. It also uses a boolean to flag a dateline cross. Assuming you want to use the relevancy feature, `docValues` is required. Some of the attributes are in common with the RPT field, like `geo`, `units`, `worldBounds`, and `spatialContextFactory`, because they share some of the same spatial infrastructure.
+
+To index a box, add a field value to a bbox field that's a string in the WKT/CQL ENVELOPE syntax. Example: `ENVELOPE(-10, 20, 15, 10)`, which is in minX, maxX, maxY, minY order. The parameter ordering is unintuitive, but that's what the spec calls for. Alternatively, you could provide a rectangular polygon in WKT (or GeoJSON if you set `format="GeoJSON"`).
+
+To search, you can use the `{!bbox}` query parser, or the range syntax e.g. `[10,-10 TO 15,20]`, or the ENVELOPE syntax wrapped in parentheses with a leading search predicate. The latter is the only way to choose a predicate other than Intersects. For example:
+
+[source,plain]
+&q={!field f=bbox}Contains(ENVELOPE(-10, 20, 15, 10))
+
+
+Now to sort the results by one of the relevancy modes, use it like this:
+
+[source,plain]
+&q={!field f=bbox score=overlapRatio}Intersects(ENVELOPE(-10, 20, 15, 10))
+
+
+The `score` local parameter can be one of `overlapRatio`, `area`, and `area2D`. `area` scores by the document area using surface-of-a-sphere (assuming `geo=true`) math, while `area2D` uses simple width * height. `overlapRatio` computes a [0-1] ranged score based on how much overlap exists relative to the document's area and the query area. The javadocs of {lucene-javadocs}/spatial-extras/org/apache/lucene/spatial/bbox/BBoxOverlapRatioValueSource.html[BBoxOverlapRatioValueSource] have more info on the formula. There is an additional parameter `queryTargetProportion` that allows you to weight the query side of the formula relative to the index (target) side. You can also use `&debug=results` to see useful score computation info.
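+
+For instance, to weight the query side of the formula more heavily (the 0.25 value here is purely illustrative):
+
+[source,plain]
+&q={!field f=bbox score=overlapRatio queryTargetProportion=0.25}Intersects(ENVELOPE(-10, 20, 15, 10))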

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/spell-checking.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spell-checking.adoc b/solr/solr-ref-guide/src/spell-checking.adoc
new file mode 100644
index 0000000..770ea44
--- /dev/null
+++ b/solr/solr-ref-guide/src/spell-checking.adoc
@@ -0,0 +1,374 @@
+= Spell Checking
+:page-shortname: spell-checking
+:page-permalink: spell-checking.html
+
+The SpellCheck component is designed to provide inline query suggestions based on other, similar terms.
+
+The basis for these suggestions can be terms in a field in Solr, externally created text files, or fields in other Lucene indexes.
+
+[[SpellChecking-ConfiguringtheSpellCheckComponent]]
+== Configuring the SpellCheckComponent
+
+[[SpellChecking-DefineSpellCheckinsolrconfig.xml]]
+=== Define Spell Check in `solrconfig.xml`
+
+The first step is to specify the source of terms in `solrconfig.xml`. There are three approaches to spell checking in Solr, discussed below.
+
+[[SpellChecking-IndexBasedSpellChecker]]
+==== IndexBasedSpellChecker
+
+The `IndexBasedSpellChecker` uses a Solr index as the basis for a parallel index used for spell checking. It requires defining a field as the basis for the index terms; a common practice is to copy terms from some fields (such as `title`, `body`, etc.) to another field created for spell checking. Here is a simple example of configuring `solrconfig.xml` with the `IndexBasedSpellChecker`:
+
+[source,xml]
+----
+<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+  <lst name="spellchecker">
+    <str name="classname">solr.IndexBasedSpellChecker</str>
+    <str name="spellcheckIndexDir">./spellchecker</str>
+    <str name="field">content</str>
+    <str name="buildOnCommit">true</str>
+    <!-- optional elements with defaults
+    <str name="distanceMeasure">org.apache.lucene.search.spell.LevensteinDistance</str>
+    <str name="accuracy">0.5</str>
+    -->
+ </lst>
+</searchComponent>
+----
+
+The first element defines the `searchComponent` to use the `solr.SpellCheckComponent`. The `classname` is the specific implementation of the SpellCheckComponent, in this case `solr.IndexBasedSpellChecker`. Defining the `classname` is optional; if not defined, it will default to `IndexBasedSpellChecker`.
+
+The `spellcheckIndexDir` defines the location of the directory that holds the spellcheck index, while the `field` defines the source field (defined in the Schema) for spell check terms. When choosing a field for the spellcheck index, it's best to avoid a heavily processed field to get more accurate results. If the field has many word variations from processing synonyms and/or stemming, the dictionary will be created with those variations in addition to more valid spelling data.
+
+Finally, `buildOnCommit` defines whether to build the spell check index at every commit (that is, every time new documents are added to the index). It is optional, and can be omitted if you would rather set it to `false`.
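+
+To populate a dedicated spellcheck source field like `content` above, a `copyField` directive in the schema is a common approach (the source field names here are illustrative):
+
+[source,xml]
+----
+<copyField source="title" dest="content"/>
+<copyField source="body" dest="content"/>
+----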
+
+[[SpellChecking-DirectSolrSpellChecker]]
+==== DirectSolrSpellChecker
+
+The `DirectSolrSpellChecker` uses terms from the Solr index without building a parallel index like the `IndexBasedSpellChecker`. This spell checker has the benefit of not having to be built regularly, meaning that the terms are always up-to-date with the terms in the index. Here is how this might be configured in `solrconfig.xml`:
+
+[source,xml]
+----
+<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+  <lst name="spellchecker">
+    <str name="name">default</str>
+    <str name="field">name</str>
+    <str name="classname">solr.DirectSolrSpellChecker</str>
+    <str name="distanceMeasure">internal</str>
+    <float name="accuracy">0.5</float>
+    <int name="maxEdits">2</int>
+    <int name="minPrefix">1</int>
+    <int name="maxInspections">5</int>
+    <int name="minQueryLength">4</int>
+    <float name="maxQueryFrequency">0.01</float>
+    <float name="thresholdTokenFrequency">.01</float>
+  </lst>
+</searchComponent>
+----
+
+When choosing a `field` to query for this spell checker, you want one which has relatively little analysis performed on it (particularly analysis such as stemming). Note that you need to specify a field to use for the suggestions, so like the `IndexBasedSpellChecker`, you may want to copy data from fields like `title`, `body`, etc., to a field dedicated to providing spelling suggestions.
+
+Many of the parameters relate to how this spell checker should query the index for term suggestions. The `distanceMeasure` defines the metric to use during the spell check query. The value "internal" uses the default Levenshtein metric, which is the same metric used with the other spell checker implementations.
+
+Because this spell checker is querying the main index, you may want to limit how often it queries the index to be sure to avoid any performance conflicts with user queries. The `accuracy` setting defines the threshold for a valid suggestion, while `maxEdits` defines the number of changes to the term to allow. Since most spelling mistakes are only 1 letter off, setting this to 1 will reduce the number of possible suggestions (the default, however, is 2); the value can only be 1 or 2. `minPrefix` defines the minimum number of characters the terms should share. Setting this to 1 means that the spelling suggestions will all start with the same letter, for example.
+
+The `maxInspections` parameter defines the maximum number of possible matches to review before returning results; the default is 5. `minQueryLength` defines how many characters must be in the query before suggestions are provided; the default is 4.
+
+The spellchecker first analyzes incoming query words by looking them up in the index. Only query words that are absent from the index, or rarer than `maxQueryFrequency`, are considered misspelled and used for finding suggestions. Words more frequent than `maxQueryFrequency` bypass the spellchecker unchanged. After suggestions for every misspelled word are found, they are filtered by frequency, with `thresholdTokenFrequency` as the boundary value. These parameters (`maxQueryFrequency` and `thresholdTokenFrequency`) can be a percentage (such as .01, or 1%) or an absolute value (such as 4).
+
+[[SpellChecking-FileBasedSpellChecker]]
+==== FileBasedSpellChecker
+
+The `FileBasedSpellChecker` uses an external file as a spelling dictionary. This can be useful if using Solr as a spelling server, or if spelling suggestions don't need to be based on actual terms in the index. In `solrconfig.xml`, you would define the searchComponent like so:
+
+[source,xml]
+----
+<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+  <lst name="spellchecker">
+    <str name="classname">solr.FileBasedSpellChecker</str>
+    <str name="name">file</str>
+    <str name="sourceLocation">spellings.txt</str>
+    <str name="characterEncoding">UTF-8</str>
+    <str name="spellcheckIndexDir">./spellcheckerFile</str>
+    <!-- optional elements with defaults
+    <str name="distanceMeasure">org.apache.lucene.search.spell.LevensteinDistance</str>
+    <str name="accuracy">0.5</str>
+    -->
+ </lst>
+</searchComponent>
+----
+
+The differences here are the use of the `sourceLocation` to define the location of the file of terms and the use of `characterEncoding` to define the encoding of the terms file.
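+
+The dictionary file itself is plain text with one term per line; a hypothetical `spellings.txt` might contain:
+
+[source,plain]
+----
+pizza
+history
+junit
+----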
+
+[TIP]
+====
+In the previous example, _name_ is used to name this specific definition of the spellchecker. Multiple definitions can co-exist in a single `solrconfig.xml`, and the _name_ helps to differentiate them. If only defining one spellchecker, no name is required.
+====
+
+[[SpellChecking-WordBreakSolrSpellChecker]]
+==== WordBreakSolrSpellChecker
+
+`WordBreakSolrSpellChecker` offers suggestions by combining adjacent query terms and/or breaking terms into multiple words. It is a `SpellCheckComponent` enhancement, leveraging Lucene's `WordBreakSpellChecker`. It can detect spelling errors resulting from misplaced whitespace without the use of shingle-based dictionaries and provides collation support for word-break errors, including cases where the user has a mix of single-word spelling errors and word-break errors in the same query. It also provides shard support.
+
+Here is how it might be configured in `solrconfig.xml`:
+
+[source,xml]
+----
+<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+  <lst name="spellchecker">
+    <str name="name">wordbreak</str>
+    <str name="classname">solr.WordBreakSolrSpellChecker</str>
+    <str name="field">lowerfilt</str>
+    <str name="combineWords">true</str>
+    <str name="breakWords">true</str>
+    <int name="maxChanges">10</int>
+  </lst>
+</searchComponent>
+----
+
+Some of the parameters will be familiar from the discussion of the other spell checkers, such as `name`, `classname`, and `field`. New for this spell checker is `combineWords`, which defines whether words should be combined in a dictionary search (default is true); `breakWords`, which defines if words should be broken during a dictionary search (default is true); and `maxChanges`, an integer which defines how many times the spell checker should check collation possibilities against the index (default is 10).
+
+The spellchecker can be configured alongside a traditional checker (i.e., `DirectSolrSpellChecker`). The results are combined, and collations can contain a mix of corrections from both spellcheckers.
+
+[[SpellChecking-AddIttoaRequestHandler]]
+=== Add It to a Request Handler
+
+Queries will be sent to a <<query-syntax-and-parsing.adoc#query-syntax-and-parsing,RequestHandler>>. If every request should generate a suggestion, then you would add the following to the `requestHandler` that you are using:
+
+[source,xml]
+----
+<str name="spellcheck">true</str>
+----
+
+One of the possible parameters is the `spellcheck.dictionary` to use, and multiples can be defined. With multiple dictionaries, all specified dictionaries are consulted and results are interleaved. Collations are created with combinations from the different spellcheckers, with care taken that multiple overlapping corrections do not occur in the same collation.
+
+Here is an example with multiple dictionaries:
+
+[source,xml]
+----
+<requestHandler name="spellCheckWithWordbreak" class="org.apache.solr.handler.component.SearchHandler">
+  <lst name="defaults">
+    <str name="spellcheck.dictionary">default</str>
+    <str name="spellcheck.dictionary">wordbreak</str>
+    <str name="spellcheck.count">20</str>
+  </lst>
+  <arr name="last-components">
+    <str>spellcheck</str>
+  </arr>
+</requestHandler>
+----
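+
+With a handler like the one above, a request might hypothetically look like this (the misspelled query text is illustrative):
+
+[source,plain]
+&q=memroy leek&spellcheck=true&spellcheck.collate=true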
+
+[[SpellChecking-SpellCheckParameters]]
+== Spell Check Parameters
+
+The SpellCheck component accepts the parameters described in the table below.
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|<<SpellChecking-ThespellcheckParameter,spellcheck>> |Turns on or off SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
+|<<SpellChecking-Thespellcheck.qorqParameter,spellcheck.q or q>> |Selects the query to be spellchecked.
+|<<SpellChecking-Thespellcheck.buildParameter,spellcheck.build>> |Instructs Solr to build a dictionary for use in spellchecking.
+|<<SpellChecking-Thespellcheck.collateParameter,spellcheck.collate>> |Causes Solr to build a new query based on the best suggestion for each term in the submitted query.
+|<<SpellChecking-Thespellcheck.maxCollationsParameter,spellcheck.maxCollations>> |This parameter specifies the maximum number of collations to return.
+|<<SpellChecking-Thespellcheck.maxCollationTriesParameter,spellcheck.maxCollationTries>> |This parameter specifies the number of collation possibilities for Solr to try before giving up.
+|<<SpellChecking-Thespellcheck.maxCollationEvaluationsParameter,spellcheck.maxCollationEvaluations>> |This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index.
+|<<SpellChecking-Thespellcheck.collateExtendedResultsParameter,spellcheck.collateExtendedResults>> |If true, returns an expanded response detailing the collations found. If `spellcheck.collate` is false, this parameter will be ignored.
+|<<SpellChecking-Thespellcheck.collateMaxCollectDocsParameter,spellcheck.collateMaxCollectDocs>> |The maximum number of documents to collect when testing potential Collations
+|<<SpellChecking-Thespellcheck.collateParam._ParameterPrefix,spellcheck.collateParam.*>> |Specifies param=value pairs that can be used to override normal query params when validating collations
+|<<SpellChecking-Thespellcheck.countParameter,spellcheck.count>> |Specifies the maximum number of spelling suggestions to be returned.
+|<<SpellChecking-Thespellcheck.dictionaryParameter,spellcheck.dictionary>> |Specifies the dictionary that should be used for spellchecking.
+|<<SpellChecking-Thespellcheck.extendedResultsParameter,spellcheck.extendedResults>> |Causes Solr to return additional information about spellcheck results, such as the frequency of each original term in the index (origFreq) as well as the frequency of each suggestion in the index (frequency). Note that this result format differs from the non-extended one as the returned suggestion for a word is actually an array of lists, where each list holds the suggested term and its frequency.
+|<<SpellChecking-Thespellcheck.onlyMorePopularParameter,spellcheck.onlyMorePopular>> |Limits spellcheck responses to queries that are more popular than the original query.
+|<<SpellChecking-Thespellcheck.maxResultsForSuggestParameter,spellcheck.maxResultsForSuggest>> |The maximum number of hits the request can return in order to both generate spelling suggestions and set the "correctlySpelled" element to "false".
+|<<SpellChecking-Thespellcheck.alternativeTermCountParameter,spellcheck.alternativeTermCount>> |The count of suggestions to return for each query term existing in the index and/or dictionary.
+|<<SpellChecking-Thespellcheck.reloadParameter,spellcheck.reload>> |Reloads the spellchecker.
+|<<SpellChecking-Thespellcheck.accuracyParameter,spellcheck.accuracy>> |Specifies an accuracy value to help decide whether a result is worthwhile.
+|<<spellcheck_DICT_NAME,spellcheck.<DICT_NAME>.key>> |Specifies a key/value pair for the implementation handling a given dictionary.
+|===
+
+[[SpellChecking-ThespellcheckParameter]]
+=== The `spellcheck` Parameter
+
+This parameter turns on SpellCheck suggestions for the request. If *true*, then spelling suggestions will be generated.
+
+[[SpellChecking-Thespellcheck.qorqParameter]]
+=== The `spellcheck.q` or `q` Parameter
+
+This parameter specifies the query to spellcheck. If `spellcheck.q` is defined, then it is used; otherwise the original input query is used. The `spellcheck.q` parameter is intended to be the original query, minus any extra markup like field names, boosts, and so on. If the `q` parameter is specified, then the `SpellingQueryConverter` class is used to parse it into tokens; otherwise the <<tokenizers.adoc#Tokenizers-WhiteSpaceTokenizer,`WhitespaceTokenizer`>> is used. The choice of which one to use is up to the application. Essentially, if you have a spelling "ready" version in your application, then it is probably better to use `spellcheck.q`. Otherwise, if you just want Solr to do the job, use the `q` parameter.
+
+[NOTE]
+====
+The SpellingQueryConverter class does not deal properly with non-ASCII characters. In this case, you have to either use `spellcheck.q` or implement your own QueryConverter.
+====
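+
+For example, a hypothetical request whose `q` contains field markup might pass the clean query text separately:
+
+[source,plain]
+&q=title:(hopq AND faulknre)&spellcheck=true&spellcheck.q=hopq faulknre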
+
+[[SpellChecking-Thespellcheck.buildParameter]]
+=== The `spellcheck.build` Parameter
+
+If set to *true*, this parameter creates the dictionary that the SolrSpellChecker will use for spell-checking. In a typical search application, you will need to build the dictionary before using the SolrSpellChecker. However, it's not always necessary to build a dictionary first. For example, you can configure the spellchecker to use a dictionary that already exists.
+
+The dictionary will take some time to build, so this parameter should not be sent with every request.
+
+[[SpellChecking-Thespellcheck.reloadParameter]]
+=== The `spellcheck.reload` Parameter
+
+If set to true, this parameter reloads the spellchecker. The results depend on the implementation of `SolrSpellChecker.reload()`. In a typical implementation, reloading the spellchecker means reloading the dictionary.
+
+[[SpellChecking-Thespellcheck.countParameter]]
+=== The `spellcheck.count` Parameter
+
+This parameter specifies the maximum number of suggestions that the spellchecker should return for a term. If this parameter isn't set, the value defaults to 1. If the parameter is set but not assigned a number, the value defaults to 5. If the parameter is set to a positive integer, that number becomes the maximum number of suggestions returned by the spellchecker.
+
+[[SpellChecking-Thespellcheck.onlyMorePopularParameter]]
+=== The `spellcheck.onlyMorePopular` Parameter
+
+If *true*, Solr will return suggestions that result in more hits for the query than the existing query. Note that this will return more popular suggestions even when the given query term is present in the index and considered "correct".
+
+[[SpellChecking-Thespellcheck.maxResultsForSuggestParameter]]
+=== The `spellcheck.maxResultsForSuggest` Parameter
+
+This parameter sets the maximum number of hits a query may return while still triggering spelling suggestions. For example, if it is set to 5 and the user's query returns 5 or fewer results, the spellchecker will report "correctlySpelled=false" and also offer suggestions (and collations, if requested). Setting this greater than zero is useful for creating "did-you-mean?" suggestions for queries that return a low number of hits.
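
The threshold behavior can be modeled in a couple of lines. This is a simplified illustration of the documented rule, not Solr code:

```python
def should_offer_suggestions(num_hits, max_results_for_suggest):
    # Per the documented behavior: when the query returns
    # max_results_for_suggest hits or fewer, the spellchecker reports
    # correctlySpelled=false and offers suggestions.
    return num_hits <= max_results_for_suggest
```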
+
+[[SpellChecking-Thespellcheck.alternativeTermCountParameter]]
+=== The `spellcheck.alternativeTermCount` Parameter
+
+Specifies the number of suggestions to return for each query term that exists in the index and/or dictionary. Presumably, users will want fewer suggestions for words with a docFrequency greater than 0. Setting this value also turns on context-sensitive spell suggestions.
+
+[[SpellChecking-Thespellcheck.extendedResultsParameter]]
+=== The `spellcheck.extendedResults` Parameter
+
+This parameter causes Solr to include additional information about the suggestion, such as the frequency of each suggestion in the index.
+
+[[SpellChecking-Thespellcheck.collateParameter]]
+=== The `spellcheck.collate` Parameter
+
+If *true*, this parameter directs Solr to take the best suggestion for each token (if one exists) and construct a new query from the suggestions. For example, if the input query was "jawa class lording" and the best suggestion for "jawa" was "java" and "lording" was "loading", then the resulting collation would be "java class loading".
+
+The `spellcheck.collate` parameter only returns collations that are guaranteed to result in hits if re-queried, even when applying original `fq` parameters. This is especially helpful when there is more than one correction per query.
+
+NOTE: This only returns a query to be used. It does not actually run the suggested query.
+
+[[SpellChecking-Thespellcheck.maxCollationsParameter]]
+=== The `spellcheck.maxCollations` Parameter
+
+The maximum number of collations to return. The default is *1*. This parameter is ignored if `spellcheck.collate` is false.
+
+[[SpellChecking-Thespellcheck.maxCollationTriesParameter]]
+=== The `spellcheck.maxCollationTries` Parameter
+
+This parameter specifies the number of collation possibilities for Solr to try before giving up. Lower values ensure better performance. Higher values may be necessary to find a collation that can return results. The default value is `0`, which maintains backwards-compatible (Solr 1.4) behavior (do not check collations). This parameter is ignored if `spellcheck.collate` is false.
+
+[[SpellChecking-Thespellcheck.maxCollationEvaluationsParameter]]
+=== The `spellcheck.maxCollationEvaluations` Parameter
+
+This parameter specifies the maximum number of word correction combinations to rank and evaluate prior to deciding which collation candidates to test against the index. This is a performance safety-net in case a user enters a query with many misspelled words. The default is *10,000* combinations, which should work well in most situations.
+
+[[SpellChecking-Thespellcheck.collateExtendedResultsParameter]]
+=== The `spellcheck.collateExtendedResults` Parameter
+
+If *true*, this parameter returns an expanded response format detailing the collations Solr found. The default value is *false* and this is ignored if `spellcheck.collate` is false.
+
+[[SpellChecking-Thespellcheck.collateMaxCollectDocsParameter]]
+=== The `spellcheck.collateMaxCollectDocs` Parameter
+
+This parameter specifies the maximum number of documents that should be collected when testing potential collations against the index. A value of *0* indicates that all documents should be collected, resulting in exact hit-counts. Otherwise an estimation is provided as a performance optimization in cases where exact hit-counts are unnecessary – the higher the value specified, the more precise the estimation.
+
+The default value for this parameter is *0*, but when `spellcheck.collateExtendedResults` is *false*, the optimization is always used as if a *1* had been specified.
+
+
+[[SpellChecking-Thespellcheck.collateParam._ParameterPrefix]]
+=== The `spellcheck.collateParam.*` Parameter Prefix
+
+This parameter prefix can be used to specify any additional parameters that you wish the spellchecker to use when internally validating collation queries. For example, even if your regular search results allow loose matching of one or more query terms via parameters like `q.op=OR` and `mm=20%`, you can specify override parameters such as `spellcheck.collateParam.q.op=AND&spellcheck.collateParam.mm=100%` to require that returned collations consist only of words that all appear together in at least one document.
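
When sending override parameters in a URL, remember that characters like `%` must themselves be URL-encoded. A small Python sketch (the query and parameter values are illustrative assumptions):

```python
from urllib.parse import urlencode

params = {
    "q": "jawa class lording",
    "q.op": "OR",                          # loose matching for the main search
    "mm": "20%",
    "spellcheck": "true",
    "spellcheck.collate": "true",
    # Strict overrides used only while validating collation queries:
    "spellcheck.collateParam.q.op": "AND",
    "spellcheck.collateParam.mm": "100%",
}
query_string = urlencode(params)
```

Note that `urlencode` turns `100%` into `100%25`, which is what must actually appear on the wire.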
+
+[[SpellChecking-Thespellcheck.dictionaryParameter]]
+=== The `spellcheck.dictionary` Parameter
+
+This parameter causes Solr to use the dictionary named in the parameter's argument. The default setting is "default". This parameter can be used to invoke a specific spellchecker on a per request basis.
+
+[[SpellChecking-Thespellcheck.accuracyParameter]]
+=== The `spellcheck.accuracy` Parameter
+
+Specifies an accuracy value to be used by the spell checking implementation to decide whether a result is worthwhile or not. The value is a float between 0 and 1. Defaults to `Float.MIN_VALUE`.
+
+
+[[spellcheck_DICT_NAME]]
+=== The `spellcheck.<DICT_NAME>.key` Parameter
+
+Specifies a key/value pair for the implementation handling a given dictionary. The value that is passed through is just `key=value` (the `spellcheck.<DICT_NAME>.` prefix is stripped off).
+
+For example, given a dictionary called `foo`, `spellcheck.foo.myKey=myValue` would result in `myKey=myValue` being passed through to the implementation handling the dictionary `foo`.
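
The pass-through can be pictured as a simple prefix strip. This is a hypothetical sketch of the naming convention, not Solr's actual implementation:

```python
def params_for_dictionary(request_params, dict_name):
    # spellcheck.<DICT_NAME>.key=value reaches the implementation that
    # handles <DICT_NAME> as plain key=value; everything else is ignored.
    prefix = "spellcheck." + dict_name + "."
    return {key[len(prefix):]: value
            for key, value in request_params.items()
            if key.startswith(prefix)}

request = {"spellcheck": "true", "spellcheck.foo.myKey": "myValue"}
```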
+
+[[SpellChecking-Example]]
+=== Example
+
+Using Solr's `bin/solr -e techproducts` example, this query shows the results of a simple request that defines a query using the `spellcheck.q` parameter and forces the collations to require that all input terms match:
+
+`\http://localhost:8983/solr/techproducts/spell?df=text&spellcheck.q=delll+ultra+sharp&spellcheck=true&spellcheck.collateParam.q.op=AND`
+
+Results:
+
+[source,xml]
+----
+<lst name="spellcheck">
+  <lst name="suggestions">
+    <lst name="delll">
+      <int name="numFound">1</int>
+      <int name="startOffset">0</int>
+      <int name="endOffset">5</int>
+      <int name="origFreq">0</int>
+      <arr name="suggestion">
+        <lst>
+          <str name="word">dell</str>
+          <int name="freq">1</int>
+        </lst>
+      </arr>
+    </lst>
+    <lst name="ultra sharp">
+      <int name="numFound">1</int>
+      <int name="startOffset">6</int>
+      <int name="endOffset">17</int>
+      <int name="origFreq">0</int>
+      <arr name="suggestion">
+        <lst>
+          <str name="word">ultrasharp</str>
+          <int name="freq">1</int>
+        </lst>
+      </arr>
+    </lst>
+  </lst>
+  <bool name="correctlySpelled">false</bool>
+  <lst name="collations">
+    <lst name="collation">
+      <str name="collationQuery">dell ultrasharp</str>
+      <int name="hits">1</int>
+      <lst name="misspellingsAndCorrections">
+        <str name="delll">dell</str>
+        <str name="ultra sharp">ultrasharp</str>
+      </lst>
+    </lst>
+  </lst>
+</lst>
+----
+
+[[SpellChecking-DistributedSpellCheck]]
+== Distributed SpellCheck
+
+The `SpellCheckComponent` also supports spellchecking on distributed indexes. If you are using the `SpellCheckComponent` on a request handler other than `/select`, you must provide the following two parameters:
+
+// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+
+[cols="30,70",options="header"]
+|===
+|Parameter |Description
+|shards |Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>
+|shards.qt |Specifies the request handler Solr uses for requests to shards. This parameter is not required for the `/select` request handler.
+|===
+
+For example:
+
+[source,text]
+http://localhost:8983/solr/techproducts/spell?spellcheck=true&spellcheck.build=true&spellcheck.q=toyata&shards.qt=/spell&shards=solr-shard1:8983/solr/techproducts,solr-shard2:8983/solr/techproducts
+
+In the case of a distributed request to the SpellCheckComponent, the shards are requested for at least five suggestions even if the `spellcheck.count` parameter value is less than five. Once the suggestions are collected, they are ranked by the configured distance measure (Levenshtein distance by default) and then by aggregate frequency.
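
The merge-and-rank step can be sketched as follows. This is an illustrative model of the documented ordering (edit distance first, then aggregate frequency), not the actual SpellCheckComponent code:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def merge_suggestions(term, shard_suggestions):
    # shard_suggestions: one dict per shard mapping suggestion -> frequency.
    # Aggregate frequencies across shards, then rank by edit distance to the
    # original term, breaking ties with higher aggregate frequency.
    totals = {}
    for shard in shard_suggestions:
        for word, freq in shard.items():
            totals[word] = totals.get(word, 0) + freq
    return sorted(totals, key=lambda w: (levenshtein(term, w), -totals[w]))

ranked = merge_suggestions(
    "toyata",
    [{"toyota": 3, "toy": 1}, {"toyota": 2, "tomato": 1}])
```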

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ccbc93b8/solr/solr-ref-guide/src/stream-screen.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/stream-screen.adoc b/solr/solr-ref-guide/src/stream-screen.adoc
new file mode 100644
index 0000000..a351b0a
--- /dev/null
+++ b/solr/solr-ref-guide/src/stream-screen.adoc
@@ -0,0 +1,12 @@
+= Stream Screen
+:page-shortname: stream-screen
+:page-permalink: stream-screen.html
+
+The Stream screen allows you to enter a <<streaming-expressions.adoc#streaming-expressions,streaming expression>> and see the results. It is very similar to the <<query-screen.adoc#query-screen,Query Screen>>, except the input box is at the top and all options must be declared in the expression.
+
+The screen will insert everything up to the streaming expression itself, so you do not need to enter the full URI with the hostname, port, collection, etc. Simply input the expression after the `expr=` part, and the URL will be constructed dynamically as appropriate.
+
+Under the input box, the Execute button will run the expression. An option "with explanation" will show the parts of the streaming expression that were executed. Under this, the streamed results are shown. A URL to be able to view the output in a browser is also available.
+
+.Stream Screen with query and results
+image::images/stream-screen/StreamScreen.png[image,height=400]