Posted to commits@lucene.apache.org by ct...@apache.org on 2017/06/16 01:05:18 UTC

[3/6] lucene-solr:master: SOLR-10892: Change easy tables to description lists

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/spatial-search.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/spatial-search.adoc b/solr/solr-ref-guide/src/spatial-search.adoc
index 69d1305..8b56c02 100644
--- a/solr/solr-ref-guide/src/spatial-search.adoc
+++ b/solr/solr-ref-guide/src/spatial-search.adoc
@@ -66,44 +66,46 @@ If you'd rather use a standard industry format, Solr supports WKT and GeoJSON. H
 
 There are two spatial Solr "query parsers" for geospatial search: `geofilt` and `bbox`. They take the following parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|d |the radial distance, usually in kilometers. (RPT & BBoxField can set other units via the setting `distanceUnits`)
-|pt |the center point using the format "lat,lon" if latitude & longitude. Otherwise, "x,y" for PointType or "x y" for RPT field types.
-|sfield |a spatial indexed field
-|score a|
-(Advanced option; not supported by LatLonType (deprecated) or PointType) If the query is used in a scoring context (e.g. as the main query in `q`), this _<<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameter>>_ determines what scores will be produced. Valid values are:
+`d`::
+The radial distance, usually in kilometers. RPT & BBoxField can set other units via the setting `distanceUnits`.
+
+`pt`::
+The center point using the format "lat,lon" if latitude & longitude. Otherwise, "x,y" for PointType or "x y" for RPT field types.
 
-* `none` - A fixed score of 1.0. (the default)
-* `kilometers` - distance in kilometers between the field value and the specified center point
-* `miles` - distance in miles between the field value and the specified center point
-* `degrees` - distance in degrees between the field value and the specified center point
-* `distance` - distance between the field value and the specified center point in the `distanceUnits` configured for this field
-* `recipDistance` - 1 / the distance
+`sfield`::
+A spatial indexed field.
 
+`score`::
+(Advanced option; not supported by LatLonType (deprecated) or PointType) If the query is used in a scoring context (e.g. as the main query in `q`), this _<<local-parameters-in-queries.adoc#local-parameters-in-queries,local parameter>>_ determines what scores will be produced. Valid values are:
+
+* `none`: A fixed score of 1.0 (the default).
+* `kilometers`: distance in kilometers between the field value and the specified center point
+* `miles`: distance in miles between the field value and the specified center point
+* `degrees`: distance in degrees between the field value and the specified center point
+* `distance`: distance between the field value and the specified center point in the `distanceUnits` configured for this field
+* `recipDistance`: 1 / the distance
++
 [WARNING]
 ====
 Don't use this for indexed non-point shapes (e.g. polygons). The results will be erroneous. And with RPT, it's only recommended for multi-valued point data, as the implementation doesn't scale very well and for single-valued fields, you should instead use a separate non-RPT field purely for distance sorting.
 ====
-
++
 When used with `BBoxField`, additional options are supported:
++
+* `overlapRatio`: The relative overlap between the indexed shape & query shape.
+* `area`: haversine based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
+* `area2D`: cartesian coordinates based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
 
-* `overlapRatio` - The relative overlap between the indexed shape & query shape.
-* `area` - haversine based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
-* `area2D` - cartesian coordinates based area of the overlapping shapes expressed in terms of the `distanceUnits` configured for this field
+`filter`::
+(Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to `false`.
 
-|filter |(Advanced option; not supported by LatLonType (deprecated) or PointType). If you only want the query to score (with the above `score` local parameter), not filter, then set this local parameter to false.
-|===
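To make the distance-based `score` values above concrete, here is a rough haversine sketch in Python (the Earth-radius constant is an approximation, not necessarily the exact value Lucene uses):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers, as score=kilometers reports."""
    r = 6371.0  # mean Earth radius in km (approximation; Lucene's constant may differ slightly)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

`miles` and `degrees` are the same measurement in different units, and `recipDistance` is simply `1 / distance`.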
 
 [[SpatialSearch-geofilt]]
 === geofilt
 
 The `geofilt` filter allows you to retrieve results based on the geospatial distance (AKA the "great circle distance") from a given point. Another way of looking at it is that it creates a circular shape filter. For example, to find all documents within five kilometers of a given lat/lon point, you could enter `&q=*:*&fq={!geofilt sfield=store}&pt=45.15,-93.85&d=5`. This filter returns all results within a circle of the given radius around the initial point:
 
-image::images/spatial-search/circle.png[image]
+image::images/spatial-search/circle.png[5KM radius]
 
 
 [[SpatialSearch-bbox]]
@@ -117,8 +119,7 @@ Here's a sample query:
 
 The rectangular shape is faster to compute and so it's sometimes used as an alternative to `geofilt` when it's acceptable to return points outside of the radius. However, if the ideal goal is a circle but you want it to run faster, then instead consider using the RPT field and try a large `distErrPct` value like `0.1` (10% radius). This will return results outside the radius but it will do so somewhat uniformly around the shape.
 
-image::images/spatial-search/bbox.png[image]
-
+image::images/spatial-search/bbox.png[Bounding box]
 
 [IMPORTANT]
 ====
@@ -148,7 +149,6 @@ If you know the filter query (be it spatial or not) is fairly unique and not lik
 
 LLPSF does not support Solr's "PostFilter".
 
-
 [[SpatialSearch-DistanceSortingorBoosting_FunctionQueries_]]
 == Distance Sorting or Boosting (Function Queries)
 
@@ -220,32 +220,51 @@ RPT _shares_ various features in common with `LatLonPointSpatialField`. Some are
 
 To use RPT, the field type must be registered and configured in `schema.xml`. There are many options for this field type.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`name`::
+The name of the field type.
+
+`class`::
+This should be `solr.SpatialRecursivePrefixTreeFieldType`. But be aware that the Lucene spatial module includes some other so-called "spatial strategies" besides RPT, notably TermQueryPT*, BBox, PointVector*, and SerializedDV. Solr requires a field type to parallel these in order to use them. The asterisked ones have them.
 
-[cols="30,70",options="header"]
-|===
-|Setting |Description
-|name |The name of the field type.
-|class |This should be `solr.SpatialRecursivePrefixTreeFieldType`. But be aware that the Lucene spatial module includes some other so-called "spatial strategies" other than RPT, notably TermQueryPT*, BBox, PointVector*, and SerializedDV. Solr requires a field type to parallel these in order to use them. The asterisked ones have them.
-|spatialContextFactory |This is a Java class name to an internal extension point governing support for shape definitions & parsing. If you require polygon support, set this to `JTS` – an alias for `org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory`; otherwise it can be omitted. See important info below about JTS. (note: prior to Solr 6, the "org.locationtech.spatial4j" part was "com.spatial4j.core" and there used to be no convenience JTS alias)
-|geo |If **true**, the default, latitude and longitude coordinates will be used and the mathematical model will generally be a sphere. If false, the coordinates will be generic X & Y on a 2D plane using Euclidean/Cartesian geometry.
-|format |Defines the shape syntax/format to be used. Defaults to `WKT` but `GeoJSON` is another popular format. Spatial4j governs this feature and supports https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/io/package-frame.html[other formats]. If a given shape is parseable as "lat,lon" or "x y" then that is always supported.
-|distanceUnits a|
-This is used to specify the units for distance measurements used throughout the use of this field. This can be `degrees`, `kilometers` or `miles`. It is applied to nearly all distance measurements involving the field: `maxDistErr`, `distErr`, `d`, `geodist` and the `score` when score is `distance`, `area`, or `area2d`. However, it doesn't affect distances embedded in WKT strings, (eg: "`BUFFER(POINT(200 10),0.2)`"), which are still in degrees.
+`spatialContextFactory`::
+This is a Java class name to an internal extension point governing support for shape definitions & parsing. If you require polygon support, set this to `JTS` – an alias for `org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory`; otherwise it can be omitted. See important info below about JTS. (note: prior to Solr 6, the "org.locationtech.spatial4j" part was "com.spatial4j.core" and there used to be no convenience JTS alias)
 
-`distanceUnits` defaults to either "```kilometers```" if `geo` is "```true```", or "```degrees```" if `geo` is "```false```".
+`geo`::
+If `true`, the default, latitude and longitude coordinates will be used and the mathematical model will generally be a sphere. If `false`, the coordinates will be generic X & Y on a 2D plane using Euclidean/Cartesian geometry.
 
+`format`:: Defines the shape syntax/format to be used. Defaults to `WKT` but `GeoJSON` is another popular format. Spatial4j governs this feature and supports https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/io/package-frame.html[other formats]. If a given shape is parseable as "lat,lon" or "x y" then that is always supported.
+
+`distanceUnits`::
+This is used to specify the units for distance measurements used throughout this field. This can be `degrees`, `kilometers` or `miles`. It is applied to nearly all distance measurements involving the field: `maxDistErr`, `distErr`, `d`, `geodist` and the `score` when score is `distance`, `area`, or `area2d`. However, it doesn't affect distances embedded in WKT strings (e.g., `BUFFER(POINT(200 10),0.2)`), which are still in degrees.
++
+`distanceUnits` defaults to either `kilometers` if `geo` is `true`, or `degrees` if `geo` is `false`.
++
 `distanceUnits` replaces the `units` attribute; which is now deprecated and mutually exclusive with this attribute.
 
-|distErrPct |Defines the default precision of non-point shapes (both index & query), as a fraction between 0.0 (fully precise) to 0.5. The closer this number is to zero, the more accurate the shape will be. However, more precise indexed shapes use more disk space and take longer to index. Bigger distErrPct values will make queries faster but less accurate. At query time this can be overridden in the query syntax, such as to 0.0 so as to not approximate the search shape. The default for the RPT field is 0.025. Note: For RPTWithGeometrySpatialField (see below), there's always complete accuracy with the serialized geometry and so this doesn't control accuracy so much as it controls the trade-off of how big the index should be. distErrPct defaults to 0.15 for that field.
-|maxDistErr |Defines the highest level of detail required for indexed data. If left blank, the default is one meter – just a bit less than 0.000009 degrees. This setting is used internally to compute an appropriate maxLevels (see below).
-|worldBounds |Defines the valid numerical ranges for x and y, in the format of `ENVELOPE(minX, maxX, maxY, minY)`. If `geo="true"`, the standard lat-lon world boundaries are assumed. If `geo=false`, you should define your boundaries.
-|distCalculator |Defines the distance calculation algorithm. If `geo=true`, "haversine" is the default. If `geo=false`, "cartesian" will be the default. Other possible values are "lawOfCosines", "vincentySphere" and "cartesian^2".
-|prefixTree |Defines the spatial grid implementation. Since a PrefixTree (such as RecursivePrefixTree) maps the world as a grid, each grid cell is decomposed to another set of grid cells at the next level. If `geo=true` then the default prefix tree is "```geohash```", otherwise it's "```quad```". Geohash has 32 children at each level, quad has 4. Geohash can only be used for `geo=true` as it's strictly geospatial. A third choice is "```packedQuad```", which is generally more efficient than plain "quad", provided there are many levels -- perhaps 20 or more.
-|maxLevels |Sets the maximum grid depth for indexed data. Instead, it's usually more intuitive to compute an appropriate maxLevels by specifying `maxDistErr` .
-|===
+`distErrPct`::
+Defines the default precision of non-point shapes (both index & query), as a fraction between `0.0` (fully precise) to `0.5`. The closer this number is to zero, the more accurate the shape will be. However, more precise indexed shapes use more disk space and take longer to index.
++
+Bigger `distErrPct` values will make queries faster but less accurate. At query time this can be overridden in the query syntax, such as to `0.0` so as to not approximate the search shape. The default for the RPT field is `0.025`.
++
+NOTE: For RPTWithGeometrySpatialField (see below), there's always complete accuracy with the serialized geometry and so this doesn't control accuracy so much as it controls the trade-off of how big the index should be. distErrPct defaults to 0.15 for that field.
+
+`maxDistErr`:: Defines the highest level of detail required for indexed data. If left blank, the default is one meter – just a bit less than 0.000009 degrees. This setting is used internally to compute an appropriate maxLevels (see below).
 
-*_And there are others:_* `normWrapLongitude` _,_ `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about `spatialContextFactory` implementations referenced above, especially the link to the JTS based one.
+`worldBounds`::
+Defines the valid numerical ranges for x and y, in the format of `ENVELOPE(minX, maxX, maxY, minY)`. If `geo="true"`, the standard lat-lon world boundaries are assumed. If `geo=false`, you should define your boundaries.
+
+`distCalculator`::
+Defines the distance calculation algorithm. If `geo=true`, "haversine" is the default. If `geo=false`, "cartesian" will be the default. Other possible values are "lawOfCosines", "vincentySphere" and "cartesian^2".
+
+`prefixTree`:: Defines the spatial grid implementation. Since a PrefixTree (such as RecursivePrefixTree) maps the world as a grid, each grid cell is decomposed to another set of grid cells at the next level.
++
+If `geo=true` then the default prefix tree is `geohash`, otherwise it's `quad`. Geohash has 32 children at each level, quad has 4. Geohash can only be used for `geo=true` as it's strictly geospatial.
++
+A third choice is `packedQuad`, which is generally more efficient than `quad`, provided there are many levels -- perhaps 20 or more.
+
+`maxLevels`:: Sets the maximum grid depth for indexed data. Instead of specifying this directly, it's usually more intuitive to compute an appropriate `maxLevels` by specifying `maxDistErr`.
+
+*_And there are others:_* `normWrapLongitude`, `datelineRule`, `validationRule`, `autoIndex`, `allowMultiOverlap`, `precisionModel`. For further info, see notes below about `spatialContextFactory` implementations referenced above, especially the link to the JTS based one.
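Pulled together, a minimal `schema.xml` registration using the options above might look like the following sketch (the field type name and attribute values are illustrative, not required defaults):

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distanceUnits="kilometers"
           maxDistErr="0.001" distErrPct="0.025" />
<!-- for polygon support, additionally set spatialContextFactory="JTS"
     and see the JTS notes below -->
```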
 
 [[SpatialSearch-JTSandPolygons]]
 === JTS and Polygons
@@ -304,23 +323,30 @@ The RPT field supports generating a 2D grid of facet counts for documents having
 
 The heatmap feature is accessed from Solr's faceting feature. As a part of faceting, it supports the `key` local parameter as well as excluding tagged filter queries, just like other types of faceting do. This allows multiple heatmaps to be returned on the same field with different filters.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`facet`::
+Set to `true` to enable faceting.
+
+`facet.heatmap`::
+The field name of type RPT.
+
+`facet.heatmap.geom`::
+The region to compute the heatmap on, specified using the rectangle-range syntax or WKT. It defaults to the world: `["-180 -90" TO "180 90"]`.
+
+`facet.heatmap.gridLevel`::
+A specific grid level, which determines how big each grid cell is. Defaults to being computed via `distErrPct` (or `distErr`).
+
+`facet.heatmap.distErrPct`::
+A fraction of the size of `geom` used to compute `gridLevel`. Defaults to `0.15`. It's computed the same as a similarly named parameter for RPT.
+
+`facet.heatmap.distErr`::
+A cell error distance used to pick the grid level indirectly. It's computed the same as a similarly named parameter for RPT.
 
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|facet |Set to `true` to enable faceting
-|facet.heatmap |The field name of type RPT
-|facet.heatmap.geom |The region to compute the heatmap on, specified using the rectangle-range syntax or WKT. It defaults to the world. ex: `["-180 -90" TO "180 90"]`
-|facet.heatmap.gridLevel |A specific grid level, which determines how big each grid cell is. Defaults to being computed via distErrPct (or distErr)
-|facet.heatmap.distErrPct |A fraction of the size of geom used to compute gridLevel. Defaults to 0.15. It's computed the same as a similarly named parameter for RPT.
-|facet.heatmap.distErr |A cell error distance used to pick the grid level indirectly. It's computed the same as a similarly named parameter for RPT.
-|facet.heatmap.format |The format, either `ints2D` (default) or `png`.
-|===
+`facet.heatmap.format`::
+The format, either `ints2D` (default) or `png`.
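A request combining these parameters might look like the following (the `store_rpt` field name is hypothetical; substitute your own RPT field):

```
http://localhost:8983/solr/techproducts/select?q=*:*&facet=true&facet.heatmap=store_rpt&facet.heatmap.format=ints2D
```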
 
 [TIP]
 ====
-You'll experiment with different distErrPct values (probably 0.10 - 0.20) with various input geometries till the default size is what you're looking for. The specific details of how it's computed isn't important. For high-detail grids used in point-plotting (loosely one cell per pixel), set distErr to be the number of decimal-degrees of several pixels or so of the map being displayed. Also, you probably don't want to use a geohash based grid because the cell orientation between grid levels flip-flops between being square and rectangle. Quad is consistent and has more levels, albeit at the expense of a larger index.
+You'll experiment with different `distErrPct` values (probably 0.10 - 0.20) with various input geometries till the default size is what you're looking for. The specific details of how it's computed aren't important. For high-detail grids used in point-plotting (loosely one cell per pixel), set `distErr` to be the number of decimal-degrees of several pixels or so of the map being displayed. Also, you probably don't want to use a geohash-based grid because the cell orientation between grid levels flip-flops between being square and rectangle. Quad is consistent and has more levels, albeit at the expense of a larger index.
 ====
 
 Here's some sample output in JSON (with "..." inserted for brevity):

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/the-query-elevation-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-query-elevation-component.adoc b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
index 9898a08..dcd3c7e 100644
--- a/solr/solr-ref-guide/src/the-query-elevation-component.adoc
+++ b/solr/solr-ref-guide/src/the-query-elevation-component.adoc
@@ -61,17 +61,16 @@ Optionally, in the Query Elevation Component configuration you can also specify
 <str name="editorialMarkerFieldName">foo</str>
 ----
 
-The Query Elevation Search Component takes the following arguments:
+The Query Elevation Search Component takes the following parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`queryFieldType`::
+Specifies which fieldType should be used to analyze the incoming text. For example, it may be appropriate to use a fieldType with a LowerCaseFilter.
 
-[cols="30,70",options="header"]
-|===
-|Argument |Description
-|`queryFieldType` |Specifies which fieldType should be used to analyze the incoming text. For example, it may be appropriate to use a fieldType with a LowerCaseFilter.
-|`config-file` |Path to the file that defines query elevation. This file must exist in `<instanceDir>/conf/<config-file>` or `<dataDir>/<config-file>`. If the file exists in the /conf/ directory it will be loaded once at startup. If it exists in the data directory, it will be reloaded for each IndexReader.
-|`forceElevation` |By default, this component respects the requested `sort` parameter: if the request asks to sort by date, it will order the results by date. If `forceElevation=true` (the default), results will first return the boosted docs, then order by date.
-|===
+`config-file`::
+Path to the file that defines query elevation. This file must exist in `<instanceDir>/conf/<config-file>` or `<dataDir>/<config-file>`. If the file exists in the `conf/` directory it will be loaded once at startup. If it exists in the `data/` directory, it will be reloaded for each IndexReader.
+
+`forceElevation`::
+By default, this component respects the requested `sort` parameter: if the request asks to sort by date, it will order the results by date. If `forceElevation=true` (the default is `false`), results will first return the boosted docs, then order by date.
 
 [[TheQueryElevationComponent-elevate.xml]]
 === elevate.xml

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/the-stats-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-stats-component.adoc b/solr/solr-ref-guide/src/the-stats-component.adoc
index 96ba88c..a5eb334 100644
--- a/solr/solr-ref-guide/src/the-stats-component.adoc
+++ b/solr/solr-ref-guide/src/the-stats-component.adoc
@@ -32,18 +32,14 @@ bin/solr -e techproducts
 
 The Stats Component accepts the following parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`stats`::
+If `true`, then invokes the Stats component.
 
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|stats |If **true**, then invokes the Stats component.
-|stats.field a|
+`stats.field`::
 Specifies a field for which statistics should be generated. This parameter may be invoked multiple times in a query in order to request statistics on multiple fields.
-
++
 <<local-parameters-in-queries.adoc#local-parameters-in-queries,Local Parameters>> may be used to indicate which subset of the supported statistics should be computed, and/or that statistics should be computed over the results of an arbitrary numeric function (or query) instead of a simple field name. See the examples below.
 
-|===
 
 [[TheStatsComponent-Example]]
 === Example
@@ -96,26 +92,47 @@ The query below demonstrates computing stats against two different fields numeri
 [[TheStatsComponent-StatisticsSupported]]
 == Statistics Supported
 
-The table below explains the statistics supported by the Stats component. Not all statistics are supported for all field types, and not all statistics are computed by default (See <<TheStatsComponent-LocalParameters,Local Parameters>> below for details)
-
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="10,10,50,20,10",options="header"]
-|===
-|Local Param |Sample Input |Description |Supported Types |Computed by Default
-|min |true |The minimum value of the field/function in all documents in the set. |All |Yes
-|max |true |The maximum value of the field/function in all documents in the set. |All |Yes
-|sum |true |The sum of all values of the field/function in all documents in the set. |Numeric & Date |Yes
-|count |true |The number of values found in all documents in the set for this field/function. |All |Yes
-|missing |true |The number of documents in the set which do not have a value for this field/function. |All |Yes
-|sumOfSquares |true |Sum of all values squared (a by product of computing stddev) |Numeric & Date |Yes
-|mean |true |The average `(v1 + v2 .... + vN)/N` |Numeric & Date |Yes
-|stddev |true |Standard deviation, measuring how widely spread the values in the data set are. |Numeric & Date |Yes
-|percentiles |"1,99,99.9" |A list of percentile values based on cut-off points specified by the param value. These values are an approximation, using the https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[t-digest algorithm]. |Numeric |No
-|distinctValues |true |The set of all distinct values for the field/function in all of the documents in the set. This calculation can be very expensive for fields that do not have a tiny cardinality. |All |No
-|countDistinct |true |The exact number of distinct values in the field/function in all of the documents in the set. This calculation can be very expensive for fields that do not have a tiny cardinality. |All |No
-|cardinality |"true" or"0.3" |A statistical approximation (currently using the https://en.wikipedia.org/wiki/HyperLogLog[HyperLogLog] algorithm) of the number of distinct values in the field/function in all of the documents in the set. This calculation is much more efficient then using the 'countDistinct' option, but may not be 100% accurate. Input for this option can be floating point number between 0.0 and 1.0 indicating how aggressively the algorithm should try to be accurate: 0.0 means use as little memory as possible; 1.0 means use as much memory as needed to be as accurate as possible. 'true' is supported as an alias for "0.3" |All |No
-|===
+The statistics supported by the Stats component are explained below. Not all statistics are supported for all field types, and not all statistics are computed by default (see <<TheStatsComponent-LocalParameters,Local Parameters>> below for details).
+
+`min`::
+The minimum value of the field/function in all documents in the set. This statistic is computed for all field types and is computed by default.
+
+`max`::
+The maximum value of the field/function in all documents in the set. This statistic is computed for all field types and is computed by default.
+
+`sum`::
+The sum of all values of the field/function in all documents in the set. This statistic is computed for numeric and date field types and is computed by default.
+
+`count`::
+The number of values found in all documents in the set for this field/function. This statistic is computed for all field types and is computed by default.
+
+`missing`::
+The number of documents in the set which do not have a value for this field/function. This statistic is computed for all field types and is computed by default.
+
+`sumOfSquares`::
+Sum of all values squared (a by-product of computing stddev). This statistic is computed for numeric and date field types and is computed by default.
+
+`mean`::
+The average `(v1 + v2 .... + vN)/N`. This statistic is computed for numeric and date field types and is computed by default.
+
+`stddev`::
+Standard deviation, measuring how widely spread the values in the data set are. This statistic is computed for numeric and date field types and is computed by default.
+
+`percentiles`::
+A list of percentile values based on cut-off points specified by the parameter value, such as `1,99,99.9`. These values are an approximation, using the https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[t-digest algorithm]. This statistic is computed for numeric field types and is not computed by default.
+
+`distinctValues`::
+The set of all distinct values for the field/function in all of the documents in the set. This calculation can be very expensive for fields that do not have a tiny cardinality. This statistic is computed for all field types but is not computed by default.
+
+`countDistinct`::
+The exact number of distinct values in the field/function in all of the documents in the set. This calculation can be very expensive for fields that do not have a tiny cardinality. This statistic is computed for all field types but is not computed by default.
+
+`cardinality`::
+A statistical approximation (currently using the https://en.wikipedia.org/wiki/HyperLogLog[HyperLogLog] algorithm) of the number of distinct values in the field/function in all of the documents in the set. This calculation is much more efficient than using the `countDistinct` option, but may not be 100% accurate.
++
+Input for this option can be a floating point number between `0.0` and `1.0` indicating how aggressively the algorithm should try to be accurate: `0.0` means use as little memory as possible; `1.0` means use as much memory as needed to be as accurate as possible. `true` is supported as an alias for `0.3`.
++
+This statistic is computed for all field types but is not computed by default.
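The relationship between `count`, `sum`, `sumOfSquares`, `mean`, and `stddev` can be sketched as follows (an illustrative Python sketch, not Solr's implementation; it assumes `stddev` is the sample standard deviation, derivable from the other aggregates):

```python
def basic_stats(values):
    """Compute the default numeric aggregates in the style of the Stats Component."""
    n = len(values)
    total = sum(values)
    sum_of_squares = sum(v * v for v in values)
    mean = total / n
    # sample variance, derived from count, sum, and sumOfSquares
    # (assumption: the sample form with n - 1 in the denominator)
    variance = (sum_of_squares - total * total / n) / (n - 1) if n > 1 else 0.0
    return {
        "count": n,
        "sum": total,
        "sumOfSquares": sum_of_squares,
        "mean": mean,
        "stddev": variance ** 0.5,
    }
```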
 
 [[TheStatsComponent-LocalParameters]]
 == Local Parameters

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/the-term-vector-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-term-vector-component.adoc b/solr/solr-ref-guide/src/the-term-vector-component.adoc
index fb92dc9..dd73d86 100644
--- a/solr/solr-ref-guide/src/the-term-vector-component.adoc
+++ b/solr/solr-ref-guide/src/the-term-vector-component.adoc
@@ -127,37 +127,48 @@ The example below shows an invocation of this component using the above configur
 [[TheTermVectorComponent-RequestParameters]]
 === Request Parameters
 
-The example below shows the available request parameters for this component:
+The example below shows some of the available request parameters for this component:
 
-`\http://localhost:8983/solr/techproducts/tvrh?q=includes:[* TO *]&rows=10&indent=true&tv=true&tv.tf=true&tv.df=true&tv.positions=true&tv.offsets=true&tv.payloads=true&tv.fl=includes`
+[source,bash]
+http://localhost:8983/solr/techproducts/tvrh?q=includes:[* TO *]&rows=10&indent=true&tv=true&tv.tf=true&tv.df=true&tv.positions=true&tv.offsets=true&tv.payloads=true&tv.fl=includes
+
+`tv`::
+If `true`, the Term Vector Component will run.
+
+`tv.docIds`::
+For a given comma-separated list of Lucene document IDs (*not* the Solr Unique Key), term vectors will be returned.
+
+`tv.fl`:: 
+For a given comma-separated list of fields, term vectors will be returned. If not specified, the `fl` parameter is used.
+
+`tv.all`::
+If `true`, all the boolean parameters listed below (`tv.df`, `tv.offsets`, `tv.positions`, `tv.payloads`, `tv.tf` and `tv.tf_idf`) will be enabled.
+
+`tv.df`::
+If `true`, returns the Document Frequency (DF) of the term in the collection. This can be computationally expensive.
+
+`tv.offsets`::
+If `true`, returns offset information for each term in the document.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`tv.positions`::
+If `true`, returns position information.
 
-[cols="20,60,20",options="header"]
-|===
-|Boolean Parameters |Description |Type
-|tv |Should the component run or not |boolean
-|tv.docIds |Returns term vectors for the specified list of Lucene document IDs (not the Solr Unique Key). |comma seperated integers
-|tv.fl |Returns term vectors for the specified list of fields. If not specified, the `fl` parameter is used. |comma seperated list of field names
-|tv.all |A shortcut that invokes all the boolean parameters listed below. |boolean
-|tv.df |Returns the Document Frequency (DF) of the term in the collection. This can be computationally expensive. |boolean
-|tv.offsets |Returns offset information for each term in the document. |boolean
-|tv.positions |Returns position information. |boolean
-|tv.payloads |Returns payload information. |boolean
-|tv.tf |Returns document term frequency info per term in the document. |boolean
-|tv.tf_idf a|
-Calculates TF / DF (ie: TF * IDF) for each term. Please note that this is a _literal_ calculation of "Term Frequency multiplied by Inverse Document Frequency" and *not* a classical TF-IDF similarity measure.
+`tv.payloads`::
+If `true`, returns payload information.
 
-Requires the parameters `tv.tf` and `tv.df` to be "true". This can be computationally expensive. (The results are not shown in example output)
+`tv.tf`::
+If `true`, returns document term frequency info for each term in the document.
 
- |boolean
-|===
+`tv.tf_idf`::
+If `true`, calculates TF / DF (i.e., TF * IDF) for each term. Please note that this is a _literal_ calculation of "Term Frequency multiplied by Inverse Document Frequency" and *not* a classical TF-IDF similarity measure.
++
+This parameter requires both `tv.tf` and `tv.df` to be `true`. This can be computationally expensive. (The results are not shown in example output.)
 
 To learn more about TermVector component output, see the Wiki page: http://wiki.apache.org/solr/TermVectorComponentExampleOptions
 
-For schema requirements, see the Wiki page: http://wiki.apache.org/solr/FieldOptionsByUseCase
+For schema requirements, see also the section <<field-properties-by-use-case.adoc#field-properties-by-use-case,Field Properties by Use Case>>.
 
 [[TheTermVectorComponent-SolrJandtheTermVectorComponent]]
 == SolrJ and the Term Vector Component
 
-Neither the SolrQuery class nor the QueryResponse class offer specific method calls to set Term Vector Component parameters or get the "termVectors" output. However, there is a patch for it: https://issues.apache.org/jira/browse/SOLR-949[SOLR-949].
+Neither the `SolrQuery` class nor the `QueryResponse` class offer specific method calls to set Term Vector Component parameters or get the "termVectors" output. However, there is a patch for it: https://issues.apache.org/jira/browse/SOLR-949[SOLR-949].
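Pending SOLR-949, the request parameters above can still be assembled programmatically. A minimal sketch in Python, assuming a local Solr running the `techproducts` example core; the URL is only constructed here, not sent:

```python
from urllib.parse import urlencode

# Hypothetical host/core for illustration; matches the techproducts example.
base = "http://localhost:8983/solr/techproducts/tvrh"
params = {
    "q": "includes:[* TO *]",
    "tv": "true",         # enable the Term Vector Component
    "tv.fl": "includes",  # limit term vectors to the 'includes' field
    "tv.tf": "true",      # per-document term frequency
    "tv.df": "true",      # collection-wide document frequency (can be expensive)
}
url = base + "?" + urlencode(params)
print(url)
```

`urlencode` takes care of escaping the range query in `q`, which is the part most often mangled when URLs are pasted by hand.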

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/the-terms-component.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/the-terms-component.adoc b/solr/solr-ref-guide/src/the-terms-component.adoc
index 346278b..c8ec782 100644
--- a/solr/solr-ref-guide/src/the-terms-component.adoc
+++ b/solr/solr-ref-guide/src/the-terms-component.adoc
@@ -53,89 +53,86 @@ You could add this component to another handler if you wanted to, and pass "term
 
 The parameters below allow you to control what terms are returned. You can also configure any of these with the request handler if you'd like to set them permanently. Or, you can add them to the query request. These parameters are:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,15,15,50",options="header"]
-|===
-|Parameter |Required |Default |Description
-|terms |No |false a|
-If set to true, enables the Terms Component. By default, the Terms Component is off.
-
+`terms`::
+If set to `true`, enables the Terms Component. By default, the Terms Component is off (`false`).
++
 Example: `terms=true`
 
-|terms.fl |Yes |null a|
-Specifies the field from which to retrieve terms.
-
+`terms.fl`::
+Specifies the field from which to retrieve terms. This parameter is required if `terms=true`.
++
 Example: `terms.fl=title`
 
-|terms.list |No |null a|
+`terms.list`::
 Fetches the document frequency for a comma delimited list of terms. Terms are always returned in index order. If `terms.ttf` is set to true, also returns their total term frequency. If multiple `terms.fl` are defined, these statistics will be returned for each term in each requested field.
-
++
 Example: `terms.list=termA,termB,termC`
 
-|terms.limit |No |10 a|
-Specifies the maximum number of terms to return. The default is 10. If the limit is set to a number less than 0, then no maximum limit is enforced. Although this is not required, either this parameter or `terms.upper` must be defined.
-
+`terms.limit`::
+Specifies the maximum number of terms to return. The default is `10`. If the limit is set to a number less than 0, then no maximum limit is enforced. Although this is not required, either this parameter or `terms.upper` must be defined.
++
 Example: `terms.limit=20`
 
-|terms.lower |No |empty string a|
+`terms.lower`::
 Specifies the term at which to start. If not specified, the empty string is used, causing Solr to start at the beginning of the field.
-
++
 Example: `terms.lower=orange`
 
-|terms.lower.incl |No |true a|
+`terms.lower.incl`::
 If set to true, includes the lower-bound term (specified with `terms.lower`) in the result set.
-
++
 Example: `terms.lower.incl=false`
 
-|terms.mincount |No |null a|
+`terms.mincount`::
 Specifies the minimum document frequency to return in order for a term to be included in a query response. Results are inclusive of the mincount (that is, >= mincount).
-
++
 Example: `terms.mincount=5`
 
-|terms.maxcount |No |null a|
+`terms.maxcount`::
 Specifies the maximum document frequency a term must have in order to be included in a query response. The default setting is -1, which sets no upper bound. Results are inclusive of the maxcount (that is, <= maxcount).
-
++
 Example: `terms.maxcount=25`
 
-|terms.prefix |No |null a|
+`terms.prefix`::
 Restricts matches to terms that begin with the specified string.
-
++
 Example: `terms.prefix=inter`
 
-|terms.raw |No |false a|
+`terms.raw`::
 If set to true, returns the raw characters of the indexed term, regardless of whether it is human-readable. For instance, the indexed form of a numeric field is not human-readable.
-
++
 Example: `terms.raw=true`
 
-|terms.regex |No |null a|
+`terms.regex`::
 Restricts matches to terms that match the regular expression.
-
++
 Example: `terms.regex=.*pedist`
 
-|terms.regex.flag |No |null a|
+`terms.regex.flag`::
 Defines a Java regex flag to use when evaluating the regular expression defined with `terms.regex`. See http://docs.oracle.com/javase/tutorial/essential/regex/pattern.html for details of each flag. Valid options are:
 
-* case_insensitive
-* comments
-* multiline
-* literal
-* dotall
-* unicode_case
-* canon_eq
-* unix_lines
-
+* `case_insensitive`
+* `comments`
+* `multiline`
+* `literal`
+* `dotall`
+* `unicode_case`
+* `canon_eq`
+* `unix_lines`
++
 Example: `terms.regex.flag=case_insensitive`
 
-|terms.stats |No |null |Include index statistics in the results. Currently returns only the *numDocs* for a collection. When combined with terms.list it provides enough information to compute idf for a list of terms.
-|terms.sort |No |count a|
-Defines how to sort the terms returned. Valid options are *count*, which sorts by the term frequency, with the highest term frequency first, or *index*, which sorts in index order.
+`terms.stats`::
+Include index statistics in the results. Currently returns only the *numDocs* for a collection. When combined with `terms.list` it provides enough information to compute inverse document frequency (IDF) for a list of terms.
 
+`terms.sort`::
+Defines how to sort the terms returned. Valid options are `count`, which sorts by the term frequency, with the highest term frequency first, or `index`, which sorts in index order.
++
 Example: `terms.sort=index`
 
-|terms.ttf |No |false a|
+`terms.ttf`::
 If set to true, returns both `df` (docFreq) and `ttf` (totalTermFreq) statistics for each requested term in `terms.list`. In this case, the response format is:
-
++
 [source,xml]
 ----
 <lst name="terms">
@@ -148,19 +145,19 @@ If set to true, returns both `df` (docFreq) and `ttf` (totalTermFreq) statistics
 </lst>
 ----
 
-|terms.upper |No |null a|
+`terms.upper`::
 Specifies the term to stop at. Although this parameter is not required, either this parameter or `terms.limit` must be defined.
-
++
 Example: `terms.upper=plum`
 
-|terms.upper.incl |No |false a|
+`terms.upper.incl`::
 If set to true, the upper bound term is included in the result set. The default is false.
-
++
 Example: `terms.upper.incl=true`
 
-|===
+The response to a terms request is a list of the terms and their document frequency values.
 
-The output is a list of the terms and their document frequency values. See below for examples.
+You may also be interested in the {solr-javadocs}/solr-core/org/apache/solr/handler/component/TermsComponent.html[TermsComponent javadoc].
 
 [[TheTermsComponent-Examples]]
 == Examples
@@ -296,16 +293,8 @@ Result:
 
 The TermsComponent also supports distributed indexes. For the `/terms` request handler, you must provide the following two parameters:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|shards |Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>.
-|shards.qt |Specifies the request handler Solr uses for requests to shards.
-|===
-
-[[TheTermsComponent-MoreResources]]
-== More Resources
+`shards`::
+Specifies the shards in your distributed indexing configuration. For more information about distributed indexing, see <<distributed-search-with-index-sharding.adoc#distributed-search-with-index-sharding,Distributed Search with Index Sharding>>.
 
-* {solr-javadocs}/solr-core/org/apache/solr/handler/component/TermsComponent.html[TermsComponent javadoc]
+`shards.qt`::
+Specifies the request handler Solr uses for requests to shards.
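As a sketch of how the parameters described above combine into a single request, the following Python builds (but does not send) a `/terms` query string. The host, core name, and field are assumptions for illustration:

```python
from urllib.parse import urlencode

params = {
    "terms": "true",        # enable the Terms Component
    "terms.fl": "name",     # field to pull terms from (required)
    "terms.prefix": "i",    # only terms starting with "i"
    "terms.limit": "20",    # return at most 20 terms
    "terms.sort": "index",  # index order instead of the default count order
}
url = "http://localhost:8983/solr/techproducts/terms?" + urlencode(params)
print(url)
```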

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/update-request-processors.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/update-request-processors.adoc b/solr/solr-ref-guide/src/update-request-processors.adoc
index d1f5c35..7942028 100644
--- a/solr/solr-ref-guide/src/update-request-processors.adoc
+++ b/solr/solr-ref-guide/src/update-request-processors.adoc
@@ -386,7 +386,7 @@ These Update processors do not need any configuration is your `solrconfig.xml` .
 
 The `TemplateUpdateProcessorFactory` can be used to add new fields to documents based on a template pattern.
 
-Use the parameter `processor=Template` to use it. The template parameter `Template.field` (multivalued) define the field to add and the pattern. Templates may contain placeholders which refer to other fields in the document. You can have multiple `Template.field` parameters in a single request.
+Use the parameter `processor=Template` to use it. The template parameter `Template.field` (multivalued) defines the field to add and the pattern. Templates may contain placeholders which refer to other fields in the document. You can have multiple `Template.field` parameters in a single request.
 
 For example:
 
@@ -395,7 +395,7 @@ For example:
 processor=Template&Template.field=fullName:Mr. {firstName} {lastName}
 ----
 
-The above example would add a new field to the document called `fullName`. The fields `firstName and` `lastName` are supplied from the document fields. If either of them is missing, that part is replaced with an empty string. If those fields are multi-valued, only the first value is used.
+The above example would add a new field to the document called `fullName`. The fields `firstName` and `lastName` are supplied from the document fields. If either of them is missing, that part is replaced with an empty string. If those fields are multi-valued, only the first value is used.
 
 ==== AtomicUpdateProcessorFactory
 
@@ -414,4 +414,4 @@ The above parameters convert a normal `update` operation on
 * `field1` to an atomic `add` operation
 * `field2` to an atomic `set` operation
 * `field3` to an atomic `inc` operation
-* `field4` to an atomic `remove` operation
\ No newline at end of file
+* `field4` to an atomic `remove` operation
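Because the `Template.field` value shown earlier contains a colon, spaces, and braces, it must be URL-encoded when passed as a request parameter. A minimal sketch in Python; only the query string is built here:

```python
from urllib.parse import urlencode

# The processor name and Template.field pattern come from the
# TemplateUpdateProcessorFactory example above; urlencode escapes
# the colon, spaces, and braces automatically.
params = [
    ("processor", "Template"),
    ("Template.field", "fullName:Mr. {firstName} {lastName}"),
]
query = urlencode(params)
print(query)
```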

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
index 664bd8c..040da86 100644
--- a/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
+++ b/solr/solr-ref-guide/src/updatehandlers-in-solrconfig.adoc
@@ -46,17 +46,16 @@ For more information about Near Real Time operations, see <<near-real-time-searc
 
 These settings control how often pending updates will be automatically pushed to the index. An alternative to `autoCommit` is to use `commitWithin`, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`maxDocs`::
+The number of updates that have occurred since the last commit.
 
-[cols="30,70",options="header"]
-|===
-|Setting |Description
-|maxDocs |The number of updates that have occurred since the last commit.
-|maxTime |The number of milliseconds since the oldest uncommitted update.
-|openSearcher |Whether to open a new searcher when performing a commit. If this is **false**, the commit will flush recent index changes to stable storage, but does not cause a new searcher to be opened to make those changes visible. The default is **true**.
-|===
+`maxTime`::
+The number of milliseconds since the oldest uncommitted update.
 
-If either of these `maxDocs` or `maxTime` limits are reached, Solr automatically performs a commit operation. If the `autoCommit` tag is missing, then only explicit commits will update the index. The decision whether to use auto-commit or not depends on the needs of your application.
+`openSearcher`::
+Whether to open a new searcher when performing a commit. If this is `false`, the commit will flush recent index changes to stable storage, but does not cause a new searcher to be opened to make those changes visible. The default is `true`.
+
+If either of the `maxDocs` or `maxTime` limits are reached, Solr automatically performs a commit operation. If the `autoCommit` tag is missing, then only explicit commits will update the index. The decision whether to use auto-commit or not depends on the needs of your application.
 
 Determining the best auto-commit settings is a tradeoff between performance and accuracy. Settings that cause frequent updates will improve the accuracy of searches because new content will be searchable more quickly, but performance may suffer because of the frequent updates. Less frequent updates may improve performance but it will take longer for updates to show up in queries.
 
@@ -99,17 +98,20 @@ The UpdateHandler section is also where update-related event listeners can be co
 
 Users can write custom update event listener classes, but a common use case is to run external executables via the `RunExecutableListener`:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`exe`::
+The name of the executable to run. It should include the path to the file, relative to Solr home.
+
+`dir`::
+The directory to use as the working directory. The default is the current directory (".").
+
+`wait`::
+Forces the calling thread to wait until the executable returns a response. The default is `true`.
 
-[cols="30,70",options="header"]
-|===
-|Setting |Description
-|exe |The name of the executable to run. It should include the path to the file, relative to Solr home.
-|dir |The directory to use as the working directory. The default is ".".
-|wait |Forces the calling thread to wait until the executable returns a response. The default is **true**.
-|args |Any arguments to pass to the program. The default is none.
-|env |Any environment variables to set. The default is none.
-|===
+`args`::
+Any arguments to pass to the program. The default is none.
+
+`env`::
+Any environment variables to set. The default is none.
 
 [[UpdateHandlersinSolrConfig-TransactionLog]]
 == Transaction Log
@@ -127,15 +129,15 @@ Realtime Get currently relies on the update log feature, which is enabled by def
 
 Three additional expert-level configuration settings affect indexing performance and how far a replica can fall behind on updates before it must enter into full recovery - see the section on <<read-and-write-side-fault-tolerance.adoc#ReadandWriteSideFaultTolerance-WriteSideFaultTolerance,write side fault tolerance>> for more information:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`numRecordsToKeep`::
+The number of update records to keep per log. The default is `100`.
+
+`maxNumLogsToKeep`::
+The maximum number of logs to keep. The default is `10`.
+
+`numVersionBuckets`::
+The number of buckets used to keep track of max version values when checking for re-ordered updates. Increase this value to reduce the cost of synchronizing access to version buckets during high-volume indexing; note that this requires `(8 bytes (long) * numVersionBuckets)` of heap space per Solr core. The default is `65536`.
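As a quick sanity check on the heap formula quoted above, the default bucket count costs about half a mebibyte per core:

```python
# 8 bytes (a Java long) per bucket, per the formula above.
num_version_buckets = 65536        # the default
heap_bytes = 8 * num_version_buckets
print(heap_bytes)                  # 524288 bytes, i.e. 512 KiB per Solr core
```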
 
-[cols="25,10,10,55",options="header"]
-|===
-|Setting Name |Type |Default |Description
-|numRecordsToKeep |int |100 |The number of update records to keep per log
-|maxNumLogsToKeep |int |10 |The maximum number of logs keep
-|numVersionBuckets |int |65536 |The number of buckets used to keep track of max version values when checking for re-ordered updates; increase this value to reduce the cost of synchronizing access to version buckets during high-volume indexing, this requires (8 bytes (long) * numVersionBuckets) of heap space per Solr core.
-|===
 
 An example, to be included under `<config><updateHandler>` in `solrconfig.xml`, employing the above advanced settings:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
index ecd9b4c..fac3cac 100644
--- a/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
+++ b/solr/solr-ref-guide/src/updating-parts-of-documents.adoc
@@ -35,37 +35,22 @@ Solr supports several modifiers that atomically update values of a document. Thi
 
 To use atomic updates, add a modifier to the field that needs to be updated. The content can be updated, added to, or incrementally increased if a number.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Modifier |Usage
-|set a|
+`set`::
 Set or replace the field value(s) with the specified value(s), or remove the values if 'null' or empty list is specified as the new value.
++
+May be specified as a single value, or as a list for multiValued fields.
 
-May be specified as a single value, or as a list for multiValued fields
-
-|add a|
-Adds the specified values to a multiValued field.
-
-May be specified as a single value, or as a list.
-
-|remove a|
-Removes (all occurrences of) the specified values from a multiValued field.
-
-May be specified as a single value, or as a list.
+`add`::
+Adds the specified values to a multiValued field. May be specified as a single value, or as a list.
 
-|removeregex a|
-Removes all occurrences of the specified regex from a multiValued field.
+`remove`::
+Removes (all occurrences of) the specified values from a multiValued field. May be specified as a single value, or as a list.
 
-May be specified as a single value, or as a list.
+`removeregex`::
+Removes all occurrences of the specified regex from a multiValued field. May be specified as a single value, or as a list.
 
-|inc a|
-Increments a numeric value by a specific amount.
-
-Must be specified as a single numeric value.
-
-|===
+`inc`::
+Increments a numeric value by a specific amount. Must be specified as a single numeric value.
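As an illustrative sketch (the field names here are assumptions, not from any example schema), an atomic update sends only the unique key plus a modifier map per field, rather than the whole document. In Python:

```python
import json

doc = {
    "id": "book-1",                    # unique key of the existing document
    "price": {"inc": 5},               # increment a numeric field
    "tags": {"add": ["solr"]},         # append to a multiValued field
    "publisher": {"set": "O'Reilly"},  # replace the stored value(s)
}
payload = json.dumps([doc])  # the JSON update handler accepts a list of docs
print(payload)
```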
 
 [[UpdatingPartsofDocuments-FieldStorage]]
 === Field Storage
@@ -130,20 +115,11 @@ An atomic update operation is performed using this approach only when the fields
 
 To use in-place updates, add a modifier to the field that needs to be updated. The content can be updated or incrementally increased.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Modifier |Usage
-|set a|
-Set or replace the field value(s) with the specified value(s).
-
-May be specified as a single value.
-|inc a|
-Increments a numeric value by a specific amount.
+`set`::
+Set or replace the field value(s) with the specified value(s). May be specified as a single value.
 
-Must be specified as a single numeric value.
-|===
+`inc`::
+Increments a numeric value by a specific amount. Must be specified as a single numeric value.
 
 [[UpdatingPartsofDocuments-Example.1]]
 === Example

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
index 8bad5f5..6a9d350 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-index-handlers.adoc
@@ -74,18 +74,15 @@ For example:
 
 The add command supports some optional attributes which may be specified.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`commitWithin`::
+Add the document within the specified number of milliseconds.
 
-[cols="30,70",options="header"]
-|===
-|Optional Parameter |Parameter Description
-|commitWithin=_number_ |Add the document within the specified number of milliseconds
-|overwrite=_boolean_ |Default is true. Indicates if the unique key constraints should be checked to overwrite previous versions of the same document (see below)
-|===
+`overwrite`::
+Default is `true`. Indicates if the unique key constraints should be checked to overwrite previous versions of the same document (see below).
 
-If the document schema defines a unique key, then by default an `/update` operation to add a document will overwrite (ie: replace) any document in the index with the same unique key. If no unique key has been defined, indexing performance is somewhat faster, as no check has to be made for an existing documents to replace.
+If the document schema defines a unique key, then by default an `/update` operation to add a document will overwrite (i.e., replace) any document in the index with the same unique key. If no unique key has been defined, indexing performance is somewhat faster, as no check has to be made for an existing document to replace.
 
-If you have a unique key field, but you feel confident that you can safely bypass the uniqueness check (eg: you build your indexes in batch, and your indexing code guarantees it never adds the same document more than once) you can specify the `overwrite="false"` option when adding your documents.
+If you have a unique key field, but you feel confident that you can safely bypass the uniqueness check (e.g., you build your indexes in batch, and your indexing code guarantees it never adds the same document more than once) you can specify the `overwrite="false"` option when adding your documents.
 
 [[UploadingDatawithIndexHandlers-XMLUpdateCommands]]
 === XML Update Commands
@@ -101,15 +98,12 @@ The `<optimize>` operation requests Solr to merge internal data structures in or
 
 The `<commit>` and `<optimize>` elements accept these optional attributes:
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`waitSearcher`::
+Default is `true`. Blocks until a new searcher is opened and registered as the main query searcher, making the changes visible.
 
-[cols="30,70",options="header"]
-|===
-|Optional Attribute |Description
-|waitSearcher |Default is true. Blocks until a new searcher is opened and registered as the main query searcher, making the changes visible.
-|expungeDeletes |(commit only) Default is false. Merges segments that have more than 10% deleted docs, expunging them in the process.
-|maxSegments |(optimize only) Default is 1. Merges the segments down to no more than this number of segments.
-|===
+`expungeDeletes`:: (commit only) Default is `false`. Merges segments that have more than 10% deleted docs, expunging them in the process.
+
+`maxSegments`:: (optimize only) Default is `1`. Merges the segments down to no more than this number of segments.
 
 Here are examples of <commit> and <optimize> using optional attributes:
 
@@ -426,29 +420,83 @@ The CSV handler allows the specification of many parameters in the URL in the fo
 
 The table below describes the parameters for the update handler.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="20,40,20,20",options="header"]
-|===
-|Parameter |Usage |Global (g) or Per Field (f) |Example
-|separator |Character used as field separator; default is "," |g,(f: see split) |separator=%09
-|trim |If true, remove leading and trailing whitespace from values. Default=false. |g,f |f.isbn.trim=true trim=false
-|header |Set to true if first line of input contains field names. These will be used if the *fieldnames* parameter is absent. |g |
-|fieldnames |Comma separated list of field names to use when adding documents. |g |fieldnames=isbn,price,title
-|literal.<field_name> |A literal value for a specified field name. |g |literal.color=red
-|skip |Comma separated list of field names to skip. |g |skip=uninteresting,shoesize
-|skipLines |Number of lines to discard in the input stream before the CSV data starts, including the header, if present. Default=0. |g |skipLines=5
-|encapsulator |The character optionally used to surround values to preserve characters such as the CSV separator or whitespace. This standard CSV format handles the encapsulator itself appearing in an encapsulated value by doubling the encapsulator. |g,(f: see split) |encapsulator="
-|escape |The character used for escaping CSV separators or other reserved characters. If an escape is specified, the encapsulator is not used unless also explicitly specified since most formats use either encapsulation or escaping, not both |g |escape=\
-|keepEmpty |Keep and index zero length (empty) fields. Default=false. |g,f |f.price.keepEmpty=true
-|map |Map one value to another. Format is value:replacement (which can be empty.) |g,f |map=left:right f.subject.map=history:bunk
-|split |If true, split a field into multiple values by a separate parser. |f |
-|overwrite |If true (the default), check for and overwrite duplicate documents, based on the uniqueKey field declared in the Solr schema. If you know the documents you are indexing do not contain any duplicates then you may see a considerable speed up setting this to false. |g |
-|commit |Issues a commit after the data has been ingested. |g |
-|commitWithin |Add the document within the specified number of milliseconds. |g |commitWithin=10000
-|rowid |Map the rowid (line number) to a field specified by the value of the parameter, for instance if your CSV doesn't have a unique key and you want to use the row id as such. |g |rowid=id
-|rowidOffset |Add the given offset (as an int) to the rowid before adding it to the document. Default is 0 |g |rowidOffset=10
-|===
+`separator`::
+Character used as field separator; default is ",". This parameter is global; for per-field usage, see the `split` parameter.
++
+Example: `separator=%09`
+
+`trim`::
+If `true`, remove leading and trailing whitespace from values. The default is `false`. This parameter can be either global or per-field.
++
+Examples: `f.isbn.trim=true` or `trim=false`
+
+`header`::
+Set to `true` if the first line of input contains field names. These will be used if the `fieldnames` parameter is absent. This parameter is global.
+
+`fieldnames`::
+Comma-separated list of field names to use when adding documents. This parameter is global.
++
+Example: `fieldnames=isbn,price,title`
+
+`literal._field_name_`::
+A literal value for a specified field name. This parameter is global.
++
+Example: `literal.color=red`
+
+`skip`::
+Comma-separated list of field names to skip. This parameter is global.
++
+Example: `skip=uninteresting,shoesize`
+
+`skipLines`::
+Number of lines to discard in the input stream before the CSV data starts, including the header, if present. The default is `0`. This parameter is global.
++
+Example: `skipLines=5`
+
+`encapsulator`:: The character optionally used to surround values to preserve characters such as the CSV separator or whitespace. This standard CSV format handles the encapsulator itself appearing in an encapsulated value by doubling the encapsulator.
++
+This parameter is global; for per-field usage, see `split`.
++
+Example: `encapsulator="`
+
+`escape`:: The character used for escaping CSV separators or other reserved characters. If an escape is specified, the encapsulator is not used unless also explicitly specified, since most formats use either encapsulation or escaping, not both. This parameter is global.
++
+Example: `escape=\`
+
+`keepEmpty`::
+Keep and index zero length (empty) fields. The default is `false`. This parameter can be global or per-field.
++
+Example: `f.price.keepEmpty=true`
+
+`map`:: Map one value to another. Format is value:replacement (which can be empty). This parameter can be global or per-field.
++
+Example: `map=left:right` or `f.subject.map=history:bunk`
+
+`split`::
+If `true`, split a field into multiple values by a separate parser. This parameter is used on a per-field basis.
+
+`overwrite`::
+If `true` (the default), check for and overwrite duplicate documents, based on the uniqueKey field declared in the Solr schema. If you know the documents you are indexing do not contain any duplicates then you may see a considerable speed up setting this to `false`.
++
+This parameter is global.
+
+`commit`::
+Issues a commit after the data has been ingested. This parameter is global.
+
+`commitWithin`::
+Add the document within the specified number of milliseconds. This parameter is global.
++
+Example: `commitWithin=10000`
+
+`rowid`::
+Map the `rowid` (line number) to a field specified by the value of the parameter, for instance if your CSV doesn't have a unique key and you want to use the row id as such. This parameter is global.
++
+Example: `rowid=id`
+
+`rowidOffset`::
+Add the given offset (as an integer) to the `rowid` before adding it to the document. Default is `0`. This parameter is global.
++
+Example: `rowidOffset=10`
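To show how the global and per-field forms combine, this Python sketch builds (but does not send) a CSV update request; the host, core, and field names are illustrative assumptions:

```python
from urllib.parse import urlencode

params = {
    "separator": "\t",            # tab-delimited input; encodes as %09
    "fieldnames": "isbn,price,title",
    "f.price.keepEmpty": "true",  # per-field form: f.<fieldname>.<param>
    "rowid": "id",                # use the CSV line number as the unique key
    "commit": "true",
}
url = "http://localhost:8983/solr/techproducts/update/csv?" + urlencode(params)
print(url)
```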
 
 [[UploadingDatawithIndexHandlers-IndexingTab-Delimitedfiles]]
 === Indexing Tab-Delimited files

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
index 670ef2b..8096e8c 100644
--- a/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
+++ b/solr/solr-ref-guide/src/uploading-data-with-solr-cell-using-apache-tika.adoc
@@ -101,41 +101,73 @@ This command allows you to query the document using an attribute, as in: `\http:
 
 The table below describes the parameters accepted by the Extracting Request Handler.
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
-
-[cols="30,70",options="header"]
-|===
-|Parameter |Description
-|capture |Captures XHTML elements with the specified name for a supplementary addition to the Solr document. This parameter can be useful for copying chunks of the XHTML into a separate field. For instance, it could be used to grab paragraphs (`<p>`) and index them into a separate field. Note that content is still also captured into the overall "content" field.
-|captureAttr |Indexes attributes of the Tika XHTML elements into separate fields, named after the element. If set to true, for example, when extracting from HTML, Tika can return the href attributes in <a> tags as fields named "a". See the examples below.
-|commitWithin |Add the document within the specified number of milliseconds.
-|date.formats |Defines the date format patterns to identify in the documents.
-|defaultField |If the uprefix parameter (see below) is not specified and a field cannot be determined, the default field will be used.
-|extractOnly |Default is false. If true, returns the extracted content from Tika without indexing the document. This literally includes the extracted XHTML as a string in the response. When viewing manually, it may be useful to use a response format other than XML to aid in viewing the embedded XHTML tags.For an example, see http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
-|extractFormat |Default is "xml", but the other option is "text". Controls the serialization format of the extract content. The xml format is actually XHTML, the same format that results from passing the `-x` command to the Tika command line application, while the text format is like that produced by Tika's `-t` command. This parameter is valid only if `extractOnly` is set to true.
-|fmap.<__source_field__> |Maps (moves) one field name to another. The `source_field` must be a field in incoming documents, and the value is the Solr field to map to. Example: `fmap.content=text` causes the data in the `content` field generated by Tika to be moved to the Solr's `text` field.
-|ignoreTikaException |If true, exceptions found during processing will be skipped. Any metadata available, however, will be indexed.
-|literal.<__fieldname__> |Populates a field with the name supplied with the specified value for each document. The data can be multivalued if the field is multivalued.
-|literalsOverride |If true (the default), literal field values will override other values with the same field name. If false, literal values defined with `literal.<__fieldname__>` will be appended to data already in the fields extracted from Tika. If setting `literalsOverride` to "false", the field must be multivalued.
-|lowernames |Values are "true" or "false". If true, all field names will be mapped to lowercase with underscores, if needed. For example, "Content-Type" would be mapped to "content_type."
-|multipartUploadLimitInKB |Useful if uploading very large documents, this defines the KB size of documents to allow.
-|passwordsFile |Defines a file path and name for a file of file name to password mappings.
-|resource.name |Specifies the optional name of the file. Tika can use it as a hint for detecting a file's MIME type.
-|resource.password |Defines a password to use for a password-protected PDF or OOXML file
-|tika.config |Defines a file path and name to a customized Tika configuration file. This is only required if you have customized your Tika implementation.
-|uprefix |Prefixes all fields that are not defined in the schema with the given prefix. This is very useful when combined with dynamic field definitions. Example: `uprefix=ignored_` would effectively ignore all unknown fields generated by Tika given the example schema contains `<dynamicField name="ignored_*" type="ignored"/>`
-|xpath |When extracting, only return Tika XHTML content that satisfies the given XPath expression. See http://tika.apache.org/1.7/index.html for details on the format of Tika XHTML. See also http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
-|===
+`capture`::
+Captures XHTML elements with the specified name for a supplementary addition to the Solr document. This parameter can be useful for copying chunks of the XHTML into a separate field. For instance, it could be used to grab paragraphs (`<p>`) and index them into a separate field. Note that content is still also captured into the overall "content" field.
+
+`captureAttr`::
+Indexes attributes of the Tika XHTML elements into separate fields, named after the element. For example, if set to `true` when extracting from HTML, Tika can return the `href` attributes in `<a>` tags as fields named "a". See the examples below.
+
+`commitWithin`::
+Add the document within the specified number of milliseconds.
+
+`date.formats`::
+Defines the date format patterns to identify in the documents.
+
+`defaultField`::
+If the `uprefix` parameter (see below) is not specified and a field cannot be determined, the default field will be used.
+
+`extractOnly`::
+Default is `false`. If `true`, returns the extracted content from Tika without indexing the document. This literally includes the extracted XHTML as a string in the response. When viewing manually, it may be useful to use a response format other than XML to aid in viewing the embedded XHTML tags. For an example, see http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
+
+`extractFormat`::
+The default is `xml`, but the other option is `text`. Controls the serialization format of the extract content. The `xml` format is actually XHTML, the same format that results from passing the `-x` command to the Tika command line application, while the text format is like that produced by Tika's `-t` command. This parameter is valid only if `extractOnly` is set to true.
+
+`fmap._source_field_`::
+Maps (moves) one field name to another. The `source_field` must be a field in incoming documents, and the value is the Solr field to map to. Example: `fmap.content=text` causes the data in the `content` field generated by Tika to be moved to Solr's `text` field.
+
+`ignoreTikaException`::
+If `true`, exceptions found during processing will be skipped. Any metadata available, however, will be indexed.
+
+`literal._fieldname_`::
+Populates a field with the name supplied with the specified value for each document. The data can be multivalued if the field is multivalued.
+
+`literalsOverride`::
+If `true` (the default), literal field values will override other values with the same field name. If `false`, literal values defined with `literal._fieldname_` will be appended to data already in the fields extracted from Tika. If setting `literalsOverride` to `false`, the field must be multivalued.
+
+`lowernames`::
+Values are `true` or `false`. If `true`, all field names will be mapped to lowercase with underscores, if needed. For example, "Content-Type" would be mapped to "content_type."
+
+`multipartUploadLimitInKB`::
+Useful if uploading very large documents, this defines the maximum size in KB of documents to allow.
+
+`passwordsFile`::
+Defines the path and name of a file containing file name to password mappings.
+
+`resource.name`::
+Specifies the optional name of the file. Tika can use it as a hint for detecting a file's MIME type.
+
+`resource.password`::
+Defines a password to use for a password-protected PDF or OOXML file.
+
+`tika.config`::
+Defines a file path and name to a customized Tika configuration file. This is only required if you have customized your Tika implementation.
+
+`uprefix`::
+Prefixes all fields that are not defined in the schema with the given prefix. This is very useful when combined with dynamic field definitions. Example: `uprefix=ignored_` would effectively ignore all unknown fields generated by Tika, given that the example schema contains `<dynamicField name="ignored_*" type="ignored"/>`.
+
+`xpath`::
+When extracting, only return Tika XHTML content that satisfies the given XPath expression. See http://tika.apache.org/1.7/index.html for details on the format of Tika XHTML. See also http://wiki.apache.org/solr/TikaExtractOnlyExampleOutput.
+
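As a hedged sketch of how these parameters are typically combined (the host, the collection name `techproducts`, and the literal value `doc1` are illustrative assumptions, not part of this page), an extract request URL can be built like this:

```python
# Sketch: assembling an Extracting Request Handler URL with the
# parameters described above. Host and collection name are hypothetical.
from urllib.parse import urlencode

params = {
    "literal.id": "doc1",      # supply the uniqueKey explicitly
    "uprefix": "ignored_",     # prefix unknown Tika-generated fields
    "fmap.content": "text",    # move Tika's "content" field into "text"
    "captureAttr": "true",     # index XHTML attributes as separate fields
    "lowernames": "true",      # lowercase field names with underscores
}
url = "http://localhost:8983/solr/techproducts/update/extract?" + urlencode(params)
print(url)
```

The document itself would be POSTed as the request body (e.g. a PDF); this only shows how the extraction parameters compose on the URL.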
 
 [[UploadingDatawithSolrCellusingApacheTika-OrderofOperations]]
 == Order of Operations
 
 Here is the order in which the Solr Cell framework, using the Extracting Request Handler and Tika, processes its input.
 
-1.  Tika generates fields or passes them in as literals specified by `literal.<fieldname>=<value>`. If `literalsOverride=false`, literals will be appended as multi-value to the Tika-generated field.
-2.  If `lowernames=true`, Tika maps fields to lowercase.
-3.  Tika applies the mapping rules specified by `fmap.__source__=__target__` parameters.
-4.  If `uprefix` is specified, any unknown field names are prefixed with that value, else if `defaultField` is specified, any unknown fields are copied to the default field.
+.  Tika generates fields or passes them in as literals specified by `literal.<fieldname>=<value>`. If `literalsOverride=false`, literals will be appended as multi-value to the Tika-generated field.
+.  If `lowernames=true`, Tika maps fields to lowercase.
+.  Tika applies the mapping rules specified by `fmap._source_=_target_` parameters.
+.  If `uprefix` is specified, any unknown field names are prefixed with that value, else if `defaultField` is specified, any unknown fields are copied to the default field.
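The field-name steps above (literals aside) can be sketched in a few lines of illustrative Python. This is not Solr's actual implementation, and the example field names and schema are made up; it only mirrors the documented order: lowercase first, then `fmap` rules, then `uprefix`/`defaultField` for unknown fields.

```python
# Illustrative sketch of the Solr Cell field-name pipeline described above.
def rename_field(name, schema_fields, lowernames=True,
                 fmap=None, uprefix=None, default_field=None):
    if lowernames:                    # step 2: lowercase, with underscores
        name = name.lower().replace("-", "_")
    if fmap and name in fmap:         # step 3: apply fmap.source=target rules
        name = fmap[name]
    if name not in schema_fields:     # step 4: handle unknown fields
        if uprefix:
            name = uprefix + name
        elif default_field:
            name = default_field
    return name

# "Content-Type" is unknown to the schema, so it ends up prefixed:
print(rename_field("Content-Type", {"text"},
                   fmap={"content": "text"}, uprefix="ignored_"))
```

Under these assumptions, `Content-Type` becomes `content_type` and, being absent from the schema, is stored as `ignored_content_type`.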
 
 [[UploadingDatawithSolrCellusingApacheTika-ConfiguringtheSolrExtractingRequestHandler]]
 == Configuring the Solr ExtractingRequestHandler
@@ -194,7 +226,7 @@ You may also need to adjust the `multipartUploadLimitInKB` attribute as follows
 ----
 
 [[UploadingDatawithSolrCellusingApacheTika-Parserspecificproperties]]
-=== Parser specific properties
+=== Parser-Specific Properties
 
 Parsers used by Tika may have specific properties to govern how data is extracted. For instance, when using the Tika library from a Java program, the PDFParserConfig class has a method setSortByPosition(boolean) that can extract vertically oriented text. To access that method via configuration with the ExtractingRequestHandler, one can add the parseContext.config property to the solrconfig.xml file (see above) and then set properties in Tika's PDFParserConfig as below. Consult the Tika Java API documentation for configuration parameters that can be set for any particular parsers that require this level of control.
 
@@ -241,16 +273,18 @@ As mentioned before, Tika produces metadata about the document. Metadata describ
 
 In addition to Tika's metadata, Solr adds the following metadata (defined in `ExtractingMetadataConstants`):
 
-// TODO: Change column width to %autowidth.spread when https://github.com/asciidoctor/asciidoctor-pdf/issues/599 is fixed
+`stream_name`::
+The name of the Content Stream as uploaded to Solr. Depending on how the file is uploaded, this may or may not be set.
+
+`stream_source_info`::
+Any source info about the stream. (See the section on Content Streams later in this section.)
+
+`stream_size`::
+The size of the stream in bytes.
+
+`stream_content_type`::
+The content type of the stream, if available.
 
-[cols="30,70",options="header"]
-|===
-|Solr Metadata |Description
-|stream_name |The name of the Content Stream as uploaded to Solr. Depending on how the file is uploaded, this may or may not be set
-|stream_source_info |Any source info about the stream. (See the section on Content Streams later in this section.)
-|stream_size |The size of the stream in bytes.
-|stream_content_type |The content type of the stream, if available.
-|===
 
 [IMPORTANT]
 ====

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/bf26608f/solr/solr-ref-guide/src/v2-api.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/v2-api.adoc b/solr/solr-ref-guide/src/v2-api.adoc
index 51357ab..6906b1c 100644
--- a/solr/solr-ref-guide/src/v2-api.adoc
+++ b/solr/solr-ref-guide/src/v2-api.adoc
@@ -30,9 +30,9 @@ For now the two API styles will coexist, and all the old APIs will continue to w
 
 The old API and the v2 API differ in three principal ways:
 
-1.  Command format: The old API commands and associated parameters are provided through URL request parameters on HTTP GET requests, while in the v2 API most API commands are provided via a JSON body POST'ed to v2 API endpoints. The v2 API also supports HTTP methods GET and DELETE where appropriate.
-2.  Endpoint structure: The v2 API endpoint structure has been rationalized and regularized.
-3.  Documentation: The v2 APIs are self-documenting: append `/_introspect` to any valid v2 API path and the API specification will be returned in JSON format.
+.  Command format: The old API commands and associated parameters are provided through URL request parameters on HTTP GET requests, while in the v2 API most API commands are provided via a JSON body POST'ed to v2 API endpoints. The v2 API also supports HTTP methods GET and DELETE where appropriate.
+.  Endpoint structure: The v2 API endpoint structure has been rationalized and regularized.
+.  Documentation: The v2 APIs are self-documenting: append `/_introspect` to any valid v2 API path and the API specification will be returned in JSON format.
 
 [[v2API-v2APIPathPrefixes]]
 == v2 API Path Prefixes
@@ -43,15 +43,15 @@ Following are some v2 API URL paths and path prefixes, along with some of the op
 |===
 |Path prefix |Some Supported Operations
 |`/v2/collections` or equivalently: `/v2/c` |Create, alias, backup, and restore a collection.
-|`/v2/c/__collection-name__/update` |Update requests.
-|`/v2/c/__collection-name__/config` |Configuration requests.
-|`/v2/c/__collection-name__/schema` |Schema requests.
-|`/v2/c/__collection-name__/__handler-name__` |Handler-specific requests.
-|`/v2/c/__collection-name__/shards` |Split a shard, create a shard, add a replica.
-|`/v2/c/__collection-name__/shards/___shard-name___` |Delete a shard, force leader election
-|`/v2/c/__collection-name__/shards/___shard-name____/____replica-name___` |Delete a replica.
+|`/v2/c/_collection-name_/update` |Update requests.
+|`/v2/c/_collection-name_/config` |Configuration requests.
+|`/v2/c/_collection-name_/schema` |Schema requests.
+|`/v2/c/_collection-name_/_handler-name_` |Handler-specific requests.
+|`/v2/c/_collection-name_/shards` |Split a shard, create a shard, add a replica.
+|`/v2/c/_collection-name_/shards/_shard-name_` |Delete a shard, force leader election.
+|`/v2/c/_collection-name_/shards/_shard-name_/_replica-name_` |Delete a replica.
 |`/v2/cores` |Create a core.
-|`/v2/cores/__core-name__` |Reload, rename, delete, and unload a core.
+|`/v2/cores/_core-name_` |Reload, rename, delete, and unload a core.
 |`/v2/node` |Perform overseer operation, rejoin leader election.
 |`/v2/cluster` |Add role, remove role, set cluster property.
 |`/v2/c/.system/blob` |Upload and download blobs and metadata.
@@ -68,7 +68,7 @@ To limit the introspect output to include just one particular HTTP method, add r
 
 `\http://localhost:8983/v2/c/_introspect?method=POST`
 
-Most endpoints support commands provided in a body sent via POST. To limit the introspect output to only one command, add request param `command=__command-name__` .
+Most endpoints support commands provided in a body sent via POST. To limit the introspect output to only one command, add the request parameter `command=_command-name_`.
 
 `\http://localhost:8983/v2/c/gettingstarted/_introspect?method=POST&command=modify`