Posted to commits@lucene.apache.org by da...@apache.org on 2017/07/13 07:18:37 UTC

[29/41] lucene-solr:feature/autoscaling: SOLR-11050: remove Confluence-style anchors and fix all incoming links

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-solr-from-ruby.adoc b/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
index ef5454c..0b70336 100644
--- a/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
+++ b/solr/solr-ref-guide/src/using-solr-from-ruby.adoc
@@ -18,7 +18,7 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr has an optional Ruby response format that extends the <<response-writers.adoc#ResponseWriters-JSONResponseWriter,JSON output>> to allow the response to be safely eval'd by Ruby's interpreter
+Solr has an optional Ruby response format that extends the <<response-writers.adoc#json-response-writer,JSON output>> to allow the response to be safely eval'd by Ruby's interpreter.
 
 This Ruby response format differs from JSON in the following ways:
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc b/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
index c7a9bd7..31b49f2 100644
--- a/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
+++ b/solr/solr-ref-guide/src/using-zookeeper-to-manage-configuration-files.adoc
@@ -76,7 +76,7 @@ To update or change your SolrCloud configuration files:
 
 == Preparing ZooKeeper before First Cluster Start
 
-If you will share the same ZooKeeper instance with other applications you should use a _chroot_ in ZooKeeper. Please see <<taking-solr-to-production.adoc#TakingSolrtoProduction-ZooKeeperchroot,ZooKeeper chroot>> for instructions.
+If you will share the same ZooKeeper instance with other applications, you should use a _chroot_ in ZooKeeper. Please see <<taking-solr-to-production.adoc#zookeeper-chroot,ZooKeeper chroot>> for instructions.
 
 Certain configuration files contain cluster-wide configuration. Since some of these are crucial for the cluster to function properly, you may need to upload such files to ZooKeeper before starting your Solr cluster for the first time. Examples of such configuration files (not exhaustive) are `solr.xml`, `security.json` and `clusterprops.json`.
 

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/velocity-search-ui.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/velocity-search-ui.adoc b/solr/solr-ref-guide/src/velocity-search-ui.adoc
index 0cb4697..cc2fb47 100644
--- a/solr/solr-ref-guide/src/velocity-search-ui.adoc
+++ b/solr/solr-ref-guide/src/velocity-search-ui.adoc
@@ -18,11 +18,11 @@
 // specific language governing permissions and limitations
 // under the License.
 
-Solr includes a sample search UI based on the <<response-writers.adoc#ResponseWriters-VelocityResponseWriter,VelocityResponseWriter>> (also known as Solritas) that demonstrates several useful features, such as searching, faceting, highlighting, autocomplete, and geospatial searching.
+Solr includes a sample search UI based on the <<response-writers.adoc#velocity-writer,VelocityResponseWriter>> (also known as Solritas) that demonstrates several useful features, such as searching, faceting, highlighting, autocomplete, and geospatial searching.
 
 When using the `sample_techproducts_configs` config set, you can access the Velocity sample Search UI: `\http://localhost:8983/solr/techproducts/browse`
 
 .The Velocity Search UI
 image::images/velocity-search-ui/techproducts_browse.png[image,width=500]
 
-For more information about the Velocity Response Writer, see the <<response-writers.adoc#ResponseWriters-VelocityResponseWriter,Response Writer page>>.
+For more information about the Velocity Response Writer, see the <<response-writers.adoc#velocity-writer,Response Writer page>>.

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
index c85cfa6..9208775 100644
--- a/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
+++ b/solr/solr-ref-guide/src/working-with-currencies-and-exchange-rates.adoc
@@ -75,7 +75,7 @@ In the above example, the raw amount field will use the `"*_l_ns"` dynamic field
 .Atomic Updates won't work if dynamic sub-fields are stored
 [NOTE]
 ====
-As noted on <<updating-parts-of-documents.adoc#UpdatingPartsofDocuments-FieldStorage,Updating Parts of Documents>>, stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates. To avoid this problem, specify `stored="false"` on those dynamic fields.
+As noted on <<updating-parts-of-documents.adoc#field-storage,Updating Parts of Documents>>, stored dynamic sub-fields will cause indexing to fail when you use Atomic Updates. To avoid this problem, specify `stored="false"` on those dynamic fields.
 ====
 
 == Exchange Rates

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/working-with-dates.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-dates.adoc b/solr/solr-ref-guide/src/working-with-dates.adoc
index 31d0f1f..5f28f61 100644
--- a/solr/solr-ref-guide/src/working-with-dates.adoc
+++ b/solr/solr-ref-guide/src/working-with-dates.adoc
@@ -18,7 +18,6 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[WorkingwithDates-DateFormatting]]
 == Date Formatting
 
 Solr's date fields (`TrieDateField`, `DatePointField` and `DateRangeField`) represent "dates" as a point in time with millisecond precision. The format used is a restricted form of the canonical representation of dateTime in the http://www.w3.org/TR/xmlschema-2/#dateTime[XML Schema specification] – a restricted subset of https://en.wikipedia.org/wiki/ISO_8601[ISO-8601]. For those familiar with Java 8, Solr uses https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT[DateTimeFormatter.ISO_INSTANT] for both formatting and parsing, with "leniency".
@@ -48,7 +47,6 @@ There must be a leading `'-'` for dates prior to year 0000, and Solr will format
 .Query escaping may be required
 [WARNING]
 ====
-
 As you can see, the date format includes colon characters separating the hours, minutes, and seconds. Because the colon is a special character to Solr's most common query parsers, escaping is sometimes required, depending on exactly what you are trying to do.
 
 This is normally an invalid query: `datefield:1972-05-20T17:33:18.772Z`
@@ -57,10 +55,8 @@ These are valid queries: +
 `datefield:1972-05-20T17\:33\:18.772Z` +
 `datefield:"1972-05-20T17:33:18.772Z"` +
 `datefield:[1972-05-20T17:33:18.772Z TO *]`
-
 ====
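The escaping rule above can be sketched in a few lines of plain Python (an illustration, not a Solr API): backslash-escape each colon so the date literal survives the standard query parsers.

```python
def escape_solr_date(date_str):
    """Backslash-escape colons so a date literal parses in a Solr query."""
    return date_str.replace(":", "\\:")

print(escape_solr_date("1972-05-20T17:33:18.772Z"))
# -> 1972-05-20T17\:33\:18.772Z
```

Quoting the whole value (`datefield:"1972-05-20T17:33:18.772Z"`) is usually simpler; escaping is handy when the value is embedded somewhere quotes would be awkward.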
 
-[[WorkingwithDates-DateRangeFormatting]]
 === Date Range Formatting
 
 Solr's `DateRangeField` supports the same point in time date syntax described above (with _date math_ described below) and more to express date ranges. One class of examples is truncated dates, which represent the entire date span to the precision indicated. The other class uses the range syntax (`[ TO ]`). Here are some examples:
@@ -74,12 +70,10 @@ Solr's `DateRangeField` supports the same point in time date syntax described ab
 
 Limitations: The range syntax doesn't support embedded date math. If you specify a date instance supported by TrieDateField with date math truncating it, like `NOW/DAY`, you still get the first millisecond of that day, not the entire day's range. Exclusive ranges (using `{` & `}`) work in _queries_ but not for _indexing_ ranges.
 
-[[WorkingwithDates-DateMath]]
 == Date Math
 
 Solr's date field types also support _date math_ expressions, which make it easy to create times relative to fixed moments in time, including the current time, which can be represented using the special value of "```NOW```".
 
-[[WorkingwithDates-DateMathSyntax]]
 === Date Math Syntax
 
 Date math expressions consist of either adding some quantity of time in a specified unit, or rounding the current time to a specified unit. Expressions can be chained and are evaluated left to right.
@@ -104,10 +98,8 @@ Note that while date math is most commonly used relative to `NOW` it can be appl
 
 `1972-05-20T17:33:18.772Z+6MONTHS+3DAYS/DAY`
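The left-to-right evaluation described above can be sketched with a toy evaluator (plain Python, not Solr code; it supports only a few units and omits MONTHS/YEARS arithmetic):

```python
import re
from datetime import datetime, timedelta, timezone

# Units this toy evaluator understands (Solr supports more, e.g. MONTHS, YEARS).
UNITS = {"DAY": "days", "DAYS": "days", "HOUR": "hours", "HOURS": "hours",
         "MINUTE": "minutes", "MINUTES": "minutes"}

def date_math(base, expr):
    """Evaluate a subset of Solr date math left to right: +/- adds, / rounds down."""
    for op, num, unit in re.findall(r"([+\-/])(\d*)([A-Z]+)", expr):
        if op == "/":  # rounding truncates to the start of the unit
            if unit == "DAY":
                base = base.replace(hour=0, minute=0, second=0, microsecond=0)
            elif unit == "HOUR":
                base = base.replace(minute=0, second=0, microsecond=0)
            elif unit == "MINUTE":
                base = base.replace(second=0, microsecond=0)
        else:
            delta = timedelta(**{UNITS[unit]: int(num)})
            base = base + delta if op == "+" else base - delta
    return base

base = datetime(1972, 5, 20, 17, 33, 18, tzinfo=timezone.utc)
print(date_math(base, "+3DAYS/DAY"))
# -> 1972-05-23 00:00:00+00:00
```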
 
-[[WorkingwithDates-RequestParametersThatAffectDateMath]]
 === Request Parameters That Affect Date Math
 
-[[WorkingwithDates-NOW]]
 ==== NOW
 
 The `NOW` parameter is used internally by Solr to ensure consistent date math expression parsing across multiple nodes in a distributed request. But it can also be specified to instruct Solr to use an arbitrary moment in time (past or future) in place of the special value of "```NOW```" wherever it would impact date math expressions.
@@ -118,7 +110,6 @@ Example:
 
 `q=solr&fq=start_date:[* TO NOW]&NOW=1384387200000`
 
-[[WorkingwithDates-TZ]]
 ==== TZ
 
 By default, all date math expressions are evaluated relative to the UTC TimeZone, but the `TZ` parameter can be specified to override this behaviour, forcing all date-based addition and rounding to be relative to the specified http://docs.oracle.com/javase/8/docs/api/java/util/TimeZone.html[time zone].
@@ -161,7 +152,6 @@ http://localhost:8983/solr/my_collection/select?q=*:*&facet.range=my_date_field&
 ...
 ----
 
-[[WorkingwithDates-MoreDateRangeFieldDetails]]
 == More DateRangeField Details
 
 `DateRangeField` is almost a drop-in replacement for places where `TrieDateField` is used. The only difference is that Solr's XML or SolrJ response formats will expose the stored data as a String instead of a Date. The underlying index data for this field will be a bit larger. Queries that align to units of time of a second or coarser should be faster than TrieDateField, especially in UTC. But the main point of DateRangeField, as its name suggests, is to allow indexing date ranges. To do that, simply supply strings in the format shown above. It also supports specifying 3 different relational predicates between the indexed data and the query range: `Intersects` (default), `Contains`, `Within`. You can specify the predicate in queries via the `op` local-params parameter like so:

http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/8b65515f/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
----------------------------------------------------------------------
diff --git a/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc b/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
index 3aa0195..ac42636 100644
--- a/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
+++ b/solr/solr-ref-guide/src/working-with-external-files-and-processes.adoc
@@ -18,7 +18,6 @@
 // specific language governing permissions and limitations
 // under the License.
 
-[[WorkingwithExternalFilesandProcesses-TheExternalFileFieldType]]
 == The ExternalFileField Type
 
 The `ExternalFileField` type makes it possible to specify the values for a field in a file outside the Solr index. For such a field, the file contains mappings from a key field to the field value. Another way to think of this is that, instead of specifying the field in documents as they are indexed, Solr finds values for this field in the external file.
@@ -41,7 +40,6 @@ The `keyField` attribute defines the key that will be defined in the external fi
 
 The `valType` attribute specifies the actual type of values that will be found in the file. The type specified must be a float-based field type, so valid values for this attribute are `pfloat`, `float`, or `tfloat`. This attribute can be omitted.
 
-[[WorkingwithExternalFilesandProcesses-FormatoftheExternalFile]]
 === Format of the External File
 
 The file itself is located in Solr's index directory, which by default is `$SOLR_HOME/data`. The name of the file should be `external___fieldname__` or `external___fieldname__.*`. For the example above, then, the file could be named `external_entryRankFile` or `external_entryRankFile.txt`.
@@ -62,10 +60,9 @@ doc40=42
 
 The keys listed in this file do not need to be unique. The file does not need to be sorted, but Solr will be able to perform the lookup faster if it is.
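As an illustration (a plain Python sketch with invented keys and values), the `external_entryRankFile` file from the example above could be generated like this; writing the keys in sorted order lets Solr perform the lookup faster:

```python
# Invented example scores, keyed by the documents' keyField values.
scores = {"doc40": 42.0, "doc33": 1.414, "doc34": 3.14}

# Write one key=value pair per line, sorted so Solr can look values up faster.
with open("external_entryRankFile.txt", "w") as f:
    for key in sorted(scores):
        f.write(f"{key}={scores[key]}\n")

with open("external_entryRankFile.txt") as f:
    lines = f.read().splitlines()
print(lines)
# -> ['doc33=1.414', 'doc34=3.14', 'doc40=42.0']
```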
 
-[[WorkingwithExternalFilesandProcesses-ReloadinganExternalFile]]
 === Reloading an External File
 
-It's possible to define an event listener to reload an external file when either a searcher is reloaded or when a new searcher is started. See the section <<query-settings-in-solrconfig.adoc#QuerySettingsinSolrConfig-Query-RelatedListeners,Query-Related Listeners>> for more information, but a sample definition in `solrconfig.xml` might look like this:
+It's possible to define an event listener to reload an external file when either a searcher is reloaded or when a new searcher is started. See the section <<query-settings-in-solrconfig.adoc#query-related-listeners,Query-Related Listeners>> for more information, but a sample definition in `solrconfig.xml` might look like this:
 
 [source,xml]
 ----
@@ -73,15 +70,14 @@ It's possible to define an event listener to reload an external file when either
 <listener event="firstSearcher" class="org.apache.solr.schema.ExternalFileFieldReloader"/>
 ----
 
-[[WorkingwithExternalFilesandProcesses-ThePreAnalyzedFieldType]]
 == The PreAnalyzedField Type
 
 The `PreAnalyzedField` type provides a way to send serialized token streams to Solr, optionally with independent stored values of a field, and have this information stored and indexed without any additional text processing applied in Solr. This is useful if a user wants to submit field content that was already processed by some existing external text processing pipeline (e.g., it has been tokenized, annotated, stemmed, synonyms inserted, etc.), while using all the rich per-token attributes that Lucene's TokenStream provides.
 
 The serialization format is pluggable using implementations of the PreAnalyzedParser interface. There are two out-of-the-box implementations:
 
-* <<WorkingwithExternalFilesandProcesses-JsonPreAnalyzedParser,JsonPreAnalyzedParser>>: as the name suggests, it parses content that uses JSON to represent field's content. This is the default parser to use if the field type is not configured otherwise.
-* <<WorkingwithExternalFilesandProcesses-SimplePreAnalyzedParser,SimplePreAnalyzedParser>>: uses a simple strict plain text format, which in some situations may be easier to create than JSON.
+* <<JsonPreAnalyzedParser>>: as the name suggests, it parses content that uses JSON to represent a field's content. This is the default parser, used when the field type is not configured otherwise.
+* <<SimplePreAnalyzedParser>>: uses a simple, strict plain-text format, which in some situations may be easier to create than JSON.
 
 There is only one configuration parameter, `parserImpl`. The value of this parameter should be the fully qualified class name of a class that implements the PreAnalyzedParser interface. The default value of this parameter is `org.apache.solr.schema.JsonPreAnalyzedParser`.
 
@@ -97,7 +93,6 @@ By default, the query-time analyzer for fields of this type will be the same as
 </fieldType>
 ----
 
-[[WorkingwithExternalFilesandProcesses-JsonPreAnalyzedParser]]
 === JsonPreAnalyzedParser
 
 This is the default serialization format used by PreAnalyzedField type. It uses a top-level JSON map with the following keys:
@@ -115,8 +110,7 @@ This is the default serialization format used by PreAnalyzedField type. It uses
 
 Any other top-level key is silently ignored.
 
-[[WorkingwithExternalFilesandProcesses-Tokenstreamserialization]]
-==== Token stream serialization
+==== Token Stream Serialization
 
 The token stream is expressed as a JSON list of JSON maps. The map for each token consists of the following keys and values:
 
@@ -136,8 +130,7 @@ The token stream is expressed as a JSON list of JSON maps. The map for each toke
 
 Any other key is silently ignored.
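Putting the keys above together, a minimal payload might be built like this (a Python sketch; the stored value and tokens are invented for illustration, and the JSON string becomes the field's value in the document sent to Solr):

```python
import json

# Hypothetical payload for a PreAnalyzedField using JsonPreAnalyzedParser.
# "v" is the format version, "str" an optional stored value, "tokens" the
# serialized stream; "t" = term text, "s"/"e" = offsets, "i" = position increment.
payload = {
    "v": "1",
    "str": "hello world",
    "tokens": [
        {"t": "hello", "s": 0, "e": 5, "i": 1},
        {"t": "world", "s": 6, "e": 11, "i": 1},
    ],
}

# The document field value sent to Solr is the JSON-encoded payload.
field_value = json.dumps(payload)
print(json.loads(field_value)["tokens"][1]["t"])
# -> world
```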
 
-[[WorkingwithExternalFilesandProcesses-Example]]
-==== Example
+==== JsonPreAnalyzedParser Example
 
 [source,json]
 ----
@@ -152,13 +145,11 @@ Any other key is silently ignored.
 }
 ----
 
-[[WorkingwithExternalFilesandProcesses-SimplePreAnalyzedParser]]
 === SimplePreAnalyzedParser
 
 The fully qualified class name to use when specifying this format via the `parserImpl` configuration parameter is `org.apache.solr.schema.SimplePreAnalyzedParser`.
 
-[[WorkingwithExternalFilesandProcesses-Syntax]]
-==== Syntax
+==== SimplePreAnalyzedParser Syntax
 
 The serialization format supported by this parser is as follows:
 
@@ -192,8 +183,7 @@ Special characters in "text" values can be escaped using the escape character `\
 
 Please note that Unicode sequences (e.g. `\u0001`) are not supported.
 
-[[WorkingwithExternalFilesandProcesses-Supportedattributenames]]
-==== Supported attribute names
+==== Supported Attributes
 
 The following token attributes are supported, and identified with short symbolic names:
 
@@ -212,8 +202,7 @@ The following token attributes are supported, and identified with short symbolic
 
 Token positions are tracked and implicitly added to the token stream: the start and end offsets consider only the term text and whitespace, and exclude the space taken by token attributes.
 
-[[WorkingwithExternalFilesandProcesses-Exampletokenstreams]]
-==== Example token streams
+==== Example Token Streams
 
 // TODO: in cwiki each of these examples was in its own "panel" ... do we want something like that here?
 // TODO: these examples match what was in cwiki, but I'm honestly not sure if the formatting there was correct to start?