Posted to commits@druid.apache.org by vo...@apache.org on 2022/09/19 02:40:46 UTC

[druid] branch master updated: fix html tags in docs (#13117)

This is an automated email from the ASF dual-hosted git repository.

vogievetsky pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new bb0b810b1d fix html tags in docs (#13117)
bb0b810b1d is described below

commit bb0b810b1dc54ef0ea971c8f3c6d4d9e7f64ff6b
Author: Vadim Ogievetsky <va...@ogievetsky.com>
AuthorDate: Sun Sep 18 19:40:33 2022 -0700

    fix html tags in docs (#13117)
    
    * fix html tags in docs
    
    * revert not null
---
 docs/configuration/index.md                        | 6 +++---
 docs/data-management/update.md                     | 4 ++--
 docs/development/extensions-core/druid-kerberos.md | 2 +-
 docs/ingestion/data-formats.md                     | 6 +++---
 docs/ingestion/native-batch-input-source.md        | 4 ++--
 docs/misc/math-expr.md                             | 2 +-
 docs/operations/api-reference.md                   | 4 ++--
 docs/operations/clean-metadata-store.md            | 2 +-
 docs/querying/nested-columns.md                    | 2 +-
 9 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 2e11eefe12..ad46f22819 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -963,7 +963,7 @@ http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?interval=<int
 
 The default value of `interval` can be specified by setting `druid.audit.manager.auditHistoryMillis` (one week if not configured) in the Coordinator `runtime.properties`.
 
-To view last <n> entries of the audit history of Coordinator dynamic config issue a GET request to the URL -
+To view the last `n` entries of the audit history of the Coordinator dynamic config, issue a GET request to the URL:
 
 ```
 http://<COORDINATOR_IP>:<PORT>/druid/coordinator/v1/config/history?count=<n>
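
(As a quick sketch, assuming a Coordinator reachable at localhost on the default port 8081, the endpoint above can be exercised like this:)

```
curl "http://localhost:8081/druid/coordinator/v1/config/history?count=10"
```
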
@@ -1223,7 +1223,7 @@ http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?interval=<interval>
 
 The default value of `interval` can be specified by setting `druid.audit.manager.auditHistoryMillis` (one week if not configured) in the Overlord `runtime.properties`.
 
-To view last <n> entries of the audit history of worker config issue a GET request to the URL -
+To view the last `n` entries of the audit history of the worker config, issue a GET request to the URL:
 
 ```
 http://<OVERLORD_IP>:<port>/druid/indexer/v1/worker/history?count=<n>
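
(Similarly, a sketch for the Overlord endpoint, assuming localhost and the default port 8090; the interval value is an arbitrary ISO-8601 interval:)

```
curl "http://localhost:8090/druid/indexer/v1/worker/history?interval=2022-09-01/2022-09-15"
```
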
@@ -2201,7 +2201,7 @@ Supported query contexts:
 |Property|Description|Default|
 |--------|-----------|-------|
 |`druid.router.defaultBrokerServiceName`|The default Broker to connect to in case service discovery fails.|druid/broker|
-|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|{"_default_tier": "<defaultBrokerServiceName>"}|
+|`druid.router.tierToBrokerMap`|Queries for a certain tier of data are routed to their appropriate Broker. This value should be an ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.|`{"_default_tier": "<defaultBrokerServiceName>"}`|
 |`druid.router.defaultRule`|The default rule for all datasources.|"_default"|
 |`druid.router.pollPeriod`|How often to poll for new rules.|PT1M|
 |`druid.router.sql.enable`|Enable routing of SQL queries using strategies. When `true`, the Router uses the strategies defined in `druid.router.strategies` to determine the broker service for a given SQL query. When `false`, the Router uses the `defaultBrokerServiceName`.|`false`|
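
(For context, a hypothetical `runtime.properties` sketch built from the rows above; the `hot` tier and the `druid/broker-hot` service name are made up for illustration:)

```
druid.router.defaultBrokerServiceName=druid/broker
druid.router.tierToBrokerMap={"hot":"druid/broker-hot","_default_tier":"druid/broker"}
druid.router.pollPeriod=PT1M
```
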
diff --git a/docs/data-management/update.md b/docs/data-management/update.md
index 4eb31f8242..3aa11a7411 100644
--- a/docs/data-management/update.md
+++ b/docs/data-management/update.md
@@ -56,8 +56,8 @@ source](../ingestion/native-batch-input-source.md#druid-input-source). If needed
 [`transformSpec`](../ingestion/ingestion-spec.md#transformspec) can be used to filter or modify data during the
 reindexing job.
 
-With SQL, use [`REPLACE <table> OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM
-<table>`. (Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
+With SQL, use [`REPLACE <table> OVERWRITE`](../multi-stage-query/reference.md#replace) with `SELECT ... FROM <table>`.
+(Druid does not have `UPDATE` or `ALTER TABLE` statements.) Any SQL SELECT query can be used to filter,
 modify, or enrich the data during the reindexing job.
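
(A sketch of that pattern; the `wikipedia` datasource, time range, and filter are made up, while `REPLACE ... OVERWRITE WHERE ... SELECT ... PARTITIONED BY` is the documented shape:)

```
REPLACE INTO "wikipedia"
OVERWRITE WHERE __time >= TIMESTAMP '2022-09-01' AND __time < TIMESTAMP '2022-09-02'
SELECT *
FROM "wikipedia"
WHERE __time >= TIMESTAMP '2022-09-01' AND __time < TIMESTAMP '2022-09-02'
  AND page <> 'Main_Page'
PARTITIONED BY DAY
```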
 
 ## Rolled-up datasources
diff --git a/docs/development/extensions-core/druid-kerberos.md b/docs/development/extensions-core/druid-kerberos.md
index 4828f3b56f..bb0fbb1158 100644
--- a/docs/development/extensions-core/druid-kerberos.md
+++ b/docs/development/extensions-core/druid-kerberos.md
@@ -53,7 +53,7 @@ The configuration examples in the rest of this document will use "kerberos" as t
 |`druid.auth.authenticator.kerberos.serverPrincipal`|`HTTP/_HOST@EXAMPLE.COM`| SPNEGO service principal used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.serverKeytab`|`/etc/security/keytabs/spnego.service.keytab`|SPNego service keytab used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.authToLocal`|`RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT`|Allows you to set a general rule for mapping principal names to local user names. It is used when there is no explicit mapping for the principal name being translated.|DEFAULT|No|
-|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
+|`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|Random value|No|
 |`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|
 
 Note that the SPNego principal used by the druid processes must start with HTTP (as specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
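
(Collecting the example values from the table above into one hypothetical `runtime.properties` sketch; the authorizer name is illustrative:)

```
druid.auth.authenticator.kerberos.serverPrincipal=HTTP/_HOST@EXAMPLE.COM
druid.auth.authenticator.kerberos.serverKeytab=/etc/security/keytabs/spnego.service.keytab
druid.auth.authenticator.kerberos.cookieSignatureSecret=secretString
druid.auth.authenticator.kerberos.authorizerName=kerberos
```
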
diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index 780f6bbf2f..e1b59faa58 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -439,7 +439,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
 | type | String | Set value to `schema_registry`. | no |
 | url | String | Specifies the URL endpoint of the Schema Registry. | yes |
 | capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
 | config | Json | Additional configuration to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
 | headers | Json | Headers to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
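
(A hypothetical sketch of the `urls` variant described in the table; the registry hostnames are invented:)

```
{
  "type": "schema_registry",
  "urls": ["http://registry-1.example.com:8081", "http://registry-2.example.com:8081"],
  "capacity": 1000
}
```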
 
@@ -640,7 +640,7 @@ Each entry in the `fields` list can have the following components:
   | sum()      | Provides the sum value of an array of numbers                       | Double      | &#10003;  |  &#10003;   |   &#10003;   |  &#10003;   |
   | concat(X)  | Provides a concatenated version of the path output with a new item  | like input  | &#10003;  |  &#10007;   |   &#10007;   | &#10007;   |
  | append(X)  | Adds an item to the JSON path output array                          | like input  | &#10003;  |  &#10007;   |   &#10007;   | &#10007;   |
-  | keys()     | Provides the property keys (An alternative for terminal tilde ~)    | Set<E>      | &#10007;  |  &#10007;   |   &#10007;   | &#10007;   |
+  | keys()     | Provides the property keys (An alternative for terminal tilde ~)    | Set<E\>      | &#10007;  |  &#10007;   |   &#10007;   | &#10007;   |
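
(To make the function column concrete, a hypothetical `fields` entry using `sum()` from the table; the field name and JSON path are invented:)

```
{ "type": "path", "name": "scoreSum", "expr": "$.scores.sum()" }
```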
 
 
 ## Parser
@@ -1311,7 +1311,7 @@ For details, see the Schema Registry [documentation](http://docs.confluent.io/cu
 | type | String | Set value to `schema_registry`. | yes |
 | url | String | Specifies the URL endpoint of the Schema Registry. | yes |
 | capacity | Integer | Specifies the max size of the cache (default = Integer.MAX_VALUE). | no |
-| urls | Array<String> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
+| urls | Array<String\> | Specifies the URL endpoints of the multiple Schema Registry instances. | yes (if `url` is not provided) |
 | config | Json | Additional configuration to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
 | headers | Json | Headers to send to the Schema Registry. This can be supplied via a [DynamicConfigProvider](../operations/dynamic-config-provider.md). | no |
 
diff --git a/docs/ingestion/native-batch-input-source.md b/docs/ingestion/native-batch-input-source.md
index 693fbdd8c3..eb847cee71 100644
--- a/docs/ingestion/native-batch-input-source.md
+++ b/docs/ingestion/native-batch-input-source.md
@@ -359,8 +359,8 @@ Sample specs:
 |Property|Description|Default|Required|
 |--------|-----------|-------|---------|
 |type|Set the value to `azure`.|None|yes|
-|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form "azure://\<container>/\<path-to-file\>"|None|`uris` or `prefixes` or `objects` must be set|
-|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form `azure://\<container>/\<prefix\>`. Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
+|uris|JSON array of URIs where the Azure objects to be ingested are located, in the form `azure://<container>/<path-to-file>`|None|`uris` or `prefixes` or `objects` must be set|
+|prefixes|JSON array of URI prefixes for the locations of Azure objects to ingest, in the form `azure://<container>/<prefix>`. Empty objects starting with one of the given prefixes are skipped.|None|`uris` or `prefixes` or `objects` must be set|
 |objects|JSON array of Azure objects to ingest.|None|`uris` or `prefixes` or `objects` must be set|
 |filter|A wildcard filter for files. See [here](http://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/filefilter/WildcardFileFilter) for more information. Files matching the filter criteria are considered for ingestion. Files not matching the filter criteria are ignored.|None|no|
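
(A minimal hypothetical `inputSource` matching the table above; the container and object path are placeholders:)

```
{
  "type": "azure",
  "uris": ["azure://my-container/path/to/file.json"]
}
```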
 
diff --git a/docs/misc/math-expr.md b/docs/misc/math-expr.md
index d58261e61e..27bddb37d0 100644
--- a/docs/misc/math-expr.md
+++ b/docs/misc/math-expr.md
@@ -63,7 +63,7 @@ The following built-in functions are available.
 
 |name|description|
 |----|-----------|
-|cast|cast(expr,'LONG' or 'DOUBLE' or 'STRING' or 'ARRAY<LONG>', or 'ARRAY<DOUBLE>' or 'ARRAY<STRING>') returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
+|cast|cast(expr,LONG or DOUBLE or STRING or ARRAY<LONG\> or ARRAY<DOUBLE\> or ARRAY<STRING\>) returns expr with the specified type. An exception can be thrown. Scalar types may be cast to array types and will take the form of a single-element list (null will still be null). |
 |if|if(predicate,then,else) returns 'then' if 'predicate' evaluates to a positive number, otherwise it returns 'else' |
 |nvl|nvl(expr,expr-for-null) returns 'expr-for-null' if 'expr' is null (or empty string for string type) |
 |like|like(expr, pattern[, escape]) is equivalent to SQL `expr LIKE pattern`|
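
(For example, illustrative uses of `cast` with made-up column names; in a native expression the target type is written as a string literal:)

```
cast(delta, 'DOUBLE')
cast(tags, 'ARRAY<STRING>')
```
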
diff --git a/docs/operations/api-reference.md b/docs/operations/api-reference.md
index d3dec8f0e6..f24fab839f 100644
--- a/docs/operations/api-reference.md
+++ b/docs/operations/api-reference.md
@@ -388,7 +388,7 @@ Returns all rules for a specified datasource and includes default datasource.
 
 * `/druid/coordinator/v1/rules/history?count=<n>`
 
- Returns last <n> entries of audit history of rules for all datasources.
+ Returns the last `n` entries of the audit history of rules for all datasources.
 
 * `/druid/coordinator/v1/rules/{dataSourceName}/history?interval=<interval>`
 
@@ -396,7 +396,7 @@ Returns all rules for a specified datasource and includes default datasource.
 
 * `/druid/coordinator/v1/rules/{dataSourceName}/history?count=<n>`
 
- Returns last <n> entries of audit history of rules for a specified datasource.
+ Returns the last `n` entries of the audit history of rules for a specified datasource.
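
(A sketch, with the same assumptions as the earlier curl examples: a Coordinator at localhost:8081 and a made-up datasource name:)

```
curl "http://localhost:8081/druid/coordinator/v1/rules/wikipedia/history?count=5"
```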
 
 ##### POST
 
diff --git a/docs/operations/clean-metadata-store.md b/docs/operations/clean-metadata-store.md
index 0338d9aa70..c5fa5b6810 100644
--- a/docs/operations/clean-metadata-store.md
+++ b/docs/operations/clean-metadata-store.md
@@ -68,7 +68,7 @@ The cleanup of one entity may depend on the cleanup of another entity as follows
 For details on configuration properties, see [Metadata management](../configuration/index.md#metadata-management).
 If you want to skip the details, check out the [example](#example) for configuring automated metadata cleanup.
 
-<a name="kill-task">
+<a name="kill-task"></a>
 ### Segment records and segments in deep storage (kill task)
 
 > The kill task is the only configuration in this topic that affects actual data in deep storage and not simply metadata or logs.
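
(For orientation, a hypothetical Coordinator `runtime.properties` sketch enabling the kill task discussed here; the property names are the `druid.coordinator.kill.*` family and the values are illustrative:)

```
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
druid.coordinator.kill.durationToRetain=P90D
druid.coordinator.kill.maxSegments=100
```
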
diff --git a/docs/querying/nested-columns.md b/docs/querying/nested-columns.md
index 1d9503eff5..9c131ee27d 100644
--- a/docs/querying/nested-columns.md
+++ b/docs/querying/nested-columns.md
@@ -246,7 +246,7 @@ FROM (
 PARTITIONED BY ALL
 ```
 
-## Ingest a JSON string as COMPLEX\<json>
+## Ingest a JSON string as COMPLEX<json\>
 
 If your source data uses a string representation of your JSON column, you can still ingest the data as `COMPLEX<JSON>` as follows:
 - During native batch ingestion, call the `parse_json` function in a `transform` object in the `transformSpec`.
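
(A hypothetical sketch of that `transformSpec` usage; the column names are invented:)

```
"transformSpec": {
  "transforms": [
    { "type": "expression", "name": "nested_col", "expression": "parse_json(nested_col_as_string)" }
  ]
}
```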

