Posted to commits@druid.apache.org by vi...@apache.org on 2023/08/08 22:49:36 UTC

[druid] branch master updated: document expression aggregator (#14497)

This is an automated email from the ASF dual-hosted git repository.

victoria pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 667e4dab5e document expression aggregator (#14497)
667e4dab5e is described below

commit 667e4dab5e468f5d1732ac3473498d9cb8e339f6
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Tue Aug 8 15:49:29 2023 -0700

    document expression aggregator (#14497)
---
 docs/querying/aggregations.md | 404 ++++++++++++++++++++++++++++++------------
 website/.spelling             |   9 +
 2 files changed, 297 insertions(+), 116 deletions(-)

diff --git a/docs/querying/aggregations.md b/docs/querying/aggregations.md
index fb43edf43d..d577428f1b 100644
--- a/docs/querying/aggregations.md
+++ b/docs/querying/aggregations.md
@@ -39,8 +39,14 @@ The following sections list the available aggregate functions. Unless otherwise
 
 `count` computes the count of Druid rows that match the filters.
 
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "count". | Yes |
+| `name` | Output name of the aggregator. | Yes |
+
+Example:
 ```json
-{ "type" : "count", "name" : <output_name> }
+{ "type" : "count", "name" : "count" }
 ```
 
 The `count` aggregator counts the number of Druid rows, which does not always reflect the number of raw events ingested.
@@ -50,94 +56,121 @@ query time.
 
 ### Sum aggregators
 
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "longSum", "doubleSum", or "floatSum". | Yes |
+| `name` | Output name for the summed value. | Yes |
+| `fieldName` | Name of the input column to sum over. | No. You must specify `fieldName` or `expression`. |
+| `expression` | You can specify an inline [expression](./math-expr.md) as an alternative to `fieldName`. | No. You must specify `fieldName` or `expression`. |
+
 #### `longSum` aggregator
 
 Computes the sum of values as a 64-bit, signed integer.
 
+Example:
 ```json
-{ "type" : "longSum", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "longSum", "name" : "sumLong", "fieldName" : "aLong" }
 ```
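+
+Per the property table above, you can also sum the result of an inline expression instead of a single column. A sketch (the expression is illustrative and reuses the `aLong` column from the example above):
+
+```json
+{ "type" : "longSum", "name" : "sumLongExpr", "expression" : "aLong * 2" }
+```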
 
-The `longSum` aggregator takes the following properties:
-* `name`: Output name for the summed value
-* `fieldName`: Name of the metric column to sum over
-
 #### `doubleSum` aggregator
 
 Computes and stores the sum of values as a 64-bit floating point value. Similar to `longSum`.
 
+Example:
 ```json
-{ "type" : "doubleSum", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "doubleSum", "name" : "sumDouble", "fieldName" : "aDouble" }
 ```
 
 #### `floatSum` aggregator
 
 Computes and stores the sum of values as a 32-bit floating point value. Similar to `longSum` and `doubleSum`.
 
+Example:
 ```json
-{ "type" : "floatSum", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "floatSum", "name" : "sumFloat", "fieldName" : "aFloat" }
 ```
 
 ### Min and max aggregators
 
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "doubleMin", "doubleMax", "floatMin", "floatMax", "longMin", or "longMax". | Yes |
+| `name` | Output name for the min or max value. | Yes |
+| `fieldName` | Name of the input column to compute the minimum or maximum value over. | No. You must specify `fieldName` or `expression`. |
+| `expression` | You can specify an inline [expression](./math-expr.md) as an alternative to `fieldName`. | No. You must specify `fieldName` or `expression`. |
+
 #### `doubleMin` aggregator
 
-`doubleMin` computes the minimum of all metric values and Double.POSITIVE_INFINITY.
+`doubleMin` computes the minimum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Double.POSITIVE_INFINITY if it is true.
 
+Example:
 ```json
-{ "type" : "doubleMin", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "doubleMin", "name" : "minDouble", "fieldName" : "aDouble" }
 ```
 
 #### `doubleMax` aggregator
 
-`doubleMax` computes the maximum of all metric values and Double.NEGATIVE_INFINITY.
+`doubleMax` computes the maximum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Double.NEGATIVE_INFINITY if it is true.
 
+Example:
 ```json
-{ "type" : "doubleMax", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "doubleMax", "name" : "maxDouble", "fieldName" : "aDouble" }
 ```
 
 #### `floatMin` aggregator
 
-`floatMin` computes the minimum of all metric values and Float.POSITIVE_INFINITY.
+`floatMin` computes the minimum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Float.POSITIVE_INFINITY if it is true.
 
+Example:
 ```json
-{ "type" : "floatMin", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "floatMin", "name" : "minFloat", "fieldName" : "aFloat" }
 ```
 
 #### `floatMax` aggregator
 
-`floatMax` computes the maximum of all metric values and Float.NEGATIVE_INFINITY.
+`floatMax` computes the maximum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Float.NEGATIVE_INFINITY if it is true.
 
+Example:
 ```json
-{ "type" : "floatMax", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "floatMax", "name" : "maxFloat", "fieldName" : "aFloat" }
 ```
 
 #### `longMin` aggregator
 
-`longMin` computes the minimum of all metric values and Long.MAX_VALUE.
+`longMin` computes the minimum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Long.MAX_VALUE if it is true.
 
+Example:
 ```json
-{ "type" : "longMin", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "longMin", "name" : "minLong", "fieldName" : "aLong" }
 ```
 
 #### `longMax` aggregator
 
-`longMax` computes the maximum of all metric values and Long.MIN_VALUE.
+`longMax` computes the maximum of all input values. The default value is `null` if `druid.generic.useDefaultValueForNull` is false, or Long.MIN_VALUE if it is true.
 
+Example:
 ```json
-{ "type" : "longMax", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "longMax", "name" : "maxLong", "fieldName" : "aLong" }
 ```
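+
+The min and max aggregators likewise accept an inline expression in place of `fieldName`. A sketch (the expression is illustrative and reuses the `aDouble` column from the examples above):
+
+```json
+{ "type" : "doubleMax", "name" : "maxAbsDouble", "expression" : "abs(aDouble)" }
+```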
 
 ### `doubleMean` aggregator
 
-Computes and returns the arithmetic mean of a column's values as a 64-bit floating point value. `doubleMean` is a query time aggregator only. It is not available for indexing.
+Computes and returns the arithmetic mean of a column's values as a 64-bit floating point value. 
 
-To accomplish mean aggregation on ingestion, refer to the [Quantiles aggregator](../development/extensions-core/datasketches-quantiles.md#aggregator) from the DataSketches extension.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "doubleMean". | Yes |
+| `name` | Output name for the mean value. | Yes |
+| `fieldName` | Name of the input column to compute the arithmetic mean value over. | Yes |
 
+Example:
 ```json
-{ "type" : "doubleMean", "name" : <output_name>, "fieldName" : <metric_name> }
+{ "type" : "doubleMean", "name" : "aMean", "fieldName" : "aDouble" }
 ```
 
+`doubleMean` is a query time aggregator only. It is not available for indexing. To accomplish mean aggregation on ingestion, refer to the [Quantiles aggregator](../development/extensions-core/datasketches-quantiles.md#aggregator) from the DataSketches extension.
+
+
 ### First and last aggregators
 
 The first and last aggregators determine the metric values that respectively correspond to the earliest and latest values of a time column.
@@ -147,111 +180,131 @@ The string-typed aggregators, `stringFirst` and `stringLast`, are supported for
 
 Queries with first or last aggregators on a segment created with rollup return the rolled up value, not the first or last value from the raw ingested data.
 
-#### `doubleFirst` aggregator
+#### Numeric first and last aggregators
 
-`doubleFirst` computes the metric value with the minimum value for time column or 0 in default mode, or `null` in SQL-compatible mode if no row exists.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "doubleFirst", "doubleLast", "floatFirst", "floatLast", "longFirst", or "longLast". | Yes |
+| `name` | Output name for the first or last value. | Yes |
+| `fieldName` | Name of the input column to compute the first or last value over. | Yes |
+| `timeColumn` | Name of the input column to use for time values. Must be a LONG typed column. | No. Defaults to `__time`. |
 
+##### `doubleFirst` aggregator
+
+`doubleFirst` computes the input value with the minimum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
+
+Example:
 ```json
 {
   "type" : "doubleFirst",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "firstDouble",
+  "fieldName" : "aDouble"
 }
 ```
 
-#### `doubleLast` aggregator
+##### `doubleLast` aggregator
 
-`doubleLast` computes the metric value with the maximum value for time column or 0 in default mode, or `null` in SQL-compatible mode if no row exists.
+`doubleLast` computes the input value with the maximum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
 
+Example:
 ```json
 {
   "type" : "doubleLast",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "lastDouble",
+  "fieldName" : "aDouble",
+  "timeColumn" : "longTime"
 }
 ```
 
-#### `floatFirst` aggregator
+##### `floatFirst` aggregator
 
-`floatFirst` computes the metric value with the minimum value for time column or 0 in default mode, or `null` in SQL-compatible mode if no row exists.
+`floatFirst` computes the input value with the minimum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
 
+Example:
 ```json
 {
   "type" : "floatFirst",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "firstFloat",
+  "fieldName" : "aFloat"
 }
 ```
 
-#### `floatLast` aggregator
+##### `floatLast` aggregator
 
 `floatLast` computes the input value with the maximum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
 
+Example:
 ```json
 {
   "type" : "floatLast",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "lastFloat",
+  "fieldName" : "aFloat"
 }
 ```
 
-#### `longFirst` aggregator
+##### `longFirst` aggregator
 
 `longFirst` computes the input value with the minimum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
 
+Example:
 ```json
 {
   "type" : "longFirst",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "firstLong",
+  "fieldName" : "aLong"
 }
 ```
 
-#### `longLast` aggregator
+##### `longLast` aggregator
 
 `longLast` computes the input value with the maximum value for the time column, or 0 in default mode (`null` in SQL-compatible mode) if no row exists.
 
+Example:
 ```json
 {
   "type" : "longLast",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "lastLong",
+  "fieldName" : "aLong",
+  "timeColumn" : "longTime"
 }
 ```
 
+#### String first and last aggregators
+
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "stringFirst" or "stringLast". | Yes |
+| `name` | Output name for the first or last value. | Yes |
+| `fieldName` | Name of the input column to compute the first or last value over. | Yes |
+| `timeColumn` | Name of the input column to use for time values. Must be a LONG typed column. | No. Defaults to `__time`. |
+| `maxStringBytes` | Maximum size of string values to accumulate when computing the first or last value per group. Values longer than this will be truncated. | No. Defaults to 1024. |
+
+
 #### `stringFirst` aggregator
 
 `stringFirst` computes the input value with the minimum value for the time column, or `null` if no row exists.
 
+Example:
 ```json
 {
   "type" : "stringFirst",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "maxStringBytes" : <integer> # (optional, defaults to 1024)
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "firstString",
+  "fieldName" : "aString",
+  "maxStringBytes" : 2048,
+  "timeColumn" : "longTime"
 }
 ```
 
-
-
 #### `stringLast` aggregator
 
 `stringLast` computes the input value with the maximum value for the time column, or `null` if no row exists.
 
+Example:
 ```json
 {
   "type" : "stringLast",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "maxStringBytes" : <integer> # (optional, defaults to 1024)
-  "timeColumn" : <time_column_name> # (optional, defaults to __time)
+  "name" : "lastString",
+  "fieldName" : "aString"
 }
 ```
 
@@ -261,88 +314,73 @@ Queries with first or last aggregators on a segment created with rollup return t
 
 Returns any value, including null. This aggregator can simplify and optimize performance by returning the first encountered value (including null).
 
-#### `doubleAny` aggregator
+#### Numeric any aggregators
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "doubleAny", "floatAny", or "longAny". | Yes |
+| `name` | Output name for the value. | Yes |
+| `fieldName` | Name of the input column to compute the value over. | Yes |
+
+##### `doubleAny` aggregator
 
 `doubleAny` returns any double metric value.
 
+Example:
 ```json
 {
   "type" : "doubleAny",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>
+  "name" : "anyDouble",
+  "fieldName" : "aDouble"
 }
 ```
 
-#### `floatAny` aggregator
+##### `floatAny` aggregator
 
 `floatAny` returns any float metric value.
 
+Example:
 ```json
 {
   "type" : "floatAny",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>
+  "name" : "anyFloat",
+  "fieldName" : "aFloat"
 }
 ```
 
-#### `longAny` aggregator
+##### `longAny` aggregator
 
 `longAny` returns any long metric value.
 
+Example:
 ```json
 {
   "type" : "longAny",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
+  "name" : "anyLong",
+  "fieldName" : "aLong"
 }
 ```
 
 #### `stringAny` aggregator
 
-`stringAny` returns any string metric value.
-
-```json
-{
-  "type" : "stringAny",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "maxStringBytes" : <integer> # (optional, defaults to 1024),
-}
-```
-
-### JavaScript aggregator
-
-Computes an arbitrary JavaScript function over a set of columns (both metrics and dimensions are allowed). Your
-JavaScript functions are expected to return floating-point values.
-
-```json
-{ "type": "javascript",
-  "name": "<output_name>",
-  "fieldNames"  : [ <column1>, <column2>, ... ],
-  "fnAggregate" : "function(current, column1, column2, ...) {
-                     <updates partial aggregate (current) based on the current row values>
-                     return <updated partial aggregate>
-                   }",
-  "fnCombine"   : "function(partialA, partialB) { return <combined partial results>; }",
-  "fnReset"     : "function()                   { return <initial value>; }"
-}
-```
+`stringAny` returns any string value present in the input.
 
-**Example**
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "stringAny". | Yes |
+| `name` | Output name for the value. | Yes |
+| `fieldName` | Name of the input column to compute the value over. | Yes |
+| `maxStringBytes` | Maximum size of string values to accumulate when computing the value per group. Values longer than this will be truncated. | No. Defaults to 1024. |
 
+Example:
 ```json
 {
-  "type": "javascript",
-  "name": "sum(log(x)*y) + 10",
-  "fieldNames": ["x", "y"],
-  "fnAggregate" : "function(current, a, b)      { return current + (Math.log(a) * b); }",
-  "fnCombine"   : "function(partialA, partialB) { return partialA + partialB; }",
-  "fnReset"     : "function()                   { return 10; }"
+  "type" : "stringAny",
+  "name" : "anyString",
+  "fieldName" : "aString",
+  "maxStringBytes" : 2048
 }
 ```
 
-> JavaScript-based functionality is disabled by default. Please refer to the Druid [JavaScript programming guide](../development/javascript.md) for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.
-
 <a name="approx"></a>
 
 ## Approximate aggregations
@@ -422,6 +460,117 @@ It is not possible to determine a priori how well this aggregator will behave fo
 
 For these reasons, we have deprecated this aggregator and recommend using the DataSketches Quantiles aggregator instead for new and existing use cases, although we will continue to support Approximate Histogram for backwards compatibility.
 
+
+## Expression aggregations
+
+### Expression aggregator
+
+This aggregator is applicable only at query time. It aggregates results using [Druid expression](./math-expr.md) functions, which lets you build custom aggregation functions.
+
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "expression". | Yes |
+| `name` | The aggregator output name. | Yes |
+| `fields` | The list of aggregator input columns. | Yes |
+| `accumulatorIdentifier` | The variable that identifies the accumulator value in the `fold` and `combine` expressions. | No. Defaults to `__acc`. |
+| `fold` | The expression to accumulate values from `fields`. The result of the expression is stored in `accumulatorIdentifier` and available to the next computation. | Yes |
+| `combine` | The expression to combine the results of the `fold` expressions from each segment when merging results. The input is available to the expression as a variable identified by `name`. | No. Defaults to the `fold` expression if that expression has a single input in `fields`. |
+| `compare` | The comparator expression, which can only refer to two input variables, `o1` and `o2`, where `o1` and `o2` are the output of the `fold` or `combine` expressions, and must adhere to the Java comparator contract. If not set, the aggregator falls back to a comparator appropriate to the output type. | No |
+| `finalize` | The finalize expression, which can only refer to a single input variable, `o`. Use this expression to perform any final transformation of the output of the `fold` or `combine` expressions. If not set, the value is not transformed. | No |
+| `initialValue` | The initial value of the accumulator for the `fold` expression (and the `combine` expression, if `initialCombineValue` is null). | Yes |
+| `initialCombineValue` | The initial value of the accumulator for the `combine` expression. | No. Defaults to `initialValue`. |
+| `isNullUnlessAggregated` | Indicates that the default output value should be `null` if the aggregator does not process any rows. If true, the value is `null`; if false, the result of running the expressions with the initial values is used instead. | No. Defaults to the value of `druid.generic.useDefaultValueForNull`. |
+| `shouldAggregateNullInputs` | Indicates whether the `fold` expression should operate on any `null` input values. | No. Defaults to `true`. |
+| `shouldCombineAggregateNullInputs` | Indicates whether the `combine` expression should operate on any `null` input values. | No. Defaults to the value of `shouldAggregateNullInputs`. |
+| `maxSizeBytes` | Maximum size in bytes that variably sized aggregator output types such as strings and arrays are allowed to grow to before the aggregation fails. | No. Default is 8192 bytes. |
+
+#### Example: a "count" aggregator
+The initial value is `0`. `fold` adds `1` for each row processed.
+
+```json
+{
+  "type": "expression",
+  "name": "expression_count",
+  "fields": [],
+  "initialValue": "0",
+  "fold": "__acc + 1",
+  "combine": "__acc + expression_count"
+}
+```
+
+#### Example: a "sum" aggregator
+The initial value is `0`. `fold` adds the numeric value of `column_a` for each row processed.
+
+```json
+{
+  "type": "expression",
+  "name": "expression_sum",
+  "fields": ["column_a"],
+  "initialValue": "0",
+  "fold": "__acc + column_a"
+}
+```
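+
+#### Example: a "sum of squares" aggregator
+Because the default `combine` reuses the `fold` expression, a `fold` that is not linear in its input needs an explicit `combine`; reusing this `fold` when merging would square the partial sums. This is a sketch assuming a numeric `column_a`:
+
+```json
+{
+  "type": "expression",
+  "name": "expression_sum_sq",
+  "fields": ["column_a"],
+  "initialValue": "0",
+  "fold": "__acc + column_a * column_a",
+  "combine": "__acc + expression_sum_sq"
+}
+```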
+
+#### Example: a "distinct array element" aggregator, sorted by array_length
+The initial value is an empty array. `fold` adds the elements of `column_a` to the accumulator using set semantics, `combine` merges the sets, and `compare` orders the values by `array_length`.
+
+```json
+{
+  "type": "expression",
+  "name": "expression_array_agg_distinct",
+  "fields": ["column_a"],
+  "initialValue": "[]",
+  "fold": "array_set_add(__acc, column_a)",
+  "combine": "array_set_add_all(__acc, expression_array_agg_distinct)",
+  "compare": "if(array_length(o1) > array_length(o2), 1, if (array_length(o1) == array_length(o2), 0, -1))"
+}
+```
+
+#### Example: an "approximate count" aggregator using the built-in hyper-unique
+Similar to the cardinality aggregator. The initial value is an empty hyper-unique sketch, `fold` adds the value of `column_a` to the sketch, `combine` merges the sketches, and `finalize` gets the estimated count from the accumulated sketch.
+
+```json
+{
+  "type": "expression",
+  "name": "expression_cardinality",
+  "fields": ["column_a"],
+  "initialValue": "hyper_unique()",
+  "fold": "hyper_unique_add(column_a, __acc)",
+  "combine": "hyper_unique_add(expression_cardinality, __acc)",
+  "finalize": "hyper_unique_estimate(o)"
+}
+```
+
+### JavaScript aggregator
+
+Computes an arbitrary JavaScript function over a set of columns (both metrics and dimensions are allowed). Your
+JavaScript functions are expected to return floating-point values.
+
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "javascript". | Yes |
+| `name` | The aggregator output name. | Yes |
+| `fieldNames` | The list of aggregator input columns. | Yes |
+| `fnAggregate` | JavaScript function that updates partial aggregate based on the current row values, and returns the updated partial aggregate. | Yes |
+| `fnCombine` | JavaScript function to combine partial aggregates and return the combined result. | Yes |
+| `fnReset` | JavaScript function that returns the initial value of the partial aggregate. | Yes |
+
+#### Example
+
+```json
+{
+  "type": "javascript",
+  "name": "sum(log(x)*y) + 10",
+  "fieldNames": ["x", "y"],
+  "fnAggregate" : "function(current, a, b)      { return current + (Math.log(a) * b); }",
+  "fnCombine"   : "function(partialA, partialB) { return partialA + partialB; }",
+  "fnReset"     : "function()                   { return 10; }"
+}
+```
+
+> JavaScript functionality is disabled by default. Refer to the Druid [JavaScript programming guide](../development/javascript.md) for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.
+
+
 ## Miscellaneous aggregations
 
 ### Filtered aggregator
@@ -430,17 +579,30 @@ A filtered aggregator wraps any given aggregator, but only aggregates the values
 
 This makes it possible to compute the results of a filtered and an unfiltered aggregation simultaneously, without having to issue multiple queries, and use both results as part of post-aggregations.
 
-*Note:* If only the filtered results are required, consider putting the filter on the query itself, which will be much faster since it does not require scanning all the data.
+If only the filtered results are required, consider putting the filter on the query itself. This will be much faster since it does not require scanning all the data.
+
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "filtered". | Yes |
+| `name` | The aggregator output name. | No |
+| `aggregator` | Inline aggregator specification. | Yes |
+| `filter` | Inline [filter](./filters.md) specification. | Yes |
 
+Example:
 ```json
 {
-  "type" : "filtered",
-  "filter" : {
+  "type": "filtered",
+  "name": "filteredSumLong",
+  "filter": {
     "type" : "selector",
-    "dimension" : <dimension>,
-    "value" : <dimension value>
+    "dimension" : "someColumn",
+    "value" : "abcdef"
   },
-  "aggregator" : <aggregation>
+  "aggregator": {
+    "type": "longSum",
+    "name": "sumLong",
+    "fieldName": "aLong"
+  }
 }
 ```
 
@@ -450,7 +612,20 @@ A grouping aggregator can only be used as part of GroupBy queries which have a s
 each output row that lets you infer whether a particular dimension is included in the sub-grouping used for that row. You can pass
 a *non-empty* list of dimensions to this aggregator which *must* be a subset of dimensions that you are grouping on. 
 
-For example, if the aggregator has `["dim1", "dim2"]` as input dimensions and `[["dim1", "dim2"], ["dim1"], ["dim2"], []]` as subtotals, the
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be "grouping". | Yes |
+| `name` | The aggregator output name. | Yes |
+| `groupings` | The list of columns to use in the grouping set. | Yes |
+
+
+For example, the following aggregator has `["dim1", "dim2"]` as input dimensions:
+
+```json
+{ "type" : "grouping", "name" : "someGrouping", "groupings" : ["dim1", "dim2"] }
+```
+
+When this aggregator is used in a grouping query with `[["dim1", "dim2"], ["dim1"], ["dim2"], []]` as subtotals, the
 possible output of the aggregator is:
 
 | subtotal used in query | Output | (bits representation) |
@@ -463,6 +638,3 @@ possible output of the aggregator is:
 As the example illustrates, you can think of the output number as an unsigned _n_ bit number where _n_ is the number of dimensions passed to the aggregator. 
 Druid sets the bit at position X to 0 if the sub-grouping includes the dimension at position X in the aggregator input. Otherwise, Druid sets this bit to 1.
 
-```json
-{ "type" : "grouping", "name" : <output_name>, "groupings" : [<dimension>] }
-```
diff --git a/website/.spelling b/website/.spelling
index 775c056dcf..cc7a09882d 100644
--- a/website/.spelling
+++ b/website/.spelling
@@ -347,6 +347,7 @@ interruptible
 isAllowList
 jackson-jq
 javadoc
+javascript
 joinable
 jsonCompression
 json_keys
@@ -878,6 +879,8 @@ P1D
 cycleSize
 doubleMax
 doubleAny
+doubleFirst
+doubleLast
 doubleMean
 doubleMeanNoNulls
 doubleMin
@@ -887,6 +890,8 @@ druid.generic.ignoreNullsForStringCardinality
 limitSpec
 longMax
 longAny
+longFirst
+longLast
 longMean
 longMeanNoNulls
 longMin
@@ -1502,6 +1507,8 @@ str1
 str2
 string_to_array
 stringAny
+stringFirst
+stringLast
 Strlen
 strlen
 strpos
@@ -1761,6 +1768,8 @@ enableJoinFilterPushDown
 enableJoinFilterRewrite
 enableRewriteJoinToFilter
 enableJoinFilterRewriteValueColumnFilters
+floatFirst
+floatLast
 floatSum
 joinFilterRewriteMaxSize
 maxQueuedBytes


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org