Posted to commits@druid.apache.org by ji...@apache.org on 2019/06/27 22:58:27 UTC

[incubator-druid-website-src] 46/48: using the 0.15.0 docs

This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit 06d58c21e85e4e409fdba2ef3575034ea43eec00
Author: Vadim Ogievetsky <va...@gmail.com>
AuthorDate: Thu Jun 27 15:53:35 2019 -0700

    using the 0.15.0 docs
---
 docs/latest/configuration/index.md                 |  19 +--
 .../extensions-contrib/influxdb-emitter.md         |  75 ----------
 .../latest/development/extensions-contrib/orc.html |   4 +
 .../extensions-contrib/tdigestsketch-quantiles.md  | 159 ---------------------
 .../extensions-core/approximate-histograms.md      |   2 -
 .../extensions-core/datasketches-extension.md      |   2 +-
 .../extensions-core/datasketches-hll.md            |   2 +-
 .../extensions-core/datasketches-quantiles.md      |  27 +---
 .../extensions-core/datasketches-theta.md          |   2 +-
 .../extensions-core/datasketches-tuple.md          |   2 +-
 .../extensions-core/druid-basic-security.md        | 132 +----------------
 .../development/extensions-core/druid-kerberos.md  |   5 +-
 .../development/extensions-core/kafka-ingestion.md |  49 -------
 .../extensions-core/kinesis-ingestion.md           |  56 +-------
 .../development/extensions-core/postgresql.md      |   2 -
 docs/latest/development/extensions-core/s3.md      |  39 ++---
 docs/latest/development/extensions.md              |   5 +-
 docs/latest/development/geo.md                     |   7 -
 docs/latest/development/modules.md                 |   2 +-
 docs/latest/ingestion/compaction.md                |  16 ++-
 docs/latest/ingestion/hadoop-vs-native-batch.md    |   4 +-
 docs/latest/ingestion/hadoop.md                    |   1 -
 docs/latest/misc/math-expr.md                      |  62 +-------
 docs/latest/operations/api-reference.md            |  14 --
 docs/latest/operations/recommendations.md          |   8 +-
 docs/latest/querying/aggregations.md               |   4 +-
 docs/latest/querying/granularities.md              |   9 +-
 docs/latest/querying/lookups.md                    |   5 +-
 docs/latest/querying/scan-query.md                 |   8 +-
 docs/latest/querying/sql.md                        |  66 ++++-----
 docs/latest/querying/timeseriesquery.md            |   1 -
 docs/latest/toc.md                                 |   3 +-
 .../img/tutorial-batch-data-loader-01.png          | Bin 56488 -> 99355 bytes
 .../img/tutorial-batch-data-loader-02.png          | Bin 360295 -> 521148 bytes
 .../img/tutorial-batch-data-loader-03.png          | Bin 137443 -> 217008 bytes
 .../img/tutorial-batch-data-loader-04.png          | Bin 167252 -> 261225 bytes
 .../img/tutorial-batch-data-loader-05.png          | Bin 162488 -> 256368 bytes
 .../img/tutorial-batch-data-loader-06.png          | Bin 64301 -> 105983 bytes
 .../img/tutorial-batch-data-loader-07.png          | Bin 46529 -> 81399 bytes
 .../img/tutorial-batch-data-loader-08.png          | Bin 103928 -> 162397 bytes
 .../img/tutorial-batch-data-loader-09.png          | Bin 63348 -> 107662 bytes
 .../img/tutorial-batch-data-loader-10.png          | Bin 44516 -> 79080 bytes
 .../img/tutorial-batch-data-loader-11.png          | Bin 83288 -> 133329 bytes
 .../img/tutorial-batch-submit-task-01.png          | Bin 69356 -> 113916 bytes
 .../img/tutorial-batch-submit-task-02.png          | Bin 86076 -> 136268 bytes
 .../tutorials/img/tutorial-compaction-01.png       | Bin 35710 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       | Bin 166571 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       | Bin 26755 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       | Bin 184365 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       | Bin 26588 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       | Bin 206717 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       | Bin 26683 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       | Bin 28751 -> 43257 bytes
 docs/latest/tutorials/img/tutorial-deletion-01.png | Bin 43586 -> 72062 bytes
 docs/latest/tutorials/img/tutorial-deletion-02.png | Bin 439602 -> 810422 bytes
 docs/latest/tutorials/img/tutorial-deletion-03.png | Bin 437304 -> 805673 bytes
 docs/latest/tutorials/img/tutorial-kafka-01.png    | Bin 85477 -> 136317 bytes
 docs/latest/tutorials/img/tutorial-kafka-02.png    | Bin 75709 -> 125452 bytes
 docs/latest/tutorials/img/tutorial-query-01.png    | Bin 100930 -> 153120 bytes
 docs/latest/tutorials/img/tutorial-query-02.png    | Bin 83369 -> 129962 bytes
 docs/latest/tutorials/img/tutorial-query-03.png    | Bin 65038 -> 106082 bytes
 docs/latest/tutorials/img/tutorial-query-04.png    | Bin 66423 -> 108331 bytes
 docs/latest/tutorials/img/tutorial-query-05.png    | Bin 51855 -> 87070 bytes
 docs/latest/tutorials/img/tutorial-query-06.png    | Bin 82211 -> 130612 bytes
 docs/latest/tutorials/img/tutorial-query-07.png    | Bin 78633 -> 125457 bytes
 .../tutorials/img/tutorial-quickstart-01.png       | Bin 29834 -> 56955 bytes
 .../latest/tutorials/img/tutorial-retention-00.png | Bin 77704 -> 138304 bytes
 .../latest/tutorials/img/tutorial-retention-01.png | Bin 35171 -> 53955 bytes
 .../latest/tutorials/img/tutorial-retention-02.png | Bin 240310 -> 410930 bytes
 .../latest/tutorials/img/tutorial-retention-03.png | Bin 30029 -> 44144 bytes
 .../latest/tutorials/img/tutorial-retention-04.png | Bin 44617 -> 67493 bytes
 .../latest/tutorials/img/tutorial-retention-05.png | Bin 38992 -> 61639 bytes
 .../latest/tutorials/img/tutorial-retention-06.png | Bin 137570 -> 233034 bytes
 73 files changed, 100 insertions(+), 692 deletions(-)

diff --git a/docs/latest/configuration/index.md b/docs/latest/configuration/index.md
index 0f70489..29c11fa 100644
--- a/docs/latest/configuration/index.md
+++ b/docs/latest/configuration/index.md
@@ -541,14 +541,13 @@ The below table shows some important configurations for S3. See [S3 Deep Storage
 
 |Property|Description|Default|
 |--------|-----------|-------|
+|`druid.s3.accessKey`|The access key to use to access S3.|none|
+|`druid.s3.secretKey`|The secret key to use to access S3.|none|
 |`druid.storage.bucket`|S3 bucket name.|none|
 |`druid.storage.baseKey`|S3 object key prefix for storage.|none|
 |`druid.storage.disableAcl`|Boolean flag for ACL. If this is set to `false`, the full control would be granted to the bucket owner. This may require to set additional permissions. See [S3 permissions settings](../development/extensions-core/s3.html#s3-permissions-settings).|false|
 |`druid.storage.archiveBucket`|S3 bucket name for archiving when running the *archive task*.|none|
 |`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
-|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](../development/extensions-core/s3.html#server-side-encryption) for more details.|None|
-|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
-|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
 |`druid.storage.useS3aSchema`|If true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.|false|
 
 #### HDFS Deep Storage
@@ -846,6 +845,7 @@ A description of the compaction config is:
 |Property|Description|Required|
 |--------|-----------|--------|
 |`dataSource`|dataSource name to be compacted.|yes|
+|`keepSegmentGranularity`|Set [keepSegmentGranularity](../ingestion/compaction.html) to true for compactionTask.|no (default = true)|
 |`taskPriority`|[Priority](../ingestion/tasks.html#task-priorities) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`targetCompactionSizeBytes`|The target segment size, for each segment, after compaction. The actual sizes of compacted segments might be slightly larger or smaller than this value. Each compaction task may generate more than one output segment, and it will try to keep each output segment close to this configured size. This configuration cannot be used together with `maxRowsPerSegment`.|no (default = 419430400)|
@@ -942,17 +942,6 @@ There are additional configs for autoscaling (if it is enabled):
 |`druid.indexer.autoscale.workerVersion`|If set, will only create nodes of set version during autoscaling. Overrides dynamic configuration. |null|
 |`druid.indexer.autoscale.workerPort`|The port that MiddleManagers will run on.|8080|
 
-##### Supervisors
-
-|Property|Description|Default|
-|--------|-----------|-------|
-|`druid.supervisor.healthinessThreshold`|The number of successful runs before an unhealthy supervisor is again considered healthy.|3|
-|`druid.supervisor.unhealthinessThreshold`|The number of failed runs before the supervisor is considered unhealthy.|3|
-|`druid.supervisor.taskHealthinessThreshold`|The number of consecutive task successes before an unhealthy supervisor is again considered healthy.|3|
-|`druid.supervisor.taskUnhealthinessThreshold`|The number of consecutive task failures before the supervisor is considered unhealthy.|3|
-|`druid.supervisor.storeStackTrace`|Whether full stack traces of supervisor exceptions should be stored and returned by the supervisor `/status` endpoint.|false|
-|`druid.supervisor.maxStoredExceptionEvents`|The maximum number of exception events that can be returned through the supervisor `/status` endpoint.|`max(healthinessThreshold, unhealthinessThreshold)`|
-
 #### Overlord Dynamic Configuration
 
 The Overlord can dynamically change worker behavior.
@@ -1420,7 +1409,7 @@ The Druid SQL server is configured through the following properties on the Broke
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|true|
+|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|false|
 |`druid.sql.avatica.enable`|Whether to enable JDBC querying at `/druid/v2/sql/avatica/`.|true|
 |`druid.sql.avatica.maxConnections`|Maximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.|50|
 |`druid.sql.avatica.maxRowsPerFrame`|Maximum number of rows to return in a single JDBC frame. Setting this property to -1 indicates that no row limit should be applied. Clients can optionally specify a row limit in their requests; if a client specifies a row limit, the lesser value of the client-provided limit and `maxRowsPerFrame` will be used.|5,000|
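
For orientation, the S3 deep-storage properties from the table above all live in the common `runtime.properties`. A minimal sketch, assuming the extension name `druid-s3-extensions` and `druid.storage.type=s3` (neither is shown in this hunk), with placeholder credentials, bucket, and prefix:

```
# Assumed extension/type names; credentials, bucket, and prefix are placeholders
druid.extensions.loadList=["druid-s3-extensions"]
druid.storage.type=s3

druid.s3.accessKey=AKIAEXAMPLEKEY
druid.s3.secretKey=exampleSecretKey
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
```
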
diff --git a/docs/latest/development/extensions-contrib/influxdb-emitter.md b/docs/latest/development/extensions-contrib/influxdb-emitter.md
deleted file mode 100644
index 138a0bb..0000000
--- a/docs/latest/development/extensions-contrib/influxdb-emitter.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-layout: doc_page
-title: "InfluxDB Emitter"
----
-
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-# InfluxDB Emitter
-
-To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-influxdb-emitter` extension.
-
-## Introduction
-
-This extension emits druid metrics to [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) over HTTP. Currently this emitter only emits service metric events to InfluxDB (See [Druid metrics](../../operations/metrics.html) for a list of metrics).
-When a metric event is fired it is added to a queue of events. After a configurable amount of time, the events on the queue are transformed to InfluxDB's line protocol 
-and POSTed to the InfluxDB HTTP API. The entire queue is flushed at this point. The queue is also flushed as the emitter is shutdown.
-
-Note that authentication and authorization must be [enabled](https://docs.influxdata.com/influxdb/v1.7/administration/authentication_and_authorization/) on the InfluxDB server.
-
-## Configuration
-
-All the configuration parameters for the influxdb emitter are under `druid.emitter.influxdb`.
-
-|Property|Description|Required?|Default|
-|--------|-----------|---------|-------|
-|`druid.emitter.influxdb.hostname`|The hostname of the InfluxDB server.|Yes|N/A|
-|`druid.emitter.influxdb.port`|The port of the InfluxDB server.|No|8086|
-|`druid.emitter.influxdb.databaseName`|The name of the database in InfluxDB.|Yes|N/A|
-|`druid.emitter.influxdb.maxQueueSize`|The size of the queue that holds events.|No|Integer.Max_Value(=2^31-1)|
-|`druid.emitter.influxdb.flushPeriod`|How often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.|No|60000|
-|`druid.emitter.influxdb.flushDelay`|How long (in milliseconds) the scheduled method will wait until it first runs.|No|60000|
-|`druid.emitter.influxdb.influxdbUserName`|The username for authenticating with the InfluxDB database.|Yes|N/A|
-|`druid.emitter.influxdb.influxdbPassword`|The password of the database authorized user|Yes|N/A|
-|`druid.emitter.influxdb.dimensionWhitelist`|A whitelist of metric dimensions to include as tags|No|`["dataSource","type","numMetrics","numDimensions","threshold","dimension","taskType","taskStatus","tier"]`|
-
-## InfluxDB Line Protocol
-
-An example of how this emitter parses a Druid metric event into InfluxDB's [line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/) is given here: 
-
-The syntax of the line protocol is :  
-
-`<measurement>[,<tag_key>=<tag_value>[,<tag_key>=<tag_value>]] <field_key>=<field_value>[,<field_key>=<field_value>] [<timestamp>]`
- 
-where timestamp is in nano-seconds since epoch.
-
-A typical service metric event as recorded by Druid's logging emitter is: `Event [{"feed":"metrics","timestamp":"2017-10-31T09:09:06.857Z","service":"druid/historical","host":"historical001:8083","version":"0.11.0-SNAPSHOT","metric":"query/cache/total/hits","value":34787256}]`.
-
-This event is parsed into line protocol according to these rules:
-
-* The measurement becomes druid_query since query is the first part of the metric. 
-* The tags are service=druid/historical, hostname=historical001, metric=druid_cache_total. (The metric tag is the middle part of the druid metric separated with _ and preceded by druid_. Another example would be if an event has metric=query/time then there is no middle part and hence no metric tag)
-* The field is druid_hits since this is the last part of the metric.
-
-This gives the following String which can be POSTed to InfluxDB: `"druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000"`
-
-The InfluxDB emitter has a white list of dimensions
-which will be added as a tag to the line protocol string if the metric has a dimension from the white list.
-The value of the dimension is sanitized such that every occurence of a dot or whitespace is replaced with a `_` .
diff --git a/docs/latest/development/extensions-contrib/orc.html b/docs/latest/development/extensions-contrib/orc.html
new file mode 100644
index 0000000..1f92ebb
--- /dev/null
+++ b/docs/latest/development/extensions-contrib/orc.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/extensions-core/orc.html
+---
diff --git a/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.md b/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.md
deleted file mode 100644
index 9947e01..0000000
--- a/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.md
+++ /dev/null
@@ -1,159 +0,0 @@
----
-layout: doc_page
-title: "T-Digest Quantiles Sketch module"
----
-
-<!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-# T-Digest Quantiles Sketch module
-
-This module provides Apache Druid (incubating) approximate sketch aggregators based on T-Digest.
-T-Digest (https://github.com/tdunning/t-digest) is a popular datastructure for accurate on-line accumulation of
-rank-based statistics such as quantiles and trimmed means.
-The datastructure is also designed for parallel programming use cases like distributed aggregations or map reduce jobs by making combining two intermediate t-digests easy and efficient.
-
-There are three flavors of T-Digest sketch aggregator available in Apache Druid (incubating):
-
-1. buildTDigestSketch - used for building T-Digest sketches from raw numeric values. It generally makes sense to
-use this aggregator when ingesting raw data into Druid. One can also use this aggregator during query time too to
-generate sketches, just that one would be building these sketches on every query execution instead of building them
-once during ingestion.
-2. mergeTDigestSketch - used for merging pre-built T-Digest sketches. This aggregator is generally used during
-query time to combine sketches generated by buildTDigestSketch aggregator.
-3. quantilesFromTDigestSketch - used for generating quantiles from T-Digest sketches. This aggregator is generally used
-during query time to generate quantiles from sketches built using the above two sketch generating aggregators.
-
-To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
-
-```
-druid.extensions.loadList=["druid-tdigestsketch"]
-```
-
-### Aggregator
-
-The result of the aggregation is a T-Digest sketch that is built ingesting numeric values from the raw data.
-
-```json
-{
-  "type" : "buildTDigestSketch",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "compression": <parameter that controls size and accuracy>
- }
-```
-Example:
-```json
-{
-	"type": "buildTDigestSketch",
-	"name": "sketch",
-	"fieldName": "session_duration",
-	"compression": 200
-}
-```
-
-|property|description|required?|
-|--------|-----------|---------|
-|type|This String should always be "buildTDigestSketch"|yes|
-|name|A String for the output (result) name of the calculation.|yes|
-|fieldName|A String for the name of the input field containing raw numeric values.|yes|
-|compression|Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.|no, defaults to 100|
-
-
-The result of the aggregation is a T-Digest sketch that is built by merging pre-built T-Digest sketches.
-
-```json
-{
-  "type" : "mergeTDigestSketch",
-  "name" : <output_name>,
-  "fieldName" : <metric_name>,
-  "compression": <parameter that controls size and accuracy>
- }
-```
-
-|property|description|required?|
-|--------|-----------|---------|
-|type|This String should always be "mergeTDigestSketch"|yes|
-|name|A String for the output (result) name of the calculation.|yes|
-|fieldName|A String for the name of the input field containing raw numeric values.|yes|
-|compression|Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.|no, defaults to 100|
-
-Example:
-```json
-{
-	"queryType": "groupBy",
-	"dataSource": "test_datasource",
-	"granularity": "ALL",
-	"dimensions": [],
-	"aggregations": [{
-		"type": "mergeTDigestSketch",
-		"name": "merged_sketch",
-		"fieldName": "ingested_sketch",
-		"compression": 200
-	}],
-	"intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"]
-}
-```
-### Post Aggregators
-
-#### Quantiles
-
-This returns an array of quantiles corresponding to a given array of fractions.
-
-```json
-{
-  "type"  : "quantilesFromTDigestSketch",
-  "name": <output name>,
-  "field"  : <post aggregator that refers to a TDigestSketch (fieldAccess or another post aggregator)>,
-  "fractions" : <array of fractions>
-}
-```
-
-|property|description|required?|
-|--------|-----------|---------|
-|type|This String should always be "quantilesFromTDigestSketch"|yes|
-|name|A String for the output (result) name of the calculation.|yes|
-|fieldName|A String for the name of the input field containing raw numeric values.|yes|
-|fractions|Non-empty array of fractions between 0 and 1|yes|
-
-Example:
-```json
-{
-	"queryType": "groupBy",
-	"dataSource": "test_datasource",
-	"granularity": "ALL",
-	"dimensions": [],
-	"aggregations": [{
-		"type": "mergeTDigestSketch",
-		"name": "merged_sketch",
-		"fieldName": "ingested_sketch",
-		"compression": 200
-	}],
-	"postAggregations": [{
-		"type": "quantilesFromTDigestSketch",
-		"name": "quantiles",
-		"fractions": [0, 0.5, 1],
-		"field": {
-			"type": "fieldAccess",
-			"fieldName": "merged_sketch"
-		}
-	}],
-	"intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"]
-}
-```
diff --git a/docs/latest/development/extensions-core/approximate-histograms.md b/docs/latest/development/extensions-core/approximate-histograms.md
index 30b5f32..73a5207 100644
--- a/docs/latest/development/extensions-core/approximate-histograms.md
+++ b/docs/latest/development/extensions-core/approximate-histograms.md
@@ -99,7 +99,6 @@ query.
 |`resolution`             |Number of centroids (data points) to store. The higher the resolution, the more accurate results are, but the slower the computation will be.|50|
 |`numBuckets`             |Number of output buckets for the resulting histogram. Bucket intervals are dynamic, based on the range of the underlying data. Use a post-aggregator to have finer control over the bucketing scheme|7|
 |`lowerLimit`/`upperLimit`|Restrict the approximation to the given range. The values outside this range will be aggregated into two centroids. Counts of values outside this range are still maintained. |-INF/+INF|
-|`finalizeAsBase64Binary` |If true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.|false|
 
 ## Fixed Buckets Histogram
 
@@ -125,7 +124,6 @@ For general histogram and quantile use cases, the [DataSketches Quantiles Sketch
 |`upperLimit`|Upper limit of the histogram. |No default, must be specified|
 |`numBuckets`|Number of buckets for the histogram. The range [lowerLimit, upperLimit] will be divided into `numBuckets` intervals of equal size.|10|
 |`outlierHandlingMode`|Specifies how values outside of [lowerLimit, upperLimit] will be handled. Supported modes are "ignore", "overflow", and "clip". See [outlier handling modes](#outlier-handling-modes) for more details.|No default, must be specified|
-|`finalizeAsBase64Binary`|If true, the finalized aggregator value will be a Base64-encoded byte array containing the [serialized form](#serialization-formats) of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.|false|
 
 An example aggregator spec is shown below:
 
diff --git a/docs/latest/development/extensions-core/datasketches-extension.md b/docs/latest/development/extensions-core/datasketches-extension.md
index 49ac225..3a5b126 100644
--- a/docs/latest/development/extensions-core/datasketches-extension.md
+++ b/docs/latest/development/extensions-core/datasketches-extension.md
@@ -24,7 +24,7 @@ title: "DataSketches extension"
 
 # DataSketches extension
 
-Apache Druid (incubating) aggregators based on [datasketches](https://datasketches.github.io/) library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.
+Apache Druid (incubating) aggregators based on [datasketches](http://datasketches.github.io/) library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.
 
 To use the datasketches aggregators, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
 
diff --git a/docs/latest/development/extensions-core/datasketches-hll.md b/docs/latest/development/extensions-core/datasketches-hll.md
index 90e284f..799cbc0 100644
--- a/docs/latest/development/extensions-core/datasketches-hll.md
+++ b/docs/latest/development/extensions-core/datasketches-hll.md
@@ -24,7 +24,7 @@ title: "DataSketches HLL Sketch module"
 
 # DataSketches HLL Sketch module
 
-This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from [datasketches](https://datasketches.github.io/) library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of sketch columns [...]
+This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from [datasketches](http://datasketches.github.io/) library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of sketch columns  [...]
 You can use the HLL sketch aggregator on columns of any identifiers. It will return estimated cardinality of the column.
 
 To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
diff --git a/docs/latest/development/extensions-core/datasketches-quantiles.md b/docs/latest/development/extensions-core/datasketches-quantiles.md
index 39b7cb9..2282de2 100644
--- a/docs/latest/development/extensions-core/datasketches-quantiles.md
+++ b/docs/latest/development/extensions-core/datasketches-quantiles.md
@@ -24,7 +24,7 @@ title: "DataSketches Quantiles Sketch module"
 
 # DataSketches Quantiles Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from [datasketches](https://datasketches.github.io/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quanti [...]
+This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from [datasketches](http://datasketches.github.io/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quantil [...]
 
 There are three major modes of operation:
 
@@ -99,31 +99,6 @@ This returns an approximation to the histogram given an array of split points th
 }
 ```
 
-#### Rank
-
-This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value.
-
-```json
-{
-  "type"  : "quantilesDoublesSketchToRank",
-  "name": <output name>,
-  "field"  : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>,
-  "value" : <value>
-}
-```
-#### CDF
-
-This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of <i>m</i> unique, monotonically increasing split points divide the real number line into <i>m+1</i> consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. The resulting array of fractions can be viewed as ranks of each split point with one additional rank that  [...]
-
-```json
-{
-  "type"  : "quantilesDoublesSketchToCDF",
-  "name": <output name>,
-  "field"  : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>,
-  "splitPoints" : <array of split points>
-}
-```
-
 #### Sketch Summary
 
 This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method.
diff --git a/docs/latest/development/extensions-core/datasketches-theta.md b/docs/latest/development/extensions-core/datasketches-theta.md
index 5a2d1af..e248da3 100644
--- a/docs/latest/development/extensions-core/datasketches-theta.md
+++ b/docs/latest/development/extensions-core/datasketches-theta.md
@@ -24,7 +24,7 @@ title: "DataSketches Theta Sketch module"
 
 # DataSketches Theta Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on Theta sketch from [datasketches](https://datasketches.github.io/) library. Note that sketch algorithms are approximate; see details in the "Accuracy" section of the datasketches doc.
+This module provides Apache Druid (incubating) aggregators based on Theta sketch from [datasketches](http://datasketches.github.io/) library. Note that sketch algorithms are approximate; see details in the "Accuracy" section of the datasketches doc. 
 At ingestion time, this aggregator creates the Theta sketch objects which get stored in Druid segments. Logically speaking, a Theta sketch object can be thought of as a Set data structure. At query time, sketches are read and aggregated (set unioned) together. In the end, by default, you receive the estimate of the number of unique entries in the sketch object. Also, you can use post aggregators to do union, intersection or difference on sketch columns in the same row. 
 Note that you can use `thetaSketch` aggregator on columns which were not ingested using the same. It will return estimated cardinality of the column. It is recommended to use it at ingestion time as well to make querying faster.
 
diff --git a/docs/latest/development/extensions-core/datasketches-tuple.md b/docs/latest/development/extensions-core/datasketches-tuple.md
index bd83c9f..69db25a 100644
--- a/docs/latest/development/extensions-core/datasketches-tuple.md
+++ b/docs/latest/development/extensions-core/datasketches-tuple.md
@@ -24,7 +24,7 @@ title: "DataSketches Tuple Sketch module"
 
 # DataSketches Tuple Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on Tuple sketch from [datasketches](https://datasketches.github.io/) library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.
+This module provides Apache Druid (incubating) aggregators based on Tuple sketch from [datasketches](http://datasketches.github.io/) library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.
 
 To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
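
The load-list entry that this sentence refers to is the same across the DataSketches pages above; a sketch of it, assuming the extension name `druid-datasketches`:

```
# Assumed extension name; enables the HLL, Quantiles, Theta, and Tuple sketch aggregators described above
druid.extensions.loadList=["druid-datasketches"]
```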
 
diff --git a/docs/latest/development/extensions-core/druid-basic-security.md b/docs/latest/development/extensions-core/druid-basic-security.md
index 4282f91..28eff1f 100644
--- a/docs/latest/development/extensions-core/druid-basic-security.md
+++ b/docs/latest/development/extensions-core/druid-basic-security.md
@@ -172,87 +172,6 @@ Return a list of all user names.
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Return the name and role information of the user with name {userName}
 
-Example output:
-```json
-{
-  "name": "druid2",
-  "roles": [
-    "druidRole"
-  ]
-}
-```
-
-This API supports the following flags:
-- `?full`: The response will also include the full information for each role currently assigned to the user.
-
-Example output:
-```json
-{
-  "name": "druid2",
-  "roles": [
-    {
-      "name": "druidRole",
-      "permissions": [
-        {
-          "resourceAction": {
-            "resource": {
-              "name": "A",
-              "type": "DATASOURCE"
-            },
-            "action": "READ"
-          },
-          "resourceNamePattern": "A"
-        },
-        {
-          "resourceAction": {
-            "resource": {
-              "name": "C",
-              "type": "CONFIG"
-            },
-            "action": "WRITE"
-          },
-          "resourceNamePattern": "C"
-        }
-      ]
-    }
-  ]
-}
-```
-
-The output format of this API when `?full` is specified is deprecated and in later versions will be switched to the output format used when both `?full` and `?simplifyPermissions` flag is set. 
-
-The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
-
-- `?full?simplifyPermissions`: When both `?full` and `?simplifyPermissions` are set, the permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field.
-
-```json
-{
-  "name": "druid2",
-  "roles": [
-    {
-      "name": "druidRole",
-      "users": null,
-      "permissions": [
-        {
-          "resource": {
-            "name": "A",
-            "type": "DATASOURCE"
-          },
-          "action": "READ"
-        },
-        {
-          "resource": {
-            "name": "C",
-            "type": "CONFIG"
-          },
-          "action": "WRITE"
-        }
-      ]
-    }
-  ]
-}
-```
-
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Create a new user with name {userName}
 
@@ -265,56 +184,7 @@ Delete the user with name {userName}
 Return a list of all role names.
 
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
-Return name and permissions for the role named {roleName}.
-
-Example output:
-```json
-{
-  "name": "druidRole2",
-  "permissions": [
-    {
-      "resourceAction": {
-        "resource": {
-          "name": "E",
-          "type": "DATASOURCE"
-        },
-        "action": "WRITE"
-      },
-      "resourceNamePattern": "E"
-    }
-  ]
-}
-```
-
-The default output format of this API is deprecated and in later versions will be switched to the output format used when the `?simplifyPermissions` flag is set. The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
-
-This API supports the following flags:
-
-- `?full`: The output will contain an extra `users` list, containing the users that currently have this role.
-
-```json
-"users":["druid"]
-```
-
-- `?simplifyPermissions`: The permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field. The `users` field will be null when `?full` is not specified.
-
-Example output:
-```json
-{
-  "name": "druidRole2",
-  "users": null,
-  "permissions": [
-    {
-      "resource": {
-        "name": "E",
-        "type": "DATASOURCE"
-      },
-      "action": "WRITE"
-    }
-  ]
-}
-```
-
+Return name and permissions for the role named {roleName}
 
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
 Create a new role with name {roleName}.
diff --git a/docs/latest/development/extensions-core/druid-kerberos.md b/docs/latest/development/extensions-core/druid-kerberos.md
index 99d6e45..46af7f4 100644
--- a/docs/latest/development/extensions-core/druid-kerberos.md
+++ b/docs/latest/development/extensions-core/druid-kerberos.md
@@ -54,16 +54,13 @@ The configuration examples in the rest of this document will use "kerberos" as t
 |`druid.auth.authenticator.kerberos.serverPrincipal`|`HTTP/_HOST@EXAMPLE.COM`| SPNego service principal used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.serverKeytab`|`/etc/security/keytabs/spnego.service.keytab`|SPNego service keytab used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.authToLocal`|`RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT`|It allows you to set a general rule for mapping principal names to local user names. It will be used if there is not an explicit mapping for the principal name that is being translated.|DEFAULT|No|
+|`druid.auth.authenticator.kerberos.excludedPaths`|`['/status','/health']`| Array of HTTP paths which do NOT need to be authenticated.|None|No|
 |`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid nodes running on the same machine with different ports as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
 |`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|
 
 As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (This is specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
 The special string _HOST will be replaced automatically with the value of config `druid.host`
 
-### `druid.auth.authenticator.kerberos.excludedPaths`
-
-In older releases, the Kerberos authenticator had an `excludedPaths` property that allowed the user to specify a list of paths where authentication checks should be skipped. This property has been removed from the Kerberos authenticator because the path exclusion functionality is now handled across all authenticators/authorizers by setting `druid.auth.unsecuredPaths`, as described in the [main auth documentation](../../design/auth.html).
-
 ### Auth to Local Syntax
 `druid.auth.authenticator.kerberos.authToLocal` allows you to set a general rules for mapping principal names to local user names.
 The syntax for mapping rules is `RULE:\[n:string](regexp)s/pattern/replacement/g`. The integer n indicates how many components the target principal should have. If this matches, then a string will be formed from string, substituting the realm of the principal for $0 and the n‘th component of the principal for $n. e.g. if the principal was druid/admin then `\[2:$2$1suffix]` would result in the string `admindruidsuffix`.
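
Read as a whole, the Kerberos authenticator rows in the table above translate into a `runtime.properties` block along the lines of the sketch below; the authenticator chain and `type` properties are assumed (they come from the main auth documentation rather than this hunk), and the authorizer name is a placeholder.

```
# Register an authenticator named "kerberos" (chain/type keys assumed from the main auth docs)
druid.auth.authenticatorChain=["kerberos"]
druid.auth.authenticator.kerberos.type=kerberos

# SPNego principal and keytab, as in the table above (example realm and keytab path)
druid.auth.authenticator.kerberos.serverPrincipal=HTTP/_HOST@EXAMPLE.COM
druid.auth.authenticator.kerberos.serverKeytab=/etc/security/keytabs/spnego.service.keytab
druid.auth.authenticator.kerberos.authToLocal=RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT

# Hand authenticated requests to a placeholder authorizer
druid.auth.authenticator.kerberos.authorizerName=MyBasicAuthorizer
```
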
diff --git a/docs/latest/development/extensions-core/kafka-ingestion.md b/docs/latest/development/extensions-core/kafka-ingestion.md
index c070e46..b415c27 100644
--- a/docs/latest/development/extensions-core/kafka-ingestion.md
+++ b/docs/latest/development/extensions-core/kafka-ingestion.md
@@ -214,61 +214,12 @@ offsets as reported by Kafka, the consumer lag per partition, as well as the agg
 consumer lag per partition may be reported as negative values if the supervisor has not received a recent latest offset
 response from Kafka. The aggregate lag value will always be >= 0.
 
-The status report also contains the supervisor's state and a list of recently thrown exceptions (reported as
-`recentErrors`, whose max size can be controlled using the `druid.supervisor.maxStoredExceptionEvents` configuration).
-There are two fields related to the supervisor's state - `state` and `detailedState`. The `state` field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the `detailedState` field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor's
-activities than the generic `state` field.
-
-The list of possible `state` values are: [`PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`, `UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`]
-
-The list of `detailedState` values and their corresponding `state` mapping is as follows:
-
-|Detailed State|Corresponding State|Description|
-|--------------|-------------------|-----------|
-|UNHEALTHY_SUPERVISOR|UNHEALTHY_SUPERVISOR|The supervisor has encountered errors on the past `druid.supervisor.unhealthinessThreshold` iterations|
-|UNHEALTHY_TASKS|UNHEALTHY_TASKS|The last `druid.supervisor.taskUnhealthinessThreshold` tasks have all failed|
-|UNABLE_TO_CONNECT_TO_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kafka and has not successfully connected in the past|
-|LOST_CONTACT_WITH_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kafka but has successfully connected in the past|
-|PENDING (first iteration only)|PENDING|The supervisor has been initialized and hasn't started connecting to the stream|
-|CONNECTING_TO_STREAM (first iteration only)|RUNNING|The supervisor is trying to connect to the stream and update partition data|
-|DISCOVERING_INITIAL_TASKS (first iteration only)|RUNNING|The supervisor is discovering already-running tasks|
-|CREATING_TASKS (first iteration only)|RUNNING|The supervisor is creating tasks and discovering state|
-|RUNNING|RUNNING|The supervisor has started tasks and is waiting for taskDuration to elapse|
-|SUSPENDED|SUSPENDED|The supervisor has been suspended|
-|STOPPING|STOPPING|The supervisor is stopping|
-
-On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of partitions from Kafka and determine the starting offset for each partition (either based on the
-  last processed offset if continuing, or starting from the beginning or ending of the stream if this is a new topic).
-  2) Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match
-  the supervisor's configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded `taskDuration` and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor's internal state.
-  7) Compare the list of healthy tasks to the requested `taskCount` and `replicas` configurations and create additional tasks if required.
-
-The `detailedState` field will show additional values (those marked with "first iteration only") the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can't connect to
-Kafka, it can't read from the Kafka topic, or it can't communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - `detailedState` will show a `RUNNING`
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.
-
 ### Getting Supervisor Ingestion Stats Report
 
 `GET /druid/indexer/v1/supervisor/<supervisorId>/stats` returns a snapshot of the current ingestion row counters for each task being managed by the supervisor, along with moving averages for the row counters.
 
 See [Task Reports: Row Stats](../../ingestion/reports.html#row-stats) for more information.
 
-### Supervisor Health Check
-
-`GET /druid/indexer/v1/supervisor/<supervisorId>/health` returns `200 OK` if the supervisor is healthy and
-`503 Service Unavailable` if it is unhealthy. Healthiness is determined by the supervisor's `state` (as returned by the
-`/status` endpoint) and the `druid.supervisor.*` Overlord configuration thresholds.
-
 ### Updating Existing Supervisors
 
 `POST /druid/indexer/v1/supervisor` can be used to update existing supervisor spec.
diff --git a/docs/latest/development/extensions-core/kinesis-ingestion.md b/docs/latest/development/extensions-core/kinesis-ingestion.md
index 0578dd2..3d406ed 100644
--- a/docs/latest/development/extensions-core/kinesis-ingestion.md
+++ b/docs/latest/development/extensions-core/kinesis-ingestion.md
@@ -113,7 +113,7 @@ A sample supervisor spec is shown below:
 }
 ```
 
-## Supervisor Spec
+## Supervisor Configuration
 
 |Field|Description|Required|
 |--------|-----------|---------|
@@ -218,58 +218,12 @@ To authenticate with AWS, you must provide your AWS access key and AWS secret ke
 ```
 -Ddruid.kinesis.accessKey=123 -Ddruid.kinesis.secretKey=456
 ```
-The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will
-look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance
-profile provider (in this order).
+The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance profile provider (in this order).
 
 ### Getting Supervisor Status Report
 
-`GET /druid/indexer/v1/supervisor/<supervisorId>/status` returns a snapshot report of the current state of the tasks 
-managed by the given supervisor. This includes the latest sequence numbers as reported by Kinesis. Unlike the Kafka
-Indexing Service, stats about lag are not yet supported.
-
-The status report also contains the supervisor's state and a list of recently thrown exceptions (reported as
-`recentErrors`, whose max size can be controlled using the `druid.supervisor.maxStoredExceptionEvents` configuration).
-There are two fields related to the supervisor's state - `state` and `detailedState`. The `state` field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the `detailedState` field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor's
-activities than the generic `state` field.
-
-The list of possible `state` values are: [`PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`, `UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`]
-
-The list of `detailedState` values and their corresponding `state` mapping is as follows:
-
-|Detailed State|Corresponding State|Description|
-|--------------|-------------------|-----------|
-|UNHEALTHY_SUPERVISOR|UNHEALTHY_SUPERVISOR|The supervisor has encountered errors on the past `druid.supervisor.unhealthinessThreshold` iterations|
-|UNHEALTHY_TASKS|UNHEALTHY_TASKS|The last `druid.supervisor.taskUnhealthinessThreshold` tasks have all failed|
-|UNABLE_TO_CONNECT_TO_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kinesis and has not successfully connected in the past|
-|LOST_CONTACT_WITH_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kinesis but has successfully connected in the past|
-|PENDING (first iteration only)|PENDING|The supervisor has been initialized and hasn't started connecting to the stream|
-|CONNECTING_TO_STREAM (first iteration only)|RUNNING|The supervisor is trying to connect to the stream and update partition data|
-|DISCOVERING_INITIAL_TASKS (first iteration only)|RUNNING|The supervisor is discovering already-running tasks|
-|CREATING_TASKS (first iteration only)|RUNNING|The supervisor is creating tasks and discovering state|
-|RUNNING|RUNNING|The supervisor has started tasks and is waiting for taskDuration to elapse|
-|SUSPENDED|SUSPENDED|The supervisor has been suspended|
-|STOPPING|STOPPING|The supervisor is stopping|
-
-On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of shards from Kinesis and determine the starting sequence number for each shard (either based on the
-  last processed sequence number if continuing, or starting from the beginning or ending of the stream if this is a new stream).
-  2) Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match
-  the supervisor's configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded `taskDuration` and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor's internal state.
-  7) Compare the list of healthy tasks to the requested `taskCount` and `replicas` configurations and create additional tasks if required.
-
-The `detailedState` field will show additional values (those marked with "first iteration only") the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can't connect to
-Kinesis, it can't read from the stream, or it can't communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - `detailedState` will show a `RUNNING`
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.
+`GET /druid/indexer/v1/supervisor/<supervisorId>/status` returns a snapshot report of the current state of the tasks managed by the given supervisor. This includes the latest
+sequence numbers as reported by Kinesis. Unlike the Kafka Indexing Service, stats about lag are not yet supported.
 
 ### Updating Existing Supervisors
 
@@ -436,4 +390,4 @@ requires the user to manually provide the Kinesis Client Library on the classpat
 compatible with Apache projects.
 
 To enable this feature, add the `amazon-kinesis-client` (tested on version `1.9.2`) jar file ([link](https://mvnrepository.com/artifact/com.amazonaws/amazon-kinesis-client/1.9.2)) under `dist/druid/extensions/druid-kinesis-indexing-service/`.
-Then when submitting a supervisor-spec, set `deaggregate` to true.
+Then when submitting a supervisor-spec, set `deaggregate` to true.
\ No newline at end of file
diff --git a/docs/latest/development/extensions-core/postgresql.md b/docs/latest/development/extensions-core/postgresql.md
index 26f77fc..07a2a78 100644
--- a/docs/latest/development/extensions-core/postgresql.md
+++ b/docs/latest/development/extensions-core/postgresql.md
@@ -83,5 +83,3 @@ In most cases, the configuration options map directly to the [postgres jdbc conn
 | `druid.metadata.postgres.ssl.sslRootCert` | The full path to the root certificate. | none | no |
 | `druid.metadata.postgres.ssl.sslHostNameVerifier` | The classname of the hostname verifier. | none | no |
 | `druid.metadata.postgres.ssl.sslPasswordCallback` | The classname of the SSL password provider. | none | no |
-| `druid.metadata.postgres.dbTableSchema` | druid meta table schema | `public` | no |
-
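
For context, the PostgreSQL SSL options above sit next to the usual metadata-store connection settings in `runtime.properties`; a sketch with placeholder host, database, and credentials (the connector keys and the `useSSL` flag are assumptions not shown in this hunk):

```
# Metadata store connection (placeholder URI and credentials)
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

# SSL settings from the table above (useSSL key assumed; root cert path is a placeholder)
druid.metadata.postgres.ssl.useSSL=true
druid.metadata.postgres.ssl.sslRootCert=/etc/ssl/certs/postgres-root.crt
```
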
diff --git a/docs/latest/development/extensions-core/s3.md b/docs/latest/development/extensions-core/s3.md
index 41b4b56..2fa1829 100644
--- a/docs/latest/development/extensions-core/s3.md
+++ b/docs/latest/development/extensions-core/s3.md
@@ -41,9 +41,14 @@ As an example, to set the region to 'us-east-1' through system properties:
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.s3.accessKey`|S3 access key.See [S3 authentication methods](#s3-authentication-methods) for more details|Can be ommitted according to authentication methods chosen.|
-|`druid.s3.secretKey`|S3 secret key.See [S3 authentication methods](#s3-authentication-methods) for more details|Can be ommitted according to authentication methods chosen.|
-|`druid.s3.fileSessionCredentials`|Path to properties file containing `sessionToken`, `accessKey` and `secretKey` value. One key/value pair per line (format `key=value`). See [S3 authentication methods](#s3-authentication-methods) for more details |Can be ommitted according to authentication methods chosen.|
+|`druid.s3.accessKey`|S3 access key.|Must be set.|
+|`druid.s3.secretKey`|S3 secret key.|Must be set.|
+|`druid.storage.bucket`|Bucket to store in.|Must be set.|
+|`druid.storage.baseKey`|Base key prefix to use, i.e. what directory.|Must be set.|
+|`druid.storage.disableAcl`|Boolean flag to disable ACL. If this is set to `false`, the full control would be granted to the bucket owner. This may require setting additional permissions. See [S3 permissions settings](#s3-permissions-settings).|false|
+|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](#server-side-encryption) for more details.|None|
+|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
+|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
 |`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used. This configuration would be ignored if `druid.s3.endpoint.url` is filled with a URL with a different protocol.|`https`|
 |`druid.s3.disableChunkedEncoding`|Disables chunked encoding. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#disableChunkedEncoding--) for details.|false|
 |`druid.s3.enablePathStyleAccess`|Enables path style access. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#enablePathStyleAccess--) for details.|false|
@@ -54,38 +59,12 @@ As an example, to set the region to 'us-east-1' through system properties:
 |`druid.s3.proxy.port`|Port on the proxy host to connect through.|None|
 |`druid.s3.proxy.username`|User name to use when connecting through a proxy.|None|
 |`druid.s3.proxy.password`|Password to use when connecting through a proxy.|None|
-|`druid.storage.bucket`|Bucket to store in.|Must be set.|
-|`druid.storage.baseKey`|Base key prefix to use, i.e. what directory.|Must be set.|
-|`druid.storage.archiveBucket`|S3 bucket name for archiving when running the *archive task*.|none|
-|`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
-|`druid.storage.disableAcl`|Boolean flag to disable ACL. If this is set to `false`, the full control would be granted to the bucket owner. This may require to set additional permissions. See [S3 permissions settings](#s3-permissions-settings).|false|
-|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](#server-side-encryption) for more details.|None|
-|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
-|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
-|`druid.storage.useS3aSchema`|If true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.|false|
 
 ### S3 permissions settings
 
 `s3:GetObject` and `s3:PutObject` are basically required for pushing/loading segments to/from S3.
 If `druid.storage.disableAcl` is set to `false`, then `s3:GetBucketAcl` and `s3:PutObjectAcl` are additionally required to set ACL for objects.
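+For illustration only (this IAM policy sketch is not part of the upstream docs; the bucket name is hypothetical), the permissions above could be granted like so:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": ["s3:GetObject", "s3:PutObject", "s3:GetBucketAcl", "s3:PutObjectAcl"],
+      "Resource": ["arn:aws:s3:::your-druid-bucket", "arn:aws:s3:::your-druid-bucket/*"]
+    }
+  ]
+}
+```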
 
-### S3 authentication methods
-
-To connect to your S3 bucket (whether deep storage bucket or source bucket), Druid use the following credentials providers chain
-
-|order|type|details|
-|--------|-----------|-------|
-|1|Druid config file|Based on your runtime.properties if it contains values `druid.s3.accessKey` and `druid.s3.secretKey` |
-|2|Custom properties file| Based on custom properties file where you can supply `sessionToken`, `accessKey` and `secretKey` values. This file is provided to Druid through `druid.s3.fileSessionCredentials` propertie|
-|3|Environment variables|Based on environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`|
-|4|Java system properties|Based on JVM properties `aws.accessKeyId` and `aws.secretKey` |
-|5|Profile informations|Based on credentials you may have on your druid instance (generally in `~/.aws/credentials`)|
-|6|Instance profile informations|Based on the instance profile you may have attached to your druid instance|
-
-You can find more informations about authentication method [here](https://docs.aws.amazon.com/fr_fr/sdk-for-java/v1/developer-guide/credentials.html)<br/>
-**Note :** *Order is important here as it indicates the precedence of authentication methods.<br/> 
-So if you are trying to use Instance profile informations, you **must not** set `druid.s3.accessKey` and `druid.s3.secretKey` in your Druid runtime.properties* 
-
 ## Server-side encryption
 
 You can enable [server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html) by setting
@@ -123,5 +102,3 @@ shardSpecs are not specified, and, in this case, caching can be useful. Prefetch
 |prefetchTriggerBytes|Threshold to trigger prefetching s3 objects.|maxFetchCapacityBytes / 2|no|
 |fetchTimeout|Timeout for fetching an s3 object.|60000|no|
 |maxFetchRetry|Maximum retry for fetching an s3 object.|3|no|
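+As a hedged example (the URI is hypothetical), these options appear directly on the `static-s3` firehose in an ingestion spec:
+
+```json
+{
+  "type": "static-s3",
+  "uris": ["s3://your-druid-bucket/path/to/data.json.gz"],
+  "prefetchTriggerBytes": 536870912,
+  "fetchTimeout": 60000,
+  "maxFetchRetry": 3
+}
+```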
-
-
diff --git a/docs/latest/development/extensions.md b/docs/latest/development/extensions.md
index c56ff4f..2190793 100644
--- a/docs/latest/development/extensions.md
+++ b/docs/latest/development/extensions.md
@@ -45,7 +45,7 @@ Core extensions are maintained by Druid committers.
 |druid-basic-security|Support for Basic HTTP authentication and role-based access control.|[link](../development/extensions-core/druid-basic-security.html)|
 |druid-bloom-filter|Support for providing Bloom filters in druid queries.|[link](../development/extensions-core/bloom-filter.html)|
 |druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../configuration/index.html#cache-configuration)|
-|druid-datasketches|Support for approximate counts and set operations with [DataSketches](https://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
+|druid-datasketches|Support for approximate counts and set operations with [DataSketches](http://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
 |druid-hdfs-storage|HDFS deep storage.|[link](../development/extensions-core/hdfs.html)|
 |druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, please use the [DataSketches quantiles aggregator](../development/extensions-core/datasketches-quantiles.html) from the `druid-datasketches` extension instead.|[link](../development/extensions-core/approximate-histograms.html)|
 |druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes(deprecated).|[link](../development/extensions-core/kafka-eight-firehose.html)|
@@ -96,9 +96,6 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |druid-thrift-extensions|Support thrift ingestion |[link](../development/extensions-contrib/thrift.html)|
 |druid-opentsdb-emitter|OpenTSDB metrics emitter |[link](../development/extensions-contrib/opentsdb-emitter.html)|
 |druid-moving-average-query|Support for [Moving Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate [Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions) in Druid queries.|[link](../development/extensions-contrib/moving-average-query.html)|
-|druid-influxdb-emitter|InfluxDB metrics emitter|[link](../development/extensions-contrib/influxdb-emitter.html)|
-|druid-momentsketch|Support for approximate quantile queries using the [momentsketch](https://github.com/stanford-futuredata/momentsketch) library|[link](../development/extensions-contrib/momentsketch-quantiles.html)|
-|druid-tdigestsketch|Support for approximate sketch aggregators based on [T-Digest](https://github.com/tdunning/t-digest)|[link](../development/extensions-contrib/tdigestsketch-quantiles.html)|
 
 ## Promoting Community Extension to Core Extension
 
diff --git a/docs/latest/development/geo.md b/docs/latest/development/geo.md
index 8d6a6cf..b482740 100644
--- a/docs/latest/development/geo.md
+++ b/docs/latest/development/geo.md
@@ -91,10 +91,3 @@ Bounds
 |--------|-----------|---------|
 |coords|Origin coordinates in the form [x, y, z, …]|yes|
 |radius|The float radius value|yes|
-
-### PolygonBound
-
-|property|description|required?|
-|--------|-----------|---------|
-|abscissa|Horizontal coordinate for corners of the polygon|yes|
-|ordinate|Vertical coordinate for corners of the polygon|yes|
diff --git a/docs/latest/development/modules.md b/docs/latest/development/modules.md
index 44ce7bd..43a9ea8 100644
--- a/docs/latest/development/modules.md
+++ b/docs/latest/development/modules.md
@@ -39,7 +39,7 @@ Druid's extensions leverage Guice in order to add things at runtime.  Basically,
    and `org.apache.druid.query.aggregation.BufferAggregator`.
 1. Add PostAggregators by extending `org.apache.druid.query.aggregation.PostAggregator`.
 1. Add ExtractionFns by extending `org.apache.druid.query.extraction.ExtractionFn`.
-1. Add Complex metrics by extending `org.apache.druid.segment.serde.ComplexMetricSerde`.
+1. Add Complex metrics by extending `org.apache.druid.segment.serde.ComplexMetricSerde`.
 1. Add new Query types by extending `org.apache.druid.query.QueryRunnerFactory`, `org.apache.druid.query.QueryToolChest`, and
    `org.apache.druid.query.Query`.
 1. Add new Jersey resources by calling `Jerseys.addResource(binder, clazz)`.
diff --git a/docs/latest/ingestion/compaction.md b/docs/latest/ingestion/compaction.md
index 759cd21..1c5dfe4 100644
--- a/docs/latest/ingestion/compaction.md
+++ b/docs/latest/ingestion/compaction.md
@@ -33,6 +33,7 @@ Compaction tasks merge all segments of the given interval. The syntax is:
     "dataSource": <task_datasource>,
     "interval": <interval to specify segments to be merged>,
     "dimensions" <custom dimensionsSpec>,
+    "keepSegmentGranularity": <true or false>,
     "segmentGranularity": <segment granularity after compaction>,
     "targetCompactionSizeBytes": <target size of compacted segments>
     "tuningConfig" <index task tuningConfig>,
@@ -49,10 +50,21 @@ Compaction tasks merge all segments of the given interval. The syntax is:
 |`dimensionsSpec`|Custom dimensionsSpec. Compaction task will use this dimensionsSpec if exist instead of generating one. See below for more details.|No|
 |`metricsSpec`|Custom metricsSpec. Compaction task will use this metricsSpec if specified rather than generating one.|No|
 |`segmentGranularity`|If this is set, compactionTask will change the segment granularity for the given interval. See [segmentGranularity of Uniform Granularity Spec](./ingestion-spec.html#uniform-granularity-spec) for more details. See the below table for the behavior.|No|
+|`keepSegmentGranularity`|Deprecated. Please use `segmentGranularity` instead. See the below table for its behavior.|No|
 |`targetCompactionSizeBytes`|Target segment size after compaction. Cannot be used with `maxRowsPerSegment`, `maxTotalRows`, and `numShards` in tuningConfig.|No|
 |`tuningConfig`|[Index task tuningConfig](../ingestion/native_tasks.html#tuningconfig)|No|
 |`context`|[Task context](../ingestion/locking-and-priority.html#task-context)|No|
 
+### Used segmentGranularity based on `segmentGranularity` and `keepSegmentGranularity`
+
+|SegmentGranularity|keepSegmentGranularity|Used SegmentGranularity|
+|------------------|----------------------|-----------------------|
+|Non-null|True|Error|
+|Non-null|False|Given segmentGranularity|
+|Non-null|Null|Given segmentGranularity|
+|Null|True|Original segmentGranularity|
+|Null|False|ALL segmentGranularity. All events will fall into the single time chunk.|
+|Null|Null|Original segmentGranularity|
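+For instance, setting `"segmentGranularity": "DAY"` while leaving `keepSegmentGranularity` unset matches the third row of the table above; a hedged sketch of such a task (the dataSource and interval are hypothetical) is:
+
+```json
+{
+  "type": "compact",
+  "dataSource": "wikipedia_hypothetical",
+  "interval": "2017-01-01/2018-01-01",
+  "segmentGranularity": "DAY"
+}
+```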
 
 An example of compaction task is
 
@@ -65,12 +77,12 @@ An example of compaction task is
 ```
 
 This compaction task reads _all segments_ of the interval `2017-01-01/2018-01-01` and results in new segments.
-Since `segmentGranularity` is null, the original segment granularity will be remained and not changed after compaction.
+Since both `segmentGranularity` and `keepSegmentGranularity` are null, the original segment granularity is retained and not changed after compaction.
 To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.html#compaction-dynamic-configuration) or [numShards](../ingestion/native_tasks.html#tuningconfig).
 Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year.
 
 A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters.
-For example, its `firehose` is always the [ingestSegmentFirehose](./firehose.html#ingestsegmentfirehose), and `dimensionsSpec` and `metricsSpec`
+For example, its `firehose` is always the [ingestSegmentFirehose](./firehose.html#ingestsegmentfirehose), and `dimensionsSpec` and `metricsSpec`
 include all dimensions and metrics of the input segments by default.
 
 Compaction tasks will exit with a failure status code, without doing anything, if the interval you specify has no
diff --git a/docs/latest/ingestion/hadoop-vs-native-batch.md b/docs/latest/ingestion/hadoop-vs-native-batch.md
index cde88e5..85373a0 100644
--- a/docs/latest/ingestion/hadoop-vs-native-batch.md
+++ b/docs/latest/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No dependency | No dependency |
-| Supported [rollup modes](./index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range partitioning](./hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or Druid dataSource | All implemented [firehoses](./firehose.html) | All implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | Currently not supported | Supported |
diff --git a/docs/latest/ingestion/hadoop.md b/docs/latest/ingestion/hadoop.md
index b9a6d72..ab1963b 100644
--- a/docs/latest/ingestion/hadoop.md
+++ b/docs/latest/ingestion/hadoop.md
@@ -198,7 +198,6 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |useExplicitVersion|Boolean|Forces HadoopIndexTask to use version.|no (default = false)|
 |logParseExceptions|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|false|no|
 |maxParseExceptions|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides `ignoreInvalidRows` if `maxParseExceptions` is defined.|unlimited|no|
-|useYarnRMJobStatusFallback|Boolean|If the Hadoop jobs created by the indexing task are unable to retrieve their completion status from the JobHistory server, and this parameter is true, the indexing task will try to fetch the application status from `http://<yarn-rm-address>/ws/v1/cluster/apps/<application-id>`, where `<yarn-rm-address>` is the value of `yarn.resourcemanager.webapp.address` in your Hadoop configuration. This flag is intended as a fallback for cases where an indexing tas [...]
 
 ### jobProperties field of TuningConfig
 
diff --git a/docs/latest/misc/math-expr.md b/docs/latest/misc/math-expr.md
index 9a686a1..c207f01 100644
--- a/docs/latest/misc/math-expr.md
+++ b/docs/latest/misc/math-expr.md
@@ -25,8 +25,7 @@ title: "Apache Druid (incubating) Expressions"
 # Apache Druid (incubating) Expressions
 
 <div class="note info">
-This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to
- have significant inefficiencies.
+This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to have significant inefficiencies.
 </div>
  
 This expression language supports the following operators (listed in decreasing order of precedence).
@@ -40,29 +39,14 @@ This expression language supports the following operators (listed in decreasing
 |<, <=, >, >=, ==, !=|Binary Comparison|
 |&&, &#124;|Binary Logical AND, OR|
 
-Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise
-it is interpreted as a long. That means, always add a '.' to your number if you want it interpreted as a double value.
-String literals should be quoted by single quotation marks.
+Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise it is interpreted as a long. That means, always add a '.' to your number if you want it interpreted as a double value. String literals should be quoted by single quotation marks.
 
-Additionally, the expression language supports long, double, and string arrays. Array literals are created by wrapping
-square brackets around a list of scalar literals values delimited by a comma or space character. All values in an array
-literal must be the same type.
+Multi-value types are not fully supported yet. Expressions may behave inconsistently on multi-value types, and you
+should not rely on the behavior in this case to stay the same in future releases.
 
-Expressions can contain variables. Variable names may contain letters, digits, '\_' and '$'. Variable names must not
-begin with a digit. To escape other special characters, you can quote it with double quotation marks.
-
-For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string
-type, it's the evaluation result of 'Boolean.valueOf(string)'.
-
-Multi-value string dimensions are supported and may be treated as either scalar or array typed values. When treated as
-a scalar type, an expression will automatically be transformed to apply the scalar operation across all values of the
-multi-valued type, to mimic Druid's native behavior. Values that result in arrays will be coerced back into the native
-Druid string type for aggregation. Druid aggregations on multi-value string dimensions on the individual values, _not_
-the 'array', behaving similar to the `unnest` operator available in many SQL dialects. However, by using the
-`array_to_string` function, aggregations may be done on a stringified version of the complete array, allowing the
-complete row to be preserved. Using `string_to_array` in an expression post-aggregator, allows transforming the
-stringified dimension back into the true native array type.
+Expressions can contain variables. Variable names may contain letters, digits, '\_' and '$'. Variable names must not begin with a digit. To escape other special characters, you can quote it with double quotation marks.
 
+For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string type, it's the evaluation result of 'Boolean.valueOf(string)'.
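+As a hedged illustration (the `commentLength` column is hypothetical), an expression is typically embedded in a native query, for example as an expression virtual column:
+
+```json
+{
+  "type": "expression",
+  "name": "longCommentFlag",
+  "expression": "if(cast(commentLength, 'LONG') > 100, 1, 0)",
+  "outputType": "LONG"
+}
+```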
 
 The following built-in functions are available.
 
@@ -70,7 +54,7 @@ The following built-in functions are available.
 
 |name|description|
 |----|-----------|
-|cast|cast(expr,'LONG' or 'DOUBLE' or 'STRING' or 'LONG_ARRAY', or 'DOUBLE_ARRAY' or 'STRING_ARRAY') returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null). |
+|cast|cast(expr,'LONG' or 'DOUBLE' or 'STRING') returns expr with the specified type. An exception can be thrown. |
 |if|if(predicate,then,else) returns 'then' if 'predicate' evaluates to a positive number, otherwise it returns 'else' |
 |nvl|nvl(expr,expr-for-null) returns 'expr-for-null' if 'expr' is null (or empty string for string type) |
 |like|like(expr, pattern[, escape]) is equivalent to SQL `expr LIKE pattern`|
@@ -162,35 +146,3 @@ See javadoc of java.lang.Math for detailed explanation for each function.
 |todegrees|todegrees(x) converts an angle measured in radians to an approximately equivalent angle measured in degrees|
 |toradians|toradians(x) converts an angle measured in degrees to an approximately equivalent angle measured in radians|
 |ulp|ulp(x) would return the size of an ulp of the argument x|
-
-
-## Array Functions 
-
-| function | description |
-| --- | --- |
-| `array_length(arr)` | returns length of array expression |
-| `array_offset(arr,long)` | returns the array element at the 0 based index supplied, or null for an out of range index|
-| `array_ordinal(arr,long)` | returns the array element at the 1 based index supplied, or null for an out of range index |
-| `array_contains(arr,expr)` | returns true if the array contains the element specified by expr, or contains all elements specified by expr if expr is an array |
-| `array_overlap(arr1,arr2)` | returns true if arr1 and arr2 have any elements in common |
-| `array_offset_of(arr,expr)` | returns the 0 based index of the first occurrence of expr in the array, or `null` if no matching elements exist in the array. |
-| `array_ordinal_of(arr,expr)` | returns the 1 based index of the first occurrence of expr in the array, or `null` if no matching elements exist in the array. |
-| `array_append(arr1,expr)` | appends expr to arr, the resulting array type determined by the type of the first array |
-| `array_concat(arr1,arr2)` | concatenates 2 arrays, the resulting array type determined by the type of the first array |
-| `array_to_string(arr,str)` | joins all elements of arr by the delimiter specified by str |
-| `string_to_array(str1,str2)` | splits str1 into an array on the delimiter specified by str2 |
-| `array_slice(arr,start,end)` | return the subarray of arr from the 0 based index start(inclusive) to end(exclusive), or `null`, if start is less than 0, greater than length of arr or less than end|
-| `array_prepend(expr,arr)` | adds expr to arr at the beginning, the resulting array type determined by the type of the array |
-
-
-## Apply Functions
-
-| function | description |
-| --- | --- |
-| `map(lambda,arr)` | applies a transform specified by a single argument lambda expression to all elements of arr, returning a new array |
-| `cartesian_map(lambda,arr1,arr2,...)` | applies a transform specified by a multi argument lambda expression to all elements of the cartesian product of all input arrays, returning a new array; the number of lambda arguments and array inputs must be the same |
-| `filter(lambda,arr)` | filters arr by a single argument lambda, returning a new array with all matching elements, or null if no elements match |
-| `fold(lambda,arr)` | folds a 2 argument lambda across arr. The first argument of the lambda is the array element and the second the accumulator, returning a single accumulated value. |
-| `cartesian_fold(lambda,arr1,arr2,...)` | folds a multi argument lambda across the cartesian product of all input arrays. The first arguments of the lambda is the array element and the last is the accumulator, returning a single accumulated value. |
-| `any(lambda,arr)` | returns true if any element in the array matches the lambda expression |
-| `all(lambda,arr)` | returns true if all elements in the array matches the lambda expression |
diff --git a/docs/latest/operations/api-reference.md b/docs/latest/operations/api-reference.md
index 473282f..9326f2b 100644
--- a/docs/latest/operations/api-reference.md
+++ b/docs/latest/operations/api-reference.md
@@ -510,22 +510,8 @@ Returns a list of objects of the currently active supervisors.
 |Field|Type|Description|
 |---|---|---|
 |`id`|String|supervisor unique identifier|
-|`state`|String|basic state of the supervisor. Available states:`UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`, `PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`|
-|`detailedState`|String|supervisor specific state. (See documentation of specific supervisor for details)|
-|`healthy`|Boolean|true or false indicator of overall supervisor health|
 |`spec`|SupervisorSpec|json specification of supervisor (See Supervisor Configuration for details)|
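+A hedged sketch of a response (the id and spec values are hypothetical placeholders):
+
+```json
+[
+  {
+    "id": "example_supervisor",
+    "spec": { "type": "kafka", "dataSchema": { "dataSource": "example_datasource" } }
+  }
+]
+```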
 
-* `/druid/indexer/v1/supervisor?state=true`
-
-Returns a list of objects of the currently active supervisors and their current state.
-
-|Field|Type|Description|
-|---|---|---|
-|`id`|String|supervisor unique identifier|
-|`state`|String|basic state of the supervisor. Available states:`UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`, `PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`|
-|`detailedState`|String|supervisor specific state. (See documentation of specific supervisor for details)|
-|`healthy`|Boolean|true or false indicator of overall supervisor health|
-
 * `/druid/indexer/v1/supervisor/<supervisorId>`
 
 Returns the current spec for the supervisor with the provided ID.
diff --git a/docs/latest/operations/recommendations.md b/docs/latest/operations/recommendations.md
index 61cb871..311b46d 100644
--- a/docs/latest/operations/recommendations.md
+++ b/docs/latest/operations/recommendations.md
@@ -84,8 +84,10 @@ Timeseries and TopN queries are much more optimized and significantly faster tha
 Segments should generally be between 300MB and 700MB in size. Too many small segments result in inefficient CPU utilization, and
 too many large segments impact query performance, most notably with TopN queries.
 
-# FAQs and Guides
+# Read FAQs
 
-1) The [Ingestion FAQ](../ingestion/faq.html) provides help with common ingestion problems.
+You should read about common problems people run into here:
 
-2) The [Basic Cluster Tuning Guide](../operations/basic-cluster-tuning.html) offers introductory guidelines for tuning your Druid cluster.
+1) [Ingestion-FAQ](../ingestion/faq.html)
+
+2) [Performance-FAQ](../operations/performance-faq.html)
diff --git a/docs/latest/querying/aggregations.md b/docs/latest/querying/aggregations.md
index ba9b80e..b6b3e03 100644
--- a/docs/latest/querying/aggregations.md
+++ b/docs/latest/querying/aggregations.md
@@ -271,7 +271,7 @@ JavaScript-based functionality is disabled by default. Please refer to the Druid
 
 #### DataSketches Theta Sketch
 
-The [DataSketches Theta Sketch](../development/extensions-core/datasketches-theta.html) extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the [datasketches](https://datasketches.github.io/) library.
+The [DataSketches Theta Sketch](../development/extensions-core/datasketches-theta.html) extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the [datasketches](http://datasketches.github.io/) library.
 
 #### DataSketches HLL Sketch
 
@@ -304,7 +304,7 @@ Note the DataSketches Theta and HLL aggregators currently only support single-co
 
 #### DataSketches Quantiles Sketch
 
-The [DataSketches Quantiles Sketch](../development/extensions-core/datasketches-quantiles.html) extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the [datasketches](https://datasketches.github.io/) library.
+The [DataSketches Quantiles Sketch](../development/extensions-core/datasketches-quantiles.html) extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the [datasketches](http://datasketches.github.io/) library.
 
 We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy.
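+For reference, a hedged sketch of such an aggregator in a native query (the names are hypothetical):
+
+```json
+{
+  "type": "quantilesDoublesSketch",
+  "name": "latency_sketch",
+  "fieldName": "latencyMs",
+  "k": 128
+}
+```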
 
diff --git a/docs/latest/querying/granularities.md b/docs/latest/querying/granularities.md
index 4953ead..7dc02ef 100644
--- a/docs/latest/querying/granularities.md
+++ b/docs/latest/querying/granularities.md
@@ -176,10 +176,11 @@ If you change the granularity to `none`, you will get the same results as settin
 } ]
 ```
 
-Having a query time `granularity` that is smaller than the `queryGranularity` parameter set at
-[ingestion time]((../ingestion/ingestion-spec.html#granularityspec)) is unreasonable because information about that
-smaller granularity is not present in the indexed data. So, if the query time granularity is smaller than the ingestion
-time query granularity, Druid produces results that are equivalent to having set `granularity` to `queryGranularity`.
+Having a query granularity smaller than the ingestion granularity doesn't make sense,
+because information about that smaller granularity is not present in the indexed data.
+So, if the query granularity is smaller than the ingestion granularity, Druid produces
+results that are equivalent to having set the query granularity to the ingestion granularity.
+See `queryGranularity` in [Ingestion Spec](../ingestion/ingestion-spec.html#granularityspec).
 
 
 If you change the granularity to `all`, you will get everything aggregated in 1 bucket,
diff --git a/docs/latest/querying/lookups.md b/docs/latest/querying/lookups.md
index 7af2bcd..b54f769 100644
--- a/docs/latest/querying/lookups.md
+++ b/docs/latest/querying/lookups.md
@@ -292,10 +292,7 @@ Using the prior example, a `GET` to `/druid/coordinator/v1/lookups/config/realti
 ```
 
 ## Delete Lookup
-A `DELETE` to `/druid/coordinator/v1/lookups/config/{tier}/{id}` will remove that lookup from the cluster. If it was last lookup in the tier, then tier is deleted as well.
-
-## Delete Tier
-A `DELETE` to `/druid/coordinator/v1/lookups/config/{tier}` will remove that tier from the cluster.
+A `DELETE` to `/druid/coordinator/v1/lookups/config/{tier}/{id}` will remove that lookup from the cluster.
 
 ## List tier names
 A `GET` to `/druid/coordinator/v1/lookups/config` will return a list of known tier names in the dynamic configuration.
diff --git a/docs/latest/querying/scan-query.md b/docs/latest/querying/scan-query.md
index bb07b6a..1b9d360 100644
--- a/docs/latest/querying/scan-query.md
+++ b/docs/latest/querying/scan-query.md
@@ -59,7 +59,7 @@ The following are the main parameters for Scan queries:
 |resultFormat|How the results are represented: list, compactedList or valueVector. Currently only `list` and `compactedList` are supported. Default is `list`|no|
 |filter|See [Filters](../querying/filters.html)|no|
 |columns|A String array of dimensions and metrics to scan. If left empty, all dimensions and metrics are returned.|no|
-|batchSize|The maximum number of rows buffered before being returned to the client. Default is `20480`|no|
+|batchSize|How many rows are buffered before being returned to the client. Default is `20480`|no|
 |limit|How many rows to return. If not specified, all rows will be returned.|no|
 |order|The ordering of returned rows based on timestamp.  "ascending", "descending", and "none" (default) are supported.  Currently, "ascending" and "descending" are only supported for queries where the `__time` column is included in the `columns` field and the requirements outlined in the [time ordering](#time-ordering) section are met.|none|
 |legacy|Return results consistent with the legacy "scan-query" contrib extension. Defaults to the value set by `druid.query.scan.legacy`, which in turn defaults to false. See [Legacy mode](#legacy-mode) for details.|no|
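+A minimal Scan query using some of these parameters might look like the following sketch (the dataSource and interval are hypothetical):
+
+```json
+{
+  "queryType": "scan",
+  "dataSource": "wikipedia_hypothetical",
+  "intervals": ["2013-01-01/2013-01-02"],
+  "resultFormat": "compactedList",
+  "batchSize": 20480,
+  "limit": 100
+}
+```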
@@ -188,9 +188,9 @@ the query context (see the Query Context Properties section).
 The Scan query supports a legacy mode designed for protocol compatibility with the former scan-query contrib extension.
 In legacy mode you can expect the following behavior changes:
 
-- The `__time` column is returned as `"timestamp"` rather than `"__time"`. This will take precedence over any other column
-you may have that is named `"timestamp"`.
-- The `__time` column is included in the list of columns even if you do not specifically ask for it.
+- The __time column is returned as "timestamp" rather than "__time". This will take precedence over any other column
+you may have that is named "timestamp".
+- The __time column is included in the list of columns even if you do not specifically ask for it.
 - Timestamps are returned as ISO8601 time strings rather than integers (milliseconds since 1970-01-01 00:00:00 UTC).
 
 Legacy mode can be triggered either by passing `"legacy" : true` in your query JSON, or by setting
diff --git a/docs/latest/querying/sql.md b/docs/latest/querying/sql.md
index 8921fe0..6c68a5f 100644
--- a/docs/latest/querying/sql.md
+++ b/docs/latest/querying/sql.md
@@ -22,12 +22,12 @@ title: "SQL"
   ~ under the License.
   -->
 
-  <!-- 
-    The format of the tables that describe the functions and operators 
-    should not be changed without updating the script create-sql-function-doc 
-    in web-console/script/create-sql-function-doc, because the script detects
-    patterns in this markdown file and parse it to TypeScript file for web console
-   -->
+<!--
+  The format of the tables that describe the functions and operators
+  should not be changed without updating the script create-sql-function-doc
+  in web-console/script/create-sql-function-doc, because the script detects
+  patterns in this markdown file and parse it to TypeScript file for web console
+-->
 
 # SQL
 
@@ -42,6 +42,9 @@ queries on the query Broker (the first process you query), which are then passed
 queries. Other than the (slight) overhead of translating SQL on the Broker, there isn't an additional performance
 penalty versus native queries.
 
+To enable Druid SQL, make sure you have set `druid.sql.enable = true` either in your common.runtime.properties or your
+Broker's runtime.properties.
+
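+Once enabled, SQL queries can be POSTed as JSON to `/druid/v2/sql/` on the Broker; a hedged sketch (the datasource name is hypothetical):
+
+```json
+{
+  "query": "SELECT COUNT(*) FROM \"wikipedia_hypothetical\" WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY"
+}
+```
+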
 ## Query syntax
 
 Each Druid datasource appears as a table in the "druid" schema. This is also the default schema, so Druid datasources
@@ -129,13 +132,6 @@ Only the COUNT aggregation can accept DISTINCT.
 |`APPROX_QUANTILE_DS(expr, probability, [k])`|Computes approximate quantiles on numeric or [Quantiles sketch](../development/extensions-core/datasketches-quantiles.html) exprs. The "probability" should be between 0 and 1 (exclusive). The `k` parameter is described in the Quantiles sketch documentation. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use this function.|
 |`APPROX_QUANTILE_FIXED_BUCKETS(expr, probability, numBuckets, lowerLimit, upperLimit, [outlierHandlingMode])`|Computes approximate quantiles on numeric or [fixed buckets histogram](../development/extensions-core/approximate-histograms.html#fixed-buckets-histogram) exprs. The "probability" should be between 0 and 1 (exclusive). The `numBuckets`, `lowerLimit`, `upperLimit`, and `outlierHandlingMode` parameters are described in the fixed buckets histogram documentation. The [approximate hi [...]
 |`BLOOM_FILTER(expr, numEntries)`|Computes a bloom filter from values produced by `expr`, with `numEntries` maximum number of distinct values before false positve rate increases. See [bloom filter extension](../development/extensions-core/bloom-filter.html) documentation for additional details.|
-|`VAR_POP(expr)`|Computes variance population of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-|`VAR_SAMP(expr)`|Computes variance sample of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-|`VARIANCE(expr)`|Computes variance sample of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-|`STDDEV_POP(expr)`|Computes standard deviation population of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-|`STDDEV_SAMP(expr)`|Computes standard deviation sample of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-|`STDDEV(expr)`|Computes standard deviation sample of `expr`. See [stats extension](../development/extensions-core/stats.html) documentation for additional details.|
-
 
 For advice on choosing approximate aggregation functions, check out our [approximate aggregations documentation](aggregations.html#approx).
 
@@ -218,10 +214,6 @@ context parameter "sqlTimeZone" to the name of another time zone, like "America/
 the connection time zone, some functions also accept time zones as parameters. These parameters always take precedence
 over the connection time zone.
 
-Literal timestamps in the connection time zone can be written using `TIMESTAMP '2000-01-01 00:00:00'` syntax. The
-simplest way to write literal timestamps in other time zones is to use TIME_PARSE, like
-`TIME_PARSE('2000-02-01 00:00:00', NULL, 'America/Los_Angeles')`.
-
 |Function|Notes|
 |--------|-----|
 |`CURRENT_TIMESTAMP`|Current timestamp in the connection's time zone.|
@@ -238,7 +230,6 @@ simplest way to write literal timestamps in other time zones is to use TIME_PARS
 |`FLOOR(timestamp_expr TO <unit>)`|Rounds down a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`CEIL(timestamp_expr TO <unit>)`|Rounds up a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`TIMESTAMPADD(<unit>, <count>, <timestamp>)`|Equivalent to `timestamp + count * INTERVAL '1' UNIT`.|
-|`TIMESTAMPDIFF(<unit>, <timestamp1>, <timestamp2>)`|Returns the (signed) number of `unit` between `timestamp1` and `timestamp2`. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`timestamp_expr { + &#124; - } <interval_expr>`|Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like `INTERVAL '2' HOUR`, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.|
 
 ### Comparison operators
@@ -299,12 +290,11 @@ Additionally, some Druid features are not supported by the SQL language. Some un
 
 Druid natively supports five basic column types: "long" (64 bit signed int), "float" (32 bit float), "double" (64 bit
 float) "string" (UTF-8 encoded strings), and "complex" (catch-all for more exotic data types like hyperUnique and
-approxHistogram columns).
+approxHistogram columns). Timestamps (including the `__time` column) are stored as longs, with the value being the
+number of milliseconds since 1 January 1970 UTC.
 
-Timestamps (including the `__time` column) are treated by Druid as longs, with the value being the number of
-milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. Therefore, timestamps in Druid do not carry any
-timezone information, but only carry information about the exact moment in time they represent. See the
-[Time functions](#time-functions) section for more information about timestamp handling.
+At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators. The reverse will not
+happen: 64-bit floats are not narrowed to 32-bit.
 
 Druid generally treats NULLs and empty strings interchangeably, rather than according to the SQL standard. As such,
 Druid SQL only has partial support for NULLs. For example, the expressions `col IS NULL` and `col = ''` are equivalent,
@@ -316,7 +306,7 @@ datasource, then it will be treated as zero for rows from those segments.
 
 For mathematical operations, Druid SQL will use integer math if all operands involved in an expression are integers.
 Otherwise, Druid will switch to floating point math. You can force this to happen by casting one of your operands
-to FLOAT. At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators.
+to FLOAT.
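+For instance (a hedged sketch; the table and column names are hypothetical), a cast forces floating point division:
+
+```json
+{
+  "query": "SELECT CAST(added AS FLOAT) / delta AS ratio FROM \"wikipedia_hypothetical\""
+}
+```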
 
 The following table describes how SQL types map onto Druid types during query runtime. Casts between two SQL types
 that have the same Druid runtime type will have no effect, other than exceptions noted in the table. Casts between two
@@ -648,7 +638,7 @@ ORDER BY 2 DESC
 ```
 
 ### SERVERS table
-Servers table lists all discovered servers in the cluster.
+Servers table lists all data servers (any server that hosts a segment). It includes both Historicals and Peons.
 
 |Column|Type|Notes|
 |------|-----|-----|
@@ -656,10 +646,10 @@ Servers table lists all discovered servers in the cluster.
 |host|STRING|Hostname of the server|
 |plaintext_port|LONG|Unsecured port of the server, or -1 if plaintext traffic is disabled|
 |tls_port|LONG|TLS port of the server, or -1 if TLS is disabled|
-|server_type|STRING|Type of Druid service. Possible values include: COORDINATOR, OVERLORD,  BROKER, ROUTER, HISTORICAL, MIDDLE_MANAGER or PEON.|
-|tier|STRING|Distribution tier see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration). Only valid for HISTORICAL type, for other types it's null|
-|current_size|LONG|Current size of segments in bytes on this server. Only valid for HISTORICAL type, for other types it's 0|
-|max_size|LONG|Max size in bytes this server recommends to assign to segments see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration). Only valid for HISTORICAL type, for other types it's 0|
+|server_type|STRING|Type of Druid service. Possible values include: Historical, realtime and indexer_executor (Peon).|
+|tier|STRING|Distribution tier; see [druid.server.tier](../configuration/index.html#Historical-General-Configuration)|
+|current_size|LONG|Current size of segments in bytes on this server|
+|max_size|LONG|Max size in bytes this server recommends to assign to segments; see [druid.server.maxSize](../configuration/index.html#Historical-General-Configuration)|
 
 To retrieve information about all servers, use the query:
 
@@ -676,22 +666,22 @@ SERVER_SEGMENTS is used to join servers with segments table
 |server|STRING|Server name in format host:port (Primary key of [servers table](#SERVERS-table))|
 |segment_id|STRING|Segment identifier (Primary key of [segments table](#SEGMENTS-table))|
 
-JOIN between "servers" and "segments" can be used to query the number of segments for a specific datasource, 
+JOIN between "servers" and "segments" can be used to query the number of segments for a specific datasource,
 grouped by server, example query:
 
 ```sql
-SELECT count(segments.segment_id) as num_segments from sys.segments as segments 
-INNER JOIN sys.server_segments as server_segments 
-ON segments.segment_id  = server_segments.segment_id 
-INNER JOIN sys.servers as servers 
+SELECT count(segments.segment_id) as num_segments from sys.segments as segments
+INNER JOIN sys.server_segments as server_segments
+ON segments.segment_id  = server_segments.segment_id
+INNER JOIN sys.servers as servers
 ON servers.server = server_segments.server
-WHERE segments.datasource = 'wikipedia' 
+WHERE segments.datasource = 'wikipedia'
 GROUP BY servers.server;
 ```
 
 ### TASKS table
 
-The tasks table provides information about active and recently-completed indexing tasks. For more information 
+The tasks table provides information about active and recently-completed indexing tasks. For more information
 check out [ingestion tasks](../ingestion/tasks.html)
 
 |Column|Type|Notes|
@@ -724,7 +714,7 @@ The Druid SQL server is configured through the following properties on the Broke
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|true|
+|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|false|
 |`druid.sql.avatica.enable`|Whether to enable JDBC querying at `/druid/v2/sql/avatica/`.|true|
 |`druid.sql.avatica.maxConnections`|Maximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.|25|
 |`druid.sql.avatica.maxRowsPerFrame`|Maximum number of rows to return in a single JDBC frame. Setting this property to -1 indicates that no row limit should be applied. Clients can optionally specify a row limit in their requests; if a client specifies a row limit, the lesser value of the client-provided limit and `maxRowsPerFrame` will be used.|5,000|
@@ -754,4 +744,4 @@ Broker will emit the following metrics for SQL.
 
 ## Authorization Permissions
 
-Please see [Defining SQL permissions](../development/extensions-core/druid-basic-security.html#sql-permissions) for information on what permissions are needed for making SQL queries in a secured cluster.
+Please see [Defining SQL permissions](../development/extensions-core/druid-basic-security.html#sql-permissions) for information on what permissions are needed for making SQL queries in a secured cluster.
\ No newline at end of file
diff --git a/docs/latest/querying/timeseriesquery.md b/docs/latest/querying/timeseriesquery.md
index 39fea97..9feef88 100644
--- a/docs/latest/querying/timeseriesquery.md
+++ b/docs/latest/querying/timeseriesquery.md
@@ -76,7 +76,6 @@ There are 7 main parts to a timeseries query:
 |filter|See [Filters](../querying/filters.html)|no|
 |aggregations|See [Aggregations](../querying/aggregations.html)|no|
 |postAggregations|See [Post Aggregations](../querying/post-aggregations.html)|no|
-|limit|An integer that limits the number of results. The default is unlimited.|no|
 |context|Can be used to modify query behavior, including [grand totals](#grand-totals) and [zero-filling](#zero-filling). See also [Context](../querying/query-context.html) for parameters that apply to all query types.|no|
 
 To pull it all together, the above query would return 2 data points, one for each day between 2012-01-01 and 2012-01-03, from the "sample\_datasource" table. Each data point would be the (long) sum of sample\_fieldName1, the (double) sum of sample\_fieldName2 and the (double) result of sample\_fieldName1 divided by sample\_fieldName2 for the filter set. The output looks like this:
diff --git a/docs/latest/toc.md b/docs/latest/toc.md
index 6ee4908..57ed632 100644
--- a/docs/latest/toc.md
+++ b/docs/latest/toc.md
@@ -48,7 +48,7 @@ layout: toc
     * [Clustering](/docs/VERSION/tutorials/cluster.html)
     * Further examples
       * [Single-server deployment](/docs/VERSION/operations/single-server.html)
-      * [Clustered deployment](/docs/VERSION/tutorials/cluster.html#fresh-deployment)
+      * [Clustered deployment](/docs/VERSION/operations/example-cluster.html)
 
 ## Data Ingestion
   * [Ingestion overview](/docs/VERSION/ingestion/index.html)
@@ -143,6 +143,7 @@ layout: toc
   * Tuning and Recommendations
     * [Basic Cluster Tuning](/docs/VERSION/operations/basic-cluster-tuning.html)  
     * [General Recommendations](/docs/VERSION/operations/recommendations.html)
+    * [Performance FAQ](/docs/VERSION/operations/performance-faq.html)
     * [JVM Best Practices](/docs/VERSION/configuration/index.html#jvm-configuration-best-practices)        
   * Tools
     * [Dump Segment Tool](/docs/VERSION/operations/dump-segment.html)
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png
index 08426fd..b0b5da8 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png
index 76a1a7f..806ce4c 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png
index ce3b0f0..c6bb701 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png
index b30ef7f..83a018b 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png
index 9ef3f80..71291c0 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png
index b1f08c8..5fe9c37 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png
index d7a8e68..16b48af 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png
index 4e36aab..edaf039 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png
index 144c02c..6191fc2 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png
index 75487a2..4037792 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png
index 5cadd52..76464f9 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png b/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png
index e8a1346..1651401 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png and b/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png b/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png
index fc0c924..834a9a5 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png and b/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-01.png b/docs/latest/tutorials/img/tutorial-compaction-01.png
index aeb9bf3..99b9e45 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-01.png and b/docs/latest/tutorials/img/tutorial-compaction-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-02.png b/docs/latest/tutorials/img/tutorial-compaction-02.png
index 836d8a7..11c316e 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-02.png and b/docs/latest/tutorials/img/tutorial-compaction-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-03.png b/docs/latest/tutorials/img/tutorial-compaction-03.png
index d51f8f8..88fd9d6 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-03.png and b/docs/latest/tutorials/img/tutorial-compaction-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-04.png b/docs/latest/tutorials/img/tutorial-compaction-04.png
index 46c5b1d..8df3699 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-04.png and b/docs/latest/tutorials/img/tutorial-compaction-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-05.png b/docs/latest/tutorials/img/tutorial-compaction-05.png
index e692694..07356df 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-05.png and b/docs/latest/tutorials/img/tutorial-compaction-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-06.png b/docs/latest/tutorials/img/tutorial-compaction-06.png
index 55c999f..ec1525c 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-06.png and b/docs/latest/tutorials/img/tutorial-compaction-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-07.png b/docs/latest/tutorials/img/tutorial-compaction-07.png
index 661e897..aa30458 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-07.png and b/docs/latest/tutorials/img/tutorial-compaction-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-08.png b/docs/latest/tutorials/img/tutorial-compaction-08.png
index 6e3f1aa..b9d89b2 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-08.png and b/docs/latest/tutorials/img/tutorial-compaction-08.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-01.png b/docs/latest/tutorials/img/tutorial-deletion-01.png
index de68d38..cddcb16 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-01.png and b/docs/latest/tutorials/img/tutorial-deletion-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-02.png b/docs/latest/tutorials/img/tutorial-deletion-02.png
index ffe4585..9b84f0c 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-02.png and b/docs/latest/tutorials/img/tutorial-deletion-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-03.png b/docs/latest/tutorials/img/tutorial-deletion-03.png
index 221774f..e6fb1f3 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-03.png and b/docs/latest/tutorials/img/tutorial-deletion-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-kafka-01.png b/docs/latest/tutorials/img/tutorial-kafka-01.png
index b085625..580d9af 100644
Binary files a/docs/latest/tutorials/img/tutorial-kafka-01.png and b/docs/latest/tutorials/img/tutorial-kafka-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-kafka-02.png b/docs/latest/tutorials/img/tutorial-kafka-02.png
index f23e084..735ceaa 100644
Binary files a/docs/latest/tutorials/img/tutorial-kafka-02.png and b/docs/latest/tutorials/img/tutorial-kafka-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-01.png b/docs/latest/tutorials/img/tutorial-query-01.png
index b366b2b..7e483fc 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-01.png and b/docs/latest/tutorials/img/tutorial-query-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-02.png b/docs/latest/tutorials/img/tutorial-query-02.png
index f3ba025..c25c651 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-02.png and b/docs/latest/tutorials/img/tutorial-query-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-03.png b/docs/latest/tutorials/img/tutorial-query-03.png
index 9f7ae27..5b1e5bc 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-03.png and b/docs/latest/tutorials/img/tutorial-query-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-04.png b/docs/latest/tutorials/img/tutorial-query-04.png
index 3f800a6..df96420 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-04.png and b/docs/latest/tutorials/img/tutorial-query-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-05.png b/docs/latest/tutorials/img/tutorial-query-05.png
index 2fc59ce..c241627 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-05.png and b/docs/latest/tutorials/img/tutorial-query-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-06.png b/docs/latest/tutorials/img/tutorial-query-06.png
index 60b4e1a..1f3e5fb 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-06.png and b/docs/latest/tutorials/img/tutorial-query-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-07.png b/docs/latest/tutorials/img/tutorial-query-07.png
index d2e5a85..e23fc2a 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-07.png and b/docs/latest/tutorials/img/tutorial-query-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-quickstart-01.png b/docs/latest/tutorials/img/tutorial-quickstart-01.png
index 9a47bc7..94b2024 100644
Binary files a/docs/latest/tutorials/img/tutorial-quickstart-01.png and b/docs/latest/tutorials/img/tutorial-quickstart-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-00.png b/docs/latest/tutorials/img/tutorial-retention-00.png
index a3f84a9..99c4ca8 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-00.png and b/docs/latest/tutorials/img/tutorial-retention-00.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-01.png b/docs/latest/tutorials/img/tutorial-retention-01.png
index 35a97c2..64f666c 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-01.png and b/docs/latest/tutorials/img/tutorial-retention-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-02.png b/docs/latest/tutorials/img/tutorial-retention-02.png
index f38fad0..2458d9d 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-02.png and b/docs/latest/tutorials/img/tutorial-retention-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-03.png b/docs/latest/tutorials/img/tutorial-retention-03.png
index 256836a..5cf2e8a 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-03.png and b/docs/latest/tutorials/img/tutorial-retention-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-04.png b/docs/latest/tutorials/img/tutorial-retention-04.png
index d39495f..73f9f22 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-04.png and b/docs/latest/tutorials/img/tutorial-retention-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-05.png b/docs/latest/tutorials/img/tutorial-retention-05.png
index 638a752..622718f 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-05.png and b/docs/latest/tutorials/img/tutorial-retention-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-06.png b/docs/latest/tutorials/img/tutorial-retention-06.png
index f47cbff..540551f 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-06.png and b/docs/latest/tutorials/img/tutorial-retention-06.png differ


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org