Posted to commits@druid.apache.org by ji...@apache.org on 2019/06/27 23:01:16 UTC

[incubator-druid-website] branch asf-site updated: update to use correct 0.15.0 docs

This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new f28db16  update to use correct 0.15.0 docs
     new af99b15  Merge pull request #16 from implydata/0.15.0-release-v2
f28db16 is described below

commit f28db1616e4abb42801e904c727333850c8023ac
Author: Vadim Ogievetsky <va...@gmail.com>
AuthorDate: Thu Jun 27 15:56:25 2019 -0700

    update to use correct 0.15.0 docs
---
 docs/0.15.0-incubating/configuration/index.html    |  73 +---
 .../extensions-contrib/influxdb-emitter.html       | 330 ----------------
 .../development/extensions-contrib/orc.html        |   8 +
 .../tdigestsketch-quantiles.html                   | 417 ---------------------
 .../extensions-core/approximate-histograms.html    |  10 -
 .../extensions-core/datasketches-extension.html    |   2 +-
 .../extensions-core/datasketches-hll.html          |   2 +-
 .../extensions-core/datasketches-quantiles.html    |  22 +-
 .../extensions-core/datasketches-theta.html        |   2 +-
 .../extensions-core/datasketches-tuple.html        |   2 +-
 .../extensions-core/druid-basic-security.html      | 131 +------
 .../extensions-core/druid-kerberos.html            |  11 +-
 .../extensions-core/kafka-ingestion.html           |  99 -----
 .../extensions-core/kinesis-ingestion.html         | 104 +----
 .../development/extensions-core/postgresql.html    |   6 -
 .../development/extensions-core/s3.html            | 131 ++-----
 docs/0.15.0-incubating/development/extensions.html |  17 +-
 docs/0.15.0-incubating/development/geo.html        |  21 --
 docs/0.15.0-incubating/development/modules.html    |   2 +-
 docs/0.15.0-incubating/ingestion/compaction.html   |  51 ++-
 .../ingestion/hadoop-vs-native-batch.html          |   4 +-
 docs/0.15.0-incubating/ingestion/hadoop.html       |   6 -
 docs/0.15.0-incubating/misc/math-expr.html         | 129 +------
 .../operations/api-reference.html                  |  50 ---
 .../operations/recommendations.html                |   8 +-
 docs/0.15.0-incubating/querying/aggregations.html  |   4 +-
 docs/0.15.0-incubating/querying/granularities.html |   9 +-
 docs/0.15.0-incubating/querying/lookups.html       |   6 +-
 docs/0.15.0-incubating/querying/scan-query.html    |   8 +-
 docs/0.15.0-incubating/querying/sql.html           |  84 ++---
 .../querying/timeseriesquery.html                  |   5 -
 docs/0.15.0-incubating/toc.html                    |   3 +-
 .../img/tutorial-batch-data-loader-01.png          | Bin 56488 -> 99355 bytes
 .../img/tutorial-batch-data-loader-02.png          | Bin 360295 -> 521148 bytes
 .../img/tutorial-batch-data-loader-03.png          | Bin 137443 -> 217008 bytes
 .../img/tutorial-batch-data-loader-04.png          | Bin 167252 -> 261225 bytes
 .../img/tutorial-batch-data-loader-05.png          | Bin 162488 -> 256368 bytes
 .../img/tutorial-batch-data-loader-06.png          | Bin 64301 -> 105983 bytes
 .../img/tutorial-batch-data-loader-07.png          | Bin 46529 -> 81399 bytes
 .../img/tutorial-batch-data-loader-08.png          | Bin 103928 -> 162397 bytes
 .../img/tutorial-batch-data-loader-09.png          | Bin 63348 -> 107662 bytes
 .../img/tutorial-batch-data-loader-10.png          | Bin 44516 -> 79080 bytes
 .../img/tutorial-batch-data-loader-11.png          | Bin 83288 -> 133329 bytes
 .../img/tutorial-batch-submit-task-01.png          | Bin 69356 -> 113916 bytes
 .../img/tutorial-batch-submit-task-02.png          | Bin 86076 -> 136268 bytes
 .../tutorials/img/tutorial-compaction-01.png       | Bin 35710 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       | Bin 166571 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       | Bin 26755 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       | Bin 184365 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       | Bin 26588 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       | Bin 206717 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       | Bin 26683 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       | Bin 28751 -> 43257 bytes
 .../tutorials/img/tutorial-deletion-01.png         | Bin 43586 -> 72062 bytes
 .../tutorials/img/tutorial-deletion-02.png         | Bin 439602 -> 810422 bytes
 .../tutorials/img/tutorial-deletion-03.png         | Bin 437304 -> 805673 bytes
 .../tutorials/img/tutorial-kafka-01.png            | Bin 85477 -> 136317 bytes
 .../tutorials/img/tutorial-kafka-02.png            | Bin 75709 -> 125452 bytes
 .../tutorials/img/tutorial-query-01.png            | Bin 100930 -> 153120 bytes
 .../tutorials/img/tutorial-query-02.png            | Bin 83369 -> 129962 bytes
 .../tutorials/img/tutorial-query-03.png            | Bin 65038 -> 106082 bytes
 .../tutorials/img/tutorial-query-04.png            | Bin 66423 -> 108331 bytes
 .../tutorials/img/tutorial-query-05.png            | Bin 51855 -> 87070 bytes
 .../tutorials/img/tutorial-query-06.png            | Bin 82211 -> 130612 bytes
 .../tutorials/img/tutorial-query-07.png            | Bin 78633 -> 125457 bytes
 .../tutorials/img/tutorial-quickstart-01.png       | Bin 29834 -> 56955 bytes
 .../tutorials/img/tutorial-retention-00.png        | Bin 77704 -> 138304 bytes
 .../tutorials/img/tutorial-retention-01.png        | Bin 35171 -> 53955 bytes
 .../tutorials/img/tutorial-retention-02.png        | Bin 240310 -> 410930 bytes
 .../tutorials/img/tutorial-retention-03.png        | Bin 30029 -> 44144 bytes
 .../tutorials/img/tutorial-retention-04.png        | Bin 44617 -> 67493 bytes
 .../tutorials/img/tutorial-retention-05.png        | Bin 38992 -> 61639 bytes
 .../tutorials/img/tutorial-retention-06.png        | Bin 137570 -> 233034 bytes
 docs/latest/configuration/index.html               |  73 +---
 .../extensions-contrib/influxdb-emitter.html       | 330 ----------------
 .../latest/development/extensions-contrib/orc.html |   8 +
 .../tdigestsketch-quantiles.html                   | 417 ---------------------
 .../extensions-core/approximate-histograms.html    |  10 -
 .../extensions-core/datasketches-extension.html    |   2 +-
 .../extensions-core/datasketches-hll.html          |   2 +-
 .../extensions-core/datasketches-quantiles.html    |  22 +-
 .../extensions-core/datasketches-theta.html        |   2 +-
 .../extensions-core/datasketches-tuple.html        |   2 +-
 .../extensions-core/druid-basic-security.html      | 131 +------
 .../extensions-core/druid-kerberos.html            |  11 +-
 .../extensions-core/kafka-ingestion.html           |  99 -----
 .../extensions-core/kinesis-ingestion.html         | 104 +----
 .../development/extensions-core/postgresql.html    |   6 -
 docs/latest/development/extensions-core/s3.html    | 131 ++-----
 docs/latest/development/extensions.html            |  17 +-
 docs/latest/development/geo.html                   |  21 --
 docs/latest/development/modules.html               |   2 +-
 docs/latest/ingestion/compaction.html              |  51 ++-
 docs/latest/ingestion/hadoop-vs-native-batch.html  |   4 +-
 docs/latest/ingestion/hadoop.html                  |   6 -
 docs/latest/misc/math-expr.html                    | 129 +------
 docs/latest/operations/api-reference.html          |  50 ---
 docs/latest/operations/recommendations.html        |   8 +-
 docs/latest/querying/aggregations.html             |   4 +-
 docs/latest/querying/granularities.html            |   9 +-
 docs/latest/querying/lookups.html                  |   6 +-
 docs/latest/querying/scan-query.html               |   8 +-
 docs/latest/querying/sql.html                      |  84 ++---
 docs/latest/querying/timeseriesquery.html          |   5 -
 docs/latest/toc.html                               |   3 +-
 .../img/tutorial-batch-data-loader-01.png          | Bin 56488 -> 99355 bytes
 .../img/tutorial-batch-data-loader-02.png          | Bin 360295 -> 521148 bytes
 .../img/tutorial-batch-data-loader-03.png          | Bin 137443 -> 217008 bytes
 .../img/tutorial-batch-data-loader-04.png          | Bin 167252 -> 261225 bytes
 .../img/tutorial-batch-data-loader-05.png          | Bin 162488 -> 256368 bytes
 .../img/tutorial-batch-data-loader-06.png          | Bin 64301 -> 105983 bytes
 .../img/tutorial-batch-data-loader-07.png          | Bin 46529 -> 81399 bytes
 .../img/tutorial-batch-data-loader-08.png          | Bin 103928 -> 162397 bytes
 .../img/tutorial-batch-data-loader-09.png          | Bin 63348 -> 107662 bytes
 .../img/tutorial-batch-data-loader-10.png          | Bin 44516 -> 79080 bytes
 .../img/tutorial-batch-data-loader-11.png          | Bin 83288 -> 133329 bytes
 .../img/tutorial-batch-submit-task-01.png          | Bin 69356 -> 113916 bytes
 .../img/tutorial-batch-submit-task-02.png          | Bin 86076 -> 136268 bytes
 .../tutorials/img/tutorial-compaction-01.png       | Bin 35710 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       | Bin 166571 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       | Bin 26755 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       | Bin 184365 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       | Bin 26588 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       | Bin 206717 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       | Bin 26683 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       | Bin 28751 -> 43257 bytes
 docs/latest/tutorials/img/tutorial-deletion-01.png | Bin 43586 -> 72062 bytes
 docs/latest/tutorials/img/tutorial-deletion-02.png | Bin 439602 -> 810422 bytes
 docs/latest/tutorials/img/tutorial-deletion-03.png | Bin 437304 -> 805673 bytes
 docs/latest/tutorials/img/tutorial-kafka-01.png    | Bin 85477 -> 136317 bytes
 docs/latest/tutorials/img/tutorial-kafka-02.png    | Bin 75709 -> 125452 bytes
 docs/latest/tutorials/img/tutorial-query-01.png    | Bin 100930 -> 153120 bytes
 docs/latest/tutorials/img/tutorial-query-02.png    | Bin 83369 -> 129962 bytes
 docs/latest/tutorials/img/tutorial-query-03.png    | Bin 65038 -> 106082 bytes
 docs/latest/tutorials/img/tutorial-query-04.png    | Bin 66423 -> 108331 bytes
 docs/latest/tutorials/img/tutorial-query-05.png    | Bin 51855 -> 87070 bytes
 docs/latest/tutorials/img/tutorial-query-06.png    | Bin 82211 -> 130612 bytes
 docs/latest/tutorials/img/tutorial-query-07.png    | Bin 78633 -> 125457 bytes
 .../tutorials/img/tutorial-quickstart-01.png       | Bin 29834 -> 56955 bytes
 .../latest/tutorials/img/tutorial-retention-00.png | Bin 77704 -> 138304 bytes
 .../latest/tutorials/img/tutorial-retention-01.png | Bin 35171 -> 53955 bytes
 .../latest/tutorials/img/tutorial-retention-02.png | Bin 240310 -> 410930 bytes
 .../latest/tutorials/img/tutorial-retention-03.png | Bin 30029 -> 44144 bytes
 .../latest/tutorials/img/tutorial-retention-04.png | Bin 44617 -> 67493 bytes
 .../latest/tutorials/img/tutorial-retention-05.png | Bin 38992 -> 61639 bytes
 .../latest/tutorials/img/tutorial-retention-06.png | Bin 137570 -> 233034 bytes
 146 files changed, 358 insertions(+), 3156 deletions(-)

diff --git a/docs/0.15.0-incubating/configuration/index.html b/docs/0.15.0-incubating/configuration/index.html
index fcf1e74..39b0c48 100644
--- a/docs/0.15.0-incubating/configuration/index.html
+++ b/docs/0.15.0-incubating/configuration/index.html
@@ -1440,6 +1440,16 @@ The below table shows some important configurations for S3. See <a href="../deve
 </tr>
 </thead><tbody>
 <tr>
+<td><code>druid.s3.accessKey</code></td>
+<td>The access key to use to access S3.</td>
+<td>none</td>
+</tr>
+<tr>
+<td><code>druid.s3.secretKey</code></td>
+<td>The secret key to use to access S3.</td>
+<td>none</td>
+</tr>
+<tr>
 <td><code>druid.storage.bucket</code></td>
 <td>S3 bucket name.</td>
 <td>none</td>
@@ -1465,21 +1475,6 @@ The below table shows some important configurations for S3. See <a href="../deve
 <td>none</td>
 </tr>
 <tr>
-<td><code>druid.storage.sse.type</code></td>
-<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="../development/extensions-core/s3.html#server-side-encryption">Server-side encryption section</a> for more details.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.kms.keyId</code></td>
-<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
-<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
-<td>None</td>
-</tr>
-<tr>
 <td><code>druid.storage.useS3aSchema</code></td>
 <td>If true, use the &quot;s3a&quot; filesystem when using Hadoop-based ingestion. If false, the &quot;s3n&quot; filesystem will be used. Only affects Hadoop-based ingestion.</td>
 <td>false</td>
@@ -2213,6 +2208,11 @@ Support for 64-bit floating point columns was released in Druid 0.11.0, so if yo
 <td>yes</td>
 </tr>
 <tr>
+<td><code>keepSegmentGranularity</code></td>
+<td>Set <a href="../ingestion/compaction.html">keepSegmentGranularity</a> to true for compactionTask.</td>
+<td>no (default = true)</td>
+</tr>
+<tr>
 <td><code>taskPriority</code></td>
 <td><a href="../ingestion/tasks.html#task-priorities">Priority</a> of compaction task.</td>
 <td>no (default = 25)</td>
@@ -2524,47 +2524,6 @@ If you see this problem, it&#39;s recommended to set <code>skipOffsetFromLatest<
 </tr>
 </tbody></table>
 
-<h5 id="supervisors">Supervisors</h5>
-
-<table><thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Default</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>druid.supervisor.healthinessThreshold</code></td>
-<td>The number of successful runs before an unhealthy supervisor is again considered healthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.unhealthinessThreshold</code></td>
-<td>The number of failed runs before the supervisor is considered unhealthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.taskHealthinessThreshold</code></td>
-<td>The number of consecutive task successes before an unhealthy supervisor is again considered healthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.taskUnhealthinessThreshold</code></td>
-<td>The number of consecutive task failures before the supervisor is considered unhealthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.storeStackTrace</code></td>
-<td>Whether full stack traces of supervisor exceptions should be stored and returned by the supervisor <code>/status</code> endpoint.</td>
-<td>false</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.maxStoredExceptionEvents</code></td>
-<td>The maximum number of exception events that can be returned through the supervisor <code>/status</code> endpoint.</td>
-<td><code>max(healthinessThreshold, unhealthinessThreshold)</code></td>
-</tr>
-</tbody></table>
-
 <h4 id="overlord-dynamic-configuration">Overlord Dynamic Configuration</h4>
 
 <p>The Overlord can dynamically change worker behavior.</p>
@@ -3699,7 +3658,7 @@ line.</p>
 <tr>
 <td><code>druid.sql.enable</code></td>
 <td>Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.</td>
-<td>true</td>
+<td>false</td>
 </tr>
 <tr>
 <td><code>druid.sql.avatica.enable</code></td>
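[Editor's note: the configuration diff above adds `druid.s3.accessKey` and `druid.s3.secretKey` to the S3 table. As a sketch only — all values below are placeholders, not taken from this commit — a `runtime.properties` fragment wiring up S3 deep storage with these keys might look like:

```properties
# Placeholder values -- substitute your own credentials and bucket.
druid.extensions.loadList=["druid-s3-extensions"]
druid.s3.accessKey=YOUR_ACCESS_KEY
druid.s3.secretKey=YOUR_SECRET_KEY
druid.storage.type=s3
druid.storage.bucket=your-druid-bucket
druid.storage.baseKey=druid/segments
```
]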
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.html b/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.html
deleted file mode 100644
index 7f82160..0000000
--- a/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.html
+++ /dev/null
@@ -1,330 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-  <head>
-    <meta charset="UTF-8" />
-<meta name="viewport" content="width=device-width, initial-scale=1.0">
-<meta name="description" content="Apache Druid">
-<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
-<meta name="author" content="Apache Software Foundation">
-
-<title>Druid | InfluxDB Emitter</title>
-
-<link rel="alternate" type="application/atom+xml" href="/feed">
-<link rel="shortcut icon" href="/img/favicon.png">
-
-<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
-
-<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
-
-<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1">
-<link rel="stylesheet" href="/css/base.css?v=1.1">
-<link rel="stylesheet" href="/css/header.css?v=1.1">
-<link rel="stylesheet" href="/css/footer.css?v=1.1">
-<link rel="stylesheet" href="/css/syntax.css?v=1.1">
-<link rel="stylesheet" href="/css/docs.css?v=1.1">
-
-<script>
-  (function() {
-    var cx = '000162378814775985090:molvbm0vggm';
-    var gcse = document.createElement('script');
-    gcse.type = 'text/javascript';
-    gcse.async = true;
-    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
-        '//cse.google.com/cse.js?cx=' + cx;
-    var s = document.getElementsByTagName('script')[0];
-    s.parentNode.insertBefore(gcse, s);
-  })();
-</script>
-
-
-  </head>
-
-  <body>
-    <!-- Start page_header include -->
-<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
-
-<div class="top-navigator">
-  <div class="container">
-    <div class="left-cont">
-      <a class="logo" href="/"><span class="druid-logo"></span></a>
-    </div>
-    <div class="right-cont">
-      <ul class="links">
-        <li class=""><a href="/technology">Technology</a></li>
-        <li class=""><a href="/use-cases">Use Cases</a></li>
-        <li class=""><a href="/druid-powered">Powered By</a></li>
-        <li class=""><a href="/docs/latest/design/">Docs</a></li>
-        <li class=""><a href="/community/">Community</a></li>
-        <li class="header-dropdown">
-          <a>Apache</a>
-          <div class="header-dropdown-menu">
-            <a href="https://www.apache.org/" target="_blank">Foundation</a>
-            <a href="https://www.apache.org/events/current-event" target="_blank">Events</a>
-            <a href="https://www.apache.org/licenses/" target="_blank">License</a>
-            <a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a>
-            <a href="https://www.apache.org/security/" target="_blank">Security</a>
-            <a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a>
-          </div>
-        </li>
-        <li class=" button-link"><a href="/downloads.html">Download</a></li>
-      </ul>
-    </div>
-  </div>
-  <div class="action-button menu-icon">
-    <span class="fa fa-bars"></span> MENU
-  </div>
-  <div class="action-button menu-icon-close">
-    <span class="fa fa-times"></span> MENU
-  </div>
-</div>
-
-<script type="text/javascript">
-  var $menu = $('.right-cont');
-  var $menuIcon = $('.menu-icon');
-  var $menuIconClose = $('.menu-icon-close');
-
-  function showMenu() {
-    $menu.fadeIn(100);
-    $menuIcon.fadeOut(100);
-    $menuIconClose.fadeIn(100);
-  }
-
-  $menuIcon.click(showMenu);
-
-  function hideMenu() {
-    $menu.fadeOut(100);
-    $menuIconClose.fadeOut(100);
-    $menuIcon.fadeIn(100);
-  }
-
-  $menuIconClose.click(hideMenu);
-
-  $(window).resize(function() {
-    if ($(window).width() >= 840) {
-      $menu.fadeIn(100);
-      $menuIcon.fadeOut(100);
-      $menuIconClose.fadeOut(100);
-    }
-    else {
-      $menu.fadeOut(100);
-      $menuIcon.fadeIn(100);
-      $menuIconClose.fadeOut(100);
-    }
-  });
-</script>
-
-<!-- Stop page_header include -->
-
-
-    <div class="container doc-container">
-      
-      
-
-      
-
-      <div class="row">
-        <div class="col-md-9 doc-content">
-          <p>
-            <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
-          </p>
-          <!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-<h1 id="influxdb-emitter">InfluxDB Emitter</h1>
-
-<p>To use this Apache Druid (incubating) extension, make sure to <a href="../../operations/including-extensions.html">include</a> <code>druid-influxdb-emitter</code> extension.</p>
-
-<h2 id="introduction">Introduction</h2>
-
-<p>This extension emits druid metrics to <a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a> over HTTP. Currently this emitter only emits service metric events to InfluxDB (See <a href="../../operations/metrics.html">Druid metrics</a> for a list of metrics).
-When a metric event is fired it is added to a queue of events. After a configurable amount of time, the events on the queue are transformed to InfluxDB&#39;s line protocol 
-and POSTed to the InfluxDB HTTP API. The entire queue is flushed at this point. The queue is also flushed as the emitter is shutdown.</p>
-
-<p>Note that authentication and authorization must be <a href="https://docs.influxdata.com/influxdb/v1.7/administration/authentication_and_authorization/">enabled</a> on the InfluxDB server.</p>
-
-<h2 id="configuration">Configuration</h2>
-
-<p>All the configuration parameters for the influxdb emitter are under <code>druid.emitter.influxdb</code>.</p>
-
-<table><thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Required?</th>
-<th>Default</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>druid.emitter.influxdb.hostname</code></td>
-<td>The hostname of the InfluxDB server.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.port</code></td>
-<td>The port of the InfluxDB server.</td>
-<td>No</td>
-<td>8086</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.databaseName</code></td>
-<td>The name of the database in InfluxDB.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.maxQueueSize</code></td>
-<td>The size of the queue that holds events.</td>
-<td>No</td>
-<td>Integer.Max_Value(=2^31-1)</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.flushPeriod</code></td>
-<td>How often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.</td>
-<td>No</td>
-<td>60000</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.flushDelay</code></td>
-<td>How long (in milliseconds) the scheduled method will wait until it first runs.</td>
-<td>No</td>
-<td>60000</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.influxdbUserName</code></td>
-<td>The username for authenticating with the InfluxDB database.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.influxdbPassword</code></td>
-<td>The password of the database authorized user</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.dimensionWhitelist</code></td>
-<td>A whitelist of metric dimensions to include as tags</td>
-<td>No</td>
-<td><code>[&quot;dataSource&quot;,&quot;type&quot;,&quot;numMetrics&quot;,&quot;numDimensions&quot;,&quot;threshold&quot;,&quot;dimension&quot;,&quot;taskType&quot;,&quot;taskStatus&quot;,&quot;tier&quot;]</code></td>
-</tr>
-</tbody></table>
-
-<h2 id="influxdb-line-protocol">InfluxDB Line Protocol</h2>
-
-<p>An example of how this emitter parses a Druid metric event into InfluxDB&#39;s <a href="https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/">line protocol</a> is given here: </p>
-
-<p>The syntax of the line protocol is :  </p>
-
-<p><code>&lt;measurement&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;]] &lt;field_key&gt;=&lt;field_value&gt;[,&lt;field_key&gt;=&lt;field_value&gt;] [&lt;timestamp&gt;]</code></p>
-
-<p>where timestamp is in nano-seconds since epoch.</p>
-
-<p>A typical service metric event as recorded by Druid&#39;s logging emitter is: <code>Event [{&quot;feed&quot;:&quot;metrics&quot;,&quot;timestamp&quot;:&quot;2017-10-31T09:09:06.857Z&quot;,&quot;service&quot;:&quot;druid/historical&quot;,&quot;host&quot;:&quot;historical001:8083&quot;,&quot;version&quot;:&quot;0.11.0-SNAPSHOT&quot;,&quot;metric&quot;:&quot;query/cache/total/hits&quot;,&quot;value&quot;:34787256}]</code>.</p>
-
-<p>This event is parsed into line protocol according to these rules:</p>
-
-<ul>
-<li>The measurement becomes druid_query since query is the first part of the metric. </li>
-<li>The tags are service=druid/historical, hostname=historical001, metric=druid_cache_total. (The metric tag is the middle part of the druid metric separated with _ and preceded by druid_. Another example would be if an event has metric=query/time then there is no middle part and hence no metric tag)</li>
-<li>The field is druid_hits since this is the last part of the metric.</li>
-</ul>
-
-<p>This gives the following String which can be POSTed to InfluxDB: <code>&quot;druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000&quot;</code></p>
-
-<p>The InfluxDB emitter has a white list of dimensions
-which will be added as a tag to the line protocol string if the metric has a dimension from the white list.
-The value of the dimension is sanitized such that every occurence of a dot or whitespace is replaced with a <code>_</code> .</p>
-
-        </div>
-        <div class="col-md-3">
-          <div class="searchbox">
-            <gcse:searchbox-only></gcse:searchbox-only>
-          </div>
-          <div id="toc" class="nav toc hidden-print">
-          </div>
-        </div>
-      </div>
-    </div>
-
-    <!-- Start page_footer include -->
-<footer class="druid-footer">
-<div class="container">
-  <div class="text-center">
-    <p>
-    <a href="/technology">Technology</a>&ensp;·&ensp;
-    <a href="/use-cases">Use Cases</a>&ensp;·&ensp;
-    <a href="/druid-powered">Powered by Druid</a>&ensp;·&ensp;
-    <a href="/docs/latest">Docs</a>&ensp;·&ensp;
-    <a href="/community/">Community</a>&ensp;·&ensp;
-    <a href="/downloads.html">Download</a>&ensp;·&ensp;
-    <a href="/faq">FAQ</a>
-    </p>
-  </div>
-  <div class="text-center">
-    <a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a>&ensp;·&ensp;
-    <a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a>&ensp;·&ensp;
-    <a title="Download via Apache" href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.0-incubating/apache-druid-0.15.0-incubating-bin.tar.gz" target="_blank"><span class="fas fa-feather"></span></a>&ensp;·&ensp;
-    <a title="GitHub" href="https://github.com/apache/incubator-druid" target="_blank"><span class="fab fa-github"></span></a>
-  </div>
-  <div class="text-center license">
-    Copyright © 2019 <a href="https://www.apache.org/" target="_blank">Apache Software Foundation</a>.<br>
-    Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>.<br>
-    Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.
-  </div>
-</div>
-</footer>
-
-<script async src="https://www.googletagmanager.com/gtag/js?id=UA-131010415-1"></script>
-<script>
-  window.dataLayer = window.dataLayer || [];
-  function gtag(){dataLayer.push(arguments);}
-  gtag('js', new Date());
-  gtag('config', 'UA-131010415-1');
-</script>
-<script>
-  function trackDownload(type, url) {
-    ga('send', 'event', 'download', type, url);
-  }
-</script>
-<script src="//code.jquery.com/jquery.min.js"></script>
-<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
-<script src="/assets/js/druid.js"></script>
-<!-- stop page_footer include -->
-
-
-    <script>
-    $(function() {
-      $(".toc").load("/docs/0.15.0-incubating/toc.html");
-
-      // There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works
-      var tries = 0;
-      var timer = setInterval(function() {
-        tries++;
-        if (tries > 300) clearInterval(timer);
-        var searchInput = $('input.gsc-input');
-        if (searchInput.length) {
-          searchInput.attr('placeholder', 'Search');
-          clearInterval(timer);
-        }
-      }, 100);
-    });
-    </script>
-  </body>
-</html>
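[Editor's note: the removed page above describes how the InfluxDB emitter maps a Druid metric event to line protocol. As a standalone sketch of those parsing rules — not the extension's actual Java code; the event dict shape and helper names are illustrative assumptions — the transformation can be expressed as:

```python
import re


def sanitize(value):
    """Replace every dot or whitespace in a tag value with '_', per the doc."""
    return re.sub(r"[.\s]", "_", str(value))


def event_to_line_protocol(event):
    """Build an InfluxDB line-protocol string from a Druid service metric event."""
    parts = event["metric"].split("/")        # e.g. "query/cache/total/hits"
    measurement = "druid_" + parts[0]         # first part -> measurement
    field_key = "druid_" + parts[-1]          # last part  -> field key
    tags = [
        ("service", sanitize(event["service"])),
        ("hostname", sanitize(event["host"].split(":")[0])),
    ]
    if len(parts) > 2:                        # middle parts -> "metric" tag
        tags.append(("metric", "druid_" + "_".join(parts[1:-1])))
    tag_str = ",".join(f"{k}={v}" for k, v in tags)
    # Timestamp is nanoseconds since epoch.
    return f"{measurement},{tag_str} {field_key}={event['value']} {event['timestamp_ns']}"


# The example event from the removed doc (2017-10-31T09:09:06.857Z in ns):
event = {
    "service": "druid/historical",
    "host": "historical001:8083",
    "metric": "query/cache/total/hits",
    "value": 34787256,
    "timestamp_ns": 1509440946857000000,
}
print(event_to_line_protocol(event))
# -> druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000
```

Note how a two-part metric such as `query/time` yields no `metric` tag, matching the rule stated in the removed page.]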
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/orc.html b/docs/0.15.0-incubating/development/extensions-contrib/orc.html
new file mode 100644
index 0000000..19bab1e
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/orc.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions-core/orc.html">
+<meta http-equiv="refresh" content="0; url=development/extensions-core/orc.html">
+<h1>Redirecting...</h1>
+<a href="development/extensions-core/orc.html">Click here if you are not redirected.</a>
+<script>location="development/extensions-core/orc.html"</script>
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.html b/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.html
deleted file mode 100644
index 1512e73..0000000
--- a/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.html
+++ /dev/null
@@ -1,417 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-  <head>
-    <meta charset="UTF-8" />
-<meta name="viewport" content="width=device-width, initial-scale=1.0">
-<meta name="description" content="Apache Druid">
-<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
-<meta name="author" content="Apache Software Foundation">
-
-<title>Druid | T-Digest Quantiles Sketch module</title>
-
-<link rel="alternate" type="application/atom+xml" href="/feed">
-<link rel="shortcut icon" href="/img/favicon.png">
-
-<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
-
-<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
-
-<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1">
-<link rel="stylesheet" href="/css/base.css?v=1.1">
-<link rel="stylesheet" href="/css/header.css?v=1.1">
-<link rel="stylesheet" href="/css/footer.css?v=1.1">
-<link rel="stylesheet" href="/css/syntax.css?v=1.1">
-<link rel="stylesheet" href="/css/docs.css?v=1.1">
-
-<script>
-  (function() {
-    var cx = '000162378814775985090:molvbm0vggm';
-    var gcse = document.createElement('script');
-    gcse.type = 'text/javascript';
-    gcse.async = true;
-    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
-        '//cse.google.com/cse.js?cx=' + cx;
-    var s = document.getElementsByTagName('script')[0];
-    s.parentNode.insertBefore(gcse, s);
-  })();
-</script>
-
-
-  </head>
-
-  <body>
-    <!-- Start page_header include -->
-<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
-
-<div class="top-navigator">
-  <div class="container">
-    <div class="left-cont">
-      <a class="logo" href="/"><span class="druid-logo"></span></a>
-    </div>
-    <div class="right-cont">
-      <ul class="links">
-        <li class=""><a href="/technology">Technology</a></li>
-        <li class=""><a href="/use-cases">Use Cases</a></li>
-        <li class=""><a href="/druid-powered">Powered By</a></li>
-        <li class=""><a href="/docs/latest/design/">Docs</a></li>
-        <li class=""><a href="/community/">Community</a></li>
-        <li class="header-dropdown">
-          <a>Apache</a>
-          <div class="header-dropdown-menu">
-            <a href="https://www.apache.org/" target="_blank">Foundation</a>
-            <a href="https://www.apache.org/events/current-event" target="_blank">Events</a>
-            <a href="https://www.apache.org/licenses/" target="_blank">License</a>
-            <a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a>
-            <a href="https://www.apache.org/security/" target="_blank">Security</a>
-            <a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a>
-          </div>
-        </li>
-        <li class=" button-link"><a href="/downloads.html">Download</a></li>
-      </ul>
-    </div>
-  </div>
-  <div class="action-button menu-icon">
-    <span class="fa fa-bars"></span> MENU
-  </div>
-  <div class="action-button menu-icon-close">
-    <span class="fa fa-times"></span> MENU
-  </div>
-</div>
-
-<script type="text/javascript">
-  var $menu = $('.right-cont');
-  var $menuIcon = $('.menu-icon');
-  var $menuIconClose = $('.menu-icon-close');
-
-  function showMenu() {
-    $menu.fadeIn(100);
-    $menuIcon.fadeOut(100);
-    $menuIconClose.fadeIn(100);
-  }
-
-  $menuIcon.click(showMenu);
-
-  function hideMenu() {
-    $menu.fadeOut(100);
-    $menuIconClose.fadeOut(100);
-    $menuIcon.fadeIn(100);
-  }
-
-  $menuIconClose.click(hideMenu);
-
-  $(window).resize(function() {
-    if ($(window).width() >= 840) {
-      $menu.fadeIn(100);
-      $menuIcon.fadeOut(100);
-      $menuIconClose.fadeOut(100);
-    }
-    else {
-      $menu.fadeOut(100);
-      $menuIcon.fadeIn(100);
-      $menuIconClose.fadeOut(100);
-    }
-  });
-</script>
-
-<!-- Stop page_header include -->
-
-
-    <div class="container doc-container">
-      
-      
-
-      
-
-      <div class="row">
-        <div class="col-md-9 doc-content">
-          <p>
-            <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
-          </p>
-          <!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-<h1 id="t-digest-quantiles-sketch-module">T-Digest Quantiles Sketch module</h1>
-
-<p>This module provides Apache Druid (incubating) approximate sketch aggregators based on T-Digest.
-T-Digest (https://github.com/tdunning/t-digest) is a popular data structure for accurate online accumulation of
-rank-based statistics such as quantiles and trimmed means.
-The data structure is also well suited to parallel use cases, such as distributed aggregations or map-reduce jobs, because two intermediate T-Digests can be combined easily and efficiently.</p>
-
-<p>There are three flavors of T-Digest sketch aggregator available in Apache Druid (incubating):</p>
-
-<ol>
-<li>buildTDigestSketch - builds T-Digest sketches from raw numeric values. It generally makes sense to
-use this aggregator when ingesting raw data into Druid. It can also be used at query time to generate
-sketches, but the sketches would then be rebuilt on every query execution instead of once during ingestion.</li>
-<li>mergeTDigestSketch - merges pre-built T-Digest sketches. This aggregator is generally used at
-query time to combine sketches generated by the buildTDigestSketch aggregator.</li>
-<li>quantilesFromTDigestSketch - generates quantiles from T-Digest sketches. This is generally used
-at query time to compute quantiles from sketches built using the two aggregators above.</li>
-</ol>
-
-<p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-tdigestsketch&quot;]
-</code></pre></div>
-<h3 id="aggregator">Aggregator</h3>
-
-<p>The result of the aggregation is a T-Digest sketch built by ingesting numeric values from the raw data.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;buildTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span> <span class="p">:</span> <span class="err">&lt;output_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fieldName&quot;</span> <span class="p">:</span> <span class="err">&lt;metric_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;compression&quot;</span><span class="p">:</span> <span class="err">&lt;parameter</span> <span class="err">that</span> <span class="err">controls</span> <span class="err">size</span> <span class="err">and</span> <span class="err">accuracy&gt;</span>
- <span class="p">}</span>
-</code></pre></div>
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json"><span></span>{
-    &quot;type&quot;: &quot;buildTDigestSketch&quot;,
-    &quot;name&quot;: &quot;sketch&quot;,
-    &quot;fieldName&quot;: &quot;session_duration&quot;,
-    &quot;compression&quot;: 200
-}
-</code></pre></div>
-
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;buildTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fieldName</td>
-<td>A String for the name of the input field containing raw numeric values.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>compression</td>
-<td>Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.</td>
-<td>no, defaults to 100</td>
-</tr>
-</tbody></table>
-
-<p>The result of the aggregation is a T-Digest sketch that is built by merging pre-built T-Digest sketches.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;mergeTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span> <span class="p">:</span> <span class="err">&lt;output_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fieldName&quot;</span> <span class="p">:</span> <span class="err">&lt;metric_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;compression&quot;</span><span class="p">:</span> <span class="err">&lt;parameter</span> <span class="err">that</span> <span class="err">controls</span> <span class="err">size</span> <span class="err">and</span> <span class="err">accuracy&gt;</span>
- <span class="p">}</span>
-</code></pre></div>
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;mergeTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fieldName</td>
-<td>A String for the name of the input field containing pre-built T-Digest sketches.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>compression</td>
-<td>Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.</td>
-<td>no, defaults to 100</td>
-</tr>
-</tbody></table>
-
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json"><span></span>{
-    &quot;queryType&quot;: &quot;groupBy&quot;,
-    &quot;dataSource&quot;: &quot;test_datasource&quot;,
-    &quot;granularity&quot;: &quot;ALL&quot;,
-    &quot;dimensions&quot;: [],
-    &quot;aggregations&quot;: [{
-        &quot;type&quot;: &quot;mergeTDigestSketch&quot;,
-        &quot;name&quot;: &quot;merged_sketch&quot;,
-        &quot;fieldName&quot;: &quot;ingested_sketch&quot;,
-        &quot;compression&quot;: 200
-    }],
-    &quot;intervals&quot;: [&quot;2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z&quot;]
-}
-</code></pre></div>
-
-<h3 id="post-aggregators">Post Aggregators</h3>
-
-<h4 id="quantiles">Quantiles</h4>
-
-<p>This returns an array of quantiles corresponding to a given array of fractions.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesFromTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">TDigestSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fractions&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">fractions&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;quantilesFromTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>field</td>
-<td>A field access or another post aggregator that refers to a TDigestSketch.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fractions</td>
-<td>Non-empty array of fractions between 0 and 1</td>
-<td>yes</td>
-</tr>
-</tbody></table>
-
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json"><span></span>{
-    &quot;queryType&quot;: &quot;groupBy&quot;,
-    &quot;dataSource&quot;: &quot;test_datasource&quot;,
-    &quot;granularity&quot;: &quot;ALL&quot;,
-    &quot;dimensions&quot;: [],
-    &quot;aggregations&quot;: [{
-        &quot;type&quot;: &quot;mergeTDigestSketch&quot;,
-        &quot;name&quot;: &quot;merged_sketch&quot;,
-        &quot;fieldName&quot;: &quot;ingested_sketch&quot;,
-        &quot;compression&quot;: 200
-    }],
-    &quot;postAggregations&quot;: [{
-        &quot;type&quot;: &quot;quantilesFromTDigestSketch&quot;,
-        &quot;name&quot;: &quot;quantiles&quot;,
-        &quot;fractions&quot;: [0, 0.5, 1],
-        &quot;field&quot;: {
-            &quot;type&quot;: &quot;fieldAccess&quot;,
-            &quot;fieldName&quot;: &quot;merged_sketch&quot;
-        }
-    }],
-    &quot;intervals&quot;: [&quot;2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z&quot;]
-}
-</code></pre></div>
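-<p>As an illustrative sketch (not part of the original page), a query like the one above can be built and submitted to a Druid Broker from Python using only the standard library. The Broker address <code>localhost:8082</code> is an assumption; adjust it for your cluster.</p>

```python
import json
from urllib import request

# Hypothetical Broker endpoint; adjust host/port for your cluster.
BROKER_URL = "http://localhost:8082/druid/v2/"

# A groupBy query that merges ingested T-Digest sketches and derives quantiles.
query = {
    "queryType": "groupBy",
    "dataSource": "test_datasource",
    "granularity": "ALL",
    "dimensions": [],
    "aggregations": [{
        "type": "mergeTDigestSketch",
        "name": "merged_sketch",
        "fieldName": "ingested_sketch",
        "compression": 200,
    }],
    "postAggregations": [{
        "type": "quantilesFromTDigestSketch",
        "name": "quantiles",
        "fractions": [0, 0.5, 1],
        "field": {"type": "fieldAccess", "fieldName": "merged_sketch"},
    }],
    "intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"],
}

payload = json.dumps(query).encode("utf-8")
req = request.Request(BROKER_URL, data=payload,
                      headers={"Content-Type": "application/json"})
# Uncomment to run against a live cluster:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
print(len(payload), "bytes of query JSON prepared")
```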
-
-        </div>
-        <div class="col-md-3">
-          <div class="searchbox">
-            <gcse:searchbox-only></gcse:searchbox-only>
-          </div>
-          <div id="toc" class="nav toc hidden-print">
-          </div>
-        </div>
-      </div>
-    </div>
-
-    <!-- Start page_footer include -->
-<footer class="druid-footer">
-<div class="container">
-  <div class="text-center">
-    <p>
-    <a href="/technology">Technology</a>&ensp;·&ensp;
-    <a href="/use-cases">Use Cases</a>&ensp;·&ensp;
-    <a href="/druid-powered">Powered by Druid</a>&ensp;·&ensp;
-    <a href="/docs/latest">Docs</a>&ensp;·&ensp;
-    <a href="/community/">Community</a>&ensp;·&ensp;
-    <a href="/downloads.html">Download</a>&ensp;·&ensp;
-    <a href="/faq">FAQ</a>
-    </p>
-  </div>
-  <div class="text-center">
-    <a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a>&ensp;·&ensp;
-    <a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a>&ensp;·&ensp;
-    <a title="Download via Apache" href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.0-incubating/apache-druid-0.15.0-incubating-bin.tar.gz" target="_blank"><span class="fas fa-feather"></span></a>&ensp;·&ensp;
-    <a title="GitHub" href="https://github.com/apache/incubator-druid" target="_blank"><span class="fab fa-github"></span></a>
-  </div>
-  <div class="text-center license">
-    Copyright © 2019 <a href="https://www.apache.org/" target="_blank">Apache Software Foundation</a>.<br>
-    Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>.<br>
-    Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.
-  </div>
-</div>
-</footer>
-
-<script async src="https://www.googletagmanager.com/gtag/js?id=UA-131010415-1"></script>
-<script>
-  window.dataLayer = window.dataLayer || [];
-  function gtag(){dataLayer.push(arguments);}
-  gtag('js', new Date());
-  gtag('config', 'UA-131010415-1');
-</script>
-<script>
-  function trackDownload(type, url) {
-    ga('send', 'event', 'download', type, url);
-  }
-</script>
-<script src="//code.jquery.com/jquery.min.js"></script>
-<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
-<script src="/assets/js/druid.js"></script>
-<!-- stop page_footer include -->
-
-
-    <script>
-    $(function() {
-      $(".toc").load("/docs/0.15.0-incubating/toc.html");
-
-      // There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works
-      var tries = 0;
-      var timer = setInterval(function() {
-        tries++;
-        if (tries > 300) clearInterval(timer);
-        var searchInput = $('input.gsc-input');
-        if (searchInput.length) {
-          searchInput.attr('placeholder', 'Search');
-          clearInterval(timer);
-        }
-      }, 100);
-    });
-    </script>
-  </body>
-</html>
diff --git a/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.html b/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.html
index 5a27a80..b231047 100644
--- a/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.html
+++ b/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.html
@@ -239,11 +239,6 @@ query.</p>
 <td>Restrict the approximation to the given range. The values outside this range will be aggregated into two centroids. Counts of values outside this range are still maintained.</td>
 <td>-INF/+INF</td>
 </tr>
-<tr>
-<td><code>finalizeAsBase64Binary</code></td>
-<td>If true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <h2 id="fixed-buckets-histogram">Fixed Buckets Histogram</h2>
@@ -302,11 +297,6 @@ query.</p>
 <td>Specifies how values outside of [lowerLimit, upperLimit] will be handled. Supported modes are &quot;ignore&quot;, &quot;overflow&quot;, and &quot;clip&quot;. See <a href="#outlier-handling-modes">outlier handling modes</a> for more details.</td>
 <td>No default, must be specified</td>
 </tr>
-<tr>
-<td><code>finalizeAsBase64Binary</code></td>
-<td>If true, the finalized aggregator value will be a Base64-encoded byte array containing the <a href="#serialization-formats">serialized form</a> of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <p>An example aggregator spec is shown below:</p>
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.html
index 8113239..da49e38 100644
--- a/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.html
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-extension">DataSketches extension</h1>
 
-<p>Apache Druid (incubating) aggregators based on <a href="https://datasketches.github.io/">datasketches</a> library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.</p>
+<p>Apache Druid (incubating) aggregators based on <a href="http://datasketches.github.io/">datasketches</a> library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.</p>
 
 <p>To use the datasketches aggregators, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-datasketches&quot;]
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.html
index d09845b..7c533ae 100644
--- a/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.html
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-hll-sketch-module">DataSketches HLL Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from <a href="https://datasketches.github.io/">datasketches</a> library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of  [...]
+<p>This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from <a href="http://datasketches.github.io/">datasketches</a> library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of s [...]
 You can use the HLL sketch aggregator on columns of any identifiers. It will return estimated cardinality of the column.</p>
 
 <p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.html
index c57faf6..afba735 100644
--- a/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.html
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-quantiles-sketch-module">DataSketches Quantiles Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from <a href="https://datasketches.github.io/">datasketches</a> library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such [...]
+<p>This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from <a href="http://datasketches.github.io/">datasketches</a> library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such) [...]
 
 <p>There are three major modes of operation:</p>
 
@@ -232,26 +232,6 @@
   <span class="nt">&quot;splitPoints&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">split</span> <span class="err">points&gt;</span>
 <span class="p">}</span>
 </code></pre></div>
-<h4 id="rank">Rank</h4>
-
-<p>This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesDoublesSketchToRank&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">DoublesSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;value&quot;</span> <span class="p">:</span> <span class="err">&lt;value&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
-<h4 id="cdf">CDF</h4>
-
-<p>This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of <i>m</i> unique, monotonically increasing split points divide the real number line into <i>m+1</i> consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. The resulting array of fractions can be viewed as ranks of each split point with one additional rank th [...]
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesDoublesSketchToCDF&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">DoublesSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;splitPoints&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">split</span> <span class="err">points&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
 <h4 id="sketch-summary">Sketch Summary</h4>
 
 <p>This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method.</p>
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.html
index 4a3f9d6..698ad8b 100644
--- a/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.html
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-theta-sketch-module">DataSketches Theta Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on Theta sketch from <a href="https://datasketches.github.io/">datasketches</a> library. Note that sketch algorithms are approximate; see details in the &quot;Accuracy&quot; section of the datasketches doc.
+<p>This module provides Apache Druid (incubating) aggregators based on Theta sketch from <a href="http://datasketches.github.io/">datasketches</a> library. Note that sketch algorithms are approximate; see details in the &quot;Accuracy&quot; section of the datasketches doc. 
 At ingestion time, this aggregator creates the Theta sketch objects which get stored in Druid segments. Logically speaking, a Theta sketch object can be thought of as a Set data structure. At query time, sketches are read and aggregated (set unioned) together. In the end, by default, you receive the estimate of the number of unique entries in the sketch object. Also, you can use post aggregators to do union, intersection or difference on sketch columns in the same row. 
 Note that you can use <code>thetaSketch</code> aggregator on columns which were not ingested using the same. It will return estimated cardinality of the column. It is recommended to use it at ingestion time as well to make querying faster.</p>
 
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.html
index 535c879..0c110de 100644
--- a/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.html
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-tuple-sketch-module">DataSketches Tuple Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on Tuple sketch from <a href="https://datasketches.github.io/">datasketches</a> library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.</p>
+<p>This module provides Apache Druid (incubating) aggregators based on Tuple sketch from <a href="http://datasketches.github.io/">datasketches</a> library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.</p>
 
 <p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-datasketches&quot;]
diff --git a/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.html b/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.html
index 7d9ba71..d0a97e7 100644
--- a/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.html
+++ b/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.html
@@ -388,86 +388,6 @@ Return a list of all user names.</p>
 <p><code>GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})</code>
 Return the name and role information of the user with name {userName}</p>
 
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druid2&quot;,
-  &quot;roles&quot;: [
-    &quot;druidRole&quot;
-  ]
-}
-</code></p>
-
-<p>This API supports the following flags:
-- <code>?full</code>: The response will also include the full information for each role currently assigned to the user.</p>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druid2&quot;,
-  &quot;roles&quot;: [
-    {
-      &quot;name&quot;: &quot;druidRole&quot;,
-      &quot;permissions&quot;: [
-        {
-          &quot;resourceAction&quot;: {
-            &quot;resource&quot;: {
-              &quot;name&quot;: &quot;A&quot;,
-              &quot;type&quot;: &quot;DATASOURCE&quot;
-            },
-            &quot;action&quot;: &quot;READ&quot;
-          },
-          &quot;resourceNamePattern&quot;: &quot;A&quot;
-        },
-        {
-          &quot;resourceAction&quot;: {
-            &quot;resource&quot;: {
-              &quot;name&quot;: &quot;C&quot;,
-              &quot;type&quot;: &quot;CONFIG&quot;
-            },
-            &quot;action&quot;: &quot;WRITE&quot;
-          },
-          &quot;resourceNamePattern&quot;: &quot;C&quot;
-        }
-      ]
-    }
-  ]
-}
-</code></p>
-
-<p>The output format of this API when <code>?full</code> is specified is deprecated and in later versions will be switched to the output format used when both <code>?full</code> and <code>?simplifyPermissions</code> flag is set. </p>
-
-<p>The <code>resourceNamePattern</code> is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.</p>
-
-<ul>
-<li><code>?full?simplifyPermissions</code>: When both <code>?full</code> and <code>?simplifyPermissions</code> are set, the permissions in the output will contain only a list of <code>resourceAction</code> objects, without the extraneous <code>resourceNamePattern</code> field.</li>
-</ul>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;druid2&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;roles&quot;</span><span class="p">:</span> <span class="p">[</span>
-    <span class="p">{</span>
-      <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;druidRole&quot;</span><span class="p">,</span>
-      <span class="nt">&quot;users&quot;</span><span class="p">:</span> <span class="kc">null</span><span class="p">,</span>
-      <span class="nt">&quot;permissions&quot;</span><span class="p">:</span> <span class="p">[</span>
-        <span class="p">{</span>
-          <span class="nt">&quot;resource&quot;</span><span class="p">:</span> <span class="p">{</span>
-            <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;A&quot;</span><span class="p">,</span>
-            <span class="nt">&quot;type&quot;</span><span class="p">:</span> <span class="s2">&quot;DATASOURCE&quot;</span>
-          <span class="p">},</span>
-          <span class="nt">&quot;action&quot;</span><span class="p">:</span> <span class="s2">&quot;READ&quot;</span>
-        <span class="p">},</span>
-        <span class="p">{</span>
-          <span class="nt">&quot;resource&quot;</span><span class="p">:</span> <span class="p">{</span>
-            <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;C&quot;</span><span class="p">,</span>
-            <span class="nt">&quot;type&quot;</span><span class="p">:</span> <span class="s2">&quot;CONFIG&quot;</span>
-          <span class="p">},</span>
-          <span class="nt">&quot;action&quot;</span><span class="p">:</span> <span class="s2">&quot;WRITE&quot;</span>
-        <span class="p">}</span>
-      <span class="p">]</span>
-    <span class="p">}</span>
-  <span class="p">]</span>
-<span class="p">}</span>
-</code></pre></div>
 <p><code>POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})</code>
 Create a new user with name {userName}.</p>
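As a sketch of how a client might call the user-management endpoint above, the following builds the POST request with Python's standard library. The Coordinator address, authorizer name, and admin credentials are placeholders, not values from this document, and the actual call is left commented out because it needs a live cluster:

```python
# Hypothetical sketch of creating a basic-security user via the API above.
import base64
import urllib.request

COORDINATOR = "http://localhost:8081"  # placeholder Coordinator address
AUTHORIZER = "MyBasicAuthorizer"       # placeholder authorizer name

def create_user_request(user_name):
    """Build the POST request that creates a user under the given authorizer."""
    url = (f"{COORDINATOR}/druid-ext/basic-security/authorization/db/"
           f"{AUTHORIZER}/users/{user_name}")
    req = urllib.request.Request(url, method="POST")
    # Basic-auth header for an admin user; replace with real credentials.
    token = base64.b64encode(b"admin:password").decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = create_user_request("druid2")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # would issue the call against a live cluster
```

The same URL shape applies to the DELETE and role endpoints described in this section; only the method and path segment change.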
 
@@ -480,56 +400,7 @@ Delete the user with name {userName}</p>
 Return a list of all role names.</p>
 
 <p><code>GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})</code>
-Return name and permissions for the role named {roleName}.</p>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druidRole2&quot;,
-  &quot;permissions&quot;: [
-    {
-      &quot;resourceAction&quot;: {
-        &quot;resource&quot;: {
-          &quot;name&quot;: &quot;E&quot;,
-          &quot;type&quot;: &quot;DATASOURCE&quot;
-        },
-        &quot;action&quot;: &quot;WRITE&quot;
-      },
-      &quot;resourceNamePattern&quot;: &quot;E&quot;
-    }
-  ]
-}
-</code></p>
-
-<p>The default output format of this API is deprecated and in later versions will be switched to the output format used when the <code>?simplifyPermissions</code> flag is set. The <code>resourceNamePattern</code> is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.</p>
-
-<p>This API supports the following flags:</p>
-
-<ul>
-<li><code>?full</code>: The output will contain an extra <code>users</code> list, containing the users that currently have this role.</li>
-</ul>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="s2">&quot;users&quot;</span><span class="err">:</span><span class="p">[</span><span class="s2">&quot;druid&quot;</span><span class="p">]</span>
-</code></pre></div>
-<ul>
-<li><code>?simplifyPermissions</code>: The permissions in the output will contain only a list of <code>resourceAction</code> objects, without the extraneous <code>resourceNamePattern</code> field. The <code>users</code> field will be null when <code>?full</code> is not specified.</li>
-</ul>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druidRole2&quot;,
-  &quot;users&quot;: null,
-  &quot;permissions&quot;: [
-    {
-      &quot;resource&quot;: {
-        &quot;name&quot;: &quot;E&quot;,
-        &quot;type&quot;: &quot;DATASOURCE&quot;
-      },
-      &quot;action&quot;: &quot;WRITE&quot;
-    }
-  ]
-}
-</code></p>
+Return name and permissions for the role named {roleName}.</p>
 
 <p><code>POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})</code>
 Create a new role with name {roleName}.
diff --git a/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.html b/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.html
index d180184..8db0ccc 100644
--- a/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.html
+++ b/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.html
@@ -199,6 +199,13 @@ druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
 <td>No</td>
 </tr>
 <tr>
+<td><code>druid.auth.authenticator.kerberos.excludedPaths</code></td>
+<td><code>[&#39;/status&#39;,&#39;/health&#39;]</code></td>
+<td>Array of HTTP paths which do NOT need to be authenticated.</td>
+<td>None</td>
+<td>No</td>
+</tr>
+<tr>
 <td><code>druid.auth.authenticator.kerberos.cookieSignatureSecret</code></td>
 <td><code>secretString</code></td>
 <td>Secret used to sign authentication cookies. It is advisable to explicitly set it if you have multiple Druid nodes running on the same machine with different ports, as the Cookie Specification does not guarantee isolation by port.</td>
@@ -217,10 +224,6 @@ druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
 <p>As a note, it is required that the SPNego principal in use by the Druid processes must start with HTTP (this is specified by <a href="https://tools.ietf.org/html/rfc4559">RFC-4559</a>) and must be of the form &quot;HTTP/_HOST@REALM&quot;.
 The special string _HOST will be replaced automatically with the value of the config <code>druid.host</code>.</p>
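Putting the properties described in this section together, a minimal authenticator configuration might look like the following sketch. The authenticator name, realm, keytab path, and secret are placeholders; consult the extension's property table for the full list:

```
druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
druid.auth.authenticator.MyKerberosAuthenticator.serverPrincipal=HTTP/_HOST@EXAMPLE.COM
druid.auth.authenticator.MyKerberosAuthenticator.serverKeytab=/etc/security/keytabs/spnego.service.keytab
druid.auth.authenticator.MyKerberosAuthenticator.excludedPaths=["/status","/health"]
druid.auth.authenticator.MyKerberosAuthenticator.cookieSignatureSecret=secretString
```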
 
-<h3 id="druid-auth-authenticator-kerberos-excludedpaths"><code>druid.auth.authenticator.kerberos.excludedPaths</code></h3>
-
-<p>In older releases, the Kerberos authenticator had an <code>excludedPaths</code> property that allowed the user to specify a list of paths where authentication checks should be skipped. This property has been removed from the Kerberos authenticator because the path exclusion functionality is now handled across all authenticators/authorizers by setting <code>druid.auth.unsecuredPaths</code>, as described in the <a href="../../design/auth.html">main auth documentation</a>.</p>
-
 <h3 id="auth-to-local-syntax">Auth to Local Syntax</h3>
 
 <p><code>druid.auth.authenticator.kerberos.authToLocal</code> allows you to set general rules for mapping principal names to local user names.
diff --git a/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.html b/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.html
index 970f6be..c5d0209 100644
--- a/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.html
+++ b/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.html
@@ -606,111 +606,12 @@ offsets as reported by Kafka, the consumer lag per partition, as well as the agg
 consumer lag per partition may be reported as negative values if the supervisor has not received a recent latest offset
 response from Kafka. The aggregate lag value will always be &gt;= 0.</p>
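The invariant described above (per-partition lag may be negative, aggregate lag is always &gt;= 0) can be illustrated by clamping each partition's lag before summing. This is a sketch of the documented behavior, not necessarily the exact computation Druid performs:

```python
# Illustrative aggregation of per-partition Kafka consumer lag. Partitions for
# which the supervisor has no recent offset response may report negative lag;
# clamping them to zero keeps the aggregate value >= 0, matching the invariant
# described above.
def aggregate_lag(partition_lags):
    return sum(max(lag, 0) for lag in partition_lags.values())

lags = {"0": 120, "1": -1, "2": 35}  # partition id -> reported lag
print(aggregate_lag(lags))  # the negative entry is ignored
```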
 
-<p>The status report also contains the supervisor&#39;s state and a list of recently thrown exceptions (reported as
-<code>recentErrors</code>, whose max size can be controlled using the <code>druid.supervisor.maxStoredExceptionEvents</code> configuration).
-There are two fields related to the supervisor&#39;s state - <code>state</code> and <code>detailedState</code>. The <code>state</code> field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the <code>detailedState</code> field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor&#39;s
-activities than the generic <code>state</code> field.</p>
-
-<p>The list of possible <code>state</code> values are: [<code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code>, <code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>]</p>
-
-<p>The list of <code>detailedState</code> values and their corresponding <code>state</code> mapping is as follows:</p>
-
-<table><thead>
-<tr>
-<th>Detailed State</th>
-<th>Corresponding State</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor has encountered errors on the past <code>druid.supervisor.unhealthinessThreshold</code> iterations</td>
-</tr>
-<tr>
-<td>UNHEALTHY_TASKS</td>
-<td>UNHEALTHY_TASKS</td>
-<td>The last <code>druid.supervisor.taskUnhealthinessThreshold</code> tasks have all failed</td>
-</tr>
-<tr>
-<td>UNABLE_TO_CONNECT_TO_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kafka and has not successfully connected in the past</td>
-</tr>
-<tr>
-<td>LOST_CONTACT_WITH_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kafka but has successfully connected in the past</td>
-</tr>
-<tr>
-<td>PENDING (first iteration only)</td>
-<td>PENDING</td>
-<td>The supervisor has been initialized and hasn&#39;t started connecting to the stream</td>
-</tr>
-<tr>
-<td>CONNECTING_TO_STREAM (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is trying to connect to the stream and update partition data</td>
-</tr>
-<tr>
-<td>DISCOVERING_INITIAL_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is discovering already-running tasks</td>
-</tr>
-<tr>
-<td>CREATING_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is creating tasks and discovering state</td>
-</tr>
-<tr>
-<td>RUNNING</td>
-<td>RUNNING</td>
-<td>The supervisor has started tasks and is waiting for taskDuration to elapse</td>
-</tr>
-<tr>
-<td>SUSPENDED</td>
-<td>SUSPENDED</td>
-<td>The supervisor has been suspended</td>
-</tr>
-<tr>
-<td>STOPPING</td>
-<td>STOPPING</td>
-<td>The supervisor is stopping</td>
-</tr>
-</tbody></table>
-
-<p>On each iteration of the supervisor&#39;s run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of partitions from Kafka and determine the starting offset for each partition (either based on the
-  last processed offset if continuing, or starting from the beginning or ending of the stream if this is a new topic).
-  2) Discover any running indexing tasks that are writing to the supervisor&#39;s datasource and adopt them if they match
-  the supervisor&#39;s configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded <code>taskDuration</code> and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor&#39;s internal state.
-  7) Compare the list of healthy tasks to the requested <code>taskCount</code> and <code>replicas</code> configurations and create additional tasks if required.</p>
-
-<p>The <code>detailedState</code> field will show additional values (those marked with &quot;first iteration only&quot;) the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can&#39;t connect to
-Kafka, it can&#39;t read from the Kafka topic, or it can&#39;t communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - <code>detailedState</code> will show a <code>RUNNING</code>
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.</p>
-
 <h3 id="getting-supervisor-ingestion-stats-report">Getting Supervisor Ingestion Stats Report</h3>
 
 <p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/stats</code> returns a snapshot of the current ingestion row counters for each task being managed by the supervisor, along with moving averages for the row counters.</p>
 
 <p>See <a href="../../ingestion/reports.html#row-stats">Task Reports: Row Stats</a> for more information.</p>
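The moving averages mentioned above can be sketched as a windowed mean over per-period row counts. The window size and the shape of the input here are illustrative, not the report's actual schema:

```python
from collections import deque

# Windowed moving average over a stream of per-period processed-row counts.
# Druid's report uses time-based windows; a fixed number of samples stands in
# for them here to sketch the idea.
def moving_average(counts, window):
    buf = deque(maxlen=window)
    averages = []
    for c in counts:
        buf.append(c)
        averages.append(sum(buf) / len(buf))
    return averages

print(moving_average([100, 200, 300, 400], window=2))
```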
 
-<h3 id="supervisor-health-check">Supervisor Health Check</h3>
-
-<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/health</code> returns <code>200 OK</code> if the supervisor is healthy and
-<code>503 Service Unavailable</code> if it is unhealthy. Healthiness is determined by the supervisor&#39;s <code>state</code> (as returned by the
-<code>/status</code> endpoint) and the <code>druid.supervisor.*</code> Overlord configuration thresholds.</p>
-
 <h3 id="updating-existing-supervisors">Updating Existing Supervisors</h3>
 
 <p><code>POST /druid/indexer/v1/supervisor</code> can be used to update existing supervisor spec.
diff --git a/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.html b/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.html
index 4fccb15..411dc7c 100644
--- a/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.html
+++ b/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.html
@@ -231,7 +231,7 @@ and the MiddleManagers. A supervisor for a dataSource is started by submitting a
   <span class="p">}</span>
 <span class="p">}</span>
 </code></pre></div>
-<h2 id="supervisor-spec">Supervisor Spec</h2>
+<h2 id="supervisor-configuration">Supervisor Configuration</h2>
 
 <table><thead>
 <tr>
@@ -661,108 +661,12 @@ For all supervisor APIs, please check <a href="../../operations/api-reference.ht
 <code>
 -Ddruid.kinesis.accessKey=123 -Ddruid.kinesis.secretKey=456
 </code>
-The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will
-look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance
-profile provider (in this order).</p>
+The AWS access key ID and secret access key are used for Kinesis API requests. If they are not provided, the service will look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance profile provider (in this order).</p>
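The lookup order described above is a standard credential-provider chain. A simplified sketch, modeling only the first two sources and stubbing the profile-file and instance-profile fallbacks:

```python
import os

# Simplified credential-provider chain mirroring the order described above:
# explicit JVM properties first, then environment variables, then the default
# profile file / EC2 instance profile (stubbed here for illustration).
def resolve_credentials(jvm_props, environ):
    if "druid.kinesis.accessKey" in jvm_props and "druid.kinesis.secretKey" in jvm_props:
        return ("properties", jvm_props["druid.kinesis.accessKey"])
    if "AWS_ACCESS_KEY_ID" in environ and "AWS_SECRET_ACCESS_KEY" in environ:
        return ("environment", environ["AWS_ACCESS_KEY_ID"])
    return ("default-chain", None)  # profile file / instance profile fallback

print(resolve_credentials({"druid.kinesis.accessKey": "123",
                           "druid.kinesis.secretKey": "456"}, os.environ))
```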
 
 <h3 id="getting-supervisor-status-report">Getting Supervisor Status Report</h3>
 
-<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/status</code> returns a snapshot report of the current state of the tasks 
-managed by the given supervisor. This includes the latest sequence numbers as reported by Kinesis. Unlike the Kafka
-Indexing Service, stats about lag are not yet supported.</p>
-
-<p>The status report also contains the supervisor&#39;s state and a list of recently thrown exceptions (reported as
-<code>recentErrors</code>, whose max size can be controlled using the <code>druid.supervisor.maxStoredExceptionEvents</code> configuration).
-There are two fields related to the supervisor&#39;s state - <code>state</code> and <code>detailedState</code>. The <code>state</code> field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the <code>detailedState</code> field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor&#39;s
-activities than the generic <code>state</code> field.</p>
-
-<p>The list of possible <code>state</code> values are: [<code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code>, <code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>]</p>
-
-<p>The list of <code>detailedState</code> values and their corresponding <code>state</code> mapping is as follows:</p>
-
-<table><thead>
-<tr>
-<th>Detailed State</th>
-<th>Corresponding State</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor has encountered errors on the past <code>druid.supervisor.unhealthinessThreshold</code> iterations</td>
-</tr>
-<tr>
-<td>UNHEALTHY_TASKS</td>
-<td>UNHEALTHY_TASKS</td>
-<td>The last <code>druid.supervisor.taskUnhealthinessThreshold</code> tasks have all failed</td>
-</tr>
-<tr>
-<td>UNABLE_TO_CONNECT_TO_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kinesis and has not successfully connected in the past</td>
-</tr>
-<tr>
-<td>LOST_CONTACT_WITH_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kinesis but has successfully connected in the past</td>
-</tr>
-<tr>
-<td>PENDING (first iteration only)</td>
-<td>PENDING</td>
-<td>The supervisor has been initialized and hasn&#39;t started connecting to the stream</td>
-</tr>
-<tr>
-<td>CONNECTING_TO_STREAM (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is trying to connect to the stream and update partition data</td>
-</tr>
-<tr>
-<td>DISCOVERING_INITIAL_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is discovering already-running tasks</td>
-</tr>
-<tr>
-<td>CREATING_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is creating tasks and discovering state</td>
-</tr>
-<tr>
-<td>RUNNING</td>
-<td>RUNNING</td>
-<td>The supervisor has started tasks and is waiting for taskDuration to elapse</td>
-</tr>
-<tr>
-<td>SUSPENDED</td>
-<td>SUSPENDED</td>
-<td>The supervisor has been suspended</td>
-</tr>
-<tr>
-<td>STOPPING</td>
-<td>STOPPING</td>
-<td>The supervisor is stopping</td>
-</tr>
-</tbody></table>
-
-<p>On each iteration of the supervisor&#39;s run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of shards from Kinesis and determine the starting sequence number for each shard (either based on the
-  last processed sequence number if continuing, or starting from the beginning or ending of the stream if this is a new stream).
-  2) Discover any running indexing tasks that are writing to the supervisor&#39;s datasource and adopt them if they match
-  the supervisor&#39;s configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded <code>taskDuration</code> and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor&#39;s internal state.
-  7) Compare the list of healthy tasks to the requested <code>taskCount</code> and <code>replicas</code> configurations and create additional tasks if required.</p>
-
-<p>The <code>detailedState</code> field will show additional values (those marked with &quot;first iteration only&quot;) the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can&#39;t connect to
-Kinesis, it can&#39;t read from the stream, or it can&#39;t communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - <code>detailedState</code> will show a <code>RUNNING</code>
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.</p>
+<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/status</code> returns a snapshot report of the current state of the tasks managed by the given supervisor. This includes the latest
+sequence numbers as reported by Kinesis. Unlike the Kafka Indexing Service, stats about lag are not yet supported.</p>
 
 <h3 id="updating-existing-supervisors">Updating Existing Supervisors</h3>
 
diff --git a/docs/0.15.0-incubating/development/extensions-core/postgresql.html b/docs/0.15.0-incubating/development/extensions-core/postgresql.html
index 623a511..0330dd8 100644
--- a/docs/0.15.0-incubating/development/extensions-core/postgresql.html
+++ b/docs/0.15.0-incubating/development/extensions-core/postgresql.html
@@ -261,12 +261,6 @@
 <td>none</td>
 <td>no</td>
 </tr>
-<tr>
-<td><code>druid.metadata.postgres.dbTableSchema</code></td>
-<td>druid meta table schema</td>
-<td><code>public</code></td>
-<td>no</td>
-</tr>
 </tbody></table>
 
         </div>
diff --git a/docs/0.15.0-incubating/development/extensions-core/s3.html b/docs/0.15.0-incubating/development/extensions-core/s3.html
index 56fa55c..faaa53e 100644
--- a/docs/0.15.0-incubating/development/extensions-core/s3.html
+++ b/docs/0.15.0-incubating/development/extensions-core/s3.html
@@ -174,18 +174,43 @@
 </thead><tbody>
 <tr>
 <td><code>druid.s3.accessKey</code></td>
-<td>S3 access key.See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td>S3 access key.</td>
+<td>Must be set.</td>
 </tr>
 <tr>
 <td><code>druid.s3.secretKey</code></td>
-<td>S3 secret key.See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td>S3 secret key.</td>
+<td>Must be set.</td>
+</tr>
+<tr>
+<td><code>druid.storage.bucket</code></td>
+<td>Bucket to store in.</td>
+<td>Must be set.</td>
 </tr>
 <tr>
-<td><code>druid.s3.fileSessionCredentials</code></td>
-<td>Path to properties file containing <code>sessionToken</code>, <code>accessKey</code> and <code>secretKey</code> value. One key/value pair per line (format <code>key=value</code>). See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td><code>druid.storage.baseKey</code></td>
+<td>Base key prefix to use (i.e., which directory).</td>
+<td>Must be set.</td>
+</tr>
+<tr>
+<td><code>druid.storage.disableAcl</code></td>
+<td>Boolean flag to disable ACL. If this is set to <code>false</code>, full control is granted to the bucket owner, which may require setting additional permissions. See <a href="#s3-permissions-settings">S3 permissions settings</a>.</td>
+<td>false</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.type</code></td>
+<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="#server-side-encryption">Server-side encryption section</a> for more details.</td>
+<td>None</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.kms.keyId</code></td>
+<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
+<td>None</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
+<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
+<td>None</td>
 </tr>
 <tr>
 <td><code>druid.s3.protocol</code></td>
@@ -237,51 +262,6 @@
 <td>Password to use when connecting through a proxy.</td>
 <td>None</td>
 </tr>
-<tr>
-<td><code>druid.storage.bucket</code></td>
-<td>Bucket to store in.</td>
-<td>Must be set.</td>
-</tr>
-<tr>
-<td><code>druid.storage.baseKey</code></td>
-<td>Base key prefix to use, i.e. what directory.</td>
-<td>Must be set.</td>
-</tr>
-<tr>
-<td><code>druid.storage.archiveBucket</code></td>
-<td>S3 bucket name for archiving when running the <em>archive task</em>.</td>
-<td>none</td>
-</tr>
-<tr>
-<td><code>druid.storage.archiveBaseKey</code></td>
-<td>S3 object key prefix for archiving.</td>
-<td>none</td>
-</tr>
-<tr>
-<td><code>druid.storage.disableAcl</code></td>
-<td>Boolean flag to disable ACL. If this is set to <code>false</code>, the full control would be granted to the bucket owner. This may require to set additional permissions. See <a href="#s3-permissions-settings">S3 permissions settings</a>.</td>
-<td>false</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.type</code></td>
-<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="#server-side-encryption">Server-side encryption section</a> for more details.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.kms.keyId</code></td>
-<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
-<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.useS3aSchema</code></td>
-<td>If true, use the &quot;s3a&quot; filesystem when using Hadoop-based ingestion. If false, the &quot;s3n&quot; filesystem will be used. Only affects Hadoop-based ingestion.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <h3 id="s3-permissions-settings">S3 permissions settings</h3>
@@ -289,53 +269,6 @@
 <p><code>s3:GetObject</code> and <code>s3:PutObject</code> are required for pushing/loading segments to/from S3.
 If <code>druid.storage.disableAcl</code> is set to <code>false</code>, then <code>s3:GetBucketAcl</code> and <code>s3:PutObjectAcl</code> are additionally required to set ACLs on objects.</p>
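A policy document granting the actions listed above could be assembled as follows. The bucket name is a placeholder and the resulting policy is a hedged sketch, not a complete or audited IAM policy for your deployment:

```python
import json

# Assemble a minimal IAM-style policy covering the actions listed above.
# s3:GetBucketAcl / s3:PutObjectAcl are only appended when ACLs are in use
# (i.e. druid.storage.disableAcl=false), per the text.
def druid_s3_policy(bucket, disable_acl):
    actions = ["s3:GetObject", "s3:PutObject"]
    if not disable_acl:
        actions += ["s3:GetBucketAcl", "s3:PutObjectAcl"]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": actions,
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }

print(json.dumps(druid_s3_policy("my-druid-bucket", disable_acl=False), indent=2))
```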
 
-<h3 id="s3-authentication-methods">S3 authentication methods</h3>
-
-<p>To connect to your S3 bucket (whether deep storage bucket or source bucket), Druid use the following credentials providers chain</p>
-
-<table><thead>
-<tr>
-<th>order</th>
-<th>type</th>
-<th>details</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>1</td>
-<td>Druid config file</td>
-<td>Based on your runtime.properties if it contains values <code>druid.s3.accessKey</code> and <code>druid.s3.secretKey</code></td>
-</tr>
-<tr>
-<td>2</td>
-<td>Custom properties file</td>
-<td>Based on custom properties file where you can supply <code>sessionToken</code>, <code>accessKey</code> and <code>secretKey</code> values. This file is provided to Druid through <code>druid.s3.fileSessionCredentials</code> propertie</td>
-</tr>
-<tr>
-<td>3</td>
-<td>Environment variables</td>
-<td>Based on environment variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code></td>
-</tr>
-<tr>
-<td>4</td>
-<td>Java system properties</td>
-<td>Based on JVM properties <code>aws.accessKeyId</code> and <code>aws.secretKey</code></td>
-</tr>
-<tr>
-<td>5</td>
-<td>Profile informations</td>
-<td>Based on credentials you may have on your druid instance (generally in <code>~/.aws/credentials</code>)</td>
-</tr>
-<tr>
-<td>6</td>
-<td>Instance profile informations</td>
-<td>Based on the instance profile you may have attached to your druid instance</td>
-</tr>
-</tbody></table>
-
-<p>You can find more informations about authentication method <a href="https://docs.aws.amazon.com/fr_fr/sdk-for-java/v1/developer-guide/credentials.html">here</a><br/>
-<strong>Note :</strong> <em>Order is important here as it indicates the precedence of authentication methods.<br/> 
-So if you are trying to use Instance profile informations, you **must not</em>* set <code>druid.s3.accessKey</code> and <code>druid.s3.secretKey</code> in your Druid runtime.properties* </p>
-
 <h2 id="server-side-encryption">Server-side encryption</h2>
 
 <p>You can enable <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html">server-side encryption</a> by setting
diff --git a/docs/0.15.0-incubating/development/extensions.html b/docs/0.15.0-incubating/development/extensions.html
index 51a63a3..bd56289 100644
--- a/docs/0.15.0-incubating/development/extensions.html
+++ b/docs/0.15.0-incubating/development/extensions.html
@@ -192,7 +192,7 @@ metadata store. Many clusters will also use additional extensions.</p>
 </tr>
 <tr>
 <td>druid-datasketches</td>
-<td>Support for approximate counts and set operations with <a href="https://datasketches.github.io/">DataSketches</a>.</td>
+<td>Support for approximate counts and set operations with <a href="http://datasketches.github.io/">DataSketches</a>.</td>
 <td><a href="../development/extensions-core/datasketches-extension.html">link</a></td>
 </tr>
 <tr>
@@ -395,21 +395,6 @@ If you&#39;d like to take on maintenance for a community extension, please post
 <td>Support for <a href="https://en.wikipedia.org/wiki/Moving_average">Moving Average</a> and other Aggregate <a href="https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions">Window Functions</a> in Druid queries.</td>
 <td><a href="../development/extensions-contrib/moving-average-query.html">link</a></td>
 </tr>
-<tr>
-<td>druid-influxdb-emitter</td>
-<td>InfluxDB metrics emitter</td>
-<td><a href="../development/extensions-contrib/influxdb-emitter.html">link</a></td>
-</tr>
-<tr>
-<td>druid-momentsketch</td>
-<td>Support for approximate quantile queries using the <a href="https://github.com/stanford-futuredata/momentsketch">momentsketch</a> library</td>
-<td><a href="../development/extensions-contrib/momentsketch-quantiles.html">link</a></td>
-</tr>
-<tr>
-<td>druid-tdigestsketch</td>
-<td>Support for approximate sketch aggregators based on <a href="https://github.com/tdunning/t-digest">T-Digest</a></td>
-<td><a href="../development/extensions-contrib/tdigestsketch-quantiles.html">link</a></td>
-</tr>
 </tbody></table>
 
 <h2 id="promoting-community-extension-to-core-extension">Promoting Community Extension to Core Extension</h2>
diff --git a/docs/0.15.0-incubating/development/geo.html b/docs/0.15.0-incubating/development/geo.html
index ff431f6..1af711b 100644
--- a/docs/0.15.0-incubating/development/geo.html
+++ b/docs/0.15.0-incubating/development/geo.html
@@ -253,27 +253,6 @@
 </tr>
 </tbody></table>
 
-<h3 id="polygonbound">PolygonBound</h3>
-
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>abscissa</td>
-<td>Horizontal coordinate for corners of the polygon</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>ordinate</td>
-<td>Vertical coordinate for corners of the polygon</td>
-<td>yes</td>
-</tr>
-</tbody></table>
-
         </div>
         <div class="col-md-3">
           <div class="searchbox">
diff --git a/docs/0.15.0-incubating/development/modules.html b/docs/0.15.0-incubating/development/modules.html
index 1dab22c..5943690 100644
--- a/docs/0.15.0-incubating/development/modules.html
+++ b/docs/0.15.0-incubating/development/modules.html
@@ -164,7 +164,7 @@
 and <code>org.apache.druid.query.aggregation.BufferAggregator</code>.</li>
 <li>Add PostAggregators by extending <code>org.apache.druid.query.aggregation.PostAggregator</code>.</li>
 <li>Add ExtractionFns by extending <code>org.apache.druid.query.extraction.ExtractionFn</code>.</li>
-<li>Add Complex metrics by extending <code>org.apache.druid.segment.serde.ComplexMetricSerde</code>.</li>
+<li>Add Complex metrics by extending <code>org.apache.druid.segment.serde.ComplexMetricsSerde</code>.</li>
 <li>Add new Query types by extending <code>org.apache.druid.query.QueryRunnerFactory</code>, <code>org.apache.druid.query.QueryToolChest</code>, and
 <code>org.apache.druid.query.Query</code>.</li>
 <li>Add new Jersey resources by calling <code>Jerseys.addResource(binder, clazz)</code>.</li>
diff --git a/docs/0.15.0-incubating/ingestion/compaction.html b/docs/0.15.0-incubating/ingestion/compaction.html
index 7dba6ed..1e204fb 100644
--- a/docs/0.15.0-incubating/ingestion/compaction.html
+++ b/docs/0.15.0-incubating/ingestion/compaction.html
@@ -155,6 +155,7 @@
     <span class="nt">&quot;dataSource&quot;</span><span class="p">:</span> <span class="err">&lt;task_datasource&gt;</span><span class="p">,</span>
     <span class="nt">&quot;interval&quot;</span><span class="p">:</span> <span class="err">&lt;interval</span> <span class="err">to</span> <span class="err">specify</span> <span class="err">segments</span> <span class="err">to</span> <span class="err">be</span> <span class="err">merged&gt;</span><span class="p">,</span>
     <span class="nt">&quot;dimensions&quot;</span> <span class="err">&lt;custom</span> <span class="err">dimensionsSpec&gt;</span><span class="p">,</span>
+    <span class="nt">&quot;keepSegmentGranularity&quot;</span><span class="p">:</span> <span class="err">&lt;</span><span class="kc">true</span> <span class="err">or</span> <span class="kc">false</span><span class="err">&gt;</span><span class="p">,</span>
     <span class="nt">&quot;segmentGranularity&quot;</span><span class="p">:</span> <span class="err">&lt;segment</span> <span class="err">granularity</span> <span class="err">after</span> <span class="err">compaction&gt;</span><span class="p">,</span>
     <span class="nt">&quot;targetCompactionSizeBytes&quot;</span><span class="p">:</span> <span class="err">&lt;target</span> <span class="err">size</span> <span class="err">of</span> <span class="err">compacted</span> <span class="err">segments&gt;</span>
     <span class="s2">&quot;tuningConfig&quot;</span> <span class="err">&lt;index</span> <span class="err">task</span> <span class="err">tuningConfig&gt;</span><span class="p">,</span>
@@ -204,6 +205,11 @@
 <td>No</td>
 </tr>
 <tr>
+<td><code>keepSegmentGranularity</code></td>
+<td>Deprecated. Please use <code>segmentGranularity</code> instead. See the below table for its behavior.</td>
+<td>No</td>
+</tr>
+<tr>
 <td><code>targetCompactionSizeBytes</code></td>
<td>Target segment size after compaction. Cannot be used with <code>maxRowsPerSegment</code>, <code>maxTotalRows</code>, and <code>numShards</code> in tuningConfig.</td>
 <td>No</td>
@@ -220,6 +226,47 @@
 </tr>
 </tbody></table>
 
+<h3 id="used-segmentgranularity-based-on-segmentgranularity-and-keepsegmentgranularity">Used segmentGranularity based on <code>segmentGranularity</code> and <code>keepSegmentGranularity</code></h3>
+
+<table><thead>
+<tr>
+<th>SegmentGranularity</th>
+<th>keepSegmentGranularity</th>
+<th>Used SegmentGranularity</th>
+</tr>
+</thead><tbody>
+<tr>
+<td>Non-null</td>
+<td>True</td>
+<td>Error</td>
+</tr>
+<tr>
+<td>Non-null</td>
+<td>False</td>
+<td>Given segmentGranularity</td>
+</tr>
+<tr>
+<td>Non-null</td>
+<td>Null</td>
+<td>Given segmentGranularity</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>True</td>
+<td>Original segmentGranularity</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>False</td>
+<td>ALL segmentGranularity. All events will fall into a single time chunk.</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>Null</td>
+<td>Original segmentGranularity</td>
+</tr>
+</tbody></table>
+
 <p>An example of compaction task is</p>
 <div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
   <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;compact&quot;</span><span class="p">,</span>
@@ -228,12 +275,12 @@
 <span class="p">}</span>
 </code></pre></div>
 <p>This compaction task reads <em>all segments</em> of the interval <code>2017-01-01/2018-01-01</code> and results in new segments.
-Since <code>segmentGranularity</code> is null, the original segment granularity will be remained and not changed after compaction.
+Since both <code>segmentGranularity</code> and <code>keepSegmentGranularity</code> are null, the original segment granularity will remain unchanged after compaction.
 To control the number of result segments per time chunk, you can set <a href="../configuration/index.html#compaction-dynamic-configuration">maxRowsPerSegment</a> or <a href="../ingestion/native_tasks.html#tuningconfig">numShards</a>.
 Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year.</p>
 
 <p>A compaction task internally generates an <code>index</code> task spec for performing compaction work with some fixed parameters.
-For example, its <code>firehose</code> is always the <a href="./firehose.html#ingestsegmentfirehose">ingestSegmentFirehose</a>, and <code>dimensionsSpec</code> and <code>metricsSpec</code>
+For example, its <code>firehose</code> is always the <a href="./firehose.html#ingestsegmentfirehose">ingestSegmentFirehose</a>, and <code>dimensionsSpec</code> and <code>metricsSpec</code>
 include all dimensions and metrics of the input segments by default.</p>
 
 <p>Compaction tasks will exit with a failure status code, without doing anything, if the interval you specify has no
diff --git a/docs/0.15.0-incubating/ingestion/hadoop-vs-native-batch.html b/docs/0.15.0-incubating/ingestion/hadoop-vs-native-batch.html
index 1a2ccf1..060b44b 100644
--- a/docs/0.15.0-incubating/ingestion/hadoop-vs-native-batch.html
+++ b/docs/0.15.0-incubating/ingestion/hadoop-vs-native-batch.html
@@ -180,14 +180,14 @@ ingestion method.</p>
 <td>No dependency</td>
 </tr>
 <tr>
-<td>Supported <a href="./index.html#roll-up-modes">rollup modes</a></td>
+<td>Supported <a href="http://druid.io/docs/latest/ingestion/index.html#roll-up-modes">rollup modes</a></td>
 <td>Perfect rollup</td>
 <td>Best-effort rollup</td>
 <td>Both perfect and best-effort rollup</td>
 </tr>
 <tr>
 <td>Supported partitioning methods</td>
-<td><a href="./hadoop.html#partitioning-specification">Both Hash-based and range partitioning</a></td>
+<td><a href="http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification">Both Hash-based and range partitioning</a></td>
 <td>N/A</td>
 <td>Hash-based partitioning (when <code>forceGuaranteedRollup</code> = true)</td>
 </tr>
diff --git a/docs/0.15.0-incubating/ingestion/hadoop.html b/docs/0.15.0-incubating/ingestion/hadoop.html
index ae9bcd6..8dfeb62 100644
--- a/docs/0.15.0-incubating/ingestion/hadoop.html
+++ b/docs/0.15.0-incubating/ingestion/hadoop.html
@@ -507,12 +507,6 @@ s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=23
 <td>The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides <code>ignoreInvalidRows</code> if <code>maxParseExceptions</code> is defined.</td>
 <td>unlimited</td>
 </tr>
-<tr>
-<td>useYarnRMJobStatusFallback</td>
-<td>Boolean</td>
-<td>If the Hadoop jobs created by the indexing task are unable to retrieve their completion status from the JobHistory server, and this parameter is true, the indexing task will try to fetch the application status from <code>http://&lt;yarn-rm-address&gt;/ws/v1/cluster/apps/&lt;application-id&gt;</code>, where <code>&lt;yarn-rm-address&gt;</code> is the value of <code>yarn.resourcemanager.webapp.address</code> in your Hadoop configuration. This flag is intended as a fallback for cases wh [...]
-<td>no (default = true)</td>
-</tr>
 </tbody></table>
 
 <h3 id="jobproperties-field-of-tuningconfig">jobProperties field of TuningConfig</h3>
diff --git a/docs/0.15.0-incubating/misc/math-expr.html b/docs/0.15.0-incubating/misc/math-expr.html
index 19ab484..a021a9a 100644
--- a/docs/0.15.0-incubating/misc/math-expr.html
+++ b/docs/0.15.0-incubating/misc/math-expr.html
@@ -149,8 +149,7 @@
 <h1 id="apache-druid-incubating-expressions">Apache Druid (incubating) Expressions</h1>
 
 <div class="note info">
-This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to
- have significant inefficiencies.
+This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to have significant inefficiencies.
 </div>
  
 
@@ -188,28 +187,14 @@ This feature is still experimental. It has not been optimized for performance ye
 </tr>
 </tbody></table>
 
-<p>Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise
-it is interpreted as a long. That means, always add a &#39;.&#39; to your number if you want it interpreted as a double value.
-String literals should be quoted by single quotation marks.</p>
+<p>Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise it is interpreted as a long. That means, always add a &#39;.&#39; to your number if you want it interpreted as a double value. String literals should be quoted by single quotation marks.</p>
 
-<p>Additionally, the expression language supports long, double, and string arrays. Array literals are created by wrapping
-square brackets around a list of scalar literals values delimited by a comma or space character. All values in an array
-literal must be the same type.</p>
+<p>Multi-value types are not fully supported yet. Expressions may behave inconsistently on multi-value types, and you
+should not rely on the behavior in this case to stay the same in future releases.</p>
 
-<p>Expressions can contain variables. Variable names may contain letters, digits, &#39;_&#39; and &#39;$&#39;. Variable names must not
-begin with a digit. To escape other special characters, you can quote it with double quotation marks.</p>
+<p>Expressions can contain variables. Variable names may contain letters, digits, &#39;_&#39; and &#39;$&#39;. Variable names must not begin with a digit. To escape other special characters, you can quote it with double quotation marks.</p>
 
-<p>For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string
-type, it&#39;s the evaluation result of &#39;Boolean.valueOf(string)&#39;.</p>
-
-<p>Multi-value string dimensions are supported and may be treated as either scalar or array typed values. When treated as
-a scalar type, an expression will automatically be transformed to apply the scalar operation across all values of the
-multi-valued type, to mimic Druid&#39;s native behavior. Values that result in arrays will be coerced back into the native
-Druid string type for aggregation. Druid aggregations on multi-value string dimensions on the individual values, <em>not</em>
-the &#39;array&#39;, behaving similar to the <code>unnest</code> operator available in many SQL dialects. However, by using the
-<code>array_to_string</code> function, aggregations may be done on a stringified version of the complete array, allowing the
-complete row to be preserved. Using <code>string_to_array</code> in an expression post-aggregator, allows transforming the
-stringified dimension back into the true native array type.</p>
+<p>For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string type, it&#39;s the evaluation result of &#39;Boolean.valueOf(string)&#39;.</p>
 
 <p>The following built-in functions are available.</p>
 
@@ -223,7 +208,7 @@ stringified dimension back into the true native array type.</p>
 </thead><tbody>
 <tr>
 <td>cast</td>
-<td>cast(expr,&#39;LONG&#39; or &#39;DOUBLE&#39; or &#39;STRING&#39; or &#39;LONG_ARRAY&#39;, or &#39;DOUBLE_ARRAY&#39; or &#39;STRING_ARRAY&#39;) returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null).</td>
+<td>cast(expr,&#39;LONG&#39; or &#39;DOUBLE&#39; or &#39;STRING&#39;) returns expr with the specified type. An exception can be thrown</td>
 </tr>
 <tr>
 <td>if</td>
@@ -555,106 +540,6 @@ stringified dimension back into the true native array type.</p>
 </tr>
 </tbody></table>
 
-<h2 id="array-functions">Array Functions</h2>
-
-<table><thead>
-<tr>
-<th>function</th>
-<th>description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>array_length(arr)</code></td>
-<td>returns length of array expression</td>
-</tr>
-<tr>
-<td><code>array_offset(arr,long)</code></td>
-<td>returns the array element at the 0 based index supplied, or null for an out of range index</td>
-</tr>
-<tr>
-<td><code>array_ordinal(arr,long)</code></td>
-<td>returns the array element at the 1 based index supplied, or null for an out of range index</td>
-</tr>
-<tr>
-<td><code>array_contains(arr,expr)</code></td>
-<td>returns true if the array contains the element specified by expr, or contains all elements specified by expr if expr is an array</td>
-</tr>
-<tr>
-<td><code>array_overlap(arr1,arr2)</code></td>
-<td>returns true if arr1 and arr2 have any elements in common</td>
-</tr>
-<tr>
-<td><code>array_offset_of(arr,expr)</code></td>
-<td>returns the 0 based index of the first occurrence of expr in the array, or <code>null</code> if no matching elements exist in the array.</td>
-</tr>
-<tr>
-<td><code>array_ordinal_of(arr,expr)</code></td>
-<td>returns the 1 based index of the first occurrence of expr in the array, or <code>null</code> if no matching elements exist in the array.</td>
-</tr>
-<tr>
-<td><code>array_append(arr1,expr)</code></td>
-<td>appends expr to arr, the resulting array type determined by the type of the first array</td>
-</tr>
-<tr>
-<td><code>array_concat(arr1,arr2)</code></td>
-<td>concatenates 2 arrays, the resulting array type determined by the type of the first array</td>
-</tr>
-<tr>
-<td><code>array_to_string(arr,str)</code></td>
-<td>joins all elements of arr by the delimiter specified by str</td>
-</tr>
-<tr>
-<td><code>string_to_array(str1,str2)</code></td>
-<td>splits str1 into an array on the delimiter specified by str2</td>
-</tr>
-<tr>
-<td><code>array_slice(arr,start,end)</code></td>
-<td>return the subarray of arr from the 0 based index start(inclusive) to end(exclusive), or <code>null</code>, if start is less than 0, greater than length of arr or less than end</td>
-</tr>
-<tr>
-<td><code>array_prepend(expr,arr)</code></td>
-<td>adds expr to arr at the beginning, the resulting array type determined by the type of the array</td>
-</tr>
-</tbody></table>
-
-<h2 id="apply-functions">Apply Functions</h2>
-
-<table><thead>
-<tr>
-<th>function</th>
-<th>description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>map(lambda,arr)</code></td>
-<td>applies a transform specified by a single argument lambda expression to all elements of arr, returning a new array</td>
-</tr>
-<tr>
-<td><code>cartesian_map(lambda,arr1,arr2,...)</code></td>
-<td>applies a transform specified by a multi argument lambda expression to all elements of the cartesian product of all input arrays, returning a new array; the number of lambda arguments and array inputs must be the same</td>
-</tr>
-<tr>
-<td><code>filter(lambda,arr)</code></td>
-<td>filters arr by a single argument lambda, returning a new array with all matching elements, or null if no elements match</td>
-</tr>
-<tr>
-<td><code>fold(lambda,arr)</code></td>
-<td>folds a 2 argument lambda across arr. The first argument of the lambda is the array element and the second the accumulator, returning a single accumulated value.</td>
-</tr>
-<tr>
-<td><code>cartesian_fold(lambda,arr1,arr2,...)</code></td>
-<td>folds a multi argument lambda across the cartesian product of all input arrays. The first arguments of the lambda is the array element and the last is the accumulator, returning a single accumulated value.</td>
-</tr>
-<tr>
-<td><code>any(lambda,arr)</code></td>
-<td>returns true if any element in the array matches the lambda expression</td>
-</tr>
-<tr>
-<td><code>all(lambda,arr)</code></td>
-<td>returns true if all elements in the array matches the lambda expression</td>
-</tr>
-</tbody></table>
-
         </div>
         <div class="col-md-3">
           <div class="searchbox">
diff --git a/docs/0.15.0-incubating/operations/api-reference.html b/docs/0.15.0-incubating/operations/api-reference.html
index 5760649..5eae294 100644
--- a/docs/0.15.0-incubating/operations/api-reference.html
+++ b/docs/0.15.0-incubating/operations/api-reference.html
@@ -845,21 +845,6 @@ which automates this operation to perform periodically.</p>
 <td>supervisor unique identifier</td>
 </tr>
 <tr>
-<td><code>state</code></td>
-<td>String</td>
-<td>basic state of the supervisor. Available states:<code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>, <code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code></td>
-</tr>
-<tr>
-<td><code>detailedState</code></td>
-<td>String</td>
-<td>supervisor specific state. (See documentation of specific supervisor for details)</td>
-</tr>
-<tr>
-<td><code>healthy</code></td>
-<td>Boolean</td>
-<td>true or false indicator of overall supervisor health</td>
-</tr>
-<tr>
 <td><code>spec</code></td>
 <td>SupervisorSpec</td>
 <td>json specification of supervisor (See Supervisor Configuration for details)</td>
@@ -867,41 +852,6 @@ which automates this operation to perform periodically.</p>
 </tbody></table>
 
 <ul>
-<li><code>/druid/indexer/v1/supervisor?state=true</code></li>
-</ul>
-
-<p>Returns a list of objects of the currently active supervisors and their current state.</p>
-
-<table><thead>
-<tr>
-<th>Field</th>
-<th>Type</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>id</code></td>
-<td>String</td>
-<td>supervisor unique identifier</td>
-</tr>
-<tr>
-<td><code>state</code></td>
-<td>String</td>
-<td>basic state of the supervisor. Available states:<code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>, <code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code></td>
-</tr>
-<tr>
-<td><code>detailedState</code></td>
-<td>String</td>
-<td>supervisor specific state. (See documentation of specific supervisor for details)</td>
-</tr>
-<tr>
-<td><code>healthy</code></td>
-<td>Boolean</td>
-<td>true or false indicator of overall supervisor health</td>
-</tr>
-</tbody></table>
-
-<ul>
 <li><code>/druid/indexer/v1/supervisor/&lt;supervisorId&gt;</code></li>
 </ul>
 
diff --git a/docs/0.15.0-incubating/operations/recommendations.html b/docs/0.15.0-incubating/operations/recommendations.html
index c9e6ffe..9320710 100644
--- a/docs/0.15.0-incubating/operations/recommendations.html
+++ b/docs/0.15.0-incubating/operations/recommendations.html
@@ -206,11 +206,13 @@
<p>Segments should generally be between 300MB-700MB in size. Too many small segments result in inefficient CPU utilization and 
too many large segments impact query performance, most notably with TopN queries.</p>
 
-<h1 id="faqs-and-guides">FAQs and Guides</h1>
+<h1 id="read-faqs">Read FAQs</h1>
 
-<p>1) The <a href="../ingestion/faq.html">Ingestion FAQ</a> provides help with common ingestion problems.</p>
+<p>You should read about common problems people encounter here:</p>
 
-<p>2) The <a href="../operations/basic-cluster-tuning.html">Basic Cluster Tuning Guide</a> offers introductory guidelines for tuning your Druid cluster.</p>
+<p>1) <a href="../ingestion/faq.html">Ingestion-FAQ</a></p>
+
+<p>2) <a href="../operations/performance-faq.html">Performance-FAQ</a></p>
 
         </div>
         <div class="col-md-3">
diff --git a/docs/0.15.0-incubating/querying/aggregations.html b/docs/0.15.0-incubating/querying/aggregations.html
index d859781..ef8f841 100644
--- a/docs/0.15.0-incubating/querying/aggregations.html
+++ b/docs/0.15.0-incubating/querying/aggregations.html
@@ -334,7 +334,7 @@ JavaScript-based functionality is disabled by default. Please refer to the Druid
 
 <h4 id="datasketches-theta-sketch">DataSketches Theta Sketch</h4>
 
-<p>The <a href="../development/extensions-core/datasketches-theta.html">DataSketches Theta Sketch</a> extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the <a href="https://datasketches.github.io/">datasketches</a> library.</p>
+<p>The <a href="../development/extensions-core/datasketches-theta.html">DataSketches Theta Sketch</a> extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the <a href="http://datasketches.github.io/">datasketches</a> library.</p>
 
 <h4 id="datasketches-hll-sketch">DataSketches HLL Sketch</h4>
 
@@ -369,7 +369,7 @@ However, to ensure backwards compatibility, we will continue to support the clas
 
 <h4 id="datasketches-quantiles-sketch">DataSketches Quantiles Sketch</h4>
 
-<p>The <a href="../development/extensions-core/datasketches-quantiles.html">DataSketches Quantiles Sketch</a> extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the <a href="https://datasketches.github.io/">datasketches</a> library.</p>
+<p>The <a href="../development/extensions-core/datasketches-quantiles.html">DataSketches Quantiles Sketch</a> extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the <a href="http://datasketches.github.io/">datasketches</a> library.</p>
 
 <p>We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy.</p>
 
diff --git a/docs/0.15.0-incubating/querying/granularities.html b/docs/0.15.0-incubating/querying/granularities.html
index 42ed7a2..bcc1043 100644
--- a/docs/0.15.0-incubating/querying/granularities.html
+++ b/docs/0.15.0-incubating/querying/granularities.html
@@ -285,10 +285,11 @@
   <span class="p">}</span>
 <span class="p">}</span> <span class="p">]</span>
 </code></pre></div>
-<p>Having a query time <code>granularity</code> that is smaller than the <code>queryGranularity</code> parameter set at
-<a href="(../ingestion/ingestion-spec.html#granularityspec)">ingestion time</a> is unreasonable because information about that
-smaller granularity is not present in the indexed data. So, if the query time granularity is smaller than the ingestion
-time query granularity, Druid produces results that are equivalent to having set <code>granularity</code> to <code>queryGranularity</code>.</p>
+<p>Having a query granularity smaller than the ingestion granularity doesn&#39;t make sense,
+because information about that smaller granularity is not present in the indexed data.
+So, if the query granularity is smaller than the ingestion granularity, Druid produces
+results that are equivalent to having set the query granularity to the ingestion granularity.
+See <code>queryGranularity</code> in <a href="../ingestion/ingestion-spec.html#granularityspec">Ingestion Spec</a>.</p>
 
 <p>If you change the granularity to <code>all</code>, you will get everything aggregated in 1 bucket,</p>
 <div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">[</span> <span class="p">{</span>
diff --git a/docs/0.15.0-incubating/querying/lookups.html b/docs/0.15.0-incubating/querying/lookups.html
index 732fbd7..9ce1da9 100644
--- a/docs/0.15.0-incubating/querying/lookups.html
+++ b/docs/0.15.0-incubating/querying/lookups.html
@@ -402,11 +402,7 @@ The Coordinator periodically checks if any of the processes need to load/drop lo
 </code></pre></div>
 <h2 id="delete-lookup">Delete Lookup</h2>
 
-<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}/{id}</code> will remove that lookup from the cluster. If it was last lookup in the tier, then tier is deleted as well.</p>
-
-<h2 id="delete-tier">Delete Tier</h2>
-
-<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}</code> will remove that tier from the cluster.</p>
+<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}/{id}</code> will remove that lookup from the cluster.</p>
 
 <h2 id="list-tier-names">List tier names</h2>
 
diff --git a/docs/0.15.0-incubating/querying/scan-query.html b/docs/0.15.0-incubating/querying/scan-query.html
index 174ec63..633f575 100644
--- a/docs/0.15.0-incubating/querying/scan-query.html
+++ b/docs/0.15.0-incubating/querying/scan-query.html
@@ -211,7 +211,7 @@ amounts of data in parallel.</p>
 </tr>
 <tr>
 <td>batchSize</td>
-<td>The maximum number of rows buffered before being returned to the client. Default is <code>20480</code></td>
+<td>How many rows are buffered before returning to the client. Default is <code>20480</code></td>
 <td>no</td>
 </tr>
 <tr>
@@ -355,9 +355,9 @@ the query context (see the Query Context Properties section).</p>
 In legacy mode you can expect the following behavior changes:</p>
 
 <ul>
-<li>The <code>__time</code> column is returned as <code>&quot;timestamp&quot;</code> rather than <code>&quot;__time&quot;</code>. This will take precedence over any other column
-you may have that is named <code>&quot;timestamp&quot;</code>.</li>
-<li>The <code>__time</code> column is included in the list of columns even if you do not specifically ask for it.</li>
+<li>The <code>__time</code> column is returned as &quot;timestamp&quot; rather than &quot;__time&quot;. This will take precedence over any other column
+you may have that is named &quot;timestamp&quot;.</li>
+<li>The <code>__time</code> column is included in the list of columns even if you do not specifically ask for it.</li>
 <li>Timestamps are returned as ISO8601 time strings rather than integers (milliseconds since 1970-01-01 00:00:00 UTC).</li>
 </ul>
 
diff --git a/docs/0.15.0-incubating/querying/sql.html b/docs/0.15.0-incubating/querying/sql.html
index 190e3d4..14bb79d 100644
--- a/docs/0.15.0-incubating/querying/sql.html
+++ b/docs/0.15.0-incubating/querying/sql.html
@@ -146,12 +146,12 @@
   ~ under the License.
   -->
 
-<p>&lt;!-- 
-    The format of the tables that describe the functions and operators 
-    should not be changed without updating the script create-sql-function-doc 
-    in web-console/script/create-sql-function-doc, because the script detects
-    patterns in this markdown file and parse it to TypeScript file for web console
-   --&gt;</p>
+<!--
+  The format of the tables that describe the functions and operators
+  should not be changed without updating the script create-sql-function-doc
+  in web-console/script/create-sql-function-doc, because the script detects
+  patterns in this markdown file and parse it to TypeScript file for web console
+-->
 
 <h1 id="sql">SQL</h1>
 
@@ -166,6 +166,9 @@ queries on the query Broker (the first process you query), which are then passed
 queries. Other than the (slight) overhead of translating SQL on the Broker, there isn&#39;t an additional performance
 penalty versus native queries.</p>
 
+<p>To enable Druid SQL, make sure you have set <code>druid.sql.enable = true</code> either in your common.runtime.properties or your
+Broker&#39;s runtime.properties.</p>
+
 <h2 id="query-syntax">Query syntax</h2>
 
 <p>Each Druid datasource appears as a table in the &quot;druid&quot; schema. This is also the default schema, so Druid datasources
@@ -293,30 +296,6 @@ possible for two aggregators in the same SQL query to have different filters.</p
 <td><code>BLOOM_FILTER(expr, numEntries)</code></td>
 <td>Computes a bloom filter from values produced by <code>expr</code>, with <code>numEntries</code> maximum number of distinct values before false positve rate increases. See <a href="../development/extensions-core/bloom-filter.html">bloom filter extension</a> documentation for additional details.</td>
 </tr>
-<tr>
-<td><code>VAR_POP(expr)</code></td>
-<td>Computes variance population of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>VAR_SAMP(expr)</code></td>
-<td>Computes variance sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>VARIANCE(expr)</code></td>
-<td>Computes variance sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV_POP(expr)</code></td>
-<td>Computes standard deviation population of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV_SAMP(expr)</code></td>
-<td>Computes standard deviation sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV(expr)</code></td>
-<td>Computes standard deviation sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
 </tbody></table>
 
 <p>For advice on choosing approximate aggregation functions, check out our <a href="aggregations.html#approx">approximate aggregations documentation</a>.</p>
@@ -571,10 +550,6 @@ context parameter &quot;sqlTimeZone&quot; to the name of another time zone, like
 the connection time zone, some functions also accept time zones as parameters. These parameters always take precedence
 over the connection time zone.</p>
 
-<p>Literal timestamps in the connection time zone can be written using <code>TIMESTAMP &#39;2000-01-01 00:00:00&#39;</code> syntax. The
-simplest way to write literal timestamps in other time zones is to use TIME_PARSE, like
-<code>TIME_PARSE(&#39;2000-02-01 00:00:00&#39;, NULL, &#39;America/Los_Angeles&#39;)</code>.</p>
-
 <table><thead>
 <tr>
 <th>Function</th>
@@ -638,10 +613,6 @@ simplest way to write literal timestamps in other time zones is to use TIME_PARS
 <td>Equivalent to <code>timestamp + count * INTERVAL &#39;1&#39; UNIT</code>.</td>
 </tr>
 <tr>
-<td><code>TIMESTAMPDIFF(&lt;unit&gt;, &lt;timestamp1&gt;, &lt;timestamp2&gt;)</code></td>
-<td>Returns the (signed) number of <code>unit</code> between <code>timestamp1</code> and <code>timestamp2</code>. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.</td>
-</tr>
-<tr>
 <td><code>timestamp_expr { + &amp;#124; - } &lt;interval_expr&gt;</code></td>
 <td>Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like <code>INTERVAL &#39;2&#39; HOUR</code>, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.</td>
 </tr>
@@ -807,12 +778,11 @@ simplest way to write literal timestamps in other time zones is to use TIME_PARS
 
 <p>Druid natively supports five basic column types: &quot;long&quot; (64 bit signed int), &quot;float&quot; (32 bit float), &quot;double&quot; (64 bit
 float) &quot;string&quot; (UTF-8 encoded strings), and &quot;complex&quot; (catch-all for more exotic data types like hyperUnique and
-approxHistogram columns).</p>
+approxHistogram columns). Timestamps (including the <code>__time</code> column) are stored as longs, with the value being the
+number of milliseconds since 1 January 1970 UTC.</p>
 
-<p>Timestamps (including the <code>__time</code> column) are treated by Druid as longs, with the value being the number of
-milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. Therefore, timestamps in Druid do not carry any
-timezone information, but only carry information about the exact moment in time they represent. See the
-<a href="#time-functions">Time functions</a> section for more information about timestamp handling.</p>
+<p>At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators. The reverse will not
+happen: 64-bit floats will not be narrowed to 32-bit.</p>
 
 <p>Druid generally treats NULLs and empty strings interchangeably, rather than according to the SQL standard. As such,
 Druid SQL only has partial support for NULLs. For example, the expressions <code>col IS NULL</code> and <code>col = &#39;&#39;</code> are equivalent,
@@ -824,7 +794,7 @@ datasource, then it will be treated as zero for rows from those segments.</p>
 
 <p>For mathematical operations, Druid SQL will use integer math if all operands involved in an expression are integers.
 Otherwise, Druid will switch to floating point math. You can force this to happen by casting one of your operands
-to FLOAT. At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators.</p>
+to FLOAT.</p>
 
 <p>The following table describes how SQL types map onto Druid types during query runtime. Casts between two SQL types
 that have the same Druid runtime type will have no effect, other than exceptions noted in the table. Casts between two
@@ -1423,7 +1393,7 @@ datasource &quot;foo&quot;, use the query:</p>
 </code></pre></div>
 <h3 id="servers-table">SERVERS table</h3>
 
-<p>Servers table lists all discovered servers in the cluster.</p>
+<p>The servers table lists all data servers (any server that hosts a segment), including both Historicals and Peons.</p>
 
 <table><thead>
 <tr>
@@ -1455,22 +1425,22 @@ datasource &quot;foo&quot;, use the query:</p>
 <tr>
 <td>server_type</td>
 <td>STRING</td>
-<td>Type of Druid service. Possible values include: COORDINATOR, OVERLORD,  BROKER, ROUTER, HISTORICAL, MIDDLE_MANAGER or PEON.</td>
+<td>Type of Druid service. Possible values include: Historical, realtime, and indexer_executor (Peon).</td>
 </tr>
 <tr>
 <td>tier</td>
 <td>STRING</td>
-<td>Distribution tier see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.tier</a>. Only valid for HISTORICAL type, for other types it&#39;s null</td>
+<td>Distribution tier; see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.tier</a></td>
 </tr>
 <tr>
 <td>current_size</td>
 <td>LONG</td>
-<td>Current size of segments in bytes on this server. Only valid for HISTORICAL type, for other types it&#39;s 0</td>
+<td>Current size of segments in bytes on this server</td>
 </tr>
 <tr>
 <td>max_size</td>
 <td>LONG</td>
-<td>Max size in bytes this server recommends to assign to segments see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.maxSize</a>. Only valid for HISTORICAL type, for other types it&#39;s 0</td>
+<td>Max size in bytes this server recommends to assign to segments; see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.maxSize</a></td>
 </tr>
 </tbody></table>
 
@@ -1500,19 +1470,19 @@ datasource &quot;foo&quot;, use the query:</p>
 </tr>
 </tbody></table>
 
-<p>JOIN between &quot;servers&quot; and &quot;segments&quot; can be used to query the number of segments for a specific datasource, 
+<p>JOIN between &quot;servers&quot; and &quot;segments&quot; can be used to query the number of segments for a specific datasource,
 grouped by server, example query:</p>
-<div class="highlight"><pre><code class="language-sql" data-lang="sql"><span></span><span class="k">SELECT</span> <span class="k">count</span><span class="p">(</span><span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span><span class="p">)</span> <span class="k">as</span> <span class="n">num_segments</span> <span class="k">from</span> <span class="n">sys</span><span class="p">.</span><span class="n">segments</span> <span class="k">as</span> <span class="n" [...]
-<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">server_segments</span> <span class="k">as</span> <span class="n">server_segments</span> 
-<span class="k">ON</span> <span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span>  <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">segment_id</span> 
-<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">servers</span> <span class="k">as</span> <span class="n">servers</span> 
+<div class="highlight"><pre><code class="language-sql" data-lang="sql"><span></span><span class="k">SELECT</span> <span class="k">count</span><span class="p">(</span><span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span><span class="p">)</span> <span class="k">as</span> <span class="n">num_segments</span> <span class="k">from</span> <span class="n">sys</span><span class="p">.</span><span class="n">segments</span> <span class="k">as</span> <span class="n" [...]
+<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">server_segments</span> <span class="k">as</span> <span class="n">server_segments</span>
+<span class="k">ON</span> <span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span>  <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">segment_id</span>
+<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">servers</span> <span class="k">as</span> <span class="n">servers</span>
 <span class="k">ON</span> <span class="n">servers</span><span class="p">.</span><span class="n">server</span> <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">server</span>
-<span class="k">WHERE</span> <span class="n">segments</span><span class="p">.</span><span class="n">datasource</span> <span class="o">=</span> <span class="s1">&#39;wikipedia&#39;</span> 
+<span class="k">WHERE</span> <span class="n">segments</span><span class="p">.</span><span class="n">datasource</span> <span class="o">=</span> <span class="s1">&#39;wikipedia&#39;</span>
 <span class="k">GROUP</span> <span class="k">BY</span> <span class="n">servers</span><span class="p">.</span><span class="n">server</span><span class="p">;</span>
 </code></pre></div>
 <h3 id="tasks-table">TASKS table</h3>
 
-<p>The tasks table provides information about active and recently-completed indexing tasks. For more information 
+<p>The tasks table provides information about active and recently-completed indexing tasks. For more information
 check out <a href="#../ingestion/tasks.html">ingestion tasks</a></p>
 
 <table><thead>
@@ -1608,7 +1578,7 @@ check out <a href="#../ingestion/tasks.html">ingestion tasks</a></p>
 <tr>
 <td><code>druid.sql.enable</code></td>
 <td>Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.</td>
-<td>true</td>
+<td>false</td>
 </tr>
 <tr>
 <td><code>druid.sql.avatica.enable</code></td>
diff --git a/docs/0.15.0-incubating/querying/timeseriesquery.html b/docs/0.15.0-incubating/querying/timeseriesquery.html
index 17ef540..f9bb4e0 100644
--- a/docs/0.15.0-incubating/querying/timeseriesquery.html
+++ b/docs/0.15.0-incubating/querying/timeseriesquery.html
@@ -235,11 +235,6 @@
 <td>no</td>
 </tr>
 <tr>
-<td>limit</td>
-<td>An integer that limits the number of results. The default is unlimited.</td>
-<td>no</td>
-</tr>
-<tr>
 <td>context</td>
 <td>Can be used to modify query behavior, including <a href="#grand-totals">grand totals</a> and <a href="#zero-filling">zero-filling</a>. See also <a href="../querying/query-context.html">Context</a> for parameters that apply to all query types.</td>
 <td>no</td>
diff --git a/docs/0.15.0-incubating/toc.html b/docs/0.15.0-incubating/toc.html
index 5d98648..05785d5 100644
--- a/docs/0.15.0-incubating/toc.html
+++ b/docs/0.15.0-incubating/toc.html
@@ -59,7 +59,7 @@
 
 <ul>
 <li><a href="/docs/0.15.0-incubating/operations/single-server.html">Single-server deployment</a></li>
-<li><a href="/docs/0.15.0-incubating/tutorials/cluster.html#fresh-deployment">Clustered deployment</a></li>
+<li><a href="/docs/0.15.0-incubating/operations/example-cluster.html">Clustered deployment</a></li>
 </ul></li>
 </ul></li>
 </ul>
@@ -209,6 +209,7 @@
 <ul>
 <li><a href="/docs/0.15.0-incubating/operations/basic-cluster-tuning.html">Basic Cluster Tuning</a><br></li>
 <li><a href="/docs/0.15.0-incubating/operations/recommendations.html">General Recommendations</a></li>
+<li><a href="/docs/0.15.0-incubating/operations/performance-faq.html">Performance FAQ</a></li>
 <li><a href="/docs/0.15.0-incubating/configuration/index.html#jvm-configuration-best-practices">JVM Best Practices</a><br></li>
 </ul></li>
 <li>Tools
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-01.png
index 08426fd..b0b5da8 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-02.png
index 76a1a7f..806ce4c 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-03.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-03.png
index ce3b0f0..c6bb701 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-03.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-03.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-04.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-04.png
index b30ef7f..83a018b 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-04.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-04.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-05.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-05.png
index 9ef3f80..71291c0 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-05.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-05.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-06.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-06.png
index b1f08c8..5fe9c37 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-06.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-06.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-07.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-07.png
index d7a8e68..16b48af 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-07.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-07.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-08.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-08.png
index 4e36aab..edaf039 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-08.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-08.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-09.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-09.png
index 144c02c..6191fc2 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-09.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-09.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-10.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-10.png
index 75487a2..4037792 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-10.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-10.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-11.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-11.png
index 5cadd52..76464f9 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-11.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-data-loader-11.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-01.png
index e8a1346..1651401 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-02.png
index fc0c924..834a9a5 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-batch-submit-task-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-01.png
index aeb9bf3..99b9e45 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-02.png
index 836d8a7..11c316e 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-03.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-03.png
index d51f8f8..88fd9d6 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-03.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-03.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-04.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-04.png
index 46c5b1d..8df3699 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-04.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-04.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-05.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-05.png
index e692694..07356df 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-05.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-05.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-06.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-06.png
index 55c999f..ec1525c 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-06.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-06.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-07.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-07.png
index 661e897..aa30458 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-07.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-07.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-08.png b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-08.png
index 6e3f1aa..b9d89b2 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-08.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-compaction-08.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-01.png
index de68d38..cddcb16 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-02.png
index ffe4585..9b84f0c 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-03.png b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-03.png
index 221774f..e6fb1f3 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-03.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-deletion-03.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-01.png
index b085625..580d9af 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-02.png
index f23e084..735ceaa 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-kafka-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-01.png
index b366b2b..7e483fc 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-02.png
index f3ba025..c25c651 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-03.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-03.png
index 9f7ae27..5b1e5bc 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-03.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-03.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-04.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-04.png
index 3f800a6..df96420 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-04.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-04.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-05.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-05.png
index 2fc59ce..c241627 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-05.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-05.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-06.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-06.png
index 60b4e1a..1f3e5fb 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-06.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-06.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-query-07.png b/docs/0.15.0-incubating/tutorials/img/tutorial-query-07.png
index d2e5a85..e23fc2a 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-query-07.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-query-07.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-quickstart-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-quickstart-01.png
index 9a47bc7..94b2024 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-quickstart-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-quickstart-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-00.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-00.png
index a3f84a9..99c4ca8 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-00.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-00.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-01.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-01.png
index 35a97c2..64f666c 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-01.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-01.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-02.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-02.png
index f38fad0..2458d9d 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-02.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-02.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-03.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-03.png
index 256836a..5cf2e8a 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-03.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-03.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-04.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-04.png
index d39495f..73f9f22 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-04.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-04.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-05.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-05.png
index 638a752..622718f 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-05.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-05.png differ
diff --git a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-06.png b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-06.png
index f47cbff..540551f 100644
Binary files a/docs/0.15.0-incubating/tutorials/img/tutorial-retention-06.png and b/docs/0.15.0-incubating/tutorials/img/tutorial-retention-06.png differ
diff --git a/docs/latest/configuration/index.html b/docs/latest/configuration/index.html
index 48e162c..b832aec 100644
--- a/docs/latest/configuration/index.html
+++ b/docs/latest/configuration/index.html
@@ -1440,6 +1440,16 @@ The below table shows some important configurations for S3. See <a href="../deve
 </tr>
 </thead><tbody>
 <tr>
+<td><code>druid.s3.accessKey</code></td>
+<td>The access key to use to access S3.</td>
+<td>none</td>
+</tr>
+<tr>
+<td><code>druid.s3.secretKey</code></td>
+<td>The secret key to use to access S3.</td>
+<td>none</td>
+</tr>
+<tr>
 <td><code>druid.storage.bucket</code></td>
 <td>S3 bucket name.</td>
 <td>none</td>
@@ -1465,21 +1475,6 @@ The below table shows some important configurations for S3. See <a href="../deve
 <td>none</td>
 </tr>
 <tr>
-<td><code>druid.storage.sse.type</code></td>
-<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="../development/extensions-core/s3.html#server-side-encryption">Server-side encryption section</a> for more details.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.kms.keyId</code></td>
-<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
-<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
-<td>None</td>
-</tr>
-<tr>
 <td><code>druid.storage.useS3aSchema</code></td>
 <td>If true, use the &quot;s3a&quot; filesystem when using Hadoop-based ingestion. If false, the &quot;s3n&quot; filesystem will be used. Only affects Hadoop-based ingestion.</td>
 <td>false</td>
@@ -2213,6 +2208,11 @@ Support for 64-bit floating point columns was released in Druid 0.11.0, so if yo
 <td>yes</td>
 </tr>
 <tr>
+<td><code>keepSegmentGranularity</code></td>
+<td>Set <a href="../ingestion/compaction.html">keepSegmentGranularity</a> to true for compactionTask.</td>
+<td>no (default = true)</td>
+</tr>
+<tr>
 <td><code>taskPriority</code></td>
 <td><a href="../ingestion/tasks.html#task-priorities">Priority</a> of compaction task.</td>
 <td>no (default = 25)</td>
@@ -2524,47 +2524,6 @@ If you see this problem, it&#39;s recommended to set <code>skipOffsetFromLatest<
 </tr>
 </tbody></table>
 
-<h5 id="supervisors">Supervisors</h5>
-
-<table><thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Default</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>druid.supervisor.healthinessThreshold</code></td>
-<td>The number of successful runs before an unhealthy supervisor is again considered healthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.unhealthinessThreshold</code></td>
-<td>The number of failed runs before the supervisor is considered unhealthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.taskHealthinessThreshold</code></td>
-<td>The number of consecutive task successes before an unhealthy supervisor is again considered healthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.taskUnhealthinessThreshold</code></td>
-<td>The number of consecutive task failures before the supervisor is considered unhealthy.</td>
-<td>3</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.storeStackTrace</code></td>
-<td>Whether full stack traces of supervisor exceptions should be stored and returned by the supervisor <code>/status</code> endpoint.</td>
-<td>false</td>
-</tr>
-<tr>
-<td><code>druid.supervisor.maxStoredExceptionEvents</code></td>
-<td>The maximum number of exception events that can be returned through the supervisor <code>/status</code> endpoint.</td>
-<td><code>max(healthinessThreshold, unhealthinessThreshold)</code></td>
-</tr>
-</tbody></table>
-
 <h4 id="overlord-dynamic-configuration">Overlord Dynamic Configuration</h4>
 
 <p>The Overlord can dynamically change worker behavior.</p>
@@ -3699,7 +3658,7 @@ line.</p>
 <tr>
 <td><code>druid.sql.enable</code></td>
 <td>Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.</td>
-<td>true</td>
+<td>false</td>
 </tr>
 <tr>
 <td><code>druid.sql.avatica.enable</code></td>
diff --git a/docs/latest/development/extensions-contrib/influxdb-emitter.html b/docs/latest/development/extensions-contrib/influxdb-emitter.html
deleted file mode 100644
index 7f2d1bf..0000000
--- a/docs/latest/development/extensions-contrib/influxdb-emitter.html
+++ /dev/null
@@ -1,330 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-  <head>
-    <meta charset="UTF-8" />
-<meta name="viewport" content="width=device-width, initial-scale=1.0">
-<meta name="description" content="Apache Druid">
-<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
-<meta name="author" content="Apache Software Foundation">
-
-<title>Druid | InfluxDB Emitter</title>
-
-<link rel="alternate" type="application/atom+xml" href="/feed">
-<link rel="shortcut icon" href="/img/favicon.png">
-
-<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
-
-<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
-
-<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1">
-<link rel="stylesheet" href="/css/base.css?v=1.1">
-<link rel="stylesheet" href="/css/header.css?v=1.1">
-<link rel="stylesheet" href="/css/footer.css?v=1.1">
-<link rel="stylesheet" href="/css/syntax.css?v=1.1">
-<link rel="stylesheet" href="/css/docs.css?v=1.1">
-
-<script>
-  (function() {
-    var cx = '000162378814775985090:molvbm0vggm';
-    var gcse = document.createElement('script');
-    gcse.type = 'text/javascript';
-    gcse.async = true;
-    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
-        '//cse.google.com/cse.js?cx=' + cx;
-    var s = document.getElementsByTagName('script')[0];
-    s.parentNode.insertBefore(gcse, s);
-  })();
-</script>
-
-
-  </head>
-
-  <body>
-    <!-- Start page_header include -->
-<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
-
-<div class="top-navigator">
-  <div class="container">
-    <div class="left-cont">
-      <a class="logo" href="/"><span class="druid-logo"></span></a>
-    </div>
-    <div class="right-cont">
-      <ul class="links">
-        <li class=""><a href="/technology">Technology</a></li>
-        <li class=""><a href="/use-cases">Use Cases</a></li>
-        <li class=""><a href="/druid-powered">Powered By</a></li>
-        <li class=""><a href="/docs/latest/design/">Docs</a></li>
-        <li class=""><a href="/community/">Community</a></li>
-        <li class="header-dropdown">
-          <a>Apache</a>
-          <div class="header-dropdown-menu">
-            <a href="https://www.apache.org/" target="_blank">Foundation</a>
-            <a href="https://www.apache.org/events/current-event" target="_blank">Events</a>
-            <a href="https://www.apache.org/licenses/" target="_blank">License</a>
-            <a href="https://www.apache.org/foundation/thanks.html" target="_blank">Thanks</a>
-            <a href="https://www.apache.org/security/" target="_blank">Security</a>
-            <a href="https://www.apache.org/foundation/sponsorship.html" target="_blank">Sponsorship</a>
-          </div>
-        </li>
-        <li class=" button-link"><a href="/downloads.html">Download</a></li>
-      </ul>
-    </div>
-  </div>
-  <div class="action-button menu-icon">
-    <span class="fa fa-bars"></span> MENU
-  </div>
-  <div class="action-button menu-icon-close">
-    <span class="fa fa-times"></span> MENU
-  </div>
-</div>
-
-<script type="text/javascript">
-  var $menu = $('.right-cont');
-  var $menuIcon = $('.menu-icon');
-  var $menuIconClose = $('.menu-icon-close');
-
-  function showMenu() {
-    $menu.fadeIn(100);
-    $menuIcon.fadeOut(100);
-    $menuIconClose.fadeIn(100);
-  }
-
-  $menuIcon.click(showMenu);
-
-  function hideMenu() {
-    $menu.fadeOut(100);
-    $menuIconClose.fadeOut(100);
-    $menuIcon.fadeIn(100);
-  }
-
-  $menuIconClose.click(hideMenu);
-
-  $(window).resize(function() {
-    if ($(window).width() >= 840) {
-      $menu.fadeIn(100);
-      $menuIcon.fadeOut(100);
-      $menuIconClose.fadeOut(100);
-    }
-    else {
-      $menu.fadeOut(100);
-      $menuIcon.fadeIn(100);
-      $menuIconClose.fadeOut(100);
-    }
-  });
-</script>
-
-<!-- Stop page_header include -->
-
-
-    <div class="container doc-container">
-      
-      
-
-      
-
-      <div class="row">
-        <div class="col-md-9 doc-content">
-          <p>
-            <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
-          </p>
-          <!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-<h1 id="influxdb-emitter">InfluxDB Emitter</h1>
-
-<p>To use this Apache Druid (incubating) extension, make sure to <a href="../../operations/including-extensions.html">include</a> <code>druid-influxdb-emitter</code> extension.</p>
-
-<h2 id="introduction">Introduction</h2>
-
-<p>This extension emits Druid metrics to <a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a> over HTTP. Currently this emitter only emits service metric events to InfluxDB (see <a href="../../operations/metrics.html">Druid metrics</a> for a list of metrics).
-When a metric event is fired, it is added to a queue of events. After a configurable amount of time, the queued events are transformed into InfluxDB&#39;s line protocol
-and POSTed to the InfluxDB HTTP API, at which point the entire queue is flushed. The queue is also flushed when the emitter is shut down.</p>
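The queue-and-flush behavior described above can be modeled with a short sketch. This is an illustrative Python model only, not the emitter's actual (Java) implementation; the class and parameter names (`BufferedEmitter`, `post`, etc.) are hypothetical stand-ins for `maxQueueSize`, `flushPeriod`, and `flushDelay`.

```python
import queue
import threading

class BufferedEmitter:
    """Toy model of the queue-and-flush behavior described above."""

    def __init__(self, post, flush_period=60.0, flush_delay=60.0, max_size=100000):
        self.events = queue.Queue(maxsize=max_size)  # bounded like maxQueueSize
        self.post = post                # callable that POSTs a line-protocol payload
        self.flush_period = flush_period
        # flushDelay: how long to wait before the first scheduled flush
        self.timer = threading.Timer(flush_delay, self._flush_loop)
        self.timer.daemon = True
        self.timer.start()

    def emit(self, event):
        self.events.put(event)          # metric events are simply enqueued

    def _flush_loop(self):
        self.flush()
        # reschedule at flushPeriod intervals
        self.timer = threading.Timer(self.flush_period, self._flush_loop)
        self.timer.daemon = True
        self.timer.start()

    def flush(self):
        # drain the entire queue and POST it as one payload
        lines = []
        while not self.events.empty():
            lines.append(self.events.get())
        if lines:
            self.post("\n".join(lines))

    def close(self):
        # the queue is also flushed on shutdown
        self.timer.cancel()
        self.flush()
```

A caller would construct it with a `post` function that sends the payload to the InfluxDB HTTP write endpoint, call `emit()` for each metric event, and `close()` on shutdown.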
-
-<p>Note that authentication and authorization must be <a href="https://docs.influxdata.com/influxdb/v1.7/administration/authentication_and_authorization/">enabled</a> on the InfluxDB server.</p>
-
-<h2 id="configuration">Configuration</h2>
-
-<p>All the configuration parameters for the influxdb emitter are under <code>druid.emitter.influxdb</code>.</p>
-
-<table><thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Required?</th>
-<th>Default</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>druid.emitter.influxdb.hostname</code></td>
-<td>The hostname of the InfluxDB server.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.port</code></td>
-<td>The port of the InfluxDB server.</td>
-<td>No</td>
-<td>8086</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.databaseName</code></td>
-<td>The name of the database in InfluxDB.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.maxQueueSize</code></td>
-<td>The size of the queue that holds events.</td>
-<td>No</td>
-<td>Integer.MAX_VALUE (= 2^31 - 1)</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.flushPeriod</code></td>
-<td>How often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.</td>
-<td>No</td>
-<td>60000</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.flushDelay</code></td>
-<td>How long (in milliseconds) the scheduled method will wait until it first runs.</td>
-<td>No</td>
-<td>60000</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.influxdbUserName</code></td>
-<td>The username for authenticating with the InfluxDB database.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.influxdbPassword</code></td>
-<td>The password of the authorized database user.</td>
-<td>Yes</td>
-<td>N/A</td>
-</tr>
-<tr>
-<td><code>druid.emitter.influxdb.dimensionWhitelist</code></td>
-<td>A whitelist of metric dimensions to include as tags</td>
-<td>No</td>
-<td><code>[&quot;dataSource&quot;,&quot;type&quot;,&quot;numMetrics&quot;,&quot;numDimensions&quot;,&quot;threshold&quot;,&quot;dimension&quot;,&quot;taskType&quot;,&quot;taskStatus&quot;,&quot;tier&quot;]</code></td>
-</tr>
-</tbody></table>
-
-<h2 id="influxdb-line-protocol">InfluxDB Line Protocol</h2>
-
-<p>An example of how this emitter parses a Druid metric event into InfluxDB&#39;s <a href="https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/">line protocol</a> is given below.</p>
-
-<p>The syntax of the line protocol is:</p>
-
-<p><code>&lt;measurement&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;]] &lt;field_key&gt;=&lt;field_value&gt;[,&lt;field_key&gt;=&lt;field_value&gt;] [&lt;timestamp&gt;]</code></p>
-
-<p>where the timestamp is in nanoseconds since epoch.</p>
-
-<p>A typical service metric event as recorded by Druid&#39;s logging emitter is: <code>Event [{&quot;feed&quot;:&quot;metrics&quot;,&quot;timestamp&quot;:&quot;2017-10-31T09:09:06.857Z&quot;,&quot;service&quot;:&quot;druid/historical&quot;,&quot;host&quot;:&quot;historical001:8083&quot;,&quot;version&quot;:&quot;0.11.0-SNAPSHOT&quot;,&quot;metric&quot;:&quot;query/cache/total/hits&quot;,&quot;value&quot;:34787256}]</code>.</p>
-
-<p>This event is parsed into line protocol according to these rules:</p>
-
-<ul>
-<li>The measurement becomes <code>druid_query</code>, since <code>query</code> is the first part of the metric name.</li>
-<li>The tags are <code>service=druid/historical</code>, <code>hostname=historical001</code>, and <code>metric=druid_cache_total</code>. (The <code>metric</code> tag is built from the middle parts of the Druid metric name, joined with <code>_</code> and prefixed with <code>druid_</code>. For example, if an event has <code>metric=query/time</code>, there are no middle parts and hence no <code>metric</code> tag.)</li>
-<li>The field key is <code>druid_hits</code>, since <code>hits</code> is the last part of the metric name.</li>
-</ul>
-
-<p>This gives the following String which can be POSTed to InfluxDB: <code>&quot;druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000&quot;</code></p>
-
-<p>The InfluxDB emitter has a whitelist of dimensions
-that are added as tags to the line protocol string when a metric carries a dimension from the whitelist.
-The value of the dimension is sanitized such that every occurrence of a dot or whitespace is replaced with a <code>_</code>.</p>
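The parsing rules above can be sketched in a few lines. This is an illustrative Python version only (the real emitter is written in Java), and the shortened `whitelist` default is a hypothetical stand-in for the full `dimensionWhitelist` default.

```python
from datetime import datetime, timezone

def _ts_nanos(iso):
    # "2017-10-31T09:09:06.857Z" -> nanoseconds since epoch
    dt = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
    return int(dt.timestamp()) * 10**9 + dt.microsecond * 1000

def to_line_protocol(event, whitelist=("dataSource", "type", "tier")):
    parts = event["metric"].split("/")          # e.g. "query/cache/total/hits"
    measurement = "druid_" + parts[0]           # first part -> measurement
    tags = [("service", event["service"]),
            ("hostname", event["host"].split(":")[0])]   # port is dropped
    if len(parts) > 2:                          # middle parts -> "metric" tag
        tags.append(("metric", "druid_" + "_".join(parts[1:-1])))
    for dim in whitelist:                       # whitelisted dimensions become tags;
        if dim in event:                        # dots/whitespace sanitized to "_"
            value = str(event[dim]).replace(".", "_").replace(" ", "_")
            tags.append((dim, value))
    field = "druid_" + parts[-1]                # last part -> field key
    tag_str = ",".join(f"{k}={v}" for k, v in tags)
    return f"{measurement},{tag_str} {field}={event['value']} {_ts_nanos(event['timestamp'])}"
```

Feeding in the example event above yields exactly the line-protocol string shown in the text.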
-
-        </div>
-        <div class="col-md-3">
-          <div class="searchbox">
-            <gcse:searchbox-only></gcse:searchbox-only>
-          </div>
-          <div id="toc" class="nav toc hidden-print">
-          </div>
-        </div>
-      </div>
-    </div>
-
-    <!-- Start page_footer include -->
-<footer class="druid-footer">
-<div class="container">
-  <div class="text-center">
-    <p>
-    <a href="/technology">Technology</a>&ensp;·&ensp;
-    <a href="/use-cases">Use Cases</a>&ensp;·&ensp;
-    <a href="/druid-powered">Powered by Druid</a>&ensp;·&ensp;
-    <a href="/docs/latest">Docs</a>&ensp;·&ensp;
-    <a href="/community/">Community</a>&ensp;·&ensp;
-    <a href="/downloads.html">Download</a>&ensp;·&ensp;
-    <a href="/faq">FAQ</a>
-    </p>
-  </div>
-  <div class="text-center">
-    <a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a>&ensp;·&ensp;
-    <a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a>&ensp;·&ensp;
-    <a title="Download via Apache" href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.0-incubating/apache-druid-0.15.0-incubating-bin.tar.gz" target="_blank"><span class="fas fa-feather"></span></a>&ensp;·&ensp;
-    <a title="GitHub" href="https://github.com/apache/incubator-druid" target="_blank"><span class="fab fa-github"></span></a>
-  </div>
-  <div class="text-center license">
-    Copyright © 2019 <a href="https://www.apache.org/" target="_blank">Apache Software Foundation</a>.<br>
-    Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>.<br>
-    Apache Druid, Druid, and the Druid logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.
-  </div>
-</div>
-</footer>
-
-<script async src="https://www.googletagmanager.com/gtag/js?id=UA-131010415-1"></script>
-<script>
-  window.dataLayer = window.dataLayer || [];
-  function gtag(){dataLayer.push(arguments);}
-  gtag('js', new Date());
-  gtag('config', 'UA-131010415-1');
-</script>
-<script>
-  function trackDownload(type, url) {
-    ga('send', 'event', 'download', type, url);
-  }
-</script>
-<script src="//code.jquery.com/jquery.min.js"></script>
-<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
-<script src="/assets/js/druid.js"></script>
-<!-- stop page_footer include -->
-
-
-    <script>
-    $(function() {
-      $(".toc").load("/docs/latest/toc.html");
-
-      // There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works
-      var tries = 0;
-      var timer = setInterval(function() {
-        tries++;
-        if (tries > 300) clearInterval(timer);
-        var searchInput = $('input.gsc-input');
-        if (searchInput.length) {
-          searchInput.attr('placeholder', 'Search');
-          clearInterval(timer);
-        }
-      }, 100);
-    });
-    </script>
-  </body>
-</html>
diff --git a/docs/latest/development/extensions-contrib/orc.html b/docs/latest/development/extensions-contrib/orc.html
new file mode 100644
index 0000000..19bab1e
--- /dev/null
+++ b/docs/latest/development/extensions-contrib/orc.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions-core/orc.html">
+<meta http-equiv="refresh" content="0; url=development/extensions-core/orc.html">
+<h1>Redirecting...</h1>
+<a href="development/extensions-core/orc.html">Click here if you are not redirected.</a>
+<script>location="development/extensions-core/orc.html"</script>
diff --git a/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.html b/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.html
deleted file mode 100644
index 7c06207..0000000
--- a/docs/latest/development/extensions-contrib/tdigestsketch-quantiles.html
+++ /dev/null
@@ -1,417 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-  <head>
-    <meta charset="UTF-8" />
-<meta name="viewport" content="width=device-width, initial-scale=1.0">
-<meta name="description" content="Apache Druid">
-<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
-<meta name="author" content="Apache Software Foundation">
-
-<title>Druid | T-Digest Quantiles Sketch module</title>
-
-<link rel="alternate" type="application/atom+xml" href="/feed">
-<link rel="shortcut icon" href="/img/favicon.png">
-
-<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
-
-<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
-
-<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.1">
-<link rel="stylesheet" href="/css/base.css?v=1.1">
-<link rel="stylesheet" href="/css/header.css?v=1.1">
-<link rel="stylesheet" href="/css/footer.css?v=1.1">
-<link rel="stylesheet" href="/css/syntax.css?v=1.1">
-<link rel="stylesheet" href="/css/docs.css?v=1.1">
-
-<script>
-  (function() {
-    var cx = '000162378814775985090:molvbm0vggm';
-    var gcse = document.createElement('script');
-    gcse.type = 'text/javascript';
-    gcse.async = true;
-    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
-        '//cse.google.com/cse.js?cx=' + cx;
-    var s = document.getElementsByTagName('script')[0];
-    s.parentNode.insertBefore(gcse, s);
-  })();
-</script>
-
-
-  </head>
-
-  <body>
-
-
-    <div class="container doc-container">
-      
-      
-
-      
-
-      <div class="row">
-        <div class="col-md-9 doc-content">
-          <p>
-            <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
-          </p>
-          <!--
-  ~ Licensed to the Apache Software Foundation (ASF) under one
-  ~ or more contributor license agreements.  See the NOTICE file
-  ~ distributed with this work for additional information
-  ~ regarding copyright ownership.  The ASF licenses this file
-  ~ to you under the Apache License, Version 2.0 (the
-  ~ "License"); you may not use this file except in compliance
-  ~ with the License.  You may obtain a copy of the License at
-  ~
-  ~   http://www.apache.org/licenses/LICENSE-2.0
-  ~
-  ~ Unless required by applicable law or agreed to in writing,
-  ~ software distributed under the License is distributed on an
-  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  ~ KIND, either express or implied.  See the License for the
-  ~ specific language governing permissions and limitations
-  ~ under the License.
-  -->
-
-<h1 id="t-digest-quantiles-sketch-module">T-Digest Quantiles Sketch module</h1>
-
-<p>This module provides Apache Druid (incubating) approximate sketch aggregators based on T-Digest.
-<a href="https://github.com/tdunning/t-digest">T-Digest</a> is a popular data structure for accurate online accumulation of
-rank-based statistics such as quantiles and trimmed means.
-The data structure is also designed for parallel programming use cases, such as distributed aggregations or MapReduce jobs, because combining two intermediate t-digests is easy and efficient.</p>
-
-<p>There are three flavors of T-Digest sketch aggregator available in Apache Druid (incubating):</p>
-
-<ol>
-<li>buildTDigestSketch - used for building T-Digest sketches from raw numeric values. It generally makes sense to
-use this aggregator when ingesting raw data into Druid. It can also be used at query time to
-generate sketches, but in that case the sketches are built on every query execution instead of once during ingestion.</li>
-<li>mergeTDigestSketch - used for merging pre-built T-Digest sketches. This aggregator is generally used at
-query time to combine sketches generated by the buildTDigestSketch aggregator.</li>
-<li>quantilesFromTDigestSketch - used for generating quantiles from T-Digest sketches. This aggregator is generally used
-at query time to generate quantiles from sketches built using the two sketch-generating aggregators above.</li>
-</ol>
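The three-step workflow above (build per-segment sketches, merge them, then extract quantiles) can be illustrated with a toy model. A real T-Digest compresses values into a bounded set of centroids and is only approximate; this exact, keep-everything stand-in (the `ToySketch` name is hypothetical) only demonstrates the build/merge/quantile contract.

```python
class ToySketch:
    """Exact stand-in for the build/merge/quantiles workflow (not a real T-Digest)."""

    def __init__(self, values=()):
        # "buildTDigestSketch": ingest raw numeric values
        self.values = sorted(values)

    def merge(self, other):
        # "mergeTDigestSketch": combining two partial sketches is cheap
        return ToySketch(self.values + other.values)

    def quantile(self, fraction):
        # "quantilesFromTDigestSketch": value at a given rank fraction
        idx = min(int(fraction * len(self.values)), len(self.values) - 1)
        return self.values[idx]

# one sketch per segment at ingestion time, merged at query time
per_segment = [ToySketch([3, 1, 2]), ToySketch([9, 7, 8])]
merged = per_segment[0].merge(per_segment[1])
print([merged.quantile(f) for f in (0.0, 0.5, 1.0)])  # [1, 7, 9]
```

The point of the real data structure is that `merge` stays cheap and the memory footprint stays bounded (controlled by `compression`) even for huge value streams, at the cost of approximate answers.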
-
-<p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-tdigestsketch&quot;]
-</code></pre></div>
-<h3 id="aggregator">Aggregator</h3>
-
-<p>The result of the aggregation is a T-Digest sketch that is built by ingesting numeric values from the raw data.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;buildTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span> <span class="p">:</span> <span class="err">&lt;output_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fieldName&quot;</span> <span class="p">:</span> <span class="err">&lt;metric_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;compression&quot;</span><span class="p">:</span> <span class="err">&lt;parameter</span> <span class="err">that</span> <span class="err">controls</span> <span class="err">size</span> <span class="err">and</span> <span class="err">accuracy&gt;</span>
- <span class="p">}</span>
-</code></pre></div>
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json">{
-    &quot;type&quot;: &quot;buildTDigestSketch&quot;,
-    &quot;name&quot;: &quot;sketch&quot;,
-    &quot;fieldName&quot;: &quot;session_duration&quot;,
-    &quot;compression&quot;: 200
-}
-</code></pre></div>
-
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;buildTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fieldName</td>
-<td>A String for the name of the input field containing raw numeric values.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>compression</td>
-<td>Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.</td>
-<td>no, defaults to 100</td>
-</tr>
-</tbody></table>
-
-<p>The result of the aggregation is a T-Digest sketch that is built by merging pre-built T-Digest sketches.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;mergeTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span> <span class="p">:</span> <span class="err">&lt;output_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fieldName&quot;</span> <span class="p">:</span> <span class="err">&lt;metric_name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;compression&quot;</span><span class="p">:</span> <span class="err">&lt;parameter</span> <span class="err">that</span> <span class="err">controls</span> <span class="err">size</span> <span class="err">and</span> <span class="err">accuracy&gt;</span>
- <span class="p">}</span>
-</code></pre></div>
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;mergeTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fieldName</td>
-<td>A String for the name of the input field containing pre-built T-Digest sketches.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>compression</td>
-<td>Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.</td>
-<td>no, defaults to 100</td>
-</tr>
-</tbody></table>
-
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json">{
-    &quot;queryType&quot;: &quot;groupBy&quot;,
-    &quot;dataSource&quot;: &quot;test_datasource&quot;,
-    &quot;granularity&quot;: &quot;ALL&quot;,
-    &quot;dimensions&quot;: [],
-    &quot;aggregations&quot;: [{
-        &quot;type&quot;: &quot;mergeTDigestSketch&quot;,
-        &quot;name&quot;: &quot;merged_sketch&quot;,
-        &quot;fieldName&quot;: &quot;ingested_sketch&quot;,
-        &quot;compression&quot;: 200
-    }],
-    &quot;intervals&quot;: [&quot;2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z&quot;]
-}
-</code></pre></div>
-
-<h3 id="post-aggregators">Post Aggregators</h3>
-
-<h4 id="quantiles">Quantiles</h4>
-
-<p>This returns an array of quantiles corresponding to a given array of fractions.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesFromTDigestSketch&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">TDigestSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;fractions&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">fractions&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>type</td>
-<td>This String should always be &quot;quantilesFromTDigestSketch&quot;</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>name</td>
-<td>A String for the output (result) name of the calculation.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>field</td>
-<td>A field access or another post aggregator that refers to a TDigestSketch.</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>fractions</td>
-<td>Non-empty array of fractions between 0 and 1</td>
-<td>yes</td>
-</tr>
-</tbody></table>
-
-<p>Example:</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json">{
-    &quot;queryType&quot;: &quot;groupBy&quot;,
-    &quot;dataSource&quot;: &quot;test_datasource&quot;,
-    &quot;granularity&quot;: &quot;ALL&quot;,
-    &quot;dimensions&quot;: [],
-    &quot;aggregations&quot;: [{
-        &quot;type&quot;: &quot;mergeTDigestSketch&quot;,
-        &quot;name&quot;: &quot;merged_sketch&quot;,
-        &quot;fieldName&quot;: &quot;ingested_sketch&quot;,
-        &quot;compression&quot;: 200
-    }],
-    &quot;postAggregations&quot;: [{
-        &quot;type&quot;: &quot;quantilesFromTDigestSketch&quot;,
-        &quot;name&quot;: &quot;quantiles&quot;,
-        &quot;fractions&quot;: [0, 0.5, 1],
-        &quot;field&quot;: {
-            &quot;type&quot;: &quot;fieldAccess&quot;,
-            &quot;fieldName&quot;: &quot;merged_sketch&quot;
-        }
-    }],
-    &quot;intervals&quot;: [&quot;2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z&quot;]
-}
-</code></pre></div>
-
-        </div>
-        <div class="col-md-3">
-          <div class="searchbox">
-            <gcse:searchbox-only></gcse:searchbox-only>
-          </div>
-          <div id="toc" class="nav toc hidden-print">
-          </div>
-        </div>
-      </div>
-    </div>
-
-  </body>
-</html>
diff --git a/docs/latest/development/extensions-core/approximate-histograms.html b/docs/latest/development/extensions-core/approximate-histograms.html
index 07238ab..c07dfd3 100644
--- a/docs/latest/development/extensions-core/approximate-histograms.html
+++ b/docs/latest/development/extensions-core/approximate-histograms.html
@@ -239,11 +239,6 @@ query.</p>
 <td>Restrict the approximation to the given range. The values outside this range will be aggregated into two centroids. Counts of values outside this range are still maintained.</td>
 <td>-INF/+INF</td>
 </tr>
-<tr>
-<td><code>finalizeAsBase64Binary</code></td>
-<td>If true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <h2 id="fixed-buckets-histogram">Fixed Buckets Histogram</h2>
@@ -302,11 +297,6 @@ query.</p>
 <td>Specifies how values outside of [lowerLimit, upperLimit] will be handled. Supported modes are &quot;ignore&quot;, &quot;overflow&quot;, and &quot;clip&quot;. See <a href="#outlier-handling-modes">outlier handling modes</a> for more details.</td>
 <td>No default, must be specified</td>
 </tr>
-<tr>
-<td><code>finalizeAsBase64Binary</code></td>
-<td>If true, the finalized aggregator value will be a Base64-encoded byte array containing the <a href="#serialization-formats">serialized form</a> of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <p>An example aggregator spec is shown below:</p>
diff --git a/docs/latest/development/extensions-core/datasketches-extension.html b/docs/latest/development/extensions-core/datasketches-extension.html
index dace932..3228266 100644
--- a/docs/latest/development/extensions-core/datasketches-extension.html
+++ b/docs/latest/development/extensions-core/datasketches-extension.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-extension">DataSketches extension</h1>
 
-<p>Apache Druid (incubating) aggregators based on <a href="https://datasketches.github.io/">datasketches</a> library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.</p>
+<p>Apache Druid (incubating) aggregators based on <a href="http://datasketches.github.io/">datasketches</a> library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.</p>
 
 <p>To use the datasketches aggregators, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-datasketches&quot;]
diff --git a/docs/latest/development/extensions-core/datasketches-hll.html b/docs/latest/development/extensions-core/datasketches-hll.html
index 61d1a47..ae6a308 100644
--- a/docs/latest/development/extensions-core/datasketches-hll.html
+++ b/docs/latest/development/extensions-core/datasketches-hll.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-hll-sketch-module">DataSketches HLL Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from <a href="https://datasketches.github.io/">datasketches</a> library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of  [...]
+<p>This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from <a href="http://datasketches.github.io/">datasketches</a> library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of s [...]
 You can use the HLL sketch aggregator on columns of any identifiers. It will return estimated cardinality of the column.</p>
 
 <p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
diff --git a/docs/latest/development/extensions-core/datasketches-quantiles.html b/docs/latest/development/extensions-core/datasketches-quantiles.html
index c3c11f8..2e1fb9e 100644
--- a/docs/latest/development/extensions-core/datasketches-quantiles.html
+++ b/docs/latest/development/extensions-core/datasketches-quantiles.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-quantiles-sketch-module">DataSketches Quantiles Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from <a href="https://datasketches.github.io/">datasketches</a> library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such [...]
+<p>This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from <a href="http://datasketches.github.io/">datasketches</a> library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such) [...]
 
 <p>There are three major modes of operation:</p>
 
@@ -232,26 +232,6 @@
   <span class="nt">&quot;splitPoints&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">split</span> <span class="err">points&gt;</span>
 <span class="p">}</span>
 </code></pre></div>
-<h4 id="rank">Rank</h4>
-
-<p>This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value.</p>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesDoublesSketchToRank&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">DoublesSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;value&quot;</span> <span class="p">:</span> <span class="err">&lt;value&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
-<h4 id="cdf">CDF</h4>
-
-<p>This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of <i>m</i> unique, monotonically increasing split points divide the real number line into <i>m+1</i> consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. The resulting array of fractions can be viewed as ranks of each split point with one additional rank th [...]
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;type&quot;</span>  <span class="p">:</span> <span class="s2">&quot;quantilesDoublesSketchToCDF&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="err">&lt;output</span> <span class="err">name&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;field&quot;</span>  <span class="p">:</span> <span class="err">&lt;post</span> <span class="err">aggregator</span> <span class="err">that</span> <span class="err">refers</span> <span class="err">to</span> <span class="err">a</span> <span class="err">DoublesSketch</span> <span class="err">(fieldAccess</span> <span class="err">or</span> <span class="err">another</span> <span class="err">post</span> <span class="err">aggregator)&gt;</span><span class="p">,</span>
-  <span class="nt">&quot;splitPoints&quot;</span> <span class="p">:</span> <span class="err">&lt;array</span> <span class="err">of</span> <span class="err">split</span> <span class="err">points&gt;</span>
-<span class="p">}</span>
-</code></pre></div>
 <h4 id="sketch-summary">Sketch Summary</h4>
 
 <p>This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method.</p>
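The Rank and CDF post-aggregators removed in this hunk compute quantities with simple exact analogues. A Python sketch of their semantics on exact (non-sketched) data — the function names are illustrative, not Druid or DataSketches APIs:

```python
from bisect import bisect_left

def rank(values, v):
    # Fraction of the distribution strictly below v: the quantity the
    # quantilesDoublesSketchToRank post-aggregator approximates.
    return bisect_left(sorted(values), v) / len(values)

def cdf(values, split_points):
    # Rank at each split point, plus a trailing 1.0 covering the whole
    # distribution: the quantity quantilesDoublesSketchToCDF approximates.
    return [rank(values, p) for p in split_points] + [1.0]

data = [1.0, 2.0, 3.0, 4.0]
print(rank(data, 3.0))        # 0.5 -> two of the four values are below 3.0
print(cdf(data, [2.0, 4.0]))  # [0.25, 0.75, 1.0]
```

Note that, matching the interval definition in the CDF description, a split point is counted in the interval to its right (ranks count values strictly below the point).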
diff --git a/docs/latest/development/extensions-core/datasketches-theta.html b/docs/latest/development/extensions-core/datasketches-theta.html
index 6d468ca..20c3655 100644
--- a/docs/latest/development/extensions-core/datasketches-theta.html
+++ b/docs/latest/development/extensions-core/datasketches-theta.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-theta-sketch-module">DataSketches Theta Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on Theta sketch from <a href="https://datasketches.github.io/">datasketches</a> library. Note that sketch algorithms are approximate; see details in the &quot;Accuracy&quot; section of the datasketches doc.
+<p>This module provides Apache Druid (incubating) aggregators based on Theta sketch from <a href="http://datasketches.github.io/">datasketches</a> library. Note that sketch algorithms are approximate; see details in the &quot;Accuracy&quot; section of the datasketches doc. 
 At ingestion time, this aggregator creates the Theta sketch objects which get stored in Druid segments. Logically speaking, a Theta sketch object can be thought of as a Set data structure. At query time, sketches are read and aggregated (set unioned) together. In the end, by default, you receive the estimate of the number of unique entries in the sketch object. Also, you can use post aggregators to do union, intersection or difference on sketch columns in the same row. 
 Note that you can use <code>thetaSketch</code> aggregator on columns which were not ingested using the same. It will return estimated cardinality of the column. It is recommended to use it at ingestion time as well to make querying faster.</p>
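Since a Theta sketch can logically be thought of as a Set (per the paragraph above), the union/intersection/difference post-aggregations have an exact-set analogue; a Python sketch of the idea (real sketches return estimates, not exact counts, and these variable names are illustrative):

```python
# Exact-set stand-in for Theta sketch semantics: each "sketch" holds the
# unique keys it has seen, and post-aggregations combine sets before counting.
day1 = {"user_a", "user_b", "user_c"}
day2 = {"user_b", "user_c", "user_d"}

union_count = len(day1 | day2)       # distinct users across both days -> 4
intersect_count = len(day1 & day2)   # users active on both days -> 2
difference_count = len(day1 - day2)  # users active only on day 1 -> 1

print(union_count, intersect_count, difference_count)  # 4 2 1
```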
 
diff --git a/docs/latest/development/extensions-core/datasketches-tuple.html b/docs/latest/development/extensions-core/datasketches-tuple.html
index 45ef9e6..2ac104e 100644
--- a/docs/latest/development/extensions-core/datasketches-tuple.html
+++ b/docs/latest/development/extensions-core/datasketches-tuple.html
@@ -148,7 +148,7 @@
 
 <h1 id="datasketches-tuple-sketch-module">DataSketches Tuple Sketch module</h1>
 
-<p>This module provides Apache Druid (incubating) aggregators based on Tuple sketch from <a href="https://datasketches.github.io/">datasketches</a> library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.</p>
+<p>This module provides Apache Druid (incubating) aggregators based on Tuple sketch from <a href="http://datasketches.github.io/">datasketches</a> library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.</p>
 
 <p>To use this aggregator, make sure you <a href="../../operations/including-extensions.html">include</a> the extension in your config file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text"><span></span>druid.extensions.loadList=[&quot;druid-datasketches&quot;]
diff --git a/docs/latest/development/extensions-core/druid-basic-security.html b/docs/latest/development/extensions-core/druid-basic-security.html
index a4f23a3..16ed14e 100644
--- a/docs/latest/development/extensions-core/druid-basic-security.html
+++ b/docs/latest/development/extensions-core/druid-basic-security.html
@@ -388,86 +388,6 @@ Return a list of all user names.</p>
 <p><code>GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})</code>
 Return the name and role information of the user with name {userName}</p>
 
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druid2&quot;,
-  &quot;roles&quot;: [
-    &quot;druidRole&quot;
-  ]
-}
-</code></p>
-
-<p>This API supports the following flags:
-- <code>?full</code>: The response will also include the full information for each role currently assigned to the user.</p>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druid2&quot;,
-  &quot;roles&quot;: [
-    {
-      &quot;name&quot;: &quot;druidRole&quot;,
-      &quot;permissions&quot;: [
-        {
-          &quot;resourceAction&quot;: {
-            &quot;resource&quot;: {
-              &quot;name&quot;: &quot;A&quot;,
-              &quot;type&quot;: &quot;DATASOURCE&quot;
-            },
-            &quot;action&quot;: &quot;READ&quot;
-          },
-          &quot;resourceNamePattern&quot;: &quot;A&quot;
-        },
-        {
-          &quot;resourceAction&quot;: {
-            &quot;resource&quot;: {
-              &quot;name&quot;: &quot;C&quot;,
-              &quot;type&quot;: &quot;CONFIG&quot;
-            },
-            &quot;action&quot;: &quot;WRITE&quot;
-          },
-          &quot;resourceNamePattern&quot;: &quot;C&quot;
-        }
-      ]
-    }
-  ]
-}
-</code></p>
-
-<p>The output format of this API when <code>?full</code> is specified is deprecated and in later versions will be switched to the output format used when both <code>?full</code> and <code>?simplifyPermissions</code> flag is set. </p>
-
-<p>The <code>resourceNamePattern</code> is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.</p>
-
-<ul>
-<li><code>?full?simplifyPermissions</code>: When both <code>?full</code> and <code>?simplifyPermissions</code> are set, the permissions in the output will contain only a list of <code>resourceAction</code> objects, without the extraneous <code>resourceNamePattern</code> field.</li>
-</ul>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
-  <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;druid2&quot;</span><span class="p">,</span>
-  <span class="nt">&quot;roles&quot;</span><span class="p">:</span> <span class="p">[</span>
-    <span class="p">{</span>
-      <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;druidRole&quot;</span><span class="p">,</span>
-      <span class="nt">&quot;users&quot;</span><span class="p">:</span> <span class="kc">null</span><span class="p">,</span>
-      <span class="nt">&quot;permissions&quot;</span><span class="p">:</span> <span class="p">[</span>
-        <span class="p">{</span>
-          <span class="nt">&quot;resource&quot;</span><span class="p">:</span> <span class="p">{</span>
-            <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;A&quot;</span><span class="p">,</span>
-            <span class="nt">&quot;type&quot;</span><span class="p">:</span> <span class="s2">&quot;DATASOURCE&quot;</span>
-          <span class="p">},</span>
-          <span class="nt">&quot;action&quot;</span><span class="p">:</span> <span class="s2">&quot;READ&quot;</span>
-        <span class="p">},</span>
-        <span class="p">{</span>
-          <span class="nt">&quot;resource&quot;</span><span class="p">:</span> <span class="p">{</span>
-            <span class="nt">&quot;name&quot;</span><span class="p">:</span> <span class="s2">&quot;C&quot;</span><span class="p">,</span>
-            <span class="nt">&quot;type&quot;</span><span class="p">:</span> <span class="s2">&quot;CONFIG&quot;</span>
-          <span class="p">},</span>
-          <span class="nt">&quot;action&quot;</span><span class="p">:</span> <span class="s2">&quot;WRITE&quot;</span>
-        <span class="p">}</span>
-      <span class="p">]</span>
-    <span class="p">}</span>
-  <span class="p">]</span>
-<span class="p">}</span>
-</code></pre></div>
 <p><code>POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})</code>
 Create a new user with name {userName}</p>
 
@@ -480,56 +400,7 @@ Delete the user with name {userName}</p>
 Return a list of all role names.</p>
 
 <p><code>GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})</code>
-Return name and permissions for the role named {roleName}.</p>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druidRole2&quot;,
-  &quot;permissions&quot;: [
-    {
-      &quot;resourceAction&quot;: {
-        &quot;resource&quot;: {
-          &quot;name&quot;: &quot;E&quot;,
-          &quot;type&quot;: &quot;DATASOURCE&quot;
-        },
-        &quot;action&quot;: &quot;WRITE&quot;
-      },
-      &quot;resourceNamePattern&quot;: &quot;E&quot;
-    }
-  ]
-}
-</code></p>
-
-<p>The default output format of this API is deprecated and in later versions will be switched to the output format used when the <code>?simplifyPermissions</code> flag is set. The <code>resourceNamePattern</code> is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.</p>
-
-<p>This API supports the following flags:</p>
-
-<ul>
-<li><code>?full</code>: The output will contain an extra <code>users</code> list, containing the users that currently have this role.</li>
-</ul>
-<div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="s2">&quot;users&quot;</span><span class="err">:</span><span class="p">[</span><span class="s2">&quot;druid&quot;</span><span class="p">]</span>
-</code></pre></div>
-<ul>
-<li><code>?simplifyPermissions</code>: The permissions in the output will contain only a list of <code>resourceAction</code> objects, without the extraneous <code>resourceNamePattern</code> field. The <code>users</code> field will be null when <code>?full</code> is not specified.</li>
-</ul>
-
-<p>Example output:
-<code>json
-{
-  &quot;name&quot;: &quot;druidRole2&quot;,
-  &quot;users&quot;: null,
-  &quot;permissions&quot;: [
-    {
-      &quot;resource&quot;: {
-        &quot;name&quot;: &quot;E&quot;,
-        &quot;type&quot;: &quot;DATASOURCE&quot;
-      },
-      &quot;action&quot;: &quot;WRITE&quot;
-    }
-  ]
-}
-</code></p>
+Return name and permissions for the role named {roleName}</p>
 
 <p><code>POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})</code>
 Create a new role with name {roleName}.
diff --git a/docs/latest/development/extensions-core/druid-kerberos.html b/docs/latest/development/extensions-core/druid-kerberos.html
index 208d3bc..5639c2c 100644
--- a/docs/latest/development/extensions-core/druid-kerberos.html
+++ b/docs/latest/development/extensions-core/druid-kerberos.html
@@ -199,6 +199,13 @@ druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
 <td>No</td>
 </tr>
 <tr>
+<td><code>druid.auth.authenticator.kerberos.excludedPaths</code></td>
+<td><code>[&#39;/status&#39;,&#39;/health&#39;]</code></td>
+<td>Array of HTTP paths which do NOT need to be authenticated.</td>
+<td>None</td>
+<td>No</td>
+</tr>
+<tr>
 <td><code>druid.auth.authenticator.kerberos.cookieSignatureSecret</code></td>
 <td><code>secretString</code></td>
 <td>Secret used to sign authentication cookies. It is advisable to explicitly set it if you have multiple Druid nodes running on the same machine with different ports, as the Cookie Specification does not guarantee isolation by port.</td>
@@ -217,10 +224,6 @@ druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
 <p>As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (this is specified by <a href="https://tools.ietf.org/html/rfc4559">RFC-4559</a>) and must be of the form &quot;HTTP/_HOST@REALM&quot;.
 The special string _HOST will be replaced automatically with the value of the config <code>druid.host</code></p>
 
-<h3 id="druid-auth-authenticator-kerberos-excludedpaths"><code>druid.auth.authenticator.kerberos.excludedPaths</code></h3>
-
-<p>In older releases, the Kerberos authenticator had an <code>excludedPaths</code> property that allowed the user to specify a list of paths where authentication checks should be skipped. This property has been removed from the Kerberos authenticator because the path exclusion functionality is now handled across all authenticators/authorizers by setting <code>druid.auth.unsecuredPaths</code>, as described in the <a href="../../design/auth.html">main auth documentation</a>.</p>
-
 <h3 id="auth-to-local-syntax">Auth to Local Syntax</h3>
 
 <p><code>druid.auth.authenticator.kerberos.authToLocal</code> allows you to set general rules for mapping principal names to local user names.
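The <code>excludedPaths</code> row added in this hunk can be set like the other authenticator properties shown earlier on the page; a hedged sketch (the authenticator name <code>MyKerberosAuthenticator</code> follows the page's earlier example, and the paths are illustrative):

```properties
druid.auth.authenticator.MyKerberosAuthenticator.type=kerberos
# Let unauthenticated health probes through without Kerberos negotiation.
druid.auth.authenticator.MyKerberosAuthenticator.excludedPaths=['/status','/health']
```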
diff --git a/docs/latest/development/extensions-core/kafka-ingestion.html b/docs/latest/development/extensions-core/kafka-ingestion.html
index a80a1bd..e8445f8 100644
--- a/docs/latest/development/extensions-core/kafka-ingestion.html
+++ b/docs/latest/development/extensions-core/kafka-ingestion.html
@@ -606,111 +606,12 @@ offsets as reported by Kafka, the consumer lag per partition, as well as the agg
 consumer lag per partition may be reported as negative values if the supervisor has not received a recent latest offset
 response from Kafka. The aggregate lag value will always be &gt;= 0.</p>
 
-<p>The status report also contains the supervisor&#39;s state and a list of recently thrown exceptions (reported as
-<code>recentErrors</code>, whose max size can be controlled using the <code>druid.supervisor.maxStoredExceptionEvents</code> configuration).
-There are two fields related to the supervisor&#39;s state - <code>state</code> and <code>detailedState</code>. The <code>state</code> field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the <code>detailedState</code> field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor&#39;s
-activities than the generic <code>state</code> field.</p>
-
-<p>The list of possible <code>state</code> values are: [<code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code>, <code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>]</p>
-
-<p>The list of <code>detailedState</code> values and their corresponding <code>state</code> mapping is as follows:</p>
-
-<table><thead>
-<tr>
-<th>Detailed State</th>
-<th>Corresponding State</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor has encountered errors on the past <code>druid.supervisor.unhealthinessThreshold</code> iterations</td>
-</tr>
-<tr>
-<td>UNHEALTHY_TASKS</td>
-<td>UNHEALTHY_TASKS</td>
-<td>The last <code>druid.supervisor.taskUnhealthinessThreshold</code> tasks have all failed</td>
-</tr>
-<tr>
-<td>UNABLE_TO_CONNECT_TO_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kafka and has not successfully connected in the past</td>
-</tr>
-<tr>
-<td>LOST_CONTACT_WITH_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kafka but has successfully connected in the past</td>
-</tr>
-<tr>
-<td>PENDING (first iteration only)</td>
-<td>PENDING</td>
-<td>The supervisor has been initialized and hasn&#39;t started connecting to the stream</td>
-</tr>
-<tr>
-<td>CONNECTING_TO_STREAM (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is trying to connect to the stream and update partition data</td>
-</tr>
-<tr>
-<td>DISCOVERING_INITIAL_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is discovering already-running tasks</td>
-</tr>
-<tr>
-<td>CREATING_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is creating tasks and discovering state</td>
-</tr>
-<tr>
-<td>RUNNING</td>
-<td>RUNNING</td>
-<td>The supervisor has started tasks and is waiting for taskDuration to elapse</td>
-</tr>
-<tr>
-<td>SUSPENDED</td>
-<td>SUSPENDED</td>
-<td>The supervisor has been suspended</td>
-</tr>
-<tr>
-<td>STOPPING</td>
-<td>STOPPING</td>
-<td>The supervisor is stopping</td>
-</tr>
-</tbody></table>
-
-<p>On each iteration of the supervisor&#39;s run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of partitions from Kafka and determine the starting offset for each partition (either based on the
-  last processed offset if continuing, or starting from the beginning or ending of the stream if this is a new topic).
-  2) Discover any running indexing tasks that are writing to the supervisor&#39;s datasource and adopt them if they match
-  the supervisor&#39;s configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded <code>taskDuration</code> and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor&#39;s internal state.
-  7) Compare the list of healthy tasks to the requested <code>taskCount</code> and <code>replicas</code> configurations and create additional tasks if required.</p>
-
-<p>The <code>detailedState</code> field will show additional values (those marked with &quot;first iteration only&quot;) the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can&#39;t connect to
-Kafka, it can&#39;t read from the Kafka topic, or it can&#39;t communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - <code>detailedState</code> will show a <code>RUNNING</code>
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.</p>
-
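The run-loop description removed directly above follows a strictly ordered sequence of seven steps; a skeletal Python sketch of that control flow (every name here is illustrative — none are Druid APIs):

```python
# Skeletal supervisor run loop: each step is a stub that records its name,
# making the fixed seven-step ordering explicit.
class SupervisorRunLoop:
    STEPS = [
        "fetch_partitions_and_offsets",  # 1) starting offset per partition
        "discover_and_adopt_tasks",      # 2) adopt matching tasks, stop others
        "poll_task_status",              # 3) refresh view of supervised tasks
        "rollover_expired_tasks",        # 4) reading -> publishing at taskDuration
        "stop_redundant_replicas",       # 5) once publishing has finished
        "clean_up_failed_tasks",         # 6) prune internal state
        "reconcile_task_count",          # 7) create tasks to match taskCount/replicas
    ]

    def __init__(self):
        self.log = []

    def run_once(self):
        for step in self.STEPS:
            self.log.append(step)  # a real supervisor would do the work here

loop = SupervisorRunLoop()
loop.run_once()
print(len(loop.log))  # 7
```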
 <h3 id="getting-supervisor-ingestion-stats-report">Getting Supervisor Ingestion Stats Report</h3>
 
 <p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/stats</code> returns a snapshot of the current ingestion row counters for each task being managed by the supervisor, along with moving averages for the row counters.</p>
 
 <p>See <a href="../../ingestion/reports.html#row-stats">Task Reports: Row Stats</a> for more information.</p>
 
-<h3 id="supervisor-health-check">Supervisor Health Check</h3>
-
-<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/health</code> returns <code>200 OK</code> if the supervisor is healthy and
-<code>503 Service Unavailable</code> if it is unhealthy. Healthiness is determined by the supervisor&#39;s <code>state</code> (as returned by the
-<code>/status</code> endpoint) and the <code>druid.supervisor.*</code> Overlord configuration thresholds.</p>
-
 <h3 id="updating-existing-supervisors">Updating Existing Supervisors</h3>
 
 <p><code>POST /druid/indexer/v1/supervisor</code> can be used to update existing supervisor spec.
diff --git a/docs/latest/development/extensions-core/kinesis-ingestion.html b/docs/latest/development/extensions-core/kinesis-ingestion.html
index 7e01891..1274b7a 100644
--- a/docs/latest/development/extensions-core/kinesis-ingestion.html
+++ b/docs/latest/development/extensions-core/kinesis-ingestion.html
@@ -231,7 +231,7 @@ and the MiddleManagers. A supervisor for a dataSource is started by submitting a
   <span class="p">}</span>
 <span class="p">}</span>
 </code></pre></div>
-<h2 id="supervisor-spec">Supervisor Spec</h2>
+<h2 id="supervisor-configuration">Supervisor Configuration</h2>
 
 <table><thead>
 <tr>
@@ -661,108 +661,12 @@ For all supervisor APIs, please check <a href="../../operations/api-reference.ht
 <code>
 -Ddruid.kinesis.accessKey=123 -Ddruid.kinesis.secretKey=456
 </code>
-The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will
-look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance
-profile provider (in this order).</p>
+The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance profile provider (in this order).</p>
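The <code>-D</code> flags above are JVM system properties; a hedged sketch of setting them in a service's jvm.config, reusing the sample values from the snippet (the file path is illustrative, and real keys should of course not be hard-coded like this):

```properties
# conf/druid/middleManager/jvm.config (illustrative path) -- one flag per line
-Ddruid.kinesis.accessKey=123
-Ddruid.kinesis.secretKey=456
```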
 
 <h3 id="getting-supervisor-status-report">Getting Supervisor Status Report</h3>
 
-<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/status</code> returns a snapshot report of the current state of the tasks 
-managed by the given supervisor. This includes the latest sequence numbers as reported by Kinesis. Unlike the Kafka
-Indexing Service, stats about lag are not yet supported.</p>
-
-<p>The status report also contains the supervisor&#39;s state and a list of recently thrown exceptions (reported as
-<code>recentErrors</code>, whose max size can be controlled using the <code>druid.supervisor.maxStoredExceptionEvents</code> configuration).
-There are two fields related to the supervisor&#39;s state - <code>state</code> and <code>detailedState</code>. The <code>state</code> field will always be
-one of a small number of generic states that are applicable to any type of supervisor, while the <code>detailedState</code> field
-will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor&#39;s
-activities than the generic <code>state</code> field.</p>
-
-<p>The list of possible <code>state</code> values are: [<code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code>, <code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>]</p>
-
-<p>The list of <code>detailedState</code> values and their corresponding <code>state</code> mapping is as follows:</p>
-
-<table><thead>
-<tr>
-<th>Detailed State</th>
-<th>Corresponding State</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor has encountered errors on the past <code>druid.supervisor.unhealthinessThreshold</code> iterations</td>
-</tr>
-<tr>
-<td>UNHEALTHY_TASKS</td>
-<td>UNHEALTHY_TASKS</td>
-<td>The last <code>druid.supervisor.taskUnhealthinessThreshold</code> tasks have all failed</td>
-</tr>
-<tr>
-<td>UNABLE_TO_CONNECT_TO_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kinesis and has not successfully connected in the past</td>
-</tr>
-<tr>
-<td>LOST_CONTACT_WITH_STREAM</td>
-<td>UNHEALTHY_SUPERVISOR</td>
-<td>The supervisor is encountering connectivity issues with Kinesis but has successfully connected in the past</td>
-</tr>
-<tr>
-<td>PENDING (first iteration only)</td>
-<td>PENDING</td>
-<td>The supervisor has been initialized and hasn&#39;t started connecting to the stream</td>
-</tr>
-<tr>
-<td>CONNECTING_TO_STREAM (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is trying to connect to the stream and update partition data</td>
-</tr>
-<tr>
-<td>DISCOVERING_INITIAL_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is discovering already-running tasks</td>
-</tr>
-<tr>
-<td>CREATING_TASKS (first iteration only)</td>
-<td>RUNNING</td>
-<td>The supervisor is creating tasks and discovering state</td>
-</tr>
-<tr>
-<td>RUNNING</td>
-<td>RUNNING</td>
-<td>The supervisor has started tasks and is waiting for taskDuration to elapse</td>
-</tr>
-<tr>
-<td>SUSPENDED</td>
-<td>SUSPENDED</td>
-<td>The supervisor has been suspended</td>
-</tr>
-<tr>
-<td>STOPPING</td>
-<td>STOPPING</td>
-<td>The supervisor is stopping</td>
-</tr>
-</tbody></table>
-
-<p>On each iteration of the supervisor&#39;s run loop, the supervisor completes the following tasks in sequence:
-  1) Fetch the list of shards from Kinesis and determine the starting sequence number for each shard (either based on the
-  last processed sequence number if continuing, or starting from the beginning or ending of the stream if this is a new stream).
-  2) Discover any running indexing tasks that are writing to the supervisor&#39;s datasource and adopt them if they match
-  the supervisor&#39;s configuration, else signal them to stop.
-  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
-  4) Handle tasks that have exceeded <code>taskDuration</code> and should transition from the reading to publishing state.
-  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
-  6) Handle tasks that have failed and clean up the supervisor&#39;s internal state.
-  7) Compare the list of healthy tasks to the requested <code>taskCount</code> and <code>replicas</code> configurations and create additional tasks if required.</p>
-
-<p>The <code>detailedState</code> field will show additional values (those marked with &quot;first iteration only&quot;) the first time the
-supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
-initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can&#39;t connect to
-Kinesis, it can&#39;t read from the stream, or it can&#39;t communicate with existing tasks). Once the supervisor is stable -
-that is, once it has completed a full execution without encountering any issues - <code>detailedState</code> will show a <code>RUNNING</code>
-state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.</p>
+<p><code>GET /druid/indexer/v1/supervisor/&lt;supervisorId&gt;/status</code> returns a snapshot report of the current state of the tasks managed by the given supervisor. This includes the latest
+sequence numbers as reported by Kinesis. Unlike the Kafka Indexing Service, stats about lag are not yet supported.</p>
 
 <h3 id="updating-existing-supervisors">Updating Existing Supervisors</h3>
 
diff --git a/docs/latest/development/extensions-core/postgresql.html b/docs/latest/development/extensions-core/postgresql.html
index 37fb6c1..91b5657 100644
--- a/docs/latest/development/extensions-core/postgresql.html
+++ b/docs/latest/development/extensions-core/postgresql.html
@@ -261,12 +261,6 @@
 <td>none</td>
 <td>no</td>
 </tr>
-<tr>
-<td><code>druid.metadata.postgres.dbTableSchema</code></td>
-<td>druid meta table schema</td>
-<td><code>public</code></td>
-<td>no</td>
-</tr>
 </tbody></table>
 
         </div>
diff --git a/docs/latest/development/extensions-core/s3.html b/docs/latest/development/extensions-core/s3.html
index dee3752..4033f62 100644
--- a/docs/latest/development/extensions-core/s3.html
+++ b/docs/latest/development/extensions-core/s3.html
@@ -174,18 +174,43 @@
 </thead><tbody>
 <tr>
 <td><code>druid.s3.accessKey</code></td>
-<td>S3 access key.See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td>S3 access key.</td>
+<td>Must be set.</td>
 </tr>
 <tr>
 <td><code>druid.s3.secretKey</code></td>
-<td>S3 secret key.See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td>S3 secret key.</td>
+<td>Must be set.</td>
+</tr>
+<tr>
+<td><code>druid.storage.bucket</code></td>
+<td>Bucket to store in.</td>
+<td>Must be set.</td>
 </tr>
 <tr>
-<td><code>druid.s3.fileSessionCredentials</code></td>
-<td>Path to properties file containing <code>sessionToken</code>, <code>accessKey</code> and <code>secretKey</code> value. One key/value pair per line (format <code>key=value</code>). See <a href="#s3-authentication-methods">S3 authentication methods</a> for more details</td>
-<td>Can be ommitted according to authentication methods chosen.</td>
+<td><code>druid.storage.baseKey</code></td>
+<td>Base key prefix to use, i.e. what directory.</td>
+<td>Must be set.</td>
+</tr>
+<tr>
+<td><code>druid.storage.disableAcl</code></td>
+<td>Boolean flag to disable ACL. If this is set to <code>false</code>, full control is granted to the bucket owner. This may require setting additional permissions. See <a href="#s3-permissions-settings">S3 permissions settings</a>.</td>
+<td>false</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.type</code></td>
+<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, or <code>custom</code>. See the <a href="#server-side-encryption">Server-side encryption section</a> below for more details.</td>
+<td>None</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.kms.keyId</code></td>
+<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
+<td>None</td>
+</tr>
+<tr>
+<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
+<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
+<td>None</td>
 </tr>
 <tr>
 <td><code>druid.s3.protocol</code></td>
@@ -237,51 +262,6 @@
 <td>Password to use when connecting through a proxy.</td>
 <td>None</td>
 </tr>
-<tr>
-<td><code>druid.storage.bucket</code></td>
-<td>Bucket to store in.</td>
-<td>Must be set.</td>
-</tr>
-<tr>
-<td><code>druid.storage.baseKey</code></td>
-<td>Base key prefix to use, i.e. what directory.</td>
-<td>Must be set.</td>
-</tr>
-<tr>
-<td><code>druid.storage.archiveBucket</code></td>
-<td>S3 bucket name for archiving when running the <em>archive task</em>.</td>
-<td>none</td>
-</tr>
-<tr>
-<td><code>druid.storage.archiveBaseKey</code></td>
-<td>S3 object key prefix for archiving.</td>
-<td>none</td>
-</tr>
-<tr>
-<td><code>druid.storage.disableAcl</code></td>
-<td>Boolean flag to disable ACL. If this is set to <code>false</code>, the full control would be granted to the bucket owner. This may require to set additional permissions. See <a href="#s3-permissions-settings">S3 permissions settings</a>.</td>
-<td>false</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.type</code></td>
-<td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="#server-side-encryption">Server-side encryption section</a> for more details.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.kms.keyId</code></td>
-<td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.sse.custom.base64EncodedKey</code></td>
-<td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td>
-<td>None</td>
-</tr>
-<tr>
-<td><code>druid.storage.useS3aSchema</code></td>
-<td>If true, use the &quot;s3a&quot; filesystem when using Hadoop-based ingestion. If false, the &quot;s3n&quot; filesystem will be used. Only affects Hadoop-based ingestion.</td>
-<td>false</td>
-</tr>
 </tbody></table>
 
 <h3 id="s3-permissions-settings">S3 permissions settings</h3>
@@ -289,53 +269,6 @@
 <p><code>s3:GetObject</code> and <code>s3:PutObject</code> are basically required for pushing/loading segments to/from S3.
 If <code>druid.storage.disableAcl</code> is set to <code>false</code>, then <code>s3:GetBucketAcl</code> and <code>s3:PutObjectAcl</code> are additionally required to set ACL for objects.</p>
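<p>As a rough illustration of the permissions above, an IAM policy statement along these lines would cover both the object and ACL actions (the bucket name <code>my-druid-bucket</code> is a placeholder; this is a sketch, not an official policy):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:GetBucketAcl", "s3:PutObjectAcl"],
    "Resource": ["arn:aws:s3:::my-druid-bucket", "arn:aws:s3:::my-druid-bucket/*"]
  }]
}
```

<p>Note that <code>s3:GetBucketAcl</code> applies to the bucket resource itself, while the object actions apply to keys under it, which is why both ARNs appear.</p>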
 
-<h3 id="s3-authentication-methods">S3 authentication methods</h3>
-
-<p>To connect to your S3 bucket (whether deep storage bucket or source bucket), Druid use the following credentials providers chain</p>
-
-<table><thead>
-<tr>
-<th>order</th>
-<th>type</th>
-<th>details</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>1</td>
-<td>Druid config file</td>
-<td>Based on your runtime.properties if it contains values <code>druid.s3.accessKey</code> and <code>druid.s3.secretKey</code></td>
-</tr>
-<tr>
-<td>2</td>
-<td>Custom properties file</td>
-<td>Based on custom properties file where you can supply <code>sessionToken</code>, <code>accessKey</code> and <code>secretKey</code> values. This file is provided to Druid through <code>druid.s3.fileSessionCredentials</code> propertie</td>
-</tr>
-<tr>
-<td>3</td>
-<td>Environment variables</td>
-<td>Based on environment variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code></td>
-</tr>
-<tr>
-<td>4</td>
-<td>Java system properties</td>
-<td>Based on JVM properties <code>aws.accessKeyId</code> and <code>aws.secretKey</code></td>
-</tr>
-<tr>
-<td>5</td>
-<td>Profile informations</td>
-<td>Based on credentials you may have on your druid instance (generally in <code>~/.aws/credentials</code>)</td>
-</tr>
-<tr>
-<td>6</td>
-<td>Instance profile informations</td>
-<td>Based on the instance profile you may have attached to your druid instance</td>
-</tr>
-</tbody></table>
-
-<p>You can find more informations about authentication method <a href="https://docs.aws.amazon.com/fr_fr/sdk-for-java/v1/developer-guide/credentials.html">here</a><br/>
-<strong>Note :</strong> <em>Order is important here as it indicates the precedence of authentication methods.<br/> 
-So if you are trying to use Instance profile informations, you **must not</em>* set <code>druid.s3.accessKey</code> and <code>druid.s3.secretKey</code> in your Druid runtime.properties* </p>
-
 <h2 id="server-side-encryption">Server-side encryption</h2>
 
 <p>You can enable <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html">server-side encryption</a> by setting
diff --git a/docs/latest/development/extensions.html b/docs/latest/development/extensions.html
index 2985fd6..20b109a 100644
--- a/docs/latest/development/extensions.html
+++ b/docs/latest/development/extensions.html
@@ -192,7 +192,7 @@ metadata store. Many clusters will also use additional extensions.</p>
 </tr>
 <tr>
 <td>druid-datasketches</td>
-<td>Support for approximate counts and set operations with <a href="https://datasketches.github.io/">DataSketches</a>.</td>
+<td>Support for approximate counts and set operations with <a href="http://datasketches.github.io/">DataSketches</a>.</td>
 <td><a href="../development/extensions-core/datasketches-extension.html">link</a></td>
 </tr>
 <tr>
@@ -395,21 +395,6 @@ If you&#39;d like to take on maintenance for a community extension, please post
 <td>Support for <a href="https://en.wikipedia.org/wiki/Moving_average">Moving Average</a> and other Aggregate <a href="https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions">Window Functions</a> in Druid queries.</td>
 <td><a href="../development/extensions-contrib/moving-average-query.html">link</a></td>
 </tr>
-<tr>
-<td>druid-influxdb-emitter</td>
-<td>InfluxDB metrics emitter</td>
-<td><a href="../development/extensions-contrib/influxdb-emitter.html">link</a></td>
-</tr>
-<tr>
-<td>druid-momentsketch</td>
-<td>Support for approximate quantile queries using the <a href="https://github.com/stanford-futuredata/momentsketch">momentsketch</a> library</td>
-<td><a href="../development/extensions-contrib/momentsketch-quantiles.html">link</a></td>
-</tr>
-<tr>
-<td>druid-tdigestsketch</td>
-<td>Support for approximate sketch aggregators based on <a href="https://github.com/tdunning/t-digest">T-Digest</a></td>
-<td><a href="../development/extensions-contrib/tdigestsketch-quantiles.html">link</a></td>
-</tr>
 </tbody></table>
 
 <h2 id="promoting-community-extension-to-core-extension">Promoting Community Extension to Core Extension</h2>
diff --git a/docs/latest/development/geo.html b/docs/latest/development/geo.html
index 7477d19..d4b9af5 100644
--- a/docs/latest/development/geo.html
+++ b/docs/latest/development/geo.html
@@ -253,27 +253,6 @@
 </tr>
 </tbody></table>
 
-<h3 id="polygonbound">PolygonBound</h3>
-
-<table><thead>
-<tr>
-<th>property</th>
-<th>description</th>
-<th>required?</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>abscissa</td>
-<td>Horizontal coordinate for corners of the polygon</td>
-<td>yes</td>
-</tr>
-<tr>
-<td>ordinate</td>
-<td>Vertical coordinate for corners of the polygon</td>
-<td>yes</td>
-</tr>
-</tbody></table>
-
         </div>
         <div class="col-md-3">
           <div class="searchbox">
diff --git a/docs/latest/development/modules.html b/docs/latest/development/modules.html
index 368f1ce..c30268e 100644
--- a/docs/latest/development/modules.html
+++ b/docs/latest/development/modules.html
@@ -164,7 +164,7 @@
 and <code>org.apache.druid.query.aggregation.BufferAggregator</code>.</li>
 <li>Add PostAggregators by extending <code>org.apache.druid.query.aggregation.PostAggregator</code>.</li>
 <li>Add ExtractionFns by extending <code>org.apache.druid.query.extraction.ExtractionFn</code>.</li>
-<li>Add Complex metrics by extending <code>org.apache.druid.segment.serde.ComplexMetricSerde</code>.</li>
+<li>Add Complex metrics by extending <code>org.apache.druid.segment.serde.ComplexMetricSerde</code>.</li>
 <li>Add new Query types by extending <code>org.apache.druid.query.QueryRunnerFactory</code>, <code>org.apache.druid.query.QueryToolChest</code>, and
 <code>org.apache.druid.query.Query</code>.</li>
 <li>Add new Jersey resources by calling <code>Jerseys.addResource(binder, clazz)</code>.</li>
diff --git a/docs/latest/ingestion/compaction.html b/docs/latest/ingestion/compaction.html
index b6a556e..9e5dc08 100644
--- a/docs/latest/ingestion/compaction.html
+++ b/docs/latest/ingestion/compaction.html
@@ -155,6 +155,7 @@
     <span class="nt">&quot;dataSource&quot;</span><span class="p">:</span> <span class="err">&lt;task_datasource&gt;</span><span class="p">,</span>
     <span class="nt">&quot;interval&quot;</span><span class="p">:</span> <span class="err">&lt;interval</span> <span class="err">to</span> <span class="err">specify</span> <span class="err">segments</span> <span class="err">to</span> <span class="err">be</span> <span class="err">merged&gt;</span><span class="p">,</span>
     <span class="nt">&quot;dimensions&quot;</span> <span class="err">&lt;custom</span> <span class="err">dimensionsSpec&gt;</span><span class="p">,</span>
+    <span class="nt">&quot;keepSegmentGranularity&quot;</span><span class="p">:</span> <span class="err">&lt;</span><span class="kc">true</span> <span class="err">or</span> <span class="kc">false</span><span class="err">&gt;</span><span class="p">,</span>
     <span class="nt">&quot;segmentGranularity&quot;</span><span class="p">:</span> <span class="err">&lt;segment</span> <span class="err">granularity</span> <span class="err">after</span> <span class="err">compaction&gt;</span><span class="p">,</span>
     <span class="nt">&quot;targetCompactionSizeBytes&quot;</span><span class="p">:</span> <span class="err">&lt;target</span> <span class="err">size</span> <span class="err">of</span> <span class="err">compacted</span> <span class="err">segments&gt;</span>
     <span class="s2">&quot;tuningConfig&quot;</span> <span class="err">&lt;index</span> <span class="err">task</span> <span class="err">tuningConfig&gt;</span><span class="p">,</span>
@@ -204,6 +205,11 @@
 <td>No</td>
 </tr>
 <tr>
+<td><code>keepSegmentGranularity</code></td>
+<td>Deprecated. Please use <code>segmentGranularity</code> instead. See the table below for its behavior.</td>
+<td>No</td>
+</tr>
+<tr>
 <td><code>targetCompactionSizeBytes</code></td>
<td>Target segment size after compaction. Cannot be used with <code>maxRowsPerSegment</code>, <code>maxTotalRows</code>, and <code>numShards</code> in tuningConfig.</td>
 <td>No</td>
@@ -220,6 +226,47 @@
 </tr>
 </tbody></table>
 
+<h3 id="used-segmentgranularity-based-on-segmentgranularity-and-keepsegmentgranularity">Used segmentGranularity based on <code>segmentGranularity</code> and <code>keepSegmentGranularity</code></h3>
+
+<table><thead>
+<tr>
+<th>SegmentGranularity</th>
+<th>keepSegmentGranularity</th>
+<th>Used SegmentGranularity</th>
+</tr>
+</thead><tbody>
+<tr>
+<td>Non-null</td>
+<td>True</td>
+<td>Error</td>
+</tr>
+<tr>
+<td>Non-null</td>
+<td>False</td>
+<td>Given segmentGranularity</td>
+</tr>
+<tr>
+<td>Non-null</td>
+<td>Null</td>
+<td>Given segmentGranularity</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>True</td>
+<td>Original segmentGranularity</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>False</td>
+<td>ALL segmentGranularity. All events will fall into a single time chunk.</td>
+</tr>
+<tr>
+<td>Null</td>
+<td>Null</td>
+<td>Original segmentGranularity</td>
+</tr>
+</tbody></table>
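+<p>The decision table above can be summarized as a small sketch (a hypothetical helper for illustration, not Druid code):</p>

```python
def used_segment_granularity(segment_granularity, keep_segment_granularity):
    """Mirror of the compaction task's granularity-selection table.

    Returns the granularity the task will use, or raises for the invalid
    combination of a non-null segmentGranularity with keepSegmentGranularity=true.
    Illustrative only; not part of the Druid codebase.
    """
    if segment_granularity is not None:
        if keep_segment_granularity is True:
            # Non-null segmentGranularity plus keepSegmentGranularity=true is an error
            raise ValueError("cannot set both segmentGranularity and keepSegmentGranularity=true")
        return segment_granularity  # the given segmentGranularity wins
    if keep_segment_granularity is False:
        return "ALL"  # all events fall into a single time chunk
    return "ORIGINAL"  # original segment granularity is kept
```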
+
 <p>An example of compaction task is</p>
 <div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">{</span>
   <span class="nt">&quot;type&quot;</span> <span class="p">:</span> <span class="s2">&quot;compact&quot;</span><span class="p">,</span>
@@ -228,12 +275,12 @@
 <span class="p">}</span>
 </code></pre></div>
 <p>This compaction task reads <em>all segments</em> of the interval <code>2017-01-01/2018-01-01</code> and results in new segments.
-Since <code>segmentGranularity</code> is null, the original segment granularity will be remained and not changed after compaction.
+Since both <code>segmentGranularity</code> and <code>keepSegmentGranularity</code> are null, the original segment granularity will be retained and not changed after compaction.
 To control the number of result segments per time chunk, you can set <a href="../configuration/index.html#compaction-dynamic-configuration">maxRowsPerSegment</a> or <a href="../ingestion/native_tasks.html#tuningconfig">numShards</a>.
 Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year.</p>
 
 <p>A compaction task internally generates an <code>index</code> task spec for performing compaction work with some fixed parameters.
-For example, its <code>firehose</code> is always the <a href="./firehose.html#ingestsegmentfirehose">ingestSegmentFirehose</a>, and <code>dimensionsSpec</code> and <code>metricsSpec</code>
+For example, its <code>firehose</code> is always the <a href="./firehose.html#ingestsegmentfirehose">ingestSegmentFirehose</a>, and <code>dimensionsSpec</code> and <code>metricsSpec</code>
 include all dimensions and metrics of the input segments by default.</p>
 
 <p>Compaction tasks will exit with a failure status code, without doing anything, if the interval you specify has no
diff --git a/docs/latest/ingestion/hadoop-vs-native-batch.html b/docs/latest/ingestion/hadoop-vs-native-batch.html
index 50bf90a..398e9d3 100644
--- a/docs/latest/ingestion/hadoop-vs-native-batch.html
+++ b/docs/latest/ingestion/hadoop-vs-native-batch.html
@@ -180,14 +180,14 @@ ingestion method.</p>
 <td>No dependency</td>
 </tr>
 <tr>
-<td>Supported <a href="./index.html#roll-up-modes">rollup modes</a></td>
+<td>Supported <a href="http://druid.io/docs/latest/ingestion/index.html#roll-up-modes">rollup modes</a></td>
 <td>Perfect rollup</td>
 <td>Best-effort rollup</td>
 <td>Both perfect and best-effort rollup</td>
 </tr>
 <tr>
 <td>Supported partitioning methods</td>
-<td><a href="./hadoop.html#partitioning-specification">Both Hash-based and range partitioning</a></td>
+<td><a href="http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification">Both Hash-based and range partitioning</a></td>
 <td>N/A</td>
 <td>Hash-based partitioning (when <code>forceGuaranteedRollup</code> = true)</td>
 </tr>
diff --git a/docs/latest/ingestion/hadoop.html b/docs/latest/ingestion/hadoop.html
index d2607c3..66d48df 100644
--- a/docs/latest/ingestion/hadoop.html
+++ b/docs/latest/ingestion/hadoop.html
@@ -507,12 +507,6 @@ s3n://billy-bucket/the/data/is/here/y=2012/m=06/d=01/H=23
 <td>The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides <code>ignoreInvalidRows</code> if <code>maxParseExceptions</code> is defined.</td>
 <td>unlimited</td>
 </tr>
-<tr>
-<td>useYarnRMJobStatusFallback</td>
-<td>Boolean</td>
-<td>If the Hadoop jobs created by the indexing task are unable to retrieve their completion status from the JobHistory server, and this parameter is true, the indexing task will try to fetch the application status from <code>http://&lt;yarn-rm-address&gt;/ws/v1/cluster/apps/&lt;application-id&gt;</code>, where <code>&lt;yarn-rm-address&gt;</code> is the value of <code>yarn.resourcemanager.webapp.address</code> in your Hadoop configuration. This flag is intended as a fallback for cases wh [...]
-<td>no (default = true)</td>
-</tr>
 </tbody></table>
 
 <h3 id="jobproperties-field-of-tuningconfig">jobProperties field of TuningConfig</h3>
diff --git a/docs/latest/misc/math-expr.html b/docs/latest/misc/math-expr.html
index f2affe2..98a33ee 100644
--- a/docs/latest/misc/math-expr.html
+++ b/docs/latest/misc/math-expr.html
@@ -149,8 +149,7 @@
 <h1 id="apache-druid-incubating-expressions">Apache Druid (incubating) Expressions</h1>
 
 <div class="note info">
-This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to
- have significant inefficiencies.
+This feature is still experimental. It has not been optimized for performance yet, and its implementation is known to have significant inefficiencies.
 </div>
  
 
@@ -188,28 +187,14 @@ This feature is still experimental. It has not been optimized for performance ye
 </tr>
 </tbody></table>
 
-<p>Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise
-it is interpreted as a long. That means, always add a &#39;.&#39; to your number if you want it interpreted as a double value.
-String literals should be quoted by single quotation marks.</p>
+<p>Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise it is interpreted as a long. That means, always add a &#39;.&#39; to your number if you want it interpreted as a double value. String literals should be quoted by single quotation marks.</p>
 
-<p>Additionally, the expression language supports long, double, and string arrays. Array literals are created by wrapping
-square brackets around a list of scalar literals values delimited by a comma or space character. All values in an array
-literal must be the same type.</p>
+<p>Multi-value types are not fully supported yet. Expressions may behave inconsistently on multi-value types, and you
+should not rely on the behavior in this case to stay the same in future releases.</p>
 
-<p>Expressions can contain variables. Variable names may contain letters, digits, &#39;_&#39; and &#39;$&#39;. Variable names must not
-begin with a digit. To escape other special characters, you can quote it with double quotation marks.</p>
+<p>Expressions can contain variables. Variable names may contain letters, digits, &#39;_&#39; and &#39;$&#39;. Variable names must not begin with a digit. To escape other special characters, you can quote it with double quotation marks.</p>
 
-<p>For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string
-type, it&#39;s the evaluation result of &#39;Boolean.valueOf(string)&#39;.</p>
-
-<p>Multi-value string dimensions are supported and may be treated as either scalar or array typed values. When treated as
-a scalar type, an expression will automatically be transformed to apply the scalar operation across all values of the
-multi-valued type, to mimic Druid&#39;s native behavior. Values that result in arrays will be coerced back into the native
-Druid string type for aggregation. Druid aggregations on multi-value string dimensions on the individual values, <em>not</em>
-the &#39;array&#39;, behaving similar to the <code>unnest</code> operator available in many SQL dialects. However, by using the
-<code>array_to_string</code> function, aggregations may be done on a stringified version of the complete array, allowing the
-complete row to be preserved. Using <code>string_to_array</code> in an expression post-aggregator, allows transforming the
-stringified dimension back into the true native array type.</p>
+<p>For logical operators, a number is true if and only if it is positive (0 or negative value means false). For string type, it&#39;s the evaluation result of &#39;Boolean.valueOf(string)&#39;.</p>
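+<p>The truthiness rules above can be sketched as follows (illustrative only; in Java, <code>Boolean.valueOf</code> returns true only for the case-insensitive string &quot;true&quot;):</p>

```python
def druid_expr_truthy(value):
    """Sketch of the expression language's logical-operator coercion.

    Numbers are true iff positive (0 or negative means false); strings
    follow Java's Boolean.valueOf, which is true only for "true"
    case-insensitively. Illustrative only, not Druid code.
    """
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value > 0  # 0 or negative value means false
    if isinstance(value, str):
        return value.lower() == "true"  # Boolean.valueOf semantics
    raise TypeError(f"unsupported type: {type(value).__name__}")
```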
 
 <p>The following built-in functions are available.</p>
 
@@ -223,7 +208,7 @@ stringified dimension back into the true native array type.</p>
 </thead><tbody>
 <tr>
 <td>cast</td>
-<td>cast(expr,&#39;LONG&#39; or &#39;DOUBLE&#39; or &#39;STRING&#39; or &#39;LONG_ARRAY&#39;, or &#39;DOUBLE_ARRAY&#39; or &#39;STRING_ARRAY&#39;) returns expr with specified type. exception can be thrown. Scalar types may be cast to array types and will take the form of a single element list (null will still be null).</td>
+<td>cast(expr,&#39;LONG&#39; or &#39;DOUBLE&#39; or &#39;STRING&#39;) returns expr with the specified type. An exception can be thrown.</td>
 </tr>
 <tr>
 <td>if</td>
@@ -555,106 +540,6 @@ stringified dimension back into the true native array type.</p>
 </tr>
 </tbody></table>
 
-<h2 id="array-functions">Array Functions</h2>
-
-<table><thead>
-<tr>
-<th>function</th>
-<th>description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>array_length(arr)</code></td>
-<td>returns length of array expression</td>
-</tr>
-<tr>
-<td><code>array_offset(arr,long)</code></td>
-<td>returns the array element at the 0 based index supplied, or null for an out of range index</td>
-</tr>
-<tr>
-<td><code>array_ordinal(arr,long)</code></td>
-<td>returns the array element at the 1 based index supplied, or null for an out of range index</td>
-</tr>
-<tr>
-<td><code>array_contains(arr,expr)</code></td>
-<td>returns true if the array contains the element specified by expr, or contains all elements specified by expr if expr is an array</td>
-</tr>
-<tr>
-<td><code>array_overlap(arr1,arr2)</code></td>
-<td>returns true if arr1 and arr2 have any elements in common</td>
-</tr>
-<tr>
-<td><code>array_offset_of(arr,expr)</code></td>
-<td>returns the 0 based index of the first occurrence of expr in the array, or <code>null</code> if no matching elements exist in the array.</td>
-</tr>
-<tr>
-<td><code>array_ordinal_of(arr,expr)</code></td>
-<td>returns the 1 based index of the first occurrence of expr in the array, or <code>null</code> if no matching elements exist in the array.</td>
-</tr>
-<tr>
-<td><code>array_append(arr1,expr)</code></td>
-<td>appends expr to arr, the resulting array type determined by the type of the first array</td>
-</tr>
-<tr>
-<td><code>array_concat(arr1,arr2)</code></td>
-<td>concatenates 2 arrays, the resulting array type determined by the type of the first array</td>
-</tr>
-<tr>
-<td><code>array_to_string(arr,str)</code></td>
-<td>joins all elements of arr by the delimiter specified by str</td>
-</tr>
-<tr>
-<td><code>string_to_array(str1,str2)</code></td>
-<td>splits str1 into an array on the delimiter specified by str2</td>
-</tr>
-<tr>
-<td><code>array_slice(arr,start,end)</code></td>
-<td>return the subarray of arr from the 0 based index start(inclusive) to end(exclusive), or <code>null</code>, if start is less than 0, greater than length of arr or less than end</td>
-</tr>
-<tr>
-<td><code>array_prepend(expr,arr)</code></td>
-<td>adds expr to arr at the beginning, the resulting array type determined by the type of the array</td>
-</tr>
-</tbody></table>
-
-<h2 id="apply-functions">Apply Functions</h2>
-
-<table><thead>
-<tr>
-<th>function</th>
-<th>description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>map(lambda,arr)</code></td>
-<td>applies a transform specified by a single argument lambda expression to all elements of arr, returning a new array</td>
-</tr>
-<tr>
-<td><code>cartesian_map(lambda,arr1,arr2,...)</code></td>
-<td>applies a transform specified by a multi argument lambda expression to all elements of the cartesian product of all input arrays, returning a new array; the number of lambda arguments and array inputs must be the same</td>
-</tr>
-<tr>
-<td><code>filter(lambda,arr)</code></td>
-<td>filters arr by a single argument lambda, returning a new array with all matching elements, or null if no elements match</td>
-</tr>
-<tr>
-<td><code>fold(lambda,arr)</code></td>
-<td>folds a 2 argument lambda across arr. The first argument of the lambda is the array element and the second the accumulator, returning a single accumulated value.</td>
-</tr>
-<tr>
-<td><code>cartesian_fold(lambda,arr1,arr2,...)</code></td>
-<td>folds a multi argument lambda across the cartesian product of all input arrays. The first arguments of the lambda is the array element and the last is the accumulator, returning a single accumulated value.</td>
-</tr>
-<tr>
-<td><code>any(lambda,arr)</code></td>
-<td>returns true if any element in the array matches the lambda expression</td>
-</tr>
-<tr>
-<td><code>all(lambda,arr)</code></td>
-<td>returns true if all elements in the array matches the lambda expression</td>
-</tr>
-</tbody></table>
-
         </div>
         <div class="col-md-3">
           <div class="searchbox">
diff --git a/docs/latest/operations/api-reference.html b/docs/latest/operations/api-reference.html
index f775b4d..98748cd 100644
--- a/docs/latest/operations/api-reference.html
+++ b/docs/latest/operations/api-reference.html
@@ -845,21 +845,6 @@ which automates this operation to perform periodically.</p>
 <td>supervisor unique identifier</td>
 </tr>
 <tr>
-<td><code>state</code></td>
-<td>String</td>
-<td>basic state of the supervisor. Available states:<code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>, <code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code></td>
-</tr>
-<tr>
-<td><code>detailedState</code></td>
-<td>String</td>
-<td>supervisor specific state. (See documentation of specific supervisor for details)</td>
-</tr>
-<tr>
-<td><code>healthy</code></td>
-<td>Boolean</td>
-<td>true or false indicator of overall supervisor health</td>
-</tr>
-<tr>
 <td><code>spec</code></td>
 <td>SupervisorSpec</td>
 <td>json specification of supervisor (See Supervisor Configuration for details)</td>
@@ -867,41 +852,6 @@ which automates this operation to perform periodically.</p>
 </tbody></table>
 
 <ul>
-<li><code>/druid/indexer/v1/supervisor?state=true</code></li>
-</ul>
-
-<p>Returns a list of objects of the currently active supervisors and their current state.</p>
-
-<table><thead>
-<tr>
-<th>Field</th>
-<th>Type</th>
-<th>Description</th>
-</tr>
-</thead><tbody>
-<tr>
-<td><code>id</code></td>
-<td>String</td>
-<td>supervisor unique identifier</td>
-</tr>
-<tr>
-<td><code>state</code></td>
-<td>String</td>
-<td>basic state of the supervisor. Available states:<code>UNHEALTHY_SUPERVISOR</code>, <code>UNHEALTHY_TASKS</code>, <code>PENDING</code>, <code>RUNNING</code>, <code>SUSPENDED</code>, <code>STOPPING</code></td>
-</tr>
-<tr>
-<td><code>detailedState</code></td>
-<td>String</td>
-<td>supervisor specific state. (See documentation of specific supervisor for details)</td>
-</tr>
-<tr>
-<td><code>healthy</code></td>
-<td>Boolean</td>
-<td>true or false indicator of overall supervisor health</td>
-</tr>
-</tbody></table>
-
-<ul>
 <li><code>/druid/indexer/v1/supervisor/&lt;supervisorId&gt;</code></li>
 </ul>
 
diff --git a/docs/latest/operations/recommendations.html b/docs/latest/operations/recommendations.html
index 3a66105..26fa1df 100644
--- a/docs/latest/operations/recommendations.html
+++ b/docs/latest/operations/recommendations.html
@@ -206,11 +206,13 @@
 <p>Segments should generally be between 300MB-700MB in size. Too many small segments results in inefficient CPU utilizations and 
 too many large segments impacts query performance, most notably with TopN queries.</p>
 
-<h1 id="faqs-and-guides">FAQs and Guides</h1>
+<h1 id="read-faqs">Read FAQs</h1>
 
-<p>1) The <a href="../ingestion/faq.html">Ingestion FAQ</a> provides help with common ingestion problems.</p>
+<p>You should read about common problems people have encountered here:</p>
 
-<p>2) The <a href="../operations/basic-cluster-tuning.html">Basic Cluster Tuning Guide</a> offers introductory guidelines for tuning your Druid cluster.</p>
+<p>1) <a href="../ingestion/faq.html">Ingestion-FAQ</a></p>
+
+<p>2) <a href="../operations/performance-faq.html">Performance-FAQ</a></p>
 
         </div>
         <div class="col-md-3">
diff --git a/docs/latest/querying/aggregations.html b/docs/latest/querying/aggregations.html
index 05c6c16..94e11d6 100644
--- a/docs/latest/querying/aggregations.html
+++ b/docs/latest/querying/aggregations.html
@@ -334,7 +334,7 @@ JavaScript-based functionality is disabled by default. Please refer to the Druid
 
 <h4 id="datasketches-theta-sketch">DataSketches Theta Sketch</h4>
 
-<p>The <a href="../development/extensions-core/datasketches-theta.html">DataSketches Theta Sketch</a> extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the <a href="https://datasketches.github.io/">datasketches</a> library.</p>
+<p>The <a href="../development/extensions-core/datasketches-theta.html">DataSketches Theta Sketch</a> extension-provided aggregator gives distinct count estimates with support for set union, intersection, and difference post-aggregators, using Theta sketches from the <a href="http://datasketches.github.io/">datasketches</a> library.</p>
 
 <h4 id="datasketches-hll-sketch">DataSketches HLL Sketch</h4>
 
@@ -369,7 +369,7 @@ However, to ensure backwards compatibility, we will continue to support the clas
 
 <h4 id="datasketches-quantiles-sketch">DataSketches Quantiles Sketch</h4>
 
-<p>The <a href="../development/extensions-core/datasketches-quantiles.html">DataSketches Quantiles Sketch</a> extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the <a href="https://datasketches.github.io/">datasketches</a> library.</p>
+<p>The <a href="../development/extensions-core/datasketches-quantiles.html">DataSketches Quantiles Sketch</a> extension-provided aggregator provides quantile estimates and histogram approximations using the numeric quantiles DoublesSketch from the <a href="http://datasketches.github.io/">datasketches</a> library.</p>
 
 <p>We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy.</p>
 
diff --git a/docs/latest/querying/granularities.html b/docs/latest/querying/granularities.html
index b6b0476..37a26c4 100644
--- a/docs/latest/querying/granularities.html
+++ b/docs/latest/querying/granularities.html
@@ -285,10 +285,11 @@
   <span class="p">}</span>
 <span class="p">}</span> <span class="p">]</span>
 </code></pre></div>
-<p>Having a query time <code>granularity</code> that is smaller than the <code>queryGranularity</code> parameter set at
-<a href="(../ingestion/ingestion-spec.html#granularityspec)">ingestion time</a> is unreasonable because information about that
-smaller granularity is not present in the indexed data. So, if the query time granularity is smaller than the ingestion
-time query granularity, Druid produces results that are equivalent to having set <code>granularity</code> to <code>queryGranularity</code>.</p>
+<p>Having a query granularity smaller than the ingestion granularity doesn&#39;t make sense,
+because information about that smaller granularity is not present in the indexed data.
+So, if the query granularity is smaller than the ingestion granularity, Druid produces
+results that are equivalent to having set the query granularity to the ingestion granularity.
+See <code>queryGranularity</code> in <a href="../ingestion/ingestion-spec.html#granularityspec">Ingestion Spec</a>.</p>
 
 <p>If you change the granularity to <code>all</code>, you will get everything aggregated in 1 bucket,</p>
 <div class="highlight"><pre><code class="language-json" data-lang="json"><span></span><span class="p">[</span> <span class="p">{</span>
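The bucketing rule described in the granularities hunk above can be sketched in a few lines: once rows are rolled up at ingestion, every stored timestamp is already floored to the ingestion bucket, so flooring again at a finer query granularity changes nothing. The millisecond arithmetic below is illustrative, not Druid's actual implementation.

```python
HOUR = 3600 * 1000
MINUTE = 60 * 1000

def floor_to_granularity(ts_millis, granularity_millis):
    """Truncate an epoch-millis timestamp to the start of its bucket."""
    return ts_millis - (ts_millis % granularity_millis)

# Suppose rows were rolled up at ingestion with queryGranularity = hour:
# every stored timestamp is already floored to the hour, so querying at a
# finer (minute) granularity cannot recover sub-hour detail.
stored = floor_to_granularity(1_000_000_000_123, HOUR)
assert floor_to_granularity(stored, MINUTE) == stored
```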
diff --git a/docs/latest/querying/lookups.html b/docs/latest/querying/lookups.html
index bbdb913..ac83b72 100644
--- a/docs/latest/querying/lookups.html
+++ b/docs/latest/querying/lookups.html
@@ -402,11 +402,7 @@ The Coordinator periodically checks if any of the processes need to load/drop lo
 </code></pre></div>
 <h2 id="delete-lookup">Delete Lookup</h2>
 
-<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}/{id}</code> will remove that lookup from the cluster. If it was last lookup in the tier, then tier is deleted as well.</p>
-
-<h2 id="delete-tier">Delete Tier</h2>
-
-<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}</code> will remove that tier from the cluster.</p>
+<p>A <code>DELETE</code> to <code>/druid/coordinator/v1/lookups/config/{tier}/{id}</code> will remove that lookup from the cluster.</p>
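Calling the delete endpoint above amounts to issuing an HTTP DELETE against the Coordinator. A minimal sketch with the standard library follows; the host, tier, and lookup id are hypothetical placeholders, and the request is constructed but not sent.

```python
import urllib.request

# Hypothetical Coordinator address, tier, and lookup id -- substitute your own.
coordinator = "http://localhost:8081"
tier, lookup_id = "__default", "country_code"

req = urllib.request.Request(
    url=f"{coordinator}/druid/coordinator/v1/lookups/config/{tier}/{lookup_id}",
    method="DELETE",
)
# Not sent here; urllib.request.urlopen(req) would issue the DELETE.
print(req.get_method(), req.full_url)
```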
 
 <h2 id="list-tier-names">List tier names</h2>
 
diff --git a/docs/latest/querying/scan-query.html b/docs/latest/querying/scan-query.html
index 7de5381..c78edab 100644
--- a/docs/latest/querying/scan-query.html
+++ b/docs/latest/querying/scan-query.html
@@ -211,7 +211,7 @@ amounts of data in parallel.</p>
 </tr>
 <tr>
 <td>batchSize</td>
-<td>The maximum number of rows buffered before being returned to the client. Default is <code>20480</code></td>
+<td>How many rows are buffered before being returned to the client. Default is <code>20480</code></td>
 <td>no</td>
 </tr>
 <tr>
@@ -355,9 +355,9 @@ the query context (see the Query Context Properties section).</p>
 In legacy mode you can expect the following behavior changes:</p>
 
 <ul>
-<li>The <code>__time</code> column is returned as <code>&quot;timestamp&quot;</code> rather than <code>&quot;__time&quot;</code>. This will take precedence over any other column
-you may have that is named <code>&quot;timestamp&quot;</code>.</li>
-<li>The <code>__time</code> column is included in the list of columns even if you do not specifically ask for it.</li>
+<li>The <code>__time</code> column is returned as &quot;timestamp&quot; rather than &quot;__time&quot;. This will take precedence over any other column
+you may have that is named &quot;timestamp&quot;.</li>
+<li>The <code>__time</code> column is included in the list of columns even if you do not specifically ask for it.</li>
 <li>Timestamps are returned as ISO8601 time strings rather than integers (milliseconds since 1970-01-01 00:00:00 UTC).</li>
 </ul>
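The last behavior change listed above, timestamps rendered as ISO8601 strings rather than epoch-millis longs, can be approximated as follows. This is an illustrative conversion, not the exact formatting code Druid's legacy mode uses.

```python
from datetime import datetime, timezone

def to_legacy_timestamp(ts_millis):
    """Render epoch millis as an ISO8601 UTC string, as legacy scan results do."""
    dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"

# Non-legacy results carry the raw long; legacy mode renders a string.
iso = to_legacy_timestamp(0)
print(iso)
```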
 
diff --git a/docs/latest/querying/sql.html b/docs/latest/querying/sql.html
index 2e0a689..0e11bd5 100644
--- a/docs/latest/querying/sql.html
+++ b/docs/latest/querying/sql.html
@@ -146,12 +146,12 @@
   ~ under the License.
   -->
 
-<p>&lt;!-- 
-    The format of the tables that describe the functions and operators 
-    should not be changed without updating the script create-sql-function-doc 
-    in web-console/script/create-sql-function-doc, because the script detects
-    patterns in this markdown file and parse it to TypeScript file for web console
-   --&gt;</p>
+<!--
+  The format of the tables that describe the functions and operators
+  should not be changed without updating the script create-sql-function-doc
+  in web-console/script/create-sql-function-doc, because the script detects
+  patterns in this markdown file and parse it to TypeScript file for web console
+-->
 
 <h1 id="sql">SQL</h1>
 
@@ -166,6 +166,9 @@ queries on the query Broker (the first process you query), which are then passed
 queries. Other than the (slight) overhead of translating SQL on the Broker, there isn&#39;t an additional performance
 penalty versus native queries.</p>
 
+<p>To enable Druid SQL, make sure you have set <code>druid.sql.enable = true</code> either in your common.runtime.properties or your
+Broker&#39;s runtime.properties.</p>
+
 <h2 id="query-syntax">Query syntax</h2>
 
 <p>Each Druid datasource appears as a table in the &quot;druid&quot; schema. This is also the default schema, so Druid datasources
@@ -293,30 +296,6 @@ possible for two aggregators in the same SQL query to have different filters.</p
 <td><code>BLOOM_FILTER(expr, numEntries)</code></td>
 <td>Computes a bloom filter from values produced by <code>expr</code>, with <code>numEntries</code> maximum number of distinct values before false positve rate increases. See <a href="../development/extensions-core/bloom-filter.html">bloom filter extension</a> documentation for additional details.</td>
 </tr>
-<tr>
-<td><code>VAR_POP(expr)</code></td>
-<td>Computes variance population of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>VAR_SAMP(expr)</code></td>
-<td>Computes variance sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>VARIANCE(expr)</code></td>
-<td>Computes variance sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV_POP(expr)</code></td>
-<td>Computes standard deviation population of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV_SAMP(expr)</code></td>
-<td>Computes standard deviation sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
-<tr>
-<td><code>STDDEV(expr)</code></td>
-<td>Computes standard deviation sample of <code>expr</code>. See <a href="../development/extensions-core/stats.html">stats extension</a> documentation for additional details.</td>
-</tr>
 </tbody></table>
 
 <p>For advice on choosing approximate aggregation functions, check out our <a href="aggregations.html#approx">approximate aggregations documentation</a>.</p>
@@ -571,10 +550,6 @@ context parameter &quot;sqlTimeZone&quot; to the name of another time zone, like
 the connection time zone, some functions also accept time zones as parameters. These parameters always take precedence
 over the connection time zone.</p>
 
-<p>Literal timestamps in the connection time zone can be written using <code>TIMESTAMP &#39;2000-01-01 00:00:00&#39;</code> syntax. The
-simplest way to write literal timestamps in other time zones is to use TIME_PARSE, like
-<code>TIME_PARSE(&#39;2000-02-01 00:00:00&#39;, NULL, &#39;America/Los_Angeles&#39;)</code>.</p>
-
 <table><thead>
 <tr>
 <th>Function</th>
@@ -638,10 +613,6 @@ simplest way to write literal timestamps in other time zones is to use TIME_PARS
 <td>Equivalent to <code>timestamp + count * INTERVAL &#39;1&#39; UNIT</code>.</td>
 </tr>
 <tr>
-<td><code>TIMESTAMPDIFF(&lt;unit&gt;, &lt;timestamp1&gt;, &lt;timestamp2&gt;)</code></td>
-<td>Returns the (signed) number of <code>unit</code> between <code>timestamp1</code> and <code>timestamp2</code>. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.</td>
-</tr>
-<tr>
 <td><code>timestamp_expr { + &amp;#124; - } &lt;interval_expr&gt;</code></td>
 <td>Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like <code>INTERVAL &#39;2&#39; HOUR</code>, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.</td>
 </tr>
@@ -807,12 +778,11 @@ simplest way to write literal timestamps in other time zones is to use TIME_PARS
 
 <p>Druid natively supports five basic column types: &quot;long&quot; (64 bit signed int), &quot;float&quot; (32 bit float), &quot;double&quot; (64 bit
 float) &quot;string&quot; (UTF-8 encoded strings), and &quot;complex&quot; (catch-all for more exotic data types like hyperUnique and
-approxHistogram columns).</p>
+approxHistogram columns). Timestamps (including the <code>__time</code> column) are stored as longs, with the value being the
+number of milliseconds since 1 January 1970 UTC.</p>
 
-<p>Timestamps (including the <code>__time</code> column) are treated by Druid as longs, with the value being the number of
-milliseconds since 1970-01-01 00:00:00 UTC, not counting leap seconds. Therefore, timestamps in Druid do not carry any
-timezone information, but only carry information about the exact moment in time they represent. See the
-<a href="#time-functions">Time functions</a> section for more information about timestamp handling.</p>
+<p>At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators. The reverse will not
+happen: 64-bit floats are not narrowed to 32-bit.</p>
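The storage convention described above, timestamps as longs counting milliseconds since the epoch, is easy to reproduce when preparing values for the <code>__time</code> column. A small sketch, assuming timezone-aware input:

```python
from datetime import datetime, timezone

def to_druid_time(dt):
    """Druid __time values: milliseconds since 1970-01-01 00:00:00 UTC."""
    return int(dt.timestamp() * 1000)

millis = to_druid_time(datetime(2000, 1, 1, tzinfo=timezone.utc))
print(millis)
```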
 
 <p>Druid generally treats NULLs and empty strings interchangeably, rather than according to the SQL standard. As such,
 Druid SQL only has partial support for NULLs. For example, the expressions <code>col IS NULL</code> and <code>col = &#39;&#39;</code> are equivalent,
@@ -824,7 +794,7 @@ datasource, then it will be treated as zero for rows from those segments.</p>
 
 <p>For mathematical operations, Druid SQL will use integer math if all operands involved in an expression are integers.
 Otherwise, Druid will switch to floating point math. You can force this to happen by casting one of your operands
-to FLOAT. At runtime, Druid may widen 32-bit floats to 64-bit for certain operators, like SUM aggregators.</p>
+to FLOAT.</p>
 
 <p>The following table describes how SQL types map onto Druid types during query runtime. Casts between two SQL types
 that have the same Druid runtime type will have no effect, other than exceptions noted in the table. Casts between two
@@ -1423,7 +1393,7 @@ datasource &quot;foo&quot;, use the query:</p>
 </code></pre></div>
 <h3 id="servers-table">SERVERS table</h3>
 
-<p>Servers table lists all discovered servers in the cluster.</p>
+<p>Servers table lists all data servers (any server that hosts a segment). It includes both Historicals and Peons.</p>
 
 <table><thead>
 <tr>
@@ -1455,22 +1425,22 @@ datasource &quot;foo&quot;, use the query:</p>
 <tr>
 <td>server_type</td>
 <td>STRING</td>
-<td>Type of Druid service. Possible values include: COORDINATOR, OVERLORD,  BROKER, ROUTER, HISTORICAL, MIDDLE_MANAGER or PEON.</td>
+<td>Type of Druid service. Possible values include: Historical, realtime, and indexer_executor (Peon).</td>
 </tr>
 <tr>
 <td>tier</td>
 <td>STRING</td>
-<td>Distribution tier see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.tier</a>. Only valid for HISTORICAL type, for other types it&#39;s null</td>
+<td>Distribution tier see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.tier</a></td>
 </tr>
 <tr>
 <td>current_size</td>
 <td>LONG</td>
-<td>Current size of segments in bytes on this server. Only valid for HISTORICAL type, for other types it&#39;s 0</td>
+<td>Current size of segments in bytes on this server</td>
 </tr>
 <tr>
 <td>max_size</td>
 <td>LONG</td>
-<td>Max size in bytes this server recommends to assign to segments see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.maxSize</a>. Only valid for HISTORICAL type, for other types it&#39;s 0</td>
+<td>Max size in bytes this server recommends to assign to segments see <a href="#../configuration/index.html#Historical-General-Configuration">druid.server.maxSize</a></td>
 </tr>
 </tbody></table>
 
@@ -1500,19 +1470,19 @@ datasource &quot;foo&quot;, use the query:</p>
 </tr>
 </tbody></table>
 
-<p>JOIN between &quot;servers&quot; and &quot;segments&quot; can be used to query the number of segments for a specific datasource, 
+<p>JOIN between &quot;servers&quot; and &quot;segments&quot; can be used to query the number of segments for a specific datasource,
 grouped by server, example query:</p>
-<div class="highlight"><pre><code class="language-sql" data-lang="sql"><span></span><span class="k">SELECT</span> <span class="k">count</span><span class="p">(</span><span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span><span class="p">)</span> <span class="k">as</span> <span class="n">num_segments</span> <span class="k">from</span> <span class="n">sys</span><span class="p">.</span><span class="n">segments</span> <span class="k">as</span> <span class="n" [...]
-<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">server_segments</span> <span class="k">as</span> <span class="n">server_segments</span> 
-<span class="k">ON</span> <span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span>  <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">segment_id</span> 
-<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">servers</span> <span class="k">as</span> <span class="n">servers</span> 
+<div class="highlight"><pre><code class="language-sql" data-lang="sql"><span></span><span class="k">SELECT</span> <span class="k">count</span><span class="p">(</span><span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span><span class="p">)</span> <span class="k">as</span> <span class="n">num_segments</span> <span class="k">from</span> <span class="n">sys</span><span class="p">.</span><span class="n">segments</span> <span class="k">as</span> <span class="n" [...]
+<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">server_segments</span> <span class="k">as</span> <span class="n">server_segments</span>
+<span class="k">ON</span> <span class="n">segments</span><span class="p">.</span><span class="n">segment_id</span>  <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">segment_id</span>
+<span class="k">INNER</span> <span class="k">JOIN</span> <span class="n">sys</span><span class="p">.</span><span class="n">servers</span> <span class="k">as</span> <span class="n">servers</span>
 <span class="k">ON</span> <span class="n">servers</span><span class="p">.</span><span class="n">server</span> <span class="o">=</span> <span class="n">server_segments</span><span class="p">.</span><span class="n">server</span>
-<span class="k">WHERE</span> <span class="n">segments</span><span class="p">.</span><span class="n">datasource</span> <span class="o">=</span> <span class="s1">&#39;wikipedia&#39;</span> 
+<span class="k">WHERE</span> <span class="n">segments</span><span class="p">.</span><span class="n">datasource</span> <span class="o">=</span> <span class="s1">&#39;wikipedia&#39;</span>
 <span class="k">GROUP</span> <span class="k">BY</span> <span class="n">servers</span><span class="p">.</span><span class="n">server</span><span class="p">;</span>
 </code></pre></div>
 <h3 id="tasks-table">TASKS table</h3>
 
-<p>The tasks table provides information about active and recently-completed indexing tasks. For more information 
+<p>The tasks table provides information about active and recently-completed indexing tasks. For more information
 check out <a href="#../ingestion/tasks.html">ingestion tasks</a></p>
 
 <table><thead>
@@ -1608,7 +1578,7 @@ check out <a href="#../ingestion/tasks.html">ingestion tasks</a></p>
 <tr>
 <td><code>druid.sql.enable</code></td>
 <td>Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.</td>
-<td>true</td>
+<td>false</td>
 </tr>
 <tr>
 <td><code>druid.sql.avatica.enable</code></td>
diff --git a/docs/latest/querying/timeseriesquery.html b/docs/latest/querying/timeseriesquery.html
index dd1f643..7bcf0bc 100644
--- a/docs/latest/querying/timeseriesquery.html
+++ b/docs/latest/querying/timeseriesquery.html
@@ -235,11 +235,6 @@
 <td>no</td>
 </tr>
 <tr>
-<td>limit</td>
-<td>An integer that limits the number of results. The default is unlimited.</td>
-<td>no</td>
-</tr>
-<tr>
 <td>context</td>
 <td>Can be used to modify query behavior, including <a href="#grand-totals">grand totals</a> and <a href="#zero-filling">zero-filling</a>. See also <a href="../querying/query-context.html">Context</a> for parameters that apply to all query types.</td>
 <td>no</td>
diff --git a/docs/latest/toc.html b/docs/latest/toc.html
index 6f98670..723360f 100644
--- a/docs/latest/toc.html
+++ b/docs/latest/toc.html
@@ -59,7 +59,7 @@
 
 <ul>
 <li><a href="/docs/latest/operations/single-server.html">Single-server deployment</a></li>
-<li><a href="/docs/latest/tutorials/cluster.html#fresh-deployment">Clustered deployment</a></li>
+<li><a href="/docs/latest/operations/example-cluster.html">Clustered deployment</a></li>
 </ul></li>
 </ul></li>
 </ul>
@@ -209,6 +209,7 @@
 <ul>
 <li><a href="/docs/latest/operations/basic-cluster-tuning.html">Basic Cluster Tuning</a><br></li>
 <li><a href="/docs/latest/operations/recommendations.html">General Recommendations</a></li>
+<li><a href="/docs/latest/operations/performance-faq.html">Performance FAQ</a></li>
 <li><a href="/docs/latest/configuration/index.html#jvm-configuration-best-practices">JVM Best Practices</a><br></li>
 </ul></li>
 <li>Tools
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png
index 08426fd..b0b5da8 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png
index 76a1a7f..806ce4c 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png
index ce3b0f0..c6bb701 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png
index b30ef7f..83a018b 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png
index 9ef3f80..71291c0 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png
index b1f08c8..5fe9c37 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png
index d7a8e68..16b48af 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png
index 4e36aab..edaf039 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-08.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png
index 144c02c..6191fc2 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-09.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png
index 75487a2..4037792 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-10.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png b/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png
index 5cadd52..76464f9 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png and b/docs/latest/tutorials/img/tutorial-batch-data-loader-11.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png b/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png
index e8a1346..1651401 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png and b/docs/latest/tutorials/img/tutorial-batch-submit-task-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png b/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png
index fc0c924..834a9a5 100644
Binary files a/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png and b/docs/latest/tutorials/img/tutorial-batch-submit-task-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-01.png b/docs/latest/tutorials/img/tutorial-compaction-01.png
index aeb9bf3..99b9e45 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-01.png and b/docs/latest/tutorials/img/tutorial-compaction-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-02.png b/docs/latest/tutorials/img/tutorial-compaction-02.png
index 836d8a7..11c316e 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-02.png and b/docs/latest/tutorials/img/tutorial-compaction-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-03.png b/docs/latest/tutorials/img/tutorial-compaction-03.png
index d51f8f8..88fd9d6 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-03.png and b/docs/latest/tutorials/img/tutorial-compaction-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-04.png b/docs/latest/tutorials/img/tutorial-compaction-04.png
index 46c5b1d..8df3699 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-04.png and b/docs/latest/tutorials/img/tutorial-compaction-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-05.png b/docs/latest/tutorials/img/tutorial-compaction-05.png
index e692694..07356df 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-05.png and b/docs/latest/tutorials/img/tutorial-compaction-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-06.png b/docs/latest/tutorials/img/tutorial-compaction-06.png
index 55c999f..ec1525c 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-06.png and b/docs/latest/tutorials/img/tutorial-compaction-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-07.png b/docs/latest/tutorials/img/tutorial-compaction-07.png
index 661e897..aa30458 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-07.png and b/docs/latest/tutorials/img/tutorial-compaction-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-compaction-08.png b/docs/latest/tutorials/img/tutorial-compaction-08.png
index 6e3f1aa..b9d89b2 100644
Binary files a/docs/latest/tutorials/img/tutorial-compaction-08.png and b/docs/latest/tutorials/img/tutorial-compaction-08.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-01.png b/docs/latest/tutorials/img/tutorial-deletion-01.png
index de68d38..cddcb16 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-01.png and b/docs/latest/tutorials/img/tutorial-deletion-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-02.png b/docs/latest/tutorials/img/tutorial-deletion-02.png
index ffe4585..9b84f0c 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-02.png and b/docs/latest/tutorials/img/tutorial-deletion-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-deletion-03.png b/docs/latest/tutorials/img/tutorial-deletion-03.png
index 221774f..e6fb1f3 100644
Binary files a/docs/latest/tutorials/img/tutorial-deletion-03.png and b/docs/latest/tutorials/img/tutorial-deletion-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-kafka-01.png b/docs/latest/tutorials/img/tutorial-kafka-01.png
index b085625..580d9af 100644
Binary files a/docs/latest/tutorials/img/tutorial-kafka-01.png and b/docs/latest/tutorials/img/tutorial-kafka-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-kafka-02.png b/docs/latest/tutorials/img/tutorial-kafka-02.png
index f23e084..735ceaa 100644
Binary files a/docs/latest/tutorials/img/tutorial-kafka-02.png and b/docs/latest/tutorials/img/tutorial-kafka-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-01.png b/docs/latest/tutorials/img/tutorial-query-01.png
index b366b2b..7e483fc 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-01.png and b/docs/latest/tutorials/img/tutorial-query-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-02.png b/docs/latest/tutorials/img/tutorial-query-02.png
index f3ba025..c25c651 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-02.png and b/docs/latest/tutorials/img/tutorial-query-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-03.png b/docs/latest/tutorials/img/tutorial-query-03.png
index 9f7ae27..5b1e5bc 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-03.png and b/docs/latest/tutorials/img/tutorial-query-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-04.png b/docs/latest/tutorials/img/tutorial-query-04.png
index 3f800a6..df96420 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-04.png and b/docs/latest/tutorials/img/tutorial-query-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-05.png b/docs/latest/tutorials/img/tutorial-query-05.png
index 2fc59ce..c241627 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-05.png and b/docs/latest/tutorials/img/tutorial-query-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-06.png b/docs/latest/tutorials/img/tutorial-query-06.png
index 60b4e1a..1f3e5fb 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-06.png and b/docs/latest/tutorials/img/tutorial-query-06.png differ
diff --git a/docs/latest/tutorials/img/tutorial-query-07.png b/docs/latest/tutorials/img/tutorial-query-07.png
index d2e5a85..e23fc2a 100644
Binary files a/docs/latest/tutorials/img/tutorial-query-07.png and b/docs/latest/tutorials/img/tutorial-query-07.png differ
diff --git a/docs/latest/tutorials/img/tutorial-quickstart-01.png b/docs/latest/tutorials/img/tutorial-quickstart-01.png
index 9a47bc7..94b2024 100644
Binary files a/docs/latest/tutorials/img/tutorial-quickstart-01.png and b/docs/latest/tutorials/img/tutorial-quickstart-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-00.png b/docs/latest/tutorials/img/tutorial-retention-00.png
index a3f84a9..99c4ca8 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-00.png and b/docs/latest/tutorials/img/tutorial-retention-00.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-01.png b/docs/latest/tutorials/img/tutorial-retention-01.png
index 35a97c2..64f666c 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-01.png and b/docs/latest/tutorials/img/tutorial-retention-01.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-02.png b/docs/latest/tutorials/img/tutorial-retention-02.png
index f38fad0..2458d9d 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-02.png and b/docs/latest/tutorials/img/tutorial-retention-02.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-03.png b/docs/latest/tutorials/img/tutorial-retention-03.png
index 256836a..5cf2e8a 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-03.png and b/docs/latest/tutorials/img/tutorial-retention-03.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-04.png b/docs/latest/tutorials/img/tutorial-retention-04.png
index d39495f..73f9f22 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-04.png and b/docs/latest/tutorials/img/tutorial-retention-04.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-05.png b/docs/latest/tutorials/img/tutorial-retention-05.png
index 638a752..622718f 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-05.png and b/docs/latest/tutorials/img/tutorial-retention-05.png differ
diff --git a/docs/latest/tutorials/img/tutorial-retention-06.png b/docs/latest/tutorials/img/tutorial-retention-06.png
index f47cbff..540551f 100644
Binary files a/docs/latest/tutorials/img/tutorial-retention-06.png and b/docs/latest/tutorials/img/tutorial-retention-06.png differ


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org