Posted to commits@druid.apache.org by cw...@apache.org on 2019/08/15 23:08:46 UTC

[incubator-druid-website-src] 01/01: update website for 0.15.1-incubating

This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.15.1-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit 2faf8cdc161f45e0ebd9a27a56de8316733d1207
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Thu Aug 15 16:08:29 2019 -0700

    update website for 0.15.1-incubating
---
 _config.yml                                        |   4 +-
 docs/0.14.0-incubating/configuration/index.md      |  28 +-
 docs/0.14.0-incubating/configuration/logging.md    |  33 ++
 docs/0.14.0-incubating/configuration/realtime.md   |   2 +-
 .../dependencies/metadata-storage.md               |   5 +-
 docs/0.14.0-incubating/design/coordinator.md       |   3 +-
 docs/0.14.0-incubating/design/index.md             |  41 ++-
 docs/0.14.0-incubating/design/segments.md          |   2 +-
 docs/0.14.0-incubating/development/build.md        |   5 +-
 docs/0.14.0-incubating/development/experimental.md |  19 +-
 .../extensions-contrib/distinctcount.md            |   4 +-
 .../development/extensions-contrib/influx.md       |   2 +
 .../extensions-contrib/materialized-view.md        |   3 +
 .../extensions-core/druid-basic-security.md        | 151 +++++++-
 .../development/extensions-core/druid-lookups.md   |   1 +
 .../development/extensions-core/kafka-ingestion.md |   3 +-
 .../development/extensions-core/parquet.md         |   9 +-
 .../development/extensions-core/s3.md              |  10 +-
 docs/0.14.0-incubating/development/extensions.md   |  13 +-
 docs/0.14.0-incubating/development/modules.md      |   6 +-
 docs/0.14.0-incubating/development/overview.md     |   2 +-
 docs/0.14.0-incubating/development/router.md       |   5 +
 docs/0.14.0-incubating/ingestion/firehose.md       |  64 +++-
 .../ingestion/hadoop-vs-native-batch.md            |   4 +-
 docs/0.14.0-incubating/ingestion/hadoop.md         |   2 +-
 docs/0.14.0-incubating/ingestion/index.md          |   2 +-
 docs/0.14.0-incubating/ingestion/ingestion-spec.md |   6 +-
 docs/0.14.0-incubating/ingestion/native_tasks.md   |  11 +-
 docs/0.14.0-incubating/misc/math-expr.md           |  16 +-
 docs/0.14.0-incubating/operations/api-reference.md |  67 ++--
 docs/0.14.0-incubating/operations/druid-console.md |  51 ++-
 .../operations/img/01-home-view.png                | Bin 60287 -> 58587 bytes
 .../operations/insert-segment-to-db.md             | 153 ++------
 docs/0.14.0-incubating/operations/pull-deps.md     |  14 +-
 .../operations/recommendations.md                  |   8 +-
 .../operations/rule-configuration.md               |   2 -
 docs/0.14.0-incubating/operations/tls-support.md   |  15 +-
 docs/0.14.0-incubating/querying/aggregations.md    |  29 +-
 docs/0.14.0-incubating/querying/caching.md         |  44 ++-
 docs/0.14.0-incubating/querying/filters.md         |   6 +
 docs/0.14.0-incubating/querying/groupbyquery.md    |   3 +-
 docs/0.14.0-incubating/querying/lookups.md         |  13 +-
 docs/0.14.0-incubating/querying/multitenancy.md    |   2 +-
 docs/0.14.0-incubating/querying/query-context.md   |   4 +-
 docs/0.14.0-incubating/querying/querying.md        |  36 +-
 docs/0.14.0-incubating/querying/scan-query.md      | 120 ++++---
 docs/0.14.0-incubating/querying/select-query.md    |  17 +-
 docs/0.14.0-incubating/querying/sql.md             | 159 +++++----
 docs/0.14.0-incubating/toc.md                      | 173 ++++-----
 docs/0.14.0-incubating/tutorials/cluster.md        | 390 +++++++++++++--------
 .../tutorials/img/tutorial-deletion-02.png         | Bin 200459 -> 810422 bytes
 docs/0.14.0-incubating/tutorials/index.md          | 116 +++---
 .../tutorials/tutorial-batch-hadoop.md             |  20 +-
 docs/0.14.0-incubating/tutorials/tutorial-batch.md | 154 ++++++--
 .../tutorials/tutorial-compaction.md               |   9 +-
 .../tutorials/tutorial-delete-data.md              |  60 ++--
 .../tutorials/tutorial-ingestion-spec.md           |   4 +-
 docs/0.14.0-incubating/tutorials/tutorial-kafka.md | 103 +++++-
 docs/0.14.0-incubating/tutorials/tutorial-query.md | 329 ++++++++---------
 .../tutorials/tutorial-retention.md                |   2 +-
 .../0.14.0-incubating/tutorials/tutorial-rollup.md |   7 +-
 .../tutorials/tutorial-tranquility.md              |  14 +-
 .../tutorials/tutorial-transform-spec.md           |   5 +-
 .../tutorials/tutorial-update-data.md              |   8 +-
 .../extensions-contrib/distinctcount.md            |   4 +-
 .../development/extensions-contrib/influx.md       |   2 +
 .../extensions-contrib/materialized-view.md        |   3 +
 .../extensions-contrib/momentsketch-quantiles.md   |   6 +
 .../extensions-contrib/moving-average-query.md     |   9 +
 .../extensions-core/druid-basic-security.md        | 137 +++++++-
 .../development/extensions-core/druid-lookups.md   |   1 +
 docs/latest/development/extensions-core/orc.md     |   4 +
 docs/latest/development/extensions.md              |   3 +-
 docs/latest/operations/pull-deps.md                |  14 +-
 docs/latest/querying/filters.md                    |   6 +
 docs/latest/tutorials/cluster.md                   |  10 +-
 docs/latest/tutorials/index.md                     |  26 +-
 docs/latest/tutorials/tutorial-batch-hadoop.md     |   6 +-
 docs/latest/tutorials/tutorial-batch.md            |   2 +-
 docs/latest/tutorials/tutorial-ingestion-spec.md   |   2 +-
 docs/latest/tutorials/tutorial-rollup.md           |   2 +-
 docs/latest/tutorials/tutorial-tranquility.md      |   6 +-
 package-lock.json                                  |   2 +-
 83 files changed, 1805 insertions(+), 1033 deletions(-)

diff --git a/_config.yml b/_config.yml
index 8d150cb..87bd10e 100644
--- a/_config.yml
+++ b/_config.yml
@@ -26,8 +26,8 @@ description: 'Real-time Exploratory Analytics on Large Datasets'
 druid_versions:
   - release: 0.15
     versions:
-      - version: 0.15.0-incubating
-        date: 2019-06-27
+      - version: 0.15.1-incubating
+        date: 2019-08-15
   - release: 0.14
     versions:
       - version: 0.14.2-incubating
diff --git a/docs/0.14.0-incubating/configuration/index.md b/docs/0.14.0-incubating/configuration/index.md
index da506c9..29c11fa 100644
--- a/docs/0.14.0-incubating/configuration/index.md
+++ b/docs/0.14.0-incubating/configuration/index.md
@@ -32,7 +32,7 @@ This page documents all of the configuration properties for each Druid service t
     * [JVM Configuration Best Practices](#jvm-configuration-best-practices)
     * [Extensions](#extensions)
     * [Modules](#modules)
-    * [Zookeeper](#zookeper)
+    * [Zookeeper](#zookeeper)
     * [Exhibitor](#exhibitor)
     * [TLS](#tls)
     * [Authentication & Authorization](#authentication-and-authorization)
@@ -171,6 +171,7 @@ We recommend just setting the base ZK path and the ZK service host, but all ZK p
 |`druid.zk.service.user`|The username to authenticate with ZooKeeper. This is an optional property.|none|
 |`druid.zk.service.pwd`|The [Password Provider](../operations/password-provider.html) or the string password to authenticate with ZooKeeper. This is an optional property.|none|
 |`druid.zk.service.authScheme`|digest is the only authentication scheme supported. |digest|
+|`druid.zk.service.terminateDruidProcessOnConnectFail`|If set to 'true' and the connection to ZooKeeper fails (after exhausting all potential backoff retries), the Druid process terminates itself with exit code 1.|false|
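
As an aside on the ZooKeeper connection properties documented above, a common `runtime.properties` might combine them roughly as follows (host names and the base path are placeholders, not values from this commit):

```
# ZooKeeper ensemble the cluster coordinates through (placeholder hosts)
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
# Base znode under which Druid creates its nodes
druid.zk.paths.base=/druid
# Terminate the process with exit code 1 if the ZooKeeper connection cannot be re-established
druid.zk.service.terminateDruidProcessOnConnectFail=true
```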
 
 #### Zookeeper Behavior
 
@@ -423,11 +424,11 @@ The following monitors are available:
 
 ### Emitting Metrics
 
-The Druid servers [emit various metrics](../operations/metrics.html) and alerts via something we call an Emitter. There are three emitter implementations included with the code, a "noop" emitter, one that just logs to log4j ("logging", which is used by default if no emitter is specified) and one that does POSTs of JSON events to a server ("http"). The properties for using the logging emitter are described below.
+The Druid servers [emit various metrics](../operations/metrics.html) and alerts via something we call an Emitter. There are three emitter implementations included with the code, a "noop" emitter (the default if none is specified), one that just logs to log4j ("logging"), and one that does POSTs of JSON events to a server ("http"). The properties for using the logging emitter are described below.
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.emitter`|Setting this value to "noop", "logging", "http" or "parametrized" will initialize one of the emitter modules. value "composing" can be used to initialize multiple emitter modules. |noop|
+|`druid.emitter`|Setting this value to "noop", "logging", "http" or "parametrized" will initialize one of the emitter modules. The value "composing" can be used to initialize multiple emitter modules. |noop|
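
For illustration, a setup that POSTs metric events to an external HTTP receiver could be sketched like this (the receiver URL is a placeholder):

```
# Send metric and alert events to an external HTTP receiver
druid.emitter=http
druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid
# Or, to simply log events instead:
# druid.emitter=logging
# druid.emitter.logging.logLevel=info
```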
 
 #### Logging Emitter Module
 
@@ -536,6 +537,7 @@ This deep storage doesn't do anything. There are no configs.
 #### S3 Deep Storage
 
 This deep storage is used to interface with Amazon's S3. Note that the `druid-s3-extensions` extension must be loaded.
+The table below shows some important configurations for S3. See [S3 Deep Storage](../development/extensions-core/s3.html) for the full list of configurations.
 
 |Property|Description|Default|
 |--------|-----------|-------|
@@ -543,7 +545,7 @@ This deep storage is used to interface with Amazon's S3. Note that the `druid-s3
 |`druid.s3.secretKey`|The secret key to use to access S3.|none|
 |`druid.storage.bucket`|S3 bucket name.|none|
 |`druid.storage.baseKey`|S3 object key prefix for storage.|none|
-|`druid.storage.disableAcl`|Boolean flag for ACL.|false|
+|`druid.storage.disableAcl`|Boolean flag for ACL. If this is set to `false`, full control is granted to the bucket owner, which may require additional permissions. See [S3 permissions settings](../development/extensions-core/s3.html#s3-permissions-settings).|false|
 |`druid.storage.archiveBucket`|S3 bucket name for archiving when running the *archive task*.|none|
 |`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
 |`druid.storage.useS3aSchema`|If true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.|false|
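
Taken together, an S3 deep storage configuration along the lines of this table might be sketched as follows (bucket, prefix, and credentials are placeholders):

```
# Use S3 for deep storage; the druid-s3-extensions extension must be loaded
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
# Credentials may instead come from instance profiles or environment variables
druid.s3.accessKey=YOUR_ACCESS_KEY
druid.s3.secretKey=YOUR_SECRET_KEY
```
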
@@ -575,7 +577,7 @@ If you are running the indexing service in remote mode, the task logs must be st
 |`druid.indexer.logs.type`|Choices:noop, s3, azure, google, hdfs, file. Where to store task logs|file|
 
 You can also configure the Overlord to automatically retain the task logs in log directory and entries in task-related metadata storage tables only for last x milliseconds by configuring following additional properties.
-Caution: Automatic log file deletion typically works based on log file modification timestamp on the backing store, so large clock skews between druid processes and backing store nodes might result in un-intended behavior.  
+Caution: Automatic log file deletion typically works based on log file modification timestamp on the backing store, so large clock skews between druid processes and backing store nodes might result in un-intended behavior.
 
 |Property|Description|Default|
 |--------|-----------|-------|
@@ -718,14 +720,14 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.period`|The run period for the Coordinator. The Coordinator operates by maintaining the current state of the world in memory and periodically looking at the set of segments available and segments being served to make decisions about whether any changes need to be made to the data topology. This property sets the delay between each of these runs.|PT60S|
 |`druid.coordinator.period.indexingPeriod`|How often to send compact/merge/conversion tasks to the indexing service. It's recommended to be longer than `druid.manager.segments.pollDuration`|PT1800S (30 mins)|
 |`druid.coordinator.startDelay`|The operation of the Coordinator works on the assumption that it has an up-to-date view of the state of the world when it runs, the current ZK interaction code, however, is written in a way that doesn’t allow the Coordinator to know for a fact that it’s done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data.|PT300S|
-|`druid.coordinator.merge.on`|Boolean flag for whether or not the Coordinator should try and merge small segments into a more optimal segment size.|false|
 |`druid.coordinator.load.timeout`|The timeout duration for when the Coordinator assigns a segment to a Historical process.|PT15M|
 |`druid.coordinator.kill.pendingSegments.on`|Boolean flag for whether or not the Coordinator cleans up old entries in the `pendingSegments` table of the metadata store. If set to true, the Coordinator will check the created time of the most recently completed task. If it doesn't exist, it finds the created time of the earliest running/pending/waiting tasks. Once the created time is found, then for all dataSources not in the `killPendingSegmentsSkipList` (see [Dynamic configurati [...]
 |`druid.coordinator.kill.on`|Boolean flag for whether or not the Coordinator should submit kill task for unused segments, that is, hard delete them from metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), Coordinator will submit tasks periodically based on `period` specified. These kill tasks will delete all segments except for the last `durationToRetain` period. Whitelist or All can be set via dynamic configuration `killAllDataSourc [...]
 |`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
 |`druid.coordinator.kill.durationToRetain`| Do not kill segments in last `durationToRetain`, must be greater or equal to 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.|PT-1S (-1 seconds)|
 |`druid.coordinator.kill.maxSegments`|Kill at most n segments per kill task submission, must be greater than 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.|0|
-|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy that the Coordinator should use to distribute segments among the Historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters and will replace `cost` in the future versions, users are invited to try it. Use `diskNormalized` to distribute segments among Historical processes so that the disks fill up uniformly and use `random` to randomly pick nodes to distribute segmen [...]
+|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy that the coordinator should use to distribute segments among the historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters and will replace `cost` in future versions; users are invited to try it. Use `diskNormalized` to distribute segments among processes so that the disks fill up uniformly, and use `random` to randomly pick processes to distribute segments.|`cost`|
+|`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait for segment view initialization before creating the `cachingCost` balancing strategy. This property is enabled only when `druid.coordinator.balancer.strategy` is `cachingCost`. If set to 'true', the Coordinator will not start to assign segments, until the segment view is initialized. If set to 'false', the Coordinator will fallback to use the `cost` balancing strategy only if the segment view is not initialized [...]
 |`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for the loadqueuepeon , which manages the load and drop of segments.|PT0.050S (50 ms)|
 |`druid.coordinator.asOverlord.enabled`|Boolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a druid cluster by not having to deploy any standalone Overlord processes. If set to true, then Overlord console is available at `http://coordinator-host:port/console.html` and be sure to set `druid.coordinator.asOverlord.overlordService` also. See next.|false|
 |`druid.coordinator.asOverlord.overlordService`| Required, if `druid.coordinator.asOverlord.enabled` is `true`. This must be same value as `druid.service` on standalone Overlord processes and `druid.selectors.indexing.serviceName` on Middle Managers.|NULL|
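
As a rough sketch of how a few of these options combine in `coordinator/runtime.properties` (all values are illustrative only; which datasources the kill tasks affect is governed by the dynamic configuration):

```
# Try the CPU-friendlier balancing strategy and wait for the segment view first
druid.coordinator.balancer.strategy=cachingCost
druid.coordinator.balancer.cachingCost.awaitInitialization=true
# Periodically hard-delete unused segments, keeping the most recent 90 days
druid.coordinator.kill.on=true
druid.coordinator.kill.period=P1D
druid.coordinator.kill.durationToRetain=P90D
druid.coordinator.kill.maxSegments=100
```
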
@@ -734,7 +736,8 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |Property|Possible Values|Description|Default|
 |--------|---------------|-----------|-------|
 |`druid.serverview.type`|batch or http|Segment discovery method to use. "http" enables discovering segments using HTTP instead of zookeeper.|batch|
-|`druid.coordinator.loadqueuepeon.type`|curator or http|Whether to use "http" or "curator" implementation to assign segment loads/drops to Historical|curator|
+|`druid.coordinator.loadqueuepeon.type`|curator or http|Whether to use "http" or "curator" implementation to assign segment loads/drops to historical|curator|
+|`druid.coordinator.segment.awaitInitializationOnStart`|true or false|Whether the Coordinator will wait for its view of segments to fully initialize before starting up. If set to 'true', the Coordinator's HTTP server will not start up, and the Coordinator will not announce itself as available, until the server view is initialized.|true|
 
 ###### Additional config when "http" loadqueuepeon is used
 |Property|Description|Default|
@@ -1061,7 +1064,7 @@ Example: a function that sends batch_index_task to workers 10.0.0.1 and 10.0.0.2
 ```
 {
 "type":"javascript",
-"function":"function (config, zkWorkers, task) {\nvar batch_workers = new java.util.ArrayList();\nbatch_workers.add(\"10.0.0.1\");\nbatch_workers.add(\"10.0.0.2\");\nworkers = zkWorkers.keySet().toArray();\nvar sortedWorkers = new Array()\n;for(var i = 0; i < workers.length; i++){\n sortedWorkers[i] = workers[i];\n}\nArray.prototype.sort.call(sortedWorkers,function(a, b){return zkWorkers.get(b).getCurrCapacityUsed() - zkWorkers.get(a).getCurrCapacityUsed();});\nvar minWorkerVer = config. [...]
+"function":"function (config, zkWorkers, task) {\nvar batch_workers = new java.util.ArrayList();\nbatch_workers.add(\"middleManager1_hostname:8091\");\nbatch_workers.add(\"middleManager2_hostname:8091\");\nworkers = zkWorkers.keySet().toArray();\nvar sortedWorkers = new Array()\n;for(var i = 0; i < workers.length; i++){\n sortedWorkers[i] = workers[i];\n}\nArray.prototype.sort.call(sortedWorkers,function(a, b){return zkWorkers.get(b).getCurrCapacityUsed() - zkWorkers.get(a).getCurrCapaci [...]
 }
 ```
 
@@ -1251,8 +1254,8 @@ These Historical configurations can be defined in the `historical/runtime.proper
 |`druid.segmentCache.dropSegmentDelayMillis`|How long a process delays before completely dropping segment.|30000 (30 seconds)|
 |`druid.segmentCache.infoDir`|Historical processes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.|${first_location}/info_dir|
 |`druid.segmentCache.announceIntervalMillis`|How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.|5000 (5 seconds)|
-|`druid.segmentCache.numLoadingThreads`|How many segments to drop or load concurrently from from deep storage.|10|
-|`druid.segmentCache.numBootstrapThreads`|How many segments to load concurrently from local storage at startup.|Same as numLoadingThreads|
+|`druid.segmentCache.numLoadingThreads`|How many segments to drop or load concurrently from deep storage. Note that the work of loading segments involves downloading segments from deep storage, decompressing them, and loading them into a memory-mapped location, so the work is not all I/O bound. Depending on CPU and network load, this config can be increased to a higher value.|Number of cores|
+|`druid.coordinator.loadqueuepeon.curator.numCallbackThreads`|Number of threads for executing callback actions associated with loading or dropping of segments. One might want to increase this number when noticing clusters are lagging behind w.r.t. balancing segments across historical nodes.|2|
 
 In `druid.segmentCache.locations`, *freeSpacePercent* was added because the *maxSize* setting is only a theoretical limit and assumes that much space will always be available for storing segments. In case a Druid bug leaves unaccounted segment files on disk, or some other process writes to the disk, this check can start failing segment loading early, before the disk fills up completely, leaving the host otherwise usable.
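
A minimal sketch of a Historical segment cache matching the description above (path and sizes are placeholders):

```
# One cache location, 300GB max, keep at least 5% of the disk free
druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":300000000000,"freeSpacePercent":5.0}]
# Load/drop more segments in parallel when CPU and network headroom allow
druid.segmentCache.numLoadingThreads=16
```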
 
@@ -1421,7 +1424,6 @@ The Druid SQL server is configured through the following properties on the Broke
 |`druid.sql.planner.selectThreshold`|Page size threshold for [Select queries](../querying/select-query.html). Select queries for larger resultsets will be issued back-to-back using pagination.|1000|
 |`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|true|
 |`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.html) will be used instead.|true|
-|`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|false|
 |`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have filter conditions on the __time column so that all generated native queries will have user-specified intervals. If true, all queries without a filter condition on the __time column will fail.|false|
 |`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
 |`druid.sql.planner.serializeComplexValues`|Whether to serialize "complex" output values; if false, the class name is returned instead of the serialized value.|true|
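
For instance, a Broker that enables Druid SQL and requires time-bounded queries might carry settings like these (values are illustrative):

```
# Enable the SQL layer on the Broker
druid.sql.enable=true
# Force queries to filter on __time so native queries always carry intervals
druid.sql.planner.requireTimeCondition=true
# Interpret time functions and literals in a specific zone instead of UTC
druid.sql.planner.sqlTimeZone=America/Los_Angeles
```
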
@@ -1454,7 +1456,7 @@ See [cache configuration](#cache-configuration) for how to configure cache setti
 
 This section describes caching configuration that is common to Broker, Historical, and MiddleManager/Peon processes.
  
-Caching can optionally be enabled on the Broker, Historical, and MiddleManager/Peon processses. See [Broker](#broker-caching), 
+Caching can optionally be enabled on the Broker, Historical, and MiddleManager/Peon processes. See [Broker](#broker-caching),
 [Historical](#Historical-caching), and [Peon](#peon-caching) configuration options for how to enable it for different processes.
 
 Druid uses a local in-memory cache by default, unless a different type of cache is specified.
diff --git a/docs/0.14.0-incubating/configuration/logging.md b/docs/0.14.0-incubating/configuration/logging.md
index 1c89b7d..28c9052 100644
--- a/docs/0.14.0-incubating/configuration/logging.md
+++ b/docs/0.14.0-incubating/configuration/logging.md
@@ -53,3 +53,36 @@ An example log4j2.xml ships with Druid under config/_common/log4j2.xml, and a sa
   </Loggers>
 </Configuration>
 ```
+
+## My logs are really chatty, can I set them to asynchronously write?
+
+Yes, using a `log4j2.xml` similar to the following causes some of the more chatty classes to write asynchronously:
+
+```
+<?xml version="1.0" encoding="UTF-8" ?>
+<Configuration status="WARN">
+  <Appenders>
+    <Console name="Console" target="SYSTEM_OUT">
+      <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/>
+    </Console>
+  </Appenders>
+  <Loggers>
+    <AsyncLogger name="org.apache.druid.curator.inventory.CuratorInventoryManager" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <AsyncLogger name="org.apache.druid.client.BatchServerInventoryView" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <!-- Make extra sure nobody adds logs in a bad way that can hurt performance -->
+    <AsyncLogger name="org.apache.druid.client.ServerInventoryView" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <AsyncLogger name ="org.apache.druid.java.util.http.client.pool.ChannelResourceFactory" level="info" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <Root level="info">
+      <AppenderRef ref="Console"/>
+    </Root>
+  </Loggers>
+</Configuration>
+```
diff --git a/docs/0.14.0-incubating/configuration/realtime.md b/docs/0.14.0-incubating/configuration/realtime.md
index 49cc934..dd319fe 100644
--- a/docs/0.14.0-incubating/configuration/realtime.md
+++ b/docs/0.14.0-incubating/configuration/realtime.md
@@ -95,4 +95,4 @@ You can optionally configure caching to be enabled on the realtime process by se
 |`druid.realtime.cache.unCacheable`|All druid query types|All query types to not cache.|`["select"]`|
 |`druid.realtime.cache.maxEntrySize`|positive integer or -1|Maximum size of an individual cache entry (processed results for one segment), in bytes, or -1 for unlimited.|`1000000` (1MB)|
 
-See [cache configuration](caching.html) for how to configure cache settings.
+See [cache configuration](index.html#cache-configuration) for how to configure cache settings.
diff --git a/docs/0.14.0-incubating/dependencies/metadata-storage.md b/docs/0.14.0-incubating/dependencies/metadata-storage.md
index e76eb2f..c05e732 100644
--- a/docs/0.14.0-incubating/dependencies/metadata-storage.md
+++ b/docs/0.14.0-incubating/dependencies/metadata-storage.md
@@ -32,7 +32,10 @@ Derby is the default metadata store for Druid, however, it is not suitable for p
 [MySQL](../development/extensions-core/mysql.html) and [PostgreSQL](../development/extensions-core/postgresql.html) are more production suitable metadata stores.
 
 <div class="note caution">
-Derby is not suitable for production use as a metadata store. Use MySQL or PostgreSQL instead.
+The metadata store holds all of the metadata that is essential for a Druid cluster to work.
+For production clusters, consider using MySQL or PostgreSQL instead of Derby.
+It is also highly recommended to set up a highly available environment,
+because there is no way to restore metadata if it is lost.
 </div>
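
As an illustration of the recommendation above, pointing Druid at a MySQL metadata store typically involves properties along these lines (host, database, and credentials are placeholders; the `mysql-metadata-storage` extension must be loaded):

```
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://metadata-db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=changeme
```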
 
 ## Using derby
diff --git a/docs/0.14.0-incubating/design/coordinator.md b/docs/0.14.0-incubating/design/coordinator.md
index 49d8a51..0dbbd47 100644
--- a/docs/0.14.0-incubating/design/coordinator.md
+++ b/docs/0.14.0-incubating/design/coordinator.md
@@ -52,8 +52,7 @@ Segments can be automatically loaded and dropped from the cluster based on a set
 
 ### Cleaning Up Segments
 
-Each run, the Druid Coordinator compares the list of available database segments in the database with the current segments in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
-Note that if all segments in database are deleted(or marked unused), then Coordinator will not drop anything from the Historicals. This is done to prevent a race condition in which the Coordinator would drop all segments if it started running cleanup before it finished polling the database for available segments for the first time and believed that there were no segments.
+Each run, the Druid coordinator compares the list of available database segments in the database with the current segments in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
 
 ### Segment Availability
 
diff --git a/docs/0.14.0-incubating/design/index.md b/docs/0.14.0-incubating/design/index.md
index ec7e38a..191a7d6 100644
--- a/docs/0.14.0-incubating/design/index.md
+++ b/docs/0.14.0-incubating/design/index.md
@@ -24,18 +24,23 @@ title: "Apache Druid (incubating) Design"
 
 # What is Druid?<a id="what-is-druid"></a>
 
-Apache Druid (incubating) is a data store designed for high-performance slice-and-dice analytics
-("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)"-style) on large data sets. Druid is most often
-used as a data store for powering GUI analytical applications, or as a backend for highly-concurrent APIs that need
-fast aggregations. Common application areas for Druid include:
+Apache Druid (incubating) is a real-time analytics database designed for fast slice-and-dice analytics
+("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Druid is most often
+used as a database for powering use cases where real-time ingest, fast query performance, and high uptime are important. 
+As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly-concurrent APIs 
+that need fast aggregations. Druid works best with event-oriented data.
 
-- Clickstream analytics
-- Network flow analytics
+Common application areas for Druid include:
+
+- Clickstream analytics (web and mobile analytics)
+- Network telemetry analytics (network performance monitoring)
 - Server metrics storage
+- Supply chain analytics (manufacturing metrics)
 - Application performance metrics
-- Digital marketing analytics
+- Digital marketing/advertising analytics
 - Business intelligence / OLAP
 
+Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of 
 Druid's key features are:
 
 1. **Columnar storage format.** Druid uses column-oriented storage, meaning it only needs to load the exact columns
@@ -45,7 +50,7 @@ column is stored optimized for its particular data type, which supports fast sca
 offer ingest rates of millions of records/sec, retention of trillions of records, and query latencies of sub-second to a
 few seconds.
 3. **Massively parallel processing.** Druid can process a query in parallel across the entire cluster.
-4. **Realtime or batch ingestion.** Druid can ingest data either realtime (ingested data is immediately available for
+4. **Realtime or batch ingestion.** Druid can ingest data either real-time (ingested data is immediately available for
 querying) or in batches.
 5. **Self-healing, self-balancing, easy to operate.** As an operator, to scale the cluster out or in, simply add or
 remove servers and the cluster will rebalance itself automatically, in the background, without any downtime. If any
@@ -59,11 +64,14 @@ Druid servers, replication ensures that queries are still possible while the sys
 7. **Indexes for quick filtering.** Druid uses [CONCISE](https://arxiv.org/pdf/1004.0403) or
 [Roaring](https://roaringbitmap.org/) compressed bitmap indexes to create indexes that power fast filtering and
 searching across multiple columns.
-8. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and
+8. **Time-based partitioning.** Druid first partitions data by time, and can additionally partition based on other fields. 
+This means time-based queries will only access the partitions that match the time range of the query. This leads to 
+significant performance improvements for time-based data. 
+9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and
 computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often
 substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also
 offers exact count-distinct and exact ranking.
-9. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This
+10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This
 summarization partially pre-aggregates your data, and can lead to big costs savings and performance boosts.
 
 # When should I use Druid?<a id="when-to-use-druid"></a>
@@ -85,7 +93,8 @@ Situations where you would likely _not_ want to use Druid include:
 - You need low-latency updates of _existing_ records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done using
 background batch jobs).
 - You are building an offline reporting system where query latency is not very important.
-- You want to do "big" joins (joining one big fact table to another big fact table).
+- You want to do "big" joins (joining one big fact table to another big fact table) and you are okay with these queries 
+taking up to hours to complete.
 
 # Architecture
 
@@ -157,7 +166,7 @@ The following diagram shows how queries and data flow through this architecture,
 Druid data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is
 partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for
 example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more
-"segments". Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
+["segments"](../design/segments.html). Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
 organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:
 
 <img src="../../img/druid-timeline.png" width="800" />
@@ -183,10 +192,10 @@ cluster.
 
 # Query processing
 
-Queries first enter the Broker, where the Broker will identify which segments have data that may pertain to that query.
+Queries first enter the [Broker](../design/broker.html), where the Broker will identify which segments have data that may pertain to that query.
 The list of segments is always pruned by time, and may also be pruned by other attributes depending on how your
-datasource is partitioned. The Broker will then identify which Historicals and MiddleManagers are serving those segments
-and send a rewritten subquery to each of those processes. The Historical/MiddleManager processes will take in the
+datasource is partitioned. The Broker will then identify which [Historicals](../design/historical.html) and 
+[MiddleManagers](../design/middlemanager.html) are serving those segments and send a rewritten subquery to each of those processes. The Historical/MiddleManager processes will take in the
 queries, process them and return results. The Broker receives results and merges them together to get the final answer,
 which it returns to the original caller.
 
@@ -200,4 +209,4 @@ So Druid uses three different techniques to maximize query performance:
 
 - Pruning which segments are accessed for each query.
 - Within each segment, using indexes to identify which rows must be accessed.
-- Within each segment, only reading the specific rows and columns that are relevant to a particular query.
\ No newline at end of file
+- Within each segment, only reading the specific rows and columns that are relevant to a particular query.
diff --git a/docs/0.14.0-incubating/design/segments.md b/docs/0.14.0-incubating/design/segments.md
index d8d69c1..adc454b 100644
--- a/docs/0.14.0-incubating/design/segments.md
+++ b/docs/0.14.0-incubating/design/segments.md
@@ -28,7 +28,7 @@ Apache Druid (incubating) stores its index in *segment files*, which are partiti
 time. In a basic setup, one segment file is created for each time
 interval, where the time interval is configurable in the
 `segmentGranularity` parameter of the `granularitySpec`, which is
-documented [here](../ingestion/ingestion-spec.html#granularityspec).  For druid to
+documented [here](../ingestion/ingestion-spec.html#granularityspec).  For Druid to
 operate well under heavy query load, it is important for the segment
 file size to be within the recommended range of 300mb-700mb. If your
 segment files are larger than this range, then consider either
diff --git a/docs/0.14.0-incubating/development/build.md b/docs/0.14.0-incubating/development/build.md
index 3600406..b28a836 100644
--- a/docs/0.14.0-incubating/development/build.md
+++ b/docs/0.14.0-incubating/development/build.md
@@ -31,9 +31,12 @@ For building the latest code in master, follow the instructions [here](https://g
 #### Prerequisites
 
 ##### Installing Java and Maven:
-- [JDK 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
+- JDK 8, 8u92+. We recommend using an OpenJDK distribution that provides long-term support and open-source licensing,
+  like [Amazon Corretto](https://aws.amazon.com/corretto/) or [Azul Zulu](https://www.azul.com/downloads/zulu/).
 - [Maven version 3.x](http://maven.apache.org/download.cgi)
 
+
+
 ##### Downloading the source:
 
 ```bash
diff --git a/docs/0.14.0-incubating/development/experimental.md b/docs/0.14.0-incubating/development/experimental.md
index adf4e24..eb3c051 100644
--- a/docs/0.14.0-incubating/development/experimental.md
+++ b/docs/0.14.0-incubating/development/experimental.md
@@ -24,16 +24,15 @@ title: "Experimental Features"
 
 # Experimental Features
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+Features often start out in an "experimental" status, indicating that they are still evolving.
+This can mean any of the following things:
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+1. The feature's API may change even in minor releases or patch releases.
+2. The feature may have known "missing" pieces that will be added later.
+3. The feature may or may not have received full battle-testing in production environments.
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+All experimental features are optional.
 
-```
-druid.extensions.loadList=["druid-histogram"]
-```
-
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+Note that not all of these points apply to every experimental feature. Some have been battle-tested in terms of
+implementation, but are still marked experimental due to an evolving API. Please check the documentation for each
+feature for full details.
diff --git a/docs/0.14.0-incubating/development/extensions-contrib/distinctcount.md b/docs/0.14.0-incubating/development/extensions-contrib/distinctcount.md
index a392360..7cf67b5 100644
--- a/docs/0.14.0-incubating/development/extensions-contrib/distinctcount.md
+++ b/docs/0.14.0-incubating/development/extensions-contrib/distinctcount.md
@@ -28,8 +28,8 @@ To use this Apache Druid (incubating) extension, make sure to [include](../../op
 
 Additionally, follow these steps:
 
-(1) First, use a single dimension hash-based partition spec to partition data by a single dimension. For example visitor_id. This to make sure all rows with a particular value for that dimension will go into the same segment, or this might over count.
-(2) Second, use distinctCount to calculate the distinct count, make sure queryGranularity is divided exactly by segmentGranularity or else the result will be wrong.
+1. First, use a single-dimension hash-based partition spec to partition data by a single dimension, for example visitor_id. This is to make sure all rows with a particular value for that dimension go into the same segment; otherwise this might over-count.
+2. Second, use distinctCount to calculate the distinct count; make sure queryGranularity is divided exactly by segmentGranularity, or else the result will be wrong.
 
 There are some limitations. When used with groupBy, the number of groupBy keys should not exceed maxIntermediateRows in any segment; if it does, the result will be wrong. When used with topN, numValuesPerPass should not be too big; if it is, distinctCount will use a lot of memory and might cause the JVM to go out of memory.
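
For reference, loading this extension is typically just a matter of adding it to the extension load list (assuming the artifact name `druid-distinctcount`):

```
druid.extensions.loadList=["druid-distinctcount"]
```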
 
diff --git a/docs/0.14.0-incubating/development/extensions-contrib/influx.md b/docs/0.14.0-incubating/development/extensions-contrib/influx.md
index c5c071b..62e036b 100644
--- a/docs/0.14.0-incubating/development/extensions-contrib/influx.md
+++ b/docs/0.14.0-incubating/development/extensions-contrib/influx.md
@@ -35,6 +35,7 @@ A typical line looks like this:
 ```cpu,application=dbhost=prdb123,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000```
 
 which contains four parts:
+
   - measurement: A string indicating the name of the measurement represented (e.g. cpu, network, web_requests)
   - tags: zero or more key-value pairs (i.e. dimensions)
   - measurements: one or more key-value pairs; values can be numeric, boolean, or string
@@ -43,6 +44,7 @@ which contains four parts:
 The parser extracts these fields into a map, giving the measurement the key `measurement` and the timestamp the key `_ts`. The tag and measurement keys are copied verbatim, so users should take care to avoid name collisions. It is up to the ingestion spec to decide which fields should be treated as dimensions and which should be treated as metrics (typically tags correspond to dimensions and measurements correspond to metrics).
 
 The parser is configured like so:
+
 ```json
 "parser": {
       "type": "string",
diff --git a/docs/0.14.0-incubating/development/extensions-contrib/materialized-view.md b/docs/0.14.0-incubating/development/extensions-contrib/materialized-view.md
index 95bfde9..963a944 100644
--- a/docs/0.14.0-incubating/development/extensions-contrib/materialized-view.md
+++ b/docs/0.14.0-incubating/development/extensions-contrib/materialized-view.md
@@ -33,6 +33,7 @@ In materialized-view-maintenance, dataSouces user ingested are called "base-data
 The `derivativeDataSource` supervisor is used to keep the timeline of derived-dataSource consistent with base-dataSource. Each `derivativeDataSource` supervisor  is responsible for one derived-dataSource.
 
 A sample derivativeDataSource supervisor spec is shown below:
+
 ```json
    {
        "type": "derivativeDataSource",
@@ -90,6 +91,7 @@ A sample derivativeDataSource supervisor spec is shown below:
 In materialized-view-selection, we implement a new query type `view`. When we request a view query, Druid will try its best to optimize the query based on query dataSource and intervals.
 
 A sample view query spec is shown below:
+
 ```json
    {
        "queryType": "view",
@@ -124,6 +126,7 @@ A sample view query spec is shown below:
        }
    }
 ```
+
 There are 2 parts in a view query:
 
 |Field|Description|Required|
diff --git a/docs/0.14.0-incubating/development/extensions-core/druid-basic-security.md b/docs/0.14.0-incubating/development/extensions-core/druid-basic-security.md
index adba32b..e067fdf 100644
--- a/docs/0.14.0-incubating/development/extensions-core/druid-basic-security.md
+++ b/docs/0.14.0-incubating/development/extensions-core/druid-basic-security.md
@@ -172,6 +172,90 @@ Return a list of all user names.
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Return the name and role information of the user with name {userName}
 
+Example output:
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    "druidRole"
+  ]
+}
+```
+
+This API supports the following flags:
+
+- `?full`: The response will also include the full information for each role currently assigned to the user.
+
+Example output:
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "permissions": [
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "A",
+              "type": "DATASOURCE"
+            },
+            "action": "READ"
+          },
+          "resourceNamePattern": "A"
+        },
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "C",
+              "type": "CONFIG"
+            },
+            "action": "WRITE"
+          },
+          "resourceNamePattern": "C"
+        }
+      ]
+    }
+  ]
+}
+```
+
+The output format of this API when `?full` is specified is deprecated and in later versions will be switched to the output format used when both the `?full` and `?simplifyPermissions` flags are set.
+
+The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+- `?full?simplifyPermissions`: When both `?full` and `?simplifyPermissions` are set, the permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field.
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "users": null,
+      "permissions": [
+        {
+          "resource": {
+            "name": "A",
+            "type": "DATASOURCE"
+          },
+          "action": "READ"
+        },
+        {
+          "resource": {
+            "name": "C",
+            "type": "CONFIG"
+          },
+          "action": "WRITE"
+        }
+      ]
+    }
+  ]
+}
+```
+
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Create a new user with name {userName}
 
@@ -184,7 +268,58 @@ Delete the user with name {userName}
 Return a list of all role names.
 
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
-Return name and permissions for the role named {roleName}
+Return name and permissions for the role named {roleName}.
+
+Example output:
+
+```json
+{
+  "name": "druidRole2",
+  "permissions": [
+    {
+      "resourceAction": {
+        "resource": {
+          "name": "E",
+          "type": "DATASOURCE"
+        },
+        "action": "WRITE"
+      },
+      "resourceNamePattern": "E"
+    }
+  ]
+}
+```
+
+The default output format of this API is deprecated and in later versions will be switched to the output format used when the `?simplifyPermissions` flag is set. The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+This API supports the following flags:
+
+- `?full`: The output will contain an extra `users` list, containing the users that currently have this role.
+
+```json
+"users":["druid"]
+```
+
+- `?simplifyPermissions`: The permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field. The `users` field will be null when `?full` is not specified.
+
+Example output:
+
+```json
+{
+  "name": "druidRole2",
+  "users": null,
+  "permissions": [
+    {
+      "resource": {
+        "name": "E",
+        "type": "DATASOURCE"
+      },
+      "action": "WRITE"
+    }
+  ]
+}
+```
+
 
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
 Create a new role with name {roleName}.
@@ -310,6 +445,20 @@ For information on what HTTP methods are supported on a particular request endpo
 
 GET requires READ permission, while POST and DELETE require WRITE permission.
 
+### SQL Permissions
+
+Queries on Druid datasources require DATASOURCE READ permissions for the specified datasource.
+
+Queries on the [INFORMATION_SCHEMA tables](../../querying/sql.html#information-schema) will
+return information about datasources that the caller has DATASOURCE READ access to. Other
+datasources will be omitted.
+
+Queries on the [system schema tables](../../querying/sql.html#system-schema) require the following permissions:
+- `segments`: Segments will be filtered based on DATASOURCE READ permissions.
+- `servers`: The user requires STATE READ permissions.
+- `server_segments`: The user requires STATE READ permissions and segments will be filtered based on DATASOURCE READ permissions.
+- `tasks`: Tasks will be filtered based on DATASOURCE READ permissions.
+
 ## Configuration Propagation
 
 To prevent excessive load on the Coordinator, the Authenticator and Authorizer user/role database state is cached on each Druid process.
diff --git a/docs/0.14.0-incubating/development/extensions-core/druid-lookups.md b/docs/0.14.0-incubating/development/extensions-core/druid-lookups.md
index 53476eb..9f5798e 100644
--- a/docs/0.14.0-incubating/development/extensions-core/druid-lookups.md
+++ b/docs/0.14.0-incubating/development/extensions-core/druid-lookups.md
@@ -75,6 +75,7 @@ Same for Loading cache, developer can implement a new type of loading cache by i
 
 #####   Example of Polling On-heap Lookup
 This example demonstrates a polling cache that will update its on-heap cache every 10 minutes
+
 ```json
 {
     "type":"pollingLookup",
diff --git a/docs/0.14.0-incubating/development/extensions-core/kafka-ingestion.md b/docs/0.14.0-incubating/development/extensions-core/kafka-ingestion.md
index 90db0c9..b415c27 100644
--- a/docs/0.14.0-incubating/development/extensions-core/kafka-ingestion.md
+++ b/docs/0.14.0-incubating/development/extensions-core/kafka-ingestion.md
@@ -140,7 +140,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |`intermediatePersistPeriod`|ISO8601 Period|The period that determines the rate at which intermediate persists occur.|no (default == PT10M)|
 |`maxPendingPersists`|Integer|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up)|
 |`indexSpec`|Object|Tune how data is indexed, see 'IndexSpec' below for more details.|no|
-|`reportParseExceptions`|DEPRECATED. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1.|false|no|
+|`reportParseExceptions`|Boolean|*DEPRECATED*. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1.|no (default == false)|
 |`handoffConditionTimeout`|Long|Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever.|no (default == 0)|
 |`resetOffsetAutomatically`|Boolean|Whether to reset the consumer offset if the next offset that it is trying to fetch is less than the earliest available offset for that particular partition. The consumer offset will be reset to either the earliest or latest offset depending on `useEarliestOffset` property of `KafkaSupervisorIOConfig` (see below). This situation typically occurs when messages in Kafka are no longer available for consumption and therefore won't be ingested into Druid. If [...]
 |`workerThreads`|Integer|The number of threads that will be used by the supervisor for asynchronous operations.|no (default == min(10, taskCount))|
@@ -201,7 +201,6 @@ For Roaring bitmaps:
 |`completionTimeout`|ISO8601 Period|The length of time to wait before declaring a publishing task as failed and terminating it. If this is set too low, your tasks may never publish. The publishing clock for a task begins roughly after `taskDuration` elapses.|no (default == PT30M)|
 |`lateMessageRejectionPeriod`|ISO8601 Period|Configure tasks to reject messages with timestamps earlier than this period before the task was created; for example if this is set to `PT1H` and the supervisor creates a task at *2016-01-01T12:00Z*, messages with timestamps earlier than *2016-01-01T11:00Z* will be dropped. This may help prevent concurrency issues if your data stream has late messages and you have multiple pipelines that need to operate on the same segments (e.g. a realtime an [...]
 |`earlyMessageRejectionPeriod`|ISO8601 Period|Configure tasks to reject messages with timestamps later than this period after the task reached its taskDuration; for example if this is set to `PT1H`, the taskDuration is set to `PT1H` and the supervisor creates a task at *2016-01-01T12:00Z*, messages with timestamps later than *2016-01-01T14:00Z* will be dropped. **Note:** Tasks sometimes run past their task duration, for example, in cases of supervisor failover. Setting earlyMessageReject [...]
-|`skipOffsetGaps`|Boolean|Whether or not to allow gaps of missing offsets in the Kafka stream. This is required for compatibility with implementations such as MapR Streams which does not guarantee consecutive offsets. If this is false, an exception will be thrown if offsets are not consecutive.|no (default == false)|
 
 ## Operations
 
diff --git a/docs/0.14.0-incubating/development/extensions-core/parquet.md b/docs/0.14.0-incubating/development/extensions-core/parquet.md
index 9b628b9..207fae7 100644
--- a/docs/0.14.0-incubating/development/extensions-core/parquet.md
+++ b/docs/0.14.0-incubating/development/extensions-core/parquet.md
@@ -33,17 +33,20 @@ Note: `druid-parquet-extensions` depends on the `druid-avro-extensions` module,
 ## Parquet Hadoop Parser
 
 This extension provides two ways to parse Parquet files:
+
 * `parquet` - using a simple conversion contained within this extension 
 * `parquet-avro` - conversion to avro records with the `parquet-avro` library and using the `druid-avro-extensions`
  module to parse the avro data
 
 Selection of conversion method is controlled by parser type, and the correct hadoop input format must also be set in 
-the `ioConfig`,  `org.apache.druid.data.input.parquet.DruidParquetInputFormat` for `parquet` and 
-`org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat` for `parquet-avro`.
+the `ioConfig`:
+
+* `org.apache.druid.data.input.parquet.DruidParquetInputFormat` for `parquet`
+* `org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat` for `parquet-avro`
  
 
 Both parse options support auto field discovery and flattening if provided with a 
-[flattenSpec](../../ingestion/flatten-json.html) with `parquet` or `avro` as the `format`. Parquet nested list and map 
+[flattenSpec](../../ingestion/flatten-json.html) with `parquet` or `avro` as the format. Parquet nested list and map 
 [logical types](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md) _should_ operate correctly with 
 json path expressions for all supported types. `parquet-avro` sets a hadoop job property 
 `parquet.avro.add-list-element-records` to `false` (which normally defaults to `true`), in order to 'unwrap' primitive 
diff --git a/docs/0.14.0-incubating/development/extensions-core/s3.md b/docs/0.14.0-incubating/development/extensions-core/s3.md
index e93e5e0..2fa1829 100644
--- a/docs/0.14.0-incubating/development/extensions-core/s3.md
+++ b/docs/0.14.0-incubating/development/extensions-core/s3.md
@@ -45,10 +45,11 @@ As an example, to set the region to 'us-east-1' through system properties:
 |`druid.s3.secretKey`|S3 secret key.|Must be set.|
 |`druid.storage.bucket`|Bucket to store in.|Must be set.|
 |`druid.storage.baseKey`|Base key prefix to use, i.e. what directory.|Must be set.|
+|`druid.storage.disableAcl`|Boolean flag to disable ACL. If this is set to `false`, full control is granted to the bucket owner. This may require setting additional permissions. See [S3 permissions settings](#s3-permissions-settings).|false|
 |`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](#server-side-encryption) for more details.|None|
-|`druid.storage.sse.kms.keyId`|AWS KMS key ID. Can be empty if `druid.storage.sse.type` is `kms`.|None|
+|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
 |`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
-|`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used.|`https`|
+|`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used. This configuration is ignored if `druid.s3.endpoint.url` specifies a URL with a different protocol.|`https`|
 |`druid.s3.disableChunkedEncoding`|Disables chunked encoding. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#disableChunkedEncoding--) for details.|false|
 |`druid.s3.enablePathStyleAccess`|Enables path style access. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#enablePathStyleAccess--) for details.|false|
 |`druid.s3.forceGlobalBucketAccessEnabled`|Enables global bucket access. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#setForceGlobalBucketAccessEnabled-java.lang.Boolean-) for details.|false|
@@ -59,6 +60,11 @@ As an example, to set the region to 'us-east-1' through system properties:
 |`druid.s3.proxy.username`|User name to use when connecting through a proxy.|None|
 |`druid.s3.proxy.password`|Password to use when connecting through a proxy.|None|
 
+### S3 permissions settings
+
+`s3:GetObject` and `s3:PutObject` are required for pushing segments to and loading segments from S3.
+If `druid.storage.disableAcl` is set to `false`, then `s3:GetBucketAcl` and `s3:PutObjectAcl` are additionally required to set ACLs on objects.
+
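+Below is a minimal sketch of an IAM policy granting these permissions (the bucket name `my-druid-bucket` is a placeholder):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": ["s3:GetObject", "s3:PutObject", "s3:GetBucketAcl", "s3:PutObjectAcl"],
+      "Resource": ["arn:aws:s3:::my-druid-bucket", "arn:aws:s3:::my-druid-bucket/*"]
+    }
+  ]
+}
+```
+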
 ## Server-side encryption
 
 You can enable [server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html) by setting
diff --git a/docs/0.14.0-incubating/development/extensions.md b/docs/0.14.0-incubating/development/extensions.md
index 4cebe0e..15b087c 100644
--- a/docs/0.14.0-incubating/development/extensions.md
+++ b/docs/0.14.0-incubating/development/extensions.md
@@ -44,20 +44,22 @@ Core extensions are maintained by Druid committers.
 |druid-avro-extensions|Support for data in Apache Avro data format.|[link](../development/extensions-core/avro.html)|
 |druid-basic-security|Support for Basic HTTP authentication and role-based access control.|[link](../development/extensions-core/druid-basic-security.html)|
 |druid-bloom-filter|Support for providing Bloom filters in druid queries.|[link](../development/extensions-core/bloom-filter.html)|
-|druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../development/extensions-core/caffeine-cache.html)|
+|druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../configuration/index.html#cache-configuration)|
 |druid-datasketches|Support for approximate counts and set operations with [DataSketches](http://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
 |druid-hdfs-storage|HDFS deep storage.|[link](../development/extensions-core/hdfs.html)|
 |druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, please use the [DataSketches quantiles aggregator](../development/extensions-core/datasketches-quantiles.html) from the `druid-datasketches` extension instead.|[link](../development/extensions-core/approximate-histograms.html)|
-|druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes.|[link](../development/extensions-core/kafka-eight-firehose.html)|
+|druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes (deprecated).|[link](../development/extensions-core/kafka-eight-firehose.html)|
 |druid-kafka-extraction-namespace|Kafka-based namespaced lookup. Requires namespace lookup extension.|[link](../development/extensions-core/kafka-extraction-namespace.html)|
 |druid-kafka-indexing-service|Supervised exactly-once Kafka ingestion for the indexing service.|[link](../development/extensions-core/kafka-ingestion.html)|
 |druid-kinesis-indexing-service|Supervised exactly-once Kinesis ingestion for the indexing service.|[link](../development/extensions-core/kinesis-ingestion.html)|
 |druid-kerberos|Kerberos authentication for druid processes.|[link](../development/extensions-core/druid-kerberos.html)|
 |druid-lookups-cached-global|A module for [lookups](../querying/lookups.html) providing a jvm-global eager caching for lookups. It provides JDBC and URI implementations for fetching lookup data.|[link](../development/extensions-core/lookups-cached-global.html)|
 |druid-lookups-cached-single| Per lookup caching module to support the use cases where a lookup need to be isolated from the global pool of lookups |[link](../development/extensions-core/druid-lookups.html)|
+|druid-orc-extensions|Support for data in Apache ORC data format.|[link](../development/extensions-core/orc.html)|
 |druid-parquet-extensions|Support for data in Apache Parquet data format. Requires druid-avro-extensions to be loaded.|[link](../development/extensions-core/parquet.html)|
 |druid-protobuf-extensions| Support for data in Protobuf data format.|[link](../development/extensions-core/protobuf.html)|
 |druid-s3-extensions|Interfacing with data in AWS S3, and using S3 as deep storage.|[link](../development/extensions-core/s3.html)|
+|druid-ec2-extensions|Interfacing with AWS EC2 for autoscaling middle managers.|UNDOCUMENTED|
 |druid-stats|Statistics related module including variance and standard deviation.|[link](../development/extensions-core/stats.html)|
 |mysql-metadata-storage|MySQL metadata store.|[link](../development/extensions-core/mysql.html)|
 |postgresql-metadata-storage|PostgreSQL metadata store.|[link](../development/extensions-core/postgresql.html)|
@@ -72,7 +74,7 @@ Community extensions are not maintained by Druid committers, although we accept
 A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball.
 If you'd like to take on maintenance for a community extension, please post on [dev@druid.apache.org](https://lists.apache.org/list.html?dev@druid.apache.org) to let us know!
 
-All of these community extensions can be downloaded using *pull-deps* with the coordinate org.apache.druid.extensions.contrib:EXTENSION_NAME:LATEST_DRUID_STABLE_VERSION.
+All of these community extensions can be downloaded using [pull-deps](../operations/pull-deps.html) with the `-c` coordinate option, specifying `org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}`.
 
 |Name|Description|Docs|
 |----|-----------|----|
@@ -81,8 +83,7 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |druid-cassandra-storage|Apache Cassandra deep storage.|[link](../development/extensions-contrib/cassandra.html)|
 |druid-cloudfiles-extensions|Rackspace Cloudfiles deep storage and firehose.|[link](../development/extensions-contrib/cloudfiles.html)|
 |druid-distinctcount|DistinctCount aggregator|[link](../development/extensions-contrib/distinctcount.html)|
-|druid-kafka-eight-simpleConsumer|Kafka ingest firehose (low level consumer).|[link](../development/extensions-contrib/kafka-simple.html)|
-|druid-orc-extensions|Support for data in Apache Orc data format.|[link](../development/extensions-contrib/orc.html)|
+|druid-kafka-eight-simpleConsumer|Kafka ingest firehose (low level consumer) (deprecated).|[link](../development/extensions-contrib/kafka-simple.html)|
 |druid-rabbitmq|RabbitMQ firehose.|[link](../development/extensions-contrib/rabbitmq.html)|
 |druid-redis-cache|A cache implementation for Druid based on Redis.|[link](../development/extensions-contrib/redis-cache.html)|
 |druid-rocketmq|RocketMQ firehose.|[link](../development/extensions-contrib/rocketmq.html)|
@@ -94,6 +95,8 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |kafka-emitter|Kafka metrics emitter|[link](../development/extensions-contrib/kafka-emitter.html)|
 |druid-thrift-extensions|Support thrift ingestion |[link](../development/extensions-contrib/thrift.html)|
 |druid-opentsdb-emitter|OpenTSDB metrics emitter |[link](../development/extensions-contrib/opentsdb-emitter.html)|
+|materialized-view-selection, materialized-view-maintenance|Materialized View|[link](../development/extensions-contrib/materialized-view.html)|
+|druid-moving-average-query|Support for [Moving Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate [Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions) in Druid queries.|[link](../development/extensions-contrib/moving-average-query.html)|
 
 ## Promoting Community Extension to Core Extension
 
diff --git a/docs/0.14.0-incubating/development/modules.md b/docs/0.14.0-incubating/development/modules.md
index c665b8e..43a9ea8 100644
--- a/docs/0.14.0-incubating/development/modules.md
+++ b/docs/0.14.0-incubating/development/modules.md
@@ -114,7 +114,7 @@ In this way, you can validate both push (at realtime process) and pull (at Histo
 
 * DataSegmentPusher
 
-Wherever your data storage (cloud storage service, distributed file system, etc.) is, you should be able to see two new files: `descriptor.json` (`partitionNum_descriptor.json` for HDFS data storage) and `index.zip` (`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
+Wherever your data storage (cloud storage service, distributed file system, etc.) is, you should be able to see one new file: `index.zip` (`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
 
 * DataSegmentPuller
 
@@ -130,7 +130,7 @@ The following example was retrieved from a Historical process configured to use
 00Z_2015-04-14T02:41:09.484Z
 2015-04-14T02:42:33,463 INFO [ZkCoordinator-0] org.apache.druid.guice.JsonConfigurator - Loaded class[class org.apache.druid.storage.azure.AzureAccountConfig] from props[drui
 d.azure.] as [org.apache.druid.storage.azure.AzureAccountConfig@759c9ad9]
-2015-04-14T02:49:08,275 INFO [ZkCoordinator-0] org.apache.druid.java.util.common.CompressionUtils - Unzipping file[/opt/druid/tmp/compressionUtilZipCache1263964429587449785.z
+2015-04-14T02:49:08,275 INFO [ZkCoordinator-0] org.apache.druid.utils.CompressionUtils - Unzipping file[/opt/druid/tmp/compressionUtilZipCache1263964429587449785.z
 ip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0]
 2015-04-14T02:49:08,276 INFO [ZkCoordinator-0] org.apache.druid.storage.azure.AzureDataSegmentPuller - Loaded 1196 bytes from [dde/2015-01-02T00:00:00.000Z_2015-01-03
 T00:00:00.000Z/2015-04-14T02:41:09.484Z/0/index.zip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0]
@@ -147,7 +147,7 @@ To mark a segment as not used, you need to connect to your metadata storage and
 
 To start a segment killing task, you need to access the old Coordinator console `http://<COORDINATOR_IP>:<COORDINATOR_PORT>/old-console/kill.html`, then select the appropriate datasource and input a time range (e.g. `2000/3000`).
 
-After the killing task ends, both `descriptor.json` (`partitionNum_descriptor.json` for HDFS data storage)  and `index.zip` (`partitionNum_index.zip` for HDFS data storage) files should be deleted from the data storage.
+After the killing task ends, the `index.zip` (`partitionNum_index.zip` for HDFS data storage) file should be deleted from the data storage.
 
 ### Adding a new Firehose
 
diff --git a/docs/0.14.0-incubating/development/overview.md b/docs/0.14.0-incubating/development/overview.md
index c0ca8de..ad360a5 100644
--- a/docs/0.14.0-incubating/development/overview.md
+++ b/docs/0.14.0-incubating/development/overview.md
@@ -73,4 +73,4 @@ At some point in the future, we will likely move the internal UI code out of cor
 ## Client Libraries
 
 We welcome contributions for new client libraries to interact with Druid. See client 
-[libraries](../development/libraries.html) for existing client libraries.
+[libraries](/libraries.html) for existing client libraries.
diff --git a/docs/0.14.0-incubating/development/router.md b/docs/0.14.0-incubating/development/router.md
index 3c8f3b7..11508ac 100644
--- a/docs/0.14.0-incubating/development/router.md
+++ b/docs/0.14.0-incubating/development/router.md
@@ -24,6 +24,11 @@ title: "Router Process"
 
 # Router Process
 
+<div class="note info">
+The Router is an optional and <a href="../development/experimental.html">experimental</a> feature because its recommended place in the Druid cluster architecture is still evolving.
+However, it has been battle-tested in production, and it hosts the powerful <a href="../operations/management-uis.html#druid-console">Druid Console</a>, so you should feel safe deploying it.
+</div>
+
 The Apache Druid (incubating) Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries [...]
 
 For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range. 
diff --git a/docs/0.14.0-incubating/ingestion/firehose.md b/docs/0.14.0-incubating/ingestion/firehose.md
index 51749b9..f35bcc0 100644
--- a/docs/0.14.0-incubating/ingestion/firehose.md
+++ b/docs/0.14.0-incubating/ingestion/firehose.md
@@ -74,6 +74,39 @@ A sample http firehose spec is shown below:
 }
 ```
 
+The configurations below can optionally be used if the URIs specified in the spec require a Basic Authentication Header.
+Omitting these fields from your spec will result in HTTP requests with no Basic Authentication Header.
+
+|property|description|default|
+|--------|-----------|-------|
+|httpAuthenticationUsername|Username to use for authentication with specified URIs|None|
+|httpAuthenticationPassword|PasswordProvider to use with specified URIs|None|
+
+Example with authentication fields using the DefaultPasswordProvider (this requires the password to be in the ingestion spec):
+
+```json
+{
+    "type": "http",
+    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
+    "httpAuthenticationUsername": "username",
+    "httpAuthenticationPassword": "password123"
+}
+```
+
+You can also use the other existing Druid PasswordProviders. Here is an example using the EnvironmentVariablePasswordProvider:
+
+```json
+{
+    "type": "http",
+    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
+    "httpAuthenticationUsername": "username",
+    "httpAuthenticationPassword": {
+        "type": "environment",
+        "variable": "HTTP_FIREHOSE_PW"
+    }
+}
+```
+
 The configurations below can optionally be used for tuning the firehose performance.
 
 |property|description|default|
@@ -87,7 +120,8 @@ The below configurations can be optionally used for tuning the firehose performa
 ### IngestSegmentFirehose
 
 This Firehose can be used to read the data from existing druid segments.
-It can be used ingest existing druid segments using a new schema and change the name, dimensions, metrics, rollup, etc. of the segment.
+It can be used to ingest existing druid segments using a new schema and change the name, dimensions, metrics, rollup, etc. of the segment.
+This firehose is _splittable_ and can be used by [native parallel index tasks](./native_tasks.html#parallel-index-task).
 A sample ingest firehose spec is shown below -
 
 ```json
@@ -106,11 +140,15 @@ A sample ingest firehose spec is shown below -
 |dimensions|The list of dimensions to select. If left empty, no dimensions are returned. If left null or not defined, all dimensions are returned. |no|
 |metrics|The list of metrics to select. If left empty, no metrics are returned. If left null or not defined, all metrics are selected.|no|
 |filter| See [Filters](../querying/filters.html)|no|
+|maxInputSegmentBytesPerTask|When used with the native parallel index task, the maximum number of bytes of input segments to process in a single task. If a single segment is larger than this number, it will be processed by itself in a single task (input segments are never split across tasks). Defaults to 150MB.|no|
 
 #### SqlFirehose
 
 SqlFirehoseFactory can be used to ingest events residing in an RDBMS. The database connection information is provided as part of the ingestion spec. For each query, the results are fetched locally and indexed. If there are multiple queries from which data needs to be indexed, queries are prefetched in the background up to `maxFetchCapacityBytes` bytes.
-An example is shown below:
+
+Requires one of the following extensions:
+ * [MySQL Metadata Store](../development/extensions-core/mysql.html).
+ * [PostgreSQL Metadata Store](../development/extensions-core/postgresql.html).
 
 ```json
 {
@@ -118,20 +156,19 @@ An example is shown below:
     "database": {
         "type": "mysql",
         "connectorConfig" : {
-        "connectURI" : "jdbc:mysql://host:port/schema",
-        "user" : "user",
-        "password" : "password"
+            "connectURI" : "jdbc:mysql://host:port/schema",
+            "user" : "user",
+            "password" : "password"
         }
      },
     "sqls" : ["SELECT * FROM table1", "SELECT * FROM table2"]
 }
 ```
 
-
 |property|description|default|required?|
 |--------|-----------|-------|---------|
 |type|This should be "sql".||Yes|
-|database|Specifies the database connection details.`type` should specify the database type and `connectorConfig` should specify the database connection properties via `connectURI`, `user` and `password`||Yes|
+|database|Specifies the database connection details.||Yes|
 |maxCacheCapacityBytes|Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.|1073741824|No|
 |maxFetchCapacityBytes|Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.|1073741824|No|
 |prefetchTriggerBytes|Threshold to trigger prefetching SQL result objects.|maxFetchCapacityBytes / 2|No|
@@ -139,6 +176,14 @@ An example is shown below:
 |foldCase|Toggle case folding of database column names. This may be enabled in cases where the database returns case insensitive column names in query results.|false|No|
 |sqls|List of SQL queries where each SQL query would retrieve the data to be indexed.||Yes|
 
+#### Database
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|The type of database to query. Valid values are `mysql` and `postgresql`.||Yes|
+|connectorConfig|Specifies the database connection properties via `connectURI`, `user` and `password`.||Yes|
+
+
 ### CombiningFirehose
 
 This firehose can be used to combine and merge data from a list of different firehoses.
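 
 As a minimal sketch (the delegate firehose specs shown here are placeholders), a combining firehose spec might look like:
 
 ```json
 {
   "type": "combining",
   "delegates": [
     { "type": "local", "baseDir": "/some/path", "filter": "*.json" },
     { "type": "http", "uris": ["http://example.com/uri1"] }
   ]
 }
 ```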
@@ -181,8 +226,9 @@ When using this firehose, events can be sent by submitting a POST request to the
 |property|description|required?|
 |--------|-----------|---------|
 |type|This should be "receiver"|yes|
-|serviceName|name used to announce the event receiver service endpoint|yes|
-|bufferSize| size of buffer used by firehose to store events|no default(100000)|
+|serviceName|Name used to announce the event receiver service endpoint|yes|
+|maxIdleTime|A firehose is automatically shut down after not receiving any events for this period of time, in milliseconds. If not specified, a firehose is never shut down due to being idle. Zero and negative values have the same effect.|no|
+|bufferSize|Size of buffer used by firehose to store events|no, default is 100000|
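+
+For example, a minimal sketch of a receiver firehose spec (the service name and buffer size are illustrative) might look like:
+
+```json
+{
+  "type": "receiver",
+  "serviceName": "eventReceiverServiceName",
+  "bufferSize": 10000
+}
+```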
 
 Shut down time for EventReceiverFirehose can be specified by submitting a POST request to
 
diff --git a/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md b/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
index 326309a..85373a0 100644
--- a/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
+++ b/docs/0.14.0-incubating/ingestion/hadoop-vs-native-batch.md
@@ -35,8 +35,8 @@ ingestion method.
 | Parallel indexing | Always parallel | Parallel if firehose is splittable | Always sequential |
 | Supported indexing modes | Replacing mode | Both appending and replacing modes | Both appending and replacing modes |
 | External dependency | Hadoop (it internally submits Hadoop jobs) | No dependency | No dependency |
-| Supported [rollup modes](/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
-| Supported partitioning methods | [Both Hash-based and range partitioning](/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
+| Supported [rollup modes](http://druid.io/docs/latest/ingestion/index.html#roll-up-modes) | Perfect rollup | Best-effort rollup | Both perfect and best-effort rollup |
+| Supported partitioning methods | [Both Hash-based and range partitioning](http://druid.io/docs/latest/ingestion/hadoop.html#partitioning-specification) | N/A | Hash-based partitioning (when `forceGuaranteedRollup` = true) |
 | Supported input locations | All locations accessible via HDFS client or Druid dataSource | All implemented [firehoses](./firehose.html) | All implemented [firehoses](./firehose.html) |
 | Supported file formats | All implemented Hadoop InputFormats | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom extension](../development/modules.html) implementing [`FiniteFirehoseFactory`](https://github.com/apache/incubator-druid/blob/master/core/src/main/java/org/apache/druid/data/input/FiniteFirehoseFactory.java) | Currently text file formats (CSV, TSV, JSON) by default. Additional formats can be added through a [custom exten [...]
 | Saving parse exceptions in ingestion report | Currently not supported | Currently not supported | Supported |
diff --git a/docs/0.14.0-incubating/ingestion/hadoop.md b/docs/0.14.0-incubating/ingestion/hadoop.md
index 249bd02..ab1963b 100644
--- a/docs/0.14.0-incubating/ingestion/hadoop.md
+++ b/docs/0.14.0-incubating/ingestion/hadoop.md
@@ -194,7 +194,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |jobProperties|Object|A map of properties to add to the Hadoop job configuration, see below for details.|no (default == null)|
 |indexSpec|Object|Tune how data is indexed. See below for more information.|no|
 |numBackgroundPersistThreads|Integer|The number of new background threads to use for incremental persists. Using this feature causes a notable increase in memory pressure and cpu usage but will make the job finish more quickly. If changing from the default of 0 (use current thread for persists), we recommend setting it to 1.|no (default == 0)|
-|forceExtendableShardSpecs|Boolean|Forces use of extendable shardSpecs. Experimental feature intended for use with the [Kafka indexing service extension](../development/extensions-core/kafka-ingestion.html).|no (default = false)|
+|forceExtendableShardSpecs|Boolean|Forces use of extendable shardSpecs. Hash-based partitioning always uses an extendable shardSpec. For single-dimension partitioning, this option should be set to true to use an extendable shardSpec. For more details on partitioning, see the [Partitioning specification](#partitioning-specification). This option can be useful when you need to append more data to an existing dataSource.|no (default = false)|
 |useExplicitVersion|Boolean|Forces HadoopIndexTask to use version.|no (default = false)|
 |logParseExceptions|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|false|no|
 |maxParseExceptions|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overrides `ignoreInvalidRows` if `maxParseExceptions` is defined.|unlimited|no|
diff --git a/docs/0.14.0-incubating/ingestion/index.md b/docs/0.14.0-incubating/ingestion/index.md
index 9141cb5..e9909a1 100644
--- a/docs/0.14.0-incubating/ingestion/index.md
+++ b/docs/0.14.0-incubating/ingestion/index.md
@@ -33,7 +33,7 @@ title: "Ingestion"
 Apache Druid (incubating) data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is
 partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for
 example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more
-"segments". Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
+["segments"](../design/segments.html). Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
 organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:
 
 <img src="../../img/druid-timeline.png" width="800" />
diff --git a/docs/0.14.0-incubating/ingestion/ingestion-spec.md b/docs/0.14.0-incubating/ingestion/ingestion-spec.md
index 6a9e5c6..3b03c5f 100644
--- a/docs/0.14.0-incubating/ingestion/ingestion-spec.md
+++ b/docs/0.14.0-incubating/ingestion/ingestion-spec.md
@@ -207,7 +207,7 @@ handle all formatting decisions on their own, without using the ParseSpec.
 | Field | Type | Description | Required |
 |-------|------|-------------|----------|
 | column | String | The column of the timestamp. | yes |
-| format | String | iso, posix, millis, micro, nano, auto or any [Joda time](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) format. | no (default == 'auto' |
+| format | String | iso, posix, millis, micro, nano, auto or any [Joda time](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) format. | no (default == 'auto') |
 
 <a name="dimensions" />
 
@@ -216,8 +216,8 @@ handle all formatting decisions on their own, without using the ParseSpec.
 | Field | Type | Description | Required |
 |-------|------|-------------|----------|
 | dimensions | JSON array | A list of [dimension schema](#dimension-schema) objects or dimension names. Providing a name is equivalent to providing a String-typed dimension schema with the given name. If this is an empty array, Druid will treat all non-timestamp, non-metric columns that do not appear in "dimensionExclusions" as String-typed dimension columns. | yes |
-| dimensionExclusions | JSON String array | The names of dimensions to exclude from ingestion. | no (default == [] |
-| spatialDimensions | JSON Object array | An array of [spatial dimensions](../development/geo.html) | no (default == [] |
+| dimensionExclusions | JSON String array | The names of dimensions to exclude from ingestion. | no (default == []) |
+| spatialDimensions | JSON Object array | An array of [spatial dimensions](../development/geo.html) | no (default == []) |
 
 #### Dimension Schema
 A dimension schema specifies the type and name of a dimension to be ingested.
diff --git a/docs/0.14.0-incubating/ingestion/native_tasks.md b/docs/0.14.0-incubating/ingestion/native_tasks.md
index d67cf8d..ad7cac9 100644
--- a/docs/0.14.0-incubating/ingestion/native_tasks.md
+++ b/docs/0.14.0-incubating/ingestion/native_tasks.md
@@ -45,7 +45,7 @@ task statuses. If one of them fails, it retries the failed task until the retryi
 If all worker tasks succeed, then it collects the reported list of generated segments and publishes those segments at once.
 
 To use this task, the `firehose` in `ioConfig` should be _splittable_. If it's not, this task runs sequentially. The
-current splittable fireshoses are [`LocalFirehose`](./firehose.html#localfirehose), [`HttpFirehose`](./firehose.html#httpfirehose)
+current splittable firehoses are [`LocalFirehose`](./firehose.html#localfirehose), [`IngestSegmentFirehose`](./firehose.html#ingestsegmentfirehose), [`HttpFirehose`](./firehose.html#httpfirehose)
 , [`StaticS3Firehose`](../development/extensions-core/s3.html#statics3firehose), [`StaticAzureBlobStoreFirehose`](../development/extensions-contrib/azure.html#staticazureblobstorefirehose)
 , [`StaticGoogleBlobStoreFirehose`](../development/extensions-contrib/google.html#staticgoogleblobstorefirehose), and [`StaticCloudFilesFirehose`](../development/extensions-contrib/cloudfiles.html#staticcloudfilesfirehose).
 
@@ -170,7 +170,7 @@ that range if there's some stray data with unexpected timestamps.
 |--------|-----------|-------|---------|
 |type|The task type, this should always be `index_parallel`.|none|yes|
 |firehose|Specify a [Firehose](../ingestion/firehose.html) here.|none|yes|
-|appendToExisting|Creates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This will only work if the existing segment set has extendable-type shardSpecs (which can be forced by setting 'forceExtendableShardSpecs' in the tuning config).|false|no|
+|appendToExisting|Creates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This will only work if the existing segment set has extendable-type shardSpecs.|false|no|
 
 #### TuningConfig
 
@@ -186,7 +186,6 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |numShards|Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data. numShards cannot be specified if maxRowsPerSegment is set.|null|no|
 |indexSpec|defines segment storage format options to be used at indexing time, see [IndexSpec](#indexspec)|null|no|
 |maxPendingPersists|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|0 (meaning one persist can be running concurrently with ingestion, and none can be queued up)|no|
-|forceExtendableShardSpecs|Forces use of extendable shardSpecs. Experimental feature intended for use with the [Kafka indexing service extension](../development/extensions-core/kafka-ingestion.html).|false|no|
 |reportParseExceptions|If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped.|false|no|
 |pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.|0|no|
 |segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [SegmentWriteOutMediumFactory](#segmentWriteOutMediumFactory).|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
@@ -377,7 +376,6 @@ An example of the result is
           "longEncoding": "longs"
         },
         "maxPendingPersists": 0,
-        "forceExtendableShardSpecs": false,
         "reportParseExceptions": false,
         "pushTimeout": 0,
         "segmentWriteOutMediumFactory": null,
@@ -541,7 +539,7 @@ that range if there's some stray data with unexpected timestamps.
 |--------|-----------|-------|---------|
 |type|The task type, this should always be "index".|none|yes|
 |firehose|Specify a [Firehose](../ingestion/firehose.html) here.|none|yes|
-|appendToExisting|Creates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This will only work if the existing segment set has extendable-type shardSpecs (which can be forced by setting 'forceExtendableShardSpecs' in the tuning config).|false|no|
+|appendToExisting|Creates segments as additional shards of the latest version, effectively appending to the segment set instead of replacing it. This will only work if the existing segment set has extendable-type shardSpecs.|false|no|
 
 #### TuningConfig
 
@@ -558,7 +556,6 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
 |partitionDimensions|The dimensions to partition on. Leave blank to select all dimensions. Only used with `forceGuaranteedRollup` = true, will be ignored otherwise.|null|no|
 |indexSpec|defines segment storage format options to be used at indexing time, see [IndexSpec](#indexspec)|null|no|
 |maxPendingPersists|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|0 (meaning one persist can be running concurrently with ingestion, and none can be queued up)|no|
-|forceExtendableShardSpecs|Forces use of extendable shardSpecs. Experimental feature intended for use with the [Kafka indexing service extension](../development/extensions-core/kafka-ingestion.html).|false|no|
 |forceGuaranteedRollup|Forces guaranteeing the [perfect rollup](../ingestion/index.html#roll-up-modes). The perfect rollup optimizes the total size of generated segments and querying time while indexing time will be increased. If this is set to true, the index task will read the entire input data twice: one for finding the optimal number of partitions per time chunk and one for generating segments. Note that the result segments would be hash-partitioned. You can set `forceExtendableShard [...]
 |reportParseExceptions|DEPRECATED. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1.|false|no|
 |pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.|0|no|
@@ -617,4 +614,4 @@ the index task immediately pushes all segments created until that moment, cleans
 continues to ingest remaining data.
 
 To enable bulk pushing mode, `forceGuaranteedRollup` should be set in the TuningConfig. Note that this option cannot
-be used with either `forceExtendableShardSpecs` of TuningConfig or `appendToExisting` of IOConfig.
+be used with `appendToExisting` of IOConfig.
diff --git a/docs/0.14.0-incubating/misc/math-expr.md b/docs/0.14.0-incubating/misc/math-expr.md
index 4df6546..c207f01 100644
--- a/docs/0.14.0-incubating/misc/math-expr.md
+++ b/docs/0.14.0-incubating/misc/math-expr.md
@@ -37,7 +37,7 @@ This expression language supports the following operators (listed in decreasing
 |*, /, %|Binary multiplicative|
 |+, -|Binary additive|
 |<, <=, >, >=, ==, !=|Binary Comparison|
-|&&,\|\||Binary Logical AND, OR|
+|&&, &#124;&#124;|Binary Logical AND, OR|
 
 Long, double, and string data types are supported. If a number contains a dot, it is interpreted as a double, otherwise it is interpreted as a long. That means, always add a '.' to your number if you want it interpreted as a double value. String literals should be quoted by single quotation marks.
 
@@ -66,12 +66,16 @@ The following built-in functions are available.
 
 |name|description|
 |----|-----------|
-|concat|concatenate a list of strings|
+|concat|concat(expr, expr...) concatenates a list of strings|
+|format|format(pattern[, args...]) returns a string formatted in the manner of Java's [String.format](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#format-java.lang.String-java.lang.Object...-).|
 |like|like(expr, pattern[, escape]) is equivalent to SQL `expr LIKE pattern`|
 |lookup|lookup(expr, lookup-name) looks up expr in a registered [query-time lookup](../querying/lookups.html)|
+|parse_long|parse_long(string[, radix]) parses a string as a long with the given radix, or 10 (decimal) if a radix is not provided.|
 |regexp_extract|regexp_extract(expr, pattern[, index]) applies a regular expression pattern and extracts a capture group index, or null if there is no match. If index is unspecified or zero, returns the substring that matched the pattern.|
 |replace|replace(expr, pattern, replacement) replaces pattern with replacement|
 |substring|substring(expr, index, length) behaves like java.lang.String's substring|
+|right|right(expr, length) returns the rightmost length characters from a string|
+|left|left(expr, length) returns the leftmost length characters from a string|
 |strlen|strlen(expr) returns length of a string in UTF-16 code units|
 |strpos|strpos(haystack, needle[, fromIndex]) returns the position of the needle within the haystack, with indexes starting from 0. The search will begin at fromIndex, or 0 if fromIndex is not specified. If the needle is not found then the function returns -1.|
 |trim|trim(expr[, chars]) remove leading and trailing characters from `expr` if they are present in `chars`. `chars` defaults to ' ' (space) if not provided.|
@@ -79,6 +83,10 @@ The following built-in functions are available.
 |rtrim|rtrim(expr[, chars]) remove trailing characters from `expr` if they are present in `chars`. `chars` defaults to ' ' (space) if not provided.|
 |lower|lower(expr) converts a string to lowercase|
 |upper|upper(expr) converts a string to uppercase|
+|reverse|reverse(expr) reverses a string|
+|repeat|repeat(expr, N) repeats a string N times|
+|lpad|lpad(expr, length, chars) returns a string of `length` from `expr` left-padded with `chars`. If `length` is shorter than the length of `expr`, the result is `expr` truncated to `length`. If either `expr` or `chars` is null, the result will be null.|
+|rpad|rpad(expr, length, chars) returns a string of `length` from `expr` right-padded with `chars`. If `length` is shorter than the length of `expr`, the result is `expr` truncated to `length`. If either `expr` or `chars` is null, the result will be null.|
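+
+As an illustrative sketch (the column names here are hypothetical), string expressions such as these can be used as an expression virtual column in a native query:
+
+```json
+{
+  "type": "expression",
+  "name": "full_name",
+  "expression": "concat(first_name, ' ', last_name)",
+  "outputType": "STRING"
+}
+```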
 
 ## Time functions
 
@@ -109,6 +117,7 @@ See javadoc of java.lang.Math for detailed explanation for each function.
 |copysign|copysign(x) would return the first floating-point argument with the sign of the second floating-point argument|
 |cos|cos(x) would return the trigonometric cosine of x|
 |cosh|cosh(x) would return the hyperbolic cosine of x|
+|cot|cot(x) would return the trigonometric cotangent of an angle x|
 |div|div(x,y) is integer division of x by y|
 |exp|exp(x) would return Euler's number raised to the power of x|
 |expm1|expm1(x) would return e^x-1|
@@ -122,10 +131,11 @@ See javadoc of java.lang.Math for detailed explanation for each function.
 |min|min(x, y) would return the smaller of two values|
 |nextafter|nextafter(x, y) would return the floating-point number adjacent to the x in the direction of the y|
 |nextUp|nextUp(x) would return the floating-point value adjacent to x in the direction of positive infinity|
+|pi|pi would return the constant value of π|
 |pow|pow(x, y) would return the value of the x raised to the power of y|
 |remainder|remainder(x, y) would return the remainder operation on two arguments as prescribed by the IEEE 754 standard|
 |rint|rint(x) would return value that is closest in value to x and is equal to a mathematical integer|
-|round|round(x) would return the closest long value to x, with ties rounding up|
+|round|round(x, y) would return the value of x rounded to y decimal places. While x can be an integer or floating-point number, y must be an integer. The type of the return value is specified by that of x. y defaults to 0 if omitted. When y is negative, x is rounded on the left side of the decimal point by y places.|
 |scalb|scalb(d, sf) would return d * 2^sf rounded as if performed by a single correctly rounded floating-point multiply to a member of the double value set|
 |signum|signum(x) would return the signum function of the argument x|
 |sin|sin(x) would return the trigonometric sine of an angle x|
diff --git a/docs/0.14.0-incubating/operations/api-reference.md b/docs/0.14.0-incubating/operations/api-reference.md
index a3685d9..9326f2b 100644
--- a/docs/0.14.0-incubating/operations/api-reference.md
+++ b/docs/0.14.0-incubating/operations/api-reference.md
@@ -151,7 +151,7 @@ Returns a list of all segments, overlapping with any of given intervals, for a d
 
 #### Datasources
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
@@ -220,6 +220,11 @@ Returns full segment metadata for a specific segment in the cluster.
 
 Return the tiers that a datasource exists in.
 
+#### Note for coordinator's POST and DELETE APIs
+
+Segments are enabled when these APIs are called, but they can be disabled again by the coordinator if any dropRule matches. Segments enabled by these APIs might not be loaded by historical processes if no loadRule matches. If an indexing or kill task runs at the same time as these APIs are invoked, the behavior is undefined. Some segments might be killed and others might be enabled. It's also possible that all segments might be disabled but at the same time, the indexing tas [...]
+
+Caution: Avoid using indexing or kill tasks and these APIs at the same time for the same datasource and time chunk. (It's fine if the time chunks or datasources don't overlap.)
+
 ##### POST
 
 * `/druid/coordinator/v1/datasources/{dataSourceName}`
@@ -230,6 +235,26 @@ Enables all segments of datasource which are not overshadowed by others.
 
 Enables a segment of a datasource.
 
+* `/druid/coordinator/v1/datasources/{dataSourceName}/markUsed`
+
+* `/druid/coordinator/v1/datasources/{dataSourceName}/markUnused`
+
+Marks segments (un)used for a datasource by interval or by a set of segment IDs.
+
+When marking segments used, only segments that are not overshadowed will be updated.
+
+The request payload contains the interval or set of segment IDs to be marked unused.
+Either an interval or a set of segment IDs should be provided; if both or neither are provided in the payload, the API throws an error (400 BAD REQUEST).
+
+The interval specifies the start and end times as ISO 8601 strings. `interval=(start/end)` where start and end are both inclusive; only segments completely contained within the specified interval will be disabled, and partially overlapping segments will not be affected.
+
+JSON Request Payload:
+
+|Key|Description|Example|
+|----------|-------------|---------|
+|`interval`|The interval for which to mark segments unused|"2015-09-12T03:00:00.000Z/2015-09-12T05:00:00.000Z"|
+|`segmentIds`|Set of segment Ids to be marked unused|["segmentId1", "segmentId2"]|
+
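+For example, a request payload marking segments unused by interval (the interval value is illustrative) might look like:
+
+```json
+{
+  "interval": "2015-09-12T03:00:00.000Z/2015-09-12T05:00:00.000Z"
+}
+```
+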
 ##### DELETE<a name="coordinator-delete"></a>
 
 * `/druid/coordinator/v1/datasources/{dataSourceName}`
@@ -247,7 +272,7 @@ Disables a segment.
 
 #### Retention Rules
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
@@ -296,7 +321,7 @@ Optional Header Parameters for auditing the config change can also be specified.
 
 #### Intervals
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
@@ -389,7 +414,7 @@ only want the active leader to be considered in-service at the load balancer.
 
 #### Tasks<a name="overlord-tasks"></a> 
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
@@ -402,7 +427,7 @@ Retrieve list of tasks. Accepts query string parameters `state`, `datasource`, `
 |---|---|
 |`state`|filter list of tasks by task state, valid options are `running`, `complete`, `waiting`, and `pending`.|
 | `datasource`| return tasks filtered by Druid datasource.|
-| `createdTimeInterval`| return tasks created within the specified interval. | 
+| `createdTimeInterval`| return tasks created within the specified interval. |
 | `max`| maximum number of `"complete"` tasks to return. Only applies when `state` is set to `"complete"`.|
 | `type`| filter tasks by task type. See [task documentation](../ingestion/tasks.html) for more details.|
 
@@ -465,8 +490,8 @@ Retrieve list of task status objects for list of task id strings in request body
 
 * `/druid/indexer/v1/pendingSegments/{dataSource}`
 
-Manually clean up pending segments table in metadata storage for `datasource`. Returns a JSON object response with 
-`numDeleted` and count of rows deleted from the pending segments table. This API is used by the 
+Manually clean up pending segments table in metadata storage for `datasource`. Returns a JSON object response with
+`numDeleted` and count of rows deleted from the pending segments table. This API is used by the
 `druid.coordinator.kill.pendingSegments.on` [coordinator setting](../configuration/index.html#coordinator-operation)
 which automates this operation to perform periodically.
 
@@ -547,20 +572,20 @@ Please use the equivalent 'terminate' instead.
 </div>
 
 #### Dynamic Configuration
-See [Overlord Dynamic Configuration](../configuration/index.html#overlord-dynamic-configuration) for details. 
+See [Overlord Dynamic Configuration](../configuration/index.html#overlord-dynamic-configuration) for details.
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
 
 * `/druid/indexer/v1/worker`
 
-Retreives current overlord dynamic configuration. 
+Retrieves current overlord dynamic configuration.
 
 * `/druid/indexer/v1/worker/history?interval={interval}&counter={count}`
 
-Retrieves history of changes to overlord dynamic configuration. Accepts `interval` and  `count` query string parameters 
+Retrieves history of changes to overlord dynamic configuration. Accepts `interval` and `count` query string parameters
 to filter by interval and limit the number of results respectively.
 
 * `/druid/indexer/v1/scaling`
@@ -575,7 +600,7 @@ Update overlord dynamic worker configuration.
 
 ## Data Server
 
-This section documents the API endpoints for the processes that reside on Data servers (MiddleManagers/Peons and Historicals) 
+This section documents the API endpoints for the processes that reside on Data servers (MiddleManagers/Peons and Historicals)
 in the suggested [three-server configuration](../design/processes.html#server-types).
 
 ### MiddleManager
@@ -584,7 +609,7 @@ in the suggested [three-server configuration](../design/processes.html#server-ty
 
 * `/druid/worker/v1/enabled`
 
-Check whether a MiddleManager is in an enabled or disabled state. Returns JSON object keyed by the combined `druid.host` 
+Check whether a MiddleManager is in an enabled or disabled state. Returns JSON object keyed by the combined `druid.host`
 and `druid.port` with the boolean state as the value.
 
 ```json
@@ -593,14 +618,14 @@ and `druid.port` with the boolean state as the value.
 
 * `/druid/worker/v1/tasks`
 
-Retrieve a list of active tasks being run on MiddleManager. Returns JSON list of taskid strings.  Normal usage should 
+Retrieve a list of active tasks being run on MiddleManager. Returns JSON list of taskid strings. Normal usage should
 prefer to use the `/druid/indexer/v1/tasks` [Overlord API](#overlord) or one of its task state specific variants instead.
 
 ```json
 ["index_wikiticker_2019-02-11T02:20:15.316Z"]
 ```
 
-* `/druid/worker/v1/task/{taskid}/log` 
+* `/druid/worker/v1/task/{taskid}/log`
 
 Retrieve task log output stream by task id. Normal usage should prefer to use the `/druid/indexer/v1/task/{taskId}/log`
 [Overlord API](#overlord) instead.
@@ -609,7 +634,7 @@ Retrieve task log output stream by task id. Normal usage should prefer to use th
 
 * `/druid/worker/v1/disable`
 
-'Disable' a MiddleManager, causing it to stop accepting new tasks but complete all existing tasks. Returns JSON  object 
+'Disable' a MiddleManager, causing it to stop accepting new tasks but complete all existing tasks. Returns JSON object
 keyed by the combined `druid.host` and `druid.port`:
 
 ```json
@@ -618,7 +643,7 @@ keyed by the combined `druid.host` and `druid.port`:
 
 * `/druid/worker/v1/enable`
 
-'Enable' a MiddleManager, allowing it to accept new tasks again if it was previously disabled. Returns JSON  object 
+'Enable' a MiddleManager, allowing it to accept new tasks again if it was previously disabled. Returns JSON object
 keyed by the combined `druid.host` and `druid.port`:
 
 ```json
@@ -627,7 +652,7 @@ keyed by the combined `druid.host` and `druid.port`:
 
 * `/druid/worker/v1/task/{taskid}/shutdown`
 
-Shutdown a running task by `taskid`. Normal usage should prefer to use the `/druid/indexer/v1/task/{taskId}/shutdown` 
+Shutdown a running task by `taskid`. Normal usage should prefer to use the `/druid/indexer/v1/task/{taskId}/shutdown`
 [Overlord API](#overlord) instead. Returns JSON:
 
 ```json
@@ -673,7 +698,7 @@ This section documents the API endpoints for the processes that reside on Query
 
 #### Datasource Information
 
-Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/` 
+Note that all _interval_ URL parameters are ISO 8601 strings delimited by a `_` instead of a `/`
 (e.g., 2016-06-27_2016-06-28).
 
 ##### GET
@@ -696,7 +721,7 @@ Returns the dimensions of the datasource.
 
 <div class="note caution">
 This API is deprecated and will be removed in future releases. Please use <a href="../querying/segmentmetadataquery.html">SegmentMetadataQuery</a> instead
-which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use [INFORMATION_SCHEMA tables](../querying/sql.html#retrieving-metadata)
+which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use <a href="../querying/sql.html#retrieving-metadata">INFORMATION_SCHEMA tables</a>
 if you're using SQL.
 </div>
 
@@ -706,7 +731,7 @@ Returns the metrics of the datasource.
 
 <div class="note caution">
 This API is deprecated and will be removed in future releases. Please use <a href="../querying/segmentmetadataquery.html">SegmentMetadataQuery</a> instead
-which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use [INFORMATION_SCHEMA tables](../querying/sql.html#retrieving-metadata)
+which provides more comprehensive information and supports all dataSource types including streaming dataSources. It's also encouraged to use <a href="../querying/sql.html#retrieving-metadata">INFORMATION_SCHEMA tables</a>
 if you're using SQL.
 </div>
 
diff --git a/docs/0.14.0-incubating/operations/druid-console.md b/docs/0.14.0-incubating/operations/druid-console.md
index 902aaeb..3dbc491 100644
--- a/docs/0.14.0-incubating/operations/druid-console.md
+++ b/docs/0.14.0-incubating/operations/druid-console.md
@@ -41,50 +41,75 @@ Below is a description of the high-level features and functionality of the Druid
 
 ## Home
 
-The home view provide a high level overview of the cluster. Each card is clickable and links to the appropriate view. The legacy menu allows you to go to the [legacy coordinator and overlord consoles](./management-uis#legacy-consoles) should you need them.
+The home view provides a high level overview of the cluster. Each card is clickable and links to the appropriate view. The legacy menu allows you to go to the [legacy coordinator and overlord consoles](./management-uis.html#legacy-consoles) should you need them.
 
 ![home-view](./img/01-home-view.png)
 
+## Data loader
+
+The data loader view allows you to load data by building an ingestion spec with a step-by-step wizard. 
+
+![data-loader-1](./img/02-data-loader-1.png)
+
+After picking the source of your data, just follow the series of steps that will show you incremental previews of the data as it will be ingested.
+After filling in the required details on every step, you can navigate to the next step by clicking the `Next` button.
+You can also freely navigate between the steps from the top navigation.
+
+Navigating with the top navigation will leave the underlying spec unmodified, while clicking the `Next` button will attempt to fill in the subsequent steps with appropriate defaults.
+
+![data-loader-2](./img/03-data-loader-2.png)
+
 ## Datasources
 
 The datasources view shows all the currently enabled datasources. From this view you can see the sizes and availability of the different datasources. You can edit the retention rules and drop data (as well as issue kill tasks).
 Like any view that is powered by a DruidSQL query you can click “Go to SQL” to run the underlying SQL query directly.
 
-![datasources](./img/02-datasources.png)
+![datasources](./img/04-datasources.png)
 
 You can view and edit retention rules to determine the general availability of a datasource.
 
-![retention](./img/03-retention.png)
+![retention](./img/05-retention.png)
 
 ## Segments
 
 The segment view shows every single segment in the cluster. Each segment can be expanded to provide more information. The Segment ID is also conveniently broken down into Datasource, Start, End, Version, and Partition columns for ease of filtering and sorting.
 
-![segments](./img/04-segments.png)
+![segments](./img/06-segments.png)
 
 ## Tasks and supervisors
 
 The task view is also the home of supervisors. From this view you can check the status of existing supervisors as well as suspend and resume them. You can also submit new supervisors by entering their JSON spec.
 
-![tasks-1](./img/05-tasks-1.png)
+![supervisors](./img/07-supervisors.png)
 
-The tasks table let’s you see the currently running and recently completed tasks. From this table you can monitor individual tasks and also submit new tasks by entering their JSON spec.
+The tasks table allows you to see the currently running and recently completed tasks. From this table you can monitor individual tasks and also submit new tasks by entering their JSON spec.
+To make managing a large number of tasks easier, you can group them by their type, datasource, or status.
 
-![tasks-2](./img/06-tasks-2.png)
+![tasks](./img/08-tasks.png)
 
-Since there will likely be a lot of tasks you can group the tasks by their type, datasource, or status to make navigation easier.
+Click on the magnifying glass for any task to see more detail about it.
 
-![tasks-3](./img/07-tasks-3.png)
+![tasks-status](./img/09-task-status.png)
 
 ## Servers
 
 The data servers tab lets you see the current status of the historical nodes and MiddleManager (indexer) processes. Note that currently only historical nodes that are actively serving segments will be shown in this view.
 
-![servers](./img/08-servers.png)
+![servers](./img/10-servers.png)
+
+## Query
+
+The query view lets you issue [DruidSQL](../querying/sql.html) queries and display the results as a simple table.
+
+![query-sql](./img/11-query-sql.png)
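+
+For example, assuming a hypothetical datasource named `wikipedia` with a `page` dimension, you could run a query such as:
+
+```
+SELECT page, COUNT(*) AS "Edits"
+FROM wikipedia
+GROUP BY page
+ORDER BY "Edits" DESC
+LIMIT 10
+```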
+
+The query view can also issue queries in Druid's [native query format](../querying/querying.html), which is JSON over HTTP.
+To send a native Druid query, you must start your query with `{` and format it as JSON.
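+
+As a minimal sketch (again assuming a hypothetical `wikipedia` datasource), the following native query returns the time range covered by the datasource:
+
+```json
+{
+  "queryType": "timeBoundary",
+  "dataSource": "wikipedia"
+}
+```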
 
-## SQL
+![query-rune](./img/12-query-rune.png)
 
-The SQL view lets you issue direct DruidSQL queries and display the results as a simple table. Note that despite the name this view also allows you to enter native Druid queries in Hjson format.
+## Lookups
 
-![sql](./img/09-sql.png)
+You can create and edit query time lookups via the lookup view.
 
+![lookups](./img/13-lookups.png)
diff --git a/docs/0.14.0-incubating/operations/img/01-home-view.png b/docs/0.14.0-incubating/operations/img/01-home-view.png
index 4dbd31d..6fb3cf5 100644
Binary files a/docs/0.14.0-incubating/operations/img/01-home-view.png and b/docs/0.14.0-incubating/operations/img/01-home-view.png differ
diff --git a/docs/0.14.0-incubating/operations/insert-segment-to-db.md b/docs/0.14.0-incubating/operations/insert-segment-to-db.md
index ba8e644..c2d1e81 100644
--- a/docs/0.14.0-incubating/operations/insert-segment-to-db.md
+++ b/docs/0.14.0-incubating/operations/insert-segment-to-db.md
@@ -24,133 +24,26 @@ title: "insert-segment-to-db Tool"
 
 # insert-segment-to-db Tool
 
-`insert-segment-to-db` is a tool that can insert segments into Druid metadata storage. It is intended to be used
-to update the segment table in metadata storage after people manually migrate segments from one place to another.
-It can also be used to insert missing segments into Druid, or even recover metadata storage by telling it where the
-segments are stored.
-
-**Note:** This tool simply scans the deep storage directory to reconstruct the metadata entries used to locate and
-identify each segment. It does not have any understanding about whether those segments _should actually_ be written to
-the metadata storage. In certain cases, this can lead to undesired or inconsistent results. Some examples of things to
-watch out for:
-  - Dropped datasources will be re-enabled.
-  - The latest version of each segment set will be loaded by Druid, which in some cases may not be the version you
-    actually want. An example of this is a bad compaction job that generates segments which need to be manually rolled
-    back by removing that version from the metadata table. If these segments are not also removed from deep storage,
-    they will be imported back into the metadata table and overshadow the correct version.
-  - Some indexers such as the Kafka indexing service have the potential to generate more than one set of segments that
-    have the same segment ID but different contents. When the metadata is first written, the correct set of segments is
-    referenced and the other set is normally deleted from deep storage. It is possible however that an unhandled
-    exception could result in multiple sets of segments with the same segment ID remaining in deep storage. Since this
-    tool does not know which one is the 'correct' one to use, it will simply select the newest segment set and ignore
-    the other versions. If the wrong segment set is picked, the exactly-once semantics of the Kafka indexing service
-    will no longer hold true and you may get duplicated or dropped events.
-
-With these considerations in mind, it is recommended that data migrations be done by exporting the original metadata
-storage directly, since that is the definitive cluster state. This tool should be used as a last resort when a direct
-export is not possible.
-
-**Note:** This tool expects users to have Druid cluster running in a "safe" mode, where there are no active tasks to interfere
-with the segments being inserted. Users can optionally bring down the cluster to make 100% sure nothing is interfering.
-
-In order to make it work, user will have to provide metadata storage credentials and deep storage type through Java JVM argument
-or runtime.properties file. Specifically, this tool needs to know:
-
-```
-druid.metadata.storage.type
-druid.metadata.storage.connector.connectURI
-druid.metadata.storage.connector.user
-druid.metadata.storage.connector.password
-druid.storage.type
-```
-
-Besides the properties above, you also need to specify the location where the segments are stored and whether you want to
-update descriptor.json (`partitionNum_descriptor.json` for HDFS data storage). These two can be provided through command line arguments.
-
-`--workingDir` (Required)
-
-    The directory URI where segments are stored. This tool will recursively look for segments underneath this directory
-    and insert/update these segments in metdata storage.
-    Attention: workingDir must be a complete URI, which means it must be prefixed with scheme type. For example,
-    hdfs://hostname:port/segment_directory
-
-`--updateDescriptor` (Optional)
-
-    if set to true, this tool will update `loadSpec` field in `descriptor.json` (`partitionNum_descriptor.json` for HDFS data storage) if the path in `loadSpec` is different from
-    where `desciptor.json` (`partitionNum_descriptor.json` for HDFS data storage) was found. Default value is `true`.
-
-Note: you will also need to load different Druid extensions per the metadata and deep storage you use. For example, if you
-use `mysql` as metadata storage and HDFS as deep storage, you should load `mysql-metadata-storage` and `druid-hdfs-storage`
-extensions.
-
-
-Example:
-
-Suppose your metadata storage is `mysql` and you've migrated some segments to a directory in HDFS, and that directory looks
-like this,
-
-```
-Directory path: /druid/storage/wikipedia
-
-├── 2013-08-31T000000.000Z_2013-09-01T000000.000Z
-│   └── 2015-10-21T22_07_57.074Z
-│           ├── 0_descriptor.json
-│           └── 0_index.zip
-├── 2013-09-01T000000.000Z_2013-09-02T000000.000Z
-│   └── 2015-10-21T22_07_57.074Z
-│           ├── 0_descriptor.json
-│           └── 0_index.zip
-├── 2013-09-02T000000.000Z_2013-09-03T000000.000Z
-│   └── 2015-10-21T22_07_57.074Z
-│           ├── 0_descriptor.json
-│           └── 0_index.zip
-└── 2013-09-03T000000.000Z_2013-09-04T000000.000Z
-    └── 2015-10-21T22_07_57.074Z
-            ├── 0_descriptor.json
-            └── 0_index.zip
-```
-
-To load all these segments into `mysql`, you can fire the command below,
-
-```
-java 
--Ddruid.metadata.storage.type=mysql 
--Ddruid.metadata.storage.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid 
--Ddruid.metadata.storage.connector.user=druid 
--Ddruid.metadata.storage.connector.password=diurd 
--Ddruid.extensions.loadList=[\"mysql-metadata-storage\",\"druid-hdfs-storage\"] 
--Ddruid.storage.type=hdfs
--cp $DRUID_CLASSPATH 
-org.apache.druid.cli.Main tools insert-segment-to-db --workingDir hdfs://host:port//druid/storage/wikipedia --updateDescriptor true
-```
-
-In this example, `mysql` and deep storage type are provided through Java JVM arguments, you can optionally put all
-of them in a runtime.properites file and include it in the Druid classpath. Note that we also include `mysql-metadata-storage`
-and `druid-hdfs-storage` in the extension list.
-
-After running this command, the segments table in `mysql` should store the new location for each segment we just inserted.
-Note that for segments stored in HDFS, druid config must contain core-site.xml as described in [Druid Docs](../tutorials/cluster.html), as this new location is stored with relative path.
-
-It is also possible to use `s3` as deep storage. In order to work with it, specify `s3` as deep storage type and load 
-[`druid-s3-extensions`](../development/extensions-core/s3.html) as an extension.
-
-```
-java
--Ddruid.metadata.storage.type=mysql 
--Ddruid.metadata.storage.connector.connectURI=jdbc\:mysql\://localhost\:3306/druid 
--Ddruid.metadata.storage.connector.user=druid 
--Ddruid.metadata.storage.connector.password=diurd
--Ddruid.extensions.loadList=[\"mysql-metadata-storage\",\"druid-s3-extensions\"]
--Ddruid.storage.type=s3
--Ddruid.s3.accessKey=... 
--Ddruid.s3.secretKey=...
--Ddruid.storage.bucket=your-bucket
--Ddruid.storage.baseKey=druid/storage/wikipedia
--Ddruid.storage.maxListingLength=1000
--cp $DRUID_CLASSPATH
-org.apache.druid.cli.Main tools insert-segment-to-db --workingDir "druid/storage/wikipedia" --updateDescriptor true
-```
-
- Note that you can provide the location of segments with either `druid.storage.baseKey` or `--workingDir`. If both are 
- specified, `--workingDir` gets higher priority. `druid.storage.maxListingLength` is to determine the length of a
- partial list in requesting a object listing to `s3`, which defaults to 1000.
+In older versions of Apache Druid (incubating), `insert-segment-to-db` was a tool that could scan deep storage and
+insert data from there into Druid metadata storage. It was intended to be used to update the segment table in the
+metadata storage after manually migrating segments from one place to another, or even to recover lost metadata storage
+by telling it where the segments are stored.
+
+In Druid 0.14.x and earlier, Druid wrote segment metadata to two places: the metadata store's `druid_segments` table, and
+`descriptor.json` files in deep storage. This practice was stopped in Druid 0.15.0 as part of
+[consolidated metadata management](https://github.com/apache/druid/issues/6849), for the following reasons:
+
+1. If any segments are manually dropped or re-enabled by cluster operators, this information is not reflected in
+deep storage. Restoring metadata from deep storage would undo any such drops or re-enables.
+2. Ingestion methods that allocate segments optimistically (such as native Kafka or Kinesis stream ingestion, or native
+batch ingestion in 'append' mode) can write segments to deep storage that are not meant to actually be used by the
+Druid cluster. There is no way, while purely looking at deep storage, to differentiate the segments that made it into
+the metadata store originally (and therefore _should_ be used) from the segments that did not (and therefore
+_should not_ be used).
+3. Nothing in Druid other than the `insert-segment-to-db` tool read the `descriptor.json` files.
+
+After this change, Druid stopped writing `descriptor.json` files to deep storage, and now only writes segment metadata
+to the metadata store. This meant the `insert-segment-to-db` tool was no longer useful, so it was removed in Druid 0.15.0.
+
+It is highly recommended that you take regular backups of your metadata store, since it is difficult to recover Druid
+clusters properly without it.
diff --git a/docs/0.14.0-incubating/operations/pull-deps.md b/docs/0.14.0-incubating/operations/pull-deps.md
index be63a6a..2af9a7d 100644
--- a/docs/0.14.0-incubating/operations/pull-deps.md
+++ b/docs/0.14.0-incubating/operations/pull-deps.md
@@ -58,7 +58,7 @@ Don't use the default remote repositories, only use the repositories provided di
 
 `-d` or `--defaultVersion`
 
-Version to use for extension coordinate that doesn't have a version information. For example, if extension coordinate is `org.apache.druid.extensions:mysql-metadata-storage`, and default version is `0.14.0-incubating`, then this coordinate will be treated as `org.apache.druid.extensions:mysql-metadata-storage:0.14.0-incubating`
+Version to use for extension coordinate that doesn't have a version information. For example, if extension coordinate is `org.apache.druid.extensions:mysql-metadata-storage`, and default version is `#{DRUIDVERSION}`, then this coordinate will be treated as `org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}`
 
 `--use-proxy`
 
@@ -92,10 +92,10 @@ To run `pull-deps`, you should
 
 Example:
 
-Suppose you want to download ```druid-rabbitmq```, ```mysql-metadata-storage``` and ```hadoop-client```(both 2.3.0 and 2.4.0) with a specific version, you can run `pull-deps` command with `-c org.apache.druid.extensions:druid-examples:0.14.0-incubating`, `-c org.apache.druid.extensions:mysql-metadata-storage:0.14.0-incubating`, `-h org.apache.hadoop:hadoop-client:2.3.0` and `-h org.apache.hadoop:hadoop-client:2.4.0`, an example command would be:
+Suppose you want to download ```druid-rabbitmq```, ```mysql-metadata-storage``` and ```hadoop-client```(both 2.3.0 and 2.4.0) with a specific version, you can run `pull-deps` command with `-c org.apache.druid.extensions:druid-examples:#{DRUIDVERSION}`, `-c org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION}`, `-h org.apache.hadoop:hadoop-client:2.3.0` and `-h org.apache.hadoop:hadoop-client:2.4.0`, an example command would be:
 
 ```
-java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:0.14.0-incubating -c org.apache.druid.extensions.contrib:druid-rabbitmq:0.14.0-incubating -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
+java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:#{DRUIDVERSION} -c org.apache.druid.extensions.contrib:druid-rabbitmq:#{DRUIDVERSION} -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
 ```
 
 Because `--clean` is supplied, this command will first remove the directories specified at `druid.extensions.directory` and `druid.extensions.hadoopDependenciesDir`, then recreate them and start downloading the extensions there. After finishing downloading, if you go to the extension directories you specified, you will see
@@ -108,12 +108,12 @@ extensions
 │   ├── commons-digester-1.8.jar
 │   ├── commons-logging-1.1.1.jar
 │   ├── commons-validator-1.4.0.jar
-│   ├── druid-examples-0.14.0-incubating.jar
+│   ├── druid-examples-#{DRUIDVERSION}.jar
 │   ├── twitter4j-async-3.0.3.jar
 │   ├── twitter4j-core-3.0.3.jar
 │   └── twitter4j-stream-3.0.3.jar
 └── mysql-metadata-storage
-    └── mysql-metadata-storage-0.14.0-incubating.jar
+    └── mysql-metadata-storage-#{DRUIDVERSION}.jar
 ```
 
 ```
@@ -138,10 +138,10 @@ hadoop-dependencies/
     ..... lots of jars
 ```
 
-Note that if you specify `--defaultVersion`, you don't have to put version information in the coordinate. For example, if you want both `druid-rabbitmq` and `mysql-metadata-storage` to use version `0.14.0-incubating`,  you can change the command above to
+Note that if you specify `--defaultVersion`, you don't have to put version information in the coordinate. For example, if you want both `druid-rabbitmq` and `mysql-metadata-storage` to use version `#{DRUIDVERSION}`,  you can change the command above to
 
 ```
-java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion 0.14.0-incubating --clean -c org.apache.druid.extensions:mysql-metadata-storage -c org.apache.druid.extensions.contrib:druid-rabbitmq -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
+java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion #{DRUIDVERSION} --clean -c org.apache.druid.extensions:mysql-metadata-storage -c org.apache.druid.extensions.contrib:druid-rabbitmq -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
 ```
 
 <div class="note info">
diff --git a/docs/0.14.0-incubating/operations/recommendations.md b/docs/0.14.0-incubating/operations/recommendations.md
index 311b46d..61cb871 100644
--- a/docs/0.14.0-incubating/operations/recommendations.md
+++ b/docs/0.14.0-incubating/operations/recommendations.md
@@ -84,10 +84,8 @@ Timeseries and TopN queries are much more optimized and significantly faster tha
 Segments should generally be between 300MB-700MB in size. Too many small segments result in inefficient CPU utilization, and 
 too many large segments impact query performance, most notably with TopN queries.
 
-# Read FAQs
+# FAQs and Guides
 
-You should read common problems people have here:
+1) The [Ingestion FAQ](../ingestion/faq.html) provides help with common ingestion problems.
 
-1) [Ingestion-FAQ](../ingestion/faq.html)
-
-2) [Performance-FAQ](../operations/performance-faq.html)
+2) The [Basic Cluster Tuning Guide](../operations/basic-cluster-tuning.html) offers introductory guidelines for tuning your Druid cluster.
diff --git a/docs/0.14.0-incubating/operations/rule-configuration.md b/docs/0.14.0-incubating/operations/rule-configuration.md
index c2d38b7..9c7caa9 100644
--- a/docs/0.14.0-incubating/operations/rule-configuration.md
+++ b/docs/0.14.0-incubating/operations/rule-configuration.md
@@ -33,8 +33,6 @@ The Coordinator loads a set of rules from the metadata storage. Rules may be spe
 
 Note: It is recommended that the Coordinator console is used to configure rules. However, the Coordinator process does have HTTP endpoints to programmatically configure rules.
 
-When a rule is updated, the change may not be reflected until the next time the Coordinator runs. This will be fixed in the near future.
-
 Load Rules
 ----------
 
diff --git a/docs/0.14.0-incubating/operations/tls-support.md b/docs/0.14.0-incubating/operations/tls-support.md
index e7aefda..4c94276 100644
--- a/docs/0.14.0-incubating/operations/tls-support.md
+++ b/docs/0.14.0-incubating/operations/tls-support.md
@@ -54,13 +54,14 @@ The following table contains configuration options related to client certificate
 
 |Property|Description|Default|Required|
 |--------|-----------|-------|--------|
-|`druid.server.https.requireClientCertificate`|If set to true, clients must identify themselves by providing a TLS certificate.  If `requireClientCertificate` is false, the rest of the options in this table are ignored.|false|no|
-|`druid.server.https.trustStoreType`|The type of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` is false.|`java.security.KeyStore.getDefaultType()`|no|
-|`druid.server.https.trustStorePath`|The file path or URL of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` is false.|none|yes, only if `requireClientCertificate` is true|
-|`druid.server.https.trustStoreAlgorithm`|Algorithm to be used by TrustManager to validate client certificate chains. Not needed if `requireClientCertificate` is false.|`javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()`|no|
-|`druid.server.https.trustStorePassword`|The [Password Provider](../operations/password-provider.html) or String password for the Trust Store.  Not needed if `requireClientCertificate` is false.|none|no|
-|`druid.server.https.validateHostnames`|If set to true, check that the client's hostname matches the CN/subjectAltNames in the client certificate.  Not used if `requireClientCertificate` is false.|true|no|
-|`druid.server.https.crlPath`|Specifies a path to a file containing static [Certificate Revocation Lists](https://en.wikipedia.org/wiki/Certificate_revocation_list), used to check if a client certificate has been revoked. Not used if `requireClientCertificate` is false.|null|no|
+|`druid.server.https.requireClientCertificate`|If set to true, clients must identify themselves by providing a TLS certificate, without which connections will fail.|false|no|
+|`druid.server.https.requestClientCertificate`|If set to true, clients may optionally identify themselves by providing a TLS certificate. Connections will not fail if a TLS certificate is not provided. This property is ignored if `requireClientCertificate` is set to true. If `requireClientCertificate` and `requestClientCertificate` are false, the rest of the options in this table are ignored.|false|no|
+|`druid.server.https.trustStoreType`|The type of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` and `requestClientCertificate` are false.|`java.security.KeyStore.getDefaultType()`|no|
+|`druid.server.https.trustStorePath`|The file path or URL of the trust store containing certificates used to validate client certificates. Not needed if `requireClientCertificate` and `requestClientCertificate` are false.|none|yes, only if `requireClientCertificate` is true|
+|`druid.server.https.trustStoreAlgorithm`|Algorithm to be used by TrustManager to validate client certificate chains. Not needed if `requireClientCertificate` and `requestClientCertificate` are false.|`javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()`|no|
+|`druid.server.https.trustStorePassword`|The [Password Provider](../operations/password-provider.html) or String password for the Trust Store.  Not needed if `requireClientCertificate` and `requestClientCertificate` are false.|none|no|
+|`druid.server.https.validateHostnames`|If set to true, check that the client's hostname matches the CN/subjectAltNames in the client certificate.  Not used if `requireClientCertificate` and `requestClientCertificate` are false.|true|no|
+|`druid.server.https.crlPath`|Specifies a path to a file containing static [Certificate Revocation Lists](https://en.wikipedia.org/wiki/Certificate_revocation_list), used to check if a client certificate has been revoked. Not used if `requireClientCertificate` and `requestClientCertificate` are false.|null|no|
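+
+As a minimal sketch, a server that requires and validates client certificates might set properties like the following (the trust store path and password are placeholders):
+
+```
+druid.server.https.requireClientCertificate=true
+druid.server.https.trustStoreType=jks
+druid.server.https.trustStorePath=/path/to/truststore.jks
+druid.server.https.trustStorePassword=changeme
+druid.server.https.validateHostnames=true
+```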
 
 The following table contains non-mandatory advanced configuration options, use caution.
 
diff --git a/docs/0.14.0-incubating/querying/aggregations.md b/docs/0.14.0-incubating/querying/aggregations.md
index f4d31e1..b6b3e03 100644
--- a/docs/0.14.0-incubating/querying/aggregations.md
+++ b/docs/0.14.0-incubating/querying/aggregations.md
@@ -279,28 +279,26 @@ The [DataSketches HLL Sketch](../development/extensions-core/datasketches-hll.ht
 
 Compared to the Theta sketch, the HLL sketch does not support set operations and has slightly slower update and merge speed, but requires significantly less space.
 
-#### Cardinality/HyperUnique (Deprecated)
+#### Cardinality, hyperUnique
 
-<div class="note caution">
-The Cardinality and HyperUnique aggregators are deprecated.
+<div class="note info">
 For new use cases, we recommend evaluating <a href="../development/extensions-core/datasketches-theta.html">DataSketches Theta Sketch</a> or <a href="../development/extensions-core/datasketches-hll.html">DataSketches HLL Sketch</a> instead.
-For existing users, we recommend evaluating the newer DataSketches aggregators and migrating if possible.
+The DataSketches aggregators are generally able to offer more flexibility and better accuracy than the classic Druid `cardinality` and `hyperUnique` aggregators.
 </div>
 
 The [Cardinality and HyperUnique](../querying/hll-old.html) aggregators are older aggregator implementations available by default in Druid that also provide distinct count estimates using the HyperLogLog algorithm. The newer DataSketches Theta and HLL extension-provided aggregators described above have superior accuracy and performance and are recommended instead. 
 
-The DataSketches team has published a [comparison study](https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html) between Druid's original HLL algorithm and the DataSketches HLL algorithm. Based on the demonstrated advantages of the DataSketches implementation, we have deprecated Druid's original HLL aggregator.
+The DataSketches team has published a [comparison study](https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html) between Druid's original HLL algorithm and the DataSketches HLL algorithm. Based on the demonstrated advantages of the DataSketches implementation, we recommend using the DataSketches aggregators in preference to Druid's original HLL-based aggregators.
+However, to ensure backwards compatibility, we will continue to support the classic aggregators.
 
-Please note that `hyperUnique` aggregators are not mutually compatible with Datasketches HLL or Theta sketches. 
-
-Although deprecated, we will continue to support the older Cardinality/HyperUnique aggregators for backwards compatibility. 
+Please note that `hyperUnique` aggregators are not mutually compatible with Datasketches HLL or Theta sketches.
 
 ##### Multi-column handling
 
 Note the DataSketches Theta and HLL aggregators currently only support single-column inputs. If you were previously using the Cardinality aggregator with multiple-column inputs, equivalent operations using Theta or HLL sketches are described below:
 
 * Multi-column `byValue` Cardinality can be replaced with a union of Theta sketches on the individual input columns
-* Multi-column `byRow` Cardinality can be replaced with a Theta or HLL sketch on a single [virtual column]((../querying/virtual-columns.html) that combines the individual input columns.
+* Multi-column `byRow` Cardinality can be replaced with a Theta or HLL sketch on a single [virtual column](../querying/virtual-columns.html) that combines the individual input columns (see the example below).
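+
+As an illustrative sketch (the `dim1` and `dim2` column names are hypothetical), a query could define an expression virtual column that concatenates the inputs and aggregate it with a Theta sketch:
+
+```json
+{
+  "virtualColumns": [
+    {
+      "type": "expression",
+      "name": "combined_dims",
+      "expression": "concat(\"dim1\", '|', \"dim2\")",
+      "outputType": "STRING"
+    }
+  ],
+  "aggregations": [
+    { "type": "thetaSketch", "name": "distinct_combinations", "fieldName": "combined_dims" }
+  ]
+}
+```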
 
 ### Histograms and quantiles
 
@@ -310,18 +308,27 @@ The [DataSketches Quantiles Sketch](../development/extensions-core/datasketches-
 
 We recommend this aggregator in general for quantiles/histogram use cases, as it provides formal error bounds and has distribution-independent accuracy.
 
+#### Moments Sketch (Experimental)
+
+The [Moments Sketch](../development/extensions-contrib/momentsketch-quantiles.html) extension-provided aggregator is an experimental aggregator that provides quantile estimates using the [Moments Sketch](https://github.com/stanford-futuredata/momentsketch).
+
+The Moments Sketch aggregator is provided as an experimental option. It is optimized for merging speed and it can have higher aggregation performance compared to the DataSketches quantiles aggregator. However, the accuracy of the Moments Sketch is distribution-dependent, so users will need to empirically verify that the aggregator is suitable for their input data.
+
+As a general guideline for experimentation, the [Moments Sketch paper](https://arxiv.org/pdf/1803.01969.pdf) points out that this algorithm works better on inputs with high entropy. In particular, the algorithm is not a good fit when the input data consists of a small number of clustered discrete values.
+
 #### Fixed Buckets Histogram
 
-Druid also provides a [simple histogram implementation]((../development/extensions-core/approxiate-histograms.html#fixed-buckets-histogram) that uses a fixed range and fixed number of buckets with support for quantile estimation, backed by an array of bucket count values.
+Druid also provides a [simple histogram implementation](../development/extensions-core/approximate-histograms.html#fixed-buckets-histogram) that uses a fixed range and fixed number of buckets with support for quantile estimation, backed by an array of bucket count values.
 
 The fixed buckets histogram can perform well when the distribution of the input data allows a small number of buckets to be used.
 
 We do not recommend the fixed buckets histogram for general use, as its usefulness is extremely data dependent. However, it is made available for users that have already identified use cases where a fixed buckets histogram is suitable.
 
-#### Approximate Histogram (Deprecated)
+#### Approximate Histogram (deprecated)
 
 <div class="note caution">
 The Approximate Histogram aggregator is deprecated.
+There are a number of other quantile estimation algorithms that offer better performance and accuracy, and a smaller memory footprint.
 We recommend using <a href="../development/extensions-core/datasketches-quantiles.html">DataSketches Quantiles</a> instead.
 </div>
 
diff --git a/docs/0.14.0-incubating/querying/caching.md b/docs/0.14.0-incubating/querying/caching.md
index c5e7363..41b01f7 100644
--- a/docs/0.14.0-incubating/querying/caching.md
+++ b/docs/0.14.0-incubating/querying/caching.md
@@ -24,23 +24,41 @@ title: "Query Caching"
 
 # Query Caching
 
-Apache Druid (incubating) supports query result caching through an LRU cache. Results are stored as a whole or either on a per segment basis along with the 
-parameters of a given query. Segment level caching allows Druid to return final results based partially on segment results in the cache 
-and partially on segment results from scanning historical/real-time segments. Result level caching enables Druid to cache the entire 
-result set, so that query results can be completely retrieved from the cache for identical queries.
+Apache Druid (incubating) supports query result caching at both the segment and whole-query result level. Cache data can be stored in the
+local JVM heap or in an external distributed key/value store. In all cases, the Druid cache is a query result cache.
+The only difference is whether the result is a _partial result_ for a particular segment, or the result for an entire
+query. In both cases, the cache is invalidated as soon as any underlying data changes; it will never return a stale
+result.
 
-Segment results can be stored in a local heap cache or in an external distributed key/value store. Segment query caches 
-can be enabled at either the Historical and Broker level (it is not recommended to enable caching on both).
+Segment-level caching allows the cache to be leveraged even when some of the underlying segments are mutable and
+undergoing real-time ingestion. In this case, Druid will potentially cache query results for immutable historical
+segments, while re-computing results for the real-time segments on each query. Whole-query result level caching is not
+useful in this scenario, since it would be continuously invalidated.
+
+Segment-level caching does require Druid to merge the per-segment results on each query, even when they are served
+from the cache. For this reason, whole-query result level caching can be more efficient if invalidation due to real-time
+ingestion is not an issue.
 
 ## Query caching on Brokers
 
-Enabling caching on the Broker can yield faster results than if query caches were enabled on Historicals for small clusters. This is 
-the recommended setup for smaller production clusters (< 20 servers). Take note that when caching is enabled on the Broker, 
-results from Historicals are returned on a per segment basis, and Historicals will not be able to do any local result merging.
-Result level caching is enabled only on the Broker side.
+Brokers support both segment-level and whole-query result level caching. Segment-level caching is controlled by the
+parameters `useCache` and `populateCache`. Whole-query result level caching is controlled by the parameters
+`useResultLevelCache` and `populateResultLevelCache` and [runtime properties](../configuration/index.html)
+`druid.broker.cache.*`.
+
+Enabling segment-level caching on the Broker can yield faster results than if query caches were enabled on Historicals for small
+clusters. This is the recommended setup for smaller production clusters (< 5 servers). Populating segment-level caches on
+the Broker is _not_ recommended for large production clusters, since when the property `druid.broker.cache.populateCache` is
+set to `true` (and query context parameter `populateCache` is _not_ set to `false`), results from Historicals are returned
+on a per segment basis, and Historicals will not be able to do any local result merging. This impairs the ability of the
+Druid cluster to scale well.
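+
+As a minimal sketch, segment-level and whole-query result caching might be enabled on a Broker with runtime properties such as (see the configuration reference for the full set of cache options):
+
+```
+druid.broker.cache.useCache=true
+druid.broker.cache.populateCache=true
+druid.broker.cache.useResultLevelCache=true
+druid.broker.cache.populateResultLevelCache=true
+```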
 
 ## Query caching on Historicals
 
-Larger production clusters should enable caching only on the Historicals to avoid having to use Brokers to merge all query 
-results. Enabling caching on the Historicals instead of the Brokers enables the Historicals to do their own local result
-merging and puts less strain on the Brokers.
+Historicals only support segment-level caching. Segment-level caching is controlled by the query context
+parameters `useCache` and `populateCache` and [runtime properties](../configuration/index.html)
+`druid.historical.cache.*`.
+
+Larger production clusters should enable segment-level cache population on Historicals only (not on Brokers) to avoid
+having to use Brokers to merge all query results. Enabling cache population on the Historicals instead of the Brokers
+enables the Historicals to do their own local result merging and puts less strain on the Brokers.
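+
+For example, a corresponding sketch of Historical runtime properties would be:
+
+```
+druid.historical.cache.useCache=true
+druid.historical.cache.populateCache=true
+```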
diff --git a/docs/0.14.0-incubating/querying/filters.md b/docs/0.14.0-incubating/querying/filters.md
index 2f9b23a..53e0853 100644
--- a/docs/0.14.0-incubating/querying/filters.md
+++ b/docs/0.14.0-incubating/querying/filters.md
@@ -282,6 +282,7 @@ greater than, less than, greater than or equal to, less than or equal to, and "b
 Bound filters support the use of extraction functions, see [Filtering with Extraction Functions](#filtering-with-extraction-functions) for details.
 
 The following bound filter expresses the condition `21 <= age <= 31`:
+
 ```json
 {
     "type": "bound",
@@ -293,6 +294,7 @@ The following bound filter expresses the condition `21 <= age <= 31`:
 ```
 
 This filter expresses the condition `foo <= name <= hoo`, using the default lexicographic sorting order.
+
 ```json
 {
     "type": "bound",
@@ -303,6 +305,7 @@ This filter expresses the condition `foo <= name <= hoo`, using the default lexi
 ```
 
 Using strict bounds, this filter expresses the condition `21 < age < 31`
+
 ```json
 {
     "type": "bound",
@@ -316,6 +319,7 @@ Using strict bounds, this filter expresses the condition `21 < age < 31`
 ```
 
 The user can also specify a one-sided bound by omitting "upper" or "lower". This filter expresses `age < 31`.
+
 ```json
 {
     "type": "bound",
@@ -327,6 +331,7 @@ The user can also specify a one-sided bound by omitting "upper" or "lower". This
 ```
 
 Likewise, this filter expresses `age >= 18`
+
 ```json
 {
     "type": "bound",
@@ -355,6 +360,7 @@ The interval filter supports the use of extraction functions, see [Filtering wit
 If an extraction function is used with this filter, the extraction function should output values that are parseable as long milliseconds.
 
 The following example filters on the time ranges of October 1-7, 2014 and November 15-16, 2014.
+
 ```json
 {
     "type" : "interval",
diff --git a/docs/0.14.0-incubating/querying/groupbyquery.md b/docs/0.14.0-incubating/querying/groupbyquery.md
index 125d793..1445fee 100644
--- a/docs/0.14.0-incubating/querying/groupbyquery.md
+++ b/docs/0.14.0-incubating/querying/groupbyquery.md
@@ -288,7 +288,8 @@ disk space.
 
 With groupBy v2, cluster operators should make sure that the off-heap hash tables and on-heap merging dictionaries
 will not exceed available memory for the maximum possible concurrent query load (given by
-druid.processing.numMergeBuffers). See [How much direct memory does Druid use?](../operations/performance-faq.html) for more details.
+druid.processing.numMergeBuffers). See the [Basic Cluster Tuning Guide](../operations/basic-cluster-tuning.html) 
+for more details about direct memory usage, organized by Druid process type.
 
 Brokers do not need merge buffers for basic groupBy queries. Queries with subqueries (using a "query" [dataSource](datasource.html#query-data-source)) require one merge buffer if there is a single subquery, or two merge buffers if there is more than one layer of nested subqueries. Queries with [subtotals](groupbyquery.html#more-on-subtotalsspec) need one merge buffer. These can stack on top of each other: a groupBy query with multiple layers of nested subqueries, and that also uses subto [...]
 
diff --git a/docs/0.14.0-incubating/querying/lookups.md b/docs/0.14.0-incubating/querying/lookups.md
index 68f3287..b54f769 100644
--- a/docs/0.14.0-incubating/querying/lookups.md
+++ b/docs/0.14.0-incubating/querying/lookups.md
@@ -55,6 +55,17 @@ Other lookup types are available as extensions, including:
 - Globally cached lookups from local files, remote URIs, or JDBC through [lookups-cached-global](../development/extensions-core/lookups-cached-global.html).
 - Globally cached lookups from a Kafka topic through [kafka-extraction-namespace](../development/extensions-core/kafka-extraction-namespace.html).
 
+Query Syntax
+------------
+
+In [Druid SQL](sql.html), lookups can be queried using the `LOOKUP` function, for example:
+
+```
+SELECT LOOKUP(column_name, 'lookup-name'), COUNT(*) FROM datasource GROUP BY 1
+```
+
+In native queries, lookups can be queried with [dimension specs or extraction functions](dimensionspecs.html).
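+
+For example, a minimal sketch of a lookup dimension spec (reusing the hypothetical `column_name` and `lookup-name` from above) looks like:
+
+```json
+{
+  "type": "lookup",
+  "dimension": "column_name",
+  "outputName": "column_name_with_lookup",
+  "name": "lookup-name"
+}
+```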
+
 Query Execution
 ---------------
 When executing an aggregation query involving lookups, Druid can decide to apply lookups either while scanning and
@@ -285,7 +296,7 @@ A `DELETE` to `/druid/coordinator/v1/lookups/config/{tier}/{id}` will remove tha
 
 ## List tier names
 A `GET` to `/druid/coordinator/v1/lookups/config` will return a list of known tier names in the dynamic configuration.
-To discover a list of tiers currently active in the cluster **instead of** ones known in the dynamic configuration, the parameter `discover=true` can be added as per `/druid/coordinator/v1/lookups?discover=true`.
+To discover a list of tiers currently active in the cluster in addition to ones known in the dynamic configuration, the parameter `discover=true` can be added as per `/druid/coordinator/v1/lookups/config?discover=true`.
 
 ## List lookup names
 A `GET` to `/druid/coordinator/v1/lookups/config/{tier}` will return a list of known lookup names for that tier.
diff --git a/docs/0.14.0-incubating/querying/multitenancy.md b/docs/0.14.0-incubating/querying/multitenancy.md
index cbac624..2405d96 100644
--- a/docs/0.14.0-incubating/querying/multitenancy.md
+++ b/docs/0.14.0-incubating/querying/multitenancy.md
@@ -57,7 +57,7 @@ If your multitenant cluster uses shared datasources, most of your queries will l
 dimension. These sorts of queries perform best when data is well-partitioned by tenant. There are a few ways to
 accomplish this.
 
-With batch indexing, you can use [single-dimension partitioning](../indexing/batch-ingestion.html#single-dimension-partitioning)
+With batch indexing, you can use [single-dimension partitioning](../ingestion/hadoop.html#single-dimension-partitioning)
 to partition your data by tenant_id. Druid always partitions by time first, but the secondary partition within each
 time bucket will be on tenant_id.
 
diff --git a/docs/0.14.0-incubating/querying/query-context.md b/docs/0.14.0-incubating/querying/query-context.md
index abcdf3d..d9d8218 100644
--- a/docs/0.14.0-incubating/querying/query-context.md
+++ b/docs/0.14.0-incubating/querying/query-context.md
@@ -33,8 +33,8 @@ The query context is used for various query configuration parameters. The follow
 |queryId          | auto-generated                         | Unique identifier given to this query. If a query ID is set or known, this can be used to cancel the query |
 |useCache         | `true`                                 | Flag indicating whether to leverage the query cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Apache Druid (incubating) uses druid.broker.cache.useCache or druid.historical.cache.useCache to determine whether or not to read from the query cache |
 |populateCache    | `true`                                 | Flag indicating whether to save the results of the query to the query cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache or druid.historical.cache.populateCache to determine whether or not to save the results of this query to the query cache |
-|useResultLevelCache         | `false`                                 | Flag indicating whether to leverage the result level cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or not to read from the query cache |
-|populateResultLevelCache    | `false`                                 | Flag indicating whether to save the results of the query to the result level cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache to determine whether or not to save the results of this query to the query cache |
+|useResultLevelCache         | `true`                      | Flag indicating whether to leverage the result level cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or not to read from the result-level query cache |
+|populateResultLevelCache    | `true`                      | Flag indicating whether to save the results of the query to the result level cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateResultLevelCache to determine whether or not to save the results of this query to the result-level query cache |
 |bySegment        | `false`                                | Return "by segment" results. Primarily used for debugging, setting it to `true` returns results associated with the data segment they came from |
 |finalize         | `true`                                 | Flag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the `hyperUnique` aggregator will return the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to `false` |
 |chunkPeriod      | `P0D` (off)                            | At the Broker process level, long interval queries (of any type) may be broken into shorter interval queries to parallelize merging more than normal. Broken up queries will use a larger share of cluster resources, but, if you use groupBy "v1", it may be able to complete faster as a result. Use ISO 8601 periods. For example, if this property is set to `P1M` (one month), then a query covering a year would be broken into 12 smaller [...]
diff --git a/docs/0.14.0-incubating/querying/querying.md b/docs/0.14.0-incubating/querying/querying.md
index 73c56bd..b3c4fd2 100644
--- a/docs/0.14.0-incubating/querying/querying.md
+++ b/docs/0.14.0-incubating/querying/querying.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Querying"
+title: "Native queries"
 ---
 
 <!--
@@ -22,26 +22,28 @@ title: "Querying"
   ~ under the License.
   -->
 
-# Querying
+# Native queries
 
-Apache Druid (incubating) queries are made using an HTTP REST style request to queryable processes ([Broker](../design/broker.html),
-[Historical](../design/historical.html). [Peons](../design/peons.html)) that are running stream ingestion tasks can also accept queries. The
-query is expressed in JSON and each of these process types expose the same
-REST query interface. For normal Druid operations, queries should be issued to the Broker processes. Queries can be posted
-to the queryable processes like this -
+<div class="note info">
+Apache Druid (incubating) supports two query languages: [Druid SQL](sql.html) and native queries, which SQL queries
+are planned into, and which end users can also issue directly. This document describes the native query language.
+</div>
 
- ```bash
- curl -X POST '<queryable_host>:<port>/druid/v2/?pretty' -H 'Content-Type:application/json' -H 'Accept:application/json' -d @<query_json_file>
- ```
+Native queries in Druid are JSON objects and are typically issued to the Broker or Router processes. Queries can be
+posted like this:
+
+```bash
+curl -X POST '<queryable_host>:<port>/druid/v2/?pretty' -H 'Content-Type:application/json' -H 'Accept:application/json' -d @<query_json_file>
+```
  
 Druid's native query language is JSON over HTTP, although many members of the community have contributed different 
-[client libraries](../development/libraries.html) in other languages to query Druid. 
+[client libraries](/libraries.html) in other languages to query Druid.
 
 The Content-Type/Accept Headers can also take 'application/x-jackson-smile'.
 
- ```bash
- curl -X POST '<queryable_host>:<port>/druid/v2/?pretty' -H 'Content-Type:application/json' -H 'Accept:application/x-jackson-smile' -d @<query_json_file>
- ```
+```bash
+curl -X POST '<queryable_host>:<port>/druid/v2/?pretty' -H 'Content-Type:application/json' -H 'Accept:application/x-jackson-smile' -d @<query_json_file>
+```
 
 Note: If the Accept header is not provided, it defaults to the value of the 'Content-Type' header.
 
@@ -49,6 +51,11 @@ Druid's native query is relatively low level, mapping closely to how computation
 are designed to be lightweight and complete very quickly. This means that for more complex analysis, or to build 
 more complex visualizations, multiple Druid queries may be required.
 
+Even though queries are typically made to Brokers or Routers, they can also be accepted by
+[Historical](../design/historical.html) processes and by [Peons (task JVMs)](../design/peons.html) that are running
+stream ingestion tasks. This may be valuable if you want to query results for specific segments that are served by
+specific processes.
+
 ## Available Queries
 
 Druid has numerous query types for various use cases. Queries are composed of various JSON properties and Druid has different types of queries for different use cases. The documentation for the various query types describe all the JSON properties that can be set.
@@ -122,4 +129,5 @@ Possible codes for the *error* field include:
 |`Query cancelled`|The query was cancelled through the query cancellation API.|
 |`Resource limit exceeded`|The query exceeded a configured resource limit (e.g. groupBy maxResults).|
 |`Unauthorized request.`|The query was denied due to security policy. Either the user was not recognized, or the user was recognized but does not have access to the requested resource.|
+|`Unsupported operation`|The query attempted to perform an unsupported operation. This may occur when using undocumented features or when using an incompletely implemented extension.|
 |`Unknown exception`|Some other exception occurred. Check errorMessage and errorClass for details, although keep in mind that the contents of those fields are free-form and may change from release to release.|
diff --git a/docs/0.14.0-incubating/querying/scan-query.md b/docs/0.14.0-incubating/querying/scan-query.md
index f7c56d7..1b9d360 100644
--- a/docs/0.14.0-incubating/querying/scan-query.md
+++ b/docs/0.14.0-incubating/querying/scan-query.md
@@ -24,7 +24,16 @@ title: "Scan query"
 
 # Scan query
 
-Scan query returns raw Apache Druid (incubating) rows in streaming mode.
+The Scan query returns raw Apache Druid (incubating) rows in streaming mode.  The biggest difference between the Select query and the Scan
+query is that the Scan query does not retain all the returned rows in memory before they are returned to the client.  
+The Select query _will_ retain the rows in memory, causing memory pressure if too many rows are returned.  
+The Scan query can return all the rows without issuing another pagination query.
+
+In addition to straightforward usage where a Scan query is issued to the Broker, the Scan query can also be issued
+directly to Historical processes or streaming ingestion tasks. This can be useful if you want to retrieve large 
+amounts of data in parallel.
+
+An example Scan query object is shown below:
 
 ```json
  {
@@ -36,28 +45,29 @@ Scan query returns raw Apache Druid (incubating) rows in streaming mode.
      "2013-01-01/2013-01-02"
    ],
    "batchSize":20480,
-   "limit":5
+   "limit":3
  }
 ```
 
-There are several main parts to a scan query:
+The following are the main parameters for Scan queries:
 
 |property|description|required?|
 |--------|-----------|---------|
 |queryType|This String should always be "scan"; this is the first thing Druid looks at to figure out how to interpret the query|yes|
 |dataSource|A String or Object defining the data source to query, very similar to a table in a relational database. See [DataSource](../querying/datasource.html) for more information.|yes|
 |intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
-|resultFormat|How result represented, list or compactedList or valueVector. Currently only `list` and `compactedList` are supported. Default is `list`|no|
+|resultFormat|How the results are represented: list, compactedList or valueVector. Currently only `list` and `compactedList` are supported. Default is `list`|no|
 |filter|See [Filters](../querying/filters.html)|no|
 |columns|A String array of dimensions and metrics to scan. If left empty, all dimensions and metrics are returned.|no|
 |batchSize|How many rows buffered before return to client. Default is `20480`|no|
 |limit|How many rows to return. If not specified, all rows will be returned.|no|
+|order|The ordering of returned rows based on timestamp.  "ascending", "descending", and "none" (default) are supported.  Currently, "ascending" and "descending" are only supported for queries where the `__time` column is included in the `columns` field and the requirements outlined in the [time ordering](#time-ordering) section are met.|no|
 |legacy|Return results consistent with the legacy "scan-query" contrib extension. Defaults to the value set by `druid.query.scan.legacy`, which in turn defaults to false. See [Legacy mode](#legacy-mode) for details.|no|
-|context|An additional JSON Object which can be used to specify certain flags.|no|
+|context|An additional JSON Object which can be used to specify certain flags (see the Query Context Properties section below).|no|
 
 ## Example results
 
-The format of the result when resultFormat equals to `list`:
+The format of the result when resultFormat equals `list`:
 
 ```json
  [{
@@ -123,41 +133,11 @@ The format of the result when resultFormat equals to `list`:
         "delta" : 77.0,
         "variation" : 77.0,
         "deleted" : 0.0
-    }, {
-        "timestamp" : "2013-01-01T00:00:00.000Z",
-        "robot" : "0",
-        "namespace" : "article",
-        "anonymous" : "0",
-        "unpatrolled" : "0",
-        "page" : "113_U.S._73",
-        "language" : "en",
-        "newpage" : "1",
-        "user" : "MZMcBride",
-        "count" : 1.0,
-        "added" : 70.0,
-        "delta" : 70.0,
-        "variation" : 70.0,
-        "deleted" : 0.0
-    }, {
-        "timestamp" : "2013-01-01T00:00:00.000Z",
-        "robot" : "0",
-        "namespace" : "article",
-        "anonymous" : "0",
-        "unpatrolled" : "0",
-        "page" : "113_U.S._756",
-        "language" : "en",
-        "newpage" : "1",
-        "user" : "MZMcBride",
-        "count" : 1.0,
-        "added" : 68.0,
-        "delta" : 68.0,
-        "variation" : 68.0,
-        "deleted" : 0.0
     } ]
 } ]
 ```
 
-The format of the result when resultFormat equals to `compactedList`:
+The format of the result when resultFormat equals `compactedList`:
 
 ```json
  [{
@@ -168,18 +148,41 @@ The format of the result when resultFormat equals to `compactedList`:
     "events" : [
      ["2013-01-01T00:00:00.000Z", "1", "article", "0", "0", "11._korpus_(NOVJ)", "sl", "0", "EmausBot", 1.0, 39.0, 39.0, 39.0, 0.0],
      ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "112_U.S._580", "en", "1", "MZMcBride", 1.0, 70.0, 70.0, 70.0, 0.0],
-     ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._243", "en", "1", "MZMcBride", 1.0, 77.0, 77.0, 77.0, 0.0],
-     ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._73", "en", "1", "MZMcBride", 1.0, 70.0, 70.0, 70.0, 0.0],
-     ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._756", "en", "1", "MZMcBride", 1.0, 68.0, 68.0, 68.0, 0.0]
+     ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._243", "en", "1", "MZMcBride", 1.0, 77.0, 77.0, 77.0, 0.0]
     ]
 } ]
 ```
 
-The biggest difference between select query and scan query is that, scan query doesn't retain all rows in memory before rows can be returned to client.  
-It will cause memory pressure if too many rows required by select query.  
-Scan query doesn't have this issue.  
-Scan query can return all rows without issuing another pagination query, which is extremely useful when query against Historical or realtime process directly.
-
+## Time Ordering
+
+The Scan query currently supports ordering based on timestamp for non-legacy queries.  Note that using time ordering
+will yield results that do not indicate which segment rows are from (`segmentId` will show up as `null`).  Furthermore,
+time ordering is only supported where the result set limit is less than `druid.query.scan.maxRowsQueuedForOrdering` 
+rows **or** all segments scanned have fewer than `druid.query.scan.maxSegmentPartitionsOrderedInMemory` partitions.  Also,
+time ordering is not supported for queries issued directly to historicals unless a list of segments is specified.  The 
+reasoning behind these limitations is that the implementation of time ordering uses two strategies that can consume too 
+much heap memory if left unbounded.  These strategies (listed below) are chosen on a per-Historical basis depending on
+query result set limit and the number of segments being scanned.
+
+1. Priority Queue: Each segment on a Historical is opened sequentially.  Every row is added to a bounded priority
+queue which is ordered by timestamp.  For every row above the result set limit, the row with the earliest (if descending)
+or latest (if ascending) timestamp will be dequeued.  After every row has been processed, the sorted contents of the
+priority queue are streamed back to the Broker(s) in batches.  Attempting to load too many rows into memory runs the
+risk of Historical nodes running out of memory.  The `druid.query.scan.maxRowsQueuedForOrdering` property protects
+from this by limiting the number of rows in the query result set when time ordering is used.
+
+2. N-Way Merge: For each segment, each partition is opened in parallel.  Since each partition's rows are already
+time-ordered, an n-way merge can be performed on the results from each partition.  This approach doesn't persist the entire
+result set in memory (like the Priority Queue) as it streams back batches as they are returned from the merge function.
+However, attempting to query too many partitions could also result in high memory usage due to the need to open 
+decompression and decoding buffers for each.  The `druid.query.scan.maxSegmentPartitionsOrderedInMemory` limit protects
+from this by capping the number of partitions opened at any time when time ordering is used.
+
+Both `druid.query.scan.maxRowsQueuedForOrdering` and `druid.query.scan.maxSegmentPartitionsOrderedInMemory` are 
+configurable and can be tuned based on hardware specs and number of dimensions being queried.  These config properties
+can also be overridden using the `maxRowsQueuedForOrdering` and `maxSegmentPartitionsOrderedInMemory` properties in 
+the query context (see the Query Context Properties section).
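+
+For example, a sketch of a time-ordered Scan query (assuming a hypothetical `wikipedia` datasource) that satisfies these requirements by including the `__time` column and a small limit is:
+
+```json
+{
+  "queryType": "scan",
+  "dataSource": "wikipedia",
+  "intervals": ["2013-01-01/2013-01-02"],
+  "columns": ["__time", "page", "added"],
+  "order": "ascending",
+  "limit": 1000
+}
+```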
+  
 ## Legacy mode
 
 The Scan query supports a legacy mode designed for protocol compatibility with the former scan-query contrib extension.
@@ -194,3 +197,30 @@ Legacy mode can be triggered either by passing `"legacy" : true` in your query J
 `druid.query.scan.legacy = true` on your Druid processes. If you were previously using the scan-query contrib extension,
 the best way to migrate is to activate legacy mode during a rolling upgrade, then switch it off after the upgrade
 is complete.
+
+## Configuration Properties
+
+Configuration properties:
+
+|property|description|values|default|
+|--------|-----------|------|-------|
+|druid.query.scan.maxRowsQueuedForOrdering|The maximum number of rows returned when time ordering is used|An integer in [1, 2147483647]|100000|
+|druid.query.scan.maxSegmentPartitionsOrderedInMemory|The maximum number of segments scanned per historical when time ordering is used|An integer in [1, 2147483647]|50|
+|druid.query.scan.legacy|Whether legacy mode should be turned on for Scan queries|true or false|false|
+
+
+## Query Context Properties
+
+|property|description|values|default|
+|--------|-----------|------|-------|
+|maxRowsQueuedForOrdering|The maximum number of rows returned when time ordering is used.  Overrides the identically named config.|An integer in [1, 2147483647]|`druid.query.scan.maxRowsQueuedForOrdering`|
+|maxSegmentPartitionsOrderedInMemory|The maximum number of segments scanned per historical when time ordering is used.  Overrides the identically named config.|An integer in [1, 2147483647]|`druid.query.scan.maxSegmentPartitionsOrderedInMemory`|
+
+Sample query context JSON object:
+
+```json
+{
+  "maxRowsQueuedForOrdering": 100001,
+  "maxSegmentPartitionsOrderedInMemory": 100	
+}
+```
diff --git a/docs/0.14.0-incubating/querying/select-query.md b/docs/0.14.0-incubating/querying/select-query.md
index 8df2155..4c7ba20 100644
--- a/docs/0.14.0-incubating/querying/select-query.md
+++ b/docs/0.14.0-incubating/querying/select-query.md
@@ -24,7 +24,15 @@ title: "Select Queries"
 
 # Select Queries
 
-Select queries return raw Apache Druid (incubating) rows and support pagination.
+<div class="note caution">
+We encourage you to use the [Scan query](../querying/scan-query.html) type rather than Select whenever possible.
+In situations involving larger numbers of segments, the Select query can have very high memory and performance overhead.
+The Scan query does not have this issue.
+The major difference between the two is that the Scan query does not support pagination.
+However, the Scan query type is able to return a virtually unlimited number of results even without pagination, making pagination unnecessary in many cases.
+</div>
+
+Select queries return raw Druid rows and support pagination.
 
 ```json
  {
@@ -41,13 +49,6 @@ Select queries return raw Apache Druid (incubating) rows and support pagination.
  }
 ```
 
-<div class="note info">
-Consider using the [Scan query](../querying/scan-query.html) instead of the Select query if you don't need pagination, and you
-don't need the strict time-ascending or time-descending ordering offered by the Select query. The Scan query returns
-results without pagination, and offers "looser" ordering than Select, but is significantly more efficient in terms of
-both processing time and memory requirements. It is also capable of returning a virtually unlimited number of results.
-</div>
-
 There are several main parts to a select query:
 
 |property|description|required?|
diff --git a/docs/0.14.0-incubating/querying/sql.md b/docs/0.14.0-incubating/querying/sql.md
index 84be5b1..6c68a5f 100644
--- a/docs/0.14.0-incubating/querying/sql.md
+++ b/docs/0.14.0-incubating/querying/sql.md
@@ -22,14 +22,21 @@ title: "SQL"
   ~ under the License.
   -->
 
+<!--
+  The format of the tables that describe the functions and operators
+  should not be changed without updating the script create-sql-function-doc
+  in web-console/script/create-sql-function-doc, because the script detects
+  patterns in this markdown file and parses it into a TypeScript file for the web console
+-->
+
 # SQL
 
-<div class="note caution">
-Built-in SQL is an <a href="../development/experimental.html">experimental</a> feature. The API described here is
-subject to change.
+<div class="note info">
+Apache Druid (incubating) supports two query languages: Druid SQL and [native queries](querying.html), which SQL queries
+are planned into, and which end users can also issue directly. This document describes the SQL language.
 </div>
 
-Apache Druid (incubating) SQL is a built-in SQL layer and an alternative to Druid's native JSON-based query language, and is powered by a
+Druid SQL is a built-in SQL layer and an alternative to Druid's native JSON-based query language, and is powered by a
 parser and planner based on [Apache Calcite](https://calcite.apache.org/). Druid SQL translates SQL into native Druid
 queries on the query Broker (the first process you query), which are then passed down to data processes as native Druid
 queries. Other than the (slight) overhead of translating SQL on the Broker, there isn't an additional performance
@@ -118,7 +125,7 @@ Only the COUNT aggregation can accept DISTINCT.
 |`MIN(expr)`|Takes the minimum of numbers.|
 |`MAX(expr)`|Takes the maximum of numbers.|
 |`AVG(expr)`|Averages numbers.|
-|`APPROX_COUNT_DISTINCT(expr)`|Counts distinct values of expr, which can be a regular column or a hyperUnique column. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`.|
+|`APPROX_COUNT_DISTINCT(expr)`|Counts distinct values of expr, which can be a regular column or a hyperUnique column. This is always approximate, regardless of the value of "useApproximateCountDistinct". This uses Druid's built-in "cardinality" or "hyperUnique" aggregators. See also `COUNT(DISTINCT expr)`.|
 |`APPROX_COUNT_DISTINCT_DS_HLL(expr, [lgK, tgtHllType])`|Counts distinct values of expr, which can be a regular column or an [HLL sketch](../development/extensions-core/datasketches-hll.html) column. The `lgK` and `tgtHllType` parameters are described in the HLL sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) [...]
 |`APPROX_COUNT_DISTINCT_DS_THETA(expr, [size])`|Counts distinct values of expr, which can be a regular column or a [Theta sketch](../development/extensions-core/datasketches-theta.html) column. The `size` parameter is described in the Theta sketch documentation. This is always approximate, regardless of the value of "useApproximateCountDistinct". See also `COUNT(DISTINCT expr)`. The [DataSketches extension](../development/extensions-core/datasketches-extension.html) must be loaded to use [...]
 |`APPROX_QUANTILE(expr, probability, [resolution])`|Computes approximate quantiles on numeric or [approxHistogram](../development/extensions-core/approximate-histograms.html#approximate-histogram-aggregator) exprs. The "probability" should be between 0 and 1 (exclusive). The "resolution" is the number of centroids to use for the computation. Higher resolutions will give more precise results but also have higher overhead. If not provided, the default resolution is 50. The [approximate his [...]
@@ -126,6 +133,8 @@ Only the COUNT aggregation can accept DISTINCT.
 |`APPROX_QUANTILE_FIXED_BUCKETS(expr, probability, numBuckets, lowerLimit, upperLimit, [outlierHandlingMode])`|Computes approximate quantiles on numeric or [fixed buckets histogram](../development/extensions-core/approximate-histograms.html#fixed-buckets-histogram) exprs. The "probability" should be between 0 and 1 (exclusive). The `numBuckets`, `lowerLimit`, `upperLimit`, and `outlierHandlingMode` parameters are described in the fixed buckets histogram documentation. The [approximate hi [...]
 |`BLOOM_FILTER(expr, numEntries)`|Computes a bloom filter from values produced by `expr`, with `numEntries` maximum number of distinct values before false positve rate increases. See [bloom filter extension](../development/extensions-core/bloom-filter.html) documentation for additional details.|
 
+For advice on choosing approximate aggregation functions, check out our [approximate aggregations documentation](aggregations.html#approx).
+
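+For example, assuming a hypothetical `wikipedia` datasource with a `user` dimension, an approximate distinct count can be combined with ordinary aggregations:
+
+```sql
+-- "wikipedia" and "user" are placeholder names, not part of any bundled schema.
+-- APPROX_COUNT_DISTINCT is always approximate; COUNT(DISTINCT ...) is exact only when
+-- the "useApproximateCountDistinct" context parameter is set to false.
+SELECT
+  FLOOR(__time TO DAY) AS "day",
+  APPROX_COUNT_DISTINCT("user") AS approx_unique_users,
+  COUNT(*) AS edits
+FROM wikipedia
+GROUP BY FLOOR(__time TO DAY)
+```
+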
 ### Numeric functions
 
 Numeric functions will return 64 bit integers or 64 bit floats, depending on their inputs.
@@ -142,11 +151,22 @@ Numeric functions will return 64 bit integers or 64 bit floats, depending on the
 |`SQRT(expr)`|Square root.|
 |`TRUNCATE(expr[, digits])`|Truncate expr to a specific number of decimal digits. If digits is negative, then this truncates that many places to the left of the decimal point. Digits defaults to zero if not specified.|
 |`TRUNC(expr[, digits])`|Synonym for `TRUNCATE`.|
+|`ROUND(expr[, digits])`|`ROUND(x, y)` returns the value of x rounded to y decimal places. x can be an integer or a floating-point number, while y must be an integer. The type of the return value matches that of x. y defaults to 0 if omitted. When y is negative, x is rounded to the left of the decimal point (for example, y = -2 rounds to the nearest hundred).|
 |`x + y`|Addition.|
 |`x - y`|Subtraction.|
 |`x * y`|Multiplication.|
 |`x / y`|Division.|
 |`MOD(x, y)`|Modulo (remainder of x divided by y).|
+|`SIN(expr)`|Trigonometric sine of an angle expr.|
+|`COS(expr)`|Trigonometric cosine of an angle expr.|
+|`TAN(expr)`|Trigonometric tangent of an angle expr.|
+|`COT(expr)`|Trigonometric cotangent of an angle expr.|
+|`ASIN(expr)`|Arc sine of expr.|
+|`ACOS(expr)`|Arc cosine of expr.|
+|`ATAN(expr)`|Arc tangent of expr.|
+|`ATAN2(y, x)`|Angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta).|
+|`DEGREES(expr)`|Converts an angle measured in radians to an approximately equivalent angle measured in degrees.|
+|`RADIANS(expr)`|Converts an angle measured in degrees to an approximately equivalent angle measured in radians.|
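+
+For example, assuming a hypothetical `wikipedia` datasource, `ROUND` and the trigonometric functions can be used like so:
+
+```sql
+-- Illustrative only; the datasource name is a placeholder.
+SELECT
+  ROUND(3.14159, 2)    AS pi_2dp,        -- 3.14
+  ROUND(1234.5, -2)    AS to_hundreds,   -- 1200.0 (rounds to the left of the decimal point)
+  DEGREES(ATAN2(1, 1)) AS angle_degrees  -- approximately 45
+FROM wikipedia
+LIMIT 1
+```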
 
 ### String functions
 
@@ -154,26 +174,35 @@ String functions accept strings, and return a type appropriate to the function.
 
 |Function|Notes|
 |--------|-----|
-|`x \|\| y`|Concat strings x and y.|
+|<code>x &#124;&#124; y</code>|Concat strings x and y.|
 |`CONCAT(expr, expr...)`|Concats a list of expressions.|
 |`TEXTCAT(expr, expr)`|Two argument version of CONCAT.|
+|`STRING_FORMAT(pattern[, args...])`|Returns a string formatted in the manner of Java's [String.format](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#format-java.lang.String-java.lang.Object...-).|
 |`LENGTH(expr)`|Length of expr in UTF-16 code units.|
 |`CHAR_LENGTH(expr)`|Synonym for `LENGTH`.|
 |`CHARACTER_LENGTH(expr)`|Synonym for `LENGTH`.|
 |`STRLEN(expr)`|Synonym for `LENGTH`.|
 |`LOOKUP(expr, lookupName)`|Look up expr in a registered [query-time lookup table](lookups.html).|
 |`LOWER(expr)`|Returns expr in all lowercase.|
+|`PARSE_LONG(string[, radix])`|Parses a string into a long (BIGINT) with the given radix, or 10 (decimal) if a radix is not provided.|
 |`POSITION(needle IN haystack [FROM fromIndex])`|Returns the index of needle within haystack, with indexes starting from 1. The search will begin at fromIndex, or 1 if fromIndex is not specified. If the needle is not found, returns 0.|
 |`REGEXP_EXTRACT(expr, pattern, [index])`|Apply regular expression pattern and extract a capture group, or null if there is no match. If index is unspecified or zero, returns the substring that matched the pattern.|
 |`REPLACE(expr, pattern, replacement)`|Replaces pattern with replacement in expr, and returns the result.|
 |`STRPOS(haystack, needle)`|Returns the index of needle within haystack, with indexes starting from 1. If the needle is not found, returns 0.|
 |`SUBSTRING(expr, index, [length])`|Returns a substring of expr starting at index, with a max length, both measured in UTF-16 code units.|
+|`RIGHT(expr, [length])`|Returns the rightmost length characters from expr.|
+|`LEFT(expr, [length])`|Returns the leftmost length characters from expr.|
 |`SUBSTR(expr, index, [length])`|Synonym for SUBSTRING.|
-|`TRIM([BOTH \| LEADING \| TRAILING] [<chars> FROM] expr)`|Returns expr with characters removed from the leading, trailing, or both ends of "expr" if they are in "chars". If "chars" is not provided, it defaults to " " (a space). If the directional argument is not provided, it defaults to "BOTH".|
+|`TRIM([BOTH &#124; LEADING &#124; TRAILING] [<chars> FROM] expr)`|Returns expr with characters removed from the leading, trailing, or both ends of "expr" if they are in "chars". If "chars" is not provided, it defaults to " " (a space). If the directional argument is not provided, it defaults to "BOTH".|
 |`BTRIM(expr[, chars])`|Alternate form of `TRIM(BOTH <chars> FROM <expr>`).|
 |`LTRIM(expr[, chars])`|Alternate form of `TRIM(LEADING <chars> FROM <expr>`).|
 |`RTRIM(expr[, chars])`|Alternate form of `TRIM(TRAILING <chars> FROM <expr>`).|
 |`UPPER(expr)`|Returns expr in all uppercase.|
+|`REVERSE(expr)`|Reverses expr.|
+|`REPEAT(expr, [N])`|Repeats expr N times.|
+|`LPAD(expr, length[, chars])`|Returns a string of "length" from "expr", left-padded with "chars". If "length" is shorter than the length of "expr", the result is "expr" truncated to "length". If either "expr" or "chars" is null, the result will be null.|
+|`RPAD(expr, length[, chars])`|Returns a string of "length" from "expr", right-padded with "chars". If "length" is shorter than the length of "expr", the result is "expr" truncated to "length". If either "expr" or "chars" is null, the result will be null.|
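+
+For example, assuming a hypothetical `wikipedia` datasource, some of the newer string functions can be combined as follows:
+
+```sql
+-- Illustrative only; the datasource name is a placeholder.
+SELECT
+  STRING_FORMAT('%s-%s', 'druid', 'sql') AS formatted,    -- 'druid-sql'
+  LPAD('7', 3, '0')                      AS zero_padded,  -- '007'
+  PARSE_LONG('ff', 16)                   AS from_hex      -- 255
+FROM wikipedia
+LIMIT 1
+```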
+
 
 ### Time functions
 
@@ -201,7 +230,7 @@ over the connection time zone.
 |`FLOOR(timestamp_expr TO <unit>)`|Rounds down a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`CEIL(timestamp_expr TO <unit>)`|Rounds up a timestamp, returning it as a new timestamp. Unit can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, or YEAR.|
 |`TIMESTAMPADD(<unit>, <count>, <timestamp>)`|Equivalent to `timestamp + count * INTERVAL '1' UNIT`.|
-|`timestamp_expr { + \| - } <interval_expr>`|Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like `INTERVAL '2' HOUR`, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.|
+|`timestamp_expr { + &#124; - } <interval_expr>`|Add or subtract an amount of time from a timestamp. interval_expr can include interval literals like `INTERVAL '2' HOUR`, and may include interval arithmetic as well. This operator treats days as uniformly 86400 seconds long, and does not take into account daylight savings time. To account for daylight savings time, use TIME_SHIFT instead.|
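+
+For example, assuming a hypothetical `wikipedia` datasource, the difference between fixed-length interval arithmetic and `TIME_SHIFT` can be seen side by side:
+
+```sql
+-- Illustrative only; the datasource name is a placeholder.
+-- "+ INTERVAL" treats a day as exactly 86400 seconds, while TIME_SHIFT shifts by
+-- calendar periods (optionally in a specific time zone).
+SELECT
+  __time,
+  __time + INTERVAL '1' DAY    AS plus_fixed_day,
+  TIME_SHIFT(__time, 'P1D', 1) AS plus_calendar_day
+FROM wikipedia
+LIMIT 5
+```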
 
 ### Comparison operators
 
@@ -305,9 +334,7 @@ converted to zeroes).
 
 ## Query execution
 
-Queries without aggregations will use Druid's [Scan](scan-query.html) or [Select](select-query.html) native query types.
-Scan is used whenever possible, as it is generally higher performance and more efficient than Select. However, Select
-is used in one case: when the query includes an `ORDER BY __time`, since Scan does not have a sorting feature.
+Queries without aggregations will use Druid's [Scan](scan-query.html) native query type.
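+
+To check which native query type a particular SQL statement planned into, you can prepend `EXPLAIN PLAN FOR` to it. For example, assuming a hypothetical `wikipedia` datasource with a `page` dimension:
+
+```sql
+-- The result contains the native (JSON) query that the Broker would issue.
+EXPLAIN PLAN FOR
+SELECT __time, page
+FROM wikipedia
+LIMIT 10
+```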
 
 Aggregation queries (using GROUP BY, DISTINCT, or any aggregation functions) will use one of Druid's three native
 aggregation query types. Two (Timeseries and TopN) are specialized for specific types of aggregations, whereas the other
@@ -499,7 +526,6 @@ Connection context can be specified as JDBC connection properties or as a "conte
 |`sqlTimeZone`|Sets the time zone for this connection, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|druid.sql.planner.sqlTimeZone on the Broker (default: UTC)|
 |`useApproximateCountDistinct`|Whether to use an approximate cardinalty algorithm for `COUNT(DISTINCT foo)`.|druid.sql.planner.useApproximateCountDistinct on the Broker (default: true)|
 |`useApproximateTopN`|Whether to use approximate [TopN queries](topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](groupbyquery.html) will be used instead.|druid.sql.planner.useApproximateTopN on the Broker (default: true)|
-|`useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|druid.sql.planner.useFallback on the Broker (default: false)|
 
 ### Retrieving metadata
 
@@ -574,21 +600,22 @@ Segments table provides details on all Druid segments, whether they are publishe
 #### CAVEAT
 Note that a segment can be served by more than one stream ingestion tasks or Historical processes, in that case it would have multiple replicas. These replicas are weakly consistent with each other when served by multiple ingestion tasks, until a segment is eventually served by a Historical, at that point the segment is immutable. Broker prefers to query a segment from Historical over an ingestion task. But if a segment has multiple realtime replicas, for eg. kafka index tasks, and one t [...]
 
-|Column|Notes|
-|------|-----|
-|segment_id|Unique segment identifier|
-|datasource|Name of datasource|
-|start|Interval start time (in ISO 8601 format)|
-|end|Interval end time (in ISO 8601 format)|
-|size|Size of segment in bytes|
-|version|Version string (generally an ISO8601 timestamp corresponding to when the segment set was first started). Higher version means the more recently created segment. Version comparing is based on string comparison.|
-|partition_num|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
-|num_replicas|Number of replicas of this segment currently being served|
-|num_rows|Number of rows in current segment, this value could be null if unkown to Broker at query time|
-|is_published|Boolean is represented as long type where 1 = true, 0 = false. 1 represents this segment has been published to the metadata store|
-|is_available|Boolean is represented as long type where 1 = true, 0 = false. 1 if this segment is currently being served by any server(Historical or realtime)|
-|is_realtime|Boolean is represented as long type where 1 = true, 0 = false. 1 if this segment is being served on any type of realtime tasks|
-|payload|JSON-serialized data segment payload|
+|Column|Type|Notes|
+|------|-----|-----|
+|segment_id|STRING|Unique segment identifier|
+|datasource|STRING|Name of datasource|
+|start|STRING|Interval start time (in ISO 8601 format)|
+|end|STRING|Interval end time (in ISO 8601 format)|
+|size|LONG|Size of segment in bytes|
+|version|STRING|Version string (generally an ISO8601 timestamp corresponding to when the segment set was first started). A higher version means a more recently created segment. Versions are compared as strings.|
+|partition_num|LONG|Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous)|
+|num_replicas|LONG|Number of replicas of this segment currently being served|
+|num_rows|LONG|Number of rows in current segment, this value could be null if unknown to Broker at query time|
+|is_published|LONG|Boolean is represented as long type where 1 = true, 0 = false. 1 represents that this segment has been published to the metadata store with `used=1`|
+|is_available|LONG|Boolean is represented as long type where 1 = true, 0 = false. 1 if this segment is currently being served by any process (Historical or realtime)|
+|is_realtime|LONG|Boolean is represented as long type where 1 = true, 0 = false. 1 if this segment is being served on any type of realtime tasks|
+|is_overshadowed|LONG|Boolean is represented as long type where 1 = true, 0 = false. 1 if this segment is published and is _fully_ overshadowed by some other published segments. Currently, is_overshadowed is always false for unpublished segments, although this may change in the future. You can filter for segments that "should be published" by filtering for `is_published = 1 AND is_overshadowed = 0`. Segments can briefly be both published and overshadowed if they were recently replaced, b [...]
+|payload|STRING|JSON-serialized data segment payload|
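+
+As an illustration of the `is_overshadowed` note above, segments that "should be published" can be listed with:
+
+```sql
+-- Published segments that are not fully overshadowed by other published segments.
+SELECT segment_id, datasource, "start", "end", num_rows
+FROM sys.segments
+WHERE is_published = 1 AND is_overshadowed = 0
+```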
 
 For example to retrieve all segments for datasource "wikipedia", use the query:
 
@@ -613,16 +640,16 @@ ORDER BY 2 DESC
 ### SERVERS table
 Servers table lists all data servers(any server that hosts a segment). It includes both Historicals and Peons.
 
-|Column|Notes|
-|------|-----|
-|server|Server name in the form host:port|
-|host|Hostname of the server|
-|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is disabled|
-|tls_port|TLS port of the server, or -1 if TLS is disabled|
-|server_type|Type of Druid service. Possible values include: Historical, realtime and indexer_executor(Peon).|
-|tier|Distribution tier see [druid.server.tier](#../configuration/index.html#Historical-General-Configuration)|
-|current_size|Current size of segments in bytes on this server|
-|max_size|Max size in bytes this server recommends to assign to segments see [druid.server.maxSize](#../configuration/index.html#Historical-General-Configuration)|
+|Column|Type|Notes|
+|------|-----|-----|
+|server|STRING|Server name in the form host:port|
+|host|STRING|Hostname of the server|
+|plaintext_port|LONG|Unsecured port of the server, or -1 if plaintext traffic is disabled|
+|tls_port|LONG|TLS port of the server, or -1 if TLS is disabled|
+|server_type|STRING|Type of Druid service. Possible values include: Historical, realtime, and indexer_executor (Peon).|
+|tier|STRING|Distribution tier, see [druid.server.tier](../configuration/index.html#Historical-General-Configuration)|
+|current_size|LONG|Current size of segments in bytes on this server|
+|max_size|LONG|Max size in bytes this server recommends to assign to segments, see [druid.server.maxSize](../configuration/index.html#Historical-General-Configuration)|
 
 To retrieve information about all servers, use the query:
 
@@ -634,44 +661,44 @@ SELECT * FROM sys.servers;
 
 SERVER_SEGMENTS is used to join servers with segments table
 
-|Column|Notes|
-|------|-----|
-|server|Server name in format host:port (Primary key of [servers table](#SERVERS-table))|
-|segment_id|Segment identifier (Primary key of [segments table](#SEGMENTS-table))|
+|Column|Type|Notes|
+|------|-----|-----|
+|server|STRING|Server name in format host:port (Primary key of [servers table](#SERVERS-table))|
+|segment_id|STRING|Segment identifier (Primary key of [segments table](#SEGMENTS-table))|
 
-JOIN between "servers" and "segments" can be used to query the number of segments for a specific datasource, 
+JOIN between "servers" and "segments" can be used to query the number of segments for a specific datasource,
 grouped by server, example query:
 
 ```sql
-SELECT count(segments.segment_id) as num_segments from sys.segments as segments 
-INNER JOIN sys.server_segments as server_segments 
-ON segments.segment_id  = server_segments.segment_id 
-INNER JOIN sys.servers as servers 
+SELECT count(segments.segment_id) as num_segments from sys.segments as segments
+INNER JOIN sys.server_segments as server_segments
+ON segments.segment_id  = server_segments.segment_id
+INNER JOIN sys.servers as servers
 ON servers.server = server_segments.server
-WHERE segments.datasource = 'wikipedia' 
+WHERE segments.datasource = 'wikipedia'
 GROUP BY servers.server;
 ```
 
 ### TASKS table
 
-The tasks table provides information about active and recently-completed indexing tasks. For more information 
+The tasks table provides information about active and recently-completed indexing tasks. For more information
 check out [ingestion tasks](#../ingestion/tasks.html)
 
-|Column|Notes|
-|------|-----|
-|task_id|Unique task identifier|
-|type|Task type, for example this value is "index" for indexing tasks. See [tasks-overview](../ingestion/tasks.html)|
-|datasource|Datasource name being indexed|
-|created_time|Timestamp in ISO8601 format corresponding to when the ingestion task was created. Note that this value is populated for completed and waiting tasks. For running and pending tasks this value is set to 1970-01-01T00:00:00Z|
-|queue_insertion_time|Timestamp in ISO8601 format corresponding to when this task was added to the queue on the Overlord|
-|status|Status of a task can be RUNNING, FAILED, SUCCESS|
-|runner_status|Runner status of a completed task would be NONE, for in-progress tasks this can be RUNNING, WAITING, PENDING|
-|duration|Time it took to finish the task in milliseconds, this value is present only for completed tasks|
-|location|Server name where this task is running in the format host:port, this information is present only for RUNNING tasks|
-|host|Hostname of the server where task is running|
-|plaintext_port|Unsecured port of the server, or -1 if plaintext traffic is disabled|
-|tls_port|TLS port of the server, or -1 if TLS is disabled|
-|error_msg|Detailed error message in case of FAILED tasks|
+|Column|Type|Notes|
+|------|-----|-----|
+|task_id|STRING|Unique task identifier|
+|type|STRING|Task type, for example this value is "index" for indexing tasks. See [tasks-overview](../ingestion/tasks.html)|
+|datasource|STRING|Datasource name being indexed|
+|created_time|STRING|Timestamp in ISO8601 format corresponding to when the ingestion task was created. Note that this value is populated for completed and waiting tasks. For running and pending tasks this value is set to 1970-01-01T00:00:00Z|
+|queue_insertion_time|STRING|Timestamp in ISO8601 format corresponding to when this task was added to the queue on the Overlord|
+|status|STRING|Status of a task can be RUNNING, FAILED, or SUCCESS|
+|runner_status|STRING|Runner status of a completed task is NONE; for in-progress tasks it can be RUNNING, WAITING, or PENDING|
+|duration|LONG|Time it took to finish the task in milliseconds; this value is present only for completed tasks|
+|location|STRING|Server name where this task is running in the format host:port; this information is present only for RUNNING tasks|
+|host|STRING|Hostname of the server where task is running|
+|plaintext_port|LONG|Unsecured port of the server, or -1 if plaintext traffic is disabled|
+|tls_port|LONG|TLS port of the server, or -1 if TLS is disabled|
+|error_msg|STRING|Detailed error message in case of FAILED tasks|
 
 For example, to retrieve tasks information filtered by status, use the query
 
@@ -698,10 +725,8 @@ The Druid SQL server is configured through the following properties on the Broke
 |`druid.sql.planner.maxSemiJoinRowsInMemory`|Maximum number of rows to keep in memory for executing two-stage semi-join queries like `SELECT * FROM Employee WHERE DeptName IN (SELECT DeptName FROM Dept)`.|100000|
 |`druid.sql.planner.maxTopNLimit`|Maximum threshold for a [TopN query](../querying/topnquery.html). Higher limits will be planned as [GroupBy queries](../querying/groupbyquery.html) instead.|100000|
 |`druid.sql.planner.metadataRefreshPeriod`|Throttle for metadata refreshes.|PT1M|
-|`druid.sql.planner.selectThreshold`|Page size threshold for [Select queries](../querying/select-query.html). Select queries for larger resultsets will be issued back-to-back using pagination.|1000|
 |`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinalty algorithm for `COUNT(DISTINCT foo)`.|true|
 |`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.html) will be used instead.|true|
-|`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|false|
 |`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have filter conditions on __time column so that all generated native queries will have user specified intervals. If true, all queries wihout filter condition on __time column will fail|false|
 |`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
 |`druid.sql.planner.metadataSegmentCacheEnable`|Whether to keep a cache of published segments in broker. If true, broker polls coordinator in background to get segments from metadata store and maintains a local cache. If false, coordinator's REST api will be invoked when broker needs published segments info.|false|
@@ -716,3 +741,7 @@ Broker will emit the following metrics for SQL.
 |`sqlQuery/time`|Milliseconds taken to complete a SQL.|id, nativeQueryIds, dataSource, remoteAddress, success.|< 1s|
 |`sqlQuery/bytes`|number of bytes returned in SQL response.|id, nativeQueryIds, dataSource, remoteAddress, success.| |
 
+
+## Authorization Permissions
+
+Please see [Defining SQL permissions](../development/extensions-core/druid-basic-security.html#sql-permissions) for information on what permissions are needed for making SQL queries in a secured cluster.
\ No newline at end of file
diff --git a/docs/0.14.0-incubating/toc.md b/docs/0.14.0-incubating/toc.md
index 138d9d1..6ee4908 100644
--- a/docs/0.14.0-incubating/toc.md
+++ b/docs/0.14.0-incubating/toc.md
@@ -24,27 +24,31 @@ layout: toc
 ## Getting Started
   * [Design](/docs/VERSION/design/index.html)
     * [What is Druid?](/docs/VERSION/design/index.html#what-is-druid)
-    * [When should I use Druid](/docs/VERSION/design/index.html#when-to-use-druid)
+    * [When should I use Druid?](/docs/VERSION/design/index.html#when-to-use-druid)
     * [Architecture](/docs/VERSION/design/index.html#architecture)
     * [Datasources & Segments](/docs/VERSION/design/index.html#datasources-and-segments)
     * [Query processing](/docs/VERSION/design/index.html#query-processing)
     * [External dependencies](/docs/VERSION/design/index.html#external-dependencies)
     * [Ingestion overview](/docs/VERSION/ingestion/index.html)
-  * [Quickstart](/docs/VERSION/tutorials/index.html)
-    * [Tutorial: Loading a file](/docs/VERSION/tutorials/tutorial-batch.html)
-    * [Tutorial: Loading stream data from Apache Kafka](/docs/VERSION/tutorials/tutorial-kafka.html)
-    * [Tutorial: Loading a file using Apache Hadoop](/docs/VERSION/tutorials/tutorial-batch-hadoop.html)
-    * [Tutorial: Loading stream data using HTTP push](/docs/VERSION/tutorials/tutorial-tranquility.html)
-    * [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html)
-  * Further tutorials
-    * [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html)
-    * [Tutorial: Configuring retention](/docs/VERSION/tutorials/tutorial-retention.html)
-    * [Tutorial: Updating existing data](/docs/VERSION/tutorials/tutorial-update-data.html)
-    * [Tutorial: Compacting segments](/docs/VERSION/tutorials/tutorial-compaction.html)
-    * [Tutorial: Deleting data](/docs/VERSION/tutorials/tutorial-delete-data.html)
-    * [Tutorial: Writing your own ingestion specs](/docs/VERSION/tutorials/tutorial-ingestion-spec.html)
-    * [Tutorial: Transforming input data](/docs/VERSION/tutorials/tutorial-transform-spec.html)
-  * [Clustering](/docs/VERSION/tutorials/cluster.html)
+  * [Getting Started](/docs/VERSION/operations/getting-started.html)
+    * [Single-server Quickstart](/docs/VERSION/tutorials/index.html)
+      * [Tutorial: Loading a file from local disk](/docs/VERSION/tutorials/tutorial-batch.html)
+      * [Tutorial: Loading stream data from Apache Kafka](/docs/VERSION/tutorials/tutorial-kafka.html)
+      * [Tutorial: Loading a file using Apache Hadoop](/docs/VERSION/tutorials/tutorial-batch-hadoop.html)
+      * [Tutorial: Loading stream data using HTTP push](/docs/VERSION/tutorials/tutorial-tranquility.html)
+      * [Tutorial: Querying data](/docs/VERSION/tutorials/tutorial-query.html)
+      * Further tutorials
+        * [Tutorial: Rollup](/docs/VERSION/tutorials/tutorial-rollup.html)
+        * [Tutorial: Configuring retention](/docs/VERSION/tutorials/tutorial-retention.html)
+        * [Tutorial: Updating existing data](/docs/VERSION/tutorials/tutorial-update-data.html)
+        * [Tutorial: Compacting segments](/docs/VERSION/tutorials/tutorial-compaction.html)
+        * [Tutorial: Deleting data](/docs/VERSION/tutorials/tutorial-delete-data.html)
+        * [Tutorial: Writing your own ingestion specs](/docs/VERSION/tutorials/tutorial-ingestion-spec.html)
+        * [Tutorial: Transforming input data](/docs/VERSION/tutorials/tutorial-transform-spec.html)    
+    * [Clustering](/docs/VERSION/tutorials/cluster.html)
+    * Further examples
+      * [Single-server deployment](/docs/VERSION/operations/single-server.html)
+      * [Clustered deployment](/docs/VERSION/tutorials/cluster.html#fresh-deployment)
 
 ## Data Ingestion
   * [Ingestion overview](/docs/VERSION/ingestion/index.html)
@@ -70,102 +74,107 @@ layout: toc
   * [Misc. Tasks](/docs/VERSION/ingestion/misc-tasks.html)
 
 ## Querying
-  * [Overview](/docs/VERSION/querying/querying.html)
-  * [Timeseries](/docs/VERSION/querying/timeseriesquery.html)
-  * [TopN](/docs/VERSION/querying/topnquery.html)
-  * [GroupBy](/docs/VERSION/querying/groupbyquery.html)
-  * [Time Boundary](/docs/VERSION/querying/timeboundaryquery.html)
-  * [Segment Metadata](/docs/VERSION/querying/segmentmetadataquery.html)
-  * [DataSource Metadata](/docs/VERSION/querying/datasourcemetadataquery.html)
-  * [Search](/docs/VERSION/querying/searchquery.html)
-  * [Select](/docs/VERSION/querying/select-query.html)
-  * [Scan](/docs/VERSION/querying/scan-query.html)
-  * Components
-    * [Datasources](/docs/VERSION/querying/datasource.html)
-    * [Filters](/docs/VERSION/querying/filters.html)
-    * [Aggregations](/docs/VERSION/querying/aggregations.html)
-    * [Post Aggregations](/docs/VERSION/querying/post-aggregations.html)
-    * [Granularities](/docs/VERSION/querying/granularities.html)
-    * [DimensionSpecs](/docs/VERSION/querying/dimensionspecs.html)
-    * [Context](/docs/VERSION/querying/query-context.html)
-  * [Multi-value dimensions](/docs/VERSION/querying/multi-value-dimensions.html)
-  * [SQL](/docs/VERSION/querying/sql.html)
-  * [Lookups](/docs/VERSION/querying/lookups.html)
-  * [Joins](/docs/VERSION/querying/joins.html)
-  * [Multitenancy](/docs/VERSION/querying/multitenancy.html)
-  * [Caching](/docs/VERSION/querying/caching.html)
-  * [Sorting Orders](/docs/VERSION/querying/sorting-orders.html)
-  * [Virtual Columns](/docs/VERSION/querying/virtual-columns.html)
+  * [Druid SQL](/docs/VERSION/querying/sql.html)
+  * [Native queries](/docs/VERSION/querying/querying.html)
+    * [Timeseries](/docs/VERSION/querying/timeseriesquery.html)
+    * [TopN](/docs/VERSION/querying/topnquery.html)
+    * [GroupBy](/docs/VERSION/querying/groupbyquery.html)
+    * [Time Boundary](/docs/VERSION/querying/timeboundaryquery.html)
+    * [Segment Metadata](/docs/VERSION/querying/segmentmetadataquery.html)
+    * [DataSource Metadata](/docs/VERSION/querying/datasourcemetadataquery.html)
+    * [Search](/docs/VERSION/querying/searchquery.html)
+    * [Scan](/docs/VERSION/querying/scan-query.html)
+    * [Select](/docs/VERSION/querying/select-query.html)
+    * Components
+      * [Datasources](/docs/VERSION/querying/datasource.html)
+      * [Filters](/docs/VERSION/querying/filters.html)
+      * [Aggregations](/docs/VERSION/querying/aggregations.html)
+      * [Post Aggregations](/docs/VERSION/querying/post-aggregations.html)
+      * [Granularities](/docs/VERSION/querying/granularities.html)
+      * [DimensionSpecs](/docs/VERSION/querying/dimensionspecs.html)
+      * [Sorting Orders](/docs/VERSION/querying/sorting-orders.html)
+      * [Virtual Columns](/docs/VERSION/querying/virtual-columns.html)
+      * [Context](/docs/VERSION/querying/query-context.html)
+  * Concepts
+    * [Multi-value dimensions](/docs/VERSION/querying/multi-value-dimensions.html)
+    * [Lookups](/docs/VERSION/querying/lookups.html)
+    * [Joins](/docs/VERSION/querying/joins.html)
+    * [Multitenancy](/docs/VERSION/querying/multitenancy.html)
+    * [Caching](/docs/VERSION/querying/caching.html)
+    * [Geographic Queries](/docs/VERSION/development/geo.html) (experimental)
 
 ## Design
   * [Overview](/docs/VERSION/design/index.html)
   * Storage
     * [Segments](/docs/VERSION/design/segments.html)
-  * [Processes and Servers](/docs/VERSION/design/processes.html)
-    * [Coordinator](/docs/VERSION/design/coordinator.html)
-    * [Overlord](/docs/VERSION/design/overlord.html)
-    * [Broker](/docs/VERSION/design/broker.html)
-    * [Historical](/docs/VERSION/design/historical.html)
-    * [MiddleManager](/docs/VERSION/design/middlemanager.html)
-      * [Peons](/docs/VERSION/design/peons.html)
-    * [Realtime (Deprecated)](/docs/VERSION/design/realtime.html)
+  * [Servers and Processes](/docs/VERSION/design/processes.html)
+    * Master server
+      * [Coordinator](/docs/VERSION/design/coordinator.html)
+      * [Overlord](/docs/VERSION/design/overlord.html)
+    * Query server
+      * [Broker](/docs/VERSION/design/broker.html)
+      * [Router](/docs/VERSION/development/router.html) (optional; experimental)
+    * Data server
+      * [Historical](/docs/VERSION/design/historical.html)
+      * [MiddleManager](/docs/VERSION/design/middlemanager.html)
+        * [Peons](/docs/VERSION/design/peons.html)    
   * Dependencies
     * [Deep Storage](/docs/VERSION/dependencies/deep-storage.html)
     * [Metadata Storage](/docs/VERSION/dependencies/metadata-storage.html)
     * [ZooKeeper](/docs/VERSION/dependencies/zookeeper.html)
 
 ## Operations
-  * [API Reference](/docs/VERSION/operations/api-reference.html)
-    * [Coordinator](/docs/VERSION/operations/api-reference.html#coordinator)
-    * [Overlord](/docs/VERSION/operations/api-reference.html#overlord)
-    * [MiddleManager](/docs/VERSION/operations/api-reference.html#middlemanager)
-    * [Peon](/docs/VERSION/operations/api-reference.html#peon)
-    * [Broker](/docs/VERSION/operations/api-reference.html#broker)
-    * [Historical](/docs/VERSION/operations/api-reference.html#historical)
+  * [Management UIs](/docs/VERSION/operations/management-uis.html)    
   * [Including Extensions](/docs/VERSION/operations/including-extensions.html)
   * [Data Retention](/docs/VERSION/operations/rule-configuration.html)
+  * [High Availability](/docs/VERSION/operations/high-availability.html)
+  * [Updating the Cluster](/docs/VERSION/operations/rolling-updates.html)
   * [Metrics and Monitoring](/docs/VERSION/operations/metrics.html)
   * [Alerts](/docs/VERSION/operations/alerts.html)
-  * [Updating the Cluster](/docs/VERSION/operations/rolling-updates.html)
   * [Different Hadoop Versions](/docs/VERSION/operations/other-hadoop.html)
-  * [Performance FAQ](/docs/VERSION/operations/performance-faq.html)
-  * [Management UIs](/docs/VERSION/operations/management-uis.html)
-  * [Dump Segment Tool](/docs/VERSION/operations/dump-segment.html)
-  * [Insert Segment Tool](/docs/VERSION/operations/insert-segment-to-db.html)
-  * [Pull Dependencies Tool](/docs/VERSION/operations/pull-deps.html)
-  * [Recommendations](/docs/VERSION/operations/recommendations.html)
-  * [TLS Support](/docs/VERSION/operations/tls-support.html)
-  * [Password Provider](/docs/VERSION/operations/password-provider.html)
+  * [HTTP Compression](/docs/VERSION/operations/http-compression.html)  
+  * [API Reference](/docs/VERSION/operations/api-reference.html)
+      * [Coordinator](/docs/VERSION/operations/api-reference.html#coordinator)
+      * [Overlord](/docs/VERSION/operations/api-reference.html#overlord)
+      * [MiddleManager](/docs/VERSION/operations/api-reference.html#middlemanager)
+      * [Peon](/docs/VERSION/operations/api-reference.html#peon)
+      * [Broker](/docs/VERSION/operations/api-reference.html#broker)
+      * [Historical](/docs/VERSION/operations/api-reference.html#historical)
+  * Tuning and Recommendations
+    * [Basic Cluster Tuning](/docs/VERSION/operations/basic-cluster-tuning.html)  
+    * [General Recommendations](/docs/VERSION/operations/recommendations.html)
+    * [JVM Best Practices](/docs/VERSION/configuration/index.html#jvm-configuration-best-practices)        
+  * Tools
+    * [Dump Segment Tool](/docs/VERSION/operations/dump-segment.html)
+    * [Insert Segment Tool](/docs/VERSION/operations/insert-segment-to-db.html)
+    * [Pull Dependencies Tool](/docs/VERSION/operations/pull-deps.html)  
+  * Security
+    * [TLS Support](/docs/VERSION/operations/tls-support.html)
+    * [Password Provider](/docs/VERSION/operations/password-provider.html)  
 
 ## Configuration
   * [Configuration Reference](/docs/VERSION/configuration/index.html)
-  * [Recommended Configuration File Organization](/docs/VERSION/configuration/index.html#recommended-configuration-file-organization)
-  * [JVM Configuration Best Practices](/docs/VERSION/configuration/index.html#jvm-configuration-best-practices)
+  * [Recommended Configuration File Organization](/docs/VERSION/configuration/index.html#recommended-configuration-file-organization)  
   * [Common Configuration](/docs/VERSION/configuration/index.html#common-configurations)
-  * [Coordinator](/docs/VERSION/configuration/index.html#coordinator)
-  * [Overlord](/docs/VERSION/configuration/index.html#overlord)
-  * [MiddleManager & Peons](/docs/VERSION/configuration/index.html#middle-manager-and-peons)
-  * [Broker](/docs/VERSION/configuration/index.html#broker)
-  * [Historical](/docs/VERSION/configuration/index.html#historical)
+  * Processes
+    * [Coordinator](/docs/VERSION/configuration/index.html#coordinator)
+    * [Overlord](/docs/VERSION/configuration/index.html#overlord)
+    * [MiddleManager & Peons](/docs/VERSION/configuration/index.html#middle-manager-and-peons)    
+    * [Historical](/docs/VERSION/configuration/index.html#historical)
+    * [Broker](/docs/VERSION/configuration/index.html#broker)
   * [Caching](/docs/VERSION/configuration/index.html#cache-configuration)
   * [General Query Configuration](/docs/VERSION/configuration/index.html#general-query-configuration)
   * [Configuring Logging](/docs/VERSION/configuration/logging.html)
-  
+
 ## Development
   * [Overview](/docs/VERSION/development/overview.html)
-  * [Libraries](/docs/VERSION/development/libraries.html)
+  * [Libraries](/libraries.html)
   * [Extensions](/docs/VERSION/development/extensions.html)
   * [JavaScript](/docs/VERSION/development/javascript.html)
   * [Build From Source](/docs/VERSION/development/build.html)
   * [Versioning](/docs/VERSION/development/versioning.html)
   * [Integration](/docs/VERSION/development/integrating-druid-with-other-technologies.html)
-  * Experimental Features
-    * [Overview](/docs/VERSION/development/experimental.html)
-    * [Approximate Histograms and Quantiles](/docs/VERSION/development/extensions-core/approximate-histograms.html)
-    * [Datasketches](/docs/VERSION/development/extensions-core/datasketches-extension.html)
-    * [Geographic Queries](/docs/VERSION/development/geo.html)
-    * [Router](/docs/VERSION/development/router.html)
-    * [Kafka Indexing Service](/docs/VERSION/development/extensions-core/kafka-ingestion.html)
+  * [Experimental Features](/docs/VERSION/development/experimental.html)
 
 ## Misc
   * [Druid Expressions Language](/docs/VERSION/misc/math-expr.html)
diff --git a/docs/0.14.0-incubating/tutorials/cluster.md b/docs/0.14.0-incubating/tutorials/cluster.md
index 5518f81..a202d25 100644
--- a/docs/0.14.0-incubating/tutorials/cluster.md
+++ b/docs/0.14.0-incubating/tutorials/cluster.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Clustering"
+title: "Setting up a Clustered Deployment"
 ---
 
 <!--
@@ -22,7 +22,7 @@ title: "Clustering"
   ~ under the License.
   -->
 
-# Clustering
+# Setting up a Clustered Deployment
 
 Apache Druid (incubating) is designed to be deployed as a scalable, fault-tolerant cluster.
 
@@ -30,48 +30,101 @@ In this document, we'll set up a simple cluster and discuss how it can be furthe
 your needs. 
 
 This simple cluster will feature:
- - A single Master server to host the Coordinator and Overlord processes
- - Scalable, fault-tolerant Data servers running Historical and MiddleManager processes
- - Query servers, hosting Druid Broker processes
+ - A Master server to host the Coordinator and Overlord processes
+ - Two scalable, fault-tolerant Data servers running Historical and MiddleManager processes
+ - A query server, hosting the Druid Broker and Router processes
 
-In production, we recommend deploying multiple Master servers with Coordinator and Overlord processes in a fault-tolerant configuration as well.
+In production, we recommend deploying multiple Master servers and multiple Query servers in a fault-tolerant configuration based on your specific needs, but you can get started quickly with one Master server and one Query server and add more servers later.
 
 ## Select hardware
 
-### Master Server
+### Fresh Deployment
 
-The Coordinator and Overlord processes can be co-located on a single server that is responsible for handling the metadata and coordination needs of your cluster.
-The equivalent of an AWS [m3.xlarge](https://aws.amazon.com/ec2/instance-types/#M3) is sufficient for most clusters. This
-hardware offers:
+If you do not have an existing Druid cluster, and wish to start running Druid in a clustered deployment, this guide provides an example clustered deployment with pre-made configurations.
 
-- 4 vCPUs
-- 15 GB RAM
-- 80 GB SSD storage
+#### Master Server
 
-### Data Server
+The Coordinator and Overlord processes are responsible for handling the metadata and coordination needs of your cluster. They can be colocated on the same server.
 
-Historicals and MiddleManagers can be colocated on a single server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM,
-and SSDs. The equivalent of an AWS [r3.2xlarge](https://aws.amazon.com/ec2/instance-types/#r3) is a
-good starting point. This hardware offers:
+In this example, we will be deploying the equivalent of one AWS [m5.2xlarge](https://aws.amazon.com/ec2/instance-types/m5/) instance.
 
+This hardware offers:
 - 8 vCPUs
-- 61 GB RAM
-- 160 GB SSD storage
+- 31 GB RAM
 
-### Query Server
+Example Master server configurations that have been sized for this hardware can be found under `conf/druid/cluster/master`.
+
+#### Data Server
+
+Historicals and MiddleManagers can be colocated on the same server to handle the actual data in your cluster. These servers benefit greatly from CPU, RAM,
+and SSDs. 
+
+In this example, we will be deploying the equivalent of two AWS [i3.4xlarge](https://aws.amazon.com/ec2/instance-types/i3/) instances. 
+
+This hardware offers:
+
+- 16 vCPUs
+- 122 GB RAM
+- 2 * 1.9TB SSD storage
+
+Example Data server configurations that have been sized for this hardware can be found under `conf/druid/cluster/data`.
+
+#### Query Server
 
 Druid Brokers accept queries and farm them out to the rest of the cluster. They also optionally maintain an
-in-memory query cache. These servers benefit greatly from CPU and RAM, and can also be deployed on
-the equivalent of an AWS [r3.2xlarge](https://aws.amazon.com/ec2/instance-types/#r3). This hardware
-offers:
+in-memory query cache. These servers benefit greatly from CPU and RAM.
+ 
+In this example, we will be deploying the equivalent of one AWS [m5.2xlarge](https://aws.amazon.com/ec2/instance-types/m5/) instance. 
 
+This hardware offers:
 - 8 vCPUs
-- 61 GB RAM
-- 160 GB SSD storage
+- 31 GB RAM
 
 You can consider co-locating any open source UIs or query libraries on the same server that the Broker is running on.
 
-Very large clusters should consider selecting larger servers.
+Example Query server configurations that have been sized for this hardware can be found under `conf/druid/cluster/query`.
+
+#### Other Hardware Sizes
+
+The example cluster above is chosen as a single example out of many possible ways to size a Druid cluster.
+
+You can choose smaller/larger hardware or less/more servers for your specific needs and constraints.
+
+If your use case has complex scaling requirements, you can also choose to not co-locate Druid processes (e.g., standalone Historical servers).
+
+The information in the [basic cluster tuning guide](../operations/basic-cluster-tuning.html) can help with your decision-making process and with sizing your configurations.
+
+### Migrating from a Single-Server Deployment
+
+If you have an existing single-server deployment, such as the ones from the [single-server deployment examples](../operations/single-server.html), and you wish to migrate to a clustered deployment of similar scale, the following section contains guidelines for choosing equivalent hardware using the Master/Data/Query server organization.
+
+#### Master Server
+
+The main considerations for the Master server are available CPUs and RAM for the Coordinator and Overlord heaps.
+
+Sum up the allocated heap sizes for your Coordinator and Overlord from the single-server deployment, and choose Master server hardware with enough RAM for the combined heaps, with some extra RAM for other processes on the machine.
+
+For CPU cores, you can choose hardware with approximately 1/4th of the cores of the single-server deployment.
+
+#### Data Server
+
+When choosing Data server hardware for the cluster, the main considerations are available CPUs and RAM, and using SSD storage if feasible.
+
+In a clustered deployment, having multiple Data servers is a good idea for fault-tolerance purposes.
+
+When choosing the Data server hardware, you can choose a split factor `N`, divide the original CPU/RAM of the single-server deployment by `N`, and deploy `N` Data servers of reduced size in the new cluster.
+
+Instructions for adjusting the Historical/MiddleManager configs for the split are described in a later section in this guide.
+
+#### Query Server 
+
+The main considerations for the Query server are available CPUs and RAM for the Broker heap + direct memory, and Router heap.
+
+Sum up the allocated memory sizes for your Broker and Router from the single-server deployment, and choose Query server hardware with enough RAM to cover the Broker/Router, with some extra RAM for other processes on the machine.
+
+For CPU cores, you can choose hardware with approximately 1/4th of the cores of the single-server deployment.
+
+The [basic cluster tuning guide](../operations/basic-cluster-tuning.html) has information on how to calculate Broker/Router memory usage.
 
 ## Select OS
 
@@ -89,39 +142,67 @@ First, download and unpack the release archive. It's best to do this on a single
 since you will be editing the configurations and then copying the modified distribution out to all
 of your servers.
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.14.0-incubating/apache-druid-0.14.0-incubating-bin.tar.gz)
-the 0.14.0-incubating release.
+[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz)
+the #{DRUIDVERSION} release.
 
 Extract Druid by running the following commands in your terminal:
 
 ```bash
-tar -xzf apache-druid-0.14.0-incubating-bin.tar.gz
-cd apache-druid-0.14.0-incubating
+tar -xzf apache-druid-#{DRUIDVERSION}-bin.tar.gz
+cd apache-druid-#{DRUIDVERSION}
 ```
 
 In the package, you should find:
 
 * `DISCLAIMER`, `LICENSE`, and `NOTICE` files
-* `bin/*` - scripts related to the [single-machine quickstart](quickstart.html)
-* `conf/*` - template configurations for a clustered setup
+* `bin/*` - scripts related to the [single-machine quickstart](index.html)
+* `conf/druid/cluster/*` - template configurations for a clustered setup
 * `extensions/*` - core Druid extensions
 * `hadoop-dependencies/*` - Druid Hadoop dependencies
 * `lib/*` - libraries and dependencies for core Druid
-* `quickstart/*` - files related to the [single-machine quickstart](quickstart.html)
+* `quickstart/*` - files related to the [single-machine quickstart](index.html)
+
+We'll be editing the files in `conf/druid/cluster/` in order to get things running.
+
+### Migrating from Single-Server Deployments
+
+In the following sections we will be editing the configs under `conf/druid/cluster`.
+
+If you have an existing single-server deployment, please copy your existing configs to `conf/druid/cluster` to preserve any config changes you have made.
+
+## Configure metadata storage and deep storage
+
+### Migrating from Single-Server Deployments
 
-We'll be editing the files in `conf/` in order to get things running.
+If you have an existing single-server deployment and you wish to preserve your data across the migration, please follow the instructions at [metadata migration](../operations/metadata-migration.html) and [deep storage migration](../operations/deep-storage-migration.html) before updating your metadata/deep storage configs.
 
-## Configure deep storage
+These guides are targeted at single-server deployments that use the Derby metadata store and local deep storage. If you are already using a non-Derby metadata store in your single-server cluster, you can reuse the existing metadata store for the new cluster.
+
+These guides also provide information on migrating segments from local deep storage. A clustered deployment requires distributed deep storage like S3 or HDFS. If your single-server deployment was already using distributed deep storage, you can reuse the existing deep storage for the new cluster.
+
+### Metadata Storage
+
+In `conf/druid/cluster/_common/common.runtime.properties`, replace
+"metadata.storage.*" with the address of the machine that you will use as your metadata store:
+
+- `druid.metadata.storage.connector.connectURI`
+- `druid.metadata.storage.connector.host`
+
+In a production deployment, we recommend running a dedicated metadata store such as MySQL or PostgreSQL with replication, deployed separately from the Druid servers.
+
+The [MySQL extension](../development/extensions-core/mysql.html) and [PostgreSQL extension](../development/extensions-core/postgresql.html) docs have instructions for extension configuration and initial database setup.
+
+### Deep Storage
 
 Druid relies on a distributed filesystem or large object (blob) store for data storage. The most
 commonly used deep storage implementations are S3 (popular for those on AWS) and HDFS (popular if
 you already have a Hadoop deployment).
 
-### S3
+#### S3
 
-In `conf/druid/_common/common.runtime.properties`,
+In `conf/druid/cluster/_common/common.runtime.properties`,
 
-- Set `druid.extensions.loadList=["druid-s3-extensions"]`.
+- Add "druid-s3-extensions" to `druid.extensions.loadList`.
 
 - Comment out the configurations for local storage under "Deep Storage" and "Indexing service logs".
 
@@ -150,11 +231,13 @@ druid.indexer.logs.s3Bucket=your-bucket
 druid.indexer.logs.s3Prefix=druid/indexing-logs
 ```
 
-### HDFS
+Please see the [S3 extension](../development/extensions-core/s3.html) documentation for more info.
+
+#### HDFS
 
-In `conf/druid/_common/common.runtime.properties`,
+In `conf/druid/cluster/_common/common.runtime.properties`,
 
-- Set `druid.extensions.loadList=["druid-hdfs-storage"]`.
+- Add "druid-hdfs-storage" to `druid.extensions.loadList`.
 
 - Comment out the configurations for local storage under "Deep Storage" and "Indexing service logs".
 
@@ -183,7 +266,9 @@ Also,
 
 - Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml,
 mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into
-`conf/druid/_common/`.
+`conf/druid/cluster/_common/`.
+
+Please see the [HDFS extension](../development/extensions-core/hdfs.html) documentation for more info.
 
 ## Configure Tranquility Server (optional)
 
@@ -191,24 +276,18 @@ Data streams can be sent to Druid through a simple HTTP API powered by Tranquili
 Server. If you will be using this functionality, then at this point you should [configure
 Tranquility Server](../ingestion/stream-ingestion.html#server).
 
-## Configure Tranquility Kafka (optional)
-
-Druid can consuming streams from Kafka through Tranquility Kafka. If you will be
-using this functionality, then at this point you should
-[configure Tranquility Kafka](../ingestion/stream-ingestion.html#kafka).
-
 ## Configure for connecting to Hadoop (optional)
 
 If you will be loading data from a Hadoop cluster, then at this point you should configure Druid to be aware
 of your cluster:
 
-- Update `druid.indexer.task.hadoopWorkingPath` in `conf/druid/middleManager/runtime.properties` to
+- Update `druid.indexer.task.hadoopWorkingPath` in `conf/druid/cluster/middleManager/runtime.properties` to
 a path on HDFS that you'd like to use for temporary files required during the indexing process.
 `druid.indexer.task.hadoopWorkingPath=/tmp/druid-indexing` is a common choice.
 
 - Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml,
 mapred-site.xml) on the classpath of your Druid processes. You can do this by copying them into
-`conf/druid/_common/core-site.xml`, `conf/druid/_common/hdfs-site.xml`, and so on.
+`conf/druid/cluster/_common/core-site.xml`, `conf/druid/cluster/_common/hdfs-site.xml`, and so on.
 
 Note that you don't need to use HDFS deep storage in order to load data from Hadoop. For example, if
 your cluster is running on Amazon Web Services, we recommend using S3 for deep storage even if you
@@ -216,86 +295,92 @@ are loading data using Hadoop or Elastic MapReduce.
 
 For more info, please see [batch ingestion](../ingestion/batch-ingestion.html).
 
-## Configure addresses for Druid coordination
+## Configure Zookeeper connection
 
-In this simple cluster, you will deploy a single Master server containing the following:
-- A single Druid Coordinator process
-- A single Druid Overlord process
-- A single ZooKeeper istance
-- An embedded Derby metadata store
+In a production cluster, we recommend using a dedicated ZK cluster in a quorum, deployed separately from the Druid servers.
 
-The processes on the cluster need to be configured with the addresses of this ZK instance and the metadata store.
+In `conf/druid/cluster/_common/common.runtime.properties`, set
+`druid.zk.service.host` to a [connection string](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html)
+containing a comma separated list of host:port pairs, each corresponding to a ZooKeeper server in your ZK quorum.
+(e.g. "127.0.0.1:4545" or "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002")
 
-In `conf/druid/_common/common.runtime.properties`, replace
-"zk.service.host" with [connection string](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html)
-containing a comma separated list of host:port pairs, each corresponding to a ZooKeeper server
-(e.g. "127.0.0.1:4545" or "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"):
+You can also choose to run ZK on the Master servers instead of having a dedicated ZK cluster. If doing so, we recommend deploying 3 Master servers so that you have a ZK quorum.
 
-- `druid.zk.service.host`
+## Configuration Tuning
 
-In `conf/druid/_common/common.runtime.properties`, replace
-"metadata.storage.*" with the address of the machine that you will use as your metadata store:
+### Migrating from a Single-Server Deployment
 
-- `druid.metadata.storage.connector.connectURI`
-- `druid.metadata.storage.connector.host`
+#### Master
 
-<div class="note caution">
-In production, we recommend running 2 Master servers, each running a Druid Coordinator process
-and a Druid Overlord process. We also recommend running a ZooKeeper cluster on its own dedicated hardware,
-as well as replicated <a href = "../dependencies/metadata-storage.html">metadata storage</a>
-such as MySQL or PostgreSQL, on its own dedicated hardware.
-</div>
+If you are using an example configuration from [single-server deployment examples](../operations/single-server.html), these examples combine the Coordinator and Overlord processes into a single process.
 
-## Tune processes on the Data Server
+The example configs under `conf/druid/cluster/master/coordinator-overlord` also combine the Coordinator and Overlord processes.
 
-Druid Historicals and MiddleManagers can be co-located on the same hardware. Both Druid processes benefit greatly from
-being tuned to the hardware they run on. If you are running Tranquility Server or Kafka, you can also colocate Tranquility with these two Druid processes.
-If you are using [r3.2xlarge](https://aws.amazon.com/ec2/instance-types/#r3)
-EC2 instances, or similar hardware, the configuration in the distribution is a
-reasonable starting point.
+You can copy your existing `coordinator-overlord` configs from the single-server deployment to `conf/druid/cluster/master/coordinator-overlord`.
 
-If you are using different hardware, we recommend adjusting configurations for your specific
-hardware. The most commonly adjusted configurations are:
+#### Data
 
-- `-Xmx` and `-Xms`
-- `druid.server.http.numThreads`
-- `druid.processing.buffer.sizeBytes`
-- `druid.processing.numThreads`
-- `druid.query.groupBy.maxIntermediateRows`
-- `druid.query.groupBy.maxResults`
-- `druid.server.maxSize` and `druid.segmentCache.locations` on Historical processes
-- `druid.worker.capacity` on MiddleManagers
+Suppose we are migrating from a single-server deployment that had 32 CPUs and 256GB RAM. In the old deployment, the following configurations for Historicals and MiddleManagers were applied:
 
-<div class="note info">
-Keep -XX:MaxDirectMemory >= numThreads*sizeBytes, otherwise Druid will fail to start up..
-</div>
+Historical (Single-server)
+```
+druid.processing.buffer.sizeBytes=500000000
+druid.processing.numMergeBuffers=8
+druid.processing.numThreads=31
+```
 
-Please see the Druid [configuration documentation](../configuration/index.html) for a full description of all
-possible configuration options.
+MiddleManager (Single-server)
+```
+druid.worker.capacity=8
+druid.indexer.fork.property.druid.processing.numMergeBuffers=2
+druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
+druid.indexer.fork.property.druid.processing.numThreads=1
+```
 
-## Tune Druid Brokers on the Query Server
+In the clustered deployment, we can choose a split factor (2 in this example), and deploy 2 Data servers with 16 CPUs and 128GB RAM each. The areas to scale are the following:
 
-Druid Brokers also benefit greatly from being tuned to the hardware they
-run on. If you are using [r3.2xlarge](https://aws.amazon.com/ec2/instance-types/#r3) EC2 instances,
-or similar hardware, the configuration in the distribution is a reasonable starting point.
+Historical
+- `druid.processing.numThreads`: Set to `(num_cores - 1)` based on the new hardware
+- `druid.processing.numMergeBuffers`: Divide the old value from the single-server deployment by the split factor
+- `druid.processing.buffer.sizeBytes`: Keep this unchanged
 
-If you are using different hardware, we recommend adjusting configurations for your specific
-hardware. The most commonly adjusted configurations are:
+MiddleManager:
+- `druid.worker.capacity`: Divide the old value from the single-server deployment by the split factor
+- `druid.indexer.fork.property.druid.processing.numMergeBuffers`: Keep this unchanged
+- `druid.indexer.fork.property.druid.processing.buffer.sizeBytes`: Keep this unchanged
+- `druid.indexer.fork.property.druid.processing.numThreads`: Keep this unchanged
 
-- `-Xmx` and `-Xms`
-- `druid.server.http.numThreads`
-- `druid.cache.sizeInBytes`
-- `druid.processing.buffer.sizeBytes`
-- `druid.processing.numThreads`
-- `druid.query.groupBy.maxIntermediateRows`
-- `druid.query.groupBy.maxResults`
+The resulting configs after the split:
 
-<div class="note caution">
-Keep -XX:MaxDirectMemory >= numThreads*sizeBytes, otherwise Druid will fail to start up.
-</div>
+New Historical (on 2 Data servers)
+```
+druid.processing.buffer.sizeBytes=500000000
+druid.processing.numMergeBuffers=4
+druid.processing.numThreads=15
+```
+
+New MiddleManager (on 2 Data servers)
+```
+druid.worker.capacity=4
+druid.indexer.fork.property.druid.processing.numMergeBuffers=2
+druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
+druid.indexer.fork.property.druid.processing.numThreads=1
+```
+
+#### Query
+
+You can copy your existing Broker and Router configs to the directories under `conf/druid/cluster/query`; no modifications are needed, as long as the new hardware is sized accordingly.
+
+### Fresh deployment
+
+If you are using the example cluster described above:
+- 1 Master server (m5.2xlarge)
+- 2 Data servers (i3.4xlarge)
+- 1 Query server (m5.2xlarge)
+
+The configurations under `conf/druid/cluster` have already been sized for this hardware and you do not need to make further modifications for general use cases.
 
-Please see the Druid [configuration documentation](../configuration/index.html) for a full description of all
-possible configuration options.
+If you have chosen different hardware, the [basic cluster tuning guide](../operations/basic-cluster-tuning.html) can help you size your configurations.
 
 ## Open ports (if using a firewall)
 
@@ -318,7 +403,6 @@ inbound connections on the following:
 
 ### Other
 - 8200 (Tranquility Server, if used)
-- 8084 (Standalone Realtime, if used, deprecated)
 
 <div class="note caution">
 In production, we recommend deploying ZooKeeper and your metadata store on their own dedicated hardware,
@@ -327,80 +411,88 @@ rather than on the Master server.
 
 ## Start Master Server
 
-Copy the Druid distribution and your edited configurations to your Master server. 
+Copy the Druid distribution and your edited configurations to your Master server.
 
 If you have been editing the configurations on your local machine, you can use *rsync* to copy them:
 
 ```bash
-rsync -az apache-druid-0.14.0-incubating/ COORDINATION_SERVER:apache-druid-0.14.0-incubating/
+rsync -az apache-druid-#{DRUIDVERSION}/ MASTER_SERVER:apache-druid-#{DRUIDVERSION}/
 ```
 
-Log on to your coordination server and install Zookeeper:
+### No Zookeeper on Master
+
+From the distribution root, run the following command to start the Master server:
+
+```
+bin/start-cluster-master-no-zk-server
+```
+
+### With Zookeeper on Master
+
+If you plan to run ZK on Master servers, first update `conf/zoo.cfg` to reflect how you plan to run ZK. Then log on to your Master servers and install Zookeeper:
 
 ```bash
 curl http://www.gtlib.gatech.edu/pub/apache/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
 tar -xzf zookeeper-3.4.11.tar.gz
-cd zookeeper-3.4.11
-cp conf/zoo_sample.cfg conf/zoo.cfg
-./bin/zkServer.sh start
+mv zookeeper-3.4.11 zk
 ```
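+
+A minimal `conf/zoo.cfg` for a three-Master ensemble might look like the following sketch (hostnames and the data directory are placeholders, and each Master also needs a matching `myid` file in the data directory):
+
+```bash
+cat <<'EOF' > conf/zoo.cfg
+tickTime=2000
+initLimit=5
+syncLimit=2
+dataDir=var/zk
+clientPort=2181
+server.1=master1.example.com:2888:3888
+server.2=master2.example.com:2888:3888
+server.3=master3.example.com:2888:3888
+EOF
+```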
 
-<div class="note caution">
-In production, we also recommend running a ZooKeeper cluster on its own dedicated hardware.
-</div>
+If you are running ZK on the Master server, you can start the Master server processes together with ZK using:
 
-On your coordination server, *cd* into the distribution and start up the coordination services (you should do this in different windows or pipe the log to a file):
-
-```bash
-java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/* org.apache.druid.cli.Main server coordinator
-java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/* org.apache.druid.cli.Main server overlord
+```
+bin/start-cluster-master-with-zk-server
 ```
 
-You should see a log message printed out for each service that starts up. You can view detailed logs
-for any service by looking in the `var/log/druid` directory using another terminal.
+<div class="note caution">
+In production, we also recommend running a ZooKeeper cluster on its own dedicated hardware.
+</div>
 
 ## Start Data Server
 
-Copy the Druid distribution and your edited configurations to your Data servers set aside for the Druid Historicals and MiddleManagers.
+Copy the Druid distribution and your edited configurations to your Data servers.
 
-On each one, *cd* into the distribution and run this command to start the Data server processes:
+From the distribution root, run the following command to start the Data server:
 
-```bash
-java `cat conf/druid/historical/jvm.config | xargs` -cp conf/druid/_common:conf/druid/historical:lib/* org.apache.druid.cli.Main server historical
-java `cat conf/druid/middleManager/jvm.config | xargs` -cp conf/druid/_common:conf/druid/middleManager:lib/* org.apache.druid.cli.Main server middleManager
+```
+bin/start-cluster-data-server
 ```
 
-You can add more Data servers with Druid Historicals and MiddleManagers as needed.
+You can add more Data servers as needed.
 
 <div class="note info">
 For clusters with complex resource allocation needs, you can break apart Historicals and MiddleManagers and scale the components individually.
-This also allows you take advantage of Druid's built-in MiddleManager
-autoscaling facility.
+This also allows you to take advantage of Druid's built-in MiddleManager autoscaling facility.
 </div>
 
-If you are doing push-based stream ingestion with Kafka or over HTTP, you can also start Tranquility Server on the same
-hardware that holds MiddleManagers and Historicals. For large scale production, MiddleManagers and Tranquility Server
-can still be co-located. If you are running Tranquility (not server) with a stream processor, you can co-locate
-Tranquility with the stream processor and not require Tranquility Server.
+### Tranquility
+
+If you are doing push-based stream ingestion with Kafka or over HTTP, you can also start Tranquility Server on the Data server. 
+
+For large scale production, Data server processes and the Tranquility Server can still be co-located. 
+
+If you are running Tranquility (not server) with a stream processor, you can co-locate Tranquility with the stream processor and not require Tranquility Server.
+
+First install Tranquility:
 
 ```bash
-curl -O http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.0.tgz
-tar -xzf tranquility-distribution-0.8.0.tgz
-cd tranquility-distribution-0.8.0
-bin/tranquility <server or kafka> -configFile <path_to_druid_distro>/conf/tranquility/<server or kafka>.json
+curl http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.3.tgz -o tranquility-distribution-0.8.3.tgz
+tar -xzf tranquility-distribution-0.8.3.tgz
+mv tranquility-distribution-0.8.3 tranquility
 ```
 
+Afterwards, in `conf/supervise/cluster/data.conf`, uncomment the `tranquility-server` line and restart the Data server processes.
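+
+For example, assuming the `tranquility-server` entry is commented out with a leading `#`, you could enable it like this and then stop and re-run `bin/start-cluster-data-server`:
+
+```bash
+# Remove the leading '#' from the tranquility-server line in the supervise config:
+sed -i.bak '/tranquility-server/s/^#//' conf/supervise/cluster/data.conf
+```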
+
 ## Start Query Server
 
-Copy the Druid distribution and your edited configurations to your Query servers set aside for the Druid Brokers.
+Copy the Druid distribution and your edited configurations to your Query servers.
 
-On each Query server, *cd* into the distribution and run this command to start the Broker process (you may want to pipe the output to a log file):
+From the distribution root, run the following command to start the Query server:
 
-```bash
-java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* org.apache.druid.cli.Main server broker
+```
+bin/start-cluster-query-server
 ```
 
-You can add more Query servers as needed based on query load.
+You can add more Query servers as needed based on query load. If you increase the number of Query servers, be sure to adjust the connection pools on your Historicals and Tasks as described in the [basic cluster tuning guide](../operations/basic-cluster-tuning.html).
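+
+As a rough, illustrative sizing check (the numbers here are examples, not defaults): each Historical's `druid.server.http.numThreads` should comfortably exceed the total `druid.broker.http.numConnections` across all Brokers.
+
+```bash
+# Back-of-the-envelope calculation with hypothetical values:
+BROKERS=2
+CONNS_PER_BROKER=20   # druid.broker.http.numConnections on each Broker
+echo "Historicals should have druid.server.http.numThreads > $(( BROKERS * CONNS_PER_BROKER ))"
+```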
 
 ## Loading data
 
diff --git a/docs/0.14.0-incubating/tutorials/img/tutorial-deletion-02.png b/docs/0.14.0-incubating/tutorials/img/tutorial-deletion-02.png
index fdea20f..9b84f0c 100644
Binary files a/docs/0.14.0-incubating/tutorials/img/tutorial-deletion-02.png and b/docs/0.14.0-incubating/tutorials/img/tutorial-deletion-02.png differ
diff --git a/docs/0.14.0-incubating/tutorials/index.md b/docs/0.14.0-incubating/tutorials/index.md
index 027ca77..a1cc4ef 100644
--- a/docs/0.14.0-incubating/tutorials/index.md
+++ b/docs/0.14.0-incubating/tutorials/index.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Apache Druid (incubating) Quickstart"
+title: "Apache Druid (incubating) Single-Server Quickstart"
 ---
 
 <!--
@@ -22,7 +22,7 @@ title: "Apache Druid (incubating) Quickstart"
   ~ under the License.
   -->
 
-# Druid Quickstart
+# Apache Druid (incubating) Single-Server Quickstart
 
 In this quickstart, we will download Druid and set it up on a single machine. The cluster will be ready to load data
 after completing this initial setup.
@@ -32,38 +32,42 @@ Before beginning the quickstart, it is helpful to read the [general Druid overvi
 
 ## Prerequisites
 
+### Software
+
 You will need:
 
-  * Java 8
-  * Linux, Mac OS X, or other Unix-like OS (Windows is not supported)
-  * 8G of RAM
-  * 2 vCPUs
+* Java 8 (8u92+)
+* Linux, Mac OS X, or other Unix-like OS (Windows is not supported)
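+
+For example, you can verify the Java on your PATH before proceeding (output format varies by vendor):
+
+```bash
+# Java prints its version to stderr, hence the redirect:
+java -version 2>&1 | head -n 1
+```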
+
+
+### Hardware
+
+Druid includes several example [single-server configurations](../operations/single-server.html), along with scripts to
+start the Druid processes using these configurations.
 
-On Mac OS X, you can use [Oracle's JDK
-8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) to install
-Java.
+If you're running on a small machine such as a laptop for a quick evaluation, the `micro-quickstart` configuration is
+a good choice, sized for a 4CPU/16GB RAM environment.
 
-On Linux, your OS package manager should be able to help for Java. If your Ubuntu-
-based OS does not have a recent enough version of Java, WebUpd8 offers [packages for those
-OSes](http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html).
+If you plan to use the single-machine deployment for further evaluation beyond the tutorials, we recommend a larger
+configuration than `micro-quickstart`.
 
 ## Getting started
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.14.0-incubating/apache-druid-0.14.0-incubating-bin.tar.gz)
-the 0.14.0-incubating release.
+[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/#{DRUIDVERSION}/apache-druid-#{DRUIDVERSION}-bin.tar.gz)
+the #{DRUIDVERSION} release.
 
 Extract Druid by running the following commands in your terminal:
 
 ```bash
-tar -xzf apache-druid-0.14.0-incubating-bin.tar.gz
-cd apache-druid-0.14.0-incubating
+tar -xzf apache-druid-#{DRUIDVERSION}-bin.tar.gz
+cd apache-druid-#{DRUIDVERSION}
 ```
 
 In the package, you should find:
 
 * `DISCLAIMER`, `LICENSE`, and `NOTICE` files
 * `bin/*` - scripts useful for this quickstart
-* `conf/*` - template configurations for a clustered setup
+* `conf/*` - example configurations for single-server and clustered setup
 * `extensions/*` - core Druid extensions
 * `hadoop-dependencies/*` - Druid Hadoop dependencies
 * `lib/*` - libraries and dependencies for core Druid
@@ -82,48 +86,44 @@ tar -xzf zookeeper-3.4.11.tar.gz
 mv zookeeper-3.4.11 zk
 ```
 
-The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at `zk` under the apache-druid-0.14.0-incubating package root.
+The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at `zk` under the
+apache-druid-#{DRUIDVERSION} package root.
 
 ## Start up Druid services
 
-From the apache-druid-0.14.0-incubating package root, run the following command:
+The following commands will assume that you are using the `micro-quickstart` single-machine configuration. If you are
+using a different configuration, the `bin` directory has equivalent scripts for each configuration, such as
+`bin/start-single-server-small`.
+
+From the apache-druid-#{DRUIDVERSION} package root, run the following command:
 
 ```bash
-bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf
+./bin/start-micro-quickstart
 ```
 
 This will bring up instances of Zookeeper and the Druid services, all running on the local machine, e.g.:
 
 ```bash
-bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf
-[Wed Feb 27 12:46:13 2019] Running command[zk], logging to[/apache-druid-0.14.0-incubating/var/sv/zk.log]: bin/run-zk quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[coordinator], logging to[/apache-druid-0.14.0-incubating/var/sv/coordinator.log]: bin/run-druid coordinator quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[broker], logging to[/apache-druid-0.14.0-incubating/var/sv/broker.log]: bin/run-druid broker quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[router], logging to[/apache-druid-0.14.0-incubating/var/sv/router.log]: bin/run-druid router quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[historical], logging to[/apache-druid-0.14.0-incubating/var/sv/historical.log]: bin/run-druid historical quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[overlord], logging to[/apache-druid-0.14.0-incubating/var/sv/overlord.log]: bin/run-druid overlord quickstart/tutorial/conf
-[Wed Feb 27 12:46:13 2019] Running command[middleManager], logging to[/apache-druid-0.14.0-incubating/var/sv/middleManager.log]: bin/run-druid middleManager quickstart/tutorial/conf
+$ ./bin/start-micro-quickstart 
+[Fri May  3 11:40:50 2019] Running command[zk], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/zk.log]: bin/run-zk conf
+[Fri May  3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[broker], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[router], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[historical], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-#{DRUIDVERSION}/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
 ```
 
-All persistent state such as the cluster metadata store and segments for the services will be kept in the `var` directory under the apache-druid-0.14.0-incubating package root. Logs for the services are located at `var/sv`.
-
-Later on, if you'd like to stop the services, CTRL-C to exit the `bin/supervise` script, which will terminate the Druid processes.
-
-### Resetting cluster state
+All persistent state such as the cluster metadata store and segments for the services will be kept in the `var` directory under the apache-druid-#{DRUIDVERSION} package root. Logs for the services are located at `var/sv`.
 
-If you want a clean start after stopping the services, delete the `var` directory and run the `bin/supervise` script again.
+Later on, if you'd like to stop the services, CTRL-C to exit the `bin/start-micro-quickstart` script, which will terminate the Druid processes.
 
-Once every service has started, you are now ready to load data.
+Once the cluster has started, you can navigate to [http://localhost:8888](http://localhost:8888).
+The [Druid router process](../development/router.html), which serves the Druid console, resides at this address.
 
-#### Resetting Kafka
+![Druid console](../tutorials/img/tutorial-quickstart-01.png "Druid console")
 
-If you completed [Tutorial: Loading stream data from Kafka](./tutorial-kafka.html) and wish to reset the cluster state, you should additionally clear out any Kafka state.
+It takes a few seconds for all the Druid processes to fully start up. If you open the console immediately after starting the services, you may see some errors that you can safely ignore.
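+
+If you prefer the command line, a simple way to wait for the Router to come up is to poll its health endpoint, which returns `true` once the process is ready:
+
+```bash
+# Poll the Router (which also serves the console) until it reports healthy:
+curl http://localhost:8888/status/health
+```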
 
-Shut down the Kafka broker with CTRL-C before stopping Zookeeper and the Druid services, and then delete the Kafka log directory at `/tmp/kafka-logs`:
-
-```bash
-rm -rf /tmp/kafka-logs
-```
 
 ## Loading Data
 
@@ -131,7 +131,8 @@ rm -rf /tmp/kafka-logs
 
 For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that occurred on 2015-09-12.
 
-This sample data is located at `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` from the Druid package root. The page edit events are stored as JSON objects in a text file.
+This sample data is located at `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` from the Druid package root.
+The page edit events are stored as JSON objects in a text file.
 
 The sample data has the following columns, and an example event is shown below:
 
@@ -179,24 +180,31 @@ The sample data has the following columns, and an example event is shown below:
 }
 ```
 
-The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases.
 
-### [Tutorial: Loading a file](./tutorial-batch.html)
+### Data loading tutorials
 
-This tutorial demonstrates how to perform a batch file load, using Druid's native batch ingestion.
+The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases.
+All tutorials assume that you are using the `micro-quickstart` single-machine configuration mentioned above.
 
-### [Tutorial: Loading stream data from Apache Kafka](./tutorial-kafka.html)
+- [Loading a file](./tutorial-batch.html) - this tutorial demonstrates how to perform a batch file load, using Druid's native batch ingestion.
+- [Loading stream data from Apache Kafka](./tutorial-kafka.html) - this tutorial demonstrates how to load streaming data from a Kafka topic.
+- [Loading a file using Apache Hadoop](./tutorial-batch-hadoop.html) - this tutorial demonstrates how to perform a batch file load, using a remote Hadoop cluster.
+- [Loading data using Tranquility](./tutorial-tranquility.html) - this tutorial demonstrates how to load streaming data by pushing events to Druid using the Tranquility service.
+- [Writing your own ingestion spec](./tutorial-ingestion-spec.html) - this tutorial demonstrates how to write a new ingestion spec and use it to load data.
 
-This tutorial demonstrates how to load streaming data from a Kafka topic.
 
-### [Tutorial: Loading a file using Apache Hadoop](./tutorial-batch-hadoop.html)
+### Resetting cluster state
 
-This tutorial demonstrates how to perform a batch file load, using a remote Hadoop cluster.
+If you want a clean start after stopping the services, delete the `var` directory and run the `bin/start-micro-quickstart` script again.
 
-### [Tutorial: Loading data using Tranquility](./tutorial-tranquility.html)
+Once every service has started, you are now ready to load data.
 
-This tutorial demonstrates how to load streaming data by pushing events to Druid using the Tranquility service.
+#### Resetting Kafka
 
-### [Tutorial: Writing your own ingestion spec](./tutorial-ingestion-spec.html)
+If you completed [Tutorial: Loading stream data from Kafka](./tutorial-kafka.html) and wish to reset the cluster state, you should additionally clear out any Kafka state.
+
+Shut down the Kafka broker with CTRL-C before stopping Zookeeper and the Druid services, and then delete the Kafka log directory at `/tmp/kafka-logs`:
 
-This tutorial demonstrates how to write a new ingestion spec and use it to load data.
+```bash
+rm -rf /tmp/kafka-logs
+```
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-batch-hadoop.md b/docs/0.14.0-incubating/tutorials/tutorial-batch-hadoop.md
index 337f9a4..01dffeb 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-batch-hadoop.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-batch-hadoop.md
@@ -26,7 +26,9 @@ title: "Tutorial: Load batch data using Apache Hadoop"
 
 This tutorial shows you how to load data files into Apache Druid (incubating) using a remote Hadoop cluster.
 
-For this tutorial, we'll assume that you've already completed the previous [batch ingestion tutorial](tutorial-batch.html) using Druid's native batch ingestion system.
+For this tutorial, we'll assume that you've already completed the previous
+[batch ingestion tutorial](tutorial-batch.html) using Druid's native batch ingestion system and are using the
+`micro-quickstart` single-machine configuration as described in the [quickstart](index.html).
 
 ## Install Docker
 
@@ -40,7 +42,7 @@ For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.3 cluster, which
 
 This Dockerfile and related files are located at `quickstart/tutorial/hadoop/docker`.
 
-From the apache-druid-0.14.0-incubating package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":
+From the apache-druid-#{DRUIDVERSION} package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":
 
 ```bash
 cd quickstart/tutorial/hadoop/docker
@@ -108,7 +110,7 @@ docker exec -it druid-hadoop-demo bash
 
 ### Copy input data to the Hadoop container
 
-From the apache-druid-0.14.0-incubating package root on the host, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
+From the apache-druid-#{DRUIDVERSION} package root on the host, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
 
 ```bash
 cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
@@ -148,13 +150,13 @@ cp /usr/local/hadoop/etc/hadoop/*.xml /shared/hadoop_xml
 From the host machine, run the following, where {PATH_TO_DRUID} is replaced by the path to the Druid package.
 
 ```bash
-mkdir -p {PATH_TO_DRUID}/quickstart/tutorial/conf/druid/_common/hadoop-xml
-cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/quickstart/tutorial/conf/druid/_common/hadoop-xml/
+mkdir -p {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml
+cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml/
 ```
 
 ### Update Druid segment and log storage
 
-In your favorite text editor, open `quickstart/tutorial/conf/druid/_common/common.runtime.properties`, and make the following edits:
+In your favorite text editor, open `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties`, and make the following edits:
 
 #### Disable local deep storage and enable HDFS deep storage
 
@@ -194,7 +196,7 @@ druid.indexer.logs.directory=/druid/indexing-logs
 
 Once the Hadoop .xml files have been copied to the Druid cluster and the segment/log storage configuration has been updated to use HDFS, the Druid cluster needs to be restarted for the new configurations to take effect.
 
-If the cluster is still running, CTRL-C to terminate the `bin/supervise` script, and re-reun it to bring the Druid services back up.
+If the cluster is still running, CTRL-C to terminate the `bin/start-micro-quickstart` script, and re-run it to bring the Druid services back up.
 
 ## Load batch data
 
@@ -206,7 +208,7 @@ a task that loads the `wikiticker-2015-09-12-sampled.json.gz` file included in t
 Let's submit the `wikipedia-index-hadoop-.json` task:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json 
+bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json --url http://localhost:8081
 ```
 
 ## Querying your data
@@ -219,7 +221,7 @@ This tutorial is only meant to be used together with the [query tutorial](../tut
 
 If you wish to go through any of the other tutorials, you will need to:
 * Shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package.
-* Revert the deep storage and task storage config back to local types in `quickstart/tutorial/conf/druid/_common/common.runtime.properties`
+* Revert the deep storage and task storage config back to local types in `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties`
 * Restart the cluster
 
 This is necessary because the other ingestion tutorials will write to the same "wikipedia" datasource, and later tutorials expect the cluster to use local deep storage.
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-batch.md b/docs/0.14.0-incubating/tutorials/tutorial-batch.md
index 2922efb..1d47123 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-batch.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-batch.md
@@ -24,18 +24,104 @@ title: "Tutorial: Loading a file"
 
 # Tutorial: Loading a file
 
-## Getting started
-
 This tutorial demonstrates how to perform a batch file load, using Apache Druid (incubating)'s native batch ingestion.
 
 For this tutorial, we'll assume you've already downloaded Druid as described in 
-the [single-machine quickstart](index.html) and have it running on your local machine. You 
-don't need to have loaded any data yet.
-
-## Preparing the data and the ingestion task spec
+the [quickstart](index.html) using the `micro-quickstart` single-machine configuration and have it
+running on your local machine. You don't need to have loaded any data yet.
 
 A data load is initiated by submitting an *ingestion task* spec to the Druid Overlord. For this tutorial, we'll be loading the sample Wikipedia page edits data.
 
+An ingestion spec can be written by hand or by using the "Data loader" that is built into the Druid console.
+The data loader can help you build an ingestion spec by sampling your data and iteratively configuring various ingestion parameters.
+The data loader currently only supports native batch ingestion (support for streaming, including data stored in Apache Kafka and AWS Kinesis, is coming in future releases).
+Streaming ingestion is only available through a written ingestion spec today.
+
+We've included a sample of Wikipedia edits from September 12, 2015 to get you started.
+
+
+## Loading data with the data loader
+
+Navigate to [localhost:8888](http://localhost:8888) and click `Load data` in the console header.
+Select `Local disk`.
+
+![Data loader init](../tutorials/img/tutorial-batch-data-loader-01.png "Data loader init")
+
+Enter `quickstart/tutorial/` as the base directory and `wikiticker-2015-09-12-sampled.json.gz` as a filter.
+The base directory and [wildcard file filter](https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/filefilter/WildcardFileFilter.html) are separate fields in case you need to ingest data from multiple files.
+
+Click `Preview` and make sure that the data you are seeing is correct.
+
+![Data loader sample](../tutorials/img/tutorial-batch-data-loader-02.png "Data loader sample")
+
+Once the data is located, you can click "Next: Parse data" to go to the next step.
+The data loader will try to automatically determine the correct parser for the data.
+In this case it will successfully determine `json`.
+Feel free to play around with different parser options to get a preview of how Druid will parse your data.
+
+![Data loader parse data](../tutorials/img/tutorial-batch-data-loader-03.png "Data loader parse data")
+
+With the `json` parser selected, click `Next: Parse time` to get to the step centered around determining your primary timestamp column.
+Druid's architecture requires a primary timestamp column (internally stored in a column called `__time`).
+If you do not have a timestamp in your data, select `Constant value`.
+In our example, the data loader will determine that the `time` column in our raw data is the only candidate that can be used as the primary time column. 
+
+![Data loader parse time](../tutorials/img/tutorial-batch-data-loader-04.png "Data loader parse time")
+
+Click `Next: ...` twice to go past the `Transform` and `Filter` steps.
+You do not need to enter anything in these steps, as applying ingestion-time transforms and filters is out of scope for this tutorial.
+
+In the `Configure schema` step, you can configure which dimensions (and metrics) will be ingested into Druid.
+This is exactly how the data will appear in Druid once it is ingested.
+Since our dataset is very small, go ahead and turn off `Rollup` by clicking on the switch and confirming the change.
+
+![Data loader schema](../tutorials/img/tutorial-batch-data-loader-05.png "Data loader schema")
+
+Once you are satisfied with the schema, click `Next` to go to the `Partition` step, where you can fine-tune how the data will be split up into segments in Druid.
+Since this is a small dataset, there are no adjustments that need to be made in this step.
+
+![Data loader partition](../tutorials/img/tutorial-batch-data-loader-06.png "Data loader partition")
+
+Clicking past the `Tune` step, we get to the publish step, which is where we can specify the datasource name in Druid.
+Let's name this datasource `wikipedia`.  
+
+![Data loader publish](../tutorials/img/tutorial-batch-data-loader-07.png "Data loader publish")
+
+Finally, click `Next` to review your spec.
+This is the spec you have constructed.
+Feel free to go back and make changes in previous steps to see how changes will update the spec.
+Similarly, you can also edit the spec directly and see it reflected in the previous steps.
+
+![Data loader spec](../tutorials/img/tutorial-batch-data-loader-08.png "Data loader spec")
+
+Once you are satisfied with the spec, click `Submit` and an ingestion task will be created.
+
+You will be taken to the task view with the focus on the newly created task. 
+
+![Tasks view](../tutorials/img/tutorial-batch-data-loader-09.png "Tasks view")
+
+In the tasks view, you can click `Refresh` a couple of times until your ingestion task (hopefully) succeeds.
+
+When a task succeeds, it means that it built one or more segments that will now be picked up by the data servers.
+
+Navigate to the `Datasources` view and click refresh until your datasource (`wikipedia`) appears.
+This can take a few seconds as the segments are being loaded.  
+
+![Datasource view](../tutorials/img/tutorial-batch-data-loader-10.png "Datasource view")
+
+A datasource is queryable once you see a green (fully available) circle.
+At this point, you can go to the `Query` view to run SQL queries against the datasource.
+
+Since this is a small dataset, you can simply run a `SELECT * FROM wikipedia` query to see your results.
+
+![Query view](../tutorials/img/tutorial-batch-data-loader-11.png "Query view")
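+
+If you prefer the command line, the same sort of query can be issued over HTTP against the Router's SQL endpoint (the `LIMIT` here is just to keep the output short):
+
+```bash
+# Run a Druid SQL query via the HTTP API exposed on the Router:
+curl -X POST -H 'Content-Type: application/json' \
+  http://localhost:8888/druid/v2/sql \
+  -d '{"query":"SELECT * FROM wikipedia LIMIT 10"}'
+```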
+
+Check out the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
+
+
+## Loading data with a spec (via console)
+
 The Druid package includes the following sample native batch ingestion task spec at `quickstart/tutorial/wikipedia-index.json`, shown here for convenience,
 which has been configured to read the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` input file:
 
@@ -99,21 +185,26 @@ which has been configured to read the `quickstart/tutorial/wikiticker-2015-09-12
     "tuningConfig" : {
       "type" : "index",
       "maxRowsPerSegment" : 5000000,
-      "maxRowsInMemory" : 25000,
-      "forceExtendableShardSpecs" : true
+      "maxRowsInMemory" : 25000
     }
   }
 }
 ```
 
-This spec will create a datasource named "wikipedia", 
+This spec will create a datasource named "wikipedia".
 
-## Load batch data
+From the task view, click on `Submit task` and select `Raw JSON task`.
 
-We've included a sample of Wikipedia edits from September 12, 2015 to get you started.
+![Tasks view add task](../tutorials/img/tutorial-batch-submit-task-01.png "Tasks view add task")
+
+This will bring up the spec submission dialog where you can paste the spec above.  
+
+![Query view](../tutorials/img/tutorial-batch-submit-task-02.png "Query view")
 
-To load this data into Druid, you can submit an *ingestion task* pointing to the file. We've included
-a task that loads the `wikiticker-2015-09-12-sampled.json.gz` file included in the archive. 
+Once the spec is submitted, you can follow the same instructions as above to wait for the data to load and then query it.
+
+
+## Loading data with a spec (via command line)
 
 For convenience, the Druid package includes a batch ingestion helper script at `bin/post-index-task`.
 
@@ -122,7 +213,7 @@ This script will POST an ingestion task to the Druid Overlord and poll Druid unt
 Run the following command from Druid package root:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/wikipedia-index.json 
+bin/post-index-task --file quickstart/tutorial/wikipedia-index.json --url http://localhost:8081
 ```
 
 You should see output like the following:
@@ -130,8 +221,8 @@ You should see output like the following:
 ```bash
 Beginning indexing data for wikipedia
 Task started: index_wikipedia_2018-07-27T06:37:44.323Z
-Task log:     http://localhost:8090/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/log
-Task status:  http://localhost:8090/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/status
+Task log:     http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/log
+Task status:  http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/status
 Task index_wikipedia_2018-07-27T06:37:44.323Z still running...
 Task index_wikipedia_2018-07-27T06:37:44.323Z still running...
 Task finished with status: SUCCESS
@@ -139,22 +230,17 @@ Completed indexing data for wikipedia. Now loading indexed data onto the cluster
 wikipedia loading complete! You may now query your data
 ```
 
-## Querying your data
+Once the spec is submitted, you can follow the same instructions as above to wait for the data to load and then query it.
 
-Once the data is loaded, please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
-
-## Cleanup
 
-If you wish to go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package, as the other tutorials will write to the same "wikipedia" datasource.
-
-## Extra: Loading data without the script
+## Loading data without the script
 
 Let's briefly discuss how we would've submitted the ingestion task without using the script. You do not need to run these commands.
 
-To submit the task, POST it to Druid in a new terminal window from the apache-druid-0.14.0-incubating directory:
+To submit the task, POST it to Druid in a new terminal window from the apache-druid-#{DRUIDVERSION} directory:
 
 ```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8090/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
 ```
 
 Which will print the ID of the task if the submission was successful:
@@ -163,16 +249,18 @@ Which will print the ID of the task if the submission was successful:
 {"task":"index_wikipedia_2018-06-09T21:30:32.802Z"}
 ```
 
-To view the status of the ingestion task, go to the Druid Console:
-[http://localhost:8888/](http://localhost:8888). You can refresh the console periodically, and after
-the task is successful, you should see a "SUCCESS" status for the task under the [Tasks view](http://localhost:8888/unified-console.html#tasks).
+You can monitor the status of this task from the console as outlined above. 
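+
+You can also poll the status endpoint from the command line, using the task ID returned by the submission (the ID below is the example from above):
+
+```bash
+# Check the task status via the Overlord API:
+curl http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-06-09T21:30:32.802Z/status
+```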
+
+
+## Querying your data
+
+Once the data is loaded, please follow the [query tutorial](../tutorials/tutorial-query.html) to run some example queries on the newly loaded data.
+
 
-After the ingestion task finishes, the data will be loaded by Historical processes and available for
-querying within a minute or two. You can monitor the progress of loading the data in the
-Datasources view, by checking whether there is a datasource "wikipedia" with a green circle
-indicating "fully available": [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources).
+## Cleanup
+
+If you wish to go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package, as the other tutorials will write to the same "wikipedia" datasource.
 
-![Druid Console](../tutorials/img/tutorial-batch-01.png "Wikipedia 100% loaded")
 
 ## Further reading
 
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-compaction.md b/docs/0.14.0-incubating/tutorials/tutorial-compaction.md
index 8919a4c..0051796 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-compaction.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-compaction.md
@@ -41,7 +41,7 @@ For this tutorial, we'll be using the Wikipedia edits sample data, with an inges
 The ingestion spec can be found at `quickstart/tutorial/compaction-init-index.json`. Let's submit that spec, which will create a datasource called `compaction-tutorial`:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/compaction-init-index.json 
+bin/post-index-task --file quickstart/tutorial/compaction-init-index.json --url http://localhost:8081
 ```
 
 <div class="note caution">
@@ -85,8 +85,7 @@ We have included a compaction task spec for this tutorial datasource at `quickst
   "tuningConfig" : {
     "type" : "index",
     "maxRowsPerSegment" : 5000000,
-    "maxRowsInMemory" : 25000,
-    "forceExtendableShardSpecs" : true
+    "maxRowsInMemory" : 25000
   }
 }
 ```
@@ -100,7 +99,7 @@ In this tutorial example, only one compacted segment will be created per hour, a
 Let's submit this task now:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json
+bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json --url http://localhost:8081
 ```
 
 After the task finishes, refresh the [segments view](http://localhost:8888/unified-console.html#segments).
@@ -159,7 +158,7 @@ Note that `segmentGranularity` is set to `DAY` in this compaction task spec.
 Let's submit this task now:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json
+bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json --url http://localhost:8081
 ```
 
 It will take a bit of time before the Coordinator marks the old input segments as unused, so you may see an intermediate state with 25 total segments. Eventually, there will only be one DAY granularity segment:
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-delete-data.md b/docs/0.14.0-incubating/tutorials/tutorial-delete-data.md
index 8e47082..46fbbdc 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-delete-data.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-delete-data.md
@@ -29,8 +29,6 @@ This tutorial demonstrates how to delete existing data.
 For this tutorial, we'll assume you've already downloaded Apache Druid (incubating) as described in 
 the [single-machine quickstart](index.html) and have it running on your local machine. 
 
-Completing [Tutorial: Configuring retention](../tutorials/tutorial-retention.html) first is highly recommended, as we will be using retention rules in this tutorial.
-
 ## Load initial data
 
 In this tutorial, we will use the Wikipedia edits data, with an indexing spec that creates hourly segments. This spec is located at `quickstart/tutorial/deletion-index.json`, and it creates a datasource called `deletion-tutorial`.
@@ -38,7 +36,7 @@ In this tutorial, we will use the Wikipedia edits data, with an indexing spec th
 Let's load this initial data:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/deletion-index.json 
+bin/post-index-task --file quickstart/tutorial/deletion-index.json --url http://localhost:8081
 ```
 
 When the load finishes, open [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser.
@@ -47,30 +45,25 @@ When the load finishes, open [http://localhost:8888/unified-console.html#datasou
 
 Permanent deletion of a Druid segment has two steps:
 
-1. The segment must first be marked as "unused". This occurs when a segment is dropped by retention rules, and when a user manually disables a segment through the Coordinator API. This tutorial will cover both cases.
+1. The segment must first be marked as "unused". This occurs when a user manually disables a segment through the Coordinator API.
 2. After segments have been marked as "unused", a Kill Task will delete any "unused" segments from Druid's metadata store as well as deep storage.
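+
+For reference, step 2 is performed later in this tutorial by submitting a kill task to the Overlord; a minimal sketch of such a task spec, embedded in a curl call, looks like this (the interval shown is illustrative):
+
+```bash
+# Submit a kill task that permanently deletes unused segments in the interval:
+curl -X POST -H 'Content-Type: application/json' \
+  http://localhost:8081/druid/indexer/v1/task \
+  -d '{"type":"kill","dataSource":"deletion-tutorial","interval":"2015-09-12/2015-09-13"}'
+```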
 
-Let's drop some segments now, first with load rules, then manually.
-
-## Drop some data with load rules
-
-As with the previous retention tutorial, there are currently 24 segments in the `deletion-tutorial` datasource.
-
-click the blue pencil icon next to `Cluster default: loadForever` for the `deletion-tutorial` datasource.
+Let's drop some segments now, by using the coordinator API to drop data by interval and segmentIds.
 
-A rule configuration window will appear. 
+## Disable segments by interval
 
-Now click the `+ New rule` button twice. 
+Let's disable segments in a specified interval. This will mark all segments in the interval as "unused", but not remove them from deep storage.
+We will disable the segments in the interval `2015-09-12T18:00:00.000Z/2015-09-12T20:00:00.000Z`, i.e. between hours 18 and 20.
 
-In the upper rule box, select `Load` and `by interval`, and then enter `2015-09-12T12:00:00.000Z/2015-09-13T00:00:00.000Z` in field next to `by interval`. Replicants can remain at 2 in the `_default_tier`.
-
-In the lower rule box, select `Drop` and `forever`.
+```bash
+curl -X 'POST' -H 'Content-Type:application/json' -d '{ "interval" : "2015-09-12T18:00:00.000Z/2015-09-12T20:00:00.000Z" }' http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/markUnused
+```
 
-Now click `Next` and enter `tutorial` for both the user and changelog comment field.
+After that command completes, you should see that the segments for hours 18 and 19 have been disabled:
 
-This will cause the first 12 segments of `deletion-tutorial` to be dropped. However, these dropped segments are not removed from deep storage.
+![Segments 2](../tutorials/img/tutorial-deletion-02.png "Segments 2")
 
-You can see that all 24 segments are still present in deep storage by listing the contents of `apache-druid-0.14.0-incubating/var/druid/segments/deletion-tutorial`:
+Note that the hour 18 and 19 segments are still present in deep storage:
 
 ```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
@@ -100,9 +93,9 @@ $ ls -l1 var/druid/segments/deletion-tutorial/
 2015-09-12T23:00:00.000Z_2015-09-13T00:00:00.000Z
 ```
 
-## Manually disable a segment
+## Disable segments by segment IDs
 
-Let's manually disable a segment now. This will mark a segment as "unused", but not remove it from deep storage.
+Let's disable some segments by their segment IDs. This will again mark the segments as "unused", but not remove them from deep storage. You can see the full segment ID for a segment in the console, as explained below.
 
 In the [segments view](http://localhost:8888/unified-console.html#segments), click the arrow on the left side of one of the remaining segments to expand the segment entry:
 
@@ -110,17 +103,29 @@ In the [segments view](http://localhost:8888/unified-console.html#segments), cli
 
 The top of the info box shows the full segment ID, e.g. `deletion-tutorial_2015-09-12T14:00:00.000Z_2015-09-12T15:00:00.000Z_2019-02-28T01:11:51.606Z` for the segment of hour 14.
 
-Let's disable the hour 14 segment by sending the following DELETE request to the Coordinator, where {SEGMENT-ID} is the full segment ID shown in the info box:
+Let's disable the hour 13 and 14 segments by sending a POST request to the Coordinator with this payload:
+
+```json
+{
+  "segmentIds":
+  [
+    "deletion-tutorial_2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z_2019-05-01T17:38:46.961Z",
+    "deletion-tutorial_2015-09-12T14:00:00.000Z_2015-09-12T15:00:00.000Z_2019-05-01T17:38:46.961Z"
+  ]
+}
+```
+
+This JSON payload is provided at `quickstart/tutorial/deletion-disable-segments.json`. Submit the POST request to the Coordinator like this:
 
 ```bash
-curl -XDELETE http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/segments/{SEGMENT-ID}
+curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/deletion-disable-segments.json http://localhost:8081/druid/coordinator/v1/datasources/deletion-tutorial/markUnused
 ```
 
-After that command completes, you should see that the segment for hour 14 has been disabled:
+After that command completes, you should see that the segments for hour 13 and 14 have been disabled:
 
-![Segments 2](../tutorials/img/tutorial-deletion-02.png "Segments 2")
+![Segments 3](../tutorials/img/tutorial-deletion-03.png "Segments 3")
 
-Note that the hour 14 segment is still in deep storage:
+Note that the hour 13 and 14 segments are still in deep storage:
 
 ```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
@@ -165,12 +170,9 @@ After this task completes, you can see that the disabled segments have now been
 ```bash
 $ ls -l1 var/druid/segments/deletion-tutorial/
 2015-09-12T12:00:00.000Z_2015-09-12T13:00:00.000Z
-2015-09-12T13:00:00.000Z_2015-09-12T14:00:00.000Z
 2015-09-12T15:00:00.000Z_2015-09-12T16:00:00.000Z
 2015-09-12T16:00:00.000Z_2015-09-12T17:00:00.000Z
 2015-09-12T17:00:00.000Z_2015-09-12T18:00:00.000Z
-2015-09-12T18:00:00.000Z_2015-09-12T19:00:00.000Z
-2015-09-12T19:00:00.000Z_2015-09-12T20:00:00.000Z
 2015-09-12T20:00:00.000Z_2015-09-12T21:00:00.000Z
 2015-09-12T21:00:00.000Z_2015-09-12T22:00:00.000Z
 2015-09-12T22:00:00.000Z_2015-09-12T23:00:00.000Z
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-ingestion-spec.md b/docs/0.14.0-incubating/tutorials/tutorial-ingestion-spec.md
index f02b675..5f05d18 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-ingestion-spec.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-ingestion-spec.md
@@ -631,10 +631,10 @@ We've finished defining the ingestion spec, it should now look like the followin
 
 ## Submit the task and query the data
 
-From the apache-druid-0.14.0-incubating package root, run the following command:
+From the apache-druid-#{DRUIDVERSION} package root, run the following command:
 
 ```bash
-bin/post-index-task --file quickstart/ingestion-tutorial-index.json 
+bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
 ```
 
 After the script completes, we will query the data.
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-kafka.md b/docs/0.14.0-incubating/tutorials/tutorial-kafka.md
index 98e81a6..0dc91cf 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-kafka.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-kafka.md
@@ -29,19 +29,19 @@ title: "Tutorial: Load streaming data from Apache Kafka"
 This tutorial demonstrates how to load data into Apache Druid (incubating) from a Kafka stream, using Druid's Kafka indexing service.
 
 For this tutorial, we'll assume you've already downloaded Druid as described in 
-the [single-machine quickstart](index.html) and have it running on your local machine. You 
-don't need to have loaded any data yet.
+the [quickstart](index.html) using the `micro-quickstart` single-machine configuration and have it
+running on your local machine. You don't need to have loaded any data yet.
 
 ## Download and start Kafka
 
 [Apache Kafka](http://kafka.apache.org/) is a high throughput message bus that works well with
-Druid.  For this tutorial, we will use Kafka 0.10.2.2. To download Kafka, issue the following
+Druid.  For this tutorial, we will use Kafka 2.1.0. To download Kafka, issue the following
 commands in your terminal:
 
 ```bash
-curl -O https://archive.apache.org/dist/kafka/0.10.2.2/kafka_2.12-0.10.2.2.tgz
-tar -xzf kafka_2.12-0.10.2.2.tgz
-cd kafka_2.12-0.10.2.2
+curl -O https://archive.apache.org/dist/kafka/2.1.0/kafka_2.12-2.1.0.tgz
+tar -xzf kafka_2.12-2.1.0.tgz
+cd kafka_2.12-2.1.0
 ```
 
 Start a Kafka broker by running the following command in a new terminal:
@@ -56,25 +56,104 @@ Run this command to create a Kafka topic called *wikipedia*, to which we'll send
 ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wikipedia
 ```
 
-## Enable Druid Kafka ingestion
+## Start Druid Kafka ingestion
+
+We will use Druid's Kafka indexing service to ingest messages from our newly created *wikipedia* topic.
+
+### Submit a supervisor via the console
+
+In the console, click `Submit supervisor` to open the submit supervisor dialog.
+
+![Submit supervisor](../tutorials/img/tutorial-kafka-01.png "Submit supervisor")
+
+Paste in this spec and click `Submit`.
+
+```json
+{
+  "type": "kafka",
+  "dataSchema": {
+    "dataSource": "wikipedia",
+    "parser": {
+      "type": "string",
+      "parseSpec": {
+        "format": "json",
+        "timestampSpec": {
+          "column": "time",
+          "format": "auto"
+        },
+        "dimensionsSpec": {
+          "dimensions": [
+            "channel",
+            "cityName",
+            "comment",
+            "countryIsoCode",
+            "countryName",
+            "isAnonymous",
+            "isMinor",
+            "isNew",
+            "isRobot",
+            "isUnpatrolled",
+            "metroCode",
+            "namespace",
+            "page",
+            "regionIsoCode",
+            "regionName",
+            "user",
+            { "name": "added", "type": "long" },
+            { "name": "deleted", "type": "long" },
+            { "name": "delta", "type": "long" }
+          ]
+        }
+      }
+    },
+    "metricsSpec" : [],
+    "granularitySpec": {
+      "type": "uniform",
+      "segmentGranularity": "DAY",
+      "queryGranularity": "NONE",
+      "rollup": false
+    }
+  },
+  "tuningConfig": {
+    "type": "kafka",
+    "reportParseExceptions": false
+  },
+  "ioConfig": {
+    "topic": "wikipedia",
+    "replicas": 2,
+    "taskDuration": "PT10M",
+    "completionTimeout": "PT20M",
+    "consumerProperties": {
+      "bootstrap.servers": "localhost:9092"
+    }
+  }
+}
+```
+
+This will start the supervisor, which will in turn spawn tasks that start listening for incoming data.
+
+![Running supervisor](../tutorials/img/tutorial-kafka-02.png "Running supervisor")
 
-We will use Druid's Kafka indexing service to ingest messages from our newly created *wikipedia* topic. To start the
-service, we will need to submit a supervisor spec to the Druid overlord by running the following from the Druid package root:
+### Submit a supervisor directly
+
+To start the service directly, we will need to submit a supervisor spec to the Druid overlord by running the following from the Druid package root:
 
 ```bash
-curl -XPOST -H'Content-Type: application/json' -d @quickstart/tutorial/wikipedia-kafka-supervisor.json http://localhost:8090/druid/indexer/v1/supervisor
+curl -XPOST -H'Content-Type: application/json' -d @quickstart/tutorial/wikipedia-kafka-supervisor.json http://localhost:8081/druid/indexer/v1/supervisor
 ```
 
-If the supervisor was successfully created, you will get a response containing the ID of the supervisor; in our case we should see `{"id":"wikipedia-kafka"}`.
+
+If the supervisor was successfully created, you will get a response containing the ID of the supervisor; in our case we should see `{"id":"wikipedia"}`.
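+
+You can also check on the supervisor from the command line. The following is a quick sanity check against the Overlord's supervisor API (a sketch assuming the default micro-quickstart ports; the Router at port 8888 proxies the same endpoints):
+
+```bash
+# List the IDs of all running supervisors; "wikipedia" should appear in the response.
+curl http://localhost:8081/druid/indexer/v1/supervisor
+
+# Fetch a detailed status report for the "wikipedia" supervisor.
+curl http://localhost:8081/druid/indexer/v1/supervisor/wikipedia/status
+```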
 
 For more details about what's going on here, check out the
 [Druid Kafka indexing service documentation](../development/extensions-core/kafka-ingestion.html).
 
 You can view the current supervisors and tasks in the Druid Console: [http://localhost:8888/unified-console.html#tasks](http://localhost:8888/unified-console.html#tasks).
 
+
 ## Load data
 
-Let's launch a console producer for our topic and send some data!
+Let's launch a producer for our topic and send some data!
 
 In your Druid directory, run the following command:
 
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-query.md b/docs/0.14.0-incubating/tutorials/tutorial-query.md
index 9829197..960655e 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-query.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-query.md
@@ -24,7 +24,7 @@ title: "Tutorial: Querying data"
 
 # Tutorial: Querying data
 
-This tutorial will demonstrate how to query data in Apache Druid (incubating), with examples for Druid's native query format and Druid SQL.
+This tutorial will demonstrate how to query data in Apache Druid (incubating), with examples for Druid SQL and Druid's native query format.
 
 The tutorial assumes that you've already completed one of the 4 ingestion tutorials, as we will be querying the sample Wikipedia edits data.
 
@@ -33,91 +33,80 @@ The tutorial assumes that you've already completed one of the 4 ingestion tutori
 * [Tutorial: Loading a file using Hadoop](../tutorials/tutorial-batch-hadoop.html)
 * [Tutorial: Loading stream data using Tranquility](../tutorials/tutorial-tranquility.html)
 
-## Native JSON queries
+Druid queries are sent over HTTP.
+The Druid console includes a view to issue queries to Druid and nicely format the results. 
 
-Druid's native query format is expressed in JSON. We have included a sample native TopN query under `quickstart/tutorial/wikipedia-top-pages.json`:
+## Druid SQL queries
 
-```json
-{
-  "queryType" : "topN",
-  "dataSource" : "wikipedia",
-  "intervals" : ["2015-09-12/2015-09-13"],
-  "granularity" : "all",
-  "dimension" : "page",
-  "metric" : "count",
-  "threshold" : 10,
-  "aggregations" : [
-    {
-      "type" : "count",
-      "name" : "count"
-    }
-  ]
-}
-```
+Druid supports a dialect of SQL for querying.
 
 This query retrieves the 10 Wikipedia pages with the most page edits on 2015-09-12.
 
-Let's submit this query to the Druid Broker:
-
-```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages.json http://localhost:8082/druid/v2?pretty
+```sql
+SELECT page, COUNT(*) AS Edits
+FROM wikipedia
+WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00'
+GROUP BY page ORDER BY Edits DESC
+LIMIT 10
 ```
 
-You should see the following query results:
+Let's look at the different ways to issue this query.
 
-```json
-[ {
-  "timestamp" : "2015-09-12T00:46:58.771Z",
-  "result" : [ {
-    "count" : 33,
-    "page" : "Wikipedia:Vandalismusmeldung"
-  }, {
-    "count" : 28,
-    "page" : "User:Cyde/List of candidates for speedy deletion/Subpage"
-  }, {
-    "count" : 27,
-    "page" : "Jeremy Corbyn"
-  }, {
-    "count" : 21,
-    "page" : "Wikipedia:Administrators' noticeboard/Incidents"
-  }, {
-    "count" : 20,
-    "page" : "Flavia Pennetta"
-  }, {
-    "count" : 18,
-    "page" : "Total Drama Presents: The Ridonculous Race"
-  }, {
-    "count" : 18,
-    "page" : "User talk:Dudeperson176123"
-  }, {
-    "count" : 18,
-    "page" : "Wikipédia:Le Bistro/12 septembre 2015"
-  }, {
-    "count" : 17,
-    "page" : "Wikipedia:In the news/Candidates"
-  }, {
-    "count" : 17,
-    "page" : "Wikipedia:Requests for page protection"
-  } ]
-} ]
-```
+### Query SQL via the console
 
-## Druid SQL queries
+You can issue the above query from the console.
+
+![Query autocomplete](../tutorials/img/tutorial-query-01.png "Query autocomplete")
+
+The console query view provides autocomplete together with inline function documentation.
+You can also configure extra context flags to be sent with the query from the more options menu.
+
+![Query options](../tutorials/img/tutorial-query-02.png "Query options")
+
+Note that the console will by default wrap your SQL queries with a limit, so that you can issue queries like `SELECT * FROM wikipedia` without much hesitation; you can turn off this behavior.
 
-Druid also supports a dialect of SQL for querying. Let's run a SQL query that is equivalent to the native JSON query shown above:
+### Query SQL via dsql
 
+For convenience, the Druid package includes a SQL command-line client, located at `bin/dsql` from the Druid package root.
+
+Let's now run `bin/dsql`; you should see the following prompt:
+
+```bash
+Welcome to dsql, the command-line client for Druid SQL.
+Type "\h" for help.
+dsql> 
 ```
-SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;
+
+To submit the query, paste it at the `dsql` prompt and press Enter:
+
+```bash
+dsql> SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;
+┌──────────────────────────────────────────────────────────┬───────┐
+│ page                                                     │ Edits │
+├──────────────────────────────────────────────────────────┼───────┤
+│ Wikipedia:Vandalismusmeldung                             │    33 │
+│ User:Cyde/List of candidates for speedy deletion/Subpage │    28 │
+│ Jeremy Corbyn                                            │    27 │
+│ Wikipedia:Administrators' noticeboard/Incidents          │    21 │
+│ Flavia Pennetta                                          │    20 │
+│ Total Drama Presents: The Ridonculous Race               │    18 │
+│ User talk:Dudeperson176123                               │    18 │
+│ Wikipédia:Le Bistro/12 septembre 2015                    │    18 │
+│ Wikipedia:In the news/Candidates                         │    17 │
+│ Wikipedia:Requests for page protection                   │    17 │
+└──────────────────────────────────────────────────────────┴───────┘
+Retrieved 10 rows in 0.06s.
 ```
 
-The SQL queries are submitted as JSON over HTTP.
 
-### TopN query example
+### Query SQL over HTTP
+
+The SQL queries are submitted as JSON over HTTP.
 
 The tutorial package includes an example file that contains the SQL query shown above at `quickstart/tutorial/wikipedia-top-pages-sql.json`. Let's submit that query to Druid:
 
 ```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages-sql.json http://localhost:8082/druid/v2/sql
+curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages-sql.json http://localhost:8888/druid/v2/sql
 ```
 
 The following results should be returned:
@@ -167,119 +156,51 @@ The following results should be returned:
 ]
 ```
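+
+The Druid SQL API accepts a JSON object with a `query` field holding the SQL text, which is what the bundled file contains. If you prefer to craft the request by hand, an equivalent inline request might look like the following (a sketch; the query matches the example above and the Router address is assumed):
+
+```bash
+curl -X 'POST' -H 'Content-Type:application/json' \
+  -d '{"query":"SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE \"__time\" BETWEEN TIMESTAMP '\''2015-09-12 00:00:00'\'' AND TIMESTAMP '\''2015-09-13 00:00:00'\'' GROUP BY page ORDER BY Edits DESC LIMIT 10"}' \
+  http://localhost:8888/druid/v2/sql
+```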
 
-### dsql client
+### More Druid SQL examples
 
-For convenience, the Druid package includes a SQL command-line client, located at `bin/dsql` from the Druid package root.
-
-Let's now run `bin/dsql`; you should see the following prompt:
-
-```bash
-Welcome to dsql, the command-line client for Druid SQL.
-Type "\h" for help.
-dsql> 
-```
+Here is a collection of queries to try out:
 
-To submit the query, paste it to the `dsql` prompt and press enter:
+#### Query over time
 
-```bash
-dsql> SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;
-┌──────────────────────────────────────────────────────────┬───────┐
-│ page                                                     │ Edits │
-├──────────────────────────────────────────────────────────┼───────┤
-│ Wikipedia:Vandalismusmeldung                             │    33 │
-│ User:Cyde/List of candidates for speedy deletion/Subpage │    28 │
-│ Jeremy Corbyn                                            │    27 │
-│ Wikipedia:Administrators' noticeboard/Incidents          │    21 │
-│ Flavia Pennetta                                          │    20 │
-│ Total Drama Presents: The Ridonculous Race               │    18 │
-│ User talk:Dudeperson176123                               │    18 │
-│ Wikipédia:Le Bistro/12 septembre 2015                    │    18 │
-│ Wikipedia:In the news/Candidates                         │    17 │
-│ Wikipedia:Requests for page protection                   │    17 │
-└──────────────────────────────────────────────────────────┴───────┘
-Retrieved 10 rows in 0.06s.
+```sql
+SELECT FLOOR(__time to HOUR) AS HourTime, SUM(deleted) AS LinesDeleted
+FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00'
+GROUP BY 1
 ```
 
-### Additional Druid SQL queries
-
-#### Timeseries
+![Query example](../tutorials/img/tutorial-query-03.png "Query example")
 
-`SELECT FLOOR(__time to HOUR) AS HourTime, SUM(deleted) AS LinesDeleted FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY FLOOR(__time to HOUR);`
+#### General group by
 
-```bash
-dsql> SELECT FLOOR(__time to HOUR) AS HourTime, SUM(deleted) AS LinesDeleted FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY FLOOR(__time to HOUR);
-┌──────────────────────────┬──────────────┐
-│ HourTime                 │ LinesDeleted │
-├──────────────────────────┼──────────────┤
-│ 2015-09-12T00:00:00.000Z │         1761 │
-│ 2015-09-12T01:00:00.000Z │        16208 │
-│ 2015-09-12T02:00:00.000Z │        14543 │
-│ 2015-09-12T03:00:00.000Z │        13101 │
-│ 2015-09-12T04:00:00.000Z │        12040 │
-│ 2015-09-12T05:00:00.000Z │         6399 │
-│ 2015-09-12T06:00:00.000Z │         9036 │
-│ 2015-09-12T07:00:00.000Z │        11409 │
-│ 2015-09-12T08:00:00.000Z │        11616 │
-│ 2015-09-12T09:00:00.000Z │        17509 │
-│ 2015-09-12T10:00:00.000Z │        19406 │
-│ 2015-09-12T11:00:00.000Z │        16284 │
-│ 2015-09-12T12:00:00.000Z │        18672 │
-│ 2015-09-12T13:00:00.000Z │        30520 │
-│ 2015-09-12T14:00:00.000Z │        18025 │
-│ 2015-09-12T15:00:00.000Z │        26399 │
-│ 2015-09-12T16:00:00.000Z │        24759 │
-│ 2015-09-12T17:00:00.000Z │        19634 │
-│ 2015-09-12T18:00:00.000Z │        17345 │
-│ 2015-09-12T19:00:00.000Z │        19305 │
-│ 2015-09-12T20:00:00.000Z │        22265 │
-│ 2015-09-12T21:00:00.000Z │        16394 │
-│ 2015-09-12T22:00:00.000Z │        16379 │
-│ 2015-09-12T23:00:00.000Z │        15289 │
-└──────────────────────────┴──────────────┘
-Retrieved 24 rows in 0.08s.
+```sql
+SELECT channel, page, SUM(added)
+FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00'
+GROUP BY channel, page
+ORDER BY SUM(added) DESC
 ```
 
-#### GroupBy
+![Query example](../tutorials/img/tutorial-query-04.png "Query example")
 
-`SELECT channel, SUM(added) FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY channel ORDER BY SUM(added) DESC LIMIT 5;`
+#### Select raw data
 
-```bash
-dsql> SELECT channel, SUM(added) FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY channel ORDER BY SUM(added) DESC LIMIT 5;
-┌───────────────┬─────────┐
-│ channel       │ EXPR$1  │
-├───────────────┼─────────┤
-│ #en.wikipedia │ 3045299 │
-│ #it.wikipedia │  711011 │
-│ #fr.wikipedia │  642555 │
-│ #ru.wikipedia │  640698 │
-│ #es.wikipedia │  634670 │
-└───────────────┴─────────┘
-Retrieved 5 rows in 0.05s.
+```sql
+SELECT user, page
+FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 02:00:00' AND TIMESTAMP '2015-09-12 03:00:00'
+LIMIT 5
 ```
 
-#### Scan
+![Query example](../tutorials/img/tutorial-query-05.png "Query example")
 
-` SELECT user, page FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 02:00:00' AND TIMESTAMP '2015-09-12 03:00:00' LIMIT 5;`
+### Explain query plan
 
-```bash
- dsql> SELECT user, page FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 02:00:00' AND TIMESTAMP '2015-09-12 03:00:00' LIMIT 5;
-┌────────────────────────┬────────────────────────────────────────────────────────┐
-│ user                   │ page                                                   │
-├────────────────────────┼────────────────────────────────────────────────────────┤
-│ Thiago89               │ Campeonato Mundial de Voleibol Femenino Sub-20 de 2015 │
-│ 91.34.200.249          │ Friede von Schönbrunn                                  │
-│ TuHan-Bot              │ Trĩ vàng                                               │
-│ Lowercase sigmabot III │ User talk:ErrantX                                      │
-│ BattyBot               │ Hans W. Jung                                           │
-└────────────────────────┴────────────────────────────────────────────────────────┘
-Retrieved 5 rows in 0.04s.
-```
+Druid SQL can explain the query plan for a given query.
+In the console, this functionality is accessible from the `...` button.
 
-#### EXPLAIN PLAN FOR
+![Explain query](../tutorials/img/tutorial-query-06.png "Explain query")
 
-By prepending `EXPLAIN PLAN FOR ` to a Druid SQL query, it is possible to see what native Druid queries a SQL query will plan into.
+If you are querying in other ways, you can get the plan by prepending `EXPLAIN PLAN FOR ` to a Druid SQL query.
 
-Using the TopN query above as an example:
+Using a query from an example above:
 
 `EXPLAIN PLAN FOR SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__time" BETWEEN TIMESTAMP '2015-09-12 00:00:00' AND TIMESTAMP '2015-09-13 00:00:00' GROUP BY page ORDER BY Edits DESC LIMIT 10;`
 
@@ -293,6 +214,90 @@ dsql> EXPLAIN PLAN FOR SELECT page, COUNT(*) AS Edits FROM wikipedia WHERE "__ti
 Retrieved 1 row in 0.03s.
 ```
 
+
+## Native JSON queries
+
+Druid's native query format is expressed in JSON.
+
+### Native query via the console
+
+You can issue native Druid queries from the console's Query view.
+
+Here is a query that retrieves the 10 Wikipedia pages with the most page edits on 2015-09-12.
+
+```json
+{
+  "queryType" : "topN",
+  "dataSource" : "wikipedia",
+  "intervals" : ["2015-09-12/2015-09-13"],
+  "granularity" : "all",
+  "dimension" : "page",
+  "metric" : "count",
+  "threshold" : 10,
+  "aggregations" : [
+    {
+      "type" : "count",
+      "name" : "count"
+    }
+  ]
+}
+```
+
+Paste it into the console's query editor; the editor will switch into JSON mode.
+
+![Native query](../tutorials/img/tutorial-query-07.png "Native query")
+
+
+### Native queries over HTTP
+
+We have included a sample native TopN query under `quickstart/tutorial/wikipedia-top-pages.json`.
+
+Let's submit this query to Druid:
+
+```bash
+curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-top-pages.json http://localhost:8888/druid/v2?pretty
+```
+
+You should see the following query results:
+
+```json
+[ {
+  "timestamp" : "2015-09-12T00:46:58.771Z",
+  "result" : [ {
+    "count" : 33,
+    "page" : "Wikipedia:Vandalismusmeldung"
+  }, {
+    "count" : 28,
+    "page" : "User:Cyde/List of candidates for speedy deletion/Subpage"
+  }, {
+    "count" : 27,
+    "page" : "Jeremy Corbyn"
+  }, {
+    "count" : 21,
+    "page" : "Wikipedia:Administrators' noticeboard/Incidents"
+  }, {
+    "count" : 20,
+    "page" : "Flavia Pennetta"
+  }, {
+    "count" : 18,
+    "page" : "Total Drama Presents: The Ridonculous Race"
+  }, {
+    "count" : 18,
+    "page" : "User talk:Dudeperson176123"
+  }, {
+    "count" : 18,
+    "page" : "Wikipédia:Le Bistro/12 septembre 2015"
+  }, {
+    "count" : 17,
+    "page" : "Wikipedia:In the news/Candidates"
+  }, {
+    "count" : 17,
+    "page" : "Wikipedia:Requests for page protection"
+  } ]
+} ]
+```
+
+
 ## Further reading
 
 The [Queries documentation](../querying/querying.html) has more information on Druid's native JSON queries.
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-retention.md b/docs/0.14.0-incubating/tutorials/tutorial-retention.md
index dafca32..6f5c91c 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-retention.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-retention.md
@@ -38,7 +38,7 @@ For this tutorial, we'll be using the Wikipedia edits sample data, with an inges
 The ingestion spec can be found at `quickstart/tutorial/retention-index.json`. Let's submit that spec, which will create a datasource called `retention-tutorial`:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/retention-index.json 
+bin/post-index-task --file quickstart/tutorial/retention-index.json --url http://localhost:8081
 ```
 
 After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to access the Druid Console's datasource view.
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-rollup.md b/docs/0.14.0-incubating/tutorials/tutorial-rollup.md
index ceef218..e4ca658 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-rollup.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-rollup.md
@@ -100,8 +100,7 @@ We'll ingest this data using the following ingestion task spec, located at `quic
     "tuningConfig" : {
       "type" : "index",
       "maxRowsPerSegment" : 5000000,
-      "maxRowsInMemory" : 25000,
-      "forceExtendableShardSpecs" : true
+      "maxRowsInMemory" : 25000
     }
   }
 }
@@ -115,10 +114,10 @@ We will see how these definitions are used after we load this data.
 
 ## Load the example data
 
-From the apache-druid-0.14.0-incubating package root, run the following command:
+From the apache-druid-#{DRUIDVERSION} package root, run the following command:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/rollup-index.json 
+bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
 ```
 
 After the script completes, we will query the data.
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-tranquility.md b/docs/0.14.0-incubating/tutorials/tutorial-tranquility.md
index 90c0cdf..fbf3e13 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-tranquility.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-tranquility.md
@@ -31,8 +31,8 @@ This tutorial shows you how to load streaming data into Apache Druid (incubating
 [Tranquility Server](https://github.com/druid-io/tranquility/blob/master/docs/server.md) allows a stream of data to be pushed into Druid using HTTP POSTs.
 
 For this tutorial, we'll assume you've already downloaded Druid as described in
-the [single-machine quickstart](quickstart.html) and have it running on your local machine. You
-don't need to have loaded any data yet.
+the [quickstart](index.html) using the `micro-quickstart` single-machine configuration and have it
+running on your local machine. You don't need to have loaded any data yet.
 
 ## Download Tranquility
 
@@ -44,17 +44,17 @@ tar -xzf tranquility-distribution-0.8.3.tgz
 mv tranquility-distribution-0.8.3 tranquility
 ```
 
-The startup scripts for the tutorial will expect the contents of the Tranquility tarball to be located at `tranquility` under the apache-druid-0.14.0-incubating package root.
+The startup scripts for the tutorial will expect the contents of the Tranquility tarball to be located at `tranquility` under the apache-druid-#{DRUIDVERSION} package root.
 
 ## Enable Tranquility Server
 
-- In your `quickstart/tutorial/conf/tutorial-cluster.conf`, uncomment the `tranquility-server` line.
-- Stop your *bin/supervise* command (CTRL-C) and then restart it by again running `bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf`.
+- In your `conf/supervise/single-server/micro-quickstart.conf`, uncomment the `tranquility-server` line.
+- Stop the *micro-quickstart* cluster (CTRL-C) and then restart it by running `bin/start-micro-quickstart`.
 
 As part of the output of *supervise* you should see something like:
 
 ```bash
-Running command[tranquility-server], logging to[/stage/apache-druid-0.14.0-incubating/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile quickstart/tutorial/conf/tranquility/server.json -Ddruid.extensions.loadList=[]
+Running command[tranquility-server], logging to[/stage/apache-druid-#{DRUIDVERSION}/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile conf/tranquility/server.json -Ddruid.extensions.loadList=[]
 ```
 
 You can check the log file in `var/sv/tranquility-server.log` to confirm that the server is starting up properly.
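+
+For example, to follow that log as the server starts up (a simple check, run from the Druid package root):
+
+```bash
+tail -f var/sv/tranquility-server.log
+```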
@@ -96,7 +96,7 @@ Please follow the [query tutorial](../tutorials/tutorial-query.html) to run some
 
 If you wish to go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package, as the other tutorials will write to the same "wikipedia" datasource.
 
-When cleaning up after running this Tranquility tutorial, it is also necessary to recomment the `tranquility-server` line in `quickstart/tutorial/conf/tutorial-cluster.conf` before restarting the cluster.
+When cleaning up after running this Tranquility tutorial, it is also necessary to recomment the `tranquility-server` line in `conf/supervise/single-server/micro-quickstart.conf` before restarting the cluster.
 
 
 ## Further reading
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-transform-spec.md b/docs/0.14.0-incubating/tutorials/tutorial-transform-spec.md
index 9a96da2..b30eebb 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-transform-spec.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-transform-spec.md
@@ -115,8 +115,7 @@ We will ingest the sample data using the following spec, which demonstrates the
     "tuningConfig" : {
       "type" : "index",
       "maxRowsPerSegment" : 5000000,
-      "maxRowsInMemory" : 25000,
-      "forceExtendableShardSpecs" : true
+      "maxRowsInMemory" : 25000
     }
   }
 }
@@ -136,7 +135,7 @@ This filter selects the first 3 rows, and it will exclude the final "lion" row i
 Let's submit this task now, which has been included at `quickstart/tutorial/transform-index.json`:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/transform-index.json
+bin/post-index-task --file quickstart/tutorial/transform-index.json --url http://localhost:8081
 ```
 
 ## Query the transformed data
diff --git a/docs/0.14.0-incubating/tutorials/tutorial-update-data.md b/docs/0.14.0-incubating/tutorials/tutorial-update-data.md
index d55ce97..ce0abfc 100644
--- a/docs/0.14.0-incubating/tutorials/tutorial-update-data.md
+++ b/docs/0.14.0-incubating/tutorials/tutorial-update-data.md
@@ -44,7 +44,7 @@ The spec we'll use for this tutorial is located at `quickstart/tutorial/updates-
 Let's submit that task:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/updates-init-index.json 
+bin/post-index-task --file quickstart/tutorial/updates-init-index.json --url http://localhost:8081
 ```
 
 We have three initial rows containing an "animal" dimension and "number" metric:
@@ -72,7 +72,7 @@ Note that this task reads input from `quickstart/tutorial/updates-data2.json`, a
 Let's submit that task:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/updates-overwrite-index.json 
+bin/post-index-task --file quickstart/tutorial/updates-overwrite-index.json --url http://localhost:8081
 ```
 
 When Druid finishes loading the new segment from this overwrite task, the "tiger" row now has the value "lion", the "aardvark" row has a different number, and the "giraffe" row has been replaced. It may take a couple of minutes for the changes to take effect:
@@ -98,7 +98,7 @@ The `quickstart/tutorial/updates-append-index.json` task spec has been configure
 Let's submit that task:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/updates-append-index.json 
+bin/post-index-task --file quickstart/tutorial/updates-append-index.json --url http://localhost:8081
 ```
 
 When Druid finishes loading the new segment from this overwrite task, the new rows will have been added to the datasource. Note that roll-up occurred for the "lion" row:
@@ -127,7 +127,7 @@ The `quickstart/tutorial/updates-append-index2.json` task spec reads input from
 Let's submit that task:
 
 ```bash
-bin/post-index-task --file quickstart/tutorial/updates-append-index2.json 
+bin/post-index-task --file quickstart/tutorial/updates-append-index2.json --url http://localhost:8081
 ```
 
 When the new data is loaded, we can see two additional rows after "octopus". Note that the new "bear" row with number 222 has not been rolled up with the existing bear-111 row, because the new data is held in a separate segment.
diff --git a/docs/latest/development/extensions-contrib/distinctcount.md b/docs/latest/development/extensions-contrib/distinctcount.md
index a392360..7cf67b5 100644
--- a/docs/latest/development/extensions-contrib/distinctcount.md
+++ b/docs/latest/development/extensions-contrib/distinctcount.md
@@ -28,8 +28,8 @@ To use this Apache Druid (incubating) extension, make sure to [include](../../op
 
 Additionally, follow these steps:
 
-(1) First, use a single dimension hash-based partition spec to partition data by a single dimension. For example visitor_id. This to make sure all rows with a particular value for that dimension will go into the same segment, or this might over count.
-(2) Second, use distinctCount to calculate the distinct count, make sure queryGranularity is divided exactly by segmentGranularity or else the result will be wrong.
+1. First, use a single-dimension, hash-based partition spec to partition data by a single dimension, for example visitor_id. This is to make sure all rows with a particular value for that dimension go into the same segment; otherwise the aggregator might overcount.
+2. Second, use distinctCount to calculate the distinct count, and make sure queryGranularity divides segmentGranularity exactly, or else the result will be wrong.
 
 There are some limitations. When used with groupBy, the number of groupBy keys should not exceed maxIntermediateRows in every segment; if it does, the result will be wrong. When used with topN, numValuesPerPass should not be too big; if it is, the distinctCount will use a lot of memory and might cause the JVM to run out of memory.
 
diff --git a/docs/latest/development/extensions-contrib/influx.md b/docs/latest/development/extensions-contrib/influx.md
index c5c071b..62e036b 100644
--- a/docs/latest/development/extensions-contrib/influx.md
+++ b/docs/latest/development/extensions-contrib/influx.md
@@ -35,6 +35,7 @@ A typical line looks like this:
 ```cpu,application=dbhost=prdb123,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000```
 
 which contains four parts:
+
   - measurement: A string indicating the name of the measurement represented (e.g. cpu, network, web_requests)
   - tags: zero or more key-value pairs (i.e. dimensions)
   - measurements: one or more key-value pairs; values can be numeric, boolean, or string
@@ -43,6 +44,7 @@ which contains four parts:
 The parser extracts these fields into a map, giving the measurement the key `measurement` and the timestamp the key `_ts`. The tag and measurement keys are copied verbatim, so users should take care to avoid name collisions. It is up to the ingestion spec to decide which fields should be treated as dimensions and which should be treated as metrics (typically tags correspond to dimensions and measurements correspond to metrics).
 
 The parser is configured like so:
+
 ```json
 "parser": {
       "type": "string",
diff --git a/docs/latest/development/extensions-contrib/materialized-view.md b/docs/latest/development/extensions-contrib/materialized-view.md
index 95bfde9..963a944 100644
--- a/docs/latest/development/extensions-contrib/materialized-view.md
+++ b/docs/latest/development/extensions-contrib/materialized-view.md
@@ -33,6 +33,7 @@ In materialized-view-maintenance, dataSouces user ingested are called "base-data
 The `derivativeDataSource` supervisor is used to keep the timeline of derived-dataSource consistent with base-dataSource. Each `derivativeDataSource` supervisor  is responsible for one derived-dataSource.
 
 A sample derivativeDataSource supervisor spec is shown below:
+
 ```json
    {
        "type": "derivativeDataSource",
@@ -90,6 +91,7 @@ A sample derivativeDataSource supervisor spec is shown below:
 In materialized-view-selection, we implement a new query type `view`. When we request a view query, Druid will try its best to optimize the query based on query dataSource and intervals.
 
 A sample view query spec is shown below:
+
 ```json
    {
        "queryType": "view",
@@ -124,6 +126,7 @@ A sample view query spec is shown below:
        }
    }
 ```
+
 There are 2 parts in a view query:
 
 |Field|Description|Required|
diff --git a/docs/latest/development/extensions-contrib/momentsketch-quantiles.md b/docs/latest/development/extensions-contrib/momentsketch-quantiles.md
index 966caa2..3eeadaf 100644
--- a/docs/latest/development/extensions-contrib/momentsketch-quantiles.md
+++ b/docs/latest/development/extensions-contrib/momentsketch-quantiles.md
@@ -38,6 +38,7 @@ druid.extensions.loadList=["druid-momentsketch"]
 The result of the aggregation is a momentsketch that is the union of all sketches either built from raw data or read from the segments.
 
 The `momentSketch` aggregator operates over raw data while the `momentSketchMerge` aggregator should be used when aggregating pre-computed sketches.
+
 ```json
 {
   "type" : <aggregator_type>,
@@ -59,6 +60,7 @@ The `momentSketch` aggregator operates over raw data while the `momentSketchMerg
 ### Post Aggregators
 
 Users can query for a set of quantiles using the `momentSketchSolveQuantiles` post-aggregator on the sketches created by the `momentSketch` or `momentSketchMerge` aggregators.
+
 ```json
 {
   "type"  : "momentSketchSolveQuantiles",
@@ -69,6 +71,7 @@ Users can query for a set of quantiles using the `momentSketchSolveQuantiles` po
 ```
 
 Users can also query for the min/max of a distribution:
+
 ```json
 {
   "type" : "momentSketchMin" | "momentSketchMax",
@@ -79,6 +82,7 @@ Users can also query for the min/max of a distribution:
 
 ### Example
 As an example of a query with sketches pre-aggregated at ingestion time, one could set up the following aggregator at ingest:
+
 ```json
 {
   "type": "momentSketch", 
@@ -88,7 +92,9 @@ As an example of a query with sketches pre-aggregated at ingestion time, one cou
   "compress": true,
 }
 ```
+
 and make queries using the following aggregator + post-aggregator:
+
 ```json
 {
   "aggregations": [{
diff --git a/docs/latest/development/extensions-contrib/moving-average-query.md b/docs/latest/development/extensions-contrib/moving-average-query.md
index 5fc7268..7e028cc 100644
--- a/docs/latest/development/extensions-contrib/moving-average-query.md
+++ b/docs/latest/development/extensions-contrib/moving-average-query.md
 These Aggregate Window Functions consume standard Druid Aggregators and output additional windowed aggregates called Averagers.
 Moving Average encapsulates the [groupBy query](../../querying/groupbyquery.html) (Or [timeseries](../../querying/timeseriesquery.html) in case of no dimensions) in order to rely on the maturity of these query types.
 
 It runs the query in two main phases:
+
 1. Runs an inner [groupBy](../../querying/groupbyquery.html) or [timeseries](../../querying/timeseriesquery.html) query to compute Aggregators (i.e. daily count of events).
 2. Passes over aggregated results in Broker, in order to compute Averagers (i.e. moving 7 day average of the daily count).
 
@@ -110,6 +111,7 @@ These are properties which are common to all Averagers:
 #### Standard averagers
 
 These averagers offer four functions:
+
 * Mean (Average)
 * MeanNoNulls (Ignores empty buckets).
 * Max
@@ -121,6 +123,7 @@ In that case, the first records will ignore missing buckets and average won't be
 However, this also means that empty days in a sparse dataset will also be ignored.
 
 Example of usage:
+
 ```json
 { "type" : "doubleMean", "name" : <output_name>, "fieldName": <input_name> }
 ```
@@ -130,6 +133,7 @@ This optional parameter is used to calculate over a single bucket within each cy
 A prime example would be weekly buckets, resulting in a Day of Week calculation. (Other examples: Month of year, Hour of day).
 
 I.e. when using these parameters:
+
 * *granularity*: period=P1D (daily)
 * *buckets*: 28
 * *cycleSize*: 7
@@ -146,6 +150,7 @@ All examples are based on the Wikipedia dataset provided in the Druid [tutorials
 Calculating a 7-buckets moving average for Wikipedia edit deltas.
 
 Query syntax:
+
 ```json
 {
   "queryType": "movingAverage",
@@ -176,6 +181,7 @@ Query syntax:
 ```
 
 Result:
+
 ```json
 [ {
    "version" : "v1",
@@ -217,6 +223,7 @@ Result:
 Calculating a 7-buckets moving average for Wikipedia edit deltas, plus a ratio between the current period and the moving average.
 
 Query syntax:
+
 ```json
 {
   "queryType": "movingAverage",
@@ -264,6 +271,7 @@ Query syntax:
 ```
 
 Result:
+
 ```json
 [ {
   "version" : "v1",
@@ -306,6 +314,7 @@ Result:
 Calculating an average of every first 10-minutes of the last 3 hours:
 
 Query syntax:
+
 ```json
 {
   "queryType": "movingAverage",
diff --git a/docs/latest/development/extensions-core/druid-basic-security.md b/docs/latest/development/extensions-core/druid-basic-security.md
index 28eff1f..e067fdf 100644
--- a/docs/latest/development/extensions-core/druid-basic-security.md
+++ b/docs/latest/development/extensions-core/druid-basic-security.md
@@ -172,6 +172,90 @@ Return a list of all user names.
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Return the name and role information of the user with name {userName}
 
+Example output:
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    "druidRole"
+  ]
+}
+```
+
+This API supports the following flags:
+
+- `?full`: The response will also include the full information for each role currently assigned to the user.
+
+Example output:
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "permissions": [
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "A",
+              "type": "DATASOURCE"
+            },
+            "action": "READ"
+          },
+          "resourceNamePattern": "A"
+        },
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "C",
+              "type": "CONFIG"
+            },
+            "action": "WRITE"
+          },
+          "resourceNamePattern": "C"
+        }
+      ]
+    }
+  ]
+}
+```
+
+The output format of this API when `?full` is specified is deprecated, and in later versions it will be switched to the output format used when both the `?full` and `?simplifyPermissions` flags are set.
+
+The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+- `?full?simplifyPermissions`: When both `?full` and `?simplifyPermissions` are set, the permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field.
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "users": null,
+      "permissions": [
+        {
+          "resource": {
+            "name": "A",
+            "type": "DATASOURCE"
+          },
+          "action": "READ"
+        },
+        {
+          "resource": {
+            "name": "C",
+            "type": "CONFIG"
+          },
+          "action": "WRITE"
+        }
+      ]
+    }
+  ]
+}
+```
+
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Create a new user with name {userName}
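+
+For example, to create the `druid2` user from the example output above, you could POST to this endpoint with no body (a sketch; the authorizer name `MyBasicAuthorizer`, the admin credentials, and the Coordinator address are assumptions that depend on your configuration):
+
+```bash
+# Create a user named "druid2" under the authorizer "MyBasicAuthorizer".
+curl -u admin:password1 -XPOST \
+  http://localhost:8081/druid-ext/basic-security/authorization/db/MyBasicAuthorizer/users/druid2
+```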
 
@@ -184,7 +268,58 @@ Delete the user with name {userName}
 Return a list of all role names.
 
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
-Return name and permissions for the role named {roleName}
+Return name and permissions for the role named {roleName}.
+
+Example output:
+
+```json
+{
+  "name": "druidRole2",
+  "permissions": [
+    {
+      "resourceAction": {
+        "resource": {
+          "name": "E",
+          "type": "DATASOURCE"
+        },
+        "action": "WRITE"
+      },
+      "resourceNamePattern": "E"
+    }
+  ]
+}
+```
+
+The default output format of this API is deprecated and in later versions will be switched to the output format used when the `?simplifyPermissions` flag is set. The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+This API supports the following flags:
+
+- `?full`: The output will contain an extra `users` list, containing the users that currently have this role.
+
+```json
+"users":["druid"]
+```
+
+- `?simplifyPermissions`: The permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field. The `users` field will be null when `?full` is not specified.
+
+Example output:
+
+```json
+{
+  "name": "druidRole2",
+  "users": null,
+  "permissions": [
+    {
+      "resource": {
+        "name": "E",
+        "type": "DATASOURCE"
+      },
+      "action": "WRITE"
+    }
+  ]
+}
+```
+
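+For a role like `druidRole2` above, the simplified permissions output can be fetched with a request along these lines (a sketch; the authorizer name `MyBasicAuthorizer`, the admin credentials, and the Coordinator address are assumptions):
+
+```bash
+curl -u admin:password1 \
+  "http://localhost:8081/druid-ext/basic-security/authorization/db/MyBasicAuthorizer/roles/druidRole2?simplifyPermissions"
+```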
 
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
 Create a new role with name {roleName}.
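+
+For example, to create a role and then grant it the WRITE permission on datasource `E` from the example output above, something like the following could work (a sketch; the separate `.../permissions` endpoint, the authorizer name `MyBasicAuthorizer`, the admin credentials, and the Coordinator address are assumptions based on a typical basic-security setup):
+
+```bash
+# Create the role.
+curl -u admin:password1 -XPOST \
+  http://localhost:8081/druid-ext/basic-security/authorization/db/MyBasicAuthorizer/roles/druidRole2
+
+# Replace the role's permission set with a single WRITE permission on datasource "E".
+curl -u admin:password1 -XPOST -H 'Content-Type: application/json' \
+  -d '[{"resource": {"name": "E", "type": "DATASOURCE"}, "action": "WRITE"}]' \
+  http://localhost:8081/druid-ext/basic-security/authorization/db/MyBasicAuthorizer/roles/druidRole2/permissions
+```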
diff --git a/docs/latest/development/extensions-core/druid-lookups.md b/docs/latest/development/extensions-core/druid-lookups.md
index 53476eb..9f5798e 100644
--- a/docs/latest/development/extensions-core/druid-lookups.md
+++ b/docs/latest/development/extensions-core/druid-lookups.md
@@ -75,6 +75,7 @@ Same for Loading cache, developer can implement a new type of loading cache by i
 
 #####   Example of Polling On-heap Lookup
 This example demonstrates a polling cache that will update its on-heap cache every 10 minutes
+
 ```json
 {
     "type":"pollingLookup",
diff --git a/docs/latest/development/extensions-core/orc.md b/docs/latest/development/extensions-core/orc.md
index af7a315..791531d 100644
--- a/docs/latest/development/extensions-core/orc.md
+++ b/docs/latest/development/extensions-core/orc.md
@@ -269,6 +269,7 @@ This extension, first available in version 0.15.0, replaces the previous 'contri
 ingestion task is *incompatible*, and will need to be modified to work with the newer 'core' extension.
 
 To migrate to 0.15.0+:
+
 * In `inputSpec` of `ioConfig`, `inputFormat` must be changed from `"org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat"` to 
 `"org.apache.orc.mapreduce.OrcInputFormat"`
 * The 'contrib' extension supported a `typeString` property, which provided the schema of the
 ORC file, which essentially was required to have the types correct, but notably
 facilitated column renaming. In the 'core' extension, column renaming can be achieved with 
 [`flattenSpec` expressions](../../ingestion/flatten-json.html). For example, `"typeString":"struct<time:string,name:string>"`
 with the actual schema `struct<_col0:string,_col1:string>`, to preserve the Druid schema, would need to be replaced with:
+
 ```json
 "flattenSpec": {
   "fields": [
@@ -293,10 +295,12 @@ with the actual schema `struct<_col0:string,_col1:string>`, to preserve Druid sc
   ...
 }
 ```
+
 * The 'contrib' extension supported a `mapFieldNameFormat` property, which provided a way to specify a dimension to
  flatten `OrcMap` columns with primitive types. This functionality has also been replaced with
  [`flattenSpec` expressions](../../ingestion/flatten-json.html). For example: `"mapFieldNameFormat": "<PARENT>_<CHILD>"`
  for a dimension `nestedData_dim1`, to preserve the Druid schema, could be replaced with
+
  ```json
 "flattenSpec": {
   "fields": [
diff --git a/docs/latest/development/extensions.md b/docs/latest/development/extensions.md
index 2190793..15b087c 100644
--- a/docs/latest/development/extensions.md
+++ b/docs/latest/development/extensions.md
@@ -74,7 +74,7 @@ Community extensions are not maintained by Druid committers, although we accept
 A number of community members have contributed their own extensions to Druid that are not packaged with the default Druid tarball.
 If you'd like to take on maintenance for a community extension, please post on [dev@druid.apache.org](https://lists.apache.org/list.html?dev@druid.apache.org) to let us know!
 
-All of these community extensions can be downloaded using *pull-deps* with the coordinate org.apache.druid.extensions.contrib:EXTENSION_NAME:LATEST_DRUID_STABLE_VERSION.
+All of these community extensions can be downloaded using [pull-deps](../operations/pull-deps.html) while specifying a `-c` coordinate option to pull `org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}`.
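+
+For example, a command along these lines pulls one of the extensions listed below (a sketch; substitute the extension name from the table and your Druid version, and point the classpath at your Druid `lib` directory):
+
+```bash
+java -classpath "lib/*" org.apache.druid.cli.Main tools pull-deps \
+  -c "org.apache.druid.extensions.contrib:{EXTENSION_NAME}:{DRUID_VERSION}"
+```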
 
 |Name|Description|Docs|
 |----|-----------|----|
@@ -95,6 +95,7 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |kafka-emitter|Kafka metrics emitter|[link](../development/extensions-contrib/kafka-emitter.html)|
 |druid-thrift-extensions|Support thrift ingestion |[link](../development/extensions-contrib/thrift.html)|
 |druid-opentsdb-emitter|OpenTSDB metrics emitter |[link](../development/extensions-contrib/opentsdb-emitter.html)|
+|materialized-view-selection, materialized-view-maintenance|Materialized View|[link](../development/extensions-contrib/materialized-view.html)|
 |druid-moving-average-query|Support for [Moving Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate [Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions) in Druid queries.|[link](../development/extensions-contrib/moving-average-query.html)|
 
 ## Promoting Community Extension to Core Extension
diff --git a/docs/latest/operations/pull-deps.md b/docs/latest/operations/pull-deps.md
index 12d443d..d7c1e49 100644
--- a/docs/latest/operations/pull-deps.md
+++ b/docs/latest/operations/pull-deps.md
@@ -58,7 +58,7 @@ Don't use the default remote repositories, only use the repositories provided di
 
 `-d` or `--defaultVersion`
 
-Version to use for extension coordinate that doesn't have a version information. For example, if extension coordinate is `org.apache.druid.extensions:mysql-metadata-storage`, and default version is `0.15.0-incubating`, then this coordinate will be treated as `org.apache.druid.extensions:mysql-metadata-storage:0.15.0-incubating`
+Version to use for an extension coordinate that doesn't have version information. For example, if the extension coordinate is `org.apache.druid.extensions:mysql-metadata-storage`, and the default version is `0.15.1-incubating`, then this coordinate will be treated as `org.apache.druid.extensions:mysql-metadata-storage:0.15.1-incubating`
 
 `--use-proxy`
 
@@ -92,10 +92,10 @@ To run `pull-deps`, you should
 
 Example:
 
-Suppose you want to download ```druid-rabbitmq```, ```mysql-metadata-storage``` and ```hadoop-client```(both 2.3.0 and 2.4.0) with a specific version, you can run `pull-deps` command with `-c org.apache.druid.extensions:druid-examples:0.15.0-incubating`, `-c org.apache.druid.extensions:mysql-metadata-storage:0.15.0-incubating`, `-h org.apache.hadoop:hadoop-client:2.3.0` and `-h org.apache.hadoop:hadoop-client:2.4.0`, an example command would be:
+Suppose you want to download ```druid-rabbitmq```, ```mysql-metadata-storage``` and ```hadoop-client``` (both 2.3.0 and 2.4.0) with a specific version. You can run the `pull-deps` command with `-c org.apache.druid.extensions:druid-examples:0.15.1-incubating`, `-c org.apache.druid.extensions:mysql-metadata-storage:0.15.1-incubating`, `-h org.apache.hadoop:hadoop-client:2.3.0` and `-h org.apache.hadoop:hadoop-client:2.4.0`; an example command would be:
 
 ```
-java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:0.15.0-incubating -c org.apache.druid.extensions.contrib:druid-rabbitmq:0.15.0-incubating -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
+java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --clean -c org.apache.druid.extensions:mysql-metadata-storage:0.15.1-incubating -c org.apache.druid.extensions.contrib:druid-rabbitmq:0.15.1-incubating -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
 ```
 
 Because `--clean` is supplied, this command will first remove the directories specified at `druid.extensions.directory` and `druid.extensions.hadoopDependenciesDir`, then recreate them and start downloading the extensions there. After finishing downloading, if you go to the extension directories you specified, you will see
@@ -108,12 +108,12 @@ extensions
 │   ├── commons-digester-1.8.jar
 │   ├── commons-logging-1.1.1.jar
 │   ├── commons-validator-1.4.0.jar
-│   ├── druid-examples-0.15.0-incubating.jar
+│   ├── druid-examples-0.15.1-incubating.jar
 │   ├── twitter4j-async-3.0.3.jar
 │   ├── twitter4j-core-3.0.3.jar
 │   └── twitter4j-stream-3.0.3.jar
 └── mysql-metadata-storage
-    └── mysql-metadata-storage-0.15.0-incubating.jar
+    └── mysql-metadata-storage-0.15.1-incubating.jar
 ```
 
 ```
@@ -138,10 +138,10 @@ hadoop-dependencies/
     ..... lots of jars
 ```
 
-Note that if you specify `--defaultVersion`, you don't have to put version information in the coordinate. For example, if you want both `druid-rabbitmq` and `mysql-metadata-storage` to use version `0.15.0-incubating`,  you can change the command above to
+Note that if you specify `--defaultVersion`, you don't have to put version information in the coordinate. For example, if you want both `druid-rabbitmq` and `mysql-metadata-storage` to use version `0.15.1-incubating`, you can change the command above to
 
 ```
-java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion 0.15.0-incubating --clean -c org.apache.druid.extensions:mysql-metadata-storage -c org.apache.druid.extensions.contrib:druid-rabbitmq -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
+java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools pull-deps --defaultVersion 0.15.1-incubating --clean -c org.apache.druid.extensions:mysql-metadata-storage -c org.apache.druid.extensions.contrib:druid-rabbitmq -h org.apache.hadoop:hadoop-client:2.3.0 -h org.apache.hadoop:hadoop-client:2.4.0
 ```
 
 <div class="note info">
diff --git a/docs/latest/querying/filters.md b/docs/latest/querying/filters.md
index 2f9b23a..53e0853 100644
--- a/docs/latest/querying/filters.md
+++ b/docs/latest/querying/filters.md
@@ -282,6 +282,7 @@ greater than, less than, greater than or equal to, less than or equal to, and "b
 Bound filters support the use of extraction functions, see [Filtering with Extraction Functions](#filtering-with-extraction-functions) for details.
 
 The following bound filter expresses the condition `21 <= age <= 31`:
+
 ```json
 {
     "type": "bound",
@@ -293,6 +294,7 @@ The following bound filter expresses the condition `21 <= age <= 31`:
 ```
 
 This filter expresses the condition `foo <= name <= hoo`, using the default lexicographic sorting order.
+
 ```json
 {
     "type": "bound",
@@ -303,6 +305,7 @@ This filter expresses the condition `foo <= name <= hoo`, using the default lexi
 ```
 
 Using strict bounds, this filter expresses the condition `21 < age < 31`
+
 ```json
 {
     "type": "bound",
@@ -316,6 +319,7 @@ Using strict bounds, this filter expresses the condition `21 < age < 31`
 ```
 
 The user can also specify a one-sided bound by omitting "upper" or "lower". This filter expresses `age < 31`.
+
 ```json
 {
     "type": "bound",
@@ -327,6 +331,7 @@ The user can also specify a one-sided bound by omitting "upper" or "lower". This
 ```
 
 Likewise, this filter expresses `age >= 18`
+
 ```json
 {
     "type": "bound",
@@ -355,6 +360,7 @@ The interval filter supports the use of extraction functions, see [Filtering wit
 If an extraction function is used with this filter, the extraction function should output values that are parseable as long milliseconds.
 
 The following example filters on the time ranges of October 1-7, 2014 and November 15-16, 2014.
+
 ```json
 {
     "type" : "interval",
diff --git a/docs/latest/tutorials/cluster.md b/docs/latest/tutorials/cluster.md
index b93423e..af37bf9 100644
--- a/docs/latest/tutorials/cluster.md
+++ b/docs/latest/tutorials/cluster.md
@@ -142,14 +142,14 @@ First, download and unpack the release archive. It's best to do this on a single
 since you will be editing the configurations and then copying the modified distribution out to all
 of your servers.
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.0-incubating/apache-druid-0.15.0-incubating-bin.tar.gz)
-the 0.15.0-incubating release.
+[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.1-incubating/apache-druid-0.15.1-incubating-bin.tar.gz)
+the 0.15.1-incubating release.
 
 Extract Druid by running the following commands in your terminal:
 
 ```bash
-tar -xzf apache-druid-0.15.0-incubating-bin.tar.gz
-cd apache-druid-0.15.0-incubating
+tar -xzf apache-druid-0.15.1-incubating-bin.tar.gz
+cd apache-druid-0.15.1-incubating
 ```
 
 In the package, you should find:
@@ -416,7 +416,7 @@ Copy the Druid distribution and your edited configurations to your Master server
 If you have been editing the configurations on your local machine, you can use *rsync* to copy them:
 
 ```bash
-rsync -az apache-druid-0.15.0-incubating/ MASTER_SERVER:apache-druid-0.15.0-incubating/
+rsync -az apache-druid-0.15.1-incubating/ MASTER_SERVER:apache-druid-0.15.1-incubating/
 ```
 
 ### No Zookeeper on Master
diff --git a/docs/latest/tutorials/index.md b/docs/latest/tutorials/index.md
index 6a7c89c..1b37a87 100644
--- a/docs/latest/tutorials/index.md
+++ b/docs/latest/tutorials/index.md
@@ -53,14 +53,14 @@ configuration than `micro-quickstart`.
 
 ## Getting started
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.0-incubating/apache-druid-0.15.0-incubating-bin.tar.gz)
-the 0.15.0-incubating release.
+[Download](https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/0.15.1-incubating/apache-druid-0.15.1-incubating-bin.tar.gz)
+the 0.15.1-incubating release.
 
 Extract Druid by running the following commands in your terminal:
 
 ```bash
-tar -xzf apache-druid-0.15.0-incubating-bin.tar.gz
-cd apache-druid-0.15.0-incubating
+tar -xzf apache-druid-0.15.1-incubating-bin.tar.gz
+cd apache-druid-0.15.1-incubating
 ```
 
 In the package, you should find:
@@ -87,7 +87,7 @@ mv zookeeper-3.4.11 zk
 ```
 
 The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at `zk` under the
-apache-druid-0.15.0-incubating package root.
+apache-druid-0.15.1-incubating package root.
 
 ## Start up Druid services
 
@@ -95,7 +95,7 @@ The following commands will assume that you are using the `micro-quickstart` sin
 using a different configuration, the `bin` directory has equivalent scripts for each configuration, such as
 `bin/start-single-server-small`.
 
-From the apache-druid-0.15.0-incubating package root, run the following command:
+From the apache-druid-0.15.1-incubating package root, run the following command:
 
 ```bash
 ./bin/start-micro-quickstart
@@ -105,15 +105,15 @@ This will bring up instances of Zookeeper and the Druid services, all running on
 
 ```bash
 $ ./bin/start-micro-quickstart 
-[Fri May  3 11:40:50 2019] Running command[zk], logging to[/apache-druid-0.15.0-incubating/var/sv/zk.log]: bin/run-zk conf
-[Fri May  3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-0.15.0-incubating/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
-[Fri May  3 11:40:50 2019] Running command[broker], logging to[/apache-druid-0.15.0-incubating/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
-[Fri May  3 11:40:50 2019] Running command[router], logging to[/apache-druid-0.15.0-incubating/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
-[Fri May  3 11:40:50 2019] Running command[historical], logging to[/apache-druid-0.15.0-incubating/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
-[Fri May  3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-0.15.0-incubating/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[zk], logging to[/apache-druid-0.15.1-incubating/var/sv/zk.log]: bin/run-zk conf
+[Fri May  3 11:40:50 2019] Running command[coordinator-overlord], logging to[/apache-druid-0.15.1-incubating/var/sv/coordinator-overlord.log]: bin/run-druid coordinator-overlord conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[broker], logging to[/apache-druid-0.15.1-incubating/var/sv/broker.log]: bin/run-druid broker conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[router], logging to[/apache-druid-0.15.1-incubating/var/sv/router.log]: bin/run-druid router conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[historical], logging to[/apache-druid-0.15.1-incubating/var/sv/historical.log]: bin/run-druid historical conf/druid/single-server/micro-quickstart
+[Fri May  3 11:40:50 2019] Running command[middleManager], logging to[/apache-druid-0.15.1-incubating/var/sv/middleManager.log]: bin/run-druid middleManager conf/druid/single-server/micro-quickstart
 ```
 
-All persistent state such as the cluster metadata store and segments for the services will be kept in the `var` directory under the apache-druid-0.15.0-incubating package root. Logs for the services are located at `var/sv`.
+All persistent state such as the cluster metadata store and segments for the services will be kept in the `var` directory under the apache-druid-0.15.1-incubating package root. Logs for the services are located at `var/sv`.
 
 Later on, if you'd like to stop the services, CTRL-C to exit the `bin/start-micro-quickstart` script, which will terminate the Druid processes.
 
diff --git a/docs/latest/tutorials/tutorial-batch-hadoop.md b/docs/latest/tutorials/tutorial-batch-hadoop.md
index 3ea0911..71869c8 100644
--- a/docs/latest/tutorials/tutorial-batch-hadoop.md
+++ b/docs/latest/tutorials/tutorial-batch-hadoop.md
@@ -42,7 +42,7 @@ For this tutorial, we've provided a Dockerfile for a Hadoop 2.8.3 cluster, which
 
 This Dockerfile and related files are located at `quickstart/tutorial/hadoop/docker`.
 
-From the apache-druid-0.15.0-incubating package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":
+From the apache-druid-0.15.1-incubating package root, run the following commands to build a Docker image named "druid-hadoop-demo" with version tag "2.8.3":
 
 ```bash
 cd quickstart/tutorial/hadoop/docker
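# The hunk above is cut off before the actual build step; a possible invocation,
# assuming the Dockerfile sits in this directory and uses the image name and tag
# mentioned in the text, would be:
docker build -t druid-hadoop-demo:2.8.3 .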
@@ -110,7 +110,7 @@ docker exec -it druid-hadoop-demo bash
 
 ### Copy input data to the Hadoop container
 
-From the apache-druid-0.15.0-incubating package root on the host, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
+From the apache-druid-0.15.1-incubating package root on the host, copy the `quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz` sample data to the shared folder:
 
 ```bash
 cp quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz /tmp/shared/wikiticker-2015-09-12-sampled.json.gz
@@ -196,7 +196,7 @@ druid.indexer.logs.directory=/druid/indexing-logs
 
 Once the Hadoop .xml files have been copied to the Druid cluster and the segment/log storage configuration has been updated to use HDFS, the Druid cluster needs to be restarted for the new configurations to take effect.
 
-If the cluster is still running, CTRL-C to terminate the `bin/supervise` script, and re-reun it to bring the Druid services back up.
+If the cluster is still running, CTRL-C to terminate the `bin/start-micro-quickstart` script, and re-run it to bring the Druid services back up.
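
For reference, the HDFS switch described above typically comes down to a handful of deep storage and indexing log properties in the cluster's `common.runtime.properties`. A sketch along these lines, with paths assumed to match the tutorial's shared Docker volume (treat them as illustrative):

```
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```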
 
 ## Load batch data
 
diff --git a/docs/latest/tutorials/tutorial-batch.md b/docs/latest/tutorials/tutorial-batch.md
index 1102c23..b2b3c3e 100644
--- a/docs/latest/tutorials/tutorial-batch.md
+++ b/docs/latest/tutorials/tutorial-batch.md
@@ -237,7 +237,7 @@ Once the spec is submitted, you can follow the same instructions as above to wai
 
 Let's briefly discuss how we would've submitted the ingestion task without using the script. You do not need to run these commands.
 
-To submit the task, POST it to Druid in a new terminal window from the apache-druid-0.15.0-incubating directory:
+To submit the task, POST it to Druid in a new terminal window from the apache-druid-0.15.1-incubating directory:
 
 ```bash
 curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
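# (illustrative) the POST above returns a JSON object containing the task id; that id
# can then be used to poll the Overlord's task status endpoint -- <taskId> here is a
# placeholder, not a literal value:
curl http://localhost:8081/druid/indexer/v1/task/<taskId>/status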
diff --git a/docs/latest/tutorials/tutorial-ingestion-spec.md b/docs/latest/tutorials/tutorial-ingestion-spec.md
index d769fdf..aba78b8 100644
--- a/docs/latest/tutorials/tutorial-ingestion-spec.md
+++ b/docs/latest/tutorials/tutorial-ingestion-spec.md
@@ -631,7 +631,7 @@ We've finished defining the ingestion spec, it should now look like the followin
 
 ## Submit the task and query the data
 
-From the apache-druid-0.15.0-incubating package root, run the following command:
+From the apache-druid-0.15.1-incubating package root, run the following command:
 
 ```bash
 bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
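# (illustrative) once the task succeeds and the segments are loaded, the data can be
# queried through the router's SQL endpoint; the datasource name below is assumed to
# match the one defined in the tutorial's ingestion spec:
curl -XPOST -H'Content-Type: application/json' http://localhost:8888/druid/v2/sql \
  -d '{"query":"SELECT * FROM \"ingestion-tutorial\" LIMIT 5"}'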
diff --git a/docs/latest/tutorials/tutorial-rollup.md b/docs/latest/tutorials/tutorial-rollup.md
index c6a6f30..17679b0 100644
--- a/docs/latest/tutorials/tutorial-rollup.md
+++ b/docs/latest/tutorials/tutorial-rollup.md
@@ -114,7 +114,7 @@ We will see how these definitions are used after we load this data.
 
 ## Load the example data
 
-From the apache-druid-0.15.0-incubating package root, run the following command:
+From the apache-druid-0.15.1-incubating package root, run the following command:
 
 ```bash
 bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
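# (illustrative) after ingestion completes, the effect of rollup can be checked by
# selecting the rows back out: input rows sharing the same truncated timestamp and
# dimension values should come back as a single row. The datasource name below is
# assumed from the tutorial spec:
curl -XPOST -H'Content-Type: application/json' http://localhost:8888/druid/v2/sql \
  -d '{"query":"SELECT * FROM \"rollup-tutorial\""}'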
diff --git a/docs/latest/tutorials/tutorial-tranquility.md b/docs/latest/tutorials/tutorial-tranquility.md
index e4d1426..5ff8176 100644
--- a/docs/latest/tutorials/tutorial-tranquility.md
+++ b/docs/latest/tutorials/tutorial-tranquility.md
@@ -44,17 +44,17 @@ tar -xzf tranquility-distribution-0.8.3.tgz
 mv tranquility-distribution-0.8.3 tranquility
 ```
 
-The startup scripts for the tutorial will expect the contents of the Tranquility tarball to be located at `tranquility` under the apache-druid-0.15.0-incubating package root.
+The startup scripts for the tutorial will expect the contents of the Tranquility tarball to be located at `tranquility` under the apache-druid-0.15.1-incubating package root.
 
 ## Enable Tranquility Server
 
 - In your `conf/supervise/single-server/micro-quickstart.conf`, uncomment the `tranquility-server` line.
-- Stop your *bin/supervise* command (CTRL-C) and then restart it by again running `bin/supervise -c conf/supervise/single-server/micro-quickstart.conf`.
+- Stop the *micro-quickstart* cluster (CTRL-C), then restart it by running `bin/start-micro-quickstart`.
 
 As part of the output of *supervise* you should see something like:
 
 ```bash
-Running command[tranquility-server], logging to[/stage/apache-druid-0.15.0-incubating/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile conf/tranquility/server.json -Ddruid.extensions.loadList=[]
+Running command[tranquility-server], logging to[/stage/apache-druid-0.15.1-incubating/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile conf/tranquility/server.json -Ddruid.extensions.loadList=[]
 ```
 
 You can check the log file in `var/sv/tranquility-server.log` to confirm that the server is starting up properly.
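
A quick way to do that check, and to verify that the server accepts events, is sketched below; the port and datasource path assume the tutorial's default `conf/tranquility/server.json`, so adjust them if your configuration differs:

```bash
# watch the Tranquility Server log until it reports that it is listening
tail -f var/sv/tranquility-server.log

# send the bundled example metrics to the server's HTTP endpoint
bin/generate-example-metrics | curl -XPOST -H'Content-Type: application/json' \
  --data-binary @- http://localhost:8200/v1/post/tutorial-tranquility
```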
diff --git a/package-lock.json b/package-lock.json
index c2be32c..cff3f79 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -1,6 +1,6 @@
 {
   "name": "druid-website",
-  "version": "0.1.0",
+  "version": "0.2.0",
   "lockfileVersion": 1,
   "requires": true,
   "dependencies": {

