Posted to commits@druid.apache.org by cw...@apache.org on 2019/09/10 23:08:01 UTC

[incubator-druid-website-src] branch 0.16.0-incubating created (now b6c5fcf)

This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a change to branch 0.16.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git.


      at b6c5fcf  docs update for 0.16.0-incubating (also skip checkstyle on docs build)

This branch includes the following new commits:

     new b6c5fcf  docs update for 0.16.0-incubating (also skip checkstyle on docs build)

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
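
For anyone who wants to inspect this branch locally, here is a minimal
sketch (assuming a standard git client is installed; the repository URL,
branch name, and commit hash are the ones quoted in this notification):

    # Clone the site source and switch to the release branch
    git clone https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git
    cd incubator-druid-website-src
    git checkout 0.16.0-incubating

    # Show the new commit together with its file-level diffstat
    git show --stat b6c5fcfb16c1385af0ac91eac5184e51f7b9a1b7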



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org


[incubator-druid-website-src] 01/01: docs update for 0.16.0-incubating (also skip checkstyle on docs build)

Posted by cw...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

cwylie pushed a commit to branch 0.16.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit b6c5fcfb16c1385af0ac91eac5184e51f7b9a1b7
Author: Clint Wylie <cw...@apache.org>
AuthorDate: Tue Sep 10 16:07:33 2019 -0700

    docs update for 0.16.0-incubating (also skip checkstyle on docs build)
---
 .../About-Experimental-Features.html               |    8 +
 docs/0.16.0-incubating/Aggregations.html           |    8 +
 docs/0.16.0-incubating/ApproxHisto.html            |    8 +
 docs/0.16.0-incubating/Batch-ingestion.html        |    8 +
 .../Booting-a-production-cluster.html              |    8 +
 docs/0.16.0-incubating/Broker-Config.html          |    8 +
 docs/0.16.0-incubating/Broker.html                 |    8 +
 docs/0.16.0-incubating/Build-from-source.html      |    8 +
 docs/0.16.0-incubating/Cassandra-Deep-Storage.html |    8 +
 docs/0.16.0-incubating/Cluster-setup.html          |    8 +
 docs/0.16.0-incubating/Compute.html                |    8 +
 .../Concepts-and-Terminology.html                  |    8 +
 docs/0.16.0-incubating/Configuration.html          |    8 +
 docs/0.16.0-incubating/Contribute.html             |    8 +
 docs/0.16.0-incubating/Coordinator-Config.html     |    8 +
 docs/0.16.0-incubating/Coordinator.html            |    8 +
 docs/0.16.0-incubating/DataSource.html             |    8 +
 .../0.16.0-incubating/DataSourceMetadataQuery.html |    8 +
 docs/0.16.0-incubating/Data_formats.html           |    8 +
 docs/0.16.0-incubating/Deep-Storage.html           |    8 +
 docs/0.16.0-incubating/Design.html                 |    8 +
 docs/0.16.0-incubating/DimensionSpecs.html         |    8 +
 docs/0.16.0-incubating/Download.html               |    8 +
 .../Druid-Personal-Demo-Cluster.html               |    8 +
 docs/0.16.0-incubating/Druid-vs-Cassandra.html     |    8 +
 docs/0.16.0-incubating/Druid-vs-Elasticsearch.html |    8 +
 docs/0.16.0-incubating/Druid-vs-Hadoop.html        |    8 +
 .../Druid-vs-Impala-or-Shark.html                  |    8 +
 docs/0.16.0-incubating/Druid-vs-Redshift.html      |    8 +
 docs/0.16.0-incubating/Druid-vs-Spark.html         |    8 +
 docs/0.16.0-incubating/Druid-vs-Vertica.html       |    8 +
 docs/0.16.0-incubating/Evaluate.html               |    8 +
 docs/0.16.0-incubating/Examples.html               |    8 +
 docs/0.16.0-incubating/Filters.html                |    8 +
 docs/0.16.0-incubating/Firehose.html               |    8 +
 docs/0.16.0-incubating/GeographicQueries.html      |    8 +
 docs/0.16.0-incubating/Granularities.html          |    8 +
 docs/0.16.0-incubating/GroupByQuery.html           |    8 +
 docs/0.16.0-incubating/Hadoop-Configuration.html   |    8 +
 docs/0.16.0-incubating/Having.html                 |    8 +
 docs/0.16.0-incubating/Historical-Config.html      |    8 +
 docs/0.16.0-incubating/Historical.html             |    8 +
 docs/0.16.0-incubating/Home.html                   |    8 +
 docs/0.16.0-incubating/Including-Extensions.html   |    8 +
 .../0.16.0-incubating/Indexing-Service-Config.html |    8 +
 docs/0.16.0-incubating/Indexing-Service.html       |    8 +
 docs/0.16.0-incubating/Ingestion-FAQ.html          |    8 +
 docs/0.16.0-incubating/Ingestion-overview.html     |    8 +
 docs/0.16.0-incubating/Ingestion.html              |    8 +
 .../Integrating-Druid-With-Other-Technologies.html |    8 +
 docs/0.16.0-incubating/Kafka-Eight.html            |    8 +
 docs/0.16.0-incubating/Libraries.html              |    8 +
 docs/0.16.0-incubating/LimitSpec.html              |    8 +
 docs/0.16.0-incubating/Loading-Your-Data.html      |    8 +
 docs/0.16.0-incubating/Logging.html                |    8 +
 docs/0.16.0-incubating/Master.html                 |    8 +
 docs/0.16.0-incubating/Metadata-storage.html       |    8 +
 docs/0.16.0-incubating/Metrics.html                |    8 +
 docs/0.16.0-incubating/Middlemanager.html          |    8 +
 docs/0.16.0-incubating/Modules.html                |    8 +
 docs/0.16.0-incubating/MySQL.html                  |    8 +
 docs/0.16.0-incubating/OrderBy.html                |    8 +
 docs/0.16.0-incubating/Other-Hadoop.html           |    8 +
 docs/0.16.0-incubating/Papers-and-talks.html       |    8 +
 docs/0.16.0-incubating/Peons.html                  |    8 +
 docs/0.16.0-incubating/Performance-FAQ.html        |    8 +
 docs/0.16.0-incubating/Plumber.html                |    8 +
 docs/0.16.0-incubating/Post-aggregations.html      |    8 +
 .../Production-Cluster-Configuration.html          |    8 +
 docs/0.16.0-incubating/Query-Context.html          |    8 +
 docs/0.16.0-incubating/Querying-your-data.html     |    8 +
 docs/0.16.0-incubating/Querying.html               |    8 +
 docs/0.16.0-incubating/Realtime-Config.html        |    8 +
 docs/0.16.0-incubating/Realtime-ingestion.html     |    8 +
 docs/0.16.0-incubating/Realtime.html               |    8 +
 docs/0.16.0-incubating/Recommendations.html        |    8 +
 docs/0.16.0-incubating/Rolling-Updates.html        |    8 +
 docs/0.16.0-incubating/Router.html                 |    8 +
 docs/0.16.0-incubating/Rule-Configuration.html     |    8 +
 docs/0.16.0-incubating/SearchQuery.html            |    8 +
 docs/0.16.0-incubating/SearchQuerySpec.html        |    8 +
 docs/0.16.0-incubating/SegmentMetadataQuery.html   |    8 +
 docs/0.16.0-incubating/Segments.html               |    8 +
 docs/0.16.0-incubating/SelectQuery.html            |    8 +
 .../Simple-Cluster-Configuration.html              |    8 +
 docs/0.16.0-incubating/Spatial-Filters.html        |    8 +
 docs/0.16.0-incubating/Spatial-Indexing.html       |    8 +
 .../Stand-Alone-With-Riak-CS.html                  |    8 +
 docs/0.16.0-incubating/Support.html                |    8 +
 docs/0.16.0-incubating/Tasks.html                  |    8 +
 docs/0.16.0-incubating/Thanks.html                 |    8 +
 docs/0.16.0-incubating/TimeBoundaryQuery.html      |    8 +
 docs/0.16.0-incubating/TimeseriesQuery.html        |    8 +
 docs/0.16.0-incubating/TopNMetricSpec.html         |    8 +
 docs/0.16.0-incubating/TopNQuery.html              |    8 +
 .../Tutorial-A-First-Look-at-Druid.html            |    8 +
 .../Tutorial-All-About-Queries.html                |    8 +
 .../Tutorial-Loading-Batch-Data.html               |    8 +
 .../Tutorial-Loading-Streaming-Data.html           |    8 +
 .../Tutorial-The-Druid-Cluster.html                |    8 +
 .../Tutorial:-A-First-Look-at-Druid.html           |    8 +
 .../Tutorial:-All-About-Queries.html               |    8 +
 .../Tutorial:-Loading-Batch-Data.html              |    8 +
 .../Tutorial:-Loading-Streaming-Data.html          |    8 +
 .../Tutorial:-Loading-Your-Data-Part-1.html        |    8 +
 .../Tutorial:-Loading-Your-Data-Part-2.html        |    8 +
 .../Tutorial:-The-Druid-Cluster.html               |    8 +
 docs/0.16.0-incubating/Tutorial:-Webstream.html    |    8 +
 docs/0.16.0-incubating/Tutorials.html              |    8 +
 docs/0.16.0-incubating/Twitter-Tutorial.html       |    8 +
 docs/0.16.0-incubating/Versioning.html             |    8 +
 docs/0.16.0-incubating/ZooKeeper.html              |    8 +
 docs/0.16.0-incubating/alerts.html                 |    8 +
 .../assets/druid-architecture.png                  |  Bin 0 -> 134117 bytes
 .../assets/druid-column-types.png                  |  Bin 0 -> 93363 bytes
 .../0.16.0-incubating/assets/druid-dataflow-2x.png |  Bin 0 -> 130160 bytes
 docs/0.16.0-incubating/assets/druid-dataflow-3.png |  Bin 0 -> 71425 bytes
 docs/0.16.0-incubating/assets/druid-manage-1.png   |  Bin 0 -> 80415 bytes
 docs/0.16.0-incubating/assets/druid-production.png |  Bin 0 -> 40124 bytes
 docs/0.16.0-incubating/assets/druid-timeline.png   |  Bin 0 -> 24160 bytes
 docs/0.16.0-incubating/assets/indexing_service.png |  Bin 0 -> 22490 bytes
 .../assets/segmentPropagation.png                  |  Bin 0 -> 30569 bytes
 .../assets/tutorial-batch-data-loader-01.png       |  Bin 0 -> 56488 bytes
 .../assets/tutorial-batch-data-loader-02.png       |  Bin 0 -> 360295 bytes
 .../assets/tutorial-batch-data-loader-03.png       |  Bin 0 -> 137443 bytes
 .../assets/tutorial-batch-data-loader-04.png       |  Bin 0 -> 167252 bytes
 .../assets/tutorial-batch-data-loader-05.png       |  Bin 0 -> 162488 bytes
 .../assets/tutorial-batch-data-loader-06.png       |  Bin 0 -> 64301 bytes
 .../assets/tutorial-batch-data-loader-07.png       |  Bin 0 -> 46529 bytes
 .../assets/tutorial-batch-data-loader-08.png       |  Bin 0 -> 103928 bytes
 .../assets/tutorial-batch-data-loader-09.png       |  Bin 0 -> 63348 bytes
 .../assets/tutorial-batch-data-loader-10.png       |  Bin 0 -> 44516 bytes
 .../assets/tutorial-batch-data-loader-11.png       |  Bin 0 -> 83288 bytes
 .../assets/tutorial-batch-submit-task-01.png       |  Bin 0 -> 69356 bytes
 .../assets/tutorial-batch-submit-task-02.png       |  Bin 0 -> 86076 bytes
 .../assets/tutorial-compaction-01.png              |  Bin 0 -> 35710 bytes
 .../assets/tutorial-compaction-02.png              |  Bin 0 -> 166571 bytes
 .../assets/tutorial-compaction-03.png              |  Bin 0 -> 26755 bytes
 .../assets/tutorial-compaction-04.png              |  Bin 0 -> 184365 bytes
 .../assets/tutorial-compaction-05.png              |  Bin 0 -> 26588 bytes
 .../assets/tutorial-compaction-06.png              |  Bin 0 -> 206717 bytes
 .../assets/tutorial-compaction-07.png              |  Bin 0 -> 26683 bytes
 .../assets/tutorial-compaction-08.png              |  Bin 0 -> 28751 bytes
 .../assets/tutorial-deletion-01.png                |  Bin 0 -> 43586 bytes
 .../assets/tutorial-deletion-02.png                |  Bin 0 -> 439602 bytes
 .../assets/tutorial-deletion-03.png                |  Bin 0 -> 437304 bytes
 .../0.16.0-incubating/assets/tutorial-kafka-01.png |  Bin 0 -> 85477 bytes
 .../0.16.0-incubating/assets/tutorial-kafka-02.png |  Bin 0 -> 75709 bytes
 .../0.16.0-incubating/assets/tutorial-query-01.png |  Bin 0 -> 100930 bytes
 .../0.16.0-incubating/assets/tutorial-query-02.png |  Bin 0 -> 83369 bytes
 .../0.16.0-incubating/assets/tutorial-query-03.png |  Bin 0 -> 65038 bytes
 .../0.16.0-incubating/assets/tutorial-query-04.png |  Bin 0 -> 66423 bytes
 .../0.16.0-incubating/assets/tutorial-query-05.png |  Bin 0 -> 51855 bytes
 .../0.16.0-incubating/assets/tutorial-query-06.png |  Bin 0 -> 82211 bytes
 .../0.16.0-incubating/assets/tutorial-query-07.png |  Bin 0 -> 78633 bytes
 .../assets/tutorial-quickstart-01.png              |  Bin 0 -> 29834 bytes
 .../assets/tutorial-retention-00.png               |  Bin 0 -> 77704 bytes
 .../assets/tutorial-retention-01.png               |  Bin 0 -> 35171 bytes
 .../assets/tutorial-retention-02.png               |  Bin 0 -> 240310 bytes
 .../assets/tutorial-retention-03.png               |  Bin 0 -> 30029 bytes
 .../assets/tutorial-retention-04.png               |  Bin 0 -> 44617 bytes
 .../assets/tutorial-retention-05.png               |  Bin 0 -> 38992 bytes
 .../assets/tutorial-retention-06.png               |  Bin 0 -> 137570 bytes
 .../assets/web-console-01-home-view.png}           |  Bin
 .../assets/web-console-02-data-loader-1.png}       |  Bin
 .../assets/web-console-03-data-loader-2.png}       |  Bin
 .../assets/web-console-04-datasources.png}         |  Bin
 .../assets/web-console-05-retention.png}           |  Bin
 .../assets/web-console-06-segments.png}            |  Bin
 .../assets/web-console-07-supervisors.png}         |  Bin
 .../assets/web-console-08-tasks.png}               |  Bin
 .../assets/web-console-09-task-status.png}         |  Bin
 .../assets/web-console-10-servers.png}             |  Bin
 .../assets/web-console-11-query-sql.png}           |  Bin
 .../assets/web-console-12-query-rune.png}          |  Bin
 .../assets/web-console-13-lookups.png}             |  Bin
 .../comparisons/druid-vs-cassandra.html            |    8 +
 .../comparisons/druid-vs-elasticsearch.html        |   91 +
 .../comparisons/druid-vs-hadoop.html               |    8 +
 .../comparisons/druid-vs-impala-or-shark.html      |    8 +
 .../comparisons/druid-vs-key-value.html            |   99 ++
 .../comparisons/druid-vs-kudu.html                 |   93 +
 .../comparisons/druid-vs-redshift.html             |  102 ++
 .../comparisons/druid-vs-spark.html                |   93 +
 .../comparisons/druid-vs-sql-on-hadoop.html        |  124 ++
 .../comparisons/druid-vs-vertica.html              |    8 +
 docs/0.16.0-incubating/configuration/auth.html     |    8 +
 docs/0.16.0-incubating/configuration/broker.html   |    8 +
 docs/0.16.0-incubating/configuration/caching.html  |    8 +
 .../configuration/coordinator.html                 |    8 +
 docs/0.16.0-incubating/configuration/hadoop.html   |    8 +
 .../configuration/historical.html                  |    8 +
 docs/0.16.0-incubating/configuration/index.html    | 1801 ++++++++++++++++++++
 .../configuration/indexing-service.html            |    8 +
 docs/0.16.0-incubating/configuration/logging.html  |  133 ++
 .../configuration/production-cluster.html          |    8 +
 docs/0.16.0-incubating/configuration/realtime.md   |    8 +
 .../configuration/simple-cluster.html              |    8 +
 .../0.16.0-incubating/configuration/zookeeper.html |    8 +
 .../dependencies/cassandra-deep-storage.md         |    8 +
 .../dependencies/deep-storage.html                 |  101 ++
 .../dependencies/metadata-storage.html             |  163 ++
 docs/0.16.0-incubating/dependencies/zookeeper.html |  110 ++
 docs/0.16.0-incubating/design/architecture.html    |  252 +++
 docs/0.16.0-incubating/design/auth.html            |  174 ++
 docs/0.16.0-incubating/design/broker.html          |   96 ++
 .../design/concepts-and-terminology.html           |    8 +
 docs/0.16.0-incubating/design/coordinator.html     |  148 ++
 docs/0.16.0-incubating/design/design.html          |    8 +
 docs/0.16.0-incubating/design/historical.html      |   97 ++
 docs/0.16.0-incubating/design/index.html           |  153 ++
 docs/0.16.0-incubating/design/indexer.html         |  191 +++
 .../0.16.0-incubating/design/indexing-service.html |   94 +
 docs/0.16.0-incubating/design/middlemanager.html   |   90 +
 docs/0.16.0-incubating/design/overlord.html        |  102 ++
 docs/0.16.0-incubating/design/peons.html           |   92 +
 docs/0.16.0-incubating/design/plumber.md           |    8 +
 docs/0.16.0-incubating/design/processes.html       |  154 ++
 docs/0.16.0-incubating/design/realtime.md          |    8 +
 docs/0.16.0-incubating/design/router.html          |  241 +++
 docs/0.16.0-incubating/design/segments.html        |  260 +++
 .../development/approximate-histograms.html        |    8 +
 docs/0.16.0-incubating/development/build.html      |  108 ++
 .../development/community-extensions/azure.html    |    8 +
 .../community-extensions/cassandra.html            |    8 +
 .../community-extensions/cloudfiles.html           |    8 +
 .../development/community-extensions/graphite.html |    8 +
 .../community-extensions/kafka-simple.html         |    8 +
 .../development/community-extensions/rabbitmq.html |    8 +
 .../development/datasketches-aggregators.html      |    8 +
 .../development/experimental.html                  |   91 +
 .../extensions-contrib/ambari-metrics-emitter.html |  141 ++
 .../development/extensions-contrib/azure.html      |  147 ++
 .../development/extensions-contrib/cassandra.html  |   84 +
 .../development/extensions-contrib/cloudfiles.html |  151 ++
 .../extensions-contrib/distinctcount.html          |  143 ++
 .../development/extensions-contrib/google.html     |  142 ++
 .../development/extensions-contrib/graphite.html   |  151 ++
 .../development/extensions-contrib/influx.html     |  113 ++
 .../extensions-contrib/influxdb-emitter.html       |  120 ++
 .../extensions-contrib/kafka-emitter.html          |  106 ++
 .../development/extensions-contrib/kafka-simple.md |    8 +
 .../extensions-contrib/materialized-view.html      |  189 ++
 .../extensions-contrib/momentsketch-quantiles.html |  163 ++
 .../extensions-contrib/moving-average-query.html   |  369 ++++
 .../extensions-contrib/opentsdb-emitter.html       |  111 ++
 .../development/extensions-contrib/orc.html        |    8 +
 .../development/extensions-contrib/parquet.html    |    8 +
 .../development/extensions-contrib/rabbitmq.md     |    8 +
 .../extensions-contrib/redis-cache.html            |  112 ++
 .../development/extensions-contrib/rocketmq.md     |    8 +
 .../development/extensions-contrib/scan-query.html |    8 +
 .../development/extensions-contrib/sqlserver.html  |  113 ++
 .../development/extensions-contrib/statsd.html     |  120 ++
 .../tdigestsketch-quantiles.html                   |  193 +++
 .../development/extensions-contrib/thrift.html     |  172 ++
 .../extensions-contrib/time-min-max.html           |  143 ++
 .../extensions-core/approximate-histograms.html    |  306 ++++
 .../development/extensions-core/avro.html          |  276 +++
 .../development/extensions-core/bloom-filter.html  |  204 +++
 .../extensions-core/caffeine-cache.html            |    8 +
 .../extensions-core/datasketches-aggregators.html  |    8 +
 .../extensions-core/datasketches-extension.html    |   91 +
 .../extensions-core/datasketches-hll.html          |  144 ++
 .../extensions-core/datasketches-quantiles.html    |  164 ++
 .../extensions-core/datasketches-theta.html        |  304 ++++
 .../extensions-core/datasketches-tuple.html        |  191 +++
 .../extensions-core/druid-basic-security.html      |  450 +++++
 .../extensions-core/druid-kerberos.html            |  169 ++
 .../development/extensions-core/druid-lookups.html |  192 +++
 .../development/extensions-core/examples.html      |   99 ++
 .../development/extensions-core/hdfs.html          |  110 ++
 .../extensions-core/kafka-eight-firehose.md        |    8 +
 .../kafka-extraction-namespace.html                |  114 ++
 .../extensions-core/kafka-ingestion.html           |  427 +++++
 .../extensions-core/kinesis-ingestion.html         |  464 +++++
 .../extensions-core/lookups-cached-global.html     |  390 +++++
 .../development/extensions-core/mysql.html         |  216 +++
 .../extensions-core/namespaced-lookup.html         |    8 +
 .../development/extensions-core/orc.html           |  345 ++++
 .../development/extensions-core/parquet.html       |  266 +++
 .../development/extensions-core/postgresql.html    |  197 +++
 .../development/extensions-core/protobuf.html      |  257 +++
 .../development/extensions-core/s3.html            |  176 ++
 .../extensions-core/simple-client-sslcontext.html  |  112 ++
 .../development/extensions-core/stats.html         |  201 +++
 .../development/extensions-core/test-stats.html    |  155 ++
 docs/0.16.0-incubating/development/extensions.html |  199 +++
 docs/0.16.0-incubating/development/geo.html        |  159 ++
 docs/0.16.0-incubating/development/indexer.md      |    8 +
 .../integrating-druid-with-other-technologies.html |   88 +
 docs/0.16.0-incubating/development/javascript.html |  116 ++
 .../kafka-simple-consumer-firehose.html            |    8 +
 docs/0.16.0-incubating/development/libraries.html  |    8 +
 docs/0.16.0-incubating/development/modules.html    |  265 +++
 docs/0.16.0-incubating/development/overview.html   |  110 ++
 docs/0.16.0-incubating/development/router.md       |    8 +
 .../development/select-query.html                  |    8 +
 docs/0.16.0-incubating/development/versioning.html |   93 +
 docs/0.16.0-incubating/index.html                  |    8 +
 .../0.16.0-incubating/ingestion/batch-ingestion.md |    8 +
 .../ingestion/command-line-hadoop-indexer.md       |    8 +
 docs/0.16.0-incubating/ingestion/compaction.md     |    8 +
 docs/0.16.0-incubating/ingestion/data-formats.html |  297 ++++
 .../ingestion/data-management.html                 |  244 +++
 docs/0.16.0-incubating/ingestion/delete-data.md    |    8 +
 docs/0.16.0-incubating/ingestion/faq.html          |  126 ++
 docs/0.16.0-incubating/ingestion/firehose.md       |    8 +
 docs/0.16.0-incubating/ingestion/flatten-json.md   |    8 +
 .../ingestion/hadoop-vs-native-batch.md            |    8 +
 docs/0.16.0-incubating/ingestion/hadoop.html       |  529 ++++++
 docs/0.16.0-incubating/ingestion/index.html        |  740 ++++++++
 docs/0.16.0-incubating/ingestion/ingestion-spec.md |    8 +
 docs/0.16.0-incubating/ingestion/ingestion.html    |    8 +
 .../ingestion/locking-and-priority.md              |    8 +
 docs/0.16.0-incubating/ingestion/misc-tasks.md     |    8 +
 docs/0.16.0-incubating/ingestion/native-batch.html |  907 ++++++++++
 docs/0.16.0-incubating/ingestion/native_tasks.html |    8 +
 docs/0.16.0-incubating/ingestion/native_tasks.md   |    8 +
 docs/0.16.0-incubating/ingestion/overview.html     |    8 +
 .../ingestion/realtime-ingestion.html              |    8 +
 docs/0.16.0-incubating/ingestion/reports.md        |    8 +
 docs/0.16.0-incubating/ingestion/schema-changes.md |    8 +
 .../0.16.0-incubating/ingestion/schema-design.html |  267 +++
 .../ingestion/standalone-realtime.html             |   94 +
 .../ingestion/stream-ingestion.md                  |    8 +
 docs/0.16.0-incubating/ingestion/stream-pull.md    |    8 +
 docs/0.16.0-incubating/ingestion/stream-push.md    |    8 +
 docs/0.16.0-incubating/ingestion/tasks.html        |  364 ++++
 docs/0.16.0-incubating/ingestion/tranquility.html  |   89 +
 docs/0.16.0-incubating/ingestion/transform-spec.md |    8 +
 .../ingestion/update-existing-data.md              |    8 +
 docs/0.16.0-incubating/misc/cluster-setup.html     |    8 +
 docs/0.16.0-incubating/misc/evaluate.html          |    8 +
 docs/0.16.0-incubating/misc/math-expr.html         |  277 +++
 docs/0.16.0-incubating/misc/papers-and-talks.html  |   94 +
 docs/0.16.0-incubating/misc/tasks.html             |    8 +
 docs/0.16.0-incubating/operations/alerts.html      |   91 +
 .../operations/api-reference.html                  |  753 ++++++++
 .../operations/basic-cluster-tuning.html           |  290 ++++
 .../operations/deep-storage-migration.html         |  106 ++
 .../operations/druid-console.html                  |  130 ++
 .../0.16.0-incubating/operations/dump-segment.html |  156 ++
 .../operations/export-metadata.html                |  204 +++
 .../operations/getting-started.html                |   92 +
 .../operations/high-availability.html              |   95 ++
 .../operations/http-compression.html               |   90 +
 .../operations/including-extensions.md             |    8 +
 .../operations/insert-segment-to-db.html           |  101 ++
 .../operations/management-uis.html                 |  110 ++
 .../operations/metadata-migration.html             |  121 ++
 docs/0.16.0-incubating/operations/metrics.html     |  362 ++++
 .../0.16.0-incubating/operations/multitenancy.html |    8 +
 .../0.16.0-incubating/operations/other-hadoop.html |  285 ++++
 .../operations/password-provider.html              |  104 ++
 .../operations/performance-faq.html                |    8 +
 docs/0.16.0-incubating/operations/pull-deps.html   |  152 ++
 .../operations/recommendations.html                |  130 ++
 .../operations/reset-cluster.html                  |  118 ++
 .../operations/rolling-updates.html                |  134 ++
 .../operations/rule-configuration.html             |  239 +++
 .../operations/segment-optimization.html           |  150 ++
 .../operations/single-server.html                  |  114 ++
 docs/0.16.0-incubating/operations/tls-support.html |  165 ++
 .../operations/use_sbt_to_build_fat_jar.html       |  181 ++
 docs/0.16.0-incubating/querying/aggregations.html  |  295 ++++
 docs/0.16.0-incubating/querying/caching.html       |  122 ++
 docs/0.16.0-incubating/querying/datasource.html    |  107 ++
 .../querying/datasourcemetadataquery.html          |  109 ++
 .../0.16.0-incubating/querying/dimensionspecs.html |  449 +++++
 docs/0.16.0-incubating/querying/filters.html       |  471 +++++
 docs/0.16.0-incubating/querying/granularities.html |  414 +++++
 docs/0.16.0-incubating/querying/groupbyquery.html  |  471 +++++
 docs/0.16.0-incubating/querying/having.html        |  262 +++
 docs/0.16.0-incubating/querying/hll-old.html       |  166 ++
 docs/0.16.0-incubating/querying/joins.html         |  107 ++
 docs/0.16.0-incubating/querying/limitspec.html     |   99 ++
 docs/0.16.0-incubating/querying/lookups.html       |  437 +++++
 .../querying/multi-value-dimensions.html           |  362 ++++
 docs/0.16.0-incubating/querying/multitenancy.html  |  140 ++
 docs/0.16.0-incubating/querying/optimizations.html |    8 +
 .../querying/post-aggregations.html                |  227 +++
 docs/0.16.0-incubating/querying/query-context.html |  152 ++
 docs/0.16.0-incubating/querying/querying.html      |  168 ++
 docs/0.16.0-incubating/querying/scan-query.html    |  268 +++
 docs/0.16.0-incubating/querying/searchquery.html   |  193 +++
 .../querying/searchqueryspec.html                  |  111 ++
 .../querying/segmentmetadataquery.html             |  210 +++
 docs/0.16.0-incubating/querying/select-query.html  |  286 ++++
 .../0.16.0-incubating/querying/sorting-orders.html |  100 ++
 docs/0.16.0-incubating/querying/sql.html           |  811 +++++++++
 .../querying/timeboundaryquery.html                |  110 ++
 .../querying/timeseriesquery.html                  |  201 +++
 .../0.16.0-incubating/querying/topnmetricspec.html |  136 ++
 docs/0.16.0-incubating/querying/topnquery.html     |  290 ++++
 .../querying/virtual-columns.html                  |  127 ++
 .../tutorials/booting-a-production-cluster.html    |    8 +
 docs/0.16.0-incubating/tutorials/cluster.html      |  410 +++++
 docs/0.16.0-incubating/tutorials/examples.html     |    8 +
 docs/0.16.0-incubating/tutorials/firewall.html     |    8 +
 docs/0.16.0-incubating/tutorials/index.html        |  213 +++
 .../tutorials/ingestion-streams.html               |    8 +
 docs/0.16.0-incubating/tutorials/ingestion.html    |    8 +
 docs/0.16.0-incubating/tutorials/quickstart.html   |    8 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |    8 +
 .../tutorials/tutorial-all-about-queries.html      |    8 +
 .../tutorials/tutorial-batch-hadoop.html           |  236 +++
 .../tutorials/tutorial-batch.html                  |  245 +++
 .../tutorials/tutorial-compaction.html             |  173 ++
 .../tutorials/tutorial-delete-data.html            |  192 +++
 .../tutorials/tutorial-ingestion-spec.html         |  601 +++++++
 .../tutorials/tutorial-kafka.html                  |  193 +++
 .../tutorials/tutorial-kerberos-hadoop.html        |  152 ++
 .../tutorials/tutorial-loading-batch-data.html     |    8 +
 .../tutorials/tutorial-loading-streaming-data.html |    8 +
 .../tutorials/tutorial-query.html                  |  283 +++
 .../tutorials/tutorial-retention.html              |  130 ++
 .../tutorials/tutorial-rollup.html                 |  211 +++
 .../tutorials/tutorial-the-druid-cluster.html      |    8 +
 .../tutorials/tutorial-tranquility.md              |    8 +
 .../tutorials/tutorial-transform-spec.html         |  194 +++
 .../tutorials/tutorial-update-data.html            |  179 ++
 docs/latest/About-Experimental-Features.html       |   12 +-
 docs/latest/Aggregations.html                      |   12 +-
 docs/latest/ApproxHisto.html                       |   12 +-
 docs/latest/Batch-ingestion.html                   |   12 +-
 docs/latest/Booting-a-production-cluster.html      |   12 +-
 docs/latest/Broker-Config.html                     |   12 +-
 docs/latest/Broker.html                            |   12 +-
 docs/latest/Build-from-source.html                 |   12 +-
 docs/latest/Cassandra-Deep-Storage.html            |   12 +-
 docs/latest/Cluster-setup.html                     |   12 +-
 docs/latest/Compute.html                           |   12 +-
 docs/latest/Concepts-and-Terminology.html          |   12 +-
 docs/latest/Configuration.html                     |   12 +-
 docs/latest/Contribute.html                        |   12 +-
 docs/latest/Coordinator-Config.html                |   12 +-
 docs/latest/Coordinator.html                       |   12 +-
 docs/latest/DataSource.html                        |   12 +-
 docs/latest/DataSourceMetadataQuery.html           |   12 +-
 docs/latest/Data_formats.html                      |   12 +-
 docs/latest/Deep-Storage.html                      |   12 +-
 docs/latest/Design.html                            |   12 +-
 docs/latest/DimensionSpecs.html                    |   12 +-
 docs/latest/Download.html                          |   12 +-
 docs/latest/Druid-Personal-Demo-Cluster.html       |   12 +-
 docs/latest/Druid-vs-Cassandra.html                |   12 +-
 docs/latest/Druid-vs-Elasticsearch.html            |   12 +-
 docs/latest/Druid-vs-Hadoop.html                   |   12 +-
 docs/latest/Druid-vs-Impala-or-Shark.html          |   12 +-
 docs/latest/Druid-vs-Redshift.html                 |   12 +-
 docs/latest/Druid-vs-Spark.html                    |   12 +-
 docs/latest/Druid-vs-Vertica.html                  |   12 +-
 docs/latest/Evaluate.html                          |   12 +-
 docs/latest/Examples.html                          |   12 +-
 docs/latest/Filters.html                           |   12 +-
 docs/latest/Firehose.html                          |   12 +-
 docs/latest/GeographicQueries.html                 |   12 +-
 docs/latest/Granularities.html                     |   12 +-
 docs/latest/GroupByQuery.html                      |   12 +-
 docs/latest/Hadoop-Configuration.html              |   12 +-
 docs/latest/Having.html                            |   12 +-
 docs/latest/Historical-Config.html                 |   12 +-
 docs/latest/Historical.html                        |   12 +-
 docs/latest/Home.html                              |   12 +-
 docs/latest/Including-Extensions.html              |   12 +-
 docs/latest/Indexing-Service-Config.html           |   12 +-
 docs/latest/Indexing-Service.html                  |   12 +-
 docs/latest/Ingestion-FAQ.html                     |   12 +-
 docs/latest/Ingestion-overview.html                |   12 +-
 docs/latest/Ingestion.html                         |   12 +-
 .../Integrating-Druid-With-Other-Technologies.html |   12 +-
 docs/latest/Kafka-Eight.html                       |   12 +-
 docs/latest/Libraries.html                         |   12 +-
 docs/latest/LimitSpec.html                         |   12 +-
 docs/latest/Loading-Your-Data.html                 |   12 +-
 docs/latest/Logging.html                           |   12 +-
 docs/latest/Master.html                            |   12 +-
 docs/latest/Metadata-storage.html                  |   12 +-
 docs/latest/Metrics.html                           |   12 +-
 docs/latest/Middlemanager.html                     |   12 +-
 docs/latest/Modules.html                           |   12 +-
 docs/latest/MySQL.html                             |   12 +-
 docs/latest/OrderBy.html                           |   12 +-
 docs/latest/Other-Hadoop.html                      |   12 +-
 docs/latest/Papers-and-talks.html                  |   12 +-
 docs/latest/Peons.html                             |   12 +-
 docs/latest/Performance-FAQ.html                   |   12 +-
 docs/latest/Plumber.html                           |   12 +-
 docs/latest/Post-aggregations.html                 |   12 +-
 docs/latest/Production-Cluster-Configuration.html  |   12 +-
 docs/latest/Query-Context.html                     |   12 +-
 docs/latest/Querying-your-data.html                |   12 +-
 docs/latest/Querying.html                          |   12 +-
 docs/latest/Realtime-Config.html                   |   12 +-
 docs/latest/Realtime-ingestion.html                |   12 +-
 docs/latest/Realtime.html                          |   12 +-
 docs/latest/Recommendations.html                   |   12 +-
 docs/latest/Rolling-Updates.html                   |   12 +-
 docs/latest/Router.html                            |   12 +-
 docs/latest/Rule-Configuration.html                |   12 +-
 docs/latest/SearchQuery.html                       |   12 +-
 docs/latest/SearchQuerySpec.html                   |   12 +-
 docs/latest/SegmentMetadataQuery.html              |   12 +-
 docs/latest/Segments.html                          |   12 +-
 docs/latest/SelectQuery.html                       |   12 +-
 docs/latest/Simple-Cluster-Configuration.html      |   12 +-
 docs/latest/Spatial-Filters.html                   |   12 +-
 docs/latest/Spatial-Indexing.html                  |   12 +-
 docs/latest/Stand-Alone-With-Riak-CS.html          |   12 +-
 docs/latest/Support.html                           |   12 +-
 docs/latest/Tasks.html                             |   12 +-
 docs/latest/Thanks.html                            |   12 +-
 docs/latest/TimeBoundaryQuery.html                 |   12 +-
 docs/latest/TimeseriesQuery.html                   |   12 +-
 docs/latest/TopNMetricSpec.html                    |   12 +-
 docs/latest/TopNQuery.html                         |   12 +-
 docs/latest/Tutorial-A-First-Look-at-Druid.html    |   12 +-
 docs/latest/Tutorial-All-About-Queries.html        |   12 +-
 docs/latest/Tutorial-Loading-Batch-Data.html       |   12 +-
 docs/latest/Tutorial-Loading-Streaming-Data.html   |   12 +-
 docs/latest/Tutorial-The-Druid-Cluster.html        |   12 +-
 docs/latest/Tutorial:-A-First-Look-at-Druid.html   |   12 +-
 docs/latest/Tutorial:-All-About-Queries.html       |   12 +-
 docs/latest/Tutorial:-Loading-Batch-Data.html      |   12 +-
 docs/latest/Tutorial:-Loading-Streaming-Data.html  |   12 +-
 .../latest/Tutorial:-Loading-Your-Data-Part-1.html |   12 +-
 .../latest/Tutorial:-Loading-Your-Data-Part-2.html |   12 +-
 docs/latest/Tutorial:-The-Druid-Cluster.html       |   12 +-
 docs/latest/Tutorial:-Webstream.html               |   12 +-
 docs/latest/Tutorials.html                         |   12 +-
 docs/latest/Twitter-Tutorial.html                  |   12 +-
 docs/latest/Versioning.html                        |   12 +-
 docs/latest/ZooKeeper.html                         |   12 +-
 docs/latest/alerts.html                            |   12 +-
 docs/latest/assets/druid-architecture.png          |  Bin 0 -> 134117 bytes
 docs/latest/assets/druid-column-types.png          |  Bin 0 -> 93363 bytes
 docs/latest/assets/druid-dataflow-2x.png           |  Bin 0 -> 130160 bytes
 docs/latest/assets/druid-dataflow-3.png            |  Bin 0 -> 71425 bytes
 docs/latest/assets/druid-manage-1.png              |  Bin 0 -> 80415 bytes
 docs/latest/assets/druid-production.png            |  Bin 0 -> 40124 bytes
 docs/latest/assets/druid-timeline.png              |  Bin 0 -> 24160 bytes
 docs/latest/assets/indexing_service.png            |  Bin 0 -> 22490 bytes
 docs/latest/assets/segmentPropagation.png          |  Bin 0 -> 30569 bytes
 .../assets/tutorial-batch-data-loader-01.png       |  Bin 0 -> 56488 bytes
 .../assets/tutorial-batch-data-loader-02.png       |  Bin 0 -> 360295 bytes
 .../assets/tutorial-batch-data-loader-03.png       |  Bin 0 -> 137443 bytes
 .../assets/tutorial-batch-data-loader-04.png       |  Bin 0 -> 167252 bytes
 .../assets/tutorial-batch-data-loader-05.png       |  Bin 0 -> 162488 bytes
 .../assets/tutorial-batch-data-loader-06.png       |  Bin 0 -> 64301 bytes
 .../assets/tutorial-batch-data-loader-07.png       |  Bin 0 -> 46529 bytes
 .../assets/tutorial-batch-data-loader-08.png       |  Bin 0 -> 103928 bytes
 .../assets/tutorial-batch-data-loader-09.png       |  Bin 0 -> 63348 bytes
 .../assets/tutorial-batch-data-loader-10.png       |  Bin 0 -> 44516 bytes
 .../assets/tutorial-batch-data-loader-11.png       |  Bin 0 -> 83288 bytes
 .../assets/tutorial-batch-submit-task-01.png       |  Bin 0 -> 69356 bytes
 .../assets/tutorial-batch-submit-task-02.png       |  Bin 0 -> 86076 bytes
 docs/latest/assets/tutorial-compaction-01.png      |  Bin 0 -> 35710 bytes
 docs/latest/assets/tutorial-compaction-02.png      |  Bin 0 -> 166571 bytes
 docs/latest/assets/tutorial-compaction-03.png      |  Bin 0 -> 26755 bytes
 docs/latest/assets/tutorial-compaction-04.png      |  Bin 0 -> 184365 bytes
 docs/latest/assets/tutorial-compaction-05.png      |  Bin 0 -> 26588 bytes
 docs/latest/assets/tutorial-compaction-06.png      |  Bin 0 -> 206717 bytes
 docs/latest/assets/tutorial-compaction-07.png      |  Bin 0 -> 26683 bytes
 docs/latest/assets/tutorial-compaction-08.png      |  Bin 0 -> 28751 bytes
 docs/latest/assets/tutorial-deletion-01.png        |  Bin 0 -> 43586 bytes
 docs/latest/assets/tutorial-deletion-02.png        |  Bin 0 -> 439602 bytes
 docs/latest/assets/tutorial-deletion-03.png        |  Bin 0 -> 437304 bytes
 docs/latest/assets/tutorial-kafka-01.png           |  Bin 0 -> 85477 bytes
 docs/latest/assets/tutorial-kafka-02.png           |  Bin 0 -> 75709 bytes
 docs/latest/assets/tutorial-query-01.png           |  Bin 0 -> 100930 bytes
 docs/latest/assets/tutorial-query-02.png           |  Bin 0 -> 83369 bytes
 docs/latest/assets/tutorial-query-03.png           |  Bin 0 -> 65038 bytes
 docs/latest/assets/tutorial-query-04.png           |  Bin 0 -> 66423 bytes
 docs/latest/assets/tutorial-query-05.png           |  Bin 0 -> 51855 bytes
 docs/latest/assets/tutorial-query-06.png           |  Bin 0 -> 82211 bytes
 docs/latest/assets/tutorial-query-07.png           |  Bin 0 -> 78633 bytes
 docs/latest/assets/tutorial-quickstart-01.png      |  Bin 0 -> 29834 bytes
 docs/latest/assets/tutorial-retention-00.png       |  Bin 0 -> 77704 bytes
 docs/latest/assets/tutorial-retention-01.png       |  Bin 0 -> 35171 bytes
 docs/latest/assets/tutorial-retention-02.png       |  Bin 0 -> 240310 bytes
 docs/latest/assets/tutorial-retention-03.png       |  Bin 0 -> 30029 bytes
 docs/latest/assets/tutorial-retention-04.png       |  Bin 0 -> 44617 bytes
 docs/latest/assets/tutorial-retention-05.png       |  Bin 0 -> 38992 bytes
 docs/latest/assets/tutorial-retention-06.png       |  Bin 0 -> 137570 bytes
 .../web-console-01-home-view.png}                  |  Bin
 .../web-console-02-data-loader-1.png}              |  Bin
 .../web-console-03-data-loader-2.png}              |  Bin
 .../web-console-04-datasources.png}                |  Bin
 .../web-console-05-retention.png}                  |  Bin
 .../web-console-06-segments.png}                   |  Bin
 .../web-console-07-supervisors.png}                |  Bin
 .../web-console-08-tasks.png}                      |  Bin
 .../web-console-09-task-status.png}                |  Bin
 .../web-console-10-servers.png}                    |  Bin
 .../web-console-11-query-sql.png}                  |  Bin
 .../web-console-12-query-rune.png}                 |  Bin
 .../web-console-13-lookups.png}                    |  Bin
 docs/latest/comparisons/druid-vs-cassandra.html    |   12 +-
 .../latest/comparisons/druid-vs-elasticsearch.html |   91 +
 docs/latest/comparisons/druid-vs-elasticsearch.md  |   40 -
 docs/latest/comparisons/druid-vs-hadoop.html       |   12 +-
 .../comparisons/druid-vs-impala-or-shark.html      |   12 +-
 docs/latest/comparisons/druid-vs-key-value.html    |   99 ++
 docs/latest/comparisons/druid-vs-key-value.md      |   47 -
 docs/latest/comparisons/druid-vs-kudu.html         |   93 +
 docs/latest/comparisons/druid-vs-kudu.md           |   40 -
 docs/latest/comparisons/druid-vs-redshift.html     |  102 ++
 docs/latest/comparisons/druid-vs-redshift.md       |   63 -
 docs/latest/comparisons/druid-vs-spark.html        |   93 +
 docs/latest/comparisons/druid-vs-spark.md          |   43 -
 .../latest/comparisons/druid-vs-sql-on-hadoop.html |  124 ++
 docs/latest/comparisons/druid-vs-sql-on-hadoop.md  |   83 -
 docs/latest/comparisons/druid-vs-vertica.html      |   12 +-
 docs/latest/configuration/auth.html                |   12 +-
 docs/latest/configuration/broker.html              |   12 +-
 docs/latest/configuration/caching.html             |   12 +-
 docs/latest/configuration/coordinator.html         |   12 +-
 docs/latest/configuration/hadoop.html              |   12 +-
 docs/latest/configuration/historical.html          |   12 +-
 docs/latest/configuration/index.html               | 1801 ++++++++++++++++++++
 docs/latest/configuration/index.md                 | 1667 ------------------
 docs/latest/configuration/indexing-service.html    |   12 +-
 docs/latest/configuration/logging.html             |  133 ++
 docs/latest/configuration/logging.md               |   88 -
 docs/latest/configuration/production-cluster.html  |   12 +-
 docs/latest/configuration/realtime.md              |  106 +-
 docs/latest/configuration/simple-cluster.html      |   12 +-
 docs/latest/configuration/zookeeper.html           |   12 +-
 docs/latest/dependencies/cassandra-deep-storage.md |   70 +-
 docs/latest/dependencies/deep-storage.html         |  101 ++
 docs/latest/dependencies/deep-storage.md           |   54 -
 docs/latest/dependencies/metadata-storage.html     |  163 ++
 docs/latest/dependencies/metadata-storage.md       |  144 --
 docs/latest/dependencies/zookeeper.html            |  110 ++
 docs/latest/dependencies/zookeeper.md              |   77 -
 docs/latest/design/architecture.html               |  252 +++
 docs/latest/design/auth.html                       |  174 ++
 docs/latest/design/auth.md                         |  168 --
 docs/latest/design/broker.html                     |   96 ++
 docs/latest/design/broker.md                       |   55 -
 docs/latest/design/concepts-and-terminology.html   |   12 +-
 docs/latest/design/coordinator.html                |  148 ++
 docs/latest/design/coordinator.md                  |  131 --
 docs/latest/design/design.html                     |   12 +-
 docs/latest/design/historical.html                 |   97 ++
 docs/latest/design/historical.md                   |   59 -
 docs/latest/design/index.html                      |  153 ++
 docs/latest/design/index.md                        |  212 ---
 docs/latest/design/indexer.html                    |  191 +++
 docs/latest/design/indexing-service.html           |   94 +
 docs/latest/design/indexing-service.md             |   65 -
 docs/latest/design/middlemanager.html              |   90 +
 docs/latest/design/middlemanager.md                |   44 -
 docs/latest/design/overlord.html                   |  102 ++
 docs/latest/design/overlord.md                     |   63 -
 docs/latest/design/peons.html                      |   92 +
 docs/latest/design/peons.md                        |   47 -
 docs/latest/design/plumber.md                      |   46 +-
 docs/latest/design/processes.html                  |  154 ++
 docs/latest/design/processes.md                    |  131 --
 docs/latest/design/realtime.md                     |   88 +-
 docs/latest/design/router.html                     |  241 +++
 docs/latest/design/segments.html                   |  260 +++
 docs/latest/design/segments.md                     |  205 ---
 .../latest/development/approximate-histograms.html |   12 +-
 docs/latest/development/build.html                 |  108 ++
 docs/latest/development/build.md                   |   69 -
 .../development/community-extensions/azure.html    |   12 +-
 .../community-extensions/cassandra.html            |   12 +-
 .../community-extensions/cloudfiles.html           |   12 +-
 .../development/community-extensions/graphite.html |   12 +-
 .../community-extensions/kafka-simple.html         |   12 +-
 .../development/community-extensions/rabbitmq.html |   12 +-
 .../development/datasketches-aggregators.html      |   12 +-
 docs/latest/development/experimental.html          |   91 +
 docs/latest/development/experimental.md            |   38 -
 .../extensions-contrib/ambari-metrics-emitter.html |  141 ++
 .../extensions-contrib/ambari-metrics-emitter.md   |  100 --
 .../development/extensions-contrib/azure.html      |  147 ++
 .../latest/development/extensions-contrib/azure.md |   95 --
 .../development/extensions-contrib/cassandra.html  |   84 +
 .../development/extensions-contrib/cassandra.md    |   31 -
 .../development/extensions-contrib/cloudfiles.html |  151 ++
 .../development/extensions-contrib/cloudfiles.md   |   97 --
 .../extensions-contrib/distinctcount.html          |  143 ++
 .../extensions-contrib/distinctcount.md            |   99 --
 .../development/extensions-contrib/google.html     |  142 ++
 .../development/extensions-contrib/google.md       |   89 -
 .../development/extensions-contrib/graphite.html   |  151 ++
 .../development/extensions-contrib/graphite.md     |  118 --
 .../development/extensions-contrib/influx.html     |  113 ++
 .../development/extensions-contrib/influx.md       |   68 -
 .../extensions-contrib/influxdb-emitter.html       |  120 ++
 .../extensions-contrib/kafka-emitter.html          |  106 ++
 .../extensions-contrib/kafka-emitter.md            |   55 -
 .../development/extensions-contrib/kafka-simple.md |   64 +-
 .../extensions-contrib/materialized-view.html      |  189 ++
 .../extensions-contrib/materialized-view.md        |  137 --
 .../extensions-contrib/momentsketch-quantiles.html |  163 ++
 .../extensions-contrib/momentsketch-quantiles.md   |  126 --
 .../extensions-contrib/moving-average-query.html   |  369 ++++
 .../extensions-contrib/moving-average-query.md     |  346 ----
 .../extensions-contrib/opentsdb-emitter.html       |  111 ++
 .../extensions-contrib/opentsdb-emitter.md         |   62 -
 .../latest/development/extensions-contrib/orc.html |   12 +-
 .../development/extensions-contrib/parquet.html    |   12 +-
 .../development/extensions-contrib/rabbitmq.md     |   89 +-
 .../extensions-contrib/redis-cache.html            |  112 ++
 .../development/extensions-contrib/redis-cache.md  |   58 -
 .../development/extensions-contrib/rocketmq.md     |   37 +-
 .../development/extensions-contrib/scan-query.html |   12 +-
 .../development/extensions-contrib/sqlserver.html  |  113 ++
 .../development/extensions-contrib/sqlserver.md    |   57 -
 .../development/extensions-contrib/statsd.html     |  120 ++
 .../development/extensions-contrib/statsd.md       |   70 -
 .../tdigestsketch-quantiles.html                   |  193 +++
 .../development/extensions-contrib/thrift.html     |  172 ++
 .../development/extensions-contrib/thrift.md       |  128 --
 .../extensions-contrib/time-min-max.html           |  143 ++
 .../development/extensions-contrib/time-min-max.md |  105 --
 .../extensions-core/approximate-histograms.html    |  306 ++++
 .../extensions-core/approximate-histograms.md      |  318 ----
 docs/latest/development/extensions-core/avro.html  |  276 +++
 docs/latest/development/extensions-core/avro.md    |  222 ---
 .../development/extensions-core/bloom-filter.html  |  204 +++
 .../development/extensions-core/bloom-filter.md    |  179 --
 .../extensions-core/caffeine-cache.html            |   12 +-
 .../extensions-core/datasketches-aggregators.html  |   12 +-
 .../extensions-core/datasketches-extension.html    |   91 +
 .../extensions-core/datasketches-extension.md      |   40 -
 .../extensions-core/datasketches-hll.html          |  144 ++
 .../extensions-core/datasketches-hll.md            |  102 --
 .../extensions-core/datasketches-quantiles.html    |  164 ++
 .../extensions-core/datasketches-quantiles.md      |  112 --
 .../extensions-core/datasketches-theta.html        |  304 ++++
 .../extensions-core/datasketches-theta.md          |  273 ---
 .../extensions-core/datasketches-tuple.html        |  191 +++
 .../extensions-core/datasketches-tuple.md          |  175 --
 .../extensions-core/druid-basic-security.html      |  450 +++++
 .../extensions-core/druid-basic-security.md        |  470 -----
 .../extensions-core/druid-kerberos.html            |  169 ++
 .../development/extensions-core/druid-kerberos.md  |  123 --
 .../development/extensions-core/druid-lookups.html |  192 +++
 .../development/extensions-core/druid-lookups.md   |  151 --
 .../development/extensions-core/examples.html      |   99 ++
 .../latest/development/extensions-core/examples.md |   45 -
 docs/latest/development/extensions-core/hdfs.html  |  110 ++
 docs/latest/development/extensions-core/hdfs.md    |   56 -
 .../extensions-core/kafka-eight-firehose.md        |   62 +-
 .../kafka-extraction-namespace.html                |  114 ++
 .../extensions-core/kafka-extraction-namespace.md  |   70 -
 .../extensions-core/kafka-ingestion.html           |  427 +++++
 .../development/extensions-core/kafka-ingestion.md |  346 ----
 .../extensions-core/kinesis-ingestion.html         |  464 +++++
 .../extensions-core/kinesis-ingestion.md           |  393 -----
 .../extensions-core/lookups-cached-global.html     |  390 +++++
 .../extensions-core/lookups-cached-global.md       |  379 ----
 docs/latest/development/extensions-core/mysql.html |  216 +++
 docs/latest/development/extensions-core/mysql.md   |  109 --
 .../extensions-core/namespaced-lookup.html         |   12 +-
 docs/latest/development/extensions-core/orc.html   |  345 ++++
 docs/latest/development/extensions-core/orc.md     |  315 ----
 .../development/extensions-core/parquet.html       |  266 +++
 docs/latest/development/extensions-core/parquet.md |  223 ---
 .../development/extensions-core/postgresql.html    |  197 +++
 .../development/extensions-core/postgresql.md      |   85 -
 .../development/extensions-core/protobuf.html      |  257 +++
 .../latest/development/extensions-core/protobuf.md |  223 ---
 docs/latest/development/extensions-core/s3.html    |  176 ++
 docs/latest/development/extensions-core/s3.md      |  104 --
 .../extensions-core/simple-client-sslcontext.html  |  112 ++
 .../extensions-core/simple-client-sslcontext.md    |   54 -
 docs/latest/development/extensions-core/stats.html |  201 +++
 docs/latest/development/extensions-core/stats.md   |  172 --
 .../development/extensions-core/test-stats.html    |  155 ++
 .../development/extensions-core/test-stats.md      |  118 --
 docs/latest/development/extensions.html            |  199 +++
 docs/latest/development/extensions.md              |  108 --
 docs/latest/development/geo.html                   |  159 ++
 docs/latest/development/geo.md                     |   93 -
 docs/latest/development/indexer.md                 |    8 +
 .../integrating-druid-with-other-technologies.html |   88 +
 .../integrating-druid-with-other-technologies.md   |   39 -
 docs/latest/development/javascript.html            |  116 ++
 docs/latest/development/javascript.md              |   75 -
 .../kafka-simple-consumer-firehose.html            |   12 +-
 docs/latest/development/libraries.html             |   12 +-
 docs/latest/development/modules.html               |  265 +++
 docs/latest/development/modules.md                 |  273 ---
 docs/latest/development/overview.html              |  110 ++
 docs/latest/development/overview.md                |   76 -
 docs/latest/development/router.md                  |  257 +--
 docs/latest/development/select-query.html          |   12 +-
 docs/latest/development/versioning.html            |   93 +
 docs/latest/development/versioning.md              |   47 -
 docs/latest/index.html                             |   12 +-
 docs/latest/ingestion/batch-ingestion.md           |   47 +-
 .../ingestion/command-line-hadoop-indexer.md       |  103 +-
 docs/latest/ingestion/compaction.md                |  110 +-
 docs/latest/ingestion/data-formats.html            |  297 ++++
 docs/latest/ingestion/data-formats.md              |  205 ---
 docs/latest/ingestion/data-management.html         |  244 +++
 docs/latest/ingestion/delete-data.md               |   58 +-
 docs/latest/ingestion/faq.html                     |  126 ++
 docs/latest/ingestion/faq.md                       |  106 --
 docs/latest/ingestion/firehose.md                  |  268 +--
 docs/latest/ingestion/flatten-json.md              |  188 +-
 docs/latest/ingestion/hadoop-vs-native-batch.md    |   51 +-
 docs/latest/ingestion/hadoop.html                  |  529 ++++++
 docs/latest/ingestion/hadoop.md                    |  363 ----
 docs/latest/ingestion/index.html                   |  740 ++++++++
 docs/latest/ingestion/index.md                     |  306 ----
 docs/latest/ingestion/ingestion-spec.md            |  340 +---
 docs/latest/ingestion/ingestion.html               |   12 +-
 docs/latest/ingestion/locking-and-priority.md      |   87 +-
 docs/latest/ingestion/misc-tasks.md                |  102 +-
 docs/latest/ingestion/native-batch.html            |  911 +++++++++-
 docs/latest/ingestion/native_tasks.html            |    8 +
 docs/latest/ingestion/native_tasks.md              |  625 +------
 docs/latest/ingestion/overview.html                |   12 +-
 docs/latest/ingestion/realtime-ingestion.html      |   12 +-
 docs/latest/ingestion/reports.md                   |  160 +-
 docs/latest/ingestion/schema-changes.md            |   90 +-
 docs/latest/ingestion/schema-design.html           |  267 +++
 docs/latest/ingestion/schema-design.md             |  338 ----
 docs/latest/ingestion/standalone-realtime.html     |   94 +
 docs/latest/ingestion/stream-ingestion.md          |   64 +-
 docs/latest/ingestion/stream-pull.md               |  384 +----
 docs/latest/ingestion/stream-push.md               |  194 +--
 docs/latest/ingestion/tasks.html                   |  364 ++++
 docs/latest/ingestion/tasks.md                     |   78 -
 docs/latest/ingestion/tranquility.html             |   89 +
 docs/latest/ingestion/transform-spec.md            |  112 +-
 docs/latest/ingestion/update-existing-data.md      |  170 +-
 docs/latest/misc/cluster-setup.html                |   12 +-
 docs/latest/misc/evaluate.html                     |   12 +-
 docs/latest/misc/math-expr.html                    |  277 +++
 docs/latest/misc/math-expr.md                      |  148 --
 docs/latest/misc/papers-and-talks.html             |   94 +
 docs/latest/misc/papers-and-talks.md               |   43 -
 docs/latest/misc/tasks.html                        |   12 +-
 docs/latest/operations/alerts.html                 |   91 +
 docs/latest/operations/alerts.md                   |   38 -
 docs/latest/operations/api-reference.html          |  753 ++++++++
 docs/latest/operations/api-reference.md            |  761 ---------
 docs/latest/operations/basic-cluster-tuning.html   |  290 ++++
 docs/latest/operations/basic-cluster-tuning.md     |  382 -----
 docs/latest/operations/deep-storage-migration.html |  106 ++
 docs/latest/operations/deep-storage-migration.md   |   66 -
 docs/latest/operations/druid-console.html          |  130 ++
 docs/latest/operations/druid-console.md            |  115 --
 docs/latest/operations/dump-segment.html           |  156 ++
 docs/latest/operations/dump-segment.md             |  116 --
 docs/latest/operations/export-metadata.html        |  204 +++
 docs/latest/operations/export-metadata.md          |  201 ---
 docs/latest/operations/getting-started.html        |   92 +
 docs/latest/operations/getting-started.md          |   49 -
 docs/latest/operations/high-availability.html      |   95 ++
 docs/latest/operations/high-availability.md        |   40 -
 docs/latest/operations/http-compression.html       |   90 +
 docs/latest/operations/http-compression.md         |   34 -
 docs/latest/operations/including-extensions.md     |   95 +-
 docs/latest/operations/insert-segment-to-db.html   |  101 ++
 docs/latest/operations/insert-segment-to-db.md     |   49 -
 docs/latest/operations/management-uis.html         |  110 ++
 docs/latest/operations/management-uis.md           |   80 -
 docs/latest/operations/metadata-migration.html     |  121 ++
 docs/latest/operations/metadata-migration.md       |   92 -
 docs/latest/operations/metrics.html                |  362 ++++
 docs/latest/operations/metrics.md                  |  279 ---
 docs/latest/operations/multitenancy.html           |   12 +-
 docs/latest/operations/other-hadoop.html           |  285 ++++
 docs/latest/operations/other-hadoop.md             |  300 ----
 docs/latest/operations/password-provider.html      |  104 ++
 docs/latest/operations/password-provider.md        |   55 -
 docs/latest/operations/performance-faq.html        |   12 +-
 docs/latest/operations/pull-deps.html              |  152 ++
 docs/latest/operations/pull-deps.md                |  151 --
 docs/latest/operations/recommendations.html        |  130 ++
 docs/latest/operations/recommendations.md          |   91 -
 docs/latest/operations/reset-cluster.html          |  118 ++
 docs/latest/operations/reset-cluster.md            |   76 -
 docs/latest/operations/rolling-updates.html        |  134 ++
 docs/latest/operations/rolling-updates.md          |  102 --
 docs/latest/operations/rule-configuration.html     |  239 +++
 docs/latest/operations/rule-configuration.md       |  240 ---
 docs/latest/operations/segment-optimization.html   |  150 ++
 docs/latest/operations/segment-optimization.md     |  100 --
 docs/latest/operations/single-server.html          |  114 ++
 docs/latest/operations/single-server.md            |   71 -
 docs/latest/operations/tls-support.html            |  165 ++
 docs/latest/operations/tls-support.md              |  106 --
 .../operations/use_sbt_to_build_fat_jar.html       |  181 ++
 docs/latest/operations/use_sbt_to_build_fat_jar.md |  128 --
 docs/latest/querying/aggregations.html             |  295 ++++
 docs/latest/querying/aggregations.md               |  368 ----
 docs/latest/querying/caching.html                  |  122 ++
 docs/latest/querying/caching.md                    |   64 -
 docs/latest/querying/datasource.html               |  107 ++
 docs/latest/querying/datasource.md                 |   65 -
 docs/latest/querying/datasourcemetadataquery.html  |  109 ++
 docs/latest/querying/datasourcemetadataquery.md    |   57 -
 docs/latest/querying/dimensionspecs.html           |  449 +++++
 docs/latest/querying/dimensionspecs.md             |  545 ------
 docs/latest/querying/filters.html                  |  471 +++++
 docs/latest/querying/filters.md                    |  527 ------
 docs/latest/querying/granularities.html            |  414 +++++
 docs/latest/querying/granularities.md              |  438 -----
 docs/latest/querying/groupbyquery.html             |  471 +++++
 docs/latest/querying/groupbyquery.md               |  446 -----
 docs/latest/querying/having.html                   |  262 +++
 docs/latest/querying/having.md                     |  261 ---
 docs/latest/querying/hll-old.html                  |  166 ++
 docs/latest/querying/hll-old.md                    |  142 --
 docs/latest/querying/joins.html                    |  107 ++
 docs/latest/querying/joins.md                      |   55 -
 docs/latest/querying/limitspec.html                |   99 ++
 docs/latest/querying/limitspec.md                  |   55 -
 docs/latest/querying/lookups.html                  |  437 +++++
 docs/latest/querying/lookups.md                    |  455 -----
 docs/latest/querying/multi-value-dimensions.html   |  362 ++++
 docs/latest/querying/multi-value-dimensions.md     |  340 ----
 docs/latest/querying/multitenancy.html             |  140 ++
 docs/latest/querying/multitenancy.md               |   99 --
 docs/latest/querying/optimizations.html            |   12 +-
 docs/latest/querying/post-aggregations.html        |  227 +++
 docs/latest/querying/post-aggregations.md          |  223 ---
 docs/latest/querying/query-context.html            |  152 ++
 docs/latest/querying/query-context.md              |   62 -
 docs/latest/querying/querying.html                 |  168 ++
 docs/latest/querying/querying.md                   |  133 --
 docs/latest/querying/scan-query.html               |  268 +++
 docs/latest/querying/scan-query.md                 |  226 ---
 docs/latest/querying/searchquery.html              |  193 +++
 docs/latest/querying/searchquery.md                |  141 --
 docs/latest/querying/searchqueryspec.html          |  111 ++
 docs/latest/querying/searchqueryspec.md            |   77 -
 docs/latest/querying/segmentmetadataquery.html     |  210 +++
 docs/latest/querying/segmentmetadataquery.md       |  188 --
 docs/latest/querying/select-query.html             |  286 ++++
 docs/latest/querying/select-query.md               |  260 ---
 docs/latest/querying/sorting-orders.html           |  100 ++
 docs/latest/querying/sorting-orders.md             |   54 -
 docs/latest/querying/sql.html                      |  811 +++++++++
 docs/latest/querying/sql.md                        |  747 --------
 docs/latest/querying/timeboundaryquery.html        |  110 ++
 docs/latest/querying/timeboundaryquery.md          |   58 -
 docs/latest/querying/timeseriesquery.html          |  201 +++
 docs/latest/querying/timeseriesquery.md            |  163 --
 docs/latest/querying/topnmetricspec.html           |  136 ++
 docs/latest/querying/topnmetricspec.md             |   87 -
 docs/latest/querying/topnquery.html                |  290 ++++
 docs/latest/querying/topnquery.md                  |  257 ---
 docs/latest/querying/virtual-columns.html          |  127 ++
 docs/latest/querying/virtual-columns.md            |   80 -
 docs/latest/toc.md                                 |  182 --
 .../tutorials/booting-a-production-cluster.html    |   12 +-
 docs/latest/tutorials/cluster.html                 |  410 +++++
 docs/latest/tutorials/cluster.md                   |  500 ------
 docs/latest/tutorials/examples.html                |   12 +-
 docs/latest/tutorials/firewall.html                |   12 +-
 .../img/tutorial-batch-data-loader-01.png          |  Bin 99355 -> 0 bytes
 .../img/tutorial-batch-data-loader-02.png          |  Bin 521148 -> 0 bytes
 .../img/tutorial-batch-data-loader-03.png          |  Bin 217008 -> 0 bytes
 .../img/tutorial-batch-data-loader-04.png          |  Bin 261225 -> 0 bytes
 .../img/tutorial-batch-data-loader-05.png          |  Bin 256368 -> 0 bytes
 .../img/tutorial-batch-data-loader-06.png          |  Bin 105983 -> 0 bytes
 .../img/tutorial-batch-data-loader-07.png          |  Bin 81399 -> 0 bytes
 .../img/tutorial-batch-data-loader-08.png          |  Bin 162397 -> 0 bytes
 .../img/tutorial-batch-data-loader-09.png          |  Bin 107662 -> 0 bytes
 .../img/tutorial-batch-data-loader-10.png          |  Bin 79080 -> 0 bytes
 .../img/tutorial-batch-data-loader-11.png          |  Bin 133329 -> 0 bytes
 .../img/tutorial-batch-submit-task-01.png          |  Bin 113916 -> 0 bytes
 .../img/tutorial-batch-submit-task-02.png          |  Bin 136268 -> 0 bytes
 .../tutorials/img/tutorial-compaction-01.png       |  Bin 55153 -> 0 bytes
 .../tutorials/img/tutorial-compaction-02.png       |  Bin 279736 -> 0 bytes
 .../tutorials/img/tutorial-compaction-03.png       |  Bin 40114 -> 0 bytes
 .../tutorials/img/tutorial-compaction-04.png       |  Bin 312142 -> 0 bytes
 .../tutorials/img/tutorial-compaction-05.png       |  Bin 39784 -> 0 bytes
 .../tutorials/img/tutorial-compaction-06.png       |  Bin 351505 -> 0 bytes
 .../tutorials/img/tutorial-compaction-07.png       |  Bin 40106 -> 0 bytes
 .../tutorials/img/tutorial-compaction-08.png       |  Bin 43257 -> 0 bytes
 docs/latest/tutorials/img/tutorial-deletion-01.png |  Bin 72062 -> 0 bytes
 docs/latest/tutorials/img/tutorial-deletion-02.png |  Bin 810422 -> 0 bytes
 docs/latest/tutorials/img/tutorial-deletion-03.png |  Bin 805673 -> 0 bytes
 docs/latest/tutorials/img/tutorial-kafka-01.png    |  Bin 136317 -> 0 bytes
 docs/latest/tutorials/img/tutorial-kafka-02.png    |  Bin 125452 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-01.png    |  Bin 153120 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-02.png    |  Bin 129962 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-03.png    |  Bin 106082 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-04.png    |  Bin 108331 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-05.png    |  Bin 87070 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-06.png    |  Bin 130612 -> 0 bytes
 docs/latest/tutorials/img/tutorial-query-07.png    |  Bin 125457 -> 0 bytes
 .../tutorials/img/tutorial-quickstart-01.png       |  Bin 56955 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-00.png |  Bin 138304 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-01.png |  Bin 53955 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-02.png |  Bin 410930 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-03.png |  Bin 44144 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-04.png |  Bin 67493 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-05.png |  Bin 61639 -> 0 bytes
 .../latest/tutorials/img/tutorial-retention-06.png |  Bin 233034 -> 0 bytes
 docs/latest/tutorials/index.html                   |  213 +++
 docs/latest/tutorials/index.md                     |  210 ---
 docs/latest/tutorials/ingestion-streams.html       |   12 +-
 docs/latest/tutorials/ingestion.html               |   12 +-
 docs/latest/tutorials/quickstart.html              |   12 +-
 .../tutorials/tutorial-a-first-look-at-druid.html  |   12 +-
 .../tutorials/tutorial-all-about-queries.html      |   12 +-
 docs/latest/tutorials/tutorial-batch-hadoop.html   |  236 +++
 docs/latest/tutorials/tutorial-batch-hadoop.md     |  261 ---
 docs/latest/tutorials/tutorial-batch.html          |  245 +++
 docs/latest/tutorials/tutorial-batch.md            |  267 ---
 docs/latest/tutorials/tutorial-compaction.html     |  173 ++
 docs/latest/tutorials/tutorial-compaction.md       |  175 --
 docs/latest/tutorials/tutorial-delete-data.html    |  192 +++
 docs/latest/tutorials/tutorial-delete-data.md      |  180 --
 docs/latest/tutorials/tutorial-ingestion-spec.html |  601 +++++++
 docs/latest/tutorials/tutorial-ingestion-spec.md   |  662 -------
 docs/latest/tutorials/tutorial-kafka.html          |  193 +++
 docs/latest/tutorials/tutorial-kafka.md            |  186 --
 .../latest/tutorials/tutorial-kerberos-hadoop.html |  152 ++
 docs/latest/tutorials/tutorial-kerberos-hadoop.md  |  122 --
 .../tutorials/tutorial-loading-batch-data.html     |   12 +-
 .../tutorials/tutorial-loading-streaming-data.html |   12 +-
 docs/latest/tutorials/tutorial-query.html          |  283 +++
 docs/latest/tutorials/tutorial-query.md            |  305 ----
 docs/latest/tutorials/tutorial-retention.html      |  130 ++
 docs/latest/tutorials/tutorial-retention.md        |  115 --
 docs/latest/tutorials/tutorial-rollup.html         |  211 +++
 docs/latest/tutorials/tutorial-rollup.md           |  199 ---
 .../tutorials/tutorial-the-druid-cluster.html      |   12 +-
 docs/latest/tutorials/tutorial-tranquility.md      |  112 +-
 docs/latest/tutorials/tutorial-transform-spec.html |  194 +++
 docs/latest/tutorials/tutorial-transform-spec.md   |  157 --
 docs/latest/tutorials/tutorial-update-data.html    |  179 ++
 docs/latest/tutorials/tutorial-update-data.md      |  169 --
 js/codetabs.js                                     |   31 +
 js/scrollSpy.js                                    |   76 +
 release.sh                                         |    4 +-
 1041 files changed, 72477 insertions(+), 31335 deletions(-)

diff --git a/docs/0.16.0-incubating/About-Experimental-Features.html b/docs/0.16.0-incubating/About-Experimental-Features.html
new file mode 100644
index 0000000..4c16ee2
--- /dev/null
+++ b/docs/0.16.0-incubating/About-Experimental-Features.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/experimental.html">
+<meta http-equiv="refresh" content="0; url=development/experimental.html">
+<h1>Redirecting...</h1>
+<a href="development/experimental.html">Click here if you are not redirected.</a>
+<script>location="development/experimental.html"</script>
diff --git a/docs/0.16.0-incubating/Aggregations.html b/docs/0.16.0-incubating/Aggregations.html
new file mode 100644
index 0000000..08794b9
--- /dev/null
+++ b/docs/0.16.0-incubating/Aggregations.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/aggregations.html">
+<meta http-equiv="refresh" content="0; url=querying/aggregations.html">
+<h1>Redirecting...</h1>
+<a href="querying/aggregations.html">Click here if you are not redirected.</a>
+<script>location="querying/aggregations.html"</script>
diff --git a/docs/0.16.0-incubating/ApproxHisto.html b/docs/0.16.0-incubating/ApproxHisto.html
new file mode 100644
index 0000000..4e654a9
--- /dev/null
+++ b/docs/0.16.0-incubating/ApproxHisto.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions-core/approximate-histograms.html">
+<meta http-equiv="refresh" content="0; url=development/extensions-core/approximate-histograms.html">
+<h1>Redirecting...</h1>
+<a href="development/extensions-core/approximate-histograms.html">Click here if you are not redirected.</a>
+<script>location="development/extensions-core/approximate-histograms.html"</script>
diff --git a/docs/0.16.0-incubating/Batch-ingestion.html b/docs/0.16.0-incubating/Batch-ingestion.html
new file mode 100644
index 0000000..b571cbd
--- /dev/null
+++ b/docs/0.16.0-incubating/Batch-ingestion.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/index.html">
+<meta http-equiv="refresh" content="0; url=ingestion/index.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/index.html">Click here if you are not redirected.</a>
+<script>location="ingestion/index.html"</script>
diff --git a/docs/0.16.0-incubating/Booting-a-production-cluster.html b/docs/0.16.0-incubating/Booting-a-production-cluster.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Booting-a-production-cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Broker-Config.html b/docs/0.16.0-incubating/Broker-Config.html
new file mode 100644
index 0000000..01b38ae
--- /dev/null
+++ b/docs/0.16.0-incubating/Broker-Config.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html#broker">
+<meta http-equiv="refresh" content="0; url=configuration/index.html#broker">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html#broker">Click here if you are not redirected.</a>
+<script>location="configuration/index.html#broker"</script>
diff --git a/docs/0.16.0-incubating/Broker.html b/docs/0.16.0-incubating/Broker.html
new file mode 100644
index 0000000..a28166f
--- /dev/null
+++ b/docs/0.16.0-incubating/Broker.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/broker.html">
+<meta http-equiv="refresh" content="0; url=design/broker.html">
+<h1>Redirecting...</h1>
+<a href="design/broker.html">Click here if you are not redirected.</a>
+<script>location="design/broker.html"</script>
diff --git a/docs/0.16.0-incubating/Build-from-source.html b/docs/0.16.0-incubating/Build-from-source.html
new file mode 100644
index 0000000..30898e7
--- /dev/null
+++ b/docs/0.16.0-incubating/Build-from-source.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/build.html">
+<meta http-equiv="refresh" content="0; url=development/build.html">
+<h1>Redirecting...</h1>
+<a href="development/build.html">Click here if you are not redirected.</a>
+<script>location="development/build.html"</script>
diff --git a/docs/0.16.0-incubating/Cassandra-Deep-Storage.html b/docs/0.16.0-incubating/Cassandra-Deep-Storage.html
new file mode 100644
index 0000000..31d6841
--- /dev/null
+++ b/docs/0.16.0-incubating/Cassandra-Deep-Storage.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html#cassandra-deep-storage">
+<meta http-equiv="refresh" content="0; url=configuration/index.html#cassandra-deep-storage">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html#cassandra-deep-storage">Click here if you are not redirected.</a>
+<script>location="configuration/index.html#cassandra-deep-storage"</script>
diff --git a/docs/0.16.0-incubating/Cluster-setup.html b/docs/0.16.0-incubating/Cluster-setup.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Cluster-setup.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Compute.html b/docs/0.16.0-incubating/Compute.html
new file mode 100644
index 0000000..c3bea73
--- /dev/null
+++ b/docs/0.16.0-incubating/Compute.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/processes.html">
+<meta http-equiv="refresh" content="0; url=design/processes.html">
+<h1>Redirecting...</h1>
+<a href="design/processes.html">Click here if you are not redirected.</a>
+<script>location="design/processes.html"</script>
diff --git a/docs/0.16.0-incubating/Concepts-and-Terminology.html b/docs/0.16.0-incubating/Concepts-and-Terminology.html
new file mode 100644
index 0000000..57986ec
--- /dev/null
+++ b/docs/0.16.0-incubating/Concepts-and-Terminology.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/index.html">
+<meta http-equiv="refresh" content="0; url=design/index.html">
+<h1>Redirecting...</h1>
+<a href="design/index.html">Click here if you are not redirected.</a>
+<script>location="design/index.html"</script>
diff --git a/docs/0.16.0-incubating/Configuration.html b/docs/0.16.0-incubating/Configuration.html
new file mode 100644
index 0000000..ea6ae53
--- /dev/null
+++ b/docs/0.16.0-incubating/Configuration.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html">
+<meta http-equiv="refresh" content="0; url=configuration/index.html">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html">Click here if you are not redirected.</a>
+<script>location="configuration/index.html"</script>
diff --git a/docs/0.16.0-incubating/Contribute.html b/docs/0.16.0-incubating/Contribute.html
new file mode 100644
index 0000000..ea71408
--- /dev/null
+++ b/docs/0.16.0-incubating/Contribute.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="/community/">
+<meta http-equiv="refresh" content="0; url=/community/">
+<h1>Redirecting...</h1>
+<a href="/community/">Click here if you are not redirected.</a>
+<script>location="/community/"</script>
diff --git a/docs/0.16.0-incubating/Coordinator-Config.html b/docs/0.16.0-incubating/Coordinator-Config.html
new file mode 100644
index 0000000..bb3def4
--- /dev/null
+++ b/docs/0.16.0-incubating/Coordinator-Config.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html#coordinator">
+<meta http-equiv="refresh" content="0; url=configuration/index.html#coordinator">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html#coordinator">Click here if you are not redirected.</a>
+<script>location="configuration/index.html#coordinator"</script>
diff --git a/docs/0.16.0-incubating/Coordinator.html b/docs/0.16.0-incubating/Coordinator.html
new file mode 100644
index 0000000..accfe43
--- /dev/null
+++ b/docs/0.16.0-incubating/Coordinator.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/coordinator.html">
+<meta http-equiv="refresh" content="0; url=design/coordinator.html">
+<h1>Redirecting...</h1>
+<a href="design/coordinator.html">Click here if you are not redirected.</a>
+<script>location="design/coordinator.html"</script>
diff --git a/docs/0.16.0-incubating/DataSource.html b/docs/0.16.0-incubating/DataSource.html
new file mode 100644
index 0000000..cde1771
--- /dev/null
+++ b/docs/0.16.0-incubating/DataSource.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/datasource.html">
+<meta http-equiv="refresh" content="0; url=querying/datasource.html">
+<h1>Redirecting...</h1>
+<a href="querying/datasource.html">Click here if you are not redirected.</a>
+<script>location="querying/datasource.html"</script>
diff --git a/docs/0.16.0-incubating/DataSourceMetadataQuery.html b/docs/0.16.0-incubating/DataSourceMetadataQuery.html
new file mode 100644
index 0000000..6c7cd57
--- /dev/null
+++ b/docs/0.16.0-incubating/DataSourceMetadataQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/datasourcemetadataquery.html">
+<meta http-equiv="refresh" content="0; url=querying/datasourcemetadataquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/datasourcemetadataquery.html">Click here if you are not redirected.</a>
+<script>location="querying/datasourcemetadataquery.html"</script>
diff --git a/docs/0.16.0-incubating/Data_formats.html b/docs/0.16.0-incubating/Data_formats.html
new file mode 100644
index 0000000..ae6f673
--- /dev/null
+++ b/docs/0.16.0-incubating/Data_formats.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/data-formats.html">
+<meta http-equiv="refresh" content="0; url=ingestion/data-formats.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/data-formats.html">Click here if you are not redirected.</a>
+<script>location="ingestion/data-formats.html"</script>
diff --git a/docs/0.16.0-incubating/Deep-Storage.html b/docs/0.16.0-incubating/Deep-Storage.html
new file mode 100644
index 0000000..07a7cd5
--- /dev/null
+++ b/docs/0.16.0-incubating/Deep-Storage.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="dependencies/deep-storage.html">
+<meta http-equiv="refresh" content="0; url=dependencies/deep-storage.html">
+<h1>Redirecting...</h1>
+<a href="dependencies/deep-storage.html">Click here if you are not redirected.</a>
+<script>location="dependencies/deep-storage.html"</script>
diff --git a/docs/0.16.0-incubating/Design.html b/docs/0.16.0-incubating/Design.html
new file mode 100644
index 0000000..57986ec
--- /dev/null
+++ b/docs/0.16.0-incubating/Design.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/index.html">
+<meta http-equiv="refresh" content="0; url=design/index.html">
+<h1>Redirecting...</h1>
+<a href="design/index.html">Click here if you are not redirected.</a>
+<script>location="design/index.html"</script>
diff --git a/docs/0.16.0-incubating/DimensionSpecs.html b/docs/0.16.0-incubating/DimensionSpecs.html
new file mode 100644
index 0000000..7ab536a
--- /dev/null
+++ b/docs/0.16.0-incubating/DimensionSpecs.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/dimensionspecs.html">
+<meta http-equiv="refresh" content="0; url=querying/dimensionspecs.html">
+<h1>Redirecting...</h1>
+<a href="querying/dimensionspecs.html">Click here if you are not redirected.</a>
+<script>location="querying/dimensionspecs.html"</script>
diff --git a/docs/0.16.0-incubating/Download.html b/docs/0.16.0-incubating/Download.html
new file mode 100644
index 0000000..cfb2d2b
--- /dev/null
+++ b/docs/0.16.0-incubating/Download.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="/downloads.html">
+<meta http-equiv="refresh" content="0; url=/downloads.html">
+<h1>Redirecting...</h1>
+<a href="/downloads.html">Click here if you are not redirected.</a>
+<script>location="/downloads.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-Personal-Demo-Cluster.html b/docs/0.16.0-incubating/Druid-Personal-Demo-Cluster.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-Personal-Demo-Cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Cassandra.html b/docs/0.16.0-incubating/Druid-vs-Cassandra.html
new file mode 100644
index 0000000..1a0249a
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Cassandra.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-key-value.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-key-value.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-key-value.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-key-value.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Elasticsearch.html b/docs/0.16.0-incubating/Druid-vs-Elasticsearch.html
new file mode 100644
index 0000000..5d519fb
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Elasticsearch.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-elasticsearch.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-elasticsearch.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-elasticsearch.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-elasticsearch.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Hadoop.html b/docs/0.16.0-incubating/Druid-vs-Hadoop.html
new file mode 100644
index 0000000..d5202e7
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Hadoop.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-sql-on-hadoop.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-sql-on-hadoop.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-sql-on-hadoop.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-sql-on-hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Impala-or-Shark.html b/docs/0.16.0-incubating/Druid-vs-Impala-or-Shark.html
new file mode 100644
index 0000000..d5202e7
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Impala-or-Shark.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-sql-on-hadoop.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-sql-on-hadoop.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-sql-on-hadoop.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-sql-on-hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Redshift.html b/docs/0.16.0-incubating/Druid-vs-Redshift.html
new file mode 100644
index 0000000..4199696
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Redshift.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-redshift.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-redshift.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-redshift.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-redshift.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Spark.html b/docs/0.16.0-incubating/Druid-vs-Spark.html
new file mode 100644
index 0000000..31713ad
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Spark.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-spark.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-spark.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-spark.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-spark.html"</script>
diff --git a/docs/0.16.0-incubating/Druid-vs-Vertica.html b/docs/0.16.0-incubating/Druid-vs-Vertica.html
new file mode 100644
index 0000000..4199696
--- /dev/null
+++ b/docs/0.16.0-incubating/Druid-vs-Vertica.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="comparisons/druid-vs-redshift.html">
+<meta http-equiv="refresh" content="0; url=comparisons/druid-vs-redshift.html">
+<h1>Redirecting...</h1>
+<a href="comparisons/druid-vs-redshift.html">Click here if you are not redirected.</a>
+<script>location="comparisons/druid-vs-redshift.html"</script>
diff --git a/docs/0.16.0-incubating/Evaluate.html b/docs/0.16.0-incubating/Evaluate.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Evaluate.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Examples.html b/docs/0.16.0-incubating/Examples.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Examples.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Filters.html b/docs/0.16.0-incubating/Filters.html
new file mode 100644
index 0000000..a318095
--- /dev/null
+++ b/docs/0.16.0-incubating/Filters.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/filters.html">
+<meta http-equiv="refresh" content="0; url=querying/filters.html">
+<h1>Redirecting...</h1>
+<a href="querying/filters.html">Click here if you are not redirected.</a>
+<script>location="querying/filters.html"</script>
diff --git a/docs/0.16.0-incubating/Firehose.html b/docs/0.16.0-incubating/Firehose.html
new file mode 100644
index 0000000..318e3fb
--- /dev/null
+++ b/docs/0.16.0-incubating/Firehose.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/native-batch.html#firehoses">
+<meta http-equiv="refresh" content="0; url=ingestion/native-batch.html#firehoses">
+<h1>Redirecting...</h1>
+<a href="ingestion/native-batch.html#firehoses">Click here if you are not redirected.</a>
+<script>location="ingestion/native-batch.html#firehoses"</script>
diff --git a/docs/0.16.0-incubating/GeographicQueries.html b/docs/0.16.0-incubating/GeographicQueries.html
new file mode 100644
index 0000000..566645a
--- /dev/null
+++ b/docs/0.16.0-incubating/GeographicQueries.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/geo.html">
+<meta http-equiv="refresh" content="0; url=development/geo.html">
+<h1>Redirecting...</h1>
+<a href="development/geo.html">Click here if you are not redirected.</a>
+<script>location="development/geo.html"</script>
diff --git a/docs/0.16.0-incubating/Granularities.html b/docs/0.16.0-incubating/Granularities.html
new file mode 100644
index 0000000..05ddc32
--- /dev/null
+++ b/docs/0.16.0-incubating/Granularities.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/granularities.html">
+<meta http-equiv="refresh" content="0; url=querying/granularities.html">
+<h1>Redirecting...</h1>
+<a href="querying/granularities.html">Click here if you are not redirected.</a>
+<script>location="querying/granularities.html"</script>
diff --git a/docs/0.16.0-incubating/GroupByQuery.html b/docs/0.16.0-incubating/GroupByQuery.html
new file mode 100644
index 0000000..51a97c6
--- /dev/null
+++ b/docs/0.16.0-incubating/GroupByQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/groupbyquery.html">
+<meta http-equiv="refresh" content="0; url=querying/groupbyquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/groupbyquery.html">Click here if you are not redirected.</a>
+<script>location="querying/groupbyquery.html"</script>
diff --git a/docs/0.16.0-incubating/Hadoop-Configuration.html b/docs/0.16.0-incubating/Hadoop-Configuration.html
new file mode 100644
index 0000000..7e5143c
--- /dev/null
+++ b/docs/0.16.0-incubating/Hadoop-Configuration.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/hadoop.html">
+<meta http-equiv="refresh" content="0; url=ingestion/hadoop.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/hadoop.html">Click here if you are not redirected.</a>
+<script>location="ingestion/hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/Having.html b/docs/0.16.0-incubating/Having.html
new file mode 100644
index 0000000..7715018
--- /dev/null
+++ b/docs/0.16.0-incubating/Having.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/having.html">
+<meta http-equiv="refresh" content="0; url=querying/having.html">
+<h1>Redirecting...</h1>
+<a href="querying/having.html">Click here if you are not redirected.</a>
+<script>location="querying/having.html"</script>
diff --git a/docs/0.16.0-incubating/Historical-Config.html b/docs/0.16.0-incubating/Historical-Config.html
new file mode 100644
index 0000000..20901ec
--- /dev/null
+++ b/docs/0.16.0-incubating/Historical-Config.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html#historical">
+<meta http-equiv="refresh" content="0; url=configuration/index.html#historical">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html#historical">Click here if you are not redirected.</a>
+<script>location="configuration/index.html#historical"</script>
diff --git a/docs/0.16.0-incubating/Historical.html b/docs/0.16.0-incubating/Historical.html
new file mode 100644
index 0000000..4654f6a
--- /dev/null
+++ b/docs/0.16.0-incubating/Historical.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/historical.html">
+<meta http-equiv="refresh" content="0; url=design/historical.html">
+<h1>Redirecting...</h1>
+<a href="design/historical.html">Click here if you are not redirected.</a>
+<script>location="design/historical.html"</script>
diff --git a/docs/0.16.0-incubating/Home.html b/docs/0.16.0-incubating/Home.html
new file mode 100644
index 0000000..57986ec
--- /dev/null
+++ b/docs/0.16.0-incubating/Home.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/index.html">
+<meta http-equiv="refresh" content="0; url=design/index.html">
+<h1>Redirecting...</h1>
+<a href="design/index.html">Click here if you are not redirected.</a>
+<script>location="design/index.html"</script>
diff --git a/docs/0.16.0-incubating/Including-Extensions.html b/docs/0.16.0-incubating/Including-Extensions.html
new file mode 100644
index 0000000..89a2675
--- /dev/null
+++ b/docs/0.16.0-incubating/Including-Extensions.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions.html#loading-extensions">
+<meta http-equiv="refresh" content="0; url=development/extensions.html#loading-extensions">
+<h1>Redirecting...</h1>
+<a href="development/extensions.html#loading-extensions">Click here if you are not redirected.</a>
+<script>location="development/extensions.html#loading-extensions"</script>
diff --git a/docs/0.16.0-incubating/Indexing-Service-Config.html b/docs/0.16.0-incubating/Indexing-Service-Config.html
new file mode 100644
index 0000000..b6aa387
--- /dev/null
+++ b/docs/0.16.0-incubating/Indexing-Service-Config.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/index.html#overlord">
+<meta http-equiv="refresh" content="0; url=configuration/index.html#overlord">
+<h1>Redirecting...</h1>
+<a href="configuration/index.html#overlord">Click here if you are not redirected.</a>
+<script>location="configuration/index.html#overlord"</script>
diff --git a/docs/0.16.0-incubating/Indexing-Service.html b/docs/0.16.0-incubating/Indexing-Service.html
new file mode 100644
index 0000000..20f139d
--- /dev/null
+++ b/docs/0.16.0-incubating/Indexing-Service.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/indexing-service.html">
+<meta http-equiv="refresh" content="0; url=design/indexing-service.html">
+<h1>Redirecting...</h1>
+<a href="design/indexing-service.html">Click here if you are not redirected.</a>
+<script>location="design/indexing-service.html"</script>
diff --git a/docs/0.16.0-incubating/Ingestion-FAQ.html b/docs/0.16.0-incubating/Ingestion-FAQ.html
new file mode 100644
index 0000000..dddd3a4
--- /dev/null
+++ b/docs/0.16.0-incubating/Ingestion-FAQ.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/faq.html">
+<meta http-equiv="refresh" content="0; url=ingestion/faq.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/faq.html">Click here if you are not redirected.</a>
+<script>location="ingestion/faq.html"</script>
diff --git a/docs/0.16.0-incubating/Ingestion-overview.html b/docs/0.16.0-incubating/Ingestion-overview.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Ingestion-overview.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Ingestion.html b/docs/0.16.0-incubating/Ingestion.html
new file mode 100644
index 0000000..b571cbd
--- /dev/null
+++ b/docs/0.16.0-incubating/Ingestion.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/index.html">
+<meta http-equiv="refresh" content="0; url=ingestion/index.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/index.html">Click here if you are not redirected.</a>
+<script>location="ingestion/index.html"</script>
diff --git a/docs/0.16.0-incubating/Integrating-Druid-With-Other-Technologies.html b/docs/0.16.0-incubating/Integrating-Druid-With-Other-Technologies.html
new file mode 100644
index 0000000..5f4584b
--- /dev/null
+++ b/docs/0.16.0-incubating/Integrating-Druid-With-Other-Technologies.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/integrating-druid-with-other-technologies.html">
+<meta http-equiv="refresh" content="0; url=development/integrating-druid-with-other-technologies.html">
+<h1>Redirecting...</h1>
+<a href="development/integrating-druid-with-other-technologies.html">Click here if you are not redirected.</a>
+<script>location="development/integrating-druid-with-other-technologies.html"</script>
diff --git a/docs/0.16.0-incubating/Kafka-Eight.html b/docs/0.16.0-incubating/Kafka-Eight.html
new file mode 100644
index 0000000..b654b03
--- /dev/null
+++ b/docs/0.16.0-incubating/Kafka-Eight.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions-core/kafka-ingestion.html">
+<meta http-equiv="refresh" content="0; url=development/extensions-core/kafka-ingestion.html">
+<h1>Redirecting...</h1>
+<a href="development/extensions-core/kafka-ingestion.html">Click here if you are not redirected.</a>
+<script>location="development/extensions-core/kafka-ingestion.html"</script>
diff --git a/docs/0.16.0-incubating/Libraries.html b/docs/0.16.0-incubating/Libraries.html
new file mode 100644
index 0000000..545edee
--- /dev/null
+++ b/docs/0.16.0-incubating/Libraries.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="/libraries.html">
+<meta http-equiv="refresh" content="0; url=/libraries.html">
+<h1>Redirecting...</h1>
+<a href="/libraries.html">Click here if you are not redirected.</a>
+<script>location="/libraries.html"</script>
diff --git a/docs/0.16.0-incubating/LimitSpec.html b/docs/0.16.0-incubating/LimitSpec.html
new file mode 100644
index 0000000..8b6a28d
--- /dev/null
+++ b/docs/0.16.0-incubating/LimitSpec.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/limitspec.html">
+<meta http-equiv="refresh" content="0; url=querying/limitspec.html">
+<h1>Redirecting...</h1>
+<a href="querying/limitspec.html">Click here if you are not redirected.</a>
+<script>location="querying/limitspec.html"</script>
diff --git a/docs/0.16.0-incubating/Loading-Your-Data.html b/docs/0.16.0-incubating/Loading-Your-Data.html
new file mode 100644
index 0000000..b571cbd
--- /dev/null
+++ b/docs/0.16.0-incubating/Loading-Your-Data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/index.html">
+<meta http-equiv="refresh" content="0; url=ingestion/index.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/index.html">Click here if you are not redirected.</a>
+<script>location="ingestion/index.html"</script>
diff --git a/docs/0.16.0-incubating/Logging.html b/docs/0.16.0-incubating/Logging.html
new file mode 100644
index 0000000..3b2b135
--- /dev/null
+++ b/docs/0.16.0-incubating/Logging.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="configuration/logging.html">
+<meta http-equiv="refresh" content="0; url=configuration/logging.html">
+<h1>Redirecting...</h1>
+<a href="configuration/logging.html">Click here if you are not redirected.</a>
+<script>location="configuration/logging.html"</script>
diff --git a/docs/0.16.0-incubating/Master.html b/docs/0.16.0-incubating/Master.html
new file mode 100644
index 0000000..c3bea73
--- /dev/null
+++ b/docs/0.16.0-incubating/Master.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/processes.html">
+<meta http-equiv="refresh" content="0; url=design/processes.html">
+<h1>Redirecting...</h1>
+<a href="design/processes.html">Click here if you are not redirected.</a>
+<script>location="design/processes.html"</script>
diff --git a/docs/0.16.0-incubating/Metadata-storage.html b/docs/0.16.0-incubating/Metadata-storage.html
new file mode 100644
index 0000000..f13b365
--- /dev/null
+++ b/docs/0.16.0-incubating/Metadata-storage.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="dependencies/metadata-storage.html">
+<meta http-equiv="refresh" content="0; url=dependencies/metadata-storage.html">
+<h1>Redirecting...</h1>
+<a href="dependencies/metadata-storage.html">Click here if you are not redirected.</a>
+<script>location="dependencies/metadata-storage.html"</script>
diff --git a/docs/0.16.0-incubating/Metrics.html b/docs/0.16.0-incubating/Metrics.html
new file mode 100644
index 0000000..011ab0b
--- /dev/null
+++ b/docs/0.16.0-incubating/Metrics.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/metrics.html">
+<meta http-equiv="refresh" content="0; url=operations/metrics.html">
+<h1>Redirecting...</h1>
+<a href="operations/metrics.html">Click here if you are not redirected.</a>
+<script>location="operations/metrics.html"</script>
diff --git a/docs/0.16.0-incubating/Middlemanager.html b/docs/0.16.0-incubating/Middlemanager.html
new file mode 100644
index 0000000..8e9da09
--- /dev/null
+++ b/docs/0.16.0-incubating/Middlemanager.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/middlemanager.html">
+<meta http-equiv="refresh" content="0; url=design/middlemanager.html">
+<h1>Redirecting...</h1>
+<a href="design/middlemanager.html">Click here if you are not redirected.</a>
+<script>location="design/middlemanager.html"</script>
diff --git a/docs/0.16.0-incubating/Modules.html b/docs/0.16.0-incubating/Modules.html
new file mode 100644
index 0000000..93a8be1
--- /dev/null
+++ b/docs/0.16.0-incubating/Modules.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/modules.html">
+<meta http-equiv="refresh" content="0; url=development/modules.html">
+<h1>Redirecting...</h1>
+<a href="development/modules.html">Click here if you are not redirected.</a>
+<script>location="development/modules.html"</script>
diff --git a/docs/0.16.0-incubating/MySQL.html b/docs/0.16.0-incubating/MySQL.html
new file mode 100644
index 0000000..5c90272
--- /dev/null
+++ b/docs/0.16.0-incubating/MySQL.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/extensions-core/mysql.html">
+<meta http-equiv="refresh" content="0; url=development/extensions-core/mysql.html">
+<h1>Redirecting...</h1>
+<a href="development/extensions-core/mysql.html">Click here if you are not redirected.</a>
+<script>location="development/extensions-core/mysql.html"</script>
diff --git a/docs/0.16.0-incubating/OrderBy.html b/docs/0.16.0-incubating/OrderBy.html
new file mode 100644
index 0000000..8b6a28d
--- /dev/null
+++ b/docs/0.16.0-incubating/OrderBy.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/limitspec.html">
+<meta http-equiv="refresh" content="0; url=querying/limitspec.html">
+<h1>Redirecting...</h1>
+<a href="querying/limitspec.html">Click here if you are not redirected.</a>
+<script>location="querying/limitspec.html"</script>
diff --git a/docs/0.16.0-incubating/Other-Hadoop.html b/docs/0.16.0-incubating/Other-Hadoop.html
new file mode 100644
index 0000000..b7bdd99
--- /dev/null
+++ b/docs/0.16.0-incubating/Other-Hadoop.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/other-hadoop.html">
+<meta http-equiv="refresh" content="0; url=operations/other-hadoop.html">
+<h1>Redirecting...</h1>
+<a href="operations/other-hadoop.html">Click here if you are not redirected.</a>
+<script>location="operations/other-hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/Papers-and-talks.html b/docs/0.16.0-incubating/Papers-and-talks.html
new file mode 100644
index 0000000..4602adb
--- /dev/null
+++ b/docs/0.16.0-incubating/Papers-and-talks.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="misc/papers-and-talks.html">
+<meta http-equiv="refresh" content="0; url=misc/papers-and-talks.html">
+<h1>Redirecting...</h1>
+<a href="misc/papers-and-talks.html">Click here if you are not redirected.</a>
+<script>location="misc/papers-and-talks.html"</script>
diff --git a/docs/0.16.0-incubating/Peons.html b/docs/0.16.0-incubating/Peons.html
new file mode 100644
index 0000000..e9793f4
--- /dev/null
+++ b/docs/0.16.0-incubating/Peons.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/peons.html">
+<meta http-equiv="refresh" content="0; url=design/peons.html">
+<h1>Redirecting...</h1>
+<a href="design/peons.html">Click here if you are not redirected.</a>
+<script>location="design/peons.html"</script>
diff --git a/docs/0.16.0-incubating/Performance-FAQ.html b/docs/0.16.0-incubating/Performance-FAQ.html
new file mode 100644
index 0000000..e6da9b2
--- /dev/null
+++ b/docs/0.16.0-incubating/Performance-FAQ.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/basic-cluster-tuning.html">
+<meta http-equiv="refresh" content="0; url=operations/basic-cluster-tuning.html">
+<h1>Redirecting...</h1>
+<a href="operations/basic-cluster-tuning.html">Click here if you are not redirected.</a>
+<script>location="operations/basic-cluster-tuning.html"</script>
diff --git a/docs/0.16.0-incubating/Plumber.html b/docs/0.16.0-incubating/Plumber.html
new file mode 100644
index 0000000..b571cbd
--- /dev/null
+++ b/docs/0.16.0-incubating/Plumber.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/index.html">
+<meta http-equiv="refresh" content="0; url=ingestion/index.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/index.html">Click here if you are not redirected.</a>
+<script>location="ingestion/index.html"</script>
diff --git a/docs/0.16.0-incubating/Post-aggregations.html b/docs/0.16.0-incubating/Post-aggregations.html
new file mode 100644
index 0000000..e0c4e24
--- /dev/null
+++ b/docs/0.16.0-incubating/Post-aggregations.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/post-aggregations.html">
+<meta http-equiv="refresh" content="0; url=querying/post-aggregations.html">
+<h1>Redirecting...</h1>
+<a href="querying/post-aggregations.html">Click here if you are not redirected.</a>
+<script>location="querying/post-aggregations.html"</script>
diff --git a/docs/0.16.0-incubating/Production-Cluster-Configuration.html b/docs/0.16.0-incubating/Production-Cluster-Configuration.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Production-Cluster-Configuration.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Query-Context.html b/docs/0.16.0-incubating/Query-Context.html
new file mode 100644
index 0000000..711a21e
--- /dev/null
+++ b/docs/0.16.0-incubating/Query-Context.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/query-context.html">
+<meta http-equiv="refresh" content="0; url=querying/query-context.html">
+<h1>Redirecting...</h1>
+<a href="querying/query-context.html">Click here if you are not redirected.</a>
+<script>location="querying/query-context.html"</script>
diff --git a/docs/0.16.0-incubating/Querying-your-data.html b/docs/0.16.0-incubating/Querying-your-data.html
new file mode 100644
index 0000000..702ed7a
--- /dev/null
+++ b/docs/0.16.0-incubating/Querying-your-data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/querying.html">
+<meta http-equiv="refresh" content="0; url=querying/querying.html">
+<h1>Redirecting...</h1>
+<a href="querying/querying.html">Click here if you are not redirected.</a>
+<script>location="querying/querying.html"</script>
diff --git a/docs/0.16.0-incubating/Querying.html b/docs/0.16.0-incubating/Querying.html
new file mode 100644
index 0000000..702ed7a
--- /dev/null
+++ b/docs/0.16.0-incubating/Querying.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/querying.html">
+<meta http-equiv="refresh" content="0; url=querying/querying.html">
+<h1>Redirecting...</h1>
+<a href="querying/querying.html">Click here if you are not redirected.</a>
+<script>location="querying/querying.html"</script>
diff --git a/docs/0.16.0-incubating/Realtime-Config.html b/docs/0.16.0-incubating/Realtime-Config.html
new file mode 100644
index 0000000..3b8f656
--- /dev/null
+++ b/docs/0.16.0-incubating/Realtime-Config.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/standalone-realtime.html">
+<meta http-equiv="refresh" content="0; url=ingestion/standalone-realtime.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/standalone-realtime.html">Click here if you are not redirected.</a>
+<script>location="ingestion/standalone-realtime.html"</script>
diff --git a/docs/0.16.0-incubating/Realtime-ingestion.html b/docs/0.16.0-incubating/Realtime-ingestion.html
new file mode 100644
index 0000000..b571cbd
--- /dev/null
+++ b/docs/0.16.0-incubating/Realtime-ingestion.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/index.html">
+<meta http-equiv="refresh" content="0; url=ingestion/index.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/index.html">Click here if you are not redirected.</a>
+<script>location="ingestion/index.html"</script>
diff --git a/docs/0.16.0-incubating/Realtime.html b/docs/0.16.0-incubating/Realtime.html
new file mode 100644
index 0000000..3b8f656
--- /dev/null
+++ b/docs/0.16.0-incubating/Realtime.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/standalone-realtime.html">
+<meta http-equiv="refresh" content="0; url=ingestion/standalone-realtime.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/standalone-realtime.html">Click here if you are not redirected.</a>
+<script>location="ingestion/standalone-realtime.html"</script>
diff --git a/docs/0.16.0-incubating/Recommendations.html b/docs/0.16.0-incubating/Recommendations.html
new file mode 100644
index 0000000..b7c96c1
--- /dev/null
+++ b/docs/0.16.0-incubating/Recommendations.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/recommendations.html">
+<meta http-equiv="refresh" content="0; url=operations/recommendations.html">
+<h1>Redirecting...</h1>
+<a href="operations/recommendations.html">Click here if you are not redirected.</a>
+<script>location="operations/recommendations.html"</script>
diff --git a/docs/0.16.0-incubating/Rolling-Updates.html b/docs/0.16.0-incubating/Rolling-Updates.html
new file mode 100644
index 0000000..90bc8b0
--- /dev/null
+++ b/docs/0.16.0-incubating/Rolling-Updates.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/rolling-updates.html">
+<meta http-equiv="refresh" content="0; url=operations/rolling-updates.html">
+<h1>Redirecting...</h1>
+<a href="operations/rolling-updates.html">Click here if you are not redirected.</a>
+<script>location="operations/rolling-updates.html"</script>
diff --git a/docs/0.16.0-incubating/Router.html b/docs/0.16.0-incubating/Router.html
new file mode 100644
index 0000000..64ba7f4
--- /dev/null
+++ b/docs/0.16.0-incubating/Router.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/router.html">
+<meta http-equiv="refresh" content="0; url=design/router.html">
+<h1>Redirecting...</h1>
+<a href="design/router.html">Click here if you are not redirected.</a>
+<script>location="design/router.html"</script>
diff --git a/docs/0.16.0-incubating/Rule-Configuration.html b/docs/0.16.0-incubating/Rule-Configuration.html
new file mode 100644
index 0000000..19c0e7e
--- /dev/null
+++ b/docs/0.16.0-incubating/Rule-Configuration.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/rule-configuration.html">
+<meta http-equiv="refresh" content="0; url=operations/rule-configuration.html">
+<h1>Redirecting...</h1>
+<a href="operations/rule-configuration.html">Click here if you are not redirected.</a>
+<script>location="operations/rule-configuration.html"</script>
diff --git a/docs/0.16.0-incubating/SearchQuery.html b/docs/0.16.0-incubating/SearchQuery.html
new file mode 100644
index 0000000..ee66987
--- /dev/null
+++ b/docs/0.16.0-incubating/SearchQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/searchquery.html">
+<meta http-equiv="refresh" content="0; url=querying/searchquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/searchquery.html">Click here if you are not redirected.</a>
+<script>location="querying/searchquery.html"</script>
diff --git a/docs/0.16.0-incubating/SearchQuerySpec.html b/docs/0.16.0-incubating/SearchQuerySpec.html
new file mode 100644
index 0000000..c28fca2
--- /dev/null
+++ b/docs/0.16.0-incubating/SearchQuerySpec.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/searchqueryspec.html">
+<meta http-equiv="refresh" content="0; url=querying/searchqueryspec.html">
+<h1>Redirecting...</h1>
+<a href="querying/searchqueryspec.html">Click here if you are not redirected.</a>
+<script>location="querying/searchqueryspec.html"</script>
diff --git a/docs/0.16.0-incubating/SegmentMetadataQuery.html b/docs/0.16.0-incubating/SegmentMetadataQuery.html
new file mode 100644
index 0000000..21294cb
--- /dev/null
+++ b/docs/0.16.0-incubating/SegmentMetadataQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/segmentmetadataquery.html">
+<meta http-equiv="refresh" content="0; url=querying/segmentmetadataquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/segmentmetadataquery.html">Click here if you are not redirected.</a>
+<script>location="querying/segmentmetadataquery.html"</script>
diff --git a/docs/0.16.0-incubating/Segments.html b/docs/0.16.0-incubating/Segments.html
new file mode 100644
index 0000000..040d647
--- /dev/null
+++ b/docs/0.16.0-incubating/Segments.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/segments.html">
+<meta http-equiv="refresh" content="0; url=design/segments.html">
+<h1>Redirecting...</h1>
+<a href="design/segments.html">Click here if you are not redirected.</a>
+<script>location="design/segments.html"</script>
diff --git a/docs/0.16.0-incubating/SelectQuery.html b/docs/0.16.0-incubating/SelectQuery.html
new file mode 100644
index 0000000..526110d
--- /dev/null
+++ b/docs/0.16.0-incubating/SelectQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/select-query.html">
+<meta http-equiv="refresh" content="0; url=querying/select-query.html">
+<h1>Redirecting...</h1>
+<a href="querying/select-query.html">Click here if you are not redirected.</a>
+<script>location="querying/select-query.html"</script>
diff --git a/docs/0.16.0-incubating/Simple-Cluster-Configuration.html b/docs/0.16.0-incubating/Simple-Cluster-Configuration.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Simple-Cluster-Configuration.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Spatial-Filters.html b/docs/0.16.0-incubating/Spatial-Filters.html
new file mode 100644
index 0000000..566645a
--- /dev/null
+++ b/docs/0.16.0-incubating/Spatial-Filters.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/geo.html">
+<meta http-equiv="refresh" content="0; url=development/geo.html">
+<h1>Redirecting...</h1>
+<a href="development/geo.html">Click here if you are not redirected.</a>
+<script>location="development/geo.html"</script>
diff --git a/docs/0.16.0-incubating/Spatial-Indexing.html b/docs/0.16.0-incubating/Spatial-Indexing.html
new file mode 100644
index 0000000..566645a
--- /dev/null
+++ b/docs/0.16.0-incubating/Spatial-Indexing.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/geo.html">
+<meta http-equiv="refresh" content="0; url=development/geo.html">
+<h1>Redirecting...</h1>
+<a href="development/geo.html">Click here if you are not redirected.</a>
+<script>location="development/geo.html"</script>
diff --git a/docs/0.16.0-incubating/Stand-Alone-With-Riak-CS.html b/docs/0.16.0-incubating/Stand-Alone-With-Riak-CS.html
new file mode 100644
index 0000000..57986ec
--- /dev/null
+++ b/docs/0.16.0-incubating/Stand-Alone-With-Riak-CS.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="design/index.html">
+<meta http-equiv="refresh" content="0; url=design/index.html">
+<h1>Redirecting...</h1>
+<a href="design/index.html">Click here if you are not redirected.</a>
+<script>location="design/index.html"</script>
diff --git a/docs/0.16.0-incubating/Support.html b/docs/0.16.0-incubating/Support.html
new file mode 100644
index 0000000..ea71408
--- /dev/null
+++ b/docs/0.16.0-incubating/Support.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="/community/">
+<meta http-equiv="refresh" content="0; url=/community/">
+<h1>Redirecting...</h1>
+<a href="/community/">Click here if you are not redirected.</a>
+<script>location="/community/"</script>
diff --git a/docs/0.16.0-incubating/Tasks.html b/docs/0.16.0-incubating/Tasks.html
new file mode 100644
index 0000000..71e90f8
--- /dev/null
+++ b/docs/0.16.0-incubating/Tasks.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="ingestion/tasks.html">
+<meta http-equiv="refresh" content="0; url=ingestion/tasks.html">
+<h1>Redirecting...</h1>
+<a href="ingestion/tasks.html">Click here if you are not redirected.</a>
+<script>location="ingestion/tasks.html"</script>
diff --git a/docs/0.16.0-incubating/Thanks.html b/docs/0.16.0-incubating/Thanks.html
new file mode 100644
index 0000000..ea71408
--- /dev/null
+++ b/docs/0.16.0-incubating/Thanks.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="/community/">
+<meta http-equiv="refresh" content="0; url=/community/">
+<h1>Redirecting...</h1>
+<a href="/community/">Click here if you are not redirected.</a>
+<script>location="/community/"</script>
diff --git a/docs/0.16.0-incubating/TimeBoundaryQuery.html b/docs/0.16.0-incubating/TimeBoundaryQuery.html
new file mode 100644
index 0000000..8e512e1
--- /dev/null
+++ b/docs/0.16.0-incubating/TimeBoundaryQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/timeboundaryquery.html">
+<meta http-equiv="refresh" content="0; url=querying/timeboundaryquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/timeboundaryquery.html">Click here if you are not redirected.</a>
+<script>location="querying/timeboundaryquery.html"</script>
diff --git a/docs/0.16.0-incubating/TimeseriesQuery.html b/docs/0.16.0-incubating/TimeseriesQuery.html
new file mode 100644
index 0000000..8d7f3e8
--- /dev/null
+++ b/docs/0.16.0-incubating/TimeseriesQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/timeseriesquery.html">
+<meta http-equiv="refresh" content="0; url=querying/timeseriesquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/timeseriesquery.html">Click here if you are not redirected.</a>
+<script>location="querying/timeseriesquery.html"</script>
diff --git a/docs/0.16.0-incubating/TopNMetricSpec.html b/docs/0.16.0-incubating/TopNMetricSpec.html
new file mode 100644
index 0000000..14acf27
--- /dev/null
+++ b/docs/0.16.0-incubating/TopNMetricSpec.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/topnmetricspec.html">
+<meta http-equiv="refresh" content="0; url=querying/topnmetricspec.html">
+<h1>Redirecting...</h1>
+<a href="querying/topnmetricspec.html">Click here if you are not redirected.</a>
+<script>location="querying/topnmetricspec.html"</script>
diff --git a/docs/0.16.0-incubating/TopNQuery.html b/docs/0.16.0-incubating/TopNQuery.html
new file mode 100644
index 0000000..dd719fd
--- /dev/null
+++ b/docs/0.16.0-incubating/TopNQuery.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="querying/topnquery.html">
+<meta http-equiv="refresh" content="0; url=querying/topnquery.html">
+<h1>Redirecting...</h1>
+<a href="querying/topnquery.html">Click here if you are not redirected.</a>
+<script>location="querying/topnquery.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial-A-First-Look-at-Druid.html b/docs/0.16.0-incubating/Tutorial-A-First-Look-at-Druid.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial-A-First-Look-at-Druid.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial-All-About-Queries.html b/docs/0.16.0-incubating/Tutorial-All-About-Queries.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial-All-About-Queries.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial-Loading-Batch-Data.html b/docs/0.16.0-incubating/Tutorial-Loading-Batch-Data.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial-Loading-Batch-Data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial-Loading-Streaming-Data.html b/docs/0.16.0-incubating/Tutorial-Loading-Streaming-Data.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial-Loading-Streaming-Data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial-The-Druid-Cluster.html b/docs/0.16.0-incubating/Tutorial-The-Druid-Cluster.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial-The-Druid-Cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-A-First-Look-at-Druid.html b/docs/0.16.0-incubating/Tutorial:-A-First-Look-at-Druid.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-A-First-Look-at-Druid.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-All-About-Queries.html b/docs/0.16.0-incubating/Tutorial:-All-About-Queries.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-All-About-Queries.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-Loading-Batch-Data.html b/docs/0.16.0-incubating/Tutorial:-Loading-Batch-Data.html
new file mode 100644
index 0000000..744d5b6
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-Loading-Batch-Data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/tutorial-batch.html">
+<meta http-equiv="refresh" content="0; url=tutorials/tutorial-batch.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/tutorial-batch.html">Click here if you are not redirected.</a>
+<script>location="tutorials/tutorial-batch.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-Loading-Streaming-Data.html b/docs/0.16.0-incubating/Tutorial:-Loading-Streaming-Data.html
new file mode 100644
index 0000000..487fb6d
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-Loading-Streaming-Data.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/tutorial-kafka.html">
+<meta http-equiv="refresh" content="0; url=tutorials/tutorial-kafka.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/tutorial-kafka.html">Click here if you are not redirected.</a>
+<script>location="tutorials/tutorial-kafka.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html b/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html b/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-The-Druid-Cluster.html b/docs/0.16.0-incubating/Tutorial:-The-Druid-Cluster.html
new file mode 100644
index 0000000..bd0e468
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-The-Druid-Cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorial:-Webstream.html b/docs/0.16.0-incubating/Tutorial:-Webstream.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorial:-Webstream.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Tutorials.html b/docs/0.16.0-incubating/Tutorials.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Tutorials.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Twitter-Tutorial.html b/docs/0.16.0-incubating/Twitter-Tutorial.html
new file mode 100644
index 0000000..94e7c06
--- /dev/null
+++ b/docs/0.16.0-incubating/Twitter-Tutorial.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="tutorials/index.html">
+<meta http-equiv="refresh" content="0; url=tutorials/index.html">
+<h1>Redirecting...</h1>
+<a href="tutorials/index.html">Click here if you are not redirected.</a>
+<script>location="tutorials/index.html"</script>
diff --git a/docs/0.16.0-incubating/Versioning.html b/docs/0.16.0-incubating/Versioning.html
new file mode 100644
index 0000000..fe2eb7c
--- /dev/null
+++ b/docs/0.16.0-incubating/Versioning.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="development/versioning.html">
+<meta http-equiv="refresh" content="0; url=development/versioning.html">
+<h1>Redirecting...</h1>
+<a href="development/versioning.html">Click here if you are not redirected.</a>
+<script>location="development/versioning.html"</script>
diff --git a/docs/0.16.0-incubating/ZooKeeper.html b/docs/0.16.0-incubating/ZooKeeper.html
new file mode 100644
index 0000000..52406ac
--- /dev/null
+++ b/docs/0.16.0-incubating/ZooKeeper.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="dependencies/zookeeper.html">
+<meta http-equiv="refresh" content="0; url=dependencies/zookeeper.html">
+<h1>Redirecting...</h1>
+<a href="dependencies/zookeeper.html">Click here if you are not redirected.</a>
+<script>location="dependencies/zookeeper.html"</script>
diff --git a/docs/0.16.0-incubating/alerts.html b/docs/0.16.0-incubating/alerts.html
new file mode 100644
index 0000000..6286bcd
--- /dev/null
+++ b/docs/0.16.0-incubating/alerts.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="operations/alerts.html">
+<meta http-equiv="refresh" content="0; url=operations/alerts.html">
+<h1>Redirecting...</h1>
+<a href="operations/alerts.html">Click here if you are not redirected.</a>
+<script>location="operations/alerts.html"</script>
diff --git a/docs/0.16.0-incubating/assets/druid-architecture.png b/docs/0.16.0-incubating/assets/druid-architecture.png
new file mode 100644
index 0000000..954a87b
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-architecture.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-column-types.png b/docs/0.16.0-incubating/assets/druid-column-types.png
new file mode 100644
index 0000000..9db56c0
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-column-types.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-dataflow-2x.png b/docs/0.16.0-incubating/assets/druid-dataflow-2x.png
new file mode 100644
index 0000000..ab1c583
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-dataflow-2x.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-dataflow-3.png b/docs/0.16.0-incubating/assets/druid-dataflow-3.png
new file mode 100644
index 0000000..355215c
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-dataflow-3.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-manage-1.png b/docs/0.16.0-incubating/assets/druid-manage-1.png
new file mode 100644
index 0000000..0d10c6e
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-manage-1.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-production.png b/docs/0.16.0-incubating/assets/druid-production.png
new file mode 100644
index 0000000..a257dcb
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-production.png differ
diff --git a/docs/0.16.0-incubating/assets/druid-timeline.png b/docs/0.16.0-incubating/assets/druid-timeline.png
new file mode 100644
index 0000000..40380e2
Binary files /dev/null and b/docs/0.16.0-incubating/assets/druid-timeline.png differ
diff --git a/docs/0.16.0-incubating/assets/indexing_service.png b/docs/0.16.0-incubating/assets/indexing_service.png
new file mode 100644
index 0000000..a4462a4
Binary files /dev/null and b/docs/0.16.0-incubating/assets/indexing_service.png differ
diff --git a/docs/0.16.0-incubating/assets/segmentPropagation.png b/docs/0.16.0-incubating/assets/segmentPropagation.png
new file mode 100644
index 0000000..e1ec820
Binary files /dev/null and b/docs/0.16.0-incubating/assets/segmentPropagation.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-01.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-01.png
new file mode 100644
index 0000000..08426fd
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-02.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-02.png
new file mode 100644
index 0000000..76a1a7f
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-03.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-03.png
new file mode 100644
index 0000000..ce3b0f0
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-03.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-04.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-04.png
new file mode 100644
index 0000000..b30ef7f
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-04.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-05.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-05.png
new file mode 100644
index 0000000..9ef3f80
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-05.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-06.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-06.png
new file mode 100644
index 0000000..b1f08c8
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-06.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-07.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-07.png
new file mode 100644
index 0000000..d7a8e68
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-07.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-08.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-08.png
new file mode 100644
index 0000000..4e36aab
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-08.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-09.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-09.png
new file mode 100644
index 0000000..144c02c
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-09.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-10.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-10.png
new file mode 100644
index 0000000..75487a2
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-10.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-11.png b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-11.png
new file mode 100644
index 0000000..5cadd52
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-data-loader-11.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-01.png b/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-01.png
new file mode 100644
index 0000000..e8a1346
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-02.png b/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-02.png
new file mode 100644
index 0000000..fc0c924
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-batch-submit-task-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-01.png b/docs/0.16.0-incubating/assets/tutorial-compaction-01.png
new file mode 100644
index 0000000..aeb9bf3
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-02.png b/docs/0.16.0-incubating/assets/tutorial-compaction-02.png
new file mode 100644
index 0000000..836d8a7
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-03.png b/docs/0.16.0-incubating/assets/tutorial-compaction-03.png
new file mode 100644
index 0000000..d51f8f8
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-03.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-04.png b/docs/0.16.0-incubating/assets/tutorial-compaction-04.png
new file mode 100644
index 0000000..46c5b1d
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-04.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-05.png b/docs/0.16.0-incubating/assets/tutorial-compaction-05.png
new file mode 100644
index 0000000..e692694
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-05.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-06.png b/docs/0.16.0-incubating/assets/tutorial-compaction-06.png
new file mode 100644
index 0000000..55c999f
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-06.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-07.png b/docs/0.16.0-incubating/assets/tutorial-compaction-07.png
new file mode 100644
index 0000000..661e897
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-07.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-compaction-08.png b/docs/0.16.0-incubating/assets/tutorial-compaction-08.png
new file mode 100644
index 0000000..6e3f1aa
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-compaction-08.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-deletion-01.png b/docs/0.16.0-incubating/assets/tutorial-deletion-01.png
new file mode 100644
index 0000000..de68d38
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-deletion-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-deletion-02.png b/docs/0.16.0-incubating/assets/tutorial-deletion-02.png
new file mode 100644
index 0000000..ffe4585
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-deletion-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-deletion-03.png b/docs/0.16.0-incubating/assets/tutorial-deletion-03.png
new file mode 100644
index 0000000..221774f
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-deletion-03.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-kafka-01.png b/docs/0.16.0-incubating/assets/tutorial-kafka-01.png
new file mode 100644
index 0000000..b085625
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-kafka-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-kafka-02.png b/docs/0.16.0-incubating/assets/tutorial-kafka-02.png
new file mode 100644
index 0000000..f23e084
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-kafka-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-01.png b/docs/0.16.0-incubating/assets/tutorial-query-01.png
new file mode 100644
index 0000000..b366b2b
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-02.png b/docs/0.16.0-incubating/assets/tutorial-query-02.png
new file mode 100644
index 0000000..f3ba025
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-03.png b/docs/0.16.0-incubating/assets/tutorial-query-03.png
new file mode 100644
index 0000000..9f7ae27
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-03.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-04.png b/docs/0.16.0-incubating/assets/tutorial-query-04.png
new file mode 100644
index 0000000..3f800a6
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-04.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-05.png b/docs/0.16.0-incubating/assets/tutorial-query-05.png
new file mode 100644
index 0000000..2fc59ce
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-05.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-06.png b/docs/0.16.0-incubating/assets/tutorial-query-06.png
new file mode 100644
index 0000000..60b4e1a
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-06.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-query-07.png b/docs/0.16.0-incubating/assets/tutorial-query-07.png
new file mode 100644
index 0000000..d2e5a85
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-query-07.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-quickstart-01.png b/docs/0.16.0-incubating/assets/tutorial-quickstart-01.png
new file mode 100644
index 0000000..9a47bc7
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-quickstart-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-00.png b/docs/0.16.0-incubating/assets/tutorial-retention-00.png
new file mode 100644
index 0000000..a3f84a9
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-00.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-01.png b/docs/0.16.0-incubating/assets/tutorial-retention-01.png
new file mode 100644
index 0000000..35a97c2
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-01.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-02.png b/docs/0.16.0-incubating/assets/tutorial-retention-02.png
new file mode 100644
index 0000000..f38fad0
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-02.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-03.png b/docs/0.16.0-incubating/assets/tutorial-retention-03.png
new file mode 100644
index 0000000..256836a
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-03.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-04.png b/docs/0.16.0-incubating/assets/tutorial-retention-04.png
new file mode 100644
index 0000000..d39495f
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-04.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-05.png b/docs/0.16.0-incubating/assets/tutorial-retention-05.png
new file mode 100644
index 0000000..638a752
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-05.png differ
diff --git a/docs/0.16.0-incubating/assets/tutorial-retention-06.png b/docs/0.16.0-incubating/assets/tutorial-retention-06.png
new file mode 100644
index 0000000..f47cbff
Binary files /dev/null and b/docs/0.16.0-incubating/assets/tutorial-retention-06.png differ
diff --git a/docs/latest/operations/img/01-home-view.png b/docs/0.16.0-incubating/assets/web-console-01-home-view.png
similarity index 100%
copy from docs/latest/operations/img/01-home-view.png
copy to docs/0.16.0-incubating/assets/web-console-01-home-view.png
diff --git a/docs/latest/operations/img/02-data-loader-1.png b/docs/0.16.0-incubating/assets/web-console-02-data-loader-1.png
similarity index 100%
copy from docs/latest/operations/img/02-data-loader-1.png
copy to docs/0.16.0-incubating/assets/web-console-02-data-loader-1.png
diff --git a/docs/latest/operations/img/03-data-loader-2.png b/docs/0.16.0-incubating/assets/web-console-03-data-loader-2.png
similarity index 100%
copy from docs/latest/operations/img/03-data-loader-2.png
copy to docs/0.16.0-incubating/assets/web-console-03-data-loader-2.png
diff --git a/docs/latest/operations/img/04-datasources.png b/docs/0.16.0-incubating/assets/web-console-04-datasources.png
similarity index 100%
copy from docs/latest/operations/img/04-datasources.png
copy to docs/0.16.0-incubating/assets/web-console-04-datasources.png
diff --git a/docs/latest/operations/img/05-retention.png b/docs/0.16.0-incubating/assets/web-console-05-retention.png
similarity index 100%
copy from docs/latest/operations/img/05-retention.png
copy to docs/0.16.0-incubating/assets/web-console-05-retention.png
diff --git a/docs/latest/operations/img/06-segments.png b/docs/0.16.0-incubating/assets/web-console-06-segments.png
similarity index 100%
copy from docs/latest/operations/img/06-segments.png
copy to docs/0.16.0-incubating/assets/web-console-06-segments.png
diff --git a/docs/latest/operations/img/07-supervisors.png b/docs/0.16.0-incubating/assets/web-console-07-supervisors.png
similarity index 100%
copy from docs/latest/operations/img/07-supervisors.png
copy to docs/0.16.0-incubating/assets/web-console-07-supervisors.png
diff --git a/docs/latest/operations/img/08-tasks.png b/docs/0.16.0-incubating/assets/web-console-08-tasks.png
similarity index 100%
copy from docs/latest/operations/img/08-tasks.png
copy to docs/0.16.0-incubating/assets/web-console-08-tasks.png
diff --git a/docs/latest/operations/img/09-task-status.png b/docs/0.16.0-incubating/assets/web-console-09-task-status.png
similarity index 100%
copy from docs/latest/operations/img/09-task-status.png
copy to docs/0.16.0-incubating/assets/web-console-09-task-status.png
diff --git a/docs/latest/operations/img/10-servers.png b/docs/0.16.0-incubating/assets/web-console-10-servers.png
similarity index 100%
copy from docs/latest/operations/img/10-servers.png
copy to docs/0.16.0-incubating/assets/web-console-10-servers.png
diff --git a/docs/latest/operations/img/11-query-sql.png b/docs/0.16.0-incubating/assets/web-console-11-query-sql.png
similarity index 100%
copy from docs/latest/operations/img/11-query-sql.png
copy to docs/0.16.0-incubating/assets/web-console-11-query-sql.png
diff --git a/docs/latest/operations/img/12-query-rune.png b/docs/0.16.0-incubating/assets/web-console-12-query-rune.png
similarity index 100%
copy from docs/latest/operations/img/12-query-rune.png
copy to docs/0.16.0-incubating/assets/web-console-12-query-rune.png
diff --git a/docs/latest/operations/img/13-lookups.png b/docs/0.16.0-incubating/assets/web-console-13-lookups.png
similarity index 100%
copy from docs/latest/operations/img/13-lookups.png
copy to docs/0.16.0-incubating/assets/web-console-13-lookups.png
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-cassandra.html b/docs/0.16.0-incubating/comparisons/druid-vs-cassandra.html
new file mode 100644
index 0000000..235af8f
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-cassandra.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="druid-vs-key-value.html">
+<meta http-equiv="refresh" content="0; url=druid-vs-key-value.html">
+<h1>Redirecting...</h1>
+<a href="druid-vs-key-value.html">Click here if you are not redirected.</a>
+<script>location="druid-vs-key-value.html"</script>
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-elasticsearch.html b/docs/0.16.0-incubating/comparisons/druid-vs-elasticsearch.html
new file mode 100644
index 0000000..09e2bce
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-elasticsearch.html
@@ -0,0 +1,91 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs Elasticsearch · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs Elasticsearch · Apache Druid"/><meta property="og:type" content="website"/><meta  [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs Elasticsearch</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>We are not experts on search systems; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means.</p>
+<p>Elasticsearch is a search system based on Apache Lucene. It provides full text search for schema-free documents
+and provides access to raw event-level data. Elasticsearch is increasingly adding more support for analytics and aggregations.
+<a href="https://groups.google.com/forum/#!msg/druid-development/nlpwTHNclj8/sOuWlKOzPpYJ">Some members of the community</a> have pointed out
+that the resource requirements for data ingestion and aggregation in Elasticsearch are much higher than those of Druid.</p>
+<p>Elasticsearch also does not support data summarization/roll-up at ingestion time, which can compact the data that needs to be
+stored up to 100x with real-world data sets. This leads to Elasticsearch having greater storage requirements.</p>
+<p>Druid focuses on OLAP workflows. Druid is optimized for high performance (fast aggregation and ingestion) at low cost,
+and supports a wide range of analytic operations. Druid has some basic search support for structured event data, but does not support
+full text search. Druid also does not support completely unstructured data. Measures must be defined in a Druid schema such that
+summarization/roll-up can be done.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/misc/papers-and-talks.html"><span class="arrow-prev">← </span><span>Papers</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/comparisons/druid-vs-key-value.html"><span class="function-name-prevnext">Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><fo [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-hadoop.html b/docs/0.16.0-incubating/comparisons/druid-vs-hadoop.html
new file mode 100644
index 0000000..cab995e
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-hadoop.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="druid-vs-sql-on-hadoop.html">
+<meta http-equiv="refresh" content="0; url=druid-vs-sql-on-hadoop.html">
+<h1>Redirecting...</h1>
+<a href="druid-vs-sql-on-hadoop.html">Click here if you are not redirected.</a>
+<script>location="druid-vs-sql-on-hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-impala-or-shark.html b/docs/0.16.0-incubating/comparisons/druid-vs-impala-or-shark.html
new file mode 100644
index 0000000..cab995e
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-impala-or-shark.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="druid-vs-sql-on-hadoop.html">
+<meta http-equiv="refresh" content="0; url=druid-vs-sql-on-hadoop.html">
+<h1>Redirecting...</h1>
+<a href="druid-vs-sql-on-hadoop.html">Click here if you are not redirected.</a>
+<script>location="druid-vs-sql-on-hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-key-value.html b/docs/0.16.0-incubating/comparisons/druid-vs-key-value.html
new file mode 100644
index 0000000..74369c8
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-key-value.html
@@ -0,0 +1,99 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB) · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB) · Apa [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Druid is highly optimized for scans and aggregations, and it supports arbitrarily deep drill-downs into data sets. The same functionality
+is supported in key/value stores in two ways:</p>
+<ol>
+<li>Pre-compute all permutations of possible user queries</li>
+<li>Range scans on event data</li>
+</ol>
+<p>When pre-computing results, the key is the exact parameters of the query, and the value is the result of the query.
+Such queries return extremely quickly, but at the cost of flexibility, as ad-hoc exploratory queries are only possible if
+every query permutation has been pre-computed. Pre-computing all permutations of all ad-hoc queries leads to result sets
+that grow exponentially with the number of columns in a data set, and pre-computing queries for complex real-world data sets
+can require hours of pre-processing time.</p>
+<p>The other approach to using key/value stores for aggregations is to use the dimensions of an event as the key and the event measures as the value.
+Aggregations are done by issuing range scans on this data. Timeseries-specific databases such as OpenTSDB use this approach.
+One of the limitations here is that the key/value storage model does not have indexes for any kind of filtering other than prefix ranges,
+which can be used to filter a query down to a metric and time range, but cannot resolve complex predicates to narrow the exact data to scan.
+When the number of rows to scan gets large, this limitation can greatly reduce performance. It is also harder to achieve good
+locality with key/value stores because most don’t support pushing down aggregates to the storage layer.</p>
+<p>For arbitrary exploration of data (flexible data filtering), Druid's custom column format enables ad-hoc queries without pre-computation. The format
+also enables fast scans on columns, which is important for good aggregation performance.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-elasticsearch.html"><span class="arrow-prev">← </span><span>Apache Druid vs Elasticsearch</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/comparisons/druid-vs-kudu.html"><span>Apache Druid vs Kudu</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id= [...]
\ No newline at end of file
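
For illustration only, the following is a minimal TypeScript sketch of the two key/value aggregation strategies described in druid-vs-key-value.html above: pre-computed results keyed by the exact query parameters, and dimension-keyed event rows aggregated by scanning a metric/time prefix. All names and data here are hypothetical; this is not Druid or OpenTSDB code, and a real key/value store would range-scan a sorted key space rather than iterate an in-memory Map.

// approaches.ts -- illustrative sketch only (hypothetical names; not Druid or OpenTSDB code)
type Row = { timestamp: number; dimensions: Record<string, string>; value: number };

// Approach 1: pre-compute results, keyed by the exact query parameters.
// Lookups are instant, but only anticipated queries can be answered.
const precomputed = new Map<string, number>();

function precompute(rows: Row[], dimension: string): void {
  for (const row of rows) {
    const key = `sum:${dimension}=${row.dimensions[dimension]}`;
    precomputed.set(key, (precomputed.get(key) ?? 0) + row.value);
  }
}

// Approach 2: store events keyed by `${metric}:${timestamp}:${seq}` and
// aggregate by scanning the metric/time prefix; any further filtering
// happens only after a row has already been read.
const eventStore = new Map<string, Row>();

function scanSum(metric: string, from: number, to: number, filter?: (r: Row) => boolean): number {
  let sum = 0;
  eventStore.forEach((row, key) => {
    if (!key.startsWith(metric + ":")) return;                // prefix: metric
    if (row.timestamp < from || row.timestamp >= to) return;  // prefix: time range
    if (filter && !filter(row)) return;                       // post-filter: row was still read
    sum += row.value;
  });
  return sum;
}

// Tiny usage example with made-up data.
const rows: Row[] = [
  { timestamp: 1, dimensions: { country: "US" }, value: 3 },
  { timestamp: 2, dimensions: { country: "CA" }, value: 5 },
];
rows.forEach((r, i) => eventStore.set(`edits:${r.timestamp}:${i}`, r));
precompute(rows, "country");

console.log(precomputed.get("sum:country=US"));                            // 3
console.log(scanSum("edits", 0, 10, r => r.dimensions.country === "CA"));  // 5
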
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-kudu.html b/docs/0.16.0-incubating/comparisons/druid-vs-kudu.html
new file mode 100644
index 0000000..b862fd1
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-kudu.html
@@ -0,0 +1,93 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs Kudu · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs Kudu · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url"  [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs Kudu</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Kudu's storage format enables single-row updates, whereas updating existing Druid segments requires recreating the segment, so theoretically
+the process for updating old values should have higher latency in Druid. However, Kudu's requirements of maintaining extra head space to store
+updates and of organizing data by id rather than by time have the potential to introduce extra latency and to cause data that is not
+needed to answer a query to be accessed at query time.</p>
+<p>Druid summarizes/rolls up data at ingestion time, which in practice significantly reduces the raw data that needs to be
+stored (up to 40 times on average) and significantly increases the performance of scanning raw data.
+Druid segments also contain bitmap indexes for fast filtering, which Kudu does not currently support.
+Druid's segment architecture is heavily geared towards fast aggregates and filters, and for OLAP workflows. Appends are very
+fast in Druid, whereas updates of older data are higher latency. This is by design, as the data Druid is best suited for is typically event data
+that does not need to be updated very frequently. Kudu supports arbitrary primary keys with uniqueness constraints, and
+efficient lookup by ranges of those keys. Kudu chooses not to include an execution engine, but supports sufficient
+operations to allow node-local processing by external execution engines. This means that Kudu can support multiple frameworks on the same data (e.g. MR, Spark, and SQL).
+Druid includes its own query layer that allows it to push down aggregations and computations directly to data processes for faster query processing.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-key-value.html"><span class="arrow-prev">← </span><span class="function-name-prevnext">Apache Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/comparisons/druid-vs-redshift.html"><span>Apache Druid vs Redshift</span><span class="arrow-next"> →</span></a></div></div></div><nav class=" [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-redshift.html b/docs/0.16.0-incubating/comparisons/druid-vs-redshift.html
new file mode 100644
index 0000000..a408d49
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-redshift.html
@@ -0,0 +1,102 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs Redshift · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs Redshift · Apache Druid"/><meta property="og:type" content="website"/><meta property=" [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs Redshift</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="how-does-druid-compare-to-redshift"></a><a href="#how-does-druid-compare-to-redshift" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5 [...]
+<p>In terms of drawing a distinction, Redshift started out as ParAccel (Actian), which Amazon licensed and has since heavily modified.</p>
+<p>Aside from potential performance differences, there are some functional differences:</p>
+<h3><a class="anchor" aria-hidden="true" id="real-time-data-ingestion"></a><a href="#real-time-data-ingestion" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>Because Druid is optimized to provide insight into massive quantities of streaming data, it is able to load and aggregate data in real-time.</p>
+<p>Generally, traditional data warehouses, including column stores, work only with batch ingestion and are not optimal for regularly streaming data in.</p>
+<h3><a class="anchor" aria-hidden="true" id="druid-is-a-read-oriented-analytical-data-store"></a><a href="#druid-is-a-read-oriented-analytical-data-store" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3 [...]
+<p>Druid’s write semantics are not as fluid, and Druid does not support full joins (we support large-table-to-small-table joins). Redshift provides full SQL support, including joins and insert/update statements.</p>
+<h3><a class="anchor" aria-hidden="true" id="data-distribution-model"></a><a href="#data-distribution-model" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>Druid’s data distribution is segment-based and leverages a highly available &quot;deep&quot; storage such as S3 or HDFS. Scaling up (or down) does not require massive copy actions or downtime; in fact, losing any number of Historical processes does not result in data loss because new Historical processes can always be brought up by reading data from &quot;deep&quot; storage.</p>
+<p>To contrast, ParAccel’s data distribution model is hash-based. Expanding the cluster requires re-hashing the data across the nodes, making it difficult to perform without taking downtime. Amazon’s Redshift works around this issue with a multi-step process:</p>
+<ul>
+<li>set cluster into read-only mode</li>
+<li>copy data from cluster to new cluster that exists in parallel</li>
+<li>redirect traffic to new cluster</li>
+</ul>
+<h3><a class="anchor" aria-hidden="true" id="replication-strategy"></a><a href="#replication-strategy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>Druid employs segment-level data distribution, meaning that more processes can be added and rebalanced without having to perform a staged swap. The replication strategy also makes all replicas available for querying. Replication is done automatically and without any impact on performance.</p>
+<p>ParAccel’s hash-based distribution generally means that replication is conducted via hot spares. This puts a numerical limit on the number of nodes you can lose without losing data, and this replication strategy often does not allow the hot spare to help share query load.</p>
+<h3><a class="anchor" aria-hidden="true" id="indexing-strategy"></a><a href="#indexing-strategy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>Along with column oriented structures, Druid uses indexing structures to speed up query execution when a filter is provided. Indexing structures do increase storage overhead (and make it more difficult to allow for mutation), but they also significantly speed up queries.</p>
+<p>ParAccel does not appear to employ indexing strategies.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-kudu.html"><span class="arrow-prev">← </span><span>Apache Druid vs Kudu</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/comparisons/druid-vs-spark.html"><span>Apache Druid vs Spark</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div cl [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-spark.html b/docs/0.16.0-incubating/comparisons/druid-vs-spark.html
new file mode 100644
index 0000000..d72155e
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-spark.html
@@ -0,0 +1,93 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs Spark · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs Spark · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs Spark</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Druid and Spark are complementary solutions as Druid can be used to accelerate OLAP queries in Spark.</p>
+<p>Spark is a general cluster computing framework initially designed around the concept of Resilient Distributed Datasets (RDDs).
+RDDs enable data reuse by persisting intermediate results
+in memory and enable Spark to provide fast computations for iterative algorithms.
+This is especially beneficial for certain workflows such as machine
+learning, where the same operation may be applied over and over
+again until some result is converged upon. The generality of Spark makes it very suitable as an engine to process (clean or transform) data.
+Although Spark provides the ability to query data through Spark SQL, much like Hadoop, the query latencies are not specifically targeted to be interactive (sub-second).</p>
+<p>Druid's focus is on extremely low latency queries; it is ideal for powering applications used by thousands of users, where each query must
+return fast enough that users can interactively explore the data. Druid fully indexes all data, and can act as a middle layer between Spark and your application.
+One typical setup seen in production is to process data in Spark and load the processed data into Druid for faster access.</p>
+<p>For more information about using Druid and Spark together, including benchmarks of the two systems, please see:</p>
+<p><a href="https://www.linkedin.com/pulse/combining-druid-spark-interactive-flexible-analytics-scale-butani">https://www.linkedin.com/pulse/combining-druid-spark-interactive-flexible-analytics-scale-butani</a></p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-redshift.html"><span class="arrow-prev">← </span><span>Apache Druid vs Redshift</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/comparisons/druid-vs-sql-on-hadoop.html"><span>Apache Druid vs SQL-on-Hadoop</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-foo [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-sql-on-hadoop.html b/docs/0.16.0-incubating/comparisons/druid-vs-sql-on-hadoop.html
new file mode 100644
index 0000000..c048900
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-sql-on-hadoop.html
@@ -0,0 +1,124 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Druid vs SQL-on-Hadoop · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Druid vs SQL-on-Hadoop · Apache Druid"/><meta property="og:type" content="website"/><meta  [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Druid vs SQL-on-Hadoop</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>SQL-on-Hadoop engines provide an
+execution engine for various data formats and data stores, and
+many can be made to push computations down to Druid, while providing a SQL interface to Druid.</p>
+<p>For a direct comparison between the technologies, and for deciding when to use only one or the other, it basically comes down to your
+product requirements and what the systems were designed to do.</p>
+<p>Druid was designed to</p>
+<ol>
+<li>be an always-on service</li>
+<li>ingest data in real-time</li>
+<li>handle slice-n-dice style ad-hoc queries</li>
+</ol>
+<p>SQL-on-Hadoop engines generally sidestep Map/Reduce, instead querying data directly from HDFS or, in some cases, other storage systems.
+Some of these engines (including Impala and Presto) can be colocated with HDFS data nodes and coordinate with them to achieve data locality for queries.
+What does this mean? We can talk about it in terms of three general areas:</p>
+<ol>
+<li>Queries</li>
+<li>Data Ingestion</li>
+<li>Query Flexibility</li>
+</ol>
+<h3><a class="anchor" aria-hidden="true" id="queries"></a><a href="#queries" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<p>Druid segments store data in a custom column format. Segments are scanned directly as part of queries, and each Druid server
+calculates a set of results that are eventually merged at the Broker level. This means that the data transferred between servers
+is queries and results, and all computation is done internally within the Druid servers.</p>
+<p>Most SQL-on-Hadoop engines are responsible for query planning and execution for underlying storage layers and storage formats.
+They are processes that stay on even if there is no query running (eliminating the JVM startup costs from Hadoop MapReduce).
+Some (Impala/Presto) SQL-on-Hadoop engines have daemon processes that can be run where the data is stored, virtually eliminating network transfer costs. There is still
+some latency overhead (e.g. serde time) associated with pulling data from the underlying storage layer into the computation layer. We are unaware of exactly
+how much of a performance impact this makes.</p>
+<h3><a class="anchor" aria-hidden="true" id="data-ingestion"></a><a href="#data-ingestion" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>Druid is built to allow for real-time ingestion of data.  You can ingest data and query it immediately upon ingestion;
+the latency for an event to be reflected in the data is dominated by how long it takes to deliver the event to Druid.</p>
+<p>SQL-on-Hadoop engines, being based on data in HDFS or some other backing store, are limited in their data ingestion rates by the
+rate at which that backing store can make data available.  Generally, the backing store is the biggest bottleneck for
+how quickly data can become available.</p>
+<h3><a class="anchor" aria-hidden="true" id="query-flexibility"></a><a href="#query-flexibility" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>Druid's query language is fairly low level and maps to how Druid operates internally. Although Druid can be combined with a high-level query
+planner such as <a href="https://github.com/implydata/plywood">Plywood</a> to support most SQL queries and analytic SQL queries (minus joins among large tables),
+base Druid is less flexible than SQL-on-Hadoop solutions for generic processing.</p>
+<p>SQL-on-Hadoop engines support SQL-style queries with full joins.</p>
+<h2><a class="anchor" aria-hidden="true" id="druid-vs-parquet"></a><a href="#druid-vs-parquet" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>Parquet is a column storage format that is designed to work with SQL-on-Hadoop engines. Parquet doesn't have a query execution engine, and instead
+relies on external sources to pull data out of it.</p>
+<p>Druid's storage format is highly optimized for linear scans. Although Druid has support for nested data, Parquet's storage format is much
+more hierarchical, and is designed more for binary chunking. In theory, this should lead to faster scans in Druid.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-spark.html"><span class="arrow-prev">← </span><span>Apache Druid vs Spark</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/auth.html"><span>Authentication and Authorization</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#druid-vs-parquet">Druid vs Parquet [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/comparisons/druid-vs-vertica.html b/docs/0.16.0-incubating/comparisons/druid-vs-vertica.html
new file mode 100644
index 0000000..a1d74ac
--- /dev/null
+++ b/docs/0.16.0-incubating/comparisons/druid-vs-vertica.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="druid-vs-redshift.html">
+<meta http-equiv="refresh" content="0; url=druid-vs-redshift.html">
+<h1>Redirecting...</h1>
+<a href="druid-vs-redshift.html">Click here if you are not redirected.</a>
+<script>location="druid-vs-redshift.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/auth.html b/docs/0.16.0-incubating/configuration/auth.html
new file mode 100644
index 0000000..ea2aebe
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/auth.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../design/auth.html">
+<meta http-equiv="refresh" content="0; url=../design/auth.html">
+<h1>Redirecting...</h1>
+<a href="../design/auth.html">Click here if you are not redirected.</a>
+<script>location="../design/auth.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/broker.html b/docs/0.16.0-incubating/configuration/broker.html
new file mode 100644
index 0000000..72363c4
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/broker.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../configuration/index.html#broker">
+<meta http-equiv="refresh" content="0; url=../configuration/index.html#broker">
+<h1>Redirecting...</h1>
+<a href="../configuration/index.html#broker">Click here if you are not redirected.</a>
+<script>location="../configuration/index.html#broker"</script>
diff --git a/docs/0.16.0-incubating/configuration/caching.html b/docs/0.16.0-incubating/configuration/caching.html
new file mode 100644
index 0000000..dcb5dd6
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/caching.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../configuration/index.html#cache-configuration">
+<meta http-equiv="refresh" content="0; url=../configuration/index.html#cache-configuration">
+<h1>Redirecting...</h1>
+<a href="../configuration/index.html#cache-configuration">Click here if you are not redirected.</a>
+<script>location="../configuration/index.html#cache-configuration"</script>
diff --git a/docs/0.16.0-incubating/configuration/coordinator.html b/docs/0.16.0-incubating/configuration/coordinator.html
new file mode 100644
index 0000000..e32a17a
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/coordinator.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../configuration/index.html#coordinator">
+<meta http-equiv="refresh" content="0; url=../configuration/index.html#coordinator">
+<h1>Redirecting...</h1>
+<a href="../configuration/index.html#coordinator">Click here if you are not redirected.</a>
+<script>location="../configuration/index.html#coordinator"</script>
diff --git a/docs/0.16.0-incubating/configuration/hadoop.html b/docs/0.16.0-incubating/configuration/hadoop.html
new file mode 100644
index 0000000..a14b2fb
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/hadoop.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../ingestion/hadoop.html">
+<meta http-equiv="refresh" content="0; url=../ingestion/hadoop.html">
+<h1>Redirecting...</h1>
+<a href="../ingestion/hadoop.html">Click here if you are not redirected.</a>
+<script>location="../ingestion/hadoop.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/historical.html b/docs/0.16.0-incubating/configuration/historical.html
new file mode 100644
index 0000000..1112bc0
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/historical.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../configuration/index.html#historical">
+<meta http-equiv="refresh" content="0; url=../configuration/index.html#historical">
+<h1>Redirecting...</h1>
+<a href="../configuration/index.html#historical">Click here if you are not redirected.</a>
+<script>location="../configuration/index.html#historical"</script>
diff --git a/docs/0.16.0-incubating/configuration/index.html b/docs/0.16.0-incubating/configuration/index.html
new file mode 100644
index 0000000..62afd03
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/index.html
@@ -0,0 +1,1801 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Configuration reference · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Configuration reference · Apache Druid"/><meta property="og:type" content="website"/><meta property="og [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Configuration reference</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>This page documents all of the configuration properties for each Druid service type.</p>
+<h2><a class="anchor" aria-hidden="true" id="recommended-configuration-file-organization"></a><a href="#recommended-configuration-file-organization" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h [...]
+<p>A recommended way of organizing Druid configuration files can be seen in the <code>conf</code> directory in the Druid package root, shown below:</p>
+<pre><code class="hljs">$ ls -R conf
+druid       tranquility
+
+conf/druid:
+_common       broker        coordinator   historical    middleManager overlord
+
+conf/druid/_common:
+common<span class="hljs-selector-class">.runtime</span><span class="hljs-selector-class">.properties</span> log4j2<span class="hljs-selector-class">.xml</span>
+
+conf/druid/broker:
+jvm<span class="hljs-selector-class">.config</span>         runtime<span class="hljs-selector-class">.properties</span>
+
+conf/druid/coordinator:
+jvm<span class="hljs-selector-class">.config</span>         runtime<span class="hljs-selector-class">.properties</span>
+
+conf/druid/historical:
+jvm<span class="hljs-selector-class">.config</span>         runtime<span class="hljs-selector-class">.properties</span>
+
+conf/druid/middleManager:
+jvm<span class="hljs-selector-class">.config</span>         runtime<span class="hljs-selector-class">.properties</span>
+
+conf/druid/overlord:
+jvm<span class="hljs-selector-class">.config</span>         runtime<span class="hljs-selector-class">.properties</span>
+
+conf/tranquility:
+kafka<span class="hljs-selector-class">.json</span>  server<span class="hljs-selector-class">.json</span>
+</code></pre>
+<p>Each directory has a <code>runtime.properties</code> file containing configuration properties for the specific Druid process corresponding to the directory (e.g., <code>historical</code>).</p>
+<p>The <code>jvm.config</code> files contain JVM flags such as heap sizing properties for each service.</p>
+<p>Common properties shared by all services are placed in <code>_common/common.runtime.properties</code>.</p>
+<h2><a class="anchor" aria-hidden="true" id="common-configurations"></a><a href="#common-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>The properties under this section are common configurations that should be shared across all Druid services in a cluster.</p>
+<h3><a class="anchor" aria-hidden="true" id="jvm-configuration-best-practices"></a><a href="#jvm-configuration-best-practices" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<p>There are four JVM parameters that we set on all of our processes; an example <code>jvm.config</code> sketch follows the list:</p>
+<ol>
+<li><code>-Duser.timezone=UTC</code> This sets the default timezone of the JVM to UTC. We always set this and do not test with other default timezones, so local timezones might work, but they also might uncover weird and interesting bugs. To issue queries in a non-UTC timezone, see <a href="../querying/granularities.html#period-granularities">query granularities</a></li>
+<li><code>-Dfile.encoding=UTF-8</code> This is similar to the timezone setting: we test assuming UTF-8. Local encodings might work, but they also might result in weird and interesting bugs.</li>
+<li><code>-Djava.io.tmpdir=&lt;a path&gt;</code> Various parts of the system that interact with the file system do so via temporary files, and these files can get somewhat large. Many production systems are set up to have small (but fast) <code>/tmp</code> directories, which can be problematic with Druid, so we recommend pointing the JVM’s tmp directory to something with a little more meat.</li>
+<li><code>-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager</code> This allows log4j2 to handle logs for non-log4j2 components (like Jetty) that use standard Java logging.</li>
+</ol>
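+<p>As an illustration only (the heap sizes and tmp path below are placeholder values, not recommendations), a <code>jvm.config</code> applying the four flags above might look like:</p>
+<pre><code class="hljs">-server
+-Xms1g
+-Xmx1g
+-Duser.timezone=UTC
+-Dfile.encoding=UTF-8
+-Djava.io.tmpdir=/path/to/druid/tmp
+-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+</code></pre>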
+<h3><a class="anchor" aria-hidden="true" id="extensions"></a><a href="#extensions" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Many of Druid's external dependencies can be plugged in as modules. Extensions can be provided using the following configs:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.extensions.directory</code></td><td>The root extension directory where users can put extension-related files. Druid will load extensions stored under this directory.</td><td><code>extensions</code> (This is a relative path to Druid's working directory)</td></tr>
+<tr><td><code>druid.extensions.hadoopDependenciesDir</code></td><td>The root Hadoop dependencies directory where users can put Hadoop-related dependency files. Druid will load the dependencies based on the Hadoop coordinate specified in the Hadoop index task.</td><td><code>hadoop-dependencies</code> (This is a relative path to Druid's working directory)</td></tr>
+<tr><td><code>druid.extensions.loadList</code></td><td>A JSON array of extensions to load from extension directories by Druid. If it is not specified, its value will be <code>null</code> and Druid will load all the extensions under <code>druid.extensions.directory</code>. If its value is the empty list <code>[]</code>, then no extensions will be loaded at all. It is also possible to specify the absolute path of custom extensions not stored in the common extensions directory.</td><td>null</td></tr>
+<tr><td><code>druid.extensions.searchCurrentClassloader</code></td><td>This is a boolean flag that determines if Druid will search the main classloader for extensions.  It defaults to true but can be turned off if you have reason to not automatically add all modules on the classpath.</td><td>true</td></tr>
+<tr><td><code>druid.extensions.useExtensionClassloaderFirst</code></td><td>This is a boolean flag that determines if Druid extensions should prefer loading classes from their own jars rather than jars bundled with Druid. If false, extensions must be compatible with classes provided by any jars bundled with Druid. If true, extensions may depend on conflicting versions.</td><td>false</td></tr>
+<tr><td><code>druid.extensions.hadoopContainerDruidClasspath</code></td><td>Hadoop indexing launches Hadoop jobs, and this configuration provides a way to explicitly set the user classpath for the Hadoop job. By default this is computed automatically by Druid based on the Druid process classpath and the set of extensions. However, sometimes you might want to be explicit to resolve dependency conflicts between Druid and Hadoop.</td><td>null</td></tr>
+<tr><td><code>druid.extensions.addExtensionsToHadoopContainer</code></td><td>Only applicable if <code>druid.extensions.hadoopContainerDruidClasspath</code> is provided. If set to true, then extensions specified in the loadList are added to the Hadoop container classpath. Note that when <code>druid.extensions.hadoopContainerDruidClasspath</code> is not provided, extensions are always added to the Hadoop container classpath.</td><td>false</td></tr>
+</tbody>
+</table>
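+<p>As a minimal sketch (the extension names below are illustrative and must correspond to directories actually present under <code>druid.extensions.directory</code>), the extension settings in <code>common.runtime.properties</code> might look like:</p>
+<pre><code class="hljs">druid.extensions.directory=extensions
+druid.extensions.hadoopDependenciesDir=hadoop-dependencies
+druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service"]
+</code></pre>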
+<h3><a class="anchor" aria-hidden="true" id="modules"></a><a href="#modules" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.modules.excludeList</code></td><td>A JSON array of canonical class names (e. g. <code>&quot;org.apache.druid.somepackage.SomeModule&quot;</code>) of module classes which shouldn't be loaded, even if they are found in extensions specified by <code>druid.extensions.loadList</code>, or in the list of core modules specified to be loaded on a particular Druid process type. Useful when some useful extension contains some module, which shouldn't be loaded on some Druid proce [...]
+</tbody>
+</table>
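+<p>For example, using the placeholder class name from the table above, excluding a module in <code>common.runtime.properties</code> might look like:</p>
+<pre><code class="hljs">druid.modules.excludeList=["org.apache.druid.somepackage.SomeModule"]
+</code></pre>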
+<h3><a class="anchor" aria-hidden="true" id="zookeeper"></a><a href="#zookeeper" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.6 [...]
+<p>We recommend just setting the base ZK path and the ZK service host, but all ZK paths that Druid uses can be overwritten to absolute paths.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.zk.paths.base</code></td><td>Base Zookeeper path.</td><td><code>/druid</code></td></tr>
+<tr><td><code>druid.zk.service.host</code></td><td>The ZooKeeper hosts to connect to. This is a REQUIRED property and therefore a host address must be supplied.</td><td>none</td></tr>
+<tr><td><code>druid.zk.service.user</code></td><td>The username to authenticate with ZooKeeper. This is an optional property.</td><td>none</td></tr>
+<tr><td><code>druid.zk.service.pwd</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or the string password to authenticate with ZooKeeper. This is an optional property.</td><td>none</td></tr>
+<tr><td><code>druid.zk.service.authScheme</code></td><td>digest is the only authentication scheme supported.</td><td>digest</td></tr>
+</tbody>
+</table>
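+<p>A minimal sketch of these settings, with hypothetical hostnames:</p>
+<pre><code class="hljs">druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+druid.zk.paths.base=/druid
+</code></pre>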
+<h4><a class="anchor" aria-hidden="true" id="zookeeper-behavior"></a><a href="#zookeeper-behavior" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.zk.service.sessionTimeoutMs</code></td><td>ZooKeeper session timeout, in milliseconds.</td><td><code>30000</code></td></tr>
+<tr><td><code>druid.zk.service.connectionTimeoutMs</code></td><td>ZooKeeper connection timeout, in milliseconds.</td><td><code>15000</code></td></tr>
+<tr><td><code>druid.zk.service.compress</code></td><td>Boolean flag for whether or not created Znodes should be compressed.</td><td><code>true</code></td></tr>
+<tr><td><code>druid.zk.service.acl</code></td><td>Boolean flag for whether or not to enable ACL security for ZooKeeper. If ACL is enabled, zNode creators will have all permissions.</td><td><code>false</code></td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="path-configuration"></a><a href="#path-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Druid interacts with ZK through a set of standard path configurations. We recommend just setting the base ZK path, but all ZK paths that Druid uses can be overwritten to absolute paths.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.zk.paths.base</code></td><td>Base Zookeeper path.</td><td><code>/druid</code></td></tr>
+<tr><td><code>druid.zk.paths.propertiesPath</code></td><td>Zookeeper properties path.</td><td><code>${druid.zk.paths.base}/properties</code></td></tr>
+<tr><td><code>druid.zk.paths.announcementsPath</code></td><td>Druid process announcement path.</td><td><code>${druid.zk.paths.base}/announcements</code></td></tr>
+<tr><td><code>druid.zk.paths.liveSegmentsPath</code></td><td>Current path for where Druid processes announce their segments.</td><td><code>${druid.zk.paths.base}/segments</code></td></tr>
+<tr><td><code>druid.zk.paths.loadQueuePath</code></td><td>Entries here cause Historical processes to load and drop segments.</td><td><code>${druid.zk.paths.base}/loadQueue</code></td></tr>
+<tr><td><code>druid.zk.paths.coordinatorPath</code></td><td>Used by the Coordinator for leader election.</td><td><code>${druid.zk.paths.base}/coordinator</code></td></tr>
+<tr><td><code>druid.zk.paths.servedSegmentsPath</code></td><td>@Deprecated. Legacy path for where Druid processes announce their segments.</td><td><code>${druid.zk.paths.base}/servedSegments</code></td></tr>
+</tbody>
+</table>
+<p>The indexing service also uses its own set of paths. These configs can be included in the common configuration.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.zk.paths.indexer.base</code></td><td>Base ZooKeeper path for the indexing service.</td><td><code>${druid.zk.paths.base}/indexer</code></td></tr>
+<tr><td><code>druid.zk.paths.indexer.announcementsPath</code></td><td>Middle managers announce themselves here.</td><td><code>${druid.zk.paths.indexer.base}/announcements</code></td></tr>
+<tr><td><code>druid.zk.paths.indexer.tasksPath</code></td><td>Used to assign tasks to MiddleManagers.</td><td><code>${druid.zk.paths.indexer.base}/tasks</code></td></tr>
+<tr><td><code>druid.zk.paths.indexer.statusPath</code></td><td>Parent path for announcement of task statuses.</td><td><code>${druid.zk.paths.indexer.base}/status</code></td></tr>
+</tbody>
+</table>
+<p>If <code>druid.zk.paths.base</code> and <code>druid.zk.paths.indexer.base</code> are both set, and none of the other <code>druid.zk.paths.*</code> or <code>druid.zk.paths.indexer.*</code> values are set, then the other properties will be evaluated relative to their respective <code>base</code>.
+For example, if <code>druid.zk.paths.base</code> is set to <code>/druid1</code> and <code>druid.zk.paths.indexer.base</code> is set to <code>/druid2</code> then <code>druid.zk.paths.announcementsPath</code> will default to <code>/druid1/announcements</code> while <code>druid.zk.paths.indexer.announcementsPath</code> will default to <code>/druid2/announcements</code>.</p>
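+<p>For example, a minimal sketch of the relative-path behavior described above (the values are illustrative, not defaults):</p>
+<pre><code class="hljs"># common.runtime.properties (illustrative values)
+druid.zk.paths.base=/druid1
+druid.zk.paths.indexer.base=/druid2
+# With only the two base paths set, druid.zk.paths.announcementsPath resolves to
+# /druid1/announcements and druid.zk.paths.indexer.announcementsPath to /druid2/announcements.
+</code></pre>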
+<p>The following path is used for service discovery. It is <strong>not</strong> affected by <code>druid.zk.paths.base</code> and <strong>must</strong> be specified separately.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.discovery.curator.path</code></td><td>Services announce themselves under this ZooKeeper path.</td><td><code>/druid/discovery</code></td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="exhibitor"></a><a href="#exhibitor" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.6 [...]
+<p><a href="https://github.com/Netflix/exhibitor/wiki">Exhibitor</a> is a supervisor system for ZooKeeper.
+Exhibitor can dynamically scale-up/down the cluster of ZooKeeper servers.
+Druid can update self-owned list of ZooKeeper servers through Exhibitor without restarting.
+That is, it allows Druid to keep the connections of Exhibitor-supervised ZooKeeper servers.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.exhibitor.service.hosts</code></td><td>A JSON array containing the hostnames of the Exhibitor instances. Please specify this property if you want to use an Exhibitor-supervised cluster.</td><td>none</td></tr>
+<tr><td><code>druid.exhibitor.service.port</code></td><td>The REST port used to connect to Exhibitor.</td><td><code>8080</code></td></tr>
+<tr><td><code>druid.exhibitor.service.restUriPath</code></td><td>The path of the REST call used to get the server set.</td><td><code>/exhibitor/v1/cluster/list</code></td></tr>
+<tr><td><code>druid.exhibitor.service.useSsl</code></td><td>Boolean flag for whether or not to use https protocol.</td><td><code>false</code></td></tr>
+<tr><td><code>druid.exhibitor.service.pollingMs</code></td><td>How often to poll the Exhibitor instances for the server list, in milliseconds.</td><td><code>10000</code></td></tr>
+</tbody>
+</table>
+<p>Note that <code>druid.zk.service.host</code> is used as a backup in case an Exhibitor instance can't be contacted and therefore should still be set.</p>
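+<p>A minimal sketch of an Exhibitor-supervised setup; the hostnames below are placeholders, not defaults:</p>
+<pre><code class="hljs"># Hostnames are placeholders for illustration only.
+druid.exhibitor.service.hosts=["exhibitor1.example.com","exhibitor2.example.com"]
+druid.exhibitor.service.port=8080
+druid.exhibitor.service.pollingMs=10000
+# Keep druid.zk.service.host set as a fallback in case Exhibitor cannot be contacted.
+druid.zk.service.host=zk1.example.com
+</code></pre>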
+<h3><a class="anchor" aria-hidden="true" id="tls"></a><a href="#tls" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.2 [...]
+<h4><a class="anchor" aria-hidden="true" id="general-configuration"></a><a href="#general-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.enablePlaintextPort</code></td><td>Enable/Disable HTTP connector.</td><td><code>true</code></td></tr>
+<tr><td><code>druid.enableTlsPort</code></td><td>Enable/Disable HTTPS connector.</td><td><code>false</code></td></tr>
+</tbody>
+</table>
+<p>Although not recommended, both the HTTP and HTTPS connectors can be enabled at the same time. The respective ports are configurable using the <code>druid.plaintextPort</code>
+and <code>druid.tlsPort</code> properties on each process. Please see the <code>Configuration</code> section of each process for the valid and default values of these ports.</p>
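+<p>For example, a minimal sketch that disables plaintext HTTP and enables HTTPS (the HTTPS port itself is process-specific, as noted above):</p>
+<pre><code class="hljs">druid.enablePlaintextPort=false
+druid.enableTlsPort=true
+# The HTTPS port is set per process via druid.tlsPort; see each process's Configuration section.
+</code></pre>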
+<h4><a class="anchor" aria-hidden="true" id="jetty-server-tls-configuration"></a><a href="#jetty-server-tls-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>Druid uses Jetty as an embedded web server. To become familiar with TLS/SSL in general, and related concepts such as certificates,
+this <a href="http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html">Jetty documentation</a> may be helpful.
+For more in-depth knowledge of TLS/SSL support in Java, please refer to this <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html">guide</a>.
+The documentation <a href="http://www.eclipse.org/jetty/documentation/9.4.x/configuring-ssl.html#configuring-sslcontextfactory">here</a>
+can help in understanding the TLS/SSL configurations listed below. This <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html">document</a> lists the possible
+values for the configs mentioned below, among others provided by the Java implementation.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.https.keyStorePath</code></td><td>The file path or URL of the TLS/SSL Key store.</td><td>none</td><td>yes</td></tr>
+<tr><td><code>druid.server.https.keyStoreType</code></td><td>The type of the key store.</td><td>none</td><td>yes</td></tr>
+<tr><td><code>druid.server.https.certAlias</code></td><td>Alias of TLS/SSL certificate for the connector.</td><td>none</td><td>yes</td></tr>
+<tr><td><code>druid.server.https.keyStorePassword</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or String password for the Key Store.</td><td>none</td><td>yes</td></tr>
+</tbody>
+</table>
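+<p>A minimal sketch of the required server-side TLS properties; the keystore path, type, alias, and password below are placeholders, not defaults:</p>
+<pre><code class="hljs"># Placeholder keystore values for illustration only.
+druid.server.https.keyStorePath=/opt/druid/conf/server.jks
+druid.server.https.keyStoreType=jks
+druid.server.https.certAlias=druid
+druid.server.https.keyStorePassword=changeit
+</code></pre>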
+<p>The following table contains optional, advanced configuration options. Use them with caution.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.https.keyManagerFactoryAlgorithm</code></td><td>Algorithm to use for creating the KeyManager; more details <a href="https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#KeyManager">here</a>.</td><td><code>javax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()</code></td><td>no</td></tr>
+<tr><td><code>druid.server.https.keyManagerPassword</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or String password for the Key Manager.</td><td>none</td><td>no</td></tr>
+<tr><td><code>druid.server.https.includeCipherSuites</code></td><td>List of cipher suite names to include. You can either use the exact cipher suite name or a regular expression.</td><td>Jetty's default include cipher list</td><td>no</td></tr>
+<tr><td><code>druid.server.https.excludeCipherSuites</code></td><td>List of cipher suite names to exclude. You can either use the exact cipher suite name or a regular expression.</td><td>Jetty's default exclude cipher list</td><td>no</td></tr>
+<tr><td><code>druid.server.https.includeProtocols</code></td><td>List of exact protocols names to include.</td><td>Jetty's default include protocol list</td><td>no</td></tr>
+<tr><td><code>druid.server.https.excludeProtocols</code></td><td>List of exact protocols names to exclude.</td><td>Jetty's default exclude protocol list</td><td>no</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="internal-client-tls-configuration-requires-simple-client-sslcontext-extension"></a><a href="#internal-client-tls-configuration-requires-simple-client-sslcontext-extension" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.2 [...]
+<p>These properties apply to the SSLContext that will be provided to the internal HTTP client that Druid services use to communicate with each other. These properties require the <code>simple-client-sslcontext</code> extension to be loaded. Without it, Druid services will be unable to communicate with each other when TLS is enabled.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.client.https.protocol</code></td><td>SSL protocol to use.</td><td><code>TLSv1.2</code></td><td>no</td></tr>
+<tr><td><code>druid.client.https.trustStoreType</code></td><td>The type of the key store where trusted root certificates are stored.</td><td><code>java.security.KeyStore.getDefaultType()</code></td><td>no</td></tr>
+<tr><td><code>druid.client.https.trustStorePath</code></td><td>The file path or URL of the TLS/SSL Key store where trusted root certificates are stored.</td><td>none</td><td>yes</td></tr>
+<tr><td><code>druid.client.https.trustStoreAlgorithm</code></td><td>Algorithm to be used by TrustManager to validate certificate chains</td><td><code>javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()</code></td><td>no</td></tr>
+<tr><td><code>druid.client.https.trustStorePassword</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or String password for the Trust Store.</td><td>none</td><td>yes</td></tr>
+</tbody>
+</table>
+<p>This <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html">document</a> lists the possible
+values for the configs mentioned above, among others provided by the Java implementation.</p>
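+<p>A minimal sketch of the internal client TLS properties, assuming the <code>simple-client-sslcontext</code> extension is loaded and using a placeholder truststore path and password:</p>
+<pre><code class="hljs"># The simple-client-sslcontext extension must be loaded (for example via druid.extensions.loadList).
+druid.client.https.protocol=TLSv1.2
+druid.client.https.trustStorePath=/opt/druid/conf/truststore.jks
+druid.client.https.trustStorePassword=changeit
+</code></pre>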
+<h3><a class="anchor" aria-hidden="true" id="authentication-and-authorization"></a><a href="#authentication-and-authorization" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Type</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.auth.authenticatorChain</code></td><td>JSON List of Strings</td><td>List of Authenticator type names</td><td>[&quot;allowAll&quot;]</td><td>no</td></tr>
+<tr><td><code>druid.escalator.type</code></td><td>String</td><td>Type of the Escalator that should be used for internal Druid communications. This Escalator must use an authentication scheme that is supported by an Authenticator in <code>druid.auth.authenticatorChain</code>.</td><td>&quot;noop&quot;</td><td>no</td></tr>
+<tr><td><code>druid.auth.authorizers</code></td><td>JSON List of Strings</td><td>List of Authorizer type names</td><td>[&quot;allowAll&quot;]</td><td>no</td></tr>
+<tr><td><code>druid.auth.unsecuredPaths</code></td><td>List of Strings</td><td>List of paths for which security checks will not be performed. All requests to these paths will be allowed.</td><td>[]</td><td>no</td></tr>
+<tr><td><code>druid.auth.allowUnauthenticatedHttpOptions</code></td><td>Boolean</td><td>If true, skip authentication checks for HTTP OPTIONS requests. This is needed for certain use cases, such as supporting CORS pre-flight requests. Note that disabling authentication checks for OPTIONS requests will allow unauthenticated users to determine what Druid endpoints are valid (by checking if the OPTIONS request returns a 200 instead of 404), so enabling this option may reveal information abou [...]
+</tbody>
+</table>
+<p>For more information, please see <a href="/docs/0.16.0-incubating/design/auth.html">Authentication and Authorization</a>.</p>
+<p>For configuration options for specific auth extensions, please refer to the extension documentation.</p>
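+<p>As a hedged illustration only, the chain properties are typically set together. The authenticator and authorizer names below are placeholders whose definitions must come from whichever auth extension you load:</p>
+<pre><code class="hljs"># Placeholder names; the corresponding authenticator/authorizer definitions
+# come from the security extension you load (see the extension docs).
+druid.auth.authenticatorChain=["MyAuthenticator"]
+druid.auth.authorizers=["MyAuthorizer"]
+druid.auth.unsecuredPaths=["/status/health"]
+druid.escalator.type=noop
+</code></pre>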
+<h3><a class="anchor" aria-hidden="true" id="startup-logging"></a><a href="#startup-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>All processes can log debugging information on startup.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.startup.logging.logProperties</code></td><td>Log all properties on startup (from common.runtime.properties, runtime.properties, and the JVM command line).</td><td>false</td></tr>
+<tr><td><code>druid.startup.logging.maskProperties</code></td><td>Masks sensitive properties (passwords, for example) containing these words.</td><td>[&quot;password&quot;]</td></tr>
+</tbody>
+</table>
+<p>Note that some sensitive information may be logged if these settings are enabled.</p>
+<h3><a class="anchor" aria-hidden="true" id="request-logging"></a><a href="#request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>All processes that can serve queries can also log the query requests they see. Broker processes can additionally log the SQL requests (both from HTTP and JDBC) they see.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.type</code></td><td>Choices: noop, file, emitter, slf4j, filtered, composing, switching. How to log every query request.</td><td>[required to configure request logging]</td></tr>
+</tbody>
+</table>
+<p>Note that you can log all HTTP requests by setting &quot;org.apache.druid.jetty.RequestLog&quot; to the DEBUG level. See <a href="/docs/0.16.0-incubating/configuration/logging.html">Logging</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="file-request-logging"></a><a href="#file-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>Daily request logs are stored on disk.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.dir</code></td><td>Historical, Realtime, and Broker processes maintain request logs of all of the requests they receive (interaction is via POST, so normal request logs don’t generally capture information about the actual query). This property specifies the directory in which to store the request logs.</td><td>none</td></tr>
+<tr><td><code>druid.request.logging.filePattern</code></td><td><a href="http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html">Joda datetime format</a> for each file</td><td>&quot;yyyy-MM-dd'.log'&quot;</td></tr>
+</tbody>
+</table>
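+<p>A minimal sketch of file-based request logging, using a placeholder log directory:</p>
+<pre><code class="hljs">druid.request.logging.type=file
+# Placeholder directory for illustration only.
+druid.request.logging.dir=/var/druid/request-logs
+druid.request.logging.filePattern=yyyy-MM-dd'.log'
+</code></pre>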
+<p>The format of request logs is TSV, one line per request, with five fields: timestamp, remote_addr, native_query, query_context, sql_query.</p>
+<p>For a native JSON request, the <code>sql_query</code> field is empty. Example:</p>
+<pre><code class="hljs"><span class="hljs-number">2019</span>-<span class="hljs-number">01</span>-<span class="hljs-number">14</span><span class="hljs-symbol">T10:</span><span class="hljs-number">00</span><span class="hljs-symbol">:</span><span class="hljs-number">00</span>.<span class="hljs-number">000</span>Z        <span class="hljs-number">127.0</span>.<span class="hljs-number">0</span>.<span class="hljs-number">1</span>   {<span class="hljs-string">"queryType"</span><span class="hlj [...]
+</code></pre>
+<p>For a SQL query request, the <code>native_query</code> field is empty. Example:</p>
+<pre><code class="hljs"><span class="hljs-number">2019</span><span class="hljs-number">-01</span><span class="hljs-number">-14</span><span class="hljs-string">T10:</span><span class="hljs-number">00</span>:<span class="hljs-number">00.000</span>Z        <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>       {<span class="hljs-string">"sqlQuery/time"</span>:<span class="hljs-number">100</span>,<span class="hljs-string">"sqlQuery/ [...]
+</code></pre>
+<h4><a class="anchor" aria-hidden="true" id="emitter-request-logging"></a><a href="#emitter-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>Every request is emitted to some external location.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.feed</code></td><td>Feed name for requests.</td><td>none</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="slf4j-request-logging"></a><a href="#slf4j-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>Every request is logged via SLF4J. Native queries are serialized into JSON in the log message regardless of the SLF4J format specification. They will be logged under the class <code>org.apache.druid.server.log.LoggingRequestLogger</code>.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.setMDC</code></td><td>Whether MDC entries should be set in the log entry. Your logging setup still has to be configured to handle MDC in order to format this data.</td><td>false</td></tr>
+<tr><td><code>druid.request.logging.setContextMDC</code></td><td>Whether the Druid query <code>context</code> should be added to the MDC entries. Has no effect unless <code>setMDC</code> is <code>true</code>.</td><td>false</td></tr>
+</tbody>
+</table>
+<p>For a native query, the following MDC fields are populated when <code>setMDC</code> is enabled:</p>
+<table>
+<thead>
+<tr><th>MDC field</th><th>Description</th></tr>
+</thead>
+<tbody>
+<tr><td><code>queryId</code></td><td>The query ID</td></tr>
+<tr><td><code>sqlQueryId</code></td><td>The SQL query ID if this query is part of a SQL request</td></tr>
+<tr><td><code>dataSource</code></td><td>The datasource the query was against</td></tr>
+<tr><td><code>queryType</code></td><td>The type of the query</td></tr>
+<tr><td><code>hasFilters</code></td><td>If the query has any filters</td></tr>
+<tr><td><code>remoteAddr</code></td><td>The remote address of the requesting client</td></tr>
+<tr><td><code>duration</code></td><td>The duration of the query interval</td></tr>
+<tr><td><code>resultOrdering</code></td><td>The ordering of results</td></tr>
+<tr><td><code>descending</code></td><td>If the query is a descending query</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="filtered-request-logging"></a><a href="#filtered-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>The filtered request logger filters requests based on a configurable query/time threshold (for native queries) and a sqlQuery/time threshold (for SQL queries).
+Only request logs whose query/time (for native queries) or sqlQuery/time (for SQL queries) is above the corresponding threshold are emitted.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.queryTimeThresholdMs</code></td><td>Threshold value for query/time in milliseconds.</td><td>0, i.e. no filtering</td></tr>
+<tr><td><code>druid.request.logging.sqlQueryTimeThresholdMs</code></td><td>Threshold value for sqlQuery/time in milliseconds.</td><td>0, i.e. no filtering</td></tr>
+<tr><td><code>druid.request.logging.mutedQueryTypes</code></td><td>Query requests of these types are not logged. Query types are defined as string objects corresponding to the &quot;queryType&quot; value for the specified query in the Druid's <a href="http://druid.apache.org/docs/0.16.0-incubating/latest/querying/querying.html">native JSON query API</a>. Misspelled query types will be ignored. Example to ignore scan and timeBoundary queries: [&quot;scan&quot;, &quot;timeBoundary&quot;]</ [...]
+<tr><td><code>druid.request.logging.delegate.type</code></td><td>Type of delegate request logger to log requests.</td><td>none</td></tr>
+</tbody>
+</table>
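+<p>A minimal sketch of a filtered request logger that only records slow queries and delegates to the file request logger (the threshold values are illustrative):</p>
+<pre><code class="hljs">druid.request.logging.type=filtered
+druid.request.logging.queryTimeThresholdMs=1000
+druid.request.logging.sqlQueryTimeThresholdMs=1000
+druid.request.logging.mutedQueryTypes=["timeBoundary"]
+druid.request.logging.delegate.type=file
+# Configure the delegate logger's own settings as described for that logger type.
+</code></pre>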
+<h4><a class="anchor" aria-hidden="true" id="composite-request-logging"></a><a href="#composite-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c [...]
+<p>Composite Request Logger emits request logs to multiple request loggers.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.loggerProviders</code></td><td>List of request loggers for emitting request logs.</td><td>none</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="switching-request-logging"></a><a href="#switching-request-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c [...]
+<p>The switching request logger routes native queries' request logs to one request logger and SQL queries' request logs to another.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.request.logging.nativeQueryLogger</code></td><td>Request logger for emitting native queries' request logs.</td><td>none</td></tr>
+<tr><td><code>druid.request.logging.sqlQueryLogger</code></td><td>Request logger for emitting SQL queries' request logs.</td><td>none</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="enabling-metrics"></a><a href="#enabling-metrics" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>Druid processes periodically emit metrics and different metrics monitors can be included. Each process can overwrite the default list of monitors.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.monitoring.emissionPeriod</code></td><td>How often metrics are emitted.</td><td>PT1M</td></tr>
+<tr><td><code>druid.monitoring.monitors</code></td><td>Sets list of Druid monitors used by a process. See below for names and more information. For example, you can specify monitors for a Broker with <code>druid.monitoring.monitors=[&quot;org.apache.druid.java.util.metrics.SysMonitor&quot;,&quot;org.apache.druid.java.util.metrics.JvmMonitor&quot;]</code>.</td><td>none (no monitors)</td></tr>
+</tbody>
+</table>
+<p>The following monitors are available:</p>
+<table>
+<thead>
+<tr><th>Name</th><th>Description</th></tr>
+</thead>
+<tbody>
+<tr><td><code>org.apache.druid.client.cache.CacheMonitor</code></td><td>Emits metrics (to logs) about the segment results cache for Historical and Broker processes. Reports typical cache statistics such as hits, misses, rates, and size (bytes and number of entries), as well as timeouts and errors.</td></tr>
+<tr><td><code>org.apache.druid.java.util.metrics.SysMonitor</code></td><td>This uses the <a href="https://github.com/hyperic/sigar">SIGAR library</a> to report on various system activities and statuses.</td></tr>
+<tr><td><code>org.apache.druid.server.metrics.HistoricalMetricsMonitor</code></td><td>Reports statistics on Historical processes.</td></tr>
+<tr><td><code>org.apache.druid.java.util.metrics.JvmMonitor</code></td><td>Reports various JVM-related statistics.</td></tr>
+<tr><td><code>org.apache.druid.java.util.metrics.JvmCpuMonitor</code></td><td>Reports statistics of CPU consumption by the JVM.</td></tr>
+<tr><td><code>org.apache.druid.java.util.metrics.CpuAcctDeltaMonitor</code></td><td>Reports consumed CPU as per the cpuacct cgroup.</td></tr>
+<tr><td><code>org.apache.druid.java.util.metrics.JvmThreadsMonitor</code></td><td>Reports thread statistics in the JVM, such as the number of total, daemon, started, and died threads.</td></tr>
+<tr><td><code>org.apache.druid.segment.realtime.RealtimeMetricsMonitor</code></td><td>Reports statistics on Realtime processes.</td></tr>
+<tr><td><code>org.apache.druid.server.metrics.EventReceiverFirehoseMonitor</code></td><td>Reports how many events have been queued in the EventReceiverFirehose.</td></tr>
+<tr><td><code>org.apache.druid.server.metrics.QueryCountStatsMonitor</code></td><td>Reports how many queries have been successful/failed/interrupted.</td></tr>
+<tr><td><code>org.apache.druid.server.emitter.HttpEmittingMonitor</code></td><td>Reports internal metrics of <code>http</code> or <code>parametrized</code> emitter (see below). Must not be used with another emitter type. See the description of the metrics here: <a href="https://github.com/apache/incubator-druid/pull/4973">https://github.com/apache/incubator-druid/pull/4973</a>.</td></tr>
+</tbody>
+</table>
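+<p>For example, a Broker that should report JVM and query-count statistics might use something like the following sketch:</p>
+<pre><code class="hljs">druid.monitoring.emissionPeriod=PT1M
+druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor","org.apache.druid.server.metrics.QueryCountStatsMonitor"]
+</code></pre>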
+<h3><a class="anchor" aria-hidden="true" id="emitting-metrics"></a><a href="#emitting-metrics" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>The Druid servers <a href="/docs/0.16.0-incubating/operations/metrics.html">emit various metrics</a> and alerts via something we call an Emitter. There are three emitter implementations included with the code: a &quot;noop&quot; emitter (the default if none is specified), one that just logs to log4j (&quot;logging&quot;), and one that POSTs JSON events to a server (&quot;http&quot;). The properties for using the logging emitter are described below.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter</code></td><td>Setting this value to &quot;noop&quot;, &quot;logging&quot;, &quot;http&quot; or &quot;parametrized&quot; will initialize one of the emitter modules. The value &quot;composing&quot; can be used to initialize multiple emitter modules.</td><td>noop</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="logging-emitter-module"></a><a href="#logging-emitter-module" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.logging.loggerClass</code></td><td>Choices: HttpPostEmitter, LoggingEmitter, NoopServiceEmitter, ServiceEmitter. The class used for logging.</td><td>LoggingEmitter</td></tr>
+<tr><td><code>druid.emitter.logging.logLevel</code></td><td>Choices: debug, info, warn, error. The log level at which messages are logged.</td><td>info</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="http-emitter-module"></a><a href="#http-emitter-module" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.http.flushMillis</code></td><td>How often the internal message buffer is flushed (data is sent).</td><td>60000</td></tr>
+<tr><td><code>druid.emitter.http.flushCount</code></td><td>How many messages the internal message buffer can hold before flushing (sending).</td><td>500</td></tr>
+<tr><td><code>druid.emitter.http.basicAuthentication</code></td><td>Login and password for authentication in &quot;login:password&quot; form, e.g. <code>druid.emitter.http.basicAuthentication=admin:adminpassword</code>.</td><td>not specified = no authentication</td></tr>
+<tr><td><code>druid.emitter.http.flushTimeOut</code></td><td>The timeout after which an event should be sent to the endpoint, even if internal buffers are not filled, in milliseconds.</td><td>not specified = no timeout</td></tr>
+<tr><td><code>druid.emitter.http.batchingStrategy</code></td><td>The strategy of how the batch is formatted. &quot;ARRAY&quot; means <code>[event1,event2]</code>, &quot;NEWLINES&quot; means <code>event1\nevent2</code>, ONLY_EVENTS means <code>event1event2</code>.</td><td>ARRAY</td></tr>
+<tr><td><code>druid.emitter.http.maxBatchSize</code></td><td>The maximum batch size, in bytes.</td><td>the minimum of (10% of JVM heap size divided by 2) or (5191680 (i.e. 5 MB))</td></tr>
+<tr><td><code>druid.emitter.http.batchQueueSizeLimit</code></td><td>The maximum number of batches in emitter queue, if there are problems with emitting.</td><td>the maximum of (2) or (10% of the JVM heap size divided by 5MB)</td></tr>
+<tr><td><code>druid.emitter.http.minHttpTimeoutMillis</code></td><td>If the rate at which batches are filled would impose a timeout smaller than this value, the batch is not sent to the endpoint at all, because the send would most likely fail. Configure this based on the emitter/successfulSending/minTimeMs metric. Reasonable values are 10ms..100ms.</td><td>0</td></tr>
+<tr><td><code>druid.emitter.http.recipientBaseUrl</code></td><td>The base URL to emit messages to. Druid will POST JSON to be consumed at the HTTP endpoint specified by this property.</td><td>none, required config</td></tr>
+</tbody>
+</table>
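+<p>A minimal sketch of the HTTP emitter; the recipient URL is a placeholder for whatever endpoint consumes the JSON events:</p>
+<pre><code class="hljs">druid.emitter=http
+# Placeholder endpoint for illustration only.
+druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid-events
+druid.emitter.http.flushMillis=60000
+druid.emitter.http.flushCount=500
+</code></pre>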
+<h4><a class="anchor" aria-hidden="true" id="http-emitter-module-tls-overrides"></a><a href="#http-emitter-module-tls-overrides" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S1 [...]
+<p>When emitting events to a TLS-enabled receiver, the Http Emitter will by default use an SSLContext obtained via the
+process described at <a href="../operations/tls-support.html">Druid's internal communication over TLS</a>, i.e., the same
+SSLContext that would be used for internal communications between Druid processes.</p>
+<p>In some use cases it may be desirable to have the Http Emitter use its own separate truststore configuration. For example, there may be organizational policies that prevent the TLS-enabled metrics receiver's certificate from being added to the same truststore used by Druid's internal HTTP client.</p>
+<p>The following properties allow the Http Emitter to use its own truststore configuration when building its SSLContext.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.http.ssl.useDefaultJavaContext</code></td><td>If set to true, the HttpEmitter will use <code>SSLContext.getDefault()</code>, the default Java SSLContext, and all other properties below are ignored.</td><td>false</td></tr>
+<tr><td><code>druid.emitter.http.ssl.trustStorePath</code></td><td>The file path or URL of the TLS/SSL Key store where trusted root certificates are stored. If this is unspecified, the Http Emitter will use the same SSLContext as Druid's internal HTTP client, as described in the beginning of this section, and all other properties below are ignored.</td><td>null</td></tr>
+<tr><td><code>druid.emitter.http.ssl.trustStoreType</code></td><td>The type of the key store where trusted root certificates are stored.</td><td><code>java.security.KeyStore.getDefaultType()</code></td></tr>
+<tr><td><code>druid.emitter.http.ssl.trustStoreAlgorithm</code></td><td>Algorithm to be used by TrustManager to validate certificate chains</td><td><code>javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()</code></td></tr>
+<tr><td><code>druid.emitter.http.ssl.trustStorePassword</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or String password for the Trust Store.</td><td>none</td></tr>
+<tr><td><code>druid.emitter.http.ssl.protocol</code></td><td>TLS protocol to use.</td><td>&quot;TLSv1.2&quot;</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="parametrized-http-emitter-module"></a><a href="#parametrized-http-emitter-module" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<p>The <code>druid.emitter.parametrized.httpEmitting.*</code> configs correspond to the configs of the HTTP Emitter Module above, with the exception of <code>recipientBaseUrl</code>. For example: <code>druid.emitter.parametrized.httpEmitting.flushMillis</code>,
+<code>druid.emitter.parametrized.httpEmitting.flushCount</code>, <code>druid.emitter.parametrized.httpEmitting.ssl.trustStorePath</code>, and so on.</p>
+<p>The additional configs are:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.parametrized.recipientBaseUrlPattern</code></td><td>The URL pattern to send an event to, based on the event's feed. For example, <code>http://foo.bar/{feed}</code> will send an event to <code>http://foo.bar/metrics</code> if the event's feed is &quot;metrics&quot;.</td><td>none, required config</td></tr>
+</tbody>
+</table>
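+<p>A minimal sketch of the parametrized emitter, reusing the <code>http://foo.bar/{feed}</code> pattern from the table above:</p>
+<pre><code class="hljs">druid.emitter=parametrized
+druid.emitter.parametrized.recipientBaseUrlPattern=http://foo.bar/{feed}
+druid.emitter.parametrized.httpEmitting.flushMillis=60000
+druid.emitter.parametrized.httpEmitting.flushCount=500
+</code></pre>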
+<h4><a class="anchor" aria-hidden="true" id="composing-emitter-module"></a><a href="#composing-emitter-module" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.composing.emitters</code></td><td>List of emitter modules to load e.g. [&quot;logging&quot;,&quot;http&quot;].</td><td>[]</td></tr>
+</tbody>
+</table>
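+<p>A minimal sketch that fans events out to both the logging and HTTP emitters (the recipient URL is a placeholder):</p>
+<pre><code class="hljs">druid.emitter=composing
+druid.emitter.composing.emitters=["logging","http"]
+# Placeholder endpoint for illustration only.
+druid.emitter.http.recipientBaseUrl=http://metrics-collector.example.com:8080/druid-events
+</code></pre>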
+<h4><a class="anchor" aria-hidden="true" id="graphite-emitter"></a><a href="#graphite-emitter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>To use Graphite as the emitter, set <code>druid.emitter=graphite</code>. For configuration details, please follow this <a href="/docs/0.16.0-incubating/development/extensions-contrib/graphite.html">link</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="metadata-storage"></a><a href="#metadata-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>These properties specify the JDBC connection and other configuration for the metadata storage. The only processes that connect to the metadata storage with these properties are the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> and <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a>.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.metadata.storage.type</code></td><td>The type of metadata storage to use. Choose from &quot;mysql&quot;, &quot;postgresql&quot;, or &quot;derby&quot;.</td><td>derby</td></tr>
+<tr><td><code>druid.metadata.storage.connector.connectURI</code></td><td>The JDBC URI for the database to connect to.</td><td>none</td></tr>
+<tr><td><code>druid.metadata.storage.connector.user</code></td><td>The username to connect with.</td><td>none</td></tr>
+<tr><td><code>druid.metadata.storage.connector.password</code></td><td>The <a href="/docs/0.16.0-incubating/operations/password-provider.html">Password Provider</a> or String password used to connect with.</td><td>none</td></tr>
+<tr><td><code>druid.metadata.storage.connector.createTables</code></td><td>If Druid requires a table and it doesn't exist, create it?</td><td>true</td></tr>
+<tr><td><code>druid.metadata.storage.tables.base</code></td><td>The base name for tables.</td><td>druid</td></tr>
+<tr><td><code>druid.metadata.storage.tables.dataSource</code></td><td>The table to use to look for dataSources created by the <a href="/docs/0.16.0-incubating/development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a>.</td><td>druid_dataSource</td></tr>
+<tr><td><code>druid.metadata.storage.tables.pendingSegments</code></td><td>The table to use to look for pending segments.</td><td>druid_pendingSegments</td></tr>
+<tr><td><code>druid.metadata.storage.tables.segments</code></td><td>The table to use to look for segments.</td><td>druid_segments</td></tr>
+<tr><td><code>druid.metadata.storage.tables.rules</code></td><td>The table to use to look for segment load/drop rules.</td><td>druid_rules</td></tr>
+<tr><td><code>druid.metadata.storage.tables.config</code></td><td>The table to use to look for configs.</td><td>druid_config</td></tr>
+<tr><td><code>druid.metadata.storage.tables.tasks</code></td><td>Used by the indexing service to store tasks.</td><td>druid_tasks</td></tr>
+<tr><td><code>druid.metadata.storage.tables.taskLog</code></td><td>Used by the indexing service to store task logs.</td><td>druid_taskLog</td></tr>
+<tr><td><code>druid.metadata.storage.tables.taskLock</code></td><td>Used by the indexing service to store task locks.</td><td>druid_taskLock</td></tr>
+<tr><td><code>druid.metadata.storage.tables.supervisors</code></td><td>Used by the indexing service to store supervisor configurations.</td><td>druid_supervisors</td></tr>
+<tr><td><code>druid.metadata.storage.tables.audit</code></td><td>The table to use for audit history of configuration changes e.g. Coordinator rules.</td><td>druid_audit</td></tr>
+</tbody>
+</table>
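+<p>A minimal sketch of a PostgreSQL-backed metadata store; the host, database name, and credentials below are placeholders, and the corresponding metadata storage extension (not shown here) must also be loaded:</p>
+<pre><code class="hljs"># Placeholder connection details for illustration only.
+druid.metadata.storage.type=postgresql
+druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
+druid.metadata.storage.connector.user=druid
+druid.metadata.storage.connector.password=diurd
+</code></pre>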
+<h3><a class="anchor" aria-hidden="true" id="deep-storage"></a><a href="#deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>These configurations concern how to push <a href="/docs/0.16.0-incubating/design/segments.html">Segments</a> to and pull them from deep storage.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.type</code></td><td>Choices: local, noop, s3, hdfs, c*. The type of deep storage to use.</td><td>local</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="local-deep-storage"></a><a href="#local-deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Local deep storage uses the local filesystem.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.storageDirectory</code></td><td>Directory on disk to use as deep storage.</td><td>/tmp/druid/localStorage</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="noop-deep-storage"></a><a href="#noop-deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>This deep storage doesn't do anything. There are no configs.</p>
+<h4><a class="anchor" aria-hidden="true" id="s3-deep-storage"></a><a href="#s3-deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>This deep storage is used to interface with Amazon's S3. Note that the <code>druid-s3-extensions</code> extension must be loaded.
+The below table shows some important configurations for S3. See <a href="/docs/0.16.0-incubating/development/extensions-core/s3.html">S3 Deep Storage</a> for full configurations.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.bucket</code></td><td>S3 bucket name.</td><td>none</td></tr>
+<tr><td><code>druid.storage.baseKey</code></td><td>S3 object key prefix for storage.</td><td>none</td></tr>
+<tr><td><code>druid.storage.disableAcl</code></td><td>Boolean flag for ACL. If this is set to <code>false</code>, full control is granted to the bucket owner. This may require setting additional permissions. See <a href="../development/extensions-core/s3.html#s3-permissions-settings">S3 permissions settings</a>.</td><td>false</td></tr>
+<tr><td><code>druid.storage.archiveBucket</code></td><td>S3 bucket name for archiving when running the <em>archive task</em>.</td><td>none</td></tr>
+<tr><td><code>druid.storage.archiveBaseKey</code></td><td>S3 object key prefix for archiving.</td><td>none</td></tr>
+<tr><td><code>druid.storage.sse.type</code></td><td>Server-side encryption type. Should be one of <code>s3</code>, <code>kms</code>, and <code>custom</code>. See the below <a href="../development/extensions-core/s3.html#server-side-encryption">Server-side encryption section</a> for more details.</td><td>None</td></tr>
+<tr><td><code>druid.storage.sse.kms.keyId</code></td><td>AWS KMS key ID. This is used only when <code>druid.storage.sse.type</code> is <code>kms</code> and can be empty to use the default key ID.</td><td>None</td></tr>
+<tr><td><code>druid.storage.sse.custom.base64EncodedKey</code></td><td>Base64-encoded key. Should be specified if <code>druid.storage.sse.type</code> is <code>custom</code>.</td><td>None</td></tr>
+<tr><td><code>druid.storage.useS3aSchema</code></td><td>If true, use the &quot;s3a&quot; filesystem when using Hadoop-based ingestion. If false, the &quot;s3n&quot; filesystem will be used. Only affects Hadoop-based ingestion.</td><td>false</td></tr>
+</tbody>
+</table>
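+<p>A minimal sketch of S3 deep storage, with a placeholder bucket and prefix (the <code>druid-s3-extensions</code> extension must be loaded, as noted above):</p>
+<pre><code class="hljs">druid.storage.type=s3
+# Placeholder bucket and prefix for illustration only.
+druid.storage.bucket=my-druid-bucket
+druid.storage.baseKey=druid/segments
+</code></pre>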
+<h4><a class="anchor" aria-hidden="true" id="hdfs-deep-storage"></a><a href="#hdfs-deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>This deep storage is used to interface with HDFS.  Note that the <code>druid-hdfs-storage</code> extension must be loaded.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.storageDirectory</code></td><td>HDFS directory to use as deep storage.</td><td>none</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="cassandra-deep-storage"></a><a href="#cassandra-deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>This deep storage is used to interface with Cassandra.  Note that the <code>druid-cassandra-storage</code> extension must be loaded.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.host</code></td><td>Cassandra host.</td><td>none</td></tr>
+<tr><td><code>druid.storage.keyspace</code></td><td>Cassandra key space.</td><td>none</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="task-logging"></a><a href="#task-logging" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>If you are running the indexing service in remote mode, the task logs must be stored in S3, Azure Blob Store, Google Cloud Storage or HDFS.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.type</code></td><td>Choices: noop, s3, azure, google, hdfs, file. Where to store task logs.</td><td>file</td></tr>
+</tbody>
+</table>
+<p>You can also configure the Overlord to automatically retain task logs in the log directory, and entries in task-related metadata storage tables, only for the last x milliseconds by configuring the following additional properties.
+Caution: automatic log file deletion typically works based on the log file modification timestamp on the backing store, so large clock skews between Druid processes and backing store nodes might result in unintended behavior.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.kill.enabled</code></td><td>Boolean value for whether to enable deletion of old task logs. If set to true, Overlord will submit kill tasks periodically based on <code>druid.indexer.logs.kill.delay</code> specified, which will delete task logs from the log directory as well as tasks and tasklogs table entries in metadata storage except for tasks created in the last <code>druid.indexer.logs.kill.durationToRetain</code> period.</td><td>false</td></tr>
+<tr><td><code>druid.indexer.logs.kill.durationToRetain</code></td><td>Required if kill is enabled. Task logs and entries in task-related metadata storage tables are retained only if they were created within the last x milliseconds.</td><td>None</td></tr>
+<tr><td><code>druid.indexer.logs.kill.initialDelay</code></td><td>Optional. Number of milliseconds after Overlord startup when the first auto kill is run.</td><td>random value less than 300000 (5 mins)</td></tr>
+<tr><td><code>druid.indexer.logs.kill.delay</code></td><td>Optional. Number of milliseconds of delay between successive executions of the auto kill run.</td><td>21600000 (6 hours)</td></tr>
+</tbody>
+</table>
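+<p>For example, a sketch that keeps task logs and task-related metadata entries for roughly one week (604800000 milliseconds) using the default kill schedule:</p>
+<pre><code class="hljs">druid.indexer.logs.kill.enabled=true
+# 7 days = 7 * 24 * 3600 * 1000 milliseconds.
+druid.indexer.logs.kill.durationToRetain=604800000
+druid.indexer.logs.kill.delay=21600000
+</code></pre>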
+<h4><a class="anchor" aria-hidden="true" id="file-task-logs"></a><a href="#file-task-logs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>Store task logs in the local filesystem.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.directory</code></td><td>Local filesystem path.</td><td>log</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="s3-task-logs"></a><a href="#s3-task-logs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>Store task logs in S3. Note that the <code>druid-s3-extensions</code> extension must be loaded.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.s3Bucket</code></td><td>S3 bucket name.</td><td>none</td></tr>
+<tr><td><code>druid.indexer.logs.s3Prefix</code></td><td>S3 key prefix.</td><td>none</td></tr>
+<tr><td><code>druid.indexer.logs.disableAcl</code></td><td>Boolean flag for ACL. If this is set to <code>false</code>, full control is granted to the bucket owner. If the task logs bucket is the same as the deep storage (S3) bucket, then this property needs to be set to true if <code>druid.storage.disableAcl</code> has been set to true.</td><td>false</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="azure-blob-store-task-logs"></a><a href="#azure-blob-store-task-logs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>Store task logs in Azure Blob Store.</p>
+<p>Note: The <code>druid-azure-extensions</code> extension must be loaded, and this uses the same storage account as the deep storage module for azure.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.container</code></td><td>The Azure Blob Store container to write logs to</td><td>none</td></tr>
+<tr><td><code>druid.indexer.logs.prefix</code></td><td>The path to prepend to logs</td><td>none</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="google-cloud-storage-task-logs"></a><a href="#google-cloud-storage-task-logs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>Store task logs in Google Cloud Storage.</p>
+<p>Note: The <code>druid-google-extensions</code> extension must be loaded, and this uses the same storage settings as the deep storage module for google.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.bucket</code></td><td>The Google Cloud Storage bucket to write logs to</td><td>none</td></tr>
+<tr><td><code>druid.indexer.logs.prefix</code></td><td>The path to prepend to logs</td><td>none</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="hdfs-task-logs"></a><a href="#hdfs-task-logs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>Store task logs in HDFS. Note that the <code>druid-hdfs-storage</code> extension must be loaded.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.logs.directory</code></td><td>The directory to store logs.</td><td>none</td></tr>
+</tbody>
+</table>
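+<p>A minimal sketch for HDFS task logs, again assuming <code>druid.indexer.logs.type</code> selects the store and with an illustrative directory path:</p>
+<pre><code class="hljs"># Illustrative values only: load the HDFS extension and write task logs to HDFS
+druid.extensions.loadList=["druid-hdfs-storage"]
+druid.indexer.logs.type=hdfs
+druid.indexer.logs.directory=/druid/indexing-logs
+</code></pre>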
+<h3><a class="anchor" aria-hidden="true" id="overlord-discovery"></a><a href="#overlord-discovery" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>This config is used to find the <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a> using Curator service discovery. Only required if you are actually running an Overlord.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.selectors.indexing.serviceName</code></td><td>The druid.service name of the Overlord process. To start the Overlord with a different name, set it with this property.</td><td>druid/overlord</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="coordinator-discovery"></a><a href="#coordinator-discovery" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>This config is used to find the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> using Curator service discovery. This config is used by the realtime indexing processes to get information about the segments loaded in the cluster.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.selectors.coordinator.serviceName</code></td><td>The druid.service name of the Coordinator process. To start the Coordinator with a different name, set it with this property.</td><td>druid/coordinator</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="announcing-segments"></a><a href="#announcing-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>You can configure how to announce and unannounce Znodes in ZooKeeper (using Curator). For normal operations you do not need to override any of these configs.</p>
+<h5><a class="anchor" aria-hidden="true" id="batch-data-segment-announcer"></a><a href="#batch-data-segment-announcer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<p>In current Druid, multiple data segments may be announced under the same Znode.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.announcer.segmentsPerNode</code></td><td>Each Znode contains info for up to this many segments.</td><td>50</td></tr>
+<tr><td><code>druid.announcer.maxBytesPerNode</code></td><td>Max byte size for Znode.</td><td>524288</td></tr>
+<tr><td><code>druid.announcer.skipDimensionsAndMetrics</code></td><td>Skip Dimensions and Metrics list from segment announcements. NOTE: Enabling this will also remove the dimensions and metrics list from Coordinator and Broker endpoints.</td><td>false</td></tr>
+<tr><td><code>druid.announcer.skipLoadSpec</code></td><td>Skip segment LoadSpec from segment announcements. NOTE: Enabling this will also remove the loadspec from Coordinator and Broker endpoints.</td><td>false</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="javascript"></a><a href="#javascript" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Druid supports dynamic runtime extension through JavaScript functions. This functionality can be configured through
+the following properties.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.javascript.enabled</code></td><td>Set to &quot;true&quot; to enable JavaScript functionality. This affects the JavaScript parser, filter, extractionFn, aggregator, post-aggregator, router strategy, and worker selection strategy.</td><td>false</td></tr>
+</tbody>
+</table>
+<blockquote>
+<p>JavaScript-based functionality is disabled by default. Please refer to the Druid <a href="/docs/0.16.0-incubating/development/javascript.html">JavaScript programming guide</a> for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.</p>
+</blockquote>
+<h3><a class="anchor" aria-hidden="true" id="double-column-storage"></a><a href="#double-column-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>Prior to version 0.13.0, Druid's storage layer used a 32-bit float representation to store columns created by the
+doubleSum, doubleMin, and doubleMax aggregators at indexing time.
+Starting from version 0.13.0, the default is a 64-bit float representation for double columns.
+Using the 64-bit representation for double columns avoids precision loss, at the cost of doubling the storage size of such columns.
+To keep the old format, set the system-wide property <code>druid.indexing.doubleStorage=float</code>.
+You can also use floatSum, floatMin, and floatMax to use the 32-bit float representation.
+Support for 64-bit floating point columns was released in Druid 0.11.0, so if you use this feature then older versions of Druid will not be able to read your data segments.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexing.doubleStorage</code></td><td>Set to &quot;float&quot; to use 32-bit double representation for double columns.</td><td>double</td></tr>
+</tbody>
+</table>
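+<p>For example, to opt into the 32-bit representation on a per-column basis rather than system-wide, an ingestion spec's <code>metricsSpec</code> can mix the float and double aggregators. The metric and field names below are hypothetical, and this is only an illustrative fragment:</p>
+<pre><code class="hljs css language-json">"metricsSpec": [
+  { "type": "floatSum", "name": "value_sum_32bit", "fieldName": "value" },
+  { "type": "doubleSum", "name": "value_sum_64bit", "fieldName": "value" }
+]
+</code></pre>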
+<h2><a class="anchor" aria-hidden="true" id="master-server"></a><a href="#master-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>This section contains the configuration options for the processes that reside on Master servers (Coordinators and Overlords) in the suggested <a href="../design/processes.html#server-types">three-server configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="coordinator"></a><a href="#coordinator" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>For general Coordinator Process information, see <a href="/docs/0.16.0-incubating/design/coordinator.html">here</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="static-configuration"></a><a href="#static-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>These Coordinator static configurations can be defined in the <code>coordinator/runtime.properties</code> file.</p>
+<h5><a class="anchor" aria-hidden="true" id="coordinator-process-config"></a><a href="#coordinator-process-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.host</code></td><td>The host for the current process. This is used to advertise the current process's location as reachable from another process and should generally be specified such that <code>http://${druid.host}/</code> could actually talk to this process</td><td>InetAddress.getLocalHost().getCanonicalHostName()</td></tr>
+<tr><td><code>druid.bindOnHost</code></td><td>Indicates whether the process's internal Jetty server binds on <code>druid.host</code>. Default is false, which means binding to all interfaces.</td><td>false</td></tr>
+<tr><td><code>druid.plaintextPort</code></td><td>This is the port to actually listen on; unless port mapping is used, this will be the same port as is on <code>druid.host</code></td><td>8081</td></tr>
+<tr><td><code>druid.tlsPort</code></td><td>TLS port for HTTPS connector, if <a href="/docs/0.16.0-incubating/operations/tls-support.html">druid.enableTlsPort</a> is set then this config will be used. If <code>druid.host</code> contains port then that port will be ignored. This should be a non-negative Integer.</td><td>8281</td></tr>
+<tr><td><code>druid.service</code></td><td>The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services</td><td>druid/coordinator</td></tr>
+</tbody>
+</table>
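+<p>A minimal illustrative <code>coordinator/runtime.properties</code> process configuration, using a placeholder hostname:</p>
+<pre><code class="hljs">druid.service=druid/coordinator
+druid.plaintextPort=8081
+# Placeholder hostname; if omitted, the canonical hostname default applies
+druid.host=coordinator.example.internal
+</code></pre>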
+<h5><a class="anchor" aria-hidden="true" id="coordinator-operation"></a><a href="#coordinator-operation" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.coordinator.period</code></td><td>The run period for the Coordinator. The Coordinator operates by maintaining the current state of the world in memory and periodically examining the set of available segments and segments being served, to decide whether any changes need to be made to the data topology. This property sets the delay between each of these runs.</td><td>PT60S</td></tr>
+<tr><td><code>druid.coordinator.period.indexingPeriod</code></td><td>How often to send compact/merge/conversion tasks to the indexing service. It's recommended to be longer than <code>druid.manager.segments.pollDuration</code></td><td>PT1800S (30 mins)</td></tr>
+<tr><td><code>druid.coordinator.startDelay</code></td><td>The Coordinator operates on the assumption that it has an up-to-date view of the state of the world when it runs; however, the current ZK interaction code is written in a way that doesn't allow the Coordinator to know for certain that it has finished loading the current state of the world. This delay is a workaround to give it enough time to believe that it has all the data.</td><td>PT300S</td></tr>
+<tr><td><code>druid.coordinator.load.timeout</code></td><td>The timeout duration for when the Coordinator assigns a segment to a Historical process.</td><td>PT15M</td></tr>
+<tr><td><code>druid.coordinator.kill.pendingSegments.on</code></td><td>Boolean flag for whether or not the Coordinator cleans up old entries in the <code>pendingSegments</code> table of the metadata store. If set to true, the Coordinator will check the created time of the most recently completed task. If that doesn't exist, it finds the created time of the earliest running/pending/waiting task. Once the created time is found, then for all dataSources not in the <code>killPendingSegmentsSkipList</code> ( [...]
+<tr><td><code>druid.coordinator.kill.on</code></td><td>Boolean flag for whether or not the Coordinator should submit kill task for unused segments, that is, hard delete them from metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), Coordinator will submit tasks periodically based on <code>period</code> specified. These kill tasks will delete all segments except for the last <code>durationToRetain</code> period. Whitelist or All can be [...]
+<tr><td><code>druid.coordinator.kill.period</code></td><td>How often to send kill tasks to the indexing service. Value must be greater than <code>druid.coordinator.period.indexingPeriod</code>. Only applies if kill is turned on.</td><td>P1D (1 Day)</td></tr>
+<tr><td><code>druid.coordinator.kill.durationToRetain</code></td><td>Do not kill segments in last <code>durationToRetain</code>, must be greater or equal to 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.</td><td>PT-1S (-1 seconds)</td></tr>
+<tr><td><code>druid.coordinator.kill.maxSegments</code></td><td>Kill at most n segments per kill task submission, must be greater than 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.</td><td>0</td></tr>
+<tr><td><code>druid.coordinator.balancer.strategy</code></td><td>Specify the type of balancing strategy that the coordinator should use to distribute segments among the historicals. <code>cachingCost</code> is logically equivalent to <code>cost</code> but is more CPU-efficient on large clusters and will replace <code>cost</code> in the future versions, users are invited to try it. Use <code>diskNormalized</code> to distribute segments among processes so that the disks fill up uniformly a [...]
+<tr><td><code>druid.coordinator.balancer.cachingCost.awaitInitialization</code></td><td>Whether to wait for segment view initialization before creating the <code>cachingCost</code> balancing strategy. This property is enabled only when <code>druid.coordinator.balancer.strategy</code> is <code>cachingCost</code>. If set to 'true', the Coordinator will not start to assign segments, until the segment view is initialized. If set to 'false', the Coordinator will fallback to use the <code>cost [...]
+<tr><td><code>druid.coordinator.loadqueuepeon.repeatDelay</code></td><td>The start and repeat delay for the loadqueuepeon , which manages the load and drop of segments.</td><td>PT0.050S (50 ms)</td></tr>
+<tr><td><code>druid.coordinator.asOverlord.enabled</code></td><td>Boolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a Druid cluster by not having to deploy any standalone Overlord processes. If set to true, then the Overlord console is available at <code>http://coordinator-host:port/console.html</code>; be sure to also set <code>druid.coordinator.asOverlord.overlordService</code> (see next).</td><td>false</td></tr>
+<tr><td><code>druid.coordinator.asOverlord.overlordService</code></td><td>Required, if <code>druid.coordinator.asOverlord.enabled</code> is <code>true</code>. This must be same value as <code>druid.service</code> on standalone Overlord processes and <code>druid.selectors.indexing.serviceName</code> on Middle Managers.</td><td>NULL</td></tr>
+</tbody>
+</table>
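+<p>For example, because <code>druid.coordinator.kill.durationToRetain</code> and <code>druid.coordinator.kill.maxSegments</code> have invalid defaults, turning on kill tasks requires setting the related properties together. The retention and limit values below are only illustrative; the dataSource whitelist itself is supplied through the <code>killDataSourceWhitelist</code> dynamic config described later in this section:</p>
+<pre><code class="hljs"># Illustrative values; adjust retention and limits for your cluster
+druid.coordinator.kill.on=true
+druid.coordinator.kill.period=P1D
+druid.coordinator.kill.durationToRetain=P90D
+druid.coordinator.kill.maxSegments=100
+</code></pre>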
+<h5><a class="anchor" aria-hidden="true" id="segment-management"></a><a href="#segment-management" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.serverview.type</code></td><td>batch or http</td><td>Segment discovery method to use. &quot;http&quot; enables discovering segments using HTTP instead of zookeeper.</td><td>batch</td></tr>
+<tr><td><code>druid.coordinator.loadqueuepeon.type</code></td><td>curator or http</td><td>Whether to use &quot;http&quot; or &quot;curator&quot; implementation to assign segment loads/drops to historical</td><td>curator</td></tr>
+<tr><td><code>druid.coordinator.segment.awaitInitializationOnStart</code></td><td>true or false</td><td>Whether the Coordinator will wait for its view of segments to fully initialize before starting up. If set to 'true', the Coordinator's HTTP server will not start up, and the Coordinator will not announce itself as available, until the server view is initialized.</td><td>true</td></tr>
+</tbody>
+</table>
+<h6><a class="anchor" aria-hidden="true" id="additional-config-when-http-loadqueuepeon-is-used"></a><a href="#additional-config-when-http-loadqueuepeon-is-used" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4  [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.coordinator.loadqueuepeon.http.batchSize</code></td><td>Number of segment load/drop requests to batch in one HTTP request. Note that it must be smaller than <code>druid.segmentCache.numLoadingThreads</code> config on Historical process.</td><td>1</td></tr>
+</tbody>
+</table>
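+<p>A sketch of switching the Coordinator to HTTP-based segment management, combining the properties from the two tables above; the batch size shown is simply the default:</p>
+<pre><code class="hljs">druid.serverview.type=http
+druid.coordinator.loadqueuepeon.type=http
+# Must stay smaller than druid.segmentCache.numLoadingThreads on Historicals
+druid.coordinator.loadqueuepeon.http.batchSize=1
+</code></pre>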
+<h5><a class="anchor" aria-hidden="true" id="metadata-retrieval"></a><a href="#metadata-retrieval" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.manager.config.pollDuration</code></td><td>How often the manager polls the config table for updates.</td><td>PT1M</td></tr>
+<tr><td><code>druid.manager.segments.pollDuration</code></td><td>The duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the Coordinator to notice new segments.</td><td>PT1M</td></tr>
+<tr><td><code>druid.manager.rules.pollDuration</code></td><td>The duration between polls the Coordinator does for updates to the set of active rules. Generally defines the amount of lag time it can take for the Coordinator to notice rules.</td><td>PT1M</td></tr>
+<tr><td><code>druid.manager.rules.defaultTier</code></td><td>The default tier from which default rules will be loaded.</td><td>_default</td></tr>
+<tr><td><code>druid.manager.rules.alertThreshold</code></td><td>The duration after a failed poll upon which an alert should be emitted.</td><td>PT10M</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="dynamic-configuration"></a><a href="#dynamic-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>The Coordinator has dynamic configuration to change certain behavior on the fly. The Coordinator uses a JSON spec object from the Druid <a href="/docs/0.16.0-incubating/dependencies/metadata-storage.html">metadata storage</a> config table. This object is detailed below:</p>
+<p>It is recommended that you use the Coordinator Console to configure these parameters. However, if you need to do it via HTTP, the JSON object can be submitted to the Coordinator via a POST request at:</p>
+<pre><code class="hljs">http:<span class="hljs-regexp">//</span>&lt;COORDINATOR_IP&gt;:&lt;PORT&gt;<span class="hljs-regexp">/druid/</span>coordinator<span class="hljs-regexp">/v1/</span>config
+</code></pre>
+<p>Optional Header Parameters for auditing the config change can also be specified.</p>
+<table>
+<thead>
+<tr><th>Header Param Name</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>X-Druid-Author</code></td><td>author making the config change</td><td>&quot;&quot;</td></tr>
+<tr><td><code>X-Druid-Comment</code></td><td>comment describing the change being done</td><td>&quot;&quot;</td></tr>
+</tbody>
+</table>
+<p>A sample Coordinator dynamic config JSON object is shown below:</p>
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"millisToWaitBeforeDeleting"</span>: <span class="hljs-number">900000</span>,
+  <span class="hljs-attr">"mergeBytesLimit"</span>: <span class="hljs-number">100000000</span>,
+  <span class="hljs-attr">"mergeSegmentsLimit"</span> : <span class="hljs-number">1000</span>,
+  <span class="hljs-attr">"maxSegmentsToMove"</span>: <span class="hljs-number">5</span>,
+  <span class="hljs-attr">"replicantLifetime"</span>: <span class="hljs-number">15</span>,
+  <span class="hljs-attr">"replicationThrottleLimit"</span>: <span class="hljs-number">10</span>,
+  <span class="hljs-attr">"emitBalancingStats"</span>: <span class="hljs-literal">false</span>,
+  <span class="hljs-attr">"killDataSourceWhitelist"</span>: [<span class="hljs-string">"wikipedia"</span>, <span class="hljs-string">"testDatasource"</span>],
+  <span class="hljs-attr">"decommissioningNodes"</span>: [<span class="hljs-string">"localhost:8182"</span>, <span class="hljs-string">"localhost:8282"</span>],
+  <span class="hljs-attr">"decommissioningMaxPercentOfMaxSegmentsToMove"</span>: <span class="hljs-number">70</span>
+}
+</code></pre>
+<p>Issuing a GET request at the same URL will return the spec that is currently in place. A description of the config setup spec is shown below.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>millisToWaitBeforeDeleting</code></td><td>How long the Coordinator needs to be active before it can start removing (marking unused) segments in metadata storage.</td><td>900000 (15 mins)</td></tr>
+<tr><td><code>mergeBytesLimit</code></td><td>The maximum total uncompressed size in bytes of segments to merge.</td><td>524288000L</td></tr>
+<tr><td><code>mergeSegmentsLimit</code></td><td>The maximum number of segments that can be in a single <a href="/docs/0.16.0-incubating/ingestion/tasks.html">append task</a>.</td><td>100</td></tr>
+<tr><td><code>maxSegmentsToMove</code></td><td>The maximum number of segments that can be moved at any given time.</td><td>5</td></tr>
+<tr><td><code>replicantLifetime</code></td><td>The maximum number of Coordinator runs for a segment to be replicated before we start alerting.</td><td>15</td></tr>
+<tr><td><code>replicationThrottleLimit</code></td><td>The maximum number of segments that can be replicated at one time.</td><td>10</td></tr>
+<tr><td><code>balancerComputeThreads</code></td><td>Thread pool size for computing moving cost of segments in segment balancing. Consider increasing this if you have a lot of segments and moving segments starts to get stuck.</td><td>1</td></tr>
+<tr><td><code>emitBalancingStats</code></td><td>Boolean flag for whether or not we should emit balancing stats. This is an expensive operation.</td><td>false</td></tr>
+<tr><td><code>killDataSourceWhitelist</code></td><td>List of dataSources for which kill tasks are sent if property <code>druid.coordinator.kill.on</code> is true. This can be a list of comma-separated dataSources or a JSON array.</td><td>none</td></tr>
+<tr><td><code>killAllDataSources</code></td><td>Send kill tasks for ALL dataSources if property <code>druid.coordinator.kill.on</code> is true. If this is set to true then <code>killDataSourceWhitelist</code> must not be specified or must be an empty list.</td><td>false</td></tr>
+<tr><td><code>killPendingSegmentsSkipList</code></td><td>List of dataSources for which pendingSegments are <em>NOT</em> cleaned up if property <code>druid.coordinator.kill.pendingSegments.on</code> is true. This can be a list of comma-separated dataSources or a JSON array.</td><td>none</td></tr>
+<tr><td><code>maxSegmentsInNodeLoadingQueue</code></td><td>The maximum number of segments that could be queued for loading to any given server. This parameter could be used to speed up the segment loading process, especially if there are &quot;slow&quot; nodes in the cluster (with low loading speed) or if too many segments are scheduled to be replicated to some particular node (faster loading could be preferred to better segment distribution). Desired value depends on segment loading speed, a [...]
+<tr><td><code>decommissioningNodes</code></td><td>List of historical servers to 'decommission'. Coordinator will not assign new segments to 'decommissioning' servers,  and segments will be moved away from them to be placed on non-decommissioning servers at the maximum rate specified by <code>decommissioningMaxPercentOfMaxSegmentsToMove</code>.</td><td>none</td></tr>
+<tr><td><code>decommissioningMaxPercentOfMaxSegmentsToMove</code></td><td>The maximum number of segments that may be moved away from 'decommissioning' servers to non-decommissioning (that is, active) servers during one Coordinator run. This value is relative to the total maximum segment movements allowed during one run which is determined by <code>maxSegmentsToMove</code>. If <code>decommissioningMaxPercentOfMaxSegmentsToMove</code> is 0, segments will neither be moved from <em>or to</em [...]
+</tbody>
+</table>
+<p>To view the audit history of Coordinator dynamic config, issue a GET request to the URL:</p>
+<pre><code class="hljs">http://<span class="hljs-symbol">&lt;COORDINATOR_IP&gt;</span>:<span class="hljs-symbol">&lt;PORT&gt;</span>/druid/coordinator/v1/config/<span class="hljs-keyword">history</span>?interval=<span class="hljs-symbol">&lt;interval&gt;</span>
+</code></pre>
+<p>The default value of <code>interval</code> can be specified by setting <code>druid.audit.manager.auditHistoryMillis</code> (1 week if not configured) in the Coordinator <code>runtime.properties</code>.</p>
+<p>To view the last <code>n</code> entries of the audit history of Coordinator dynamic config, issue a GET request to the URL:</p>
+<pre><code class="hljs">http://<span class="hljs-symbol">&lt;COORDINATOR_IP&gt;</span>:<span class="hljs-symbol">&lt;PORT&gt;</span>/druid/coordinator/v1/config/<span class="hljs-keyword">history</span>?<span class="hljs-built_in">count</span>=<span class="hljs-symbol">&lt;n&gt;</span>
+</code></pre>
+<h5><a class="anchor" aria-hidden="true" id="lookups-dynamic-configuration"></a><a href="#lookups-dynamic-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>These configuration options control the behavior of the Lookup dynamic configuration described in the <a href="/docs/0.16.0-incubating/querying/lookups.html">lookups page</a></p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.manager.lookups.hostDeleteTimeout</code></td><td>How long to wait for a <code>DELETE</code> request to a particular process before considering the <code>DELETE</code> a failure</td><td>PT1S</td></tr>
+<tr><td><code>druid.manager.lookups.hostUpdateTimeout</code></td><td>How long to wait for a <code>POST</code> request to a particular process before considering the <code>POST</code> a failure</td><td>PT10S</td></tr>
+<tr><td><code>druid.manager.lookups.deleteAllTimeout</code></td><td>How long to wait for all <code>DELETE</code> requests to finish before considering the delete attempt a failure</td><td>PT10S</td></tr>
+<tr><td><code>druid.manager.lookups.updateAllTimeout</code></td><td>How long to wait for all <code>POST</code> requests to finish before considering the attempt a failure</td><td>PT60S</td></tr>
+<tr><td><code>druid.manager.lookups.threadPoolSize</code></td><td>How many processes can be managed concurrently (concurrent POST and DELETE requests). Requests exceeding this limit will wait in a queue until a slot becomes available.</td><td>10</td></tr>
+<tr><td><code>druid.manager.lookups.period</code></td><td>How many milliseconds between checks for configuration changes</td><td>30_000</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="compaction-dynamic-configuration"></a><a href="#compaction-dynamic-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<p>Compaction configurations can also be set or updated dynamically using
+<a href="../operations/api-reference.html#compaction-configuration">Coordinator's API</a> without restarting Coordinators.</p>
+<p>For details about segment compaction, please check <a href="/docs/0.16.0-incubating/operations/segment-optimization.html">Segment Size Optimization</a>.</p>
+<p>A description of the compaction config is:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>dataSource</code></td><td>dataSource name to be compacted.</td><td>yes</td></tr>
+<tr><td><code>taskPriority</code></td><td><a href="../ingestion/tasks.html#priority">Priority</a> of compaction task.</td><td>no (default = 25)</td></tr>
+<tr><td><code>inputSegmentSizeBytes</code></td><td>Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.</td><td>no [...]
+<tr><td><code>targetCompactionSizeBytes</code></td><td>The target segment size, for each segment, after compaction. The actual sizes of compacted segments might be slightly larger or smaller than this value. Each compaction task may generate more than one output segment, and it will try to keep each output segment close to this configured size. This configuration cannot be used together with <code>maxRowsPerSegment</code>.</td><td>no (default = 419430400)</td></tr>
+<tr><td><code>maxRowsPerSegment</code></td><td>Max number of rows per segment after compaction. This configuration cannot be used together with <code>targetCompactionSizeBytes</code>.</td><td>no</td></tr>
+<tr><td><code>maxNumSegmentsToCompact</code></td><td>Maximum number of segments to compact together per compaction task. Since a time chunk must be processed in its entirety, if a time chunk has a total number of segments greater than this parameter, compaction will not run for that time chunk.</td><td>no (default = 150)</td></tr>
+<tr><td><code>skipOffsetFromLatest</code></td><td>The offset for searching segments to be compacted. Strongly recommended to set for realtime dataSources.</td><td>no (default = &quot;P1D&quot;)</td></tr>
+<tr><td><code>tuningConfig</code></td><td>Tuning config for compaction tasks. See below <a href="#compaction-tuningconfig">Compaction Task TuningConfig</a>.</td><td>no</td></tr>
+<tr><td><code>taskContext</code></td><td><a href="../ingestion/tasks.html#context">Task context</a> for compaction tasks.</td><td>no</td></tr>
+</tbody>
+</table>
+<p>An example of compaction config is:</p>
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"dataSource"</span>: <span class="hljs-string">"wikiticker"</span>
+}
+</code></pre>
+<p>Note that compaction tasks can fail if their locks are revoked by other tasks of higher priority.
+Since realtime tasks have a higher priority than compaction tasks by default,
+frequent conflicts between compaction tasks and realtime tasks can be problematic.
+If this is the case, the Coordinator's automatic compaction might get stuck because of frequent compaction task failures.
+This kind of problem may happen especially in Kafka/Kinesis indexing systems, which allow late data arrival.
+If you see this problem, it's recommended to set <code>skipOffsetFromLatest</code> to a large enough value to avoid such conflicts between compaction tasks and realtime tasks.</p>
+<h6><a class="anchor" aria-hidden="true" id="compaction-tuningconfig"></a><a href="#compaction-tuningconfig" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>maxRowsInMemory</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#tuningconfig">tuningConfig for indexTask</a></td><td>no (default = 1000000)</td></tr>
+<tr><td><code>maxBytesInMemory</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#tuningconfig">tuningConfig for indexTask</a></td><td>no (1/6 of max JVM memory)</td></tr>
+<tr><td><code>maxTotalRows</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#tuningconfig">tuningConfig for indexTask</a></td><td>no (default = 20000000)</td></tr>
+<tr><td><code>indexSpec</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/index.html#indexspec">IndexSpec</a></td><td>no</td></tr>
+<tr><td><code>maxPendingPersists</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#tuningconfig">tuningConfig for indexTask</a></td><td>no (default = 0 (meaning one persist can be running concurrently with ingestion, and none can be queued up))</td></tr>
+<tr><td><code>pushTimeout</code></td><td>See <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#tuningconfig">tuningConfig for indexTask</a></td><td>no (default = 0)</td></tr>
+</tbody>
+</table>
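+<p>A fuller illustrative compaction config combining the properties above with a tuningConfig might look like the following; the dataSource name reuses the earlier example and every value is an example, not a recommendation:</p>
+<pre><code class="hljs css language-json">{
+  "dataSource": "wikiticker",
+  "maxRowsPerSegment": 5000000,
+  "skipOffsetFromLatest": "P1D",
+  "taskPriority": 25,
+  "tuningConfig": {
+    "maxRowsInMemory": 1000000,
+    "maxTotalRows": 20000000
+  }
+}
+</code></pre>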
+<h3><a class="anchor" aria-hidden="true" id="overlord"></a><a href="#overlord" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>For general Overlord Process information, see <a href="/docs/0.16.0-incubating/design/overlord.html">here</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="overlord-static-configuration"></a><a href="#overlord-static-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>These Overlord static configurations can be defined in the <code>overlord/runtime.properties</code> file.</p>
+<h5><a class="anchor" aria-hidden="true" id="overlord-process-configs"></a><a href="#overlord-process-configs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.host</code></td><td>The host for the current process. This is used to advertise the current process's location as reachable from another process and should generally be specified such that <code>http://${druid.host}/</code> could actually talk to this process</td><td>InetAddress.getLocalHost().getCanonicalHostName()</td></tr>
+<tr><td><code>druid.bindOnHost</code></td><td>Indicates whether the process's internal Jetty server binds on <code>druid.host</code>. Default is false, which means binding to all interfaces.</td><td>false</td></tr>
+<tr><td><code>druid.plaintextPort</code></td><td>This is the port to actually listen on; unless port mapping is used, this will be the same port as is on <code>druid.host</code></td><td>8090</td></tr>
+<tr><td><code>druid.tlsPort</code></td><td>TLS port for HTTPS connector, if <a href="/docs/0.16.0-incubating/operations/tls-support.html">druid.enableTlsPort</a> is set then this config will be used. If <code>druid.host</code> contains port then that port will be ignored. This should be a non-negative Integer.</td><td>8290</td></tr>
+<tr><td><code>druid.service</code></td><td>The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services</td><td>druid/overlord</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="overlord-operations"></a><a href="#overlord-operations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.runner.type</code></td><td>Choices are &quot;local&quot; or &quot;remote&quot;. Indicates whether tasks should be run locally or in a distributed environment. The experimental task runner &quot;httpRemote&quot; is also available, which is the same as &quot;remote&quot; but uses HTTP to interact with MiddleManagers instead of Zookeeper.</td><td>local</td></tr>
+<tr><td><code>druid.indexer.storage.type</code></td><td>Choices are &quot;local&quot; or &quot;metadata&quot;. Indicates whether incoming tasks should be stored locally (in heap) or in metadata storage. Storing incoming tasks in metadata storage allows for tasks to be resumed if the Overlord should fail.</td><td>local</td></tr>
+<tr><td><code>druid.indexer.storage.recentlyFinishedThreshold</code></td><td>A duration of time to store task results.</td><td>PT24H</td></tr>
+<tr><td><code>druid.indexer.tasklock.forceTimeChunkLock</code></td><td><em><strong>Setting this to false is still experimental</strong></em><br/> If set, all tasks are enforced to use time chunk lock. If not set, each task automatically chooses a lock type to use. This configuration can be overwritten by setting <code>forceTimeChunkLock</code> in the <a href="/docs/0.16.0-incubating/ingestion/tasks.html#context">task context</a>. See <a href="/docs/0.16.0-incubating/ingestion/tasks.html# [...]
+<tr><td><code>druid.indexer.queue.maxSize</code></td><td>Maximum number of active tasks at one time.</td><td>Integer.MAX_VALUE</td></tr>
+<tr><td><code>druid.indexer.queue.startDelay</code></td><td>Sleep this long before starting Overlord queue management. This can be useful to give a cluster time to re-orient itself after e.g. a widespread network issue.</td><td>PT1M</td></tr>
+<tr><td><code>druid.indexer.queue.restartDelay</code></td><td>Sleep this long when Overlord queue management throws an exception before trying again.</td><td>PT30S</td></tr>
+<tr><td><code>druid.indexer.queue.storageSyncRate</code></td><td>Sync Overlord state this often with an underlying task persistence mechanism.</td><td>PT1M</td></tr>
+</tbody>
+</table>
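+<p>For example, a clustered deployment typically runs the Overlord in remote mode with durable task state; a minimal sketch of the relevant properties from the table above:</p>
+<pre><code class="hljs"># Run tasks on MiddleManagers and persist task state in the metadata store
+druid.indexer.runner.type=remote
+druid.indexer.storage.type=metadata
+</code></pre>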
+<p>The following configs only apply if the Overlord is running in remote mode. For a description of local vs. remote mode, see the <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord documentation</a>.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.runner.taskAssignmentTimeout</code></td><td>How long to wait after a task has been assigned to a MiddleManager before throwing an error.</td><td>PT5M</td></tr>
+<tr><td><code>druid.indexer.runner.minWorkerVersion</code></td><td>The minimum MiddleManager version to send tasks to.</td><td>&quot;0&quot;</td></tr>
+<tr><td><code>druid.indexer.runner.compressZnodes</code></td><td>Indicates whether or not the Overlord should expect MiddleManagers to compress Znodes.</td><td>true</td></tr>
+<tr><td><code>druid.indexer.runner.maxZnodeBytes</code></td><td>The maximum size Znode in bytes that can be created in Zookeeper.</td><td>524288</td></tr>
+<tr><td><code>druid.indexer.runner.taskCleanupTimeout</code></td><td>How long to wait before failing a task after a MiddleManager is disconnected from Zookeeper.</td><td>PT15M</td></tr>
+<tr><td><code>druid.indexer.runner.taskShutdownLinkTimeout</code></td><td>How long to wait on a shutdown request to a MiddleManager before timing out</td><td>PT1M</td></tr>
+<tr><td><code>druid.indexer.runner.pendingTasksRunnerNumThreads</code></td><td>Number of threads to allocate pending-tasks to workers, must be at least 1.</td><td>1</td></tr>
+<tr><td><code>druid.indexer.runner.maxRetriesBeforeBlacklist</code></td><td>Number of consecutive times the MiddleManager can fail tasks before the worker is blacklisted; must be at least 1</td><td>5</td></tr>
+<tr><td><code>druid.indexer.runner.workerBlackListBackoffTime</code></td><td>How long to wait before a blacklisted worker is whitelisted again. This value should be greater than the value set for <code>druid.indexer.runner.workerBlackListCleanupPeriod</code>.</td><td>PT15M</td></tr>
+<tr><td><code>druid.indexer.runner.workerBlackListCleanupPeriod</code></td><td>A duration after which the cleanup thread will startup to clean blacklisted workers.</td><td>PT5M</td></tr>
+<tr><td><code>druid.indexer.runner.maxPercentageBlacklistWorkers</code></td><td>The maximum percentage of workers to blacklist, this must be between 0 and 100.</td><td>20</td></tr>
+</tbody>
+</table>
+<p>There are additional configs for autoscaling (if it is enabled):</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.autoscale.strategy</code></td><td>Choices are &quot;noop&quot; or &quot;ec2&quot;. Sets the strategy to run when autoscaling is required.</td><td>noop</td></tr>
+<tr><td><code>druid.indexer.autoscale.doAutoscale</code></td><td>If set to &quot;true&quot; autoscaling will be enabled.</td><td>false</td></tr>
+<tr><td><code>druid.indexer.autoscale.provisionPeriod</code></td><td>How often to check whether or not new MiddleManagers should be added.</td><td>PT1M</td></tr>
+<tr><td><code>druid.indexer.autoscale.terminatePeriod</code></td><td>How often to check when MiddleManagers should be removed.</td><td>PT5M</td></tr>
+<tr><td><code>druid.indexer.autoscale.originTime</code></td><td>The starting reference timestamp that the terminate period increments upon.</td><td>2012-01-01T00:55:00.000Z</td></tr>
+<tr><td><code>druid.indexer.autoscale.workerIdleTimeout</code></td><td>How long a worker can be idle (not running a task) before it can be considered for termination.</td><td>PT90M</td></tr>
+<tr><td><code>druid.indexer.autoscale.maxScalingDuration</code></td><td>How long the Overlord will wait around for a MiddleManager to show up before giving up.</td><td>PT15M</td></tr>
+<tr><td><code>druid.indexer.autoscale.numEventsToTrack</code></td><td>The number of autoscaling related events (node creation and termination) to track.</td><td>10</td></tr>
+<tr><td><code>druid.indexer.autoscale.pendingTaskTimeout</code></td><td>How long a task can be in &quot;pending&quot; state before the Overlord tries to scale up.</td><td>PT30S</td></tr>
+<tr><td><code>druid.indexer.autoscale.workerVersion</code></td><td>If set, will only create nodes of set version during autoscaling. Overrides dynamic configuration.</td><td>null</td></tr>
+<tr><td><code>druid.indexer.autoscale.workerPort</code></td><td>The port that MiddleManagers will run on.</td><td>8080</td></tr>
+</tbody>
+</table>
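+<p>A minimal sketch enabling EC2 autoscaling, using the default idle timeout shown above as an illustrative value. Note that the EC2-specific settings themselves (instance type, AMI, minimum and maximum worker counts) are supplied through the Overlord dynamic worker configuration described below:</p>
+<pre><code class="hljs">druid.indexer.autoscale.doAutoscale=true
+druid.indexer.autoscale.strategy=ec2
+# Illustrative: how long a worker may sit idle before it may be terminated
+druid.indexer.autoscale.workerIdleTimeout=PT90M
+</code></pre>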
+<h5><a class="anchor" aria-hidden="true" id="supervisors"></a><a href="#supervisors" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.supervisor.healthinessThreshold</code></td><td>The number of successful runs before an unhealthy supervisor is again considered healthy.</td><td>3</td></tr>
+<tr><td><code>druid.supervisor.unhealthinessThreshold</code></td><td>The number of failed runs before the supervisor is considered unhealthy.</td><td>3</td></tr>
+<tr><td><code>druid.supervisor.taskHealthinessThreshold</code></td><td>The number of consecutive task successes before an unhealthy supervisor is again considered healthy.</td><td>3</td></tr>
+<tr><td><code>druid.supervisor.taskUnhealthinessThreshold</code></td><td>The number of consecutive task failures before the supervisor is considered unhealthy.</td><td>3</td></tr>
+<tr><td><code>druid.supervisor.storeStackTrace</code></td><td>Whether full stack traces of supervisor exceptions should be stored and returned by the supervisor <code>/status</code> endpoint.</td><td>false</td></tr>
+<tr><td><code>druid.supervisor.maxStoredExceptionEvents</code></td><td>The maximum number of exception events that can be returned through the supervisor <code>/status</code> endpoint.</td><td><code>max(healthinessThreshold, unhealthinessThreshold)</code></td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="overlord-dynamic-configuration"></a><a href="#overlord-dynamic-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>The Overlord can dynamically change worker behavior.</p>
+<p>The JSON object can be submitted to the Overlord via a POST request at:</p>
+<pre><code class="hljs">http:<span class="hljs-regexp">//</span>&lt;OVERLORD_IP&gt;:&lt;port&gt;<span class="hljs-regexp">/druid/i</span>ndexer<span class="hljs-regexp">/v1/</span>worker
+</code></pre>
+<p>Optional Header Parameters for auditing the config change can also be specified.</p>
+<table>
+<thead>
+<tr><th>Header Param Name</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>X-Druid-Author</code></td><td>author making the config change</td><td>&quot;&quot;</td></tr>
+<tr><td><code>X-Druid-Comment</code></td><td>comment describing the change being done</td><td>&quot;&quot;</td></tr>
+</tbody>
+</table>
+<p>A sample worker config spec is shown below:</p>
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"selectStrategy"</span>: {
+    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"fillCapacity"</span>,
+    <span class="hljs-attr">"affinityConfig"</span>: {
+      <span class="hljs-attr">"affinity"</span>: {
+        <span class="hljs-attr">"datasource1"</span>: [<span class="hljs-string">"host1:port"</span>, <span class="hljs-string">"host2:port"</span>],
+        <span class="hljs-attr">"datasource2"</span>: [<span class="hljs-string">"host3:port"</span>]
+      }
+    }
+  },
+  <span class="hljs-attr">"autoScaler"</span>: {
+    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ec2"</span>,
+    <span class="hljs-attr">"minNumWorkers"</span>: <span class="hljs-number">2</span>,
+    <span class="hljs-attr">"maxNumWorkers"</span>: <span class="hljs-number">12</span>,
+    <span class="hljs-attr">"envConfig"</span>: {
+      <span class="hljs-attr">"availabilityZone"</span>: <span class="hljs-string">"us-east-1a"</span>,
+      <span class="hljs-attr">"nodeData"</span>: {
+        <span class="hljs-attr">"amiId"</span>: <span class="hljs-string">"${AMI}"</span>,
+        <span class="hljs-attr">"instanceType"</span>: <span class="hljs-string">"c3.8xlarge"</span>,
+        <span class="hljs-attr">"minInstances"</span>: <span class="hljs-number">1</span>,
+        <span class="hljs-attr">"maxInstances"</span>: <span class="hljs-number">1</span>,
+        <span class="hljs-attr">"securityGroupIds"</span>: [<span class="hljs-string">"${IDs}"</span>],
+        <span class="hljs-attr">"keyName"</span>: <span class="hljs-string">"${KEY_NAME}"</span>
+      },
+      <span class="hljs-attr">"userData"</span>: {
+        <span class="hljs-attr">"impl"</span>: <span class="hljs-string">"string"</span>,
+        <span class="hljs-attr">"data"</span>: <span class="hljs-string">"${SCRIPT_COMMAND}"</span>,
+        <span class="hljs-attr">"versionReplacementString"</span>: <span class="hljs-string">":VERSION:"</span>,
+        <span class="hljs-attr">"version"</span>: <span class="hljs-literal">null</span>
+      }
+    }
+  }
+}
+</code></pre>
+<p>Issuing a GET request at the same URL will return the worker config spec currently in place. The worker config spec above is just a sample for EC2, and it is possible to extend the code base for other deployment environments. A description of the worker config spec is shown below.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>selectStrategy</code></td><td>How to assign tasks to MiddleManagers. Choices are <code>fillCapacity</code>, <code>equalDistribution</code>, and <code>javascript</code>.</td><td>equalDistribution</td></tr>
+<tr><td><code>autoScaler</code></td><td>Only used if autoscaling is enabled. See below.</td><td>null</td></tr>
+</tbody>
+</table>
+<p>To view the audit history of worker config, issue a GET request to the URL:</p>
+<pre><code class="hljs">http://<span class="hljs-symbol">&lt;OVERLORD_IP&gt;</span>:<span class="hljs-symbol">&lt;port&gt;</span>/druid/indexer/v1/worker/<span class="hljs-keyword">history</span>?interval=<span class="hljs-symbol">&lt;interval&gt;</span>
+</code></pre>
+<p>The default value of <code>interval</code> can be specified by setting <code>druid.audit.manager.auditHistoryMillis</code> (1 week if not configured) in the Overlord <code>runtime.properties</code>.</p>
+<p>To view the last <code>n</code> entries of the audit history of worker config, issue a GET request to the URL:</p>
+<pre><code class="hljs">http://<span class="hljs-symbol">&lt;OVERLORD_IP&gt;</span>:<span class="hljs-symbol">&lt;port&gt;</span>/druid/indexer/v1/worker/<span class="hljs-keyword">history</span>?<span class="hljs-built_in">count</span>=<span class="hljs-symbol">&lt;n&gt;</span>
+</code></pre>
+<h5><a class="anchor" aria-hidden="true" id="worker-select-strategy"></a><a href="#worker-select-strategy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>Worker select strategies control how Druid assigns tasks to middleManagers.</p>
+<h6><a class="anchor" aria-hidden="true" id="equal-distribution"></a><a href="#equal-distribution" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Tasks are assigned to the middleManager with the most available capacity at the time the task begins running. This is
+useful if you want work evenly distributed across your middleManagers.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>type</code></td><td><code>equalDistribution</code>.</td><td>required; must be <code>equalDistribution</code></td></tr>
+<tr><td><code>affinityConfig</code></td><td><a href="#affinity">Affinity config</a> object</td><td>null (no affinity)</td></tr>
+</tbody>
+</table>
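+<p>For example, a worker config spec that keeps the default <code>equalDistribution</code> strategy but pins one datasource to specific MiddleManagers through an affinity config might look like the following; the datasource name and hostnames are placeholders:</p>
+<pre><code class="hljs css language-json">{
+  "selectStrategy": {
+    "type": "equalDistribution",
+    "affinityConfig": {
+      "affinity": {
+        "datasource1": ["middleManager1.example.internal:8091", "middleManager2.example.internal:8091"]
+      }
+    }
+  }
+}
+</code></pre>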
+<h6><a class="anchor" aria-hidden="true" id="fill-capacity"></a><a href="#fill-capacity" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>Tasks are assigned to the worker with the most currently-running tasks at the time the task begins running. This is
+useful in situations where you are elastically auto-scaling middleManagers, since it will tend to pack some full and
+leave others empty. The empty ones can be safely terminated.</p>
+<p>Note that if <code>druid.indexer.runner.pendingTasksRunnerNumThreads</code> is set to <em>N</em> &gt; 1, then this strategy will fill <em>N</em>
+middleManagers up to capacity simultaneously, rather than a single middleManager.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>type</code></td><td><code>fillCapacity</code>.</td><td>required; must be <code>fillCapacity</code></td></tr>
+<tr><td><code>affinityConfig</code></td><td><a href="#affinity">Affinity config</a> object</td><td>null (no affinity)</td></tr>
+</tbody>
+</table>
+<p><a name="javascript-worker-select-strategy"></a></p>
+<h6><a class="anchor" aria-hidden="true" id="javascript-1"></a><a href="#javascript-1" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>Allows defining arbitrary logic for selecting workers to run tasks using a JavaScript function.
+The function is passed the remoteTaskRunnerConfig, a map of workerId to available workers, and the task to be executed, and it returns the workerId on which the task should run, or null if the task cannot be run.
+This strategy is useful for rapid development when the worker selection logic needs to be changed or tuned often.
+If the selection logic is complex and cannot easily be tested in a JavaScript environment,
+it is better to write a Druid extension module that extends the existing worker selection strategies written in Java.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>type</code></td><td><code>javascript</code>.</td><td>required; must be <code>javascript</code></td></tr>
+<tr><td><code>function</code></td><td>String representing javascript function</td><td></td></tr>
+</tbody>
+</table>
+<p>Example: a function that sends batch_index_task to the workers middleManager1_hostname:8091 and middleManager2_hostname:8091 and all other tasks to other available workers.</p>
+<pre><code class="hljs">{
+"type":"javascript",
+"function":"function (config, zkWorkers, task) {<span class="hljs-symbol">\n</span>var batch_workers = new java.util.ArrayList();<span class="hljs-symbol">\n</span>batch_workers.add(<span class="hljs-symbol">\"</span>middleManager1_hostname:8091<span class="hljs-symbol">\"</span>);<span class="hljs-symbol">\n</span>batch_workers.add(<span class="hljs-symbol">\"</span>middleManager2_hostname:8091<span class="hljs-symbol">\"</span>);<span class="hljs-symbol">\n</span>workers = zkWorkers.ke [...]
+}
+</code></pre>
+<blockquote>
+<p>JavaScript-based functionality is disabled by default. Please refer to the Druid <a href="/docs/0.16.0-incubating/development/javascript.html">JavaScript programming guide</a> for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.</p>
+</blockquote>
+<h6><a class="anchor" aria-hidden="true" id="affinity"></a><a href="#affinity" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>Affinity configs can be provided to the <em>equalDistribution</em> and <em>fillCapacity</em> strategies using the &quot;affinityConfig&quot;
+field. If not provided, the default is to not use affinity at all.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>affinity</code></td><td>JSON object mapping a datasource String name to a list of indexing service middleManager host:port String values. Druid doesn't perform DNS resolution, so the 'host' value must match what is configured on the middleManager and what the middleManager announces itself as (examine the Overlord logs to see what your middleManager announces itself as).</td><td>{}</td></tr>
+<tr><td><code>strong</code></td><td>With weak affinity (the default), tasks for a dataSource may be assigned to other middleManagers if their affinity-mapped middleManagers are not able to run all pending tasks in the queue for that dataSource. With strong affinity, tasks for a dataSource will only ever be assigned to their affinity-mapped middleManagers, and will wait in the pending queue if necessary.</td><td>false</td></tr>
+</tbody>
+</table>
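+<p>Below is a sketch of a <code>fillCapacity</code> strategy combined with a strong affinity config; the datasource name and middleManager host:port values are hypothetical placeholders, and the <code>selectStrategy</code> wrapper assumes the dynamic worker configuration format described earlier:</p>
+<pre><code class="hljs">{
+  "selectStrategy": {
+    "type": "fillCapacity",
+    "affinityConfig": {
+      "affinity": {
+        "wikipedia": ["middleManager1_hostname:8091", "middleManager2_hostname:8091"]
+      },
+      "strong": true
+    }
+  }
+}
+</code></pre>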
+<h5><a class="anchor" aria-hidden="true" id="autoscaler"></a><a href="#autoscaler" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Amazon's EC2 is currently the only supported autoscaler.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>minNumWorkers</code></td><td>The minimum number of workers that can be in the cluster at any given time.</td><td>0</td></tr>
+<tr><td><code>maxNumWorkers</code></td><td>The maximum number of workers that can be in the cluster at any given time.</td><td>0</td></tr>
+<tr><td><code>availabilityZone</code></td><td>What availability zone to run in.</td><td>none</td></tr>
+<tr><td><code>nodeData</code></td><td>A JSON object that describes how to launch new nodes.</td><td>none; required</td></tr>
+<tr><td><code>userData</code></td><td>A JSON object that describes how to configure new nodes. If you have set druid.indexer.autoscale.workerVersion, this must have a versionReplacementString. Otherwise, a versionReplacementString is not necessary.</td><td>none; optional</td></tr>
+</tbody>
+</table>
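+<p>As a rough, hypothetical sketch only (the <code>envConfig</code> grouping and the contents of <code>nodeData</code> and <code>userData</code> below are placeholders and assumptions, not a verified schema; consult the automatic scaling documentation for the exact format), an EC2 autoscaler entry might look like:</p>
+<pre><code class="hljs">{
+  "autoScaler": {
+    "type": "ec2",
+    "minNumWorkers": 2,
+    "maxNumWorkers": 12,
+    "envConfig": {
+      "availabilityZone": "us-east-1a",
+      "nodeData": {"amiId": "ami-placeholder", "instanceType": "m5.2xlarge"},
+      "userData": {"versionReplacementString": ":VERSION:"}
+    }
+  }
+}
+</code></pre>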
+<h2><a class="anchor" aria-hidden="true" id="data-server"></a><a href="#data-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>This section contains the configuration options for the processes that reside on Data servers (MiddleManagers/Peons and Historicals) in the suggested <a href="../design/processes.html#server-types">three-server configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="middlemanager-and-peons"></a><a href="#middlemanager-and-peons" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>These MiddleManager and Peon configurations can be defined in the <code>middleManager/runtime.properties</code> file.</p>
+<h4><a class="anchor" aria-hidden="true" id="middlemanager-process-config"></a><a href="#middlemanager-process-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.host</code></td><td>The host for the current process. This is used to advertise the current process's location as reachable from other processes and should generally be specified such that <code>http://${druid.host}/</code> could actually talk to this process.</td><td>InetAddress.getLocalHost().getCanonicalHostName()</td></tr>
+<tr><td><code>druid.bindOnHost</code></td><td>Indicates whether the process's internal Jetty server binds to <code>druid.host</code>. The default is false, which means it binds to all interfaces.</td><td>false</td></tr>
+<tr><td><code>druid.plaintextPort</code></td><td>This is the port to actually listen on; unless port mapping is used, this will be the same port as on <code>druid.host</code>.</td><td>8091</td></tr>
+<tr><td><code>druid.tlsPort</code></td><td>TLS port for the HTTPS connector. If <a href="/docs/0.16.0-incubating/operations/tls-support.html">druid.enableTlsPort</a> is set, this config will be used. If <code>druid.host</code> contains a port, that port will be ignored. This should be a non-negative integer.</td><td>8291</td></tr>
+<tr><td><code>druid.service</code></td><td>The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services</td><td>druid/middlemanager</td></tr>
+</tbody>
+</table>
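+<p>For illustration, a minimal process section of <code>middleManager/runtime.properties</code> built from the properties above might look like this (the hostname is a placeholder):</p>
+<pre><code class="hljs">druid.host=mm1.example.com
+druid.plaintextPort=8091
+druid.service=druid/middlemanager
+</code></pre>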
+<h4><a class="anchor" aria-hidden="true" id="middlemanager-configuration"></a><a href="#middlemanager-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 1 [...]
+<p>Middle managers pass their configurations down to their child peons. The MiddleManager requires the following configs:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.indexer.runner.allowedPrefixes</code></td><td>Whitelist of prefixes for configs that can be passed down to child peons.</td><td>&quot;com.metamx&quot;, &quot;druid&quot;, &quot;org.apache.druid&quot;, &quot;user.timezone&quot;, &quot;file.encoding&quot;, &quot;java.io.tmpdir&quot;, &quot;hadoop&quot;</td></tr>
+<tr><td><code>druid.indexer.runner.compressZnodes</code></td><td>Indicates whether or not the MiddleManagers should compress Znodes.</td><td>true</td></tr>
+<tr><td><code>druid.indexer.runner.classpath</code></td><td>Java classpath for the peon.</td><td>System.getProperty(&quot;java.class.path&quot;)</td></tr>
+<tr><td><code>druid.indexer.runner.javaCommand</code></td><td>Command required to execute java.</td><td>java</td></tr>
+<tr><td><code>druid.indexer.runner.javaOpts</code></td><td><em>DEPRECATED</em> A string of -X Java options to pass to the peon's JVM. For parameters that contain quotes or spaces, use <code>javaOptsArray</code> instead.</td><td>&quot;&quot;</td></tr>
+<tr><td><code>druid.indexer.runner.javaOptsArray</code></td><td>A JSON array of strings to be passed as options to the peon's JVM. This is additive to javaOpts and is recommended for properly handling arguments that contain quotes or spaces, such as <code>[&quot;-XX:OnOutOfMemoryError=kill -9 %p&quot;]</code>.</td><td><code>[]</code></td></tr>
+<tr><td><code>druid.indexer.runner.maxZnodeBytes</code></td><td>The maximum size Znode in bytes that can be created in Zookeeper.</td><td>524288</td></tr>
+<tr><td><code>druid.indexer.runner.startPort</code></td><td>Starting port used for peon processes, should be greater than 1023 and less than 65536.</td><td>8100</td></tr>
+<tr><td><code>druid.indexer.runner.endPort</code></td><td>Ending port used for peon processes, should be greater than or equal to <code>druid.indexer.runner.startPort</code> and less than 65536.</td><td>65535</td></tr>
+<tr><td><code>druid.indexer.runner.ports</code></td><td>A JSON array of integers specifying the ports that may be used for peon processes. If provided and non-empty, ports for peon processes will be chosen from these ports, and <code>druid.indexer.runner.startPort</code>/<code>druid.indexer.runner.endPort</code> will be completely ignored.</td><td><code>[]</code></td></tr>
+<tr><td><code>druid.worker.ip</code></td><td>The IP of the worker.</td><td>localhost</td></tr>
+<tr><td><code>druid.worker.version</code></td><td>Version identifier for the MiddleManager.</td><td>0</td></tr>
+<tr><td><code>druid.worker.capacity</code></td><td>Maximum number of tasks the MiddleManager can accept.</td><td>Number of available processors - 1</td></tr>
+</tbody>
+</table>
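+<p>A short sketch of MiddleManager-level settings using the properties above; the capacity and JVM options are illustrative values, not recommendations:</p>
+<pre><code class="hljs">druid.worker.capacity=4
+druid.indexer.runner.startPort=8100
+druid.indexer.runner.javaOptsArray=["-server","-Xms1g","-Xmx1g","-XX:MaxDirectMemorySize=2g"]
+</code></pre>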
+<h4><a class="anchor" aria-hidden="true" id="peon-processing"></a><a href="#peon-processing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>Processing properties set on the MiddleManager will be passed through to Peons.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.processing.buffer.sizeBytes</code></td><td>This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.</td><td>auto (max 1GB)</td></tr>
+<tr><td><code>druid.processing.buffer.poolCacheMaxCount</code></td><td>The processing buffer pool caches buffers for later use; this is the maximum count the cache will grow to. Note that the pool can create more buffers than it can cache if necessary.</td><td>Integer.MAX_VALUE</td></tr>
+<tr><td><code>druid.processing.formatString</code></td><td>Realtime and Historical processes use this format string to name their processing threads.</td><td>processing-%s</td></tr>
+<tr><td><code>druid.processing.numMergeBuffers</code></td><td>The number of direct memory buffers available for merging query results. The buffers are sized by <code>druid.processing.buffer.sizeBytes</code>. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.</td><td><code>max(2, druid.processing.numThreads / 4)</code></td></tr>
+<tr><td><code>druid.processing.numThreads</code></td><td>The number of processing threads to have available for parallel processing of segments. Our rule of thumb is <code>num_cores - 1</code>, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value <code>1</code>.</td><td>Number of cores - 1 (or 1)</td></tr>
+<tr><td><code>druid.processing.columnCache.sizeBytes</code></td><td>Maximum size in bytes for the dimension value lookup cache. Any value greater than <code>0</code> enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating valu [...]
+<tr><td><code>druid.processing.fifo</code></td><td>If the processing queue should treat tasks of equal priority in a FIFO manner</td><td><code>false</code></td></tr>
+<tr><td><code>druid.processing.tmpDir</code></td><td>Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default <code>java.io.tmpdir</code> path.</td><td>path represented by <code>java.io.tmpdir</code></td></tr>
+</tbody>
+</table>
+<p>The amount of direct memory needed by Druid is at least
+<code>druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)</code>. You can
+ensure at least this amount of direct memory is available by providing <code>-XX:MaxDirectMemorySize=&lt;VALUE&gt;</code> in
+<code>druid.indexer.runner.javaOptsArray</code> as documented above.</p>
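+<p>For example, with the illustrative values <code>druid.processing.buffer.sizeBytes=100000000</code>, <code>druid.processing.numMergeBuffers=2</code> and <code>druid.processing.numThreads=2</code>, each peon needs at least 100000000 * (2 + 2 + 1) = 500000000 bytes (roughly 477MiB) of direct memory, so a setting such as the following would be sufficient:</p>
+<pre><code class="hljs">druid.indexer.runner.javaOptsArray=["-XX:MaxDirectMemorySize=512m"]
+</code></pre>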
+<h4><a class="anchor" aria-hidden="true" id="peon-query-configuration"></a><a href="#peon-query-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>See <a href="#general-query-configuration">general query configuration</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="peon-caching"></a><a href="#peon-caching" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>You can optionally configure caching to be enabled on the peons by setting caching configs here.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.realtime.cache.useCache</code></td><td>true, false</td><td>Enable the cache on the realtime.</td><td>false</td></tr>
+<tr><td><code>druid.realtime.cache.populateCache</code></td><td>true, false</td><td>Populate the cache on the realtime.</td><td>false</td></tr>
+<tr><td><code>druid.realtime.cache.unCacheable</code></td><td>All druid query types</td><td>All query types to not cache.</td><td><code>[&quot;groupBy&quot;, &quot;select&quot;]</code></td></tr>
+<tr><td><code>druid.realtime.cache.maxEntrySize</code></td><td>positive integer</td><td>Maximum cache entry size in bytes.</td><td>1_000_000</td></tr>
+</tbody>
+</table>
+<p>See <a href="#cache-configuration">cache configuration</a> for how to configure cache settings.</p>
+<h4><a class="anchor" aria-hidden="true" id="additional-peon-configuration"></a><a href="#additional-peon-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>Although peons inherit the configurations of their parent MiddleManagers, explicit child peon configs can be set in the MiddleManager's configuration by prefixing them with:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.indexer</span><span class="hljs-selector-class">.fork</span><span class="hljs-selector-class">.property</span>
+</code></pre>
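+<p>For example, to override a processing property for the peons only (the value shown is illustrative, not a recommendation), the MiddleManager could set:</p>
+<pre><code class="hljs">druid.indexer.fork.property.druid.processing.numThreads=2
+</code></pre>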
+<p>Additional peon configs include:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.peon.mode</code></td><td>Choices are &quot;local&quot; and &quot;remote&quot;. Setting this to local means you intend to run the peon as a standalone process (Not recommended).</td><td>remote</td></tr>
+<tr><td><code>druid.indexer.task.baseDir</code></td><td>Base temporary working directory.</td><td><code>System.getProperty(&quot;java.io.tmpdir&quot;)</code></td></tr>
+<tr><td><code>druid.indexer.task.baseTaskDir</code></td><td>Base temporary working directory for tasks.</td><td><code>${druid.indexer.task.baseDir}/persistent/tasks</code></td></tr>
+<tr><td><code>druid.indexer.task.defaultHadoopCoordinates</code></td><td>Hadoop version to use with HadoopIndexTasks that do not request a particular version.</td><td>org.apache.hadoop:hadoop-client:2.8.3</td></tr>
+<tr><td><code>druid.indexer.task.defaultRowFlushBoundary</code></td><td>Highest row count before persisting to disk. Used for indexing generating tasks.</td><td>75000</td></tr>
+<tr><td><code>druid.indexer.task.directoryLockTimeout</code></td><td>Wait this long for zombie peons to exit before giving up on their replacements.</td><td>PT10M</td></tr>
+<tr><td><code>druid.indexer.task.gracefulShutdownTimeout</code></td><td>Wait this long on middleManager restart for restorable tasks to gracefully exit.</td><td>PT5M</td></tr>
+<tr><td><code>druid.indexer.task.hadoopWorkingPath</code></td><td>Temporary working directory for Hadoop tasks.</td><td><code>/tmp/druid-indexing</code></td></tr>
+<tr><td><code>druid.indexer.task.restoreTasksOnRestart</code></td><td>If true, middleManagers will attempt to stop tasks gracefully on shutdown and restore them on restart.</td><td>false</td></tr>
+<tr><td><code>druid.indexer.server.maxChatRequests</code></td><td>Maximum number of concurrent requests served by a task's chat handler. Set to 0 to disable limiting.</td><td>0</td></tr>
+</tbody>
+</table>
+<p>If the peon is running in remote mode, there must be an Overlord up and running. Peons in remote mode can set the following configurations:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.peon.taskActionClient.retry.minWait</code></td><td>The minimum retry time to communicate with Overlord.</td><td>PT5S</td></tr>
+<tr><td><code>druid.peon.taskActionClient.retry.maxWait</code></td><td>The maximum retry time to communicate with Overlord.</td><td>PT1M</td></tr>
+<tr><td><code>druid.peon.taskActionClient.retry.maxRetryCount</code></td><td>The maximum number of retries to communicate with Overlord.</td><td>60</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="segmentwriteoutmediumfactory"></a><a href="#segmentwriteoutmediumfactory" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<p>When new segments are created, Druid temporarily stores some pre-processed data in some buffers. Currently two types of
+<em>medium</em> exist for those buffers: <em>temporary files</em> and <em>off-heap memory</em>.</p>
+<p><em>Temporary files</em> (<code>tmpFile</code>) are stored under the task working directory (see the <code>druid.indexer.task.baseTaskDir</code>
+configuration above) and thus share its mount properties; e.g., they could be backed by HDD, SSD, or memory (tmpfs).
+This type of medium may do unnecessary disk I/O and requires some disk space to be available.</p>
+<p><em>Off-heap memory medium</em> (<code>offHeapMemory</code>) creates buffers in the off-heap memory of the JVM process that is running a task.
+This type of medium is preferred, but it may require allowing the JVM more off-heap memory by changing the
+<code>-XX:MaxDirectMemorySize</code> configuration. It is not yet understood how the required off-heap memory size relates
+to the size of the segments being created, but there is no reason to add more extra off-heap memory
+than the configured maximum <em>heap</em> size (<code>-Xmx</code>) for the same JVM.</p>
+<p>For most task types, the SegmentWriteOutMediumFactory can be configured per task (see the <a href="/docs/0.16.0-incubating/ingestion/tasks.html">Tasks</a>
+page, &quot;TuningConfig&quot; section). If it is not specified for a task, or is not supported for a particular task type,
+then the value from the configuration below is used:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.peon.defaultSegmentWriteOutMediumFactory.type</code></td><td><code>tmpFile</code> or <code>offHeapMemory</code>, see explanation above</td><td><code>tmpFile</code></td></tr>
+</tbody>
+</table>
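+<p>For example, to make off-heap memory the default medium for tasks that do not specify one (remembering to budget <code>-XX:MaxDirectMemorySize</code> accordingly, as noted above):</p>
+<pre><code class="hljs">druid.peon.defaultSegmentWriteOutMediumFactory.type=offHeapMemory
+</code></pre>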
+<h3><a class="anchor" aria-hidden="true" id="historical"></a><a href="#historical" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>For general Historical Process information, see <a href="/docs/0.16.0-incubating/design/historical.html">here</a>.</p>
+<p>These Historical configurations can be defined in the <code>historical/runtime.properties</code> file.</p>
+<h4><a class="anchor" aria-hidden="true" id="historical-process-configuration"></a><a href="#historical-process-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.host</code></td><td>The host for the current process. This is used to advertise the current process's location as reachable from other processes and should generally be specified such that <code>http://${druid.host}/</code> could actually talk to this process.</td><td>InetAddress.getLocalHost().getCanonicalHostName()</td></tr>
+<tr><td><code>druid.bindOnHost</code></td><td>Indicates whether the process's internal Jetty server binds to <code>druid.host</code>. The default is false, which means it binds to all interfaces.</td><td>false</td></tr>
+<tr><td><code>druid.plaintextPort</code></td><td>This is the port to actually listen on; unless port mapping is used, this will be the same port as on <code>druid.host</code>.</td><td>8083</td></tr>
+<tr><td><code>druid.tlsPort</code></td><td>TLS port for the HTTPS connector. If <a href="/docs/0.16.0-incubating/operations/tls-support.html">druid.enableTlsPort</a> is set, this config will be used. If <code>druid.host</code> contains a port, that port will be ignored. This should be a non-negative integer.</td><td>8283</td></tr>
+<tr><td><code>druid.service</code></td><td>The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services</td><td>druid/historical</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="historical-general-configuration"></a><a href="#historical-general-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.maxSize</code></td><td>The maximum number of bytes-worth of segments that the process wants assigned to it. This is not a limit that Historical processes actually enforce; it is just a value published to the Coordinator process so it can plan accordingly.</td><td>0</td></tr>
+<tr><td><code>druid.server.tier</code></td><td>A string to name the distribution tier that the storage process belongs to. Many of the <a href="/docs/0.16.0-incubating/operations/rule-configuration.html">rules Coordinator processes use</a> to manage segments can be keyed on tiers.</td><td><code>_default_tier</code></td></tr>
+<tr><td><code>druid.server.priority</code></td><td>In a tiered architecture, the priority of the tier, thus allowing control over which processes are queried. Higher numbers mean higher priority. The default (no priority) works for architecture with no cross replication (tiers that have no data-storage overlap). Data centers typically have equal priority.</td><td>0</td></tr>
+</tbody>
+</table>
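+<p>An illustrative sketch of these general Historical settings, using a 300GB segment budget and a hypothetical &quot;hot&quot; tier name:</p>
+<pre><code class="hljs">druid.server.maxSize=300000000000
+druid.server.tier=hot
+druid.server.priority=0
+</code></pre>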
+<h4><a class="anchor" aria-hidden="true" id="storing-segments"></a><a href="#storing-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.segmentCache.locations</code></td><td>Segments assigned to a Historical process are first stored on the local file system (in a disk cache) and then served by the Historical process. These locations define where that local cache resides. This value cannot be NULL or EMPTY. Here is an example <code>druid.segmentCache.locations=[{&quot;path&quot;: &quot;/mnt/druidSegments&quot;, &quot;maxSize&quot;: 10000, &quot;freeSpacePercent&quot;: 1.0}]</code>. &quot;freeSpacePerce [...]
+<tr><td><code>druid.segmentCache.deleteOnRemove</code></td><td>Delete segment files from cache once a process is no longer serving a segment.</td><td>true</td></tr>
+<tr><td><code>druid.segmentCache.dropSegmentDelayMillis</code></td><td>How long a process delays before completely dropping segment.</td><td>30000 (30 seconds)</td></tr>
+<tr><td><code>druid.segmentCache.infoDir</code></td><td>Historical processes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.</td><td>${first_location}/info_dir</td></tr>
+<tr><td><code>druid.segmentCache.announceIntervalMillis</code></td><td>How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.</td><td>5000 (5 seconds)</td></tr>
+<tr><td><code>druid.segmentCache.numLoadingThreads</code></td><td>How many segments to drop or load concurrently from deep storage. Note that the work of loading segments involves downloading segments from deep storage, decompressing them, and loading them into a memory-mapped location, so the work is not entirely I/O bound. Depending on CPU and network load, this config could be increased to a higher value.</td><td>Number of cores</td></tr>
+<tr><td><code>druid.segmentCache.numBootstrapThreads</code></td><td>How many segments to load concurrently during historical startup.</td><td><code>druid.segmentCache.numLoadingThreads</code></td></tr>
+<tr><td><code>druid.coordinator.loadqueuepeon.curator.numCallbackThreads</code></td><td>Number of threads for executing callback actions associated with loading or dropping of segments. One might want to increase this number when noticing clusters are lagging behind w.r.t. balancing segments across historical nodes.</td><td>2</td></tr>
+</tbody>
+</table>
+<p>In <code>druid.segmentCache.locations</code>, <em>freeSpacePercent</em> was added because the <em>maxSize</em> setting is only a theoretical limit and assumes that much space will always be available for storing segments. In case a Druid bug leaves unaccounted segment files on disk, or some other process writes data to the disk, this check can fail segment loading early, before the disk fills up completely, leaving the host otherwise usable.</p>
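+<p>A sketch of a single-location segment cache following the example embedded in the table above; the path and sizes are placeholders to be adjusted to the host's disk:</p>
+<pre><code class="hljs">druid.segmentCache.locations=[{"path":"/mnt/druidSegments","maxSize":300000000000,"freeSpacePercent":5.0}]
+</code></pre>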
+<h4><a class="anchor" aria-hidden="true" id="historical-query-configs"></a><a href="#historical-query-configs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<h5><a class="anchor" aria-hidden="true" id="concurrent-requests"></a><a href="#concurrent-requests" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>Druid uses Jetty to serve HTTP requests.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.http.numThreads</code></td><td>Number of threads for HTTP requests.</td><td>max(10, (Number of cores * 17) / 16 + 2) + 30</td></tr>
+<tr><td><code>druid.server.http.queueSize</code></td><td>Size of the worker queue used by the Jetty server to temporarily store incoming client connections. If this value is set and a request is rejected because the queue is full, the client will observe a request failure: the TCP connection is closed immediately with a completely empty response from the server.</td><td>Unbounded</td></tr>
+<tr><td><code>druid.server.http.maxIdleTime</code></td><td>The Jetty max idle time for a connection.</td><td>PT5M</td></tr>
+<tr><td><code>druid.server.http.enableRequestLimit</code></td><td>If enabled, requests are not queued in the Jetty queue; an &quot;HTTP 429 Too Many Requests&quot; error response is sent instead.</td><td>false</td></tr>
+<tr><td><code>druid.server.http.defaultQueryTimeout</code></td><td>Query timeout in millis, beyond which unfinished queries will be cancelled</td><td>300000</td></tr>
+<tr><td><code>druid.server.http.gracefulShutdownTimeout</code></td><td>The maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.unannouncePropagationDelay</code></td><td>How long to wait for zookeeper unannouncements to propagate before shutting down Jetty. This is a minimum and <code>druid.server.http.gracefulShutdownTimeout</code> does not start counting down until after this period elapses.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.maxQueryTimeout</code></td><td>Maximum allowed value (in milliseconds) for <code>timeout</code> parameter. See <a href="/docs/0.16.0-incubating/querying/query-context.html">query-context</a> to know more about <code>timeout</code>. Query is rejected if the query context <code>timeout</code> is greater than this value.</td><td>Long.MAX_VALUE</td></tr>
+<tr><td><code>druid.server.http.maxRequestHeaderSize</code></td><td>Maximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.</td><td>8 * 1024</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="processing"></a><a href="#processing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.processing.buffer.sizeBytes</code></td><td>This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.</td><td>auto (max 1GB)</td></tr>
+<tr><td><code>druid.processing.buffer.poolCacheMaxCount</code></td><td>The processing buffer pool caches buffers for later use; this is the maximum count the cache will grow to. Note that the pool can create more buffers than it can cache if necessary.</td><td>Integer.MAX_VALUE</td></tr>
+<tr><td><code>druid.processing.formatString</code></td><td>Realtime and Historical processes use this format string to name their processing threads.</td><td>processing-%s</td></tr>
+<tr><td><code>druid.processing.numMergeBuffers</code></td><td>The number of direct memory buffers available for merging query results. The buffers are sized by <code>druid.processing.buffer.sizeBytes</code>. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.</td><td><code>max(2, druid.processing.numThreads / 4)</code></td></tr>
+<tr><td><code>druid.processing.numThreads</code></td><td>The number of processing threads to have available for parallel processing of segments. Our rule of thumb is <code>num_cores - 1</code>, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value <code>1</code>.</td><td>Number of cores - 1 (or 1)</td></tr>
+<tr><td><code>druid.processing.columnCache.sizeBytes</code></td><td>Maximum size in bytes for the dimension value lookup cache. Any value greater than <code>0</code> enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating valu [...]
+<tr><td><code>druid.processing.fifo</code></td><td>If the processing queue should treat tasks of equal priority in a FIFO manner</td><td><code>false</code></td></tr>
+<tr><td><code>druid.processing.tmpDir</code></td><td>Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default <code>java.io.tmpdir</code> path.</td><td>path represented by <code>java.io.tmpdir</code></td></tr>
+</tbody>
+</table>
+<p>The amount of direct memory needed by Druid is at least
+<code>druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)</code>. You can
+ensure at least this amount of direct memory is available by providing <code>-XX:MaxDirectMemorySize=&lt;VALUE&gt;</code> at the command
+line.</p>
+<h5><a class="anchor" aria-hidden="true" id="historical-query-configuration"></a><a href="#historical-query-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>See <a href="#general-query-configuration">general query configuration</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="historical-caching"></a><a href="#historical-caching" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>You can optionally configure caching to be enabled on the Historical by setting caching configs here.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.historical.cache.useCache</code></td><td>true, false</td><td>Enable the cache on the Historical.</td><td>false</td></tr>
+<tr><td><code>druid.historical.cache.populateCache</code></td><td>true, false</td><td>Populate the cache on the Historical.</td><td>false</td></tr>
+<tr><td><code>druid.historical.cache.unCacheable</code></td><td>All druid query types</td><td>All query types to not cache.</td><td>[&quot;groupBy&quot;, &quot;select&quot;]</td></tr>
+<tr><td><code>druid.historical.cache.maxEntrySize</code></td><td>positive integer</td><td>Maximum cache entry size in bytes.</td><td>1_000_000</td></tr>
+</tbody>
+</table>
+<p>See <a href="#cache-configuration">cache configuration</a> for how to configure cache settings.</p>
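+<p>For example, to turn on result caching on Historicals (which cache implementation is used is governed by the cache configuration referenced above):</p>
+<pre><code class="hljs">druid.historical.cache.useCache=true
+druid.historical.cache.populateCache=true
+</code></pre>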
+<h2><a class="anchor" aria-hidden="true" id="query-server"></a><a href="#query-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>This section contains the configuration options for the processes that reside on Query servers (Brokers) in the suggested <a href="../design/processes.html#server-types">three-server configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="broker"></a><a href="#broker" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2. [...]
+<p>For general Broker process information, see <a href="/docs/0.16.0-incubating/design/broker.html">here</a>.</p>
+<p>These Broker configurations can be defined in the <code>broker/runtime.properties</code> file.</p>
+<h4><a class="anchor" aria-hidden="true" id="broker-process-configs"></a><a href="#broker-process-configs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.host</code></td><td>The host for the current process. This is used to advertise the current process's location as reachable from other processes and should generally be specified such that <code>http://${druid.host}/</code> could actually talk to this process.</td><td>InetAddress.getLocalHost().getCanonicalHostName()</td></tr>
+<tr><td><code>druid.bindOnHost</code></td><td>Indicates whether the process's internal Jetty server binds to <code>druid.host</code>. The default is false, which means it binds to all interfaces.</td><td>false</td></tr>
+<tr><td><code>druid.plaintextPort</code></td><td>This is the port to actually listen on; unless port mapping is used, this will be the same port as on <code>druid.host</code>.</td><td>8082</td></tr>
+<tr><td><code>druid.tlsPort</code></td><td>TLS port for the HTTPS connector. If <a href="/docs/0.16.0-incubating/operations/tls-support.html">druid.enableTlsPort</a> is set, this config will be used. If <code>druid.host</code> contains a port, that port will be ignored. This should be a non-negative integer.</td><td>8282</td></tr>
+<tr><td><code>druid.service</code></td><td>The name of the service. This is used as a dimension when emitting metrics and alerts to differentiate between the various services</td><td>druid/broker</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="query-configuration"></a><a href="#query-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<h5><a class="anchor" aria-hidden="true" id="query-prioritization"></a><a href="#query-prioritization" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.broker.balancer.type</code></td><td><code>random</code>, <code>connectionCount</code></td><td>Determines how the Broker balances connections to Historical processes. <code>random</code> chooses randomly; <code>connectionCount</code> picks the process with the fewest active connections.</td><td><code>random</code></td></tr>
+<tr><td><code>druid.broker.select.tier</code></td><td><code>highestPriority</code>, <code>lowestPriority</code>, <code>custom</code></td><td>If segments are cross-replicated across tiers in a cluster, you can tell the broker to prefer to select segments in a tier with a certain priority.</td><td><code>highestPriority</code></td></tr>
+<tr><td><code>druid.broker.select.tier.custom.priorities</code></td><td><code>An array of integer priorities.</code></td><td>Select servers in tiers with a custom priority list.</td><td>None</td></tr>
+</tbody>
+</table>
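+<p>As a sketch, a Broker that prefers a custom tier ordering could set the following; the priority values are hypothetical and should match the <code>druid.server.priority</code> values assigned to your Historical tiers:</p>
+<pre><code class="hljs">druid.broker.select.tier=custom
+druid.broker.select.tier.custom.priorities=[-1, 0, 1, 2]
+</code></pre>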
+<h5><a class="anchor" aria-hidden="true" id="server-configuration"></a><a href="#server-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>Druid uses Jetty to serve HTTP requests.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.http.numThreads</code></td><td>Number of threads for HTTP requests.</td><td>max(10, (Number of cores * 17) / 16 + 2) + 30</td></tr>
+<tr><td><code>druid.server.http.queueSize</code></td><td>Size of the worker queue used by the Jetty server to temporarily store incoming client connections. If this value is set and a request is rejected because the queue is full, the client will observe a request failure: the TCP connection is closed immediately with a completely empty response from the server.</td><td>Unbounded</td></tr>
+<tr><td><code>druid.server.http.maxIdleTime</code></td><td>The Jetty max idle time for a connection.</td><td>PT5M</td></tr>
+<tr><td><code>druid.server.http.enableRequestLimit</code></td><td>If enabled, requests are not queued in the Jetty queue; an &quot;HTTP 429 Too Many Requests&quot; error response is sent instead.</td><td>false</td></tr>
+<tr><td><code>druid.server.http.defaultQueryTimeout</code></td><td>Query timeout in millis, beyond which unfinished queries will be cancelled</td><td>300000</td></tr>
+<tr><td><code>druid.server.http.maxScatterGatherBytes</code></td><td>Maximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. Queries that exceed this limit will fail. This is an advance configuration that allows to protect in case Broker is under heavy load and not utilizing the data gathered in memory fast enough and leading to OOMs. This limit can be further reduced at query time using <code>maxScatterGatherBytes</code> in the [...]
+<tr><td><code>druid.server.http.gracefulShutdownTimeout</code></td><td>The maximum amount of time Jetty waits after receiving shutdown signal. After this timeout the threads will be forcefully shutdown. This allows any queries that are executing to complete.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.unannouncePropagationDelay</code></td><td>How long to wait for zookeeper unannouncements to propagate before shutting down Jetty. This is a minimum and <code>druid.server.http.gracefulShutdownTimeout</code> does not start counting down until after this period elapses.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.maxQueryTimeout</code></td><td>Maximum allowed value (in milliseconds) for <code>timeout</code> parameter. See <a href="/docs/0.16.0-incubating/querying/query-context.html">query-context</a> to know more about <code>timeout</code>. Query is rejected if the query context <code>timeout</code> is greater than this value.</td><td>Long.MAX_VALUE</td></tr>
+<tr><td><code>druid.server.http.maxRequestHeaderSize</code></td><td>Maximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.</td><td>8 * 1024</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="client-configuration"></a><a href="#client-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>Druid Brokers use an HTTP client to communicate with data servers (Historical servers and real-time tasks). This
+client has the following configuration options.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.broker.http.numConnections</code></td><td>Size of connection pool for the Broker to connect to Historical and real-time processes. If there are more queries than this number that all need to speak to the same process, then they will queue up.</td><td>20</td></tr>
+<tr><td><code>druid.broker.http.compressionCodec</code></td><td>Compression codec the Broker uses to communicate with Historical and real-time processes. May be &quot;gzip&quot; or &quot;identity&quot;.</td><td>gzip</td></tr>
+<tr><td><code>druid.broker.http.readTimeout</code></td><td>The timeout for data reads from Historical servers and real-time tasks.</td><td>PT15M</td></tr>
+<tr><td><code>druid.broker.http.unusedConnectionTimeout</code></td><td>The timeout for idle connections in connection pool. This timeout should be less than <code>druid.broker.http.readTimeout</code>. Set this timeout = ~90% of <code>druid.broker.http.readTimeout</code></td><td><code>PT4M</code></td></tr>
+<tr><td><code>druid.broker.http.maxQueuedBytes</code></td><td>Maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to <code>druid.server.http.maxScatterGatherBytes</code>, except unlike that configuration, this one will trigger backpressure rather than query failure. Zero means disabled. Can be overridden by the <a href="/docs/0.16.0-incubating/querying/query-context.html">&quot;maxQueuedBytes&quot; query context parameter</a>.< [...]
+</tbody>
+</table>
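+<p>An illustrative Broker HTTP client tuning that follows the ~90% guideline above for the unused-connection timeout:</p>
+<pre><code class="hljs">druid.broker.http.numConnections=20
+druid.broker.http.readTimeout=PT15M
+druid.broker.http.unusedConnectionTimeout=PT13M
+</code></pre>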
+<h5><a class="anchor" aria-hidden="true" id="retry-policy"></a><a href="#retry-policy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>The Druid Broker can optionally retry queries internally when transient errors occur.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.broker.retryPolicy.numTries</code></td><td>Number of tries.</td><td>1</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="processing-1"></a><a href="#processing-1" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>The Broker uses processing configs for nested groupBy queries. In addition, if you use groupBy v1, long-interval queries (of any type) can be broken into shorter-interval queries and processed in parallel inside this thread pool. For more details, see &quot;chunkPeriod&quot; in the <a href="/docs/0.16.0-incubating/querying/query-context.html">query context</a> doc.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.processing.buffer.sizeBytes</code></td><td>This specifies a buffer size for the storage of intermediate results. The computation engine in both the Historical and Realtime processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.</td><td>auto (max 1GB)</td></tr>
+<tr><td><code>druid.processing.buffer.poolCacheMaxCount</code></td><td>The processing buffer pool caches buffers for later use; this is the maximum count the cache will grow to. Note that the pool can create more buffers than it can cache if necessary.</td><td>Integer.MAX_VALUE</td></tr>
+<tr><td><code>druid.processing.formatString</code></td><td>Realtime and Historical processes use this format string to name their processing threads.</td><td>processing-%s</td></tr>
+<tr><td><code>druid.processing.numMergeBuffers</code></td><td>The number of direct memory buffers available for merging query results. The buffers are sized by <code>druid.processing.buffer.sizeBytes</code>. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.</td><td><code>max(2, druid.processing.numThreads / 4)</code></td></tr>
+<tr><td><code>druid.processing.numThreads</code></td><td>The number of processing threads to have available for parallel processing of segments. Our rule of thumb is <code>num_cores - 1</code>, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value <code>1</code>.</td><td>Number of cores - 1 (or 1)</td></tr>
+<tr><td><code>druid.processing.columnCache.sizeBytes</code></td><td>Maximum size in bytes for the dimension value lookup cache. Any value greater than <code>0</code> enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating valu [...]
+<tr><td><code>druid.processing.fifo</code></td><td>If the processing queue should treat tasks of equal priority in a FIFO manner</td><td><code>false</code></td></tr>
+<tr><td><code>druid.processing.tmpDir</code></td><td>Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default <code>java.io.tmpdir</code> path.</td><td>path represented by <code>java.io.tmpdir</code></td></tr>
+</tbody>
+</table>
+<p>The amount of direct memory needed by Druid is at least
+<code>druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)</code>. You can
+ensure at least this amount of direct memory is available by providing <code>-XX:MaxDirectMemorySize=&lt;VALUE&gt;</code> at the command
+line.</p>
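+<p>As an illustration of this sizing formula, a Broker <code>runtime.properties</code> sketch might look like the following. The property values are hypothetical examples, not defaults or recommendations:</p>
+<pre><code class="hljs"># Hypothetical example values -- adjust to your hardware
+druid.processing.numThreads=7
+druid.processing.numMergeBuffers=2
+druid.processing.buffer.sizeBytes=500000000
+
+# Minimum direct memory required by the formula above:
+#   500000000 * (2 + 7 + 1) = 5000000000 bytes (~5 GB)
+# so the JVM should be started with at least -XX:MaxDirectMemorySize=5g
+</code></pre>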
+<h5><a class="anchor" aria-hidden="true" id="broker-query-configuration"></a><a href="#broker-query-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>See <a href="#general-query-configuration">general query configuration</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="sql"></a><a href="#sql" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.2 [...]
+<p>The Druid SQL server is configured through the following properties on the Broker.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.sql.enable</code></td><td>Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.</td><td>true</td></tr>
+<tr><td><code>druid.sql.avatica.enable</code></td><td>Whether to enable JDBC querying at <code>/druid/v2/sql/avatica/</code>.</td><td>true</td></tr>
+<tr><td><code>druid.sql.avatica.maxConnections</code></td><td>Maximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.</td><td>50</td></tr>
+<tr><td><code>druid.sql.avatica.maxRowsPerFrame</code></td><td>Maximum number of rows to return in a single JDBC frame. Setting this property to -1 indicates that no row limit should be applied. Clients can optionally specify a row limit in their requests; if a client specifies a row limit, the lesser value of the client-provided limit and <code>maxRowsPerFrame</code> will be used.</td><td>5,000</td></tr>
+<tr><td><code>druid.sql.avatica.maxStatementsPerConnection</code></td><td>Maximum number of simultaneous open statements per Avatica client connection.</td><td>1</td></tr>
+<tr><td><code>druid.sql.avatica.connectionIdleTimeout</code></td><td>Avatica client connection idle timeout.</td><td>PT5M</td></tr>
+<tr><td><code>druid.sql.http.enable</code></td><td>Whether to enable JSON over HTTP querying at <code>/druid/v2/sql/</code>.</td><td>true</td></tr>
+<tr><td><code>druid.sql.planner.awaitInitializationOnStart</code></td><td>Whether the Broker will wait for its SQL metadata view to fully initialize before starting up. If set to 'true', the Broker's HTTP server will not start up, and the Broker will not announce itself as available, until the server view is initialized. See also <code>druid.broker.segment.awaitInitializationOnStart</code>, a related setting.</td><td>true</td></tr>
+<tr><td><code>druid.sql.planner.maxQueryCount</code></td><td>Maximum number of queries to issue, including nested queries. Set to 1 to disable sub-queries, or set to 0 for unlimited.</td><td>8</td></tr>
+<tr><td><code>druid.sql.planner.maxSemiJoinRowsInMemory</code></td><td>Maximum number of rows to keep in memory for executing two-stage semi-join queries like <code>SELECT * FROM Employee WHERE DeptName IN (SELECT DeptName FROM Dept)</code>.</td><td>100000</td></tr>
+<tr><td><code>druid.sql.planner.maxTopNLimit</code></td><td>Maximum threshold for a <a href="/docs/0.16.0-incubating/querying/topnquery.html">TopN query</a>. Higher limits will be planned as <a href="/docs/0.16.0-incubating/querying/groupbyquery.html">GroupBy queries</a> instead.</td><td>100000</td></tr>
+<tr><td><code>druid.sql.planner.metadataRefreshPeriod</code></td><td>Throttle for metadata refreshes.</td><td>PT1M</td></tr>
+<tr><td><code>druid.sql.planner.selectThreshold</code></td><td>Page size threshold for <a href="/docs/0.16.0-incubating/querying/select-query.html">Select queries</a>. Select queries for larger resultsets will be issued back-to-back using pagination.</td><td>1000</td></tr>
+<tr><td><code>druid.sql.planner.useApproximateCountDistinct</code></td><td>Whether to use an approximate cardinality algorithm for <code>COUNT(DISTINCT foo)</code>.</td><td>true</td></tr>
+<tr><td><code>druid.sql.planner.useApproximateTopN</code></td><td>Whether to use approximate <a href="/docs/0.16.0-incubating/querying/topnquery.html">TopN queries</a> when a SQL query could be expressed as such. If false, exact <a href="/docs/0.16.0-incubating/querying/groupbyquery.html">GroupBy queries</a> will be used instead.</td><td>true</td></tr>
+<tr><td><code>druid.sql.planner.requireTimeCondition</code></td><td>Whether to require SQL queries to have filter conditions on the __time column so that all generated native queries will have user-specified intervals. If true, all queries without a filter condition on the __time column will fail.</td><td>false</td></tr>
+<tr><td><code>druid.sql.planner.sqlTimeZone</code></td><td>Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like &quot;America/Los_Angeles&quot; or offset like &quot;-08:00&quot;.</td><td>UTC</td></tr>
+<tr><td><code>druid.sql.planner.serializeComplexValues</code></td><td>Whether to serialize &quot;complex&quot; output values, false will return the class name instead of the serialized value.</td><td>true</td></tr>
+</tbody>
+</table>
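+<p>For illustration, a possible Broker <code>runtime.properties</code> snippet using the SQL settings above. The values shown are examples only, not recommendations:</p>
+<pre><code class="hljs"># Example SQL settings for the Broker
+druid.sql.enable=true
+druid.sql.http.enable=true
+druid.sql.avatica.enable=true
+druid.sql.avatica.maxConnections=50
+druid.sql.planner.useApproximateCountDistinct=true
+druid.sql.planner.sqlTimeZone=America/Los_Angeles
+</code></pre>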
+<h4><a class="anchor" aria-hidden="true" id="broker-caching"></a><a href="#broker-caching" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>You can optionally enable caching on the Broker alone by setting the caching configs here.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.broker.cache.useCache</code></td><td>true, false</td><td>Enable the cache on the Broker.</td><td>false</td></tr>
+<tr><td><code>druid.broker.cache.populateCache</code></td><td>true, false</td><td>Populate the cache on the Broker.</td><td>false</td></tr>
+<tr><td><code>druid.broker.cache.useResultLevelCache</code></td><td>true, false</td><td>Enable result level caching on the Broker.</td><td>false</td></tr>
+<tr><td><code>druid.broker.cache.populateResultLevelCache</code></td><td>true, false</td><td>Populate the result level cache on the Broker.</td><td>false</td></tr>
+<tr><td><code>druid.broker.cache.resultLevelCacheLimit</code></td><td>positive integer</td><td>Maximum size of query response that can be cached.</td><td><code>Integer.MAX_VALUE</code></td></tr>
+<tr><td><code>druid.broker.cache.unCacheable</code></td><td>All druid query types</td><td>All query types to not cache.</td><td><code>[&quot;groupBy&quot;, &quot;select&quot;]</code></td></tr>
+<tr><td><code>druid.broker.cache.cacheBulkMergeLimit</code></td><td>positive integer or 0</td><td>Queries with more segments than this number will not attempt to fetch from cache at the broker level, leaving potential caching fetches (and cache result merging) to the Historicals</td><td><code>Integer.MAX_VALUE</code></td></tr>
+<tr><td><code>druid.broker.cache.maxEntrySize</code></td><td>positive integer</td><td>Maximum cache entry size in bytes.</td><td>1_000_000</td></tr>
+</tbody>
+</table>
+<p>See <a href="#cache-configuration">cache configuration</a> for how to configure cache settings.</p>
+<h4><a class="anchor" aria-hidden="true" id="segment-discovery"></a><a href="#segment-discovery" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.serverview.type</code></td><td>batch or http</td><td>Segment discovery method to use. &quot;http&quot; enables discovering segments using HTTP instead of zookeeper.</td><td>batch</td></tr>
+<tr><td><code>druid.broker.segment.watchedTiers</code></td><td>List of strings</td><td>Broker watches the segment announcements from processes serving segments to build cache of which process is serving which segments, this configuration allows to only consider segments being served from a whitelist of tiers. By default, Broker would consider all tiers. This can be used to partition your dataSources in specific Historical tiers and configure brokers in partitions so that they are only qu [...]
+<tr><td><code>druid.broker.segment.watchedDataSources</code></td><td>List of strings</td><td>Broker watches the segment announcements from processes serving segments to build cache of which process is serving which segments, this configuration allows to only consider segments being served from a whitelist of dataSources. By default, Broker would consider all datasources. This can be used to configure brokers in partitions so that they are only queryable for specific dataSources.</td><td> [...]
+<tr><td><code>druid.broker.segment.awaitInitializationOnStart</code></td><td>Boolean</td><td>Whether the Broker will wait for its view of segments to fully initialize before starting up. If set to 'true', the Broker's HTTP server will not start up, and the Broker will not announce itself as available, until the server view is initialized. See also <code>druid.sql.planner.awaitInitializationOnStart</code>, a related setting.</td><td>true</td></tr>
+</tbody>
+</table>
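+<p>For illustration, a possible Broker snippet that switches segment discovery to HTTP and restricts the Broker to a single Historical tier. The tier name <code>hot</code> is a hypothetical placeholder:</p>
+<pre><code class="hljs"># Example segment discovery settings
+druid.serverview.type=http
+druid.broker.segment.watchedTiers=["hot"]
+druid.broker.segment.awaitInitializationOnStart=true
+</code></pre>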
+<h2><a class="anchor" aria-hidden="true" id="cache-configuration"></a><a href="#cache-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>This section describes caching configuration that is common to Broker, Historical, and MiddleManager/Peon processes.</p>
+<p>Caching can optionally be enabled on the Broker, Historical, and MiddleManager/Peon processes. See <a href="#broker-caching">Broker</a>,
+<a href="#historical-caching">Historical</a>, and <a href="#peon-caching">Peon</a> configuration options for how to enable it for different processes.</p>
+<p>Druid uses a local in-memory cache by default, unless a different type of cache is specified.
+Use the <code>druid.cache.type</code> configuration to set a different kind of cache.</p>
+<p>Cache settings are set globally, so the same configuration can be re-used
+for both Broker and Historical processes, when defined in the common properties file.</p>
+<h3><a class="anchor" aria-hidden="true" id="cache-type"></a><a href="#cache-type" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.cache.type</code></td><td><code>local</code>, <code>memcached</code>, <code>hybrid</code>, <code>caffeine</code></td><td>The type of cache to use for queries. See below for the configuration options for each cache type.</td><td><code>caffeine</code></td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="local-cache"></a><a href="#local-cache" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<blockquote>
+<p>DEPRECATED: Use caffeine (default as of v0.12.0) instead</p>
+</blockquote>
+<p>The local cache is deprecated in favor of the Caffeine cache, and may be removed in a future version of Druid. The Caffeine cache affords significantly better performance and control over eviction behavior compared to <code>local</code> cache, and is recommended in any situation where you are using JRE 8u60 or higher.</p>
+<p>A simple in-memory LRU cache. Local cache resides in JVM heap memory, so if you enable it, make sure you increase heap size accordingly.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.cache.sizeInBytes</code></td><td>Maximum cache size in bytes. Zero disables caching.</td><td>0</td></tr>
+<tr><td><code>druid.cache.initialSize</code></td><td>Initial size of the hashtable backing the cache.</td><td>500000</td></tr>
+<tr><td><code>druid.cache.logEvictionCount</code></td><td>If non-zero, log cache eviction every <code>logEvictionCount</code> items.</td><td>0</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="caffeine-cache"></a><a href="#caffeine-cache" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>A highly performant local cache implementation for Druid based on <a href="https://github.com/ben-manes/caffeine">Caffeine</a>. Requires JRE 8u60 or higher if using <code>COMMON_FJP</code>.</p>
+<h5><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>Below are the configuration options known to this module:</p>
+<table>
+<thead>
+<tr><th><code>runtime.properties</code></th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.cache.type</code></td><td>Set this to <code>caffeine</code> or leave the parameter out</td><td><code>caffeine</code></td></tr>
+<tr><td><code>druid.cache.sizeInBytes</code></td><td>The maximum size of the cache in bytes on heap.</td><td>min(1GB, Runtime.maxMemory / 10)</td></tr>
+<tr><td><code>druid.cache.expireAfter</code></td><td>The time (in ms) after the last access at which a cache entry may be expired</td><td>None (no time limit)</td></tr>
+<tr><td><code>druid.cache.cacheExecutorFactory</code></td><td>The executor factory to use for Caffeine maintenance. One of <code>COMMON_FJP</code>, <code>SINGLE_THREAD</code>, or <code>SAME_THREAD</code></td><td>ForkJoinPool common pool (<code>COMMON_FJP</code>)</td></tr>
+<tr><td><code>druid.cache.evictOnClose</code></td><td>If a close of a namespace (ex: removing a segment from a process) should cause an eager eviction of associated cache values</td><td><code>false</code></td></tr>
+</tbody>
+</table>
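+<p>For illustration, a possible snippet selecting the Caffeine cache with an explicit size and expiry; the values are examples, not recommendations:</p>
+<pre><code class="hljs"># Example Caffeine cache settings
+druid.cache.type=caffeine
+druid.cache.sizeInBytes=536870912
+# Expire entries one hour (in ms) after the last access
+druid.cache.expireAfter=3600000
+druid.cache.cacheExecutorFactory=COMMON_FJP
+</code></pre>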
+<h5><a class="anchor" aria-hidden="true" id="druidcachecacheexecutorfactory"></a><a href="#druidcachecacheexecutorfactory" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>Here are the possible values for <code>druid.cache.cacheExecutorFactory</code>, which controls how maintenance tasks are run.</p>
+<ul>
+<li><code>COMMON_FJP</code> (default) uses the common ForkJoinPool. Should be used with <a href="https://github.com/apache/incubator-druid/pull/4810#issuecomment-329922810">JRE 8u60 or higher</a>. Older versions of the JRE may have worse performance than newer JRE versions.</li>
+<li><code>SINGLE_THREAD</code> Use a single-threaded executor.</li>
+<li><code>SAME_THREAD</code> Cache maintenance is done eagerly.</li>
+</ul>
+<h5><a class="anchor" aria-hidden="true" id="metrics"></a><a href="#metrics" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<p>In addition to the normal cache metrics, the Caffeine cache implementation also reports the following in both <code>total</code> and <code>delta</code>:</p>
+<table>
+<thead>
+<tr><th>Metric</th><th>Description</th><th>Normal value</th></tr>
+</thead>
+<tbody>
+<tr><td><code>query/cache/caffeine/*/requests</code></td><td>Count of hits or misses</td><td>hit + miss</td></tr>
+<tr><td><code>query/cache/caffeine/*/loadTime</code></td><td>Length of time caffeine spends loading new values (unused feature)</td><td>0</td></tr>
+<tr><td><code>query/cache/caffeine/*/evictionBytes</code></td><td>Number of bytes that have been evicted from the cache</td><td>Varies; tune the cache <code>sizeInBytes</code> so that <code>sizeInBytes</code>/<code>evictionBytes</code> is approximately the rate of cache churn you desire</td></tr>
+</tbody>
+</table>
+<h5><a class="anchor" aria-hidden="true" id="memcached"></a><a href="#memcached" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.6 [...]
+<p>Uses memcached as the cache backend. This allows all processes to share the same cache.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.cache.expiration</code></td><td>Memcached <a href="https://code.google.com/p/memcached/wiki/NewCommands#Standard_Protocol">expiration time</a>.</td><td>2592000 (30 days)</td></tr>
+<tr><td><code>druid.cache.timeout</code></td><td>Maximum time in milliseconds to wait for a response from Memcached.</td><td>500</td></tr>
+<tr><td><code>druid.cache.hosts</code></td><td>Comma separated list of Memcached hosts <code>&lt;host:port&gt;</code>.</td><td>none</td></tr>
+<tr><td><code>druid.cache.maxObjectSize</code></td><td>Maximum object size in bytes for a Memcached object.</td><td>52428800 (50 MB)</td></tr>
+<tr><td><code>druid.cache.memcachedPrefix</code></td><td>Key prefix for all keys in Memcached.</td><td>druid</td></tr>
+<tr><td><code>druid.cache.numConnections</code></td><td>Number of memcached connections to use.</td><td>1</td></tr>
+<tr><td><code>druid.cache.protocol</code></td><td>Memcached communication protocol. Can be binary or text.</td><td>binary</td></tr>
+<tr><td><code>druid.cache.locator</code></td><td>Memcached locator. Can be consistent or array_mod.</td><td>consistent</td></tr>
+</tbody>
+</table>
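+<p>For illustration, a possible snippet using memcached as the cache backend; the hostnames are hypothetical placeholders:</p>
+<pre><code class="hljs"># Example memcached cache settings
+druid.cache.type=memcached
+druid.cache.hosts=memcached1.example.com:11211,memcached2.example.com:11211
+druid.cache.expiration=2592000
+druid.cache.maxObjectSize=52428800
+druid.cache.memcachedPrefix=druid
+</code></pre>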
+<h4><a class="anchor" aria-hidden="true" id="hybrid"></a><a href="#hybrid" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2. [...]
+<p>Uses a combination of any two caches as a two-level L1 / L2 cache.
+This may be used to combine a local in-memory cache with a remote memcached cache.</p>
+<p>Cache requests will first check L1 cache before checking L2.
+If there is an L1 miss and L2 hit, it will also populate L1.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.cache.l1.type</code></td><td>Type of cache to use for the L1 cache. See <code>druid.cache.type</code> configuration for valid types.</td><td><code>caffeine</code></td></tr>
+<tr><td><code>druid.cache.l2.type</code></td><td>Type of cache to use for the L2 cache. See <code>druid.cache.type</code> configuration for valid types.</td><td><code>caffeine</code></td></tr>
+<tr><td><code>druid.cache.l1.*</code></td><td>Any property valid for the given type of L1 cache can be set using this prefix. For instance, if you are using a <code>caffeine</code> L1 cache, specify <code>druid.cache.l1.sizeInBytes</code> to set its size.</td><td>defaults are the same as for the given cache type.</td></tr>
+<tr><td><code>druid.cache.l2.*</code></td><td>Prefix for L2 cache settings, see description for L1.</td><td>defaults are the same as for the given cache type.</td></tr>
+<tr><td><code>druid.cache.useL2</code></td><td>A boolean indicating whether to query L2 cache, if it's a miss in L1. It makes sense to configure this to <code>false</code> on Historical processes, if L2 is a remote cache like <code>memcached</code>, and this cache also used on brokers, because in this case if a query reached Historical it means that a broker didn't find corresponding results in the same remote cache, so a query to the remote cache from Historical is guaranteed to be a mi [...]
+<tr><td><code>druid.cache.populateL2</code></td><td>A boolean indicating whether to put results into L2 cache.</td><td><code>true</code></td></tr>
+</tbody>
+</table>
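+<p>For illustration, a possible snippet combining a small local Caffeine L1 cache with a shared memcached L2 cache; the hostnames and sizes are hypothetical examples:</p>
+<pre><code class="hljs"># Example hybrid (L1/L2) cache settings
+druid.cache.type=hybrid
+druid.cache.l1.type=caffeine
+druid.cache.l1.sizeInBytes=268435456
+druid.cache.l2.type=memcached
+druid.cache.l2.hosts=memcached1.example.com:11211
+# Per the advice above, Historicals sharing the same remote L2 with Brokers
+# may want druid.cache.useL2=false
+druid.cache.useL2=true
+druid.cache.populateL2=true
+</code></pre>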
+<h2><a class="anchor" aria-hidden="true" id="general-query-configuration"></a><a href="#general-query-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 1 [...]
+<p>This section describes configurations that control behavior of Druid's query types, applicable to Broker, Historical, and MiddleManager processes.</p>
+<h3><a class="anchor" aria-hidden="true" id="topn-query-config"></a><a href="#topn-query-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.topN.minTopNThreshold</code></td><td>See <a href="../querying/topnquery.html#aliasing">TopN Aliasing</a> for details.</td><td>1000</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="search-query-config"></a><a href="#search-query-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.search.maxSearchLimit</code></td><td>Maximum number of search results to return.</td><td>1000</td></tr>
+<tr><td><code>druid.query.search.searchStrategy</code></td><td>Default search query strategy.</td><td>useIndexes</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="segmentmetadata-query-config"></a><a href="#segmentmetadata-query-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.segmentMetadata.defaultHistory</code></td><td>When no interval is specified in the query, use a default interval of defaultHistory before the end time of the most recent segment, specified in ISO8601 format. This property also controls the duration of the default interval used by GET /druid/v2/datasources/{dataSourceName} interactions for retrieving datasource dimensions/metrics.</td><td>P1W</td></tr>
+<tr><td><code>druid.query.segmentMetadata.defaultAnalysisTypes</code></td><td>This can be used to set the default analysis types for all segment metadata queries; it can be overridden when making the query.</td><td>[&quot;cardinality&quot;, &quot;interval&quot;, &quot;minmax&quot;]</td></tr>
+</tbody>
+</table>
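+<p>For illustration, a possible snippet widening the default lookback window for segmentMetadata queries; the values are examples only:</p>
+<pre><code class="hljs"># Example segmentMetadata query settings
+druid.query.segmentMetadata.defaultHistory=P2W
+druid.query.segmentMetadata.defaultAnalysisTypes=["cardinality", "interval", "minmax"]
+</code></pre>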
+<h3><a class="anchor" aria-hidden="true" id="groupby-query-config"></a><a href="#groupby-query-config" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>This section describes the configurations for groupBy queries. You can set the runtime properties in the <code>runtime.properties</code> file on Broker, Historical, and MiddleManager processes. You can set the query context parameters through the <a href="/docs/0.16.0-incubating/querying/query-context.html">query context</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="configurations-for-groupby-v2"></a><a href="#configurations-for-groupby-v2" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>Supported runtime properties:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.groupBy.maxMergingDictionarySize</code></td><td>Maximum amount of heap space (approximately) to use for the string dictionary during merging. When the dictionary exceeds this size, a spill to disk will be triggered.</td><td>100000000</td></tr>
+<tr><td><code>druid.query.groupBy.maxOnDiskStorage</code></td><td>Maximum amount of disk space to use, per-query, for spilling result sets to disk when either the merging buffer or the dictionary fills up. Queries that exceed this limit will fail. Set to zero to disable disk spilling.</td><td>0 (disabled)</td></tr>
+</tbody>
+</table>
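+<p>For illustration, a possible snippet allowing groupBy v2 queries to spill to disk instead of failing when the merge buffer or merging dictionary fills up; the 10 GB limit is an example value:</p>
+<pre><code class="hljs"># Example groupBy spill-to-disk settings
+druid.query.groupBy.maxMergingDictionarySize=100000000
+# Allow up to ~10 GB of per-query disk spill (0 disables spilling)
+druid.query.groupBy.maxOnDiskStorage=10737418240
+</code></pre>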
+<p>Supported query contexts:</p>
+<table>
+<thead>
+<tr><th>Key</th><th>Description</th></tr>
+</thead>
+<tbody>
+<tr><td><code>maxMergingDictionarySize</code></td><td>Can be used to lower the value of <code>druid.query.groupBy.maxMergingDictionarySize</code> for this query.</td></tr>
+<tr><td><code>maxOnDiskStorage</code></td><td>Can be used to lower the value of <code>druid.query.groupBy.maxOnDiskStorage</code> for this query.</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="advanced-configurations"></a><a href="#advanced-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<h4><a class="anchor" aria-hidden="true" id="common-configurations-for-all-groupby-strategies"></a><a href="#common-configurations-for-all-groupby-strategies" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9z [...]
+<p>Supported runtime properties:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.groupBy.defaultStrategy</code></td><td>Default groupBy query strategy.</td><td>v2</td></tr>
+<tr><td><code>druid.query.groupBy.singleThreaded</code></td><td>Merge results using a single thread.</td><td>false</td></tr>
+</tbody>
+</table>
+<p>Supported query contexts:</p>
+<table>
+<thead>
+<tr><th>Key</th><th>Description</th></tr>
+</thead>
+<tbody>
+<tr><td><code>groupByStrategy</code></td><td>Overrides the value of <code>druid.query.groupBy.defaultStrategy</code> for this query.</td></tr>
+<tr><td><code>groupByIsSingleThreaded</code></td><td>Overrides the value of <code>druid.query.groupBy.singleThreaded</code> for this query.</td></tr>
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="groupby-v2-configurations"></a><a href="#groupby-v2-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c [...]
+<p>Supported runtime properties:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.groupBy.bufferGrouperInitialBuckets</code></td><td>Initial number of buckets in the off-heap hash table used for grouping results. Set to 0 to use a reasonable default (1024).</td><td>0</td></tr>
+<tr><td><code>druid.query.groupBy.bufferGrouperMaxLoadFactor</code></td><td>Maximum load factor of the off-heap hash table used for grouping results. When the load factor exceeds this size, the table will be grown or spilled to disk. Set to 0 to use a reasonable default (0.7).</td><td>0</td></tr>
+<tr><td><code>druid.query.groupBy.forceHashAggregation</code></td><td>Force the use of hash-based aggregation.</td><td>false</td></tr>
+<tr><td><code>druid.query.groupBy.intermediateCombineDegree</code></td><td>Number of intermediate processes combined together in the combining tree. Higher degrees require fewer threads, which can improve query performance by reducing the overhead of too many threads if the server has sufficiently powerful CPU cores.</td><td>8</td></tr>
+<tr><td><code>druid.query.groupBy.numParallelCombineThreads</code></td><td>Hint for the number of parallel combining threads. This should be larger than 1 to turn on the parallel combining feature. The actual number of threads used for parallel combining is min(<code>druid.query.groupBy.numParallelCombineThreads</code>, <code>druid.processing.numThreads</code>).</td><td>1 (disabled)</td></tr>
+</tbody>
+</table>
+<p>Supported query contexts:</p>
+<table>
+<thead>
+<tr><th>Key</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>bufferGrouperInitialBuckets</code></td><td>Overrides the value of <code>druid.query.groupBy.bufferGrouperInitialBuckets</code> for this query.</td><td>None</td></tr>
+<tr><td><code>bufferGrouperMaxLoadFactor</code></td><td>Overrides the value of <code>druid.query.groupBy.bufferGrouperMaxLoadFactor</code> for this query.</td><td>None</td></tr>
+<tr><td><code>forceHashAggregation</code></td><td>Overrides the value of <code>druid.query.groupBy.forceHashAggregation</code> for this query.</td><td>None</td></tr>
+<tr><td><code>intermediateCombineDegree</code></td><td>Overrides the value of <code>druid.query.groupBy.intermediateCombineDegree</code> for this query.</td><td>None</td></tr>
+<tr><td><code>numParallelCombineThreads</code></td><td>Overrides the value of <code>druid.query.groupBy.numParallelCombineThreads</code> for this query.</td><td>None</td></tr>
+<tr><td><code>sortByDimsFirst</code></td><td>Sort the results first by dimension values and then by timestamp.</td><td>false</td></tr>
+<tr><td><code>forceLimitPushDown</code></td><td>When all fields in the orderby are part of the grouping key, the broker will push limit application down to the Historical processes. When the sorting order uses fields that are not in the grouping key, applying this optimization can result in approximate results with unknown accuracy, so this optimization is disabled by default in that case. Enabling this context flag turns on limit push down for limit/orderbys that contain non-grouping ke [...]
+</tbody>
+</table>
+<h4><a class="anchor" aria-hidden="true" id="groupby-v1-configurations"></a><a href="#groupby-v1-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c [...]
+<p>Supported runtime properties:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.query.groupBy.maxIntermediateRows</code></td><td>Maximum number of intermediate rows for the per-segment grouping engine. This is a tuning parameter that does not impose a hard limit; rather, it potentially shifts merging work from the per-segment engine to the overall merging index. Queries that exceed this limit will not fail.</td><td>50000</td></tr>
+<tr><td><code>druid.query.groupBy.maxResults</code></td><td>Maximum number of results. Queries that exceed this limit will fail.</td><td>500000</td></tr>
+</tbody>
+</table>
+<p>Supported query contexts:</p>
+<table>
+<thead>
+<tr><th>Key</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>maxIntermediateRows</code></td><td>Can be used to lower the value of <code>druid.query.groupBy.maxIntermediateRows</code> for this query.</td><td>None</td></tr>
+<tr><td><code>maxResults</code></td><td>Can be used to lower the value of <code>druid.query.groupBy.maxResults</code> for this query.</td><td>None</td></tr>
+<tr><td><code>useOffheap</code></td><td>Set to true to store aggregations off-heap when merging results.</td><td>false</td></tr>
+</tbody>
+</table>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/geo.html"><span class="arrow-prev">← </span><span>Spatial filters</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions.html"><span>Extensions</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#recommended-configuration-file-organization">Recommended Confi [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/configuration/indexing-service.html b/docs/0.16.0-incubating/configuration/indexing-service.html
new file mode 100644
index 0000000..4189126
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/indexing-service.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../configuration/index.html#overlord">
+<meta http-equiv="refresh" content="0; url=../configuration/index.html#overlord">
+<h1>Redirecting...</h1>
+<a href="../configuration/index.html#overlord">Click here if you are not redirected.</a>
+<script>location="../configuration/index.html#overlord"</script>
diff --git a/docs/0.16.0-incubating/configuration/logging.html b/docs/0.16.0-incubating/configuration/logging.html
new file mode 100644
index 0000000..e444fc0
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/logging.html
@@ -0,0 +1,133 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Logging · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Logging · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid.apa [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Logging</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Apache Druid (incubating) processes will emit logs that are useful for debugging to the console. Druid processes also emit periodic metrics about their state. For more about metrics, see <a href="../configuration/index.html#enabling-metrics">Configuration</a>. Metric logs are printed to the console by default, and can be disabled with <code>-Ddruid.emitter.logging.logLevel=debug</code>.</p>
+<p>Druid uses <a href="http://logging.apache.org/log4j/2.x/">log4j2</a> for logging. Logging can be configured with a log4j2.xml file. Add the path to the directory containing the log4j2.xml file (e.g. the _common/ dir) to your classpath if you want to override default Druid log configuration. Note that this directory should be earlier in the classpath than the druid jars. The easiest way to do this is to prefix the classpath with the config dir.</p>
+<p>To enable java logging to go through log4j2, set the <code>-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager</code> server parameter.</p>
+<p>An example log4j2.xml ships with Druid under config/_common/log4j2.xml, and a sample file is also shown below:</p>
+<pre><code class="hljs"><span class="xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8" ?&gt;</span>
+<span class="hljs-tag">&lt;<span class="hljs-name">Configuration</span> <span class="hljs-attr">status</span>=<span class="hljs-string">"WARN"</span>&gt;</span>
+  <span class="hljs-tag">&lt;<span class="hljs-name">Appenders</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">Console</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"Console"</span> <span class="hljs-attr">target</span>=<span class="hljs-string">"SYSTEM_OUT"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">PatternLayout</span> <span class="hljs-attr">pattern</span>=<span class="hljs-string">"%d</span></span></span><span class="hljs-template-variable">{ISO8601}</span><span class="xml"><span class="hljs-tag"><span class="hljs-string"> %p [%t] %c - %m%n"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">Console</span>&gt;</span>
+  <span class="hljs-tag">&lt;/<span class="hljs-name">Appenders</span>&gt;</span>
+  <span class="hljs-tag">&lt;<span class="hljs-name">Loggers</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">Root</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"info"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">Root</span>&gt;</span>
+
+    <span class="hljs-comment">&lt;!-- Uncomment to enable logging of all HTTP requests
+    &lt;Logger name="org.apache.druid.jetty.RequestLog" additivity="false" level="DEBUG"&gt;
+        &lt;AppenderRef ref="Console"/&gt;
+    &lt;/Logger&gt;
+    --&gt;</span>
+  <span class="hljs-tag">&lt;/<span class="hljs-name">Loggers</span>&gt;</span>
+<span class="hljs-tag">&lt;/<span class="hljs-name">Configuration</span>&gt;</span>
+</span></code></pre>
+<h2><a class="anchor" aria-hidden="true" id="my-logs-are-really-chatty-can-i-set-them-to-asynchronously-write"></a><a href="#my-logs-are-really-chatty-can-i-set-them-to-asynchronously-write" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8  [...]
+<p>Yes, using a <code>log4j2.xml</code> similar to the following causes some of the more chatty classes to write asynchronously:</p>
+<pre><code class="hljs"><span class="xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8" ?&gt;</span>
+<span class="hljs-tag">&lt;<span class="hljs-name">Configuration</span> <span class="hljs-attr">status</span>=<span class="hljs-string">"WARN"</span>&gt;</span>
+  <span class="hljs-tag">&lt;<span class="hljs-name">Appenders</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">Console</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"Console"</span> <span class="hljs-attr">target</span>=<span class="hljs-string">"SYSTEM_OUT"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">PatternLayout</span> <span class="hljs-attr">pattern</span>=<span class="hljs-string">"%d</span></span></span><span class="hljs-template-variable">{ISO8601}</span><span class="xml"><span class="hljs-tag"><span class="hljs-string"> %p [%t] %c - %m%n"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">Console</span>&gt;</span>
+  <span class="hljs-tag">&lt;/<span class="hljs-name">Appenders</span>&gt;</span>
+  <span class="hljs-tag">&lt;<span class="hljs-name">Loggers</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">AsyncLogger</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"org.apache.druid.curator.inventory.CuratorInventoryManager"</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"debug"</span> <span class="hljs-attr">additivity</span>=<span class="hljs-string">"false"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">AsyncLogger</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">AsyncLogger</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"org.apache.druid.client.BatchServerInventoryView"</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"debug"</span> <span class="hljs-attr">additivity</span>=<span class="hljs-string">"false"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">AsyncLogger</span>&gt;</span>
+    <span class="hljs-comment">&lt;!-- Make extra sure nobody adds logs in a bad way that can hurt performance --&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">AsyncLogger</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"org.apache.druid.client.ServerInventoryView"</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"debug"</span> <span class="hljs-attr">additivity</span>=<span class="hljs-string">"false"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">AsyncLogger</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">AsyncLogger</span> <span class="hljs-attr">name</span> =<span class="hljs-string">"org.apache.druid.java.util.http.client.pool.ChannelResourceFactory"</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"info"</span> <span class="hljs-attr">additivity</span>=<span class="hljs-string">"false"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">AsyncLogger</span>&gt;</span>
+    <span class="hljs-tag">&lt;<span class="hljs-name">Root</span> <span class="hljs-attr">level</span>=<span class="hljs-string">"info"</span>&gt;</span>
+      <span class="hljs-tag">&lt;<span class="hljs-name">AppenderRef</span> <span class="hljs-attr">ref</span>=<span class="hljs-string">"Console"</span>/&gt;</span>
+    <span class="hljs-tag">&lt;/<span class="hljs-name">Root</span>&gt;</span>
+  <span class="hljs-tag">&lt;/<span class="hljs-name">Loggers</span>&gt;</span>
+<span class="hljs-tag">&lt;/<span class="hljs-name">Configuration</span>&gt;</span>
+</span></code></pre>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions.html"><span class="arrow-prev">← </span><span>Extensions</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/operations/management-uis.html"><span>Management UIs</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#my-logs-are-really-chatty-can-i-set-them-to-asynchron [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/configuration/production-cluster.html b/docs/0.16.0-incubating/configuration/production-cluster.html
new file mode 100644
index 0000000..d3b1d39
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/production-cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=../tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="../tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="../tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/realtime.md b/docs/0.16.0-incubating/configuration/realtime.md
new file mode 100644
index 0000000..2f57fca
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/realtime.md
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../ingestion/standalone-realtime.html">
+<meta http-equiv="refresh" content="0; url=../ingestion/standalone-realtime.html">
+<h1>Redirecting...</h1>
+<a href="../ingestion/standalone-realtime.html">Click here if you are not redirected.</a>
+<script>location="../ingestion/standalone-realtime.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/simple-cluster.html b/docs/0.16.0-incubating/configuration/simple-cluster.html
new file mode 100644
index 0000000..d3b1d39
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/simple-cluster.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../tutorials/cluster.html">
+<meta http-equiv="refresh" content="0; url=../tutorials/cluster.html">
+<h1>Redirecting...</h1>
+<a href="../tutorials/cluster.html">Click here if you are not redirected.</a>
+<script>location="../tutorials/cluster.html"</script>
diff --git a/docs/0.16.0-incubating/configuration/zookeeper.html b/docs/0.16.0-incubating/configuration/zookeeper.html
new file mode 100644
index 0000000..24b4630
--- /dev/null
+++ b/docs/0.16.0-incubating/configuration/zookeeper.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../dependencies/zookeeper.html">
+<meta http-equiv="refresh" content="0; url=../dependencies/zookeeper.html">
+<h1>Redirecting...</h1>
+<a href="../dependencies/zookeeper.html">Click here if you are not redirected.</a>
+<script>location="../dependencies/zookeeper.html"</script>
diff --git a/docs/0.16.0-incubating/dependencies/cassandra-deep-storage.md b/docs/0.16.0-incubating/dependencies/cassandra-deep-storage.md
new file mode 100644
index 0000000..c3328a4
--- /dev/null
+++ b/docs/0.16.0-incubating/dependencies/cassandra-deep-storage.md
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../development/extensions-contrib/cassandra.html">
+<meta http-equiv="refresh" content="0; url=../development/extensions-contrib/cassandra.html">
+<h1>Redirecting...</h1>
+<a href="../development/extensions-contrib/cassandra.html">Click here if you are not redirected.</a>
+<script>location="../development/extensions-contrib/cassandra.html"</script>
diff --git a/docs/0.16.0-incubating/dependencies/deep-storage.html b/docs/0.16.0-incubating/dependencies/deep-storage.html
new file mode 100644
index 0000000..43a2ea2
--- /dev/null
+++ b/docs/0.16.0-incubating/dependencies/deep-storage.html
@@ -0,0 +1,101 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Deep storage · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Deep storage · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https:/ [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Deep storage</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Deep storage is where segments are stored. It is a storage mechanism that Apache Druid (incubating) does not provide itself. This deep storage infrastructure defines the level of durability of your data: as long as Druid processes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose. If segments disappear from this storage layer, then you will lose whatever data those segments represented.</p>
+<h2><a class="anchor" aria-hidden="true" id="local-mount"></a><a href="#local-mount" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>A local mount can be used for storage of segments as well. This allows you to use just your local file system, or anything else that can be mounted locally, like NFS, Ceph, etc. This is the default deep storage implementation.</p>
+<p>In order to use a local mount for deep storage, you need to set the following configuration in your common configs.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.type</code></td><td>local</td><td></td><td>Must be set.</td></tr>
+<tr><td><code>druid.storage.storageDirectory</code></td><td></td><td>Directory for storing segments.</td><td>Must be set.</td></tr>
+</tbody>
+</table>
+<p>Note that you should generally set <code>druid.storage.storageDirectory</code> to something different from <code>druid.segmentCache.locations</code> and <code>druid.segmentCache.infoDir</code>.</p>
+<p>If you are using the Hadoop indexer in local mode, then just give it a local file as your output directory and it will work.</p>
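+<p>As a minimal sketch, with an illustrative directory path, local deep storage is enabled by setting the two properties above in the common runtime properties:</p>
+<pre><code class="hljs css language-properties">druid.storage.type=local
+# illustrative path only; point this at the local mount you want to use for deep storage
+druid.storage.storageDirectory=/mnt/druid/deep-storage
+</code></pre>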
+<h2><a class="anchor" aria-hidden="true" id="s3-compatible"></a><a href="#s3-compatible" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>See <a href="/docs/0.16.0-incubating/development/extensions-core/s3.html">druid-s3-extensions extension documentation</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="hdfs"></a><a href="#hdfs" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6 [...]
+<p>See <a href="/docs/0.16.0-incubating/development/extensions-core/hdfs.html">druid-hdfs-storage extension documentation</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="additional-deep-stores"></a><a href="#additional-deep-stores" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>For additional deep stores, please see our <a href="/docs/0.16.0-incubating/development/extensions.html">extensions list</a>.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/processes.html"><span class="arrow-prev">← </span><span>Processes and servers</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/dependencies/metadata-storage.html"><span>Metadata storage</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#local-mount">Local Mount</a></li><li><a hr [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/dependencies/metadata-storage.html b/docs/0.16.0-incubating/dependencies/metadata-storage.html
new file mode 100644
index 0000000..9cb903b
--- /dev/null
+++ b/docs/0.16.0-incubating/dependencies/metadata-storage.html
@@ -0,0 +1,163 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Metadata storage · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Metadata storage · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Metadata storage</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>The Metadata Storage is an external dependency of Apache Druid (incubating). Druid uses it to store
+various metadata about the system, but not to store the actual data. There are
+a number of tables used for various purposes described below.</p>
+<p>Derby is the default metadata store for Druid; however, it is not suitable for production.
+<a href="/docs/0.16.0-incubating/development/extensions-core/mysql.html">MySQL</a> and <a href="/docs/0.16.0-incubating/development/extensions-core/postgresql.html">PostgreSQL</a> are metadata stores better suited for production.</p>
+<blockquote>
+<p>The Metadata Storage holds all of the metadata that is essential for a Druid cluster to work.
+For production clusters, consider using MySQL or PostgreSQL instead of Derby.
+Also, it's highly recommended to set up a high availability environment,
+because there is no way to restore lost metadata.</p>
+</blockquote>
+<h2><a class="anchor" aria-hidden="true" id="using-derby"></a><a href="#using-derby" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>Add the following to your Druid configuration.</p>
+<pre><code class="hljs css language-properties"><span class="hljs-meta">druid.metadata.storage.type</span>=<span class="hljs-string">derby</span>
+<span class="hljs-meta">druid.metadata.storage.connector.connectURI</span>=<span class="hljs-string">jdbc:derby://localhost:1527//opt/var/druid_state/derby;create=true</span>
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="mysql"></a><a href="#mysql" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09 [...]
+<p>See <a href="/docs/0.16.0-incubating/development/extensions-core/mysql.html">mysql-metadata-storage extension documentation</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="postgresql"></a><a href="#postgresql" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>See <a href="/docs/0.16.0-incubating/development/extensions-core/postgresql.html">postgresql-metadata-storage</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="adding-custom-dbcp-properties"></a><a href="#adding-custom-dbcp-properties" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>NOTE: The following properties cannot be set through <code>druid.metadata.storage.connector.dbcp</code>: username, password, connectURI, validationQuery, testOnBorrow. These must be set through <code>druid.metadata.storage.connector</code> properties instead.</p>
+<p>Example supported properties:</p>
+<pre><code class="hljs css language-properties"><span class="hljs-meta">druid.metadata.storage.connector.dbcp.maxConnLifetimeMillis</span>=<span class="hljs-string">1200000</span>
+<span class="hljs-meta">druid.metadata.storage.connector.dbcp.defaultQueryTimeout</span>=<span class="hljs-string">30000</span>
+</code></pre>
+<p>See <a href="https://commons.apache.org/proper/commons-dbcp/configuration.html">BasicDataSource Configuration</a> for full list.</p>
+<h2><a class="anchor" aria-hidden="true" id="metadata-storage-tables"></a><a href="#metadata-storage-tables" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<h3><a class="anchor" aria-hidden="true" id="segments-table"></a><a href="#segments-table" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>This is dictated by the <code>druid.metadata.storage.tables.segments</code> property.</p>
+<p>This table stores metadata about the segments that are available in the system.
+The table is polled by the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> to
+determine the set of segments that should be available for querying in the
+system. The table has two main functional columns; the other columns are for
+indexing purposes.</p>
+<p>The <code>used</code> column is a boolean &quot;tombstone&quot;. A 1 means that the segment should
+be &quot;used&quot; by the cluster (i.e. it should be loaded and available for requests).
+A 0 means that the segment should not be actively loaded into the cluster. We
+do this as a means of removing segments from the cluster without actually
+removing their metadata (which allows for simpler rolling back if that is ever
+an issue).</p>
+<p>The <code>payload</code> column stores a JSON blob that has all of the metadata for the segment (some of the data stored in this payload is redundant with some of the columns in the table; that is intentional). This looks something like:</p>
+<pre><code class="hljs css language-json">{
+ <span class="hljs-attr">"dataSource"</span>:<span class="hljs-string">"wikipedia"</span>,
+ <span class="hljs-attr">"interval"</span>:<span class="hljs-string">"2012-05-23T00:00:00.000Z/2012-05-24T00:00:00.000Z"</span>,
+ <span class="hljs-attr">"version"</span>:<span class="hljs-string">"2012-05-24T00:10:00.046Z"</span>,
+ <span class="hljs-attr">"loadSpec"</span>:{
+    <span class="hljs-attr">"type"</span>:<span class="hljs-string">"s3_zip"</span>,
+    <span class="hljs-attr">"bucket"</span>:<span class="hljs-string">"bucket_for_segment"</span>,
+    <span class="hljs-attr">"key"</span>:<span class="hljs-string">"path/to/segment/on/s3"</span>
+ },
+ <span class="hljs-attr">"dimensions"</span>:<span class="hljs-string">"comma-delimited-list-of-dimension-names"</span>,
+ <span class="hljs-attr">"metrics"</span>:<span class="hljs-string">"comma-delimited-list-of-metric-names"</span>,
+ <span class="hljs-attr">"shardSpec"</span>:{<span class="hljs-attr">"type"</span>:<span class="hljs-string">"none"</span>},
+ <span class="hljs-attr">"binaryVersion"</span>:<span class="hljs-number">9</span>,
+ <span class="hljs-attr">"size"</span>:size_of_segment,
+ <span class="hljs-attr">"identifier"</span>:<span class="hljs-string">"wikipedia_2012-05-23T00:00:00.000Z_2012-05-24T00:00:00.000Z_2012-05-23T00:10:00.046Z"</span>
+}
+</code></pre>
+<p>Note that the format of this blob can and will change from time to time.</p>
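+<p>As an illustrative sketch only (the table name below assumes the default value of <code>druid.metadata.storage.tables.segments</code>), the <code>used</code> flag and <code>payload</code> described above could be inspected directly in the metadata store with a query along these lines:</p>
+<pre><code class="hljs css language-sql">-- assumes the default segments table name
+SELECT used, payload
+FROM druid_segments
+WHERE used = true;
+</code></pre>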
+<h3><a class="anchor" aria-hidden="true" id="rule-table"></a><a href="#rule-table" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>The rule table is used to store the various rules about where segments should
+land. These rules are used by the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a>
+when making segment (re-)allocation decisions about the cluster.</p>
+<h3><a class="anchor" aria-hidden="true" id="config-table"></a><a href="#config-table" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>The config table is used to store runtime configuration objects. We do not have
+many of these yet and we are not sure if we will keep this mechanism going
+forward, but it is the beginnings of a method of changing some configuration
+parameters across the cluster at runtime.</p>
+<h3><a class="anchor" aria-hidden="true" id="task-related-tables"></a><a href="#task-related-tables" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>There are also a number of tables created and used by the <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a> and <a href="/docs/0.16.0-incubating/design/middlemanager.html">MiddleManager</a> when managing tasks.</p>
+<h3><a class="anchor" aria-hidden="true" id="audit-table"></a><a href="#audit-table" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>The audit table is used to store the audit history for configuration changes,
+e.g., rule changes done by the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> and other
+config changes.</p>
+<h2 id="accessed-by">Accessed by</h2>
+<p>The Metadata Storage is accessed only by:</p>
+<ol>
+<li>Indexing Service Processes (if any)</li>
+<li>Realtime Processes (if any)</li>
+<li>Coordinator Processes</li>
+</ol>
+<p>Thus you need to give permissions (e.g., in AWS Security Groups) only for these machines to access the Metadata Storage.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/dependencies/deep-storage.html"><span class="arrow-prev">← </span><span>Deep storage</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/dependencies/zookeeper.html"><span class="function-name-prevnext">ZooKeeper</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#using-derby">Using Derby< [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/dependencies/zookeeper.html b/docs/0.16.0-incubating/dependencies/zookeeper.html
new file mode 100644
index 0000000..e555cbc
--- /dev/null
+++ b/docs/0.16.0-incubating/dependencies/zookeeper.html
@@ -0,0 +1,110 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>ZooKeeper · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="ZooKeeper · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">ZooKeeper</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Apache Druid (incubating) uses <a href="http://zookeeper.apache.org/">Apache ZooKeeper</a> (ZK) for management of current cluster state. The operations that happen over ZK are</p>
+<ol>
+<li><a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> leader election</li>
+<li>Segment &quot;publishing&quot; protocol from <a href="/docs/0.16.0-incubating/design/historical.html">Historical</a></li>
+<li>Segment load/drop protocol between <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> and <a href="/docs/0.16.0-incubating/design/historical.html">Historical</a></li>
+<li><a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a> leader election</li>
+<li><a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a> and <a href="/docs/0.16.0-incubating/design/middlemanager.html">MiddleManager</a> task management</li>
+</ol>
+<h3><a class="anchor" aria-hidden="true" id="coordinator-leader-election"></a><a href="#coordinator-leader-election" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 1 [...]
+<p>We use the Curator LeaderLatch recipe to do leader election at path</p>
+<pre><code class="hljs">${druid<span class="hljs-selector-class">.zk</span><span class="hljs-selector-class">.paths</span><span class="hljs-selector-class">.coordinatorPath</span>}/_COORDINATOR
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="segment-publishing-protocol-from-historical-and-realtime"></a><a href="#segment-publishing-protocol-from-historical-and-realtime" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.2 [...]
+<p>The <code>announcementsPath</code> and <code>servedSegmentsPath</code> are used for this.</p>
+<p>All <a href="/docs/0.16.0-incubating/design/historical.html">Historical</a> processes publish themselves on the <code>announcementsPath</code>, specifically, they will create an ephemeral znode at</p>
+<pre><code class="hljs">${druid<span class="hljs-selector-class">.zk</span><span class="hljs-selector-class">.paths</span><span class="hljs-selector-class">.announcementsPath</span>}/${druid.host}
+</code></pre>
+<p>Which signifies that they exist. They will also subsequently create a permanent znode at</p>
+<pre><code class="hljs">${druid<span class="hljs-selector-class">.zk</span><span class="hljs-selector-class">.paths</span><span class="hljs-selector-class">.servedSegmentsPath</span>}/${druid.host}
+</code></pre>
+<p>And as they load up segments, they will attach ephemeral znodes that look like</p>
+<pre><code class="hljs">${druid<span class="hljs-selector-class">.zk</span><span class="hljs-selector-class">.paths</span><span class="hljs-selector-class">.servedSegmentsPath</span>}/${druid.host}/_segment_identifier_
+</code></pre>
+<p>Processes like the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> and <a href="/docs/0.16.0-incubating/design/broker.html">Broker</a> can then watch these paths to see which processes are currently serving which segments.</p>
+<h3><a class="anchor" aria-hidden="true" id="segment-load-drop-protocol-between-coordinator-and-historical"></a><a href="#segment-load-drop-protocol-between-coordinator-and-historical" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-. [...]
+<p>The <code>loadQueuePath</code> is used for this.</p>
+<p>When the <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> decides that a <a href="/docs/0.16.0-incubating/design/historical.html">Historical</a> process should load or drop a segment, it writes an ephemeral znode to</p>
+<pre><code class="hljs">${druid<span class="hljs-selector-class">.zk</span><span class="hljs-selector-class">.paths</span><span class="hljs-selector-class">.loadQueuePath</span>}/_host_of_historical_process/_segment_identifier
+</code></pre>
+<p>This znode will contain a payload that indicates to the Historical process what it should do with the given segment. When the Historical process is done with the work, it will delete the znode in order to signify to the Coordinator that it is complete.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/dependencies/metadata-storage.html"><span class="arrow-prev">← </span><span>Metadata storage</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/ingestion/index.html"><span>Ingestion</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div cl [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/architecture.html b/docs/0.16.0-incubating/design/architecture.html
new file mode 100644
index 0000000..7fb6fe7
--- /dev/null
+++ b/docs/0.16.0-incubating/design/architecture.html
@@ -0,0 +1,252 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Design · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Design · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid.apach [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Design</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Druid has a multi-process, distributed architecture that is designed to be cloud-friendly and easy to operate. Each
+Druid process type can be configured and scaled independently, giving you maximum flexibility over your cluster. This
+design also provides enhanced fault tolerance: an outage of one component will not immediately affect other components.</p>
+<h2><a class="anchor" aria-hidden="true" id="processes-and-servers"></a><a href="#processes-and-servers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>Druid has several process types, briefly described below:</p>
+<ul>
+<li><a href="/docs/0.16.0-incubating/design/coordinator.html"><strong>Coordinator</strong></a> processes manage data availability on the cluster.</li>
+<li><a href="/docs/0.16.0-incubating/design/overlord.html"><strong>Overlord</strong></a> processes control the assignment of data ingestion workloads.</li>
+<li><a href="/docs/0.16.0-incubating/design/broker.html"><strong>Broker</strong></a> processes handle queries from external clients.</li>
+<li><a href="/docs/0.16.0-incubating/design/router.html"><strong>Router</strong></a> processes are optional processes that can route requests to Brokers, Coordinators, and Overlords.</li>
+<li><a href="/docs/0.16.0-incubating/design/historical.html"><strong>Historical</strong></a> processes store queryable data.</li>
+<li><a href="/docs/0.16.0-incubating/design/middlemanager.html"><strong>MiddleManager</strong></a> processes are responsible for ingesting data.</li>
+</ul>
+<p>Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types: Master, Query, and Data.</p>
+<ul>
+<li><strong>Master</strong>: Runs Coordinator and Overlord processes, manages data availability and ingestion.</li>
+<li><strong>Query</strong>: Runs Broker and optional Router processes, handles queries from external clients.</li>
+<li><strong>Data</strong>: Runs Historical and MiddleManager processes, executes ingestion workloads and stores all queryable data.</li>
+</ul>
+<p>For more details on process and server organization, please see <a href="/docs/0.16.0-incubating/design/processes.html">Druid Processes and Servers</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="external-dependencies"></a><a href="#external-dependencies" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>In addition to its built-in process types, Druid also has three external dependencies. These are intended to
+leverage existing infrastructure, where present.</p>
+<h3><a class="anchor" aria-hidden="true" id="deep-storage"></a><a href="#deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>Shared file storage accessible by every Druid server. In a clustered deployment, this is typically going to
+be a distributed object store like S3 or HDFS, or a network mounted filesystem. In a single-server deployment,
+this is typically going to be local disk. Druid uses deep storage to store any data that has been ingested into the
+system.</p>
+<p>Druid uses deep storage only as a backup of your data and as a way to transfer data in the background between
+Druid processes. To respond to queries, Historical processes do not read from deep storage, but instead read pre-fetched
+segments from their local disks before any queries are served. This means that Druid never needs to access deep storage
+during a query, helping it offer the best query latencies possible. It also means that you must have enough disk space
+both in deep storage and across your Historical processes for the data you plan to load.</p>
+<p>Deep storage is an important part of Druid's elastic, fault-tolerant design. Druid can bootstrap from deep storage even
+if every single data server is lost and re-provisioned.</p>
+<p>For more details, please see the <a href="/docs/0.16.0-incubating/dependencies/deep-storage.html">Deep storage</a> page.</p>
+<h3><a class="anchor" aria-hidden="true" id="metadata-storage"></a><a href="#metadata-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>The metadata storage holds various shared system metadata such as segment availability information and task information.
+In a clustered deployment, this is typically going to be a traditional RDBMS like PostgreSQL or MySQL. In a single-server
+deployment, it is typically going to be a locally-stored Apache Derby database.</p>
+<p>For more details, please see the <a href="/docs/0.16.0-incubating/dependencies/metadata-storage.html">Metadata storage</a> page.</p>
+<h3><a class="anchor" aria-hidden="true" id="zookeeper"></a><a href="#zookeeper" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.6 [...]
+<p>Used for internal service discovery, coordination, and leader election.</p>
+<p>For more details, please see the <a href="/docs/0.16.0-incubating/dependencies/zookeeper.html">ZooKeeper</a> page.</p>
+<h2><a class="anchor" aria-hidden="true" id="architecture-diagram"></a><a href="#architecture-diagram" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>The following diagram shows how queries and data flow through this architecture, using the suggested Master/Query/Data server organization:</p>
+<p><img src="../assets/druid-architecture.png" width="800"/></p>
+<h2><a class="anchor" aria-hidden="true" id="storage-design"></a><a href="#storage-design" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<h3><a class="anchor" aria-hidden="true" id="datasources-and-segments"></a><a href="#datasources-and-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>Druid data is stored in &quot;datasources&quot;, which are similar to tables in a traditional RDBMS. Each datasource is
+partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a &quot;chunk&quot; (for
+example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more
+<a href="/docs/0.16.0-incubating/design/segments.html">&quot;segments&quot;</a>. Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
+organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:</p>
+<p><img src="../assets/druid-timeline.png" width="800" /></p>
+<p>A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each
+segment starts its life on a MiddleManager and, at that point, is mutable and uncommitted. The segment
+building process includes the following steps, designed to produce a data file that is compact and supports fast
+queries:</p>
+<ul>
+<li>Conversion to columnar format</li>
+<li>Indexing with bitmap indexes</li>
+<li>Compression using various algorithms
+<ul>
+<li>Dictionary encoding with id storage minimization for String columns</li>
+<li>Bitmap compression for bitmap indexes</li>
+<li>Type-aware compression for all columns</li>
+</ul></li>
+</ul>
+<p>Periodically, segments are committed and published. At this point, they are written to <a href="#deep-storage">deep storage</a>,
+become immutable, and move from MiddleManagers to the Historical processes. An entry about the segment is also written
+to the <a href="#metadata-storage">metadata store</a>. This entry is a self-describing bit of metadata about the segment, including
+things like the schema of the segment, its size, and its location on deep storage. These entries are what the
+Coordinator uses to know what data <em>should</em> be available on the cluster.</p>
+<p>For details on the segment file format, please see <a href="segments.html">segment files</a>.</p>
+<p>For details on modeling your data in Druid, see <a href="/docs/0.16.0-incubating/ingestion/schema-design.html">schema design</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="indexing-and-handoff"></a><a href="#indexing-and-handoff" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p><em>Indexing</em> is the mechanism by which new segments are created, and <em>handoff</em> is the mechanism by which they are published
+and begin being served by Historical processes. The mechanism works like this on the indexing side:</p>
+<ol>
+<li>An <em>indexing task</em> starts running and building a new segment. It must determine the identifier of the segment before
+it starts building it. For a task that is appending (like a Kafka task, or an index task in append mode) this will be
+done by calling an &quot;allocate&quot; API on the Overlord to potentially add a new partition to an existing set of segments. For
+a task that is overwriting (like a Hadoop task, or an index task <em>not</em> in append mode) this is done by locking an
+interval and creating a new version number and new set of segments.</li>
+<li>If the indexing task is a realtime task (like a Kafka task) then the segment is immediately queryable at this point.
+It's available, but unpublished.</li>
+<li>When the indexing task has finished reading data for the segment, it pushes it to deep storage and then publishes it
+by writing a record into the metadata store.</li>
+<li>If the indexing task is a realtime task, at this point it waits for a Historical process to load the segment. If the
+indexing task is not a realtime task, it exits immediately.</li>
+</ol>
+<p>And like this on the Coordinator / Historical side:</p>
+<ol>
+<li>The Coordinator polls the metadata store periodically (by default, every 1 minute) for newly published segments.</li>
+<li>When the Coordinator finds a segment that is published and used, but unavailable, it chooses a Historical process
+to load that segment and instructs that Historical to do so.</li>
+<li>The Historical loads the segment and begins serving it.</li>
+<li>At this point, if the indexing task was waiting for handoff, it will exit.</li>
+</ol>
+<h3><a class="anchor" aria-hidden="true" id="segment-identifiers"></a><a href="#segment-identifiers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>Segments all have a four-part identifier with the following components:</p>
+<ul>
+<li>Datasource name.</li>
+<li>Time interval (for the time chunk containing the segment; this corresponds to the <code>segmentGranularity</code> specified
+at ingestion time).</li>
+<li>Version number (generally an ISO8601 timestamp corresponding to when the segment set was first started).</li>
+<li>Partition number (an integer, unique within a datasource+interval+version; may not necessarily be contiguous).</li>
+</ul>
+<p>For example, this is the identifier for a segment in datasource <code>clarity-cloud0</code>, time chunk
+<code>2018-05-21T16:00:00.000Z/2018-05-21T17:00:00.000Z</code>, version <code>2018-05-21T15:56:09.909Z</code>, and partition number 1:</p>
+<pre><code class="hljs">clarity-cloud0_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T16:<span class="hljs-number">00</span>:<span class="hljs-number">00.000</span>Z_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T17:<span class="hljs-number">00</span>:<span class="hljs-number">00.000</span>Z_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T15:<span class="hljs-number">56</span>:<span class="hljs-numbe [...]
+</code></pre>
+<p>Segments with partition number 0 (the first partition in a chunk) omit the partition number, like the following
+example, which is a segment in the same time chunk as the previous one, but with partition number 0 instead of 1:</p>
+<pre><code class="hljs">clarity-cloud0_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T16:<span class="hljs-number">00</span>:<span class="hljs-number">00.000</span>Z_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T17:<span class="hljs-number">00</span>:<span class="hljs-number">00.000</span>Z_2018<span class="hljs-number">-05</span><span class="hljs-number">-21</span>T15:<span class="hljs-number">56</span>:<span class="hljs-numbe [...]
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="segment-versioning"></a><a href="#segment-versioning" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>You may be wondering what the &quot;version number&quot; described in the previous section is for. Or, you might not be, in which
+case good for you and you can skip this section!</p>
+<p>It's there to support batch-mode overwriting. In Druid, if all you ever do is append data, then there will be just a
+single version for each time chunk. But when you overwrite data, what happens behind the scenes is that a new set of
+segments is created with the same datasource, same time interval, but a higher version number. This is a signal to the
+rest of the Druid system that the older version should be removed from the cluster, and the new version should replace
+it.</p>
+<p>The switch appears to happen instantaneously to a user, because Druid handles this by first loading the new data (but
+not allowing it to be queried), and then, as soon as the new data is all loaded, switching all new queries to use those
+new segments. Then it drops the old segments a few minutes later.</p>
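+<p>For instance, if the hour chunk shown above were later overwritten by a batch task, a new segment set would be created
+with the same datasource and interval but a newer version. The identifiers below are only illustrative; the actual new
+version string depends on when the overwrite ran:</p>
+<pre><code class="hljs">clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-05-21T15:56:09.909Z   (older version)
+clarity-cloud0_2018-05-21T16:00:00.000Z_2018-05-21T17:00:00.000Z_2018-06-01T10:00:00.000Z   (newer version, replaces the older one)
+</code></pre>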
+<h3><a class="anchor" aria-hidden="true" id="segment-lifecycle"></a><a href="#segment-lifecycle" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>Each segment has a lifecycle that involves the following three major areas:</p>
+<ol>
+<li><strong>Metadata store:</strong> Segment metadata (a small JSON payload generally no more than a few KB) is stored in the
+<a href="/docs/0.16.0-incubating/dependencies/metadata-storage.html">metadata store</a> once a segment is done being constructed. The act of inserting
+a record for a segment into the metadata store is called <em>publishing</em>. These metadata records have a boolean flag
+named <code>used</code>, which controls whether the segment is intended to be queryable or not. Segments created by realtime tasks will be
+available before they are published, since they are only published when the segment is complete and will not accept
+any additional rows of data.</li>
+<li><strong>Deep storage:</strong> Segment data files are pushed to deep storage once a segment is done being constructed. This
+happens immediately before publishing metadata to the metadata store.</li>
+<li><strong>Availability for querying:</strong> Segments are available for querying on some Druid data server, like a realtime task
+or a Historical process.</li>
+</ol>
+<p>You can inspect the state of currently active segments using the Druid SQL
+<a href="/docs/0.16.0-incubating/querying/sql.html#segments-table"><code>sys.segments</code> table</a>. It includes the following flags:</p>
+<ul>
+<li><code>is_published</code>: True if segment metadata has been published to the metadata store and <code>used</code> is true.</li>
+<li><code>is_available</code>: True if the segment is currently available for querying, either on a realtime task or Historical
+process.</li>
+<li><code>is_realtime</code>: True if the segment is <em>only</em> available on realtime tasks. For datasources that use realtime ingestion,
+this will generally start off <code>true</code> and then become <code>false</code> as the segment is published and handed off.</li>
+<li><code>is_overshadowed</code>: True if the segment is published (with <code>used</code> set to true) and is fully overshadowed by some other
+published segments. Generally this is a transient state, and segments in this state will soon have their <code>used</code> flag
+automatically set to false.</li>
+</ul>
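+<p>For example, a Druid SQL query along the following lines shows these flags for every segment of a single datasource.
+The column names follow the <code>sys.segments</code> documentation linked above, and the datasource name is only a placeholder:</p>
+<pre><code class="hljs">SELECT segment_id, is_published, is_available, is_realtime, is_overshadowed
+FROM sys.segments
+WHERE datasource = 'clarity-cloud0'
+</code></pre>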
+<h2><a class="anchor" aria-hidden="true" id="query-processing"></a><a href="#query-processing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>Queries first enter the <a href="/docs/0.16.0-incubating/design/broker.html">Broker</a>, where the Broker will identify which segments have data that may pertain to that query.
+The list of segments is always pruned by time, and may also be pruned by other attributes depending on how your
+datasource is partitioned. The Broker will then identify which <a href="/docs/0.16.0-incubating/design/historical.html">Historicals</a> and
+<a href="/docs/0.16.0-incubating/design/middlemanager.html">MiddleManagers</a> are serving those segments and send a rewritten subquery to each of those processes. The Historical/MiddleManager processes will take in the
+queries, process them and return results. The Broker receives results and merges them together to get the final answer,
+which it returns to the original caller.</p>
+<p>Broker pruning is an important way that Druid limits the amount of data that must be scanned for each query, but it is
+not the only way. For filters at a more granular level than what the Broker can use for pruning, indexing structures
+inside each segment allow Druid to figure out which (if any) rows match the filter set before looking at any row of
+data. Once Druid knows which rows match a particular query, it only accesses the specific columns it needs for that
+query. Within those columns, Druid can skip from row to row, avoiding reading data that doesn't match the query filter.</p>
+<p>So Druid uses three different techniques to maximize query performance:</p>
+<ul>
+<li>Pruning which segments are accessed for each query.</li>
+<li>Within each segment, using indexes to identify which rows must be accessed.</li>
+<li>Within each segment, only reading the specific rows and columns that are relevant to a particular query.</li>
+</ul>
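+<p>As a rough illustration of time-based pruning, a native query such as the sketch below is only routed to segments whose
+time chunks overlap the requested interval. The datasource, dimension, and interval used here are hypothetical:</p>
+<pre><code class="hljs">{
+  "queryType": "timeseries",
+  "dataSource": "clarity-cloud0",
+  "intervals": ["2018-05-21T16:00:00.000Z/2018-05-21T17:00:00.000Z"],
+  "granularity": "hour",
+  "filter": { "type": "selector", "dimension": "service", "value": "historical" },
+  "aggregations": [ { "type": "count", "name": "rows" } ]
+}
+</code></pre>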
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/tutorials/tutorial-kerberos-hadoop.html"><span class="arrow-prev">← </span><span>Kerberized HDFS deep storage</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/segments.html"><span>Segments</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#processes-and-servers">Processes and Se [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/auth.html b/docs/0.16.0-incubating/design/auth.html
new file mode 100644
index 0000000..eecc93c
--- /dev/null
+++ b/docs/0.16.0-incubating/design/auth.html
@@ -0,0 +1,174 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Authentication and Authorization · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Authentication and Authorization · Apache Druid"/><meta property="og:type" content="website"/> [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Authentication and Authorization</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>This document describes non-extension specific Apache Druid (incubating) authentication and authorization configurations.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Type</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.auth.authenticatorChain</code></td><td>JSON List of Strings</td><td>List of Authenticator type names</td><td>[&quot;allowAll&quot;]</td><td>no</td></tr>
+<tr><td><code>druid.escalator.type</code></td><td>String</td><td>Type of the Escalator that should be used for internal Druid communications. This Escalator must use an authentication scheme that is supported by an Authenticator in <code>druid.auth.authenticatorChain</code>.</td><td>&quot;noop&quot;</td><td>no</td></tr>
+<tr><td><code>druid.auth.authorizers</code></td><td>JSON List of Strings</td><td>List of Authorizer type names</td><td>[&quot;allowAll&quot;]</td><td>no</td></tr>
+<tr><td><code>druid.auth.unsecuredPaths</code></td><td>List of Strings</td><td>List of paths for which security checks will not be performed. All requests to these paths will be allowed.</td><td>[]</td><td>no</td></tr>
+<tr><td><code>druid.auth.allowUnauthenticatedHttpOptions</code></td><td>Boolean</td><td>If true, skip authentication checks for HTTP OPTIONS requests. This is needed for certain use cases, such as supporting CORS pre-flight requests. Note that disabling authentication checks for OPTIONS requests will allow unauthenticated users to determine what Druid endpoints are valid (by checking if the OPTIONS request returns a 200 instead of 404), so enabling this option may reveal information abou [...]
+</tbody>
+</table>
+<h2><a class="anchor" aria-hidden="true" id="enabling-authentication-authorizationloadinglookuptest"></a><a href="#enabling-authentication-authorizationloadinglookuptest" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2  [...]
+<h2><a class="anchor" aria-hidden="true" id="authenticator-chain"></a><a href="#authenticator-chain" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>Authentication decisions are handled by a chain of Authenticator instances. A request will be checked by Authenticators in the sequence defined by the <code>druid.auth.authenticatorChain</code>.</p>
+<p>Authenticator implementations are provided by extensions.</p>
+<p>For example, the following authentication chain definition enables the Kerberos and HTTP Basic authenticators, from the <code>druid-kerberos</code> and <code>druid-basic-security</code> core extensions, respectively:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.auth</span><span class="hljs-selector-class">.authenticatorChain</span>=[<span class="hljs-string">"kerberos"</span>, <span class="hljs-string">"basic"</span>]
+</code></pre>
+<p>A request will pass through all Authenticators in the chain, until one of the Authenticators successfully authenticates the request or sends an HTTP error response. Authenticators later in the chain will be skipped after the first successful authentication or if the request is terminated with an error response.</p>
+<p>If no Authenticator in the chain successfully authenticates a request or sends an HTTP error response, an HTTP error response will be sent at the end of the chain.</p>
+<p>Druid includes two built-in Authenticators, one of which is used for the default unsecured configuration.</p>
+<h3><a class="anchor" aria-hidden="true" id="allowall-authenticator"></a><a href="#allowall-authenticator" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>This built-in Authenticator authenticates all requests, and always directs them to an Authorizer named &quot;allowAll&quot;. It is not intended to be used for anything other than the default unsecured configuration.</p>
+<h3><a class="anchor" aria-hidden="true" id="anonymous-authenticator"></a><a href="#anonymous-authenticator" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>This built-in Authenticator authenticates all requests, and directs them to an Authorizer specified in the configuration by the user. It is intended to be used for adding a default level of access, so
+the Anonymous Authenticator should be added to the end of the authentication chain. A request that reaches the Anonymous Authenticator at the end of the chain will succeed or fail depending on how the Authorizer linked to the Anonymous Authenticator is configured.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th><th>Required</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.auth.authenticator.&lt;authenticatorName&gt;.authorizerName</code></td><td>Authorizer that requests should be directed to.</td><td>N/A</td><td>Yes</td></tr>
+<tr><td><code>druid.auth.authenticator.&lt;authenticatorName&gt;.identity</code></td><td>The identity of the requester.</td><td>defaultUser</td><td>No</td></tr>
+</tbody>
+</table>
+<p>To use the Anonymous Authenticator, add an authenticator with type <code>anonymous</code> to the authenticatorChain.</p>
+<p>For example, the following enables the Anonymous Authenticator with the <code>druid-basic-security</code> extension:</p>
+<pre><code class="hljs"><span class="hljs-attr">druid.auth.authenticatorChain</span>=[<span class="hljs-string">"basic"</span>, <span class="hljs-string">"anonymous"</span>]
+
+<span class="hljs-attr">druid.auth.authenticator.anonymous.type</span>=anonymous
+<span class="hljs-attr">druid.auth.authenticator.anonymous.identity</span>=defaultUser
+<span class="hljs-attr">druid.auth.authenticator.anonymous.authorizerName</span>=myBasicAuthorizer
+
+<span class="hljs-comment"># ... usual configs for basic authentication would go here ...</span>
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="escalator"></a><a href="#escalator" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.6 [...]
+<p>The <code>druid.escalator.type</code> property determines what authentication scheme should be used for internal Druid cluster communications (such as when a Broker process communicates with Historical processes for query processing).</p>
+<p>The Escalator chosen for this property must use an authentication scheme that is supported by an Authenticator in <code>druid.auth.authenticatorChain</code>. Authenticator extension implementors must also provide a corresponding Escalator implementation if they intend to use a particular authentication scheme for internal Druid communications.</p>
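+<p>For example, a cluster that secures internal traffic with the <code>druid-basic-security</code> extension might configure
+an Escalator roughly as follows. This is only a sketch: the exact property names are defined by that extension's
+documentation, and the username, password, and authorizer name shown here are placeholders:</p>
+<pre><code class="hljs">druid.escalator.type=basic
+druid.escalator.internalClientUsername=druid_system
+druid.escalator.internalClientPassword=changeme
+druid.escalator.authorizerName=myBasicAuthorizer
+</code></pre>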
+<h3><a class="anchor" aria-hidden="true" id="noop-escalator"></a><a href="#noop-escalator" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>This built-in default Escalator is intended for use only with the default AllowAll Authenticator and Authorizer.</p>
+<h2><a class="anchor" aria-hidden="true" id="authorizers"></a><a href="#authorizers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>Authorization decisions are handled by an Authorizer. The <code>druid.auth.authorizers</code> property determines what Authorizer implementations will be active.</p>
+<p>There are two built-in Authorizers, &quot;default&quot; and &quot;noop&quot;. Other implementations are provided by extensions.</p>
+<p>For example, the following authorizers definition enables the &quot;basic&quot; implementation from <code>druid-basic-security</code>:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.auth</span><span class="hljs-selector-class">.authorizers</span>=[<span class="hljs-string">"basic"</span>]
+</code></pre>
+<p>Only a single Authorizer will authorize any given request.</p>
+<p>Druid includes one built-in Authorizer:</p>
+<h3><a class="anchor" aria-hidden="true" id="allowall-authorizer"></a><a href="#allowall-authorizer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>The Authorizer with type name &quot;allowAll&quot; accepts all requests.</p>
+<h2><a class="anchor" aria-hidden="true" id="default-unsecured-configuration"></a><a href="#default-unsecured-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 [...]
+<p>When <code>druid.auth.authenticatorChain</code> is left empty or unspecified, Druid will create an authentication chain with a single AllowAll Authenticator named &quot;allowAll&quot;.</p>
+<p>When <code>druid.auth.authorizers</code> is left empty or unspecified, Druid will create a single AllowAll Authorizer named &quot;allowAll&quot;.</p>
+<p>The default value of <code>druid.escalator.type</code> is &quot;noop&quot; to match the default unsecured Authenticator/Authorizer configurations.</p>
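+<p>In other words, leaving these properties unset is roughly equivalent to the following explicit configuration:</p>
+<pre><code class="hljs">druid.auth.authenticatorChain=["allowAll"]
+druid.auth.authorizers=["allowAll"]
+druid.escalator.type=noop
+</code></pre>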
+<h2><a class="anchor" aria-hidden="true" id="authenticator-to-authorizer-routing"></a><a href="#authenticator-to-authorizer-routing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2 [...]
+<p>When an Authenticator successfully authenticates a request, it must attach an AuthenticationResult to the request, containing information about the identity of the requester, as well as the name of the Authorizer that should authorize the authenticated request.</p>
+<p>An Authenticator implementation should provide some means through configuration to allow users to select what Authorizer(s) the Authenticator should route requests to.</p>
+<h2><a class="anchor" aria-hidden="true" id="internal-system-user"></a><a href="#internal-system-user" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>Internal requests between Druid processes (non-user initiated communications) need to have authentication credentials attached.</p>
+<p>These requests should be run as an &quot;internal system user&quot;, an identity that represents the Druid cluster itself, with full access permissions.</p>
+<p>The details of how the internal system user is defined are left to extension implementations.</p>
+<h3><a class="anchor" aria-hidden="true" id="authorizer-internal-system-user-handling"></a><a href="#authorizer-internal-system-user-handling" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0  [...]
+<p>Authorizer implementations must recognize and authorize an identity for the &quot;internal system user&quot;, with full access permissions.</p>
+<h3><a class="anchor" aria-hidden="true" id="authenticator-and-escalator-internal-system-user-handling"></a><a href="#authenticator-and-escalator-internal-system-user-handling" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1 [...]
+<p>An Authenticator implementation that is intended to support internal Druid communications must recognize credentials for the &quot;internal system user&quot;, as provided by a corresponding Escalator implementation.</p>
+<p>An Escalator must implement three methods related to the internal system user:</p>
+<pre><code class="hljs css language-java">  <span class="hljs-function"><span class="hljs-keyword">public</span> HttpClient <span class="hljs-title">createEscalatedClient</span><span class="hljs-params">(HttpClient baseClient)</span></span>;
+
+  <span class="hljs-keyword">public</span> org.eclipse.jetty.client.<span class="hljs-function">HttpClient <span class="hljs-title">createEscalatedJettyClient</span><span class="hljs-params">(org.eclipse.jetty.client.HttpClient baseClient)</span></span>;
+
+  <span class="hljs-function"><span class="hljs-keyword">public</span> AuthenticationResult <span class="hljs-title">createEscalatedAuthenticationResult</span><span class="hljs-params">()</span></span>;
+</code></pre>
+<p><code>createEscalatedClient</code> returns a wrapped HttpClient that attaches the credentials of the &quot;internal system user&quot; to requests.</p>
+<p><code>createEscalatedJettyClient</code> is similar to <code>createEscalatedClient</code>, except that it operates on a Jetty HttpClient.</p>
+<p><code>createEscalatedAuthenticationResult</code> returns an AuthenticationResult containing the identity of the &quot;internal system user&quot;.</p>
+<h2><a class="anchor" aria-hidden="true" id="reserved-name-configuration-property"></a><a href="#reserved-name-configuration-property" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 [...]
+<p>For extension implementers, please note that the following configuration properties are reserved for the names of Authenticators and Authorizers:</p>
+<pre><code class="hljs">druid.auth.authenticator.&lt;authenticator-<span class="hljs-built_in">name</span>&gt;.<span class="hljs-built_in">name</span>=&lt;authenticator-<span class="hljs-built_in">name</span>&gt;
+druid.auth.authorizer.&lt;authorizer-<span class="hljs-built_in">name</span>&gt;.<span class="hljs-built_in">name</span>=&lt;authorizer-<span class="hljs-built_in">name</span>&gt;
+
+</code></pre>
+<p>These properties provide the authenticator and authorizer names to the implementations as @JsonProperty parameters, potentially useful when multiple authenticators or authorizers of the same type are configured.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/comparisons/druid-vs-sql-on-hadoop.html"><span class="arrow-prev">← </span><span>Apache Druid vs SQL-on-Hadoop</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/broker.html"><span>Broker</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#enabling-authentication-authorizationloadi [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/broker.html b/docs/0.16.0-incubating/design/broker.html
new file mode 100644
index 0000000..8a828f1
--- /dev/null
+++ b/docs/0.16.0-incubating/design/broker.html
@@ -0,0 +1,96 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Broker · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Broker · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid.apach [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Broker</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) Broker Process Configuration, see <a href="../configuration/index.html#broker">Broker Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the Broker, see <a href="../operations/api-reference.html#broker">Broker API</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="overview"></a><a href="#overview" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>The Broker is the process to route queries to if you want to run a distributed cluster. It understands the metadata published to ZooKeeper about what segments exist on what processes and routes queries such that they hit the right processes. This process also merges the result sets from all of the individual processes together.
+On startup, Historical processes announce themselves and the segments they are serving in ZooKeeper.</p>
+<h3><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>broker
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="forwarding-queries"></a><a href="#forwarding-queries" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid <a href="/docs/0.16.0-incubating/design/segments.html">Segments</a> are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one  [...]
+<p>To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about <a href="/docs/0.16.0-incubating/design/historical.html">Historical</a> and streaming ingestion <a href="/docs/0.16.0-incubating/design/peons.html">Peon</a> processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve t [...]
+<h3><a class="anchor" aria-hidden="true" id="caching"></a><a href="#caching" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<p>Broker processes employ a cache with an LRU cache invalidation strategy. The Broker cache stores per-segment results. The cache can be local to each Broker process or shared across multiple processes using an external distributed cache such as <a href="http://memcached.org/">memcached</a>. Each time a Broker process receives a query, it first maps the query to a set of segments. A subset of these segment results may already exist in the cache and the results can be directly pulled from [...]
+Historical processes. Once the Historical processes return their results, the Broker will store those results in the cache. Real-time segments are never cached and hence requests for real-time data will always be forwarded to real-time processes. Real-time data is perpetually changing and caching the results would be unreliable.</p>
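+<p>As a rough sketch, per-segment caching on the Broker is controlled with properties along the lines of the following
+(see the Broker configuration reference for the authoritative list); the memcached hosts shown are placeholders:</p>
+<pre><code class="hljs">druid.broker.cache.useCache=true
+druid.broker.cache.populateCache=true
+druid.cache.type=memcached
+druid.cache.hosts=memcached1:11211,memcached2:11211
+</code></pre>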
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/auth.html"><span class="arrow-prev">← </span><span>Authentication and Authorization</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/coordinator.html"><span>Coordinator Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="contain [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/concepts-and-terminology.html b/docs/0.16.0-incubating/design/concepts-and-terminology.html
new file mode 100644
index 0000000..eed33a4
--- /dev/null
+++ b/docs/0.16.0-incubating/design/concepts-and-terminology.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="index.html">
+<meta http-equiv="refresh" content="0; url=index.html">
+<h1>Redirecting...</h1>
+<a href="index.html">Click here if you are not redirected.</a>
+<script>location="index.html"</script>
diff --git a/docs/0.16.0-incubating/design/coordinator.html b/docs/0.16.0-incubating/design/coordinator.html
new file mode 100644
index 0000000..000b78f
--- /dev/null
+++ b/docs/0.16.0-incubating/design/coordinator.html
@@ -0,0 +1,148 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Coordinator Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Coordinator Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" co [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Coordinator Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) Coordinator Process Configuration, see <a href="../configuration/index.html#coordinator">Coordinator Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the Coordinator, see <a href="../operations/api-reference.html#coordinator">Coordinator API</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="overview"></a><a href="#overview" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>The Druid Coordinator process is primarily responsible for segment management and distribution. More specifically, the Druid Coordinator process communicates to Historical processes to load or drop segments based on configurations. The Druid Coordinator is responsible for loading new segments, dropping outdated segments, managing segment replication, and balancing segment load.</p>
+<p>The Druid Coordinator runs periodically and the time between each run is a configurable parameter. Each time the Druid Coordinator runs, it assesses the current state of the cluster before deciding on the appropriate actions to take. Similar to the Broker and Historical processes, the Druid Coordinator maintains a connection to a Zookeeper cluster for current cluster information. The Coordinator also maintains a connection to a database containing information about available segments [...]
+<p>Before any unassigned segments are serviced by Historical processes, the available Historical processes for each tier are first sorted in terms of capacity, with least capacity servers having the highest priority. Unassigned segments are always assigned to the processes with least capacity to maintain a level of balance between processes. The Coordinator does not directly communicate with a historical process when assigning it a new segment; instead the Coordinator creates some tempor [...]
+<h3><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>coordinator
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="rules"></a><a href="#rules" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09 [...]
+<p>Segments can be automatically loaded and dropped from the cluster based on a set of rules. For more information on rules, see <a href="/docs/0.16.0-incubating/operations/rule-configuration.html">Rule Configuration</a>.</p>
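+<p>For instance, a simple retention policy might keep one month of data at two replicas in the default tier and drop
+everything older. The rule JSON below is only a sketch with placeholder values; the authoritative rule format is
+described on the Rule Configuration page linked above:</p>
+<pre><code class="hljs">[
+  { "type": "loadByPeriod", "period": "P1M", "tieredReplicants": { "_default_tier": 2 } },
+  { "type": "dropForever" }
+]
+</code></pre>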
+<h3><a class="anchor" aria-hidden="true" id="cleaning-up-segments"></a><a href="#cleaning-up-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>On each run, the Druid Coordinator compares the list of available segments in the metadata store with the segments currently being served in the cluster. Segments that are being served in the cluster but are missing from the metadata store are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.</p>
+<h3><a class="anchor" aria-hidden="true" id="segment-availability"></a><a href="#segment-availability" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>If a Historical process restarts or becomes unavailable for any reason, the Druid Coordinator will notice a process has gone missing and treat all segments served by that process as being dropped. Given a sufficient period of time, the segments may be reassigned to other Historical processes in the cluster. However, each segment that is dropped is not immediately forgotten. Instead, there is a transitional data structure that stores all dropped segments with an associated lifetime. Th [...]
+<h3><a class="anchor" aria-hidden="true" id="balancing-segment-load"></a><a href="#balancing-segment-load" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>To ensure an even distribution of segments across Historical processes in the cluster, the Coordinator process will find the total size of all segments being served by every Historical process each time the Coordinator runs. For every Historical process tier in the cluster, the Coordinator process will determine the Historical process with the highest utilization and the Historical process with the lowest utilization. The percent difference in utilization between the two processes is  [...]
+<h3><a class="anchor" aria-hidden="true" id="compacting-segments"></a><a href="#compacting-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>On each run, the Druid Coordinator compacts small segments that abut each other. This is useful when you have many small
+segments, which can degrade query performance and increase disk space usage. See <a href="/docs/0.16.0-incubating/operations/segment-optimization.html">Segment Size Optimization</a> for details.</p>
+<p>The Coordinator first finds the segments to compact together based on the <a href="#segment-search-policy">segment search policy</a>.
+Once some segments are found, it launches a <a href="/docs/0.16.0-incubating/ingestion/tasks.html#compact">compaction task</a> to compact those segments.
+The maximum number of running compaction tasks is <code>min(sum of worker capacity * slotRatio, maxSlots)</code>.
+Note that even if <code>min(sum of worker capacity * slotRatio, maxSlots)</code> is 0, at least one compaction task is always submitted
+if compaction is enabled for a dataSource.
+See <a href="../operations/api-reference.html#compaction-configuration">Compaction Configuration API</a> and <a href="../configuration/index.html#compaction-dynamic-configuration">Compaction Configuration</a> to enable the compaction.</p>
+<p>Compaction tasks might fail for the following reasons:</p>
+<ul>
+<li>If the input segments of a compaction task are removed or overshadowed before it starts, that compaction task fails immediately.</li>
+<li>If a task of a higher priority acquires a lock for an interval overlapping with the interval of a compaction task, the compaction task fails.</li>
+</ul>
+<p>Once a compaction task fails, the Coordinator simply finds the segments for the interval of the failed task again, and launches a new compaction task in the next run.</p>
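+<p>For example, automatic compaction for a hypothetical datasource could be enabled by submitting a compaction
+configuration to the Coordinator. The endpoint and field names here follow the Compaction Configuration API referenced
+above, and the host and values are placeholders:</p>
+<pre><code class="hljs">curl -X POST -H 'Content-Type: application/json' \
+  http://COORDINATOR_IP:8081/druid/coordinator/v1/config/compaction \
+  -d '{"dataSource": "wikipedia", "inputSegmentSizeBytes": 419430400, "skipOffsetFromLatest": "PT1H"}'
+</code></pre>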
+<h3><a class="anchor" aria-hidden="true" id="segment-search-policy"></a><a href="#segment-search-policy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<h4><a class="anchor" aria-hidden="true" id="newest-segment-first-policy"></a><a href="#newest-segment-first-policy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 1 [...]
+<p>At every coordinator run, this policy searches for segments to compact by iterating segments from the latest to the oldest.
+Once it finds the latest segment among all dataSources, it checks if the segment is <em>compactible</em> with other segments of the same dataSource which have the same or abutting intervals.
+Note that segments are compactible if their total size is smaller than or equal to the configured <code>inputSegmentSizeBytes</code>.</p>
+<p>Here are some details with an example. Let us assume we have two dataSources (<code>foo</code>, <code>bar</code>)
+and 5 segments (<code>foo_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION</code>, <code>foo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION</code>, <code>bar_2017-08-01T00:00:00.000Z_2017-09-01T00:00:00.000Z_VERSION</code>, <code>bar_2017-09-01T00:00:00.000Z_2017-10-01T00:00:00.000Z_VERSION</code>, <code>bar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION</code>).
+When each segment has the same size of 10 MB and <code>inputSegmentSizeBytes</code> is 20 MB, this policy first returns two segments (<code>foo_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION</code> and <code>foo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION</code>) to compact together because
+<code>foo_2017-11-01T00:00:00.000Z_2017-12-01T00:00:00.000Z_VERSION</code> is the latest segment and <code>foo_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION</code> abuts it.</p>
+<p>If the Coordinator has enough task slots for compaction, this policy continues searching for the next segments and returns
+<code>bar_2017-10-01T00:00:00.000Z_2017-11-01T00:00:00.000Z_VERSION</code> and <code>bar_2017-09-01T00:00:00.000Z_2017-10-01T00:00:00.000Z_VERSION</code>.
+Note that <code>bar_2017-08-01T00:00:00.000Z_2017-09-01T00:00:00.000Z_VERSION</code> is not compacted with them even though it abuts <code>bar_2017-09-01T00:00:00.000Z_2017-10-01T00:00:00.000Z_VERSION</code>,
+because including it would make the total segment size to compact greater than <code>inputSegmentSizeBytes</code>.</p>
+<p>The search start point can be changed by setting <a href="../configuration/index.html#compaction-dynamic-configuration">skipOffsetFromLatest</a>.
+If this is set, the policy ignores segments that fall within the interval between (the end time of the most recent segment minus <code>skipOffsetFromLatest</code>) and the end time of the most recent segment.
+This avoids conflicts between compaction tasks and realtime tasks:
+realtime tasks have a higher priority than compaction tasks by default, and they will revoke the locks of compaction tasks whose intervals overlap their own, terminating those compaction tasks.</p>
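+<p>For example, adding the following field to the per-dataSource compaction configuration sketched earlier would make the policy skip roughly the most recent day of data (the <code>P1D</code> value is an illustrative assumption, not a recommendation):</p>
+<pre><code class="hljs">"skipOffsetFromLatest": "P1D"
+</code></pre>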
+<blockquote>
+<p>This policy currently cannot handle the situation when there are a lot of small segments which have the same interval,
+and their total size exceeds <a href="/docs/0.16.0-incubating/configuration/index.html#compaction-dynamic-configuration">inputSegmentSizeBytes</a>.
+If it finds such segments, it simply skips them.</p>
+</blockquote>
+<h3><a class="anchor" aria-hidden="true" id="the-coordinator-console"></a><a href="#the-coordinator-console" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>The Druid Coordinator exposes a web GUI for displaying cluster information and rule configuration. For more details, please see <a href="../operations/management-uis.html#coordinator-consoles">coordinator console</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="faq"></a><a href="#faq" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.2 [...]
+<ol>
+<li><p><strong>Do clients ever contact the Coordinator process?</strong></p>
+<p>The Coordinator is not involved in a query.</p>
+<p>Historical processes never directly contact the Coordinator process. The Druid Coordinator tells the Historical processes to load/drop data via Zookeeper, but the Historical processes are completely unaware of the Coordinator.</p>
+<p>Brokers also never contact the Coordinator. Brokers base their understanding of the data topology on metadata exposed by the Historical processes via ZK and are completely unaware of the Coordinator.</p></li>
+<li><p><strong>Does it matter if the Coordinator process starts up before or after other processes?</strong></p>
+<p>No. If the Druid Coordinator is not started up, no new segments will be loaded in the cluster and outdated segments will not be dropped. However, the Coordinator process can be started up at any time, and after a configurable delay, will start running Coordinator tasks.</p>
+<p>This also means that if you have a working cluster and all of your Coordinators die, the cluster will continue to function; it just won’t experience any changes to its data topology.</p></li>
+</ol>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/broker.html"><span class="arrow-prev">← </span><span>Broker</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/historical.html"><span>Historical Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div class="text-cente [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/design.html b/docs/0.16.0-incubating/design/design.html
new file mode 100644
index 0000000..eed33a4
--- /dev/null
+++ b/docs/0.16.0-incubating/design/design.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="index.html">
+<meta http-equiv="refresh" content="0; url=index.html">
+<h1>Redirecting...</h1>
+<a href="index.html">Click here if you are not redirected.</a>
+<script>location="index.html"</script>
diff --git a/docs/0.16.0-incubating/design/historical.html b/docs/0.16.0-incubating/design/historical.html
new file mode 100644
index 0000000..0fb87ea
--- /dev/null
+++ b/docs/0.16.0-incubating/design/historical.html
@@ -0,0 +1,97 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Historical Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Historical Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" cont [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Historical Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) Historical Process Configuration, see <a href="../configuration/index.html#historical">Historical Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the Historical, please see the <a href="../operations/api-reference.html#historical">API reference</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>historical
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="loading-and-serving-segments"></a><a href="#loading-and-serving-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<p>Each Historical process maintains a constant connection to Zookeeper and watches a configurable set of Zookeeper paths for new segment information. Historical processes do not communicate directly with each other or with the Coordinator processes but instead rely on Zookeeper for coordination.</p>
+<p>The <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a> process is responsible for assigning new segments to Historical processes. Assignment is done by creating an ephemeral Zookeeper entry under a load queue path associated with a Historical process. For more information on how the Coordinator assigns segments to Historical processes, please see <a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a>.</p>
+<p>When a Historical process notices a new load queue entry in its load queue path, it will first check a local disk directory (cache) for information about the segment. If no information about the segment exists in the cache, the Historical process will download metadata about the new segment to serve from Zookeeper. This metadata includes specifications about where the segment is located in deep storage and about how to decompress and process the segment. For more information about seg [...]
+<h3><a class="anchor" aria-hidden="true" id="loading-and-serving-segments-from-cache"></a><a href="#loading-and-serving-segments-from-cache" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2  [...]
+<p>Recall that when a Historical process notices a new segment entry in its load queue path, the Historical process first checks a configurable cache directory on its local disk to see if the segment had been previously downloaded. If a local cache entry already exists, the Historical process will directly read the segment binary files from disk and load the segment.</p>
+<p>The segment cache is also leveraged when a Historical process is first started. On startup, a Historical process will search through its cache directory and immediately load and serve all segments that are found. This feature allows Historical processes to be queried as soon as they come online.</p>
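+<p>As a sketch of how the cache directory described above is typically configured on a Historical (the path and sizes below are placeholder assumptions; see the Historical Configuration page linked above for the full set of options):</p>
+<pre><code class="hljs">druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":300000000000}]
+druid.server.maxSize=300000000000
+</code></pre>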
+<h3><a class="anchor" aria-hidden="true" id="querying-segments"></a><a href="#querying-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>Please see <a href="/docs/0.16.0-incubating/querying/querying.html">Querying</a> for more information on querying Historical processes.</p>
+<p>A Historical can be configured to log and report metrics for every query it services.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/coordinator.html"><span class="arrow-prev">← </span><span>Coordinator Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/indexer.html"><span>Indexer Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div class [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/index.html b/docs/0.16.0-incubating/design/index.html
new file mode 100644
index 0000000..fc2855e
--- /dev/null
+++ b/docs/0.16.0-incubating/design/index.html
@@ -0,0 +1,153 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Introduction to Apache Druid · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Introduction to Apache Druid · Apache Druid"/><meta property="og:type" content="website"/><meta pr [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Introduction to Apache Druid</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h2><a class="anchor" aria-hidden="true" id="what-is-druid"></a><a href="#what-is-druid" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>Apache Druid (incubating) is a real-time analytics database designed for fast slice-and-dice analytics
+(&quot;<a href="http://en.wikipedia.org/wiki/Online_analytical_processing">OLAP</a>&quot; queries) on large data sets. Druid is most often
+used as a database for powering use cases where real-time ingest, fast query performance, and high uptime are important.
+As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly-concurrent APIs
+that need fast aggregations. Druid works best with event-oriented data.</p>
+<p>Common application areas for Druid include:</p>
+<ul>
+<li>Clickstream analytics (web and mobile analytics)</li>
+<li>Network telemetry analytics (network performance monitoring)</li>
+<li>Server metrics storage</li>
+<li>Supply chain analytics (manufacturing metrics)</li>
+<li>Application performance metrics</li>
+<li>Digital marketing/advertising analytics</li>
+<li>Business intelligence / OLAP</li>
+</ul>
+<p>Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of
+Druid's key features are:</p>
+<ol>
+<li><strong>Columnar storage format.</strong> Druid uses column-oriented storage, meaning it only needs to load the exact columns
+needed for a particular query.  This gives a huge speed boost to queries that only hit a few columns. In addition, each
+column is stored optimized for its particular data type, which supports fast scans and aggregations.</li>
+<li><strong>Scalable distributed system.</strong> Druid is typically deployed in clusters of tens to hundreds of servers, and can
+offer ingest rates of millions of records/sec, retention of trillions of records, and query latencies of sub-second to a
+few seconds.</li>
+<li><strong>Massively parallel processing.</strong> Druid can process a query in parallel across the entire cluster.</li>
+<li><strong>Realtime or batch ingestion.</strong> Druid can ingest data either real-time (ingested data is immediately available for
+querying) or in batches.</li>
+<li><strong>Self-healing, self-balancing, easy to operate.</strong> As an operator, to scale the cluster out or in, simply add or
+remove servers and the cluster will rebalance itself automatically, in the background, without any downtime. If any
+Druid servers fail, the system will automatically route around the damage until those servers can be replaced. Druid
+is designed to run 24/7 with no need for planned downtimes for any reason, including configuration changes and software
+updates.</li>
+<li><strong>Cloud-native, fault-tolerant architecture that won't lose data.</strong> Once Druid has ingested your data, a copy is
+stored safely in <a href="architecture.html#deep-storage">deep storage</a> (typically cloud storage, HDFS, or a shared filesystem).
+Your data can be recovered from deep storage even if every single Druid server fails. For more limited failures affecting
+just a few Druid servers, replication ensures that queries are still possible while the system recovers.</li>
+<li><strong>Indexes for quick filtering.</strong> Druid uses <a href="https://arxiv.org/pdf/1004.0403">CONCISE</a> or
+<a href="https://roaringbitmap.org/">Roaring</a> compressed bitmap indexes to create indexes that power fast filtering and
+searching across multiple columns.</li>
+<li><strong>Time-based partitioning.</strong> Druid first partitions data by time, and can additionally partition based on other fields.
+This means time-based queries will only access the partitions that match the time range of the query. This leads to
+significant performance improvements for time-based data.</li>
+<li><strong>Approximate algorithms.</strong> Druid includes algorithms for approximate count-distinct, approximate ranking, and
+computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often
+substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also
+offers exact count-distinct and exact ranking.</li>
+<li><strong>Automatic summarization at ingest time.</strong> Druid optionally supports data summarization at ingestion time. This
+summarization partially pre-aggregates your data, and can lead to big cost savings and performance boosts.</li>
+</ol>
+<h2><a class="anchor" aria-hidden="true" id="when-should-i-use-druid"></a><a href="#when-should-i-use-druid" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>Druid is used by many companies of various sizes for many different use cases. Check out the
+<a href="/druid-powered">Powered by Apache Druid</a> page</p>
+<p>Druid is likely a good choice if your use case fits a few of the following descriptors:</p>
+<ul>
+<li>Insert rates are very high, but updates are less common.</li>
+<li>Most of your queries are aggregation and reporting queries (&quot;group by&quot; queries). You may also have searching and
+scanning queries.</li>
+<li>You are targeting query latencies of 100ms to a few seconds.</li>
+<li>Your data has a time component (Druid includes optimizations and design choices specifically related to time).</li>
+<li>You may have more than one table, but each query hits just one big distributed table. Queries may potentially hit more
+than one smaller &quot;lookup&quot; table.</li>
+<li>You have high cardinality data columns (e.g. URLs, user IDs) and need fast counting and ranking over them.</li>
+<li>You want to load data from Kafka, HDFS, flat files, or object storage like Amazon S3.</li>
+</ul>
+<p>Situations where you would likely <em>not</em> want to use Druid include:</p>
+<ul>
+<li>You need low-latency updates of <em>existing</em> records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done using
+background batch jobs).</li>
+<li>You are building an offline reporting system where query latency is not very important.</li>
+<li>You want to do &quot;big&quot; joins (joining one big fact table to another big fact table) and you are okay with these queries
+taking a long time to complete.</li>
+</ul>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-next button" href="/docs/0.16.0-incubating/tutorials/index.html"><span>Quickstart</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#what-is-druid">What is Druid?</a></li><li><a href="#when-should-i-use-druid">When should I use Druid?</a></li></ul></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div class="t [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/indexer.html b/docs/0.16.0-incubating/design/indexer.html
new file mode 100644
index 0000000..d3deb7e
--- /dev/null
+++ b/docs/0.16.0-incubating/design/indexer.html
@@ -0,0 +1,191 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Indexer Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Indexer Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="h [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Indexer Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<blockquote>
+<p>The Indexer is an optional and <a href="../development/experimental.html">experimental</a> feature.
+Its memory management system is still under development and will be significantly enhanced in later releases.</p>
+</blockquote>
+<p>The Apache Druid (incubating) Indexer process is an alternative to the MiddleManager + Peon task execution system. Instead of forking a separate JVM process per-task, the Indexer runs tasks as separate threads within a single JVM process.</p>
+<p>The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to better enable resource sharing across tasks.</p>
+<h2><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>indexer
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="task-resource-sharing"></a><a href="#task-resource-sharing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>The following resources are shared across all tasks running inside an Indexer process.</p>
+<h3><a class="anchor" aria-hidden="true" id="query-resources"></a><a href="#query-resources" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>The query processing threads and buffers are shared across all tasks. The Indexer will serve queries from a single endpoint shared by all tasks.</p>
+<p>If <a href="#indexer-caching">query caching</a> is enabled, the query cache is also shared across all tasks.</p>
+<h3><a class="anchor" aria-hidden="true" id="server-http-threads"></a><a href="#server-http-threads" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>The Indexer maintains two equally sized pools of HTTP threads.</p>
+<p>One pool is exclusively used for task control messages between the Overlord and the Indexer (&quot;chat handler threads&quot;). The other pool is used for handling all other HTTP requests.</p>
+<p>The size of each pool is configured by the <code>druid.server.http.numThreads</code> configuration (e.g., if this is set to 10, there will be 10 chat handler threads and 10 non-chat handler threads).</p>
+<p>In addition to these two pools, two separate threads are allocated for lookup handling. If lookups are not used, these threads will not be used.</p>
+<h3><a class="anchor" aria-hidden="true" id="memory-sharing"></a><a href="#memory-sharing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>The Indexer uses the <code>druid.worker.globalIngestionHeapLimitBytes</code> configuration to impose a global heap limit across all of the tasks it is running.</p>
+<p>This global limit is evenly divided across the number of task slots configured by <code>druid.worker.capacity</code>.</p>
+<p>To apply the per-task heap limit, the Indexer will override <code>maxBytesInMemory</code> in task tuning configs (i.e., ignoring the default value or any user configured value). <code>maxRowsInMemory</code> will also be overridden to an essentially unlimited value: the Indexer does not support row limits.</p>
+<p>By default, <code>druid.worker.globalIngestionHeapLimitBytes</code> is set to 1/6th of the available JVM heap. This default is chosen to align with the default value of <code>maxBytesInMemory</code> in task tuning configs when using the MiddleManager/Peon system, which is also 1/6th of the JVM heap.</p>
+<p>The peak usage for rows held in heap memory relates to the interaction between the <code>maxBytesInMemory</code> and <code>maxPendingPersists</code> properties in the task tuning configs. When the amount of row data held in-heap by a task reaches the limit specified by <code>maxBytesInMemory</code>, a task will persist the in-heap row data. After the persist has been started, the task can again ingest up to <code>maxBytesInMemory</code> bytes worth of row data while the persist is run [...]
+<p>This means that the peak in-heap usage for row data can be up to approximately <code>maxBytesInMemory</code> * (2 + <code>maxPendingPersists</code>). The default value of <code>maxPendingPersists</code> is 0, which allows for 1 persist to run concurrently with ingestion work.</p>
+<p>The remaining portion of the heap is reserved for query processing and segment persist/merge operations, and miscellaneous heap usage.</p>
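+<p>A worked sketch of the arithmetic above, using assumed values rather than recommendations:</p>
+<pre><code class="hljs"># Illustrative values only: a 1 GiB global limit and 4 task slots
+druid.worker.capacity=4
+druid.worker.globalIngestionHeapLimitBytes=1073741824
+# per-task limit applied as maxBytesInMemory = 1073741824 / 4 = 268435456 bytes (256 MiB)
+# peak in-heap row data per task = maxBytesInMemory * (2 + maxPendingPersists)
+#                                = 268435456 * (2 + 0) = 536870912 bytes (512 MiB)
+</code></pre>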
+<h4><a class="anchor" aria-hidden="true" id="concurrent-segment-persist-merge-limits"></a><a href="#concurrent-segment-persist-merge-limits" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2  [...]
+<p>To help reduce peak memory usage, the Indexer imposes a limit on the number of concurrent segment persist/merge operations across all running tasks.</p>
+<p>By default, the number of concurrent persist/merge operations is limited to (<code>druid.worker.capacity</code> / 2), rounded down. This limit can be configured with the <code>druid.worker.numConcurrentMerges</code> property.</p>
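+<p>For instance, with an assumed <code>druid.worker.capacity</code> of 8, the default works out to 8 / 2 = 4 concurrent persist/merge operations; setting the property explicitly overrides that default:</p>
+<pre><code class="hljs">druid.worker.numConcurrentMerges=4
+</code></pre>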
+<h2><a class="anchor" aria-hidden="true" id="runtime-configuration"></a><a href="#runtime-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>In addition to the <a href="../configuration/index.html#common-configurations">common configurations</a>, the Indexer accepts the following configurations:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.worker.version</code></td><td>Version identifier for the Indexer.</td><td>0</td></tr>
+<tr><td><code>druid.worker.capacity</code></td><td>Maximum number of tasks the Indexer can accept.</td><td>Number of available processors - 1</td></tr>
+<tr><td><code>druid.worker.globalIngestionHeapLimitBytes</code></td><td>Total amount of heap available for ingestion processing. This is applied by automatically setting the <code>maxBytesInMemory</code> property on tasks.</td><td>60% of configured JVM heap</td></tr>
+<tr><td><code>druid.worker.numConcurrentMerges</code></td><td>Maximum number of segment persist or merge operations that can run concurrently across all tasks.</td><td><code>druid.worker.capacity</code> / 2, rounded down</td></tr>
+<tr><td><code>druid.indexer.task.baseDir</code></td><td>Base temporary working directory.</td><td><code>System.getProperty(&quot;java.io.tmpdir&quot;)</code></td></tr>
+<tr><td><code>druid.indexer.task.baseTaskDir</code></td><td>Base temporary working directory for tasks.</td><td><code>${druid.indexer.task.baseDir}/persistent/tasks</code></td></tr>
+<tr><td><code>druid.indexer.task.defaultHadoopCoordinates</code></td><td>Hadoop version to use with HadoopIndexTasks that do not request a particular version.</td><td>org.apache.hadoop:hadoop-client:2.8.3</td></tr>
+<tr><td><code>druid.indexer.task.gracefulShutdownTimeout</code></td><td>Wait this long on Indexer restart for restorable tasks to gracefully exit.</td><td>PT5M</td></tr>
+<tr><td><code>druid.indexer.task.hadoopWorkingPath</code></td><td>Temporary working directory for Hadoop tasks.</td><td><code>/tmp/druid-indexing</code></td></tr>
+<tr><td><code>druid.indexer.task.restoreTasksOnRestart</code></td><td>If true, the Indexer will attempt to stop tasks gracefully on shutdown and restore them on restart.</td><td>false</td></tr>
+<tr><td><code>druid.peon.taskActionClient.retry.minWait</code></td><td>The minimum retry time to communicate with Overlord.</td><td>PT5S</td></tr>
+<tr><td><code>druid.peon.taskActionClient.retry.maxWait</code></td><td>The maximum retry time to communicate with Overlord.</td><td>PT1M</td></tr>
+<tr><td><code>druid.peon.taskActionClient.retry.maxRetryCount</code></td><td>The maximum number of retries to communicate with Overlord.</td><td>60</td></tr>
+</tbody>
+</table>
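+<p>Tying a few rows of the table above together, a fragment of an Indexer runtime.properties file might look like the sketch below; every value is a placeholder assumption used to illustrate the property names, not a tuning recommendation.</p>
+<pre><code class="hljs">druid.worker.capacity=4
+druid.indexer.task.baseTaskDir=/var/druid/task
+druid.indexer.task.restoreTasksOnRestart=true
+druid.indexer.task.gracefulShutdownTimeout=PT5M
+</code></pre>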
+<h3><a class="anchor" aria-hidden="true" id="concurrent-requests"></a><a href="#concurrent-requests" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>Druid uses Jetty to serve HTTP requests.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.server.http.numThreads</code></td><td>Number of threads for HTTP requests. Please see the <a href="#server-http-threads">Server HTTP threads</a> section for more details on how the Indexer uses this configuration.</td><td>max(10, (Number of cores * 17) / 16 + 2) + 30</td></tr>
+<tr><td><code>druid.server.http.queueSize</code></td><td>Size of the worker queue used by the Jetty server to temporarily store incoming client connections. If this value is set and a request is rejected because the queue is full, the client observes a request failure: the TCP connection is closed immediately and the server returns a completely empty response.</td><td>Unbounded</td></tr>
+<tr><td><code>druid.server.http.maxIdleTime</code></td><td>The Jetty max idle time for a connection.</td><td>PT5M</td></tr>
+<tr><td><code>druid.server.http.enableRequestLimit</code></td><td>If enabled, requests are not queued in the Jetty queue; an &quot;HTTP 429 Too Many Requests&quot; error response is sent instead.</td><td>false</td></tr>
+<tr><td><code>druid.server.http.defaultQueryTimeout</code></td><td>Query timeout in milliseconds, beyond which unfinished queries will be cancelled.</td><td>300000</td></tr>
+<tr><td><code>druid.server.http.gracefulShutdownTimeout</code></td><td>The maximum amount of time Jetty waits after receiving a shutdown signal, allowing queries that are executing to complete. After this timeout, the threads are forcefully shut down.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.unannouncePropagationDelay</code></td><td>How long to wait for zookeeper unannouncements to propagate before shutting down Jetty. This is a minimum and <code>druid.server.http.gracefulShutdownTimeout</code> does not start counting down until after this period elapses.</td><td><code>PT0S</code> (do not wait)</td></tr>
+<tr><td><code>druid.server.http.maxQueryTimeout</code></td><td>Maximum allowed value (in milliseconds) for <code>timeout</code> parameter. See <a href="../querying/query-context.html">query-context</a> to know more about <code>timeout</code>. Query is rejected if the query context <code>timeout</code> is greater than this value.</td><td>Long.MAX_VALUE</td></tr>
+<tr><td><code>druid.server.http.maxRequestHeaderSize</code></td><td>Maximum size of a request header in bytes. Larger headers consume more memory and can make a server more vulnerable to denial of service attacks.</td><td>8 * 1024</td></tr>
+</tbody>
+</table>
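+<p>As a small illustration of the table above (all values are assumptions), the following settings would give the Indexer 60 chat handler threads plus 60 threads for other HTTP requests, per the Server HTTP threads section earlier on this page:</p>
+<pre><code class="hljs">druid.server.http.numThreads=60
+druid.server.http.maxIdleTime=PT5M
+druid.server.http.defaultQueryTimeout=300000
+</code></pre>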
+<h3><a class="anchor" aria-hidden="true" id="processing"></a><a href="#processing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.processing.buffer.sizeBytes</code></td><td>This specifies a buffer size for the storage of intermediate results. The computation engine in the Indexer processes will use a scratch buffer of this size to do all of their intermediate computations off-heap. Larger values allow for more aggregations in a single pass over the data while smaller values can require more passes depending on the query that is being executed.</td><td>auto (max 1GB)</td></tr>
+<tr><td><code>druid.processing.buffer.poolCacheMaxCount</code></td><td>The processing buffer pool caches buffers for later use; this is the maximum count the cache will grow to. Note that the pool can create more buffers than it can cache if necessary.</td><td>Integer.MAX_VALUE</td></tr>
+<tr><td><code>druid.processing.formatString</code></td><td>Indexer processes use this format string to name their processing threads.</td><td>processing-%s</td></tr>
+<tr><td><code>druid.processing.numMergeBuffers</code></td><td>The number of direct memory buffers available for merging query results. The buffers are sized by <code>druid.processing.buffer.sizeBytes</code>. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.</td><td><code>max(2, druid.processing.numThreads / 4)</code></td></tr>
+<tr><td><code>druid.processing.numThreads</code></td><td>The number of processing threads to have available for parallel processing of segments. Our rule of thumb is <code>num_cores - 1</code>, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value <code>1</code>.</td><td>Number of cores - 1 (or 1)</td></tr>
+<tr><td><code>druid.processing.columnCache.sizeBytes</code></td><td>Maximum size in bytes for the dimension value lookup cache. Any value greater than <code>0</code> enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating valu [...]
+<tr><td><code>druid.processing.fifo</code></td><td>Whether the processing queue should treat tasks of equal priority in a FIFO manner.</td><td><code>false</code></td></tr>
+<tr><td><code>druid.processing.tmpDir</code></td><td>Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default <code>java.io.tmpdir</code> path.</td><td>path represented by <code>java.io.tmpdir</code></td></tr>
+</tbody>
+</table>
+<p>The amount of direct memory needed by Druid is at least
+<code>druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)</code>. You can
+ensure at least this amount of direct memory is available by providing <code>-XX:MaxDirectMemorySize=&lt;VALUE&gt;</code> at the command
+line.</p>
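+<p>A worked example of this sizing rule, with assumed values: if <code>druid.processing.buffer.sizeBytes</code> is 536870912 (512 MiB), <code>druid.processing.numThreads</code> is 7, and <code>druid.processing.numMergeBuffers</code> is 2, the minimum direct memory is 536870912 * (2 + 7 + 1) = 5368709120 bytes, so the JVM would be started with at least:</p>
+<pre><code class="hljs">-XX:MaxDirectMemorySize=5g
+</code></pre>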
+<h3><a class="anchor" aria-hidden="true" id="query-configurations"></a><a href="#query-configurations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1 [...]
+<p>See <a href="../configuration/index.html#general-query-configuration">general query configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="indexer-caching"></a><a href="#indexer-caching" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>You can optionally configure caching to be enabled on the Indexer by setting caching configs here.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.realtime.cache.useCache</code></td><td>true, false</td><td>Enable the cache on the realtime.</td><td>false</td></tr>
+<tr><td><code>druid.realtime.cache.populateCache</code></td><td>true, false</td><td>Populate the cache on the realtime.</td><td>false</td></tr>
+<tr><td><code>druid.realtime.cache.unCacheable</code></td><td>All druid query types</td><td>All query types to not cache.</td><td><code>[&quot;groupBy&quot;, &quot;select&quot;]</code></td></tr>
+<tr><td><code>druid.realtime.cache.maxEntrySize</code></td><td></td><td>Maximum cache entry size in bytes.</td><td>1_000_000</td></tr>
+</tbody>
+</table>
+<p>See <a href="../configuration/index.html#cache-configuration">cache configuration</a> for how to configure cache settings.</p>
+<p>Note that only local caches such as the <code>local</code>-type cache and <code>caffeine</code> cache are supported. If a remote cache such as <code>memcached</code> is used, it will be ignored.</p>
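+<p>Putting the table and the note above together, a sketch of enabling a local <code>caffeine</code> cache on the Indexer might look like this (the cache size is a placeholder assumption; see the linked cache configuration page for all cache options):</p>
+<pre><code class="hljs">druid.realtime.cache.useCache=true
+druid.realtime.cache.populateCache=true
+druid.cache.type=caffeine
+druid.cache.sizeInBytes=134217728
+</code></pre>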
+<h2><a class="anchor" aria-hidden="true" id="current-limitations"></a><a href="#current-limitations" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>Separate task logs are not currently supported when using the Indexer; all task log messages will instead be logged in the Indexer process log.</p>
+<p>The Indexer currently imposes an identical memory limit on each task. In later releases, the per-task memory limit will be removed and only the global limit will apply. The limit on concurrent merges will also be removed.</p>
+<p>In later releases, per-task memory usage will be dynamically managed. Please see <a href="https://github.com/apache/incubator-druid/issues/7900">https://github.com/apache/incubator-druid/issues/7900</a> for details on future enhancements to the Indexer.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/historical.html"><span class="arrow-prev">← </span><span>Historical Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/indexing-service.html"><span>Indexing Service</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#running">Running</a></li><li><a href="#task-resour [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/indexing-service.html b/docs/0.16.0-incubating/design/indexing-service.html
new file mode 100644
index 0000000..f0d438b
--- /dev/null
+++ b/docs/0.16.0-incubating/design/indexing-service.html
@@ -0,0 +1,94 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Indexing Service · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Indexing Service · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Indexing Service</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>The Apache Druid (incubating) indexing service is a highly-available, distributed service that runs indexing related tasks.</p>
+<p>Indexing <a href="/docs/0.16.0-incubating/ingestion/tasks.html">tasks</a> create (and sometimes destroy) Druid <a href="/docs/0.16.0-incubating/design/segments.html">segments</a>. The indexing service has a master/slave like architecture.</p>
+<p>The indexing service is composed of three main components: a <a href="/docs/0.16.0-incubating/design/peons.html">Peon</a> component that can run a single task, a <a href="/docs/0.16.0-incubating/design/middlemanager.html">Middle Manager</a> component that manages Peons, and an <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a> component that manages task distribution to MiddleManagers.
Overlords and MiddleManagers may run in the same process or across separate processes, while a MiddleManager and its Peons always run on the same host.</p>
+<p>Tasks are managed using API endpoints on the Overlord service. Please see <a href="../operations/api-reference.html#tasks">Overlord Task API</a> for more information.</p>
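+<p>As a minimal sketch of this API (assuming an Overlord reachable on its default port 8090, and a hypothetical task spec saved as <code>task.json</code>), a task can be submitted and polled over HTTP:</p>
+<pre><code class="hljs"># submit a task spec to the Overlord (file name is hypothetical)
+curl -X POST -H 'Content-Type: application/json' -d @task.json http://localhost:8090/druid/indexer/v1/task
+
+# check the status of a task using the taskId returned by the call above
+curl http://localhost:8090/druid/indexer/v1/task/&lt;taskId&gt;/status
+</code></pre>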
+<p><img src="../assets/indexing_service.png" alt="Indexing Service" title="Indexing Service"></p>
+<h2><a class="anchor" aria-hidden="true" id="overlord"></a><a href="#overlord" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>See <a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="middle-managers"></a><a href="#middle-managers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>See <a href="/docs/0.16.0-incubating/design/middlemanager.html">Middle Manager</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="peons"></a><a href="#peons" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09 [...]
+<p>See <a href="/docs/0.16.0-incubating/design/peons.html">Peon</a>.</p>
+<h2><a class="anchor" aria-hidden="true" id="tasks"></a><a href="#tasks" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09 [...]
+<p>See <a href="/docs/0.16.0-incubating/ingestion/tasks.html">Tasks</a>.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/indexer.html"><span class="arrow-prev">← </span><span>Indexer Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/middlemanager.html"><span class="function-name-prevnext">MiddleManager Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#overlord">Overlord</a>< [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/middlemanager.html b/docs/0.16.0-incubating/design/middlemanager.html
new file mode 100644
index 0000000..7247476
--- /dev/null
+++ b/docs/0.16.0-incubating/design/middlemanager.html
@@ -0,0 +1,90 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>MiddleManager Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="MiddleManager Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">MiddleManager Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) MiddleManager Process Configuration, see <a href="../configuration/index.html#middlemanager-and-peons">Indexing Service Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the MiddleManager, please see the <a href="../operations/api-reference.html#middlemanager">API reference</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="overview"></a><a href="#overview" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>The MiddleManager process is a worker process that executes submitted tasks. MiddleManagers forward tasks to Peons that run in separate JVMs.
+Tasks run in separate JVMs to provide resource and log isolation. Each <a href="/docs/0.16.0-incubating/design/peons.html">Peon</a> is capable of running only one task at a time; however, a MiddleManager may have multiple Peons.</p>
+<h3><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>middleManager
+</code></pre>
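+<p>As an illustrative (not prescriptive) sketch, a MiddleManager's <code>runtime.properties</code> typically sets the number of task slots, the task working directory, and the JVM options passed to each forked Peon; the values below are examples only:</p>
+<pre><code class="hljs"># number of Peons (task slots) this MiddleManager may run concurrently
+druid.worker.capacity=4
+# base directory for task working files (example path)
+druid.indexer.task.baseTaskDir=var/druid/task
+# JVM options for each forked Peon
+druid.indexer.runner.javaOptsArray=["-server","-Xms1g","-Xmx1g"]
+</code></pre>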
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/indexing-service.html"><span class="arrow-prev">← </span><span>Indexing Service</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/overlord.html"><span>Overlord Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div c [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/overlord.html b/docs/0.16.0-incubating/design/overlord.html
new file mode 100644
index 0000000..d51f9a4
--- /dev/null
+++ b/docs/0.16.0-incubating/design/overlord.html
@@ -0,0 +1,102 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Overlord Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Overlord Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Overlord Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) Overlord Process Configuration, see <a href="../configuration/index.html#overlord">Overlord Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the Overlord, please see the <a href="../operations/api-reference.html#overlord">API reference</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="overview"></a><a href="#overview" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>The Overlord process is responsible for accepting tasks, coordinating task distribution, creating locks around tasks, and returning statuses to callers. The Overlord can be configured to run in one of two modes: local or remote (local is the default).
+In local mode, the Overlord is also responsible for creating Peons to execute tasks. When running the Overlord in local mode, all MiddleManager and Peon configurations must be provided as well.
+Local mode is typically used for simple workflows. In remote mode, the Overlord and MiddleManager run as separate processes, and each can run on a different server.
+Remote mode is recommended if you intend to use the indexing service as the single endpoint for all Druid indexing.</p>
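+<p>As a hedged illustration, the mode is selected with the task runner type in the Overlord's runtime properties (the values shown are examples; see the Overlord configuration reference for details):</p>
+<pre><code class="hljs"># "local" forks Peons from the Overlord itself (default);
+# "remote" hands tasks off to MiddleManagers
+druid.indexer.runner.type=remote
+# keep task state in the metadata store so it survives restarts
+druid.indexer.storage.type=metadata
+</code></pre>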
+<h3><a class="anchor" aria-hidden="true" id="overlord-console"></a><a href="#overlord-console" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p>The Overlord provides a UI for managing tasks and workers. For more details, please see <a href="../operations/management-uis.html#overlord-console">overlord console</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="blacklisted-workers"></a><a href="#blacklisted-workers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>If a MiddleManager's task failures exceed a configured threshold, the Overlord will blacklist that MiddleManager. No more than 20% of the MiddleManagers can be blacklisted, and blacklisted MiddleManagers are periodically whitelisted again.</p>
+<p>The following variables can be used to set the threshold and blacklist timeouts.</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.indexer</span><span class="hljs-selector-class">.runner</span><span class="hljs-selector-class">.maxRetriesBeforeBlacklist</span>
+druid<span class="hljs-selector-class">.indexer</span><span class="hljs-selector-class">.runner</span><span class="hljs-selector-class">.workerBlackListBackoffTime</span>
+druid<span class="hljs-selector-class">.indexer</span><span class="hljs-selector-class">.runner</span><span class="hljs-selector-class">.workerBlackListCleanupPeriod</span>
+druid<span class="hljs-selector-class">.indexer</span><span class="hljs-selector-class">.runner</span><span class="hljs-selector-class">.maxPercentageBlacklistWorkers</span>
+</code></pre>
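+<p>For example, with illustrative values only (consult the Overlord configuration reference for the actual defaults):</p>
+<pre><code class="hljs">druid.indexer.runner.maxRetriesBeforeBlacklist=5
+druid.indexer.runner.workerBlackListBackoffTime=PT15M
+druid.indexer.runner.workerBlackListCleanupPeriod=PT5M
+druid.indexer.runner.maxPercentageBlacklistWorkers=20
+</code></pre>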
+<h3><a class="anchor" aria-hidden="true" id="autoscaling"></a><a href="#autoscaling" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>The autoscaling mechanisms currently in place are tightly coupled with our deployment infrastructure, but the framework should accommodate other implementations, and we are highly open to new implementations or extensions of the existing mechanisms. In our own deployments, MiddleManager processes are Amazon AWS EC2 nodes that are provisioned to register themselves in a <a href="https://github.com/ning/galaxy">galaxy</a> environment.</p>
+<p>If autoscaling is enabled, new MiddleManagers may be added when a task has been in pending state for too long. MiddleManagers may be terminated if they have not run any tasks for a period of time.</p>
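+<p>A hedged sketch of enabling autoscaling on the Overlord (property names from the Overlord autoscaling configuration; values are illustrative):</p>
+<pre><code class="hljs"># turn autoscaling on and pick a provisioning strategy
+druid.indexer.autoscale.doAutoscale=true
+druid.indexer.autoscale.strategy=ec2
+# how long a task may stay pending before more workers are provisioned
+druid.indexer.autoscale.pendingTaskTimeout=PT30S
+</code></pre>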
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/middlemanager.html"><span class="arrow-prev">← </span><span class="function-name-prevnext">MiddleManager Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/router.html"><span>Router Process</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"> [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/peons.html b/docs/0.16.0-incubating/design/peons.html
new file mode 100644
index 0000000..429911f
--- /dev/null
+++ b/docs/0.16.0-incubating/design/peons.html
@@ -0,0 +1,92 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Peons · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Peons · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid.apache. [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Peons</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>For Apache Druid (incubating) Peon Configuration, see <a href="../configuration/index.html#peon-query-configuration">Peon Query Configuration</a> and <a href="../configuration/index.html#additional-peon-configuration">Additional Peon Configuration</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>For a list of API endpoints supported by the Peon, please see the <a href="../operations/api-reference.html#peon">Peon API reference</a>.</p>
+<p>Peons run a single task in a single JVM. The MiddleManager is responsible for creating Peons to run tasks.
+Peons should rarely be run on their own, and then only for testing purposes.</p>
+<h3><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<p>The Peon should very rarely be run independently of the MiddleManager, except for development purposes.</p>
+<pre><code class="hljs">org<span class="hljs-selector-class">.apache</span><span class="hljs-selector-class">.druid</span><span class="hljs-selector-class">.cli</span><span class="hljs-selector-class">.Main</span> internal peon &lt;task_file&gt; &lt;status_file&gt;
+</code></pre>
+<p>The task file contains the task JSON object.
+The status file indicates where the task status will be output.</p>
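+<p>For development purposes only, a direct invocation might look like the sketch below, where the classpath placeholder and the file paths are hypothetical:</p>
+<pre><code class="hljs">java -cp &lt;druid-classpath&gt; org.apache.druid.cli.Main internal peon /tmp/task.json /tmp/status.json
+</code></pre>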
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/router.html"><span class="arrow-prev">← </span><span>Router Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/integrating-druid-with-other-technologies.html"><span>Integrating with other technologies</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-foot [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/plumber.md b/docs/0.16.0-incubating/design/plumber.md
new file mode 100644
index 0000000..2f57fca
--- /dev/null
+++ b/docs/0.16.0-incubating/design/plumber.md
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../ingestion/standalone-realtime.html">
+<meta http-equiv="refresh" content="0; url=../ingestion/standalone-realtime.html">
+<h1>Redirecting...</h1>
+<a href="../ingestion/standalone-realtime.html">Click here if you are not redirected.</a>
+<script>location="../ingestion/standalone-realtime.html"</script>
diff --git a/docs/0.16.0-incubating/design/processes.html b/docs/0.16.0-incubating/design/processes.html
new file mode 100644
index 0000000..7a491d7
--- /dev/null
+++ b/docs/0.16.0-incubating/design/processes.html
@@ -0,0 +1,154 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Processes and servers · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Processes and servers · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Processes and servers</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<h2><a class="anchor" aria-hidden="true" id="process-types"></a><a href="#process-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>Druid has several process types:</p>
+<ul>
+<li><a href="/docs/0.16.0-incubating/design/coordinator.html">Coordinator</a></li>
+<li><a href="/docs/0.16.0-incubating/design/overlord.html">Overlord</a></li>
+<li><a href="/docs/0.16.0-incubating/design/broker.html">Broker</a></li>
+<li><a href="/docs/0.16.0-incubating/design/historical.html">Historical</a></li>
+<li><a href="/docs/0.16.0-incubating/design/middlemanager.html">MiddleManager</a> and <a href="/docs/0.16.0-incubating/design/peons.html">Peons</a></li>
+<li><a href="/docs/0.16.0-incubating/design/indexer.html">Indexer (Optional)</a></li>
+<li><a href="/docs/0.16.0-incubating/design/router.html">Router (Optional)</a></li>
+</ul>
+<h2><a class="anchor" aria-hidden="true" id="server-types"></a><a href="#server-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types:</p>
+<ul>
+<li><strong>Master</strong></li>
+<li><strong>Query</strong></li>
+<li><strong>Data</strong></li>
+</ul>
+<p><img src="../assets/druid-architecture.png" width="800"/></p>
+<p>This section describes the Druid processes and the suggested Master/Query/Data server organization, as shown in the architecture diagram above.</p>
+<h3><a class="anchor" aria-hidden="true" id="master-server"></a><a href="#master-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>A Master server manages data ingestion and availability: it is responsible for starting new ingestion jobs and coordinating availability of data on the &quot;Data servers&quot; described below.</p>
+<p>Within a Master server, functionality is split between two processes, the Coordinator and Overlord.</p>
+<h4><a class="anchor" aria-hidden="true" id="coordinator-process"></a><a href="#coordinator-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p><a href="/docs/0.16.0-incubating/design/coordinator.html"><strong>Coordinator</strong></a> processes watch over the Historical processes on the Data servers. They are responsible for assigning segments to specific servers, and for ensuring segments are well-balanced across Historicals.</p>
+<h4><a class="anchor" aria-hidden="true" id="overlord-process"></a><a href="#overlord-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<p><a href="/docs/0.16.0-incubating/design/overlord.html"><strong>Overlord</strong></a> processes watch over the MiddleManager processes on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to MiddleManagers and for coordinating segment publishing.</p>
+<h3><a class="anchor" aria-hidden="true" id="query-server"></a><a href="#query-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>A Query server provides the endpoints that users and client applications interact with, routing queries to Data servers or other Query servers (and optionally proxied Master server requests as well).</p>
+<p>Within a Query server, functionality is split between two processes, the Broker and Router.</p>
+<h4><a class="anchor" aria-hidden="true" id="broker-process"></a><a href="#broker-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p><a href="/docs/0.16.0-incubating/design/broker.html"><strong>Broker</strong></a> processes receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the
caller. End users typically query Brokers rather than querying Historical or MiddleManager processes on Data servers directly.</p>
+<h4><a class="anchor" aria-hidden="true" id="router-process-optional"></a><a href="#router-process-optional" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p><a href="/docs/0.16.0-incubating/design/router.html"><strong>Router</strong></a> processes are <em>optional</em> processes that provide a unified API gateway in front of Druid Brokers,
+Overlords, and Coordinators. They are optional since you can also simply contact the Druid Brokers, Overlords, and
+Coordinators directly.</p>
+<p>The Router also runs the <a href="../operations/management-uis.html#druid-console">Druid Console</a>, a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.</p>
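+<p>As a small, hedged example (assuming a Router listening on its default plaintext port 8888 on <code>localhost</code>), the console is served at the Router's root URL and SQL queries can be proxied through it to a Broker:</p>
+<pre><code class="hljs"># Druid Console (web UI served by the Router)
+http://localhost:8888
+# a Druid SQL query proxied through the Router
+curl -X POST -H 'Content-Type: application/json' \
+  -d '{"query":"SELECT * FROM INFORMATION_SCHEMA.TABLES"}' \
+  http://localhost:8888/druid/v2/sql
+</code></pre>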
+<h3><a class="anchor" aria-hidden="true" id="data-server"></a><a href="#data-server" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42 [...]
+<p>A Data server executes ingestion jobs and stores queryable data.</p>
+<p>Within a Data server, functionality is split between two processes, the Historical and MiddleManager.</p>
+<h3><a class="anchor" aria-hidden="true" id="historical-process"></a><a href="#historical-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p><a href="/docs/0.16.0-incubating/design/historical.html"><strong>Historical</strong></a> processes are the workhorses that handle storage and querying on &quot;historical&quot; data
+(including any streaming data that has been in the system long enough to be committed). Historical processes
+download segments from deep storage and respond to queries about these segments. They don't accept writes.</p>
+<h3><a class="anchor" aria-hidden="true" id="middle-manager-process"></a><a href="#middle-manager-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p><a href="/docs/0.16.0-incubating/design/middlemanager.html"><strong>MiddleManager</strong></a> processes handle ingestion of new data into the cluster. They are responsible
+for reading from external data sources and publishing new Druid segments.</p>
+<h4><a class="anchor" aria-hidden="true" id="peon-processes"></a><a href="#peon-processes" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p><a href="/docs/0.16.0-incubating/design/peons.html"><strong>Peon</strong></a> processes are task execution engines spawned by MiddleManagers. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the MiddleManager that spawned them.</p>
+<h3><a class="anchor" aria-hidden="true" id="indexer-process-optional"></a><a href="#indexer-process-optional" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p><a href="/docs/0.16.0-incubating/design/indexer.html"><strong>Indexer</strong></a> processes are an alternative to MiddleManagers and Peons. Instead of
+forking separate JVM processes per-task, the Indexer runs tasks as individual threads within a single JVM process.</p>
+<p>The Indexer is designed to be easier to configure and deploy compared to the MiddleManager + Peon system and to
+better enable resource sharing across tasks. The Indexer is a newer feature and is currently designated
+<a href="/docs/0.16.0-incubating/development/experimental.html">experimental</a> due to the fact that its memory management system is still under
+development. It will continue to mature in future versions of Druid.</p>
+<p>Typically, you would deploy either MiddleManagers or Indexers, but not both.</p>
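+<p>For reference, and assuming the same launch convention as the other server types, a hedged example of starting an Indexer looks like this:</p>
+<pre><code class="hljs">org.apache.druid.cli.Main server indexer
+</code></pre>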
+<h2><a class="anchor" aria-hidden="true" id="pros-and-cons-of-colocation"></a><a href="#pros-and-cons-of-colocation" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 1 [...]
+<p>Druid processes can be colocated based on the Master/Query/Data server organization as
+described above. This organization generally results in better utilization of
+hardware resources for most clusters.</p>
+<p>For very large scale clusters, however, it can be desirable to split the Druid processes
+such that they run on individual servers to avoid resource contention.</p>
+<p>This section describes guidelines and configuration parameters related to process colocation.</p>
+<h3><a class="anchor" aria-hidden="true" id="coordinators-and-overlords"></a><a href="#coordinators-and-overlords" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>The workload on the Coordinator process tends to increase with the number of segments in the cluster. The Overlord's workload also increases based on the number of segments in the cluster, but to a lesser degree than the Coordinator.</p>
+<p>In clusters with very high segment counts, it can make sense to separate the Coordinator and Overlord processes to provide more resources for the Coordinator's segment balancing workload.</p>
+<h4><a class="anchor" aria-hidden="true" id="unified-process"></a><a href="#unified-process" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5  [...]
+<p>The Coordinator and Overlord processes can be run as a single combined process by setting the <code>druid.coordinator.asOverlord.enabled</code> property.</p>
+<p>Please see <a href="../configuration/index.html#coordinator-operation">Coordinator Configuration: Operation</a> for details.</p>
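+<p>As a brief illustration (property values shown are examples), the combined mode is enabled in the Coordinator's runtime properties:</p>
+<pre><code class="hljs"># run the Overlord duties inside the Coordinator process
+druid.coordinator.asOverlord.enabled=true
+# service name under which the combined process serves Overlord APIs
+druid.coordinator.asOverlord.overlordService=druid/overlord
+</code></pre>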
+<h3><a class="anchor" aria-hidden="true" id="historicals-and-middlemanagers"></a><a href="#historicals-and-middlemanagers" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 1 [...]
+<p>With higher levels of ingestion or query load, it can make sense to deploy the Historical and MiddleManager processes on separate hosts to avoid CPU and memory contention.</p>
+<p>The Historical also benefits from having free memory for memory mapped segments, which can be another reason to deploy the Historical and MiddleManager processes separately.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/segments.html"><span class="arrow-prev">← </span><span>Segments</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/dependencies/deep-storage.html"><span>Deep storage</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#process-types">Process types</a></li><li><a href="#server-types" [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/realtime.md b/docs/0.16.0-incubating/design/realtime.md
new file mode 100644
index 0000000..2f57fca
--- /dev/null
+++ b/docs/0.16.0-incubating/design/realtime.md
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../ingestion/standalone-realtime.html">
+<meta http-equiv="refresh" content="0; url=../ingestion/standalone-realtime.html">
+<h1>Redirecting...</h1>
+<a href="../ingestion/standalone-realtime.html">Click here if you are not redirected.</a>
+<script>location="../ingestion/standalone-realtime.html"</script>
diff --git a/docs/0.16.0-incubating/design/router.html b/docs/0.16.0-incubating/design/router.html
new file mode 100644
index 0000000..93da771
--- /dev/null
+++ b/docs/0.16.0-incubating/design/router.html
@@ -0,0 +1,241 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Router Process · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Router Process · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="htt [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Router Process</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<blockquote>
+<p>The Router is an optional and <a href="/docs/0.16.0-incubating/development/experimental.html">experimental</a> feature because its recommended place in the Druid cluster architecture is still evolving.
+However, it has been battle-tested in production, and it hosts the powerful <a href="../operations/management-uis.html#druid-console">Druid Console</a>, so you should feel safe deploying it.</p>
+</blockquote>
+<p>The Apache Druid (incubating) Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how <a href="/docs/0.16.0-incubating/operations/rule-configuration.html">Rules</a> are set up. For example, if 1 month of recent data is loaded into a <code>hot</code> cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set [...]
+<p>For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range.</p>
+<p>In addition to query routing, the Router also runs the <a href="../operations/management-uis.html#druid-console">Druid Console</a>, a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.</p>
+<h2><a class="anchor" aria-hidden="true" id="running"></a><a href="#running" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<pre><code class="hljs">org.apache.druid.cli.Main<span class="hljs-built_in"> server </span>router
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="example-production-configuration"></a><a href="#example-production-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<p>In this example, we have two tiers in our production cluster: <code>hot</code> and <code>_default_tier</code>. Queries for the <code>hot</code> tier are routed through the <code>broker-hot</code> set of Brokers, and queries for the <code>_default_tier</code> are routed through the <code>broker-cold</code> set of Brokers. If any exceptions or network problems occur, queries are routed to the <code>broker-cold</code> set of brokers. In our example, we are running with a c3.2xlarge EC2 i [...]
+<p>JVM settings:</p>
+<pre><code class="hljs">-<span class="ruby">server
+</span>-<span class="ruby">Xmx13g
+</span>-<span class="ruby">Xms13g
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>NewSize=<span class="hljs-number">256</span>m
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>MaxNewSize=<span class="hljs-number">256</span>m
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>+UseConcMarkSweepGC
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>+PrintGCDetails
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>+PrintGCTimeStamps
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>+UseLargePages
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>+HeapDumpOnOutOfMemoryError
+</span>-<span class="ruby"><span class="hljs-symbol">XX:</span>HeapDumpPath=<span class="hljs-regexp">/mnt/galaxy</span><span class="hljs-regexp">/deploy/current</span><span class="hljs-regexp">/
+</span></span>-<span class="ruby"><span class="hljs-regexp">Duser.timezone=UTC
+</span></span>-<span class="ruby"><span class="hljs-regexp">Dfile.encoding=UTF-8
+</span></span>-<span class="ruby"><span class="hljs-regexp">Djava.io.tmpdir=/mnt</span><span class="hljs-regexp">/tmp
+</span></span>
+-<span class="ruby"><span class="hljs-regexp">Dcom.sun.management.jmxremote.port=17071
+</span></span>-<span class="ruby"><span class="hljs-regexp">Dcom.sun.management.jmxremote.authenticate=false
+</span></span>-<span class="ruby"><span class="hljs-regexp">Dcom.sun.management.jmxremote.ssl=false
+</span></span></code></pre>
+<p>Runtime.properties:</p>
+<pre><code class="hljs"><span class="hljs-attr">druid.host</span>=<span class="hljs-comment">#{IP_ADDR}:8080</span>
+<span class="hljs-attr">druid.plaintextPort</span>=<span class="hljs-number">8080</span>
+<span class="hljs-attr">druid.service</span>=druid/router
+
+<span class="hljs-attr">druid.router.defaultBrokerServiceName</span>=druid:broker-cold
+<span class="hljs-attr">druid.router.coordinatorServiceName</span>=druid:coordinator
+<span class="hljs-attr">druid.router.tierToBrokerMap</span>={<span class="hljs-string">"hot"</span>:<span class="hljs-string">"druid:broker-hot"</span>,<span class="hljs-string">"_default_tier"</span>:<span class="hljs-string">"druid:broker-cold"</span>}
+<span class="hljs-attr">druid.router.http.numConnections</span>=<span class="hljs-number">50</span>
+<span class="hljs-attr">druid.router.http.readTimeout</span>=PT5M
+
+<span class="hljs-comment"># Number of threads used by the Router proxy http client</span>
+<span class="hljs-attr">druid.router.http.numMaxThreads</span>=<span class="hljs-number">100</span>
+
+<span class="hljs-attr">druid.server.http.numThreads</span>=<span class="hljs-number">100</span>
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="runtime-configuration"></a><a href="#runtime-configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>The Router module uses several of the default modules in <a href="/docs/0.16.0-incubating/configuration/index.html">Configuration</a> and has the following set of configurations as well:</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.router.defaultBrokerServiceName</code></td><td>Any string.</td><td>The default Broker to connect to in case service discovery fails.</td><td>druid/broker</td></tr>
+<tr><td><code>druid.router.tierToBrokerMap</code></td><td>An ordered JSON map of tiers to Broker names. The priority of Brokers is based on the ordering.</td><td>Queries for a certain tier of data are routed to their appropriate Broker.</td><td>{&quot;_default_tier&quot;: &quot;<defaultBrokerServiceName>&quot;}</td></tr>
+<tr><td><code>druid.router.defaultRule</code></td><td>Any string.</td><td>The default rule for all datasources.</td><td>&quot;_default&quot;</td></tr>
+<tr><td><code>druid.router.pollPeriod</code></td><td>Any ISO8601 duration.</td><td>How often to poll for new rules.</td><td>PT1M</td></tr>
+<tr><td><code>druid.router.strategies</code></td><td>An ordered JSON array of objects.</td><td>All custom strategies to use for routing.</td><td>[{&quot;type&quot;:&quot;timeBoundary&quot;},{&quot;type&quot;:&quot;priority&quot;}]</td></tr>
+<tr><td><code>druid.router.avatica.balancer.type</code></td><td>String representing an AvaticaConnectionBalancer name</td><td>Class to use for balancing Avatica queries across Brokers</td><td>rendezvousHash</td></tr>
+<tr><td><code>druid.router.http.maxRequestBufferSize</code></td><td>Any integer.</td><td>Maximum size of the buffer used to write requests when forwarding them to the Broker. This should be set to at least the <code>maxHeaderSize</code> allowed on the Broker.</td><td>8 * 1024</td></tr>
+</tbody>
+</table>
+<h2><a class="anchor" aria-hidden="true" id="router-strategies"></a><a href="#router-strategies" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>The Router has a configurable list of strategies for how it selects which Brokers to route queries to. The order of the strategies matters because a Broker is selected as soon as a strategy condition is matched.</p>
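+<p>For example, the strategy list can be set explicitly in the Router's runtime.properties; strategies are evaluated in the order listed. This is a minimal sketch based on the defaults above, with the priority bounds shown explicitly:</p>
+<pre><code class="hljs">druid.router.strategies=[{"type":"timeBoundary"},{"type":"priority","minPriority":0,"maxPriority":1}]
+</code></pre>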
+<h3><a class="anchor" aria-hidden="true" id="timeboundary"></a><a href="#timeboundary" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"type"</span>:<span class="hljs-string">"timeBoundary"</span>
+}
+</code></pre>
+<p>Including this strategy means all timeBoundary queries are always routed to the highest priority Broker.</p>
+<h3><a class="anchor" aria-hidden="true" id="priority"></a><a href="#priority" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"type"</span>:<span class="hljs-string">"priority"</span>,
+  <span class="hljs-attr">"minPriority"</span>:<span class="hljs-number">0</span>,
+  <span class="hljs-attr">"maxPriority"</span>:<span class="hljs-number">1</span>
+}
+</code></pre>
+<p>Queries with a priority set to less than minPriority are routed to the lowest priority Broker. Queries with priority set to greater than maxPriority are routed to the highest priority Broker. By default, minPriority is 0 and maxPriority is 1. Using these default values, if a query with priority 0 (the default query priority is 0) is sent, the query skips the priority selection logic.</p>
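+<p>A query opts into a different priority through its query context. The sketch below uses a hypothetical datasource and interval; only the <code>context.priority</code> field is relevant here, and a value below <code>minPriority</code> would send the query to the lowest priority Broker:</p>
+<pre><code class="hljs css language-json">{
+  "queryType": "timeseries",
+  "dataSource": "sample_datasource",
+  "intervals": ["2019-01-01/2019-01-02"],
+  "granularity": "all",
+  "aggregations": [{"type": "count", "name": "rows"}],
+  "context": {"priority": -1}
+}
+</code></pre>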
+<h3><a class="anchor" aria-hidden="true" id="javascript"></a><a href="#javascript" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<p>Allows defining arbitrary routing rules using a JavaScript function. The function is passed the configuration and the query to be executed, and returns the tier it should be routed to, or null for the default tier.</p>
+<p><em>Example</em>: a function that sends queries containing more than three aggregators to the lowest priority Broker.</p>
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"type"</span> : <span class="hljs-string">"javascript"</span>,
+  <span class="hljs-attr">"function"</span> : <span class="hljs-string">"function (config, query) { if (query.getAggregatorSpecs &amp;&amp; query.getAggregatorSpecs().size() &gt;= 3) { var size = config.getTierToBrokerMap().values().size(); if (size &gt; 0) { return config.getTierToBrokerMap().values().toArray()[size-1] } else { return config.getDefaultBrokerServiceName() } } else { return null } }"</span>
+}
+</code></pre>
+<blockquote>
+<p>JavaScript-based functionality is disabled by default. Please refer to the Druid <a href="/docs/0.16.0-incubating/development/javascript.html">JavaScript programming guide</a> for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.</p>
+</blockquote>
+<h2><a class="anchor" aria-hidden="true" id="avatica-query-balancing"></a><a href="#avatica-query-balancing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 [...]
+<p>All Avatica JDBC requests with a given connection ID must be routed to the same Broker, since Druid Brokers do not share connection state with each other.</p>
+<p>To accomplish this, Druid provides two built-in balancers that use rendezvous hashing and consistent hashing of a request's connection ID respectively to assign requests to Brokers.</p>
+<p>Note that when multiple Routers are used, all Routers should have identical balancer configuration to ensure that they make the same routing decisions.</p>
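+<p>For example, a JDBC client would point its Avatica connection URL at the Router rather than at an individual Broker (the host and port here are assumptions; substitute your own Router address):</p>
+<pre><code class="hljs">jdbc:avatica:remote:url=http://router:8888/druid/v2/sql/avatica/
+</code></pre>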
+<h3><a class="anchor" aria-hidden="true" id="rendezvous-hash-balancer"></a><a href="#rendezvous-hash-balancer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>This balancer uses <a href="https://en.wikipedia.org/wiki/Rendezvous_hashing">Rendezvous Hashing</a> on an Avatica request's connection ID to assign the request to a Broker.</p>
+<p>To use this balancer, specify the following property:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.router</span><span class="hljs-selector-class">.avatica</span><span class="hljs-selector-class">.balancer</span><span class="hljs-selector-class">.type</span>=rendezvousHash
+</code></pre>
+<p>If no <code>druid.router.avatica.balancer</code> property is set, the Router defaults to using the Rendezvous Hash Balancer.</p>
+<h3><a class="anchor" aria-hidden="true" id="consistent-hash-balancer"></a><a href="#consistent-hash-balancer" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>This balancer uses <a href="https://en.wikipedia.org/wiki/Consistent_hashing">Consistent Hashing</a> on an Avatica request's connection ID to assign the request to a Broker.</p>
+<p>To use this balancer, specify the following property:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.router</span><span class="hljs-selector-class">.avatica</span><span class="hljs-selector-class">.balancer</span><span class="hljs-selector-class">.type</span>=consistentHash
+</code></pre>
+<p>This is a non-default implementation that is provided for experimentation purposes. The consistent hasher has longer setup times on initialization and when the set of Brokers changes, but has a faster Broker assignment time than the rendezvous hasher when tested with 5 Brokers. Benchmarks for both implementations have been provided in <code>ConsistentHasherBenchmark</code> and <code>RendezvousHasherBenchmark</code>. The consistent hasher also requires locking, while the rendezvous hash [...]
+<h2><a class="anchor" aria-hidden="true" id="http-endpoints"></a><a href="#http-endpoints" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<p>The Router process exposes several HTTP endpoints for interactions.</p>
+<h3><a class="anchor" aria-hidden="true" id="get"></a><a href="#get" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.2 [...]
+<ul>
+<li><code>/status</code></li>
+</ul>
+<p>Returns the Druid version, loaded extensions, memory used, total memory and other useful information about the process.</p>
+<ul>
+<li><code>/druid/v2/datasources</code></li>
+</ul>
+<p>Returns a list of queryable datasources.</p>
+<ul>
+<li><code>/druid/v2/datasources/{dataSourceName}</code></li>
+</ul>
+<p>Returns the dimensions and metrics of the datasource.</p>
+<ul>
+<li><code>/druid/v2/datasources/{dataSourceName}/dimensions</code></li>
+</ul>
+<p>Returns the dimensions of the datasource.</p>
+<ul>
+<li><code>/druid/v2/datasources/{dataSourceName}/metrics</code></li>
+</ul>
+<p>Returns the metrics of the datasource.</p>
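+<p>For example, these endpoints can be queried directly against the Router (assuming a Router listening on port 8888 and a hypothetical datasource named <code>wikipedia</code>):</p>
+<pre><code class="hljs css language-bash">curl http://router:8888/status
+curl http://router:8888/druid/v2/datasources
+curl http://router:8888/druid/v2/datasources/wikipedia/dimensions
+</code></pre>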
+<h2><a class="anchor" aria-hidden="true" id="router-as-management-proxy"></a><a href="#router-as-management-proxy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>The Router can be configured to forward requests to the active Coordinator or Overlord process. This may be useful for
+setting up a highly available cluster in situations where the HTTP redirect mechanism of the inactive -&gt; active
+Coordinator/Overlord does not function correctly (servers are behind a load balancer, the hostname used in the redirect
+is only resolvable internally, etc.).</p>
+<h3><a class="anchor" aria-hidden="true" id="enabling-the-management-proxy"></a><a href="#enabling-the-management-proxy" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>To enable this functionality, set the following in the Router's runtime.properties:</p>
+<pre><code class="hljs">druid<span class="hljs-selector-class">.router</span><span class="hljs-selector-class">.managementProxy</span><span class="hljs-selector-class">.enabled</span>=true
+</code></pre>
+<h3><a class="anchor" aria-hidden="true" id="routing"></a><a href="#routing" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1- [...]
+<p>The management proxy supports implicit and explicit routes. Implicit routes are those where the destination can be
+determined from the original request path based on Druid API path conventions. For the Coordinator the convention is
+<code>/druid/coordinator/*</code> and for the Overlord the convention is <code>/druid/indexer/*</code>. These are convenient because they mean
+that using the management proxy does not require modifying the API request other than issuing the request to the Router
+instead of the Coordinator or Overlord. Most Druid API requests can be routed implicitly.</p>
+<p>Explicit routes are those where the request to the Router contains a path prefix indicating which process the request
+should be routed to. For the Coordinator this prefix is <code>/proxy/coordinator</code> and for the Overlord it is <code>/proxy/overlord</code>.
+This is required for API calls with an ambiguous destination. For example, the <code>/status</code> API is present on all Druid
+processes, so explicit routing needs to be used to indicate the proxy destination.</p>
+<p>This is summarized in the table below:</p>
+<table>
+<thead>
+<tr><th>Request Route</th><th>Destination</th><th>Rewritten Route</th><th>Example</th></tr>
+</thead>
+<tbody>
+<tr><td><code>/druid/coordinator/*</code></td><td>Coordinator</td><td><code>/druid/coordinator/*</code></td><td><code>router:8888/druid/coordinator/v1/datasources</code> -&gt; <code>coordinator:8081/druid/coordinator/v1/datasources</code></td></tr>
+<tr><td><code>/druid/indexer/*</code></td><td>Overlord</td><td><code>/druid/indexer/*</code></td><td><code>router:8888/druid/indexer/v1/task</code> -&gt; <code>overlord:8090/druid/indexer/v1/task</code></td></tr>
+<tr><td><code>/proxy/coordinator/*</code></td><td>Coordinator</td><td><code>/*</code></td><td><code>router:8888/proxy/coordinator/status</code> -&gt; <code>coordinator:8081/status</code></td></tr>
+<tr><td><code>/proxy/overlord/*</code></td><td>Overlord</td><td><code>/*</code></td><td><code>router:8888/proxy/overlord/druid/indexer/v1/isLeader</code> -&gt; <code>overlord:8090/druid/indexer/v1/isLeader</code></td></tr>
+</tbody>
+</table>
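+<p>As a sketch of the two styles (the host names and port are assumptions; substitute your own Router address):</p>
+<pre><code class="hljs css language-bash"># Implicit route: the Druid API path already identifies the Coordinator
+curl http://router:8888/druid/coordinator/v1/datasources
+
+# Explicit route: the /proxy/overlord prefix disambiguates the destination
+curl http://router:8888/proxy/overlord/status
+</code></pre>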
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/overlord.html"><span class="arrow-prev">← </span><span>Overlord Process</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/peons.html"><span>Peons</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#running">Running</a></li><li><a href="#example-production-configuration">Exa [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/design/segments.html b/docs/0.16.0-incubating/design/segments.html
new file mode 100644
index 0000000..774c8bd
--- /dev/null
+++ b/docs/0.16.0-incubating/design/segments.html
@@ -0,0 +1,260 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Segments · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Segments · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https://druid.a [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Segments</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Apache Druid (incubating) stores its index in <em>segment files</em>, which are partitioned by
+time. In a basic setup, one segment file is created for each time
+interval, where the time interval is configurable in the
+<code>segmentGranularity</code> parameter of the
+<a href="/docs/0.16.0-incubating/ingestion/index.html#granularityspec"><code>granularitySpec</code></a>.  For Druid to
+operate well under heavy query load, it is important for the segment
+file size to be within the recommended range of 300 MB to 700 MB. If your
+segment files are larger than this range, then consider either
+changing the granularity of the time interval or partitioning your
+data and tweaking the <code>targetPartitionSize</code> in your <code>partitionsSpec</code>
+(a good starting point for this parameter is 5 million rows).  See the
+sharding section below and the 'Partitioning specification' section of
+the <a href="/docs/0.16.0-incubating/ingestion/hadoop.html#partitionsspec">Batch ingestion</a> documentation
+for more information.</p>
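+<p>As a rough sketch, the two knobs mentioned above live in the ingestion spec approximately as follows (this is an excerpt rather than a complete spec, and the values are illustrative starting points, not recommendations for your data):</p>
+<pre><code class="hljs css language-json">"granularitySpec" : {
+  "segmentGranularity" : "day"
+},
+"partitionsSpec" : {
+  "type" : "hashed",
+  "targetPartitionSize" : 5000000
+}
+</code></pre>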
+<h3><a class="anchor" aria-hidden="true" id="a-segment-files-core-data-structures"></a><a href="#a-segment-files-core-data-structures" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 [...]
+<p>Here we describe the internal structure of segment files, which is
+essentially <em>columnar</em>: the data for each column is laid out in
+separate data structures. By storing each column separately, Druid can
+decrease query latency by scanning only those columns actually needed
+for a query.  There are three basic column types: the timestamp
+column, dimension columns, and metric columns, as illustrated in the
+image below:</p>
+<p><img src="../assets/druid-column-types.png" alt="Druid column types" title="Druid Column Types"></p>
+<p>The timestamp and metric columns are simple: behind the scenes each of
+these is an array of integer or floating point values compressed with
+LZ4. Once a query knows which rows it needs to select, it simply
+decompresses these, pulls out the relevant rows, and applies the
+desired aggregation operator. As with all columns, if a query doesn’t
+require a column, then that column’s data is just skipped over.</p>
+<p>Dimensions columns are different because they support filter and
+group-by operations, so each dimension requires the following
+three data structures:</p>
+<ol>
+<li>A dictionary that maps values (which are always treated as strings) to integer IDs,</li>
+<li>A list of the column’s values, encoded using the dictionary in 1, and</li>
+<li>For each distinct value in the column, a bitmap that indicates which rows contain that value.</li>
+</ol>
+<p>Why these three data structures? The dictionary simply maps string
+values to integer IDs so that the values in 2 and 3 can be
+represented compactly. The bitmaps in 3 -- also known as <em>inverted
+indexes</em> -- allow for quick filtering operations (specifically, bitmaps
+are convenient for quickly applying AND and OR operators). Finally,
+the list of values in 2 is needed for <em>group by</em> and <em>TopN</em>
+queries. In other words, queries that solely aggregate metrics based
+on filters do not need to touch the list of dimension values stored in 2.</p>
+<p>To get a concrete sense of these data structures, consider the ‘page’
+column from the example data above.  The three data structures that
+represent this dimension are illustrated in the diagram below.</p>
+<pre><code class="hljs"><span class="hljs-number">1</span>: Dictionary that encodes column values
+  {
+    <span class="hljs-string">"Justin Bieber"</span>: <span class="hljs-number">0</span>,
+    <span class="hljs-string">"Ke$ha"</span>:         <span class="hljs-number">1</span>
+  }
+
+<span class="hljs-number">2</span>: Column data
+  [<span class="hljs-number">0</span>,
+   <span class="hljs-number">0</span>,
+   <span class="hljs-number">1</span>,
+   <span class="hljs-number">1</span>]
+
+<span class="hljs-number">3</span>: Bitmaps - one <span class="hljs-keyword">for</span> each unique value of the column
+  value=<span class="hljs-string">"Justin Bieber"</span>: [<span class="hljs-number">1</span>,<span class="hljs-number">1</span>,<span class="hljs-number">0</span>,<span class="hljs-number">0</span>]
+  value=<span class="hljs-string">"Ke$ha"</span>:         [<span class="hljs-number">0</span>,<span class="hljs-number">0</span>,<span class="hljs-number">1</span>,<span class="hljs-number">1</span>]
+</code></pre>
+<p>Note that the bitmap is different from the first two data structures:
+whereas the first two grow linearly in the size of the data (in the
+worst case), the size of the bitmap section is the product of data
+size * column cardinality. Compression will help us here though
+because we know that for each row in 'column data', there will only be a
+single bitmap that has a non-zero entry. This means that high-cardinality
+columns will have extremely sparse, and therefore highly compressible,
+bitmaps. Druid exploits this using compression algorithms that are
+specially suited for bitmaps, such as roaring bitmap compression.</p>
+<h3><a class="anchor" aria-hidden="true" id="multi-value-columns"></a><a href="#multi-value-columns" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>If a data source makes use of multi-value columns, then the data
+structures within the segment files look a bit different. Let's
+imagine that in the example above, the second row were tagged with
+both the 'Ke$ha' <em>and</em> 'Justin Bieber' topics. In this case, the three
+data structures would now look as follows:</p>
+<pre><code class="hljs"><span class="hljs-number">1</span>: Dictionary that encodes column values
+  {
+    <span class="hljs-string">"Justin Bieber"</span>: <span class="hljs-number">0</span>,
+    <span class="hljs-string">"Ke$ha"</span>:         <span class="hljs-number">1</span>
+  }
+
+<span class="hljs-number">2</span>: Column data
+  [<span class="hljs-number">0</span>,
+   [<span class="hljs-number">0</span>,<span class="hljs-number">1</span>],  &lt;--Row value of multi-value column can have <span class="hljs-built_in">array</span> of values
+   <span class="hljs-number">1</span>,
+   <span class="hljs-number">1</span>]
+
+<span class="hljs-number">3</span>: Bitmaps - one <span class="hljs-keyword">for</span> each unique value
+  value=<span class="hljs-string">"Justin Bieber"</span>: [<span class="hljs-number">1</span>,<span class="hljs-number">1</span>,<span class="hljs-number">0</span>,<span class="hljs-number">0</span>]
+  value=<span class="hljs-string">"Ke$ha"</span>:         [<span class="hljs-number">0</span>,<span class="hljs-number">1</span>,<span class="hljs-number">1</span>,<span class="hljs-number">1</span>]
+                            ^
+                            |
+                            |
+    Multi-value column has multiple non-zero entries
+</code></pre>
+<p>Note the changes to the second row in the column data and the Ke$ha
+bitmap. If a row has more than one value for a column, its entry in
+the 'column data' is an array of values. Additionally, a row with <em>n</em>
+values in 'column data' will have <em>n</em> non-zero valued entries in
+bitmaps.</p>
+<h2><a class="anchor" aria-hidden="true" id="naming-convention"></a><a href="#naming-convention" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2- [...]
+<p>Identifiers for segments are typically constructed using the segment datasource, interval start time (in ISO 8601 format), interval end time (in ISO 8601 format), and a version. If data is additionally sharded beyond a time range, the segment identifier will also contain a partition number.</p>
+<p>A segment identifier follows the pattern:
+<code>datasource_intervalStart_intervalEnd_version_partitionNum</code></p>
+<h2><a class="anchor" aria-hidden="true" id="segment-components"></a><a href="#segment-components" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Behind the scenes, a segment is comprised of several files, listed below.</p>
+<ul>
+<li><p><code>version.bin</code></p>
+<p>4 bytes representing the current segment version as an integer. E.g., for v9 segments, the version is 0x0, 0x0, 0x0, 0x9</p></li>
+<li><p><code>meta.smoosh</code></p>
+<p>A file with metadata (filenames and offsets) about the contents of the other <code>smoosh</code> files</p></li>
+<li><p><code>XXXXX.smoosh</code></p>
+<p>There are some number of these files, which are concatenated binary data</p>
+<p>The <code>smoosh</code> files represent multiple files &quot;smooshed&quot; together in order to minimize the number of file descriptors that must be open to house the data. They are files of up to 2GB in size (to match the limit of a memory mapped ByteBuffer in Java). The <code>smoosh</code> files house individual files for each of the columns in the data as well as an <code>index.drd</code> file with extra metadata about the segment.</p>
+<p>There is also a special column called <code>__time</code> that refers to the time column of the segment. This will hopefully become less and less special as the code evolves, but for now it’s as special as my Mommy always told me I am.</p></li>
+</ul>
+<p>In the codebase, segments have an internal format version. The current segment format version is <code>v9</code>.</p>
+<h2><a class="anchor" aria-hidden="true" id="format-of-a-column"></a><a href="#format-of-a-column" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Each column is stored as two parts:</p>
+<ol>
+<li>A Jackson-serialized ColumnDescriptor</li>
+<li>The rest of the binary for the column</li>
+</ol>
+<p>A ColumnDescriptor is essentially an object that allows us to use Jackson’s polymorphic deserialization to add new serialization methods with minimal impact on the code. It consists of some metadata about the column (what type it is, whether it is multi-value, etc.) followed by a list of serde logic that can deserialize the rest of the binary.</p>
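+<p>The following is a minimal, self-contained sketch of the Jackson polymorphic-deserialization mechanism described above. The class and field names are illustrative only, not Druid's actual ColumnDescriptor implementation:</p>
+<pre><code class="hljs css language-java">import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+public class PolymorphicSerdeExample
+{
+  // A "type" property in the JSON selects which concrete serde class Jackson instantiates.
+  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
+  @JsonSubTypes({
+      @JsonSubTypes.Type(value = LongSerde.class, name = "long"),
+      @JsonSubTypes.Type(value = StringSerde.class, name = "string")
+  })
+  interface ColumnPartSerde
+  {
+    String describe();
+  }
+
+  static class LongSerde implements ColumnPartSerde
+  {
+    @Override
+    public String describe() { return "deserializes a long column"; }
+  }
+
+  static class StringSerde implements ColumnPartSerde
+  {
+    @Override
+    public String describe() { return "deserializes a string column"; }
+  }
+
+  public static void main(String[] args) throws Exception
+  {
+    ObjectMapper mapper = new ObjectMapper();
+    // New serde types can be registered without touching this deserialization call.
+    ColumnPartSerde serde = mapper.readValue("{\"type\": \"string\"}", ColumnPartSerde.class);
+    System.out.println(serde.describe());   // prints: deserializes a string column
+  }
+}
+</code></pre>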
+<h2><a class="anchor" aria-hidden="true" id="sharding-data-to-create-segments"></a><a href="#sharding-data-to-create-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<h3><a class="anchor" aria-hidden="true" id="sharding"></a><a href="#sharding" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p>Multiple segments may exist for the same interval of time for the same datasource. These segments form a <code>block</code> for an interval.
+Depending on the type of <code>shardSpec</code> that is used to shard the data, Druid queries may only complete if a <code>block</code> is complete. That is to say, if a block consists of 3 segments, such as:</p>
+<p><code>sampleData_2011-01-01T02:00:00Z_2011-01-01T03:00:00Z_v1_0</code></p>
+<p><code>sampleData_2011-01-01T02:00:00Z_2011-01-01T03:00:00Z_v1_1</code></p>
+<p><code>sampleData_2011-01-01T02:00:00Z_2011-01-01T03:00:00Z_v1_2</code></p>
+<p>All 3 segments must be loaded before a query for the interval <code>2011-01-01T02:00:00Z_2011-01-01T03:00:00Z</code> completes.</p>
+<p>The exception to this rule is when using linear shard specs. Linear shard specs do not force 'completeness', and queries can complete even if shards are not loaded in the system.
+For example, if your real-time ingestion creates 3 segments that were sharded with linear shard spec, and only two of the segments were loaded in the system, queries would return results only for those 2 segments.</p>
+<h2><a class="anchor" aria-hidden="true" id="schema-changes"></a><a href="#schema-changes" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0- [...]
+<h2><a class="anchor" aria-hidden="true" id="replacing-segments"></a><a href="#replacing-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>Druid uniquely
+identifies segments using the datasource, interval, version, and partition number. The partition number is only visible in the segment id if
+there are multiple segments created for some granularity of time. For example, if you have hourly segments, but you
+have more data in an hour than a single segment can hold, you can create multiple segments for the same hour. These segments will share
+the same datasource, interval, and version, but have linearly increasing partition numbers.</p>
+<pre><code class="hljs">foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v1_0
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v1_1
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v1_2
+</code></pre>
+<p>In the example segments above, the dataSource = foo, the interval = 2015-01-01/2015-01-02, the version = v1, and the partitionNum ranges from 0 to 2.
+If at some later point in time you reindex the data with a new schema, the newly created segments will have a higher version id.</p>
+<pre><code class="hljs">foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v2_0
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v2_1
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v2_2
+</code></pre>
+<p>Druid batch indexing (either Hadoop-based or IndexTask-based) guarantees atomic updates on an interval-by-interval basis.
+In our example, until all <code>v2</code> segments for <code>2015-01-01/2015-01-02</code> are loaded in a Druid cluster, queries exclusively use <code>v1</code> segments.
+Once all <code>v2</code> segments are loaded and queryable, all queries ignore <code>v1</code> segments and switch to the <code>v2</code> segments.
+Shortly afterwards, the <code>v1</code> segments are unloaded from the cluster.</p>
+<p>Note that updates that span multiple segment intervals are only atomic within each interval. They are not atomic across the entire update.
+For example, you have segments such as the following:</p>
+<pre><code class="hljs">foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v1_0
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-02</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-03</span>_v1_1
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-03</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-04</span>_v1_2
+</code></pre>
+<p><code>v2</code> segments will be loaded into the cluster as soon as they are built and replace <code>v1</code> segments for the period of time the
+segments overlap. Before v2 segments are completely loaded, your cluster may have a mixture of <code>v1</code> and <code>v2</code> segments.</p>
+<pre><code class="hljs">foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-01</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-02</span>_v1_0
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-02</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-03</span>_v2_1
+foo_2015<span class="hljs-number">-01</span><span class="hljs-number">-03</span>/<span class="hljs-number">2015</span><span class="hljs-number">-01</span><span class="hljs-number">-04</span>_v1_2
+</code></pre>
+<p>In this case, queries may hit a mixture of <code>v1</code> and <code>v2</code> segments.</p>
+<h2><a class="anchor" aria-hidden="true" id="different-schemas-among-segments"></a><a href="#different-schemas-among-segments" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13. [...]
+<p>Druid segments for the same datasource may have different schemas. If a string column (dimension) exists in one segment but not
+another, queries that involve both segments still work. Queries for the segment missing the dimension will behave as if the dimension has only null values.
+Similarly, if one segment has a numeric column (metric) but another does not, queries on the segment missing the
+metric will generally &quot;do the right thing&quot;. Aggregations over this missing metric behave as if the metric were missing.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/design/architecture.html"><span class="arrow-prev">← </span><span>Design</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/design/processes.html"><span>Processes and servers</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#naming-convention">Naming Convention</a></li><li><a href="#seg [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/approximate-histograms.html b/docs/0.16.0-incubating/development/approximate-histograms.html
new file mode 100644
index 0000000..8df4839
--- /dev/null
+++ b/docs/0.16.0-incubating/development/approximate-histograms.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="extensions-core/approximate-histograms.html">
+<meta http-equiv="refresh" content="0; url=extensions-core/approximate-histograms.html">
+<h1>Redirecting...</h1>
+<a href="extensions-core/approximate-histograms.html">Click here if you are not redirected.</a>
+<script>location="extensions-core/approximate-histograms.html"</script>
diff --git a/docs/0.16.0-incubating/development/build.html b/docs/0.16.0-incubating/development/build.html
new file mode 100644
index 0000000..d497cd5
--- /dev/null
+++ b/docs/0.16.0-incubating/development/build.html
@@ -0,0 +1,108 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Build from source · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Build from source · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" conten [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Build from source</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>You can build Apache Druid (incubating) directly from source. Please note that these instructions are for building the latest stable version of Druid.
+For building the latest code in master, follow the instructions <a href="https://github.com/apache/incubator-druid/blob/master/docs/0.16.0-incubating/content/development/build.md">here</a>.</p>
+<h4><a class="anchor" aria-hidden="true" id="prerequisites"></a><a href="#prerequisites" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<h5><a class="anchor" aria-hidden="true" id="installing-java-and-maven"></a><a href="#installing-java-and-maven" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c [...]
+<ul>
+<li>JDK 8, 8u92+. We recommend using an OpenJDK distribution that provides long-term support and open-source licensing,
+like <a href="https://aws.amazon.com/corretto/">Amazon Corretto</a> or <a href="https://www.azul.com/downloads/zulu/">Azul Zulu</a>.</li>
+<li><a href="http://maven.apache.org/download.cgi">Maven version 3.x</a></li>
+</ul>
+<h5><a class="anchor" aria-hidden="true" id="downloading-the-source"></a><a href="#downloading-the-source" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<pre><code class="hljs css language-bash">git <span class="hljs-built_in">clone</span> git@github.com:apache/incubator-druid.git
+<span class="hljs-built_in">cd</span> druid
+</code></pre>
+<h4><a class="anchor" aria-hidden="true" id="building-the-source"></a><a href="#building-the-source" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.2 [...]
+<p>The basic command to build Druid from source is:</p>
+<pre><code class="hljs css language-bash">mvn clean install
+</code></pre>
+<p>This will run static analysis and unit tests, compile the classes, and package the projects into JARs. It will <em>not</em> generate the source or binary distribution tarball.</p>
+<p>In addition to the basic stages, you may also want to add the following profiles and properties:</p>
+<ul>
+<li><strong>-Pdist</strong> - Distribution profile: Generates the binary distribution tarball by pulling in core extensions and dependencies and packaging the files as <code>distribution/target/apache-druid-x.x.x-bin.tar.gz</code></li>
+<li><strong>-Papache-release</strong> - Apache release profile: Generates GPG signature and checksums, and builds the source distribution tarball as <code>distribution/target/apache-druid-x.x.x-src.tar.gz</code></li>
+<li><strong>-Prat</strong> - Apache Rat profile: Runs the Apache Rat license audit tool</li>
+<li><strong>-DskipTests</strong> - Skips unit tests (which reduces build time)</li>
+</ul>
+<p>Putting these together, if you wish to build the source and binary distributions with signatures and checksums, audit licenses, and skip the unit tests, you would run:</p>
+<pre><code class="hljs css language-bash">mvn clean install -Papache-release,dist,rat -DskipTests
+</code></pre>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/javascript.html"><span class="arrow-prev">← </span><span class="function-name-prevnext">JavaScript functionality</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/versioning.html"><span>Versioning</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/community-extensions/azure.html b/docs/0.16.0-incubating/development/community-extensions/azure.html
new file mode 100644
index 0000000..67da9a3
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/azure.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-contrib/azure.html">
+<meta http-equiv="refresh" content="0; url=../extensions-contrib/azure.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-contrib/azure.html">Click here if you are not redirected.</a>
+<script>location="../extensions-contrib/azure.html"</script>
diff --git a/docs/0.16.0-incubating/development/community-extensions/cassandra.html b/docs/0.16.0-incubating/development/community-extensions/cassandra.html
new file mode 100644
index 0000000..7026a6a
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/cassandra.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-contrib/cassandra.html">
+<meta http-equiv="refresh" content="0; url=../extensions-contrib/cassandra.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-contrib/cassandra.html">Click here if you are not redirected.</a>
+<script>location="../extensions-contrib/cassandra.html"</script>
diff --git a/docs/0.16.0-incubating/development/community-extensions/cloudfiles.html b/docs/0.16.0-incubating/development/community-extensions/cloudfiles.html
new file mode 100644
index 0000000..0c9b8c5
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/cloudfiles.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-contrib/cloudfiles.html">
+<meta http-equiv="refresh" content="0; url=../extensions-contrib/cloudfiles.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-contrib/cloudfiles.html">Click here if you are not redirected.</a>
+<script>location="../extensions-contrib/cloudfiles.html"</script>
diff --git a/docs/0.16.0-incubating/development/community-extensions/graphite.html b/docs/0.16.0-incubating/development/community-extensions/graphite.html
new file mode 100644
index 0000000..dc31894
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/graphite.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-contrib/graphite.html">
+<meta http-equiv="refresh" content="0; url=../extensions-contrib/graphite.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-contrib/graphite.html">Click here if you are not redirected.</a>
+<script>location="../extensions-contrib/graphite.html"</script>
diff --git a/docs/0.16.0-incubating/development/community-extensions/kafka-simple.html b/docs/0.16.0-incubating/development/community-extensions/kafka-simple.html
new file mode 100644
index 0000000..179d16a
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/kafka-simple.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-core/kafka-ingestion.html">
+<meta http-equiv="refresh" content="0; url=../extensions-core/kafka-ingestion.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-core/kafka-ingestion.html">Click here if you are not redirected.</a>
+<script>location="../extensions-core/kafka-ingestion.html"</script>
diff --git a/docs/0.16.0-incubating/development/community-extensions/rabbitmq.html b/docs/0.16.0-incubating/development/community-extensions/rabbitmq.html
new file mode 100644
index 0000000..179d16a
--- /dev/null
+++ b/docs/0.16.0-incubating/development/community-extensions/rabbitmq.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="../extensions-core/kafka-ingestion.html">
+<meta http-equiv="refresh" content="0; url=../extensions-core/kafka-ingestion.html">
+<h1>Redirecting...</h1>
+<a href="../extensions-core/kafka-ingestion.html">Click here if you are not redirected.</a>
+<script>location="../extensions-core/kafka-ingestion.html"</script>
diff --git a/docs/0.16.0-incubating/development/datasketches-aggregators.html b/docs/0.16.0-incubating/development/datasketches-aggregators.html
new file mode 100644
index 0000000..9ef81f4
--- /dev/null
+++ b/docs/0.16.0-incubating/development/datasketches-aggregators.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="extensions-core/datasketches-extension.html">
+<meta http-equiv="refresh" content="0; url=extensions-core/datasketches-extension.html">
+<h1>Redirecting...</h1>
+<a href="extensions-core/datasketches-extension.html">Click here if you are not redirected.</a>
+<script>location="extensions-core/datasketches-extension.html"</script>
diff --git a/docs/0.16.0-incubating/development/experimental.html b/docs/0.16.0-incubating/development/experimental.html
new file mode 100644
index 0000000..97d0811
--- /dev/null
+++ b/docs/0.16.0-incubating/development/experimental.html
@@ -0,0 +1,91 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Experimental features · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Experimental features · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Experimental features</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>Features often start out in &quot;experimental&quot; status that indicates they are still evolving.
+This can mean any of the following things:</p>
+<ol>
+<li>The feature's API may change even in minor releases or patch releases.</li>
+<li>The feature may have known &quot;missing&quot; pieces that will be added later.</li>
+<li>The feature may or may not have received full battle-testing in production environments.</li>
+</ol>
+<p>All experimental features are optional.</p>
+<p>Note that not all of these points apply to every experimental feature. Some have been battle-tested in terms of
+implementation, but are still marked experimental due to an evolving API. Please check the documentation for each
+feature for full details.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/versioning.html"><span class="arrow-prev">← </span><span>Versioning</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/misc/math-expr.html"><span>Expressions</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer" id="footer"><div class="container"><div class="text-ce [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/ambari-metrics-emitter.html b/docs/0.16.0-incubating/development/extensions-contrib/ambari-metrics-emitter.html
new file mode 100644
index 0000000..ec48358
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/ambari-metrics-emitter.html
@@ -0,0 +1,141 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Ambari Metrics Emitter · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Ambari Metrics Emitter · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:u [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Ambari Metrics Emitter</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>ambari-metrics-emitter</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="introduction"></a><a href="#introduction" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>This extension emits Druid metrics to an ambari-metrics carbon server.
+Events are sent after being <a href="http://ambari-metrics.readthedocs.org/en/latest/feeding-carbon.html#the-pickle-protocol">pickled</a>; the batch size is configurable.</p>
+<h2><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>All the configuration parameters for ambari-metrics emitter are under <code>druid.emitter.ambari-metrics</code>.</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>required?</th><th>default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.ambari-metrics.hostname</code></td><td>The hostname of the ambari-metrics server.</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.port</code></td><td>The port of the ambari-metrics server.</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.protocol</code></td><td>The protocol used to send metrics to ambari metrics collector. One of http/https</td><td>no</td><td>http</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.trustStorePath</code></td><td>Path to trustStore to be used for https</td><td>no</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.trustStoreType</code></td><td>trustStore type to be used for https</td><td>no</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.trustStorePassword</code></td><td>trustStore password to be used for https</td><td>no</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.batchSize</code></td><td>Number of events to send as one batch.</td><td>no</td><td>100</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.eventConverter</code></td><td>Filter and converter of Druid events to ambari-metrics timeline events (please see the next section).</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.flushPeriod</code></td><td>Queue flushing period in milliseconds.</td><td>no</td><td>1 minute</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.maxQueueSize</code></td><td>Maximum size of the queue used to buffer events.</td><td>no</td><td><code>MAX_INT</code></td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.alertEmitters</code></td><td>List of emitters where alerts will be forwarded to.</td><td>no</td><td>empty list (no forwarding)</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.emitWaitTime</code></td><td>Wait time in milliseconds to try to send the event; otherwise the emitter drops the event.</td><td>no</td><td>0</td></tr>
+<tr><td><code>druid.emitter.ambari-metrics.waitForEventTime</code></td><td>Wait time in milliseconds, if necessary, for an event to become available.</td><td>no</td><td>1000 (1 sec)</td></tr>
+</tbody>
+</table>
+<h3><a class="anchor" aria-hidden="true" id="druid-to-ambari-metrics-timeline-event-converter"></a><a href="#druid-to-ambari-metrics-timeline-event-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9z [...]
+<p>The Ambari Metrics Timeline Event Converter defines a mapping from a Druid metric name plus its dimensions to a timeline event metricName.
+The ambari-metrics metric path is organized using the following schema:
+<code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].&lt;druid metrics dimensions&gt;.&lt;druid metrics name&gt;</code>
+Properly naming the metrics is critical to avoid conflicts, confusing data, and potentially wrong interpretations later on.</p>
+<p>Example <code>druid.historical.hist-host1:8080.MyDataSourceName.GroupBy.query/time</code>:</p>
+<ul>
+<li><code>druid</code> -&gt; namespace prefix</li>
+<li><code>historical</code> -&gt; service name</li>
+<li><code>hist-host1:8080</code> -&gt; druid hostname</li>
+<li><code>MyDataSourceName</code> -&gt; dimension value</li>
+<li><code>GroupBy</code> -&gt; dimension value</li>
+<li><code>query/time</code> -&gt; metric name</li>
+</ul>
+<p>We have two different implementations of the event converter:</p>
+<h4><a class="anchor" aria-hidden="true" id="send-all-converter"></a><a href="#send-all-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>The first implementation, called <code>all</code>, will send all the Druid service metric events.
+The path will be in the form <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].&lt;dimensions values ordered by dimension's name&gt;.&lt;metric&gt;</code>.
+The user has control of <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].</code></p>
+<pre><code class="hljs css language-json">
+druid.emitter.ambari-metrics.eventConverter={"type":"all", "namespacePrefix": "druid.test", "appName":"druid"}
+
+</code></pre>
+<h4><a class="anchor" aria-hidden="true" id="white-list-based-converter"></a><a href="#white-list-based-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>The second implementation, called <code>whiteList</code>, will send only the white-listed metrics and dimensions.
+As with the <code>all</code> converter, the user has control of <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].</code>
+The white-list based converter comes with the following default white list map, located under resources in <code>./src/main/resources/defaultWhiteListMap.json</code>.</p>
+<p>The user can override the default white list map by supplying a property called <code>mapPath</code>.
+This property is a String containing the path to the file containing the <strong>white list map JSON object</strong>.
+For example, the following converter will read the map from the file <code>/pathPrefix/fileName.json</code>.</p>
+<pre><code class="hljs css language-json">
+druid.emitter.ambari-metrics.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "appName":"druid", "mapPath":"/pathPrefix/fileName.json"}
+
+</code></pre>
+<p><strong>Druid emits a huge number of metrics, so we highly recommend using the <code>whiteList</code> converter.</strong></p>
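+<p>Putting the pieces together, a minimal emitter configuration might look like the following sketch. The hostname, port, and namespace values are placeholders to adapt to your environment, and the <code>druid.emitter</code> line assumes the standard Druid property used to select an emitter.</p>
+<pre><code class="hljs css language-json">
+# Hypothetical example values -- replace with your own collector host, port, and namespace.
+druid.emitter=ambari-metrics
+druid.emitter.ambari-metrics.hostname=ambari-metrics-collector.example.com
+druid.emitter.ambari-metrics.port=6188
+druid.emitter.ambari-metrics.batchSize=100
+druid.emitter.ambari-metrics.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "appName":"druid"}
+
+</code></pre>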
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-core/test-stats.html"><span class="arrow-prev">← </span><span>Test Stats Aggregators</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/azure.html"><span>Microsoft Azure</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#introduction" [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/azure.html b/docs/0.16.0-incubating/development/extensions-contrib/azure.html
new file mode 100644
index 0000000..72511cb
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/azure.html
@@ -0,0 +1,147 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Microsoft Azure · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Microsoft Azure · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="h [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Microsoft Azure</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-azure-extensions</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="deep-storage"></a><a href="#deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p><a href="http://azure.microsoft.com/en-us/services/storage/">Microsoft Azure Storage</a> is another option for deep storage. This requires some additional Druid configuration.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.type</code></td><td>azure</td><td></td><td>Must be set.</td></tr>
+<tr><td><code>druid.azure.account</code></td><td></td><td>Azure Storage account name.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.azure.key</code></td><td></td><td>Azure Storage account key.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.azure.container</code></td><td></td><td>Azure Storage container name.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.azure.protocol</code></td><td>http or https</td><td></td><td>https</td></tr>
+<tr><td><code>druid.azure.maxTries</code></td><td></td><td>Number of tries before cancelling an Azure operation.</td><td>3</td></tr>
+</tbody>
+</table>
+<p>See <a href="http://azure.microsoft.com/en-us/pricing/free-trial/">Azure Services</a> for more information.</p>
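+<p>For reference, the deep storage properties above are typically set together in the common Druid runtime properties; the account, key, and container values in this sketch are placeholders rather than real credentials.</p>
+<pre><code class="hljs css language-json">
+# Hypothetical example values -- substitute your own storage account, key, and container.
+druid.storage.type=azure
+druid.azure.account=your-storage-account
+druid.azure.key=your-storage-account-key
+druid.azure.container=druid-segments
+
+</code></pre>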
+<h2><a class="anchor" aria-hidden="true" id="firehose"></a><a href="#firehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p><a name="firehose"></a></p>
+<h4><a class="anchor" aria-hidden="true" id="staticazureblobstorefirehose"></a><a href="#staticazureblobstorefirehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 [...]
+<p>This firehose ingests events, similar to the StaticS3Firehose, but from an Azure Blob Store.</p>
+<p>Data is newline delimited, with one JSON object per line and parsed as per the <code>InputRowParser</code> configuration.</p>
+<p>The storage account is shared with the one used for Azure deep storage functionality, but blobs can be in a different container.</p>
+<p>As with the S3 blobstore, data is assumed to be gzipped if the file extension ends in .gz.</p>
+<p>This firehose is <em>splittable</em> and can be used by <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#parallel-task">native parallel index tasks</a>.
+Since each split represents an object in this firehose, each worker task of <code>index_parallel</code> will read an object.</p>
+<p>Sample spec:</p>
+<pre><code class="hljs css language-json">"firehose" : {
+    "type" : "static-azure-blobstore",
+    "blobs": [
+        {
+          "container": "container",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "container": "anothercontainer",
+          "path": "/another/path.json"
+        }
+    ]
+}
+</code></pre>
+<p>This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scanning of objects is slow; see the example after the tables below.</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>type</td><td>This should be <code>static-azure-blobstore</code>.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>blobs</td><td>JSON array of <a href="https://msdn.microsoft.com/en-us/library/azure/ee691964.aspx">Azure blobs</a>.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>maxCacheCapacityBytes</td><td>Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>maxFetchCapacityBytes</td><td>Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>prefetchTriggerBytes</td><td>Threshold to trigger prefetching Azure objects.</td><td>maxFetchCapacityBytes / 2</td><td>no</td></tr>
+<tr><td>fetchTimeout</td><td>Timeout for fetching an Azure object.</td><td>60000</td><td>no</td></tr>
+<tr><td>maxFetchRetry</td><td>Maximum retry for fetching an Azure object.</td><td>3</td><td>no</td></tr>
+</tbody>
+</table>
+<p>Azure Blobs:</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>container</td><td>Name of the azure <a href="https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/#create-a-container">container</a></td><td>N/A</td><td>yes</td></tr>
+<tr><td>path</td><td>The path where data is located.</td><td>N/A</td><td>yes</td></tr>
+</tbody>
+</table>
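+<p>If direct scanning of objects is slow, the caching and prefetching options described above can be set on the firehose spec alongside the blobs; the byte sizes below are illustrative assumptions, not recommended values.</p>
+<pre><code class="hljs css language-json">"firehose" : {
+    "type" : "static-azure-blobstore",
+    "blobs": [
+        {
+          "container": "container",
+          "path": "/path/to/your/file.json"
+        }
+    ],
+    "maxCacheCapacityBytes": 1073741824,
+    "maxFetchCapacityBytes": 1073741824,
+    "prefetchTriggerBytes": 536870912,
+    "fetchTimeout": 60000
+}
+</code></pre>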
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/ambari-metrics-emitter.html"><span class="arrow-prev">← </span><span>Ambari Metrics Emitter</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/cassandra.html"><span>Apache Cassandra</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a  [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/cassandra.html b/docs/0.16.0-incubating/development/extensions-contrib/cassandra.html
new file mode 100644
index 0000000..82d859a
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/cassandra.html
@@ -0,0 +1,84 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Apache Cassandra · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Apache Cassandra · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Apache Cassandra</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-cassandra-storage</code> extension.</p>
+<p><a href="http://www.datastax.com/what-we-offer/products-services/datastax-enterprise/apache-cassandra">Apache Cassandra</a> can also
+be leveraged for deep storage. This requires some additional Druid configuration as well as setting up the necessary
+schema within a Cassandra keyspace.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/azure.html"><span class="arrow-prev">← </span><span>Microsoft Azure</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/cloudfiles.html"><span>Rackspace Cloud Files</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"></nav></div><footer class="nav-footer druid-footer [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/cloudfiles.html b/docs/0.16.0-incubating/development/extensions-contrib/cloudfiles.html
new file mode 100644
index 0000000..9e7d739
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/cloudfiles.html
@@ -0,0 +1,151 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Rackspace Cloud Files · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Rackspace Cloud Files · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Rackspace Cloud Files</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-cloudfiles-extensions</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="deep-storage"></a><a href="#deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p><a href="http://www.rackspace.com/cloud/files/">Rackspace Cloud Files</a> is another option for deep storage. This requires some additional Druid configuration.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.type</code></td><td>cloudfiles</td><td></td><td>Must be set.</td></tr>
+<tr><td><code>druid.storage.region</code></td><td></td><td>Rackspace Cloud Files region.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.storage.container</code></td><td></td><td>Rackspace Cloud Files container name.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.storage.basePath</code></td><td></td><td>Rackspace Cloud Files base path to use in the container.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.storage.operationMaxRetries</code></td><td></td><td>Number of tries before cancelling a Rackspace operation.</td><td>10</td></tr>
+<tr><td><code>druid.cloudfiles.userName</code></td><td></td><td>Rackspace Cloud username</td><td>Must be set.</td></tr>
+<tr><td><code>druid.cloudfiles.apiKey</code></td><td></td><td>Rackspace Cloud api key.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.cloudfiles.provider</code></td><td>rackspace-cloudfiles-us,rackspace-cloudfiles-uk</td><td>Name of the provider depending on the region.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.cloudfiles.useServiceNet</code></td><td>true,false</td><td>Whether to use the internal service net.</td><td>true</td></tr>
+</tbody>
+</table>
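+<p>A corresponding deep storage configuration might look like the following sketch; the region, container, base path, and credential values are placeholders to replace with your own.</p>
+<pre><code class="hljs css language-json">
+# Hypothetical example values -- substitute your own region, container, and credentials.
+druid.storage.type=cloudfiles
+druid.storage.region=DFW
+druid.storage.container=druid-segments
+druid.storage.basePath=druid/segments
+druid.cloudfiles.userName=your-rackspace-username
+druid.cloudfiles.apiKey=your-rackspace-api-key
+druid.cloudfiles.provider=rackspace-cloudfiles-us
+
+</code></pre>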
+<h2><a class="anchor" aria-hidden="true" id="firehose"></a><a href="#firehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p><a name="firehose"></a></p>
+<h4><a class="anchor" aria-hidden="true" id="staticcloudfilesfirehose"></a><a href="#staticcloudfilesfirehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-. [...]
+<p>This firehose ingests events, similar to the StaticAzureBlobStoreFirehose, but from Rackspace's Cloud Files.</p>
+<p>Data is newline delimited, with one JSON object per line and parsed as per the <code>InputRowParser</code> configuration.</p>
+<p>The storage account is shared with the one used for Rackspace's Cloud Files deep storage functionality, but blobs can be in a different region and container.</p>
+<p>As with the Azure blobstore, data is assumed to be gzipped if the file extension ends in .gz.</p>
+<p>This firehose is <em>splittable</em> and can be used by <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#parallel-task">native parallel index tasks</a>.
+Since each split represents an object in this firehose, each worker task of <code>index_parallel</code> will read an object.</p>
+<p>Sample spec:</p>
+<pre><code class="hljs css language-json">"firehose" : {
+    "type" : "static-cloudfiles",
+    "blobs": [
+        {
+          "region": "DFW"
+          "container": "container",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "region": "ORD"
+          "container": "anothercontainer",
+          "path": "/another/path.json"
+        }
+    ]
+}
+</code></pre>
+<p>This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scanning of objects is slow.</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>type</td><td>This should be <code>static-cloudfiles</code>.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>blobs</td><td>JSON array of Cloud Files blobs.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>maxCacheCapacityBytes</td><td>Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>maxFetchCapacityBytes</td><td>Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>fetchTimeout</td><td>Timeout for fetching a Cloud Files object.</td><td>60000</td><td>no</td></tr>
+<tr><td>maxFetchRetry</td><td>Maximum retry for fetching a Cloud Files object.</td><td>3</td><td>no</td></tr>
+</tbody>
+</table>
+<p>Cloud Files Blobs:</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>container</td><td>Name of the Cloud Files container</td><td>N/A</td><td>yes</td></tr>
+<tr><td>path</td><td>The path where data is located.</td><td>N/A</td><td>yes</td></tr>
+</tbody>
+</table>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/cassandra.html"><span class="arrow-prev">← </span><span>Apache Cassandra</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/distinctcount.html"><span class="function-name-prevnext">DistinctCount Aggregator</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul clas [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/distinctcount.html b/docs/0.16.0-incubating/development/extensions-contrib/distinctcount.html
new file mode 100644
index 0000000..32e198f
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/distinctcount.html
@@ -0,0 +1,143 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>DistinctCount Aggregator · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="DistinctCount Aggregator · Apache Druid"/><meta property="og:type" content="website"/><meta property=" [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">DistinctCount Aggregator</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> the <code>druid-distinctcount</code> extension.</p>
+<p>Additionally, follow these steps:</p>
+<ol>
+<li>First, use a single-dimension, hash-based partition spec to partition data by a single dimension, for example <code>visitor_id</code>. This ensures that all rows with a particular value for that dimension go into the same segment; otherwise the count may be too high (see the sketch after this list).</li>
+<li>Second, use distinctCount to calculate the distinct count, and make sure <code>segmentGranularity</code> is an exact multiple of <code>queryGranularity</code>, or else the result will be wrong.</li>
+</ol>
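+<p>As a sketch of step 1, a Hadoop-based ingestion <code>tuningConfig</code> could use a hash-based partition spec restricted to the <code>visitor_id</code> dimension. The field names follow the Hadoop tuningConfig; the target partition size is only an illustrative placeholder:</p>
+<pre><code class="hljs css language-json">"tuningConfig": {
+  "type": "hadoop",
+  "partitionsSpec": {
+    "type": "hashed",
+    "targetPartitionSize": 5000000,
+    "partitionDimensions": ["visitor_id"]
+  }
+}
+</code></pre>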
+<p>There are some limitations. When used with groupBy, the number of groupBy keys should not exceed <code>maxIntermediateRows</code> in any segment, or the result will be wrong. When used with topN, <code>numValuesPerPass</code> should not be too big; if it is, distinctCount will use a lot of memory and may cause the JVM to run out of memory.</p>
+<p>Example:</p>
+<h2><a class="anchor" aria-hidden="true" id="timeseries-query"></a><a href="#timeseries-query" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2. [...]
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"queryType"</span>: <span class="hljs-string">"timeseries"</span>,
+  <span class="hljs-attr">"dataSource"</span>: <span class="hljs-string">"sample_datasource"</span>,
+  <span class="hljs-attr">"granularity"</span>: <span class="hljs-string">"day"</span>,
+  <span class="hljs-attr">"aggregations"</span>: [
+    {
+      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"distinctCount"</span>,
+      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"uv"</span>,
+      <span class="hljs-attr">"fieldName"</span>: <span class="hljs-string">"visitor_id"</span>
+    }
+  ],
+  <span class="hljs-attr">"intervals"</span>: [
+    <span class="hljs-string">"2016-03-01T00:00:00.000/2013-03-20T00:00:00.000"</span>
+  ]
+}
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="topn-query"></a><a href="#topn-query" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1 [...]
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"queryType"</span>: <span class="hljs-string">"topN"</span>,
+  <span class="hljs-attr">"dataSource"</span>: <span class="hljs-string">"sample_datasource"</span>,
+  <span class="hljs-attr">"dimension"</span>: <span class="hljs-string">"sample_dim"</span>,
+  <span class="hljs-attr">"threshold"</span>: <span class="hljs-number">5</span>,
+  <span class="hljs-attr">"metric"</span>: <span class="hljs-string">"uv"</span>,
+  <span class="hljs-attr">"granularity"</span>: <span class="hljs-string">"all"</span>,
+  <span class="hljs-attr">"aggregations"</span>: [
+    {
+      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"distinctCount"</span>,
+      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"uv"</span>,
+      <span class="hljs-attr">"fieldName"</span>: <span class="hljs-string">"visitor_id"</span>
+    }
+  ],
+  <span class="hljs-attr">"intervals"</span>: [
+    <span class="hljs-string">"2016-03-06T00:00:00/2016-03-06T23:59:59"</span>
+  ]
+}
+</code></pre>
+<h2><a class="anchor" aria-hidden="true" id="groupby-query"></a><a href="#groupby-query" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<pre><code class="hljs css language-json">{
+  <span class="hljs-attr">"queryType"</span>: <span class="hljs-string">"groupBy"</span>,
+  <span class="hljs-attr">"dataSource"</span>: <span class="hljs-string">"sample_datasource"</span>,
+  <span class="hljs-attr">"dimensions"</span>: <span class="hljs-string">"[sample_dim]"</span>,
+  <span class="hljs-attr">"granularity"</span>: <span class="hljs-string">"all"</span>,
+  <span class="hljs-attr">"aggregations"</span>: [
+    {
+      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"distinctCount"</span>,
+      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"uv"</span>,
+      <span class="hljs-attr">"fieldName"</span>: <span class="hljs-string">"visitor_id"</span>
+    }
+  ],
+  <span class="hljs-attr">"intervals"</span>: [
+    <span class="hljs-string">"2016-03-06T00:00:00/2016-03-06T23:59:59"</span>
+  ]
+}
+</code></pre>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/cloudfiles.html"><span class="arrow-prev">← </span><span>Rackspace Cloud Files</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/google.html"><span>Google Cloud Storage</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#times [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/google.html b/docs/0.16.0-incubating/development/extensions-contrib/google.html
new file mode 100644
index 0000000..f71c1ac
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/google.html
@@ -0,0 +1,142 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Google Cloud Storage · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Google Cloud Storage · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url"  [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Google Cloud Storage</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-google-extensions</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="deep-storage"></a><a href="#deep-storage" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>Deep storage can be written to Google Cloud Storage either via this extension or the <a href="/docs/0.16.0-incubating/development/extensions-core/hdfs.html">druid-hdfs-storage extension</a>.</p>
+<h3><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<table>
+<thead>
+<tr><th>Property</th><th>Possible Values</th><th>Description</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.storage.type</code></td><td>google</td><td></td><td>Must be set.</td></tr>
+<tr><td><code>druid.google.bucket</code></td><td></td><td>GCS bucket name.</td><td>Must be set.</td></tr>
+<tr><td><code>druid.google.prefix</code></td><td></td><td>GCS prefix.</td><td>Must be set.</td></tr>
+</tbody>
+</table>
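+<p>For example, a minimal sketch of the common runtime properties for Google Cloud Storage deep storage might look like the following; the bucket and prefix values are placeholders:</p>
+<pre><code class="hljs css language-json">druid.extensions.loadList=["druid-google-extensions"]
+druid.storage.type=google
+druid.google.bucket=your-gcs-bucket
+druid.google.prefix=druid/segments
+</code></pre>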
+<h2><a class="anchor" aria-hidden="true" id="firehose"></a><a href="#firehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64  [...]
+<p><a name="firehose"></a></p>
+<h4><a class="anchor" aria-hidden="true" id="staticgoogleblobstorefirehose"></a><a href="#staticgoogleblobstorefirehose" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12  [...]
+<p>This firehose ingests events, similar to the StaticS3Firehose, but from Google Cloud Storage.</p>
+<p>As with the S3 blobstore, an object is assumed to be gzipped if its name ends in <code>.gz</code>.</p>
+<p>This firehose is <em>splittable</em> and can be used by <a href="/docs/0.16.0-incubating/ingestion/native-batch.html#parallel-task">native parallel index tasks</a>.
+Since each split represents an object in this firehose, each worker task of <code>index_parallel</code> will read an object.</p>
+<p>Sample spec:</p>
+<pre><code class="hljs css language-json">"firehose" : {
+    "type" : "static-google-blobstore",
+    "blobs": [
+        {
+          "bucket": "foo",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "bucket": "bar",
+          "path": "/another/path.json"
+        }
+    ]
+}
+</code></pre>
+<p>This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow.</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>type</td><td>This should be <code>static-google-blobstore</code>.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>blobs</td><td>JSON array of Google Blobs.</td><td>N/A</td><td>yes</td></tr>
+<tr><td>maxCacheCapacityBytes</td><td>Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>maxFetchCapacityBytes</td><td>Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.</td><td>1073741824</td><td>no</td></tr>
+<tr><td>prefetchTriggerBytes</td><td>Threshold to trigger prefetching Google Blobs.</td><td>maxFetchCapacityBytes / 2</td><td>no</td></tr>
+<tr><td>fetchTimeout</td><td>Timeout for fetching a Google Blob.</td><td>60000</td><td>no</td></tr>
+<tr><td>maxFetchRetry</td><td>Maximum retry for fetching a Google Blob.</td><td>3</td><td>no</td></tr>
+</tbody>
+</table>
+<p>Google Blobs:</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>default</th><th>required?</th></tr>
+</thead>
+<tbody>
+<tr><td>bucket</td><td>Name of the Google Cloud bucket</td><td>N/A</td><td>yes</td></tr>
+<tr><td>path</td><td>The path where data is located.</td><td>N/A</td><td>yes</td></tr>
+</tbody>
+</table>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/distinctcount.html"><span class="arrow-prev">← </span><span class="function-name-prevnext">DistinctCount Aggregator</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/graphite.html"><span>Graphite Emitter</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/graphite.html b/docs/0.16.0-incubating/development/extensions-contrib/graphite.html
new file mode 100644
index 0000000..24ba3fc
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/graphite.html
@@ -0,0 +1,151 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Graphite Emitter · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Graphite Emitter · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">Graphite Emitter</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>graphite-emitter</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="introduction"></a><a href="#introduction" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>This extension emits druid metrics to a graphite carbon server.
+Metrics can be sent by using <a href="http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol">plaintext</a> or <a href="http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-pickle-protocol">pickle</a> protocol.
+The pickle protocol is more efficient and supports sending batches of metrics in one request (the plaintext protocol sends only one metric per request); the batch size is configurable.</p>
+<h2><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>All the configuration parameters for graphite emitter are under <code>druid.emitter.graphite</code>.</p>
+<table>
+<thead>
+<tr><th>property</th><th>description</th><th>required?</th><th>default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.graphite.hostname</code></td><td>The hostname of the graphite server.</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.graphite.port</code></td><td>The port of the graphite server.</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.graphite.batchSize</code></td><td>Number of events to send as one batch (only for pickle protocol)</td><td>no</td><td>100</td></tr>
+<tr><td><code>druid.emitter.graphite.protocol</code></td><td>Graphite protocol; available protocols: pickle, plaintext.</td><td>no</td><td>pickle</td></tr>
+<tr><td><code>druid.emitter.graphite.eventConverter</code></td><td>Filter and converter of druid events to graphite event (please see next section).</td><td>yes</td><td>none</td></tr>
+<tr><td><code>druid.emitter.graphite.flushPeriod</code></td><td>Queue flushing period in milliseconds.</td><td>no</td><td>1 minute</td></tr>
+<tr><td><code>druid.emitter.graphite.maxQueueSize</code></td><td>Maximum size of the queue used to buffer events.</td><td>no</td><td><code>MAX_INT</code></td></tr>
+<tr><td><code>druid.emitter.graphite.alertEmitters</code></td><td>List of emitters where alerts will be forwarded to. This is a JSON list of emitter names, e.g. <code>[&quot;logging&quot;, &quot;http&quot;]</code></td><td>no</td><td>empty list (no forwarding)</td></tr>
+<tr><td><code>druid.emitter.graphite.requestLogEmitters</code></td><td>List of emitters where request logs (i.e., query logging events sent to emitters when <code>druid.request.logging.type</code> is set to <code>emitter</code>) will be forwarded to. This is a JSON list of emitter names, e.g. <code>[&quot;logging&quot;, &quot;http&quot;]</code></td><td>no</td><td>empty list (no forwarding)</td></tr>
+<tr><td><code>druid.emitter.graphite.emitWaitTime</code></td><td>Wait time in milliseconds to try to send the event; if the event cannot be sent within this time, the emitter drops it.</td><td>no</td><td>0</td></tr>
+<tr><td><code>druid.emitter.graphite.waitForEventTime</code></td><td>Time in milliseconds to wait, if necessary, for an event to become available.</td><td>no</td><td>1000 (1 sec)</td></tr>
+</tbody>
+</table>
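+<p>Putting the required properties together, a minimal runtime properties sketch might look like the following, assuming the emitter is selected with <code>druid.emitter=graphite</code>; the hostname, port, and converter settings are placeholders:</p>
+<pre><code class="hljs css language-json">druid.emitter=graphite
+druid.emitter.graphite.hostname=graphite.example.com
+druid.emitter.graphite.port=2004
+druid.emitter.graphite.protocol=pickle
+druid.emitter.graphite.batchSize=100
+druid.emitter.graphite.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test"}
+</code></pre>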
+<h3><a class="anchor" aria-hidden="true" id="supported-event-types"></a><a href="#supported-event-types" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2 [...]
+<p>The graphite emitter only emits service metric events to graphite (See <a href="/docs/0.16.0-incubating/operations/metrics.html">Druid Metrics</a> for a list of metrics).</p>
+<p>Alerts and request logs are not sent to Graphite. These event types are not well represented in Graphite, which is better suited to timeseries views of numeric metrics than to storing non-numeric log events.</p>
+<p>Instead, alerts and request logs are optionally forwarded to other emitter implementations, specified by <code>druid.emitter.graphite.alertEmitters</code> and <code>druid.emitter.graphite.requestLogEmitters</code> respectively.</p>
+<h3><a class="anchor" aria-hidden="true" id="druid-to-graphite-event-converter"></a><a href="#druid-to-graphite-event-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S1 [...]
+<p>The Graphite event converter defines a mapping from a Druid metric name plus its dimensions to a Graphite metric path.
+The Graphite metric path is organized using the following schema:
+<code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].&lt;druid metrics dimensions&gt;.&lt;druid metrics name&gt;</code>
+Properly naming the metrics is critical to avoid conflicts, confusing data, and potentially wrong interpretations later on.</p>
+<p>Example <code>druid.historical.hist-host1_yahoo_com:8080.MyDataSourceName.GroupBy.query/time</code>:</p>
+<ul>
+<li><code>druid</code> -&gt; namespace prefix</li>
+<li><code>historical</code> -&gt; service name</li>
+<li><code>hist-host1.yahoo.com:8080</code> -&gt; druid hostname</li>
+<li><code>MyDataSourceName</code> -&gt; dimension value</li>
+<li><code>GroupBy</code> -&gt; dimension value</li>
+<li><code>query/time</code> -&gt; metric name</li>
+</ul>
+<p>There are two different implementations of the event converter:</p>
+<h4><a class="anchor" aria-hidden="true" id="send-all-converter"></a><a href="#send-all-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22- [...]
+<p>The first implementation, called <code>all</code>, sends all Druid service metric events.
+The path will be in the form <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].&lt;dimensions values ordered by dimension's name&gt;.&lt;metric&gt;</code>.
+The user has control over <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].</code></p>
+<p>You can omit the hostname by setting <code>ignoreHostname=true</code>, which yields paths like
+<code>druid.SERVICE_NAME.dataSourceName.queryType.query/time</code>.</p>
+<p>You can omit the service name by setting <code>ignoreServiceName=true</code>, which yields paths like
+<code>druid.HOSTNAME.dataSourceName.queryType.query/time</code>.</p>
+<p>By default, elements in a metric name are separated by &quot;/&quot;, so Graphite will create all metrics on one level. If you want the metrics in a tree structure, set <code>replaceSlashWithDot=true</code>.
+Original: <code>druid.HOSTNAME.dataSourceName.queryType.query/time</code>
+Changed: <code>druid.HOSTNAME.dataSourceName.queryType.query.time</code></p>
+<pre><code class="hljs css language-json">
+druid.emitter.graphite.eventConverter={"type":"all", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true}
+
+</code></pre>
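+<p>If the tree structure described above is desired, the same converter can additionally set <code>replaceSlashWithDot</code>; this is an illustrative variant of the example above:</p>
+<pre><code class="hljs css language-json">
+druid.emitter.graphite.eventConverter={"type":"all", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true, "replaceSlashWithDot":true}
+
+</code></pre>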
+<h4><a class="anchor" aria-hidden="true" id="white-list-based-converter"></a><a href="#white-list-based-converter" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H [...]
+<p>The second implementation, called <code>whiteList</code>, sends only the white-listed metrics and dimensions.
+As with the <code>all</code> converter, the user has control over <code>&lt;namespacePrefix&gt;.[&lt;druid service name&gt;].[&lt;druid hostname&gt;].</code>
+The white-list based converter comes with a default white list map located under resources in <code>./src/main/resources/defaultWhiteListMap.json</code>.</p>
+<p>The user can override the default white list map by supplying a property called <code>mapPath</code>.
+This property is a String containing the path to a file containing the <strong>white list map JSON object</strong>.
+For example, the following converter will read the map from the file <code>/pathPrefix/fileName.json</code>.</p>
+<pre><code class="hljs css language-json">
+druid.emitter.graphite.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true, "mapPath":"/pathPrefix/fileName.json"}
+
+</code></pre>
+<p><strong>Druid emits a huge number of metrics; we highly recommend using the <code>whiteList</code> converter.</strong></p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/google.html"><span class="arrow-prev">← </span><span>Google Cloud Storage</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/querying/aggregations.html"><span>Aggregations</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li><a href="#introduction">Introduction</a></li [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/influx.html b/docs/0.16.0-incubating/development/extensions-contrib/influx.html
new file mode 100644
index 0000000..042c54f
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/influx.html
@@ -0,0 +1,113 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>InfluxDB Line Protocol Parser · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="InfluxDB Line Protocol Parser · Apache Druid"/><meta property="og:type" content="website"/><meta  [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">InfluxDB Line Protocol Parser</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-influx-extensions</code>.</p>
+<p>This extension enables Druid to parse the <a href="https://docs.influxdata.com/influxdb/v1.5/write_protocols/line_protocol_tutorial/">InfluxDB Line Protocol</a>, a popular text-based timeseries metric serialization format.</p>
+<h2><a class="anchor" aria-hidden="true" id="line-protocol"></a><a href="#line-protocol" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>A typical line looks like this:</p>
+<p><code>cpu,application=dbhost=prdb123,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000</code></p>
+<p>which contains four parts:</p>
+<ul>
+<li>measurement: A string indicating the name of the measurement represented (e.g. cpu, network, web_requests)</li>
+<li>tags: zero or more key-value pairs (i.e. dimensions)</li>
+<li>measurements: one or more key-value pairs; values can be numeric, boolean, or string</li>
+<li>timestamp: nanoseconds since Unix epoch (the parser truncates it to milliseconds)</li>
+</ul>
+<p>The parser extracts these fields into a map, giving the measurement the key <code>measurement</code> and the timestamp the key <code>__ts</code>. The tag and measurement keys are copied verbatim, so users should take care to avoid name collisions. It is up to the ingestion spec to decide which fields should be treated as dimensions and which should be treated as metrics (typically tags correspond to dimensions and measurements correspond to metrics).</p>
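+<p>For instance, a hypothetical line such as <code>cpu,host=web01,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000</code> would be extracted into a map roughly like the following; note the timestamp truncated to milliseconds:</p>
+<pre><code class="hljs css language-json">{
+  "measurement": "cpu",
+  "host": "web01",
+  "region": "us-east-1",
+  "usage_idle": 99.24,
+  "usage_user": 0.55,
+  "__ts": 1520722030000
+}
+</code></pre>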
+<p>The parser is configured like so:</p>
+<pre><code class="hljs css language-json">"parser": {
+      "type": "string",
+      "parseSpec": {
+        "format": "influx",
+        "timestampSpec": {
+          "column": "__ts",
+          "format": "millis"
+        },
+        "dimensionsSpec": {
+          "dimensionExclusions": [
+            "__ts"
+          ]
+        },
+        "whitelistMeasurements": [
+          "cpu"
+        ]
+      }
+}
+</code></pre>
+<p>The <code>whitelistMeasurements</code> field is an optional list of strings. If present, measurements that do not match one of the strings in the list will be ignored.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/querying/virtual-columns.html"><span class="arrow-prev">← </span><span>Virtual Columns</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/influxdb-emitter.html"><span class="function-name-prevnext">InfluxDB Emitter</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class="toc-headings"><li>< [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/influxdb-emitter.html b/docs/0.16.0-incubating/development/extensions-contrib/influxdb-emitter.html
new file mode 100644
index 0000000..823557c
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/influxdb-emitter.html
@@ -0,0 +1,120 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>InfluxDB Emitter · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="InfluxDB Emitter · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content= [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
+                var arrow = this.childNodes[1];
+                arrow.classList.toggle('rotate');
+                var content = this.nextElementSibling;
+                content.classList.toggle('hide');
+              });
+            }
+
+            document.addEventListener('DOMContentLoaded', function() {
+              createToggler('#navToggler', '#docsNav', 'docsSliderActive');
+              createToggler('#tocToggler', 'body', 'tocActive');
+
+              var headings = document.querySelector('.toc-headings');
+              headings && headings.addEventListener('click', function(event) {
+                var el = event.target;
+                while(el !== headings){
+                  if (el.tagName === 'A') {
+                    document.body.classList.remove('tocActive');
+                    break;
+                  } else{
+                    el = el.parentNode;
+                  }
+                }
+              }, false);
+
+              function createToggler(togglerSelector, targetSelector, className) {
+                var toggler = document.querySelector(togglerSelector);
+                var target = document.querySelector(targetSelector);
+
+                if (!toggler) {
+                  return;
+                }
+
+                toggler.onclick = function(event) {
+                  event.preventDefault();
+
+                  target.classList.toggle(className);
+                };
+              }
+            });
+        </script></nav></div><div class="container mainContainer"><div class="wrapper"><div class="post"><header class="postHeader"><h1 class="postHeaderTitle">InfluxDB Emitter</h1></header><article><div><span><!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+<p>To use this Apache Druid (incubating) extension, make sure to <a href="/docs/0.16.0-incubating/development/extensions.html#loading-extensions">include</a> <code>druid-influxdb-emitter</code> extension.</p>
+<h2><a class="anchor" aria-hidden="true" id="introduction"></a><a href="#introduction" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83. [...]
+<p>This extension emits druid metrics to <a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a> over HTTP. Currently this emitter only emits service metric events to InfluxDB (See <a href="/docs/0.16.0-incubating/operations/metrics.html">Druid metrics</a> for a list of metrics).
+When a metric event is fired it is added to a queue of events. After a configurable amount of time, the events on the queue are transformed to InfluxDB's line protocol
+and POSTed to the InfluxDB HTTP API. The entire queue is flushed at this point. The queue is also flushed when the emitter is shut down.</p>
+<p>Note that authentication and authorization must be <a href="https://docs.influxdata.com/influxdb/v1.7/administration/authentication_and_authorization/">enabled</a> on the InfluxDB server.</p>
+<h2><a class="anchor" aria-hidden="true" id="configuration"></a><a href="#configuration" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.8 [...]
+<p>All the configuration parameters for the influxdb emitter are under <code>druid.emitter.influxdb</code>.</p>
+<table>
+<thead>
+<tr><th>Property</th><th>Description</th><th>Required?</th><th>Default</th></tr>
+</thead>
+<tbody>
+<tr><td><code>druid.emitter.influxdb.hostname</code></td><td>The hostname of the InfluxDB server.</td><td>Yes</td><td>N/A</td></tr>
+<tr><td><code>druid.emitter.influxdb.port</code></td><td>The port of the InfluxDB server.</td><td>No</td><td>8086</td></tr>
+<tr><td><code>druid.emitter.influxdb.databaseName</code></td><td>The name of the database in InfluxDB.</td><td>Yes</td><td>N/A</td></tr>
+<tr><td><code>druid.emitter.influxdb.maxQueueSize</code></td><td>The size of the queue that holds events.</td><td>No</td><td>Integer.MAX_VALUE (2^31-1)</td></tr>
+<tr><td><code>druid.emitter.influxdb.flushPeriod</code></td><td>How often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.</td><td>No</td><td>60000</td></tr>
+<tr><td><code>druid.emitter.influxdb.flushDelay</code></td><td>How long (in milliseconds) the scheduled method will wait until it first runs.</td><td>No</td><td>60000</td></tr>
+<tr><td><code>druid.emitter.influxdb.influxdbUserName</code></td><td>The username for authenticating with the InfluxDB database.</td><td>Yes</td><td>N/A</td></tr>
+<tr><td><code>druid.emitter.influxdb.influxdbPassword</code></td><td>The password of the database authorized user</td><td>Yes</td><td>N/A</td></tr>
+<tr><td><code>druid.emitter.influxdb.dimensionWhitelist</code></td><td>A whitelist of metric dimensions to include as tags</td><td>No</td><td><code>[&quot;dataSource&quot;,&quot;type&quot;,&quot;numMetrics&quot;,&quot;numDimensions&quot;,&quot;threshold&quot;,&quot;dimension&quot;,&quot;taskType&quot;,&quot;taskStatus&quot;,&quot;tier&quot;]</code></td></tr>
+</tbody>
+</table>
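+<p>As a minimal sketch, assuming the emitter is selected with <code>druid.emitter=influxdb</code> and that the hostname, database name, and credentials below are placeholders:</p>
+<pre><code class="hljs css language-json">druid.emitter=influxdb
+druid.emitter.influxdb.hostname=influxdb.example.com
+druid.emitter.influxdb.port=8086
+druid.emitter.influxdb.databaseName=druid_metrics
+druid.emitter.influxdb.influxdbUserName=druid
+druid.emitter.influxdb.influxdbPassword=changeme
+</code></pre>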
+<h2><a class="anchor" aria-hidden="true" id="influxdb-line-protocol"></a><a href="#influxdb-line-protocol" aria-hidden="true" class="hash-link"><svg class="hash-link-icon" aria-hidden="true" height="16" version="1.1" viewBox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0 [...]
+<p>An example of how this emitter parses a Druid metric event into InfluxDB's <a href="https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/">line protocol</a> is given here:</p>
+<p>The syntax of the line protocol is :</p>
+<p><code>&lt;measurement&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;[,&lt;tag_key&gt;=&lt;tag_value&gt;]] &lt;field_key&gt;=&lt;field_value&gt;[,&lt;field_key&gt;=&lt;field_value&gt;] [&lt;timestamp&gt;]</code></p>
+<p>where the timestamp is in nanoseconds since the Unix epoch.</p>
+<p>A typical service metric event as recorded by Druid's logging emitter is: <code>Event [{&quot;feed&quot;:&quot;metrics&quot;,&quot;timestamp&quot;:&quot;2017-10-31T09:09:06.857Z&quot;,&quot;service&quot;:&quot;druid/historical&quot;,&quot;host&quot;:&quot;historical001:8083&quot;,&quot;version&quot;:&quot;0.11.0-SNAPSHOT&quot;,&quot;metric&quot;:&quot;query/cache/total/hits&quot;,&quot;value&quot;:34787256}]</code>.</p>
+<p>This event is parsed into line protocol according to these rules:</p>
+<ul>
+<li>The measurement becomes druid_query since query is the first part of the metric.</li>
+<li>The tags are service=druid/historical, hostname=historical001, metric=druid_cache_total. (The metric tag is the middle part of the Druid metric name, joined with _ and preceded by druid_. For example, if an event has metric=query/time, there is no middle part and hence no metric tag.)</li>
+<li>The field is druid_hits since this is the last part of the metric.</li>
+</ul>
+<p>This gives the following String which can be POSTed to InfluxDB: <code>&quot;druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000&quot;</code></p>
+<p>The InfluxDB emitter has a white list of dimensions;
+a dimension is added as a tag to the line protocol string if the metric has a dimension from the white list.
+The value of the dimension is sanitized such that every occurrence of a dot or whitespace is replaced with a <code>_</code>.</p>
+</span></div></article></div><div class="docs-prevnext"><a class="docs-prev button" href="/docs/0.16.0-incubating/development/extensions-contrib/influx.html"><span class="arrow-prev">← </span><span class="function-name-prevnext">InfluxDB Line Protocol Parser</span></a><a class="docs-next button" href="/docs/0.16.0-incubating/development/extensions-contrib/kafka-emitter.html"><span>Kafka Emitter</span><span class="arrow-next"> →</span></a></div></div></div><nav class="onPageNav"><ul class [...]
\ No newline at end of file
diff --git a/docs/0.16.0-incubating/development/extensions-contrib/kafka-emitter.html b/docs/0.16.0-incubating/development/extensions-contrib/kafka-emitter.html
new file mode 100644
index 0000000..7592cbe
--- /dev/null
+++ b/docs/0.16.0-incubating/development/extensions-contrib/kafka-emitter.html
@@ -0,0 +1,106 @@
+<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><title>Kafka Emitter · Apache Druid</title><meta name="viewport" content="width=device-width"/><meta name="generator" content="Docusaurus"/><meta name="description" content="&lt;!--"/><meta name="docsearch:language" content="en"/><meta property="og:title" content="Kafka Emitter · Apache Druid"/><meta property="og:type" content="website"/><meta property="og:url" content="https [...]
+              window.dataLayer = window.dataLayer || [];
+              function gtag(){dataLayer.push(arguments); }
+              gtag('js', new Date());
+              gtag('config', 'UA-131010415-1');
+            </script><link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css"/><script src="/js/scrollSpy.js"></script><link rel="stylesheet" href="/css/main.css"/><script src="/js/codetabs.js"></script></head><body class="sideNavVisible separateOnPageNav"><div class="fixedHeaderContainer"><div class="headerWrapper wrapper"><header><a href="/"><img class="logo" src="/img/druid_nav.png" alt="Apache Druid"/></a><div class="navigationWrapper navigationSlider"><n [...]
+            var coll = document.getElementsByClassName('collapsible');
+            var checkActiveCategory = true;
+            for (var i = 0; i < coll.length; i++) {
+              var links = coll[i].nextElementSibling.getElementsByTagName('*');
+              if (checkActiveCategory){
+                for (var j = 0; j < links.length; j++) {
+                  if (links[j].classList.contains('navListItemActive')){
+                    coll[i].nextElementSibling.classList.toggle('hide');
+                    coll[i].childNodes[1].classList.toggle('rotate');
+                    checkActiveCategory = false;
+                    break;
+                  }
+                }
+              }
+
+              coll[i].addEventListener('click', function() {
... 100918 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org