Posted to commits@druid.apache.org by ji...@apache.org on 2019/06/27 22:58:24 UTC

[incubator-druid-website-src] 43/48: Add 0.15.0-doc

This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit b7a908313b31cb55ef9ff696b41a8a3e1d3a08e6
Author: Jihoon Son <ji...@apache.org>
AuthorDate: Thu Jun 27 11:16:43 2019 -0700

    Add 0.15.0-doc
---
 .../About-Experimental-Features.html               |   4 +
 docs/0.15.0-incubating/Aggregations.html           |   4 +
 docs/0.15.0-incubating/ApproxHisto.html            |   4 +
 docs/0.15.0-incubating/Batch-ingestion.html        |   4 +
 .../Booting-a-production-cluster.html}             |   0
 docs/0.15.0-incubating/Broker-Config.html          |   4 +
 docs/0.15.0-incubating/Broker.html                 |   4 +
 docs/0.15.0-incubating/Build-from-source.html      |   4 +
 docs/0.15.0-incubating/Cassandra-Deep-Storage.html |   4 +
 .../Cluster-setup.html}                            |   0
 docs/0.15.0-incubating/Compute.html                |   4 +
 .../Concepts-and-Terminology.html                  |   4 +
 docs/0.15.0-incubating/Configuration.html          |   4 +
 .../Contribute.html}                               |   2 +-
 docs/0.15.0-incubating/Coordinator-Config.html     |   4 +
 docs/0.15.0-incubating/Coordinator.html            |   4 +
 docs/0.15.0-incubating/DataSource.html             |   4 +
 .../0.15.0-incubating/DataSourceMetadataQuery.html |   4 +
 docs/0.15.0-incubating/Data_formats.html           |   4 +
 docs/0.15.0-incubating/Deep-Storage.html           |   4 +
 docs/0.15.0-incubating/Design.html                 |   4 +
 docs/0.15.0-incubating/DimensionSpecs.html         |   4 +
 docs/0.15.0-incubating/Download.html               |   4 +
 .../Druid-Personal-Demo-Cluster.html               |   4 +
 docs/0.15.0-incubating/Druid-vs-Cassandra.html     |   4 +
 docs/0.15.0-incubating/Druid-vs-Elasticsearch.html |   4 +
 docs/0.15.0-incubating/Druid-vs-Hadoop.html        |   4 +
 .../Druid-vs-Impala-or-Shark.html                  |   4 +
 docs/0.15.0-incubating/Druid-vs-Redshift.html      |   4 +
 docs/0.15.0-incubating/Druid-vs-Spark.html         |   4 +
 docs/0.15.0-incubating/Druid-vs-Vertica.html       |   4 +
 .../Evaluate.html}                                 |   0
 docs/0.15.0-incubating/Examples.html               |   4 +
 docs/0.15.0-incubating/Filters.html                |   4 +
 docs/0.15.0-incubating/Firehose.html               |   4 +
 docs/0.15.0-incubating/GeographicQueries.html      |   4 +
 docs/0.15.0-incubating/Granularities.html          |   4 +
 docs/0.15.0-incubating/GroupByQuery.html           |   4 +
 .../Hadoop-Configuration.html}                     |   0
 docs/0.15.0-incubating/Having.html                 |   4 +
 docs/0.15.0-incubating/Historical-Config.html      |   4 +
 docs/0.15.0-incubating/Historical.html             |   4 +
 docs/0.15.0-incubating/Home.html                   |   4 +
 docs/0.15.0-incubating/Including-Extensions.html   |   4 +
 .../0.15.0-incubating/Indexing-Service-Config.html |   4 +
 docs/0.15.0-incubating/Indexing-Service.html       |   4 +
 docs/0.15.0-incubating/Ingestion-FAQ.html          |   4 +
 docs/0.15.0-incubating/Ingestion-overview.html     |   4 +
 docs/0.15.0-incubating/Ingestion.html              |   4 +
 .../Integrating-Druid-With-Other-Technologies.html |   4 +
 docs/0.15.0-incubating/Kafka-Eight.html            |   4 +
 docs/0.15.0-incubating/Libraries.html              |   4 +
 docs/0.15.0-incubating/LimitSpec.html              |   4 +
 docs/0.15.0-incubating/Loading-Your-Data.html      |   4 +
 docs/0.15.0-incubating/Logging.html                |   4 +
 docs/0.15.0-incubating/Master.html                 |   4 +
 docs/0.15.0-incubating/Metadata-storage.html       |   4 +
 docs/0.15.0-incubating/Metrics.html                |   4 +
 docs/0.15.0-incubating/Middlemanager.html          |   4 +
 docs/0.15.0-incubating/Modules.html                |   4 +
 docs/0.15.0-incubating/MySQL.html                  |   4 +
 docs/0.15.0-incubating/OrderBy.html                |   4 +
 docs/0.15.0-incubating/Other-Hadoop.html           |   4 +
 docs/0.15.0-incubating/Papers-and-talks.html       |   4 +
 docs/0.15.0-incubating/Peons.html                  |   4 +
 docs/0.15.0-incubating/Performance-FAQ.html        |   4 +
 docs/0.15.0-incubating/Plumber.html                |   4 +
 docs/0.15.0-incubating/Post-aggregations.html      |   4 +
 .../Production-Cluster-Configuration.html}         |   0
 docs/0.15.0-incubating/Query-Context.html          |   4 +
 docs/0.15.0-incubating/Querying-your-data.html     |   4 +
 docs/0.15.0-incubating/Querying.html               |   4 +
 docs/0.15.0-incubating/Realtime-Config.html        |   4 +
 docs/0.15.0-incubating/Realtime-ingestion.html     |   4 +
 docs/0.15.0-incubating/Realtime.html               |   4 +
 docs/0.15.0-incubating/Recommendations.html        |   4 +
 docs/0.15.0-incubating/Rolling-Updates.html        |   4 +
 docs/0.15.0-incubating/Router.html                 |   4 +
 docs/0.15.0-incubating/Rule-Configuration.html     |   4 +
 docs/0.15.0-incubating/SearchQuery.html            |   4 +
 docs/0.15.0-incubating/SearchQuerySpec.html        |   4 +
 docs/0.15.0-incubating/SegmentMetadataQuery.html   |   4 +
 docs/0.15.0-incubating/Segments.html               |   4 +
 docs/0.15.0-incubating/SelectQuery.html            |   4 +
 .../Simple-Cluster-Configuration.html}             |   0
 docs/0.15.0-incubating/Spatial-Filters.html        |   4 +
 docs/0.15.0-incubating/Spatial-Indexing.html       |   4 +
 .../Stand-Alone-With-Riak-CS.html                  |   4 +
 .../Home.html => 0.15.0-incubating/Support.html}   |   2 +-
 docs/0.15.0-incubating/Tasks.html                  |   4 +
 .../Home.html => 0.15.0-incubating/Thanks.html}    |   2 +-
 docs/0.15.0-incubating/TimeBoundaryQuery.html      |   4 +
 docs/0.15.0-incubating/TimeseriesQuery.html        |   4 +
 docs/0.15.0-incubating/TopNMetricSpec.html         |   4 +
 docs/0.15.0-incubating/TopNQuery.html              |   4 +
 .../Tutorial-A-First-Look-at-Druid.html            |   4 +
 .../Tutorial-All-About-Queries.html                |   4 +
 .../Tutorial-Loading-Batch-Data.html               |   4 +
 .../Tutorial-Loading-Streaming-Data.html           |   4 +
 .../Tutorial-The-Druid-Cluster.html                |   4 +
 .../Tutorial:-A-First-Look-at-Druid.html           |   4 +
 .../Tutorial:-All-About-Queries.html               |   4 +
 .../Tutorial:-Loading-Batch-Data.html              |   4 +
 .../Tutorial:-Loading-Streaming-Data.html          |   4 +
 .../Tutorial:-Loading-Your-Data-Part-1.html        |   4 +
 .../Tutorial:-Loading-Your-Data-Part-2.html        |   4 +
 .../Tutorial:-The-Druid-Cluster.html}              |   0
 docs/0.15.0-incubating/Tutorial:-Webstream.html    |   4 +
 docs/0.15.0-incubating/Tutorials.html              |   4 +
 docs/0.15.0-incubating/Twitter-Tutorial.html       |   4 +
 docs/0.15.0-incubating/Versioning.html             |   4 +
 .../ZooKeeper.html}                                |   0
 docs/0.15.0-incubating/alerts.html                 |   4 +
 .../comparisons/druid-vs-cassandra.html            |   4 +
 .../comparisons/druid-vs-elasticsearch.md          |  40 ++
 .../comparisons/druid-vs-hadoop.html               |   4 +
 .../comparisons/druid-vs-impala-or-shark.html      |   4 +
 .../comparisons/druid-vs-key-value.md              |  47 ++
 .../0.15.0-incubating/comparisons/druid-vs-kudu.md |  40 ++
 .../comparisons/druid-vs-redshift.md               |  63 +++
 .../comparisons/druid-vs-spark.md                  |  43 ++
 .../comparisons/druid-vs-sql-on-hadoop.md          |  83 ++++
 .../comparisons/druid-vs-vertica.html              |   4 +
 docs/0.15.0-incubating/configuration/auth.html     |   4 +
 docs/0.15.0-incubating/configuration/broker.html   |   4 +
 docs/0.15.0-incubating/configuration/caching.html  |   4 +
 .../configuration/coordinator.html                 |   4 +
 docs/0.15.0-incubating/configuration/hadoop.html   |   4 +
 .../configuration/historical.html                  |   4 +
 .../configuration/index.md                         |  39 +-
 .../configuration/indexing-service.html            |   4 +
 .../configuration/logging.md                       |  33 ++
 .../configuration/production-cluster.html          |   4 +
 .../configuration/realtime.md                      |   2 +-
 .../configuration/simple-cluster.html              |   4 +
 .../0.15.0-incubating/configuration/zookeeper.html |   4 +
 .../dependencies/cassandra-deep-storage.md         |  62 +++
 .../0.15.0-incubating/dependencies/deep-storage.md |  54 ++
 .../dependencies/metadata-storage.md               |   5 +-
 docs/0.15.0-incubating/dependencies/zookeeper.md   |  77 +++
 docs/0.15.0-incubating/design/auth.md              | 168 +++++++
 docs/0.15.0-incubating/design/broker.md            |  55 +++
 .../design/concepts-and-terminology.html}          |   0
 .../design/coordinator.md                          |   3 +-
 .../design/design.html}                            |   0
 docs/0.15.0-incubating/design/historical.md        |  59 +++
 docs/{latest => 0.15.0-incubating}/design/index.md |  41 +-
 docs/0.15.0-incubating/design/indexing-service.md  |  65 +++
 .../design/middlemanager.md}                       |  25 +-
 docs/0.15.0-incubating/design/overlord.md          |  63 +++
 docs/0.15.0-incubating/design/peons.md             |  47 ++
 .../design/plumber.md}                             |  21 +-
 docs/0.15.0-incubating/design/processes.md         | 131 +++++
 docs/0.15.0-incubating/design/realtime.md          |  80 +++
 .../design/segments.md                             |   2 +-
 .../development/approximate-histograms.html        |   4 +
 docs/0.15.0-incubating/development/build.md        |  69 +++
 .../development/community-extensions/azure.html    |   4 +
 .../community-extensions/cassandra.html            |   4 +
 .../community-extensions/cloudfiles.html           |   4 +
 .../development/community-extensions/graphite.html |   4 +
 .../community-extensions/kafka-simple.html         |   4 +
 .../development/community-extensions/rabbitmq.html |   4 +
 .../development/datasketches-aggregators.html      |   4 +
 .../development/experimental.md                    |  19 +-
 .../extensions-contrib/ambari-metrics-emitter.md   | 100 ++++
 .../development/extensions-contrib/azure.md        |  95 ++++
 .../development/extensions-contrib/cassandra.md}   |  20 +-
 .../development/extensions-contrib/cloudfiles.md   |  97 ++++
 .../extensions-contrib/distinctcount.md            |  99 ++++
 .../development/extensions-contrib/google.md       |  89 ++++
 .../development/extensions-contrib/graphite.md     | 118 +++++
 .../development/extensions-contrib/influx.md       |  66 +++
 .../extensions-contrib/influxdb-emitter.md         |  75 +++
 .../extensions-contrib/kafka-emitter.md            |  55 +++
 .../development/extensions-contrib/kafka-simple.md |  56 +++
 .../extensions-contrib/materialized-view.md        | 134 +++++
 .../extensions-contrib/momentsketch-quantiles.md   | 120 +++++
 .../extensions-contrib/moving-average-query.md     | 337 +++++++++++++
 .../extensions-contrib/opentsdb-emitter.md         |  62 +++
 .../development/extensions-contrib/parquet.html    |   4 +
 .../development/extensions-contrib/rabbitmq.md     |  81 +++
 .../development/extensions-contrib/redis-cache.md  |  58 +++
 .../development/extensions-contrib/rocketmq.md}    |  18 +-
 .../development/extensions-contrib/scan-query.html |   4 +
 .../development/extensions-contrib/sqlserver.md    |  57 +++
 .../development/extensions-contrib/statsd.md       |  70 +++
 .../extensions-contrib/tdigestsketch-quantiles.md  | 159 ++++++
 .../development/extensions-contrib/thrift.md       | 128 +++++
 .../development/extensions-contrib/time-min-max.md | 105 ++++
 .../extensions-core/approximate-histograms.md      |   2 +
 .../development/extensions-core/avro.md            | 222 +++++++++
 .../development/extensions-core/bloom-filter.md    | 179 +++++++
 .../extensions-core/caffeine-cache.html            |   4 +
 .../extensions-core/datasketches-aggregators.html  |   4 +
 .../extensions-core/datasketches-extension.md      |   2 +-
 .../extensions-core/datasketches-hll.md            |   2 +-
 .../extensions-core/datasketches-quantiles.md      |  27 +-
 .../extensions-core/datasketches-theta.md          |   2 +-
 .../extensions-core/datasketches-tuple.md          |   2 +-
 .../extensions-core/druid-basic-security.md        | 146 +++++-
 .../development/extensions-core/druid-kerberos.md  |   5 +-
 .../development/extensions-core/druid-lookups.md   | 150 ++++++
 .../development/extensions-core/examples.md}       |  26 +-
 .../development/extensions-core/hdfs.md            |  56 +++
 .../extensions-core/kafka-eight-firehose.md        |  54 ++
 .../extensions-core/kafka-extraction-namespace.md  |  70 +++
 .../development/extensions-core/kafka-ingestion.md |  50 +-
 .../extensions-core/kinesis-ingestion.md           |  56 ++-
 .../extensions-core/lookups-cached-global.md       | 379 ++++++++++++++
 .../development/extensions-core/mysql.md           | 109 +++++
 .../extensions-core/namespaced-lookup.html         |   4 +
 .../development/extensions-core/orc.md             | 311 ++++++++++++
 .../development/extensions-core/parquet.md         |   9 +-
 .../development/extensions-core/postgresql.md      |   2 +
 .../development/extensions-core/protobuf.md        | 223 +++++++++
 .../development/extensions-core/s3.md              |  45 +-
 .../extensions-core/simple-client-sslcontext.md    |  54 ++
 .../development/extensions-core/stats.md           | 172 +++++++
 .../development/extensions-core/test-stats.md      | 118 +++++
 .../development/extensions.md                      |  15 +-
 .../development/geo.md                             |   7 +
 .../integrating-druid-with-other-technologies.md}  |  22 +-
 docs/0.15.0-incubating/development/javascript.md   |  75 +++
 .../kafka-simple-consumer-firehose.html            |   4 +
 docs/0.15.0-incubating/development/libraries.html  |   4 +
 .../development/modules.md                         |   8 +-
 .../development/overview.md                        |   2 +-
 .../development/router.md                          |   5 +
 .../development/select-query.html                  |   4 +
 docs/0.15.0-incubating/development/versioning.md   |  47 ++
 docs/0.15.0-incubating/index.html                  |   4 +
 .../ingestion/batch-ingestion.md}                  |  22 +-
 .../ingestion/command-line-hadoop-indexer.md       |  95 ++++
 .../ingestion/compaction.md                        |  16 +-
 docs/0.15.0-incubating/ingestion/data-formats.md   | 205 ++++++++
 docs/0.15.0-incubating/ingestion/delete-data.md    |  50 ++
 docs/0.15.0-incubating/ingestion/faq.md            | 106 ++++
 .../ingestion/firehose.md                          |  64 ++-
 docs/0.15.0-incubating/ingestion/flatten-json.md   | 180 +++++++
 .../ingestion/hadoop-vs-native-batch.md            |   4 +-
 .../ingestion/hadoop.md                            |   3 +-
 .../ingestion/index.md                             |   2 +-
 docs/0.15.0-incubating/ingestion/ingestion-spec.md | 332 +++++++++++++
 .../ingestion/ingestion.html}                      |   0
 .../ingestion/locking-and-priority.md              |  79 +++
 docs/0.15.0-incubating/ingestion/misc-tasks.md     |  94 ++++
 docs/0.15.0-incubating/ingestion/native-batch.html |   4 +
 .../ingestion/native_tasks.md                      |  11 +-
 .../ingestion/overview.html}                       |   0
 .../ingestion/realtime-ingestion.html              |   4 +
 docs/0.15.0-incubating/ingestion/reports.md        | 152 ++++++
 docs/0.15.0-incubating/ingestion/schema-changes.md |  82 ++++
 docs/0.15.0-incubating/ingestion/schema-design.md  | 338 +++++++++++++
 .../ingestion/stream-ingestion.md                  |  56 +++
 docs/0.15.0-incubating/ingestion/stream-pull.md    | 376 ++++++++++++++
 docs/0.15.0-incubating/ingestion/stream-push.md    | 186 +++++++
 docs/0.15.0-incubating/ingestion/tasks.md          |  78 +++
 docs/0.15.0-incubating/ingestion/transform-spec.md | 104 ++++
 .../ingestion/update-existing-data.md              | 162 ++++++
 docs/0.15.0-incubating/misc/cluster-setup.html     |   4 +
 docs/0.15.0-incubating/misc/evaluate.html          |   4 +
 .../misc/math-expr.md                              |  74 ++-
 docs/0.15.0-incubating/misc/papers-and-talks.md    |  43 ++
 docs/0.15.0-incubating/misc/tasks.html             |   4 +
 .../operations/alerts.md}                          |  23 +-
 .../operations/api-reference.md                    |  45 +-
 .../operations/basic-cluster-tuning.md             | 382 +++++++++++++++
 .../operations/deep-storage-migration.md           |  66 +++
 .../operations/druid-console.md                    |  51 +-
 docs/0.15.0-incubating/operations/dump-segment.md  | 116 +++++
 .../operations/export-metadata.md                  | 201 ++++++++
 .../operations/getting-started.md                  |  49 ++
 .../operations/high-availability.md                |  40 ++
 .../operations/http-compression.md}                |  21 +-
 .../operations/img/01-home-view.png                | Bin 0 -> 58587 bytes
 .../operations/img/02-data-loader-1.png            | Bin 0 -> 68576 bytes
 .../operations/img/03-data-loader-2.png            | Bin 0 -> 456607 bytes
 .../operations/img/04-datasources.png              | Bin 0 -> 178133 bytes
 .../operations/img/05-retention.png                | Bin 0 -> 173350 bytes
 .../operations/img/06-segments.png                 | Bin 0 -> 209772 bytes
 .../operations/img/07-supervisors.png              | Bin 0 -> 120310 bytes
 docs/0.15.0-incubating/operations/img/08-tasks.png | Bin 0 -> 64362 bytes
 .../operations/img/09-task-status.png              | Bin 0 -> 94299 bytes
 .../operations/img/10-servers.png                  | Bin 0 -> 79421 bytes
 .../operations/img/11-query-sql.png                | Bin 0 -> 111209 bytes
 .../operations/img/12-query-rune.png               | Bin 0 -> 137679 bytes
 .../operations/img/13-lookups.png                  | Bin 0 -> 54480 bytes
 .../operations/including-extensions.md             |  87 ++++
 .../operations/insert-segment-to-db.md             |  49 ++
 .../0.15.0-incubating/operations/management-uis.md |  80 +++
 .../operations/metadata-migration.md               |  92 ++++
 docs/0.15.0-incubating/operations/metrics.md       | 279 +++++++++++
 .../0.15.0-incubating/operations/multitenancy.html |   4 +
 docs/0.15.0-incubating/operations/other-hadoop.md  | 300 ++++++++++++
 .../operations/password-provider.md                |  55 +++
 .../operations/performance-faq.html                |   4 +
 .../operations/pull-deps.md                        |  14 +-
 .../operations/recommendations.md                  |   8 +-
 docs/0.15.0-incubating/operations/reset-cluster.md |  76 +++
 .../operations/rolling-updates.md                  | 102 ++++
 .../operations/rule-configuration.md               |   2 -
 .../operations/segment-optimization.md             | 100 ++++
 docs/0.15.0-incubating/operations/single-server.md |  71 +++
 .../operations/tls-support.md                      |  15 +-
 .../operations/use_sbt_to_build_fat_jar.md         | 128 +++++
 .../querying/aggregations.md                       |  33 +-
 docs/0.15.0-incubating/querying/caching.md         |  64 +++
 docs/0.15.0-incubating/querying/datasource.md      |  65 +++
 .../querying/datasourcemetadataquery.md            |  57 +++
 docs/0.15.0-incubating/querying/dimensionspecs.md  | 545 +++++++++++++++++++++
 docs/0.15.0-incubating/querying/filters.md         | 521 ++++++++++++++++++++
 .../querying/granularities.md                      |   9 +-
 .../querying/groupbyquery.md                       |   3 +-
 docs/0.15.0-incubating/querying/having.md          | 261 ++++++++++
 docs/0.15.0-incubating/querying/hll-old.md         | 142 ++++++
 docs/0.15.0-incubating/querying/joins.md           |  55 +++
 docs/0.15.0-incubating/querying/limitspec.md       |  55 +++
 .../querying/lookups.md                            |  18 +-
 .../querying/multi-value-dimensions.md             | 340 +++++++++++++
 .../querying/multitenancy.md                       |   2 +-
 docs/0.15.0-incubating/querying/optimizations.html |   4 +
 .../querying/post-aggregations.md                  | 223 +++++++++
 .../querying/query-context.md                      |   4 +-
 .../querying/querying.md                           |  36 +-
 docs/0.15.0-incubating/querying/scan-query.md      | 226 +++++++++
 docs/0.15.0-incubating/querying/searchquery.md     | 141 ++++++
 docs/0.15.0-incubating/querying/searchqueryspec.md |  77 +++
 .../querying/segmentmetadataquery.md               | 188 +++++++
 .../querying/select-query.md                       |  17 +-
 docs/0.15.0-incubating/querying/sorting-orders.md  |  54 ++
 docs/{latest => 0.15.0-incubating}/querying/sql.md | 171 ++++---
 .../querying/timeboundaryquery.md                  |  58 +++
 .../querying/timeseriesquery.md                    |   1 +
 docs/0.15.0-incubating/querying/topnmetricspec.md  |  87 ++++
 docs/0.15.0-incubating/querying/topnquery.md       | 257 ++++++++++
 docs/0.15.0-incubating/querying/virtual-columns.md |  80 +++
 docs/0.15.0-incubating/toc.md                      | 182 +++++++
 .../tutorials/booting-a-production-cluster.html}   |   2 +-
 docs/0.15.0-incubating/tutorials/cluster.md        | 500 +++++++++++++++++++
 .../tutorials/examples.html}                       |   0
 .../tutorials/firewall.html}                       |   2 +-
 .../img/tutorial-batch-data-loader-01.png          | Bin 0 -> 56488 bytes
 .../img/tutorial-batch-data-loader-02.png          | Bin 0 -> 360295 bytes
 .../img/tutorial-batch-data-loader-03.png          | Bin 0 -> 137443 bytes
 .../img/tutorial-batch-data-loader-04.png          | Bin 0 -> 167252 bytes
 .../img/tutorial-batch-data-loader-05.png          | Bin 0 -> 162488 bytes
 .../img/tutorial-batch-data-loader-06.png          | Bin 0 -> 64301 bytes
 .../img/tutorial-batch-data-loader-07.png          | Bin 0 -> 46529 bytes
 .../img/tutorial-batch-data-loader-08.png          | Bin 0 -> 103928 bytes
 .../img/tutorial-batch-data-loader-09.png          | Bin 0 -> 63348 bytes
 .../img/tutorial-batch-data-loader-10.png          | Bin 0 -> 44516 bytes
 .../img/tutorial-batch-data-loader-11.png          | Bin 0 -> 83288 bytes
 .../img/tutorial-batch-submit-task-01.png          | Bin 0 -> 69356 bytes
 .../img/tutorial-batch-submit-task-02.png          | Bin 0 -> 86076 bytes
 .../tutorials/img/tutorial-compaction-01.png       | Bin 0 -> 35710 bytes
 .../tutorials/img/tutorial-compaction-02.png       | Bin 0 -> 166571 bytes
 .../tutorials/img/tutorial-compaction-03.png       | Bin 0 -> 26755 bytes
 .../tutorials/img/tutorial-compaction-04.png       | Bin 0 -> 184365 bytes
 .../tutorials/img/tutorial-compaction-05.png       | Bin 0 -> 26588 bytes
 .../tutorials/img/tutorial-compaction-06.png       | Bin 0 -> 206717 bytes
 .../tutorials/img/tutorial-compaction-07.png       | Bin 0 -> 26683 bytes
 .../tutorials/img/tutorial-compaction-08.png       | Bin 0 -> 28751 bytes
 .../tutorials/img/tutorial-deletion-01.png         | Bin 0 -> 43586 bytes
 .../tutorials/img/tutorial-deletion-02.png         | Bin 0 -> 439602 bytes
 .../tutorials/img/tutorial-deletion-03.png         | Bin 0 -> 437304 bytes
 .../tutorials/img/tutorial-kafka-01.png            | Bin 0 -> 85477 bytes
 .../tutorials/img/tutorial-kafka-02.png            | Bin 0 -> 75709 bytes
 .../tutorials/img/tutorial-query-01.png            | Bin 0 -> 100930 bytes
 .../tutorials/img/tutorial-query-02.png            | Bin 0 -> 83369 bytes
 .../tutorials/img/tutorial-query-03.png            | Bin 0 -> 65038 bytes
 .../tutorials/img/tutorial-query-04.png            | Bin 0 -> 66423 bytes
 .../tutorials/img/tutorial-query-05.png            | Bin 0 -> 51855 bytes
 .../tutorials/img/tutorial-query-06.png            | Bin 0 -> 82211 bytes
 .../tutorials/img/tutorial-query-07.png            | Bin 0 -> 78633 bytes
 .../tutorials/img/tutorial-quickstart-01.png       | Bin 0 -> 29834 bytes
 .../tutorials/img/tutorial-retention-00.png        | Bin 0 -> 77704 bytes
 .../tutorials/img/tutorial-retention-01.png        | Bin 0 -> 35171 bytes
 .../tutorials/img/tutorial-retention-02.png        | Bin 0 -> 240310 bytes
 .../tutorials/img/tutorial-retention-03.png        | Bin 0 -> 30029 bytes
 .../tutorials/img/tutorial-retention-04.png        | Bin 0 -> 44617 bytes
 .../tutorials/img/tutorial-retention-05.png        | Bin 0 -> 38992 bytes
 .../tutorials/img/tutorial-retention-06.png        | Bin 0 -> 137570 bytes
 .../tutorials/index.md                             | 110 +++--
 .../tutorials/ingestion-streams.html}              |   0
 .../tutorials/ingestion.html}                      |   0
 .../tutorials/quickstart.html}                     |   0
 .../tutorials/tutorial-a-first-look-at-druid.html} |   0
 .../tutorials/tutorial-all-about-queries.html}     |   0
 .../tutorials/tutorial-batch-hadoop.md             |  18 +-
 docs/0.15.0-incubating/tutorials/tutorial-batch.md | 267 ++++++++++
 .../tutorials/tutorial-compaction.md               |   9 +-
 .../tutorials/tutorial-delete-data.md              |  60 +--
 .../tutorials/tutorial-ingestion-spec.md           |   4 +-
 .../tutorials/tutorial-kafka.md                    | 103 +++-
 .../tutorials/tutorial-kerberos-hadoop.md          | 122 +++++
 .../tutorials/tutorial-loading-batch-data.html     |   4 +
 .../tutorials/tutorial-loading-streaming-data.html |   4 +
 .../tutorials/tutorial-query.md                    | 329 +++++++------
 .../tutorials/tutorial-retention.md                |   2 +-
 .../tutorials/tutorial-rollup.md                   |   7 +-
 .../tutorials/tutorial-the-druid-cluster.html}     |   2 +-
 .../tutorials/tutorial-tranquility.md              |  14 +-
 .../tutorials/tutorial-transform-spec.md           |   5 +-
 .../tutorials/tutorial-update-data.md              |   8 +-
 docs/latest/Contribute.html                        |   2 +-
 docs/latest/Design.html                            |   2 +-
 docs/latest/Download.html                          |   2 +-
 docs/latest/Examples.html                          |   2 +-
 docs/latest/Hadoop-Configuration.html              |   2 +-
 docs/latest/Home.html                              |   2 +-
 docs/latest/Ingestion-overview.html                |   2 +-
 docs/latest/Performance-FAQ.html                   |   2 +-
 docs/latest/Stand-Alone-With-Riak-CS.html          |   2 +-
 docs/latest/Support.html                           |   2 +-
 docs/latest/Thanks.html                            |   2 +-
 docs/latest/Tutorial:-A-First-Look-at-Druid.html   |   2 +-
 docs/latest/Tutorial:-All-About-Queries.html       |   2 +-
 docs/latest/Tutorial:-Loading-Streaming-Data.html  |   2 +-
 docs/latest/configuration/auth.html                |   2 +-
 docs/latest/configuration/hadoop.html              |   2 +-
 docs/latest/configuration/index.md                 |  39 +-
 docs/latest/configuration/logging.md               |  33 ++
 docs/latest/configuration/production-cluster.html  |   2 +-
 docs/latest/configuration/realtime.md              |   2 +-
 docs/latest/configuration/zookeeper.html           |   2 +-
 docs/latest/dependencies/metadata-storage.md       |   5 +-
 docs/latest/design/coordinator.md                  |   3 +-
 docs/latest/design/index.md                        |  41 +-
 docs/latest/design/segments.md                     |   2 +-
 docs/latest/development/experimental.md            |  19 +-
 .../extensions-contrib/influxdb-emitter.md         |  75 +++
 .../extensions-contrib/momentsketch-quantiles.md   | 120 +++++
 .../extensions-contrib/moving-average-query.md     | 337 +++++++++++++
 docs/latest/development/extensions-contrib/orc.md  | 113 -----
 .../extensions-contrib/tdigestsketch-quantiles.md  | 159 ++++++
 .../extensions-core/approximate-histograms.md      |   2 +
 .../extensions-core/caffeine-cache.html            |   2 +-
 .../extensions-core/datasketches-extension.md      |   2 +-
 .../extensions-core/datasketches-hll.md            |   2 +-
 .../extensions-core/datasketches-quantiles.md      |  27 +-
 .../extensions-core/datasketches-theta.md          |   2 +-
 .../extensions-core/datasketches-tuple.md          |   2 +-
 .../extensions-core/druid-basic-security.md        | 146 +++++-
 .../development/extensions-core/druid-kerberos.md  |   5 +-
 .../development/extensions-core/kafka-ingestion.md |  50 +-
 .../extensions-core/kinesis-ingestion.md           |  56 ++-
 docs/latest/development/extensions-core/orc.md     | 311 ++++++++++++
 docs/latest/development/extensions-core/parquet.md |   9 +-
 .../development/extensions-core/postgresql.md      |   2 +
 docs/latest/development/extensions-core/s3.md      |  45 +-
 docs/latest/development/extensions.md              |  15 +-
 docs/latest/development/geo.md                     |   7 +
 docs/latest/development/modules.md                 |   8 +-
 docs/latest/development/overview.md                |   2 +-
 docs/latest/development/router.md                  |   5 +
 docs/latest/ingestion/compaction.md                |  16 +-
 docs/latest/ingestion/firehose.md                  |  64 ++-
 docs/latest/ingestion/hadoop-vs-native-batch.md    |   4 +-
 docs/latest/ingestion/hadoop.md                    |   3 +-
 docs/latest/ingestion/index.md                     |   2 +-
 docs/latest/ingestion/native_tasks.md              |  11 +-
 docs/latest/misc/math-expr.md                      |  74 ++-
 docs/latest/operations/api-reference.md            |  45 +-
 docs/latest/operations/basic-cluster-tuning.md     | 382 +++++++++++++++
 docs/latest/operations/deep-storage-migration.md   |  66 +++
 docs/latest/operations/druid-console.md            |  51 +-
 docs/latest/operations/export-metadata.md          | 201 ++++++++
 docs/latest/operations/getting-started.md          |  49 ++
 docs/latest/operations/high-availability.md        |  40 ++
 docs/latest/operations/img/01-home-view.png        | Bin 60287 -> 58587 bytes
 docs/latest/operations/img/02-data-loader-1.png    | Bin 0 -> 68576 bytes
 docs/latest/operations/img/02-datasources.png      | Bin 163824 -> 0 bytes
 docs/latest/operations/img/03-data-loader-2.png    | Bin 0 -> 456607 bytes
 docs/latest/operations/img/03-retention.png        | Bin 123857 -> 0 bytes
 docs/latest/operations/img/04-datasources.png      | Bin 0 -> 178133 bytes
 docs/latest/operations/img/04-segments.png         | Bin 125873 -> 0 bytes
 docs/latest/operations/img/05-retention.png        | Bin 0 -> 173350 bytes
 docs/latest/operations/img/05-tasks-1.png          | Bin 101635 -> 0 bytes
 docs/latest/operations/img/06-segments.png         | Bin 0 -> 209772 bytes
 docs/latest/operations/img/06-tasks-2.png          | Bin 221977 -> 0 bytes
 docs/latest/operations/img/07-supervisors.png      | Bin 0 -> 120310 bytes
 docs/latest/operations/img/07-tasks-3.png          | Bin 195170 -> 0 bytes
 docs/latest/operations/img/08-servers.png          | Bin 119310 -> 0 bytes
 docs/latest/operations/img/08-tasks.png            | Bin 0 -> 64362 bytes
 docs/latest/operations/img/09-sql.png              | Bin 80580 -> 0 bytes
 docs/latest/operations/img/09-task-status.png      | Bin 0 -> 94299 bytes
 docs/latest/operations/img/10-servers.png          | Bin 0 -> 79421 bytes
 docs/latest/operations/img/11-query-sql.png        | Bin 0 -> 111209 bytes
 docs/latest/operations/img/12-query-rune.png       | Bin 0 -> 137679 bytes
 docs/latest/operations/img/13-lookups.png          | Bin 0 -> 54480 bytes
 docs/latest/operations/insert-segment-to-db.html   |   4 -
 docs/latest/operations/insert-segment-to-db.md     | 153 +-----
 docs/latest/operations/metadata-migration.md       |  92 ++++
 docs/latest/operations/performance-faq.html        |   4 +
 docs/latest/operations/performance-faq.md          |  95 ----
 docs/latest/operations/pull-deps.md                |  14 +-
 docs/latest/operations/recommendations.md          |   8 +-
 docs/latest/operations/rule-configuration.md       |   2 -
 docs/latest/operations/single-server.md            |  71 +++
 docs/latest/operations/tls-support.md              |  15 +-
 docs/latest/querying/aggregations.md               |  33 +-
 docs/latest/querying/caching.md                    |  44 +-
 docs/latest/querying/granularities.md              |   9 +-
 docs/latest/querying/groupbyquery.md               |   3 +-
 docs/latest/querying/lookups.md                    |  18 +-
 docs/latest/querying/multitenancy.md               |   2 +-
 docs/latest/querying/optimizations.html            |   2 +-
 docs/latest/querying/query-context.md              |   4 +-
 docs/latest/querying/querying.md                   |  36 +-
 docs/latest/querying/scan-query.md                 | 128 +++--
 docs/latest/querying/select-query.md               |  17 +-
 docs/latest/querying/sql.md                        | 171 ++++---
 docs/latest/querying/timeseriesquery.md            |   1 +
 docs/latest/toc.md                                 | 174 +++----
 docs/latest/tutorials/cluster.md                   | 390 +++++++++------
 docs/latest/tutorials/img/tutorial-batch-01.png    | Bin 54435 -> 0 bytes
 .../img/tutorial-batch-data-loader-01.png          | Bin 0 -> 56488 bytes
 .../img/tutorial-batch-data-loader-02.png          | Bin 0 -> 360295 bytes
 .../img/tutorial-batch-data-loader-03.png          | Bin 0 -> 137443 bytes
 .../img/tutorial-batch-data-loader-04.png          | Bin 0 -> 167252 bytes
 .../img/tutorial-batch-data-loader-05.png          | Bin 0 -> 162488 bytes
 .../img/tutorial-batch-data-loader-06.png          | Bin 0 -> 64301 bytes
 .../img/tutorial-batch-data-loader-07.png          | Bin 0 -> 46529 bytes
 .../img/tutorial-batch-data-loader-08.png          | Bin 0 -> 103928 bytes
 .../img/tutorial-batch-data-loader-09.png          | Bin 0 -> 63348 bytes
 .../img/tutorial-batch-data-loader-10.png          | Bin 0 -> 44516 bytes
 .../img/tutorial-batch-data-loader-11.png          | Bin 0 -> 83288 bytes
 .../img/tutorial-batch-submit-task-01.png          | Bin 0 -> 69356 bytes
 .../img/tutorial-batch-submit-task-02.png          | Bin 0 -> 86076 bytes
 .../tutorials/img/tutorial-compaction-01.png       | Bin 55153 -> 35710 bytes
 .../tutorials/img/tutorial-compaction-02.png       | Bin 279736 -> 166571 bytes
 .../tutorials/img/tutorial-compaction-03.png       | Bin 40114 -> 26755 bytes
 .../tutorials/img/tutorial-compaction-04.png       | Bin 312142 -> 184365 bytes
 .../tutorials/img/tutorial-compaction-05.png       | Bin 39784 -> 26588 bytes
 .../tutorials/img/tutorial-compaction-06.png       | Bin 351505 -> 206717 bytes
 .../tutorials/img/tutorial-compaction-07.png       | Bin 40106 -> 26683 bytes
 .../tutorials/img/tutorial-compaction-08.png       | Bin 43257 -> 28751 bytes
 docs/latest/tutorials/img/tutorial-deletion-01.png | Bin 72062 -> 43586 bytes
 docs/latest/tutorials/img/tutorial-deletion-02.png | Bin 200459 -> 439602 bytes
 docs/latest/tutorials/img/tutorial-deletion-03.png | Bin 0 -> 437304 bytes
 docs/latest/tutorials/img/tutorial-kafka-01.png    | Bin 0 -> 85477 bytes
 docs/latest/tutorials/img/tutorial-kafka-02.png    | Bin 0 -> 75709 bytes
 docs/latest/tutorials/img/tutorial-query-01.png    | Bin 0 -> 100930 bytes
 docs/latest/tutorials/img/tutorial-query-02.png    | Bin 0 -> 83369 bytes
 docs/latest/tutorials/img/tutorial-query-03.png    | Bin 0 -> 65038 bytes
 docs/latest/tutorials/img/tutorial-query-04.png    | Bin 0 -> 66423 bytes
 docs/latest/tutorials/img/tutorial-query-05.png    | Bin 0 -> 51855 bytes
 docs/latest/tutorials/img/tutorial-query-06.png    | Bin 0 -> 82211 bytes
 docs/latest/tutorials/img/tutorial-query-07.png    | Bin 0 -> 78633 bytes
 .../tutorials/img/tutorial-quickstart-01.png       | Bin 0 -> 29834 bytes
 .../latest/tutorials/img/tutorial-retention-00.png | Bin 138304 -> 77704 bytes
 .../latest/tutorials/img/tutorial-retention-01.png | Bin 53955 -> 35171 bytes
 .../latest/tutorials/img/tutorial-retention-02.png | Bin 410930 -> 240310 bytes
 .../latest/tutorials/img/tutorial-retention-03.png | Bin 44144 -> 30029 bytes
 .../latest/tutorials/img/tutorial-retention-04.png | Bin 67493 -> 44617 bytes
 .../latest/tutorials/img/tutorial-retention-05.png | Bin 61639 -> 38992 bytes
 .../latest/tutorials/img/tutorial-retention-06.png | Bin 233034 -> 137570 bytes
 docs/latest/tutorials/index.md                     | 110 +++--
 docs/latest/tutorials/tutorial-batch-hadoop.md     |  18 +-
 docs/latest/tutorials/tutorial-batch.md            | 154 ++++--
 docs/latest/tutorials/tutorial-compaction.md       |   9 +-
 docs/latest/tutorials/tutorial-delete-data.md      |  60 +--
 docs/latest/tutorials/tutorial-ingestion-spec.md   |   4 +-
 docs/latest/tutorials/tutorial-kafka.md            | 103 +++-
 docs/latest/tutorials/tutorial-kerberos-hadoop.md  | 122 +++++
 .../tutorials/tutorial-loading-streaming-data.html |   2 +-
 docs/latest/tutorials/tutorial-query.md            | 329 +++++++------
 docs/latest/tutorials/tutorial-retention.md        |   2 +-
 docs/latest/tutorials/tutorial-rollup.md           |   7 +-
 docs/latest/tutorials/tutorial-tranquility.md      |  14 +-
 docs/latest/tutorials/tutorial-transform-spec.md   |   5 +-
 docs/latest/tutorials/tutorial-update-data.md      |   8 +-
 573 files changed, 20988 insertions(+), 1910 deletions(-)

diff --git a/docs/0.15.0-incubating/About-Experimental-Features.html b/docs/0.15.0-incubating/About-Experimental-Features.html
new file mode 100644
index 0000000..e864904
--- /dev/null
+++ b/docs/0.15.0-incubating/About-Experimental-Features.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/experimental.html
+---
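[Note: every new .html page added below is a Jekyll redirect stub like the one above: the file holds only front matter naming the redirect_page layout and a redirect_target. The layout itself is not part of this commit; as a rough sketch of what such a layout typically does (file name and markup assumed, not taken from this diff), it resolves the front-matter target into a meta refresh plus a JavaScript fallback:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- Hypothetical _layouts/redirect_page.html: forward to the page named in front matter -->
        <link rel="canonical" href="{{ page.redirect_target }}">
        <meta http-equiv="refresh" content="0; url={{ page.redirect_target }}">
        <script>window.location.replace("{{ page.redirect_target }}");</script>
      </head>
      <body>
        <p>This page has moved to <a href="{{ page.redirect_target }}">{{ page.redirect_target }}</a>.</p>
      </body>
    </html>

With a layout along those lines, a stub such as About-Experimental-Features.html renders as an immediate redirect to development/experimental.html, which is how the old flat doc URLs keep working for the 0.15.0-incubating docs.]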
diff --git a/docs/0.15.0-incubating/Aggregations.html b/docs/0.15.0-incubating/Aggregations.html
new file mode 100644
index 0000000..e9a37a4
--- /dev/null
+++ b/docs/0.15.0-incubating/Aggregations.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/aggregations.html
+---
diff --git a/docs/0.15.0-incubating/ApproxHisto.html b/docs/0.15.0-incubating/ApproxHisto.html
new file mode 100644
index 0000000..fdf95f7
--- /dev/null
+++ b/docs/0.15.0-incubating/ApproxHisto.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/extensions-core/approximate-histograms.html
+---
diff --git a/docs/0.15.0-incubating/Batch-ingestion.html b/docs/0.15.0-incubating/Batch-ingestion.html
new file mode 100644
index 0000000..ff5fcdb
--- /dev/null
+++ b/docs/0.15.0-incubating/Batch-ingestion.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/batch-ingestion.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Booting-a-production-cluster.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Booting-a-production-cluster.html
diff --git a/docs/0.15.0-incubating/Broker-Config.html b/docs/0.15.0-incubating/Broker-Config.html
new file mode 100644
index 0000000..bae2edc
--- /dev/null
+++ b/docs/0.15.0-incubating/Broker-Config.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/index.html#broker
+---
diff --git a/docs/0.15.0-incubating/Broker.html b/docs/0.15.0-incubating/Broker.html
new file mode 100644
index 0000000..9a82073
--- /dev/null
+++ b/docs/0.15.0-incubating/Broker.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/broker.html
+---
diff --git a/docs/0.15.0-incubating/Build-from-source.html b/docs/0.15.0-incubating/Build-from-source.html
new file mode 100644
index 0000000..ecebc94
--- /dev/null
+++ b/docs/0.15.0-incubating/Build-from-source.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/build.html
+---
diff --git a/docs/0.15.0-incubating/Cassandra-Deep-Storage.html b/docs/0.15.0-incubating/Cassandra-Deep-Storage.html
new file mode 100644
index 0000000..2b12a56
--- /dev/null
+++ b/docs/0.15.0-incubating/Cassandra-Deep-Storage.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: dependencies/cassandra-deep-storage.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Cluster-setup.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Cluster-setup.html
diff --git a/docs/0.15.0-incubating/Compute.html b/docs/0.15.0-incubating/Compute.html
new file mode 100644
index 0000000..5082f2e
--- /dev/null
+++ b/docs/0.15.0-incubating/Compute.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/processes.html
+---
diff --git a/docs/0.15.0-incubating/Concepts-and-Terminology.html b/docs/0.15.0-incubating/Concepts-and-Terminology.html
new file mode 100644
index 0000000..356fcfc
--- /dev/null
+++ b/docs/0.15.0-incubating/Concepts-and-Terminology.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/index.html
+---
diff --git a/docs/0.15.0-incubating/Configuration.html b/docs/0.15.0-incubating/Configuration.html
new file mode 100644
index 0000000..3faa36e
--- /dev/null
+++ b/docs/0.15.0-incubating/Configuration.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/index.html
+---
diff --git a/docs/latest/Home.html b/docs/0.15.0-incubating/Contribute.html
similarity index 50%
copy from docs/latest/Home.html
copy to docs/0.15.0-incubating/Contribute.html
index f2f2acb..9a81ea2 100644
--- a/docs/latest/Home.html
+++ b/docs/0.15.0-incubating/Contribute.html
@@ -1,4 +1,4 @@
 ---
 layout: redirect_page
-redirect_target: index.html
+redirect_target: /community/
 ---
diff --git a/docs/0.15.0-incubating/Coordinator-Config.html b/docs/0.15.0-incubating/Coordinator-Config.html
new file mode 100644
index 0000000..7301b28
--- /dev/null
+++ b/docs/0.15.0-incubating/Coordinator-Config.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/index.html#coordinator
+---
diff --git a/docs/0.15.0-incubating/Coordinator.html b/docs/0.15.0-incubating/Coordinator.html
new file mode 100644
index 0000000..08b74da
--- /dev/null
+++ b/docs/0.15.0-incubating/Coordinator.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/coordinator.html
+---
diff --git a/docs/0.15.0-incubating/DataSource.html b/docs/0.15.0-incubating/DataSource.html
new file mode 100644
index 0000000..1e50912
--- /dev/null
+++ b/docs/0.15.0-incubating/DataSource.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/datasource.html
+---
diff --git a/docs/0.15.0-incubating/DataSourceMetadataQuery.html b/docs/0.15.0-incubating/DataSourceMetadataQuery.html
new file mode 100644
index 0000000..e9f68fc
--- /dev/null
+++ b/docs/0.15.0-incubating/DataSourceMetadataQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/datasourcemetadataquery.html
+---
diff --git a/docs/0.15.0-incubating/Data_formats.html b/docs/0.15.0-incubating/Data_formats.html
new file mode 100644
index 0000000..cf66ca3
--- /dev/null
+++ b/docs/0.15.0-incubating/Data_formats.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/data-formats.html
+---
diff --git a/docs/0.15.0-incubating/Deep-Storage.html b/docs/0.15.0-incubating/Deep-Storage.html
new file mode 100644
index 0000000..c0d5b49
--- /dev/null
+++ b/docs/0.15.0-incubating/Deep-Storage.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: dependencies/deep-storage.html
+---
diff --git a/docs/0.15.0-incubating/Design.html b/docs/0.15.0-incubating/Design.html
new file mode 100644
index 0000000..356fcfc
--- /dev/null
+++ b/docs/0.15.0-incubating/Design.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/index.html
+---
diff --git a/docs/0.15.0-incubating/DimensionSpecs.html b/docs/0.15.0-incubating/DimensionSpecs.html
new file mode 100644
index 0000000..fa2a037
--- /dev/null
+++ b/docs/0.15.0-incubating/DimensionSpecs.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/dimensionspecs.html
+---
diff --git a/docs/0.15.0-incubating/Download.html b/docs/0.15.0-incubating/Download.html
new file mode 100644
index 0000000..a4432f2
--- /dev/null
+++ b/docs/0.15.0-incubating/Download.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: /downloads.html
+---
diff --git a/docs/0.15.0-incubating/Druid-Personal-Demo-Cluster.html b/docs/0.15.0-incubating/Druid-Personal-Demo-Cluster.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-Personal-Demo-Cluster.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Cassandra.html b/docs/0.15.0-incubating/Druid-vs-Cassandra.html
new file mode 100644
index 0000000..325192f
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Cassandra.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-key-value.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Elasticsearch.html b/docs/0.15.0-incubating/Druid-vs-Elasticsearch.html
new file mode 100644
index 0000000..4553038
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Elasticsearch.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-elasticsearch.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Hadoop.html b/docs/0.15.0-incubating/Druid-vs-Hadoop.html
new file mode 100644
index 0000000..11c3b4a
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Hadoop.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-sql-on-hadoop.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Impala-or-Shark.html b/docs/0.15.0-incubating/Druid-vs-Impala-or-Shark.html
new file mode 100644
index 0000000..11c3b4a
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Impala-or-Shark.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-sql-on-hadoop.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Redshift.html b/docs/0.15.0-incubating/Druid-vs-Redshift.html
new file mode 100644
index 0000000..eb6b1ee
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Redshift.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-redshift.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Spark.html b/docs/0.15.0-incubating/Druid-vs-Spark.html
new file mode 100644
index 0000000..4b2eb01
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Spark.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-spark.html
+---
diff --git a/docs/0.15.0-incubating/Druid-vs-Vertica.html b/docs/0.15.0-incubating/Druid-vs-Vertica.html
new file mode 100644
index 0000000..eb6b1ee
--- /dev/null
+++ b/docs/0.15.0-incubating/Druid-vs-Vertica.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: comparisons/druid-vs-redshift.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Evaluate.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Evaluate.html
diff --git a/docs/0.15.0-incubating/Examples.html b/docs/0.15.0-incubating/Examples.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Examples.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Filters.html b/docs/0.15.0-incubating/Filters.html
new file mode 100644
index 0000000..1952f64
--- /dev/null
+++ b/docs/0.15.0-incubating/Filters.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/filters.html
+---
diff --git a/docs/0.15.0-incubating/Firehose.html b/docs/0.15.0-incubating/Firehose.html
new file mode 100644
index 0000000..f70f590
--- /dev/null
+++ b/docs/0.15.0-incubating/Firehose.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/firehose.html
+---
diff --git a/docs/0.15.0-incubating/GeographicQueries.html b/docs/0.15.0-incubating/GeographicQueries.html
new file mode 100644
index 0000000..c23dccd
--- /dev/null
+++ b/docs/0.15.0-incubating/GeographicQueries.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/geo.html
+---
diff --git a/docs/0.15.0-incubating/Granularities.html b/docs/0.15.0-incubating/Granularities.html
new file mode 100644
index 0000000..3585bd1
--- /dev/null
+++ b/docs/0.15.0-incubating/Granularities.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/granularities.html
+---
diff --git a/docs/0.15.0-incubating/GroupByQuery.html b/docs/0.15.0-incubating/GroupByQuery.html
new file mode 100644
index 0000000..520f950
--- /dev/null
+++ b/docs/0.15.0-incubating/GroupByQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/groupbyquery.html
+---
diff --git a/docs/latest/configuration/hadoop.html b/docs/0.15.0-incubating/Hadoop-Configuration.html
similarity index 100%
copy from docs/latest/configuration/hadoop.html
copy to docs/0.15.0-incubating/Hadoop-Configuration.html
diff --git a/docs/0.15.0-incubating/Having.html b/docs/0.15.0-incubating/Having.html
new file mode 100644
index 0000000..bde8e63
--- /dev/null
+++ b/docs/0.15.0-incubating/Having.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/having.html
+---
diff --git a/docs/0.15.0-incubating/Historical-Config.html b/docs/0.15.0-incubating/Historical-Config.html
new file mode 100644
index 0000000..ce3923c
--- /dev/null
+++ b/docs/0.15.0-incubating/Historical-Config.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/index.html#historical
+---
diff --git a/docs/0.15.0-incubating/Historical.html b/docs/0.15.0-incubating/Historical.html
new file mode 100644
index 0000000..e9f4b23
--- /dev/null
+++ b/docs/0.15.0-incubating/Historical.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/historical.html
+---
diff --git a/docs/0.15.0-incubating/Home.html b/docs/0.15.0-incubating/Home.html
new file mode 100644
index 0000000..356fcfc
--- /dev/null
+++ b/docs/0.15.0-incubating/Home.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/index.html
+---
diff --git a/docs/0.15.0-incubating/Including-Extensions.html b/docs/0.15.0-incubating/Including-Extensions.html
new file mode 100644
index 0000000..e08d154
--- /dev/null
+++ b/docs/0.15.0-incubating/Including-Extensions.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/including-extensions.html
+---
diff --git a/docs/0.15.0-incubating/Indexing-Service-Config.html b/docs/0.15.0-incubating/Indexing-Service-Config.html
new file mode 100644
index 0000000..754720c
--- /dev/null
+++ b/docs/0.15.0-incubating/Indexing-Service-Config.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/index.html#overlord
+---
diff --git a/docs/0.15.0-incubating/Indexing-Service.html b/docs/0.15.0-incubating/Indexing-Service.html
new file mode 100644
index 0000000..00d7c98
--- /dev/null
+++ b/docs/0.15.0-incubating/Indexing-Service.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/indexing-service.html
+---
diff --git a/docs/0.15.0-incubating/Ingestion-FAQ.html b/docs/0.15.0-incubating/Ingestion-FAQ.html
new file mode 100644
index 0000000..f4e7109
--- /dev/null
+++ b/docs/0.15.0-incubating/Ingestion-FAQ.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/faq.html
+---
diff --git a/docs/0.15.0-incubating/Ingestion-overview.html b/docs/0.15.0-incubating/Ingestion-overview.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Ingestion-overview.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Ingestion.html b/docs/0.15.0-incubating/Ingestion.html
new file mode 100644
index 0000000..eb8d213
--- /dev/null
+++ b/docs/0.15.0-incubating/Ingestion.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/index.html
+---
diff --git a/docs/0.15.0-incubating/Integrating-Druid-With-Other-Technologies.html b/docs/0.15.0-incubating/Integrating-Druid-With-Other-Technologies.html
new file mode 100644
index 0000000..f4046b9
--- /dev/null
+++ b/docs/0.15.0-incubating/Integrating-Druid-With-Other-Technologies.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/integrating-druid-with-other-technologies.html
+---
diff --git a/docs/0.15.0-incubating/Kafka-Eight.html b/docs/0.15.0-incubating/Kafka-Eight.html
new file mode 100644
index 0000000..216383e
--- /dev/null
+++ b/docs/0.15.0-incubating/Kafka-Eight.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/extensions-core/kafka-eight-firehose.html
+---
diff --git a/docs/0.15.0-incubating/Libraries.html b/docs/0.15.0-incubating/Libraries.html
new file mode 100644
index 0000000..10a2691
--- /dev/null
+++ b/docs/0.15.0-incubating/Libraries.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: /libraries.html
+---
diff --git a/docs/0.15.0-incubating/LimitSpec.html b/docs/0.15.0-incubating/LimitSpec.html
new file mode 100644
index 0000000..d36b182
--- /dev/null
+++ b/docs/0.15.0-incubating/LimitSpec.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/limitspec.html
+---
diff --git a/docs/0.15.0-incubating/Loading-Your-Data.html b/docs/0.15.0-incubating/Loading-Your-Data.html
new file mode 100644
index 0000000..eb8d213
--- /dev/null
+++ b/docs/0.15.0-incubating/Loading-Your-Data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/index.html
+---
diff --git a/docs/0.15.0-incubating/Logging.html b/docs/0.15.0-incubating/Logging.html
new file mode 100644
index 0000000..e6e6f2d
--- /dev/null
+++ b/docs/0.15.0-incubating/Logging.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/logging.html
+---
diff --git a/docs/0.15.0-incubating/Master.html b/docs/0.15.0-incubating/Master.html
new file mode 100644
index 0000000..5082f2e
--- /dev/null
+++ b/docs/0.15.0-incubating/Master.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/processes.html
+---
diff --git a/docs/0.15.0-incubating/Metadata-storage.html b/docs/0.15.0-incubating/Metadata-storage.html
new file mode 100644
index 0000000..3802fc8
--- /dev/null
+++ b/docs/0.15.0-incubating/Metadata-storage.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: dependencies/metadata-storage.html
+---
diff --git a/docs/0.15.0-incubating/Metrics.html b/docs/0.15.0-incubating/Metrics.html
new file mode 100644
index 0000000..1f51d53
--- /dev/null
+++ b/docs/0.15.0-incubating/Metrics.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/metrics.html
+---
diff --git a/docs/0.15.0-incubating/Middlemanager.html b/docs/0.15.0-incubating/Middlemanager.html
new file mode 100644
index 0000000..5951154
--- /dev/null
+++ b/docs/0.15.0-incubating/Middlemanager.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/middlemanager.html
+---
diff --git a/docs/0.15.0-incubating/Modules.html b/docs/0.15.0-incubating/Modules.html
new file mode 100644
index 0000000..cca9056
--- /dev/null
+++ b/docs/0.15.0-incubating/Modules.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/modules.html
+---
diff --git a/docs/0.15.0-incubating/MySQL.html b/docs/0.15.0-incubating/MySQL.html
new file mode 100644
index 0000000..74649e4
--- /dev/null
+++ b/docs/0.15.0-incubating/MySQL.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/extensions-core/mysql.html
+---
diff --git a/docs/0.15.0-incubating/OrderBy.html b/docs/0.15.0-incubating/OrderBy.html
new file mode 100644
index 0000000..d36b182
--- /dev/null
+++ b/docs/0.15.0-incubating/OrderBy.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/limitspec.html
+---
diff --git a/docs/0.15.0-incubating/Other-Hadoop.html b/docs/0.15.0-incubating/Other-Hadoop.html
new file mode 100644
index 0000000..d63a763
--- /dev/null
+++ b/docs/0.15.0-incubating/Other-Hadoop.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/other-hadoop.html
+---
diff --git a/docs/0.15.0-incubating/Papers-and-talks.html b/docs/0.15.0-incubating/Papers-and-talks.html
new file mode 100644
index 0000000..d9c399d
--- /dev/null
+++ b/docs/0.15.0-incubating/Papers-and-talks.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: misc/papers-and-talks.html
+---
diff --git a/docs/0.15.0-incubating/Peons.html b/docs/0.15.0-incubating/Peons.html
new file mode 100644
index 0000000..a421a2d
--- /dev/null
+++ b/docs/0.15.0-incubating/Peons.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/peons.html
+---
diff --git a/docs/0.15.0-incubating/Performance-FAQ.html b/docs/0.15.0-incubating/Performance-FAQ.html
new file mode 100644
index 0000000..5c40313
--- /dev/null
+++ b/docs/0.15.0-incubating/Performance-FAQ.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/basic-cluster-tuning.html
+---
diff --git a/docs/0.15.0-incubating/Plumber.html b/docs/0.15.0-incubating/Plumber.html
new file mode 100644
index 0000000..411833b
--- /dev/null
+++ b/docs/0.15.0-incubating/Plumber.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/plumber.html
+---
diff --git a/docs/0.15.0-incubating/Post-aggregations.html b/docs/0.15.0-incubating/Post-aggregations.html
new file mode 100644
index 0000000..8ac7b3d
--- /dev/null
+++ b/docs/0.15.0-incubating/Post-aggregations.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/post-aggregations.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Production-Cluster-Configuration.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Production-Cluster-Configuration.html
diff --git a/docs/0.15.0-incubating/Query-Context.html b/docs/0.15.0-incubating/Query-Context.html
new file mode 100644
index 0000000..f1b4ed1
--- /dev/null
+++ b/docs/0.15.0-incubating/Query-Context.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/query-context.html
+---
diff --git a/docs/0.15.0-incubating/Querying-your-data.html b/docs/0.15.0-incubating/Querying-your-data.html
new file mode 100644
index 0000000..f83f0db
--- /dev/null
+++ b/docs/0.15.0-incubating/Querying-your-data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/querying.html
+---
diff --git a/docs/0.15.0-incubating/Querying.html b/docs/0.15.0-incubating/Querying.html
new file mode 100644
index 0000000..f83f0db
--- /dev/null
+++ b/docs/0.15.0-incubating/Querying.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/querying.html
+---
diff --git a/docs/0.15.0-incubating/Realtime-Config.html b/docs/0.15.0-incubating/Realtime-Config.html
new file mode 100644
index 0000000..5af1fb4
--- /dev/null
+++ b/docs/0.15.0-incubating/Realtime-Config.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: configuration/realtime.html
+---
diff --git a/docs/0.15.0-incubating/Realtime-ingestion.html b/docs/0.15.0-incubating/Realtime-ingestion.html
new file mode 100644
index 0000000..f7f037e
--- /dev/null
+++ b/docs/0.15.0-incubating/Realtime-ingestion.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/stream-ingestion.html
+---
diff --git a/docs/0.15.0-incubating/Realtime.html b/docs/0.15.0-incubating/Realtime.html
new file mode 100644
index 0000000..028d55c
--- /dev/null
+++ b/docs/0.15.0-incubating/Realtime.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/realtime.html
+---
diff --git a/docs/0.15.0-incubating/Recommendations.html b/docs/0.15.0-incubating/Recommendations.html
new file mode 100644
index 0000000..b483662
--- /dev/null
+++ b/docs/0.15.0-incubating/Recommendations.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/recommendations.html
+---
diff --git a/docs/0.15.0-incubating/Rolling-Updates.html b/docs/0.15.0-incubating/Rolling-Updates.html
new file mode 100644
index 0000000..a08111c
--- /dev/null
+++ b/docs/0.15.0-incubating/Rolling-Updates.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/rolling-updates.html
+---
diff --git a/docs/0.15.0-incubating/Router.html b/docs/0.15.0-incubating/Router.html
new file mode 100644
index 0000000..fab025b
--- /dev/null
+++ b/docs/0.15.0-incubating/Router.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/router.html
+---
diff --git a/docs/0.15.0-incubating/Rule-Configuration.html b/docs/0.15.0-incubating/Rule-Configuration.html
new file mode 100644
index 0000000..e770394
--- /dev/null
+++ b/docs/0.15.0-incubating/Rule-Configuration.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/rule-configuration.html
+---
diff --git a/docs/0.15.0-incubating/SearchQuery.html b/docs/0.15.0-incubating/SearchQuery.html
new file mode 100644
index 0000000..69142d2
--- /dev/null
+++ b/docs/0.15.0-incubating/SearchQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/searchquery.html
+---
diff --git a/docs/0.15.0-incubating/SearchQuerySpec.html b/docs/0.15.0-incubating/SearchQuerySpec.html
new file mode 100644
index 0000000..01df323
--- /dev/null
+++ b/docs/0.15.0-incubating/SearchQuerySpec.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/searchqueryspec.html
+---
diff --git a/docs/0.15.0-incubating/SegmentMetadataQuery.html b/docs/0.15.0-incubating/SegmentMetadataQuery.html
new file mode 100644
index 0000000..6859ef7
--- /dev/null
+++ b/docs/0.15.0-incubating/SegmentMetadataQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/segmentmetadataquery.html
+---
diff --git a/docs/0.15.0-incubating/Segments.html b/docs/0.15.0-incubating/Segments.html
new file mode 100644
index 0000000..8c2784b
--- /dev/null
+++ b/docs/0.15.0-incubating/Segments.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/segments.html
+---
diff --git a/docs/0.15.0-incubating/SelectQuery.html b/docs/0.15.0-incubating/SelectQuery.html
new file mode 100644
index 0000000..f5c9548
--- /dev/null
+++ b/docs/0.15.0-incubating/SelectQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/select-query.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Simple-Cluster-Configuration.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Simple-Cluster-Configuration.html
diff --git a/docs/0.15.0-incubating/Spatial-Filters.html b/docs/0.15.0-incubating/Spatial-Filters.html
new file mode 100644
index 0000000..c23dccd
--- /dev/null
+++ b/docs/0.15.0-incubating/Spatial-Filters.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/geo.html
+---
diff --git a/docs/0.15.0-incubating/Spatial-Indexing.html b/docs/0.15.0-incubating/Spatial-Indexing.html
new file mode 100644
index 0000000..c23dccd
--- /dev/null
+++ b/docs/0.15.0-incubating/Spatial-Indexing.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/geo.html
+---
diff --git a/docs/0.15.0-incubating/Stand-Alone-With-Riak-CS.html b/docs/0.15.0-incubating/Stand-Alone-With-Riak-CS.html
new file mode 100644
index 0000000..356fcfc
--- /dev/null
+++ b/docs/0.15.0-incubating/Stand-Alone-With-Riak-CS.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/index.html
+---
diff --git a/docs/latest/Home.html b/docs/0.15.0-incubating/Support.html
similarity index 50%
copy from docs/latest/Home.html
copy to docs/0.15.0-incubating/Support.html
index f2f2acb..9a81ea2 100644
--- a/docs/latest/Home.html
+++ b/docs/0.15.0-incubating/Support.html
@@ -1,4 +1,4 @@
 ---
 layout: redirect_page
-redirect_target: index.html
+redirect_target: /community/
 ---
diff --git a/docs/0.15.0-incubating/Tasks.html b/docs/0.15.0-incubating/Tasks.html
new file mode 100644
index 0000000..f159140
--- /dev/null
+++ b/docs/0.15.0-incubating/Tasks.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ingestion/tasks.html
+---
diff --git a/docs/latest/Home.html b/docs/0.15.0-incubating/Thanks.html
similarity index 50%
copy from docs/latest/Home.html
copy to docs/0.15.0-incubating/Thanks.html
index f2f2acb..9a81ea2 100644
--- a/docs/latest/Home.html
+++ b/docs/0.15.0-incubating/Thanks.html
@@ -1,4 +1,4 @@
 ---
 layout: redirect_page
-redirect_target: index.html
+redirect_target: /community/
 ---
diff --git a/docs/0.15.0-incubating/TimeBoundaryQuery.html b/docs/0.15.0-incubating/TimeBoundaryQuery.html
new file mode 100644
index 0000000..b9de682
--- /dev/null
+++ b/docs/0.15.0-incubating/TimeBoundaryQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/timeboundaryquery.html
+---
diff --git a/docs/0.15.0-incubating/TimeseriesQuery.html b/docs/0.15.0-incubating/TimeseriesQuery.html
new file mode 100644
index 0000000..c691c00
--- /dev/null
+++ b/docs/0.15.0-incubating/TimeseriesQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/timeseriesquery.html
+---
diff --git a/docs/0.15.0-incubating/TopNMetricSpec.html b/docs/0.15.0-incubating/TopNMetricSpec.html
new file mode 100644
index 0000000..362d629
--- /dev/null
+++ b/docs/0.15.0-incubating/TopNMetricSpec.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/topnmetricspec.html
+---
diff --git a/docs/0.15.0-incubating/TopNQuery.html b/docs/0.15.0-incubating/TopNQuery.html
new file mode 100644
index 0000000..87f23e4
--- /dev/null
+++ b/docs/0.15.0-incubating/TopNQuery.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: querying/topnquery.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial-A-First-Look-at-Druid.html b/docs/0.15.0-incubating/Tutorial-A-First-Look-at-Druid.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial-A-First-Look-at-Druid.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial-All-About-Queries.html b/docs/0.15.0-incubating/Tutorial-All-About-Queries.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial-All-About-Queries.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial-Loading-Batch-Data.html b/docs/0.15.0-incubating/Tutorial-Loading-Batch-Data.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial-Loading-Batch-Data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial-Loading-Streaming-Data.html b/docs/0.15.0-incubating/Tutorial-Loading-Streaming-Data.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial-Loading-Streaming-Data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial-The-Druid-Cluster.html b/docs/0.15.0-incubating/Tutorial-The-Druid-Cluster.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial-The-Druid-Cluster.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-A-First-Look-at-Druid.html b/docs/0.15.0-incubating/Tutorial:-A-First-Look-at-Druid.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-A-First-Look-at-Druid.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-All-About-Queries.html b/docs/0.15.0-incubating/Tutorial:-All-About-Queries.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-All-About-Queries.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-Loading-Batch-Data.html b/docs/0.15.0-incubating/Tutorial:-Loading-Batch-Data.html
new file mode 100644
index 0000000..6c43ace
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-Loading-Batch-Data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/tutorial-batch.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-Loading-Streaming-Data.html b/docs/0.15.0-incubating/Tutorial:-Loading-Streaming-Data.html
new file mode 100644
index 0000000..90d233b
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-Loading-Streaming-Data.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/tutorial-kafka.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html b/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-1.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html b/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-Loading-Your-Data-Part-2.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/latest/configuration/production-cluster.html b/docs/0.15.0-incubating/Tutorial:-The-Druid-Cluster.html
similarity index 100%
copy from docs/latest/configuration/production-cluster.html
copy to docs/0.15.0-incubating/Tutorial:-The-Druid-Cluster.html
diff --git a/docs/0.15.0-incubating/Tutorial:-Webstream.html b/docs/0.15.0-incubating/Tutorial:-Webstream.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorial:-Webstream.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Tutorials.html b/docs/0.15.0-incubating/Tutorials.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Tutorials.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Twitter-Tutorial.html b/docs/0.15.0-incubating/Twitter-Tutorial.html
new file mode 100644
index 0000000..733f9ff
--- /dev/null
+++ b/docs/0.15.0-incubating/Twitter-Tutorial.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: tutorials/index.html
+---
diff --git a/docs/0.15.0-incubating/Versioning.html b/docs/0.15.0-incubating/Versioning.html
new file mode 100644
index 0000000..92fa554
--- /dev/null
+++ b/docs/0.15.0-incubating/Versioning.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: development/versioning.html
+---
diff --git a/docs/latest/configuration/zookeeper.html b/docs/0.15.0-incubating/ZooKeeper.html
similarity index 100%
copy from docs/latest/configuration/zookeeper.html
copy to docs/0.15.0-incubating/ZooKeeper.html
diff --git a/docs/0.15.0-incubating/alerts.html b/docs/0.15.0-incubating/alerts.html
new file mode 100644
index 0000000..5f7f24f
--- /dev/null
+++ b/docs/0.15.0-incubating/alerts.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: operations/alerts.html
+---
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-cassandra.html b/docs/0.15.0-incubating/comparisons/druid-vs-cassandra.html
new file mode 100644
index 0000000..fc3e34a
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-cassandra.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: druid-vs-key-value.html
+---
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-elasticsearch.md b/docs/0.15.0-incubating/comparisons/druid-vs-elasticsearch.md
new file mode 100644
index 0000000..ada48f3
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-elasticsearch.md
@@ -0,0 +1,40 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs Elasticsearch"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs Elasticsearch
+
+We are not experts on search systems; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means.
+
+Elasticsearch is a search system based on Apache Lucene. It provides full text search for schema-free documents
+and provides access to raw event level data. Elasticsearch is increasingly adding more support for analytics and aggregations.
+[Some members of the community](https://groups.google.com/forum/#!msg/druid-development/nlpwTHNclj8/sOuWlKOzPpYJ) have pointed out
+that the resource requirements for data ingestion and aggregation in Elasticsearch are much higher than those of Druid.
+
+Elasticsearch also does not support data summarization/roll-up at ingestion time, which can compact the data that needs to be
+stored by up to 100x with real-world data sets. This leads to Elasticsearch having greater storage requirements.
+
+Druid focuses on OLAP workflows. Druid is optimized for high performance (fast aggregation and ingestion) at low cost,
+and supports a wide range of analytic operations. Druid has some basic search support for structured event data, but does not support 
+full text search. Druid also does not support completely unstructured data. Measures must be defined in a Druid schema such that 
+summarization/roll-up can be done.
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-hadoop.html b/docs/0.15.0-incubating/comparisons/druid-vs-hadoop.html
new file mode 100644
index 0000000..fbdc4a1
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-hadoop.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: druid-vs-sql-on-hadoop.html
+---
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-impala-or-shark.html b/docs/0.15.0-incubating/comparisons/druid-vs-impala-or-shark.html
new file mode 100644
index 0000000..fbdc4a1
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-impala-or-shark.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: druid-vs-sql-on-hadoop.html
+---
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-key-value.md b/docs/0.15.0-incubating/comparisons/druid-vs-key-value.md
new file mode 100644
index 0000000..1e655be
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-key-value.md
@@ -0,0 +1,47 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)
+
+Druid is highly optimized for scans and aggregations, and it supports arbitrarily deep drill-downs into data sets. This same functionality
+is supported in key/value stores in two ways:
+
+1. Pre-compute all permutations of possible user queries
+2. Range scans on event data
+
+When pre-computing results, the key is the exact parameters of the query, and the value is the result of the query.  
+The queries return extremely quickly, but at the cost of flexibility, as ad-hoc exploratory queries are not possible with 
+pre-computing every possible query permutation. Pre-computing all permutations of all ad-hoc queries leads to result sets 
+that grow exponentially with the number of columns of a data set, and pre-computing queries for complex real-world data sets 
+can require hours of pre-processing time.
+
+The other approach to using key/value stores for aggregations is to use the dimensions of an event as the key and the event measures as the value.
+Aggregations are done by issuing range scans on this data. Timeseries-specific databases such as OpenTSDB use this approach.
+One of the limitations here is that the key/value storage model does not have indexes for any kind of filtering other than prefix ranges, 
+which can be used to filter a query down to a metric and time range, but cannot resolve complex predicates to narrow the exact data to scan. 
+When the number of rows to scan gets large, this limitation can greatly reduce performance. It is also harder to achieve good 
+locality with key/value stores because most don’t support pushing down aggregates to the storage layer.
+
+For arbitrary exploration of data (flexible data filtering), Druid's custom column format enables ad-hoc queries without pre-computation. The format 
+also enables fast scans on columns, which is important for good aggregation performance.
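+
+To make the two key/value layouts above concrete, the following is a minimal, hypothetical sketch using plain Java `TreeMap`s rather than any actual HBase/Cassandra/OpenTSDB API: one map holds a pre-computed result keyed by the exact query parameters, the other holds raw events keyed by metric and time and is aggregated with a prefix range scan plus client-side filtering. The key formats and values here are illustrative assumptions, not a real schema.
+
+```
+import java.util.Map;
+import java.util.TreeMap;
+
+public class KeyValueLayouts {
+  public static void main(String[] args) {
+    // Layout 1: pre-computed results. The key is the exact query parameters and the
+    // value is the pre-aggregated answer. Lookups are fast, but only query shapes
+    // anticipated ahead of time can be answered.
+    Map<String, Long> precomputed = new TreeMap<>();
+    precomputed.put("metric=page_views|country=US|hour=2019-06-27T00", 1234L);
+    System.out.println(precomputed.get("metric=page_views|country=US|hour=2019-06-27T00"));
+
+    // Layout 2: raw events keyed by metric + time (OpenTSDB-style), with the measure
+    // as the value. Aggregation requires a prefix range scan; predicates on
+    // non-prefix dimensions must be applied after reading the rows.
+    TreeMap<String, Long> events = new TreeMap<>();
+    events.put("page_views|2019-06-27T00:00|country=US|device=phone", 10L);
+    events.put("page_views|2019-06-27T00:01|country=DE|device=tablet", 7L);
+    events.put("page_views|2019-06-27T00:02|country=US|device=laptop", 3L);
+
+    long usTotal = events.subMap("page_views|2019-06-27T00", "page_views|2019-06-27T01")
+        .entrySet().stream()
+        .filter(e -> e.getKey().contains("country=US")) // client-side filtering after the scan
+        .mapToLong(Map.Entry::getValue)
+        .sum();
+    System.out.println("US page views in the hour: " + usTotal); // prints 13
+  }
+}
+```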
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-kudu.md b/docs/0.15.0-incubating/comparisons/druid-vs-kudu.md
new file mode 100644
index 0000000..3b27d5b
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-kudu.md
@@ -0,0 +1,40 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs Kudu"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs Apache Kudu
+
+Kudu's storage format enables single-row updates, whereas updating existing Druid segments requires recreating the segment, so theoretically
+the process for updating old values should be higher latency in Druid. However, Kudu's requirements for maintaining extra head space to store
+updates, as well as organizing data by id instead of time, have the potential to introduce some extra latency and to access
+data that is not needed to answer a query at query time.
+
+Druid summarizes/rolls up data at ingestion time, which in practice significantly reduces the raw data that needs to be
+stored (up to 40 times on average), and significantly increases the performance of scanning raw data.
+Druid segments also contain bitmap indexes for fast filtering, which Kudu does not currently support.
+Druid's segment architecture is heavily geared towards fast aggregates and filters, and for OLAP workflows. Appends are very
+fast in Druid, whereas updates of older data are higher latency. This is by design, as the data Druid is best suited for is typically event data,
+which does not need to be updated too frequently. Kudu supports arbitrary primary keys with uniqueness constraints, and
+efficient lookup by ranges of those keys. Kudu chooses not to include an execution engine, but supports sufficient
+operations so as to allow node-local processing from the execution engines. This means that Kudu can support multiple frameworks on the same data (e.g. MapReduce, Spark, and SQL).
+Druid includes its own query layer that allows it to push down aggregations and computations directly to data processes for faster query processing.
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-redshift.md b/docs/0.15.0-incubating/comparisons/druid-vs-redshift.md
new file mode 100644
index 0000000..ce741af
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-redshift.md
@@ -0,0 +1,63 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs Redshift"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs Redshift
+
+### How does Druid compare to Redshift?
+
+In terms of drawing a differentiation, Redshift started out as ParAccel (Actian), which Amazon licensed and has since heavily modified.
+
+Aside from potential performance differences, there are some functional differences:
+
+### Real-time data ingestion
+
+Because Druid is optimized to provide insight against massive quantities of streaming data, it is able to load and aggregate data in real-time.
+
+Generally, traditional data warehouses, including column stores, work only with batch ingestion and are not optimal for regularly streaming data in.
+
+### Druid is a read-oriented analytical data store
+
+Druid’s write semantics are not as fluid, and it does not support full joins (we support large-table-to-small-table joins). Redshift provides full SQL support, including joins and insert/update statements.
+
+### Data distribution model
+
+Druid’s data distribution is segment-based and leverages a highly available "deep" storage such as S3 or HDFS. Scaling up (or down) does not require massive copy actions or downtime; in fact, losing any number of Historical processes does not result in data loss because new Historical processes can always be brought up by reading data from "deep" storage.
+
+In contrast, ParAccel’s data distribution model is hash-based. Expanding the cluster requires re-hashing the data across the nodes, making it difficult to perform without taking downtime. Amazon’s Redshift works around this issue with a multi-step process:
+
+* set cluster into read-only mode
+* copy data from cluster to new cluster that exists in parallel
+* redirect traffic to new cluster
+
+### Replication strategy
+
+Druid employs segment-level data distribution, meaning that more processes can be added and rebalanced without having to perform a staged swap. The replication strategy also makes all replicas available for querying. Replication is done automatically and without any impact to performance.
+
+ParAccel’s hash-based distribution generally means that replication is conducted via hot spares. This puts a numerical limit on the number of nodes you can lose without losing data, and this replication strategy often does not allow the hot spare to help share query load.
+
+### Indexing strategy
+
+Along with column-oriented structures, Druid uses indexing structures to speed up query execution when a filter is provided. Indexing structures do increase storage overhead (and make it more difficult to allow for mutation), but they also significantly speed up queries.
+
+ParAccel does not appear to employ indexing strategies.
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-spark.md b/docs/0.15.0-incubating/comparisons/druid-vs-spark.md
new file mode 100644
index 0000000..82fe78c
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-spark.md
@@ -0,0 +1,43 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs Spark"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs Apache Spark
+
+Druid and Spark are complementary solutions as Druid can be used to accelerate OLAP queries in Spark.
+
+Spark is a general cluster computing framework initially designed around the concept of Resilient Distributed Datasets (RDDs). 
+RDDs enable data reuse by persisting intermediate results 
+in memory and enable Spark to provide fast computations for iterative algorithms.
+This is especially beneficial for certain workflows such as machine
+learning, where the same operation may be applied over and over
+again until some result is converged upon. The generality of Spark makes it very suitable as an engine to process (clean or transform) data. 
+Although Spark provides the ability to query data through Spark SQL, much like Hadoop, the query latencies are not specifically targeted to be interactive (sub-second).
+
+Druid focuses on extremely low latency queries, and is ideal for powering applications used by thousands of users where each query must
+return fast enough that users can interactively explore the data. Druid fully indexes all data, and can act as a middle layer between Spark and your application.
+One typical setup seen in production is to process data in Spark, and load the processed data into Druid for faster access.
+
+For more information about using Druid and Spark together, including benchmarks of the two systems, please see:
+
+<https://www.linkedin.com/pulse/combining-druid-spark-interactive-flexible-analytics-scale-butani>
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-sql-on-hadoop.md b/docs/0.15.0-incubating/comparisons/druid-vs-sql-on-hadoop.md
new file mode 100644
index 0000000..c386bb5
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-sql-on-hadoop.md
@@ -0,0 +1,83 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) vs SQL-on-Hadoop"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) vs SQL-on-Hadoop (Impala/Drill/Spark SQL/Presto)
+
+SQL-on-Hadoop engines provide an
+execution engine for various data formats and data stores, and
+many can be made to push computations down to Druid, while providing a SQL interface to Druid.
+
+For a direct comparison between the technologies and when to use only one or the other, things basically come down to your
+product requirements and what the systems were designed to do.
+
+Druid was designed to
+
+1. be an always on service
+1. ingest data in real-time
+1. handle slice-n-dice style ad-hoc queries
+
+SQL-on-Hadoop engines generally sidestep Map/Reduce, instead querying data directly from HDFS or, in some cases, other storage systems.
+Some of these engines (including Impala and Presto) can be colocated with HDFS data nodes and coordinate with them to achieve data locality for queries.
+What does this mean? We can talk about it in terms of three general areas:
+
+1. Queries
+1. Data Ingestion
+1. Query Flexibility
+
+### Queries
+
+Druid segments store data in a custom column format. Segments are scanned directly as part of queries, and each Druid server
+calculates a set of results that are eventually merged at the Broker level. This means the data that is transferred between servers
+is queries and results, and all computation is done internally as part of the Druid servers.
+
+Most SQL-on-Hadoop engines are responsible for query planning and execution for underlying storage layers and storage formats.
+They are processes that stay on even if there is no query running (eliminating the JVM startup costs from Hadoop MapReduce).
+Some (Impala/Presto) SQL-on-Hadoop engines have daemon processes that can be run where the data is stored, virtually eliminating network transfer costs. There is still
+some latency overhead (e.g. serde time) associated with pulling data from the underlying storage layer into the computation layer. We are unaware of exactly
+how much of a performance impact this makes.
+
+### Data Ingestion
+
+Druid is built to allow for real-time ingestion of data. You can ingest data and query it immediately upon ingestion;
+how quickly the event is reflected in the data is dominated by how long it takes to deliver the event to Druid.
+
+SQL-on-Hadoop engines, being based on data in HDFS or some other backing store, are limited in their data ingestion rates by the
+rate at which that backing store can make data available. Generally, the backing store is the biggest bottleneck for
+how quickly data can become available.
+
+### Query Flexibility
+
+Druid's query language is fairly low-level and maps to how Druid operates internally. Although Druid can be combined with a high-level query
+planner such as [Plywood](https://github.com/implydata/plywood) to support most SQL queries and analytic SQL queries (minus joins among large tables),
+base Druid is less flexible than SQL-on-Hadoop solutions for generic processing.
+
+SQL-on-Hadoop engines support SQL-style queries with full joins.
+
+## Druid vs Parquet
+
+Parquet is a column storage format that is designed to work with SQL-on-Hadoop engines. Parquet doesn't have a query execution engine, and instead
+relies on external sources to pull data out of it.
+
+Druid's storage format is highly optimized for linear scans. Although Druid has support for nested data, Parquet's storage format is much
+more hierarchical, and is designed more for binary chunking. In theory, this should lead to faster scans in Druid.
diff --git a/docs/0.15.0-incubating/comparisons/druid-vs-vertica.html b/docs/0.15.0-incubating/comparisons/druid-vs-vertica.html
new file mode 100644
index 0000000..16933cd
--- /dev/null
+++ b/docs/0.15.0-incubating/comparisons/druid-vs-vertica.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: druid-vs-redshift.html
+---
diff --git a/docs/0.15.0-incubating/configuration/auth.html b/docs/0.15.0-incubating/configuration/auth.html
new file mode 100644
index 0000000..ba6486c
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/auth.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../design/auth.html
+---
diff --git a/docs/0.15.0-incubating/configuration/broker.html b/docs/0.15.0-incubating/configuration/broker.html
new file mode 100644
index 0000000..de12a5d
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/broker.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../configuration/index.html#broker
+---
diff --git a/docs/0.15.0-incubating/configuration/caching.html b/docs/0.15.0-incubating/configuration/caching.html
new file mode 100644
index 0000000..9d7c4e1
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/caching.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../configuration/index.html#cache-configuration
+---
diff --git a/docs/0.15.0-incubating/configuration/coordinator.html b/docs/0.15.0-incubating/configuration/coordinator.html
new file mode 100644
index 0000000..d6ff856
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/coordinator.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../configuration/index.html#coordinator
+---
diff --git a/docs/0.15.0-incubating/configuration/hadoop.html b/docs/0.15.0-incubating/configuration/hadoop.html
new file mode 100644
index 0000000..cae4c1a
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/hadoop.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../ingestion/hadoop.html
+---
diff --git a/docs/0.15.0-incubating/configuration/historical.html b/docs/0.15.0-incubating/configuration/historical.html
new file mode 100644
index 0000000..d83e936
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/historical.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../configuration/index.html#historical
+---
diff --git a/docs/latest/configuration/index.md b/docs/0.15.0-incubating/configuration/index.md
similarity index 96%
copy from docs/latest/configuration/index.md
copy to docs/0.15.0-incubating/configuration/index.md
index b77ac42..0f70489 100644
--- a/docs/latest/configuration/index.md
+++ b/docs/0.15.0-incubating/configuration/index.md
@@ -171,6 +171,7 @@ We recommend just setting the base ZK path and the ZK service host, but all ZK p
 |`druid.zk.service.user`|The username to authenticate with ZooKeeper. This is an optional property.|none|
 |`druid.zk.service.pwd`|The [Password Provider](../operations/password-provider.html) or the string password to authenticate with ZooKeeper. This is an optional property.|none|
 |`druid.zk.service.authScheme`|digest is the only authentication scheme supported. |digest|
+|`druid.zk.service.terminateDruidProcessOnConnectFail`|If set to 'true' and the connection to ZooKeeper fails (after exhausting all potential backoff retries), the Druid process terminates itself with exit code 1.|false|
 
 #### Zookeeper Behavior
 
@@ -536,16 +537,18 @@ This deep storage doesn't do anything. There are no configs.
 #### S3 Deep Storage
 
 This deep storage is used to interface with Amazon's S3. Note that the `druid-s3-extensions` extension must be loaded.
+The table below shows some important configurations for S3, and an example sketch follows the table. See [S3 Deep Storage](../development/extensions-core/s3.html) for the full list of configurations.
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.s3.accessKey`|The access key to use to access S3.|none|
-|`druid.s3.secretKey`|The secret key to use to access S3.|none|
 |`druid.storage.bucket`|S3 bucket name.|none|
 |`druid.storage.baseKey`|S3 object key prefix for storage.|none|
-|`druid.storage.disableAcl`|Boolean flag for ACL.|false|
+|`druid.storage.disableAcl`|Boolean flag for ACL. If this is set to `false`, full control will be granted to the bucket owner, which may require setting additional permissions. See [S3 permissions settings](../development/extensions-core/s3.html#s3-permissions-settings).|false|
 |`druid.storage.archiveBucket`|S3 bucket name for archiving when running the *archive task*.|none|
 |`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
+|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, or `custom`. See the [Server-side encryption section](../development/extensions-core/s3.html#server-side-encryption) for more details.|None|
+|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
+|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
 |`druid.storage.useS3aSchema`|If true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.|false|
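+
+As a minimal sketch, a `common.runtime.properties` fragment wiring up S3 deep storage with KMS server-side encryption might look like the following (the bucket name, prefix, and key alias are placeholders; the full set of options lives in the S3 extension docs linked above):
+
+```
+# Placeholder values for illustration only
+druid.extensions.loadList=["druid-s3-extensions"]
+
+druid.storage.type=s3
+druid.storage.bucket=my-druid-deep-storage
+druid.storage.baseKey=druid/segments
+
+# Optional server-side encryption with an AWS KMS key
+druid.storage.sse.type=kms
+druid.storage.sse.kms.keyId=alias/my-druid-key
+```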
 
 #### HDFS Deep Storage
@@ -575,7 +578,7 @@ If you are running the indexing service in remote mode, the task logs must be st
 |`druid.indexer.logs.type`|Choices:noop, s3, azure, google, hdfs, file. Where to store task logs|file|
 
 You can also configure the Overlord to automatically retain the task logs in log directory and entries in task-related metadata storage tables only for last x milliseconds by configuring following additional properties.
-Caution: Automatic log file deletion typically works based on log file modification timestamp on the backing store, so large clock skews between druid processes and backing store nodes might result in un-intended behavior.  
+Caution: Automatic log file deletion typically works based on the log file modification timestamp on the backing store, so large clock skews between Druid processes and backing store nodes might result in unintended behavior.
 
 |Property|Description|Default|
 |--------|-----------|-------|
@@ -718,14 +721,14 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.period`|The run period for the Coordinator. The Coordinator operates by maintaining the current state of the world in memory and periodically looking at the set of segments available and segments being served to make decisions about whether any changes need to be made to the data topology. This property sets the delay between each of these runs.|PT60S|
 |`druid.coordinator.period.indexingPeriod`|How often to send compact/merge/conversion tasks to the indexing service. It's recommended to be longer than `druid.manager.segments.pollDuration`|PT1800S (30 mins)|
 |`druid.coordinator.startDelay`|The operation of the Coordinator works on the assumption that it has an up-to-date view of the state of the world when it runs; the current ZK interaction code, however, is written in a way that doesn’t allow the Coordinator to know for a fact that it’s done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data.|PT300S|
-|`druid.coordinator.merge.on`|Boolean flag for whether or not the Coordinator should try and merge small segments into a more optimal segment size.|false|
 |`druid.coordinator.load.timeout`|The timeout duration for when the Coordinator assigns a segment to a Historical process.|PT15M|
 |`druid.coordinator.kill.pendingSegments.on`|Boolean flag for whether or not the Coordinator cleans up old entries in the `pendingSegments` table of metadata store. If set to true, Coordinator will check the created time of most recently complete task. If it doesn't exist, it finds the created time of the earliest running/pending/waiting tasks. Once the created time is found, then for all dataSources not in the `killPendingSegmentsSkipList` (see [Dynamic configuration](#dynamic-configurati [...]
 |`druid.coordinator.kill.on`|Boolean flag for whether or not the Coordinator should submit kill task for unused segments, that is, hard delete them from metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), Coordinator will submit tasks periodically based on `period` specified. These kill tasks will delete all segments except for the last `durationToRetain` period. Whitelist or All can be set via dynamic configuration `killAllDataSourc [...]
 |`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
 |`druid.coordinator.kill.durationToRetain`| Do not kill segments in last `durationToRetain`, must be greater or equal to 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.|PT-1S (-1 seconds)|
 |`druid.coordinator.kill.maxSegments`|Kill at most n segments per kill task submission, must be greater than 0. Only applies and MUST be specified if kill is turned on. Note that default value is invalid.|0|
-|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy that the Coordinator should use to distribute segments among the Historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters and will replace `cost` in the future versions, users are invited to try it. Use `diskNormalized` to distribute segments among Historical processes so that the disks fill up uniformly and use `random` to randomly pick nodes to distribute segmen [...]
+|`druid.coordinator.balancer.strategy`|Specify the type of balancing strategy that the coordinator should use to distribute segments among the historicals. `cachingCost` is logically equivalent to `cost` but is more CPU-efficient on large clusters and will replace `cost` in the future versions, users are invited to try it. Use `diskNormalized` to distribute segments among processes so that the disks fill up uniformly and use `random` to randomly pick processes to distribute segments.|`cost`|
+|`druid.coordinator.balancer.cachingCost.awaitInitialization`|Whether to wait for segment view initialization before creating the `cachingCost` balancing strategy. This property is enabled only when `druid.coordinator.balancer.strategy` is `cachingCost`. If set to 'true', the Coordinator will not start to assign segments until the segment view is initialized. If set to 'false', the Coordinator will fall back to using the `cost` balancing strategy only if the segment view is not initialized [...]
 |`druid.coordinator.loadqueuepeon.repeatDelay`|The start and repeat delay for the loadqueuepeon , which manages the load and drop of segments.|PT0.050S (50 ms)|
 |`druid.coordinator.asOverlord.enabled`|Boolean value for whether this Coordinator process should act like an Overlord as well. This configuration allows users to simplify a druid cluster by not having to deploy any standalone Overlord processes. If set to true, then Overlord console is available at `http://coordinator-host:port/console.html` and be sure to set `druid.coordinator.asOverlord.overlordService` also. See next.|false|
 |`druid.coordinator.asOverlord.overlordService`| Required, if `druid.coordinator.asOverlord.enabled` is `true`. This must be same value as `druid.service` on standalone Overlord processes and `druid.selectors.indexing.serviceName` on Middle Managers.|NULL|
@@ -734,7 +737,8 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |Property|Possible Values|Description|Default|
 |--------|---------------|-----------|-------|
 |`druid.serverview.type`|batch or http|Segment discovery method to use. "http" enables discovering segments using HTTP instead of zookeeper.|batch|
-|`druid.coordinator.loadqueuepeon.type`|curator or http|Whether to use "http" or "curator" implementation to assign segment loads/drops to Historical|curator|
+|`druid.coordinator.loadqueuepeon.type`|curator or http|Whether to use "http" or "curator" implementation to assign segment loads/drops to historical|curator|
+|`druid.coordinator.segment.awaitInitializationOnStart`|true or false|Whether the Coordinator will wait for its view of segments to fully initialize before starting up. If set to 'true', the Coordinator's HTTP server will not start up, and the Coordinator will not announce itself as available, until the server view is initialized.|true|
 
 ###### Additional config when "http" loadqueuepeon is used
 |Property|Description|Default|
@@ -842,7 +846,6 @@ A description of the compaction config is:
 |Property|Description|Required|
 |--------|-----------|--------|
 |`dataSource`|dataSource name to be compacted.|yes|
-|`keepSegmentGranularity`|Set [keepSegmentGranularity](../ingestion/compaction.html) to true for compactionTask.|no (default = true)|
 |`taskPriority`|[Priority](../ingestion/tasks.html#task-priorities) of compaction task.|no (default = 25)|
 |`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per compaction task. Since a time chunk must be processed in its entirety, if the segments for a particular time chunk have a total size in bytes greater than this parameter, compaction will not run for that time chunk. Because each compaction task runs with a single thread, setting this value too far above 1–2GB will result in compaction tasks taking an excessive amount of time.|no (default = 419430400)|
 |`targetCompactionSizeBytes`|The target segment size, for each segment, after compaction. The actual sizes of compacted segments might be slightly larger or smaller than this value. Each compaction task may generate more than one output segment, and it will try to keep each output segment close to this configured size. This configuration cannot be used together with `maxRowsPerSegment`.|no (default = 419430400)|
@@ -939,6 +942,17 @@ There are additional configs for autoscaling (if it is enabled):
 |`druid.indexer.autoscale.workerVersion`|If set, will only create nodes of set version during autoscaling. Overrides dynamic configuration. |null|
 |`druid.indexer.autoscale.workerPort`|The port that MiddleManagers will run on.|8080|
 
+##### Supervisors
+
+|Property|Description|Default|
+|--------|-----------|-------|
+|`druid.supervisor.healthinessThreshold`|The number of successful runs before an unhealthy supervisor is again considered healthy.|3|
+|`druid.supervisor.unhealthinessThreshold`|The number of failed runs before the supervisor is considered unhealthy.|3|
+|`druid.supervisor.taskHealthinessThreshold`|The number of consecutive task successes before an unhealthy supervisor is again considered healthy.|3|
+|`druid.supervisor.taskUnhealthinessThreshold`|The number of consecutive task failures before the supervisor is considered unhealthy.|3|
+|`druid.supervisor.storeStackTrace`|Whether full stack traces of supervisor exceptions should be stored and returned by the supervisor `/status` endpoint.|false|
+|`druid.supervisor.maxStoredExceptionEvents`|The maximum number of exception events that can be returned through the supervisor `/status` endpoint.|`max(healthinessThreshold, unhealthinessThreshold)`|
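+
+As a minimal sketch (assuming these properties sit in the Overlord's `runtime.properties` alongside the other static Overlord settings above), a configuration that keeps the default health thresholds but retains stack traces for the supervisor `/status` endpoint could look like:
+
+```
+# Default thresholds shown explicitly for illustration
+druid.supervisor.healthinessThreshold=3
+druid.supervisor.unhealthinessThreshold=3
+druid.supervisor.taskHealthinessThreshold=3
+druid.supervisor.taskUnhealthinessThreshold=3
+
+# Keep full stack traces and up to 10 exception events for debugging via /status
+druid.supervisor.storeStackTrace=true
+druid.supervisor.maxStoredExceptionEvents=10
+```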
+
 #### Overlord Dynamic Configuration
 
 The Overlord can dynamically change worker behavior.
@@ -1251,8 +1265,8 @@ These Historical configurations can be defined in the `historical/runtime.proper
 |`druid.segmentCache.dropSegmentDelayMillis`|How long a process delays before completely dropping segment.|30000 (30 seconds)|
 |`druid.segmentCache.infoDir`|Historical processes keep track of the segments they are serving so that when the process is restarted they can reload the same segments without waiting for the Coordinator to reassign. This path defines where this metadata is kept. Directory will be created if needed.|${first_location}/info_dir|
 |`druid.segmentCache.announceIntervalMillis`|How frequently to announce segments while segments are loading from cache. Set this value to zero to wait for all segments to be loaded before announcing.|5000 (5 seconds)|
-|`druid.segmentCache.numLoadingThreads`|How many segments to drop or load concurrently from from deep storage.|10|
-|`druid.segmentCache.numBootstrapThreads`|How many segments to load concurrently from local storage at startup.|Same as numLoadingThreads|
+|`druid.segmentCache.numLoadingThreads`|How many segments to drop or load concurrently from deep storage. Note that the work of loading segments involves downloading segments from deep storage, decompressing them, and loading them into a memory-mapped location, so the work is not all I/O bound. Depending on CPU and network load, one could possibly increase this config to a higher value.|Number of cores|
+|`druid.coordinator.loadqueuepeon.curator.numCallbackThreads`|Number of threads for executing callback actions associated with loading or dropping of segments. One might want to increase this number when noticing clusters are lagging behind w.r.t. balancing segments across historical nodes.|2|
 
 In `druid.segmentCache.locations`, *freeSpacePercent* was added because *maxSize* setting is only a theoretical limit and assumes that much space will always be available for storing segments. In case of any druid bug leading to unaccounted segment files left alone on disk or some other process writing stuff to disk, this check can start failing segment loading early before filling up the disk completely and leaving the host usable otherwise.
 
@@ -1406,7 +1420,7 @@ The Druid SQL server is configured through the following properties on the Broke
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|false|
+|`druid.sql.enable`|Whether to enable SQL at all, including background metadata fetching. If false, this overrides all other SQL-related properties and disables SQL metadata, serving, and planning completely.|true|
 |`druid.sql.avatica.enable`|Whether to enable JDBC querying at `/druid/v2/sql/avatica/`.|true|
 |`druid.sql.avatica.maxConnections`|Maximum number of open connections for the Avatica server. These are not HTTP connections, but are logical client connections that may span multiple HTTP connections.|50|
 |`druid.sql.avatica.maxRowsPerFrame`|Maximum number of rows to return in a single JDBC frame. Setting this property to -1 indicates that no row limit should be applied. Clients can optionally specify a row limit in their requests; if a client specifies a row limit, the lesser value of the client-provided limit and `maxRowsPerFrame` will be used.|5,000|
@@ -1421,7 +1435,6 @@ The Druid SQL server is configured through the following properties on the Broke
 |`druid.sql.planner.selectThreshold`|Page size threshold for [Select queries](../querying/select-query.html). Select queries for larger resultsets will be issued back-to-back using pagination.|1000|
 |`druid.sql.planner.useApproximateCountDistinct`|Whether to use an approximate cardinality algorithm for `COUNT(DISTINCT foo)`.|true|
 |`druid.sql.planner.useApproximateTopN`|Whether to use approximate [TopN queries](../querying/topnquery.html) when a SQL query could be expressed as such. If false, exact [GroupBy queries](../querying/groupbyquery.html) will be used instead.|true|
-|`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker when they cannot be expressed as Druid queries. This option is not recommended for production since it can generate unscalable query plans. If false, SQL queries that cannot be translated to Druid queries will fail.|false|
 |`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have filter conditions on __time column so that all generated native queries will have user specified intervals. If true, all queries without filter condition on __time column will fail|false|
 |`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, which will affect how time functions and timestamp literals behave. Should be a time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
 |`druid.sql.planner.serializeComplexValues`|Whether to serialize "complex" output values, false will return the class name instead of the serialized value.|true|
@@ -1454,7 +1467,7 @@ See [cache configuration](#cache-configuration) for how to configure cache setti
 
 This section describes caching configuration that is common to Broker, Historical, and MiddleManager/Peon processes.
  
-Caching can optionally be enabled on the Broker, Historical, and MiddleManager/Peon processses. See [Broker](#broker-caching), 
+Caching can optionally be enabled on the Broker, Historical, and MiddleManager/Peon processes. See [Broker](#broker-caching),
 [Historical](#Historical-caching), and [Peon](#peon-caching) configuration options for how to enable it for different processes.
 
 Druid uses a local in-memory cache by default, unless a different type of cache is specified.
diff --git a/docs/0.15.0-incubating/configuration/indexing-service.html b/docs/0.15.0-incubating/configuration/indexing-service.html
new file mode 100644
index 0000000..456c441
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/indexing-service.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../configuration/index.html#overlord
+---
diff --git a/docs/latest/configuration/logging.md b/docs/0.15.0-incubating/configuration/logging.md
similarity index 65%
copy from docs/latest/configuration/logging.md
copy to docs/0.15.0-incubating/configuration/logging.md
index 1c89b7d..28c9052 100644
--- a/docs/latest/configuration/logging.md
+++ b/docs/0.15.0-incubating/configuration/logging.md
@@ -53,3 +53,36 @@ An example log4j2.xml ships with Druid under config/_common/log4j2.xml, and a sa
   </Loggers>
 </Configuration>
 ```
+
+## My logs are really chatty, can I set them to asynchronously write?
+
+Yes, using a `log4j2.xml` similar to the following causes some of the more chatty classes to write asynchronously:
+
+```
+<?xml version="1.0" encoding="UTF-8" ?>
+<Configuration status="WARN">
+  <Appenders>
+    <Console name="Console" target="SYSTEM_OUT">
+      <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/>
+    </Console>
+  </Appenders>
+  <Loggers>
+    <AsyncLogger name="org.apache.druid.curator.inventory.CuratorInventoryManager" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <AsyncLogger name="org.apache.druid.client.BatchServerInventoryView" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <!-- Make extra sure nobody adds logs in a bad way that can hurt performance -->
+    <AsyncLogger name="org.apache.druid.client.ServerInventoryView" level="debug" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <AsyncLogger name ="org.apache.druid.java.util.http.client.pool.ChannelResourceFactory" level="info" additivity="false">
+      <AppenderRef ref="Console"/>
+    </AsyncLogger>
+    <Root level="info">
+      <AppenderRef ref="Console"/>
+    </Root>
+  </Loggers>
+</Configuration>
+```
diff --git a/docs/0.15.0-incubating/configuration/production-cluster.html b/docs/0.15.0-incubating/configuration/production-cluster.html
new file mode 100644
index 0000000..aa3a66d
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/production-cluster.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../tutorials/cluster.html
+---
diff --git a/docs/latest/configuration/realtime.md b/docs/0.15.0-incubating/configuration/realtime.md
similarity index 98%
copy from docs/latest/configuration/realtime.md
copy to docs/0.15.0-incubating/configuration/realtime.md
index 49cc934..dd319fe 100644
--- a/docs/latest/configuration/realtime.md
+++ b/docs/0.15.0-incubating/configuration/realtime.md
@@ -95,4 +95,4 @@ You can optionally configure caching to be enabled on the realtime process by se
 |`druid.realtime.cache.unCacheable`|All druid query types|All query types to not cache.|`["select"]`|
 |`druid.realtime.cache.maxEntrySize`|positive integer or -1|Maximum size of an individual cache entry (processed results for one segment), in bytes, or -1 for unlimited.|`1000000` (1MB)|
 
-See [cache configuration](caching.html) for how to configure cache settings.
+See [cache configuration](index.html#cache-configuration) for how to configure cache settings.
diff --git a/docs/0.15.0-incubating/configuration/simple-cluster.html b/docs/0.15.0-incubating/configuration/simple-cluster.html
new file mode 100644
index 0000000..aa3a66d
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/simple-cluster.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../tutorials/cluster.html
+---
diff --git a/docs/0.15.0-incubating/configuration/zookeeper.html b/docs/0.15.0-incubating/configuration/zookeeper.html
new file mode 100644
index 0000000..cb2dbd2
--- /dev/null
+++ b/docs/0.15.0-incubating/configuration/zookeeper.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../dependencies/zookeeper.html
+---
diff --git a/docs/0.15.0-incubating/dependencies/cassandra-deep-storage.md b/docs/0.15.0-incubating/dependencies/cassandra-deep-storage.md
new file mode 100644
index 0000000..6cb42d2
--- /dev/null
+++ b/docs/0.15.0-incubating/dependencies/cassandra-deep-storage.md
@@ -0,0 +1,62 @@
+---
+layout: doc_page
+title: "Cassandra Deep Storage"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Cassandra Deep Storage
+
+## Introduction
+
+Apache Druid (incubating) can use Apache Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
+`index_storage` and `descriptor_storage`.  Underneath the hood, the Cassandra integration leverages Astyanax.  The
+index storage table is a [Chunked Object](https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store) repository. It contains
+compressed segments for distribution to Historical processes.  Since segments can be large, the Chunked Object storage allows the integration to multi-thread
+the write to Cassandra, and spreads the data across all the processes in a cluster.  The descriptor storage table is a normal C* table that
+stores the segment metadata.
+
+## Schema
+Below are the create statements for each:
+
+```sql
+CREATE TABLE index_storage(key text,
+                           chunk text,
+                           value blob,
+                           PRIMARY KEY (key, chunk)) WITH COMPACT STORAGE;
+
+CREATE TABLE descriptor_storage(key varchar,
+                                lastModified timestamp,
+                                descriptor varchar,
+                                PRIMARY KEY (key)) WITH COMPACT STORAGE;
+```
+
+## Getting Started
+First create the schema above. I use a new keyspace called `druid` for this purpose, which can be created using the
+[Cassandra CQL `CREATE KEYSPACE`](http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/create_keyspace_r.html) command.
+
+Then, add the following to your Historical and realtime runtime properties files to enable a Cassandra backend.
+
+```properties
+druid.extensions.loadList=["druid-cassandra-storage"]
+druid.storage.type=c*
+druid.storage.host=localhost:9160
+druid.storage.keyspace=druid
+```
diff --git a/docs/0.15.0-incubating/dependencies/deep-storage.md b/docs/0.15.0-incubating/dependencies/deep-storage.md
new file mode 100644
index 0000000..c9c8eff
--- /dev/null
+++ b/docs/0.15.0-incubating/dependencies/deep-storage.md
@@ -0,0 +1,54 @@
+---
+layout: doc_page
+title: "Deep Storage"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Deep Storage
+
+Deep storage is where segments are stored.  It is a storage mechanism that Apache Druid (incubating) does not provide.  This deep storage infrastructure defines the level of durability of your data: as long as Druid processes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose.  If segments disappear from this storage layer, then you will lose whatever data those segments represented.
+
+## Local Mount
+
+A local mount can be used for storage of segments as well.  This allows you to use just your local file system, or anything else that can be mounted locally, like NFS, Ceph, etc.  This is the default deep storage implementation.
+
+In order to use a local mount for deep storage, you need to set the following configuration in your common configs.
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|local|Set this to `local` to use a local mount for deep storage.|Must be set.|
+|`druid.storage.storageDirectory`||Directory for storing segments.|Must be set.|
+
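+For example, a minimal sketch of a local deep storage setup in the common runtime properties (the directory path is illustrative):
+
+```properties
+druid.storage.type=local
+druid.storage.storageDirectory=/var/druid/segments
+```
+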
+Note that you should generally set `druid.storage.storageDirectory` to something different from `druid.segmentCache.locations` and `druid.segmentCache.infoDir`.
+
+If you are using the Hadoop indexer in local mode, then just give it a local file as your output directory and it will work.
+
+## S3-compatible
+
+See [druid-s3-extensions extension documentation](../development/extensions-core/s3.html).
+
+## HDFS
+
+See [druid-hdfs-storage extension documentation](../development/extensions-core/hdfs.html).
+
+## Additional Deep Stores
+
+For additional deep stores, please see our [extensions list](../development/extensions.html).
diff --git a/docs/latest/dependencies/metadata-storage.md b/docs/0.15.0-incubating/dependencies/metadata-storage.md
similarity index 94%
copy from docs/latest/dependencies/metadata-storage.md
copy to docs/0.15.0-incubating/dependencies/metadata-storage.md
index e76eb2f..c05e732 100644
--- a/docs/latest/dependencies/metadata-storage.md
+++ b/docs/0.15.0-incubating/dependencies/metadata-storage.md
@@ -32,7 +32,10 @@ Derby is the default metadata store for Druid, however, it is not suitable for p
 [MySQL](../development/extensions-core/mysql.html) and [PostgreSQL](../development/extensions-core/postgresql.html) are more production suitable metadata stores.
 
 <div class="note caution">
-Derby is not suitable for production use as a metadata store. Use MySQL or PostgreSQL instead.
+The metadata storage holds the entire set of metadata that is essential for a Druid cluster to work.
+For production clusters, consider using MySQL or PostgreSQL instead of Derby.
+It is also highly recommended to set up a highly available metadata store,
+because lost metadata cannot be restored.
 </div>
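+
+For example, a minimal sketch of pointing Druid at a MySQL metadata store (the host, database, and credentials are illustrative; see the MySQL extension documentation for the full set of options):
+
+```properties
+druid.extensions.loadList=["mysql-metadata-storage"]
+druid.metadata.storage.type=mysql
+druid.metadata.storage.connector.connectURI=jdbc:mysql://metadata.example.com:3306/druid
+druid.metadata.storage.connector.user=druid
+druid.metadata.storage.connector.password=changeme
+```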
 
 ## Using derby
diff --git a/docs/0.15.0-incubating/dependencies/zookeeper.md b/docs/0.15.0-incubating/dependencies/zookeeper.md
new file mode 100644
index 0000000..a41e815
--- /dev/null
+++ b/docs/0.15.0-incubating/dependencies/zookeeper.md
@@ -0,0 +1,77 @@
+---
+layout: doc_page
+title: "ZooKeeper"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# ZooKeeper
+
+Apache Druid (incubating) uses [Apache ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
+
+1.  [Coordinator](../design/coordinator.html) leader election
+2.  Segment "publishing" protocol from [Historical](../design/historical.html) and [Realtime](../design/realtime.html)
+3.  Segment load/drop protocol between [Coordinator](../design/coordinator.html) and [Historical](../design/historical.html)
+4.  [Overlord](../design/overlord.html) leader election
+5.  [Overlord](../design/overlord.html) and [MiddleManager](../design/middlemanager.html) task management
+
+### Coordinator Leader Election
+
+We use the Curator LeadershipLatch recipe to do leader election at path
+
+```
+${druid.zk.paths.coordinatorPath}/_COORDINATOR
+```
+
+### Segment "publishing" protocol from Historical and Realtime
+
+The `announcementsPath` and `servedSegmentsPath` are used for this.
+
+All [Historical](../design/historical.html) and [Realtime](../design/realtime.html) processes publish themselves on the `announcementsPath`, specifically, they will create an ephemeral znode at
+
+```
+${druid.zk.paths.announcementsPath}/${druid.host}
+```
+
+This signifies that they exist. They will also subsequently create a permanent znode at
+
+```
+${druid.zk.paths.servedSegmentsPath}/${druid.host}
+```
+
+And as they load up segments, they will attach ephemeral znodes that look like
+
+```
+${druid.zk.paths.servedSegmentsPath}/${druid.host}/_segment_identifier_
+```
+
+Processes like the [Coordinator](../design/coordinator.html) and [Broker](../design/broker.html) can then watch these paths to see which processes are currently serving which segments.
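+
+All of these paths live under a configurable base znode. A minimal sketch of the relevant common properties (the ZooKeeper hosts are illustrative):
+
+```properties
+druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+druid.zk.paths.base=/druid
+```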
+
+### Segment load/drop protocol between Coordinator and Historical
+
+The `loadQueuePath` is used for this.
+
+When the [Coordinator](../design/coordinator.html) decides that a [Historical](../design/historical.html) process should load or drop a segment, it writes an ephemeral znode to
+
+```
+${druid.zk.paths.loadQueuePath}/_host_of_historical_process/_segment_identifier
+```
+
+This znode will contain a payload that indicates to the Historical process what it should do with the given segment. When the Historical process is done with the work, it will delete the znode in order to signify to the Coordinator that it is complete.
diff --git a/docs/0.15.0-incubating/design/auth.md b/docs/0.15.0-incubating/design/auth.md
new file mode 100644
index 0000000..c46c83f
--- /dev/null
+++ b/docs/0.15.0-incubating/design/auth.md
@@ -0,0 +1,168 @@
+---
+layout: doc_page
+title: "Authentication and Authorization"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Authentication and Authorization
+
+This document describes non-extension specific Apache Druid (incubating) authentication and authorization configurations.
+
+|Property|Type|Description|Default|Required|
+|--------|-----------|--------|--------|--------|
+|`druid.auth.authenticatorChain`|JSON List of Strings|List of Authenticator type names|["allowAll"]|no|
+|`druid.escalator.type`|String|Type of the Escalator that should be used for internal Druid communications. This Escalator must use an authentication scheme that is supported by an Authenticator in `druid.auth.authenticatorChain`.|"noop"|no|
+|`druid.auth.authorizers`|JSON List of Strings|List of Authorizer type names |["allowAll"]|no|
+|`druid.auth.unsecuredPaths`| List of Strings|List of paths for which security checks will not be performed. All requests to these paths will be allowed.|[]|no|
+|`druid.auth.allowUnauthenticatedHttpOptions`|Boolean|If true, skip authentication checks for HTTP OPTIONS requests. This is needed for certain use cases, such as supporting CORS pre-flight requests. Note that disabling authentication checks for OPTIONS requests will allow unauthenticated users to determine what Druid endpoints are valid (by checking if the OPTIONS request returns a 200 instead of 404), so enabling this option may reveal information about server configuration, including  [...]
+
+## Enabling Authentication/Authorization
+
+## Authenticator Chain
+Authentication decisions are handled by a chain of Authenticator instances. A request will be checked by Authenticators in the sequence defined by the `druid.auth.authenticatorChain`.
+
+Authenticator implementations are provided by extensions.
+
+For example, the following authentication chain definition enables the Kerberos and HTTP Basic authenticators, from the `druid-kerberos` and `druid-basic-security` core extensions, respectively:
+
+```
+druid.auth.authenticatorChain=["kerberos", "basic"]
+```
+
+A request will pass through all Authenticators in the chain, until one of the Authenticators successfully authenticates the request or sends an HTTP error response. Authenticators later in the chain will be skipped after the first successful authentication or if the request is terminated with an error response.
+
+If no Authenticator in the chain successfully authenticated a request or sent an HTTP error response, an HTTP error response will be sent at the end of the chain.
+
+Druid includes two built-in Authenticators, one of which is used for the default unsecured configuration.
+
+### AllowAll Authenticator
+
+This built-in Authenticator authenticates all requests, and always directs them to an Authorizer named "allowAll". It is not intended to be used for anything other than the default unsecured configuration.
+
+### Anonymous Authenticator
+
+This built-in Authenticator authenticates all requests, and directs them to an Authorizer specified in the configuration by the user. It is intended to be used for adding a default level of access so 
+the Anonymous Authenticator should be added to the end of the authentication chain. A request that reaches the Anonymous Authenticator at the end of the chain will succeed or fail depending on how the Authorizer linked to the Anonymous Authenticator is configured.
+
+|Property|Description|Default|Required|
+|--------|-----------|-------|--------|
+|`druid.auth.authenticator.<authenticatorName>.authorizerName`|Authorizer that requests should be directed to.|N/A|Yes|
+|`druid.auth.authenticator.<authenticatorName>.identity`|The identity of the requester.|defaultUser|No|
+
+To use the Anonymous Authenticator, add an authenticator with type `anonymous` to the authenticatorChain.
+
+For example, the following enables the Anonymous Authenticator with the `druid-basic-security` extension:
+
+```
+druid.auth.authenticatorChain=["basic", "anonymous"]
+
+druid.auth.authenticator.anonymous.type=anonymous
+druid.auth.authenticator.anonymous.identity=defaultUser
+druid.auth.authenticator.anonymous.authorizerName=myBasicAuthorizer
+
+# ... usual configs for basic authentication would go here ...
+```
+
+## Escalator
+The `druid.escalator.type` property determines what authentication scheme should be used for internal Druid cluster communications (such as when a Broker process communicates with Historical processes for query processing).
+
+The Escalator chosen for this property must use an authentication scheme that is supported by an Authenticator in `druid.auth.authenticatorChain`. Authenticator extension implementors must also provide a corresponding Escalator implementation if they intend to use a particular authentication scheme for internal Druid communications.
+
+### Noop Escalator
+
+This built-in default Escalator is intended for use only with the default AllowAll Authenticator and Authorizer.
+
+## Authorizers
+Authorization decisions are handled by an Authorizer. The `druid.auth.authorizers` property determines what Authorizer implementations will be active.
+
+There are two built-in Authorizers, "default" and "noop". Other implementations are provided by extensions.
+
+For example, the following authorizers definition enables the "basic" implementation from `druid-basic-security`:
+
+```
+druid.auth.authorizers=["basic"]
+```
+
+
+Only a single Authorizer will authorize any given request.
+
+Druid includes one built-in Authorizer:
+
+### AllowAll Authorizer
+The Authorizer with type name "allowAll" accepts all requests.
+
+## Default Unsecured Configuration
+
+When `druid.auth.authenticatorChain` is left empty or unspecified, Druid will create an authentication chain with a single AllowAll Authenticator named "allowAll".
+
+When `druid.auth.authorizers` is left empty or unspecified, Druid will create a single AllowAll Authorizer named "allowAll".
+
+The default value of `druid.escalator.type` is "noop" to match the default unsecured Authenticator/Authorizer configurations.
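+
+Spelled out explicitly, the default unsecured setup is equivalent to a configuration like the following sketch (these values simply restate the defaults described above):
+
+```properties
+druid.auth.authenticatorChain=["allowAll"]
+druid.auth.authorizers=["allowAll"]
+druid.escalator.type=noop
+```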
+
+## Authenticator to Authorizer Routing
+
+When an Authenticator successfully authenticates a request, it must attach an AuthenticationResult to the request, containing information about the identity of the requester, as well as the name of the Authorizer that should authorize the authenticated request.
+
+An Authenticator implementation should provide some means through configuration to allow users to select what Authorizer(s) the Authenticator should route requests to.
+
+## Internal System User
+
+Internal requests between Druid processes (non-user initiated communications) need to have authentication credentials attached. 
+
+These requests should be run as an "internal system user", an identity that represents the Druid cluster itself, with full access permissions.
+
+The details of how the internal system user is defined are left to extension implementations.
+
+### Authorizer Internal System User Handling
+
+Authorizer implementations must recognize and authorize an identity for the "internal system user", with full access permissions.
+
+### Authenticator and Escalator Internal System User Handling
+
+An Authenticator implementation that is intended to support internal Druid communications must recognize credentials for the "internal system user", as provided by a corresponding Escalator implementation.
+
+An Escalator must implement three methods related to the internal system user:
+
+```java
+  public HttpClient createEscalatedClient(HttpClient baseClient);
+
+  public org.eclipse.jetty.client.HttpClient createEscalatedJettyClient(org.eclipse.jetty.client.HttpClient baseClient);
+
+  public AuthenticationResult createEscalatedAuthenticationResult();
+```
+
+`createEscalatedClient` returns a wrapped HttpClient that attaches the credentials of the "internal system user" to requests.
+
+`createEscalatedJettyClient` is similar to `createEscalatedClient`, except that it operates on a Jetty HttpClient.
+
+`createEscalatedAuthenticationResult` returns an AuthenticationResult containing the identity of the "internal system user".
+
+## Reserved Name Configuration Property
+
+For extension implementers, please note that the following configuration properties are reserved for the names of Authenticators and Authorizers:
+
+```
+druid.auth.authenticator.<authenticator-name>.name=<authenticator-name>
+druid.auth.authorizer.<authorizer-name>.name=<authorizer-name>
+```
+
+These properties provide the authenticator and authorizer names to the implementations as @JsonProperty parameters, potentially useful when multiple authenticators or authorizers of the same type are configured.
diff --git a/docs/0.15.0-incubating/design/broker.md b/docs/0.15.0-incubating/design/broker.md
new file mode 100644
index 0000000..9f11551
--- /dev/null
+++ b/docs/0.15.0-incubating/design/broker.md
@@ -0,0 +1,55 @@
+---
+layout: doc_page
+title: "Broker"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Broker
+
+### Configuration
+
+For Apache Druid (incubating) Broker Process Configuration, see [Broker Configuration](../configuration/index.html#broker).
+
+### HTTP endpoints
+
+For a list of API endpoints supported by the Broker, see [Broker API](../operations/api-reference.html#broker).
+
+### Overview
+
+The Broker is the process to route queries to if you want to run a distributed cluster. It understands the metadata published to ZooKeeper about what segments exist on what processes and routes queries such that they hit the right processes. This process also merges the result sets from all of the individual processes together.
+On startup, Historical processes announce themselves and the segments they are serving in Zookeeper.
+
+### Running
+
+```
+org.apache.druid.cli.Main server broker
+```
+
+### Forwarding Queries
+
+Most Druid queries contain an interval object that indicates a span of time for which data is requested. Likewise, Druid [Segments](../design/segments.html) are partitioned to contain data for some interval of time and segments are distributed across a cluster. Consider a simple datasource with 7 segments where each segment contains data for a given day of the week. Any query issued to the datasource for more than one day of data will hit more than one segment. These segments will likely [...]
+
+To determine which processes to forward queries to, the Broker process first builds a view of the world from information in Zookeeper. Zookeeper maintains information about [Historical](../design/historical.html) and streaming ingestion [Peon](../design/peons.html) processes and the segments they are serving. For every datasource in Zookeeper, the Broker process builds a timeline of segments and the processes that serve them. When queries are received for a specific datasource and interv [...]
+
+### Caching
+
+Broker processes employ a cache with an LRU cache invalidation strategy. The Broker cache stores per-segment results. The cache can be local to each Broker process or shared across multiple processes using an external distributed cache such as [memcached](http://memcached.org/). Each time a Broker process receives a query, it first maps the query to a set of segments. A subset of these segment results may already exist in the cache and the results can be directly pulled from the cache. For [...]
+Historical processes. Once the Historical processes return their results, the Broker will store those results in the cache. Real-time segments are never cached and hence requests for real-time data will always be forwarded to real-time processes. Real-time data is perpetually changing and caching the results would be unreliable.
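+
+Per-segment caching on the Broker is controlled by the Broker cache properties; a minimal sketch (the cache type and size are illustrative, see the cache configuration section for all options):
+
+```properties
+druid.broker.cache.useCache=true
+druid.broker.cache.populateCache=true
+druid.cache.type=caffeine
+druid.cache.sizeInBytes=268435456
+```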
diff --git a/docs/latest/Stand-Alone-With-Riak-CS.html b/docs/0.15.0-incubating/design/concepts-and-terminology.html
similarity index 100%
copy from docs/latest/Stand-Alone-With-Riak-CS.html
copy to docs/0.15.0-incubating/design/concepts-and-terminology.html
diff --git a/docs/latest/design/coordinator.md b/docs/0.15.0-incubating/design/coordinator.md
similarity index 96%
copy from docs/latest/design/coordinator.md
copy to docs/0.15.0-incubating/design/coordinator.md
index 49d8a51..0dbbd47 100644
--- a/docs/latest/design/coordinator.md
+++ b/docs/0.15.0-incubating/design/coordinator.md
@@ -52,8 +52,7 @@ Segments can be automatically loaded and dropped from the cluster based on a set
 
 ### Cleaning Up Segments
 
-Each run, the Druid Coordinator compares the list of available database segments in the database with the current segments in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
-Note that if all segments in database are deleted(or marked unused), then Coordinator will not drop anything from the Historicals. This is done to prevent a race condition in which the Coordinator would drop all segments if it started running cleanup before it finished polling the database for available segments for the first time and believed that there were no segments.
+Each run, the Druid coordinator compares the list of available database segments in the database with the current segments in the cluster. Segments that are not in the database but are still being served in the cluster are flagged and appended to a removal list. Segments that are overshadowed (their versions are too old and their data has been replaced by newer segments) are also dropped.
 
 ### Segment Availability
 
diff --git a/docs/latest/Stand-Alone-With-Riak-CS.html b/docs/0.15.0-incubating/design/design.html
similarity index 100%
copy from docs/latest/Stand-Alone-With-Riak-CS.html
copy to docs/0.15.0-incubating/design/design.html
diff --git a/docs/0.15.0-incubating/design/historical.md b/docs/0.15.0-incubating/design/historical.md
new file mode 100644
index 0000000..098950c
--- /dev/null
+++ b/docs/0.15.0-incubating/design/historical.md
@@ -0,0 +1,59 @@
+---
+layout: doc_page
+title: "Historical Process"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Historical Process
+
+### Configuration
+
+For Apache Druid (incubating) Historical Process Configuration, see [Historical Configuration](../configuration/index.html#historical).
+
+### HTTP Endpoints
+
+For a list of API endpoints supported by the Historical, please see the [API reference](../operations/api-reference.html#historical).
+
+### Running
+
+```
+org.apache.druid.cli.Main server historical
+```
+
+### Loading and Serving Segments
+
+Each Historical process maintains a constant connection to Zookeeper and watches a configurable set of Zookeeper paths for new segment information. Historical processes do not communicate directly with each other or with the Coordinator processes but instead rely on Zookeeper for coordination.
+
+The [Coordinator](../design/coordinator.html) process is responsible for assigning new segments to Historical processes. Assignment is done by creating an ephemeral Zookeeper entry under a load queue path associated with a Historical process. For more information on how the Coordinator assigns segments to Historical processes, please see [Coordinator](../design/coordinator.html).
+
+When a Historical process notices a new load queue entry in its load queue path, it will first check a local disk directory (cache) for the information about the segment. If no information about the segment exists in the cache, the Historical process will download metadata about the new segment to serve from Zookeeper. This metadata includes specifications about where the segment is located in deep storage and about how to decompress and process the segment. For more information about segmen [...]
+
+### Loading and Serving Segments From Cache
+
+Recall that when a Historical process notices a new segment entry in its load queue path, the Historical process first checks a configurable cache directory on its local disk to see if the segment had been previously downloaded. If a local cache entry already exists, the Historical process will directly read the segment binary files from disk and load the segment.
+
+The segment cache is also leveraged when a Historical process is first started. On startup, a Historical process will search through its cache directory and immediately load and serve all segments that are found. This feature allows Historical processes to be queried as soon as they come online.
+
+### Querying Segments
+
+Please see [Querying](../querying/querying.html) for more information on querying Historical processes.
+
+A Historical can be configured to log and report metrics for every query it services.
diff --git a/docs/latest/design/index.md b/docs/0.15.0-incubating/design/index.md
similarity index 84%
copy from docs/latest/design/index.md
copy to docs/0.15.0-incubating/design/index.md
index ec7e38a..191a7d6 100644
--- a/docs/latest/design/index.md
+++ b/docs/0.15.0-incubating/design/index.md
@@ -24,18 +24,23 @@ title: "Apache Druid (incubating) Design"
 
 # What is Druid?<a id="what-is-druid"></a>
 
-Apache Druid (incubating) is a data store designed for high-performance slice-and-dice analytics
-("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)"-style) on large data sets. Druid is most often
-used as a data store for powering GUI analytical applications, or as a backend for highly-concurrent APIs that need
-fast aggregations. Common application areas for Druid include:
+Apache Druid (incubating) is a real-time analytics database designed for fast slice-and-dice analytics
+("[OLAP](http://en.wikipedia.org/wiki/Online_analytical_processing)" queries) on large data sets. Druid is most often
+used as a database for powering use cases where real-time ingest, fast query performance, and high uptime are important. 
+As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly-concurrent APIs 
+that need fast aggregations. Druid works best with event-oriented data.
 
-- Clickstream analytics
-- Network flow analytics
+Common application areas for Druid include:
+
+- Clickstream analytics (web and mobile analytics)
+- Network telemetry analytics (network performance monitoring)
 - Server metrics storage
+- Supply chain analytics (manufacturing metrics)
 - Application performance metrics
-- Digital marketing analytics
+- Digital marketing/advertising analytics
 - Business intelligence / OLAP
 
+Druid's core architecture combines ideas from data warehouses, timeseries databases, and logsearch systems. Some of 
 Druid's key features are:
 
 1. **Columnar storage format.** Druid uses column-oriented storage, meaning it only needs to load the exact columns
@@ -45,7 +50,7 @@ column is stored optimized for its particular data type, which supports fast sca
 offer ingest rates of millions of records/sec, retention of trillions of records, and query latencies of sub-second to a
 few seconds.
 3. **Massively parallel processing.** Druid can process a query in parallel across the entire cluster.
-4. **Realtime or batch ingestion.** Druid can ingest data either realtime (ingested data is immediately available for
+4. **Realtime or batch ingestion.** Druid can ingest data either real-time (ingested data is immediately available for
 querying) or in batches.
 5. **Self-healing, self-balancing, easy to operate.** As an operator, to scale the cluster out or in, simply add or
 remove servers and the cluster will rebalance itself automatically, in the background, without any downtime. If any
@@ -59,11 +64,14 @@ Druid servers, replication ensures that queries are still possible while the sys
 7. **Indexes for quick filtering.** Druid uses [CONCISE](https://arxiv.org/pdf/1004.0403) or
 [Roaring](https://roaringbitmap.org/) compressed bitmap indexes to create indexes that power fast filtering and
 searching across multiple columns.
-8. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and
+8. **Time-based partitioning.** Druid first partitions data by time, and can additionally partition based on other fields. 
+This means time-based queries will only access the partitions that match the time range of the query. This leads to 
+significant performance improvements for time-based data. 
+9. **Approximate algorithms.** Druid includes algorithms for approximate count-distinct, approximate ranking, and
 computation of approximate histograms and quantiles. These algorithms offer bounded memory usage and are often
 substantially faster than exact computations. For situations where accuracy is more important than speed, Druid also
 offers exact count-distinct and exact ranking.
-9. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This
+10. **Automatic summarization at ingest time.** Druid optionally supports data summarization at ingestion time. This
 summarization partially pre-aggregates your data, and can lead to big costs savings and performance boosts.
 
 # When should I use Druid?<a id="when-to-use-druid"></a>
@@ -85,7 +93,8 @@ Situations where you would likely _not_ want to use Druid include:
 - You need low-latency updates of _existing_ records using a primary key. Druid supports streaming inserts, but not streaming updates (updates are done using
 background batch jobs).
 - You are building an offline reporting system where query latency is not very important.
-- You want to do "big" joins (joining one big fact table to another big fact table).
+- You want to do "big" joins (joining one big fact table to another big fact table) and you are okay with these queries 
+taking up to hours to complete.
 
 # Architecture
 
@@ -157,7 +166,7 @@ The following diagram shows how queries and data flow through this architecture,
 Druid data is stored in "datasources", which are similar to tables in a traditional RDBMS. Each datasource is
 partitioned by time and, optionally, further partitioned by other attributes. Each time range is called a "chunk" (for
 example, a single day, if your datasource is partitioned by day). Within a chunk, data is partitioned into one or more
-"segments". Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
+["segments"](../design/segments.html). Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
 organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:
 
 <img src="../../img/druid-timeline.png" width="800" />
@@ -183,10 +192,10 @@ cluster.
 
 # Query processing
 
-Queries first enter the Broker, where the Broker will identify which segments have data that may pertain to that query.
+Queries first enter the [Broker](../design/broker.html), where the Broker will identify which segments have data that may pertain to that query.
 The list of segments is always pruned by time, and may also be pruned by other attributes depending on how your
-datasource is partitioned. The Broker will then identify which Historicals and MiddleManagers are serving those segments
-and send a rewritten subquery to each of those processes. The Historical/MiddleManager processes will take in the
+datasource is partitioned. The Broker will then identify which [Historicals](../design/historical.html) and 
+[MiddleManagers](../design/middlemanager.html) are serving those segments and send a rewritten subquery to each of those processes. The Historical/MiddleManager processes will take in the
 queries, process them and return results. The Broker receives results and merges them together to get the final answer,
 which it returns to the original caller.
 
@@ -200,4 +209,4 @@ So Druid uses three different techniques to maximize query performance:
 
 - Pruning which segments are accessed for each query.
 - Within each segment, using indexes to identify which rows must be accessed.
-- Within each segment, only reading the specific rows and columns that are relevant to a particular query.
\ No newline at end of file
+- Within each segment, only reading the specific rows and columns that are relevant to a particular query.
diff --git a/docs/0.15.0-incubating/design/indexing-service.md b/docs/0.15.0-incubating/design/indexing-service.md
new file mode 100644
index 0000000..3c66bc1
--- /dev/null
+++ b/docs/0.15.0-incubating/design/indexing-service.md
@@ -0,0 +1,65 @@
+---
+layout: doc_page
+title: "Indexing Service"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Indexing Service
+
+The Apache Druid (incubating) indexing service is a highly-available, distributed service that runs indexing related tasks. 
+
+Indexing [tasks](../ingestion/tasks.html) create (and sometimes destroy) Druid [segments](../design/segments.html). The indexing service has a master/slave like architecture.
+
+The indexing service is composed of three main components: a [Peon](../design/peons.html) component that can run a single task, a [Middle Manager](../design/middlemanager.html) component that manages Peons, and an [Overlord](../design/overlord.html) component that manages task distribution to MiddleManagers.
+Overlords and MiddleManagers may run on the same process or across multiple processes while MiddleManagers and Peons always run on the same process.
+
+Tasks are managed using API endpoints on the Overlord service. Please see [Overlord Task API](../operations/api-reference.html#overlord-tasks) for more information.
+
+![Indexing Service](../../img/indexing_service.png "Indexing Service")
+
+<!--
+Preamble
+--------
+
+The truth is, the indexing service is an experience that is difficult to characterize with words. When they asked me to write this preamble, I was taken aback. I wasn’t quite sure what exactly to write or how to describe this… entity. I accepted the job, as much for the challenge and inner growth as the money, and took to the mountains for reflection. Six months later, I knew I had it, I was done and had achieved the next euphoric victory in the continuous struggle that plagues my life.  [...]
+
+The indexing service is philosophical transcendence, an infallible truth that will shape your soul, mold your character, and define your reality. The indexing service is creating world peace, playing with puppies, unwrapping presents on Christmas morning, cradling a loved one, and beating Goro in Mortal Kombat for the first time. The indexing service is sustainable economic growth, global propensity, and a world of transparent financial transactions. The indexing service is a true belieb [...]
+-->
+
+Overlord
+--------------
+
+See [Overlord](../design/overlord.html).
+
+Middle Managers
+---------------
+
+See [Middle Manager](../design/middlemanager.html).
+
+Peons
+-----
+
+See [Peon](../design/peons.html).
+
+Tasks
+-----
+
+See [Tasks](../ingestion/tasks.html).
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/design/middlemanager.md
similarity index 51%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/design/middlemanager.md
index adf4e24..52b193f 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/design/middlemanager.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "MiddleManager Process"
 ---
 
 <!--
@@ -22,18 +22,23 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# MiddleManager Process
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+### Configuration
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+For Apache Druid (incubating) MiddleManager Process Configuration, see [Indexing Service Configuration](../configuration/index.html#middlemanager-and-peons).
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+### HTTP Endpoints
+
+For a list of API endpoints supported by the MiddleManager, please see the [API reference](../operations/api-reference.html#middlemanager).
+
+### Overview
+
+The MiddleManager process is a worker process that executes submitted tasks. Middle Managers forward tasks to Peons that run in separate JVMs.
+The reason we have separate JVMs for tasks is for resource and log isolation. Each [Peon](../design/peons.html) is capable of running only one task at a time, however, a MiddleManager may have multiple Peons.
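+
+The number of Peons a MiddleManager may run concurrently is controlled by its worker capacity. A minimal sketch (the capacity value is illustrative, not a default):
+
+```properties
+druid.worker.capacity=4
+```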
+
+### Running
 
 ```
-druid.extensions.loadList=["druid-histogram"]
+org.apache.druid.cli.Main server middleManager
 ```
-
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
diff --git a/docs/0.15.0-incubating/design/overlord.md b/docs/0.15.0-incubating/design/overlord.md
new file mode 100644
index 0000000..139c91e
--- /dev/null
+++ b/docs/0.15.0-incubating/design/overlord.md
@@ -0,0 +1,63 @@
+---
+layout: doc_page
+title: "Overlord Process"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Overlord Process
+
+### Configuration
+
+For Apache Druid (incubating) Overlord Process Configuration, see [Overlord Configuration](../configuration/index.html#overlord).
+
+### HTTP Endpoints
+
+For a list of API endpoints supported by the Overlord, please see the [API reference](../operations/api-reference.html#overlord).
+
+### Overview
+
+The Overlord process is responsible for accepting tasks, coordinating task distribution, creating locks around tasks, and returning statuses to callers. The Overlord can be configured to run in one of two modes: local or remote (local being the default).
+In local mode, the Overlord is also responsible for creating Peons to execute tasks. When running the Overlord in local mode, all MiddleManager and Peon configurations must be provided as well.
+Local mode is typically used for simple workflows.  In remote mode, the Overlord and MiddleManager are run in separate processes and you can run each on a different server.
+This mode is recommended if you intend to use the indexing service as the single endpoint for all Druid indexing.
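+
+For example, a remote-mode Overlord can be sketched with the following properties (the values shown are illustrative, not necessarily the defaults):
+
+```properties
+druid.indexer.runner.type=remote
+druid.indexer.storage.type=metadata
+```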
+
+### Overlord Console
+
+The Overlord provides a UI for managing tasks and workers. For more details, please see [overlord console](../operations/management-uis.html#overlord-console).
+
+### Blacklisted Workers
+
+If a MiddleManager has task failures above a threshold, the Overlord will blacklist it. No more than 20% of the MiddleManagers can be blacklisted. Blacklisted MiddleManagers will be periodically whitelisted.
+
+The following variables can be used to set the threshold and blacklist timeouts.
+
+```
+druid.indexer.runner.maxRetriesBeforeBlacklist
+druid.indexer.runner.workerBlackListBackoffTime
+druid.indexer.runner.workerBlackListCleanupPeriod
+druid.indexer.runner.maxPercentageBlacklistWorkers
+```
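+
+For example, these might be set as follows (the values are illustrative only, not necessarily the defaults):
+
+```properties
+druid.indexer.runner.maxRetriesBeforeBlacklist=5
+druid.indexer.runner.workerBlackListBackoffTime=PT15M
+druid.indexer.runner.workerBlackListCleanupPeriod=PT5M
+druid.indexer.runner.maxPercentageBlacklistWorkers=20
+```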
+
+### Autoscaling
+
+The Autoscaling mechanisms currently in place are tightly coupled with our deployment infrastructure but the framework should be in place for other implementations. We are highly open to new implementations or extensions of the existing mechanisms. In our own deployments, MiddleManager processes are Amazon AWS EC2 nodes and they are provisioned to register themselves in a [galaxy](https://github.com/ning/galaxy) environment.
+
+If autoscaling is enabled, new MiddleManagers may be added when a task has been in pending state for too long. MiddleManagers may be terminated if they have not run any tasks for a period of time.
diff --git a/docs/0.15.0-incubating/design/peons.md b/docs/0.15.0-incubating/design/peons.md
new file mode 100644
index 0000000..668a26a
--- /dev/null
+++ b/docs/0.15.0-incubating/design/peons.md
@@ -0,0 +1,47 @@
+---
+layout: doc_page
+title: "Peons"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Peons
+
+### Configuration
+
+For Apache Druid (incubating) Peon Configuration, see [Peon Query Configuration](../configuration/index.html#peon-query-configuration) and [Additional Peon Configuration](../configuration/index.html#additional-peon-configuration).
+
+### HTTP Endpoints
+
+For a list of API endpoints supported by the Peon, please see the [Peon API reference](../operations/api-reference.html#peon).
+
+Peons run a single task in a single JVM. The MiddleManager is responsible for creating Peons to run tasks.
+Peons should rarely, if ever, be run on their own, except for testing purposes.
+
+### Running
+
+The Peon should very rarely be run independently of the MiddleManager, except for development purposes.
+
+```
+org.apache.druid.cli.Main internal peon <task_file> <status_file>
+```
+
+The task file contains the task JSON object.
+The status file indicates where the task status will be output.
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/design/plumber.md
similarity index 52%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/design/plumber.md
index adf4e24..944ec78 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/design/plumber.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "Apache Druid (incubating) Plumbers"
 ---
 
 <!--
@@ -22,18 +22,17 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# Apache Druid (incubating) Plumbers
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+The plumber handles generated segments both while they are being generated and when they are "done". This is also technically a pluggable interface and there are multiple implementations. However, plumbers handle numerous complex details, and therefore an advanced understanding of Druid is recommended before implementing your own.
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+Available Plumbers
+------------------
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+#### YeOldePlumber
 
-```
-druid.extensions.loadList=["druid-histogram"]
-```
+This plumber creates single historical segments.
 
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+#### RealtimePlumber
+
+This plumber creates real-time/mutable segments.
diff --git a/docs/0.15.0-incubating/design/processes.md b/docs/0.15.0-incubating/design/processes.md
new file mode 100644
index 0000000..8c2debf
--- /dev/null
+++ b/docs/0.15.0-incubating/design/processes.md
@@ -0,0 +1,131 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) Processes and Servers"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) Processes and Servers
+
+## Process Types
+
+Druid has several process types:
+
+* [Coordinator](../design/coordinator.html)
+* [Overlord](../design/overlord.html)
+* [Broker](../design/broker.html)
+* [Historical](../design/historical.html)
+* [MiddleManager](../design/middlemanager.html) and [Peons](../design/peons.html)
+* [Router (Optional)](../development/router.html) 
+
+## Server Types
+
+Druid processes can be deployed any way you like, but for ease of deployment we suggest organizing them into three server types:
+
+* **Master**
+* **Query**
+* **Data**
+
+<img src="../../img/druid-architecture.png" width="800"/>
+
+This section describes the Druid processes and the suggested Master/Query/Data server organization, as shown in the architecture diagram above.
+
+### Master server
+
+A Master server manages data ingestion and availability: it is responsible for starting new ingestion jobs and coordinating availability of data on the "Data servers" described below.
+
+Within a Master server, functionality is split between two processes, the Coordinator and Overlord.
+
+#### Coordinator process
+
+[**Coordinator**](../design/coordinator.html) processes watch over the Historical processes on the Data servers. They are responsible for assigning segments to specific servers, and for ensuring segments are well-balanced across Historicals.
+
+#### Overlord process
+
+[**Overlord**](../design/overlord.html) processes watch over the MiddleManager processes on the Data servers and are the controllers of data ingestion into Druid. They are responsible for assigning ingestion tasks to MiddleManagers and for coordinating segment publishing.
+
+### Query server
+
+A Query server provides the endpoints that users and client applications interact with, routing queries to Data servers or other Query servers (and optionally proxied Master server requests as well).
+
+Within a Query server, functionality is split between two processes, the Broker and Router.
+
+#### Broker process
+
+[**Broker**](../design/broker.html) processes receive queries from external clients and forward those queries to Data servers. When Brokers receive results from those subqueries, they merge those results and return them to the
+caller. End users typically query Brokers rather than querying Historicals or MiddleManagers processes on Data servers directly.
+
+#### Router process (optional)
+
+[**Router**](../development/router.html) processes are _optional_ processes that provide a unified API gateway in front of Druid Brokers,
+Overlords, and Coordinators. They are optional since you can also simply contact the Druid Brokers, Overlords, and
+Coordinators directly.
+
+The Router also runs the [Druid Console](../operations/management-uis.html#druid-console), a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
+
+### Data server
+
+A Data server executes ingestion jobs and stores queryable data.
+
+Within a Data server, functionality is split between two processes, the Historical and MiddleManager.
+
+#### Historical process
+
+[**Historical**](../design/historical.html) processes are the workhorses that handle storage and querying on "historical" data
+(including any streaming data that has been in the system long enough to be committed). Historical processes
+download segments from deep storage and respond to queries about these segments. They don't accept writes.
+
+#### Middle Manager process
+
+[**MiddleManager**](../design/middlemanager.html) processes handle ingestion of new data into the cluster. They are responsible
+for reading from external data sources and publishing new Druid segments.
+
+#### Peon processes
+
+[**Peon**](../design/peons.html) processes are task execution engines spawned by MiddleManagers. Each Peon runs a separate JVM and is responsible for executing a single task. Peons always run on the same host as the MiddleManager that spawned them.
+
+## Pros and cons of colocation
+
+Druid processes can be colocated based on the Master/Data/Query server organization as
+described above. This organization generally results in better utilization of
+hardware resources for most clusters.
+
+For very large scale clusters, however, it can be desirable to split the Druid processes
+such that they run on individual servers to avoid resource contention.
+
+This section describes guidelines and configuration parameters related to process colocation.
+
+### Coordinators and Overlords
+
+The workload on the Coordinator process tends to increase with the number of segments in the cluster. The Overlord's workload also increases based on the number of segments in the cluster, but to a lesser degree than the Coordinator.
+
+In clusters with very high segment counts, it can make sense to separate the Coordinator and Overlord processes to provide more resources for the Coordinator's segment balancing workload.
+
+#### Unified Process
+
+The Coordinator and Overlord processes can be run as a single combined process by setting the `druid.coordinator.asOverlord.enabled` property.
+
+Please see [Coordinator Configuration: Operation](../configuration/index.html#coordinator-operation) for details.
+
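+As a sketch, the relevant runtime.properties entries might look like the following (the `overlordService` value is an illustrative assumption; see the linked configuration page for the exact properties and defaults in your version):
+
+```
+# Run the Overlord as part of the Coordinator process (illustrative sketch)
+druid.coordinator.asOverlord.enabled=true
+druid.coordinator.asOverlord.overlordService=druid/overlord
+```
+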
+### Historicals and MiddleManagers
+
+With higher levels of ingestion or query load, it can make sense to deploy the Historical and MiddleManager processes on separate hosts to avoid CPU and memory contention.
+
+The Historical also benefits from having free memory for memory mapped segments, which can be another reason to deploy the Historical and MiddleManager processes separately.
\ No newline at end of file
diff --git a/docs/0.15.0-incubating/design/realtime.md b/docs/0.15.0-incubating/design/realtime.md
new file mode 100644
index 0000000..df6b4e0
--- /dev/null
+++ b/docs/0.15.0-incubating/design/realtime.md
@@ -0,0 +1,80 @@
+---
+layout: doc_page
+title: "Real-time Process"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Real-time Process
+
+<div class="note info">
+NOTE: Realtime processes are deprecated. Please use the <a href="../development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a> for stream pull use cases instead. 
+</div>
+
+For Apache Druid (incubating) Real-time Process Configuration, see [Realtime Configuration](../configuration/realtime.html).
+
+For Real-time Ingestion, see [Realtime Ingestion](../ingestion/stream-ingestion.html).
+
+Realtime processes provide a realtime index. Data indexed via these processes is immediately available for querying. Realtime processes will periodically build segments representing the data they’ve collected over some span of time and transfer these segments off to [Historical](../design/historical.html) processes. They use ZooKeeper to monitor the transfer and the metadata storage to store metadata about the transferred segment. Once transferred, segments are forgotten by the Realtime processes.
+
+### Running
+
+```
+org.apache.druid.cli.Main server realtime
+```
+Segment Propagation
+-------------------
+
+The segment propagation diagram for real-time data ingestion can be seen below:
+
+![Segment Propagation](../../img/segmentPropagation.png "Segment Propagation")
+
+You can read about the various components shown in this diagram under the Architecture section (see the menu on the right). Note that some of the names are now outdated.
+
+### Firehose
+
+See [Firehose](../ingestion/firehose.html).
+
+### Plumber
+
+See [Plumber](../design/plumber.html)
+
+Extending the code
+------------------
+
+Realtime integration is intended to be extended in two ways:
+
+1.  Connect to data streams from varied systems ([Firehose](https://github.com/apache/incubator-druid/blob/master/core/src/main/org/apache/druid/data/input/FirehoseFactory.java))
+2.  Adjust the publishing strategy to match your needs ([Plumber](https://github.com/apache/incubator-druid/blob/master/server/src/main/java/org/apache/druid/segment/realtime/plumber/PlumberSchool.java))
+
+The expectations are that the former will be very common and something that users of Druid will do on a fairly regular basis. Most users will probably never have to deal with the latter form of customization. Indeed, we hope that all potential use cases can be packaged up as part of Druid proper without requiring proprietary customization.
+
+Given those expectations, adding a firehose is straightforward and completely encapsulated inside of the interface. Adding a plumber is more involved and requires an understanding of how the system works to get right; it’s not impossible, but it’s not intended that individuals new to Druid will be able to do it immediately.
+
+HTTP Endpoints
+--------------
+
+The real-time process exposes several HTTP endpoints for interactions.
+
+### GET
+
+* `/status`
+
+Returns the Druid version, loaded extensions, memory used, total memory and other useful information about the process.
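+
+For example, a sketch of checking the endpoint with `curl` (the port below is a placeholder assumption; use whatever plaintext port your Realtime process is configured with):
+
+```
+curl http://localhost:8084/status
+```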
diff --git a/docs/latest/design/segments.md b/docs/0.15.0-incubating/design/segments.md
similarity index 99%
copy from docs/latest/design/segments.md
copy to docs/0.15.0-incubating/design/segments.md
index d8d69c1..adc454b 100644
--- a/docs/latest/design/segments.md
+++ b/docs/0.15.0-incubating/design/segments.md
@@ -28,7 +28,7 @@ Apache Druid (incubating) stores its index in *segment files*, which are partiti
 time. In a basic setup, one segment file is created for each time
 interval, where the time interval is configurable in the
 `segmentGranularity` parameter of the `granularitySpec`, which is
-documented [here](../ingestion/ingestion-spec.html#granularityspec).  For druid to
+documented [here](../ingestion/ingestion-spec.html#granularityspec).  For Druid to
 operate well under heavy query load, it is important for the segment
 file size to be within the recommended range of 300mb-700mb. If your
 segment files are larger than this range, then consider either
diff --git a/docs/0.15.0-incubating/development/approximate-histograms.html b/docs/0.15.0-incubating/development/approximate-histograms.html
new file mode 100644
index 0000000..8745d6b
--- /dev/null
+++ b/docs/0.15.0-incubating/development/approximate-histograms.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: extensions-core/approximate-histograms.html
+---
diff --git a/docs/0.15.0-incubating/development/build.md b/docs/0.15.0-incubating/development/build.md
new file mode 100644
index 0000000..b28a836
--- /dev/null
+++ b/docs/0.15.0-incubating/development/build.md
@@ -0,0 +1,69 @@
+---
+layout: doc_page
+title: "Build from Source"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Build from Source
+
+You can build Apache Druid (incubating) directly from source. Please note that these instructions are for building the latest stable version of Druid.
+For building the latest code in master, follow the instructions [here](https://github.com/apache/incubator-druid/blob/master/docs/content/development/build.md).
+
+
+#### Prerequisites
+
+##### Installing Java and Maven:
+- JDK 8, 8u92+. We recommend using an OpenJDK distribution that provides long-term support and open-source licensing,
+  like [Amazon Corretto](https://aws.amazon.com/corretto/) or [Azul Zulu](https://www.azul.com/downloads/zulu/).
+- [Maven version 3.x](http://maven.apache.org/download.cgi)
+
+
+
+##### Downloading the source:
+
+```bash
+git clone git@github.com:apache/incubator-druid.git
+cd incubator-druid
+```
+
+
+#### Building the source
+
+The basic command to build Druid from source is:
+
+```bash
+mvn clean install
+```
+
+This will run static analysis, unit tests, compile classes, and package the projects into JARs. It will _not_ generate the source or binary distribution tarball.
+
+In addition to the basic stages, you may also want to add the following profiles and properties:
+
+- **-Pdist** - Distribution profile: Generates the binary distribution tarball by pulling in core extensions and dependencies and packaging the files as `distribution/target/apache-druid-x.x.x-bin.tar.gz`
+- **-Papache-release** - Apache release profile: Generates GPG signature and checksums, and builds the source distribution tarball as `distribution/target/apache-druid-x.x.x-src.tar.gz`
+- **-Prat** - Apache Rat profile: Runs the Apache Rat license audit tool
+- **-DskipTests** - Skips unit tests (which reduces build time)
+
+Putting these together, if you wish to build the source and binary distributions with signatures and checksums, audit licenses, and skip the unit tests, you would run:
+
+```bash
+mvn clean install -Papache-release,dist,rat -DskipTests
+```
diff --git a/docs/0.15.0-incubating/development/community-extensions/azure.html b/docs/0.15.0-incubating/development/community-extensions/azure.html
new file mode 100644
index 0000000..4ef57bc
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/azure.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/azure.html
+---
diff --git a/docs/0.15.0-incubating/development/community-extensions/cassandra.html b/docs/0.15.0-incubating/development/community-extensions/cassandra.html
new file mode 100644
index 0000000..9435ce0
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/cassandra.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/cassandra.html
+---
diff --git a/docs/0.15.0-incubating/development/community-extensions/cloudfiles.html b/docs/0.15.0-incubating/development/community-extensions/cloudfiles.html
new file mode 100644
index 0000000..6934355
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/cloudfiles.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/cloudfiles.html
+---
diff --git a/docs/0.15.0-incubating/development/community-extensions/graphite.html b/docs/0.15.0-incubating/development/community-extensions/graphite.html
new file mode 100644
index 0000000..df5f2ce
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/graphite.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/graphite.html
+---
diff --git a/docs/0.15.0-incubating/development/community-extensions/kafka-simple.html b/docs/0.15.0-incubating/development/community-extensions/kafka-simple.html
new file mode 100644
index 0000000..e8ce17b
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/kafka-simple.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/kafka-simple.html
+---
diff --git a/docs/0.15.0-incubating/development/community-extensions/rabbitmq.html b/docs/0.15.0-incubating/development/community-extensions/rabbitmq.html
new file mode 100644
index 0000000..6da1e57
--- /dev/null
+++ b/docs/0.15.0-incubating/development/community-extensions/rabbitmq.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../extensions-contrib/rabbitmq.html
+---
diff --git a/docs/0.15.0-incubating/development/datasketches-aggregators.html b/docs/0.15.0-incubating/development/datasketches-aggregators.html
new file mode 100644
index 0000000..a1729e1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/datasketches-aggregators.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: extensions-core/datasketches-extension.html
+---
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/development/experimental.md
similarity index 55%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/development/experimental.md
index adf4e24..eb3c051 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/development/experimental.md
@@ -24,16 +24,15 @@ title: "Experimental Features"
 
 # Experimental Features
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+Features often start out with an "experimental" status, which indicates that they are still evolving.
+This can mean any of the following things:
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+1. The feature's API may change even in minor releases or patch releases.
+2. The feature may have known "missing" pieces that will be added later.
+3. The feature may or may not have received full battle-testing in production environments.
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+All experimental features are optional.
 
-```
-druid.extensions.loadList=["druid-histogram"]
-```
-
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+Note that not all of these points apply to every experimental feature. Some have been battle-tested in terms of
+implementation, but are still marked experimental due to an evolving API. Please check the documentation for each
+feature for full details.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/ambari-metrics-emitter.md b/docs/0.15.0-incubating/development/extensions-contrib/ambari-metrics-emitter.md
new file mode 100644
index 0000000..d8c3833
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/ambari-metrics-emitter.md
@@ -0,0 +1,100 @@
+---
+layout: doc_page
+title: "Ambari Metrics Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Ambari Metrics Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `ambari-metrics-emitter` extension.
+
+## Introduction
+
+This extension emits Druid metrics to an ambari-metrics carbon server.
+Events are sent after being [pickled](http://ambari-metrics.readthedocs.org/en/latest/feeding-carbon.html#the-pickle-protocol); the size of the batch is configurable. 
+
+## Configuration
+
+All the configuration parameters for ambari-metrics emitter are under `druid.emitter.ambari-metrics`.
+
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.ambari-metrics.hostname`|The hostname of the ambari-metrics server.|yes|none|
+|`druid.emitter.ambari-metrics.port`|The port of the ambari-metrics server.|yes|none|
+|`druid.emitter.ambari-metrics.protocol`|The protocol used to send metrics to ambari metrics collector. One of http/https|no|http|
+|`druid.emitter.ambari-metrics.trustStorePath`|Path to trustStore to be used for https|no|none|
+|`druid.emitter.ambari-metrics.trustStoreType`|trustStore type to be used for https|no|none|
+|`druid.emitter.ambari-metrics.trustStorePassword`|trustStore password to be used for https|no|none|
+|`druid.emitter.ambari-metrics.batchSize`|Number of events to send as one batch.|no|100|
+|`druid.emitter.ambari-metrics.eventConverter`| Filter and converter of druid events to ambari-metrics timeline event(please see next section). |yes|none|  
+|`druid.emitter.ambari-metrics.flushPeriod` | Queue flushing period in milliseconds. |no|1 minute|
+|`druid.emitter.ambari-metrics.maxQueueSize`| Maximum size of the queue used to buffer events. |no|`MAX_INT`|
+|`druid.emitter.ambari-metrics.alertEmitters`| List of emitters where alerts will be forwarded to. |no| empty list (no forwarding)|
+|`druid.emitter.ambari-metrics.emitWaitTime` | Wait time in milliseconds to try to send an event before the emitter drops it. |no|0|
+|`druid.emitter.ambari-metrics.waitForEventTime` | Time in milliseconds to wait, if necessary, for an event to become available. |no|1000 (1 sec)|
+
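+For example, a minimal runtime.properties sketch might look like the following (the emitter name, hostname, and port are placeholder assumptions; 6188 is the usual Ambari Metrics Collector port, but verify against your deployment):
+
+```
+druid.emitter=ambari-metrics
+druid.emitter.ambari-metrics.hostname=ambari-collector.example.com
+druid.emitter.ambari-metrics.port=6188
+druid.emitter.ambari-metrics.eventConverter={"type":"all", "namespacePrefix": "druid.test", "appName":"druid"}
+```
+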
+### Druid to Ambari Metrics Timeline Event Converter
+ 
+Ambari Metrics Timeline Event Converter defines a mapping from a Druid metric name plus its dimensions to a timeline event metricName.
+ambari-metrics metric path is organized using the following schema:
+`<namespacePrefix>.[<druid service name>].[<druid hostname>].<druid metrics dimensions>.<druid metrics name>`
+Properly naming the metrics is critical to avoid conflicts, confusing data and potentially wrong interpretation later on.
+
+Example `druid.historical.hist-host1:8080.MyDataSourceName.GroupBy.query/time`:
+
+ * `druid` -> namespace prefix 
+ * `historical` -> service name 
+ * `hist-host1:8080` -> druid hostname
+ * `MyDataSourceName` -> dimension value 
+ * `GroupBy` -> dimension value
+ * `query/time` -> metric name
+
+We have two different implementations of the event converter:
+
+#### Send-All converter
+
+The first implementation, called `all`, will send all the Druid service metric events. 
+The path will be in the form `<namespacePrefix>.[<druid service name>].[<druid hostname>].<dimensions values ordered by dimension's name>.<metric>`.
+The user has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].`
+
+```json
+
+druid.emitter.ambari-metrics.eventConverter={"type":"all", "namespacePrefix": "druid.test", "appName":"druid"}
+
+```
+
+#### White-list based converter
+
+The second implementation, called `whiteList`, will send only the white-listed metrics and dimensions.
+As with the `all` converter, the user has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].`
+The white-list based converter comes with the following default white list map, located under resources in `./src/main/resources/defaultWhiteListMap.json`.
+
+The user can override the default white list map by supplying a property called `mapPath`.
+This property is a String containing the path of the file containing the **white list map JSON object**.
+For example, the following converter will read the map from the file `/pathPrefix/fileName.json`.  
+
+```json
+
+druid.emitter.ambari-metrics.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "appName":"druid", "mapPath":"/pathPrefix/fileName.json"}
+
+```
+
+**Druid emits a huge number of metrics; we highly recommend using the `whiteList` converter.**
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/azure.md b/docs/0.15.0-incubating/development/extensions-contrib/azure.md
new file mode 100644
index 0000000..6bdb020
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/azure.md
@@ -0,0 +1,95 @@
+---
+layout: doc_page
+title: "Microsoft Azure"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Microsoft Azure
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-azure-extensions` extension.
+
+## Deep Storage
+
+[Microsoft Azure Storage](http://azure.microsoft.com/en-us/services/storage/) is another option for deep storage. This requires some additional Druid configuration.
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|azure||Must be set.|
+|`druid.azure.account`||Azure Storage account name.|Must be set.|
+|`druid.azure.key`||Azure Storage account key.|Must be set.|
+|`druid.azure.container`||Azure Storage container name.|Must be set.|
+|`druid.azure.protocol`|http or https||https|
+|`druid.azure.maxTries`||Number of tries before cancelling an Azure operation.|3|
+
+See [Azure Services](http://azure.microsoft.com/en-us/pricing/free-trial/) for more information.
+
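+For example, a common.runtime.properties sketch for Azure deep storage might look like this (the account, key, and container values are placeholders):
+
+```
+druid.storage.type=azure
+druid.azure.account=<your storage account name>
+druid.azure.key=<your storage account key>
+druid.azure.container=druid-segments
+```
+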
+## Firehose
+
+#### StaticAzureBlobStoreFirehose
+
+This firehose ingests events, similar to the StaticS3Firehose, but from an Azure Blob Store.
+
+Data is newline delimited, with one JSON object per line and parsed as per the `InputRowParser` configuration.
+
+The storage account is shared with the one used for Azure deep storage functionality, but blobs can be in a different container.
+
+As with the S3 blobstore, it is assumed to be gzipped if the extension ends in .gz
+
+This firehose is _splittable_ and can be used by [native parallel index tasks](../../ingestion/native_tasks.html#parallel-index-task).
+Since each split represents an object in this firehose, each worker task of `index_parallel` will read an object.
+
+Sample spec:
+
+```json
+"firehose" : {
+    "type" : "static-azure-blobstore",
+    "blobs": [
+        {
+          "container": "container",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "container": "anothercontainer",
+          "path": "/another/path.json"
+        }
+    ]
+}
+```
+
+This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow.
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|This should be `static-azure-blobstore`.|N/A|yes|
+|blobs|JSON array of [Azure blobs](https://msdn.microsoft.com/en-us/library/azure/ee691964.aspx).|N/A|yes|
+|maxCacheCapacityBytes|Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.|1073741824|no|
+|maxFetchCapacityBytes|Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.|1073741824|no|
+|prefetchTriggerBytes|Threshold to trigger prefetching Azure objects.|maxFetchCapacityBytes / 2|no|
+|fetchTimeout|Timeout for fetching an Azure object.|60000|no|
+|maxFetchRetry|Maximum retry for fetching an Azure object.|3|no|
+
+Azure Blobs:
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|container|Name of the azure [container](https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/#create-a-container)|N/A|yes|
+|path|The path where data is located.|N/A|yes|
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/development/extensions-contrib/cassandra.md
similarity index 52%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/development/extensions-contrib/cassandra.md
index adf4e24..2bbf641 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/development/extensions-contrib/cassandra.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "Apache Cassandra"
 ---
 
 <!--
@@ -22,18 +22,10 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# Apache Cassandra
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-cassandra-storage` extension.
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
-
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
-
-```
-druid.extensions.loadList=["druid-histogram"]
-```
-
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+[Apache Cassandra](http://www.datastax.com/what-we-offer/products-services/datastax-enterprise/apache-cassandra) can also 
+be leveraged for deep storage.  This requires some additional Druid configuration as well as setting up the necessary 
+schema within a Cassandra keystore.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/cloudfiles.md b/docs/0.15.0-incubating/development/extensions-contrib/cloudfiles.md
new file mode 100644
index 0000000..ad11caa
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/cloudfiles.md
@@ -0,0 +1,97 @@
+---
+layout: doc_page
+title: "Rackspace Cloud Files"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Rackspace Cloud Files
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-cloudfiles-extensions` extension.
+
+## Deep Storage
+
+[Rackspace Cloud Files](http://www.rackspace.com/cloud/files/) is another option for deep storage. This requires some additional Druid configuration.
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|cloudfiles||Must be set.|
+|`druid.storage.region`||Rackspace Cloud Files region.|Must be set.|
+|`druid.storage.container`||Rackspace Cloud Files container name.|Must be set.|
+|`druid.storage.basePath`||Rackspace Cloud Files base path to use in the container.|Must be set.|
+|`druid.storage.operationMaxRetries`||Number of tries before cancelling a Rackspace operation.|10|
+|`druid.cloudfiles.userName`||Rackspace Cloud username|Must be set.|
+|`druid.cloudfiles.apiKey`||Rackspace Cloud api key.|Must be set.|
+|`druid.cloudfiles.provider`|rackspace-cloudfiles-us,rackspace-cloudfiles-uk|Name of the provider depending on the region.|Must be set.|
+|`druid.cloudfiles.useServiceNet`|true,false|Whether to use the internal service net.|true|
+
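+For example, a common.runtime.properties sketch for Cloud Files deep storage might look like this (the region, container, path, and credential values are placeholders):
+
+```
+druid.storage.type=cloudfiles
+druid.storage.region=DFW
+druid.storage.container=druid-segments
+druid.storage.basePath=druid/segments
+druid.cloudfiles.userName=<your rackspace username>
+druid.cloudfiles.apiKey=<your rackspace api key>
+druid.cloudfiles.provider=rackspace-cloudfiles-us
+```
+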
+## Firehose
+
+#### StaticCloudFilesFirehose
+
+This firehose ingests events, similar to the StaticAzureBlobStoreFirehose, but from Rackspace's Cloud Files.
+
+Data is newline delimited, with one JSON object per line and parsed as per the `InputRowParser` configuration.
+
+The storage account is shared with the one used for Rackspace's Cloud Files deep storage functionality, but blobs can be in a different region and container.
+
+As with the Azure blobstore, it is assumed to be gzipped if the extension ends in .gz
+
+This firehose is _splittable_ and can be used by [native parallel index tasks](../../ingestion/native_tasks.html#parallel-index-task).
+Since each split represents an object in this firehose, each worker task of `index_parallel` will read an object.
+
+Sample spec:
+
+```json
+"firehose" : {
+    "type" : "static-cloudfiles",
+    "blobs": [
+        {
+          "region": "DFW"
+          "container": "container",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "region": "ORD"
+          "container": "anothercontainer",
+          "path": "/another/path.json"
+        }
+    ]
+}
+```
+This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow.
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|This should be `static-cloudfiles`.|N/A|yes|
+|blobs|JSON array of Cloud Files blobs.|N/A|yes|
+|maxCacheCapacityBytes|Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.|1073741824|no|
+|maxFetchCapacityBytes|Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.|1073741824|no|
+|fetchTimeout|Timeout for fetching a Cloud Files object.|60000|no|
+|maxFetchRetry|Maximum retry for fetching a Cloud Files object.|3|no|
+
+Cloud Files Blobs:
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|container|Name of the Cloud Files container|N/A|yes|
+|path|The path where data is located.|N/A|yes|
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/distinctcount.md b/docs/0.15.0-incubating/development/extensions-contrib/distinctcount.md
new file mode 100644
index 0000000..a392360
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/distinctcount.md
@@ -0,0 +1,99 @@
+---
+layout: doc_page
+title: "DistinctCount Aggregator"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# DistinctCount Aggregator
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) the `druid-distinctcount` extension.
+
+Additionally, follow these steps:
+
+(1) First, use a single-dimension, hash-based partition spec to partition the data by a single dimension, for example `visitor_id`. This ensures that all rows with a particular value of that dimension go into the same segment; otherwise the aggregator might over count.
+(2) Second, use distinctCount to calculate the distinct count. Make sure `queryGranularity` divides `segmentGranularity` evenly, or else the result will be wrong.
+
+There are some limitations. When used with groupBy, the number of groupBy keys should not exceed `maxIntermediateRows` in any segment; if it does, the result will be wrong. When used with topN, `numValuesPerPass` should not be too big; if it is, distinctCount will use a lot of memory and might cause the JVM to run out of memory.
+
+Example:
+# Timeseries Query
+
+```json
+{
+  "queryType": "timeseries",
+  "dataSource": "sample_datasource",
+  "granularity": "day",
+  "aggregations": [
+    {
+      "type": "distinctCount",
+      "name": "uv",
+      "fieldName": "visitor_id"
+    }
+  ],
+  "intervals": [
+    "2016-03-01T00:00:00.000/2013-03-20T00:00:00.000"
+  ]
+}
+```
+
+# TopN Query
+
+```json
+{
+  "queryType": "topN",
+  "dataSource": "sample_datasource",
+  "dimension": "sample_dim",
+  "threshold": 5,
+  "metric": "uv",
+  "granularity": "all",
+  "aggregations": [
+    {
+      "type": "distinctCount",
+      "name": "uv",
+      "fieldName": "visitor_id"
+    }
+  ],
+  "intervals": [
+    "2016-03-06T00:00:00/2016-03-06T23:59:59"
+  ]
+}
+```
+
+# GroupBy Query
+
+```json
+{
+  "queryType": "groupBy",
+  "dataSource": "sample_datasource",
+  "dimensions": "[sample_dim]",
+  "granularity": "all",
+  "aggregations": [
+    {
+      "type": "distinctCount",
+      "name": "uv",
+      "fieldName": "visitor_id"
+    }
+  ],
+  "intervals": [
+    "2016-03-06T00:00:00/2016-03-06T23:59:59"
+  ]
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/google.md b/docs/0.15.0-incubating/development/extensions-contrib/google.md
new file mode 100644
index 0000000..ac49ff1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/google.md
@@ -0,0 +1,89 @@
+---
+layout: doc_page
+title: "Google Cloud Storage"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Google Cloud Storage
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-google-extensions` extension.
+
+## Deep Storage
+
+Deep storage can be written to Google Cloud Storage either via this extension or the [druid-hdfs-storage extension](../extensions-core/hdfs.html).
+
+### Configuration
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|google||Must be set.|
+|`druid.google.bucket`||GCS bucket name.|Must be set.|
+|`druid.google.prefix`||GCS prefix.|Must be set.|
+
+
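+For example, a common.runtime.properties sketch for GCS deep storage might look like this (the bucket and prefix values are placeholders):
+
+```
+druid.storage.type=google
+druid.google.bucket=my-druid-deep-storage
+druid.google.prefix=druid/segments
+```
+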
+## Firehose
+
+#### StaticGoogleBlobStoreFirehose
+
+This firehose ingests events, similar to the StaticS3Firehose, but from Google Cloud Storage.
+
+As with the S3 blobstore, it is assumed to be gzipped if the extension ends in .gz
+
+This firehose is _splittable_ and can be used by [native parallel index tasks](../../ingestion/native_tasks.html#parallel-index-task).
+Since each split represents an object in this firehose, each worker task of `index_parallel` will read an object.
+
+Sample spec:
+
+```json
+"firehose" : {
+    "type" : "static-google-blobstore",
+    "blobs": [
+        {
+          "bucket": "foo",
+          "path": "/path/to/your/file.json"
+        },
+        {
+          "bucket": "bar",
+          "path": "/another/path.json"
+        }
+    ]
+}
+```
+
+This firehose provides caching and prefetching features. In IndexTask, a firehose can be read twice if intervals or
+shardSpecs are not specified, and, in this case, caching can be useful. Prefetching is preferred when direct scan of objects is slow.
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|This should be `static-google-blobstore`.|N/A|yes|
+|blobs|JSON array of Google Blobs.|N/A|yes|
+|maxCacheCapacityBytes|Maximum size of the cache space in bytes. 0 means disabling cache. Cached files are not removed until the ingestion task completes.|1073741824|no|
+|maxFetchCapacityBytes|Maximum size of the fetch space in bytes. 0 means disabling prefetch. Prefetched files are removed immediately once they are read.|1073741824|no|
+|prefetchTriggerBytes|Threshold to trigger prefetching Google Blobs.|maxFetchCapacityBytes / 2|no|
+|fetchTimeout|Timeout for fetching a Google Blob.|60000|no|
+|maxFetchRetry|Maximum retry for fetching a Google Blob.|3|no|
+
+Google Blobs:
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|bucket|Name of the Google Cloud bucket|N/A|yes|
+|path|The path where data is located.|N/A|yes|
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/graphite.md b/docs/0.15.0-incubating/development/extensions-contrib/graphite.md
new file mode 100644
index 0000000..deac93a
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/graphite.md
@@ -0,0 +1,118 @@
+---
+layout: doc_page
+title: "Graphite Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Graphite Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `graphite-emitter` extension.
+
+## Introduction
+
+This extension emits druid metrics to a graphite carbon server.
+Metrics can be sent by using [plaintext](http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-plaintext-protocol) or [pickle](http://graphite.readthedocs.io/en/latest/feeding-carbon.html#the-pickle-protocol) protocol.
+The pickle protocol is more efficient and supports sending batches of metrics in one request (the plaintext protocol sends only one metric per request); the batch size is configurable.
+
+## Configuration
+
+All the configuration parameters for graphite emitter are under `druid.emitter.graphite`.
+
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.graphite.hostname`|The hostname of the graphite server.|yes|none|
+|`druid.emitter.graphite.port`|The port of the graphite server.|yes|none|
+|`druid.emitter.graphite.batchSize`|Number of events to send as one batch (only for pickle protocol)|no|100|
+|`druid.emitter.graphite.protocol`|Graphite protocol; available protocols: pickle, plaintext.|no|pickle|
+|`druid.emitter.graphite.eventConverter`| Filter and converter of druid events to graphite event (please see next section).|yes|none|
+|`druid.emitter.graphite.flushPeriod` | Queue flushing period in milliseconds. |no|1 minute|
+|`druid.emitter.graphite.maxQueueSize`| Maximum size of the queue used to buffer events. |no|`MAX_INT`|
+|`druid.emitter.graphite.alertEmitters`| List of emitters where alerts will be forwarded to. This is a JSON list of emitter names, e.g. `["logging", "http"]`|no| empty list (no forwarding)|
+|`druid.emitter.graphite.requestLogEmitters`| List of emitters where request logs (i.e., query logging events sent to emitters when `druid.request.logging.type` is set to `emitter`) will be forwarded to. This is a JSON list of emitter names, e.g. `["logging", "http"]`|no| empty list (no forwarding)|
+|`druid.emitter.graphite.emitWaitTime` | Wait time in milliseconds to try to send an event before the emitter drops it. |no|0|
+|`druid.emitter.graphite.waitForEventTime` | Time in milliseconds to wait, if necessary, for an event to become available. |no|1000 (1 sec)|
+
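+For example, a minimal runtime.properties sketch might look like the following (the emitter name and host are placeholder assumptions; port 2004 is the conventional carbon pickle port and 2003 the plaintext port, so adjust to your setup):
+
+```
+druid.emitter=graphite
+druid.emitter.graphite.hostname=graphite.example.com
+druid.emitter.graphite.port=2004
+druid.emitter.graphite.protocol=pickle
+druid.emitter.graphite.eventConverter={"type":"all", "namespacePrefix": "druid.test"}
+```
+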
+### Supported event types
+
+The graphite emitter only emits service metric events to graphite (See [Druid Metrics](../../operations/metrics.html) for a list of metrics).
+
+Alerts and request logs are not sent to graphite. These event types are not well represented in Graphite, which is more suited for timeseries views on numeric metrics, vs. storing non-numeric log events.
+
+Instead, alerts and request logs are optionally forwarded to other emitter implementations, specified by `druid.emitter.graphite.alertEmitters` and `druid.emitter.graphite.requestLogEmitters` respectively.
+
+### Druid to Graphite Event Converter
+ 
+Graphite Event Converter defines a mapping from a Druid metric name plus its dimensions to a Graphite metric path.
+Graphite metric path is organized using the following schema:
+`<namespacePrefix>.[<druid service name>].[<druid hostname>].<druid metrics dimensions>.<druid metrics name>`
+Properly naming the metrics is critical to avoid conflicts, confusing data and potentially wrong interpretation later on.
+
+Example `druid.historical.hist-host1_yahoo_com:8080.MyDataSourceName.GroupBy.query/time`:
+
+ * `druid` -> namespace prefix 
+ * `historical` -> service name 
+ * `hist-host1.yahoo.com:8080` -> druid hostname
+ * `MyDataSourceName` -> dimension value 
+ * `GroupBy` -> dimension value
+ * `query/time` -> metric name
+
+We have two different implementations of the event converter:
+
+#### Send-All converter
+
+The first implementation, called `all`, will send all the Druid service metric events. 
+The path will be in the form `<namespacePrefix>.[<druid service name>].[<druid hostname>].<dimensions values ordered by dimension's name>.<metric>`.
+The user has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].`
+
+You can omit the hostname by setting `ignoreHostname=true`
+`druid.SERVICE_NAME.dataSourceName.queryType.query/time`
+
+You can omit the service name by setting `ignoreServiceName=true`
+`druid.HOSTNAME.dataSourceName.queryType.query/time`
+
+Elements in the metric name are separated by "/" by default, so Graphite will create all metrics on one level. If you want metrics in a tree structure, you have to set `replaceSlashWithDot=true`.
+Original: `druid.HOSTNAME.dataSourceName.queryType.query/time`
+Changed: `druid.HOSTNAME.dataSourceName.queryType.query.time`
+
+
+```json
+
+druid.emitter.graphite.eventConverter={"type":"all", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true}
+
+```
+
+#### White-list based converter
+
+The second implementation, called `whiteList`, will send only the white-listed metrics and dimensions.
+As with the `all` converter, the user has control of `<namespacePrefix>.[<druid service name>].[<druid hostname>].`
+The white-list based converter comes with the following default white list map, located under resources in `./src/main/resources/defaultWhiteListMap.json`.
+
+The user can override the default white list map by supplying a property called `mapPath`.
+This property is a String containing the path of the file containing the **white list map JSON object**.
+For example, the following converter will read the map from the file `/pathPrefix/fileName.json`.  
+
+```json
+
+druid.emitter.graphite.eventConverter={"type":"whiteList", "namespacePrefix": "druid.test", "ignoreHostname":true, "ignoreServiceName":true, "mapPath":"/pathPrefix/fileName.json"}
+
+```
+
+**Druid emits a huge number of metrics; we highly recommend using the `whiteList` converter.**
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/influx.md b/docs/0.15.0-incubating/development/extensions-contrib/influx.md
new file mode 100644
index 0000000..c5c071b
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/influx.md
@@ -0,0 +1,66 @@
+---
+layout: doc_page
+title: "InfluxDB Line Protocol Parser"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# InfluxDB Line Protocol Parser
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-influx-extensions`.
+
+This extension enables Druid to parse the [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v1.5/write_protocols/line_protocol_tutorial/), a popular text-based timeseries metric serialization format. 
+
+## Line Protocol
+
+A typical line looks like this:
+
+```cpu,application=dbhost=prdb123,region=us-east-1 usage_idle=99.24,usage_user=0.55 1520722030000000000```
+
+which contains four parts:
+  - measurement: A string indicating the name of the measurement represented (e.g. cpu, network, web_requests)
+  - tags: zero or more key-value pairs (i.e. dimensions)
+  - measurements: one or more key-value pairs; values can be numeric, boolean, or string
+  - timestamp: nanoseconds since Unix epoch (the parser truncates it to milliseconds)
+
+The parser extracts these fields into a map, giving the measurement the key `measurement` and the timestamp the key `_ts`. The tag and measurement keys are copied verbatim, so users should take care to avoid name collisions. It is up to the ingestion spec to decide which fields should be treated as dimensions and which should be treated as metrics (typically tags correspond to dimensions and measurements correspond to metrics).
+
+The parser is configured like so:
+```json
+"parser": {
+      "type": "string",
+      "parseSpec": {
+        "format": "influx",
+        "timestampSpec": {
+          "column": "__ts",
+          "format": "millis"
+        },
+        "dimensionsSpec": {
+          "dimensionExclusions": [
+            "__ts"
+          ]
+        },
+        "whitelistMeasurements": [
+          "cpu"
+        ]
+      }
+}
+```
+
+The `whitelistMeasurements` field is an optional list of strings. If present, measurements that do not match one of the strings in the list will be ignored.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.md b/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.md
new file mode 100644
index 0000000..138a0bb
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/influxdb-emitter.md
@@ -0,0 +1,75 @@
+---
+layout: doc_page
+title: "InfluxDB Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# InfluxDB Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-influxdb-emitter` extension.
+
+## Introduction
+
+This extension emits druid metrics to [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) over HTTP. Currently this emitter only emits service metric events to InfluxDB (See [Druid metrics](../../operations/metrics.html) for a list of metrics).
+When a metric event is fired it is added to a queue of events. After a configurable amount of time, the events on the queue are transformed to InfluxDB's line protocol 
+and POSTed to the InfluxDB HTTP API. The entire queue is flushed at this point. The queue is also flushed as the emitter is shutdown.
+
+Note that authentication and authorization must be [enabled](https://docs.influxdata.com/influxdb/v1.7/administration/authentication_and_authorization/) on the InfluxDB server.
+
+## Configuration
+
+All the configuration parameters for the influxdb emitter are under `druid.emitter.influxdb`.
+
+|Property|Description|Required?|Default|
+|--------|-----------|---------|-------|
+|`druid.emitter.influxdb.hostname`|The hostname of the InfluxDB server.|Yes|N/A|
+|`druid.emitter.influxdb.port`|The port of the InfluxDB server.|No|8086|
+|`druid.emitter.influxdb.databaseName`|The name of the database in InfluxDB.|Yes|N/A|
+|`druid.emitter.influxdb.maxQueueSize`|The size of the queue that holds events.|No|`Integer.MAX_VALUE` (2^31 - 1)|
+|`druid.emitter.influxdb.flushPeriod`|How often (in milliseconds) the events queue is parsed into Line Protocol and POSTed to InfluxDB.|No|60000|
+|`druid.emitter.influxdb.flushDelay`|How long (in milliseconds) the scheduled method will wait until it first runs.|No|60000|
+|`druid.emitter.influxdb.influxdbUserName`|The username for authenticating with the InfluxDB database.|Yes|N/A|
+|`druid.emitter.influxdb.influxdbPassword`|The password of the database authorized user|Yes|N/A|
+|`druid.emitter.influxdb.dimensionWhitelist`|A whitelist of metric dimensions to include as tags|No|`["dataSource","type","numMetrics","numDimensions","threshold","dimension","taskType","taskStatus","tier"]`|
+
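+For example, a minimal runtime.properties sketch might look like the following (the emitter name, host, database, and credential values are placeholder assumptions):
+
+```
+druid.emitter=influxdb
+druid.emitter.influxdb.hostname=influxdb.example.com
+druid.emitter.influxdb.port=8086
+druid.emitter.influxdb.databaseName=druid_metrics
+druid.emitter.influxdb.influxdbUserName=druid
+druid.emitter.influxdb.influxdbPassword=<password>
+```
+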
+## InfluxDB Line Protocol
+
+An example of how this emitter parses a Druid metric event into InfluxDB's [line protocol](https://docs.influxdata.com/influxdb/v1.7/write_protocols/line_protocol_reference/) is given here: 
+
+The syntax of the line protocol is :  
+
+`<measurement>[,<tag_key>=<tag_value>[,<tag_key>=<tag_value>]] <field_key>=<field_value>[,<field_key>=<field_value>] [<timestamp>]`
+ 
+where timestamp is in nano-seconds since epoch.
+
+A typical service metric event as recorded by Druid's logging emitter is: `Event [{"feed":"metrics","timestamp":"2017-10-31T09:09:06.857Z","service":"druid/historical","host":"historical001:8083","version":"0.11.0-SNAPSHOT","metric":"query/cache/total/hits","value":34787256}]`.
+
+This event is parsed into line protocol according to these rules:
+
+* The measurement becomes `druid_query`, since `query` is the first part of the metric name.
+* The tags are `service=druid/historical`, `hostname=historical001`, and `metric=druid_cache_total`. (The `metric` tag is the middle part of the Druid metric name, joined with `_` and prefixed with `druid_`. If an event has `metric=query/time`, there is no middle part and hence no `metric` tag.)
+* The field is `druid_hits`, since `hits` is the last part of the metric name.
+
+This gives the following String which can be POSTed to InfluxDB: `"druid_query,service=druid/historical,hostname=historical001,metric=druid_cache_total druid_hits=34787256 1509440946857000000"`
+
+The InfluxDB emitter has a whitelist of dimensions
+which are added as tags to the line protocol string if the metric carries a dimension from the whitelist.
+The value of the dimension is sanitized such that every occurrence of a dot or whitespace character is replaced with a `_`.
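+
+For example, under these rules a hypothetical `query/time` event carrying a whitelisted `dataSource` dimension with value `wikipedia 2015.09` would be emitted roughly as (values illustrative):
+
+```
+druid_query,service=druid/broker,hostname=broker001,dataSource=wikipedia_2015_09 druid_time=319 1509440946857000000
+```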
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/kafka-emitter.md b/docs/0.15.0-incubating/development/extensions-contrib/kafka-emitter.md
new file mode 100644
index 0000000..a059306
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/kafka-emitter.md
@@ -0,0 +1,55 @@
+---
+layout: doc_page
+title: "Kafka Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Kafka Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `kafka-emitter` extension.
+
+## Introduction
+
+This extension emits Druid metrics to [Apache Kafka](https://kafka.apache.org) directly, in JSON format.<br>
+Kafka has a rich ecosystem as well as a readily available consumer API,
+so if you already use Kafka, this extension makes it easy to integrate various tools or UIs
+to monitor the status of your Druid cluster.
+
+## Configuration
+
+All the configuration parameters for the Kafka emitter are under `druid.emitter.kafka`.
+
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.kafka.bootstrap.servers`|Comma-separated list of Kafka brokers (`[hostname:port],[hostname:port]...`).|yes|none|
+|`druid.emitter.kafka.metric.topic`|Kafka topic to which the emitter sends service metrics.|yes|none|
+|`druid.emitter.kafka.alert.topic`|Kafka topic to which the emitter sends alerts.|yes|none|
+|`druid.emitter.kafka.producer.config`|JSON configuration of additional properties to pass to the Kafka producer.|no|none|
+|`druid.emitter.kafka.clusterName`|Optional name of your Druid cluster. Useful for grouping metrics in your monitoring environment.|no|none|
+
+### Example
+
+```
+druid.emitter.kafka.bootstrap.servers=hostname1:9092,hostname2:9092
+druid.emitter.kafka.metric.topic=druid-metric
+druid.emitter.kafka.alert.topic=druid-alert
+druid.emitter.kafka.producer.config={"max.block.ms":10000}
+```
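+
+The snippet above configures only the emitter itself. As a sketch (selecting the emitter with `druid.emitter=kafka` is an assumption to verify against your emitter configuration), you would also load the extension and select the emitter:
+
+```
+druid.extensions.loadList=["kafka-emitter"]
+druid.emitter=kafka
+```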
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/kafka-simple.md b/docs/0.15.0-incubating/development/extensions-contrib/kafka-simple.md
new file mode 100644
index 0000000..3211efe
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/kafka-simple.md
@@ -0,0 +1,56 @@
+---
+layout: doc_page
+title: "Kafka Simple Consumer"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Kafka Simple Consumer
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-kafka-eight-simpleConsumer` extension.
+
+## Firehose
+
+This is an experimental firehose to ingest data from Apache Kafka using the Kafka simple consumer API. Currently, this firehose only works inside standalone realtime processes.
+The configuration for KafkaSimpleConsumerFirehose is similar to the Kafka Eight Firehose, except that `firehose` should be replaced with `firehoseV2`, like this:
+
+```json
+"firehoseV2": {
+  "type" : "kafka-0.8-v2",
+  "brokerList" :  ["localhost:4443"],
+  "queueBufferLength":10001,
+  "resetOffsetToEarliest":"true",
+  "partitionIdList" : ["0"],
+  "clientId" : "localclient",
+  "feed": "wikipedia"
+}
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This should be "kafka-0.8-v2".|yes|
+|brokerList|List of the Kafka brokers.|yes|
+|queueBufferLength|The buffer length for the Kafka message queue.|no (default 20000)|
+|resetOffsetToEarliest|Whether the consumer should start from the earliest (rather than the latest) available message when a kafkaOffsetOutOfRange error happens.|no (default true)|
+|partitionIdList|List of Kafka partition ids.|yes|
+|clientId|The clientId for the Kafka SimpleConsumer.|yes|
+|feed|Kafka topic.|yes|
+
+To use this firehose at scale, and possibly in production, it is recommended to set the replication factor to at least three, which means at least three Kafka brokers in the `brokerList`. For a Kafka topic carrying on the order of 10^4 events per second, a single partition can work properly, but more partitions can be added if higher throughput is required.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/materialized-view.md b/docs/0.15.0-incubating/development/extensions-contrib/materialized-view.md
new file mode 100644
index 0000000..95bfde9
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/materialized-view.md
@@ -0,0 +1,134 @@
+---
+layout: doc_page
+title: "Materialized View"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Materialized View
+
+To use this Apache Druid (incubating) feature, make sure to only load `materialized-view-selection` on Broker and load `materialized-view-maintenance` on Overlord. In addition, this feature currently requires a Hadoop cluster.
+
+This feature enables Druid to greatly improve query performance, especially when the query dataSource has a very large number of dimensions but the query only requires a few of them. This feature includes two parts. One is `materialized-view-maintenance`, and the other is `materialized-view-selection`.
+
+## Materialized-view-maintenance
+In materialized-view-maintenance, the dataSources that users ingest are called "base-dataSources". For each base-dataSource, we can submit `derivativeDataSource` supervisors to create and maintain other dataSources, which we call "derived-dataSources". The dimensions and metrics of derived-dataSources are a subset of the base-dataSource's.
+The `derivativeDataSource` supervisor is used to keep the timeline of the derived-dataSource consistent with the base-dataSource. Each `derivativeDataSource` supervisor is responsible for one derived-dataSource.
+
+A sample derivativeDataSource supervisor spec is shown below:
+```json
+   {
+       "type": "derivativeDataSource",
+       "baseDataSource": "wikiticker",
+       "dimensionsSpec": {
+           "dimensions": [
+               "isUnpatrolled",
+               "metroCode",
+               "namespace",
+               "page",
+               "regionIsoCode",
+               "regionName",
+               "user"
+           ]
+       },
+       "metricsSpec": [
+           {
+               "name": "count",
+               "type": "count"
+           },
+           {
+               "name": "added",
+               "type": "longSum",
+               "fieldName": "added"
+           }
+       ],
+       "tuningConfig": {
+           "type": "hadoop"
+       }
+   }
+```
+
+**Supervisor Configuration**
+
+|Field|Description|Required|
+|--------|-----------|---------|
+|type|The supervisor type. This should always be `derivativeDataSource`.|yes|
+|baseDataSource|The name of the base dataSource. This dataSource's data should already be stored inside Druid, and it will be used as the input data.|yes|
+|dimensionsSpec|Specifies the dimensions of the data. These dimensions must be a subset of the baseDataSource's dimensions.|yes|
+|metricsSpec|A list of aggregators. These metrics must be a subset of the baseDataSource's metrics. See [aggregations](../../querying/aggregations.html).|yes|
+|tuningConfig|TuningConfig must be HadoopTuningConfig. See [Hadoop tuning config](../../ingestion/hadoop.html#tuningconfig).|yes|
+|dataSource|The name of this derived dataSource.|no (default=baseDataSource-hashCode of supervisor)|
+|hadoopDependencyCoordinates|A JSON array of Hadoop dependency coordinates that Druid will use; this property overrides the default Hadoop coordinates. Once specified, Druid will look for those Hadoop dependencies in the location specified by `druid.extensions.hadoopDependenciesDir`.|no|
+|classpathPrefix|Classpath that will be prepended for the Peon process.|no|
+|context|See below.|no|
+
+**Context**
+
+|Field|Description|Required|
+|--------|-----------|---------|
+|maxTaskCount|The max number of tasks the supervisor can submit simultaneously.|no (default=1)|
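+
+As a sketch of how such a supervisor spec might be submitted (the file name and Overlord address are placeholders; this assumes the standard supervisor endpoint accepts `derivativeDataSource` specs):
+
+```bash
+curl -X POST -H 'Content-Type: application/json' \
+  -d @derivative-datasource-spec.json \
+  http://OVERLORD_IP:8090/druid/indexer/v1/supervisor
+```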
+
+##  Materialized-view-selection
+
+In materialized-view-selection, we implement a new query type, `view`. When we request a view query, Druid will try its best to optimize the query based on the query dataSource and intervals.
+
+A sample view query spec is shown below:
+```json
+   {
+       "queryType": "view",
+       "query": {
+           "queryType": "groupBy",
+           "dataSource": "wikiticker",
+           "granularity": "all",
+           "dimensions": [
+               "user"
+           ],
+           "limitSpec": {
+               "type": "default",
+               "limit": 1,
+               "columns": [
+                   {
+                       "dimension": "added",
+                       "direction": "descending",
+                       "dimensionOrder": "numeric"
+                   }
+               ]
+           },
+           "aggregations": [
+               {
+                   "type": "longSum",
+                   "name": "added",
+                   "fieldName": "added"
+               }
+           ],
+           "intervals": [
+               "2015-09-12/2015-09-13"
+           ]
+       }
+   }
+```
+There are 2 parts in a view query:
+
+|Field|Description|Required|
+|--------|-----------|---------|
+|queryType|The query type. This should always be `view`.|yes|
+|query|The real query of this `view` query. The real query must be of [groupBy](../../querying/groupbyquery.html), [topN](../../querying/topnquery.html), or [timeseries](../../querying/timeseriesquery.html) type.|yes|
+
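+Like any other Druid query, a `view` query is POSTed to the Broker. As a sketch (the Broker address and file name are placeholders):
+
+```bash
+curl -X POST -H 'Content-Type: application/json' \
+  -d @view-query.json \
+  "http://BROKER_IP:8082/druid/v2/?pretty"
+```
+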
+**Note that Materialized View is currently designated as experimental. Please make sure the clocks of all processes are synchronized and increase monotonically. Otherwise, some unexpected errors may occur in query results.**
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/momentsketch-quantiles.md b/docs/0.15.0-incubating/development/extensions-contrib/momentsketch-quantiles.md
new file mode 100644
index 0000000..966caa2
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/momentsketch-quantiles.md
@@ -0,0 +1,120 @@
+---
+layout: doc_page
+title: "Moment Sketches for Approximate Quantiles module"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# MomentSketch Quantiles Sketch module
+
+This module provides aggregators for approximate quantile queries using the [momentsketch](https://github.com/stanford-futuredata/momentsketch) library. 
+The momentsketch provides coarse quantile estimates with less space and aggregation time overheads than traditional sketches, approaching the performance of counts and sums by reconstructing distributions from computed statistics.
+
+To use this Apache Druid (incubating) extension, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
+
+```
+druid.extensions.loadList=["druid-momentsketch"]
+```
+
+### Aggregator
+
+The result of the aggregation is a momentsketch that is the union of all sketches either built from raw data or read from the segments.
+
+The `momentSketch` aggregator operates over raw data while the `momentSketchMerge` aggregator should be used when aggregating pre-computed sketches.
+```json
+{
+  "type" : <aggregator_type>,
+  "name" : <output_name>,
+  "fieldName" : <input_name>,
+  "k" : <int>,
+  "compress" : <boolean>
+ }
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|Type of aggregator desired. Either "momentSketch" or "momentSketchMerge" |yes|
+|name|A String for the output (result) name of the calculation.|yes|
+|fieldName|A String for the name of the input field (can contain sketches or raw numeric values).|yes|
+|k|Parameter that determines the accuracy and size of the sketch. Higher k means higher accuracy but more space to store sketches. Usable range is generally [3,15] |no, defaults to 13.|
+|compress|Flag for whether the aggregator compresses numeric values using arcsinh. Can improve robustness to skewed and long-tailed distributions, but reduces accuracy slightly on more uniform distributions.|no, defaults to true|
+
+### Post Aggregators
+
+Users can query for a set of quantiles using the `momentSketchSolveQuantiles` post-aggregator on the sketches created by the `momentSketch` or `momentSketchMerge` aggregators.
+```json
+{
+  "type"  : "momentSketchSolveQuantiles",
+  "name" : <output_name>,
+  "field" : <reference to moment sketch>,
+  "fractions" : <array of doubles in [0,1]>
+}
+```
+
+Users can also query for the min/max of a distribution:
+```json
+{
+  "type" : "momentSketchMin" | "momentSketchMax",
+  "name" : <output_name>,
+  "field" : <reference to moment sketch>,
+}
+```
+
+### Example
+As an example of a query with sketches pre-aggregated at ingestion time, one could set up the following aggregator at ingest:
+```json
+{
+  "type": "momentSketch", 
+  "name": "sketch", 
+  "fieldName": "value", 
+  "k": 10, 
+  "compress": true,
+}
+```
+and make queries using the following aggregator + post-aggregator:
+```json
+{
+  "aggregations": [{
+    "type": "momentSketchMerge",
+    "name": "sketch",
+    "fieldName": "sketch",
+    "k": 10,
+    "compress": true
+  }],
+  "postAggregations": [
+  {
+    "type": "momentSketchSolveQuantiles",
+    "name": "quantiles",
+    "fractions": [0.1, 0.5, 0.9],
+    "field": {
+      "type": "fieldAccess",
+      "fieldName": "sketch"
+    }
+  },
+  {
+    "type": "momentSketchMin",
+    "name": "min",
+    "field": {
+      "type": "fieldAccess",
+      "fieldName": "sketch"
+    }
+  }]
+}
+```
\ No newline at end of file
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/moving-average-query.md b/docs/0.15.0-incubating/development/extensions-contrib/moving-average-query.md
new file mode 100644
index 0000000..5fc7268
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/moving-average-query.md
@@ -0,0 +1,337 @@
+---
+layout: doc_page
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Moving Average Queries
+
+## Overview
+**Moving Average Query** is an extension which provides support for [Moving Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate [Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions) in Druid queries.
+
+These Aggregate Window Functions consume standard Druid Aggregators and output additional windowed aggregates called [Averagers](#averagers).
+
+#### High level algorithm 
+
+Moving Average encapsulates the [groupBy query](../../querying/groupbyquery.html) (or [timeseries](../../querying/timeseriesquery.html) when there are no dimensions) in order to rely on the maturity of these query types.
+
+It runs the query in two main phases:
+1. Runs an inner [groupBy](../../querying/groupbyquery.html) or [timeseries](../../querying/timeseriesquery.html) query to compute Aggregators (e.g., the daily count of events).
+2. Passes over the aggregated results in the Broker, in order to compute Averagers (e.g., the moving 7-day average of the daily count).
+
+#### Main enhancements provided by this extension:
+1. Functionality: Extending Druid query functionality (i.e. an initial introduction of Window Functions).
+2. Performance: Improving performance of such moving aggregations by eliminating multiple segment scans.
+
+#### Further reading
+[Moving Average](https://en.wikipedia.org/wiki/Moving_average)
+
+[Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions)
+
+[Analytic Functions](https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts)
+
+
+## Operations
+To use this extension, make sure to [load](../../operations/including-extensions.html) `druid-moving-average-query` only to the Broker.
+
+## Configuration
+There are currently no configuration properties specific to Moving Average.
+
+## Limitations
+* movingAverage is missing support for the following groupBy properties: `subtotalsSpec`, `virtualColumns`.
+* movingAverage is missing support for the following timeseries properties: `descending`.
+* movingAverage is missing support for [SQL-compatible null handling](https://github.com/apache/incubator-druid/issues/4349) (so setting `druid.generic.useDefaultValueForNull` in the configuration will give an error).
+
+## Query spec
+* Most properties in the query spec are derived from the [groupBy query](../../querying/groupbyquery.html) / [timeseries](../../querying/timeseriesquery.html); see the documentation for those query types.
+
+|property|description|required?|
+|--------|-----------|---------|
+|queryType|This String should always be "movingAverage"; this is the first thing Druid looks at to figure out how to interpret the query.|yes|
+|dataSource|A String or Object defining the data source to query, very similar to a table in a relational database. See [DataSource](../../querying/datasource.html) for more information.|yes|
+|dimensions|A JSON list of [DimensionSpec](../../querying/dimensionspecs.html) (Notice that property is optional)|no|
+|limitSpec|See [LimitSpec](../../querying/limitspec.html)|no|
+|having|See [Having](../../querying/having.html)|no|
+|granularity|A period granularity; See [Period Granularities](../../querying/granularities.html#period-granularities)|yes|
+|filter|See [Filters](../../querying/filters.html)|no|
+|aggregations|Aggregations forms the input to Averagers; See [Aggregations](../../querying/aggregations.html)|yes|
+|postAggregations|Supports only aggregations as input; See [Post Aggregations](../../querying/post-aggregations.html)|no|
+|intervals|A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over.|yes|
+|context|An additional JSON Object which can be used to specify certain flags.|no|
+|averagers|Defines the moving average function; See [Averagers](#averagers)|yes|
+|postAveragers|Support input of both averagers and aggregations; Syntax is identical to postAggregations (See [Post Aggregations](../../querying/post-aggregations.html))|no|
+
+## Averagers
+
+Averagers are used to define the Moving-Average function. Averagers are not limited to an average - they can also provide other types of window functions such as MAX()/MIN().
+
+### Properties
+
+These are properties which are common to all Averagers:
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|Averager type; See [Averager types](#averager-types)|yes|
+|name|Averager name|yes|
+|fieldName|Input name (An aggregation name)|yes|
+|buckets|Number of lookback buckets (time periods), including current one. Must be >0|yes|
+|cycleSize|Cycle size; Used to calculate day-of-week option; See [Cycle size (Day of Week)](#cycle-size-day-of-week)|no, defaults to 1|
+
+
+### Averager types:
+
+* [Standard averagers](#standard-averagers):
+  * doubleMean
+  * doubleMeanNoNulls
+  * doubleMax
+  * doubleMin
+  * longMean
+  * longMeanNoNulls
+  * longMax
+  * longMin
+
+#### Standard averagers
+
+These averagers offer four functions:
+* Mean (Average)
+* MeanNoNulls (Ignores empty buckets).
+* Max
+* Min
+
+**Ignoring nulls**:
+Using a MeanNoNulls averager is useful when the interval starts at the dataset's beginning time.
+In that case, the first records will ignore missing buckets and the average won't be artificially low.
+However, this also means that empty days in a sparse dataset will be ignored.
+
+Example of usage:
+```json
+{ "type" : "doubleMean", "name" : <output_name>, "fieldName": <input_name> }
+```
+
+### Cycle size (Day of Week)
+This optional parameter is used to calculate over a single bucket within each cycle instead of all buckets. 
+A prime example would be weekly buckets, resulting in a Day of Week calculation. (Other examples: Month of year, Hour of day).
+
+I.e. when using these parameters:
+* *granularity*: period=P1D (daily)
+* *buckets*: 28
+* *cycleSize*: 7
+
+Within each output record, the averager will compute the result over the following buckets: current (#0), #7, #14, #21. 
+Whereas without specifying cycleSize it would have computed over all 28 buckets.
+
+## Examples
+
+All examples are based on the Wikipedia dataset provided in the Druid [tutorials](../../tutorials/index.html).
+
+### Basic example
+
+Calculating a 7-bucket moving average of Wikipedia edit deltas.
+
+Query syntax:
+```json
+{
+  "queryType": "movingAverage",
+  "dataSource": "wikipedia",
+  "granularity": {
+    "type": "period",
+    "period": "PT30M"
+  },
+  "intervals": [
+    "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z"
+  ],
+  "aggregations": [
+    {
+      "name": "delta30Min",
+      "fieldName": "delta",
+      "type": "longSum"
+    }
+  ],
+  "averagers": [
+    {
+      "name": "trailing30MinChanges",
+      "fieldName": "delta30Min",
+      "type": "longMean",
+      "buckets": 7
+    }
+  ]
+}
+```
+
+Result:
+```json
+[ {
+   "version" : "v1",
+   "timestamp" : "2015-09-12T00:30:00.000Z",
+   "event" : {
+     "delta30Min" : 30490,
+     "trailing30MinChanges" : 4355.714285714285
+   }
+ }, {
+   "version" : "v1",
+   "timestamp" : "2015-09-12T01:00:00.000Z",
+   "event" : {
+     "delta30Min" : 96526,
+     "trailing30MinChanges" : 18145.14285714286
+   }
+ }, {
+...
+...
+... 
+}, {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T23:00:00.000Z",
+  "event" : {
+    "delta30Min" : 119100,
+    "trailing30MinChanges" : 198697.2857142857
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T23:30:00.000Z",
+  "event" : {
+    "delta30Min" : 177882,
+    "trailing30MinChanges" : 193890.0
+  }
+} ]
+```
+
+### Post averager example
+
+Calculating a 7-bucket moving average of Wikipedia edit deltas, plus a ratio between the current period and the moving average.
+
+Query syntax:
+```json
+{
+  "queryType": "movingAverage",
+  "dataSource": "wikipedia",
+  "granularity": {
+    "type": "period",
+    "period": "PT30M"
+  },
+  "intervals": [
+    "2015-09-12T22:00:00Z/2015-09-13T00:00:00Z"
+  ],
+  "aggregations": [
+    {
+      "name": "delta30Min",
+      "fieldName": "delta",
+      "type": "longSum"
+    }
+  ],
+  "averagers": [
+    {
+      "name": "trailing30MinChanges",
+      "fieldName": "delta30Min",
+      "type": "longMean",
+      "buckets": 7
+    }
+  ],
+  "postAveragers" : [
+    {
+      "name": "ratioTrailing30MinChanges",
+      "type": "arithmetic",
+      "fn": "/",
+      "fields": [
+        {
+          "type": "fieldAccess",
+          "fieldName": "delta30Min"
+        },
+        {
+          "type": "fieldAccess",
+          "fieldName": "trailing30MinChanges"
+        }
+      ]
+    }
+  ]
+}
+```
+
+Result:
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T22:00:00.000Z",
+  "event" : {
+    "delta30Min" : 144269,
+    "trailing30MinChanges" : 204088.14285714287,
+    "ratioTrailing30MinChanges" : 0.7068955500319539
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T22:30:00.000Z",
+  "event" : {
+    "delta30Min" : 242860,
+    "trailing30MinChanges" : 214031.57142857142,
+    "ratioTrailing30MinChanges" : 1.134692411867141
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T23:00:00.000Z",
+  "event" : {
+    "delta30Min" : 119100,
+    "trailing30MinChanges" : 198697.2857142857,
+    "ratioTrailing30MinChanges" : 0.5994042624782422
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2015-09-12T23:30:00.000Z",
+  "event" : {
+    "delta30Min" : 177882,
+    "trailing30MinChanges" : 193890.0,
+    "ratioTrailing30MinChanges" : 0.9174377224199288
+  }
+} ]
+```
+
+
+### Cycle size example
+
+Calculating the average over the first 10 minutes of each hour across the last 3 hours:
+
+Query syntax:
+```json
+{
+  "queryType": "movingAverage",
+  "dataSource": "wikipedia",
+  "granularity": {
+    "type": "period",
+    "period": "PT10M"
+  },
+  "intervals": [
+    "2015-09-12T00:00:00Z/2015-09-13T00:00:00Z"
+  ],
+  "aggregations": [
+    {
+      "name": "delta10Min",
+      "fieldName": "delta",
+      "type": "doubleSum"
+    }
+  ],
+  "averagers": [
+    {
+      "name": "trailing10MinPerHourChanges",
+      "fieldName": "delta10Min",
+      "type": "doubleMeanNoNulls",
+      "buckets": 18,
+      "cycleSize": 6
+    }
+  ]
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/opentsdb-emitter.md b/docs/0.15.0-incubating/development/extensions-contrib/opentsdb-emitter.md
new file mode 100644
index 0000000..fc18717
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/opentsdb-emitter.md
@@ -0,0 +1,62 @@
+---
+layout: doc_page
+title: "OpenTSDB Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# OpenTSDB Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `opentsdb-emitter` extension.
+
+## Introduction
+
+This extension emits Druid metrics to [OpenTSDB](https://github.com/OpenTSDB/opentsdb) over HTTP (using the Jersey client). Currently, this emitter only emits service metric events to OpenTSDB (see [Druid metrics](../../operations/metrics.html) for a list of metrics).
+
+## Configuration
+
+All the configuration parameters for the opentsdb emitter are under `druid.emitter.opentsdb`.
+
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.opentsdb.host`|The host of the OpenTSDB server.|yes|none|
+|`druid.emitter.opentsdb.port`|The port of the OpenTSDB server.|yes|none|
+|`druid.emitter.opentsdb.connectionTimeout`|Jersey client connection timeout (in milliseconds).|no|2000|
+|`druid.emitter.opentsdb.readTimeout`|Jersey client read timeout (in milliseconds).|no|2000|
+|`druid.emitter.opentsdb.flushThreshold`|Queue flushing threshold. (Events will be sent as one batch.)|no|100|
+|`druid.emitter.opentsdb.maxQueueSize`|Maximum size of the queue used to buffer events.|no|1000|
+|`druid.emitter.opentsdb.consumeDelay`|Queue consuming delay (in milliseconds). The emitter uses a `ScheduledExecutorService` to schedule event consumption, so `consumeDelay` is the delay between the end of one execution and the start of the next. If your Druid processes produce metric events quickly, decrease this `consumeDelay` or increase `maxQueueSize`.|no|10000|
+|`druid.emitter.opentsdb.metricMapPath`|JSON file defining the desired metrics and dimensions for every Druid metric.|no|./src/main/resources/defaultMetrics.json|
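+
+As a minimal sketch (the host and port are placeholders, and selecting the emitter via `druid.emitter=opentsdb` is an assumption to check against your emitter setup):
+
+```
+druid.emitter=opentsdb
+druid.emitter.opentsdb.host=opentsdb.example.com
+druid.emitter.opentsdb.port=4242
+```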
+
+### Druid to OpenTSDB Event Converter
+
+The OpenTSDB emitter sends only the desired metrics and dimensions, which are defined in a JSON file.
+If the user does not specify their own JSON file, a default file is used. All metrics are expected to be configured in the JSON file; metrics which are not configured will be logged.
+Desired metrics and dimensions are organized using the following schema: `<druid metric name> : [ <dimension list> ]`<br />
+e.g.
+
+```json
+"query/time": [
+    "dataSource",
+    "type"
+]
+```
+
+For most use-cases, the default configuration is sufficient.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/parquet.html b/docs/0.15.0-incubating/development/extensions-contrib/parquet.html
new file mode 100644
index 0000000..2192cd1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/parquet.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../../development/extensions-core/parquet.html
+---
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/rabbitmq.md b/docs/0.15.0-incubating/development/extensions-contrib/rabbitmq.md
new file mode 100644
index 0000000..e9eefc5
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/rabbitmq.md
@@ -0,0 +1,81 @@
+---
+layout: doc_page
+title: "RabbitMQ"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# RabbitMQ
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-rabbitmq` extension.
+
+## Firehose
+
+#### RabbitMQFirehose
+
+This firehose ingests events from a predefined RabbitMQ queue.
+
+**Note:** Add **amqp-client-3.2.1.jar** to the lib directory of Druid to use this firehose.
+
+A sample spec for rabbitmq firehose:
+
+```json
+"firehose" : {
+   "type" : "rabbitmq",
+   "connection" : {
+     "host": "localhost",
+     "port": "5672",
+     "username": "test-dude",
+     "password": "test-word",
+     "virtualHost": "test-vhost",
+     "uri": "amqp://mqserver:1234/vhost"
+   },
+   "config" : {
+     "exchange": "test-exchange",
+     "queue" : "druidtest",
+     "routingKey": "#",
+     "durable": "true",
+     "exclusive": "false",
+     "autoDelete": "false",
+     "maxRetries": "10",
+     "retryIntervalSeconds": "1",
+     "maxDurationSeconds": "300"
+   }
+}
+```
+
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|This should be "rabbitmq"|N/A|yes|
+|host|The hostname of the RabbitMQ broker to connect to|localhost|no|
+|port|The port number to connect to on the RabbitMQ broker|5672|no|
+|username|The username to use to connect to RabbitMQ|guest|no|
+|password|The password to use to connect to RabbitMQ|guest|no|
+|virtualHost|The virtual host to connect to|/|no|
+|uri|The URI string to use to connect to RabbitMQ| |no|
+|exchange|The exchange to connect to| |yes|
+|queue|The queue to connect to or create| |yes|
+|routingKey|The routing key to use to bind the queue to the exchange| |yes|
+|durable|Whether the queue should be durable|false|no|
+|exclusive|Whether the queue should be exclusive|false|no|
+|autoDelete|Whether the queue should auto-delete on disconnect|false|no|
+|maxRetries|The max number of reconnection retry attempts| |yes|
+|retryIntervalSeconds|The reconnection interval| |yes|
+|maxDurationSeconds|The max duration of trying to reconnect| |yes|
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/redis-cache.md b/docs/0.15.0-incubating/development/extensions-contrib/redis-cache.md
new file mode 100644
index 0000000..4dd8276
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/redis-cache.md
@@ -0,0 +1,58 @@
+---
+layout: doc_page
+title: "Druid Redis Cache"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Druid Redis Cache
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-redis-cache` extension.
+
+A cache implementation for Druid based on [Redis](https://github.com/antirez/redis).
+
+# Configuration
+Below are the configuration options known to this module.
+
+Note that just adding these properties does not enable the cache. You still need to add the `druid.<process-type>.cache.useCache` and `druid.<process-type>.cache.populateCache` properties for the processes you want to enable the cache on as described in the [cache configuration docs](../../configuration/index.html#cache-configuration).
+
+A possible configuration would be to keep the properties below in your `common.runtime.properties` file (present on all processes) and then add `druid.<process-type>.cache.useCache` and `druid.<process-type>.cache.populateCache` in the `runtime.properties` file of the process types you want to enable caching on.
+
+
+|`common.runtime.properties`|Description|Default|Required|
+|--------------------|-----------|-------|--------|
+|`druid.cache.host`|Redis server host|None|yes|
+|`druid.cache.port`|Redis server port|None|yes|
+|`druid.cache.expiration`|Expiration (in milliseconds) for cache entries|24 * 3600 * 1000|no|
+|`druid.cache.timeout`|Timeout (in milliseconds) for getting cache entries from Redis|2000|no|
+|`druid.cache.maxTotalConnections`|Max total connections to Redis|8|no|
+|`druid.cache.maxIdleConnections`|Max idle connections to Redis|8|no|
+|`druid.cache.minIdleConnections`|Min idle connections to Redis|0|no|
+
+# Enabling
+
+To enable the Redis cache, include this module in the extension load list and set `druid.cache.type` to `redis` in your properties.
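+
+A minimal sketch of the relevant properties (the host, port, and the rest of the load list are placeholders):
+
+```
+druid.extensions.loadList=["druid-redis-cache"]
+druid.cache.type=redis
+druid.cache.host=redis.example.com
+druid.cache.port=6379
+```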
+
+# Metrics
+In addition to the normal cache metrics, the Redis cache implementation also reports the following, in both `total` and `delta` form:
+
+|Metric|Description|Normal value|
+|------|-----------|------------|
+|`query/cache/redis/*/requests`|Count of requests to the Redis cache|each request to Redis increases the count by 1|
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/development/extensions-contrib/rocketmq.md
similarity index 52%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/development/extensions-contrib/rocketmq.md
index adf4e24..4dd0eea 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/development/extensions-contrib/rocketmq.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "RocketMQ"
 ---
 
 <!--
@@ -22,18 +22,8 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# RocketMQ
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-rocketmq` extension.
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
-
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
-
-```
-druid.extensions.loadList=["druid-histogram"]
-```
-
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+Original author: [https://github.com/lizhanhui](https://github.com/lizhanhui).
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/scan-query.html b/docs/0.15.0-incubating/development/extensions-contrib/scan-query.html
new file mode 100644
index 0000000..748657e
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/scan-query.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../../querying/scan-query.html
+---
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/sqlserver.md b/docs/0.15.0-incubating/development/extensions-contrib/sqlserver.md
new file mode 100644
index 0000000..e14b7e1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/sqlserver.md
@@ -0,0 +1,57 @@
+---
+layout: doc_page
+title: "Microsoft SQLServer"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Microsoft SQLServer
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `sqlserver-metadata-storage` as an extension.
+
+## Setting up SQLServer
+
+1. Install Microsoft SQLServer
+
+2. Create a druid database and user
+
+  Create the druid user
+  - Microsoft SQL Server Management Studio - Security - Logins - New Login...
+  - Create a druid user, enter `diurd` when prompted for the password.
+
+  Create a druid database owned by the user we just created
+  - Databases - New Database
+  - Database Name: druid, Owner: druid
+
+3. Add the Microsoft JDBC library to the Druid classpath
+  - To ensure the com.microsoft.sqlserver.jdbc.SQLServerDriver class is loaded, you will have to add the appropriate Microsoft JDBC library (sqljdbc*.jar) to the Druid classpath.
+  - For instance, if all jar files in your "druid/lib" directory are automatically added to your Druid classpath, then manually download the Microsoft JDBC drivers from (https://www.microsoft.com/en-ca/download/details.aspx?id=11774) and drop the jar into your druid/lib directory.
+
+4. Configure your Druid metadata storage extension:
+
+  Add the following parameters to your Druid configuration, replacing `<host>`
+  with the location (host name and port) of the database.
+
+  ```properties
+  druid.metadata.storage.type=sqlserver
+  druid.metadata.storage.connector.connectURI=jdbc:sqlserver://<host>;databaseName=druid
+  druid.metadata.storage.connector.user=druid
+  druid.metadata.storage.connector.password=diurd
+  ```
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/statsd.md b/docs/0.15.0-incubating/development/extensions-contrib/statsd.md
new file mode 100644
index 0000000..b25a113
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/statsd.md
@@ -0,0 +1,70 @@
+---
+layout: doc_page
+title: "StatsD Emitter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# StatsD Emitter
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `statsd-emitter` extension.
+
+## Introduction
+
+This extension emits Druid metrics to a StatsD server, such as [statsd](https://github.com/etsy/statsd) or [statsite](https://github.com/armon/statsite).
+
+## Configuration
+
+All the configuration parameters for the StatsD emitter are under `druid.emitter.statsd`.
+
+|property|description|required?|default|
+|--------|-----------|---------|-------|
+|`druid.emitter.statsd.hostname`|The hostname of the StatsD server.|yes|none|
+|`druid.emitter.statsd.port`|The port of the StatsD server.|yes|none|
+|`druid.emitter.statsd.prefix`|Optional metric name prefix.|no|""|
+|`druid.emitter.statsd.separator`|Metric name separator|no|.|  
+|`druid.emitter.statsd.includeHost`|Flag to include the hostname as part of the metric name.|no|false|  
+|`druid.emitter.statsd.dimensionMapPath`|JSON file defining the StatsD type, and desired dimensions for every Druid metric|no|Default mapping provided. See below.|  
+|`druid.emitter.statsd.blankHolder`|The blank character replacement as statsD does not support path with blank character|no|"-"|  
+|`druid.emitter.statsd.dogstatsd`|Flag to enable [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) support. Causes dimensions to be included as tags, not as a part of the metric name. `convertRange` fields will be ignored.|no|false|
+|`druid.emitter.statsd.dogstatsdConstantTags`|If `druid.emitter.statsd.dogstatsd` is true, the tags in the JSON list of strings will be sent with every event.|no|[]|
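+
+As a minimal sketch (the hostname and port are placeholders, and selecting the emitter via `druid.emitter=statsd` is an assumption to check against your emitter setup):
+
+```
+druid.emitter=statsd
+druid.emitter.statsd.hostname=statsd.example.com
+druid.emitter.statsd.port=8125
+```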
+
+### Druid to StatsD Event Converter
+
+Each metric sent to StatsD must specify a type, one of `[timer, counter, gauge]`. The StatsD emitter expects this mapping to
+be provided as a JSON file. Additionally, this mapping specifies which dimensions should be included for each metric.
+StatsD expects metric values to be integers. Druid emits some metrics with values in the range 0 to 1; to accommodate these metrics, they are converted
+into the range 0 to 100. This conversion can be enabled by setting the optional "convertRange" field to true in the JSON mapping file.
+If the user does not specify their own JSON file, a default mapping is used. All
+metrics are expected to be mapped; metrics which are not mapped will log an error.
+StatsD metric path is organized using the following schema:
+`<druid metric name> : { "dimensions" : <dimension list>, "type" : <StatsD type>, "convertRange" : true/false}`
+e.g.
+`"query/time" : { "dimensions" : ["dataSource", "type"], "type" : "timer"}`
+
+For metrics which are emitted from multiple services with different dimensions, the metric name is prefixed with 
+the service name. 
+e.g.
+`"coordinator-segment/count" : { "dimensions" : ["dataSource"], "type" : "gauge" },
+ "historical-segment/count" : { "dimensions" : ["dataSource", "tier", "priority"], "type" : "gauge" }`
+ 
+For most use-cases, the default mapping is sufficient.
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.md b/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.md
new file mode 100644
index 0000000..9947e01
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/tdigestsketch-quantiles.md
@@ -0,0 +1,159 @@
+---
+layout: doc_page
+title: "T-Digest Quantiles Sketch module"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# T-Digest Quantiles Sketch module
+
+This module provides Apache Druid (incubating) approximate sketch aggregators based on T-Digest.
+T-Digest (https://github.com/tdunning/t-digest) is a popular data structure for accurate on-line accumulation of
+rank-based statistics such as quantiles and trimmed means.
+The data structure is also designed for parallel programming use cases like distributed aggregations or map-reduce jobs, because combining two intermediate t-digests is easy and efficient.
+
+There are three flavors of T-Digest sketch aggregator available in Apache Druid (incubating):
+
+1. buildTDigestSketch - used for building T-Digest sketches from raw numeric values. It generally makes sense to
+use this aggregator when ingesting raw data into Druid. It can also be used at query time to
+generate sketches, but the sketches would then be built on every query execution instead of
+once during ingestion.
+2. mergeTDigestSketch - used for merging pre-built T-Digest sketches. This aggregator is generally used at
+query time to combine sketches generated by the buildTDigestSketch aggregator.
+3. quantilesFromTDigestSketch - used for generating quantiles from T-Digest sketches. This aggregator is generally used
+at query time to generate quantiles from sketches built using the above two sketch-generating aggregators.
+
+To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
+
+```
+druid.extensions.loadList=["druid-tdigestsketch"]
+```
+
+### Aggregator
+
+The result of the aggregation is a T-Digest sketch that is built ingesting numeric values from the raw data.
+
+```json
+{
+  "type" : "buildTDigestSketch",
+  "name" : <output_name>,
+  "fieldName" : <metric_name>,
+  "compression": <parameter that controls size and accuracy>
+ }
+```
+Example:
+```json
+{
+	"type": "buildTDigestSketch",
+	"name": "sketch",
+	"fieldName": "session_duration",
+	"compression": 200
+}
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This String should always be "buildTDigestSketch"|yes|
+|name|A String for the output (result) name of the calculation.|yes|
+|fieldName|A String for the name of the input field containing raw numeric values.|yes|
+|compression|Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.|no, defaults to 100|
+
+
+The result of the aggregation is a T-Digest sketch that is built by merging pre-built T-Digest sketches.
+
+```json
+{
+  "type" : "mergeTDigestSketch",
+  "name" : <output_name>,
+  "fieldName" : <metric_name>,
+  "compression": <parameter that controls size and accuracy>
+ }
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This String should always be "mergeTDigestSketch"|yes|
+|name|A String for the output (result) name of the calculation.|yes|
+|fieldName|A String for the name of the input field containing raw numeric values.|yes|
+|compression|Parameter that determines the accuracy and size of the sketch. Higher compression means higher accuracy but more space to store sketches.|no, defaults to 100|
+
+Example:
+```json
+{
+	"queryType": "groupBy",
+	"dataSource": "test_datasource",
+	"granularity": "ALL",
+	"dimensions": [],
+	"aggregations": [{
+		"type": "mergeTDigestSketch",
+		"name": "merged_sketch",
+		"fieldName": "ingested_sketch",
+		"compression": 200
+	}],
+	"intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"]
+}
+```
+### Post Aggregators
+
+#### Quantiles
+
+This returns an array of quantiles corresponding to a given array of fractions.
+
+```json
+{
+  "type"  : "quantilesFromTDigestSketch",
+  "name": <output name>,
+  "field"  : <post aggregator that refers to a TDigestSketch (fieldAccess or another post aggregator)>,
+  "fractions" : <array of fractions>
+}
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This String should always be "quantilesFromTDigestSketch"|yes|
+|name|A String for the output (result) name of the calculation.|yes|
+|field|A field access or another post aggregator that refers to a TDigestSketch.|yes|
+|fractions|Non-empty array of fractions between 0 and 1|yes|
+
+Example:
+```json
+{
+	"queryType": "groupBy",
+	"dataSource": "test_datasource",
+	"granularity": "ALL",
+	"dimensions": [],
+	"aggregations": [{
+		"type": "mergeTDigestSketch",
+		"name": "merged_sketch",
+		"fieldName": "ingested_sketch",
+		"compression": 200
+	}],
+	"postAggregations": [{
+		"type": "quantilesFromTDigestSketch",
+		"name": "quantiles",
+		"fractions": [0, 0.5, 1],
+		"field": {
+			"type": "fieldAccess",
+			"fieldName": "merged_sketch"
+		}
+	}],
+	"intervals": ["2016-01-01T00:00:00.000Z/2016-01-31T00:00:00.000Z"]
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/thrift.md b/docs/0.15.0-incubating/development/extensions-contrib/thrift.md
new file mode 100644
index 0000000..9b8a54f
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/thrift.md
@@ -0,0 +1,128 @@
+---
+layout: doc_page
+title: "Thrift"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Thrift
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-thrift-extensions`.
+
+This extension enables Druid to ingest thrift compact data online (`ByteBuffer`) and offline (SequenceFile of type `<Writable, BytesWritable>` or LzoThriftBlock File).
+
+If you want to use another version of Thrift, change the dependency in the pom and compile it yourself.
+
+## LZO Support
+
+If you plan to read LZO-compressed Thrift files, you will need to download version 0.4.19 of the [hadoop-lzo JAR](https://mvnrepository.com/artifact/com.hadoop.gplcompression/hadoop-lzo/0.4.19) and place it in your `extensions/druid-thrift-extensions` directory.
+
+## Thrift Parser
+
+
+| Field       | Type        | Description                              | Required |
+| ----------- | ----------- | ---------------------------------------- | -------- |
+| type        | String      | This should say `thrift`                 | yes      |
+| parseSpec   | JSON Object | Specifies the timestamp and dimensions of the data. Should be a JSON parseSpec. | yes      |
+| thriftJar   | String      | Path of the Thrift jar. If not provided, Druid will try to find the Thrift class on the classpath. For batch ingestion, the Thrift jar should be uploaded to HDFS first and `jobProperties` configured with `"tmpjars":"/path/to/your/thrift.jar"`. | no       |
+| thriftClass | String      | Class name of the Thrift class.          | yes      |
+
+- Realtime Ingestion (tranquility example)
+
+```json
+{
+  "dataSources": [{
+    "spec": {
+      "dataSchema": {
+        "dataSource": "book",
+        "granularitySpec": {          },
+        "parser": {
+          "type": "thrift",
+          "thriftClass": "org.apache.druid.data.input.thrift.Book",
+          "protocol": "compact",
+          "parseSpec": {
+            "format": "json",
+            ...
+          }
+        },
+        "metricsSpec": [...]
+      },
+      "tuningConfig": {...}
+    },
+    "properties": {...}
+  }],
+  "properties": {...}
+}
+```
+
+To use it with Tranquility:
+
+```bash
+bin/tranquility kafka \
+  -configFile $jsonConfig \
+  -Ddruid.extensions.directory=/path/to/extensions \
+  -Ddruid.extensions.loadList='["druid-thrift-extensions"]'
+```
+
+The hadoop-client libraries are also needed; you may copy all of the hadoop-client dependency jars into the `druid-thrift-extensions` directory to keep things simple.
+
+
+- Batch Ingestion - `inputFormat` and `tmpjars` should be set.
+
+This is for batch ingestion using the HadoopDruidIndexer. The `inputFormat` of `inputSpec` in `ioConfig` can be either `org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat` or `com.twitter.elephantbird.mapreduce.input.LzoThriftBlockInputFormat`. Be careful: when `LzoThriftBlockInputFormat` is used, the Thrift class must be provided twice.
+
+```json
+{
+  "type": "index_hadoop",
+  "spec": {
+    "dataSchema": {
+      "dataSource": "book",
+      "parser": {
+        "type": "thrift",
+        "jarPath": "book.jar",
+        "thriftClass": "org.apache.druid.data.input.thrift.Book",
+        "protocol": "compact",
+        "parseSpec": {
+          "format": "json",
+          ...
+        }
+      },
+      "metricsSpec": [],
+      "granularitySpec": {}
+    },
+    "ioConfig": {
+      "type": "hadoop",
+      "inputSpec": {
+        "type": "static",
+        "inputFormat": "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat",
+        // "inputFormat": "com.twitter.elephantbird.mapreduce.input.LzoThriftBlockInputFormat",
+        "paths": "/user/to/some/book.seq"
+      }
+    },
+    "tuningConfig": {
+      "type": "hadoop",
+      "jobProperties": {
+        "tmpjars":"/user/h_user_profile/du00/druid/test/book.jar",
+        // "elephantbird.class.for.MultiInputFormat" : "${YOUR_THRIFT_CLASS_NAME}"
+      }
+    }
+  }
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-contrib/time-min-max.md b/docs/0.15.0-incubating/development/extensions-contrib/time-min-max.md
new file mode 100644
index 0000000..ff9e4d0
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-contrib/time-min-max.md
@@ -0,0 +1,105 @@
+---
+layout: doc_page
+title: "Timestamp Min/Max aggregators"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Timestamp Min/Max aggregators
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-time-min-max`.
+
+These aggregators enable more precise calculation of the min and max time of given events than the `__time` column, whose granularity is coarse (the same as the query granularity).
+To use this feature, a "timeMin" or "timeMax" aggregator must be included at indexing time.
+They can apply to any column that can be converted to a timestamp, including Long, DateTime, Timestamp, and String types.
+
+For example, consider a data set consisting of a timestamp, a dimension, and a metric value like the following:
+
+```
+2015-07-28T01:00:00.000Z  A  1
+2015-07-28T02:00:00.000Z  A  1
+2015-07-28T03:00:00.000Z  A  1
+2015-07-28T04:00:00.000Z  B  1
+2015-07-28T05:00:00.000Z  A  1
+2015-07-28T06:00:00.000Z  B  1
+2015-07-29T01:00:00.000Z  C  1
+2015-07-29T02:00:00.000Z  C  1
+2015-07-29T03:00:00.000Z  A  1
+2015-07-29T04:00:00.000Z  A  1
+```
+
+At ingestion time, the timeMin and timeMax aggregators can be included like any other aggregator:
+
+```json
+{
+    "type": "timeMin",
+    "name": "tmin",
+    "fieldName": "<field_name, typically column specified in timestamp spec>"
+}
+```
+
+```json
+{
+    "type": "timeMax",
+    "name": "tmax",
+    "fieldName": "<field_name, typically column specified in timestamp spec>"
+}
+```
+
+`name` is the output name of the aggregator and can be any string. `fieldName` is typically the column specified in the timestamp spec, but it can be any column that can be converted to a timestamp.
+
+To query for results, the same "timeMin" and "timeMax" aggregators are used.
+
+```json
+{
+  "queryType": "groupBy",
+  "dataSource": "timeMinMax",
+  "granularity": "DAY",
+  "dimensions": ["product"],
+  "aggregations": [
+    {
+      "type": "count",
+      "name": "count"
+    },
+    {
+      "type": "timeMin",
+      "name": "<output_name of timeMin>",
+      "fieldName": "tmin"
+    },
+    {
+      "type": "timeMax",
+      "name": "<output_name of timeMax>",
+      "fieldName": "tmax"
+    }
+  ],
+  "intervals": [
+    "2010-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"
+  ]
+}
+```
+
+The result then contains the min and max timestamps, which are finer-grained than the query granularity.
+
+```
+2015-07-28T00:00:00.000Z A 4 2015-07-28T01:00:00.000Z 2015-07-28T05:00:00.000Z
+2015-07-28T00:00:00.000Z B 2 2015-07-28T04:00:00.000Z 2015-07-28T06:00:00.000Z
+2015-07-29T00:00:00.000Z A 2 2015-07-29T03:00:00.000Z 2015-07-29T04:00:00.000Z
+2015-07-29T00:00:00.000Z C 2 2015-07-29T01:00:00.000Z 2015-07-29T02:00:00.000Z
+```
diff --git a/docs/latest/development/extensions-core/approximate-histograms.md b/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.md
similarity index 96%
copy from docs/latest/development/extensions-core/approximate-histograms.md
copy to docs/0.15.0-incubating/development/extensions-core/approximate-histograms.md
index 73a5207..30b5f32 100644
--- a/docs/latest/development/extensions-core/approximate-histograms.md
+++ b/docs/0.15.0-incubating/development/extensions-core/approximate-histograms.md
@@ -99,6 +99,7 @@ query.
 |`resolution`             |Number of centroids (data points) to store. The higher the resolution, the more accurate results are, but the slower the computation will be.|50|
 |`numBuckets`             |Number of output buckets for the resulting histogram. Bucket intervals are dynamic, based on the range of the underlying data. Use a post-aggregator to have finer control over the bucketing scheme|7|
 |`lowerLimit`/`upperLimit`|Restrict the approximation to the given range. The values outside this range will be aggregated into two centroids. Counts of values outside this range are still maintained. |-INF/+INF|
+|`finalizeAsBase64Binary` |If true, the finalized aggregator value will be a Base64-encoded byte array containing the serialized form of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.|false|
 
 ## Fixed Buckets Histogram
 
@@ -124,6 +125,7 @@ For general histogram and quantile use cases, the [DataSketches Quantiles Sketch
 |`upperLimit`|Upper limit of the histogram. |No default, must be specified|
 |`numBuckets`|Number of buckets for the histogram. The range [lowerLimit, upperLimit] will be divided into `numBuckets` intervals of equal size.|10|
 |`outlierHandlingMode`|Specifies how values outside of [lowerLimit, upperLimit] will be handled. Supported modes are "ignore", "overflow", and "clip". See [outlier handling modes](#outlier-handling-modes) for more details.|No default, must be specified|
+|`finalizeAsBase64Binary`|If true, the finalized aggregator value will be a Base64-encoded byte array containing the [serialized form](#serialization-formats) of the histogram. If false, the finalized aggregator value will be a JSON representation of the histogram.|false|
 
 An example aggregator spec is shown below:
 
diff --git a/docs/0.15.0-incubating/development/extensions-core/avro.md b/docs/0.15.0-incubating/development/extensions-core/avro.md
new file mode 100644
index 0000000..156149a
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/avro.md
@@ -0,0 +1,222 @@
+---
+layout: doc_page
+title: "Avro"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Avro
+
+This Apache Druid (incubating) extension enables Druid to ingest and understand the Apache Avro data format. Make sure to [include](../../operations/including-extensions.html) `druid-avro-extensions` as an extension.
+
+### Avro Stream Parser
+
+This is for streaming/realtime ingestion.
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `avro_stream`. | no |
+| avroBytesDecoder | JSON Object | Specifies how to decode bytes to Avro record. | yes |
+| parseSpec | JSON Object | Specifies the timestamp and dimensions of the data. Should be an "avro" parseSpec. | yes |
+
+An Avro parseSpec can contain a [flattenSpec](../../ingestion/flatten-json.html) using either the "root" or "path"
+field types, which can be used to read nested Avro records. The "jq" field type is not currently supported for Avro.
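+
+For illustration, a flattenSpec for an Avro record might look like the following sketch (the field names here are hypothetical):
+
+```json
+"flattenSpec": {
+  "useFieldDiscovery": true,
+  "fields": [
+    { "type": "root", "name": "someTopLevelField" },
+    { "type": "path", "name": "nestedField", "expr": "$.someRecord.subField" }
+  ]
+}
+```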
+
+For example, using Avro stream parser with schema repo Avro bytes decoder:
+
+```json
+"parser" : {
+  "type" : "avro_stream",
+  "avroBytesDecoder" : {
+    "type" : "schema_repo",
+    "subjectAndIdConverter" : {
+      "type" : "avro_1124",
+      "topic" : "${YOUR_TOPIC}"
+    },
+    "schemaRepository" : {
+      "type" : "avro_1124_rest_client",
+      "url" : "${YOUR_SCHEMA_REPO_END_POINT}",
+    }
+  },
+  "parseSpec" : {
+    "format": "avro",
+    "timestampSpec": <standard timestampSpec>,
+    "dimensionsSpec": <standard dimensionsSpec>,
+    "flattenSpec": <optional>
+  }
+}
+```
+
+#### Avro Bytes Decoder
+
+If `type` is not included, the avroBytesDecoder defaults to `schema_repo`.
+
+##### Inline Schema Based Avro Bytes Decoder
+
+<div class="note info">
+The "schema_inline" decoder reads Avro records using a fixed schema and does not support schema migration. If you
+may need to migrate schemas in the future, consider one of the other decoders, all of which use a message header that
+allows the parser to identify the proper Avro schema for reading records.
+</div>
+
+This decoder can be used if all the input events can be read using the same schema. In that case, the schema can be specified in the input task JSON itself, as described below.
+
+```
+...
+"avroBytesDecoder": {
+  "type": "schema_inline",
+  "schema": {
+    //your schema goes here, for example
+    "namespace": "org.apache.druid.data",
+    "name": "User",
+    "type": "record",
+    "fields": [
+      { "name": "FullName", "type": "string" },
+      { "name": "Country", "type": "string" }
+    ]
+  }
+}
+...
+```
+
+##### Multiple Inline Schemas Based Avro Bytes Decoder
+
+This decoder can be used if different input events can have different read schemas. In that case, the schemas can be specified in the input task JSON itself, as described below.
+
+```
+...
+"avroBytesDecoder": {
+  "type": "multiple_schemas_inline",
+  "schemas": {
+    //your id -> schema map goes here, for example
+    "1": {
+      "namespace": "org.apache.druid.data",
+      "name": "User",
+      "type": "record",
+      "fields": [
+        { "name": "FullName", "type": "string" },
+        { "name": "Country", "type": "string" }
+      ]
+    },
+    "2": {
+      "namespace": "org.apache.druid.otherdata",
+      "name": "UserIdentity",
+      "type": "record",
+      "fields": [
+        { "name": "Name", "type": "string" },
+        { "name": "Location", "type": "string" }
+      ]
+    },
+    ...
+    ...
+  }
+}
+...
+```
+
+Note that this is essentially a map of integer schema ID to Avro schema object. This parser assumes that the record has the following format:
+
+- the first byte is the version and must always be 1,
+- the next 4 bytes are the integer schema ID serialized in big-endian byte order,
+- the remaining bytes contain the serialized Avro message.
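+
+As a rough illustration of this framing (this sketch is not part of the extension; the class and method names are hypothetical), a producer could assemble such a message as follows:
+
+```java
+// Hypothetical producer-side sketch: frame an already-serialized Avro record
+// for the multiple_schemas_inline decoder (version byte + big-endian schema ID + payload).
+import java.nio.ByteBuffer;
+
+public class MultiSchemaFramer
+{
+  public static byte[] frame(int schemaId, byte[] serializedAvroRecord)
+  {
+    ByteBuffer buf = ByteBuffer.allocate(1 + 4 + serializedAvroRecord.length);
+    buf.put((byte) 1);             // version byte, must always be 1
+    buf.putInt(schemaId);          // 4-byte schema ID, big-endian (ByteBuffer default order)
+    buf.put(serializedAvroRecord); // the Avro-serialized message bytes
+    return buf.array();
+  }
+}
+```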
+
+##### SchemaRepo Based Avro Bytes Decoder
+
+This Avro bytes decoder first extracts `subject` and `id` from the input message bytes, then uses them to look up the Avro schema with which to decode the Avro record from the bytes. Details can be found in [schema repo](https://github.com/schema-repo/schema-repo) and [AVRO-1124](https://issues.apache.org/jira/browse/AVRO-1124). You will need an HTTP service like schema repo to hold the Avro schema. For schema registration on the message producer side, you can refer to `org.apache.druid.data.input. [...]
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `schema_repo`. | no |
+| subjectAndIdConverter | JSON Object | Specifies how to extract the subject and id from message bytes. | yes |
+| schemaRepository | JSON Object | Specifies how to look up the Avro schema from the subject and id. | yes |
+
+##### Avro-1124 Subject And Id Converter
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `avro_1124`. | no |
+| topic | String | Specifies the topic of your kafka stream. | yes |
+
+
+##### Avro-1124 Schema Repository
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `avro_1124_rest_client`. | no |
+| url | String | Specifies the endpoint url of your Avro-1124 schema repository. | yes |
+
+##### Confluent's Schema Registry
+
+This Avro bytes decoder first extracts a unique `id` from the input message bytes, then uses it to look up the related schema in the Schema Registry, which is then used to decode the Avro record from the bytes.
+Details can be found in Schema Registry [documentation](http://docs.confluent.io/current/schema-registry/docs/) and [repository](https://github.com/confluentinc/schema-registry).
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `schema_registry`. | no |
+| url | String | Specifies the url endpoint of the Schema Registry. | yes |
+| capacity | Integer | Specifies the max size of the cache (default == Integer.MAX_VALUE). | no |
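+
+For example, a minimal `schema_registry` decoder spec might look like the following (the URL is illustrative):
+
+```json
+"avroBytesDecoder" : {
+  "type" : "schema_registry",
+  "url" : "http://localhost:8081"
+}
+```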
+
+
+### Avro Hadoop Parser
+
+This is for batch ingestion using the HadoopDruidIndexer. The `inputFormat` of `inputSpec` in `ioConfig` must be set to `"org.apache.druid.data.input.avro.AvroValueInputFormat"`. You may want to set the Avro reader's schema in `jobProperties` in `tuningConfig`, e.g. `"avro.schema.input.value.path": "/path/to/your/schema.avsc"` or `"avro.schema.input.value": "your_schema_JSON_object"`. If the reader's schema is not set, the schema in the Avro object container file will be used; see [Avro specification [...]
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `avro_hadoop`. | no |
+| parseSpec | JSON Object | Specifies the timestamp and dimensions of the data. Should be an "avro" parseSpec. | yes |
+| fromPigAvroStorage | Boolean | Specifies whether the data file is stored using AvroStorage. | no (default == false) |
+
+An Avro parseSpec can contain a [flattenSpec](../../ingestion/flatten-json.html) using either the "root" or "path"
+field types, which can be used to read nested Avro records. The "jq" field type is not currently supported for Avro.
+
+For example, using Avro Hadoop parser with custom reader's schema file:
+
+```json
+{
+  "type" : "index_hadoop",  
+  "spec" : {
+    "dataSchema" : {
+      "dataSource" : "",
+      "parser" : {
+        "type" : "avro_hadoop",
+        "parseSpec" : {
+          "format": "avro",
+          "timestampSpec": <standard timestampSpec>,
+          "dimensionsSpec": <standard dimensionsSpec>,
+          "flattenSpec": <optional>
+        }
+      }
+    },
+    "ioConfig" : {
+      "type" : "hadoop",
+      "inputSpec" : {
+        "type" : "static",
+        "inputFormat": "org.apache.druid.data.input.avro.AvroValueInputFormat",
+        "paths" : ""
+      }
+    },
+    "tuningConfig" : {
+       "jobProperties" : {
+          "avro.schema.input.value.path" : "/path/to/my/schema.avsc"
+      }
+    }
+  }
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-core/bloom-filter.md b/docs/0.15.0-incubating/development/extensions-core/bloom-filter.md
new file mode 100644
index 0000000..3d6749a
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/bloom-filter.md
@@ -0,0 +1,179 @@
+---
+layout: doc_page
+title: "Bloom Filter"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Bloom Filter
+
+This Apache Druid (incubating) extension adds the ability to both construct bloom filters from query results, and filter query results by testing 
+against a bloom filter. Make sure to [include](../../operations/including-extensions.html) `druid-bloom-filter` as an 
+extension.
+
+A BloomFilter is a probabilistic data structure for performing a set membership check. A bloom filter is a good candidate 
+to use with Druid for cases where an explicit filter is impossible, e.g. filtering a query against a set of millions of
+ values.
+ 
+Following are some characteristics of BloomFilters:
+- BloomFilters are highly space efficient when compared to using a HashSet.
+- Because of the probabilistic nature of bloom filters, false positive results are possible (element was not actually 
+inserted into a bloom filter during construction, but `test()` says true)
+- False negatives are not possible (if element is present then `test()` will never say false). 
+- The false positive probability of this implementation is currently fixed at 5%, but increasing the number of entries 
+that the filter can hold can decrease this false positive rate in exchange for overall size.
+- Bloom filters are sensitive to the number of elements that will be inserted. The expected number of entries must be specified when the filter is created. If the number of insertions exceeds
+ the specified initial number of entries, the false positive probability will increase accordingly.
+
+This extension is currently based on `org.apache.hive.common.util.BloomKFilter` from `hive-storage-api`. Internally, 
+this implementation uses Murmur3 as the hash algorithm.
+
+To construct a BloomKFilter externally with Java to use as a filter in a Druid query:
+
+```java
+BloomKFilter bloomFilter = new BloomKFilter(1500);
+bloomFilter.addString("value 1");
+bloomFilter.addString("value 2");
+bloomFilter.addString("value 3");
+ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
+BloomKFilter.serialize(byteArrayOutputStream, bloomFilter);
+String base64Serialized = Base64.encodeBase64String(byteArrayOutputStream.toByteArray());
+```
+
+This string can then be used in a native or SQL Druid query.
+
+## Filtering queries with a Bloom Filter
+
+### JSON Specification of Bloom Filter
+```json
+{
+  "type" : "bloom",
+  "dimension" : <dimension_name>,
+  "bloomKFilter" : <serialized_bytes_for_BloomKFilter>,
+  "extractionFn" : <extraction_fn>
+}
+```
+
+|Property                 |Description                   |required?                           |
+|-------------------------|------------------------------|----------------------------------|
+|`type`                   |Filter Type. Should always be `bloom`|yes|
+|`dimension`              |The dimension to filter over. | yes |
+|`bloomKFilter`           |Base64 encoded Binary representation of `org.apache.hive.common.util.BloomKFilter`| yes |
+|`extractionFn`|[Extraction function](../../querying/dimensionspecs.html#extraction-functions) to apply to the dimension values |no|
+
+
+### Serialized Format for BloomKFilter
+
+Serialized BloomKFilter format:
+
+- 1 byte for the number of hash functions.
+- 1 big-endian int (this is how OutputStream works) for the number of longs in the bitset.
+- big-endian longs in the BloomKFilter bitset.
+
+Note: `org.apache.hive.common.util.BloomKFilter` provides a serialize method that can be used to serialize Bloom filters to an output stream.
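+
+As a minimal sketch of reading that header back (assuming a Base64-encoded filter such as the one produced by the Java example above):
+
+```java
+// Minimal sketch: decode the Base64 string and read the header fields described above.
+// Assumes the serialization layout documented in this section.
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.util.Base64;
+
+public class BloomKFilterHeaderReader
+{
+  public static void printHeader(String base64Serialized) throws IOException
+  {
+    byte[] bytes = Base64.getDecoder().decode(base64Serialized);
+    try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes))) {
+      int numHashFunctions = in.readUnsignedByte(); // 1 byte: number of hash functions
+      int numLongs = in.readInt();                  // big-endian int: longs in the bitset
+      System.out.println("hash functions: " + numHashFunctions + ", bitset longs: " + numLongs);
+      // the remaining bytes are numLongs big-endian longs forming the bitset
+    }
+  }
+}
+```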
+
+### Filtering SQL Queries
+
+Bloom filters can be used in SQL `WHERE` clauses via the `bloom_filter_test` operator:
+
+```sql
+SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')
+```
+
+### Expression and Virtual Column Support
+
+The bloom filter extension also adds a bloom filter [Druid expression](../../misc/math-expr.html) which shares syntax 
+with the SQL operator.
+
+```sql
+bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')
+```
+
+## Bloom Filter Query Aggregator
+
+Input for a `bloomKFilter` can also be created from a Druid query with the `bloom` aggregator. Note that it is very
+important to set a reasonable value for the `maxNumEntries` parameter, which is the maximum number of distinct entries
+that the bloom filter can represent without increasing the false positive rate. It may be worth performing a query using
+one of the unique count sketches to calculate the value for this parameter in order to build a bloom filter appropriate
+for the query.
+
+### JSON Specification of Bloom Filter Aggregator
+
+```json
+{
+  "type": "bloom",
+  "name": <output_field_name>,
+  "maxNumEntries": <maximum_number_of_elements_for_BloomKFilter>,
+  "field": <dimension_spec>
+}
+```
+
+|Property                 |Description                   |required?                           |
+|-------------------------|------------------------------|----------------------------------|
+|`type`                   |Aggregator Type. Should always be `bloom`|yes|
+|`name`                   |Output field name |yes|
+|`field`                  |[DimensionSpec](../../querying/dimensionspecs.html) to add to `org.apache.hive.common.util.BloomKFilter` | yes |
+|`maxNumEntries`          |Maximum number of distinct values supported by `org.apache.hive.common.util.BloomKFilter`, default `1500`| no |
+
+### Example
+
+```json
+{
+  "queryType": "timeseries",
+  "dataSource": "wikiticker",
+  "intervals": [ "2015-09-12T00:00:00.000/2015-09-13T00:00:00.000" ],
+  "granularity": "day",
+  "aggregations": [
+    {
+      "type": "bloom",
+      "name": "userBloom",
+      "maxNumEntries": 100000,
+      "field": {
+        "type":"default",
+        "dimension":"user",
+        "outputType": "STRING"
+      }
+    }
+  ]
+}
+```
+
+response
+
+```json
+[{"timestamp":"2015-09-12T00:00:00.000Z","result":{"userBloom":"BAAAJhAAAA..."}}]
+```
+
+These values can then be set in the filter specification described above. 
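+
+For instance, the Base64 value returned above could be plugged into a bloom filter spec like the following (the value here is the truncated example from the response and is for illustration only):
+
+```json
+{
+  "type" : "bloom",
+  "dimension" : "user",
+  "bloomKFilter" : "BAAAJhAAAA..."
+}
+```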
+
+Ordering results by a bloom filter aggregator, for example in a TopN query, will perform a comparatively expensive 
+linear scan _of the filter itself_ to count the number of set bits as a means of approximating how many items have been 
+added to the set. As such, ordering by an alternate aggregation is recommended if possible. 
+
+
+### SQL Bloom Filter Aggregator
+Bloom filters can be computed in SQL expressions with the `bloom_filter` aggregator:
+
+```sql
+SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc'
+```
+
+but this requires the `druid.sql.planner.serializeComplexValues` setting to be `true`. Bloom filter results in a SQL
+response are serialized into a Base64 string, which can then be used in subsequent queries as a filter.
\ No newline at end of file
diff --git a/docs/0.15.0-incubating/development/extensions-core/caffeine-cache.html b/docs/0.15.0-incubating/development/extensions-core/caffeine-cache.html
new file mode 100644
index 0000000..6ded175
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/caffeine-cache.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../../configuration/index.html#cache-configuration
+---
diff --git a/docs/0.15.0-incubating/development/extensions-core/datasketches-aggregators.html b/docs/0.15.0-incubating/development/extensions-core/datasketches-aggregators.html
new file mode 100644
index 0000000..06a5d17
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-aggregators.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: datasketches-extension.html
+---
diff --git a/docs/latest/development/extensions-core/datasketches-extension.md b/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.md
similarity index 86%
copy from docs/latest/development/extensions-core/datasketches-extension.md
copy to docs/0.15.0-incubating/development/extensions-core/datasketches-extension.md
index 3a5b126..49ac225 100644
--- a/docs/latest/development/extensions-core/datasketches-extension.md
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-extension.md
@@ -24,7 +24,7 @@ title: "DataSketches extension"
 
 # DataSketches extension
 
-Apache Druid (incubating) aggregators based on [datasketches](http://datasketches.github.io/) library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.
+Apache Druid (incubating) aggregators based on [datasketches](https://datasketches.github.io/) library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.
 
 To use the datasketches aggregators, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
 
diff --git a/docs/latest/development/extensions-core/datasketches-hll.md b/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.md
similarity index 86%
copy from docs/latest/development/extensions-core/datasketches-hll.md
copy to docs/0.15.0-incubating/development/extensions-core/datasketches-hll.md
index 799cbc0..90e284f 100644
--- a/docs/latest/development/extensions-core/datasketches-hll.md
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-hll.md
@@ -24,7 +24,7 @@ title: "DataSketches HLL Sketch module"
 
 # DataSketches HLL Sketch module
 
-This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from [datasketches](http://datasketches.github.io/) library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of sketch columns  [...]
+This module provides Apache Druid (incubating) aggregators for distinct counting based on HLL sketch from [datasketches](https://datasketches.github.io/) library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of sketch columns [...]
 You can use the HLL sketch aggregator on columns of any identifiers. It will return estimated cardinality of the column.
 
 To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
diff --git a/docs/latest/development/extensions-core/datasketches-quantiles.md b/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.md
similarity index 72%
copy from docs/latest/development/extensions-core/datasketches-quantiles.md
copy to docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.md
index 2282de2..39b7cb9 100644
--- a/docs/latest/development/extensions-core/datasketches-quantiles.md
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-quantiles.md
@@ -24,7 +24,7 @@ title: "DataSketches Quantiles Sketch module"
 
 # DataSketches Quantiles Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from [datasketches](http://datasketches.github.io/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cummulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quantil [...]
+This module provides Apache Druid (incubating) aggregators based on numeric quantiles DoublesSketch from [datasketches](https://datasketches.github.io/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quanti [...]
 
 There are three major modes of operation:
 
@@ -99,6 +99,31 @@ This returns an approximation to the histogram given an array of split points th
 }
 ```
 
+#### Rank
+
+This returns an approximation to the rank of a given value that is the fraction of the distribution less than that value.
+
+```json
+{
+  "type"  : "quantilesDoublesSketchToRank",
+  "name": <output name>,
+  "field"  : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>,
+  "value" : <value>
+}
+```
+#### CDF
+
+This returns an approximation to the Cumulative Distribution Function given an array of split points that define the edges of the bins. An array of <i>m</i> unique, monotonically increasing split points divide the real number line into <i>m+1</i> consecutive disjoint intervals. The definition of an interval is inclusive of the left split point and exclusive of the right split point. The resulting array of fractions can be viewed as ranks of each split point with one additional rank that  [...]
+
+```json
+{
+  "type"  : "quantilesDoublesSketchToCDF",
+  "name": <output name>,
+  "field"  : <post aggregator that refers to a DoublesSketch (fieldAccess or another post aggregator)>,
+  "splitPoints" : <array of split points>
+}
+```
+
 #### Sketch Summary
 
 This returns a summary of the sketch that can be used for debugging. This is the result of calling toString() method.
diff --git a/docs/latest/development/extensions-core/datasketches-theta.md b/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.md
similarity index 97%
copy from docs/latest/development/extensions-core/datasketches-theta.md
copy to docs/0.15.0-incubating/development/extensions-core/datasketches-theta.md
index e248da3..5a2d1af 100644
--- a/docs/latest/development/extensions-core/datasketches-theta.md
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-theta.md
@@ -24,7 +24,7 @@ title: "DataSketches Theta Sketch module"
 
 # DataSketches Theta Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on Theta sketch from [datasketches](http://datasketches.github.io/) library. Note that sketch algorithms are approximate; see details in the "Accuracy" section of the datasketches doc. 
+This module provides Apache Druid (incubating) aggregators based on Theta sketch from [datasketches](https://datasketches.github.io/) library. Note that sketch algorithms are approximate; see details in the "Accuracy" section of the datasketches doc.
 At ingestion time, this aggregator creates the Theta sketch objects which get stored in Druid segments. Logically speaking, a Theta sketch object can be thought of as a Set data structure. At query time, sketches are read and aggregated (set unioned) together. In the end, by default, you receive the estimate of the number of unique entries in the sketch object. Also, you can use post aggregators to do union, intersection or difference on sketch columns in the same row. 
 Note that you can use `thetaSketch` aggregator on columns which were not ingested using the same. It will return estimated cardinality of the column. It is recommended to use it at ingestion time as well to make querying faster.
 
diff --git a/docs/latest/development/extensions-core/datasketches-tuple.md b/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.md
similarity index 96%
copy from docs/latest/development/extensions-core/datasketches-tuple.md
copy to docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.md
index 69db25a..bd83c9f 100644
--- a/docs/latest/development/extensions-core/datasketches-tuple.md
+++ b/docs/0.15.0-incubating/development/extensions-core/datasketches-tuple.md
@@ -24,7 +24,7 @@ title: "DataSketches Tuple Sketch module"
 
 # DataSketches Tuple Sketch module
 
-This module provides Apache Druid (incubating) aggregators based on Tuple sketch from [datasketches](http://datasketches.github.io/) library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.
+This module provides Apache Druid (incubating) aggregators based on Tuple sketch from [datasketches](https://datasketches.github.io/) library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.
 
 To use this aggregator, make sure you [include](../../operations/including-extensions.html) the extension in your config file:
 
diff --git a/docs/latest/development/extensions-core/druid-basic-security.md b/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.md
similarity index 77%
copy from docs/latest/development/extensions-core/druid-basic-security.md
copy to docs/0.15.0-incubating/development/extensions-core/druid-basic-security.md
index adba32b..4282f91 100644
--- a/docs/latest/development/extensions-core/druid-basic-security.md
+++ b/docs/0.15.0-incubating/development/extensions-core/druid-basic-security.md
@@ -172,6 +172,87 @@ Return a list of all user names.
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Return the name and role information of the user with name {userName}
 
+Example output:
+```json
+{
+  "name": "druid2",
+  "roles": [
+    "druidRole"
+  ]
+}
+```
+
+This API supports the following flags:
+- `?full`: The response will also include the full information for each role currently assigned to the user.
+
+Example output:
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "permissions": [
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "A",
+              "type": "DATASOURCE"
+            },
+            "action": "READ"
+          },
+          "resourceNamePattern": "A"
+        },
+        {
+          "resourceAction": {
+            "resource": {
+              "name": "C",
+              "type": "CONFIG"
+            },
+            "action": "WRITE"
+          },
+          "resourceNamePattern": "C"
+        }
+      ]
+    }
+  ]
+}
+```
+
+The output format of this API when `?full` is specified is deprecated, and in later versions it will be switched to the output format used when both the `?full` and `?simplifyPermissions` flags are set.
+
+The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+- `?full?simplifyPermissions`: When both `?full` and `?simplifyPermissions` are set, the permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field.
+
+```json
+{
+  "name": "druid2",
+  "roles": [
+    {
+      "name": "druidRole",
+      "users": null,
+      "permissions": [
+        {
+          "resource": {
+            "name": "A",
+            "type": "DATASOURCE"
+          },
+          "action": "READ"
+        },
+        {
+          "resource": {
+            "name": "C",
+            "type": "CONFIG"
+          },
+          "action": "WRITE"
+        }
+      ]
+    }
+  ]
+}
+```
+
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/users/{userName})`
 Create a new user with name {userName}
 
@@ -184,7 +265,56 @@ Delete the user with name {userName}
 Return a list of all role names.
 
 `GET(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
-Return name and permissions for the role named {roleName}
+Return name and permissions for the role named {roleName}.
+
+Example output:
+```json
+{
+  "name": "druidRole2",
+  "permissions": [
+    {
+      "resourceAction": {
+        "resource": {
+          "name": "E",
+          "type": "DATASOURCE"
+        },
+        "action": "WRITE"
+      },
+      "resourceNamePattern": "E"
+    }
+  ]
+}
+```
+
+The default output format of this API is deprecated and in later versions will be switched to the output format used when the `?simplifyPermissions` flag is set. The `resourceNamePattern` is a compiled version of the resource name regex. It is redundant and complicates the use of this API for clients such as frontends that edit the authorization configuration, as the permission format in this output does not match the format used for adding permissions to a role.
+
+This API supports the following flags:
+
+- `?full`: The output will contain an extra `users` list, containing the users that currently have this role.
+
+```json
+"users":["druid"]
+```
+
+- `?simplifyPermissions`: The permissions in the output will contain only a list of `resourceAction` objects, without the extraneous `resourceNamePattern` field. The `users` field will be null when `?full` is not specified.
+
+Example output:
+```json
+{
+  "name": "druidRole2",
+  "users": null,
+  "permissions": [
+    {
+      "resource": {
+        "name": "E",
+        "type": "DATASOURCE"
+      },
+      "action": "WRITE"
+    }
+  ]
+}
+```
+
 
 `POST(/druid-ext/basic-security/authorization/db/{authorizerName}/roles/{roleName})`
 Create a new role with name {roleName}.
@@ -310,6 +440,20 @@ For information on what HTTP methods are supported on a particular request endpo
 
 GET requires READ permission, while POST and DELETE require WRITE permission.
 
+### SQL Permissions
+
+Queries on Druid datasources require DATASOURCE READ permissions for the specified datasource.
+
+Queries on the [INFORMATION_SCHEMA tables](../../querying/sql.html#information-schema) will
+return information about datasources that the caller has DATASOURCE READ access to. Other
+datasources will be omitted.
+
+Queries on the [system schema tables](../../querying/sql.html#system-schema) require the following permissions:
+- `segments`: Segments will be filtered based on DATASOURCE READ permissions.
+- `servers`: The user requires STATE READ permissions.
+- `server_segments`: The user requires STATE READ permissions and segments will be filtered based on DATASOURCE READ permissions.
+- `tasks`: Tasks will be filtered based on DATASOURCE READ permissions.
+
 ## Configuration Propagation
 
 To prevent excessive load on the Coordinator, the Authenticator and Authorizer user/role database state is cached on each Druid process.
diff --git a/docs/latest/development/extensions-core/druid-kerberos.md b/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.md
similarity index 94%
copy from docs/latest/development/extensions-core/druid-kerberos.md
copy to docs/0.15.0-incubating/development/extensions-core/druid-kerberos.md
index 46af7f4..99d6e45 100644
--- a/docs/latest/development/extensions-core/druid-kerberos.md
+++ b/docs/0.15.0-incubating/development/extensions-core/druid-kerberos.md
@@ -54,13 +54,16 @@ The configuration examples in the rest of this document will use "kerberos" as t
 |`druid.auth.authenticator.kerberos.serverPrincipal`|`HTTP/_HOST@EXAMPLE.COM`| SPNego service principal used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.serverKeytab`|`/etc/security/keytabs/spnego.service.keytab`|SPNego service keytab used by druid processes|empty|Yes|
 |`druid.auth.authenticator.kerberos.authToLocal`|`RULE:[1:$1@$0](druid@EXAMPLE.COM)s/.*/druid DEFAULT`|It allows you to set a general rule for mapping principal names to local user names. It will be used if there is not an explicit mapping for the principal name that is being translated.|DEFAULT|No|
-|`druid.auth.authenticator.kerberos.excludedPaths`|`['/status','/health']`| Array of HTTP paths which which does NOT need to be authenticated.|None|No|
 |`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it, if you have multiple druid ndoes running on same machine with different ports as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
 |`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|
 
 As a note, it is required that the SPNego principal in use by the druid processes must start with HTTP (This specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
 The special string _HOST will be replaced automatically with the value of config `druid.host`
 
+### `druid.auth.authenticator.kerberos.excludedPaths`
+
+In older releases, the Kerberos authenticator had an `excludedPaths` property that allowed the user to specify a list of paths where authentication checks should be skipped. This property has been removed from the Kerberos authenticator because the path exclusion functionality is now handled across all authenticators/authorizers by setting `druid.auth.unsecuredPaths`, as described in the [main auth documentation](../../design/auth.html).
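+
+For example, a minimal sketch of the replacement configuration (the path listed is illustrative) would be:
+
+```
+druid.auth.unsecuredPaths=["/status/health"]
+```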
+
 ### Auth to Local Syntax
 `druid.auth.authenticator.kerberos.authToLocal` allows you to set a general rules for mapping principal names to local user names.
 The syntax for mapping rules is `RULE:\[n:string](regexp)s/pattern/replacement/g`. The integer n indicates how many components the target principal should have. If this matches, then a string will be formed from string, substituting the realm of the principal for $0 and the n‘th component of the principal for $n. e.g. if the principal was druid/admin then `\[2:$2$1suffix]` would result in the string `admindruidsuffix`.
diff --git a/docs/0.15.0-incubating/development/extensions-core/druid-lookups.md b/docs/0.15.0-incubating/development/extensions-core/druid-lookups.md
new file mode 100644
index 0000000..53476eb
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/druid-lookups.md
@@ -0,0 +1,150 @@
+---
+layout: doc_page
+title: "Cached Lookup Module"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Cached Lookup Module
+
+<div class="note info">Please note that this is an experimental module and development/testing is still at an early stage. Feel free to try it and give us your feedback.</div>
+ 
+## Description
+This Apache Druid (incubating) module provides a per-lookup caching mechanism for JDBC data sources.
+The main goal of this cache is to speed up access to high-latency lookup sources and to provide caching isolation for every lookup source.
+Thus users can define various caching strategies and implementations per lookup, even if the source is the same.
+This module can be used side by side with other lookup modules such as the global cached lookup module.
+
+To use this extension, please make sure to [include](../../operations/including-extensions.html) `druid-lookups-cached-single` as an extension.
+
+## Architecture
+Generally speaking, this module can be divided into two main components: the data fetcher layer and the caching layer.
+
+### Data Fetcher layer
+
+The first part is the data fetcher layer API, `DataFetcher`, which exposes a set of fetch methods to fetch data from the actual lookup dimension source.
+For instance, `JdbcDataFetcher` provides an implementation of `DataFetcher` that can be used to fetch key/value pairs from an RDBMS via a JDBC driver.
+If you need a new type of data fetcher, all you need to do is implement the `DataFetcher` interface and load it via another Druid module.
+### Caching layer
+
+This extension comes with two different caching strategies: the first is poll-based and the second is load-based.
+#### Poll lookup cache
+
+The poll cache strategy periodically fetches and swaps all of the key/value pairs from the lookup source.
+Hence, users should make sure that the cache can fit all the data.
+The current implementation provides two types of poll cache: the first is on-heap (using an immutable map), while the second uses a MapDB-based off-heap map.
+Users can also implement a different lookup polling cache by implementing the `PollingCacheFactory` and `PollingCache` interfaces.
+
+#### Loading lookup
+The loading cache strategy loads a key/value pair upon request for the key itself; the general algorithm is load-if-absent.
+Once the key/value pair is loaded, eviction will occur according to the cache eviction policy.
+This module comes with two loading lookup implementations: the first is on-heap, backed by a Guava cache; the second is an off-heap MapDB implementation.
+Both implementations offer various eviction strategies.
+As with the polling cache, developers can implement a new type of loading cache by implementing the `LookupLoadingCache` interface.
+ 
+## Configuration and Operation
+
+
+### Polling Lookup
+
+**Note that the current implementation of `offHeapPolling` and `onHeapPolling` will create two caches: one to look up the value based on the key, and the other to reverse look up the key from the value.**
+
+|Field|Type|Description|Required|default|
+|-----|----|-----------|--------|-------|
+|dataFetcher|Json object|Specifies the lookup data fetcher type  to use in order to fetch data|yes|null|
+|cacheFactory|Json Object|Cache factory implementation|no |onHeapPolling|
+|pollPeriod|Period|polling period |no |null (poll once)|
+
+
+##### Example of Polling On-heap Lookup
+This example demonstrates a polling cache that will update its on-heap cache every 10 minutes.
+```json
+{
+    "type":"pollingLookup",
+   "pollPeriod":"PT10M",
+   "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"},
+   "cacheFactory":{"type":"onHeapPolling"}
+}
+
+```
+
+##### Example of Polling Off-heap Lookup
+This example demonstrates an off-heap lookup that will be cached once and never swapped (`pollPeriod == null`).
+
+```json
+{
+    "type":"pollingLookup",
+   "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"},
+   "cacheFactory":{"type":"offHeapPolling"}
+}
+
+```
+
+
+### Loading lookup
+
+|Field|Type|Description|Required|default|
+|-----|----|-----------|--------|-------|
+|dataFetcher|Json object|Specifies the lookup data fetcher type  to use in order to fetch data|yes|null|
+|loadingCacheSpec|Json Object|Lookup cache spec implementation|yes |null|
+|reverseLoadingCacheSpec|Json Object| Reverse lookup cache  implementation|yes |null|
+ 
+
+##### Example Loading On-heap Guava
+
+Guava cache configuration spec. 
+
+|Field|Type|Description|Required|default|
+|-----|----|-----------|--------|-------|
+|concurrencyLevel|int|Allowed concurrency among update operations|no|4|
+|initialCapacity|int|Initial capacity size|no |null|
+|maximumSize|long| Specifies the maximum number of entries the cache may contain.|no |null (infinite capacity)|
+|expireAfterAccess|long| Specifies the eviction time after last read in milliseconds.|no |null (No read-time-based eviction when set to null)|
+|expireAfterWrite|long| Specifies the eviction time after last write in milliseconds.|no |null (No write-time-based eviction when set to null)|
+
+```json
+{
+   "type":"loadingLookup",
+   "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"},
+   "loadingCacheSpec":{"type":"guava"},
+   "reverseLoadingCacheSpec":{"type":"guava", "maximumSize":500000, "expireAfterAccess":100000, "expireAfterAccess":10000}
+}
+```
+
+##### Example Loading Off-heap MapDB
+
+The off-heap cache is backed by the [MapDB](http://www.mapdb.org/) implementation. MapDB uses direct memory as its memory pool; please take that into account when limiting the JVM direct memory setup.
+
+|Field|Type|Description|Required|default|
+|-----|----|-----------|--------|-------|
+|maxStoreSize|double|maximal size of store in GB, if store is larger entries will start expiring|no |0|
+|maxEntriesSize|long| Specifies the maximum number of entries the cache may contain.|no |0 (infinite capacity)|
+|expireAfterAccess|long| Specifies the eviction time after last read in milliseconds.|no |0 (No read-time-based eviction when set to null)|
+|expireAfterWrite|long| Specifies the eviction time after last write in milliseconds.|no |0 (No write-time-based eviction when set to null)|
+
+
+```json
+{
+   "type":"loadingLookup",
+   "dataFetcher":{ "type":"jdbcDataFetcher", "connectorConfig":"jdbc://mysql://localhost:3306/my_data_base", "table":"lookup_table_name", "keyColumn":"key_column_name", "valueColumn": "value_column_name"},
+   "loadingCacheSpec":{"type":"mapDb", "maxEntriesSize":100000},
+   "reverseLoadingCacheSpec":{"type":"mapDb", "maxStoreSize":5, "expireAfterAccess":100000, "expireAfterAccess":10000}
+}
+```
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/development/extensions-core/examples.md
similarity index 52%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/development/extensions-core/examples.md
index adf4e24..bea6cf5 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/development/extensions-core/examples.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "Extension Examples"
 ---
 
 <!--
@@ -22,18 +22,24 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# Extension Examples
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+## TwitterSpritzerFirehose
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+This firehose connects directly to the Twitter Spritzer data stream.
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+Sample spec:
 
-```
-druid.extensions.loadList=["druid-histogram"]
+```json
+"firehose" : {
+    "type" : "twitzer",
+    "maxEventCount": -1,
+    "maxRunMinutes": 0
+}
 ```
 
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+|property|description|default|required?|
+|--------|-----------|-------|---------|
+|type|This should be "twitzer"|N/A|yes|
+|maxEventCount|max events to receive, -1 is infinite, 0 means nothing is delivered; use this to prevent infinite space consumption or to prevent getting throttled at an inconvenient time.|N/A|yes|
+|maxRunMinutes|maximum number of minutes to fetch Twitter events.  Use this to prevent getting throttled at an inconvenient time. If zero or less, no time limit for run.|N/A|yes|
diff --git a/docs/0.15.0-incubating/development/extensions-core/hdfs.md b/docs/0.15.0-incubating/development/extensions-core/hdfs.md
new file mode 100644
index 0000000..c129698
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/hdfs.md
@@ -0,0 +1,56 @@
+---
+layout: doc_page
+title: "HDFS"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# HDFS
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-hdfs-storage` as an extension.
+
+## Deep Storage 
+
+### Configuration
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|hdfs||Must be set.|
+|`druid.storage.storageDirectory`||Directory for storing segments.|Must be set.|
+|`druid.hadoop.security.kerberos.principal`|`druid@EXAMPLE.COM`| Principal user name |empty|
+|`druid.hadoop.security.kerberos.keytab`|`/etc/security/keytabs/druid.headlessUser.keytab`|Path to keytab file|empty|
+
+If you are using the Hadoop indexer, set your output directory to a location on Hadoop and it will work.
+If you want to eagerly authenticate against a secured Hadoop/HDFS cluster, you must set `druid.hadoop.security.kerberos.principal` and `druid.hadoop.security.kerberos.keytab`. This is an alternative to the cron job method that runs the `kinit` command periodically.
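+
+As an illustrative sketch (the storage directory is a placeholder, and the Kerberos lines are only needed for eager authentication against a secured cluster), the relevant entries in a process's `runtime.properties` might look like:
+
+```properties
+druid.extensions.loadList=["druid-hdfs-storage"]
+druid.storage.type=hdfs
+druid.storage.storageDirectory=/druid/segments
+
+# Only needed for eager Kerberos authentication
+druid.hadoop.security.kerberos.principal=druid@EXAMPLE.COM
+druid.hadoop.security.kerberos.keytab=/etc/security/keytabs/druid.headlessUser.keytab
+```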
+
+## Google Cloud Storage
+
+The HDFS extension can also be used for GCS as deep storage.
+
+### Configuration
+
+|Property|Possible Values|Description|Default|
+|--------|---------------|-----------|-------|
+|`druid.storage.type`|hdfs||Must be set.|
+|`druid.storage.storageDirectory`|gs://bucket/example/directory|Directory for storing segments.|Must be set.|
+
+All services that need to access GCS need to have the [GCS connector jar](https://cloud.google.com/hadoop/google-cloud-storage-connector#manualinstallation) on their classpath. One option is to place this jar in `<druid>/lib/` and `<druid>/extensions/druid-hdfs-storage/`.
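+
+For example, a minimal sketch of the corresponding deep storage properties (the bucket and path are placeholders):
+
+```properties
+druid.extensions.loadList=["druid-hdfs-storage"]
+druid.storage.type=hdfs
+druid.storage.storageDirectory=gs://bucket/example/directory
+```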
+
+Tested with Druid 0.9.0, Hadoop 2.7.2 and gcs-connector jar 1.4.4-hadoop2.
diff --git a/docs/0.15.0-incubating/development/extensions-core/kafka-eight-firehose.md b/docs/0.15.0-incubating/development/extensions-core/kafka-eight-firehose.md
new file mode 100644
index 0000000..740e5fa
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/kafka-eight-firehose.md
@@ -0,0 +1,54 @@
+---
+layout: doc_page
+title: "Apache Kafka Eight Firehose"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Kafka Eight Firehose
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-kafka-eight` as an extension.
+
+This firehose acts as a Kafka 0.8.x consumer and ingests data from Kafka.
+
+Sample spec:
+
+```json
+"firehose": {
+  "type": "kafka-0.8",
+  "consumerProps": {
+    "zookeeper.connect": "localhost:2181",
+    "zookeeper.connection.timeout.ms" : "15000",
+    "zookeeper.session.timeout.ms" : "15000",
+    "zookeeper.sync.time.ms" : "5000",
+    "group.id": "druid-example",
+    "fetch.message.max.bytes" : "1048586",
+    "auto.offset.reset": "largest",
+    "auto.commit.enable": "false"
+  },
+  "feed": "wikipedia"
+}
+```
+
+|property|description|required?|
+|--------|-----------|---------|
+|type|This should be "kafka-0.8"|yes|
+|consumerProps|The full list of consumer configs can be found [here](https://kafka.apache.org/08/configuration.html).|yes|
+|feed|Kafka maintains feeds of messages in categories called topics. This is the topic name.|yes|
diff --git a/docs/0.15.0-incubating/development/extensions-core/kafka-extraction-namespace.md b/docs/0.15.0-incubating/development/extensions-core/kafka-extraction-namespace.md
new file mode 100644
index 0000000..f28c233
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/kafka-extraction-namespace.md
@@ -0,0 +1,70 @@
+---
+layout: doc_page
+title: "Apache Kafka Lookups"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Kafka Lookups
+
+<div class="note caution">
+Lookups are an <a href="../experimental.html">experimental</a> feature.
+</div>
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-lookups-cached-global` and `druid-kafka-extraction-namespace` as an extension.
+
+If you need updates to propagate as promptly as possible, you can plug into a Kafka topic as a LookupExtractorFactory, where each message's key is the old value and its payload is the desired new value (both in UTF-8).
+
+```json
+{
+  "type":"kafka",
+  "kafkaTopic":"testTopic",
+  "kafkaProperties":{"zookeeper.connect":"somehost:2181/kafka"}
+}
+```
+
+|Parameter|Description|Required|Default|
+|---------|-----------|--------|-------|
+|`kafkaTopic`|The kafka topic to read the data from|Yes||
+|`kafkaProperties`|Kafka consumer properties. At least "zookeeper.connect" must be specified. Only the ZooKeeper connector is supported.|Yes||
+|`connectTimeout`|How long to wait for an initial connection|No|`0` (do not wait)|
+|`isOneToOne`|The map is a one-to-one (see [Lookup DimensionSpecs](../../querying/dimensionspecs.html))|No|`false`|
+
+The extension `kafka-extraction-namespace` enables reading from a kafka feed which has name/key pairs to allow renaming of dimension values. An example use case would be to rename an ID to a human readable format.
+
+The consumer properties `group.id` and `auto.offset.reset` CANNOT be set in `kafkaProperties` as they are set by the extension as `UUID.randomUUID().toString()` and `smallest` respectively.
+
+See [lookups](../../querying/lookups.html) for how to configure and use lookups.
+
+# Limitations
+
+Currently the Kafka lookup extractor feeds the entire Kafka stream into a local cache. If you are using `onHeap` caching, this can easily exhaust your Java heap if the Kafka stream contains a large number of unique keys.
+`offHeap` caching should alleviate these concerns, but there is still a limit to the quantity of data that can be stored.
+There is currently no eviction policy.
+
+## Testing the Kafka rename functionality
+
+To test this setup, you can send key/value pairs to a kafka stream via the following producer console:
+
+```
+./bin/kafka-console-producer.sh --property parse.key=true --property key.separator="->" --broker-list localhost:9092 --topic testTopic
+```
+
+Renames can then be published as `OLD_VAL->NEW_VAL` followed by a newline (enter or return).
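+
+For example, typing the following lines into the producer console above (using the `->` key separator configured in the command) would remap `foo` to `bar` and `baz` to `bat`:
+
+```
+foo->bar
+baz->bat
+```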
diff --git a/docs/latest/development/extensions-core/kafka-ingestion.md b/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.md
similarity index 85%
copy from docs/latest/development/extensions-core/kafka-ingestion.md
copy to docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.md
index 96828f9..c070e46 100644
--- a/docs/latest/development/extensions-core/kafka-ingestion.md
+++ b/docs/0.15.0-incubating/development/extensions-core/kafka-ingestion.md
@@ -201,7 +201,6 @@ For Roaring bitmaps:
 |`completionTimeout`|ISO8601 Period|The length of time to wait before declaring a publishing task as failed and terminating it. If this is set too low, your tasks may never publish. The publishing clock for a task begins roughly after `taskDuration` elapses.|no (default == PT30M)|
 |`lateMessageRejectionPeriod`|ISO8601 Period|Configure tasks to reject messages with timestamps earlier than this period before the task was created; for example if this is set to `PT1H` and the supervisor creates a task at *2016-01-01T12:00Z*, messages with timestamps earlier than *2016-01-01T11:00Z* will be dropped. This may help prevent concurrency issues if your data stream has late messages and you have multiple pipelines that need to operate on the same segments (e.g. a realtime an [...]
 |`earlyMessageRejectionPeriod`|ISO8601 Period|Configure tasks to reject messages with timestamps later than this period after the task reached its taskDuration; for example if this is set to `PT1H`, the taskDuration is set to `PT1H` and the supervisor creates a task at *2016-01-01T12:00Z*, messages with timestamps later than *2016-01-01T14:00Z* will be dropped. **Note:** Tasks sometimes run past their task duration, for example, in cases of supervisor failover. Setting earlyMessageReject [...]
-|`skipOffsetGaps`|Boolean|Whether or not to allow gaps of missing offsets in the Kafka stream. This is required for compatibility with implementations such as MapR Streams which does not guarantee consecutive offsets. If this is false, an exception will be thrown if offsets are not consecutive.|no (default == false)|
 
 ## Operations
 
@@ -215,12 +214,61 @@ offsets as reported by Kafka, the consumer lag per partition, as well as the agg
 consumer lag per partition may be reported as negative values if the supervisor has not received a recent latest offset
 response from Kafka. The aggregate lag value will always be >= 0.
 
+The status report also contains the supervisor's state and a list of recently thrown exceptions (reported as
+`recentErrors`, whose max size can be controlled using the `druid.supervisor.maxStoredExceptionEvents` configuration).
+There are two fields related to the supervisor's state - `state` and `detailedState`. The `state` field will always be
+one of a small number of generic states that are applicable to any type of supervisor, while the `detailedState` field
+will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor's
+activities than the generic `state` field.
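+
+For illustration, an abbreviated and hypothetical status payload might look like the following (the supervisor id, values, and exact set of fields shown here are assumptions; the actual response contains additional information):
+
+```json
+{
+  "id": "metrics-kafka",
+  "payload": {
+    "dataSource": "metrics-kafka",
+    "state": "RUNNING",
+    "detailedState": "RUNNING",
+    "healthy": true,
+    "recentErrors": []
+  }
+}
+```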
+
+The possible `state` values are: [`PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`, `UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`]
+
+The list of `detailedState` values and their corresponding `state` mapping is as follows:
+
+|Detailed State|Corresponding State|Description|
+|--------------|-------------------|-----------|
+|UNHEALTHY_SUPERVISOR|UNHEALTHY_SUPERVISOR|The supervisor has encountered errors on the past `druid.supervisor.unhealthinessThreshold` iterations|
+|UNHEALTHY_TASKS|UNHEALTHY_TASKS|The last `druid.supervisor.taskUnhealthinessThreshold` tasks have all failed|
+|UNABLE_TO_CONNECT_TO_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kafka and has not successfully connected in the past|
+|LOST_CONTACT_WITH_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kafka but has successfully connected in the past|
+|PENDING (first iteration only)|PENDING|The supervisor has been initialized and hasn't started connecting to the stream|
+|CONNECTING_TO_STREAM (first iteration only)|RUNNING|The supervisor is trying to connect to the stream and update partition data|
+|DISCOVERING_INITIAL_TASKS (first iteration only)|RUNNING|The supervisor is discovering already-running tasks|
+|CREATING_TASKS (first iteration only)|RUNNING|The supervisor is creating tasks and discovering state|
+|RUNNING|RUNNING|The supervisor has started tasks and is waiting for taskDuration to elapse|
+|SUSPENDED|SUSPENDED|The supervisor has been suspended|
+|STOPPING|STOPPING|The supervisor is stopping|
+
+On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence:
+  1) Fetch the list of partitions from Kafka and determine the starting offset for each partition (either based on the
+  last processed offset if continuing, or starting from the beginning or ending of the stream if this is a new topic).
+  2) Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match
+  the supervisor's configuration, else signal them to stop.
+  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
+  4) Handle tasks that have exceeded `taskDuration` and should transition from the reading to publishing state.
+  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
+  6) Handle tasks that have failed and clean up the supervisor's internal state.
+  7) Compare the list of healthy tasks to the requested `taskCount` and `replicas` configurations and create additional tasks if required.
+
+The `detailedState` field will show additional values (those marked with "first iteration only") the first time the
+supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
+initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can't connect to
+Kafka, it can't read from the Kafka topic, or it can't communicate with existing tasks). Once the supervisor is stable -
+that is, once it has completed a full execution without encountering any issues - `detailedState` will show a `RUNNING`
+state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.
+
 ### Getting Supervisor Ingestion Stats Report
 
 `GET /druid/indexer/v1/supervisor/<supervisorId>/stats` returns a snapshot of the current ingestion row counters for each task being managed by the supervisor, along with moving averages for the row counters.
 
 See [Task Reports: Row Stats](../../ingestion/reports.html#row-stats) for more information.
 
+### Supervisor Health Check
+
+`GET /druid/indexer/v1/supervisor/<supervisorId>/health` returns `200 OK` if the supervisor is healthy and
+`503 Service Unavailable` if it is unhealthy. Healthiness is determined by the supervisor's `state` (as returned by the
+`/status` endpoint) and the `druid.supervisor.*` Overlord configuration thresholds.
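+
+For example, a quick check from the command line (the Overlord host, port, and supervisor id below are placeholders):
+
+```
+curl -s -o /dev/null -w "%{http_code}\n" http://OVERLORD_IP:8090/druid/indexer/v1/supervisor/my_supervisor/health
+```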
+
 ### Updating Existing Supervisors
 
 `POST /druid/indexer/v1/supervisor` can be used to update existing supervisor spec.
diff --git a/docs/latest/development/extensions-core/kinesis-ingestion.md b/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.md
similarity index 87%
copy from docs/latest/development/extensions-core/kinesis-ingestion.md
copy to docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.md
index 3d406ed..0578dd2 100644
--- a/docs/latest/development/extensions-core/kinesis-ingestion.md
+++ b/docs/0.15.0-incubating/development/extensions-core/kinesis-ingestion.md
@@ -113,7 +113,7 @@ A sample supervisor spec is shown below:
 }
 ```
 
-## Supervisor Configuration
+## Supervisor Spec
 
 |Field|Description|Required|
 |--------|-----------|---------|
@@ -218,12 +218,58 @@ To authenticate with AWS, you must provide your AWS access key and AWS secret ke
 ```
 -Ddruid.kinesis.accessKey=123 -Ddruid.kinesis.secretKey=456
 ```
-The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance profile provider (in this order).
+The AWS access key ID and secret access key are used for Kinesis API requests. If this is not provided, the service will
+look for credentials set in environment variables, in the default profile configuration file, and from the EC2 instance
+profile provider (in this order).
 
 ### Getting Supervisor Status Report
 
-`GET /druid/indexer/v1/supervisor/<supervisorId>/status` returns a snapshot report of the current state of the tasks managed by the given supervisor. This includes the latest
-sequence numbers as reported by Kinesis. Unlike the Kafka Indexing Service, stats about lag is not yet supported.
+`GET /druid/indexer/v1/supervisor/<supervisorId>/status` returns a snapshot report of the current state of the tasks 
+managed by the given supervisor. This includes the latest sequence numbers as reported by Kinesis. Unlike the Kafka
+Indexing Service, stats about lag are not yet supported.
+
+The status report also contains the supervisor's state and a list of recently thrown exceptions (reported as
+`recentErrors`, whose max size can be controlled using the `druid.supervisor.maxStoredExceptionEvents` configuration).
+There are two fields related to the supervisor's state - `state` and `detailedState`. The `state` field will always be
+one of a small number of generic states that are applicable to any type of supervisor, while the `detailedState` field
+will contain a more descriptive, implementation-specific state that may provide more insight into the supervisor's
+activities than the generic `state` field.
+
+The possible `state` values are: [`PENDING`, `RUNNING`, `SUSPENDED`, `STOPPING`, `UNHEALTHY_SUPERVISOR`, `UNHEALTHY_TASKS`]
+
+The list of `detailedState` values and their corresponding `state` mapping is as follows:
+
+|Detailed State|Corresponding State|Description|
+|--------------|-------------------|-----------|
+|UNHEALTHY_SUPERVISOR|UNHEALTHY_SUPERVISOR|The supervisor has encountered errors on the past `druid.supervisor.unhealthinessThreshold` iterations|
+|UNHEALTHY_TASKS|UNHEALTHY_TASKS|The last `druid.supervisor.taskUnhealthinessThreshold` tasks have all failed|
+|UNABLE_TO_CONNECT_TO_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kinesis and has not successfully connected in the past|
+|LOST_CONTACT_WITH_STREAM|UNHEALTHY_SUPERVISOR|The supervisor is encountering connectivity issues with Kinesis but has successfully connected in the past|
+|PENDING (first iteration only)|PENDING|The supervisor has been initialized and hasn't started connecting to the stream|
+|CONNECTING_TO_STREAM (first iteration only)|RUNNING|The supervisor is trying to connect to the stream and update partition data|
+|DISCOVERING_INITIAL_TASKS (first iteration only)|RUNNING|The supervisor is discovering already-running tasks|
+|CREATING_TASKS (first iteration only)|RUNNING|The supervisor is creating tasks and discovering state|
+|RUNNING|RUNNING|The supervisor has started tasks and is waiting for taskDuration to elapse|
+|SUSPENDED|SUSPENDED|The supervisor has been suspended|
+|STOPPING|STOPPING|The supervisor is stopping|
+
+On each iteration of the supervisor's run loop, the supervisor completes the following tasks in sequence:
+  1) Fetch the list of shards from Kinesis and determine the starting sequence number for each shard (either based on the
+  last processed sequence number if continuing, or starting from the beginning or ending of the stream if this is a new stream).
+  2) Discover any running indexing tasks that are writing to the supervisor's datasource and adopt them if they match
+  the supervisor's configuration, else signal them to stop.
+  3) Send a status request to each supervised task to update our view of the state of the tasks under our supervision.
+  4) Handle tasks that have exceeded `taskDuration` and should transition from the reading to publishing state.
+  5) Handle tasks that have finished publishing and signal redundant replica tasks to stop.
+  6) Handle tasks that have failed and clean up the supervisor's internal state.
+  7) Compare the list of healthy tasks to the requested `taskCount` and `replicas` configurations and create additional tasks if required.
+
+The `detailedState` field will show additional values (those marked with "first iteration only") the first time the
+supervisor executes this run loop after startup or after resuming from a suspension. This is intended to surface
+initialization-type issues, where the supervisor is unable to reach a stable state (perhaps because it can't connect to
+Kinesis, it can't read from the stream, or it can't communicate with existing tasks). Once the supervisor is stable -
+that is, once it has completed a full execution without encountering any issues - `detailedState` will show a `RUNNING`
+state until it is stopped, suspended, or hits a failure threshold and transitions to an unhealthy state.
 
 ### Updating Existing Supervisors
 
@@ -390,4 +436,4 @@ requires the user to manually provide the Kinesis Client Library on the classpat
 compatible with Apache projects.
 
 To enable this feature, add the `amazon-kinesis-client` (tested on version `1.9.2`) jar file ([link](https://mvnrepository.com/artifact/com.amazonaws/amazon-kinesis-client/1.9.2)) under `dist/druid/extensions/druid-kinesis-indexing-service/`.
-Then when submitting a supervisor-spec, set `deaggregate` to true.
\ No newline at end of file
+Then when submitting a supervisor-spec, set `deaggregate` to true.
diff --git a/docs/0.15.0-incubating/development/extensions-core/lookups-cached-global.md b/docs/0.15.0-incubating/development/extensions-core/lookups-cached-global.md
new file mode 100644
index 0000000..55a2c38
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/lookups-cached-global.md
@@ -0,0 +1,379 @@
+---
+layout: doc_page
+title: "Globally Cached Lookups"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Globally Cached Lookups
+
+<div class="note caution">
+Lookups are an <a href="../experimental.html">experimental</a> feature.
+</div>
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `druid-lookups-cached-global` as an extension.
+
+## Configuration
+<div class="note caution">
+Static configuration is no longer supported. Lookups can be configured through
+<a href="../../querying/lookups.html#configuration">dynamic configuration</a>.
+</div>
+
+Globally cached lookups are appropriate for lookups which are too large to pass at query time,
+or which should not be passed at query time because the data is meant to reside on and be handled by the Druid servers,
+yet are small enough to reasonably hold in memory. This usually means tens to tens of thousands of entries per lookup.
+
+Globally cached lookups all draw from the same cache pool, allowing each process to have a fixed cache pool that can be used by cached lookups.
+
+Globally cached lookups can be specified as part of the [cluster-wide config for lookups](../../querying/lookups.html) as a type of `cachedNamespace`:
+
+ ```json
+ {
+    "type": "cachedNamespace",
+    "extractionNamespace": {
+       "type": "uri",
+       "uri": "file:/tmp/prefix/",
+       "namespaceParseSpec": {
+         "format": "csv",
+         "columns": [
+           "key",
+           "value"
+         ]
+       },
+       "pollPeriod": "PT5M"
+     },
+     "firstCacheTimeout": 0
+ }
+ ```
+ 
+ ```json
+{
+    "type": "cachedNamespace",
+    "extractionNamespace": {
+       "type": "jdbc",
+       "connectorConfig": {
+         "createTables": true,
+         "connectURI": "jdbc:mysql:\/\/localhost:3306\/druid",
+         "user": "druid",
+         "password": "diurd"
+       },
+       "table": "lookupTable",
+       "keyColumn": "mykeyColumn",
+       "valueColumn": "myValueColumn",
+       "filter" : "myFilterSQL (Where clause statement  e.g LOOKUPTYPE=1)",
+       "tsColumn": "timeColumn"
+    },
+    "firstCacheTimeout": 120000,
+    "injective":true
+}
+ ```
+
+The parameters are as follows
+
+|Property|Description|Required|Default|
+|--------|-----------|--------|-------|
+|`extractionNamespace`|Specifies how to populate the local cache. See below|Yes|-|
+|`firstCacheTimeout`|How long to wait (in ms) for the first run of the cache to populate. 0 indicates to not wait|No|`0` (do not wait)|
+|`injective`|If the underlying map is [injective](../../querying/lookups.html#query-execution) (keys and values are unique) then optimizations can occur internally by setting this to `true`|No|`false`|
+
+If `firstCacheTimeout` is set to a non-zero value, it should be less than `druid.manager.lookups.hostUpdateTimeout`. If `firstCacheTimeout` is NOT set, then management is essentially asynchronous and does not know if a lookup succeeded or failed in starting. In such a case logs from the processes using lookups should be monitored for repeated failures.
+
+Proper functionality of globally cached lookups requires the following extension to be loaded on the Broker, Peon, and Historical processes:
+`druid-lookups-cached-global`
+
+## Example configuration
+
+In a simple case where only one [tier](../../querying/lookups.html#dynamic-configuration) exists (`realtime_customer2`) with one `cachedNamespace` lookup called `country_code`, the resulting configuration json looks similar to the following:
+
+```json
+{
+  "realtime_customer2": {
+    "country_code": {
+      "version": "v0",
+      "lookupExtractorFactory": {
+        "type": "cachedNamespace",
+        "extractionNamespace": {
+          "type": "jdbc",
+          "connectorConfig": {
+            "createTables": true,
+            "connectURI": "jdbc:mysql:\/\/localhost:3306\/druid",
+            "user": "druid",
+            "password": "diurd"
+          },
+          "table": "lookupValues",
+          "keyColumn": "value_id",
+          "valueColumn": "value_text",
+          "filter": "value_type='country'",
+          "tsColumn": "timeColumn"
+        },
+        "firstCacheTimeout": 120000,
+        "injective": true
+      }
+    }
+  }
+}
+```
+
+Where the Coordinator endpoint `/druid/coordinator/v1/lookups/realtime_customer2/country_code` should return
+
+```json
+{
+  "version": "v0",
+  "lookupExtractorFactory": {
+    "type": "cachedNamespace",
+    "extractionNamespace": {
+      "type": "jdbc",
+      "connectorConfig": {
+        "createTables": true,
+        "connectURI": "jdbc:mysql://localhost:3306/druid",
+        "user": "druid",
+        "password": "diurd"
+      },
+      "table": "lookupValues",
+      "keyColumn": "value_id",
+      "valueColumn": "value_text",
+      "filter": "value_type='country'",
+      "tsColumn": "timeColumn"
+    },
+    "firstCacheTimeout": 120000,
+    "injective": true
+  }
+}
+```
+
+## Cache Settings
+
+Lookups are cached locally on Historical processes. The following are settings used by the processes which service queries when 
+setting namespaces (Broker, Peon, Historical)
+
+|Property|Description|Default|
+|--------|-----------|-------|
+|`druid.lookup.namespace.cache.type`|Specifies the type of caching to be used by the namespaces. May be one of [`offHeap`, `onHeap`]. `offHeap` uses a temporary file for off-heap storage of the namespace (memory mapped files). `onHeap` stores all cache on the heap in standard java map types.|`onHeap`|
+|`druid.lookup.namespace.numExtractionThreads`|The number of threads in the thread pool dedicated to lookup extraction and updates. This number may need to be scaled up to avoid timeouts if you have many lookups or they take a long time to extract.|2|
+|`druid.lookup.namespace.numBufferedEntries`|If using offHeap caching, the number of records to be stored on an on-heap buffer.|100,000|
+
+The cache is populated in different ways depending on the settings below. In general, most namespaces employ 
+a `pollPeriod` at the end of which time they poll the remote resource of interest for updates.
+
+`onHeap` uses `ConcurrentMap`s in the Java heap, and thus affects garbage collection and heap sizing.
+`offHeap` uses an on-heap buffer and MapDB with memory-mapped files in the Java temporary directory.
+If the total number of entries in the `cachedNamespace` exceeds the buffer's configured capacity, the excess is kept in memory as page cache, and paged in and out by general OS tuning.
+It is highly recommended to set `druid.lookup.namespace.numBufferedEntries` when using `offHeap`; the value should be chosen from the range between 10% and 50% of the number of entries in the lookup.
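+
+For example, a sketch of the relevant settings for a process using off-heap caching (the thread count and buffer size are illustrative and should be tuned to your lookups):
+
+```properties
+druid.lookup.namespace.cache.type=offHeap
+druid.lookup.namespace.numExtractionThreads=2
+druid.lookup.namespace.numBufferedEntries=100000
+```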
+
+
+# Supported Lookups
+
+For additional lookups, please see our [extensions list](../extensions.html).
+
+## URI lookup
+
+The remapping values for each globally cached lookup can be specified by a json object as per the following examples:
+
+```json
+{
+  "type":"uri",
+  "uri": "s3://bucket/some/key/prefix/renames-0003.gz",
+  "namespaceParseSpec":{
+    "format":"csv",
+    "columns":["key","value"]
+  },
+  "pollPeriod":"PT5M"
+}
+```
+
+```json
+{
+  "type":"uri",
+  "uriPrefix": "s3://bucket/some/key/prefix/",
+  "fileRegex":"renames-[0-9]*\\.gz",
+  "namespaceParseSpec":{
+    "format":"csv",
+    "columns":["key","value"]
+  },
+  "pollPeriod":"PT5M"
+}
+```
+
+|Property|Description|Required|Default|
+|--------|-----------|--------|-------|
+|`pollPeriod`|Period between polling for updates|No|0 (only once)|
+|`uri`|URI for the file of interest|No|Use `uriPrefix`|
+|`uriPrefix`|A URI which specifies a directory (or other searchable resource) in which to search for files|No|Use `uri`|
+|`fileRegex`|Optional regex for matching the file name under `uriPrefix`. Only used if `uriPrefix` is used|No|`".*"`|
+|`namespaceParseSpec`|How to interpret the data at the URI|Yes||
+
+Exactly one of `uri` or `uriPrefix` must be specified.
+
+The `pollPeriod` value specifies the period in ISO 8601 format between checks for replacement data for the lookup. If the source of the lookup is capable of providing a timestamp, the lookup will only be updated if it has changed since the prior tick of `pollPeriod`. A value of 0, an absent parameter, or `null` all mean populate once and do not attempt to look for new data later. Whenever a poll occurs, the updating system will look for a file with the most recent timestamp and assume t [...]
+
+The `namespaceParseSpec` can be one of a number of values. Each of the examples below would rename foo to bar, baz to bat, and buck to truck. All parseSpec types assume each input is delimited by a new line. See below for the types of parseSpec supported.
+
+Only ONE file matching the search will be used. For most implementations, the discriminator for choosing among URIs is whichever one reports the most recent timestamp for its modification time.
+
+### csv lookupParseSpec
+|Parameter|Description|Required|Default|
+|---------|-----------|--------|-------|
+|`columns`|The list of columns in the csv file|no if `hasHeaderRow` is set|`null`|
+|`keyColumn`|The name of the column containing the key|no|The first column|
+|`valueColumn`|The name of the column containing the value|no|The second column|
+|`hasHeaderRow`|A flag to indicate that column information can be extracted from the input files' header row|no|false|
+|`skipHeaderRows`|Number of header rows to be skipped|no|0|
+
+If both `skipHeaderRows` and `hasHeaderRow` options are set, `skipHeaderRows` is first applied. For example, if you set
+`skipHeaderRows` to 2 and `hasHeaderRow` to true, Druid will skip the first two lines and then extract column information
+from the third line.
+
+*example input*
+
+```
+bar,something,foo
+bat,something2,baz
+truck,something3,buck
+```
+
+*example namespaceParseSpec*
+
+```json
+"namespaceParseSpec": {
+  "format": "csv",
+  "columns": ["value","somethingElse","key"],
+  "keyColumn": "key",
+  "valueColumn": "value"
+}
+```
+
+### tsv lookupParseSpec
+|Parameter|Description|Required|Default|
+|---------|-----------|--------|-------|
+|`columns`|The list of columns in the tsv file|yes|`null`|
+|`keyColumn`|The name of the column containing the key|no|The first column|
+|`valueColumn`|The name of the column containing the value|no|The second column|
+|`delimiter`|The delimiter in the file|no|tab (`\t`)|
+|`listDelimiter`|The list delimiter in the file|no| (`\u0001`)|
+|`hasHeaderRow`|A flag to indicate that column information can be extracted from the input files' header row|no|false|
+|`skipHeaderRows`|Number of header rows to be skipped|no|0|
+
+If both `skipHeaderRows` and `hasHeaderRow` options are set, `skipHeaderRows` is first applied. For example, if you set
+`skipHeaderRows` to 2 and `hasHeaderRow` to true, Druid will skip the first two lines and then extract column information
+from the third line.
+
+*example input*
+
+```
+bar|something,1|foo
+bat|something,2|baz
+truck|something,3|buck
+```
+
+*example namespaceParseSpec*
+
+```json
+"namespaceParseSpec": {
+  "format": "tsv",
+  "columns": ["value","somethingElse","key"],
+  "keyColumn": "key",
+  "valueColumn": "value",
+  "delimiter": "|"
+}
+```
+
+### customJson lookupParseSpec
+
+|Parameter|Description|Required|Default|
+|---------|-----------|--------|-------|
+|`keyFieldName`|The field name of the key|yes|null|
+|`valueFieldName`|The field name of the value|yes|null|
+
+*example input*
+
+```json
+{"key": "foo", "value": "bar", "somethingElse" : "something"}
+{"key": "baz", "value": "bat", "somethingElse" : "something"}
+{"key": "buck", "somethingElse": "something", "value": "truck"}
+```
+
+*example namespaceParseSpec*
+
+```json
+"namespaceParseSpec": {
+  "format": "customJson",
+  "keyFieldName": "key",
+  "valueFieldName": "value"
+}
+```
+
+With customJson parsing, if the value field for a particular row is missing or null then that line will be skipped, and
+will not be included in the lookup.
+
+### simpleJson lookupParseSpec
+The `simpleJson` lookupParseSpec does not take any parameters. It is simply a line delimited json file where the field is the key, and the field's value is the value.
+
+*example input*
+ 
+```json
+{"foo": "bar"}
+{"baz": "bat"}
+{"buck": "truck"}
+```
+
+*example namespaceParseSpec*
+
+```json
+"namespaceParseSpec":{
+  "format": "simpleJson"
+}
+```
+
+## JDBC lookup
+
+The JDBC lookup will poll a database to populate its local cache. If `tsColumn` is set, it must be able to accept comparisons in the format `'2015-01-01 00:00:00'`. For example, the following must be valid SQL for the table: `SELECT * FROM some_lookup_table WHERE timestamp_column > '2015-01-01 00:00:00'`. If `tsColumn` is set, the caching service will attempt to only poll values that were written *after* the last sync. If `tsColumn` is not set, the entire table is pulled every time.
+
+|Parameter|Description|Required|Default|
+|---------|-----------|--------|-------|
+|`namespace`|The namespace to define|Yes||
+|`connectorConfig`|The connector config to use|Yes||
+|`table`|The table which contains the key value pairs|Yes||
+|`keyColumn`|The column in `table` which contains the keys|Yes||
+|`valueColumn`|The column in `table` which contains the values|Yes||
+|`filter`|The filter to use when selecting lookups; this is used to create a WHERE clause during lookup population|No|No Filter|
+|`tsColumn`| The column in `table` which contains when the key was updated|No|Not used|
+|`pollPeriod`|How often to poll the DB|No|0 (only once)|
+
+```json
+{
+  "type":"jdbc",
+  "namespace":"some_lookup",
+  "connectorConfig":{
+    "createTables":true,
+    "connectURI":"jdbc:mysql://localhost:3306/druid",
+    "user":"druid",
+    "password":"diurd"
+  },
+  "table":"some_lookup_table",
+  "keyColumn":"the_old_dim_value",
+  "valueColumn":"the_new_dim_value",
+  "tsColumn":"timestamp_column",
+  "pollPeriod":600000
+}
+```
+
+# Introspection
+
+Globally cached lookups have introspection points at `/keys` and `/values` which return a complete set of the keys and values (respectively) in the lookup. Introspection to `/` returns the entire map. Introspection to `/version` returns the version indicator for the lookup.
diff --git a/docs/0.15.0-incubating/development/extensions-core/mysql.md b/docs/0.15.0-incubating/development/extensions-core/mysql.md
new file mode 100644
index 0000000..6cdcf3c
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/mysql.md
@@ -0,0 +1,109 @@
+---
+layout: doc_page
+title: "MySQL Metadata Store"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# MySQL Metadata Store
+
+To use this Apache Druid (incubating) extension, make sure to [include](../../operations/including-extensions.html) `mysql-metadata-storage` as an extension.
+
+<div class="note caution">
+The MySQL extension requires the MySQL Connector/J library which is not included in the Druid distribution. 
+Refer to the following section for instructions on how to install this library.
+</div>
+
+## Installing the MySQL connector library
+
+This extension uses Oracle's MySQL JDBC driver which is not included in the Druid distribution and must be
+installed separately. There are a few ways to obtain this library:
+
+- It can be downloaded from the MySQL site at: https://dev.mysql.com/downloads/connector/j/
+- It can be fetched from Maven Central at: http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
+- It may be available through your package manager, e.g. as `libmysql-java` on APT for a Debian-based OS
+
+This should fetch a JAR file with a name similar to `mysql-connector-java-x.x.xx.jar`.
+
+Copy or symlink this file to `extensions/mysql-metadata-storage` under the distribution root directory.
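+
+For example, assuming the 5.1.38 connector mentioned above was downloaded to the current working directory:
+
+```bash
+cp mysql-connector-java-5.1.38.jar extensions/mysql-metadata-storage/
+```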
+
+## Setting up MySQL
+
+1. Install MySQL
+
+  Use your favorite package manager to install mysql, e.g.:
+  - on Ubuntu/Debian using apt `apt-get install mysql-server`
+  - on OS X, using [Homebrew](http://brew.sh/) `brew install mysql`
+
+  Alternatively, download and follow installation instructions for MySQL
+  Community Server here:
+  [http://dev.mysql.com/downloads/mysql/](http://dev.mysql.com/downloads/mysql/)
+
+2. Create a druid database and user
+
+  Connect to MySQL from the machine where it is installed.
+
+  ```bash
+  > mysql -u root
+  ```
+
+  Paste the following snippet into the mysql prompt:
+
+  ```sql
+  -- create a druid database, make sure to use utf8mb4 as encoding
+  CREATE DATABASE druid DEFAULT CHARACTER SET utf8mb4;
+
+  -- create a druid user
+  CREATE USER 'druid'@'localhost' IDENTIFIED BY 'diurd';
+
+  -- grant the user all the permissions on the database we just created
+  GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'localhost';
+  ```
+
+3. Configure your Druid metadata storage extension:
+
+  Add the following parameters to your Druid configuration, replacing `<host>`
+  with the location (host name and port) of the database.
+
+  ```properties
+  druid.extensions.loadList=["mysql-metadata-storage"]
+  druid.metadata.storage.type=mysql
+  druid.metadata.storage.connector.connectURI=jdbc:mysql://<host>/druid
+  druid.metadata.storage.connector.user=druid
+  druid.metadata.storage.connector.password=diurd
+  ```
+
+## Encrypting MySQL connections
+  This extension provides support for encrypting MySQL connections. To get more information about encrypting MySQL connections using TLS/SSL in general, please refer to this [guide](https://dev.mysql.com/doc/refman/5.7/en/using-encrypted-connections.html).
+
+## Configuration
+
+|Property|Description|Default|Required|
+|--------|-----------|-------|--------|
+|`druid.metadata.mysql.ssl.useSSL`|Enable SSL|`false`|no|
+|`druid.metadata.mysql.ssl.clientCertificateKeyStoreUrl`|The file path URL to the client certificate key store.|none|no|
+|`druid.metadata.mysql.ssl.clientCertificateKeyStoreType`|The type of the key store where the client certificate is stored.|none|no|
+|`druid.metadata.mysql.ssl.clientCertificateKeyStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the client key store.|none|no|
+|`druid.metadata.mysql.ssl.verifyServerCertificate`|Enables server certificate verification.|false|no|
+|`druid.metadata.mysql.ssl.trustCertificateKeyStoreUrl`|The file path to the trusted root certificate key store.|Default trust store provided by MySQL|yes if `verifyServerCertificate` is set to true and a custom trust store is used|
+|`druid.metadata.mysql.ssl.trustCertificateKeyStoreType`|The type of the key store where trusted root certificates are stored.|JKS|yes if `verifyServerCertificate` is set to true and keystore type is not JKS|
+|`druid.metadata.mysql.ssl.trustCertificateKeyStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the trust store.|none|yes if `verifyServerCertificate` is set to true and password is not null|
+|`druid.metadata.mysql.ssl.enabledSSLCipherSuites`|Overrides the existing cipher suites with these cipher suites.|none|no|
+|`druid.metadata.mysql.ssl.enabledTLSProtocols`|Overrides the TLS protocols with these protocols.|none|no|
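+
+For example, an illustrative sketch enabling TLS with server certificate verification (the trust store path and password are placeholders):
+
+```properties
+druid.metadata.mysql.ssl.useSSL=true
+druid.metadata.mysql.ssl.verifyServerCertificate=true
+druid.metadata.mysql.ssl.trustCertificateKeyStoreUrl=file:///path/to/truststore.jks
+druid.metadata.mysql.ssl.trustCertificateKeyStorePassword=changeit
+```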
diff --git a/docs/0.15.0-incubating/development/extensions-core/namespaced-lookup.html b/docs/0.15.0-incubating/development/extensions-core/namespaced-lookup.html
new file mode 100644
index 0000000..82f21f1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/namespaced-lookup.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: lookups-cached-global.html
+---
diff --git a/docs/0.15.0-incubating/development/extensions-core/orc.md b/docs/0.15.0-incubating/development/extensions-core/orc.md
new file mode 100644
index 0000000..af7a315
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/orc.md
@@ -0,0 +1,311 @@
+---
+layout: doc_page
+title: "ORC Extension"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+  
+# ORC Extension
+
+This Apache Druid (incubating) module extends [Druid Hadoop based indexing](../../ingestion/hadoop.html) to ingest data directly from offline 
+Apache ORC files. 
+
+To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-orc-extensions`.
+
+## ORC Hadoop Parser
+
+The `inputFormat` of `inputSpec` in `ioConfig` must be set to `"org.apache.orc.mapreduce.OrcInputFormat"`.
+
+
+|Field     | Type        | Description                                                                            | Required|
+|----------|-------------|----------------------------------------------------------------------------------------|---------|
+|type      | String      | This should say `orc`                                                                  | yes|
+|parseSpec | JSON Object | Specifies the timestamp and dimensions of the data (`timeAndDims` and `orc` format) and a `flattenSpec` (`orc` format) | yes|
+
+The parser supports two `parseSpec` formats: `orc` and `timeAndDims`. 
+
+`orc` supports auto field discovery and flattening, if specified with a [flattenSpec](../../ingestion/flatten-json.html). 
+If no `flattenSpec` is specified, `useFieldDiscovery` will be enabled by default. Specifying a `dimensionSpec` is 
+optional if `useFieldDiscovery` is enabled: if a `dimensionSpec` is supplied, the list of `dimensions` it defines will be
+the set of ingested dimensions; if it is missing, the discovered fields will make up the list.
+ 
+`timeAndDims` parse spec must specify which fields will be extracted as dimensions through the `dimensionSpec`.
+
+[All column types](https://orc.apache.org/docs/types.html) are supported, with the exception of `union` types. Columns of
+ `list` type, if filled with primitives, may be used as a multi-value dimension, or specific elements can be extracted with 
+`flattenSpec` expressions. Likewise, primitive fields may be extracted from `map` and `struct` types in the same manner.
+Auto field discovery will automatically create a string dimension for every (non-timestamp) primitive or `list` of 
+primitives, as well as any flatten expressions defined in the `flattenSpec`.
+
+### Hadoop Job Properties
+As with most Hadoop jobs, the best outcomes come from adding `"mapreduce.job.user.classpath.first": "true"` or
+`"mapreduce.job.classloader": "true"` to the `jobProperties` section of `tuningConfig`. Note that if you use
+`"mapreduce.job.classloader": "true"`, you will likely need to set `mapreduce.job.classloader.system.classes` to include
+`-org.apache.hadoop.hive.` to instruct Hadoop to load `org.apache.hadoop.hive` classes from the application jars instead
+of system jars, e.g.
+
+```json
+...
+    "mapreduce.job.classloader": "true",
+    "mapreduce.job.classloader.system.classes" : "java., javax.accessibility., javax.activation., javax.activity., javax.annotation., javax.annotation.processing., javax.crypto., javax.imageio., javax.jws., javax.lang.model., -javax.management.j2ee., javax.management., javax.naming., javax.net., javax.print., javax.rmi., javax.script., -javax.security.auth.message., javax.security.auth., javax.security.cert., javax.security.sasl., javax.sound., javax.sql., javax.swing., javax.tools., jav [...]
+...
+```
+
+This is due to the `hive-storage-api` dependency of the
+`orc-mapreduce` library, which provides some classes under the `org.apache.hadoop.hive` package. If instead using the
+setting `"mapreduce.job.user.classpath.first": "true"`, then this will not be an issue.
+
+### Examples
+
+#### `orc` parser, `orc` parseSpec, auto field discovery, flatten expressions
+
+```json
+{
+  "type": "index_hadoop",
+  "spec": {
+    "ioConfig": {
+      "type": "hadoop",
+      "inputSpec": {
+        "type": "static",
+        "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat",
+        "paths": "path/to/file.orc"
+      },
+      ...
+    },
+    "dataSchema": {
+      "dataSource": "example",
+      "parser": {
+        "type": "orc",
+        "parseSpec": {
+          "format": "orc",
+          "flattenSpec": {
+            "useFieldDiscovery": true,
+            "fields": [
+              {
+                "type": "path",
+                "name": "nestedDim",
+                "expr": "$.nestedData.dim1"
+              },
+              {
+                "type": "path",
+                "name": "listDimFirstItem",
+                "expr": "$.listDim[1]"
+              }
+            ]
+          },
+          "timestampSpec": {
+            "column": "timestamp",
+            "format": "millis"
+          }
+        }
+      },
+      ...
+    },
+    "tuningConfig": <hadoop-tuning-config>
+    }
+  }
+}
+```
+
+#### `orc` parser, `orc` parseSpec, field discovery with no flattenSpec or dimensionSpec
+
+```json
+{
+  "type": "index_hadoop",
+  "spec": {
+    "ioConfig": {
+      "type": "hadoop",
+      "inputSpec": {
+        "type": "static",
+        "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat",
+        "paths": "path/to/file.orc"
+      },
+      ...
+    },
+    "dataSchema": {
+      "dataSource": "example",
+      "parser": {
+        "type": "orc",
+        "parseSpec": {
+          "format": "orc",
+          "timestampSpec": {
+            "column": "timestamp",
+            "format": "millis"
+          }
+        }
+      },
+      ...
+    },
+    "tuningConfig": <hadoop-tuning-config>
+    }
+  }
+}
+```
+
+#### `orc` parser, `orc` parseSpec, no autodiscovery
+
+```json
+{
+  "type": "index_hadoop",
+  "spec": {
+    "ioConfig": {
+      "type": "hadoop",
+      "inputSpec": {
+        "type": "static",
+        "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat",
+        "paths": "path/to/file.orc"
+      },
+      ...
+    },
+    "dataSchema": {
+      "dataSource": "example",
+      "parser": {
+        "type": "orc",
+        "parseSpec": {
+          "format": "orc",
+          "flattenSpec": {
+            "useFieldDiscovery": false,
+            "fields": [
+              {
+                "type": "path",
+                "name": "nestedDim",
+                "expr": "$.nestedData.dim1"
+              },
+              {
+                "type": "path",
+                "name": "listDimFirstItem",
+                "expr": "$.listDim[1]"
+              }
+            ]
+          },
+          "timestampSpec": {
+            "column": "timestamp",
+            "format": "millis"
+          },
+          "dimensionsSpec": {
+            "dimensions": [
+              "dim1",
+              "dim3",
+              "nestedDim",
+              "listDimFirstItem"
+            ],
+            "dimensionExclusions": [],
+            "spatialDimensions": []
+          }
+        }
+      },
+      ...
+    },
+    "tuningConfig": <hadoop-tuning-config>
+    }
+  }
+}
+```
+
+#### `orc` parser, `timeAndDims` parseSpec
+```json
+{
+  "type": "index_hadoop",
+  "spec": {
+    "ioConfig": {
+      "type": "hadoop",
+      "inputSpec": {
+        "type": "static",
+        "inputFormat": "org.apache.orc.mapreduce.OrcInputFormat",
+        "paths": "path/to/file.orc"
+      },
+      ...
+    },
+    "dataSchema": {
+      "dataSource": "example",
+      "parser": {
+        "type": "orc",
+        "parseSpec": {
+          "format": "timeAndDims",
+          "timestampSpec": {
+            "column": "timestamp",
+            "format": "auto"
+          },
+          "dimensionsSpec": {
+            "dimensions": [
+              "dim1",
+              "dim2",
+              "dim3",
+              "listDim"
+            ],
+            "dimensionExclusions": [],
+            "spatialDimensions": []
+          }
+        }
+      },
+      ...
+    },
+    "tuningConfig": <hadoop-tuning-config>
+  }
+}
+
+```
+
+### Migration from 'contrib' extension
+This extension, first available in version 0.15.0, replaces the previous 'contrib' extension which was available until 
+0.14.0-incubating. While this extension can index any data the 'contrib' extension could, the json spec for the 
+ingestion task is *incompatible*, and will need to be modified to work with the newer 'core' extension.
+
+To migrate to 0.15.0+:
+* In `inputSpec` of `ioConfig`, `inputFormat` must be changed from `"org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat"` to 
+`"org.apache.orc.mapreduce.OrcInputFormat"`
+* The 'contrib' extension supported a `typeString` property, which provided the schema of the
+ORC file. It was essentially required to have the types correct, but notably _not_ the column names, which
+facilitated column renaming. In the 'core' extension, column renaming can be achieved with
+[`flattenSpec` expressions](../../ingestion/flatten-json.html). For example, `"typeString":"struct<time:string,name:string>"`
+with the actual schema `struct<_col0:string,_col1:string>`, to preserve the Druid schema, would need to be replaced with:
+```json
+"flattenSpec": {
+  "fields": [
+    {
+      "type": "path",
+      "name": "time",
+      "expr": "$._col0"
+    },
+    {
+      "type": "path",
+      "name": "name",
+      "expr": "$._col1"
+    }
+  ]
+  ...
+}
+```
+* The 'contrib' extension supported a `mapFieldNameFormat` property, which provided a way to specify a dimension to
+ flatten `OrcMap` columns with primitive types. This functionality has also been replaced with
+ [`flattenSpec` expressions](../../ingestion/flatten-json.html). For example: `"mapFieldNameFormat": "<PARENT>_<CHILD>"`
+ for a dimension `nestedData_dim1`, to preserve the Druid schema, could be replaced with:
+ ```json
+"flattenSpec": {
+  "fields": [
+    {
+      "type": "path",
+      "name": "nestedData_dim1",
+      "expr": "$.nestedData.dim1"
+    }
+  ]
+  ...
+}
+```
\ No newline at end of file
diff --git a/docs/latest/development/extensions-core/parquet.md b/docs/0.15.0-incubating/development/extensions-core/parquet.md
similarity index 96%
copy from docs/latest/development/extensions-core/parquet.md
copy to docs/0.15.0-incubating/development/extensions-core/parquet.md
index 9b628b9..207fae7 100644
--- a/docs/latest/development/extensions-core/parquet.md
+++ b/docs/0.15.0-incubating/development/extensions-core/parquet.md
@@ -33,17 +33,20 @@ Note: `druid-parquet-extensions` depends on the `druid-avro-extensions` module,
 ## Parquet Hadoop Parser
 
 This extension provides two ways to parse Parquet files:
+
 * `parquet` - using a simple conversion contained within this extension 
 * `parquet-avro` - conversion to avro records with the `parquet-avro` library and using the `druid-avro-extensions`
  module to parse the avro data
 
 Selection of conversion method is controlled by parser type, and the correct hadoop input format must also be set in 
-the `ioConfig`,  `org.apache.druid.data.input.parquet.DruidParquetInputFormat` for `parquet` and 
-`org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat` for `parquet-avro`.
+the `ioConfig`:
+
+* `org.apache.druid.data.input.parquet.DruidParquetInputFormat` for `parquet`
+* `org.apache.druid.data.input.parquet.DruidParquetAvroInputFormat` for `parquet-avro`
  
 
 Both parse options support auto field discovery and flattening if provided with a 
-[flattenSpec](../../ingestion/flatten-json.html) with `parquet` or `avro` as the `format`. Parquet nested list and map 
+[flattenSpec](../../ingestion/flatten-json.html) with `parquet` or `avro` as the format. Parquet nested list and map 
 [logical types](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md) _should_ operate correctly with 
 json path expressions for all supported types. `parquet-avro` sets a hadoop job property 
 `parquet.avro.add-list-element-records` to `false` (which normally defaults to `true`), in order to 'unwrap' primitive 
diff --git a/docs/latest/development/extensions-core/postgresql.md b/docs/0.15.0-incubating/development/extensions-core/postgresql.md
similarity index 97%
copy from docs/latest/development/extensions-core/postgresql.md
copy to docs/0.15.0-incubating/development/extensions-core/postgresql.md
index 07a2a78..26f77fc 100644
--- a/docs/latest/development/extensions-core/postgresql.md
+++ b/docs/0.15.0-incubating/development/extensions-core/postgresql.md
@@ -83,3 +83,5 @@ In most cases, the configuration options map directly to the [postgres jdbc conn
 | `druid.metadata.postgres.ssl.sslRootCert` | The full path to the root certificate. | none | no |
 | `druid.metadata.postgres.ssl.sslHostNameVerifier` | The classname of the hostname verifier. | none | no |
 | `druid.metadata.postgres.ssl.sslPasswordCallback` | The classname of the SSL password provider. | none | no |
+| `druid.metadata.postgres.dbTableSchema` | The PostgreSQL schema used for the Druid metadata tables. | `public` | no |
+
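+For example, to keep the Druid metadata tables in a dedicated schema (the schema name below is a placeholder):
+
+```properties
+druid.metadata.postgres.dbTableSchema=druid_schema
+```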
diff --git a/docs/0.15.0-incubating/development/extensions-core/protobuf.md b/docs/0.15.0-incubating/development/extensions-core/protobuf.md
new file mode 100644
index 0000000..655d0c7
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/protobuf.md
@@ -0,0 +1,223 @@
+---
+layout: doc_page
+title: "Protobuf"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Protobuf
+
+This Apache Druid (incubating) extension enables Druid to ingest and understand the Protobuf data format. Make sure to [include](../../operations/including-extensions.html) `druid-protobuf-extensions` as an extension.
+
+## Protobuf Parser
+
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+| type | String | This should say `protobuf`. | no |
+| descriptor | String | Protobuf descriptor file name in the classpath or URL. | yes |
+| protoMessageType | String | Protobuf message type in the descriptor.  Both short name and fully qualified name are accepted.  The parser uses the first message type found in the descriptor if not specified. | no |
+| parseSpec | JSON Object | Specifies the timestamp and dimensions of the data.  The format must be json. See [JSON ParseSpec](../../ingestion/index.html) for more configuration options.  Please note timeAndDims parseSpec is no longer supported. | yes |
+
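+For reference, a minimal parser block combining these fields might look like the following sketch (the descriptor path and message type are placeholders; a complete, working spec appears in the Kafka example below):
+
+```json
+{
+  "type": "protobuf",
+  "descriptor": "file:///path/to/your.desc",
+  "protoMessageType": "YourMessage",
+  "parseSpec": {
+    "format": "json",
+    "timestampSpec": { "column": "timestamp", "format": "auto" },
+    "dimensionsSpec": { "dimensions": [] }
+  }
+}
+```
+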
+## Example: Load Protobuf messages from Kafka
+
+This example demonstrates how to load Protobuf messages from Kafka.  Please read the [Load from Kafka tutorial](../../tutorials/tutorial-kafka.html) first.  This example will use the same "metrics" dataset.
+
+Files used in this example are found at `./examples/quickstart/protobuf` in your Druid directory.
+
+- We will use [Kafka Indexing Service](./kafka-ingestion.html) instead of Tranquility.
+- The Kafka broker host is `localhost:9092`.
+- The Kafka topic is `metrics_pb` instead of `metrics`.
+- The datasource name is `metrics-kafka-pb` instead of `metrics-kafka` to avoid confusion.
+
+Here is an example metrics record in JSON format.
+
+```json
+{
+  "unit": "milliseconds",
+  "http_method": "GET",
+  "value": 44,
+  "timestamp": "2017-04-06T02:36:22Z",
+  "http_code": "200",
+  "page": "/",
+  "metricType": "request/latency",
+  "server": "www1.example.com"
+}
+```
+
+### Proto file
+
+The proto file should look like this.  Save it as `metrics.proto`.
+
+```
+syntax = "proto3";
+message Metrics {
+  string unit = 1;
+  string http_method = 2;
+  int32 value = 3;
+  string timestamp = 4;
+  string http_code = 5;
+  string page = 6;
+  string metricType = 7;
+  string server = 8;
+}
+```
+
+### Descriptor file
+
+Use the `protoc` Protobuf compiler to generate the descriptor file.  Save the `metrics.desc` file either on the classpath or at a location reachable by URL.  In this example the descriptor file is saved at `/tmp/metrics.desc`.
+
+```
+protoc -o /tmp/metrics.desc metrics.proto
+```
+
+### Supervisor spec JSON
+
+Below is the complete Supervisor spec JSON to be submitted to the Overlord.
+Please make sure these keys are properly configured for successful ingestion.
+
+- `descriptor` for the descriptor file URL.
+- `protoMessageType` from the proto definition.
+- parseSpec `format` must be `json`.
+- `topic` to subscribe to.  The topic is "metrics_pb" instead of "metrics".
+- `bootstrap.servers` is the Kafka broker host.
+
+```json
+{
+  "type": "kafka",
+  "dataSchema": {
+    "dataSource": "metrics-kafka2",
+    "parser": {
+      "type": "protobuf",
+      "descriptor": "file:///tmp/metrics.desc",
+      "protoMessageType": "Metrics",
+      "parseSpec": {
+        "format": "json",
+        "timestampSpec": {
+          "column": "timestamp",
+          "format": "auto"
+        },
+        "dimensionsSpec": {
+          "dimensions": [
+            "unit",
+            "http_method",
+            "http_code",
+            "page",
+            "metricType",
+            "server"
+          ],
+          "dimensionExclusions": [
+            "timestamp",
+            "value"
+          ]
+        }
+      }
+    },
+    "metricsSpec": [
+      {
+        "name": "count",
+        "type": "count"
+      },
+      {
+        "name": "value_sum",
+        "fieldName": "value",
+        "type": "doubleSum"
+      },
+      {
+        "name": "value_min",
+        "fieldName": "value",
+        "type": "doubleMin"
+      },
+      {
+        "name": "value_max",
+        "fieldName": "value",
+        "type": "doubleMax"
+      }
+    ],
+    "granularitySpec": {
+      "type": "uniform",
+      "segmentGranularity": "HOUR",
+      "queryGranularity": "NONE"
+    }
+  },
+  "tuningConfig": {
+    "type": "kafka",
+    "maxRowsPerSegment": 5000000
+  },
+  "ioConfig": {
+    "topic": "metrics_pb",
+    "consumerProperties": {
+      "bootstrap.servers": "localhost:9092"
+    },
+    "taskCount": 1,
+    "replicas": 1,
+    "taskDuration": "PT1H"
+  }
+}
+```
+
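+Assuming the Overlord listens on `localhost:8090` and the spec above is saved as `metrics-kafka-supervisor.json` (a file name chosen only for this example), it can be submitted like this:
+
+```
+curl -XPOST -H 'Content-Type: application/json' -d @metrics-kafka-supervisor.json http://localhost:8090/druid/indexer/v1/supervisor
+```
+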
+## Kafka Producer
+
+Here is a sample script that publishes the metrics to Kafka in Protobuf format.
+
+1. Run `protoc` again with the Python binding option.  This command generates the `metrics_pb2.py` file.
+ ```
+  protoc -o metrics.desc metrics.proto --python_out=.
+ ```
+
+2. Create the Kafka producer script.
+
+This script requires the `protobuf` and `kafka-python` modules.
+
+```python
+#!/usr/bin/env python
+
+import sys
+import json
+
+from kafka import KafkaProducer
+from metrics_pb2 import Metrics
+
+producer = KafkaProducer(bootstrap_servers='localhost:9092')
+topic = 'metrics_pb'
+metrics = Metrics()
+
+for row in iter(sys.stdin):
+    d = json.loads(row)
+    for k, v in d.items():
+        setattr(metrics, k, v)
+    pb = metrics.SerializeToString()
+    producer.send(topic, pb)
+```
+
+3. Run the producer.
+
+```
+./bin/generate-example-metrics | ./pb_publisher.py
+```
+
+4. Verify that messages are published to the topic.
+
+```
+kafka-console-consumer --zookeeper localhost --topic metrics_pb
+```
+
+It should print messages like this:
+> millisecondsGETR"2017-04-06T03:23:56Z*2002/list:request/latencyBwww1.example.com
diff --git a/docs/latest/development/extensions-core/s3.md b/docs/0.15.0-incubating/development/extensions-core/s3.md
similarity index 64%
copy from docs/latest/development/extensions-core/s3.md
copy to docs/0.15.0-incubating/development/extensions-core/s3.md
index e93e5e0..41b4b56 100644
--- a/docs/latest/development/extensions-core/s3.md
+++ b/docs/0.15.0-incubating/development/extensions-core/s3.md
@@ -41,14 +41,10 @@ As an example, to set the region to 'us-east-1' through system properties:
 
 |Property|Description|Default|
 |--------|-----------|-------|
-|`druid.s3.accessKey`|S3 access key.|Must be set.|
-|`druid.s3.secretKey`|S3 secret key.|Must be set.|
-|`druid.storage.bucket`|Bucket to store in.|Must be set.|
-|`druid.storage.baseKey`|Base key prefix to use, i.e. what directory.|Must be set.|
-|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](#server-side-encryption) for more details.|None|
-|`druid.storage.sse.kms.keyId`|AWS KMS key ID. Can be empty if `druid.storage.sse.type` is `kms`.|None|
-|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
-|`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used.|`https`|
+|`druid.s3.accessKey`|S3 access key. See [S3 authentication methods](#s3-authentication-methods) for more details.|Can be omitted depending on the authentication method chosen.|
+|`druid.s3.secretKey`|S3 secret key. See [S3 authentication methods](#s3-authentication-methods) for more details.|Can be omitted depending on the authentication method chosen.|
+|`druid.s3.fileSessionCredentials`|Path to a properties file containing `sessionToken`, `accessKey` and `secretKey` values, one key/value pair per line (format `key=value`). See [S3 authentication methods](#s3-authentication-methods) for more details.|Can be omitted depending on the authentication method chosen.|
+|`druid.s3.protocol`|Communication protocol type to use when sending requests to AWS. `http` or `https` can be used. This configuration is ignored if `druid.s3.endpoint.url` is set to a URL with a different protocol.|`https`|
 |`druid.s3.disableChunkedEncoding`|Disables chunked encoding. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#disableChunkedEncoding--) for details.|false|
 |`druid.s3.enablePathStyleAccess`|Enables path style access. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#enablePathStyleAccess--) for details.|false|
 |`druid.s3.forceGlobalBucketAccessEnabled`|Enables global bucket access. See [AWS document](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#setForceGlobalBucketAccessEnabled-java.lang.Boolean-) for details.|false|
@@ -58,6 +54,37 @@ As an example, to set the region to 'us-east-1' through system properties:
 |`druid.s3.proxy.port`|Port on the proxy host to connect through.|None|
 |`druid.s3.proxy.username`|User name to use when connecting through a proxy.|None|
 |`druid.s3.proxy.password`|Password to use when connecting through a proxy.|None|
+|`druid.storage.bucket`|Bucket to store in.|Must be set.|
+|`druid.storage.baseKey`|Base key prefix to use, i.e. what directory.|Must be set.|
+|`druid.storage.archiveBucket`|S3 bucket name for archiving when running the *archive task*.|none|
+|`druid.storage.archiveBaseKey`|S3 object key prefix for archiving.|none|
+|`druid.storage.disableAcl`|Boolean flag to disable ACL. If this is set to `false`, full control is granted to the bucket owner. This may require setting additional permissions. See [S3 permissions settings](#s3-permissions-settings).|false|
+|`druid.storage.sse.type`|Server-side encryption type. Should be one of `s3`, `kms`, and `custom`. See the below [Server-side encryption section](#server-side-encryption) for more details.|None|
+|`druid.storage.sse.kms.keyId`|AWS KMS key ID. This is used only when `druid.storage.sse.type` is `kms` and can be empty to use the default key ID.|None|
+|`druid.storage.sse.custom.base64EncodedKey`|Base64-encoded key. Should be specified if `druid.storage.sse.type` is `custom`.|None|
+|`druid.storage.useS3aSchema`|If true, use the "s3a" filesystem when using Hadoop-based ingestion. If false, the "s3n" filesystem will be used. Only affects Hadoop-based ingestion.|false|
+
+### S3 permissions settings
+
+`s3:GetObject` and `s3:PutObject` are required for pushing segments to and loading segments from S3.
+If `druid.storage.disableAcl` is set to `false`, then `s3:GetBucketAcl` and `s3:PutObjectAcl` are additionally required to set ACL for objects.
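+
+As a sketch only, an IAM policy granting these permissions on a hypothetical bucket named `your-druid-bucket` might look like this:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "s3:GetObject",
+        "s3:PutObject",
+        "s3:GetBucketAcl",
+        "s3:PutObjectAcl"
+      ],
+      "Resource": [
+        "arn:aws:s3:::your-druid-bucket",
+        "arn:aws:s3:::your-druid-bucket/*"
+      ]
+    }
+  ]
+}
+```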
+
+### S3 authentication methods
+
+To connect to your S3 bucket (whether a deep storage bucket or a source bucket), Druid uses the following credentials provider chain:
+
+|order|type|details|
+|--------|-----------|-------|
+|1|Druid config file|Based on your runtime.properties, if it contains values for `druid.s3.accessKey` and `druid.s3.secretKey`|
+|2|Custom properties file|Based on a custom properties file in which you can supply `sessionToken`, `accessKey` and `secretKey` values. This file is provided to Druid through the `druid.s3.fileSessionCredentials` property|
+|3|Environment variables|Based on environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`|
+|4|Java system properties|Based on the JVM properties `aws.accessKeyId` and `aws.secretKey`|
+|5|Profile information|Based on the credentials you may have on your Druid instance (generally in `~/.aws/credentials`)|
+|6|Instance profile information|Based on the instance profile you may have attached to your Druid instance|
+
+You can find more information about these authentication methods [here](https://docs.aws.amazon.com/fr_fr/sdk-for-java/v1/developer-guide/credentials.html).<br/>
+**Note:** *The order is important, as it indicates the precedence of the authentication methods.<br/>
+So if you want to use instance profile information, you **must not** set `druid.s3.accessKey` and `druid.s3.secretKey` in your Druid runtime.properties.*
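+
+For example, a credentials file referenced by `druid.s3.fileSessionCredentials` might look like the following (all values here are placeholders):
+
+```
+sessionToken=<your-session-token>
+accessKey=<your-access-key>
+secretKey=<your-secret-key>
+```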
 
 ## Server-side encryption
 
@@ -96,3 +123,5 @@ shardSpecs are not specified, and, in this case, caching can be useful. Prefetch
 |prefetchTriggerBytes|Threshold to trigger prefetching s3 objects.|maxFetchCapacityBytes / 2|no|
 |fetchTimeout|Timeout for fetching an s3 object.|60000|no|
 |maxFetchRetry|Maximum retry for fetching an s3 object.|3|no|
+
+
diff --git a/docs/0.15.0-incubating/development/extensions-core/simple-client-sslcontext.md b/docs/0.15.0-incubating/development/extensions-core/simple-client-sslcontext.md
new file mode 100644
index 0000000..7247f26
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/simple-client-sslcontext.md
@@ -0,0 +1,54 @@
+---
+layout: doc_page
+title: "Simple SSLContext Provider Module"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Simple SSLContext Provider Module
+
+This Apache Druid (incubating) module contains a simple implementation of [SSLContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
+that will be injected to be used with HttpClient that Druid processes use internally to communicate with each other. To learn more about
+Java's SSL support, please refer to [this](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html) guide.
+
+# Configuration
+
+|Property|Description|Default|Required|
+|--------|-----------|-------|--------|
+|`druid.client.https.protocol`|SSL protocol to use.|`TLSv1.2`|no|
+|`druid.client.https.trustStoreType`|The type of the key store where trusted root certificates are stored.|`java.security.KeyStore.getDefaultType()`|no|
+|`druid.client.https.trustStorePath`|The file path or URL of the TLS/SSL Key store where trusted root certificates are stored.|none|yes|
+|`druid.client.https.trustStoreAlgorithm`|Algorithm to be used by TrustManager to validate certificate chains|`javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm()`|no|
+|`druid.client.https.trustStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Trust Store.|none|yes|
+
+The following table contains optional parameters for supporting client certificate authentication:
+
+|Property|Description|Default|Required|
+|--------|-----------|-------|--------|
+|`druid.client.https.keyStorePath`|The file path or URL of the TLS/SSL Key store containing the client certificate that Druid will use when communicating with other Druid services. If this is null, the other properties in this table are ignored.|none|yes|
+|`druid.client.https.keyStoreType`|The type of the key store.|none|yes|
+|`druid.client.https.certAlias`|Alias of TLS client certificate in the keystore.|none|yes|
+|`druid.client.https.keyStorePassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Key Store.|none|no|
+|`druid.client.https.keyManagerFactoryAlgorithm`|Algorithm to use for creating KeyManager, more details [here](https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#KeyManager).|`javax.net.ssl.KeyManagerFactory.getDefaultAlgorithm()`|no|
+|`druid.client.https.keyManagerPassword`|The [Password Provider](../../operations/password-provider.html) or String password for the Key Manager.|none|no|
+|`druid.client.https.validateHostnames`|Validate the hostname of the server. This should not be disabled unless you are using [custom TLS certificate checks](../../operations/tls-support.html#custom-tls-certificate-checks) and know that standard hostname validation is not needed.|true|no|
+
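+As an illustration, a minimal client TLS configuration in runtime.properties might look like the following (the path and password are placeholders):
+
+```
+druid.client.https.protocol=TLSv1.2
+druid.client.https.trustStorePath=/path/to/truststore.jks
+druid.client.https.trustStorePassword=changeit
+```
+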
+This [document](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html) lists all the possible
+values for the above mentioned configs among others provided by Java implementation.
diff --git a/docs/0.15.0-incubating/development/extensions-core/stats.md b/docs/0.15.0-incubating/development/extensions-core/stats.md
new file mode 100644
index 0000000..0b3b4e3
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/stats.md
@@ -0,0 +1,172 @@
+---
+layout: doc_page
+title: "Stats aggregator"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Stats aggregator
+
+This Apache Druid (incubating) extension includes stat-related aggregators, including variance and standard deviations, etc. Make sure to [include](../../operations/including-extensions.html) `druid-stats` as an extension.
+
+## Variance aggregator
+
+The algorithm used by this aggregator is the same as that of Apache Hive. The following is the description from GenericUDAFVariance in Hive:
+
+Evaluate the variance using the algorithm described by Chan, Golub, and LeVeque in
+"Algorithms for computing the sample variance: analysis and recommendations"
+The American Statistician, 37 (1983) pp. 242--247.
+
+variance = variance1 + variance2 + n/(m*(m+n)) * pow(((m/n)*t1 - t2), 2)
+
+where:
+
+- `variance` is `sum((x - avg)^2)` (this is actually n times the variance) and is updated at every step
+- `n` is the count of elements in chunk1
+- `m` is the count of elements in chunk2
+- `t1` is the sum of elements in chunk1
+- `t2` is the sum of elements in chunk2
+
+This algorithm was proven to be numerically stable by J.L. Barlow in
+"Error analysis of a pairwise summation algorithm to compute sample variance"
+Numer. Math, 58 (1991) pp. 583--590
+
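+For illustration only, here is a small Python sketch (not the extension's actual implementation) of the chunk-merge rule above:
+
+```python
+def merge_chunks(n, t1, ssd1, m, t2, ssd2):
+    """Merge two partial aggregates.
+
+    n, m       -- element counts of chunk1 and chunk2
+    t1, t2     -- sums of elements in chunk1 and chunk2
+    ssd1, ssd2 -- sums of squared deviations (count * variance) of each chunk
+    """
+    ssd = ssd1 + ssd2 + float(n) / (m * (m + n)) * (float(m) / n * t1 - t2) ** 2
+    return n + m, t1 + t2, ssd
+
+# Merging [1, 2] with [3] gives the same sum of squared deviations (2.0)
+# as computing it over [1, 2, 3] directly.
+count, total, ssd = merge_chunks(2, 3.0, 0.5, 1, 3.0, 0.0)
+print(count, total, ssd, ssd / count)  # ssd / count is the population variance
+```
+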
+### Pre-aggregating variance at ingestion time
+
+To use this feature, a "variance" aggregator must be included at indexing time.
+The ingestion aggregator can only apply to numeric values. If you use "variance",
+then any input rows missing the value will be considered to have a value of 0.
+
+You can specify the expected input type as one of "float", "long", or "variance" for ingestion; the default is "float".
+
+```json
+{
+  "type" : "variance",
+  "name" : <output_name>,
+  "fieldName" : <metric_name>,
+  "inputType" : <input_type>,
+  "estimator" : <string>
+}
+```
+
+To query for results, a "variance" aggregator with the "variance" input type, or simply a "varianceFold" aggregator, must be included in the query.
+
+```json
+{
+  "type" : "varianceFold",
+  "name" : <output_name>,
+  "fieldName" : <metric_name>,
+  "estimator" : <string>
+}
+```
+
+|Property                 |Description                   |Default                           |
+|-------------------------|------------------------------|----------------------------------|
+|`estimator`|Set "population" to get variance_pop rather than variance_sample, which is default.|null|
+
+
+### Standard Deviation post-aggregator
+
+To compute the standard deviation from a variance, use the "stddev" post-aggregator.
+
+```json
+{
+  "type": "stddev",
+  "name": "<output_name>",
+  "fieldName": "<aggregator_name>",
+  "estimator": <string>
+}
+```
+
+## Query Examples
+
+### Timeseries Query
+
+```json
+{
+  "queryType": "timeseries",
+  "dataSource": "testing",
+  "granularity": "day",
+  "aggregations": [
+    {
+      "type": "variance",
+      "name": "index_var",
+      "fieldName": "index_var"
+    }
+  ],
+  "intervals": [
+    "2016-03-01T00:00:00.000/2013-03-20T00:00:00.000"
+  ]
+}
+```
+
+### TopN Query
+
+```json
+{
+  "queryType": "topN",
+  "dataSource": "testing",
+  "dimensions": ["alias"],
+  "threshold": 5,
+  "granularity": "all",
+  "aggregations": [
+    {
+      "type": "variance",
+      "name": "index_var",
+      "fieldName": "index"
+    }
+  ],
+  "postAggregations": [
+    {
+      "type": "stddev",
+      "name": "index_stddev",
+      "fieldName": "index_var"
+    }
+  ],
+  "intervals": [
+    "2016-03-06T00:00:00/2016-03-06T23:59:59"
+  ]
+}
+```
+
+### GroupBy Query
+
+```json
+{
+  "queryType": "groupBy",
+  "dataSource": "testing",
+  "dimensions": ["alias"],
+  "granularity": "all",
+  "aggregations": [
+    {
+      "type": "variance",
+      "name": "index_var",
+      "fieldName": "index"
+    }
+  ],
+  "postAggregations": [
+    {
+      "type": "stddev",
+      "name": "index_stddev",
+      "fieldName": "index_var"
+    }
+  ],
+  "intervals": [
+    "2016-03-06T00:00:00/2016-03-06T23:59:59"
+  ]
+}
+```
diff --git a/docs/0.15.0-incubating/development/extensions-core/test-stats.md b/docs/0.15.0-incubating/development/extensions-core/test-stats.md
new file mode 100644
index 0000000..156052f
--- /dev/null
+++ b/docs/0.15.0-incubating/development/extensions-core/test-stats.md
@@ -0,0 +1,118 @@
+---
+layout: doc_page
+title: "Test Stats Aggregators"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Test Stats Aggregators
+
+This Apache Druid (incubating) extension incorporates test statistics related aggregators, including z-score and p-value. Please refer to [https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/](https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/) for math background and details.
+
+Make sure to include the `druid-stats` extension in order to use these aggregators.
+
+## Z-Score for two sample ztests post aggregator
+
+Please refer to [https://www.isixsigma.com/tools-templates/hypothesis-testing/making-sense-two-proportions-test/](https://www.isixsigma.com/tools-templates/hypothesis-testing/making-sense-two-proportions-test/) and [http://www.ucs.louisiana.edu/~jcb0773/Berry_statbook/Berry_statbook_chpt6.pdf](http://www.ucs.louisiana.edu/~jcb0773/Berry_statbook/Berry_statbook_chpt6.pdf) for more details.
+
+z = (p1 - p2) / S.E.  (assuming null hypothesis is true)
+
+Please see below for p1 and p2.
+Please note that S.E. stands for standard error, where
+
+S.E. = sqrt( p1 * (1 - p1)/n1 + p2 * (1 - p2)/n2 )
+
+(p1 - p2) is the observed difference between the two sample proportions.
+
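+As a quick illustration of the formulas above, here is a small Python sketch (not the extension's implementation) that computes the z-score and the two-tailed p-value directly:
+
+```python
+import math
+
+def zscore_two_sample(success1, n1, success2, n2):
+    p1 = success1 / float(n1)
+    p2 = success2 / float(n2)
+    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
+    return (p1 - p2) / se
+
+def pvalue_two_tailed(z):
+    # P(|Z| >= |z|) for a standard normal Z, via the complementary error function.
+    return math.erfc(abs(z) / math.sqrt(2))
+
+# Same numbers as the example usage further down: 300/500 vs. 450/600 successes.
+z = zscore_two_sample(300, 500, 450, 600)
+print(z, pvalue_two_tailed(z))
+```
+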
+### zscore2sample post aggregator
+* **`zscore2sample`**: calculate the z-score using a two-sample z-test while converting binary variables (***e.g.*** success or not) to continuous variables (***e.g.*** conversion rate).
+
+```json
+{
+  "type": "zscore2sample",
+  "name": "<output_name>",
+  "successCount1": <post_aggregator> success count of sample 1,
+  "sample1Size": <post_aggregaror> sample 1 size,
+  "successCount2": <post_aggregator> success count of sample 2,
+  "sample2Size" : <post_aggregator> sample 2 size
+}
+```
+
+Please note that the post aggregator converts binary variables to continuous variables for the two population proportions.  Specifically:
+
+p1 = (successCount1) / (sample size 1)
+
+p2 = (successCount2) / (sample size 2)
+
+### pvalue2tailedZtest post aggregator
+
+* **`pvalue2tailedZtest`**: calculate the p-value of a two-sided z-test from a z-score
+    - ***pvalue2tailedZtest(zscore)*** - the input is a z-score, which can be calculated using the zscore2sample post aggregator
+
+
+```json
+{
+  "type": "pvalue2tailedZtest",
+  "name": "<output_name>",
+  "zScore": <zscore post_aggregator>
+}
+```
+  
+## Example Usage
+
+In this example, we use the zscore2sample post aggregator to calculate the z-score, and then feed the z-score to the pvalue2tailedZtest post aggregator to calculate the p-value.
+
+An example JSON query follows:
+
+```json
+{
+  ...
+    "postAggregations" : {
+    "type"   : "pvalue2tailedZtest",
+    "name"   : "pvalue",
+    "zScore" : 
+    {
+     "type"   : "zscore2sample",
+     "name"   : "zscore",
+     "successCount1" :
+       { "type"   : "constant",
+         "name"   : "successCountFromPopulation1Sample",
+         "value"  : 300
+       },
+     "sample1Size" :
+       { "type"   : "constant",
+         "name"   : "sampleSizeOfPopulation1",
+         "value"  : 500
+       },
+     "successCount2":
+       { "type"   : "constant",
+         "name"   : "successCountFromPopulation2Sample",
+         "value"  : 450
+       },
+     "sample2Size" :
+       { "type"   : "constant",
+         "name"   : "sampleSizeOfPopulation2",
+         "value"  : 600
+       }
+     }
+    }
+}
+
+```
diff --git a/docs/latest/development/extensions.md b/docs/0.15.0-incubating/development/extensions.md
similarity index 85%
copy from docs/latest/development/extensions.md
copy to docs/0.15.0-incubating/development/extensions.md
index 4cebe0e..c56ff4f 100644
--- a/docs/latest/development/extensions.md
+++ b/docs/0.15.0-incubating/development/extensions.md
@@ -44,20 +44,22 @@ Core extensions are maintained by Druid committers.
 |druid-avro-extensions|Support for data in Apache Avro data format.|[link](../development/extensions-core/avro.html)|
 |druid-basic-security|Support for Basic HTTP authentication and role-based access control.|[link](../development/extensions-core/druid-basic-security.html)|
 |druid-bloom-filter|Support for providing Bloom filters in druid queries.|[link](../development/extensions-core/bloom-filter.html)|
-|druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../development/extensions-core/caffeine-cache.html)|
-|druid-datasketches|Support for approximate counts and set operations with [DataSketches](http://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
+|druid-caffeine-cache|A local cache implementation backed by Caffeine.|[link](../configuration/index.html#cache-configuration)|
+|druid-datasketches|Support for approximate counts and set operations with [DataSketches](https://datasketches.github.io/).|[link](../development/extensions-core/datasketches-extension.html)|
 |druid-hdfs-storage|HDFS deep storage.|[link](../development/extensions-core/hdfs.html)|
 |druid-histogram|Approximate histograms and quantiles aggregator. Deprecated, please use the [DataSketches quantiles aggregator](../development/extensions-core/datasketches-quantiles.html) from the `druid-datasketches` extension instead.|[link](../development/extensions-core/approximate-histograms.html)|
-|druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes.|[link](../development/extensions-core/kafka-eight-firehose.html)|
+|druid-kafka-eight|Kafka ingest firehose (high level consumer) for realtime nodes (deprecated).|[link](../development/extensions-core/kafka-eight-firehose.html)|
 |druid-kafka-extraction-namespace|Kafka-based namespaced lookup. Requires namespace lookup extension.|[link](../development/extensions-core/kafka-extraction-namespace.html)|
 |druid-kafka-indexing-service|Supervised exactly-once Kafka ingestion for the indexing service.|[link](../development/extensions-core/kafka-ingestion.html)|
 |druid-kinesis-indexing-service|Supervised exactly-once Kinesis ingestion for the indexing service.|[link](../development/extensions-core/kinesis-ingestion.html)|
 |druid-kerberos|Kerberos authentication for druid processes.|[link](../development/extensions-core/druid-kerberos.html)|
 |druid-lookups-cached-global|A module for [lookups](../querying/lookups.html) providing a jvm-global eager caching for lookups. It provides JDBC and URI implementations for fetching lookup data.|[link](../development/extensions-core/lookups-cached-global.html)|
 |druid-lookups-cached-single| Per lookup caching module to support the use cases where a lookup need to be isolated from the global pool of lookups |[link](../development/extensions-core/druid-lookups.html)|
+|druid-orc-extensions|Support for data in the Apache ORC data format.|[link](../development/extensions-core/orc.html)|
 |druid-parquet-extensions|Support for data in Apache Parquet data format. Requires druid-avro-extensions to be loaded.|[link](../development/extensions-core/parquet.html)|
 |druid-protobuf-extensions| Support for data in Protobuf data format.|[link](../development/extensions-core/protobuf.html)|
 |druid-s3-extensions|Interfacing with data in AWS S3, and using S3 as deep storage.|[link](../development/extensions-core/s3.html)|
+|druid-ec2-extensions|Interfacing with AWS EC2 for autoscaling MiddleManagers.|UNDOCUMENTED|
 |druid-stats|Statistics related module including variance and standard deviation.|[link](../development/extensions-core/stats.html)|
 |mysql-metadata-storage|MySQL metadata store.|[link](../development/extensions-core/mysql.html)|
 |postgresql-metadata-storage|PostgreSQL metadata store.|[link](../development/extensions-core/postgresql.html)|
@@ -81,8 +83,7 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |druid-cassandra-storage|Apache Cassandra deep storage.|[link](../development/extensions-contrib/cassandra.html)|
 |druid-cloudfiles-extensions|Rackspace Cloudfiles deep storage and firehose.|[link](../development/extensions-contrib/cloudfiles.html)|
 |druid-distinctcount|DistinctCount aggregator|[link](../development/extensions-contrib/distinctcount.html)|
-|druid-kafka-eight-simpleConsumer|Kafka ingest firehose (low level consumer).|[link](../development/extensions-contrib/kafka-simple.html)|
-|druid-orc-extensions|Support for data in Apache Orc data format.|[link](../development/extensions-contrib/orc.html)|
+|druid-kafka-eight-simpleConsumer|Kafka ingest firehose (low level consumer) (deprecated).|[link](../development/extensions-contrib/kafka-simple.html)|
 |druid-rabbitmq|RabbitMQ firehose.|[link](../development/extensions-contrib/rabbitmq.html)|
 |druid-redis-cache|A cache implementation for Druid based on Redis.|[link](../development/extensions-contrib/redis-cache.html)|
 |druid-rocketmq|RocketMQ firehose.|[link](../development/extensions-contrib/rocketmq.html)|
@@ -94,6 +95,10 @@ All of these community extensions can be downloaded using *pull-deps* with the c
 |kafka-emitter|Kafka metrics emitter|[link](../development/extensions-contrib/kafka-emitter.html)|
 |druid-thrift-extensions|Support thrift ingestion |[link](../development/extensions-contrib/thrift.html)|
 |druid-opentsdb-emitter|OpenTSDB metrics emitter |[link](../development/extensions-contrib/opentsdb-emitter.html)|
+|druid-moving-average-query|Support for [Moving Average](https://en.wikipedia.org/wiki/Moving_average) and other Aggregate [Window Functions](https://en.wikibooks.org/wiki/Structured_Query_Language/Window_functions) in Druid queries.|[link](../development/extensions-contrib/moving-average-query.html)|
+|druid-influxdb-emitter|InfluxDB metrics emitter|[link](../development/extensions-contrib/influxdb-emitter.html)|
+|druid-momentsketch|Support for approximate quantile queries using the [momentsketch](https://github.com/stanford-futuredata/momentsketch) library|[link](../development/extensions-contrib/momentsketch-quantiles.html)|
+|druid-tdigestsketch|Support for approximate sketch aggregators based on [T-Digest](https://github.com/tdunning/t-digest)|[link](../development/extensions-contrib/tdigestsketch-quantiles.html)|
 
 ## Promoting Community Extension to Core Extension
 
diff --git a/docs/latest/development/geo.md b/docs/0.15.0-incubating/development/geo.md
similarity index 92%
copy from docs/latest/development/geo.md
copy to docs/0.15.0-incubating/development/geo.md
index b482740..8d6a6cf 100644
--- a/docs/latest/development/geo.md
+++ b/docs/0.15.0-incubating/development/geo.md
@@ -91,3 +91,10 @@ Bounds
 |--------|-----------|---------|
 |coords|Origin coordinates in the form [x, y, z, …]|yes|
 |radius|The float radius value|yes|
+
+### PolygonBound
+
+|property|description|required?|
+|--------|-----------|---------|
+|abscissa|Horizontal coordinate for corners of the polygon|yes|
+|ordinate|Vertical coordinate for corners of the polygon|yes|
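+
+For example, a spatial filter using a polygon bound might look like the following sketch (the dimension name `coordinates` and the corner values are only an illustration):
+
+```json
+{
+  "type": "spatial",
+  "dimension": "coordinates",
+  "bound": {
+    "type": "polygon",
+    "abscissa": [30.0, 40.0, 40.0, 30.0],
+    "ordinate": [100.0, 100.0, 110.0, 110.0]
+  }
+}
+```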
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/development/integrating-druid-with-other-technologies.md
similarity index 51%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/development/integrating-druid-with-other-technologies.md
index adf4e24..873cdbd 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/development/integrating-druid-with-other-technologies.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "Integrating Apache Druid (incubating) With Other Technologies"
 ---
 
 <!--
@@ -22,18 +22,18 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# Integrating Apache Druid (incubating) With Other Technologies
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+This page discusses how we can integrate Druid with other technologies. 
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+## Integrating with Open Source Streaming Technologies
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+Event streams can be stored in a distributed message bus such as Kafka and further processed via a distributed stream
+processing system such as Storm, Samza, or Spark Streaming. Data processed by the stream processor can feed into Druid using
+the [Tranquility](https://github.com/druid-io/tranquility) library.
 
-```
-druid.extensions.loadList=["druid-histogram"]
-```
+<img src="../../img/druid-production.png" width="800"/>
 
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+## Integrating with SQL-on-Hadoop Technologies
+
+Druid should theoretically integrate well with SQL-on-Hadoop technologies such as Apache Drill, Spark SQL, Presto, Impala, and Hive.
diff --git a/docs/0.15.0-incubating/development/javascript.md b/docs/0.15.0-incubating/development/javascript.md
new file mode 100644
index 0000000..ae0aad4
--- /dev/null
+++ b/docs/0.15.0-incubating/development/javascript.md
@@ -0,0 +1,75 @@
+---
+layout: doc_page
+title: "JavaScript Programming Guide"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# JavaScript Programming Guide
+
+This page discusses how to use JavaScript to extend Apache Druid (incubating).
+
+## Examples
+
+JavaScript can be used to extend Druid in a variety of ways:
+
+- [Aggregators](../querying/aggregations.html#javascript-aggregator)
+- [Extraction functions](../querying/dimensionspecs.html#javascript-extraction-function)
+- [Filters](../querying/filters.html#javascript-filter)
+- [Post-aggregators](../querying/post-aggregations.html#javascript-post-aggregator)
+- [Input parsers](../ingestion/data-formats.html#javascript)
+- [Router strategy](../development/router.html#javascript)
+- [Worker select strategy](../configuration/index.html#javascript-worker-select-strategy)
+
+JavaScript can be injected dynamically at runtime, making it convenient to rapidly prototype new functionality
+without needing to write and deploy Druid extensions.
+
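+For example, a JavaScript aggregator that sums a column might look like the following sketch (the column name `delta` is only an illustration; see the aggregations documentation linked above for the authoritative syntax):
+
+```json
+{
+  "type": "javascript",
+  "name": "sum_delta",
+  "fieldNames": ["delta"],
+  "fnAggregate": "function(current, delta) { return current + delta; }",
+  "fnCombine": "function(partialA, partialB) { return partialA + partialB; }",
+  "fnReset": "function() { return 0; }"
+}
+```
+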
+Druid uses the Mozilla Rhino engine at optimization level 9 to compile and execute JavaScript.
+
+## Security
+
+Druid does not execute JavaScript functions in a sandbox, so they have full access to the machine. JavaScript
+functions therefore allow users to execute arbitrary code inside the Druid process, which is why JavaScript is disabled by default.
+However, in dev/staging environments or secured production environments you can enable it by setting
+the [configuration property](../configuration/index.html#javascript)
+`druid.javascript.enabled = true`.
+
+## Global variables
+
+Avoid using global variables. Druid may share the global scope between multiple threads, which can lead to
+unpredictable results if global variables are used.
+
+## Performance
+
+Simple JavaScript functions typically have a slight performance penalty compared to native speed. More complex JavaScript
+functions can have steeper performance penalties. Druid compiles JavaScript functions once on each data process per query.
+
+You may need to pay special attention to garbage collection when making heavy use of JavaScript functions, especially
+garbage collection of the compiled classes themselves. Be sure to use a garbage collector configuration that supports
+timely collection of unused classes (this is generally easier on JDK8 with the Metaspace than it is on JDK7).
+
+## JavaScript vs. Native Extensions
+
+Generally we recommend using JavaScript when security is not an issue, and when speed of development is more important
+than performance or memory use. If security is an issue, or if performance and memory use are of the utmost importance,
+we recommend developing a native Druid extension.
+
+In addition, native Druid extensions are more flexible than JavaScript functions. There are some kinds of extensions
+(like sketches) that must be written as native Druid extensions due to their need for custom data formats.
diff --git a/docs/0.15.0-incubating/development/kafka-simple-consumer-firehose.html b/docs/0.15.0-incubating/development/kafka-simple-consumer-firehose.html
new file mode 100644
index 0000000..7552ebe
--- /dev/null
+++ b/docs/0.15.0-incubating/development/kafka-simple-consumer-firehose.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: extensions-contrib/kafka-simple.html
+---
diff --git a/docs/0.15.0-incubating/development/libraries.html b/docs/0.15.0-incubating/development/libraries.html
new file mode 100644
index 0000000..10a2691
--- /dev/null
+++ b/docs/0.15.0-incubating/development/libraries.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: /libraries.html
+---
diff --git a/docs/latest/development/modules.md b/docs/0.15.0-incubating/development/modules.md
similarity index 96%
copy from docs/latest/development/modules.md
copy to docs/0.15.0-incubating/development/modules.md
index c665b8e..44ce7bd 100644
--- a/docs/latest/development/modules.md
+++ b/docs/0.15.0-incubating/development/modules.md
@@ -39,7 +39,7 @@ Druid's extensions leverage Guice in order to add things at runtime.  Basically,
    and `org.apache.druid.query.aggregation.BufferAggregator`.
 1. Add PostAggregators by extending `org.apache.druid.query.aggregation.PostAggregator`.
 1. Add ExtractionFns by extending `org.apache.druid.query.extraction.ExtractionFn`.
-1. Add Complex metrics by extending `org.apache.druid.segment.serde.ComplexMetricsSerde`.
+1. Add Complex metrics by extending `org.apache.druid.segment.serde.ComplexMetricSerde`.
 1. Add new Query types by extending `org.apache.druid.query.QueryRunnerFactory`, `org.apache.druid.query.QueryToolChest`, and
    `org.apache.druid.query.Query`.
 1. Add new Jersey resources by calling `Jerseys.addResource(binder, clazz)`.
@@ -114,7 +114,7 @@ In this way, you can validate both push (at realtime process) and pull (at Histo
 
 * DataSegmentPusher
 
-Wherever your data storage (cloud storage service, distributed file system, etc.) is, you should be able to see two new files: `descriptor.json` (`partitionNum_descriptor.json` for HDFS data storage) and `index.zip` (`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
+Wherever your data storage (cloud storage service, distributed file system, etc.) is, you should be able to see one new file: `index.zip` (`partitionNum_index.zip` for HDFS data storage) after your ingestion task ends.
 
 * DataSegmentPuller
 
@@ -130,7 +130,7 @@ The following example was retrieved from a Historical process configured to use
 00Z_2015-04-14T02:41:09.484Z
 2015-04-14T02:42:33,463 INFO [ZkCoordinator-0] org.apache.druid.guice.JsonConfigurator - Loaded class[class org.apache.druid.storage.azure.AzureAccountConfig] from props[drui
 d.azure.] as [org.apache.druid.storage.azure.AzureAccountConfig@759c9ad9]
-2015-04-14T02:49:08,275 INFO [ZkCoordinator-0] org.apache.druid.java.util.common.CompressionUtils - Unzipping file[/opt/druid/tmp/compressionUtilZipCache1263964429587449785.z
+2015-04-14T02:49:08,275 INFO [ZkCoordinator-0] org.apache.druid.utils.CompressionUtils - Unzipping file[/opt/druid/tmp/compressionUtilZipCache1263964429587449785.z
 ip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0]
 2015-04-14T02:49:08,276 INFO [ZkCoordinator-0] org.apache.druid.storage.azure.AzureDataSegmentPuller - Loaded 1196 bytes from [dde/2015-01-02T00:00:00.000Z_2015-01-03
 T00:00:00.000Z/2015-04-14T02:41:09.484Z/0/index.zip] to [/opt/druid/zk_druid/dde/2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z/2015-04-14T02:41:09.484Z/0]
@@ -147,7 +147,7 @@ To mark a segment as not used, you need to connect to your metadata storage and
 
 To start a segment killing task, you need to access the old Coordinator console `http://<COODRINATOR_IP>:<COORDINATOR_PORT/old-console/kill.html` then select the appropriate datasource and then input a time range (e.g. `2000/3000`).
 
-After the killing task ends, both `descriptor.json` (`partitionNum_descriptor.json` for HDFS data storage)  and `index.zip` (`partitionNum_index.zip` for HDFS data storage) files should be deleted from the data storage.
+After the killing task ends, `index.zip` (`partitionNum_index.zip` for HDFS data storage) file should be deleted from the data storage.
 
 ### Adding a new Firehose
 
diff --git a/docs/latest/development/overview.md b/docs/0.15.0-incubating/development/overview.md
similarity index 97%
copy from docs/latest/development/overview.md
copy to docs/0.15.0-incubating/development/overview.md
index c0ca8de..ad360a5 100644
--- a/docs/latest/development/overview.md
+++ b/docs/0.15.0-incubating/development/overview.md
@@ -73,4 +73,4 @@ At some point in the future, we will likely move the internal UI code out of cor
 ## Client Libraries
 
 We welcome contributions for new client libraries to interact with Druid. See client 
-[libraries](../development/libraries.html) for existing client libraries.
+[libraries](/libraries.html) for existing client libraries.
diff --git a/docs/latest/development/router.md b/docs/0.15.0-incubating/development/router.md
similarity index 96%
copy from docs/latest/development/router.md
copy to docs/0.15.0-incubating/development/router.md
index 3c8f3b7..11508ac 100644
--- a/docs/latest/development/router.md
+++ b/docs/0.15.0-incubating/development/router.md
@@ -24,6 +24,11 @@ title: "Router Process"
 
 # Router Process
 
+<div class="note info">
+The Router is an optional and <a href="../development/experimental.html">experimental</a> feature due to the fact that its recommended place in the Druid cluster architecture is still evolving.
+However, it has been battle-tested in production, and it hosts the powerful <a href="../operations/management-uis.html#druid-console">Druid Console</a>, so you should feel safe deploying it.
+</div>
+
 The Apache Druid (incubating) Router process can be used to route queries to different Broker processes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This set up provides query isolation such that queries [...]
 
 For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range. 
diff --git a/docs/0.15.0-incubating/development/select-query.html b/docs/0.15.0-incubating/development/select-query.html
new file mode 100644
index 0000000..ce62ed8
--- /dev/null
+++ b/docs/0.15.0-incubating/development/select-query.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: ../querying/select-query.html
+---
diff --git a/docs/0.15.0-incubating/development/versioning.md b/docs/0.15.0-incubating/development/versioning.md
new file mode 100644
index 0000000..c0227c1
--- /dev/null
+++ b/docs/0.15.0-incubating/development/versioning.md
@@ -0,0 +1,47 @@
+---
+layout: doc_page
+title: "Versioning Apache Druid (incubating)"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Versioning Apache Druid (incubating)
+
+This page discusses how we do versioning and provides information on our stable releases.
+
+Versioning Strategy
+-------------------
+
+We generally follow [semantic versioning](http://semver.org/). The general idea is
+
+* "Major" version (leftmost): backwards incompatible, no guarantees exist about APIs between the versions
+* "Minor" version (middle number): you can move forward from a smaller number to a larger number, but moving backwards *might* be incompatible.
+* "bug-fix" version ("patch" or the rightmost): Interchangeable. The higher the number, the more things are fixed (hopefully), but the programming interfaces are completely compatible and you should be able to just drop in a new jar and have it work.
+
+Note that this is defined in terms of programming API, **not** in terms of functionality. It is possible that a brand new awesome way of doing something is introduced in a "bug-fix" release version if it doesn’t add to the public API or change it.
+
+One exception for right now: while we are still in major version 0, we consider the APIs to be in beta and are conflating "major" and "minor", so a minor version increase could be backwards incompatible for as long as we are at major version 0. Such changes will be communicated via email on the group.
+
+For external deployments, we recommend running the stable release tag. Releases are considered stable after we have deployed them into our production environment and they have operated bug-free for some time.
+
+Tagging strategy
+----------------
+
+Tags of the codebase are equivalent to release candidates. We tag the code every time we want to take it through our release process, which includes some QA cycles and deployments. So, it is not safe to assume that a tag is a stable release; it is a solidification of the code as it goes through our production QA cycle and deployment. Tags will never change, but we often go through a number of iterations of tags before actually getting a stable release onto production. So, it is recommended [...]
diff --git a/docs/0.15.0-incubating/index.html b/docs/0.15.0-incubating/index.html
new file mode 100644
index 0000000..356fcfc
--- /dev/null
+++ b/docs/0.15.0-incubating/index.html
@@ -0,0 +1,4 @@
+---
+layout: redirect_page
+redirect_target: design/index.html
+---
diff --git a/docs/latest/development/experimental.md b/docs/0.15.0-incubating/ingestion/batch-ingestion.md
similarity index 52%
copy from docs/latest/development/experimental.md
copy to docs/0.15.0-incubating/ingestion/batch-ingestion.md
index adf4e24..27c57d8 100644
--- a/docs/latest/development/experimental.md
+++ b/docs/0.15.0-incubating/ingestion/batch-ingestion.md
@@ -1,6 +1,6 @@
 ---
 layout: doc_page
-title: "Experimental Features"
+title: "Batch Data Ingestion"
 ---
 
 <!--
@@ -22,18 +22,18 @@ title: "Experimental Features"
   ~ under the License.
   -->
 
-# Experimental Features
+# Batch Data Ingestion
 
-Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
+Apache Druid (incubating) can load data from static files through a variety of methods described here.
 
-<div class="note caution">
-APIs for experimental features may change in backwards incompatible ways.
-</div>
+## Native Batch Ingestion
 
-To enable experimental features, include their artifacts in the configuration runtime.properties file, e.g.,
+Druid has built-in batch ingestion functionality. See [here](../ingestion/native_tasks.html) for more info.
 
-```
-druid.extensions.loadList=["druid-histogram"]
-```
+## Hadoop Batch Ingestion
 
-The configuration files for all the Apache Druid (incubating) processes need to be updated with this.
+Hadoop can be used for batch ingestion. Hadoop-based batch ingestion is typically faster and more scalable than native batch ingestion. See [here](../ingestion/hadoop.html) for more details.
+
+Having Problems?
+----------------
+Getting data into Druid can definitely be difficult for first-time users. Please don't hesitate to ask questions in our IRC channel or on our [Google Groups page](https://groups.google.com/forum/#!forum/druid-user).
diff --git a/docs/0.15.0-incubating/ingestion/command-line-hadoop-indexer.md b/docs/0.15.0-incubating/ingestion/command-line-hadoop-indexer.md
new file mode 100644
index 0000000..231852e
--- /dev/null
+++ b/docs/0.15.0-incubating/ingestion/command-line-hadoop-indexer.md
@@ -0,0 +1,95 @@
+---
+layout: doc_page
+title: "Command Line Hadoop Indexer"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Command Line Hadoop Indexer
+
+To run:
+
+```
+java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:<hadoop_config_dir> org.apache.druid.cli.Main index hadoop <spec_file>
+```
+
+## Options
+
+- "--coordinate" - provide a version of Apache Hadoop to use. This property will override the default Hadoop coordinates. Once specified, Apache Druid (incubating) will look for those Hadoop dependencies from the location specified by `druid.extensions.hadoopDependenciesDir`.
+- "--no-default-hadoop" - don't pull down the default hadoop version
+
+## Spec file
+
+The spec file needs to contain a JSON object where the contents are the same as the "spec" field in the Hadoop index task. See [Hadoop Batch Ingestion](../ingestion/hadoop.html) for details on the spec format. 
+
+In addition, `metadataUpdateSpec` and `segmentOutputPath` fields need to be added to the ioConfig:
+
+```
+      "ioConfig" : {
+        ...
+        "metadataUpdateSpec" : {
+          "type":"mysql",
+          "connectURI" : "jdbc:mysql://localhost:3306/druid",
+          "password" : "diurd",
+          "segmentTable" : "druid_segments",
+          "user" : "druid"
+        },
+        "segmentOutputPath" : "/MyDirectory/data/index/output"
+      },
+```    
+
+and a `workingPath` field needs to be added to the tuningConfig:
+
+```
+  "tuningConfig" : {
+   ...
+    "workingPath": "/tmp",
+    ...
+  }
+```    
+
+#### Metadata Update Job Spec
+
+This is a specification of the properties that tell the job how to update metadata such that the Druid cluster will see the output segments and load them.
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|type|String|"metadata" is the only value available.|yes|
+|connectURI|String|A valid JDBC url to metadata storage.|yes|
+|user|String|Username for db.|yes|
+|password|String|password for db.|yes|
+|segmentTable|String|Table to use in DB.|yes|
+
+These properties should parrot what you have configured for your [Coordinator](../design/coordinator.html).
+
+#### segmentOutputPath Config
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|segmentOutputPath|String|the path to dump segments into.|yes|
+
+#### workingPath Config
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|workingPath|String|the working path to use for intermediate results (results between Hadoop jobs).|no (default == '/tmp/druid-indexing')|
+ 
+Please note that the command line Hadoop indexer doesn't have the locking capabilities of the indexing service, so if you choose to use it,
+you must take care not to overwrite segments created by real-time processing (if you have a real-time pipeline set up).
diff --git a/docs/latest/ingestion/compaction.md b/docs/0.15.0-incubating/ingestion/compaction.md
similarity index 83%
copy from docs/latest/ingestion/compaction.md
copy to docs/0.15.0-incubating/ingestion/compaction.md
index 1c5dfe4..759cd21 100644
--- a/docs/latest/ingestion/compaction.md
+++ b/docs/0.15.0-incubating/ingestion/compaction.md
@@ -33,7 +33,6 @@ Compaction tasks merge all segments of the given interval. The syntax is:
     "dataSource": <task_datasource>,
     "interval": <interval to specify segments to be merged>,
     "dimensions" <custom dimensionsSpec>,
-    "keepSegmentGranularity": <true or false>,
     "segmentGranularity": <segment granularity after compaction>,
     "targetCompactionSizeBytes": <target size of compacted segments>
     "tuningConfig" <index task tuningConfig>,
@@ -50,21 +49,10 @@ Compaction tasks merge all segments of the given interval. The syntax is:
 |`dimensionsSpec`|Custom dimensionsSpec. Compaction task will use this dimensionsSpec if specified instead of generating one. See below for more details.|No|
 |`metricsSpec`|Custom metricsSpec. Compaction task will use this metricsSpec if specified rather than generating one.|No|
 |`segmentGranularity`|If this is set, compactionTask will change the segment granularity for the given interval. See [segmentGranularity of Uniform Granularity Spec](./ingestion-spec.html#uniform-granularity-spec) for more details. See the below table for the behavior.|No|
-|`keepSegmentGranularity`|Deprecated. Please use `segmentGranularity` instead. See the below table for its behavior.|No|
 |`targetCompactionSizeBytes`|Target segment size after compaction. Cannot be used with `maxRowsPerSegment`, `maxTotalRows`, and `numShards` in tuningConfig.|No|
 |`tuningConfig`|[Index task tuningConfig](../ingestion/native_tasks.html#tuningconfig)|No|
 |`context`|[Task context](../ingestion/locking-and-priority.html#task-context)|No|
 
-### Used segmentGranularity based on `segmentGranularity` and `keepSegmentGranularity`
-
-|SegmentGranularity|keepSegmentGranularity|Used SegmentGranularity|
-|------------------|----------------------|-----------------------|
-|Non-null|True|Error|
-|Non-null|False|Given segmentGranularity|
-|Non-null|Null|Given segmentGranularity|
-|Null|True|Original segmentGranularity|
-|Null|False|ALL segmentGranularity. All events will fall into the single time chunk.|
-|Null|Null|Original segmentGranularity|
 
 An example of compaction task is
 
@@ -77,12 +65,12 @@ An example of compaction task is
 ```
 
 This compaction task reads _all segments_ of the interval `2017-01-01/2018-01-01` and results in new segments.
-Since both `segmentGranularity` and `keepSegmentGranularity` are null, the original segment granularity will be remained and not changed after compaction.
+Since `segmentGranularity` is null, the original segment granularity is retained and not changed after compaction.
 To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.html#compaction-dynamic-configuration) or [numShards](../ingestion/native_tasks.html#tuningconfig).
 Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year.
 
 A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters.
-For example, its `firehose` is always the [ingestSegmentSpec](./firehose.html#ingestsegmentfirehose), and `dimensionsSpec` and `metricsSpec`
+For example, its `firehose` is always the [ingestSegmentFirehose](./firehose.html#ingestsegmentfirehose), and `dimensionsSpec` and `metricsSpec`
 include all dimensions and metrics of the input segments by default.
 
 Compaction tasks will exit with a failure status code, without doing anything, if the interval you specify has no
diff --git a/docs/0.15.0-incubating/ingestion/data-formats.md b/docs/0.15.0-incubating/ingestion/data-formats.md
new file mode 100644
index 0000000..73ad2ae
--- /dev/null
+++ b/docs/0.15.0-incubating/ingestion/data-formats.md
@@ -0,0 +1,205 @@
+---
+layout: doc_page
+title: "Data Formats for Ingestion"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Data Formats for Ingestion
+
+Apache Druid (incubating) can ingest denormalized data in JSON, CSV, a delimited form such as TSV, or any custom format. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest any other delimited data.
+We welcome any contributions of new formats.
+
+For additional data formats, please see our [extensions list](../development/extensions.html).
+
+## Formatting the Data
+
+The following samples show data formats that are natively supported in Druid:
+
+_JSON_
+
+```json
+{"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}
+{"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330}
+{"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111}
+{"timestamp": "2013-08-31T11:58:39Z", "page": "Crimson Typhoon", "language" : "zh", "user" : "triplets", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"China", "region":"Shanxi", "city":"Taiyuan", "added": 905, "deleted": 5, "delta": 900}
+{"timestamp": "2013-08-31T12:41:27Z", "page": "Coyote Tango", "language" : "ja", "user" : "cancer", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"Japan", "region":"Kanto", "city":"Tokyo", "added": 1, "deleted": 10, "delta": -9}
+```
+
+_CSV_
+
+```
+2013-08-31T01:02:33Z,"Gypsy Danger","en","nuclear","true","true","false","false","article","North America","United States","Bay Area","San Francisco",57,200,-143
+2013-08-31T03:32:45Z,"Striker Eureka","en","speed","false","true","true","false","wikipedia","Australia","Australia","Cantebury","Syndey",459,129,330
+2013-08-31T07:11:21Z,"Cherno Alpha","ru","masterYi","false","true","true","false","article","Asia","Russia","Oblast","Moscow",123,12,111
+2013-08-31T11:58:39Z,"Crimson Typhoon","zh","triplets","true","false","true","false","wikipedia","Asia","China","Shanxi","Taiyuan",905,5,900
+2013-08-31T12:41:27Z,"Coyote Tango","ja","cancer","true","false","true","false","wikipedia","Asia","Japan","Kanto","Tokyo",1,10,-9
+```
+
+_TSV (Delimited)_
+
+```
+2013-08-31T01:02:33Z	"Gypsy Danger"	"en"	"nuclear"	"true"	"true"	"false"	"false"	"article"	"North America"	"United States"	"Bay Area"	"San Francisco"	57	200	-143
+2013-08-31T03:32:45Z	"Striker Eureka"	"en"	"speed"	"false"	"true"	"true"	"false"	"wikipedia"	"Australia"	"Australia"	"Cantebury"	"Syndey"	459	129	330
+2013-08-31T07:11:21Z	"Cherno Alpha"	"ru"	"masterYi"	"false"	"true"	"true"	"false"	"article"	"Asia"	"Russia"	"Oblast"	"Moscow"	123	12	111
+2013-08-31T11:58:39Z	"Crimson Typhoon"	"zh"	"triplets"	"true"	"false"	"true"	"false"	"wikipedia"	"Asia"	"China"	"Shanxi"	"Taiyuan"	905	5	900
+2013-08-31T12:41:27Z	"Coyote Tango"	"ja"	"cancer"	"true"	"false"	"true"	"false"	"wikipedia"	"Asia"	"Japan"	"Kanto"	"Tokyo"	1	10	-9
+```
+
+Note that the CSV and TSV data do not contain column headers. This becomes important when you specify the columns in your ingestion spec.
+
+## Custom Formats
+
+Druid supports custom data formats and can use the `Regex` parser or the `JavaScript` parser to parse these formats. Please note that using either of these parsers to
+parse data will not be as efficient as writing a native Java parser or using an external stream processor. We welcome contributions of new parsers.
+
+## Configuration
+
+All forms of Druid ingestion require some form of schema object. The format of the data to be ingested is specified using the `parseSpec` entry in your `dataSchema`.
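+
+For orientation, the sketch below shows where the `parseSpec` sits inside a `dataSchema`; the surrounding fields are elided with `...` placeholders and the datasource name is illustrative.
+
+```json
+"dataSchema": {
+  "dataSource": "wikipedia",
+  "parser": {
+    "type": "string",
+    "parseSpec": { ... }
+  },
+  "metricsSpec": [ ... ],
+  "granularitySpec": { ... }
+}
+```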
+
+### JSON
+
+```json
+  "parseSpec":{
+    "format" : "json",
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "dimensionsSpec" : {
+      "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+    }
+  }
+```
+
+If you have nested JSON, [Druid can automatically flatten it for you](flatten-json.html).
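+
+As a sketch of what that can look like, here is a JSON `parseSpec` with a `flattenSpec`; the nested field path and dimension name are illustrative.
+
+```json
+  "parseSpec": {
+    "format" : "json",
+    "flattenSpec" : {
+      "useFieldDiscovery" : true,
+      "fields" : [
+        { "type" : "path", "name" : "userCity", "expr" : "$.user.city" }
+      ]
+    },
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "dimensionsSpec" : {
+      "dimensions" : ["userCity"]
+    }
+  }
+```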
+
+### CSV
+
+```json
+  "parseSpec": {
+    "format" : "csv",
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"],
+    "dimensionsSpec" : {
+      "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+    }
+  }
+```
+
+#### CSV Index Tasks
+
+If your input files contain a header, the `columns` field is optional and you don't need to set it.
+Instead, you can set the `hasHeaderRow` field to true, which makes Druid automatically extract the column information from the header.
+Otherwise, you must set the `columns` field, and it must match the columns of your input data in the same order.
+
+Also, you can skip some header rows by setting `skipHeaderRows` in your parseSpec. If both `skipHeaderRows` and `hasHeaderRow` options are set,
+`skipHeaderRows` is first applied. For example, if you set `skipHeaderRows` to 2 and `hasHeaderRow` to true, Druid will
+skip the first two lines and then extract column information from the third line.
+
+Note that `hasHeaderRow` and `skipHeaderRows` are effective only for non-Hadoop batch index tasks. Other types of index
+tasks will fail with an exception.
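+
+As a sketch, a CSV `parseSpec` combining both options, for a file whose first two lines should be discarded and whose third line is the header, might look like this (the option values are illustrative):
+
+```json
+  "parseSpec": {
+    "format" : "csv",
+    "hasHeaderRow" : true,
+    "skipHeaderRows" : 2,
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "dimensionsSpec" : {
+      "dimensions" : []
+    }
+  }
+```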
+
+#### Other CSV Ingestion Tasks
+
+The `columns` field must be included, and the order of the fields must match the columns of your input data.
+
+### TSV (Delimited)
+
+```json
+  "parseSpec": {
+    "format" : "tsv",
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"],
+    "delimiter":"|",
+    "dimensionsSpec" : {
+      "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+    }
+  }
+```
+
+Be sure to change the `delimiter` to the appropriate delimiter for your data. Like CSV, you must specify the columns and which subset of the columns you want indexed.
+
+#### TSV (Delimited) Index Tasks
+
+If your input files contain a header, the `columns` field is optional and you don't need to set it.
+Instead, you can set the `hasHeaderRow` field to true, which makes Druid automatically extract the column information from the header.
+Otherwise, you must set the `columns` field, and it must match the columns of your input data in the same order.
+
+Also, you can skip some header rows by setting `skipHeaderRows` in your parseSpec. If both `skipHeaderRows` and `hasHeaderRow` options are set,
+`skipHeaderRows` is first applied. For example, if you set `skipHeaderRows` to 2 and `hasHeaderRow` to true, Druid will
+skip the first two lines and then extract column information from the third line.
+
+Note that `hasHeaderRow` and `skipHeaderRows` are effective only for non-Hadoop batch index tasks. Other types of index
+tasks will fail with an exception.
+
+#### Other TSV (Delimited) Ingestion Tasks
+
+The `columns` field must be included, and the order of the fields must match the columns of your input data.
+
+### Regex
+
+```json
+  "parseSpec":{
+    "format" : "regex",
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },        
+    "dimensionsSpec" : {
+      "dimensions" : [<your_list_of_dimensions>]
+    },
+    "columns" : [<your_columns_here>],
+    "pattern" : <regex pattern for partitioning data>
+  }
+```
+
+The `columns` field must match the columns of your regex matching groups in the same order. If columns are not provided, default
+column names ("column_1", "column_2", ... "column_n") will be assigned. Ensure that your column names include all your dimensions.
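+
+As an illustration only, a regex `parseSpec` for pipe-delimited lines such as `2013-08-31T01:02:33Z|Gypsy Danger|en` could look like the sketch below; the pattern and column names are hypothetical.
+
+```json
+  "parseSpec":{
+    "format" : "regex",
+    "pattern" : "^([^|]+)\\|([^|]+)\\|([^|]+)$",
+    "columns" : ["timestamp","page","language"],
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "dimensionsSpec" : {
+      "dimensions" : ["page","language"]
+    }
+  }
+```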
+
+### JavaScript
+
+```json
+  "parseSpec":{
+    "format" : "javascript",
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },        
+    "dimensionsSpec" : {
+      "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
+    },
+    "function" : "function(str) { var parts = str.split(\"-\"); return { one: parts[0], two: parts[1] } }"
+  }
+```
+
+Note that with the JavaScript parser, data must be fully parsed and returned as a `{key: value}` object by the JS function.
+This means any flattening or parsing of multi-dimensional values must be done there.
+
+<div class="note info">
+JavaScript-based functionality is disabled by default. Please refer to the Druid <a href="../development/javascript.html">JavaScript programming guide</a> for guidelines about using Druid's JavaScript functionality, including instructions on how to enable it.
+</div>
+
+### Multi-value dimensions
+
+Dimensions can have multiple values for TSV and CSV data. To specify the delimiter for a multi-value dimension, set the `listDelimiter` in the `parseSpec`.
+
+JSON data can contain multi-value dimensions as well. The multiple values for a dimension must be formatted as a JSON array in the ingested data. No additional `parseSpec` configuration is needed.
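+
+As a sketch, the relevant fields of a TSV `parseSpec` that treats `|`-separated values in the `tags` column as a multi-value dimension might look like this (the column names and delimiter are illustrative):
+
+```json
+  "parseSpec": {
+    "format" : "tsv",
+    "listDelimiter" : "|",
+    "columns" : ["timestamp","page","tags"],
+    "timestampSpec" : {
+      "column" : "timestamp"
+    },
+    "dimensionsSpec" : {
+      "dimensions" : ["page","tags"]
+    }
+  }
+```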
diff --git a/docs/0.15.0-incubating/ingestion/delete-data.md b/docs/0.15.0-incubating/ingestion/delete-data.md
new file mode 100644
index 0000000..7e21e99
--- /dev/null
+++ b/docs/0.15.0-incubating/ingestion/delete-data.md
@@ -0,0 +1,50 @@
+---
+layout: doc_page
+title: "Deleting Data"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Deleting Data
+
+Permanent deletion of a segment in Apache Druid (incubating) has two steps:
+
+1. The segment must first be marked as "unused". This occurs when a segment is dropped by retention rules or when a user manually disables a segment through the Coordinator API.
+2. After segments have been marked as "unused", a Kill Task will delete any "unused" segments from Druid's metadata store as well as deep storage.
+
+For documentation on retention rules, please see [Data Retention](../operations/rule-configuration.html).
+
+For documentation on disabling segments using the Coordinator API, please see the [Coordinator Delete API](../operations/api-reference.html#coordinator-delete).
+
+A data deletion tutorial is available at [Tutorial: Deleting data](../tutorials/tutorial-delete-data.html).
+
+## Kill Task
+
+Kill tasks delete all information about a segment and remove it from deep storage. Killable segments must be disabled (used==0) in the Druid segment table. The available grammar is:
+
+```json
+{
+    "type": "kill",
+    "id": <task_id>,
+    "dataSource": <task_datasource>,
+    "interval" : <all_segments_in_this_interval_will_die!>,
+    "context": <task context>
+}
+```
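+
+As a concrete sketch, a filled-in kill task might look like the following; the id, datasource, and interval are illustrative. It can be submitted to the Overlord's task endpoint, for example with `curl -X POST -H 'Content-Type: application/json' -d @kill-task.json http://OVERLORD_IP:8090/druid/indexer/v1/task` (adjust the host and port for your deployment).
+
+```json
+{
+    "type": "kill",
+    "id": "kill_wikipedia_2016-01-01_2017-01-01",
+    "dataSource": "wikipedia",
+    "interval" : "2016-01-01/2017-01-01"
+}
+```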
diff --git a/docs/0.15.0-incubating/ingestion/faq.md b/docs/0.15.0-incubating/ingestion/faq.md
new file mode 100644
index 0000000..e134c82
--- /dev/null
+++ b/docs/0.15.0-incubating/ingestion/faq.md
@@ -0,0 +1,106 @@
+---
+layout: doc_page
+title: "Apache Druid (incubating) FAQ"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Apache Druid (incubating) FAQ
+
+### Realtime Ingestion
+
+The most common cause of missing realtime data is that the events being ingested fall outside Druid's `windowPeriod`. Druid realtime ingestion
+only accepts events within a configurable `windowPeriod` of the current time. You can verify this is what is happening by looking at the logs of your real-time process for log lines containing "ingest/events/*". These metrics indicate how many events were ingested, rejected, and so on.
+We recommend using batch ingestion methods for historical data in production.
+ 
+### Batch Ingestion
+ 
+If you are trying to batch load historical data but no events are being loaded, make sure the interval of your ingestion spec actually encapsulates the interval of your data. Events outside this interval are dropped. 
+
+## What types of data does Druid support?
+
+Druid can ingest JSON, CSV, TSV and other delimited data out of the box. Druid supports single dimension values, or multiple dimension values (an array of strings). Druid supports long, float, and double numeric columns.
+
+## Not all of my events were ingested
+
+Druid will reject events outside of a window period. The best way to see if events are being rejected is to check the [Druid ingest metrics](../operations/metrics.html).
+
+If the number of ingested events seems correct, make sure your query is correctly formed. If you included a `count` aggregator in your ingestion spec, you will need to query for the results of this aggregate with a `longSum` aggregator. Issuing a query with a `count` aggregator will count the number of Druid rows, which includes [roll-up](../design/index.html).
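+
+As a sketch, a timeseries query that sums a `count` metric to get the true number of ingested events might look like this (the datasource, interval, and output name are placeholders):
+
+```json
+{
+  "queryType": "timeseries",
+  "dataSource": "wikipedia",
+  "granularity": "all",
+  "intervals": ["2013-08-31/2013-09-01"],
+  "aggregations": [
+    { "type": "longSum", "name": "numIngestedEvents", "fieldName": "count" }
+  ]
+}
+```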
+
+## Where do my Druid segments end up after ingestion?
+
+Depending on what `druid.storage.type` is set to, Druid will upload segments to some [Deep Storage](../dependencies/deep-storage.html). Local disk is used as the default deep storage.
+
+## My stream ingest is not handing segments off
+
+First, make sure there are no exceptions in the logs of the ingestion process. Also make sure that `druid.storage.type` is set to a deep storage that isn't `local` if you are running a distributed cluster.
+
+Other common reasons that hand-off fails are as follows:
+
+1) Druid is unable to write to the metadata storage. Make sure your configurations are correct.
+
+2) Historical processes are out of capacity and cannot download any more segments. You'll see exceptions in the Coordinator logs if this occurs, and the Coordinator console will show that the Historicals are near capacity.
+
+3) Segments are corrupt and cannot be downloaded. You'll see exceptions in your Historical processes if this occurs.
+
+4) Deep storage is improperly configured. Make sure that your segment actually exists in deep storage and that the Coordinator logs have no errors.
+
+## How do I get HDFS to work?
+
+Make sure to include the `druid-hdfs-storage` extension and all the Hadoop configuration and dependencies (which can be obtained by running `hadoop classpath` on a machine where Hadoop is set up) in the classpath. Also, provide the necessary HDFS settings as described in [Deep Storage](../dependencies/deep-storage.html).
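+
+A minimal sketch of the relevant common properties, assuming default extension names and a placeholder HDFS directory, might look like:
+
+```
+# common.runtime.properties (illustrative values)
+druid.extensions.loadList=["druid-hdfs-storage"]
+druid.storage.type=hdfs
+druid.storage.storageDirectory=hdfs://namenode:8020/druid/segments
+```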
+
+## I don't see my Druid segments on my Historical processes
+
+You can check the Coordinator console located at `<COORDINATOR_IP>:<PORT>`. Make sure that your segments have actually loaded on [Historical processes](../design/historical.html). If your segments are not present, check the Coordinator logs for messages about capacity or replication errors. One reason that segments are not downloaded is that Historical processes have a `maxSize` that is too small, making them incapable of downloading more data. You can change that with (for example):
+
+```
+-Ddruid.segmentCache.locations=[{"path":"/tmp/druid/storageLocation","maxSize":"500000000000"}]
+-Ddruid.server.maxSize=500000000000
+```
+
+## My queries are returning empty results
+
+You can use a [segment metadata query](../querying/segmentmetadataquery.html) to see the dimensions and metrics that have been created for your datasource. Make sure that the names of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify matches a valid time range where data exists.
+
+## How can I reindex existing data in Druid with schema changes?
+
+You can use IngestSegmentFirehose with an index task to ingest existing Druid segments using a new schema and change the name, dimensions, metrics, rollup, etc. of the segments.
+See [Firehose](../ingestion/firehose.html) for more details on IngestSegmentFirehose.
+Or, if you use Hadoop-based ingestion, you can use the "dataSource" input spec to do reindexing.
+
+See [Update Existing Data](../ingestion/update-existing-data.html) for more details.
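+
+As a sketch, the `ioConfig` of such a reindexing `index` task might look like the following; the datasource and interval are placeholders and the rest of the task spec is omitted.
+
+```json
+"ioConfig": {
+  "type": "index",
+  "firehose": {
+    "type": "ingestSegment",
+    "dataSource": "wikipedia",
+    "interval": "2013-01-01/2014-01-01"
+  }
+}
+```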
+
+## How can I change the granularity of existing data in Druid?
+
+In many situations you may want to lower the granularity of older data. For example, data older than one month might only need hour-level granularity while newer data keeps minute-level granularity. This use case is the same as re-indexing.
+
+To do this, use the IngestSegmentFirehose and run an indexer task. The IngestSegment firehose will allow you to take in existing segments from Druid, aggregate them, and feed them back into Druid. It will also allow you to filter the data in those segments while feeding it back in. This means that if there are rows you want to delete, you can just filter them away during re-ingestion.
+Typically the above is run as a batch job that, say, feeds in one chunk of data every day and aggregates it.
+Or, if you use Hadoop-based ingestion, you can use the "dataSource" input spec to do reindexing.
+
+See [Update Existing Data](../ingestion/update-existing-data.html) for more details.
+
+## Real-time ingestion seems to be stuck
+
+There are a few ways this can occur. Druid will throttle ingestion to prevent out of memory problems if the intermediate persists are taking too long or if hand-off is taking too long. If your process logs indicate certain columns are taking a very long time to build (for example, if your segment granularity is hourly, but creating a single column takes 30 minutes), you should re-evaluate your configuration or scale up your real-time ingestion. 
+
+## More information
+
+Getting data into Druid can definitely be difficult for first-time users. Please don't hesitate to ask questions in our IRC channel or on our [Google Groups page](https://groups.google.com/forum/#!forum/druid-user).
diff --git a/docs/latest/ingestion/firehose.md b/docs/0.15.0-incubating/ingestion/firehose.md
similarity index 74%
copy from docs/latest/ingestion/firehose.md
copy to docs/0.15.0-incubating/ingestion/firehose.md
index 51749b9..f35bcc0 100644
--- a/docs/latest/ingestion/firehose.md
+++ b/docs/0.15.0-incubating/ingestion/firehose.md
@@ -74,6 +74,39 @@ A sample http firehose spec is shown below:
 }
 ```
 
+The below configurations can be optionally used if the URIs specified in the spec require a Basic Authentication Header.
+Omitting these fields from your spec will result in HTTP requests with no Basic Authentication Header.
+
+|property|description|default|
+|--------|-----------|-------|
+|httpAuthenticationUsername|Username to use for authentication with specified URIs|None|
+|httpAuthenticationPassword|PasswordProvider to use with specified URIs|None|
+
+Example with authentication fields using the DefaultPassword provider (this requires the password to be in the ingestion spec):
+
+```json
+{
+    "type": "http",
+    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
+    "httpAuthenticationUsername": "username",
+    "httpAuthenticationPassword": "password123"
+}
+```
+
+You can also use the other existing Druid PasswordProviders. Here is an example using the EnvironmentVariablePasswordProvider:
+
+```json
+{
+    "type": "http",
+    "uris": ["http://example.com/uri1", "http://example2.com/uri2"],
+    "httpAuthenticationUsername": "username",
+    "httpAuthenticationPassword": {
+        "type": "environment",
+        "variable": "HTTP_FIREHOSE_PW"
+    }
+}
+```
+
 The below configurations can be optionally used for tuning the firehose performance.
 
 |property|description|default|
@@ -87,7 +120,8 @@ The below configurations can be optionally used for tuning the firehose performa
 ### IngestSegmentFirehose
 
 This Firehose can be used to read the data from existing druid segments.
-It can be used ingest existing druid segments using a new schema and change the name, dimensions, metrics, rollup, etc. of the segment.
+It can be used to ingest existing Druid segments using a new schema and change the name, dimensions, metrics, rollup, etc. of the segment.
+This firehose is _splittable_ and can be used by [native parallel index tasks](./native_tasks.html#parallel-index-task).
 A sample ingest firehose spec is shown below -
 
 ```json
@@ -106,11 +140,15 @@ A sample ingest firehose spec is shown below -
 |dimensions|The list of dimensions to select. If left empty, no dimensions are returned. If left null or not defined, all dimensions are returned. |no|
 |metrics|The list of metrics to select. If left empty, no metrics are returned. If left null or not defined, all metrics are selected.|no|
 |filter| See [Filters](../querying/filters.html)|no|
+|maxInputSegmentBytesPerTask|When used with the native parallel index task, the maximum number of bytes of input segments to process in a single task. If a single segment is larger than this number, it will be processed by itself in a single task (input segments are never split across tasks). Defaults to 150MB.|no|
 
 #### SqlFirehose
 
 SqlFirehoseFactory can be used to ingest events residing in an RDBMS. The database connection information is provided as part of the ingestion spec. For each query, the results are fetched locally and indexed. If there are multiple queries from which data needs to be indexed, queries are prefetched in the background up to `maxFetchCapacityBytes` bytes.
-An example is shown below:
+
+Requires one of the following extensions:
+ * [MySQL Metadata Store](../development/extensions-core/mysql.html).
+ * [PostgreSQL Metadata Store](../development/extensions-core/postgresql.html).
 
 ```json
 {
@@ -118,20 +156,19 @@ An example is shown below:
     "database": {
         "type": "mysql",
         "connectorConfig" : {
-        "connectURI" : "jdbc:mysql://host:port/schema",
-        "user" : "user",
-        "password" : "password"
... 20120 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org