Posted to commits@druid.apache.org by ji...@apache.org on 2019/06/27 22:57:43 UTC

[incubator-druid-website-src] 02/48: base

This is an automated email from the ASF dual-hosted git repository.

jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid-website-src.git

commit 6417b4e8f7c0148c9556a4f05f59e0d9afdd2956
Author: Vadim Ogievetsky <va...@gmail.com>
AuthorDate: Tue Jun 4 08:09:53 2019 -0700

    base
---
 .gitignore                                         |     7 +
 404.html                                           |    12 +
 CNAME                                              |     1 +
 CONTRIBUTING.md                                    |    14 +
 Gemfile                                            |     5 +
 README.md                                          |    30 +-
 _config.yml                                        |    69 +
 _data/events.yml                                   |    14 +
 _data/featured.yml                                 |    47 +
 _images/druid_explorer_chart.png                   |   Bin 0 -> 14508 bytes
 _images/map-usgs-napa.png                          |   Bin 0 -> 3672165 bytes
 _includes/event-list.html                          |    26 +
 _includes/featured-list.html                       |    17 +
 _includes/news-list.html                           |    24 +
 _includes/page_footer.html                         |    46 +
 _includes/page_header.html                         |    63 +
 _includes/site_head.html                           |    35 +
 _layouts/doc_page.html                             |    60 +
 _layouts/html_page.html                            |    14 +
 _layouts/post.html                                 |    32 +
 _layouts/redirect_page.html                        |     8 +
 _layouts/simple_page.html                          |    18 +
 _layouts/toc.html                                  |     7 +
 _posts/2011-04-30-introducing-druid.md             |   199 +
 _posts/2011-05-20-druid-part-deux.md               |   107 +
 _posts/2012-01-19-scaling-the-druid-data-store.md  |   215 +
 ...98-right-cardinality-estimation-for-big-data.md |   128 +
 _posts/2012-09-21-druid-bitmap-compression.md      |  1204 ++
 ...eyond-hadoop-fast-ad-hoc-queries-on-big-data.md |     9 +
 _posts/2012-10-24-introducing-druid.md             |   104 +
 ...2-28-interactive-queries-meet-real-time-data.md |     9 +
 _posts/2013-04-03-15-minutes-to-live-druid.md      |    93 +
 _posts/2013-04-03-druid-r-meetup.md                |    32 +
 _posts/2013-04-26-meet-the-druid.md                |   111 +
 _posts/2013-05-10-real-time-for-real.md            |   140 +
 _posts/2013-07-11-booting-ec2.md                   |    25 +
 _posts/2013-08-06-twitter-tutorial.md              |   333 +
 _posts/2013-08-30-loading-data.md                  |   184 +
 ...09-12-the-art-of-approximating-distributions.md |   303 +
 _posts/2013-09-16-upcoming-events.md               |    17 +
 ...2013-09-19-launching-druid-with-apache-whirr.md |    66 +
 _posts/2013-09-20-druid-at-xldb.md                 |    19 +
 _posts/2013-10-18-R-applications.md                |    44 +
 _posts/2013-10-18-python-applications.md           |    56 +
 _posts/2013-10-18-realtime-web-applications.md     |   157 +
 _posts/2013-10-18-ruby-applications.md             |    50 +
 _posts/2013-11-04-querying-your-data.md            |   336 +
 _posts/2014-02-03-rdruid-and-twitterstream.md      |   292 +
 ...rloglog-optimizations-for-real-world-systems.md |   189 +
 _posts/2014-03-12-batch-ingestion.md               |   202 +
 _posts/2014-03-17-benchmarking-druid.md            |   360 +
 _posts/2014-04-15-intro-to-pydruid.md              |   181 +
 ...-off-on-the-rise-of-the-real-time-data-stack.md |    45 +
 .../2014-07-23-five-tips-for-a-f-ing-great-logo.md |    87 +
 _posts/2015-02-20-towards-a-community-led-druid.md |    31 +
 _posts/2015-11-03-seeking-new-committers.md        |    39 +
 _posts/2016-01-06-announcing-new-committers.md     |    24 +
 _posts/2016-06-28-druid-0-9-1.md                   |    19 +
 _posts/2016-12-01-druid-0-9-2.md                   |    21 +
 _posts/2017-04-18-druid-0-10-0.md                  |    23 +
 _posts/2017-08-22-druid-0-10-1.md                  |    31 +
 _posts/2017-12-04-druid-0-11-0.md                  |    29 +
 _posts/2018-03-08-druid-0-12-0.md                  |    31 +
 _posts/2018-06-08-druid-0-12-1.md                  |    28 +
 _posts/r_druid_ggplot.png                          |   Bin 0 -> 72480 bytes
 api/0.6.146/index.md                               |     4 +
 api/0.6.151/index.md                               |     4 +
 api/0.6.152/index.md                               |     4 +
 api/0.6.154/index.md                               |     4 +
 api/0.6.156/index.md                               |     4 +
 api/0.6.157/index.md                               |     4 +
 api/0.6.158/index.md                               |     4 +
 api/0.6.159/index.md                               |     4 +
 api/0.6.160/index.md                               |     4 +
 api/0.6.162/index.md                               |     4 +
 api/0.6.163/index.md                               |     4 +
 api/0.6.164/index.md                               |     4 +
 api/0.6.165/index.md                               |     4 +
 api/0.6.166/index.md                               |     4 +
 api/0.6.169/index.md                               |     4 +
 api/0.6.170/index.md                               |     4 +
 api/0.6.171/index.md                               |     4 +
 api/0.6.172/index.md                               |     4 +
 api/0.6.173/index.md                               |     4 +
 api/0.6.174/allclasses-frame.html                  |  1096 ++
 api/0.6.174/allclasses-noframe.html                |  1096 ++
 .../introspect/GuiceAnnotationIntrospector.html    |   321 +
 .../databind/introspect/GuiceInjectableValues.html |   288 +
 .../class-use/GuiceAnnotationIntrospector.html     |   117 +
 .../class-use/GuiceInjectableValues.html           |   117 +
 .../jackson/databind/introspect/package-frame.html |    21 +
 .../databind/introspect/package-summary.html       |   139 +
 .../jackson/databind/introspect/package-tree.html  |   143 +
 .../jackson/databind/introspect/package-use.html   |   117 +
 api/0.6.174/constant-values.html                   |  1620 ++
 api/0.6.174/deprecated-list.html                   |   242 +
 api/0.6.174/help-doc.html                          |   222 +
 api/0.6.174/index-all.html                         | 15815 +++++++++++++++++++
 api/0.6.174/index.html                             |    68 +
 api/0.6.174/io/druid/cli/CliBridge.html            |   292 +
 api/0.6.174/io/druid/cli/CliBroker.html            |   292 +
 api/0.6.174/io/druid/cli/CliCoordinator.html       |   292 +
 api/0.6.174/io/druid/cli/CliHadoopIndexer.html     |   305 +
 api/0.6.174/io/druid/cli/CliHistorical.html        |   292 +
 .../io/druid/cli/CliInternalHadoopIndexer.html     |   282 +
 api/0.6.174/io/druid/cli/CliMiddleManager.html     |   292 +
 api/0.6.174/io/druid/cli/CliOverlord.html          |   292 +
 api/0.6.174/io/druid/cli/CliPeon.html              |   342 +
 api/0.6.174/io/druid/cli/CliRealtime.html          |   292 +
 api/0.6.174/io/druid/cli/CliRealtimeExample.html   |   292 +
 api/0.6.174/io/druid/cli/CliRouter.html            |   292 +
 api/0.6.174/io/druid/cli/GuiceRunnable.html        |   314 +
 api/0.6.174/io/druid/cli/Main.html                 |   258 +
 api/0.6.174/io/druid/cli/PullDependencies.html     |   332 +
 .../io/druid/cli/QueryJettyServerInitializer.html  |   276 +
 .../io/druid/cli/RouterJettyServerInitializer.html |   287 +
 api/0.6.174/io/druid/cli/ServerRunnable.html       |   278 +
 api/0.6.174/io/druid/cli/Version.html              |   269 +
 api/0.6.174/io/druid/cli/class-use/CliBridge.html  |   117 +
 api/0.6.174/io/druid/cli/class-use/CliBroker.html  |   117 +
 .../io/druid/cli/class-use/CliCoordinator.html     |   117 +
 .../io/druid/cli/class-use/CliHadoopIndexer.html   |   117 +
 .../io/druid/cli/class-use/CliHistorical.html      |   117 +
 .../cli/class-use/CliInternalHadoopIndexer.html    |   117 +
 .../io/druid/cli/class-use/CliMiddleManager.html   |   117 +
 .../io/druid/cli/class-use/CliOverlord.html        |   117 +
 api/0.6.174/io/druid/cli/class-use/CliPeon.html    |   117 +
 .../io/druid/cli/class-use/CliRealtime.html        |   117 +
 .../io/druid/cli/class-use/CliRealtimeExample.html |   117 +
 api/0.6.174/io/druid/cli/class-use/CliRouter.html  |   117 +
 .../io/druid/cli/class-use/GuiceRunnable.html      |   197 +
 api/0.6.174/io/druid/cli/class-use/Main.html       |   117 +
 .../io/druid/cli/class-use/PullDependencies.html   |   117 +
 .../cli/class-use/QueryJettyServerInitializer.html |   117 +
 .../class-use/RouterJettyServerInitializer.html    |   117 +
 .../io/druid/cli/class-use/ServerRunnable.html     |   189 +
 api/0.6.174/io/druid/cli/class-use/Version.html    |   117 +
 .../io/druid/cli/convert/ChatHandlerConverter.html |   284 +
 .../io/druid/cli/convert/ConvertIngestionSpec.html |   331 +
 .../io/druid/cli/convert/ConvertProperties.html    |   318 +
 .../convert/DataSegmentPusherDefaultConverter.html |   284 +
 .../cli/convert/DatabasePropertiesConverter.html   |   284 +
 .../io/druid/cli/convert/IndexCacheConverter.html  |   284 +
 api/0.6.174/io/druid/cli/convert/PrefixRename.html |   286 +
 .../io/druid/cli/convert/PropertyConverter.html    |   225 +
 api/0.6.174/io/druid/cli/convert/Rename.html       |   286 +
 .../convert/class-use/ChatHandlerConverter.html    |   117 +
 .../convert/class-use/ConvertIngestionSpec.html    |   117 +
 .../cli/convert/class-use/ConvertProperties.html   |   117 +
 .../DataSegmentPusherDefaultConverter.html         |   117 +
 .../class-use/DatabasePropertiesConverter.html     |   117 +
 .../cli/convert/class-use/IndexCacheConverter.html |   117 +
 .../druid/cli/convert/class-use/PrefixRename.html  |   117 +
 .../cli/convert/class-use/PropertyConverter.html   |   177 +
 .../io/druid/cli/convert/class-use/Rename.html     |   117 +
 .../io/druid/cli/convert/package-frame.html        |    31 +
 .../io/druid/cli/convert/package-summary.html      |   178 +
 api/0.6.174/io/druid/cli/convert/package-tree.html |   141 +
 api/0.6.174/io/druid/cli/convert/package-use.html  |   150 +
 api/0.6.174/io/druid/cli/package-frame.html        |    38 +
 api/0.6.174/io/druid/cli/package-summary.html      |   207 +
 api/0.6.174/io/druid/cli/package-tree.html         |   158 +
 api/0.6.174/io/druid/cli/package-use.html          |   153 +
 .../io/druid/cli/validate/DruidJsonValidator.html  |   318 +
 .../cli/validate/class-use/DruidJsonValidator.html |   117 +
 .../io/druid/cli/validate/package-frame.html       |    20 +
 .../io/druid/cli/validate/package-summary.html     |   135 +
 .../io/druid/cli/validate/package-tree.html        |   130 +
 api/0.6.174/io/druid/cli/validate/package-use.html |   117 +
 .../io/druid/client/BatchServerInventoryView.html  |   366 +
 .../client/BatchServerInventoryViewProvider.html   |   267 +
 api/0.6.174/io/druid/client/BrokerServerView.html  |   361 +
 api/0.6.174/io/druid/client/CacheUtil.html         |   281 +
 .../io/druid/client/CachingClusteredClient.html    |   291 +
 .../io/druid/client/CachingQueryRunner.html        |   279 +
 api/0.6.174/io/druid/client/DirectDruidClient.html |   288 +
 api/0.6.174/io/druid/client/DruidDataSource.html   |   370 +
 api/0.6.174/io/druid/client/DruidServer.html       |   632 +
 api/0.6.174/io/druid/client/DruidServerConfig.html |   284 +
 .../client/FilteredBatchServerViewProvider.html    |   267 +
 .../io/druid/client/FilteredServerView.html        |   216 +
 .../druid/client/FilteredServerViewProvider.html   |   190 +
 .../client/FilteredSingleServerViewProvider.html   |   267 +
 .../io/druid/client/ImmutableDruidDataSource.html  |   316 +
 .../io/druid/client/ImmutableDruidServer.html      |   394 +
 api/0.6.174/io/druid/client/InventoryView.html     |   225 +
 .../io/druid/client/ServerInventoryView.html       |   518 +
 .../druid/client/ServerInventoryViewProvider.html  |   190 +
 .../client/ServerView.BaseSegmentCallback.html     |   335 +
 .../io/druid/client/ServerView.CallbackAction.html |   327 +
 .../druid/client/ServerView.SegmentCallback.html   |   272 +
 .../io/druid/client/ServerView.ServerCallback.html |   225 +
 api/0.6.174/io/druid/client/ServerView.html        |   264 +
 .../client/SingleServerInventoryProvider.html      |   267 +
 .../io/druid/client/SingleServerInventoryView.html |   366 +
 .../io/druid/client/TimelineServerView.html        |   252 +
 .../client/cache/BytesBoundedLinkedQueue.html      |   576 +
 .../io/druid/client/cache/Cache.NamedKey.html      |   347 +
 api/0.6.174/io/druid/client/cache/Cache.html       |   298 +
 api/0.6.174/io/druid/client/cache/CacheConfig.html |   335 +
 .../io/druid/client/cache/CacheMonitor.html        |   279 +
 .../io/druid/client/cache/CacheProvider.html       |   190 +
 api/0.6.174/io/druid/client/cache/CacheSerde.html  |   223 +
 api/0.6.174/io/druid/client/cache/CacheStats.html  |   400 +
 .../io/druid/client/cache/LZ4Transcoder.html       |   353 +
 .../io/druid/client/cache/LocalCacheProvider.html  |   267 +
 api/0.6.174/io/druid/client/cache/MapCache.html    |   348 +
 .../io/druid/client/cache/MemcachedCache.html      |   385 +
 .../druid/client/cache/MemcachedCacheConfig.html   |   340 +
 .../druid/client/cache/MemcachedCacheProvider.html |   279 +
 .../cache/MemcachedOperationQueueFactory.html      |   303 +
 .../cache/class-use/BytesBoundedLinkedQueue.html   |   117 +
 .../client/cache/class-use/Cache.NamedKey.html     |   262 +
 .../io/druid/client/cache/class-use/Cache.html     |   268 +
 .../druid/client/cache/class-use/CacheConfig.html  |   194 +
 .../druid/client/cache/class-use/CacheMonitor.html |   117 +
 .../client/cache/class-use/CacheProvider.html      |   161 +
 .../druid/client/cache/class-use/CacheSerde.html   |   117 +
 .../druid/client/cache/class-use/CacheStats.html   |   182 +
 .../client/cache/class-use/LZ4Transcoder.html      |   117 +
 .../client/cache/class-use/LocalCacheProvider.html |   117 +
 .../io/druid/client/cache/class-use/MapCache.html  |   117 +
 .../client/cache/class-use/MemcachedCache.html     |   157 +
 .../cache/class-use/MemcachedCacheConfig.html      |   170 +
 .../cache/class-use/MemcachedCacheProvider.html    |   117 +
 .../class-use/MemcachedOperationQueueFactory.html  |   117 +
 .../io/druid/client/cache/package-frame.html       |    37 +
 .../io/druid/client/cache/package-summary.html     |   205 +
 .../io/druid/client/cache/package-tree.html        |   182 +
 api/0.6.174/io/druid/client/cache/package-use.html |   212 +
 .../client/class-use/BatchServerInventoryView.html |   161 +
 .../BatchServerInventoryViewProvider.html          |   117 +
 .../druid/client/class-use/BrokerServerView.html   |   117 +
 .../io/druid/client/class-use/CacheUtil.html       |   117 +
 .../client/class-use/CachingClusteredClient.html   |   158 +
 .../druid/client/class-use/CachingQueryRunner.html |   117 +
 .../druid/client/class-use/DirectDruidClient.html  |   190 +
 .../io/druid/client/class-use/DruidDataSource.html |   276 +
 .../io/druid/client/class-use/DruidServer.html     |   368 +
 .../druid/client/class-use/DruidServerConfig.html  |   202 +
 .../class-use/FilteredBatchServerViewProvider.html |   117 +
 .../druid/client/class-use/FilteredServerView.html |   270 +
 .../class-use/FilteredServerViewProvider.html      |   161 +
 .../FilteredSingleServerViewProvider.html          |   117 +
 .../client/class-use/ImmutableDruidDataSource.html |   184 +
 .../client/class-use/ImmutableDruidServer.html     |   239 +
 .../io/druid/client/class-use/InventoryView.html   |   233 +
 .../client/class-use/ServerInventoryView.html      |   273 +
 .../class-use/ServerInventoryViewProvider.html     |   161 +
 .../class-use/ServerView.BaseSegmentCallback.html  |   117 +
 .../class-use/ServerView.CallbackAction.html       |   217 +
 .../class-use/ServerView.SegmentCallback.html      |   212 +
 .../class-use/ServerView.ServerCallback.html       |   168 +
 .../io/druid/client/class-use/ServerView.html      |   209 +
 .../class-use/SingleServerInventoryProvider.html   |   117 +
 .../class-use/SingleServerInventoryView.html       |   157 +
 .../druid/client/class-use/TimelineServerView.html |   172 +
 .../druid/client/indexing/ClientAppendQuery.html   |   303 +
 .../client/indexing/ClientConversionQuery.html     |   311 +
 .../io/druid/client/indexing/ClientKillQuery.html  |   286 +
 .../io/druid/client/indexing/ClientMergeQuery.html |   318 +
 .../io/druid/client/indexing/IndexingService.html  |   154 +
 .../client/indexing/IndexingServiceClient.html     |   306 +
 .../indexing/IndexingServiceSelectorConfig.html    |   258 +
 .../indexing/class-use/ClientAppendQuery.html      |   117 +
 .../indexing/class-use/ClientConversionQuery.html  |   117 +
 .../client/indexing/class-use/ClientKillQuery.html |   117 +
 .../indexing/class-use/ClientMergeQuery.html       |   117 +
 .../client/indexing/class-use/IndexingService.html |   203 +
 .../indexing/class-use/IndexingServiceClient.html  |   238 +
 .../class-use/IndexingServiceSelectorConfig.html   |   158 +
 .../io/druid/client/indexing/package-frame.html    |    29 +
 .../io/druid/client/indexing/package-summary.html  |   170 +
 .../io/druid/client/indexing/package-tree.html     |   139 +
 .../io/druid/client/indexing/package-use.html      |   248 +
 api/0.6.174/io/druid/client/package-frame.html     |    52 +
 api/0.6.174/io/druid/client/package-summary.html   |   261 +
 api/0.6.174/io/druid/client/package-tree.html      |   184 +
 api/0.6.174/io/druid/client/package-use.html       |   444 +
 .../selector/AbstractTierSelectorStrategy.html     |   280 +
 .../ConnectionCountServerSelectorStrategy.html     |   269 +
 .../selector/CustomTierSelectorStrategy.html       |   276 +
 .../selector/CustomTierSelectorStrategyConfig.html |   258 +
 .../druid/client/selector/DiscoverySelector.html   |   212 +
 .../HighestPriorityTierSelectorStrategy.html       |   274 +
 .../io/druid/client/selector/HostSelector.html     |   225 +
 .../LowestPriorityTierSelectorStrategy.html        |   274 +
 .../client/selector/QueryableDruidServer.html      |   273 +
 .../selector/RandomServerSelectorStrategy.html     |   269 +
 api/0.6.174/io/druid/client/selector/Server.html   |   247 +
 .../io/druid/client/selector/ServerSelector.html   |   321 +
 .../client/selector/ServerSelectorStrategy.html    |   214 +
 .../client/selector/TierSelectorStrategy.html      |   227 +
 .../class-use/AbstractTierSelectorStrategy.html    |   165 +
 .../ConnectionCountServerSelectorStrategy.html     |   117 +
 .../class-use/CustomTierSelectorStrategy.html      |   117 +
 .../CustomTierSelectorStrategyConfig.html          |   156 +
 .../selector/class-use/DiscoverySelector.html      |   179 +
 .../HighestPriorityTierSelectorStrategy.html       |   117 +
 .../client/selector/class-use/HostSelector.html    |   157 +
 .../LowestPriorityTierSelectorStrategy.html        |   117 +
 .../selector/class-use/QueryableDruidServer.html   |   233 +
 .../class-use/RandomServerSelectorStrategy.html    |   117 +
 .../io/druid/client/selector/class-use/Server.html |   183 +
 .../client/selector/class-use/ServerSelector.html  |   161 +
 .../selector/class-use/ServerSelectorStrategy.html |   182 +
 .../selector/class-use/TierSelectorStrategy.html   |   206 +
 .../io/druid/client/selector/package-frame.html    |    36 +
 .../io/druid/client/selector/package-summary.html  |   198 +
 .../io/druid/client/selector/package-tree.html     |   149 +
 .../io/druid/client/selector/package-use.html      |   231 +
 .../io/druid/collections/CombiningIterable.html    |   316 +
 .../io/druid/collections/CombiningIterator.html    |   322 +
 api/0.6.174/io/druid/collections/CountingMap.html  |   338 +
 api/0.6.174/io/druid/collections/IntList.html      |   363 +
 .../io/druid/collections/OrderedMergeIterator.html |   314 +
 .../io/druid/collections/OrderedMergeSequence.html |   305 +
 .../io/druid/collections/ResourceHolder.html       |   224 +
 api/0.6.174/io/druid/collections/StupidPool.html   |   258 +
 .../io/druid/collections/StupidResourceHolder.html |   305 +
 .../collections/class-use/CombiningIterable.html   |   169 +
 .../collections/class-use/CombiningIterator.html   |   159 +
 .../druid/collections/class-use/CountingMap.html   |   178 +
 .../io/druid/collections/class-use/IntList.html    |   180 +
 .../class-use/OrderedMergeIterator.html            |   117 +
 .../class-use/OrderedMergeSequence.html            |   117 +
 .../collections/class-use/ResourceHolder.html      |   329 +
 .../io/druid/collections/class-use/StupidPool.html |   213 +
 .../class-use/StupidResourceHolder.html            |   157 +
 .../io/druid/collections/package-frame.html        |    31 +
 .../io/druid/collections/package-summary.html      |   182 +
 api/0.6.174/io/druid/collections/package-tree.html |   161 +
 api/0.6.174/io/druid/collections/package-use.html  |   298 +
 .../io/druid/common/config/ConfigManager.html      |   310 +
 .../druid/common/config/ConfigManagerConfig.html   |   258 +
 .../io/druid/common/config/ConfigSerde.html        |   223 +
 .../druid/common/config/JacksonConfigManager.html  |   333 +
 .../common/config/class-use/ConfigManager.html     |   181 +
 .../config/class-use/ConfigManagerConfig.html      |   182 +
 .../druid/common/config/class-use/ConfigSerde.html |   164 +
 .../config/class-use/JacksonConfigManager.html     |   232 +
 .../io/druid/common/config/package-frame.html      |    26 +
 .../io/druid/common/config/package-summary.html    |   158 +
 .../io/druid/common/config/package-tree.html       |   136 +
 .../io/druid/common/config/package-use.html        |   238 +
 .../io/druid/common/guava/CombiningSequence.html   |   330 +
 api/0.6.174/io/druid/common/guava/DSuppliers.html  |   258 +
 .../io/druid/common/guava/FileOutputSupplier.html  |   284 +
 api/0.6.174/io/druid/common/guava/GuavaUtils.html  |   284 +
 .../druid/common/guava/ThreadRenamingCallable.html |   280 +
 .../druid/common/guava/ThreadRenamingRunnable.html |   280 +
 .../common/guava/class-use/CombiningSequence.html  |   159 +
 .../druid/common/guava/class-use/DSuppliers.html   |   117 +
 .../common/guava/class-use/FileOutputSupplier.html |   117 +
 .../druid/common/guava/class-use/GuavaUtils.html   |   117 +
 .../guava/class-use/ThreadRenamingCallable.html    |   117 +
 .../guava/class-use/ThreadRenamingRunnable.html    |   117 +
 .../io/druid/common/guava/package-frame.html       |    25 +
 .../io/druid/common/guava/package-summary.html     |   155 +
 .../io/druid/common/guava/package-tree.html        |   135 +
 api/0.6.174/io/druid/common/guava/package-use.html |   150 +
 api/0.6.174/io/druid/common/utils/JodaUtils.html   |   312 +
 api/0.6.174/io/druid/common/utils/PropUtils.html   |   292 +
 .../io/druid/common/utils/SerializerUtils.html     |   677 +
 api/0.6.174/io/druid/common/utils/SocketUtil.html  |   258 +
 api/0.6.174/io/druid/common/utils/UUIDUtils.html   |   300 +
 api/0.6.174/io/druid/common/utils/VMUtils.html     |   261 +
 .../io/druid/common/utils/class-use/JodaUtils.html |   117 +
 .../io/druid/common/utils/class-use/PropUtils.html |   117 +
 .../common/utils/class-use/SerializerUtils.html    |   117 +
 .../druid/common/utils/class-use/SocketUtil.html   |   117 +
 .../io/druid/common/utils/class-use/UUIDUtils.html |   117 +
 .../io/druid/common/utils/class-use/VMUtils.html   |   117 +
 .../io/druid/common/utils/package-frame.html       |    25 +
 .../io/druid/common/utils/package-summary.html     |   155 +
 .../io/druid/common/utils/package-tree.html        |   135 +
 api/0.6.174/io/druid/common/utils/package-use.html |   117 +
 api/0.6.174/io/druid/concurrent/Execs.html         |   316 +
 .../io/druid/concurrent/class-use/Execs.html       |   117 +
 api/0.6.174/io/druid/concurrent/package-frame.html |    20 +
 .../io/druid/concurrent/package-summary.html       |   135 +
 api/0.6.174/io/druid/concurrent/package-tree.html  |   130 +
 api/0.6.174/io/druid/concurrent/package-use.html   |   117 +
 api/0.6.174/io/druid/curator/CuratorConfig.html    |   288 +
 api/0.6.174/io/druid/curator/CuratorModule.html    |   286 +
 .../PotentiallyGzippedCompressionProvider.html     |   292 +
 .../ShutdownNowIgnoringExecutorService.html        |   503 +
 .../io/druid/curator/announcement/Announcer.html   |   329 +
 .../curator/announcement/class-use/Announcer.html  |   222 +
 .../druid/curator/announcement/package-frame.html  |    20 +
 .../curator/announcement/package-summary.html      |   137 +
 .../druid/curator/announcement/package-tree.html   |   130 +
 .../io/druid/curator/announcement/package-use.html |   194 +
 .../curator/cache/PathChildrenCacheFactory.html    |   214 +
 .../SimplePathChildrenCacheFactory.Builder.html    |   301 +
 .../cache/SimplePathChildrenCacheFactory.html      |   292 +
 .../cache/class-use/PathChildrenCacheFactory.html  |   183 +
 .../SimplePathChildrenCacheFactory.Builder.html    |   165 +
 .../class-use/SimplePathChildrenCacheFactory.html  |   157 +
 .../io/druid/curator/cache/package-frame.html      |    25 +
 .../io/druid/curator/cache/package-summary.html    |   154 +
 .../io/druid/curator/cache/package-tree.html       |   135 +
 .../io/druid/curator/cache/package-use.html        |   175 +
 .../io/druid/curator/class-use/CuratorConfig.html  |   180 +
 .../io/druid/curator/class-use/CuratorModule.html  |   117 +
 .../PotentiallyGzippedCompressionProvider.html     |   117 +
 .../ShutdownNowIgnoringExecutorService.html        |   117 +
 .../curator/discovery/CuratorServiceAnnouncer.html |   286 +
 .../druid/curator/discovery/DiscoveryModule.html   |   420 +
 .../curator/discovery/NoopServiceAnnouncer.html    |   285 +
 .../curator/discovery/ServerDiscoveryFactory.html  |   259 +
 .../curator/discovery/ServerDiscoverySelector.html |   299 +
 .../druid/curator/discovery/ServiceAnnouncer.html  |   227 +
 .../class-use/CuratorServiceAnnouncer.html         |   176 +
 .../discovery/class-use/DiscoveryModule.html       |   117 +
 .../discovery/class-use/NoopServiceAnnouncer.html  |   117 +
 .../class-use/ServerDiscoveryFactory.html          |   203 +
 .../class-use/ServerDiscoverySelector.html         |   316 +
 .../discovery/class-use/ServiceAnnouncer.html      |   270 +
 .../io/druid/curator/discovery/package-frame.html  |    28 +
 .../druid/curator/discovery/package-summary.html   |   175 +
 .../io/druid/curator/discovery/package-tree.html   |   138 +
 .../io/druid/curator/discovery/package-use.html    |   367 +
 .../curator/inventory/CuratorInventoryManager.html |   330 +
 .../inventory/CuratorInventoryManagerStrategy.html |   366 +
 .../curator/inventory/InventoryManagerConfig.html  |   243 +
 .../class-use/CuratorInventoryManager.html         |   117 +
 .../class-use/CuratorInventoryManagerStrategy.html |   158 +
 .../class-use/InventoryManagerConfig.html          |   193 +
 .../io/druid/curator/inventory/package-frame.html  |    25 +
 .../druid/curator/inventory/package-summary.html   |   157 +
 .../io/druid/curator/inventory/package-tree.html   |   135 +
 .../io/druid/curator/inventory/package-use.html    |   172 +
 api/0.6.174/io/druid/curator/package-frame.html    |    23 +
 api/0.6.174/io/druid/curator/package-summary.html  |   150 +
 api/0.6.174/io/druid/curator/package-tree.html     |   133 +
 api/0.6.174/io/druid/curator/package-use.html      |   169 +
 .../druid/data/input/ProtoBufInputRowParser.html   |   313 +
 .../input/class-use/ProtoBufInputRowParser.html    |   157 +
 api/0.6.174/io/druid/data/input/package-frame.html |    20 +
 .../io/druid/data/input/package-summary.html       |   135 +
 api/0.6.174/io/druid/data/input/package-tree.html  |   130 +
 api/0.6.174/io/druid/data/input/package-use.html   |   150 +
 api/0.6.174/io/druid/db/DatabaseRuleManager.html   |   364 +
 .../io/druid/db/DatabaseRuleManagerConfig.html     |   271 +
 .../io/druid/db/DatabaseRuleManagerProvider.html   |   276 +
 .../io/druid/db/DatabaseSegmentManager.html        |   397 +
 .../io/druid/db/DatabaseSegmentManagerConfig.html  |   258 +
 .../druid/db/DatabaseSegmentManagerProvider.html   |   276 +
 api/0.6.174/io/druid/db/DbConnector.html           |   519 +
 api/0.6.174/io/druid/db/DbConnectorConfig.html     |   340 +
 api/0.6.174/io/druid/db/DbTablesConfig.html        |   361 +
 .../io/druid/db/class-use/DatabaseRuleManager.html |   270 +
 .../db/class-use/DatabaseRuleManagerConfig.html    |   165 +
 .../db/class-use/DatabaseRuleManagerProvider.html  |   117 +
 .../druid/db/class-use/DatabaseSegmentManager.html |   261 +
 .../db/class-use/DatabaseSegmentManagerConfig.html |   165 +
 .../class-use/DatabaseSegmentManagerProvider.html  |   117 +
 api/0.6.174/io/druid/db/class-use/DbConnector.html |   246 +
 .../io/druid/db/class-use/DbConnectorConfig.html   |   178 +
 .../io/druid/db/class-use/DbTablesConfig.html      |   293 +
 api/0.6.174/io/druid/db/package-frame.html         |    28 +
 api/0.6.174/io/druid/db/package-summary.html       |   167 +
 api/0.6.174/io/druid/db/package-tree.html          |   138 +
 api/0.6.174/io/druid/db/package-use.html           |   335 +
 .../io/druid/examples/ExamplesDruidModule.html     |   284 +
 .../examples/class-use/ExamplesDruidModule.html    |   117 +
 api/0.6.174/io/druid/examples/package-frame.html   |    20 +
 api/0.6.174/io/druid/examples/package-summary.html |   135 +
 api/0.6.174/io/druid/examples/package-tree.html    |   130 +
 api/0.6.174/io/druid/examples/package-use.html     |   117 +
 .../druid/examples/rand/RandomFirehoseFactory.html |   362 +
 .../rand/class-use/RandomFirehoseFactory.html      |   117 +
 .../io/druid/examples/rand/package-frame.html      |    20 +
 .../io/druid/examples/rand/package-summary.html    |   137 +
 .../io/druid/examples/rand/package-tree.html       |   130 +
 .../io/druid/examples/rand/package-use.html        |   117 +
 .../twitter/TwitterSpritzerFirehoseFactory.html    |   311 +
 .../class-use/TwitterSpritzerFirehoseFactory.html  |   117 +
 .../io/druid/examples/twitter/package-frame.html   |    20 +
 .../io/druid/examples/twitter/package-summary.html |   137 +
 .../io/druid/examples/twitter/package-tree.html    |   130 +
 .../io/druid/examples/twitter/package-use.html     |   117 +
 .../examples/web/InputSupplierUpdateStream.html    |   337 +
 .../web/InputSupplierUpdateStreamFactory.html      |   269 +
 .../examples/web/RenamingKeysUpdateStream.html     |   324 +
 .../web/RenamingKeysUpdateStreamFactory.html       |   269 +
 .../io/druid/examples/web/UpdateStream.html        |   256 +
 .../io/druid/examples/web/UpdateStreamFactory.html |   212 +
 .../io/druid/examples/web/WebFirehoseFactory.html  |   306 +
 .../io/druid/examples/web/WebJsonSupplier.html     |   269 +
 .../web/class-use/InputSupplierUpdateStream.html   |   169 +
 .../InputSupplierUpdateStreamFactory.html          |   156 +
 .../web/class-use/RenamingKeysUpdateStream.html    |   157 +
 .../class-use/RenamingKeysUpdateStreamFactory.html |   117 +
 .../druid/examples/web/class-use/UpdateStream.html |   174 +
 .../web/class-use/UpdateStreamFactory.html         |   173 +
 .../examples/web/class-use/WebFirehoseFactory.html |   117 +
 .../examples/web/class-use/WebJsonSupplier.html    |   117 +
 .../io/druid/examples/web/package-frame.html       |    30 +
 .../io/druid/examples/web/package-summary.html     |   174 +
 .../io/druid/examples/web/package-tree.html        |   140 +
 api/0.6.174/io/druid/examples/web/package-use.html |   162 +
 .../firehose/kafka/KafkaEightDruidModule.html      |   284 +
 .../firehose/kafka/KafkaEightFirehoseFactory.html  |   290 +
 .../firehose/kafka/KafkaSevenDruidModule.html      |   284 +
 .../firehose/kafka/KafkaSevenFirehoseFactory.html  |   316 +
 .../kafka/class-use/KafkaEightDruidModule.html     |   117 +
 .../kafka/class-use/KafkaEightFirehoseFactory.html |   117 +
 .../kafka/class-use/KafkaSevenDruidModule.html     |   117 +
 .../kafka/class-use/KafkaSevenFirehoseFactory.html |   117 +
 .../io/druid/firehose/kafka/package-frame.html     |    23 +
 .../io/druid/firehose/kafka/package-summary.html   |   147 +
 .../io/druid/firehose/kafka/package-tree.html      |   133 +
 .../io/druid/firehose/kafka/package-use.html       |   117 +
 .../rabbitmq/JacksonifiedConnectionFactory.html    |   530 +
 .../firehose/rabbitmq/RabbitMQDruidModule.html     |   284 +
 .../firehose/rabbitmq/RabbitMQFirehoseConfig.html  |   426 +
 .../firehose/rabbitmq/RabbitMQFirehoseFactory.html |   370 +
 .../class-use/JacksonifiedConnectionFactory.html   |   174 +
 .../rabbitmq/class-use/RabbitMQDruidModule.html    |   117 +
 .../rabbitmq/class-use/RabbitMQFirehoseConfig.html |   174 +
 .../class-use/RabbitMQFirehoseFactory.html         |   117 +
 .../io/druid/firehose/rabbitmq/package-frame.html  |    23 +
 .../druid/firehose/rabbitmq/package-summary.html   |   154 +
 .../io/druid/firehose/rabbitmq/package-tree.html   |   137 +
 .../io/druid/firehose/rabbitmq/package-use.html    |   158 +
 .../druid/firehose/s3/S3FirehoseDruidModule.html   |   284 +
 .../druid/firehose/s3/StaticS3FirehoseFactory.html |   304 +
 .../s3/class-use/S3FirehoseDruidModule.html        |   117 +
 .../s3/class-use/StaticS3FirehoseFactory.html      |   117 +
 .../io/druid/firehose/s3/package-frame.html        |    21 +
 .../io/druid/firehose/s3/package-summary.html      |   141 +
 api/0.6.174/io/druid/firehose/s3/package-tree.html |   131 +
 api/0.6.174/io/druid/firehose/s3/package-use.html  |   117 +
 .../io/druid/granularity/AllGranularity.html       |   405 +
 .../io/druid/granularity/BaseQueryGranularity.html |   363 +
 .../io/druid/granularity/DurationGranularity.html  |   454 +
 .../io/druid/granularity/NoneGranularity.html      |   386 +
 .../io/druid/granularity/PeriodGranularity.html    |   446 +
 .../io/druid/granularity/QueryGranularity.html     |   430 +
 .../granularity/class-use/AllGranularity.html      |   117 +
 .../class-use/BaseQueryGranularity.html            |   169 +
 .../granularity/class-use/DurationGranularity.html |   117 +
 .../granularity/class-use/NoneGranularity.html     |   117 +
 .../granularity/class-use/PeriodGranularity.html   |   117 +
 .../granularity/class-use/QueryGranularity.html    |   869 +
 .../io/druid/granularity/package-frame.html        |    25 +
 .../io/druid/granularity/package-summary.html      |   155 +
 api/0.6.174/io/druid/granularity/package-tree.html |   141 +
 api/0.6.174/io/druid/granularity/package-use.html  |   419 +
 .../guice/AWSModule.AWSCredentialsConfig.html      |   275 +
 api/0.6.174/io/druid/guice/AWSModule.html          |   314 +
 api/0.6.174/io/druid/guice/AnnouncerModule.html    |   281 +
 api/0.6.174/io/druid/guice/ConfigModule.html       |   281 +
 api/0.6.174/io/druid/guice/ConfigProvider.html     |   343 +
 api/0.6.174/io/druid/guice/DbConnectorModule.html  |   281 +
 .../io/druid/guice/DbSegmentPublisherProvider.html |   267 +
 api/0.6.174/io/druid/guice/DruidBinders.html       |   297 +
 .../io/druid/guice/DruidProcessingModule.html      |   299 +
 .../io/druid/guice/DruidSecondaryModule.html       |   306 +
 api/0.6.174/io/druid/guice/ExtensionsConfig.html   |   314 +
 .../io/druid/guice/FireDepartmentsProvider.html    |   270 +
 api/0.6.174/io/druid/guice/FirehoseModule.html     |   284 +
 api/0.6.174/io/druid/guice/GuiceInjectors.html     |   271 +
 .../guice/IndexingServiceDiscoveryModule.html      |   283 +
 .../druid/guice/IndexingServiceFirehoseModule.html |   284 +
 .../druid/guice/IndexingServiceModuleHelper.html   |   258 +
 .../druid/guice/IndexingServiceTaskLogsModule.html |   267 +
 .../io/druid/guice/JacksonConfigManagerModule.html |   287 +
 .../io/druid/guice/JacksonConfigProvider.html      |   336 +
 api/0.6.174/io/druid/guice/ListProvider.html       |   323 +
 .../druid/guice/LocalDataStorageDruidModule.html   |   267 +
 api/0.6.174/io/druid/guice/ModuleList.html         |   284 +
 api/0.6.174/io/druid/guice/NodeTypeConfig.html     |   258 +
 .../druid/guice/NoopSegmentPublisherProvider.html  |   267 +
 api/0.6.174/io/druid/guice/ParsersModule.html      |   284 +
 api/0.6.174/io/druid/guice/PropertiesModule.html   |   267 +
 .../io/druid/guice/QueryRunnerFactoryModule.html   |   273 +
 .../io/druid/guice/QueryToolChestModule.html       |   307 +
 api/0.6.174/io/druid/guice/QueryableModule.html    |   284 +
 .../io/druid/guice/RealtimeManagerConfig.html      |   258 +
 api/0.6.174/io/druid/guice/RealtimeModule.html     |   267 +
 .../io/druid/guice/SegmentPublisherProvider.html   |   190 +
 api/0.6.174/io/druid/guice/ServerModule.html       |   298 +
 api/0.6.174/io/druid/guice/ServerViewModule.html   |   267 +
 api/0.6.174/io/druid/guice/StorageNodeModule.html  |   286 +
 api/0.6.174/io/druid/guice/annotations/Client.html |   154 +
 .../io/druid/guice/annotations/Processing.html     |   154 +
 .../druid/guice/annotations/class-use/Client.html  |   160 +
 .../guice/annotations/class-use/Processing.html    |   265 +
 .../io/druid/guice/annotations/package-frame.html  |    21 +
 .../druid/guice/annotations/package-summary.html   |   139 +
 .../io/druid/guice/annotations/package-tree.html   |   127 +
 .../io/druid/guice/annotations/package-use.html    |   226 +
 .../class-use/AWSModule.AWSCredentialsConfig.html  |   157 +
 .../io/druid/guice/class-use/AWSModule.html        |   117 +
 .../io/druid/guice/class-use/AnnouncerModule.html  |   117 +
 .../io/druid/guice/class-use/ConfigModule.html     |   117 +
 .../io/druid/guice/class-use/ConfigProvider.html   |   117 +
 .../druid/guice/class-use/DbConnectorModule.html   |   117 +
 .../class-use/DbSegmentPublisherProvider.html      |   117 +
 .../io/druid/guice/class-use/DruidBinders.html     |   117 +
 .../guice/class-use/DruidProcessingModule.html     |   117 +
 .../guice/class-use/DruidSecondaryModule.html      |   117 +
 .../io/druid/guice/class-use/ExtensionsConfig.html |   184 +
 .../guice/class-use/FireDepartmentsProvider.html   |   117 +
 .../io/druid/guice/class-use/FirehoseModule.html   |   117 +
 .../io/druid/guice/class-use/GuiceInjectors.html   |   117 +
 .../class-use/IndexingServiceDiscoveryModule.html  |   117 +
 .../class-use/IndexingServiceFirehoseModule.html   |   117 +
 .../class-use/IndexingServiceModuleHelper.html     |   117 +
 .../class-use/IndexingServiceTaskLogsModule.html   |   117 +
 .../class-use/JacksonConfigManagerModule.html      |   117 +
 .../guice/class-use/JacksonConfigProvider.html     |   175 +
 .../io/druid/guice/class-use/ListProvider.html     |   171 +
 .../class-use/LocalDataStorageDruidModule.html     |   117 +
 .../io/druid/guice/class-use/ModuleList.html       |   117 +
 .../io/druid/guice/class-use/NodeTypeConfig.html   |   159 +
 .../class-use/NoopSegmentPublisherProvider.html    |   117 +
 .../io/druid/guice/class-use/ParsersModule.html    |   117 +
 .../io/druid/guice/class-use/PropertiesModule.html |   117 +
 .../guice/class-use/QueryRunnerFactoryModule.html  |   117 +
 .../guice/class-use/QueryToolChestModule.html      |   157 +
 .../io/druid/guice/class-use/QueryableModule.html  |   117 +
 .../guice/class-use/RealtimeManagerConfig.html     |   156 +
 .../io/druid/guice/class-use/RealtimeModule.html   |   117 +
 .../guice/class-use/SegmentPublisherProvider.html  |   161 +
 .../io/druid/guice/class-use/ServerModule.html     |   117 +
 .../io/druid/guice/class-use/ServerViewModule.html |   117 +
 .../druid/guice/class-use/StorageNodeModule.html   |   117 +
 .../guice/http/AbstractHttpClientProvider.html     |   364 +
 .../io/druid/guice/http/DruidHttpClientConfig.html |   271 +
 .../http/HttpClientModule.HttpClientProvider.html  |   302 +
 .../io/druid/guice/http/HttpClientModule.html      |   327 +
 .../JettyHttpClientModule.HttpClientProvider.html  |   302 +
 .../io/druid/guice/http/JettyHttpClientModule.html |   327 +
 .../http/class-use/AbstractHttpClientProvider.html |   161 +
 .../http/class-use/DruidHttpClientConfig.html      |   161 +
 .../HttpClientModule.HttpClientProvider.html       |   117 +
 .../guice/http/class-use/HttpClientModule.html     |   157 +
 .../JettyHttpClientModule.HttpClientProvider.html  |   117 +
 .../http/class-use/JettyHttpClientModule.html      |   157 +
 api/0.6.174/io/druid/guice/http/package-frame.html |    25 +
 .../io/druid/guice/http/package-summary.html       |   155 +
 api/0.6.174/io/druid/guice/http/package-tree.html  |   138 +
 api/0.6.174/io/druid/guice/http/package-use.html   |   159 +
 api/0.6.174/io/druid/guice/package-frame.html      |    58 +
 api/0.6.174/io/druid/guice/package-summary.html    |   286 +
 api/0.6.174/io/druid/guice/package-tree.html       |   179 +
 api/0.6.174/io/druid/guice/package-use.html        |   206 +
 api/0.6.174/io/druid/indexer/Bucket.html           |   395 +
 api/0.6.174/io/druid/indexer/DbUpdaterJob.html     |   267 +
 ...edPartitionsJob.DetermineCardinalityMapper.html |   355 +
 ...dPartitionsJob.DetermineCardinalityReducer.html |   379 +
 ...nsJob.DetermineHashedPartitionsPartitioner.html |   314 +
 .../indexer/DetermineHashedPartitionsJob.html      |   295 +
 ...ePartitionsDimSelectionAssumeGroupedMapper.html |   335 +
 ...ob.DeterminePartitionsDimSelectionCombiner.html |   298 +
 ...eterminePartitionsDimSelectionMapperHelper.html |   275 +
 ...eterminePartitionsDimSelectionOutputFormat.html |   336 +
 ...DeterminePartitionsDimSelectionPartitioner.html |   314 +
 ...inePartitionsDimSelectionPostGroupByMapper.html |   323 +
 ...Job.DeterminePartitionsDimSelectionReducer.html |   298 +
 ...itionsJob.DeterminePartitionsGroupByMapper.html |   334 +
 ...tionsJob.DeterminePartitionsGroupByReducer.html |   301 +
 .../io/druid/indexer/DeterminePartitionsJob.html   |   336 +
 .../HadoopDruidDetermineConfigurationJob.html      |   268 +
 .../HadoopDruidIndexerConfig.IndexJobCounters.html |   315 +
 .../io/druid/indexer/HadoopDruidIndexerConfig.html |   837 +
 .../io/druid/indexer/HadoopDruidIndexerJob.html    |   294 +
 .../io/druid/indexer/HadoopDruidIndexerMapper.html |   370 +
 api/0.6.174/io/druid/indexer/HadoopIOConfig.html   |   306 +
 .../io/druid/indexer/HadoopIngestionSpec.html      |   392 +
 .../io/druid/indexer/HadoopTuningConfig.html       |   465 +
 api/0.6.174/io/druid/indexer/HadoopyShardSpec.html |   292 +
 .../IndexGeneratorJob.IndexGeneratorMapper.html    |   313 +
 ...dexGeneratorJob.IndexGeneratorOutputFormat.html |   341 +
 ...ndexGeneratorJob.IndexGeneratorPartitioner.html |   314 +
 .../IndexGeneratorJob.IndexGeneratorReducer.html   |   322 +
 .../IndexGeneratorJob.IndexGeneratorStats.html     |   275 +
 .../io/druid/indexer/IndexGeneratorJob.html        |   328 +
 api/0.6.174/io/druid/indexer/JobHelper.html        |   304 +
 api/0.6.174/io/druid/indexer/Jobby.html            |   212 +
 ...tableBytes.SortableBytesGroupingComparator.html |   303 +
 .../SortableBytes.SortableBytesPartitioner.html    |   275 +
 ...rtableBytes.SortableBytesSortingComparator.html |   303 +
 api/0.6.174/io/druid/indexer/SortableBytes.html    |   399 +
 api/0.6.174/io/druid/indexer/Utils.html            |   376 +
 api/0.6.174/io/druid/indexer/class-use/Bucket.html |   185 +
 .../io/druid/indexer/class-use/DbUpdaterJob.html   |   117 +
 ...edPartitionsJob.DetermineCardinalityMapper.html |   117 +
 ...dPartitionsJob.DetermineCardinalityReducer.html |   117 +
 ...nsJob.DetermineHashedPartitionsPartitioner.html |   117 +
 .../class-use/DetermineHashedPartitionsJob.html    |   117 +
 ...ePartitionsDimSelectionAssumeGroupedMapper.html |   117 +
 ...ob.DeterminePartitionsDimSelectionCombiner.html |   117 +
 ...eterminePartitionsDimSelectionMapperHelper.html |   117 +
 ...eterminePartitionsDimSelectionOutputFormat.html |   117 +
 ...DeterminePartitionsDimSelectionPartitioner.html |   117 +
 ...inePartitionsDimSelectionPostGroupByMapper.html |   117 +
 ...Job.DeterminePartitionsDimSelectionReducer.html |   117 +
 ...itionsJob.DeterminePartitionsGroupByMapper.html |   117 +
 ...tionsJob.DeterminePartitionsGroupByReducer.html |   117 +
 .../indexer/class-use/DeterminePartitionsJob.html  |   117 +
 .../HadoopDruidDetermineConfigurationJob.html      |   117 +
 .../HadoopDruidIndexerConfig.IndexJobCounters.html |   166 +
 .../class-use/HadoopDruidIndexerConfig.html        |   337 +
 .../indexer/class-use/HadoopDruidIndexerJob.html   |   117 +
 .../class-use/HadoopDruidIndexerMapper.html        |   171 +
 .../io/druid/indexer/class-use/HadoopIOConfig.html |   211 +
 .../indexer/class-use/HadoopIngestionSpec.html     |   232 +
 .../indexer/class-use/HadoopTuningConfig.html      |   223 +
 .../druid/indexer/class-use/HadoopyShardSpec.html  |   237 +
 .../IndexGeneratorJob.IndexGeneratorMapper.html    |   117 +
 ...dexGeneratorJob.IndexGeneratorOutputFormat.html |   117 +
 ...ndexGeneratorJob.IndexGeneratorPartitioner.html |   117 +
 .../IndexGeneratorJob.IndexGeneratorReducer.html   |   117 +
 .../IndexGeneratorJob.IndexGeneratorStats.html     |   161 +
 .../druid/indexer/class-use/IndexGeneratorJob.html |   117 +
 .../io/druid/indexer/class-use/JobHelper.html      |   117 +
 api/0.6.174/io/druid/indexer/class-use/Jobby.html  |   230 +
 ...tableBytes.SortableBytesGroupingComparator.html |   117 +
 .../SortableBytes.SortableBytesPartitioner.html    |   117 +
 ...rtableBytes.SortableBytesSortingComparator.html |   117 +
 .../io/druid/indexer/class-use/SortableBytes.html  |   188 +
 api/0.6.174/io/druid/indexer/class-use/Utils.html  |   117 +
 .../druid/indexer/hadoop/FSSpideringIterator.html  |   333 +
 .../hadoop/class-use/FSSpideringIterator.html      |   158 +
 .../io/druid/indexer/hadoop/package-frame.html     |    20 +
 .../io/druid/indexer/hadoop/package-summary.html   |   135 +
 .../io/druid/indexer/hadoop/package-tree.html      |   130 +
 .../io/druid/indexer/hadoop/package-use.html       |   150 +
 api/0.6.174/io/druid/indexer/package-frame.html    |    63 +
 api/0.6.174/io/druid/indexer/package-summary.html  |   323 +
 api/0.6.174/io/druid/indexer/package-tree.html     |   216 +
 api/0.6.174/io/druid/indexer/package-use.html      |   261 +
 .../indexer/partitions/AbstractPartitionsSpec.html |   352 +
 .../indexer/partitions/HashedPartitionsSpec.html   |   301 +
 .../druid/indexer/partitions/PartitionsSpec.html   |   277 +
 .../indexer/partitions/RandomPartitionsSpec.html   |   273 +
 .../partitions/SingleDimensionPartitionsSpec.html  |   298 +
 .../class-use/AbstractPartitionsSpec.html          |   167 +
 .../partitions/class-use/HashedPartitionsSpec.html |   172 +
 .../partitions/class-use/PartitionsSpec.html       |   247 +
 .../partitions/class-use/RandomPartitionsSpec.html |   117 +
 .../class-use/SingleDimensionPartitionsSpec.html   |   117 +
 .../io/druid/indexer/partitions/package-frame.html |    27 +
 .../druid/indexer/partitions/package-summary.html  |   162 +
 .../io/druid/indexer/partitions/package-tree.html  |   143 +
 .../io/druid/indexer/partitions/package-use.html   |   175 +
 .../indexer/path/GranularUnprocessedPathSpec.html  |   310 +
 .../io/druid/indexer/path/GranularityPathSpec.html |   379 +
 api/0.6.174/io/druid/indexer/path/PathSpec.html    |   217 +
 .../io/druid/indexer/path/StaticPathSpec.html      |   322 +
 .../class-use/GranularUnprocessedPathSpec.html     |   117 +
 .../path/class-use/GranularityPathSpec.html        |   157 +
 .../io/druid/indexer/path/class-use/PathSpec.html  |   169 +
 .../indexer/path/class-use/StaticPathSpec.html     |   117 +
 .../io/druid/indexer/path/package-frame.html       |    26 +
 .../io/druid/indexer/path/package-summary.html     |   162 +
 .../io/druid/indexer/path/package-tree.html        |   139 +
 api/0.6.174/io/druid/indexer/path/package-use.html |   153 +
 .../io/druid/indexer/rollup/DataRollupSpec.html    |   350 +
 .../indexer/rollup/class-use/DataRollupSpec.html   |   181 +
 .../io/druid/indexer/rollup/package-frame.html     |    20 +
 .../io/druid/indexer/rollup/package-summary.html   |   139 +
 .../io/druid/indexer/rollup/package-tree.html      |   130 +
 .../io/druid/indexer/rollup/package-use.html       |   154 +
 .../io/druid/indexer/updater/DbUpdaterJobSpec.html |   355 +
 .../updater/class-use/DbUpdaterJobSpec.html        |   199 +
 .../io/druid/indexer/updater/package-frame.html    |    20 +
 .../io/druid/indexer/updater/package-summary.html  |   135 +
 .../io/druid/indexer/updater/package-tree.html     |   130 +
 .../io/druid/indexer/updater/package-use.html      |   150 +
 .../io/druid/indexing/common/RetryPolicy.html      |   271 +
 .../druid/indexing/common/RetryPolicyConfig.html   |   284 +
 .../druid/indexing/common/RetryPolicyFactory.html  |   259 +
 .../indexing/common/SegmentLoaderFactory.html      |   259 +
 api/0.6.174/io/druid/indexing/common/TaskLock.html |   355 +
 .../druid/indexing/common/TaskStatus.Status.html   |   339 +
 .../io/druid/indexing/common/TaskStatus.html       |   428 +
 .../io/druid/indexing/common/TaskToolbox.html      |   490 +
 .../druid/indexing/common/TaskToolboxFactory.html  |   286 +
 .../common/actions/LocalTaskActionClient.html      |   273 +
 .../actions/LocalTaskActionClientFactory.html      |   270 +
 .../indexing/common/actions/LockAcquireAction.html |   333 +
 .../indexing/common/actions/LockListAction.html    |   320 +
 .../indexing/common/actions/LockReleaseAction.html |   333 +
 .../common/actions/LockTryAcquireAction.html       |   333 +
 .../common/actions/RemoteTaskActionClient.html     |   277 +
 .../actions/RemoteTaskActionClientFactory.html     |   274 +
 .../common/actions/SegmentInsertAction.html        |   375 +
 .../common/actions/SegmentListUnusedAction.html    |   350 +
 .../common/actions/SegmentListUsedAction.html      |   350 +
 .../actions/SegmentMetadataUpdateAction.html       |   335 +
 .../indexing/common/actions/SegmentNukeAction.html |   335 +
 .../druid/indexing/common/actions/TaskAction.html  |   243 +
 .../indexing/common/actions/TaskActionClient.html  |   215 +
 .../common/actions/TaskActionClientFactory.html    |   212 +
 .../indexing/common/actions/TaskActionHolder.html  |   273 +
 .../indexing/common/actions/TaskActionToolbox.html |   336 +
 .../actions/class-use/LocalTaskActionClient.html   |   117 +
 .../class-use/LocalTaskActionClientFactory.html    |   117 +
 .../actions/class-use/LockAcquireAction.html       |   117 +
 .../common/actions/class-use/LockListAction.html   |   117 +
 .../actions/class-use/LockReleaseAction.html       |   117 +
 .../actions/class-use/LockTryAcquireAction.html    |   117 +
 .../actions/class-use/RemoteTaskActionClient.html  |   117 +
 .../class-use/RemoteTaskActionClientFactory.html   |   117 +
 .../actions/class-use/SegmentInsertAction.html     |   157 +
 .../actions/class-use/SegmentListUnusedAction.html |   117 +
 .../actions/class-use/SegmentListUsedAction.html   |   117 +
 .../class-use/SegmentMetadataUpdateAction.html     |   117 +
 .../actions/class-use/SegmentNukeAction.html       |   117 +
 .../common/actions/class-use/TaskAction.html       |   293 +
 .../common/actions/class-use/TaskActionClient.html |   272 +
 .../actions/class-use/TaskActionClientFactory.html |   273 +
 .../common/actions/class-use/TaskActionHolder.html |   157 +
 .../actions/class-use/TaskActionToolbox.html       |   220 +
 .../indexing/common/actions/package-frame.html     |    40 +
 .../indexing/common/actions/package-summary.html   |   214 +
 .../indexing/common/actions/package-tree.html      |   150 +
 .../druid/indexing/common/actions/package-use.html |   266 +
 .../indexing/common/class-use/RetryPolicy.html     |   157 +
 .../common/class-use/RetryPolicyConfig.html        |   158 +
 .../common/class-use/RetryPolicyFactory.html       |   165 +
 .../common/class-use/SegmentLoaderFactory.html     |   168 +
 .../druid/indexing/common/class-use/TaskLock.html  |   315 +
 .../common/class-use/TaskStatus.Status.html        |   208 +
 .../indexing/common/class-use/TaskStatus.html      |   426 +
 .../indexing/common/class-use/TaskToolbox.html     |   245 +
 .../common/class-use/TaskToolboxFactory.html       |   155 +
 .../indexing/common/config/FileTaskLogsConfig.html |   270 +
 .../druid/indexing/common/config/TaskConfig.html   |   354 +
 .../indexing/common/config/TaskStorageConfig.html  |   295 +
 .../config/class-use/FileTaskLogsConfig.html       |   155 +
 .../common/config/class-use/TaskConfig.html        |   234 +
 .../common/config/class-use/TaskStorageConfig.html |   162 +
 .../indexing/common/config/package-frame.html      |    22 +
 .../indexing/common/config/package-summary.html    |   143 +
 .../druid/indexing/common/config/package-tree.html |   132 +
 .../druid/indexing/common/config/package-use.html  |   191 +
 .../indexing/common/index/YeOldePlumberSchool.html |   299 +
 .../index/class-use/YeOldePlumberSchool.html       |   117 +
 .../druid/indexing/common/index/package-frame.html |    20 +
 .../indexing/common/index/package-summary.html     |   137 +
 .../druid/indexing/common/index/package-tree.html  |   130 +
 .../druid/indexing/common/index/package-use.html   |   117 +
 .../io/druid/indexing/common/package-frame.html    |    31 +
 .../io/druid/indexing/common/package-summary.html  |   186 +
 .../io/druid/indexing/common/package-tree.html     |   149 +
 .../io/druid/indexing/common/package-use.html      |   279 +
 .../common/task/AbstractFixedIntervalTask.html     |   357 +
 .../druid/indexing/common/task/AbstractTask.html   |   552 +
 .../io/druid/indexing/common/task/AppendTask.html  |   328 +
 .../io/druid/indexing/common/task/ArchiveTask.html |   319 +
 .../io/druid/indexing/common/task/DeleteTask.html  |   319 +
 ...xTask.HadoopDetermineConfigInnerProcessing.html |   265 +
 ...exTask.HadoopIndexGeneratorInnerProcessing.html |   265 +
 .../indexing/common/task/HadoopIndexTask.html      |   419 +
 .../common/task/IndexTask.IndexIOConfig.html       |   267 +
 .../common/task/IndexTask.IndexIngestionSpec.html  |   309 +
 .../common/task/IndexTask.IndexTuningConfig.html   |   298 +
 .../io/druid/indexing/common/task/IndexTask.html   |   375 +
 .../io/druid/indexing/common/task/KillTask.html    |   319 +
 .../io/druid/indexing/common/task/MergeTask.html   |   343 +
 .../druid/indexing/common/task/MergeTaskBase.html  |   389 +
 .../io/druid/indexing/common/task/MoveTask.html    |   334 +
 .../io/druid/indexing/common/task/NoopTask.html    |   402 +
 ...altimeIndexTask.TaskActionSegmentPublisher.html |   275 +
 .../indexing/common/task/RealtimeIndexTask.html    |   430 +
 .../io/druid/indexing/common/task/RestoreTask.html |   319 +
 .../io/druid/indexing/common/task/Task.html        |   397 +
 .../druid/indexing/common/task/TaskResource.html   |   297 +
 .../io/druid/indexing/common/task/TaskUtils.html   |   264 +
 .../common/task/VersionConverterTask.SubTask.html  |   334 +
 .../indexing/common/task/VersionConverterTask.html |   358 +
 .../task/class-use/AbstractFixedIntervalTask.html  |   197 +
 .../common/task/class-use/AbstractTask.html        |   213 +
 .../indexing/common/task/class-use/AppendTask.html |   117 +
 .../common/task/class-use/ArchiveTask.html         |   117 +
 .../indexing/common/task/class-use/DeleteTask.html |   117 +
 ...xTask.HadoopDetermineConfigInnerProcessing.html |   117 +
 ...exTask.HadoopIndexGeneratorInnerProcessing.html |   117 +
 .../common/task/class-use/HadoopIndexTask.html     |   117 +
 .../task/class-use/IndexTask.IndexIOConfig.html    |   170 +
 .../class-use/IndexTask.IndexIngestionSpec.html    |   178 +
 .../class-use/IndexTask.IndexTuningConfig.html     |   170 +
 .../indexing/common/task/class-use/IndexTask.html  |   117 +
 .../indexing/common/task/class-use/KillTask.html   |   117 +
 .../indexing/common/task/class-use/MergeTask.html  |   117 +
 .../common/task/class-use/MergeTaskBase.html       |   161 +
 .../indexing/common/task/class-use/MoveTask.html   |   117 +
 .../indexing/common/task/class-use/NoopTask.html   |   157 +
 ...altimeIndexTask.TaskActionSegmentPublisher.html |   117 +
 .../common/task/class-use/RealtimeIndexTask.html   |   117 +
 .../common/task/class-use/RestoreTask.html         |   117 +
 .../druid/indexing/common/task/class-use/Task.html |   665 +
 .../common/task/class-use/TaskResource.html        |   219 +
 .../indexing/common/task/class-use/TaskUtils.html  |   117 +
 .../class-use/VersionConverterTask.SubTask.html    |   117 +
 .../task/class-use/VersionConverterTask.html       |   162 +
 .../druid/indexing/common/task/package-frame.html  |    47 +
 .../indexing/common/task/package-summary.html      |   244 +
 .../druid/indexing/common/task/package-tree.html   |   170 +
 .../io/druid/indexing/common/task/package-use.html |   308 +
 .../indexing/common/tasklogs/FileTaskLogs.html     |   293 +
 .../druid/indexing/common/tasklogs/LogUtils.html   |   269 +
 .../common/tasklogs/SwitchingTaskLogStreamer.html  |   273 +
 .../common/tasklogs/TaskRunnerTaskLogStreamer.html |   272 +
 .../common/tasklogs/class-use/FileTaskLogs.html    |   117 +
 .../common/tasklogs/class-use/LogUtils.html        |   117 +
 .../class-use/SwitchingTaskLogStreamer.html        |   117 +
 .../class-use/TaskRunnerTaskLogStreamer.html       |   117 +
 .../indexing/common/tasklogs/package-frame.html    |    23 +
 .../indexing/common/tasklogs/package-summary.html  |   149 +
 .../indexing/common/tasklogs/package-tree.html     |   133 +
 .../indexing/common/tasklogs/package-use.html      |   117 +
 ...gmentFirehoseFactory.IngestSegmentFirehose.html |   330 +
 .../firehose/IngestSegmentFirehoseFactory.html     |   382 +
 ...gmentFirehoseFactory.IngestSegmentFirehose.html |   117 +
 .../class-use/IngestSegmentFirehoseFactory.html    |   117 +
 .../io/druid/indexing/firehose/package-frame.html  |    20 +
 .../druid/indexing/firehose/package-summary.html   |   135 +
 .../io/druid/indexing/firehose/package-tree.html   |   131 +
 .../io/druid/indexing/firehose/package-use.html    |   117 +
 .../io/druid/indexing/overlord/DbTaskStorage.html  |   540 +
 .../druid/indexing/overlord/ForkingTaskRunner.html |   408 +
 .../overlord/ForkingTaskRunnerFactory.html         |   280 +
 .../indexing/overlord/HeapMemoryTaskStorage.html   |   508 +
 .../druid/indexing/overlord/ImmutableZkWorker.html |   315 +
 .../indexing/overlord/IndexerDBCoordinator.html    |   337 +
 .../io/druid/indexing/overlord/PortFinder.html     |   271 +
 .../druid/indexing/overlord/RemoteTaskRunner.html  |   457 +
 .../indexing/overlord/RemoteTaskRunnerFactory.html |   278 +
 .../overlord/RemoteTaskRunnerWorkItem.html         |   337 +
 .../overlord/RemoteTaskRunnerWorkQueue.html        |   324 +
 .../indexing/overlord/TaskExistsException.html     |   294 +
 .../io/druid/indexing/overlord/TaskLockbox.html    |   365 +
 .../io/druid/indexing/overlord/TaskMaster.html     |   378 +
 .../io/druid/indexing/overlord/TaskQueue.html      |   333 +
 .../io/druid/indexing/overlord/TaskRunner.html     |   288 +
 .../druid/indexing/overlord/TaskRunnerFactory.html |   212 +
 .../indexing/overlord/TaskRunnerWorkItem.html      |   374 +
 .../io/druid/indexing/overlord/TaskStorage.html    |   410 +
 .../indexing/overlord/TaskStorageQueryAdapter.html |   321 +
 .../indexing/overlord/ThreadPoolTaskRunner.html    |   427 +
 .../io/druid/indexing/overlord/ZkWorker.html       |   465 +
 .../indexing/overlord/autoscaling/AutoScaler.html  |   314 +
 .../overlord/autoscaling/AutoScalingData.html      |   275 +
 .../overlord/autoscaling/NoopAutoScaler.html       |   397 +
 .../NoopResourceManagementScheduler.html           |   301 +
 .../autoscaling/ResourceManagementScheduler.html   |   299 +
 .../ResourceManagementSchedulerConfig.html         |   297 +
 .../ResourceManagementSchedulerFactory.html        |   214 +
 .../ResourceManagementSchedulerFactoryImpl.html    |   274 +
 .../autoscaling/ResourceManagementStrategy.html    |   244 +
 .../overlord/autoscaling/ScalingStats.EVENT.html   |   327 +
 .../autoscaling/ScalingStats.ScalingEvent.html     |   271 +
 .../overlord/autoscaling/ScalingStats.html         |   307 +
 .../SimpleResourceManagementConfig.html            |   401 +
 .../SimpleResourceManagementStrategy.html          |   308 +
 .../overlord/autoscaling/class-use/AutoScaler.html |   228 +
 .../autoscaling/class-use/AutoScalingData.html     |   228 +
 .../autoscaling/class-use/NoopAutoScaler.html      |   117 +
 .../class-use/NoopResourceManagementScheduler.html |   117 +
 .../class-use/ResourceManagementScheduler.html     |   198 +
 .../ResourceManagementSchedulerConfig.html         |   163 +
 .../ResourceManagementSchedulerFactory.html        |   187 +
 .../ResourceManagementSchedulerFactoryImpl.html    |   117 +
 .../class-use/ResourceManagementStrategy.html      |   176 +
 .../autoscaling/class-use/ScalingStats.EVENT.html  |   170 +
 .../class-use/ScalingStats.ScalingEvent.html       |   157 +
 .../autoscaling/class-use/ScalingStats.html        |   169 +
 .../class-use/SimpleResourceManagementConfig.html  |   213 +
 .../SimpleResourceManagementStrategy.html          |   117 +
 .../overlord/autoscaling/ec2/EC2AutoScaler.html    |   455 +
 .../autoscaling/ec2/EC2EnvironmentConfig.html      |   339 +
 .../overlord/autoscaling/ec2/EC2NodeData.html      |   384 +
 .../overlord/autoscaling/ec2/EC2UserData.html      |   232 +
 .../autoscaling/ec2/GalaxyEC2UserData.html         |   386 +
 .../autoscaling/ec2/StringEC2UserData.html         |   384 +
 .../autoscaling/ec2/class-use/EC2AutoScaler.html   |   117 +
 .../ec2/class-use/EC2EnvironmentConfig.html        |   172 +
 .../autoscaling/ec2/class-use/EC2NodeData.html     |   211 +
 .../autoscaling/ec2/class-use/EC2UserData.html     |   249 +
 .../ec2/class-use/GalaxyEC2UserData.html           |   157 +
 .../ec2/class-use/StringEC2UserData.html           |   157 +
 .../overlord/autoscaling/ec2/package-frame.html    |    28 +
 .../overlord/autoscaling/ec2/package-summary.html  |   168 +
 .../overlord/autoscaling/ec2/package-tree.html     |   138 +
 .../overlord/autoscaling/ec2/package-use.html      |   188 +
 .../overlord/autoscaling/package-frame.html        |    39 +
 .../overlord/autoscaling/package-summary.html      |   218 +
 .../overlord/autoscaling/package-tree.html         |   160 +
 .../indexing/overlord/autoscaling/package-use.html |   256 +
 .../indexing/overlord/class-use/DbTaskStorage.html |   117 +
 .../overlord/class-use/ForkingTaskRunner.html      |   157 +
 .../class-use/ForkingTaskRunnerFactory.html        |   117 +
 .../overlord/class-use/HeapMemoryTaskStorage.html  |   117 +
 .../overlord/class-use/ImmutableZkWorker.html      |   224 +
 .../overlord/class-use/IndexerDBCoordinator.html   |   170 +
 .../indexing/overlord/class-use/PortFinder.html    |   117 +
 .../overlord/class-use/RemoteTaskRunner.html       |   177 +
 .../class-use/RemoteTaskRunnerFactory.html         |   117 +
 .../class-use/RemoteTaskRunnerWorkItem.html        |   239 +
 .../class-use/RemoteTaskRunnerWorkQueue.html       |   117 +
 .../overlord/class-use/TaskExistsException.html    |   176 +
 .../indexing/overlord/class-use/TaskLockbox.html   |   208 +
 .../indexing/overlord/class-use/TaskMaster.html    |   181 +
 .../indexing/overlord/class-use/TaskQueue.html     |   157 +
 .../indexing/overlord/class-use/TaskRunner.html    |   268 +
 .../overlord/class-use/TaskRunnerFactory.html      |   182 +
 .../overlord/class-use/TaskRunnerWorkItem.html     |   228 +
 .../indexing/overlord/class-use/TaskStorage.html   |   224 +
 .../class-use/TaskStorageQueryAdapter.html         |   158 +
 .../overlord/class-use/ThreadPoolTaskRunner.html   |   117 +
 .../indexing/overlord/class-use/ZkWorker.html      |   220 +
 .../overlord/config/ForkingTaskRunnerConfig.html   |   310 +
 .../overlord/config/RemoteTaskRunnerConfig.html    |   297 +
 .../indexing/overlord/config/TaskQueueConfig.html  |   303 +
 .../config/class-use/ForkingTaskRunnerConfig.html  |   170 +
 .../config/class-use/RemoteTaskRunnerConfig.html   |   231 +
 .../overlord/config/class-use/TaskQueueConfig.html |   173 +
 .../indexing/overlord/config/package-frame.html    |    22 +
 .../indexing/overlord/config/package-summary.html  |   143 +
 .../indexing/overlord/config/package-tree.html     |   132 +
 .../indexing/overlord/config/package-use.html      |   194 +
 .../overlord/http/OverlordRedirectInfo.html        |   287 +
 .../indexing/overlord/http/OverlordResource.html   |   520 +
 .../http/class-use/OverlordRedirectInfo.html       |   117 +
 .../overlord/http/class-use/OverlordResource.html  |   117 +
 .../indexing/overlord/http/package-frame.html      |    21 +
 .../indexing/overlord/http/package-summary.html    |   139 +
 .../druid/indexing/overlord/http/package-tree.html |   131 +
 .../druid/indexing/overlord/http/package-use.html  |   117 +
 .../io/druid/indexing/overlord/package-frame.html  |    47 +
 .../druid/indexing/overlord/package-summary.html   |   265 +
 .../io/druid/indexing/overlord/package-tree.html   |   173 +
 .../io/druid/indexing/overlord/package-use.html    |   376 +
 .../setup/FillCapacityWithAffinityConfig.html      |   292 +
 ...llCapacityWithAffinityWorkerSelectStrategy.html |   329 +
 .../setup/FillCapacityWorkerSelectStrategy.html    |   280 +
 .../overlord/setup/WorkerBehaviorConfig.html       |   400 +
 .../overlord/setup/WorkerSelectStrategy.html       |   222 +
 .../indexing/overlord/setup/WorkerSetupData.html   |   398 +
 .../class-use/FillCapacityWithAffinityConfig.html  |   168 +
 ...llCapacityWithAffinityWorkerSelectStrategy.html |   117 +
 .../FillCapacityWorkerSelectStrategy.html          |   157 +
 .../setup/class-use/WorkerBehaviorConfig.html      |   234 +
 .../setup/class-use/WorkerSelectStrategy.html      |   199 +
 .../overlord/setup/class-use/WorkerSetupData.html  |   159 +
 .../indexing/overlord/setup/package-frame.html     |    28 +
 .../indexing/overlord/setup/package-summary.html   |   168 +
 .../indexing/overlord/setup/package-tree.html      |   141 +
 .../druid/indexing/overlord/setup/package-use.html |   223 +
 .../io/druid/indexing/worker/TaskAnnouncement.html |   287 +
 api/0.6.174/io/druid/indexing/worker/Worker.html   |   321 +
 .../indexing/worker/WorkerCuratorCoordinator.html  |   453 +
 .../druid/indexing/worker/WorkerTaskMonitor.html   |   290 +
 .../worker/class-use/TaskAnnouncement.html         |   197 +
 .../io/druid/indexing/worker/class-use/Worker.html |   287 +
 .../worker/class-use/WorkerCuratorCoordinator.html |   181 +
 .../worker/class-use/WorkerTaskMonitor.html        |   117 +
 .../druid/indexing/worker/config/WorkerConfig.html |   297 +
 .../worker/config/class-use/WorkerConfig.html      |   216 +
 .../indexing/worker/config/package-frame.html      |    20 +
 .../indexing/worker/config/package-summary.html    |   135 +
 .../druid/indexing/worker/config/package-tree.html |   130 +
 .../druid/indexing/worker/config/package-use.html  |   188 +
 .../worker/executor/ExecutorLifecycle.html         |   293 +
 .../worker/executor/ExecutorLifecycleConfig.html   |   336 +
 .../executor/class-use/ExecutorLifecycle.html      |   117 +
 .../class-use/ExecutorLifecycleConfig.html         |   179 +
 .../indexing/worker/executor/package-frame.html    |    21 +
 .../indexing/worker/executor/package-summary.html  |   141 +
 .../indexing/worker/executor/package-tree.html     |   131 +
 .../indexing/worker/executor/package-use.html      |   150 +
 .../druid/indexing/worker/http/WorkerResource.html |   333 +
 .../worker/http/class-use/WorkerResource.html      |   117 +
 .../druid/indexing/worker/http/package-frame.html  |    20 +
 .../indexing/worker/http/package-summary.html      |   135 +
 .../druid/indexing/worker/http/package-tree.html   |   130 +
 .../io/druid/indexing/worker/http/package-use.html |   117 +
 .../io/druid/indexing/worker/package-frame.html    |    23 +
 .../io/druid/indexing/worker/package-summary.html  |   155 +
 .../io/druid/indexing/worker/package-tree.html     |   133 +
 .../io/druid/indexing/worker/package-use.html      |   214 +
 .../io/druid/initialization/Initialization.html    |   339 +
 .../io/druid/initialization/LogLevelAdjuster.html  |   302 +
 .../initialization/LogLevelAdjusterMBean.html      |   227 +
 .../initialization/class-use/Initialization.html   |   117 +
 .../initialization/class-use/LogLevelAdjuster.html |   117 +
 .../class-use/LogLevelAdjusterMBean.html           |   157 +
 .../io/druid/initialization/package-frame.html     |    25 +
 .../io/druid/initialization/package-summary.html   |   154 +
 .../io/druid/initialization/package-tree.html      |   135 +
 .../io/druid/initialization/package-use.html       |   150 +
 .../AggregatorsModule.AggregatorFactoryMixin.html  |   164 +
 .../AggregatorsModule.PostAggregatorMixin.html     |   164 +
 .../io/druid/jackson/AggregatorsModule.html        |   297 +
 .../io/druid/jackson/DefaultObjectMapper.html      |   338 +
 .../jackson/DruidDefaultSerializersModule.html     |   282 +
 api/0.6.174/io/druid/jackson/JacksonModule.html    |   295 +
 ...eryGranularityModule.QueryGranularityMixin.html |   164 +
 .../io/druid/jackson/QueryGranularityModule.html   |   293 +
 .../jackson/SegmentsModule.ShardSpecMixin.html     |   164 +
 api/0.6.174/io/druid/jackson/SegmentsModule.html   |   293 +
 .../AggregatorsModule.AggregatorFactoryMixin.html  |   117 +
 .../AggregatorsModule.PostAggregatorMixin.html     |   117 +
 .../druid/jackson/class-use/AggregatorsModule.html |   117 +
 .../jackson/class-use/DefaultObjectMapper.html     |   155 +
 .../class-use/DruidDefaultSerializersModule.html   |   117 +
 .../io/druid/jackson/class-use/JacksonModule.html  |   117 +
 ...eryGranularityModule.QueryGranularityMixin.html |   117 +
 .../jackson/class-use/QueryGranularityModule.html  |   117 +
 .../class-use/SegmentsModule.ShardSpecMixin.html   |   117 +
 .../io/druid/jackson/class-use/SegmentsModule.html |   117 +
 api/0.6.174/io/druid/jackson/package-frame.html    |    32 +
 api/0.6.174/io/druid/jackson/package-summary.html  |   182 +
 api/0.6.174/io/druid/jackson/package-tree.html     |   158 +
 api/0.6.174/io/druid/jackson/package-use.html      |   150 +
 .../druid/query/AbstractPrioritizedCallable.html   |   274 +
 api/0.6.174/io/druid/query/BaseQuery.html          |   628 +
 .../io/druid/query/BySegmentQueryRunner.html       |   271 +
 .../io/druid/query/BySegmentResultValue.html       |   238 +
 .../io/druid/query/BySegmentResultValueClass.html  |   356 +
 .../druid/query/BySegmentSkippingQueryRunner.html  |   286 +
 api/0.6.174/io/druid/query/CacheStrategy.html      |   262 +
 .../druid/query/ChainedExecutionQueryRunner.html   |   302 +
 api/0.6.174/io/druid/query/ConcatQueryRunner.html  |   267 +
 api/0.6.174/io/druid/query/DataSource.html         |   212 +
 api/0.6.174/io/druid/query/DataSourceUtil.html     |   258 +
 .../DefaultQueryRunnerFactoryConglomerate.html     |   270 +
 .../io/druid/query/DruidProcessingConfig.html      |   309 +
 .../io/druid/query/Druids.AndDimFilterBuilder.html |   299 +
 .../druid/query/Druids.NoopDimFilterBuilder.html   |   269 +
 .../io/druid/query/Druids.NotDimFilterBuilder.html |   299 +
 .../io/druid/query/Druids.OrDimFilterBuilder.html  |   316 +
 .../io/druid/query/Druids.ResultBuilder.html       |   313 +
 .../io/druid/query/Druids.SearchQueryBuilder.html  |   531 +
 .../query/Druids.SegmentMetadataQueryBuilder.html  |   391 +
 .../io/druid/query/Druids.SelectQueryBuilder.html  |   475 +
 .../query/Druids.SelectorDimFilterBuilder.html     |   313 +
 .../query/Druids.TimeBoundaryQueryBuilder.html     |   377 +
 .../druid/query/Druids.TimeseriesQueryBuilder.html |   534 +
 api/0.6.174/io/druid/query/Druids.html             |   461 +
 .../io/druid/query/FinalizeResultsQueryRunner.html |   269 +
 .../io/druid/query/GroupByParallelQueryRunner.html |   273 +
 .../druid/query/IntervalChunkingQueryRunner.html   |   269 +
 api/0.6.174/io/druid/query/LegacyDataSource.html   |   246 +
 .../io/druid/query/MapQueryToolChestWarehouse.html |   270 +
 .../io/druid/query/MetricValueExtractor.html       |   365 +
 .../query/MetricsEmittingExecutorService.html      |   360 +
 .../io/druid/query/MetricsEmittingQueryRunner.html |   322 +
 api/0.6.174/io/druid/query/NoopQueryRunner.html    |   267 +
 .../io/druid/query/PostProcessingOperator.html     |   212 +
 .../io/druid/query/PrioritizedCallable.html        |   224 +
 ...torService.PrioritizedListenableFutureTask.html |   435 +
 .../io/druid/query/PrioritizedExecutorService.html |   551 +
 .../io/druid/query/PrioritizedRunnable.html        |   224 +
 api/0.6.174/io/druid/query/Queries.html            |   260 +
 api/0.6.174/io/druid/query/Query.html              |   604 +
 api/0.6.174/io/druid/query/QueryCacheHelper.html   |   258 +
 api/0.6.174/io/druid/query/QueryConfig.html        |   262 +
 api/0.6.174/io/druid/query/QueryDataSource.html    |   331 +
 .../io/druid/query/QueryInterruptedException.html  |   313 +
 api/0.6.174/io/druid/query/QueryMetricUtil.html    |   278 +
 api/0.6.174/io/druid/query/QueryRunner.html        |   212 +
 api/0.6.174/io/druid/query/QueryRunnerFactory.html |   240 +
 .../query/QueryRunnerFactoryConglomerate.html      |   214 +
 api/0.6.174/io/druid/query/QueryRunnerHelper.html  |   281 +
 api/0.6.174/io/druid/query/QuerySegmentWalker.html |   241 +
 api/0.6.174/io/druid/query/QueryToolChest.html     |   416 +
 .../io/druid/query/QueryToolChestWarehouse.html    |   214 +
 api/0.6.174/io/druid/query/QueryWatcher.html       |   229 +
 .../query/ReferenceCountingSegmentQueryRunner.html |   269 +
 .../io/druid/query/ReflectionLoaderThingy.html     |   282 +
 .../query/ReflectionQueryToolChestWarehouse.html   |   277 +
 api/0.6.174/io/druid/query/Result.html             |   348 +
 .../query/ResultGranularTimestampComparator.html   |   276 +
 .../io/druid/query/ResultMergeQueryRunner.html     |   306 +
 .../io/druid/query/SubqueryQueryRunner.html        |   268 +
 api/0.6.174/io/druid/query/TableDataSource.html    |   335 +
 api/0.6.174/io/druid/query/TimewarpOperator.html   |   310 +
 api/0.6.174/io/druid/query/UnionDataSource.html    |   331 +
 api/0.6.174/io/druid/query/UnionQueryRunner.html   |   269 +
 .../io/druid/query/aggregation/Aggregator.html     |   288 +
 .../druid/query/aggregation/AggregatorFactory.html |   431 +
 .../io/druid/query/aggregation/AggregatorUtil.html |   284 +
 .../io/druid/query/aggregation/Aggregators.html    |   271 +
 .../druid/query/aggregation/BufferAggregator.html  |   335 +
 .../druid/query/aggregation/CountAggregator.html   |   369 +
 .../query/aggregation/CountAggregatorFactory.html  |   581 +
 .../query/aggregation/CountBufferAggregator.html   |   401 +
 .../query/aggregation/DoubleSumAggregator.html     |   371 +
 .../aggregation/DoubleSumAggregatorFactory.html    |   596 +
 .../aggregation/DoubleSumBufferAggregator.html     |   401 +
 .../query/aggregation/FilteredAggregator.html      |   356 +
 .../aggregation/FilteredAggregatorFactory.html     |   611 +
 .../aggregation/FilteredBufferAggregator.html      |   403 +
 .../io/druid/query/aggregation/Histogram.html      |   486 +
 .../query/aggregation/HistogramAggregator.html     |   356 +
 .../aggregation/HistogramAggregatorFactory.html    |   611 +
 .../aggregation/HistogramBufferAggregator.html     |   403 +
 .../druid/query/aggregation/HistogramVisual.html   |   344 +
 .../io/druid/query/aggregation/IntPredicate.html   |   210 +
 .../query/aggregation/JavaScriptAggregator.html    |   356 +
 .../aggregation/JavaScriptAggregatorFactory.html   |   658 +
 .../aggregation/JavaScriptBufferAggregator.html    |   403 +
 .../druid/query/aggregation/LongSumAggregator.html |   371 +
 .../aggregation/LongSumAggregatorFactory.html      |   596 +
 .../query/aggregation/LongSumBufferAggregator.html |   401 +
 .../io/druid/query/aggregation/MaxAggregator.html  |   371 +
 .../query/aggregation/MaxAggregatorFactory.html    |   596 +
 .../query/aggregation/MaxBufferAggregator.html     |   401 +
 .../query/aggregation/MetricManipulationFn.html    |   210 +
 .../query/aggregation/MetricManipulatorFns.html    |   284 +
 .../io/druid/query/aggregation/MinAggregator.html  |   371 +
 .../query/aggregation/MinAggregatorFactory.html    |   596 +
 .../query/aggregation/MinBufferAggregator.html     |   401 +
 .../io/druid/query/aggregation/PostAggregator.html |   252 +
 .../aggregation/ToLowerCaseAggregatorFactory.html  |   564 +
 .../cardinality/CardinalityAggregator.html         |   440 +
 .../cardinality/CardinalityAggregatorFactory.html  |   624 +
 .../cardinality/CardinalityBufferAggregator.html   |   403 +
 .../class-use/CardinalityAggregator.html           |   117 +
 .../class-use/CardinalityAggregatorFactory.html    |   117 +
 .../class-use/CardinalityBufferAggregator.html     |   117 +
 .../aggregation/cardinality/package-frame.html     |    22 +
 .../aggregation/cardinality/package-summary.html   |   143 +
 .../aggregation/cardinality/package-tree.html      |   132 +
 .../query/aggregation/cardinality/package-use.html |   117 +
 .../query/aggregation/class-use/Aggregator.html    |   518 +
 .../aggregation/class-use/AggregatorFactory.html   |  1229 ++
 .../aggregation/class-use/AggregatorUtil.html      |   117 +
 .../query/aggregation/class-use/Aggregators.html   |   117 +
 .../aggregation/class-use/BufferAggregator.html    |   417 +
 .../aggregation/class-use/CountAggregator.html     |   117 +
 .../class-use/CountAggregatorFactory.html          |   117 +
 .../class-use/CountBufferAggregator.html           |   117 +
 .../aggregation/class-use/DoubleSumAggregator.html |   117 +
 .../class-use/DoubleSumAggregatorFactory.html      |   117 +
 .../class-use/DoubleSumBufferAggregator.html       |   117 +
 .../aggregation/class-use/FilteredAggregator.html  |   117 +
 .../class-use/FilteredAggregatorFactory.html       |   117 +
 .../class-use/FilteredBufferAggregator.html        |   117 +
 .../query/aggregation/class-use/Histogram.html     |   178 +
 .../aggregation/class-use/HistogramAggregator.html |   117 +
 .../class-use/HistogramAggregatorFactory.html      |   117 +
 .../class-use/HistogramBufferAggregator.html       |   117 +
 .../aggregation/class-use/HistogramVisual.html     |   159 +
 .../query/aggregation/class-use/IntPredicate.html  |   117 +
 .../class-use/JavaScriptAggregator.html            |   117 +
 .../class-use/JavaScriptAggregatorFactory.html     |   117 +
 .../class-use/JavaScriptBufferAggregator.html      |   117 +
 .../aggregation/class-use/LongSumAggregator.html   |   117 +
 .../class-use/LongSumAggregatorFactory.html        |   117 +
 .../class-use/LongSumBufferAggregator.html         |   117 +
 .../query/aggregation/class-use/MaxAggregator.html |   117 +
 .../class-use/MaxAggregatorFactory.html            |   117 +
 .../aggregation/class-use/MaxBufferAggregator.html |   117 +
 .../class-use/MetricManipulationFn.html            |   364 +
 .../class-use/MetricManipulatorFns.html            |   117 +
 .../query/aggregation/class-use/MinAggregator.html |   117 +
 .../class-use/MinAggregatorFactory.html            |   117 +
 .../aggregation/class-use/MinBufferAggregator.html |   117 +
 .../aggregation/class-use/PostAggregator.html      |   752 +
 .../class-use/ToLowerCaseAggregatorFactory.html    |   117 +
 .../histogram/ApproximateHistogram.html            |  1244 ++
 .../histogram/ApproximateHistogramAggregator.html  |   396 +
 .../ApproximateHistogramAggregatorFactory.html     |   761 +
 .../ApproximateHistogramBufferAggregator.html      |   407 +
 .../histogram/ApproximateHistogramDruidModule.html |   284 +
 .../ApproximateHistogramFoldingAggregator.html     |   360 +
 ...proximateHistogramFoldingAggregatorFactory.html |   419 +
 ...pproximateHistogramFoldingBufferAggregator.html |   407 +
 .../ApproximateHistogramFoldingSerde.html          |   345 +
 .../ApproximateHistogramPostAggregator.html        |   344 +
 .../query/aggregation/histogram/ArrayUtils.html    |   296 +
 .../histogram/BucketsPostAggregator.html           |   342 +
 .../query/aggregation/histogram/BufferUtils.html   |   283 +
 .../histogram/CustomBucketsPostAggregator.html     |   327 +
 .../histogram/EqualBucketsPostAggregator.html      |   327 +
 .../query/aggregation/histogram/Histogram.html     |   324 +
 .../aggregation/histogram/MaxPostAggregator.html   |   331 +
 .../aggregation/histogram/MinPostAggregator.html   |   331 +
 .../histogram/QuantilePostAggregator.html          |   346 +
 .../query/aggregation/histogram/Quantiles.html     |   337 +
 .../histogram/QuantilesPostAggregator.html         |   346 +
 .../histogram/class-use/ApproximateHistogram.html  |   293 +
 .../class-use/ApproximateHistogramAggregator.html  |   117 +
 .../ApproximateHistogramAggregatorFactory.html     |   157 +
 .../ApproximateHistogramBufferAggregator.html      |   117 +
 .../class-use/ApproximateHistogramDruidModule.html |   117 +
 .../ApproximateHistogramFoldingAggregator.html     |   117 +
 ...proximateHistogramFoldingAggregatorFactory.html |   117 +
 ...pproximateHistogramFoldingBufferAggregator.html |   117 +
 .../ApproximateHistogramFoldingSerde.html          |   117 +
 .../ApproximateHistogramPostAggregator.html        |   181 +
 .../histogram/class-use/ArrayUtils.html            |   117 +
 .../histogram/class-use/BucketsPostAggregator.html |   117 +
 .../histogram/class-use/BufferUtils.html           |   117 +
 .../class-use/CustomBucketsPostAggregator.html     |   117 +
 .../class-use/EqualBucketsPostAggregator.html      |   117 +
 .../aggregation/histogram/class-use/Histogram.html |   172 +
 .../histogram/class-use/MaxPostAggregator.html     |   117 +
 .../histogram/class-use/MinPostAggregator.html     |   117 +
 .../class-use/QuantilePostAggregator.html          |   117 +
 .../aggregation/histogram/class-use/Quantiles.html |   117 +
 .../class-use/QuantilesPostAggregator.html         |   117 +
 .../query/aggregation/histogram/package-frame.html |    40 +
 .../aggregation/histogram/package-summary.html     |   215 +
 .../query/aggregation/histogram/package-tree.html  |   160 +
 .../query/aggregation/histogram/package-use.html   |   159 +
 .../aggregation/hyperloglog/ByteBitLookup.html     |   266 +
 .../query/aggregation/hyperloglog/HLLCV0.html      |   739 +
 .../query/aggregation/hyperloglog/HLLCV1.html      |   731 +
 .../hyperloglog/HyperLogLogCollector.html          |   899 ++
 .../HyperUniqueFinalizingPostAggregator.html       |   318 +
 .../hyperloglog/HyperUniquesAggregator.html        |   371 +
 .../hyperloglog/HyperUniquesAggregatorFactory.html |   609 +
 .../hyperloglog/HyperUniquesBufferAggregator.html  |   401 +
 .../aggregation/hyperloglog/HyperUniquesSerde.html |   345 +
 .../hyperloglog/class-use/ByteBitLookup.html       |   117 +
 .../aggregation/hyperloglog/class-use/HLLCV0.html  |   117 +
 .../aggregation/hyperloglog/class-use/HLLCV1.html  |   117 +
 .../class-use/HyperLogLogCollector.html            |   233 +
 .../HyperUniqueFinalizingPostAggregator.html       |   117 +
 .../class-use/HyperUniquesAggregator.html          |   117 +
 .../class-use/HyperUniquesAggregatorFactory.html   |   117 +
 .../class-use/HyperUniquesBufferAggregator.html    |   117 +
 .../hyperloglog/class-use/HyperUniquesSerde.html   |   117 +
 .../aggregation/hyperloglog/package-frame.html     |    28 +
 .../aggregation/hyperloglog/package-summary.html   |   181 +
 .../aggregation/hyperloglog/package-tree.html      |   145 +
 .../query/aggregation/hyperloglog/package-use.html |   197 +
 .../io/druid/query/aggregation/package-frame.html  |    58 +
 .../druid/query/aggregation/package-summary.html   |   299 +
 .../io/druid/query/aggregation/package-tree.html   |   168 +
 .../io/druid/query/aggregation/package-use.html    |   750 +
 .../aggregation/post/ArithmeticPostAggregator.html |   399 +
 .../aggregation/post/ConstantPostAggregator.html   |   386 +
 .../post/FieldAccessPostAggregator.html            |   384 +
 .../aggregation/post/JavaScriptPostAggregator.html |   382 +
 .../post/class-use/ArithmeticPostAggregator.html   |   117 +
 .../post/class-use/ConstantPostAggregator.html     |   157 +
 .../post/class-use/FieldAccessPostAggregator.html  |   117 +
 .../post/class-use/JavaScriptPostAggregator.html   |   117 +
 .../query/aggregation/post/package-frame.html      |    23 +
 .../query/aggregation/post/package-summary.html    |   147 +
 .../druid/query/aggregation/post/package-tree.html |   133 +
 .../druid/query/aggregation/post/package-use.html  |   150 +
 .../class-use/AbstractPrioritizedCallable.html     |   117 +
 .../io/druid/query/class-use/BaseQuery.html        |   289 +
 .../query/class-use/BySegmentQueryRunner.html      |   117 +
 .../query/class-use/BySegmentResultValue.html      |   201 +
 .../query/class-use/BySegmentResultValueClass.html |   117 +
 .../class-use/BySegmentSkippingQueryRunner.html    |   157 +
 .../io/druid/query/class-use/CacheStrategy.html    |   311 +
 .../class-use/ChainedExecutionQueryRunner.html     |   117 +
 .../druid/query/class-use/ConcatQueryRunner.html   |   117 +
 .../io/druid/query/class-use/DataSource.html       |   565 +
 .../io/druid/query/class-use/DataSourceUtil.html   |   117 +
 .../DefaultQueryRunnerFactoryConglomerate.html     |   117 +
 .../query/class-use/DruidProcessingConfig.html     |   157 +
 .../class-use/Druids.AndDimFilterBuilder.html      |   178 +
 .../class-use/Druids.NoopDimFilterBuilder.html     |   157 +
 .../class-use/Druids.NotDimFilterBuilder.html      |   178 +
 .../query/class-use/Druids.OrDimFilterBuilder.html |   184 +
 .../query/class-use/Druids.ResultBuilder.html      |   190 +
 .../query/class-use/Druids.SearchQueryBuilder.html |   249 +
 .../Druids.SegmentMetadataQueryBuilder.html        |   206 +
 .../query/class-use/Druids.SelectQueryBuilder.html |   233 +
 .../class-use/Druids.SelectorDimFilterBuilder.html |   182 +
 .../class-use/Druids.TimeBoundaryQueryBuilder.html |   202 +
 .../class-use/Druids.TimeseriesQueryBuilder.html   |   233 +
 api/0.6.174/io/druid/query/class-use/Druids.html   |   117 +
 .../class-use/FinalizeResultsQueryRunner.html      |   117 +
 .../class-use/GroupByParallelQueryRunner.html      |   117 +
 .../class-use/IntervalChunkingQueryRunner.html     |   117 +
 .../io/druid/query/class-use/LegacyDataSource.html |   117 +
 .../class-use/MapQueryToolChestWarehouse.html      |   117 +
 .../query/class-use/MetricValueExtractor.html      |   179 +
 .../class-use/MetricsEmittingExecutorService.html  |   117 +
 .../class-use/MetricsEmittingQueryRunner.html      |   157 +
 .../io/druid/query/class-use/NoopQueryRunner.html  |   117 +
 .../query/class-use/PostProcessingOperator.html    |   161 +
 .../druid/query/class-use/PrioritizedCallable.html |   170 +
 ...torService.PrioritizedListenableFutureTask.html |   189 +
 .../class-use/PrioritizedExecutorService.html      |   158 +
 .../druid/query/class-use/PrioritizedRunnable.html |   171 +
 api/0.6.174/io/druid/query/class-use/Queries.html  |   117 +
 api/0.6.174/io/druid/query/class-use/Query.html    |  1184 ++
 .../io/druid/query/class-use/QueryCacheHelper.html |   117 +
 .../io/druid/query/class-use/QueryConfig.html      |   242 +
 .../io/druid/query/class-use/QueryDataSource.html  |   117 +
 .../query/class-use/QueryInterruptedException.html |   117 +
 .../io/druid/query/class-use/QueryMetricUtil.html  |   117 +
 .../io/druid/query/class-use/QueryRunner.html      |  1216 ++
 .../druid/query/class-use/QueryRunnerFactory.html  |   360 +
 .../class-use/QueryRunnerFactoryConglomerate.html  |   337 +
 .../druid/query/class-use/QueryRunnerHelper.html   |   117 +
 .../druid/query/class-use/QuerySegmentWalker.html  |   332 +
 .../io/druid/query/class-use/QueryToolChest.html   |   518 +
 .../query/class-use/QueryToolChestWarehouse.html   |   229 +
 .../io/druid/query/class-use/QueryWatcher.html     |   375 +
 .../ReferenceCountingSegmentQueryRunner.html       |   117 +
 .../query/class-use/ReflectionLoaderThingy.html    |   159 +
 .../ReflectionQueryToolChestWarehouse.html         |   117 +
 api/0.6.174/io/druid/query/class-use/Result.html   |  1004 ++
 .../ResultGranularTimestampComparator.html         |   117 +
 .../query/class-use/ResultMergeQueryRunner.html    |   117 +
 .../druid/query/class-use/SubqueryQueryRunner.html |   117 +
 .../io/druid/query/class-use/TableDataSource.html  |   181 +
 .../io/druid/query/class-use/TimewarpOperator.html |   117 +
 .../io/druid/query/class-use/UnionDataSource.html  |   117 +
 .../io/druid/query/class-use/UnionQueryRunner.html |   117 +
 .../query/dimension/DefaultDimensionSpec.html      |   392 +
 .../io/druid/query/dimension/DimensionSpec.html    |   264 +
 .../query/dimension/ExtractionDimensionSpec.html   |   390 +
 .../druid/query/dimension/LegacyDimensionSpec.html |   246 +
 .../dimension/class-use/DefaultDimensionSpec.html  |   157 +
 .../query/dimension/class-use/DimensionSpec.html   |   429 +
 .../class-use/ExtractionDimensionSpec.html         |   117 +
 .../dimension/class-use/LegacyDimensionSpec.html   |   117 +
 .../io/druid/query/dimension/package-frame.html    |    26 +
 .../io/druid/query/dimension/package-summary.html  |   158 +
 .../io/druid/query/dimension/package-tree.html     |   139 +
 .../io/druid/query/dimension/package-use.html      |   229 +
 .../io/druid/query/extraction/DimExtractionFn.html |   238 +
 .../extraction/JavascriptDimExtractionFn.html      |   331 +
 .../query/extraction/PartialDimExtractionFn.html   |   331 +
 .../query/extraction/RegexDimExtractionFn.html     |   331 +
 .../extraction/SearchQuerySpecDimExtractionFn.html |   331 +
 .../query/extraction/TimeDimExtractionFn.html      |   346 +
 .../extraction/class-use/DimExtractionFn.html      |   273 +
 .../class-use/JavascriptDimExtractionFn.html       |   117 +
 .../class-use/PartialDimExtractionFn.html          |   117 +
 .../extraction/class-use/RegexDimExtractionFn.html |   117 +
 .../class-use/SearchQuerySpecDimExtractionFn.html  |   117 +
 .../extraction/class-use/TimeDimExtractionFn.html  |   117 +
 .../io/druid/query/extraction/package-frame.html   |    28 +
 .../io/druid/query/extraction/package-summary.html |   166 +
 .../io/druid/query/extraction/package-tree.html    |   138 +
 .../io/druid/query/extraction/package-use.html     |   207 +
 .../io/druid/query/filter/AndDimFilter.html        |   331 +
 .../io/druid/query/filter/BitmapIndexSelector.html |   268 +
 api/0.6.174/io/druid/query/filter/DimFilter.html   |   212 +
 api/0.6.174/io/druid/query/filter/DimFilters.html  |   355 +
 .../io/druid/query/filter/ExtractionDimFilter.html |   327 +
 api/0.6.174/io/druid/query/filter/Filter.html      |   238 +
 .../io/druid/query/filter/JavaScriptDimFilter.html |   312 +
 .../io/druid/query/filter/NoopDimFilter.html       |   267 +
 .../io/druid/query/filter/NotDimFilter.html        |   331 +
 api/0.6.174/io/druid/query/filter/OrDimFilter.html |   331 +
 .../io/druid/query/filter/RegexDimFilter.html      |   312 +
 .../druid/query/filter/SearchQueryDimFilter.html   |   312 +
 .../io/druid/query/filter/SelectorDimFilter.html   |   346 +
 .../io/druid/query/filter/SpatialDimFilter.html    |   346 +
 .../io/druid/query/filter/ValueMatcher.html        |   212 +
 .../io/druid/query/filter/ValueMatcherFactory.html |   240 +
 .../druid/query/filter/class-use/AndDimFilter.html |   183 +
 .../filter/class-use/BitmapIndexSelector.html      |   225 +
 .../io/druid/query/filter/class-use/DimFilter.html |   748 +
 .../druid/query/filter/class-use/DimFilters.html   |   117 +
 .../filter/class-use/ExtractionDimFilter.html      |   117 +
 .../io/druid/query/filter/class-use/Filter.html    |   320 +
 .../filter/class-use/JavaScriptDimFilter.html      |   117 +
 .../query/filter/class-use/NoopDimFilter.html      |   157 +
 .../druid/query/filter/class-use/NotDimFilter.html |   179 +
 .../druid/query/filter/class-use/OrDimFilter.html  |   183 +
 .../query/filter/class-use/RegexDimFilter.html     |   158 +
 .../filter/class-use/SearchQueryDimFilter.html     |   117 +
 .../query/filter/class-use/SelectorDimFilter.html  |   180 +
 .../query/filter/class-use/SpatialDimFilter.html   |   117 +
 .../druid/query/filter/class-use/ValueMatcher.html |   289 +
 .../filter/class-use/ValueMatcherFactory.html      |   203 +
 .../io/druid/query/filter/package-frame.html       |    38 +
 .../io/druid/query/filter/package-summary.html     |   206 +
 .../io/druid/query/filter/package-tree.html        |   148 +
 api/0.6.174/io/druid/query/filter/package-use.html |   438 +
 .../druid/query/groupby/GroupByQuery.Builder.html  |   656 +
 .../io/druid/query/groupby/GroupByQuery.html       |   555 +
 .../io/druid/query/groupby/GroupByQueryConfig.html |   309 +
 .../io/druid/query/groupby/GroupByQueryEngine.html |   263 +
 .../io/druid/query/groupby/GroupByQueryHelper.html |   273 +
 .../query/groupby/GroupByQueryQueryToolChest.html  |   405 +
 .../query/groupby/GroupByQueryRunnerFactory.html   |   310 +
 .../groupby/class-use/GroupByQuery.Builder.html    |   283 +
 .../query/groupby/class-use/GroupByQuery.html      |   225 +
 .../groupby/class-use/GroupByQueryConfig.html      |   204 +
 .../groupby/class-use/GroupByQueryEngine.html      |   163 +
 .../groupby/class-use/GroupByQueryHelper.html      |   117 +
 .../class-use/GroupByQueryQueryToolChest.html      |   158 +
 .../class-use/GroupByQueryRunnerFactory.html       |   117 +
 .../query/groupby/having/AlwaysHavingSpec.html     |   305 +
 .../druid/query/groupby/having/AndHavingSpec.html  |   369 +
 .../query/groupby/having/EqualToHavingSpec.html    |   388 +
 .../groupby/having/GreaterThanHavingSpec.html      |   388 +
 .../io/druid/query/groupby/having/HavingSpec.html  |   281 +
 .../query/groupby/having/LessThanHavingSpec.html   |   388 +
 .../query/groupby/having/NeverHavingSpec.html      |   305 +
 .../druid/query/groupby/having/NotHavingSpec.html  |   369 +
 .../druid/query/groupby/having/OrHavingSpec.html   |   369 +
 .../groupby/having/class-use/AlwaysHavingSpec.html |   117 +
 .../groupby/having/class-use/AndHavingSpec.html    |   117 +
 .../having/class-use/EqualToHavingSpec.html        |   117 +
 .../having/class-use/GreaterThanHavingSpec.html    |   117 +
 .../query/groupby/having/class-use/HavingSpec.html |   329 +
 .../having/class-use/LessThanHavingSpec.html       |   117 +
 .../groupby/having/class-use/NeverHavingSpec.html  |   117 +
 .../groupby/having/class-use/NotHavingSpec.html    |   117 +
 .../groupby/having/class-use/OrHavingSpec.html     |   117 +
 .../druid/query/groupby/having/package-frame.html  |    31 +
 .../query/groupby/having/package-summary.html      |   196 +
 .../druid/query/groupby/having/package-tree.html   |   141 +
 .../io/druid/query/groupby/having/package-use.html |   173 +
 .../query/groupby/orderby/DefaultLimitSpec.html    |   384 +
 .../io/druid/query/groupby/orderby/LimitSpec.html  |   242 +
 .../druid/query/groupby/orderby/NoopLimitSpec.html |   356 +
 .../orderby/OrderByColumnSpec.Direction.html       |   327 +
 .../query/groupby/orderby/OrderByColumnSpec.html   |   400 +
 .../io/druid/query/groupby/orderby/TopNSorter.html |   271 +
 .../orderby/class-use/DefaultLimitSpec.html        |   117 +
 .../query/groupby/orderby/class-use/LimitSpec.html |   259 +
 .../groupby/orderby/class-use/NoopLimitSpec.html   |   117 +
 .../class-use/OrderByColumnSpec.Direction.html     |   209 +
 .../orderby/class-use/OrderByColumnSpec.html       |   220 +
 .../groupby/orderby/class-use/TopNSorter.html      |   117 +
 .../druid/query/groupby/orderby/package-frame.html |    31 +
 .../query/groupby/orderby/package-summary.html     |   180 +
 .../druid/query/groupby/orderby/package-tree.html  |   149 +
 .../druid/query/groupby/orderby/package-use.html   |   181 +
 .../io/druid/query/groupby/package-frame.html      |    26 +
 .../io/druid/query/groupby/package-summary.html    |   159 +
 .../io/druid/query/groupby/package-tree.html       |   148 +
 .../io/druid/query/groupby/package-use.html        |   181 +
 .../io/druid/query/metadata/SegmentAnalyzer.html   |   310 +
 .../SegmentMetadataQueryQueryToolChest.html        |   383 +
 .../SegmentMetadataQueryRunnerFactory.html         |   306 +
 .../query/metadata/class-use/SegmentAnalyzer.html  |   117 +
 .../SegmentMetadataQueryQueryToolChest.html        |   156 +
 .../SegmentMetadataQueryRunnerFactory.html         |   117 +
 .../metadata/metadata/AllColumnIncluderator.html   |   333 +
 .../query/metadata/metadata/ColumnAnalysis.html    |   359 +
 .../metadata/metadata/ColumnIncluderator.html      |   287 +
 .../metadata/metadata/ListColumnIncluderator.html  |   312 +
 .../metadata/metadata/NoneColumnIncluderator.html  |   299 +
 .../query/metadata/metadata/SegmentAnalysis.html   |   333 +
 .../metadata/metadata/SegmentMetadataQuery.html    |   416 +
 .../metadata/class-use/AllColumnIncluderator.html  |   117 +
 .../metadata/class-use/ColumnAnalysis.html         |   248 +
 .../metadata/class-use/ColumnIncluderator.html     |   215 +
 .../metadata/class-use/ListColumnIncluderator.html |   117 +
 .../metadata/class-use/NoneColumnIncluderator.html |   117 +
 .../metadata/class-use/SegmentAnalysis.html        |   256 +
 .../metadata/class-use/SegmentMetadataQuery.html   |   205 +
 .../query/metadata/metadata/package-frame.html     |    29 +
 .../query/metadata/metadata/package-summary.html   |   170 +
 .../query/metadata/metadata/package-tree.html      |   143 +
 .../druid/query/metadata/metadata/package-use.html |   203 +
 .../io/druid/query/metadata/package-frame.html     |    22 +
 .../io/druid/query/metadata/package-summary.html   |   143 +
 .../io/druid/query/metadata/package-tree.html      |   136 +
 .../io/druid/query/metadata/package-use.html       |   150 +
 api/0.6.174/io/druid/query/package-frame.html      |    90 +
 api/0.6.174/io/druid/query/package-summary.html    |   447 +
 api/0.6.174/io/druid/query/package-tree.html       |   243 +
 api/0.6.174/io/druid/query/package-use.html        |  1033 ++
 .../query/search/BySegmentSearchResultValue.html   |   351 +
 .../io/druid/query/search/SearchBinaryFn.html      |   273 +
 .../query/search/SearchQueryQueryToolChest.html    |   414 +
 .../io/druid/query/search/SearchQueryRunner.html   |   267 +
 .../query/search/SearchQueryRunnerFactory.html     |   306 +
 .../io/druid/query/search/SearchResultValue.html   |   335 +
 .../class-use/BySegmentSearchResultValue.html      |   117 +
 .../query/search/class-use/SearchBinaryFn.html     |   117 +
 .../class-use/SearchQueryQueryToolChest.html       |   156 +
 .../query/search/class-use/SearchQueryRunner.html  |   117 +
 .../search/class-use/SearchQueryRunnerFactory.html |   117 +
 .../query/search/class-use/SearchResultValue.html  |   331 +
 .../io/druid/query/search/package-frame.html       |    25 +
 .../io/druid/query/search/package-summary.html     |   155 +
 .../io/druid/query/search/package-tree.html        |   142 +
 api/0.6.174/io/druid/query/search/package-use.html |   191 +
 .../search/search/FragmentSearchQuerySpec.html     |   348 +
 .../search/InsensitiveContainsSearchQuerySpec.html |   348 +
 .../search/search/LexicographicSearchSortSpec.html |   301 +
 .../io/druid/query/search/search/SearchHit.html    |   346 +
 .../io/druid/query/search/search/SearchQuery.html  |   506 +
 .../query/search/search/SearchQueryConfig.html     |   270 +
 .../druid/query/search/search/SearchQuerySpec.html |   225 +
 .../druid/query/search/search/SearchSortSpec.html  |   212 +
 .../query/search/search/StrlenSearchSortSpec.html  |   284 +
 .../search/class-use/FragmentSearchQuerySpec.html  |   117 +
 .../InsensitiveContainsSearchQuerySpec.html        |   117 +
 .../class-use/LexicographicSearchSortSpec.html     |   117 +
 .../query/search/search/class-use/SearchHit.html   |   219 +
 .../query/search/search/class-use/SearchQuery.html |   248 +
 .../search/search/class-use/SearchQueryConfig.html |   155 +
 .../search/search/class-use/SearchQuerySpec.html   |   303 +
 .../search/search/class-use/SearchSortSpec.html    |   215 +
 .../search/class-use/StrlenSearchSortSpec.html     |   117 +
 .../druid/query/search/search/package-frame.html   |    31 +
 .../druid/query/search/search/package-summary.html |   178 +
 .../io/druid/query/search/search/package-tree.html |   149 +
 .../io/druid/query/search/search/package-use.html  |   266 +
 api/0.6.174/io/druid/query/select/EventHolder.html |   389 +
 api/0.6.174/io/druid/query/select/PagingSpec.html  |   303 +
 .../io/druid/query/select/SelectBinaryFn.html      |   271 +
 api/0.6.174/io/druid/query/select/SelectQuery.html |   478 +
 .../io/druid/query/select/SelectQueryEngine.html   |   260 +
 .../query/select/SelectQueryQueryToolChest.html    |   416 +
 .../query/select/SelectQueryRunnerFactory.html     |   308 +
 .../io/druid/query/select/SelectResultValue.html   |   346 +
 .../query/select/SelectResultValueBuilder.html     |   273 +
 .../druid/query/select/class-use/EventHolder.html  |   186 +
 .../druid/query/select/class-use/PagingSpec.html   |   201 +
 .../query/select/class-use/SelectBinaryFn.html     |   117 +
 .../druid/query/select/class-use/SelectQuery.html  |   227 +
 .../query/select/class-use/SelectQueryEngine.html  |   157 +
 .../class-use/SelectQueryQueryToolChest.html       |   157 +
 .../select/class-use/SelectQueryRunnerFactory.html |   117 +
 .../query/select/class-use/SelectResultValue.html  |   262 +
 .../select/class-use/SelectResultValueBuilder.html |   117 +
 .../io/druid/query/select/package-frame.html       |    28 +
 .../io/druid/query/select/package-summary.html     |   167 +
 .../io/druid/query/select/package-tree.html        |   146 +
 api/0.6.174/io/druid/query/select/package-use.html |   187 +
 .../io/druid/query/spec/LegacySegmentSpec.html     |   246 +
 .../query/spec/MultipleIntervalSegmentSpec.html    |   341 +
 .../query/spec/MultipleSpecificSegmentSpec.html    |   350 +
 .../io/druid/query/spec/QuerySegmentSpec.html      |   227 +
 .../io/druid/query/spec/QuerySegmentSpecs.html     |   284 +
 .../query/spec/SpecificSegmentQueryRunner.html     |   269 +
 .../io/druid/query/spec/SpecificSegmentSpec.html   |   320 +
 .../query/spec/class-use/LegacySegmentSpec.html    |   117 +
 .../class-use/MultipleIntervalSegmentSpec.html     |   157 +
 .../class-use/MultipleSpecificSegmentSpec.html     |   117 +
 .../query/spec/class-use/QuerySegmentSpec.html     |   577 +
 .../query/spec/class-use/QuerySegmentSpecs.html    |   117 +
 .../spec/class-use/SpecificSegmentQueryRunner.html |   117 +
 .../query/spec/class-use/SpecificSegmentSpec.html  |   117 +
 api/0.6.174/io/druid/query/spec/package-frame.html |    29 +
 .../io/druid/query/spec/package-summary.html       |   170 +
 api/0.6.174/io/druid/query/spec/package-tree.html  |   142 +
 api/0.6.174/io/druid/query/spec/package-use.html   |   305 +
 .../query/timeboundary/TimeBoundaryQuery.html      |   517 +
 .../TimeBoundaryQueryQueryToolChest.html           |   415 +
 .../TimeBoundaryQueryRunnerFactory.html            |   304 +
 .../timeboundary/TimeBoundaryResultValue.html      |   335 +
 .../timeboundary/class-use/TimeBoundaryQuery.html  |   227 +
 .../class-use/TimeBoundaryQueryQueryToolChest.html |   117 +
 .../class-use/TimeBoundaryQueryRunnerFactory.html  |   117 +
 .../class-use/TimeBoundaryResultValue.html         |   266 +
 .../io/druid/query/timeboundary/package-frame.html |    23 +
 .../druid/query/timeboundary/package-summary.html  |   147 +
 .../io/druid/query/timeboundary/package-tree.html  |   141 +
 .../io/druid/query/timeboundary/package-use.html   |   175 +
 .../druid/query/timeseries/TimeseriesBinaryFn.html |   271 +
 .../io/druid/query/timeseries/TimeseriesQuery.html |   463 +
 .../query/timeseries/TimeseriesQueryEngine.html    |   260 +
 .../timeseries/TimeseriesQueryQueryToolChest.html  |   433 +
 .../timeseries/TimeseriesQueryRunnerFactory.html   |   308 +
 .../query/timeseries/TimeseriesResultBuilder.html  |   284 +
 .../query/timeseries/TimeseriesResultValue.html    |   274 +
 .../timeseries/class-use/TimeseriesBinaryFn.html   |   117 +
 .../timeseries/class-use/TimeseriesQuery.html      |   245 +
 .../class-use/TimeseriesQueryEngine.html           |   157 +
 .../class-use/TimeseriesQueryQueryToolChest.html   |   157 +
 .../class-use/TimeseriesQueryRunnerFactory.html    |   117 +
 .../class-use/TimeseriesResultBuilder.html         |   161 +
 .../class-use/TimeseriesResultValue.html           |   272 +
 .../io/druid/query/timeseries/package-frame.html   |    26 +
 .../io/druid/query/timeseries/package-summary.html |   159 +
 .../io/druid/query/timeseries/package-tree.html    |   148 +
 .../io/druid/query/timeseries/package-use.html     |   181 +
 .../topn/AggregateTopNMetricFirstAlgorithm.html    |   326 +
 .../query/topn/AlphaNumericTopNMetricSpec.html     |   337 +
 .../BaseTopNAlgorithm.AggregatorArrayProvider.html |   282 +
 .../topn/BaseTopNAlgorithm.BaseArrayProvider.html  |   350 +
 .../io/druid/query/topn/BaseTopNAlgorithm.html     |   479 +
 .../druid/query/topn/BySegmentTopNResultValue.html |   351 +
 .../query/topn/DimExtractionTopNAlgorithm.html     |   443 +
 .../io/druid/query/topn/DimValHolder.Builder.html  |   314 +
 api/0.6.174/io/druid/query/topn/DimValHolder.html  |   322 +
 .../topn/DimensionAndMetricValueExtractor.html     |   334 +
 .../druid/query/topn/InvertedTopNMetricSpec.html   |   447 +
 .../io/druid/query/topn/LegacyTopNMetricSpec.html  |   246 +
 .../query/topn/LexicographicTopNMetricSpec.html    |   468 +
 .../io/druid/query/topn/NumericTopNMetricSpec.html |   468 +
 ...oledTopNAlgorithm.PooledTopNParams.Builder.html |   379 +
 .../topn/PooledTopNAlgorithm.PooledTopNParams.html |   374 +
 .../io/druid/query/topn/PooledTopNAlgorithm.html   |   456 +
 api/0.6.174/io/druid/query/topn/TopNAlgorithm.html |   312 +
 .../io/druid/query/topn/TopNAlgorithmSelector.html |   325 +
 api/0.6.174/io/druid/query/topn/TopNBinaryFn.html  |   281 +
 .../query/topn/TopNLexicographicResultBuilder.html |   332 +
 api/0.6.174/io/druid/query/topn/TopNMapFn.html     |   276 +
 .../io/druid/query/topn/TopNMetricSpec.html        |   317 +
 .../io/druid/query/topn/TopNMetricSpecBuilder.html |   264 +
 .../druid/query/topn/TopNNumericResultBuilder.html |   334 +
 api/0.6.174/io/druid/query/topn/TopNParams.html    |   309 +
 api/0.6.174/io/druid/query/topn/TopNQuery.html     |   563 +
 .../io/druid/query/topn/TopNQueryBuilder.html      |   687 +
 .../io/druid/query/topn/TopNQueryConfig.html       |   270 +
 .../io/druid/query/topn/TopNQueryEngine.html       |   260 +
 .../druid/query/topn/TopNQueryQueryToolChest.html  |   463 +
 .../druid/query/topn/TopNQueryRunnerFactory.html   |   308 +
 .../io/druid/query/topn/TopNResultBuilder.html     |   255 +
 .../io/druid/query/topn/TopNResultMerger.html      |   246 +
 .../io/druid/query/topn/TopNResultValue.html       |   335 +
 .../AggregateTopNMetricFirstAlgorithm.html         |   117 +
 .../topn/class-use/AlphaNumericTopNMetricSpec.html |   117 +
 .../BaseTopNAlgorithm.AggregatorArrayProvider.html |   117 +
 .../BaseTopNAlgorithm.BaseArrayProvider.html       |   157 +
 .../query/topn/class-use/BaseTopNAlgorithm.html    |   161 +
 .../topn/class-use/BySegmentTopNResultValue.html   |   117 +
 .../topn/class-use/DimExtractionTopNAlgorithm.html |   117 +
 .../query/topn/class-use/DimValHolder.Builder.html |   169 +
 .../druid/query/topn/class-use/DimValHolder.html   |   178 +
 .../DimensionAndMetricValueExtractor.html          |   186 +
 .../topn/class-use/InvertedTopNMetricSpec.html     |   117 +
 .../query/topn/class-use/LegacyTopNMetricSpec.html |   117 +
 .../class-use/LexicographicTopNMetricSpec.html     |   157 +
 .../topn/class-use/NumericTopNMetricSpec.html      |   157 +
 ...oledTopNAlgorithm.PooledTopNParams.Builder.html |   193 +
 .../PooledTopNAlgorithm.PooledTopNParams.html      |   199 +
 .../query/topn/class-use/PooledTopNAlgorithm.html  |   117 +
 .../druid/query/topn/class-use/TopNAlgorithm.html  |   181 +
 .../topn/class-use/TopNAlgorithmSelector.html      |   173 +
 .../druid/query/topn/class-use/TopNBinaryFn.html   |   117 +
 .../class-use/TopNLexicographicResultBuilder.html  |   117 +
 .../io/druid/query/topn/class-use/TopNMapFn.html   |   117 +
 .../druid/query/topn/class-use/TopNMetricSpec.html |   239 +
 .../topn/class-use/TopNMetricSpecBuilder.html      |   246 +
 .../topn/class-use/TopNNumericResultBuilder.html   |   159 +
 .../io/druid/query/topn/class-use/TopNParams.html  |   250 +
 .../io/druid/query/topn/class-use/TopNQuery.html   |   263 +
 .../query/topn/class-use/TopNQueryBuilder.html     |   254 +
 .../query/topn/class-use/TopNQueryConfig.html      |   155 +
 .../query/topn/class-use/TopNQueryEngine.html      |   117 +
 .../topn/class-use/TopNQueryQueryToolChest.html    |   157 +
 .../topn/class-use/TopNQueryRunnerFactory.html     |   117 +
 .../query/topn/class-use/TopNResultBuilder.html    |   283 +
 .../query/topn/class-use/TopNResultMerger.html     |   174 +
 .../query/topn/class-use/TopNResultValue.html      |   332 +
 api/0.6.174/io/druid/query/topn/package-frame.html |    56 +
 .../io/druid/query/topn/package-summary.html       |   280 +
 api/0.6.174/io/druid/query/topn/package-tree.html  |   201 +
 api/0.6.174/io/druid/query/topn/package-use.html   |   215 +
 .../segment/Capabilities.CapabilitiesBuilder.html  |   241 +
 api/0.6.174/io/druid/segment/Capabilities.html     |   256 +
 api/0.6.174/io/druid/segment/ColumnSelector.html   |   229 +
 .../segment/ColumnSelectorBitmapIndexSelector.html |   339 +
 .../io/druid/segment/ColumnSelectorFactory.html    |   252 +
 api/0.6.174/io/druid/segment/CompressedPools.html  |   284 +
 api/0.6.174/io/druid/segment/ConciseOffset.html    |   320 +
 api/0.6.174/io/druid/segment/Cursor.html           |   272 +
 api/0.6.174/io/druid/segment/CursorFactory.html    |   220 +
 .../io/druid/segment/DimensionSelector.html        |   292 +
 .../io/druid/segment/FloatColumnSelector.html      |   211 +
 .../druid/segment/FloatMetricColumnSerializer.html |   311 +
 .../io/druid/segment/IncrementalIndexSegment.html  |   341 +
 .../segment/IndexIO.DefaultIndexIOHandler.html     |   291 +
 .../io/druid/segment/IndexIO.IndexIOHandler.html   |   219 +
 api/0.6.174/io/druid/segment/IndexIO.html          |   519 +
 .../segment/IndexMerger.ProgressIndicator.html     |   212 +
 api/0.6.174/io/druid/segment/IndexMerger.html      |   477 +
 api/0.6.174/io/druid/segment/IndexableAdapter.html |   305 +
 api/0.6.174/io/druid/segment/MMappedIndex.html     |   463 +
 .../io/druid/segment/MMappedIndexAdapter.html      |   388 +
 .../io/druid/segment/MetricColumnSerializer.html   |   247 +
 .../io/druid/segment/MetricHolder.MetricType.html  |   327 +
 api/0.6.174/io/druid/segment/MetricHolder.html     |   434 +
 .../io/druid/segment/ObjectColumnSelector.html     |   221 +
 api/0.6.174/io/druid/segment/QueryableIndex.html   |   284 +
 .../segment/QueryableIndexIndexableAdapter.html    |   388 +
 .../io/druid/segment/QueryableIndexSegment.html    |   341 +
 .../segment/QueryableIndexStorageAdapter.html      |   407 +
 .../io/druid/segment/ReferenceCountingSegment.html |   391 +
 .../druid/segment/ReferenceCountingSequence.html   |   280 +
 api/0.6.174/io/druid/segment/Rowboat.html          |   385 +
 .../segment/RowboatFilteringIndexAdapter.html      |   390 +
 api/0.6.174/io/druid/segment/Segment.html          |   263 +
 .../io/druid/segment/SimpleQueryableIndex.html     |   385 +
 api/0.6.174/io/druid/segment/StorageAdapter.html   |   315 +
 .../io/druid/segment/TimestampColumnSelector.html  |   208 +
 .../Capabilities.CapabilitiesBuilder.html          |   161 +
 .../io/druid/segment/class-use/Capabilities.html   |   226 +
 .../io/druid/segment/class-use/ColumnSelector.html |   181 +
 .../ColumnSelectorBitmapIndexSelector.html         |   117 +
 .../segment/class-use/ColumnSelectorFactory.html   |   409 +
 .../druid/segment/class-use/CompressedPools.html   |   117 +
 .../io/druid/segment/class-use/ConciseOffset.html  |   117 +
 api/0.6.174/io/druid/segment/class-use/Cursor.html |   323 +
 .../io/druid/segment/class-use/CursorFactory.html  |   192 +
 .../druid/segment/class-use/DimensionSelector.html |   292 +
 .../segment/class-use/FloatColumnSelector.html     |   241 +
 .../class-use/FloatMetricColumnSerializer.html     |   117 +
 .../segment/class-use/IncrementalIndexSegment.html |   117 +
 .../class-use/IndexIO.DefaultIndexIOHandler.html   |   117 +
 .../segment/class-use/IndexIO.IndexIOHandler.html  |   170 +
 .../io/druid/segment/class-use/IndexIO.html        |   117 +
 .../class-use/IndexMerger.ProgressIndicator.html   |   180 +
 .../io/druid/segment/class-use/IndexMerger.html    |   117 +
 .../druid/segment/class-use/IndexableAdapter.html  |   232 +
 .../io/druid/segment/class-use/MMappedIndex.html   |   178 +
 .../segment/class-use/MMappedIndexAdapter.html     |   117 +
 .../segment/class-use/MetricColumnSerializer.html  |   179 +
 .../segment/class-use/MetricHolder.MetricType.html |   170 +
 .../io/druid/segment/class-use/MetricHolder.html   |   228 +
 .../segment/class-use/ObjectColumnSelector.html    |   237 +
 .../io/druid/segment/class-use/QueryableIndex.html |   274 +
 .../class-use/QueryableIndexIndexableAdapter.html  |   117 +
 .../segment/class-use/QueryableIndexSegment.html   |   117 +
 .../class-use/QueryableIndexStorageAdapter.html    |   117 +
 .../class-use/ReferenceCountingSegment.html        |   177 +
 .../class-use/ReferenceCountingSequence.html       |   117 +
 .../io/druid/segment/class-use/Rowboat.html        |   216 +
 .../class-use/RowboatFilteringIndexAdapter.html    |   117 +
 .../io/druid/segment/class-use/Segment.html        |   454 +
 .../segment/class-use/SimpleQueryableIndex.html    |   117 +
 .../io/druid/segment/class-use/StorageAdapter.html |   321 +
 .../segment/class-use/TimestampColumnSelector.html |   157 +
 .../io/druid/segment/column/AbstractColumn.html    |   380 +
 .../io/druid/segment/column/BitmapIndex.html       |   260 +
 api/0.6.174/io/druid/segment/column/Column.html    |   303 +
 .../io/druid/segment/column/ColumnBuilder.html     |   362 +
 .../druid/segment/column/ColumnCapabilities.html   |   277 +
 .../segment/column/ColumnCapabilitiesImpl.html     |   430 +
 .../io/druid/segment/column/ColumnConfig.html      |   212 +
 .../segment/column/ColumnDescriptor.Builder.html   |   301 +
 .../io/druid/segment/column/ColumnDescriptor.html  |   364 +
 .../io/druid/segment/column/ComplexColumn.html     |   250 +
 .../io/druid/segment/column/ComplexColumnImpl.html |   314 +
 .../segment/column/DictionaryEncodedColumn.html    |   302 +
 .../io/druid/segment/column/FloatColumn.html       |   312 +
 .../io/druid/segment/column/GenericColumn.html     |   328 +
 .../druid/segment/column/IndexedComplexColumn.html |   324 +
 .../segment/column/IndexedFloatsGenericColumn.html |   424 +
 .../segment/column/IndexedLongsGenericColumn.html  |   424 +
 .../io/druid/segment/column/LongColumn.html        |   312 +
 .../io/druid/segment/column/RunLengthColumn.html   |   208 +
 .../column/SimpleDictionaryEncodedColumn.html      |   394 +
 .../io/druid/segment/column/SpatialIndex.html      |   208 +
 api/0.6.174/io/druid/segment/column/ValueType.html |   347 +
 .../segment/column/class-use/AbstractColumn.html   |   165 +
 .../segment/column/class-use/BitmapIndex.html      |   196 +
 .../io/druid/segment/column/class-use/Column.html  |   287 +
 .../segment/column/class-use/ColumnBuilder.html    |   286 +
 .../column/class-use/ColumnCapabilities.html       |   186 +
 .../column/class-use/ColumnCapabilitiesImpl.html   |   177 +
 .../segment/column/class-use/ColumnConfig.html     |   250 +
 .../column/class-use/ColumnDescriptor.Builder.html |   169 +
 .../segment/column/class-use/ColumnDescriptor.html |   157 +
 .../segment/column/class-use/ComplexColumn.html    |   213 +
 .../column/class-use/ComplexColumnImpl.html        |   117 +
 .../column/class-use/DictionaryEncodedColumn.html  |   209 +
 .../segment/column/class-use/FloatColumn.html      |   117 +
 .../segment/column/class-use/GenericColumn.html    |   225 +
 .../column/class-use/IndexedComplexColumn.html     |   117 +
 .../class-use/IndexedFloatsGenericColumn.html      |   117 +
 .../class-use/IndexedLongsGenericColumn.html       |   117 +
 .../druid/segment/column/class-use/LongColumn.html |   117 +
 .../segment/column/class-use/RunLengthColumn.html  |   174 +
 .../class-use/SimpleDictionaryEncodedColumn.html   |   117 +
 .../segment/column/class-use/SpatialIndex.html     |   196 +
 .../druid/segment/column/class-use/ValueType.html  |   224 +
 .../io/druid/segment/column/package-frame.html     |    47 +
 .../io/druid/segment/column/package-summary.html   |   241 +
 .../io/druid/segment/column/package-tree.html      |   176 +
 .../io/druid/segment/column/package-use.html       |   327 +
 .../druid/segment/data/ArrayBasedIndexedInts.html  |   301 +
 .../io/druid/segment/data/ArrayBasedOffset.html    |   334 +
 .../io/druid/segment/data/ArrayIndexed.html        |   341 +
 .../druid/segment/data/ByteBufferSerializer.html   |   282 +
 .../io/druid/segment/data/ByteBufferWriter.html    |   322 +
 .../segment/data/CacheableObjectStrategy.html      |   195 +
 .../data/CompressedFloatBufferObjectStrategy.html  |   262 +
 .../data/CompressedFloatsIndexedSupplier.html      |   399 +
 .../data/CompressedFloatsSupplierSerializer.html   |   350 +
 .../data/CompressedLongBufferObjectStrategy.html   |   262 +
 .../data/CompressedLongsIndexedSupplier.html       |   372 +
 .../data/CompressedLongsSupplierSerializer.html    |   328 +
 .../CompressedObjectStrategy.BufferConverter.html  |   259 +
 .../segment/data/CompressedObjectStrategy.html     |   364 +
 .../segment/data/ConciseCompressedIndexedInts.html |   367 +
 .../io/druid/segment/data/EmptyIndexedInts.html    |   337 +
 .../io/druid/segment/data/GenericIndexed.html      |   479 +
 .../druid/segment/data/GenericIndexedWriter.html   |   323 +
 api/0.6.174/io/druid/segment/data/IOPeon.html      |   247 +
 .../segment/data/InMemoryCompressedFloats.html     |   365 +
 .../segment/data/InMemoryCompressedLongs.html      |   403 +
 api/0.6.174/io/druid/segment/data/Indexed.html     |   265 +
 .../io/druid/segment/data/IndexedFloats.html       |   253 +
 api/0.6.174/io/druid/segment/data/IndexedInts.html |   238 +
 .../io/druid/segment/data/IndexedIntsIterator.html |   301 +
 .../io/druid/segment/data/IndexedIterable.html     |   280 +
 api/0.6.174/io/druid/segment/data/IndexedList.html |   348 +
 .../io/druid/segment/data/IndexedLongs.html        |   283 +
 .../io/druid/segment/data/IndexedRTree.html        |   316 +
 api/0.6.174/io/druid/segment/data/Indexedids.html  |   258 +
 .../druid/segment/data/IntBufferIndexedInts.html   |   393 +
 .../io/druid/segment/data/IntersectingOffset.html  |   322 +
 api/0.6.174/io/druid/segment/data/ListIndexed.html |   339 +
 .../io/druid/segment/data/ObjectStrategy.html      |   266 +
 api/0.6.174/io/druid/segment/data/Offset.html      |   252 +
 .../io/druid/segment/data/ReadableOffset.html      |   221 +
 .../io/druid/segment/data/StartLimitedOffset.html  |   322 +
 .../io/druid/segment/data/TmpFileIOPeon.html       |   307 +
 .../io/druid/segment/data/UnioningOffset.html      |   322 +
 .../io/druid/segment/data/VSizeIndexed.html        |   356 +
 .../io/druid/segment/data/VSizeIndexedInts.html    |   457 +
 .../io/druid/segment/data/VSizeIndexedWriter.html  |   321 +
 .../data/class-use/ArrayBasedIndexedInts.html      |   117 +
 .../segment/data/class-use/ArrayBasedOffset.html   |   117 +
 .../druid/segment/data/class-use/ArrayIndexed.html |   117 +
 .../data/class-use/ByteBufferSerializer.html       |   117 +
 .../segment/data/class-use/ByteBufferWriter.html   |   117 +
 .../data/class-use/CacheableObjectStrategy.html    |   117 +
 .../CompressedFloatBufferObjectStrategy.html       |   157 +
 .../class-use/CompressedFloatsIndexedSupplier.html |   245 +
 .../CompressedFloatsSupplierSerializer.html        |   190 +
 .../CompressedLongBufferObjectStrategy.html        |   157 +
 .../class-use/CompressedLongsIndexedSupplier.html  |   263 +
 .../CompressedLongsSupplierSerializer.html         |   159 +
 .../CompressedObjectStrategy.BufferConverter.html  |   156 +
 .../data/class-use/CompressedObjectStrategy.html   |   161 +
 .../class-use/ConciseCompressedIndexedInts.html    |   157 +
 .../segment/data/class-use/EmptyIndexedInts.html   |   157 +
 .../segment/data/class-use/GenericIndexed.html     |   381 +
 .../data/class-use/GenericIndexedWriter.html       |   185 +
 .../io/druid/segment/data/class-use/IOPeon.html    |   253 +
 .../data/class-use/InMemoryCompressedFloats.html   |   117 +
 .../data/class-use/InMemoryCompressedLongs.html    |   117 +
 .../io/druid/segment/data/class-use/Indexed.html   |   441 +
 .../segment/data/class-use/IndexedFloats.html      |   233 +
 .../druid/segment/data/class-use/IndexedInts.html  |   290 +
 .../data/class-use/IndexedIntsIterator.html        |   117 +
 .../segment/data/class-use/IndexedIterable.html    |   157 +
 .../druid/segment/data/class-use/IndexedList.html  |   157 +
 .../druid/segment/data/class-use/IndexedLongs.html |   233 +
 .../druid/segment/data/class-use/IndexedRTree.html |   157 +
 .../druid/segment/data/class-use/Indexedids.html   |   117 +
 .../data/class-use/IntBufferIndexedInts.html       |   187 +
 .../segment/data/class-use/IntersectingOffset.html |   117 +
 .../druid/segment/data/class-use/ListIndexed.html  |   117 +
 .../segment/data/class-use/ObjectStrategy.html     |   350 +
 .../io/druid/segment/data/class-use/Offset.html    |   253 +
 .../segment/data/class-use/ReadableOffset.html     |   206 +
 .../segment/data/class-use/StartLimitedOffset.html |   117 +
 .../segment/data/class-use/TmpFileIOPeon.html      |   117 +
 .../segment/data/class-use/UnioningOffset.html     |   117 +
 .../druid/segment/data/class-use/VSizeIndexed.html |   255 +
 .../segment/data/class-use/VSizeIndexedInts.html   |   274 +
 .../segment/data/class-use/VSizeIndexedWriter.html |   117 +
 .../io/druid/segment/data/package-frame.html       |    64 +
 .../io/druid/segment/data/package-summary.html     |   329 +
 .../io/druid/segment/data/package-tree.html        |   207 +
 api/0.6.174/io/druid/segment/data/package-use.html |   482 +
 api/0.6.174/io/druid/segment/filter/AndFilter.html |   301 +
 .../druid/segment/filter/BooleanValueMatcher.html  |   267 +
 .../io/druid/segment/filter/ExtractionFilter.html  |   305 +
 api/0.6.174/io/druid/segment/filter/Filters.html   |   271 +
 .../io/druid/segment/filter/JavaScriptFilter.html  |   303 +
 api/0.6.174/io/druid/segment/filter/NotFilter.html |   301 +
 api/0.6.174/io/druid/segment/filter/OrFilter.html  |   301 +
 .../io/druid/segment/filter/RegexFilter.html       |   302 +
 .../io/druid/segment/filter/SearchQueryFilter.html |   302 +
 .../io/druid/segment/filter/SelectorFilter.html    |   303 +
 .../io/druid/segment/filter/SpatialFilter.html     |   303 +
 .../druid/segment/filter/class-use/AndFilter.html  |   117 +
 .../filter/class-use/BooleanValueMatcher.html      |   117 +
 .../segment/filter/class-use/ExtractionFilter.html |   117 +
 .../io/druid/segment/filter/class-use/Filters.html |   117 +
 .../segment/filter/class-use/JavaScriptFilter.html |   117 +
 .../druid/segment/filter/class-use/NotFilter.html  |   117 +
 .../druid/segment/filter/class-use/OrFilter.html   |   117 +
 .../segment/filter/class-use/RegexFilter.html      |   117 +
 .../filter/class-use/SearchQueryFilter.html        |   117 +
 .../segment/filter/class-use/SelectorFilter.html   |   117 +
 .../segment/filter/class-use/SpatialFilter.html    |   117 +
 .../io/druid/segment/filter/package-frame.html     |    30 +
 .../io/druid/segment/filter/package-summary.html   |   175 +
 .../io/druid/segment/filter/package-tree.html      |   140 +
 .../io/druid/segment/filter/package-use.html       |   117 +
 .../segment/incremental/IncrementalIndex.html      |   519 +
 .../incremental/IncrementalIndexAdapter.html       |   390 +
 .../IncrementalIndexSchema.Builder.html            |   340 +
 .../incremental/IncrementalIndexSchema.html        |   337 +
 .../IncrementalIndexStorageAdapter.html            |   407 +
 .../incremental/SpatialDimensionRowFormatter.html  |   259 +
 .../incremental/class-use/IncrementalIndex.html    |   273 +
 .../class-use/IncrementalIndexAdapter.html         |   117 +
 .../class-use/IncrementalIndexSchema.Builder.html  |   177 +
 .../class-use/IncrementalIndexSchema.html          |   168 +
 .../class-use/IncrementalIndexStorageAdapter.html  |   117 +
 .../class-use/SpatialDimensionRowFormatter.html    |   157 +
 .../druid/segment/incremental/package-frame.html   |    25 +
 .../druid/segment/incremental/package-summary.html |   157 +
 .../io/druid/segment/incremental/package-tree.html |   135 +
 .../io/druid/segment/incremental/package-use.html  |   218 +
 .../io/druid/segment/indexing/DataSchema.html      |   316 +
 .../io/druid/segment/indexing/IOConfig.html        |   164 +
 .../io/druid/segment/indexing/IngestionSpec.html   |   294 +
 .../druid/segment/indexing/RealtimeIOConfig.html   |   278 +
 .../segment/indexing/RealtimeTuningConfig.html     |   407 +
 .../io/druid/segment/indexing/TuningConfig.html    |   164 +
 .../segment/indexing/class-use/DataSchema.html     |   457 +
 .../druid/segment/indexing/class-use/IOConfig.html |   214 +
 .../segment/indexing/class-use/IngestionSpec.html  |   203 +
 .../indexing/class-use/RealtimeIOConfig.html       |   174 +
 .../indexing/class-use/RealtimeTuningConfig.html   |   322 +
 .../segment/indexing/class-use/TuningConfig.html   |   214 +
 .../granularity/ArbitraryGranularitySpec.html      |   346 +
 .../indexing/granularity/GranularitySpec.html      |   279 +
 .../granularity/UniformGranularitySpec.html        |   363 +
 .../class-use/ArbitraryGranularitySpec.html        |   117 +
 .../granularity/class-use/GranularitySpec.html     |   335 +
 .../class-use/UniformGranularitySpec.html          |   117 +
 .../indexing/granularity/package-frame.html        |    25 +
 .../indexing/granularity/package-summary.html      |   156 +
 .../segment/indexing/granularity/package-tree.html |   135 +
 .../segment/indexing/granularity/package-use.html  |   215 +
 .../io/druid/segment/indexing/package-frame.html   |    28 +
 .../io/druid/segment/indexing/package-summary.html |   166 +
 .../io/druid/segment/indexing/package-tree.html    |   138 +
 .../io/druid/segment/indexing/package-use.html     |   287 +
 .../segment/loading/LocalDataSegmentKiller.html    |   269 +
 .../segment/loading/LocalDataSegmentPuller.html    |   290 +
 .../segment/loading/LocalDataSegmentPusher.html    |   291 +
 .../loading/LocalDataSegmentPusherConfig.html      |   294 +
 .../loading/MMappedQueryableIndexFactory.html      |   269 +
 .../segment/loading/OmniDataSegmentArchiver.html   |   289 +
 .../segment/loading/OmniDataSegmentKiller.html     |   270 +
 .../segment/loading/OmniDataSegmentMover.html      |   272 +
 .../druid/segment/loading/OmniSegmentLoader.html   |   373 +
 .../segment/loading/QueryableIndexFactory.html     |   215 +
 .../io/druid/segment/loading/SegmentLoader.html    |   263 +
 .../druid/segment/loading/SegmentLoaderConfig.html |   353 +
 .../segment/loading/StorageLocationConfig.html     |   314 +
 .../loading/class-use/LocalDataSegmentKiller.html  |   117 +
 .../loading/class-use/LocalDataSegmentPuller.html  |   117 +
 .../loading/class-use/LocalDataSegmentPusher.html  |   117 +
 .../class-use/LocalDataSegmentPusherConfig.html    |   156 +
 .../class-use/MMappedQueryableIndexFactory.html    |   117 +
 .../loading/class-use/OmniDataSegmentArchiver.html |   117 +
 .../loading/class-use/OmniDataSegmentKiller.html   |   117 +
 .../loading/class-use/OmniDataSegmentMover.html    |   117 +
 .../loading/class-use/OmniSegmentLoader.html       |   177 +
 .../loading/class-use/QueryableIndexFactory.html   |   170 +
 .../segment/loading/class-use/SegmentLoader.html   |   231 +
 .../loading/class-use/SegmentLoaderConfig.html     |   264 +
 .../loading/class-use/StorageLocationConfig.html   |   187 +
 .../io/druid/segment/loading/package-frame.html    |    35 +
 .../io/druid/segment/loading/package-summary.html  |   194 +
 .../io/druid/segment/loading/package-tree.html     |   145 +
 .../io/druid/segment/loading/package-use.html      |   247 +
 api/0.6.174/io/druid/segment/package-frame.html    |    62 +
 api/0.6.174/io/druid/segment/package-summary.html  |   307 +
 api/0.6.174/io/druid/segment/package-tree.html     |   201 +
 api/0.6.174/io/druid/segment/package-use.html      |   660 +
 .../druid/segment/realtime/DbSegmentPublisher.html |   274 +
 .../segment/realtime/DbSegmentPublisherConfig.html |   258 +
 .../io/druid/segment/realtime/FireDepartment.html  |   363 +
 .../segment/realtime/FireDepartmentConfig.html     |   273 +
 .../segment/realtime/FireDepartmentMetrics.html    |   362 +
 .../io/druid/segment/realtime/FireHydrant.html     |   345 +
 api/0.6.174/io/druid/segment/realtime/Indexer.html |   208 +
 .../segment/realtime/NoopSegmentPublisher.html     |   269 +
 .../RealtimeCuratorDataSegmentAnnouncerConfig.html |   230 +
 .../io/druid/segment/realtime/RealtimeManager.html |   345 +
 .../segment/realtime/RealtimeMetricsMonitor.html   |   279 +
 api/0.6.174/io/druid/segment/realtime/Schema.html  |   358 +
 .../druid/segment/realtime/SegmentPublisher.html   |   215 +
 .../realtime/class-use/DbSegmentPublisher.html     |   162 +
 .../class-use/DbSegmentPublisherConfig.html        |   117 +
 .../segment/realtime/class-use/FireDepartment.html |   224 +
 .../realtime/class-use/FireDepartmentConfig.html   |   191 +
 .../realtime/class-use/FireDepartmentMetrics.html  |   257 +
 .../segment/realtime/class-use/FireHydrant.html    |   208 +
 .../druid/segment/realtime/class-use/Indexer.html  |   117 +
 .../realtime/class-use/NoopSegmentPublisher.html   |   117 +
 .../RealtimeCuratorDataSegmentAnnouncerConfig.html |   117 +
 .../realtime/class-use/RealtimeManager.html        |   117 +
 .../realtime/class-use/RealtimeMetricsMonitor.html |   117 +
 .../druid/segment/realtime/class-use/Schema.html   |   191 +
 .../realtime/class-use/SegmentPublisher.html       |   254 +
 .../segment/realtime/firehose/ChatHandler.html     |   167 +
 .../realtime/firehose/ChatHandlerProvider.html     |   240 +
 .../realtime/firehose/ChatHandlerResource.html     |   259 +
 .../realtime/firehose/ClippedFirehoseFactory.html  |   315 +
 ...CombiningFirehoseFactory.CombiningFirehose.html |   329 +
 .../firehose/CombiningFirehoseFactory.html         |   319 +
 .../segment/realtime/firehose/EventReceiver.html   |   208 +
 ...eiverFirehoseFactory.EventReceiverFirehose.html |   339 +
 .../firehose/EventReceiverFirehoseFactory.html     |   339 +
 .../segment/realtime/firehose/IrcDecoder.html      |   212 +
 .../realtime/firehose/IrcFirehoseFactory.html      |   391 +
 .../druid/segment/realtime/firehose/IrcParser.html |   314 +
 .../realtime/firehose/LocalFirehoseFactory.html    |   316 +
 .../realtime/firehose/NoopChatHandlerProvider.html |   303 +
 .../realtime/firehose/PredicateFirehose.html       |   326 +
 .../ServiceAnnouncingChatHandlerProvider.html      |   309 +
 ...hutoffFirehoseFactory.TimedShutoffFirehose.html |   329 +
 .../firehose/TimedShutoffFirehoseFactory.html      |   334 +
 .../realtime/firehose/class-use/ChatHandler.html   |   202 +
 .../firehose/class-use/ChatHandlerProvider.html    |   180 +
 .../firehose/class-use/ChatHandlerResource.html    |   117 +
 .../firehose/class-use/ClippedFirehoseFactory.html |   117 +
 ...CombiningFirehoseFactory.CombiningFirehose.html |   117 +
 .../class-use/CombiningFirehoseFactory.html        |   117 +
 .../realtime/firehose/class-use/EventReceiver.html |   117 +
 ...eiverFirehoseFactory.EventReceiverFirehose.html |   117 +
 .../class-use/EventReceiverFirehoseFactory.html    |   117 +
 .../realtime/firehose/class-use/IrcDecoder.html    |   178 +
 .../firehose/class-use/IrcFirehoseFactory.html     |   117 +
 .../realtime/firehose/class-use/IrcParser.html     |   170 +
 .../firehose/class-use/LocalFirehoseFactory.html   |   117 +
 .../class-use/NoopChatHandlerProvider.html         |   117 +
 .../firehose/class-use/PredicateFirehose.html      |   117 +
 .../ServiceAnnouncingChatHandlerProvider.html      |   117 +
 ...hutoffFirehoseFactory.TimedShutoffFirehose.html |   117 +
 .../class-use/TimedShutoffFirehoseFactory.html     |   117 +
 .../segment/realtime/firehose/package-frame.html   |    37 +
 .../segment/realtime/firehose/package-summary.html |   219 +
 .../segment/realtime/firehose/package-tree.html    |   150 +
 .../segment/realtime/firehose/package-use.html     |   162 +
 .../io/druid/segment/realtime/package-frame.html   |    35 +
 .../io/druid/segment/realtime/package-summary.html |   196 +
 .../io/druid/segment/realtime/package-tree.html    |   153 +
 .../io/druid/segment/realtime/package-use.html     |   285 +
 .../realtime/plumber/CustomVersioningPolicy.html   |   267 +
 .../segment/realtime/plumber/FlushingPlumber.html  |   338 +
 .../realtime/plumber/FlushingPlumberSchool.html    |   312 +
 .../plumber/IntervalStartVersioningPolicy.html     |   267 +
 .../plumber/MessageTimeRejectionPolicyFactory.html |   267 +
 .../plumber/NoopRejectionPolicyFactory.html        |   267 +
 .../io/druid/segment/realtime/plumber/Plumber.html |   280 +
 .../segment/realtime/plumber/PlumberSchool.html    |   237 +
 .../segment/realtime/plumber/RealtimePlumber.html  |   562 +
 .../realtime/plumber/RealtimePlumberSchool.html    |   407 +
 .../segment/realtime/plumber/RejectionPolicy.html  |   221 +
 .../realtime/plumber/RejectionPolicyFactory.html   |   212 +
 .../plumber/ServerTimeRejectionPolicyFactory.html  |   267 +
 .../io/druid/segment/realtime/plumber/Sink.html    |   418 +
 .../plumber/TestRejectionPolicyFactory.html        |   267 +
 .../segment/realtime/plumber/VersioningPolicy.html |   212 +
 .../plumber/class-use/CustomVersioningPolicy.html  |   117 +
 .../plumber/class-use/FlushingPlumber.html         |   117 +
 .../plumber/class-use/FlushingPlumberSchool.html   |   117 +
 .../class-use/IntervalStartVersioningPolicy.html   |   117 +
 .../MessageTimeRejectionPolicyFactory.html         |   117 +
 .../class-use/NoopRejectionPolicyFactory.html      |   117 +
 .../realtime/plumber/class-use/Plumber.html        |   236 +
 .../realtime/plumber/class-use/PlumberSchool.html  |   247 +
 .../plumber/class-use/RealtimePlumber.html         |   157 +
 .../plumber/class-use/RealtimePlumberSchool.html   |   159 +
 .../plumber/class-use/RejectionPolicy.html         |   177 +
 .../plumber/class-use/RejectionPolicyFactory.html  |   292 +
 .../ServerTimeRejectionPolicyFactory.html          |   117 +
 .../segment/realtime/plumber/class-use/Sink.html   |   191 +
 .../class-use/TestRejectionPolicyFactory.html      |   117 +
 .../plumber/class-use/VersioningPolicy.html        |   267 +
 .../segment/realtime/plumber/package-frame.html    |    38 +
 .../segment/realtime/plumber/package-summary.html  |   208 +
 .../segment/realtime/plumber/package-tree.html     |   154 +
 .../segment/realtime/plumber/package-use.html      |   259 +
 .../serde/BitmapIndexColumnPartSupplier.html       |   269 +
 .../io/druid/segment/serde/ColumnPartSerde.html    |   245 +
 .../segment/serde/ComplexColumnPartSerde.html      |   335 +
 .../segment/serde/ComplexColumnPartSupplier.html   |   269 +
 .../serde/ComplexMetricColumnSerializer.html       |   313 +
 .../segment/serde/ComplexMetricExtractor.html      |   223 +
 .../io/druid/segment/serde/ComplexMetricSerde.html |   345 +
 .../io/druid/segment/serde/ComplexMetrics.html     |   273 +
 .../serde/DictionaryEncodedColumnPartSerde.html    |   328 +
 .../serde/DictionaryEncodedColumnSupplier.html     |   273 +
 .../segment/serde/FloatGenericColumnPartSerde.html |   335 +
 .../segment/serde/FloatGenericColumnSupplier.html  |   269 +
 .../segment/serde/LongGenericColumnPartSerde.html  |   335 +
 .../segment/serde/LongGenericColumnSupplier.html   |   267 +
 .../serde/SpatialIndexColumnPartSupplier.html      |   267 +
 .../class-use/BitmapIndexColumnPartSupplier.html   |   117 +
 .../segment/serde/class-use/ColumnPartSerde.html   |   309 +
 .../serde/class-use/ComplexColumnPartSerde.html    |   157 +
 .../serde/class-use/ComplexColumnPartSupplier.html |   117 +
 .../class-use/ComplexMetricColumnSerializer.html   |   117 +
 .../serde/class-use/ComplexMetricExtractor.html    |   201 +
 .../serde/class-use/ComplexMetricSerde.html        |   229 +
 .../segment/serde/class-use/ComplexMetrics.html    |   117 +
 .../DictionaryEncodedColumnPartSerde.html          |   157 +
 .../class-use/DictionaryEncodedColumnSupplier.html |   117 +
 .../class-use/FloatGenericColumnPartSerde.html     |   157 +
 .../class-use/FloatGenericColumnSupplier.html      |   117 +
 .../class-use/LongGenericColumnPartSerde.html      |   157 +
 .../serde/class-use/LongGenericColumnSupplier.html |   117 +
 .../class-use/SpatialIndexColumnPartSupplier.html  |   117 +
 .../io/druid/segment/serde/package-frame.html      |    37 +
 .../io/druid/segment/serde/package-summary.html    |   202 +
 .../io/druid/segment/serde/package-tree.html       |   147 +
 .../io/druid/segment/serde/package-use.html        |   237 +
 .../druid/server/AsyncQueryForwardingServlet.html  |   388 +
 .../io/druid/server/ClientInfoResource.html        |   304 +
 .../io/druid/server/ClientQuerySegmentWalker.html  |   307 +
 .../server/DirectClientQuerySegmentWalker.html     |   302 +
 api/0.6.174/io/druid/server/DruidNode.html         |   318 +
 .../io/druid/server/GuiceServletConfig.html        |   278 +
 api/0.6.174/io/druid/server/QueryManager.html      |   288 +
 api/0.6.174/io/druid/server/QueryResource.html     |   344 +
 api/0.6.174/io/druid/server/QueryStats.html        |   258 +
 api/0.6.174/io/druid/server/RequestLogLine.html    |   319 +
 .../io/druid/server/StatusResource.Memory.html     |   301 +
 .../druid/server/StatusResource.ModuleVersion.html |   309 +
 .../io/druid/server/StatusResource.Status.html     |   305 +
 api/0.6.174/io/druid/server/StatusResource.html    |   285 +
 api/0.6.174/io/druid/server/ZkPathsModule.html     |   267 +
 api/0.6.174/io/druid/server/bridge/Bridge.html     |   154 +
 .../druid/server/bridge/BridgeCuratorConfig.html   |   270 +
 .../server/bridge/BridgeQuerySegmentWalker.html    |   305 +
 .../druid/server/bridge/BridgeZkCoordinator.html   |   340 +
 .../io/druid/server/bridge/DruidClusterBridge.html |   354 +
 .../server/bridge/DruidClusterBridgeConfig.html    |   322 +
 .../io/druid/server/bridge/class-use/Bridge.html   |   186 +
 .../bridge/class-use/BridgeCuratorConfig.html      |   117 +
 .../bridge/class-use/BridgeQuerySegmentWalker.html |   117 +
 .../bridge/class-use/BridgeZkCoordinator.html      |   164 +
 .../bridge/class-use/DruidClusterBridge.html       |   117 +
 .../bridge/class-use/DruidClusterBridgeConfig.html |   164 +
 .../io/druid/server/bridge/package-frame.html      |    28 +
 .../io/druid/server/bridge/package-summary.html    |   166 +
 .../io/druid/server/bridge/package-tree.html       |   150 +
 .../io/druid/server/bridge/package-use.html        |   156 +
 .../class-use/AsyncQueryForwardingServlet.html     |   117 +
 .../druid/server/class-use/ClientInfoResource.html |   117 +
 .../server/class-use/ClientQuerySegmentWalker.html |   117 +
 .../class-use/DirectClientQuerySegmentWalker.html  |   117 +
 .../io/druid/server/class-use/DruidNode.html       |   450 +
 .../druid/server/class-use/GuiceServletConfig.html |   117 +
 .../io/druid/server/class-use/QueryManager.html    |   161 +
 .../io/druid/server/class-use/QueryResource.html   |   117 +
 .../io/druid/server/class-use/QueryStats.html      |   193 +
 .../io/druid/server/class-use/RequestLogLine.html  |   183 +
 .../server/class-use/StatusResource.Memory.html    |   157 +
 .../class-use/StatusResource.ModuleVersion.html    |   157 +
 .../server/class-use/StatusResource.Status.html    |   157 +
 .../io/druid/server/class-use/StatusResource.html  |   117 +
 .../io/druid/server/class-use/ZkPathsModule.html   |   117 +
 .../coordination/AbstractDataSegmentAnnouncer.html |   295 +
 .../server/coordination/BaseZkCoordinator.html     |   350 +
 .../coordination/BatchDataSegmentAnnouncer.html    |   334 +
 .../BatchDataSegmentAnnouncerProvider.html         |   267 +
 .../server/coordination/DataSegmentAnnouncer.html  |   263 +
 .../coordination/DataSegmentAnnouncerProvider.html |   190 +
 .../coordination/DataSegmentChangeCallback.html    |   208 +
 .../coordination/DataSegmentChangeHandler.html     |   229 +
 .../coordination/DataSegmentChangeRequest.html     |   227 +
 .../server/coordination/DruidServerMetadata.html   |   397 +
 .../LegacyDataSegmentAnnouncerProvider.html        |   267 +
 ...leDataSegmentAnnouncerDataSegmentAnnouncer.html |   327 +
 .../coordination/SegmentChangeRequestDrop.html     |   316 +
 .../coordination/SegmentChangeRequestLoad.html     |   316 +
 .../coordination/SegmentChangeRequestNoop.html     |   286 +
 .../druid/server/coordination/ServerManager.html   |   392 +
 .../coordination/SingleDataSegmentAnnouncer.html   |   332 +
 .../druid/server/coordination/ZkCoordinator.html   |   355 +
 .../server/coordination/broker/DruidBroker.html    |   276 +
 .../coordination/broker/class-use/DruidBroker.html |   117 +
 .../server/coordination/broker/package-frame.html  |    20 +
 .../coordination/broker/package-summary.html       |   135 +
 .../server/coordination/broker/package-tree.html   |   130 +
 .../server/coordination/broker/package-use.html    |   117 +
 .../class-use/AbstractDataSegmentAnnouncer.html    |   190 +
 .../coordination/class-use/BaseZkCoordinator.html  |   179 +
 .../class-use/BatchDataSegmentAnnouncer.html       |   117 +
 .../BatchDataSegmentAnnouncerProvider.html         |   117 +
 .../class-use/DataSegmentAnnouncer.html            |   350 +
 .../class-use/DataSegmentAnnouncerProvider.html    |   161 +
 .../class-use/DataSegmentChangeCallback.html       |   226 +
 .../class-use/DataSegmentChangeHandler.html        |   242 +
 .../class-use/DataSegmentChangeRequest.html        |   165 +
 .../class-use/DruidServerMetadata.html             |   312 +
 .../LegacyDataSegmentAnnouncerProvider.html        |   117 +
 ...leDataSegmentAnnouncerDataSegmentAnnouncer.html |   117 +
 .../class-use/SegmentChangeRequestDrop.html        |   117 +
 .../class-use/SegmentChangeRequestLoad.html        |   117 +
 .../class-use/SegmentChangeRequestNoop.html        |   117 +
 .../coordination/class-use/ServerManager.html      |   183 +
 .../class-use/SingleDataSegmentAnnouncer.html      |   117 +
 .../coordination/class-use/ZkCoordinator.html      |   155 +
 .../druid/server/coordination/package-frame.html   |    40 +
 .../druid/server/coordination/package-summary.html |   216 +
 .../io/druid/server/coordination/package-tree.html |   164 +
 .../io/druid/server/coordination/package-use.html  |   319 +
 .../server/coordinator/BalancerSegmentHolder.html  |   299 +
 .../druid/server/coordinator/BalancerStrategy.html |   259 +
 .../coordinator/BalancerStrategyFactory.html       |   212 +
 .../CoordinatorDynamicConfig.Builder.html          |   353 +
 .../coordinator/CoordinatorDynamicConfig.html      |   419 +
 .../druid/server/coordinator/CoordinatorStats.html |   316 +
 .../server/coordinator/CostBalancerStrategy.html   |   431 +
 .../coordinator/CostBalancerStrategyFactory.html   |   267 +
 .../server/coordinator/DatasourceWhitelist.html    |   308 +
 .../io/druid/server/coordinator/DruidCluster.html  |   363 +
 .../DruidCoordinator.CoordinatorRunnable.html      |   275 +
 ...ordinator.DruidCoordinatorVersionConverter.html |   273 +
 .../druid/server/coordinator/DruidCoordinator.html |   531 +
 .../server/coordinator/DruidCoordinatorConfig.html |   349 +
 .../DruidCoordinatorRuntimeParams.Builder.html     |   397 +
 .../coordinator/DruidCoordinatorRuntimeParams.html |   496 +
 .../druid/server/coordinator/LoadPeonCallback.html |   208 +
 .../io/druid/server/coordinator/LoadQueuePeon.html |   306 +
 .../server/coordinator/LoadQueueTaskMaster.html    |   268 +
 .../server/coordinator/RandomBalancerStrategy.html |   326 +
 .../coordinator/RandomBalancerStrategyFactory.html |   267 +
 .../server/coordinator/ReplicationThrottler.html   |   383 +
 .../coordinator/ReservoirSegmentSampler.html       |   258 +
 .../server/coordinator/SegmentReplicantLookup.html |   335 +
 .../io/druid/server/coordinator/ServerHolder.html  |   433 +
 .../class-use/BalancerSegmentHolder.html           |   206 +
 .../coordinator/class-use/BalancerStrategy.html    |   182 +
 .../class-use/BalancerStrategyFactory.html         |   210 +
 .../CoordinatorDynamicConfig.Builder.html          |   181 +
 .../class-use/CoordinatorDynamicConfig.html        |   223 +
 .../coordinator/class-use/CoordinatorStats.html    |   255 +
 .../class-use/CostBalancerStrategy.html            |   117 +
 .../class-use/CostBalancerStrategyFactory.html     |   117 +
 .../coordinator/class-use/DatasourceWhitelist.html |   181 +
 .../server/coordinator/class-use/DruidCluster.html |   197 +
 .../DruidCoordinator.CoordinatorRunnable.html      |   117 +
 ...ordinator.DruidCoordinatorVersionConverter.html |   117 +
 .../coordinator/class-use/DruidCoordinator.html    |   268 +
 .../class-use/DruidCoordinatorConfig.html          |   174 +
 .../DruidCoordinatorRuntimeParams.Builder.html     |   213 +
 .../class-use/DruidCoordinatorRuntimeParams.html   |   315 +
 .../coordinator/class-use/LoadPeonCallback.html    |   170 +
 .../coordinator/class-use/LoadQueuePeon.html       |   226 +
 .../coordinator/class-use/LoadQueueTaskMaster.html |   167 +
 .../class-use/RandomBalancerStrategy.html          |   117 +
 .../class-use/RandomBalancerStrategyFactory.html   |   117 +
 .../class-use/ReplicationThrottler.html            |   214 +
 .../class-use/ReservoirSegmentSampler.html         |   117 +
 .../class-use/SegmentReplicantLookup.html          |   197 +
 .../server/coordinator/class-use/ServerHolder.html |   365 +
 .../helper/DruidCoordinatorBalancer.html           |   372 +
 .../DruidCoordinatorCleanupOvershadowed.html       |   267 +
 .../helper/DruidCoordinatorCleanupUnneeded.html    |   267 +
 .../coordinator/helper/DruidCoordinatorHelper.html |   212 +
 .../coordinator/helper/DruidCoordinatorLogger.html |   267 +
 .../helper/DruidCoordinatorRuleRunner.html         |   281 +
 .../helper/DruidCoordinatorSegmentInfoLoader.html  |   267 +
 .../helper/DruidCoordinatorSegmentMerger.html      |   269 +
 .../helper/DruidCoordinatorVersionConverter.html   |   269 +
 .../helper/class-use/DruidCoordinatorBalancer.html |   117 +
 .../DruidCoordinatorCleanupOvershadowed.html       |   117 +
 .../class-use/DruidCoordinatorCleanupUnneeded.html |   117 +
 .../helper/class-use/DruidCoordinatorHelper.html   |   219 +
 .../helper/class-use/DruidCoordinatorLogger.html   |   117 +
 .../class-use/DruidCoordinatorRuleRunner.html      |   117 +
 .../DruidCoordinatorSegmentInfoLoader.html         |   117 +
 .../class-use/DruidCoordinatorSegmentMerger.html   |   117 +
 .../DruidCoordinatorVersionConverter.html          |   117 +
 .../server/coordinator/helper/package-frame.html   |    31 +
 .../server/coordinator/helper/package-summary.html |   178 +
 .../server/coordinator/helper/package-tree.html    |   141 +
 .../server/coordinator/helper/package-use.html     |   169 +
 .../io/druid/server/coordinator/package-frame.html |    46 +
 .../druid/server/coordinator/package-summary.html  |   246 +
 .../io/druid/server/coordinator/package-tree.html  |   157 +
 .../io/druid/server/coordinator/package-use.html   |   289 +
 .../druid/server/coordinator/rules/DropRule.html   |   283 +
 .../server/coordinator/rules/ForeverDropRule.html  |   304 +
 .../server/coordinator/rules/ForeverLoadRule.html  |   338 +
 .../server/coordinator/rules/IntervalDropRule.html |   317 +
 .../server/coordinator/rules/IntervalLoadRule.html |   391 +
 .../druid/server/coordinator/rules/LoadRule.html   |   309 +
 .../server/coordinator/rules/PeriodDropRule.html   |   317 +
 .../server/coordinator/rules/PeriodLoadRule.html   |   357 +
 .../io/druid/server/coordinator/rules/Rule.html    |   259 +
 .../io/druid/server/coordinator/rules/RuleMap.html |   260 +
 .../coordinator/rules/class-use/DropRule.html      |   165 +
 .../rules/class-use/ForeverDropRule.html           |   117 +
 .../rules/class-use/ForeverLoadRule.html           |   117 +
 .../rules/class-use/IntervalDropRule.html          |   117 +
 .../rules/class-use/IntervalLoadRule.html          |   117 +
 .../coordinator/rules/class-use/LoadRule.html      |   165 +
 .../rules/class-use/PeriodDropRule.html            |   117 +
 .../rules/class-use/PeriodLoadRule.html            |   117 +
 .../server/coordinator/rules/class-use/Rule.html   |   314 +
 .../coordinator/rules/class-use/RuleMap.html       |   117 +
 .../server/coordinator/rules/package-frame.html    |    32 +
 .../server/coordinator/rules/package-summary.html  |   186 +
 .../server/coordinator/rules/package-tree.html     |   148 +
 .../server/coordinator/rules/package-use.html      |   217 +
 .../BackwardsCompatibleCoordinatorResource.html    |   283 +
 .../http/BackwardsCompatibleInfoResource.html      |   259 +
 .../http/CoordinatorDynamicConfigsResource.html    |   272 +
 .../druid/server/http/CoordinatorRedirectInfo.html |   287 +
 .../io/druid/server/http/CoordinatorResource.html  |   289 +
 api/0.6.174/io/druid/server/http/DBResource.html   |   304 +
 .../io/druid/server/http/DatasourcesResource.html  |   420 +
 .../io/druid/server/http/HistoricalResource.html   |   259 +
 api/0.6.174/io/druid/server/http/InfoResource.html |   676 +
 .../io/druid/server/http/RedirectFilter.html       |   312 +
 api/0.6.174/io/druid/server/http/RedirectInfo.html |   227 +
 .../io/druid/server/http/RulesResource.html        |   289 +
 .../io/druid/server/http/SegmentToDrop.html        |   273 +
 .../io/druid/server/http/SegmentToMove.html        |   288 +
 .../io/druid/server/http/ServersResource.html      |   306 +
 .../io/druid/server/http/TiersResource.html        |   274 +
 .../BackwardsCompatibleCoordinatorResource.html    |   117 +
 .../class-use/BackwardsCompatibleInfoResource.html |   117 +
 .../CoordinatorDynamicConfigsResource.html         |   117 +
 .../http/class-use/CoordinatorRedirectInfo.html    |   117 +
 .../server/http/class-use/CoordinatorResource.html |   117 +
 .../io/druid/server/http/class-use/DBResource.html |   117 +
 .../server/http/class-use/DatasourcesResource.html |   117 +
 .../server/http/class-use/HistoricalResource.html  |   117 +
 .../druid/server/http/class-use/InfoResource.html  |   159 +
 .../server/http/class-use/RedirectFilter.html      |   117 +
 .../druid/server/http/class-use/RedirectInfo.html  |   190 +
 .../druid/server/http/class-use/RulesResource.html |   117 +
 .../druid/server/http/class-use/SegmentToDrop.html |   117 +
 .../druid/server/http/class-use/SegmentToMove.html |   117 +
 .../server/http/class-use/ServersResource.html     |   117 +
 .../druid/server/http/class-use/TiersResource.html |   117 +
 .../io/druid/server/http/package-frame.html        |    38 +
 .../io/druid/server/http/package-summary.html      |   206 +
 api/0.6.174/io/druid/server/http/package-tree.html |   151 +
 api/0.6.174/io/druid/server/http/package-use.html  |   174 +
 .../initialization/BaseJettyServerInitializer.html |   287 +
 .../BatchDataSegmentAnnouncerConfig.html           |   271 +
 .../initialization/CuratorDiscoveryConfig.html     |   271 +
 .../druid/server/initialization/EmitterModule.html |   284 +
 .../server/initialization/HttpEmitterConfig.html   |   270 +
 .../server/initialization/HttpEmitterModule.html   |   288 +
 .../initialization/JettyServerInitializer.html     |   214 +
 .../JettyServerModule.DruidGuiceContainer.html     |   363 +
 .../server/initialization/JettyServerModule.html   |   362 +
 .../server/initialization/LogEmitterModule.html    |   320 +
 .../druid/server/initialization/ServerConfig.html  |   271 +
 .../druid/server/initialization/ZkPathsConfig.html |   405 +
 .../class-use/BaseJettyServerInitializer.html      |   161 +
 .../class-use/BatchDataSegmentAnnouncerConfig.html |   159 +
 .../class-use/CuratorDiscoveryConfig.html          |   159 +
 .../initialization/class-use/EmitterModule.html    |   117 +
 .../class-use/HttpEmitterConfig.html               |   160 +
 .../class-use/HttpEmitterModule.html               |   117 +
 .../class-use/JettyServerInitializer.html          |   183 +
 .../JettyServerModule.DruidGuiceContainer.html     |   117 +
 .../class-use/JettyServerModule.html               |   117 +
 .../initialization/class-use/LogEmitterModule.html |   117 +
 .../initialization/class-use/ServerConfig.html     |   186 +
 .../initialization/class-use/ZkPathsConfig.html    |   360 +
 .../druid/server/initialization/package-frame.html |    34 +
 .../server/initialization/package-summary.html     |   190 +
 .../druid/server/initialization/package-tree.html  |   176 +
 .../druid/server/initialization/package-use.html   |   333 +
 .../log/EmittingRequestLogger.RequestLogEvent.html |   393 +
 .../io/druid/server/log/EmittingRequestLogger.html |   290 +
 .../server/log/EmittingRequestLoggerProvider.html  |   281 +
 .../io/druid/server/log/FileRequestLogger.html     |   299 +
 .../server/log/FileRequestLoggerProvider.html      |   267 +
 .../io/druid/server/log/NoopRequestLogger.html     |   269 +
 .../server/log/NoopRequestLoggerProvider.html      |   267 +
 api/0.6.174/io/druid/server/log/RequestLogger.html |   215 +
 .../io/druid/server/log/RequestLoggerProvider.html |   192 +
 .../EmittingRequestLogger.RequestLogEvent.html     |   117 +
 .../log/class-use/EmittingRequestLogger.html       |   117 +
 .../class-use/EmittingRequestLoggerProvider.html   |   117 +
 .../server/log/class-use/FileRequestLogger.html    |   117 +
 .../log/class-use/FileRequestLoggerProvider.html   |   117 +
 .../server/log/class-use/NoopRequestLogger.html    |   117 +
 .../log/class-use/NoopRequestLoggerProvider.html   |   117 +
 .../druid/server/log/class-use/RequestLogger.html  |   245 +
 .../log/class-use/RequestLoggerProvider.html       |   165 +
 api/0.6.174/io/druid/server/log/package-frame.html |    31 +
 .../io/druid/server/log/package-summary.html       |   180 +
 api/0.6.174/io/druid/server/log/package-tree.html  |   149 +
 api/0.6.174/io/druid/server/log/package-use.html   |   193 +
 .../metrics/DruidMonitorSchedulerConfig.html       |   280 +
 .../io/druid/server/metrics/DruidSysMonitor.html   |   259 +
 .../io/druid/server/metrics/MetricsModule.html     |   306 +
 .../io/druid/server/metrics/MonitorsConfig.html    |   275 +
 .../io/druid/server/metrics/ServerMonitor.html     |   281 +
 .../class-use/DruidMonitorSchedulerConfig.html     |   161 +
 .../server/metrics/class-use/DruidSysMonitor.html  |   117 +
 .../server/metrics/class-use/MetricsModule.html    |   117 +
 .../server/metrics/class-use/MonitorsConfig.html   |   161 +
 .../server/metrics/class-use/ServerMonitor.html    |   117 +
 .../io/druid/server/metrics/package-frame.html     |    24 +
 .../io/druid/server/metrics/package-summary.html   |   153 +
 .../io/druid/server/metrics/package-tree.html      |   146 +
 .../io/druid/server/metrics/package-use.html       |   153 +
 api/0.6.174/io/druid/server/package-frame.html     |    34 +
 api/0.6.174/io/druid/server/package-summary.html   |   193 +
 api/0.6.174/io/druid/server/package-tree.html      |   164 +
 api/0.6.174/io/druid/server/package-use.html       |   355 +
 .../server/router/CoordinatorRuleManager.html      |   317 +
 ...redBrokerSelectorStrategy.SelectorFunction.html |   214 +
 .../JavaScriptTieredBrokerSelectorStrategy.html    |   352 +
 .../PriorityTieredBrokerSelectorStrategy.html      |   271 +
 .../io/druid/server/router/QueryHostFinder.html    |   298 +
 api/0.6.174/io/druid/server/router/Router.html     |   154 +
 .../io/druid/server/router/TieredBrokerConfig.html |   336 +
 .../server/router/TieredBrokerHostSelector.html    |   330 +
 .../TieredBrokerSelectorStrategiesProvider.html    |   268 +
 .../router/TieredBrokerSelectorStrategy.html       |   214 +
 .../TimeBoundaryTieredBrokerSelectorStrategy.html  |   269 +
 .../router/class-use/CoordinatorRuleManager.html   |   158 +
 ...redBrokerSelectorStrategy.SelectorFunction.html |   117 +
 .../JavaScriptTieredBrokerSelectorStrategy.html    |   117 +
 .../PriorityTieredBrokerSelectorStrategy.html      |   117 +
 .../server/router/class-use/QueryHostFinder.html   |   185 +
 .../io/druid/server/router/class-use/Router.html   |   185 +
 .../router/class-use/TieredBrokerConfig.html       |   209 +
 .../router/class-use/TieredBrokerHostSelector.html |   155 +
 .../TieredBrokerSelectorStrategiesProvider.html    |   117 +
 .../class-use/TieredBrokerSelectorStrategy.html    |   196 +
 .../TimeBoundaryTieredBrokerSelectorStrategy.html  |   117 +
 .../io/druid/server/router/package-frame.html      |    36 +
 .../io/druid/server/router/package-summary.html    |   197 +
 .../io/druid/server/router/package-tree.html       |   146 +
 .../io/druid/server/router/package-use.html        |   203 +
 api/0.6.174/io/druid/server/sql/SQLRunner.html     |   261 +
 .../io/druid/server/sql/class-use/SQLRunner.html   |   117 +
 api/0.6.174/io/druid/server/sql/package-frame.html |    20 +
 .../io/druid/server/sql/package-summary.html       |   135 +
 api/0.6.174/io/druid/server/sql/package-tree.html  |   130 +
 api/0.6.174/io/druid/server/sql/package-use.html   |   117 +
 .../io/druid/sql/antlr4/DruidSQLBaseListener.html  |  1134 ++
 api/0.6.174/io/druid/sql/antlr4/DruidSQLLexer.html |   998 ++
 .../io/druid/sql/antlr4/DruidSQLListener.html      |   835 +
 .../DruidSQLParser.AdditiveExpressionContext.html  |   536 +
 .../antlr4/DruidSQLParser.AggregateContext.html    |   510 +
 .../DruidSQLParser.AliasedExpressionContext.html   |   445 +
 .../antlr4/DruidSQLParser.AndDimFilterContext.html |   471 +
 .../sql/antlr4/DruidSQLParser.ConstantContext.html |   406 +
 .../antlr4/DruidSQLParser.DatasourceContext.html   |   365 +
 .../antlr4/DruidSQLParser.DimFilterContext.html    |   406 +
 .../antlr4/DruidSQLParser.ExpressionContext.html   |   406 +
 .../DruidSQLParser.GranularityFnContext.html       |   432 +
 .../DruidSQLParser.GroupByExpressionContext.html   |   419 +
 .../antlr4/DruidSQLParser.Groupby_stmtContext.html |   417 +
 .../DruidSQLParser.InListDimFilterContext.html     |   510 +
 .../DruidSQLParser.MultiplyExpressionContext.html  |   536 +
 .../antlr4/DruidSQLParser.OrDimFilterContext.html  |   471 +
 .../DruidSQLParser.PrimaryDimFilterContext.html    |   497 +
 .../DruidSQLParser.PrimaryExpressionContext.html   |   484 +
 .../sql/antlr4/DruidSQLParser.QueryContext.html    |   391 +
 .../antlr4/DruidSQLParser.Select_stmtContext.html  |   445 +
 .../DruidSQLParser.SelectorDimFilterContext.html   |   484 +
 .../DruidSQLParser.TimeAndDimFilterContext.html    |   497 +
 .../antlr4/DruidSQLParser.TimeFilterContext.html   |   458 +
 .../antlr4/DruidSQLParser.TimestampContext.html    |   432 +
 .../DruidSQLParser.UnaryExpressionContext.html     |   458 +
 .../antlr4/DruidSQLParser.Where_stmtContext.html   |   393 +
 .../io/druid/sql/antlr4/DruidSQLParser.html        |  1898 +++
 .../sql/antlr4/class-use/DruidSQLBaseListener.html |   117 +
 .../druid/sql/antlr4/class-use/DruidSQLLexer.html  |   117 +
 .../sql/antlr4/class-use/DruidSQLListener.html     |   157 +
 .../DruidSQLParser.AdditiveExpressionContext.html  |   199 +
 .../class-use/DruidSQLParser.AggregateContext.html |   199 +
 .../DruidSQLParser.AliasedExpressionContext.html   |   225 +
 .../DruidSQLParser.AndDimFilterContext.html        |   229 +
 .../class-use/DruidSQLParser.ConstantContext.html  |   199 +
 .../DruidSQLParser.DatasourceContext.html          |   199 +
 .../class-use/DruidSQLParser.DimFilterContext.html |   224 +
 .../DruidSQLParser.ExpressionContext.html          |   207 +
 .../DruidSQLParser.GranularityFnContext.html       |   199 +
 .../DruidSQLParser.GroupByExpressionContext.html   |   199 +
 .../DruidSQLParser.Groupby_stmtContext.html        |   186 +
 .../DruidSQLParser.InListDimFilterContext.html     |   199 +
 .../DruidSQLParser.MultiplyExpressionContext.html  |   229 +
 .../DruidSQLParser.OrDimFilterContext.html         |   199 +
 .../DruidSQLParser.PrimaryDimFilterContext.html    |   229 +
 .../DruidSQLParser.PrimaryExpressionContext.html   |   199 +
 .../class-use/DruidSQLParser.QueryContext.html     |   182 +
 .../DruidSQLParser.Select_stmtContext.html         |   186 +
 .../DruidSQLParser.SelectorDimFilterContext.html   |   199 +
 .../DruidSQLParser.TimeAndDimFilterContext.html    |   199 +
 .../DruidSQLParser.TimeFilterContext.html          |   199 +
 .../class-use/DruidSQLParser.TimestampContext.html |   216 +
 .../DruidSQLParser.UnaryExpressionContext.html     |   237 +
 .../DruidSQLParser.Where_stmtContext.html          |   186 +
 .../druid/sql/antlr4/class-use/DruidSQLParser.html |   117 +
 api/0.6.174/io/druid/sql/antlr4/package-frame.html |    50 +
 .../io/druid/sql/antlr4/package-summary.html       |   254 +
 api/0.6.174/io/druid/sql/antlr4/package-tree.html  |   184 +
 api/0.6.174/io/druid/sql/antlr4/package-use.html   |   222 +
 .../cassandra/CassandraDataSegmentConfig.html      |   325 +
 .../cassandra/CassandraDataSegmentPuller.html      |   299 +
 .../cassandra/CassandraDataSegmentPusher.html      |   299 +
 .../storage/cassandra/CassandraDruidModule.html    |   284 +
 .../druid/storage/cassandra/CassandraStorage.html  |   240 +
 .../class-use/CassandraDataSegmentConfig.html      |   162 +
 .../class-use/CassandraDataSegmentPuller.html      |   117 +
 .../class-use/CassandraDataSegmentPusher.html      |   117 +
 .../cassandra/class-use/CassandraDruidModule.html  |   117 +
 .../cassandra/class-use/CassandraStorage.html      |   165 +
 .../io/druid/storage/cassandra/package-frame.html  |    24 +
 .../druid/storage/cassandra/package-summary.html   |   159 +
 .../io/druid/storage/cassandra/package-tree.html   |   137 +
 .../io/druid/storage/cassandra/package-use.html    |   157 +
 .../druid/storage/hdfs/HdfsDataSegmentKiller.html  |   270 +
 .../druid/storage/hdfs/HdfsDataSegmentPuller.html  |   291 +
 .../druid/storage/hdfs/HdfsDataSegmentPusher.html  |   293 +
 .../storage/hdfs/HdfsDataSegmentPusherConfig.html  |   294 +
 .../druid/storage/hdfs/HdfsStorageDruidModule.html |   298 +
 .../hdfs/class-use/HdfsDataSegmentKiller.html      |   117 +
 .../hdfs/class-use/HdfsDataSegmentPuller.html      |   117 +
 .../hdfs/class-use/HdfsDataSegmentPusher.html      |   117 +
 .../class-use/HdfsDataSegmentPusherConfig.html     |   157 +
 .../hdfs/class-use/HdfsStorageDruidModule.html     |   117 +
 .../io/druid/storage/hdfs/package-frame.html       |    24 +
 .../io/druid/storage/hdfs/package-summary.html     |   151 +
 .../io/druid/storage/hdfs/package-tree.html        |   134 +
 api/0.6.174/io/druid/storage/hdfs/package-use.html |   150 +
 .../druid/storage/hdfs/tasklog/HdfsTaskLogs.html   |   294 +
 .../storage/hdfs/tasklog/HdfsTaskLogsConfig.html   |   259 +
 .../hdfs/tasklog/class-use/HdfsTaskLogs.html       |   117 +
 .../hdfs/tasklog/class-use/HdfsTaskLogsConfig.html |   155 +
 .../druid/storage/hdfs/tasklog/package-frame.html  |    21 +
 .../storage/hdfs/tasklog/package-summary.html      |   143 +
 .../druid/storage/hdfs/tasklog/package-tree.html   |   131 +
 .../io/druid/storage/hdfs/tasklog/package-use.html |   152 +
 .../io/druid/storage/s3/AWSCredentialsConfig.html  |   284 +
 .../storage/s3/AWSSessionCredentialsAdapter.html   |   381 +
 .../storage/s3/FileSessionCredentialsProvider.html |   284 +
 .../io/druid/storage/s3/S3DataSegmentArchiver.html |   305 +
 .../storage/s3/S3DataSegmentArchiverConfig.html    |   320 +
 .../io/druid/storage/s3/S3DataSegmentKiller.html   |   270 +
 .../io/druid/storage/s3/S3DataSegmentMover.html    |   278 +
 .../io/druid/storage/s3/S3DataSegmentPuller.html   |   291 +
 .../io/druid/storage/s3/S3DataSegmentPusher.html   |   293 +
 .../storage/s3/S3DataSegmentPusherConfig.html      |   346 +
 .../io/druid/storage/s3/S3StorageDruidModule.html  |   312 +
 api/0.6.174/io/druid/storage/s3/S3TaskLogs.html    |   296 +
 .../io/druid/storage/s3/S3TaskLogsConfig.html      |   271 +
 api/0.6.174/io/druid/storage/s3/S3Utils.html       |   326 +
 .../storage/s3/class-use/AWSCredentialsConfig.html |   157 +
 .../s3/class-use/AWSSessionCredentialsAdapter.html |   117 +
 .../class-use/FileSessionCredentialsProvider.html  |   117 +
 .../s3/class-use/S3DataSegmentArchiver.html        |   117 +
 .../s3/class-use/S3DataSegmentArchiverConfig.html  |   157 +
 .../storage/s3/class-use/S3DataSegmentKiller.html  |   117 +
 .../storage/s3/class-use/S3DataSegmentMover.html   |   157 +
 .../storage/s3/class-use/S3DataSegmentPuller.html  |   117 +
 .../storage/s3/class-use/S3DataSegmentPusher.html  |   117 +
 .../s3/class-use/S3DataSegmentPusherConfig.html    |   166 +
 .../storage/s3/class-use/S3StorageDruidModule.html |   117 +
 .../io/druid/storage/s3/class-use/S3TaskLogs.html  |   117 +
 .../storage/s3/class-use/S3TaskLogsConfig.html     |   156 +
 .../io/druid/storage/s3/class-use/S3Utils.html     |   117 +
 api/0.6.174/io/druid/storage/s3/package-frame.html |    33 +
 .../io/druid/storage/s3/package-summary.html       |   189 +
 api/0.6.174/io/druid/storage/s3/package-tree.html  |   158 +
 api/0.6.174/io/druid/storage/s3/package-use.html   |   162 +
 api/0.6.174/io/druid/timeline/LogicalSegment.html  |   212 +
 .../io/druid/timeline/TimelineObjectHolder.html    |   316 +
 .../VersionedIntervalTimeline.TimelineEntry.html   |   294 +
 .../druid/timeline/VersionedIntervalTimeline.html  |   381 +
 .../druid/timeline/class-use/LogicalSegment.html   |   203 +
 .../timeline/class-use/TimelineObjectHolder.html   |   163 +
 .../VersionedIntervalTimeline.TimelineEntry.html   |   117 +
 .../class-use/VersionedIntervalTimeline.html       |   161 +
 api/0.6.174/io/druid/timeline/package-frame.html   |    25 +
 api/0.6.174/io/druid/timeline/package-summary.html |   156 +
 api/0.6.174/io/druid/timeline/package-tree.html    |   136 +
 api/0.6.174/io/druid/timeline/package-use.html     |   212 +
 .../partition/HashBasedNumberedShardSpec.html      |   337 +
 .../partition/ImmutablePartitionHolder.html        |   295 +
 .../timeline/partition/IntegerPartitionChunk.html  |   432 +
 .../timeline/partition/LinearPartitionChunk.html   |   407 +
 .../druid/timeline/partition/LinearShardSpec.html  |   339 +
 .../timeline/partition/NumberedPartitionChunk.html |   411 +
 .../timeline/partition/NumberedShardSpec.html      |   358 +
 .../druid/timeline/partition/PartitionHolder.html  |   438 +
 .../partition/SingleDimensionShardSpec.html        |   451 +
 .../timeline/partition/StringPartitionChunk.html   |   415 +
 .../class-use/HashBasedNumberedShardSpec.html      |   117 +
 .../class-use/ImmutablePartitionHolder.html        |   117 +
 .../partition/class-use/IntegerPartitionChunk.html |   160 +
 .../partition/class-use/LinearPartitionChunk.html  |   158 +
 .../partition/class-use/LinearShardSpec.html       |   117 +
 .../class-use/NumberedPartitionChunk.html          |   159 +
 .../partition/class-use/NumberedShardSpec.html     |   157 +
 .../partition/class-use/PartitionHolder.html       |   220 +
 .../class-use/SingleDimensionShardSpec.html        |   117 +
 .../partition/class-use/StringPartitionChunk.html  |   160 +
 .../io/druid/timeline/partition/package-frame.html |    29 +
 .../druid/timeline/partition/package-summary.html  |   177 +
 .../io/druid/timeline/partition/package-tree.html  |   145 +
 .../io/druid/timeline/partition/package-use.html   |   188 +
 api/0.6.174/overview-frame.html                    |   129 +
 api/0.6.174/overview-summary.html                  |   563 +
 api/0.6.174/overview-tree.html                     |  1796 +++
 api/0.6.174/package-list                           |   109 +
 api/0.6.174/resources/background.gif               |   Bin 0 -> 2313 bytes
 api/0.6.174/resources/tab.gif                      |   Bin 0 -> 291 bytes
 api/0.6.174/resources/titlebar.gif                 |   Bin 0 -> 10701 bytes
 api/0.6.174/resources/titlebar_end.gif             |   Bin 0 -> 849 bytes
 api/0.6.174/serialized-form.html                   |   572 +
 api/0.6.174/stylesheet.css                         |   474 +
 api/0.7.0-rc1/index.md                             |     4 +
 assets/2014-07-23-logo/image00.png                 |   Bin 0 -> 47543 bytes
 assets/2014-07-23-logo/image01.jpg                 |   Bin 0 -> 45410 bytes
 assets/2014-07-23-logo/image01.png                 |   Bin 0 -> 185468 bytes
 assets/2014-07-23-logo/image02.jpg                 |   Bin 0 -> 251479 bytes
 assets/2014-07-23-logo/image03.png                 |   Bin 0 -> 155421 bytes
 assets/2014-07-23-logo/image04.png                 |   Bin 0 -> 6308 bytes
 assets/2014-07-23-logo/image05.png                 |   Bin 0 -> 8154 bytes
 assets/druid-benchmark-100gb-median.png            |   Bin 0 -> 31994 bytes
 assets/druid-benchmark-1gb-median.png              |   Bin 0 -> 27752 bytes
 assets/druid-benchmark-scaling.png                 |   Bin 0 -> 26704 bytes
 assets/hll-cardinality-error.png                   |   Bin 0 -> 16357 bytes
 assets/js/druid.js                                 |     4 +
 blog/index.html                                    |    29 +
 community/cla.md                                   |    18 +
 community/index.md                                 |     4 +
 css/blogs.css                                      |    68 +
 css/bootstrap-pure.css                             |  1855 +++
 css/docs.css                                       |   126 +
 css/footer.css                                     |    28 +
 css/header.css                                     |    98 +
 css/index.css                                      |    58 +
 css/main.css                                       |   207 +
 css/news-list.css                                  |    63 +
 css/reset.css                                      |    44 +
 css/syntax.css                                     |   281 +
 css/variables.css                                  |     0
 .../About-Experimental-Features.html               |     4 +
 docs/0.13.0-incubating/Aggregations.html           |     4 +
 docs/0.13.0-incubating/ApproxHisto.html            |     4 +
 docs/0.13.0-incubating/Batch-ingestion.html        |     4 +
 .../Booting-a-production-cluster.html              |     4 +
 docs/0.13.0-incubating/Broker-Config.html          |     4 +
 docs/0.13.0-incubating/Broker.html                 |     4 +
 docs/0.13.0-incubating/Build-from-source.html      |     4 +
 docs/0.13.0-incubating/Cassandra-Deep-Storage.html |     4 +
 docs/0.13.0-incubating/Cluster-setup.html          |     4 +
 .../Concepts-and-Terminology.html                  |     4 +
 docs/0.13.0-incubating/Configuration.html          |     4 +
 docs/0.13.0-incubating/Coordinator-Config.html     |     4 +
 docs/0.13.0-incubating/Coordinator.html            |     4 +
 docs/0.13.0-incubating/DataSource.html             |     4 +
 .../0.13.0-incubating/DataSourceMetadataQuery.html |     4 +
 docs/0.13.0-incubating/Data_formats.html           |     4 +
 docs/0.13.0-incubating/Deep-Storage.html           |     4 +
 docs/0.13.0-incubating/Design.html                 |     4 +
 docs/0.13.0-incubating/DimensionSpecs.html         |     4 +
 docs/0.13.0-incubating/Druid-vs-Cassandra.html     |     4 +
 docs/0.13.0-incubating/Druid-vs-Elasticsearch.html |     4 +
 docs/0.13.0-incubating/Druid-vs-Hadoop.html        |     4 +
 .../Druid-vs-Impala-or-Shark.html                  |     4 +
 docs/0.13.0-incubating/Druid-vs-Redshift.html      |     4 +
 docs/0.13.0-incubating/Druid-vs-Spark.html         |     4 +
 docs/0.13.0-incubating/Druid-vs-Vertica.html       |     4 +
 docs/0.13.0-incubating/Evaluate.html               |     4 +
 docs/0.13.0-incubating/Examples.html               |     4 +
 docs/0.13.0-incubating/Filters.html                |     4 +
 docs/0.13.0-incubating/Firehose.html               |     4 +
 docs/0.13.0-incubating/GeographicQueries.html      |     4 +
 docs/0.13.0-incubating/Granularities.html          |     4 +
 docs/0.13.0-incubating/GroupByQuery.html           |     4 +
 docs/0.13.0-incubating/Hadoop-Configuration.html   |     4 +
 docs/0.13.0-incubating/Having.html                 |     4 +
 docs/0.13.0-incubating/Historical-Config.html      |     4 +
 docs/0.13.0-incubating/Historical.html             |     4 +
 docs/0.13.0-incubating/Including-Extensions.html   |     4 +
 .../0.13.0-incubating/Indexing-Service-Config.html |     4 +
 docs/0.13.0-incubating/Indexing-Service.html       |     4 +
 docs/0.13.0-incubating/Ingestion-FAQ.html          |     4 +
 docs/0.13.0-incubating/Ingestion-overview.html     |     4 +
 docs/0.13.0-incubating/Ingestion.html              |     4 +
 .../Integrating-Druid-With-Other-Technologies.html |     4 +
 docs/0.13.0-incubating/Libraries.html              |     4 +
 docs/0.13.0-incubating/LimitSpec.html              |     4 +
 docs/0.13.0-incubating/Logging.html                |     4 +
 docs/0.13.0-incubating/Metadata-storage.html       |     4 +
 docs/0.13.0-incubating/Metrics.html                |     4 +
 docs/0.13.0-incubating/Middlemanager.html          |     4 +
 docs/0.13.0-incubating/Modules.html                |     4 +
 docs/0.13.0-incubating/Other-Hadoop.html           |     4 +
 docs/0.13.0-incubating/Papers-and-talks.html       |     4 +
 docs/0.13.0-incubating/Peons.html                  |     4 +
 docs/0.13.0-incubating/Performance-FAQ.html        |     4 +
 docs/0.13.0-incubating/Plumber.html                |     4 +
 docs/0.13.0-incubating/Post-aggregations.html      |     4 +
 docs/0.13.0-incubating/Query-Context.html          |     4 +
 docs/0.13.0-incubating/Querying.html               |     4 +
 docs/0.13.0-incubating/Realtime-Config.html        |     4 +
 docs/0.13.0-incubating/Realtime-ingestion.html     |     4 +
 docs/0.13.0-incubating/Realtime.html               |     4 +
 docs/0.13.0-incubating/Recommendations.html        |     4 +
 docs/0.13.0-incubating/Rolling-Updates.html        |     4 +
 docs/0.13.0-incubating/Router.html                 |     4 +
 docs/0.13.0-incubating/Rule-Configuration.html     |     4 +
 docs/0.13.0-incubating/SearchQuery.html            |     4 +
 docs/0.13.0-incubating/SearchQuerySpec.html        |     4 +
 docs/0.13.0-incubating/SegmentMetadataQuery.html   |     4 +
 docs/0.13.0-incubating/Segments.html               |     4 +
 docs/0.13.0-incubating/SelectQuery.html            |     4 +
 .../Simple-Cluster-Configuration.html              |     4 +
 docs/0.13.0-incubating/Tasks.html                  |     4 +
 docs/0.13.0-incubating/TimeBoundaryQuery.html      |     4 +
 docs/0.13.0-incubating/TimeseriesQuery.html        |     4 +
 docs/0.13.0-incubating/TopNMetricSpec.html         |     4 +
 docs/0.13.0-incubating/TopNQuery.html              |     4 +
 .../Tutorial:-A-First-Look-at-Druid.html           |     4 +
 .../Tutorial:-All-About-Queries.html               |     4 +
 .../Tutorial:-Loading-Batch-Data.html              |     4 +
 .../Tutorial:-Loading-Streaming-Data.html          |     4 +
 .../Tutorial:-The-Druid-Cluster.html               |     4 +
 docs/0.13.0-incubating/Tutorials.html              |     4 +
 docs/0.13.0-incubating/Versioning.html             |     4 +
 docs/0.13.0-incubating/ZooKeeper.html              |     4 +
 docs/0.13.0-incubating/alerts.html                 |     4 +
 .../comparisons/druid-vs-cassandra.html            |     4 +
 .../comparisons/druid-vs-elasticsearch.md          |    40 +
 .../comparisons/druid-vs-hadoop.html               |     4 +
 .../comparisons/druid-vs-impala-or-shark.html      |     4 +
 .../comparisons/druid-vs-key-value.md              |    47 +
 .../0.13.0-incubating/comparisons/druid-vs-kudu.md |    40 +
 .../comparisons/druid-vs-redshift.md               |    63 +
 .../comparisons/druid-vs-spark.md                  |    43 +
 .../comparisons/druid-vs-sql-on-hadoop.md          |    83 +
 .../comparisons/druid-vs-vertica.html              |     4 +
 docs/0.13.0-incubating/configuration/auth.html     |     4 +
 docs/0.13.0-incubating/configuration/broker.html   |     4 +
 docs/0.13.0-incubating/configuration/caching.html  |     4 +
 .../configuration/coordinator.html                 |     4 +
 .../configuration/historical.html                  |     4 +
 docs/0.13.0-incubating/configuration/index.md      |  1592 ++
 .../configuration/indexing-service.html            |     4 +
 docs/0.13.0-incubating/configuration/logging.md    |    55 +
 docs/0.13.0-incubating/configuration/realtime.md   |    98 +
 .../configuration/simple-cluster.html              |     4 +
 .../dependencies/cassandra-deep-storage.md         |    62 +
 .../0.13.0-incubating/dependencies/deep-storage.md |    54 +
 .../dependencies/metadata-storage.md               |   129 +
 docs/0.13.0-incubating/dependencies/zookeeper.md   |    77 +
 docs/0.13.0-incubating/design/auth.md              |   166 +
 docs/0.13.0-incubating/design/broker.md            |    55 +
 .../design/concepts-and-terminology.html           |     4 +
 docs/0.13.0-incubating/design/coordinator.md       |   131 +
 docs/0.13.0-incubating/design/design.html          |     4 +
 docs/0.13.0-incubating/design/historical.md        |    59 +
 docs/0.13.0-incubating/design/index.md             |   214 +
 docs/0.13.0-incubating/design/indexing-service.md  |    65 +
 docs/0.13.0-incubating/design/middlemanager.md     |    44 +
 docs/0.13.0-incubating/design/overlord.md          |    67 +
 docs/0.13.0-incubating/design/peons.md             |    47 +
 docs/0.13.0-incubating/design/plumber.md           |    38 +
 docs/0.13.0-incubating/design/realtime.md          |    80 +
 docs/0.13.0-incubating/design/segments.md          |   205 +
 .../development/approximate-histograms.html        |     4 +
 docs/0.13.0-incubating/development/build.md        |    66 +
 .../development/datasketches-aggregators.html      |     4 +
 docs/0.13.0-incubating/development/experimental.md |    39 +
 .../extensions-contrib/ambari-metrics-emitter.md   |   100 +
 .../development/extensions-contrib/azure.md        |    95 +
 .../development/extensions-contrib/cassandra.md    |    31 +
 .../development/extensions-contrib/cloudfiles.md   |    95 +
 .../extensions-contrib/distinctcount.md            |    99 +
 .../development/extensions-contrib/google.md       |    89 +
 .../development/extensions-contrib/graphite.md     |   118 +
 .../development/extensions-contrib/influx.md       |    66 +
 .../extensions-contrib/kafka-emitter.md            |    55 +
 .../development/extensions-contrib/kafka-simple.md |    56 +
 .../extensions-contrib/materialized-view.md        |   134 +
 .../extensions-contrib/opentsdb-emitter.md         |    62 +
 .../development/extensions-contrib/orc.md          |   113 +
 .../development/extensions-contrib/parquet.md      |   178 +
 .../development/extensions-contrib/rabbitmq.md     |    81 +
 .../development/extensions-contrib/redis-cache.md  |    56 +
 .../development/extensions-contrib/rocketmq.md     |    29 +
 .../development/extensions-contrib/sqlserver.md    |    57 +
 .../development/extensions-contrib/statsd.md       |    68 +
 .../development/extensions-contrib/thrift.md       |   124 +
 .../development/extensions-contrib/time-min-max.md |   105 +
 .../extensions-core/approximate-histograms.md      |   175 +
 .../development/extensions-core/avro.md            |   222 +
 .../development/extensions-core/bloom-filter.md    |    66 +
 .../extensions-core/caffeine-cache.html            |     4 +
 .../extensions-core/datasketches-aggregators.html  |     4 +
 .../extensions-core/datasketches-extension.md      |    40 +
 .../extensions-core/datasketches-hll.md            |   102 +
 .../extensions-core/datasketches-quantiles.md      |   112 +
 .../extensions-core/datasketches-theta.md          |   273 +
 .../extensions-core/datasketches-tuple.md          |   175 +
 .../extensions-core/druid-basic-security.md        |   321 +
 .../development/extensions-core/druid-kerberos.md  |   123 +
 .../development/extensions-core/druid-lookups.md   |   150 +
 .../development/extensions-core/examples.md        |    45 +
 .../development/extensions-core/hdfs.md            |    56 +
 .../extensions-core/kafka-eight-firehose.md        |    54 +
 .../extensions-core/kafka-extraction-namespace.md  |    70 +
 .../development/extensions-core/kafka-ingestion.md |   412 +
 .../extensions-core/lookups-cached-global.md       |   379 +
 .../development/extensions-core/mysql.md           |   106 +
 .../development/extensions-core/postgresql.md      |    85 +
 .../development/extensions-core/protobuf.md        |   223 +
 .../development/extensions-core/s3.md              |    97 +
 .../extensions-core/simple-client-sslcontext.md    |    53 +
 .../development/extensions-core/stats.md           |   172 +
 .../development/extensions-core/test-stats.md      |   118 +
 docs/0.13.0-incubating/development/extensions.md   |   104 +
 docs/0.13.0-incubating/development/geo.md          |    93 +
 .../integrating-druid-with-other-technologies.md   |    39 +
 docs/0.13.0-incubating/development/javascript.md   |    75 +
 .../kafka-simple-consumer-firehose.html            |     4 +
 docs/0.13.0-incubating/development/libraries.html  |     4 +
 docs/0.13.0-incubating/development/modules.md      |   273 +
 docs/0.13.0-incubating/development/overview.md     |    76 +
 docs/0.13.0-incubating/development/router.md       |   241 +
 .../development/select-query.html                  |     4 +
 docs/0.13.0-incubating/development/versioning.md   |    47 +
 docs/0.13.0-incubating/index.html                  |     4 +
 .../0.13.0-incubating/ingestion/batch-ingestion.md |    39 +
 .../ingestion/command-line-hadoop-indexer.md       |    95 +
 docs/0.13.0-incubating/ingestion/compaction.md     |    88 +
 docs/0.13.0-incubating/ingestion/data-formats.md   |   205 +
 docs/0.13.0-incubating/ingestion/delete-data.md    |    50 +
 docs/0.13.0-incubating/ingestion/faq.md            |   106 +
 docs/0.13.0-incubating/ingestion/firehose.md       |   214 +
 docs/0.13.0-incubating/ingestion/flatten-json.md   |   180 +
 docs/0.13.0-incubating/ingestion/hadoop.md         |   361 +
 docs/0.13.0-incubating/ingestion/index.md          |   299 +
 docs/0.13.0-incubating/ingestion/ingestion-spec.md |   329 +
 docs/0.13.0-incubating/ingestion/ingestion.html    |     4 +
 .../ingestion/locking-and-priority.md              |    79 +
 docs/0.13.0-incubating/ingestion/misc-tasks.md     |    94 +
 docs/0.13.0-incubating/ingestion/native_tasks.md   |   555 +
 docs/0.13.0-incubating/ingestion/overview.html     |     4 +
 .../ingestion/realtime-ingestion.html              |     4 +
 docs/0.13.0-incubating/ingestion/reports.md        |   152 +
 docs/0.13.0-incubating/ingestion/schema-changes.md |    82 +
 docs/0.13.0-incubating/ingestion/schema-design.md  |   133 +
 .../ingestion/stream-ingestion.md                  |    56 +
 docs/0.13.0-incubating/ingestion/stream-pull.md    |   370 +
 docs/0.13.0-incubating/ingestion/stream-push.md    |   186 +
 docs/0.13.0-incubating/ingestion/tasks.md          |    74 +
 docs/0.13.0-incubating/ingestion/transform-spec.md |   104 +
 .../ingestion/update-existing-data.md              |   162 +
 docs/0.13.0-incubating/misc/cluster-setup.html     |     4 +
 docs/0.13.0-incubating/misc/evaluate.html          |     4 +
 docs/0.13.0-incubating/misc/math-expr.md           |   137 +
 docs/0.13.0-incubating/misc/papers-and-talks.md    |    43 +
 docs/0.13.0-incubating/misc/tasks.html             |     4 +
 docs/0.13.0-incubating/operations/alerts.md        |    38 +
 docs/0.13.0-incubating/operations/api-reference.md |   509 +
 docs/0.13.0-incubating/operations/dump-segment.md  |   116 +
 .../operations/http-compression.md                 |    34 +
 .../operations/including-extensions.md             |    87 +
 .../operations/insert-segment-to-db.md             |   156 +
 docs/0.13.0-incubating/operations/metrics.md       |   256 +
 .../0.13.0-incubating/operations/multitenancy.html |     4 +
 docs/0.13.0-incubating/operations/other-hadoop.md  |   300 +
 .../operations/password-provider.md                |    55 +
 .../operations/performance-faq.md                  |    95 +
 docs/0.13.0-incubating/operations/pull-deps.md     |   151 +
 .../operations/recommendations.md                  |    93 +
 docs/0.13.0-incubating/operations/reset-cluster.md |    76 +
 .../operations/rolling-updates.md                  |   102 +
 .../operations/rule-configuration.md               |   220 +
 .../operations/segment-optimization.md             |    46 +
 docs/0.13.0-incubating/operations/tls-support.md   |    92 +
 .../operations/use_sbt_to_build_fat_jar.md         |   128 +
 docs/0.13.0-incubating/querying/aggregations.md    |   408 +
 docs/0.13.0-incubating/querying/caching.md         |    46 +
 docs/0.13.0-incubating/querying/datasource.md      |    65 +
 .../querying/datasourcemetadataquery.md            |    57 +
 docs/0.13.0-incubating/querying/dimensionspecs.md  |   545 +
 docs/0.13.0-incubating/querying/filters.md         |   519 +
 docs/0.13.0-incubating/querying/granularities.md   |   438 +
 docs/0.13.0-incubating/querying/groupbyquery.md    |   445 +
 docs/0.13.0-incubating/querying/having.md          |   261 +
 docs/0.13.0-incubating/querying/joins.md           |    55 +
 docs/0.13.0-incubating/querying/limitspec.md       |    55 +
 docs/0.13.0-incubating/querying/lookups.md         |   441 +
 .../querying/multi-value-dimensions.md             |   340 +
 docs/0.13.0-incubating/querying/multitenancy.md    |    99 +
 .../querying/post-aggregations.md                  |   223 +
 docs/0.13.0-incubating/querying/query-context.md   |    62 +
 docs/0.13.0-incubating/querying/querying.md        |   125 +
 docs/0.13.0-incubating/querying/scan-query.md      |   196 +
 docs/0.13.0-incubating/querying/searchquery.md     |   141 +
 docs/0.13.0-incubating/querying/searchqueryspec.md |    77 +
 .../querying/segmentmetadataquery.md               |   188 +
 docs/0.13.0-incubating/querying/select-query.md    |   259 +
 docs/0.13.0-incubating/querying/sorting-orders.md  |    49 +
 docs/0.13.0-incubating/querying/sql.md             |   674 +
 .../querying/timeboundaryquery.md                  |    58 +
 docs/0.13.0-incubating/querying/timeseriesquery.md |   163 +
 docs/0.13.0-incubating/querying/topnmetricspec.md  |    87 +
 docs/0.13.0-incubating/querying/topnquery.md       |   257 +
 docs/0.13.0-incubating/querying/virtual-columns.md |    80 +
 docs/0.13.0-incubating/toc.md                      |   173 +
 .../tutorials/booting-a-production-cluster.html    |     4 +
 docs/0.13.0-incubating/tutorials/cluster.md        |   383 +
 docs/0.13.0-incubating/tutorials/examples.html     |     4 +
 docs/0.13.0-incubating/tutorials/firewall.html     |     4 +
 .../tutorials/img/tutorial-batch-01.png            |   Bin 0 -> 90007 bytes
 .../tutorials/img/tutorial-compaction-01.png       |   Bin 0 -> 225171 bytes
 .../tutorials/img/tutorial-compaction-02.png       |   Bin 0 -> 29139 bytes
 .../tutorials/img/tutorial-deletion-01.png         |   Bin 0 -> 110687 bytes
 .../tutorials/img/tutorial-deletion-02.png         |   Bin 0 -> 130498 bytes
 .../tutorials/img/tutorial-retention-00.png        |   Bin 0 -> 138304 bytes
 .../tutorials/img/tutorial-retention-01.png        |   Bin 0 -> 218841 bytes
 .../tutorials/img/tutorial-retention-02.png        |   Bin 0 -> 77995 bytes
 .../tutorials/img/tutorial-retention-03.png        |   Bin 0 -> 138277 bytes
 docs/0.13.0-incubating/tutorials/index.md          |   202 +
 docs/0.13.0-incubating/tutorials/quickstart.html   |     4 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |     4 +
 .../tutorials/tutorial-all-about-queries.html      |     4 +
 .../tutorials/tutorial-batch-hadoop.md             |   259 +
 docs/0.13.0-incubating/tutorials/tutorial-batch.md |   179 +
 .../tutorials/tutorial-compaction.md               |   126 +
 .../tutorials/tutorial-delete-data.md              |   176 +
 .../tutorials/tutorial-ingestion-spec.md           |   662 +
 docs/0.13.0-incubating/tutorials/tutorial-kafka.md |   105 +
 .../tutorials/tutorial-loading-batch-data.html     |     4 +
 .../tutorials/tutorial-loading-streaming-data.html |     4 +
 docs/0.13.0-incubating/tutorials/tutorial-query.md |   300 +
 .../tutorials/tutorial-retention.md                |   111 +
 .../0.13.0-incubating/tutorials/tutorial-rollup.md |   200 +
 .../tutorials/tutorial-the-druid-cluster.html      |     4 +
 .../tutorials/tutorial-tranquility.md              |   104 +
 .../tutorials/tutorial-transform-spec.md           |   158 +
 .../tutorials/tutorial-update-data.md              |   169 +
 .../About-Experimental-Features.html               |     4 +
 docs/0.14.0-incubating/Aggregations.html           |     4 +
 docs/0.14.0-incubating/ApproxHisto.html            |     4 +
 docs/0.14.0-incubating/Batch-ingestion.html        |     4 +
 .../Booting-a-production-cluster.html              |     4 +
 docs/0.14.0-incubating/Broker-Config.html          |     4 +
 docs/0.14.0-incubating/Broker.html                 |     4 +
 docs/0.14.0-incubating/Build-from-source.html      |     4 +
 docs/0.14.0-incubating/Cassandra-Deep-Storage.html |     4 +
 docs/0.14.0-incubating/Cluster-setup.html          |     4 +
 docs/0.14.0-incubating/Compute.html                |     4 +
 .../Concepts-and-Terminology.html                  |     4 +
 docs/0.14.0-incubating/Configuration.html          |     4 +
 docs/0.14.0-incubating/Contribute.html             |     4 +
 docs/0.14.0-incubating/Coordinator-Config.html     |     4 +
 docs/0.14.0-incubating/Coordinator.html            |     4 +
 docs/0.14.0-incubating/DataSource.html             |     4 +
 .../0.14.0-incubating/DataSourceMetadataQuery.html |     4 +
 docs/0.14.0-incubating/Data_formats.html           |     4 +
 docs/0.14.0-incubating/Deep-Storage.html           |     4 +
 docs/0.14.0-incubating/Design.html                 |     4 +
 docs/0.14.0-incubating/DimensionSpecs.html         |     4 +
 docs/0.14.0-incubating/Download.html               |     4 +
 .../Druid-Personal-Demo-Cluster.html               |     4 +
 docs/0.14.0-incubating/Druid-vs-Cassandra.html     |     4 +
 docs/0.14.0-incubating/Druid-vs-Elasticsearch.html |     4 +
 docs/0.14.0-incubating/Druid-vs-Hadoop.html        |     4 +
 .../Druid-vs-Impala-or-Shark.html                  |     4 +
 docs/0.14.0-incubating/Druid-vs-Redshift.html      |     4 +
 docs/0.14.0-incubating/Druid-vs-Spark.html         |     4 +
 docs/0.14.0-incubating/Druid-vs-Vertica.html       |     4 +
 docs/0.14.0-incubating/Evaluate.html               |     4 +
 docs/0.14.0-incubating/Examples.html               |     4 +
 docs/0.14.0-incubating/Filters.html                |     4 +
 docs/0.14.0-incubating/Firehose.html               |     4 +
 docs/0.14.0-incubating/GeographicQueries.html      |     4 +
 docs/0.14.0-incubating/Granularities.html          |     4 +
 docs/0.14.0-incubating/GroupByQuery.html           |     4 +
 docs/0.14.0-incubating/Hadoop-Configuration.html   |     4 +
 docs/0.14.0-incubating/Having.html                 |     4 +
 docs/0.14.0-incubating/Historical-Config.html      |     4 +
 docs/0.14.0-incubating/Historical.html             |     4 +
 docs/0.14.0-incubating/Home.html                   |     4 +
 docs/0.14.0-incubating/Including-Extensions.html   |     4 +
 .../0.14.0-incubating/Indexing-Service-Config.html |     4 +
 docs/0.14.0-incubating/Indexing-Service.html       |     4 +
 docs/0.14.0-incubating/Ingestion-FAQ.html          |     4 +
 docs/0.14.0-incubating/Ingestion-overview.html     |     4 +
 docs/0.14.0-incubating/Ingestion.html              |     4 +
 .../Integrating-Druid-With-Other-Technologies.html |     4 +
 docs/0.14.0-incubating/Kafka-Eight.html            |     4 +
 docs/0.14.0-incubating/Libraries.html              |     4 +
 docs/0.14.0-incubating/LimitSpec.html              |     4 +
 docs/0.14.0-incubating/Loading-Your-Data.html      |     4 +
 docs/0.14.0-incubating/Logging.html                |     4 +
 docs/0.14.0-incubating/Master.html                 |     4 +
 docs/0.14.0-incubating/Metadata-storage.html       |     4 +
 docs/0.14.0-incubating/Metrics.html                |     4 +
 docs/0.14.0-incubating/Middlemanager.html          |     4 +
 docs/0.14.0-incubating/Modules.html                |     4 +
 docs/0.14.0-incubating/MySQL.html                  |     4 +
 docs/0.14.0-incubating/OrderBy.html                |     4 +
 docs/0.14.0-incubating/Other-Hadoop.html           |     4 +
 docs/0.14.0-incubating/Papers-and-talks.html       |     4 +
 docs/0.14.0-incubating/Peons.html                  |     4 +
 docs/0.14.0-incubating/Performance-FAQ.html        |     4 +
 docs/0.14.0-incubating/Plumber.html                |     4 +
 docs/0.14.0-incubating/Post-aggregations.html      |     4 +
 .../Production-Cluster-Configuration.html          |     4 +
 docs/0.14.0-incubating/Query-Context.html          |     4 +
 docs/0.14.0-incubating/Querying-your-data.html     |     4 +
 docs/0.14.0-incubating/Querying.html               |     4 +
 docs/0.14.0-incubating/Realtime-Config.html        |     4 +
 docs/0.14.0-incubating/Realtime-ingestion.html     |     4 +
 docs/0.14.0-incubating/Realtime.html               |     4 +
 docs/0.14.0-incubating/Recommendations.html        |     4 +
 docs/0.14.0-incubating/Rolling-Updates.html        |     4 +
 docs/0.14.0-incubating/Router.html                 |     4 +
 docs/0.14.0-incubating/Rule-Configuration.html     |     4 +
 docs/0.14.0-incubating/SearchQuery.html            |     4 +
 docs/0.14.0-incubating/SearchQuerySpec.html        |     4 +
 docs/0.14.0-incubating/SegmentMetadataQuery.html   |     4 +
 docs/0.14.0-incubating/Segments.html               |     4 +
 docs/0.14.0-incubating/SelectQuery.html            |     4 +
 .../Simple-Cluster-Configuration.html              |     4 +
 docs/0.14.0-incubating/Spatial-Filters.html        |     4 +
 docs/0.14.0-incubating/Spatial-Indexing.html       |     4 +
 .../Stand-Alone-With-Riak-CS.html                  |     4 +
 docs/0.14.0-incubating/Support.html                |     4 +
 docs/0.14.0-incubating/Tasks.html                  |     4 +
 docs/0.14.0-incubating/Thanks.html                 |     4 +
 docs/0.14.0-incubating/TimeBoundaryQuery.html      |     4 +
 docs/0.14.0-incubating/TimeseriesQuery.html        |     4 +
 docs/0.14.0-incubating/TopNMetricSpec.html         |     4 +
 docs/0.14.0-incubating/TopNQuery.html              |     4 +
 .../Tutorial-A-First-Look-at-Druid.html            |     4 +
 .../Tutorial-All-About-Queries.html                |     4 +
 .../Tutorial-Loading-Batch-Data.html               |     4 +
 .../Tutorial-Loading-Streaming-Data.html           |     4 +
 .../Tutorial-The-Druid-Cluster.html                |     4 +
 .../Tutorial:-A-First-Look-at-Druid.html           |     4 +
 .../Tutorial:-All-About-Queries.html               |     4 +
 .../Tutorial:-Loading-Batch-Data.html              |     4 +
 .../Tutorial:-Loading-Streaming-Data.html          |     4 +
 .../Tutorial:-Loading-Your-Data-Part-1.html        |     4 +
 .../Tutorial:-Loading-Your-Data-Part-2.html        |     4 +
 .../Tutorial:-The-Druid-Cluster.html               |     4 +
 docs/0.14.0-incubating/Tutorial:-Webstream.html    |     4 +
 docs/0.14.0-incubating/Tutorials.html              |     4 +
 docs/0.14.0-incubating/Twitter-Tutorial.html       |     4 +
 docs/0.14.0-incubating/Versioning.html             |     4 +
 docs/0.14.0-incubating/ZooKeeper.html              |     4 +
 docs/0.14.0-incubating/alerts.html                 |     4 +
 .../comparisons/druid-vs-cassandra.html            |     4 +
 .../comparisons/druid-vs-elasticsearch.md          |    40 +
 .../comparisons/druid-vs-hadoop.html               |     4 +
 .../comparisons/druid-vs-impala-or-shark.html      |     4 +
 .../comparisons/druid-vs-key-value.md              |    47 +
 .../0.14.0-incubating/comparisons/druid-vs-kudu.md |    40 +
 .../comparisons/druid-vs-redshift.md               |    63 +
 .../comparisons/druid-vs-spark.md                  |    43 +
 .../comparisons/druid-vs-sql-on-hadoop.md          |    83 +
 .../comparisons/druid-vs-vertica.html              |     4 +
 docs/0.14.0-incubating/configuration/auth.html     |     4 +
 docs/0.14.0-incubating/configuration/broker.html   |     4 +
 docs/0.14.0-incubating/configuration/caching.html  |     4 +
 .../configuration/coordinator.html                 |     4 +
 docs/0.14.0-incubating/configuration/hadoop.html   |     4 +
 .../configuration/historical.html                  |     4 +
 docs/0.14.0-incubating/configuration/index.md      |  1665 ++
 .../configuration/indexing-service.html            |     4 +
 docs/0.14.0-incubating/configuration/logging.md    |    55 +
 .../configuration/production-cluster.html          |     4 +
 docs/0.14.0-incubating/configuration/realtime.md   |    98 +
 .../configuration/simple-cluster.html              |     4 +
 .../0.14.0-incubating/configuration/zookeeper.html |     4 +
 .../dependencies/cassandra-deep-storage.md         |    62 +
 .../0.14.0-incubating/dependencies/deep-storage.md |    54 +
 .../dependencies/metadata-storage.md               |   141 +
 docs/0.14.0-incubating/dependencies/zookeeper.md   |    77 +
 docs/0.14.0-incubating/design/auth.md              |   168 +
 docs/0.14.0-incubating/design/broker.md            |    55 +
 .../design/concepts-and-terminology.html           |     4 +
 docs/0.14.0-incubating/design/coordinator.md       |   132 +
 docs/0.14.0-incubating/design/design.html          |     4 +
 docs/0.14.0-incubating/design/historical.md        |    59 +
 docs/0.14.0-incubating/design/index.md             |   203 +
 docs/0.14.0-incubating/design/indexing-service.md  |    65 +
 docs/0.14.0-incubating/design/middlemanager.md     |    44 +
 docs/0.14.0-incubating/design/overlord.md          |    63 +
 docs/0.14.0-incubating/design/peons.md             |    47 +
 docs/0.14.0-incubating/design/plumber.md           |    38 +
 docs/0.14.0-incubating/design/processes.md         |   131 +
 docs/0.14.0-incubating/design/realtime.md          |    80 +
 docs/0.14.0-incubating/design/segments.md          |   205 +
 .../development/approximate-histograms.html        |     4 +
 docs/0.14.0-incubating/development/build.md        |    66 +
 .../development/community-extensions/azure.html    |     4 +
 .../community-extensions/cassandra.html            |     4 +
 .../community-extensions/cloudfiles.html           |     4 +
 .../development/community-extensions/graphite.html |     4 +
 .../community-extensions/kafka-simple.html         |     4 +
 .../development/community-extensions/rabbitmq.html |     4 +
 .../development/datasketches-aggregators.html      |     4 +
 docs/0.14.0-incubating/development/experimental.md |    39 +
 .../extensions-contrib/ambari-metrics-emitter.md   |   100 +
 .../development/extensions-contrib/azure.md        |    95 +
 .../development/extensions-contrib/cassandra.md    |    31 +
 .../development/extensions-contrib/cloudfiles.md   |    97 +
 .../extensions-contrib/distinctcount.md            |    99 +
 .../development/extensions-contrib/google.md       |    89 +
 .../development/extensions-contrib/graphite.md     |   118 +
 .../development/extensions-contrib/influx.md       |    66 +
 .../extensions-contrib/kafka-emitter.md            |    55 +
 .../development/extensions-contrib/kafka-simple.md |    56 +
 .../extensions-contrib/materialized-view.md        |   134 +
 .../extensions-contrib/opentsdb-emitter.md         |    62 +
 .../development/extensions-contrib/orc.md          |   113 +
 .../development/extensions-contrib/parquet.html    |     4 +
 .../development/extensions-contrib/rabbitmq.md     |    81 +
 .../development/extensions-contrib/redis-cache.md  |    58 +
 .../development/extensions-contrib/rocketmq.md     |    29 +
 .../development/extensions-contrib/scan-query.html |     4 +
 .../development/extensions-contrib/sqlserver.md    |    57 +
 .../development/extensions-contrib/statsd.md       |    70 +
 .../development/extensions-contrib/thrift.md       |   128 +
 .../development/extensions-contrib/time-min-max.md |   105 +
 .../extensions-core/approximate-histograms.md      |   318 +
 .../development/extensions-core/avro.md            |   222 +
 .../development/extensions-core/bloom-filter.md    |   179 +
 .../extensions-core/caffeine-cache.html            |     4 +
 .../extensions-core/datasketches-aggregators.html  |     4 +
 .../extensions-core/datasketches-extension.md      |    40 +
 .../extensions-core/datasketches-hll.md            |   102 +
 .../extensions-core/datasketches-quantiles.md      |   112 +
 .../extensions-core/datasketches-theta.md          |   273 +
 .../extensions-core/datasketches-tuple.md          |   175 +
 .../extensions-core/druid-basic-security.md        |   321 +
 .../development/extensions-core/druid-kerberos.md  |   123 +
 .../development/extensions-core/druid-lookups.md   |   150 +
 .../development/extensions-core/examples.md        |    45 +
 .../development/extensions-core/hdfs.md            |    56 +
 .../extensions-core/kafka-eight-firehose.md        |    54 +
 .../extensions-core/kafka-extraction-namespace.md  |    70 +
 .../development/extensions-core/kafka-ingestion.md |   347 +
 .../extensions-core/kinesis-ingestion.md           |   393 +
 .../extensions-core/lookups-cached-global.md       |   379 +
 .../development/extensions-core/mysql.md           |   109 +
 .../extensions-core/namespaced-lookup.html         |     4 +
 .../development/extensions-core/parquet.md         |   220 +
 .../development/extensions-core/postgresql.md      |    85 +
 .../development/extensions-core/protobuf.md        |   223 +
 .../development/extensions-core/s3.md              |    98 +
 .../extensions-core/simple-client-sslcontext.md    |    54 +
 .../development/extensions-core/stats.md           |   172 +
 .../development/extensions-core/test-stats.md      |   118 +
 docs/0.14.0-incubating/development/extensions.md   |   105 +
 docs/0.14.0-incubating/development/geo.md          |    93 +
 .../integrating-druid-with-other-technologies.md   |    39 +
 docs/0.14.0-incubating/development/javascript.md   |    75 +
 .../kafka-simple-consumer-firehose.html            |     4 +
 docs/0.14.0-incubating/development/libraries.html  |     4 +
 docs/0.14.0-incubating/development/modules.md      |   273 +
 docs/0.14.0-incubating/development/overview.md     |    76 +
 docs/0.14.0-incubating/development/router.md       |   244 +
 .../development/select-query.html                  |     4 +
 docs/0.14.0-incubating/development/versioning.md   |    47 +
 docs/0.14.0-incubating/index.html                  |     4 +
 .../0.14.0-incubating/ingestion/batch-ingestion.md |    39 +
 .../ingestion/command-line-hadoop-indexer.md       |    95 +
 docs/0.14.0-incubating/ingestion/compaction.md     |   102 +
 docs/0.14.0-incubating/ingestion/data-formats.md   |   205 +
 docs/0.14.0-incubating/ingestion/delete-data.md    |    50 +
 docs/0.14.0-incubating/ingestion/faq.md            |   106 +
 docs/0.14.0-incubating/ingestion/firehose.md       |   214 +
 docs/0.14.0-incubating/ingestion/flatten-json.md   |   180 +
 .../ingestion/hadoop-vs-native-batch.md            |    43 +
 docs/0.14.0-incubating/ingestion/hadoop.md         |   363 +
 docs/0.14.0-incubating/ingestion/index.md          |   306 +
 docs/0.14.0-incubating/ingestion/ingestion-spec.md |   332 +
 docs/0.14.0-incubating/ingestion/ingestion.html    |     4 +
 .../ingestion/locking-and-priority.md              |    79 +
 docs/0.14.0-incubating/ingestion/misc-tasks.md     |    94 +
 docs/0.14.0-incubating/ingestion/native-batch.html |     4 +
 docs/0.14.0-incubating/ingestion/native_tasks.md   |   620 +
 docs/0.14.0-incubating/ingestion/overview.html     |     4 +
 .../ingestion/realtime-ingestion.html              |     4 +
 docs/0.14.0-incubating/ingestion/reports.md        |   152 +
 docs/0.14.0-incubating/ingestion/schema-changes.md |    82 +
 docs/0.14.0-incubating/ingestion/schema-design.md  |   338 +
 .../ingestion/stream-ingestion.md                  |    56 +
 docs/0.14.0-incubating/ingestion/stream-pull.md    |   376 +
 docs/0.14.0-incubating/ingestion/stream-push.md    |   186 +
 docs/0.14.0-incubating/ingestion/tasks.md          |    78 +
 docs/0.14.0-incubating/ingestion/transform-spec.md |   104 +
 .../ingestion/update-existing-data.md              |   162 +
 docs/0.14.0-incubating/misc/cluster-setup.html     |     4 +
 docs/0.14.0-incubating/misc/evaluate.html          |     4 +
 docs/0.14.0-incubating/misc/math-expr.md           |   138 +
 docs/0.14.0-incubating/misc/papers-and-talks.md    |    43 +
 docs/0.14.0-incubating/misc/tasks.html             |     4 +
 docs/0.14.0-incubating/operations/alerts.md        |    38 +
 docs/0.14.0-incubating/operations/api-reference.md |   736 +
 docs/0.14.0-incubating/operations/druid-console.md |    90 +
 docs/0.14.0-incubating/operations/dump-segment.md  |   116 +
 .../operations/http-compression.md                 |    34 +
 .../operations/img/01-home-view.png                |   Bin 0 -> 60287 bytes
 .../operations/img/02-datasources.png              |   Bin 0 -> 163824 bytes
 .../operations/img/03-retention.png                |   Bin 0 -> 123857 bytes
 .../operations/img/04-segments.png                 |   Bin 0 -> 125873 bytes
 .../operations/img/05-tasks-1.png                  |   Bin 0 -> 101635 bytes
 .../operations/img/06-tasks-2.png                  |   Bin 0 -> 221977 bytes
 .../operations/img/07-tasks-3.png                  |   Bin 0 -> 195170 bytes
 .../operations/img/08-servers.png                  |   Bin 0 -> 119310 bytes
 docs/0.14.0-incubating/operations/img/09-sql.png   |   Bin 0 -> 80580 bytes
 .../operations/including-extensions.md             |    87 +
 .../operations/insert-segment-to-db.html           |     4 +
 .../operations/insert-segment-to-db.md             |   156 +
 .../0.14.0-incubating/operations/management-uis.md |    80 +
 docs/0.14.0-incubating/operations/metrics.md       |   279 +
 .../0.14.0-incubating/operations/multitenancy.html |     4 +
 docs/0.14.0-incubating/operations/other-hadoop.md  |   300 +
 .../operations/password-provider.md                |    55 +
 .../operations/performance-faq.md                  |    95 +
 docs/0.14.0-incubating/operations/pull-deps.md     |   151 +
 .../operations/recommendations.md                  |    93 +
 docs/0.14.0-incubating/operations/reset-cluster.md |    76 +
 .../operations/rolling-updates.md                  |   102 +
 .../operations/rule-configuration.md               |   242 +
 .../operations/segment-optimization.md             |   100 +
 docs/0.14.0-incubating/operations/tls-support.md   |   105 +
 .../operations/use_sbt_to_build_fat_jar.md         |   128 +
 docs/0.14.0-incubating/querying/aggregations.md    |   361 +
 docs/0.14.0-incubating/querying/caching.md         |    46 +
 docs/0.14.0-incubating/querying/datasource.md      |    65 +
 .../querying/datasourcemetadataquery.md            |    57 +
 docs/0.14.0-incubating/querying/dimensionspecs.md  |   545 +
 docs/0.14.0-incubating/querying/filters.md         |   521 +
 docs/0.14.0-incubating/querying/granularities.md   |   438 +
 docs/0.14.0-incubating/querying/groupbyquery.md    |   445 +
 docs/0.14.0-incubating/querying/having.md          |   261 +
 docs/0.14.0-incubating/querying/hll-old.md         |   142 +
 docs/0.14.0-incubating/querying/joins.md           |    55 +
 docs/0.14.0-incubating/querying/limitspec.md       |    55 +
 docs/0.14.0-incubating/querying/lookups.md         |   444 +
 .../querying/multi-value-dimensions.md             |   340 +
 docs/0.14.0-incubating/querying/multitenancy.md    |    99 +
 docs/0.14.0-incubating/querying/optimizations.html |     4 +
 .../querying/post-aggregations.md                  |   223 +
 docs/0.14.0-incubating/querying/query-context.md   |    62 +
 docs/0.14.0-incubating/querying/querying.md        |   125 +
 docs/0.14.0-incubating/querying/scan-query.md      |   196 +
 docs/0.14.0-incubating/querying/searchquery.md     |   141 +
 docs/0.14.0-incubating/querying/searchqueryspec.md |    77 +
 .../querying/segmentmetadataquery.md               |   188 +
 docs/0.14.0-incubating/querying/select-query.md    |   259 +
 docs/0.14.0-incubating/querying/sorting-orders.md  |    54 +
 docs/0.14.0-incubating/querying/sql.md             |   718 +
 .../querying/timeboundaryquery.md                  |    58 +
 docs/0.14.0-incubating/querying/timeseriesquery.md |   163 +
 docs/0.14.0-incubating/querying/topnmetricspec.md  |    87 +
 docs/0.14.0-incubating/querying/topnquery.md       |   257 +
 docs/0.14.0-incubating/querying/virtual-columns.md |    80 +
 docs/0.14.0-incubating/toc.md                      |   173 +
 .../tutorials/booting-a-production-cluster.html    |     4 +
 docs/0.14.0-incubating/tutorials/cluster.md        |   408 +
 docs/0.14.0-incubating/tutorials/examples.html     |     4 +
 docs/0.14.0-incubating/tutorials/firewall.html     |     4 +
 .../tutorials/img/tutorial-batch-01.png            |   Bin 0 -> 54435 bytes
 .../tutorials/img/tutorial-compaction-01.png       |   Bin 0 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       |   Bin 0 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       |   Bin 0 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       |   Bin 0 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       |   Bin 0 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       |   Bin 0 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       |   Bin 0 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       |   Bin 0 -> 43257 bytes
 .../tutorials/img/tutorial-deletion-01.png         |   Bin 0 -> 72062 bytes
 .../tutorials/img/tutorial-deletion-02.png         |   Bin 0 -> 200459 bytes
 .../tutorials/img/tutorial-retention-00.png        |   Bin 0 -> 138304 bytes
 .../tutorials/img/tutorial-retention-01.png        |   Bin 0 -> 53955 bytes
 .../tutorials/img/tutorial-retention-02.png        |   Bin 0 -> 410930 bytes
 .../tutorials/img/tutorial-retention-03.png        |   Bin 0 -> 44144 bytes
 .../tutorials/img/tutorial-retention-04.png        |   Bin 0 -> 67493 bytes
 .../tutorials/img/tutorial-retention-05.png        |   Bin 0 -> 61639 bytes
 .../tutorials/img/tutorial-retention-06.png        |   Bin 0 -> 233034 bytes
 docs/0.14.0-incubating/tutorials/index.md          |   202 +
 .../tutorials/ingestion-streams.html               |     4 +
 docs/0.14.0-incubating/tutorials/ingestion.html    |     4 +
 docs/0.14.0-incubating/tutorials/quickstart.html   |     4 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |     4 +
 .../tutorials/tutorial-all-about-queries.html      |     4 +
 .../tutorials/tutorial-batch-hadoop.md             |   259 +
 docs/0.14.0-incubating/tutorials/tutorial-batch.md |   179 +
 .../tutorials/tutorial-compaction.md               |   176 +
 .../tutorials/tutorial-delete-data.md              |   178 +
 .../tutorials/tutorial-ingestion-spec.md           |   662 +
 docs/0.14.0-incubating/tutorials/tutorial-kafka.md |   107 +
 .../tutorials/tutorial-loading-batch-data.html     |     4 +
 .../tutorials/tutorial-loading-streaming-data.html |     4 +
 docs/0.14.0-incubating/tutorials/tutorial-query.md |   300 +
 .../tutorials/tutorial-retention.md                |   115 +
 .../0.14.0-incubating/tutorials/tutorial-rollup.md |   200 +
 .../tutorials/tutorial-the-druid-cluster.html      |     4 +
 .../tutorials/tutorial-tranquility.md              |   104 +
 .../tutorials/tutorial-transform-spec.md           |   158 +
 .../tutorials/tutorial-update-data.md              |   169 +
 .../About-Experimental-Features.html               |     4 +
 docs/0.14.1-incubating/Aggregations.html           |     4 +
 docs/0.14.1-incubating/ApproxHisto.html            |     4 +
 docs/0.14.1-incubating/Batch-ingestion.html        |     4 +
 .../Booting-a-production-cluster.html              |     4 +
 docs/0.14.1-incubating/Broker-Config.html          |     4 +
 docs/0.14.1-incubating/Broker.html                 |     4 +
 docs/0.14.1-incubating/Build-from-source.html      |     4 +
 docs/0.14.1-incubating/Cassandra-Deep-Storage.html |     4 +
 docs/0.14.1-incubating/Cluster-setup.html          |     4 +
 docs/0.14.1-incubating/Compute.html                |     4 +
 .../Concepts-and-Terminology.html                  |     4 +
 docs/0.14.1-incubating/Configuration.html          |     4 +
 docs/0.14.1-incubating/Contribute.html             |     4 +
 docs/0.14.1-incubating/Coordinator-Config.html     |     4 +
 docs/0.14.1-incubating/Coordinator.html            |     4 +
 docs/0.14.1-incubating/DataSource.html             |     4 +
 .../0.14.1-incubating/DataSourceMetadataQuery.html |     4 +
 docs/0.14.1-incubating/Data_formats.html           |     4 +
 docs/0.14.1-incubating/Deep-Storage.html           |     4 +
 docs/0.14.1-incubating/Design.html                 |     4 +
 docs/0.14.1-incubating/DimensionSpecs.html         |     4 +
 docs/0.14.1-incubating/Download.html               |     4 +
 .../Druid-Personal-Demo-Cluster.html               |     4 +
 docs/0.14.1-incubating/Druid-vs-Cassandra.html     |     4 +
 docs/0.14.1-incubating/Druid-vs-Elasticsearch.html |     4 +
 docs/0.14.1-incubating/Druid-vs-Hadoop.html        |     4 +
 .../Druid-vs-Impala-or-Shark.html                  |     4 +
 docs/0.14.1-incubating/Druid-vs-Redshift.html      |     4 +
 docs/0.14.1-incubating/Druid-vs-Spark.html         |     4 +
 docs/0.14.1-incubating/Druid-vs-Vertica.html       |     4 +
 docs/0.14.1-incubating/Evaluate.html               |     4 +
 docs/0.14.1-incubating/Examples.html               |     4 +
 docs/0.14.1-incubating/Filters.html                |     4 +
 docs/0.14.1-incubating/Firehose.html               |     4 +
 docs/0.14.1-incubating/GeographicQueries.html      |     4 +
 docs/0.14.1-incubating/Granularities.html          |     4 +
 docs/0.14.1-incubating/GroupByQuery.html           |     4 +
 docs/0.14.1-incubating/Hadoop-Configuration.html   |     4 +
 docs/0.14.1-incubating/Having.html                 |     4 +
 docs/0.14.1-incubating/Historical-Config.html      |     4 +
 docs/0.14.1-incubating/Historical.html             |     4 +
 docs/0.14.1-incubating/Home.html                   |     4 +
 docs/0.14.1-incubating/Including-Extensions.html   |     4 +
 .../0.14.1-incubating/Indexing-Service-Config.html |     4 +
 docs/0.14.1-incubating/Indexing-Service.html       |     4 +
 docs/0.14.1-incubating/Ingestion-FAQ.html          |     4 +
 docs/0.14.1-incubating/Ingestion-overview.html     |     4 +
 docs/0.14.1-incubating/Ingestion.html              |     4 +
 .../Integrating-Druid-With-Other-Technologies.html |     4 +
 docs/0.14.1-incubating/Kafka-Eight.html            |     4 +
 docs/0.14.1-incubating/Libraries.html              |     4 +
 docs/0.14.1-incubating/LimitSpec.html              |     4 +
 docs/0.14.1-incubating/Loading-Your-Data.html      |     4 +
 docs/0.14.1-incubating/Logging.html                |     4 +
 docs/0.14.1-incubating/Master.html                 |     4 +
 docs/0.14.1-incubating/Metadata-storage.html       |     4 +
 docs/0.14.1-incubating/Metrics.html                |     4 +
 docs/0.14.1-incubating/Middlemanager.html          |     4 +
 docs/0.14.1-incubating/Modules.html                |     4 +
 docs/0.14.1-incubating/MySQL.html                  |     4 +
 docs/0.14.1-incubating/OrderBy.html                |     4 +
 docs/0.14.1-incubating/Other-Hadoop.html           |     4 +
 docs/0.14.1-incubating/Papers-and-talks.html       |     4 +
 docs/0.14.1-incubating/Peons.html                  |     4 +
 docs/0.14.1-incubating/Performance-FAQ.html        |     4 +
 docs/0.14.1-incubating/Plumber.html                |     4 +
 docs/0.14.1-incubating/Post-aggregations.html      |     4 +
 .../Production-Cluster-Configuration.html          |     4 +
 docs/0.14.1-incubating/Query-Context.html          |     4 +
 docs/0.14.1-incubating/Querying-your-data.html     |     4 +
 docs/0.14.1-incubating/Querying.html               |     4 +
 docs/0.14.1-incubating/Realtime-Config.html        |     4 +
 docs/0.14.1-incubating/Realtime-ingestion.html     |     4 +
 docs/0.14.1-incubating/Realtime.html               |     4 +
 docs/0.14.1-incubating/Recommendations.html        |     4 +
 docs/0.14.1-incubating/Rolling-Updates.html        |     4 +
 docs/0.14.1-incubating/Router.html                 |     4 +
 docs/0.14.1-incubating/Rule-Configuration.html     |     4 +
 docs/0.14.1-incubating/SearchQuery.html            |     4 +
 docs/0.14.1-incubating/SearchQuerySpec.html        |     4 +
 docs/0.14.1-incubating/SegmentMetadataQuery.html   |     4 +
 docs/0.14.1-incubating/Segments.html               |     4 +
 docs/0.14.1-incubating/SelectQuery.html            |     4 +
 .../Simple-Cluster-Configuration.html              |     4 +
 docs/0.14.1-incubating/Spatial-Filters.html        |     4 +
 docs/0.14.1-incubating/Spatial-Indexing.html       |     4 +
 .../Stand-Alone-With-Riak-CS.html                  |     4 +
 docs/0.14.1-incubating/Support.html                |     4 +
 docs/0.14.1-incubating/Tasks.html                  |     4 +
 docs/0.14.1-incubating/Thanks.html                 |     4 +
 docs/0.14.1-incubating/TimeBoundaryQuery.html      |     4 +
 docs/0.14.1-incubating/TimeseriesQuery.html        |     4 +
 docs/0.14.1-incubating/TopNMetricSpec.html         |     4 +
 docs/0.14.1-incubating/TopNQuery.html              |     4 +
 .../Tutorial-A-First-Look-at-Druid.html            |     4 +
 .../Tutorial-All-About-Queries.html                |     4 +
 .../Tutorial-Loading-Batch-Data.html               |     4 +
 .../Tutorial-Loading-Streaming-Data.html           |     4 +
 .../Tutorial-The-Druid-Cluster.html                |     4 +
 .../Tutorial:-A-First-Look-at-Druid.html           |     4 +
 .../Tutorial:-All-About-Queries.html               |     4 +
 .../Tutorial:-Loading-Batch-Data.html              |     4 +
 .../Tutorial:-Loading-Streaming-Data.html          |     4 +
 .../Tutorial:-Loading-Your-Data-Part-1.html        |     4 +
 .../Tutorial:-Loading-Your-Data-Part-2.html        |     4 +
 .../Tutorial:-The-Druid-Cluster.html               |     4 +
 docs/0.14.1-incubating/Tutorial:-Webstream.html    |     4 +
 docs/0.14.1-incubating/Tutorials.html              |     4 +
 docs/0.14.1-incubating/Twitter-Tutorial.html       |     4 +
 docs/0.14.1-incubating/Versioning.html             |     4 +
 docs/0.14.1-incubating/ZooKeeper.html              |     4 +
 docs/0.14.1-incubating/alerts.html                 |     4 +
 .../comparisons/druid-vs-cassandra.html            |     4 +
 .../comparisons/druid-vs-elasticsearch.md          |    40 +
 .../comparisons/druid-vs-hadoop.html               |     4 +
 .../comparisons/druid-vs-impala-or-shark.html      |     4 +
 .../comparisons/druid-vs-key-value.md              |    47 +
 .../0.14.1-incubating/comparisons/druid-vs-kudu.md |    40 +
 .../comparisons/druid-vs-redshift.md               |    63 +
 .../comparisons/druid-vs-spark.md                  |    43 +
 .../comparisons/druid-vs-sql-on-hadoop.md          |    83 +
 .../comparisons/druid-vs-vertica.html              |     4 +
 docs/0.14.1-incubating/configuration/auth.html     |     4 +
 docs/0.14.1-incubating/configuration/broker.html   |     4 +
 docs/0.14.1-incubating/configuration/caching.html  |     4 +
 .../configuration/coordinator.html                 |     4 +
 docs/0.14.1-incubating/configuration/hadoop.html   |     4 +
 .../configuration/historical.html                  |     4 +
 docs/0.14.1-incubating/configuration/index.md      |  1665 ++
 .../configuration/indexing-service.html            |     4 +
 docs/0.14.1-incubating/configuration/logging.md    |    55 +
 .../configuration/production-cluster.html          |     4 +
 docs/0.14.1-incubating/configuration/realtime.md   |    98 +
 .../configuration/simple-cluster.html              |     4 +
 .../0.14.1-incubating/configuration/zookeeper.html |     4 +
 .../dependencies/cassandra-deep-storage.md         |    62 +
 .../0.14.1-incubating/dependencies/deep-storage.md |    54 +
 .../dependencies/metadata-storage.md               |   141 +
 docs/0.14.1-incubating/dependencies/zookeeper.md   |    77 +
 docs/0.14.1-incubating/design/auth.md              |   168 +
 docs/0.14.1-incubating/design/broker.md            |    55 +
 .../design/concepts-and-terminology.html           |     4 +
 docs/0.14.1-incubating/design/coordinator.md       |   132 +
 docs/0.14.1-incubating/design/design.html          |     4 +
 docs/0.14.1-incubating/design/historical.md        |    59 +
 docs/0.14.1-incubating/design/index.md             |   203 +
 docs/0.14.1-incubating/design/indexing-service.md  |    65 +
 docs/0.14.1-incubating/design/middlemanager.md     |    44 +
 docs/0.14.1-incubating/design/overlord.md          |    63 +
 docs/0.14.1-incubating/design/peons.md             |    47 +
 docs/0.14.1-incubating/design/plumber.md           |    38 +
 docs/0.14.1-incubating/design/processes.md         |   131 +
 docs/0.14.1-incubating/design/realtime.md          |    80 +
 docs/0.14.1-incubating/design/segments.md          |   205 +
 .../development/approximate-histograms.html        |     4 +
 docs/0.14.1-incubating/development/build.md        |    66 +
 .../development/community-extensions/azure.html    |     4 +
 .../community-extensions/cassandra.html            |     4 +
 .../community-extensions/cloudfiles.html           |     4 +
 .../development/community-extensions/graphite.html |     4 +
 .../community-extensions/kafka-simple.html         |     4 +
 .../development/community-extensions/rabbitmq.html |     4 +
 .../development/datasketches-aggregators.html      |     4 +
 docs/0.14.1-incubating/development/experimental.md |    39 +
 .../extensions-contrib/ambari-metrics-emitter.md   |   100 +
 .../development/extensions-contrib/azure.md        |    95 +
 .../development/extensions-contrib/cassandra.md    |    31 +
 .../development/extensions-contrib/cloudfiles.md   |    97 +
 .../extensions-contrib/distinctcount.md            |    99 +
 .../development/extensions-contrib/google.md       |    89 +
 .../development/extensions-contrib/graphite.md     |   118 +
 .../development/extensions-contrib/influx.md       |    66 +
 .../extensions-contrib/kafka-emitter.md            |    55 +
 .../development/extensions-contrib/kafka-simple.md |    56 +
 .../extensions-contrib/materialized-view.md        |   134 +
 .../extensions-contrib/opentsdb-emitter.md         |    62 +
 .../development/extensions-contrib/orc.md          |   113 +
 .../development/extensions-contrib/parquet.html    |     4 +
 .../development/extensions-contrib/rabbitmq.md     |    81 +
 .../development/extensions-contrib/redis-cache.md  |    58 +
 .../development/extensions-contrib/rocketmq.md     |    29 +
 .../development/extensions-contrib/scan-query.html |     4 +
 .../development/extensions-contrib/sqlserver.md    |    57 +
 .../development/extensions-contrib/statsd.md       |    70 +
 .../development/extensions-contrib/thrift.md       |   128 +
 .../development/extensions-contrib/time-min-max.md |   105 +
 .../extensions-core/approximate-histograms.md      |   318 +
 .../development/extensions-core/avro.md            |   222 +
 .../development/extensions-core/bloom-filter.md    |   179 +
 .../extensions-core/caffeine-cache.html            |     4 +
 .../extensions-core/datasketches-aggregators.html  |     4 +
 .../extensions-core/datasketches-extension.md      |    40 +
 .../extensions-core/datasketches-hll.md            |   102 +
 .../extensions-core/datasketches-quantiles.md      |   112 +
 .../extensions-core/datasketches-theta.md          |   273 +
 .../extensions-core/datasketches-tuple.md          |   175 +
 .../extensions-core/druid-basic-security.md        |   321 +
 .../development/extensions-core/druid-kerberos.md  |   123 +
 .../development/extensions-core/druid-lookups.md   |   150 +
 .../development/extensions-core/examples.md        |    45 +
 .../development/extensions-core/hdfs.md            |    56 +
 .../extensions-core/kafka-eight-firehose.md        |    54 +
 .../extensions-core/kafka-extraction-namespace.md  |    70 +
 .../development/extensions-core/kafka-ingestion.md |   347 +
 .../extensions-core/kinesis-ingestion.md           |   393 +
 .../extensions-core/lookups-cached-global.md       |   379 +
 .../development/extensions-core/mysql.md           |   109 +
 .../extensions-core/namespaced-lookup.html         |     4 +
 .../development/extensions-core/parquet.md         |   220 +
 .../development/extensions-core/postgresql.md      |    85 +
 .../development/extensions-core/protobuf.md        |   223 +
 .../development/extensions-core/s3.md              |    98 +
 .../extensions-core/simple-client-sslcontext.md    |    54 +
 .../development/extensions-core/stats.md           |   172 +
 .../development/extensions-core/test-stats.md      |   118 +
 docs/0.14.1-incubating/development/extensions.md   |   105 +
 docs/0.14.1-incubating/development/geo.md          |    93 +
 .../integrating-druid-with-other-technologies.md   |    39 +
 docs/0.14.1-incubating/development/javascript.md   |    75 +
 .../kafka-simple-consumer-firehose.html            |     4 +
 docs/0.14.1-incubating/development/libraries.html  |     4 +
 docs/0.14.1-incubating/development/modules.md      |   273 +
 docs/0.14.1-incubating/development/overview.md     |    76 +
 docs/0.14.1-incubating/development/router.md       |   244 +
 .../development/select-query.html                  |     4 +
 docs/0.14.1-incubating/development/versioning.md   |    47 +
 docs/0.14.1-incubating/index.html                  |     4 +
 .../0.14.1-incubating/ingestion/batch-ingestion.md |    39 +
 .../ingestion/command-line-hadoop-indexer.md       |    95 +
 docs/0.14.1-incubating/ingestion/compaction.md     |   102 +
 docs/0.14.1-incubating/ingestion/data-formats.md   |   205 +
 docs/0.14.1-incubating/ingestion/delete-data.md    |    50 +
 docs/0.14.1-incubating/ingestion/faq.md            |   106 +
 docs/0.14.1-incubating/ingestion/firehose.md       |   214 +
 docs/0.14.1-incubating/ingestion/flatten-json.md   |   180 +
 .../ingestion/hadoop-vs-native-batch.md            |    43 +
 docs/0.14.1-incubating/ingestion/hadoop.md         |   363 +
 docs/0.14.1-incubating/ingestion/index.md          |   306 +
 docs/0.14.1-incubating/ingestion/ingestion-spec.md |   332 +
 docs/0.14.1-incubating/ingestion/ingestion.html    |     4 +
 .../ingestion/locking-and-priority.md              |    79 +
 docs/0.14.1-incubating/ingestion/misc-tasks.md     |    94 +
 docs/0.14.1-incubating/ingestion/native-batch.html |     4 +
 docs/0.14.1-incubating/ingestion/native_tasks.md   |   620 +
 docs/0.14.1-incubating/ingestion/overview.html     |     4 +
 .../ingestion/realtime-ingestion.html              |     4 +
 docs/0.14.1-incubating/ingestion/reports.md        |   152 +
 docs/0.14.1-incubating/ingestion/schema-changes.md |    82 +
 docs/0.14.1-incubating/ingestion/schema-design.md  |   338 +
 .../ingestion/stream-ingestion.md                  |    56 +
 docs/0.14.1-incubating/ingestion/stream-pull.md    |   376 +
 docs/0.14.1-incubating/ingestion/stream-push.md    |   186 +
 docs/0.14.1-incubating/ingestion/tasks.md          |    78 +
 docs/0.14.1-incubating/ingestion/transform-spec.md |   104 +
 .../ingestion/update-existing-data.md              |   162 +
 docs/0.14.1-incubating/misc/cluster-setup.html     |     4 +
 docs/0.14.1-incubating/misc/evaluate.html          |     4 +
 docs/0.14.1-incubating/misc/math-expr.md           |   138 +
 docs/0.14.1-incubating/misc/papers-and-talks.md    |    43 +
 docs/0.14.1-incubating/misc/tasks.html             |     4 +
 docs/0.14.1-incubating/operations/alerts.md        |    38 +
 docs/0.14.1-incubating/operations/api-reference.md |   736 +
 docs/0.14.1-incubating/operations/druid-console.md |    90 +
 docs/0.14.1-incubating/operations/dump-segment.md  |   116 +
 .../operations/http-compression.md                 |    34 +
 .../operations/img/01-home-view.png                |   Bin 0 -> 60287 bytes
 .../operations/img/02-datasources.png              |   Bin 0 -> 163824 bytes
 .../operations/img/03-retention.png                |   Bin 0 -> 123857 bytes
 .../operations/img/04-segments.png                 |   Bin 0 -> 125873 bytes
 .../operations/img/05-tasks-1.png                  |   Bin 0 -> 101635 bytes
 .../operations/img/06-tasks-2.png                  |   Bin 0 -> 221977 bytes
 .../operations/img/07-tasks-3.png                  |   Bin 0 -> 195170 bytes
 .../operations/img/08-servers.png                  |   Bin 0 -> 119310 bytes
 docs/0.14.1-incubating/operations/img/09-sql.png   |   Bin 0 -> 80580 bytes
 .../operations/including-extensions.md             |    87 +
 .../operations/insert-segment-to-db.html           |     4 +
 .../operations/insert-segment-to-db.md             |   156 +
 .../0.14.1-incubating/operations/management-uis.md |    80 +
 docs/0.14.1-incubating/operations/metrics.md       |   279 +
 .../0.14.1-incubating/operations/multitenancy.html |     4 +
 docs/0.14.1-incubating/operations/other-hadoop.md  |   300 +
 .../operations/password-provider.md                |    55 +
 .../operations/performance-faq.md                  |    95 +
 docs/0.14.1-incubating/operations/pull-deps.md     |   151 +
 .../operations/recommendations.md                  |    93 +
 docs/0.14.1-incubating/operations/reset-cluster.md |    76 +
 .../operations/rolling-updates.md                  |   102 +
 .../operations/rule-configuration.md               |   242 +
 .../operations/segment-optimization.md             |   100 +
 docs/0.14.1-incubating/operations/tls-support.md   |   105 +
 .../operations/use_sbt_to_build_fat_jar.md         |   128 +
 docs/0.14.1-incubating/querying/aggregations.md    |   361 +
 docs/0.14.1-incubating/querying/caching.md         |    46 +
 docs/0.14.1-incubating/querying/datasource.md      |    65 +
 .../querying/datasourcemetadataquery.md            |    57 +
 docs/0.14.1-incubating/querying/dimensionspecs.md  |   545 +
 docs/0.14.1-incubating/querying/filters.md         |   521 +
 docs/0.14.1-incubating/querying/granularities.md   |   438 +
 docs/0.14.1-incubating/querying/groupbyquery.md    |   445 +
 docs/0.14.1-incubating/querying/having.md          |   261 +
 docs/0.14.1-incubating/querying/hll-old.md         |   142 +
 docs/0.14.1-incubating/querying/joins.md           |    55 +
 docs/0.14.1-incubating/querying/limitspec.md       |    55 +
 docs/0.14.1-incubating/querying/lookups.md         |   444 +
 .../querying/multi-value-dimensions.md             |   340 +
 docs/0.14.1-incubating/querying/multitenancy.md    |    99 +
 docs/0.14.1-incubating/querying/optimizations.html |     4 +
 .../querying/post-aggregations.md                  |   223 +
 docs/0.14.1-incubating/querying/query-context.md   |    62 +
 docs/0.14.1-incubating/querying/querying.md        |   125 +
 docs/0.14.1-incubating/querying/scan-query.md      |   196 +
 docs/0.14.1-incubating/querying/searchquery.md     |   141 +
 docs/0.14.1-incubating/querying/searchqueryspec.md |    77 +
 .../querying/segmentmetadataquery.md               |   188 +
 docs/0.14.1-incubating/querying/select-query.md    |   259 +
 docs/0.14.1-incubating/querying/sorting-orders.md  |    54 +
 docs/0.14.1-incubating/querying/sql.md             |   718 +
 .../querying/timeboundaryquery.md                  |    58 +
 docs/0.14.1-incubating/querying/timeseriesquery.md |   163 +
 docs/0.14.1-incubating/querying/topnmetricspec.md  |    87 +
 docs/0.14.1-incubating/querying/topnquery.md       |   257 +
 docs/0.14.1-incubating/querying/virtual-columns.md |    80 +
 docs/0.14.1-incubating/toc.md                      |   174 +
 .../tutorials/booting-a-production-cluster.html    |     4 +
 docs/0.14.1-incubating/tutorials/cluster.md        |   408 +
 docs/0.14.1-incubating/tutorials/examples.html     |     4 +
 docs/0.14.1-incubating/tutorials/firewall.html     |     4 +
 .../tutorials/img/tutorial-batch-01.png            |   Bin 0 -> 54435 bytes
 .../tutorials/img/tutorial-compaction-01.png       |   Bin 0 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       |   Bin 0 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       |   Bin 0 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       |   Bin 0 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       |   Bin 0 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       |   Bin 0 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       |   Bin 0 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       |   Bin 0 -> 43257 bytes
 .../tutorials/img/tutorial-deletion-01.png         |   Bin 0 -> 72062 bytes
 .../tutorials/img/tutorial-deletion-02.png         |   Bin 0 -> 200459 bytes
 .../tutorials/img/tutorial-retention-00.png        |   Bin 0 -> 138304 bytes
 .../tutorials/img/tutorial-retention-01.png        |   Bin 0 -> 53955 bytes
 .../tutorials/img/tutorial-retention-02.png        |   Bin 0 -> 410930 bytes
 .../tutorials/img/tutorial-retention-03.png        |   Bin 0 -> 44144 bytes
 .../tutorials/img/tutorial-retention-04.png        |   Bin 0 -> 67493 bytes
 .../tutorials/img/tutorial-retention-05.png        |   Bin 0 -> 61639 bytes
 .../tutorials/img/tutorial-retention-06.png        |   Bin 0 -> 233034 bytes
 docs/0.14.1-incubating/tutorials/index.md          |   202 +
 .../tutorials/ingestion-streams.html               |     4 +
 docs/0.14.1-incubating/tutorials/ingestion.html    |     4 +
 docs/0.14.1-incubating/tutorials/quickstart.html   |     4 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |     4 +
 .../tutorials/tutorial-all-about-queries.html      |     4 +
 .../tutorials/tutorial-batch-hadoop.md             |   259 +
 docs/0.14.1-incubating/tutorials/tutorial-batch.md |   179 +
 .../tutorials/tutorial-compaction.md               |   176 +
 .../tutorials/tutorial-delete-data.md              |   178 +
 .../tutorials/tutorial-ingestion-spec.md           |   662 +
 docs/0.14.1-incubating/tutorials/tutorial-kafka.md |   107 +
 .../tutorials/tutorial-loading-batch-data.html     |     4 +
 .../tutorials/tutorial-loading-streaming-data.html |     4 +
 docs/0.14.1-incubating/tutorials/tutorial-query.md |   300 +
 .../tutorials/tutorial-retention.md                |   115 +
 .../0.14.1-incubating/tutorials/tutorial-rollup.md |   200 +
 .../tutorials/tutorial-the-druid-cluster.html      |     4 +
 .../tutorials/tutorial-tranquility.md              |   104 +
 .../tutorials/tutorial-transform-spec.md           |   158 +
 .../tutorials/tutorial-update-data.md              |   169 +
 .../About-Experimental-Features.html               |     4 +
 docs/0.14.2-incubating/Aggregations.html           |     4 +
 docs/0.14.2-incubating/ApproxHisto.html            |     4 +
 docs/0.14.2-incubating/Batch-ingestion.html        |     4 +
 .../Booting-a-production-cluster.html              |     4 +
 docs/0.14.2-incubating/Broker-Config.html          |     4 +
 docs/0.14.2-incubating/Broker.html                 |     4 +
 docs/0.14.2-incubating/Build-from-source.html      |     4 +
 docs/0.14.2-incubating/Cassandra-Deep-Storage.html |     4 +
 docs/0.14.2-incubating/Cluster-setup.html          |     4 +
 docs/0.14.2-incubating/Compute.html                |     4 +
 .../Concepts-and-Terminology.html                  |     4 +
 docs/0.14.2-incubating/Configuration.html          |     4 +
 docs/0.14.2-incubating/Contribute.html             |     4 +
 docs/0.14.2-incubating/Coordinator-Config.html     |     4 +
 docs/0.14.2-incubating/Coordinator.html            |     4 +
 docs/0.14.2-incubating/DataSource.html             |     4 +
 .../0.14.2-incubating/DataSourceMetadataQuery.html |     4 +
 docs/0.14.2-incubating/Data_formats.html           |     4 +
 docs/0.14.2-incubating/Deep-Storage.html           |     4 +
 docs/0.14.2-incubating/Design.html                 |     4 +
 docs/0.14.2-incubating/DimensionSpecs.html         |     4 +
 docs/0.14.2-incubating/Download.html               |     4 +
 .../Druid-Personal-Demo-Cluster.html               |     4 +
 docs/0.14.2-incubating/Druid-vs-Cassandra.html     |     4 +
 docs/0.14.2-incubating/Druid-vs-Elasticsearch.html |     4 +
 docs/0.14.2-incubating/Druid-vs-Hadoop.html        |     4 +
 .../Druid-vs-Impala-or-Shark.html                  |     4 +
 docs/0.14.2-incubating/Druid-vs-Redshift.html      |     4 +
 docs/0.14.2-incubating/Druid-vs-Spark.html         |     4 +
 docs/0.14.2-incubating/Druid-vs-Vertica.html       |     4 +
 docs/0.14.2-incubating/Evaluate.html               |     4 +
 docs/0.14.2-incubating/Examples.html               |     4 +
 docs/0.14.2-incubating/Filters.html                |     4 +
 docs/0.14.2-incubating/Firehose.html               |     4 +
 docs/0.14.2-incubating/GeographicQueries.html      |     4 +
 docs/0.14.2-incubating/Granularities.html          |     4 +
 docs/0.14.2-incubating/GroupByQuery.html           |     4 +
 docs/0.14.2-incubating/Hadoop-Configuration.html   |     4 +
 docs/0.14.2-incubating/Having.html                 |     4 +
 docs/0.14.2-incubating/Historical-Config.html      |     4 +
 docs/0.14.2-incubating/Historical.html             |     4 +
 docs/0.14.2-incubating/Home.html                   |     4 +
 docs/0.14.2-incubating/Including-Extensions.html   |     4 +
 .../0.14.2-incubating/Indexing-Service-Config.html |     4 +
 docs/0.14.2-incubating/Indexing-Service.html       |     4 +
 docs/0.14.2-incubating/Ingestion-FAQ.html          |     4 +
 docs/0.14.2-incubating/Ingestion-overview.html     |     4 +
 docs/0.14.2-incubating/Ingestion.html              |     4 +
 .../Integrating-Druid-With-Other-Technologies.html |     4 +
 docs/0.14.2-incubating/Kafka-Eight.html            |     4 +
 docs/0.14.2-incubating/Libraries.html              |     4 +
 docs/0.14.2-incubating/LimitSpec.html              |     4 +
 docs/0.14.2-incubating/Loading-Your-Data.html      |     4 +
 docs/0.14.2-incubating/Logging.html                |     4 +
 docs/0.14.2-incubating/Master.html                 |     4 +
 docs/0.14.2-incubating/Metadata-storage.html       |     4 +
 docs/0.14.2-incubating/Metrics.html                |     4 +
 docs/0.14.2-incubating/Middlemanager.html          |     4 +
 docs/0.14.2-incubating/Modules.html                |     4 +
 docs/0.14.2-incubating/MySQL.html                  |     4 +
 docs/0.14.2-incubating/OrderBy.html                |     4 +
 docs/0.14.2-incubating/Other-Hadoop.html           |     4 +
 docs/0.14.2-incubating/Papers-and-talks.html       |     4 +
 docs/0.14.2-incubating/Peons.html                  |     4 +
 docs/0.14.2-incubating/Performance-FAQ.html        |     4 +
 docs/0.14.2-incubating/Plumber.html                |     4 +
 docs/0.14.2-incubating/Post-aggregations.html      |     4 +
 .../Production-Cluster-Configuration.html          |     4 +
 docs/0.14.2-incubating/Query-Context.html          |     4 +
 docs/0.14.2-incubating/Querying-your-data.html     |     4 +
 docs/0.14.2-incubating/Querying.html               |     4 +
 docs/0.14.2-incubating/Realtime-Config.html        |     4 +
 docs/0.14.2-incubating/Realtime-ingestion.html     |     4 +
 docs/0.14.2-incubating/Realtime.html               |     4 +
 docs/0.14.2-incubating/Recommendations.html        |     4 +
 docs/0.14.2-incubating/Rolling-Updates.html        |     4 +
 docs/0.14.2-incubating/Router.html                 |     4 +
 docs/0.14.2-incubating/Rule-Configuration.html     |     4 +
 docs/0.14.2-incubating/SearchQuery.html            |     4 +
 docs/0.14.2-incubating/SearchQuerySpec.html        |     4 +
 docs/0.14.2-incubating/SegmentMetadataQuery.html   |     4 +
 docs/0.14.2-incubating/Segments.html               |     4 +
 docs/0.14.2-incubating/SelectQuery.html            |     4 +
 .../Simple-Cluster-Configuration.html              |     4 +
 docs/0.14.2-incubating/Spatial-Filters.html        |     4 +
 docs/0.14.2-incubating/Spatial-Indexing.html       |     4 +
 .../Stand-Alone-With-Riak-CS.html                  |     4 +
 docs/0.14.2-incubating/Support.html                |     4 +
 docs/0.14.2-incubating/Tasks.html                  |     4 +
 docs/0.14.2-incubating/Thanks.html                 |     4 +
 docs/0.14.2-incubating/TimeBoundaryQuery.html      |     4 +
 docs/0.14.2-incubating/TimeseriesQuery.html        |     4 +
 docs/0.14.2-incubating/TopNMetricSpec.html         |     4 +
 docs/0.14.2-incubating/TopNQuery.html              |     4 +
 .../Tutorial-A-First-Look-at-Druid.html            |     4 +
 .../Tutorial-All-About-Queries.html                |     4 +
 .../Tutorial-Loading-Batch-Data.html               |     4 +
 .../Tutorial-Loading-Streaming-Data.html           |     4 +
 .../Tutorial-The-Druid-Cluster.html                |     4 +
 .../Tutorial:-A-First-Look-at-Druid.html           |     4 +
 .../Tutorial:-All-About-Queries.html               |     4 +
 .../Tutorial:-Loading-Batch-Data.html              |     4 +
 .../Tutorial:-Loading-Streaming-Data.html          |     4 +
 .../Tutorial:-Loading-Your-Data-Part-1.html        |     4 +
 .../Tutorial:-Loading-Your-Data-Part-2.html        |     4 +
 .../Tutorial:-The-Druid-Cluster.html               |     4 +
 docs/0.14.2-incubating/Tutorial:-Webstream.html    |     4 +
 docs/0.14.2-incubating/Tutorials.html              |     4 +
 docs/0.14.2-incubating/Twitter-Tutorial.html       |     4 +
 docs/0.14.2-incubating/Versioning.html             |     4 +
 docs/0.14.2-incubating/ZooKeeper.html              |     4 +
 docs/0.14.2-incubating/alerts.html                 |     4 +
 .../comparisons/druid-vs-cassandra.html            |     4 +
 .../comparisons/druid-vs-elasticsearch.md          |    40 +
 .../comparisons/druid-vs-hadoop.html               |     4 +
 .../comparisons/druid-vs-impala-or-shark.html      |     4 +
 .../comparisons/druid-vs-key-value.md              |    47 +
 .../0.14.2-incubating/comparisons/druid-vs-kudu.md |    40 +
 .../comparisons/druid-vs-redshift.md               |    63 +
 .../comparisons/druid-vs-spark.md                  |    43 +
 .../comparisons/druid-vs-sql-on-hadoop.md          |    83 +
 .../comparisons/druid-vs-vertica.html              |     4 +
 docs/0.14.2-incubating/configuration/auth.html     |     4 +
 docs/0.14.2-incubating/configuration/broker.html   |     4 +
 docs/0.14.2-incubating/configuration/caching.html  |     4 +
 .../configuration/coordinator.html                 |     4 +
 docs/0.14.2-incubating/configuration/hadoop.html   |     4 +
 .../configuration/historical.html                  |     4 +
 docs/0.14.2-incubating/configuration/index.md      |  1665 ++
 .../configuration/indexing-service.html            |     4 +
 docs/0.14.2-incubating/configuration/logging.md    |    55 +
 .../configuration/production-cluster.html          |     4 +
 docs/0.14.2-incubating/configuration/realtime.md   |    98 +
 .../configuration/simple-cluster.html              |     4 +
 .../0.14.2-incubating/configuration/zookeeper.html |     4 +
 .../dependencies/cassandra-deep-storage.md         |    62 +
 .../0.14.2-incubating/dependencies/deep-storage.md |    54 +
 .../dependencies/metadata-storage.md               |   141 +
 docs/0.14.2-incubating/dependencies/zookeeper.md   |    77 +
 docs/0.14.2-incubating/design/auth.md              |   168 +
 docs/0.14.2-incubating/design/broker.md            |    55 +
 .../design/concepts-and-terminology.html           |     4 +
 docs/0.14.2-incubating/design/coordinator.md       |   132 +
 docs/0.14.2-incubating/design/design.html          |     4 +
 docs/0.14.2-incubating/design/historical.md        |    59 +
 docs/0.14.2-incubating/design/index.md             |   203 +
 docs/0.14.2-incubating/design/indexing-service.md  |    65 +
 docs/0.14.2-incubating/design/middlemanager.md     |    44 +
 docs/0.14.2-incubating/design/overlord.md          |    63 +
 docs/0.14.2-incubating/design/peons.md             |    47 +
 docs/0.14.2-incubating/design/plumber.md           |    38 +
 docs/0.14.2-incubating/design/processes.md         |   131 +
 docs/0.14.2-incubating/design/realtime.md          |    80 +
 docs/0.14.2-incubating/design/segments.md          |   205 +
 .../development/approximate-histograms.html        |     4 +
 docs/0.14.2-incubating/development/build.md        |    69 +
 .../development/community-extensions/azure.html    |     4 +
 .../community-extensions/cassandra.html            |     4 +
 .../community-extensions/cloudfiles.html           |     4 +
 .../development/community-extensions/graphite.html |     4 +
 .../community-extensions/kafka-simple.html         |     4 +
 .../development/community-extensions/rabbitmq.html |     4 +
 .../development/datasketches-aggregators.html      |     4 +
 docs/0.14.2-incubating/development/experimental.md |    39 +
 .../extensions-contrib/ambari-metrics-emitter.md   |   100 +
 .../development/extensions-contrib/azure.md        |    95 +
 .../development/extensions-contrib/cassandra.md    |    31 +
 .../development/extensions-contrib/cloudfiles.md   |    97 +
 .../extensions-contrib/distinctcount.md            |    99 +
 .../development/extensions-contrib/google.md       |    89 +
 .../development/extensions-contrib/graphite.md     |   118 +
 .../development/extensions-contrib/influx.md       |    66 +
 .../extensions-contrib/kafka-emitter.md            |    55 +
 .../development/extensions-contrib/kafka-simple.md |    56 +
 .../extensions-contrib/materialized-view.md        |   134 +
 .../extensions-contrib/opentsdb-emitter.md         |    62 +
 .../development/extensions-contrib/orc.md          |   113 +
 .../development/extensions-contrib/parquet.html    |     4 +
 .../development/extensions-contrib/rabbitmq.md     |    81 +
 .../development/extensions-contrib/redis-cache.md  |    58 +
 .../development/extensions-contrib/rocketmq.md     |    29 +
 .../development/extensions-contrib/scan-query.html |     4 +
 .../development/extensions-contrib/sqlserver.md    |    57 +
 .../development/extensions-contrib/statsd.md       |    70 +
 .../development/extensions-contrib/thrift.md       |   128 +
 .../development/extensions-contrib/time-min-max.md |   105 +
 .../extensions-core/approximate-histograms.md      |   318 +
 .../development/extensions-core/avro.md            |   222 +
 .../development/extensions-core/bloom-filter.md    |   179 +
 .../extensions-core/caffeine-cache.html            |     4 +
 .../extensions-core/datasketches-aggregators.html  |     4 +
 .../extensions-core/datasketches-extension.md      |    40 +
 .../extensions-core/datasketches-hll.md            |   102 +
 .../extensions-core/datasketches-quantiles.md      |   112 +
 .../extensions-core/datasketches-theta.md          |   273 +
 .../extensions-core/datasketches-tuple.md          |   175 +
 .../extensions-core/druid-basic-security.md        |   321 +
 .../development/extensions-core/druid-kerberos.md  |   123 +
 .../development/extensions-core/druid-lookups.md   |   150 +
 .../development/extensions-core/examples.md        |    45 +
 .../development/extensions-core/hdfs.md            |    56 +
 .../extensions-core/kafka-eight-firehose.md        |    54 +
 .../extensions-core/kafka-extraction-namespace.md  |    70 +
 .../development/extensions-core/kafka-ingestion.md |   347 +
 .../extensions-core/kinesis-ingestion.md           |   393 +
 .../extensions-core/lookups-cached-global.md       |   379 +
 .../development/extensions-core/mysql.md           |   109 +
 .../extensions-core/namespaced-lookup.html         |     4 +
 .../development/extensions-core/parquet.md         |   220 +
 .../development/extensions-core/postgresql.md      |    85 +
 .../development/extensions-core/protobuf.md        |   223 +
 .../development/extensions-core/s3.md              |    98 +
 .../extensions-core/simple-client-sslcontext.md    |    54 +
 .../development/extensions-core/stats.md           |   172 +
 .../development/extensions-core/test-stats.md      |   118 +
 docs/0.14.2-incubating/development/extensions.md   |   105 +
 docs/0.14.2-incubating/development/geo.md          |    93 +
 .../integrating-druid-with-other-technologies.md   |    39 +
 docs/0.14.2-incubating/development/javascript.md   |    75 +
 .../kafka-simple-consumer-firehose.html            |     4 +
 docs/0.14.2-incubating/development/libraries.html  |     4 +
 docs/0.14.2-incubating/development/modules.md      |   273 +
 docs/0.14.2-incubating/development/overview.md     |    76 +
 docs/0.14.2-incubating/development/router.md       |   244 +
 .../development/select-query.html                  |     4 +
 docs/0.14.2-incubating/development/versioning.md   |    47 +
 docs/0.14.2-incubating/index.html                  |     4 +
 .../0.14.2-incubating/ingestion/batch-ingestion.md |    39 +
 .../ingestion/command-line-hadoop-indexer.md       |    95 +
 docs/0.14.2-incubating/ingestion/compaction.md     |   102 +
 docs/0.14.2-incubating/ingestion/data-formats.md   |   205 +
 docs/0.14.2-incubating/ingestion/delete-data.md    |    50 +
 docs/0.14.2-incubating/ingestion/faq.md            |   106 +
 docs/0.14.2-incubating/ingestion/firehose.md       |   214 +
 docs/0.14.2-incubating/ingestion/flatten-json.md   |   180 +
 .../ingestion/hadoop-vs-native-batch.md            |    43 +
 docs/0.14.2-incubating/ingestion/hadoop.md         |   363 +
 docs/0.14.2-incubating/ingestion/index.md          |   306 +
 docs/0.14.2-incubating/ingestion/ingestion-spec.md |   332 +
 docs/0.14.2-incubating/ingestion/ingestion.html    |     4 +
 .../ingestion/locking-and-priority.md              |    79 +
 docs/0.14.2-incubating/ingestion/misc-tasks.md     |    94 +
 docs/0.14.2-incubating/ingestion/native-batch.html |     4 +
 docs/0.14.2-incubating/ingestion/native_tasks.md   |   620 +
 docs/0.14.2-incubating/ingestion/overview.html     |     4 +
 .../ingestion/realtime-ingestion.html              |     4 +
 docs/0.14.2-incubating/ingestion/reports.md        |   152 +
 docs/0.14.2-incubating/ingestion/schema-changes.md |    82 +
 docs/0.14.2-incubating/ingestion/schema-design.md  |   338 +
 .../ingestion/stream-ingestion.md                  |    56 +
 docs/0.14.2-incubating/ingestion/stream-pull.md    |   376 +
 docs/0.14.2-incubating/ingestion/stream-push.md    |   186 +
 docs/0.14.2-incubating/ingestion/tasks.md          |    78 +
 docs/0.14.2-incubating/ingestion/transform-spec.md |   104 +
 .../ingestion/update-existing-data.md              |   162 +
 docs/0.14.2-incubating/misc/cluster-setup.html     |     4 +
 docs/0.14.2-incubating/misc/evaluate.html          |     4 +
 docs/0.14.2-incubating/misc/math-expr.md           |   138 +
 docs/0.14.2-incubating/misc/papers-and-talks.md    |    43 +
 docs/0.14.2-incubating/misc/tasks.html             |     4 +
 docs/0.14.2-incubating/operations/alerts.md        |    38 +
 docs/0.14.2-incubating/operations/api-reference.md |   736 +
 docs/0.14.2-incubating/operations/druid-console.md |    90 +
 docs/0.14.2-incubating/operations/dump-segment.md  |   116 +
 .../operations/http-compression.md                 |    34 +
 .../operations/img/01-home-view.png                |   Bin 0 -> 60287 bytes
 .../operations/img/02-datasources.png              |   Bin 0 -> 163824 bytes
 .../operations/img/03-retention.png                |   Bin 0 -> 123857 bytes
 .../operations/img/04-segments.png                 |   Bin 0 -> 125873 bytes
 .../operations/img/05-tasks-1.png                  |   Bin 0 -> 101635 bytes
 .../operations/img/06-tasks-2.png                  |   Bin 0 -> 221977 bytes
 .../operations/img/07-tasks-3.png                  |   Bin 0 -> 195170 bytes
 .../operations/img/08-servers.png                  |   Bin 0 -> 119310 bytes
 docs/0.14.2-incubating/operations/img/09-sql.png   |   Bin 0 -> 80580 bytes
 .../operations/including-extensions.md             |    87 +
 .../operations/insert-segment-to-db.html           |     4 +
 .../operations/insert-segment-to-db.md             |   156 +
 .../0.14.2-incubating/operations/management-uis.md |    80 +
 docs/0.14.2-incubating/operations/metrics.md       |   279 +
 .../0.14.2-incubating/operations/multitenancy.html |     4 +
 docs/0.14.2-incubating/operations/other-hadoop.md  |   300 +
 .../operations/password-provider.md                |    55 +
 .../operations/performance-faq.md                  |    95 +
 docs/0.14.2-incubating/operations/pull-deps.md     |   151 +
 .../operations/recommendations.md                  |    93 +
 docs/0.14.2-incubating/operations/reset-cluster.md |    76 +
 .../operations/rolling-updates.md                  |   102 +
 .../operations/rule-configuration.md               |   242 +
 .../operations/segment-optimization.md             |   100 +
 docs/0.14.2-incubating/operations/tls-support.md   |   105 +
 .../operations/use_sbt_to_build_fat_jar.md         |   128 +
 docs/0.14.2-incubating/querying/aggregations.md    |   361 +
 docs/0.14.2-incubating/querying/caching.md         |    46 +
 docs/0.14.2-incubating/querying/datasource.md      |    65 +
 .../querying/datasourcemetadataquery.md            |    57 +
 docs/0.14.2-incubating/querying/dimensionspecs.md  |   545 +
 docs/0.14.2-incubating/querying/filters.md         |   521 +
 docs/0.14.2-incubating/querying/granularities.md   |   438 +
 docs/0.14.2-incubating/querying/groupbyquery.md    |   445 +
 docs/0.14.2-incubating/querying/having.md          |   261 +
 docs/0.14.2-incubating/querying/hll-old.md         |   142 +
 docs/0.14.2-incubating/querying/joins.md           |    55 +
 docs/0.14.2-incubating/querying/limitspec.md       |    55 +
 docs/0.14.2-incubating/querying/lookups.md         |   444 +
 .../querying/multi-value-dimensions.md             |   340 +
 docs/0.14.2-incubating/querying/multitenancy.md    |    99 +
 docs/0.14.2-incubating/querying/optimizations.html |     4 +
 .../querying/post-aggregations.md                  |   223 +
 docs/0.14.2-incubating/querying/query-context.md   |    62 +
 docs/0.14.2-incubating/querying/querying.md        |   125 +
 docs/0.14.2-incubating/querying/scan-query.md      |   196 +
 docs/0.14.2-incubating/querying/searchquery.md     |   141 +
 docs/0.14.2-incubating/querying/searchqueryspec.md |    77 +
 .../querying/segmentmetadataquery.md               |   188 +
 docs/0.14.2-incubating/querying/select-query.md    |   259 +
 docs/0.14.2-incubating/querying/sorting-orders.md  |    54 +
 docs/0.14.2-incubating/querying/sql.md             |   718 +
 .../querying/timeboundaryquery.md                  |    58 +
 docs/0.14.2-incubating/querying/timeseriesquery.md |   163 +
 docs/0.14.2-incubating/querying/topnmetricspec.md  |    87 +
 docs/0.14.2-incubating/querying/topnquery.md       |   257 +
 docs/0.14.2-incubating/querying/virtual-columns.md |    80 +
 docs/0.14.2-incubating/toc.md                      |   174 +
 .../tutorials/booting-a-production-cluster.html    |     4 +
 docs/0.14.2-incubating/tutorials/cluster.md        |   408 +
 docs/0.14.2-incubating/tutorials/examples.html     |     4 +
 docs/0.14.2-incubating/tutorials/firewall.html     |     4 +
 .../tutorials/img/tutorial-batch-01.png            |   Bin 0 -> 54435 bytes
 .../tutorials/img/tutorial-compaction-01.png       |   Bin 0 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       |   Bin 0 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       |   Bin 0 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       |   Bin 0 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       |   Bin 0 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       |   Bin 0 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       |   Bin 0 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       |   Bin 0 -> 43257 bytes
 .../tutorials/img/tutorial-deletion-01.png         |   Bin 0 -> 72062 bytes
 .../tutorials/img/tutorial-deletion-02.png         |   Bin 0 -> 200459 bytes
 .../tutorials/img/tutorial-retention-00.png        |   Bin 0 -> 138304 bytes
 .../tutorials/img/tutorial-retention-01.png        |   Bin 0 -> 53955 bytes
 .../tutorials/img/tutorial-retention-02.png        |   Bin 0 -> 410930 bytes
 .../tutorials/img/tutorial-retention-03.png        |   Bin 0 -> 44144 bytes
 .../tutorials/img/tutorial-retention-04.png        |   Bin 0 -> 67493 bytes
 .../tutorials/img/tutorial-retention-05.png        |   Bin 0 -> 61639 bytes
 .../tutorials/img/tutorial-retention-06.png        |   Bin 0 -> 233034 bytes
 docs/0.14.2-incubating/tutorials/index.md          |   196 +
 .../tutorials/ingestion-streams.html               |     4 +
 docs/0.14.2-incubating/tutorials/ingestion.html    |     4 +
 docs/0.14.2-incubating/tutorials/quickstart.html   |     4 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |     4 +
 .../tutorials/tutorial-all-about-queries.html      |     4 +
 .../tutorials/tutorial-batch-hadoop.md             |   259 +
 docs/0.14.2-incubating/tutorials/tutorial-batch.md |   179 +
 .../tutorials/tutorial-compaction.md               |   176 +
 .../tutorials/tutorial-delete-data.md              |   178 +
 .../tutorials/tutorial-ingestion-spec.md           |   662 +
 docs/0.14.2-incubating/tutorials/tutorial-kafka.md |   107 +
 .../tutorials/tutorial-loading-batch-data.html     |     4 +
 .../tutorials/tutorial-loading-streaming-data.html |     4 +
 docs/0.14.2-incubating/tutorials/tutorial-query.md |   300 +
 .../tutorials/tutorial-retention.md                |   115 +
 .../0.14.2-incubating/tutorials/tutorial-rollup.md |   200 +
 .../tutorials/tutorial-the-druid-cluster.html      |     4 +
 .../tutorials/tutorial-tranquility.md              |   104 +
 .../tutorials/tutorial-transform-spec.md           |   158 +
 .../tutorials/tutorial-update-data.md              |   169 +
 docs/img/druid-architecture.png                    |   Bin 0 -> 207086 bytes
 docs/img/druid-column-types.png                    |   Bin 0 -> 103962 bytes
 docs/img/druid-dataflow-2x.png                     |   Bin 0 -> 141623 bytes
 docs/img/druid-dataflow-3.png                      |   Bin 0 -> 90365 bytes
 docs/img/druid-manage-1.png                        |   Bin 0 -> 111559 bytes
 docs/img/druid-production.png                      |   Bin 0 -> 51195 bytes
 docs/img/druid-timeline.png                        |   Bin 0 -> 36729 bytes
 docs/img/indexing_service.png                      |   Bin 0 -> 48510 bytes
 docs/img/segmentPropagation.png                    |   Bin 0 -> 64451 bytes
 docs/latest/About-Experimental-Features.html       |     4 +
 docs/latest/Aggregations.html                      |     4 +
 docs/latest/ApproxHisto.html                       |     4 +
 docs/latest/Batch-ingestion.html                   |     4 +
 docs/latest/Booting-a-production-cluster.html      |     4 +
 docs/latest/Broker-Config.html                     |     4 +
 docs/latest/Broker.html                            |     4 +
 docs/latest/Build-from-source.html                 |     4 +
 docs/latest/Cassandra-Deep-Storage.html            |     4 +
 docs/latest/Cluster-setup.html                     |     4 +
 docs/latest/Compute.html                           |     4 +
 docs/latest/Concepts-and-Terminology.html          |     4 +
 docs/latest/Configuration.html                     |     4 +
 docs/latest/Contribute.html                        |     4 +
 docs/latest/Coordinator-Config.html                |     4 +
 docs/latest/Coordinator.html                       |     4 +
 docs/latest/DataSource.html                        |     4 +
 docs/latest/DataSourceMetadataQuery.html           |     4 +
 docs/latest/Data_formats.html                      |     4 +
 docs/latest/Deep-Storage.html                      |     4 +
 docs/latest/Design.html                            |     4 +
 docs/latest/DimensionSpecs.html                    |     4 +
 docs/latest/Download.html                          |     4 +
 docs/latest/Druid-Personal-Demo-Cluster.html       |     4 +
 docs/latest/Druid-vs-Cassandra.html                |     4 +
 docs/latest/Druid-vs-Elasticsearch.html            |     4 +
 docs/latest/Druid-vs-Hadoop.html                   |     4 +
 docs/latest/Druid-vs-Impala-or-Shark.html          |     4 +
 docs/latest/Druid-vs-Redshift.html                 |     4 +
 docs/latest/Druid-vs-Spark.html                    |     4 +
 docs/latest/Druid-vs-Vertica.html                  |     4 +
 docs/latest/Evaluate.html                          |     4 +
 docs/latest/Examples.html                          |     4 +
 docs/latest/Filters.html                           |     4 +
 docs/latest/Firehose.html                          |     4 +
 docs/latest/GeographicQueries.html                 |     4 +
 docs/latest/Granularities.html                     |     4 +
 docs/latest/GroupByQuery.html                      |     4 +
 docs/latest/Hadoop-Configuration.html              |     4 +
 docs/latest/Having.html                            |     4 +
 docs/latest/Historical-Config.html                 |     4 +
 docs/latest/Historical.html                        |     4 +
 docs/latest/Home.html                              |     4 +
 docs/latest/Including-Extensions.html              |     4 +
 docs/latest/Indexing-Service-Config.html           |     4 +
 docs/latest/Indexing-Service.html                  |     4 +
 docs/latest/Ingestion-FAQ.html                     |     4 +
 docs/latest/Ingestion-overview.html                |     4 +
 docs/latest/Ingestion.html                         |     4 +
 .../Integrating-Druid-With-Other-Technologies.html |     4 +
 docs/latest/Kafka-Eight.html                       |     4 +
 docs/latest/Libraries.html                         |     4 +
 docs/latest/LimitSpec.html                         |     4 +
 docs/latest/Loading-Your-Data.html                 |     4 +
 docs/latest/Logging.html                           |     4 +
 docs/latest/Master.html                            |     4 +
 docs/latest/Metadata-storage.html                  |     4 +
 docs/latest/Metrics.html                           |     4 +
 docs/latest/Middlemanager.html                     |     4 +
 docs/latest/Modules.html                           |     4 +
 docs/latest/MySQL.html                             |     4 +
 docs/latest/OrderBy.html                           |     4 +
 docs/latest/Other-Hadoop.html                      |     4 +
 docs/latest/Papers-and-talks.html                  |     4 +
 docs/latest/Peons.html                             |     4 +
 docs/latest/Performance-FAQ.html                   |     4 +
 docs/latest/Plumber.html                           |     4 +
 docs/latest/Post-aggregations.html                 |     4 +
 docs/latest/Production-Cluster-Configuration.html  |     4 +
 docs/latest/Query-Context.html                     |     4 +
 docs/latest/Querying-your-data.html                |     4 +
 docs/latest/Querying.html                          |     4 +
 docs/latest/Realtime-Config.html                   |     4 +
 docs/latest/Realtime-ingestion.html                |     4 +
 docs/latest/Realtime.html                          |     4 +
 docs/latest/Recommendations.html                   |     4 +
 docs/latest/Rolling-Updates.html                   |     4 +
 docs/latest/Router.html                            |     4 +
 docs/latest/Rule-Configuration.html                |     4 +
 docs/latest/SearchQuery.html                       |     4 +
 docs/latest/SearchQuerySpec.html                   |     4 +
 docs/latest/SegmentMetadataQuery.html              |     4 +
 docs/latest/Segments.html                          |     4 +
 docs/latest/SelectQuery.html                       |     4 +
 docs/latest/Simple-Cluster-Configuration.html      |     4 +
 docs/latest/Spatial-Filters.html                   |     4 +
 docs/latest/Spatial-Indexing.html                  |     4 +
 docs/latest/Stand-Alone-With-Riak-CS.html          |     4 +
 docs/latest/Support.html                           |     4 +
 docs/latest/Tasks.html                             |     4 +
 docs/latest/Thanks.html                            |     4 +
 docs/latest/TimeBoundaryQuery.html                 |     4 +
 docs/latest/TimeseriesQuery.html                   |     4 +
 docs/latest/TopNMetricSpec.html                    |     4 +
 docs/latest/TopNQuery.html                         |     4 +
 docs/latest/Tutorial-A-First-Look-at-Druid.html    |     4 +
 docs/latest/Tutorial-All-About-Queries.html        |     4 +
 docs/latest/Tutorial-Loading-Batch-Data.html       |     4 +
 docs/latest/Tutorial-Loading-Streaming-Data.html   |     4 +
 docs/latest/Tutorial-The-Druid-Cluster.html        |     4 +
 docs/latest/Tutorial:-A-First-Look-at-Druid.html   |     4 +
 docs/latest/Tutorial:-All-About-Queries.html       |     4 +
 docs/latest/Tutorial:-Loading-Batch-Data.html      |     4 +
 docs/latest/Tutorial:-Loading-Streaming-Data.html  |     4 +
 .../latest/Tutorial:-Loading-Your-Data-Part-1.html |     4 +
 .../latest/Tutorial:-Loading-Your-Data-Part-2.html |     4 +
 docs/latest/Tutorial:-The-Druid-Cluster.html       |     4 +
 docs/latest/Tutorial:-Webstream.html               |     4 +
 docs/latest/Tutorials.html                         |     4 +
 docs/latest/Twitter-Tutorial.html                  |     4 +
 docs/latest/Versioning.html                        |     4 +
 docs/latest/ZooKeeper.html                         |     4 +
 docs/latest/alerts.html                            |     4 +
 docs/latest/comparisons/druid-vs-cassandra.html    |     4 +
 docs/latest/comparisons/druid-vs-elasticsearch.md  |    40 +
 docs/latest/comparisons/druid-vs-hadoop.html       |     4 +
 .../comparisons/druid-vs-impala-or-shark.html      |     4 +
 docs/latest/comparisons/druid-vs-key-value.md      |    47 +
 docs/latest/comparisons/druid-vs-kudu.md           |    40 +
 docs/latest/comparisons/druid-vs-redshift.md       |    63 +
 docs/latest/comparisons/druid-vs-spark.md          |    43 +
 docs/latest/comparisons/druid-vs-sql-on-hadoop.md  |    83 +
 docs/latest/comparisons/druid-vs-vertica.html      |     4 +
 docs/latest/configuration/auth.html                |     4 +
 docs/latest/configuration/broker.html              |     4 +
 docs/latest/configuration/caching.html             |     4 +
 docs/latest/configuration/coordinator.html         |     4 +
 docs/latest/configuration/hadoop.html              |     4 +
 docs/latest/configuration/historical.html          |     4 +
 docs/latest/configuration/index.md                 |  1665 ++
 docs/latest/configuration/indexing-service.html    |     4 +
 docs/latest/configuration/logging.md               |    55 +
 docs/latest/configuration/production-cluster.html  |     4 +
 docs/latest/configuration/realtime.md              |    98 +
 docs/latest/configuration/simple-cluster.html      |     4 +
 docs/latest/configuration/zookeeper.html           |     4 +
 docs/latest/dependencies/cassandra-deep-storage.md |    62 +
 docs/latest/dependencies/deep-storage.md           |    54 +
 docs/latest/dependencies/metadata-storage.md       |   141 +
 docs/latest/dependencies/zookeeper.md              |    77 +
 docs/latest/design/auth.md                         |   168 +
 docs/latest/design/broker.md                       |    55 +
 docs/latest/design/concepts-and-terminology.html   |     4 +
 docs/latest/design/coordinator.md                  |   132 +
 docs/latest/design/design.html                     |     4 +
 docs/latest/design/historical.md                   |    59 +
 docs/latest/design/index.md                        |   203 +
 docs/latest/design/indexing-service.md             |    65 +
 docs/latest/design/middlemanager.md                |    44 +
 docs/latest/design/overlord.md                     |    63 +
 docs/latest/design/peons.md                        |    47 +
 docs/latest/design/plumber.md                      |    38 +
 docs/latest/design/processes.md                    |   131 +
 docs/latest/design/realtime.md                     |    80 +
 docs/latest/design/segments.md                     |   205 +
 .../latest/development/approximate-histograms.html |     4 +
 docs/latest/development/build.md                   |    69 +
 .../development/community-extensions/azure.html    |     4 +
 .../community-extensions/cassandra.html            |     4 +
 .../community-extensions/cloudfiles.html           |     4 +
 .../development/community-extensions/graphite.html |     4 +
 .../community-extensions/kafka-simple.html         |     4 +
 .../development/community-extensions/rabbitmq.html |     4 +
 .../development/datasketches-aggregators.html      |     4 +
 docs/latest/development/experimental.md            |    39 +
 .../extensions-contrib/ambari-metrics-emitter.md   |   100 +
 .../latest/development/extensions-contrib/azure.md |    95 +
 .../development/extensions-contrib/cassandra.md    |    31 +
 .../development/extensions-contrib/cloudfiles.md   |    97 +
 .../extensions-contrib/distinctcount.md            |    99 +
 .../development/extensions-contrib/google.md       |    89 +
 .../development/extensions-contrib/graphite.md     |   118 +
 .../development/extensions-contrib/influx.md       |    66 +
 .../extensions-contrib/kafka-emitter.md            |    55 +
 .../development/extensions-contrib/kafka-simple.md |    56 +
 .../extensions-contrib/materialized-view.md        |   134 +
 .../extensions-contrib/opentsdb-emitter.md         |    62 +
 docs/latest/development/extensions-contrib/orc.md  |   113 +
 .../development/extensions-contrib/parquet.html    |     4 +
 .../development/extensions-contrib/rabbitmq.md     |    81 +
 .../development/extensions-contrib/redis-cache.md  |    58 +
 .../development/extensions-contrib/rocketmq.md     |    29 +
 .../development/extensions-contrib/scan-query.html |     4 +
 .../development/extensions-contrib/sqlserver.md    |    57 +
 .../development/extensions-contrib/statsd.md       |    70 +
 .../development/extensions-contrib/thrift.md       |   128 +
 .../development/extensions-contrib/time-min-max.md |   105 +
 .../extensions-core/approximate-histograms.md      |   318 +
 docs/latest/development/extensions-core/avro.md    |   222 +
 .../development/extensions-core/bloom-filter.md    |   179 +
 .../extensions-core/caffeine-cache.html            |     4 +
 .../extensions-core/datasketches-aggregators.html  |     4 +
 .../extensions-core/datasketches-extension.md      |    40 +
 .../extensions-core/datasketches-hll.md            |   102 +
 .../extensions-core/datasketches-quantiles.md      |   112 +
 .../extensions-core/datasketches-theta.md          |   273 +
 .../extensions-core/datasketches-tuple.md          |   175 +
 .../extensions-core/druid-basic-security.md        |   321 +
 .../development/extensions-core/druid-kerberos.md  |   123 +
 .../development/extensions-core/druid-lookups.md   |   150 +
 .../latest/development/extensions-core/examples.md |    45 +
 docs/latest/development/extensions-core/hdfs.md    |    56 +
 .../extensions-core/kafka-eight-firehose.md        |    54 +
 .../extensions-core/kafka-extraction-namespace.md  |    70 +
 .../development/extensions-core/kafka-ingestion.md |   347 +
 .../extensions-core/kinesis-ingestion.md           |   393 +
 .../extensions-core/lookups-cached-global.md       |   379 +
 docs/latest/development/extensions-core/mysql.md   |   109 +
 .../extensions-core/namespaced-lookup.html         |     4 +
 docs/latest/development/extensions-core/parquet.md |   220 +
 .../development/extensions-core/postgresql.md      |    85 +
 .../latest/development/extensions-core/protobuf.md |   223 +
 docs/latest/development/extensions-core/s3.md      |    98 +
 .../extensions-core/simple-client-sslcontext.md    |    54 +
 docs/latest/development/extensions-core/stats.md   |   172 +
 .../development/extensions-core/test-stats.md      |   118 +
 docs/latest/development/extensions.md              |   105 +
 docs/latest/development/geo.md                     |    93 +
 .../integrating-druid-with-other-technologies.md   |    39 +
 docs/latest/development/javascript.md              |    75 +
 .../kafka-simple-consumer-firehose.html            |     4 +
 docs/latest/development/libraries.html             |     4 +
 docs/latest/development/modules.md                 |   273 +
 docs/latest/development/overview.md                |    76 +
 docs/latest/development/router.md                  |   244 +
 docs/latest/development/select-query.html          |     4 +
 docs/latest/development/versioning.md              |    47 +
 docs/latest/index.html                             |     4 +
 docs/latest/ingestion/batch-ingestion.md           |    39 +
 .../ingestion/command-line-hadoop-indexer.md       |    95 +
 docs/latest/ingestion/compaction.md                |   102 +
 docs/latest/ingestion/data-formats.md              |   205 +
 docs/latest/ingestion/delete-data.md               |    50 +
 docs/latest/ingestion/faq.md                       |   106 +
 docs/latest/ingestion/firehose.md                  |   214 +
 docs/latest/ingestion/flatten-json.md              |   180 +
 docs/latest/ingestion/hadoop-vs-native-batch.md    |    43 +
 docs/latest/ingestion/hadoop.md                    |   363 +
 docs/latest/ingestion/index.md                     |   306 +
 docs/latest/ingestion/ingestion-spec.md            |   332 +
 docs/latest/ingestion/ingestion.html               |     4 +
 docs/latest/ingestion/locking-and-priority.md      |    79 +
 docs/latest/ingestion/misc-tasks.md                |    94 +
 docs/latest/ingestion/native-batch.html            |     4 +
 docs/latest/ingestion/native_tasks.md              |   620 +
 docs/latest/ingestion/overview.html                |     4 +
 docs/latest/ingestion/realtime-ingestion.html      |     4 +
 docs/latest/ingestion/reports.md                   |   152 +
 docs/latest/ingestion/schema-changes.md            |    82 +
 docs/latest/ingestion/schema-design.md             |   338 +
 docs/latest/ingestion/stream-ingestion.md          |    56 +
 docs/latest/ingestion/stream-pull.md               |   376 +
 docs/latest/ingestion/stream-push.md               |   186 +
 docs/latest/ingestion/tasks.md                     |    78 +
 docs/latest/ingestion/transform-spec.md            |   104 +
 docs/latest/ingestion/update-existing-data.md      |   162 +
 docs/latest/misc/cluster-setup.html                |     4 +
 docs/latest/misc/evaluate.html                     |     4 +
 docs/latest/misc/math-expr.md                      |   138 +
 docs/latest/misc/papers-and-talks.md               |    43 +
 docs/latest/misc/tasks.html                        |     4 +
 docs/latest/operations/alerts.md                   |    38 +
 docs/latest/operations/api-reference.md            |   736 +
 docs/latest/operations/druid-console.md            |    90 +
 docs/latest/operations/dump-segment.md             |   116 +
 docs/latest/operations/http-compression.md         |    34 +
 docs/latest/operations/img/01-home-view.png        |   Bin 0 -> 60287 bytes
 docs/latest/operations/img/02-datasources.png      |   Bin 0 -> 163824 bytes
 docs/latest/operations/img/03-retention.png        |   Bin 0 -> 123857 bytes
 docs/latest/operations/img/04-segments.png         |   Bin 0 -> 125873 bytes
 docs/latest/operations/img/05-tasks-1.png          |   Bin 0 -> 101635 bytes
 docs/latest/operations/img/06-tasks-2.png          |   Bin 0 -> 221977 bytes
 docs/latest/operations/img/07-tasks-3.png          |   Bin 0 -> 195170 bytes
 docs/latest/operations/img/08-servers.png          |   Bin 0 -> 119310 bytes
 docs/latest/operations/img/09-sql.png              |   Bin 0 -> 80580 bytes
 docs/latest/operations/including-extensions.md     |    87 +
 docs/latest/operations/insert-segment-to-db.html   |     4 +
 docs/latest/operations/insert-segment-to-db.md     |   156 +
 docs/latest/operations/management-uis.md           |    80 +
 docs/latest/operations/metrics.md                  |   279 +
 docs/latest/operations/multitenancy.html           |     4 +
 docs/latest/operations/other-hadoop.md             |   300 +
 docs/latest/operations/password-provider.md        |    55 +
 docs/latest/operations/performance-faq.md          |    95 +
 docs/latest/operations/pull-deps.md                |   151 +
 docs/latest/operations/recommendations.md          |    93 +
 docs/latest/operations/reset-cluster.md            |    76 +
 docs/latest/operations/rolling-updates.md          |   102 +
 docs/latest/operations/rule-configuration.md       |   242 +
 docs/latest/operations/segment-optimization.md     |   100 +
 docs/latest/operations/tls-support.md              |   105 +
 docs/latest/operations/use_sbt_to_build_fat_jar.md |   128 +
 docs/latest/querying/aggregations.md               |   361 +
 docs/latest/querying/caching.md                    |    46 +
 docs/latest/querying/datasource.md                 |    65 +
 docs/latest/querying/datasourcemetadataquery.md    |    57 +
 docs/latest/querying/dimensionspecs.md             |   545 +
 docs/latest/querying/filters.md                    |   521 +
 docs/latest/querying/granularities.md              |   438 +
 docs/latest/querying/groupbyquery.md               |   445 +
 docs/latest/querying/having.md                     |   261 +
 docs/latest/querying/hll-old.md                    |   142 +
 docs/latest/querying/joins.md                      |    55 +
 docs/latest/querying/limitspec.md                  |    55 +
 docs/latest/querying/lookups.md                    |   444 +
 docs/latest/querying/multi-value-dimensions.md     |   340 +
 docs/latest/querying/multitenancy.md               |    99 +
 docs/latest/querying/optimizations.html            |     4 +
 docs/latest/querying/post-aggregations.md          |   223 +
 docs/latest/querying/query-context.md              |    62 +
 docs/latest/querying/querying.md                   |   125 +
 docs/latest/querying/scan-query.md                 |   196 +
 docs/latest/querying/searchquery.md                |   141 +
 docs/latest/querying/searchqueryspec.md            |    77 +
 docs/latest/querying/segmentmetadataquery.md       |   188 +
 docs/latest/querying/select-query.md               |   259 +
 docs/latest/querying/sorting-orders.md             |    54 +
 docs/latest/querying/sql.md                        |   718 +
 docs/latest/querying/timeboundaryquery.md          |    58 +
 docs/latest/querying/timeseriesquery.md            |   163 +
 docs/latest/querying/topnmetricspec.md             |    87 +
 docs/latest/querying/topnquery.md                  |   257 +
 docs/latest/querying/virtual-columns.md            |    80 +
 docs/latest/toc.md                                 |   174 +
 .../tutorials/booting-a-production-cluster.html    |     4 +
 docs/latest/tutorials/cluster.md                   |   408 +
 docs/latest/tutorials/examples.html                |     4 +
 docs/latest/tutorials/firewall.html                |     4 +
 docs/latest/tutorials/img/tutorial-batch-01.png    |   Bin 0 -> 54435 bytes
 .../tutorials/img/tutorial-compaction-01.png       |   Bin 0 -> 55153 bytes
 .../tutorials/img/tutorial-compaction-02.png       |   Bin 0 -> 279736 bytes
 .../tutorials/img/tutorial-compaction-03.png       |   Bin 0 -> 40114 bytes
 .../tutorials/img/tutorial-compaction-04.png       |   Bin 0 -> 312142 bytes
 .../tutorials/img/tutorial-compaction-05.png       |   Bin 0 -> 39784 bytes
 .../tutorials/img/tutorial-compaction-06.png       |   Bin 0 -> 351505 bytes
 .../tutorials/img/tutorial-compaction-07.png       |   Bin 0 -> 40106 bytes
 .../tutorials/img/tutorial-compaction-08.png       |   Bin 0 -> 43257 bytes
 docs/latest/tutorials/img/tutorial-deletion-01.png |   Bin 0 -> 72062 bytes
 docs/latest/tutorials/img/tutorial-deletion-02.png |   Bin 0 -> 200459 bytes
 .../latest/tutorials/img/tutorial-retention-00.png |   Bin 0 -> 138304 bytes
 .../latest/tutorials/img/tutorial-retention-01.png |   Bin 0 -> 53955 bytes
 .../latest/tutorials/img/tutorial-retention-02.png |   Bin 0 -> 410930 bytes
 .../latest/tutorials/img/tutorial-retention-03.png |   Bin 0 -> 44144 bytes
 .../latest/tutorials/img/tutorial-retention-04.png |   Bin 0 -> 67493 bytes
 .../latest/tutorials/img/tutorial-retention-05.png |   Bin 0 -> 61639 bytes
 .../latest/tutorials/img/tutorial-retention-06.png |   Bin 0 -> 233034 bytes
 docs/latest/tutorials/index.md                     |   196 +
 docs/latest/tutorials/ingestion-streams.html       |     4 +
 docs/latest/tutorials/ingestion.html               |     4 +
 docs/latest/tutorials/quickstart.html              |     4 +
 .../tutorials/tutorial-a-first-look-at-druid.html  |     4 +
 .../tutorials/tutorial-all-about-queries.html      |     4 +
 docs/latest/tutorials/tutorial-batch-hadoop.md     |   259 +
 docs/latest/tutorials/tutorial-batch.md            |   179 +
 docs/latest/tutorials/tutorial-compaction.md       |   176 +
 docs/latest/tutorials/tutorial-delete-data.md      |   178 +
 docs/latest/tutorials/tutorial-ingestion-spec.md   |   662 +
 docs/latest/tutorials/tutorial-kafka.md            |   107 +
 .../tutorials/tutorial-loading-batch-data.html     |     4 +
 .../tutorials/tutorial-loading-streaming-data.html |     4 +
 docs/latest/tutorials/tutorial-query.md            |   300 +
 docs/latest/tutorials/tutorial-retention.md        |   115 +
 docs/latest/tutorials/tutorial-rollup.md           |   200 +
 .../tutorials/tutorial-the-druid-cluster.html      |     4 +
 docs/latest/tutorials/tutorial-tranquility.md      |   104 +
 docs/latest/tutorials/tutorial-transform-spec.md   |   158 +
 docs/latest/tutorials/tutorial-update-data.md      |   169 +
 downloads.md                                       |    69 +
 downloads/index.md                                 |     6 +
 druid-powered.md                                   |   499 +
 druid.md                                           |    87 +
 faq.md                                             |   104 +
 feed/index.xml                                     |    25 +
 fonts/framd.eot                                    |   Bin 0 -> 139558 bytes
 fonts/framd.otf                                    |   Bin 0 -> 106040 bytes
 fonts/framd.svg                                    |  2980 ++++
 fonts/framd.ttf                                    |   Bin 0 -> 139332 bytes
 fonts/framd.woff                                   |   Bin 0 -> 58796 bytes
 gulpfile.js                                        |    26 +
 img/diagram-1.png                                  |   Bin 0 -> 51000 bytes
 img/diagram-2.png                                  |   Bin 0 -> 57391 bytes
 img/diagram-3.png                                  |   Bin 0 -> 51004 bytes
 img/diagram-4-future.png                           |   Bin 0 -> 42272 bytes
 img/diagram-4.png                                  |   Bin 0 -> 45771 bytes
 img/diagram-5.png                                  |   Bin 0 -> 122701 bytes
 img/diagram-6.png                                  |   Bin 0 -> 38947 bytes
 img/diagram-7-future.png                           |   Bin 0 -> 147941 bytes
 img/diagram-7.png                                  |   Bin 0 -> 147262 bytes
 img/diagram-8.png                                  |   Bin 0 -> 50298 bytes
 img/druid.png                                      |   Bin 0 -> 13216 bytes
 img/druid_2x.png                                   |   Bin 0 -> 20633 bytes
 img/druid_nav.png                                  |   Bin 0 -> 40196 bytes
 img/druid_watermark_30.png                         |   Bin 0 -> 7823 bytes
 img/favicon.png                                    |   Bin 0 -> 4514 bytes
 img/legos.jpg                                      |   Bin 0 -> 186569 bytes
 img/map-usgs-napa.png                              |   Bin 0 -> 3672165 bytes
 img/napa_streamflow_plot.png                       |   Bin 0 -> 37521 bytes
 img/note-caution.svg                               |     8 +
 img/note-info.svg                                  |     8 +
 img/oss-panel.png                                  |   Bin 0 -> 862489 bytes
 img/radglue.png                                    |   Bin 0 -> 47173 bytes
 img/radstack.png                                   |   Bin 0 -> 47086 bytes
 img/watermark-dark.png                             |   Bin 0 -> 48713 bytes
 img/watermark-light.png                            |   Bin 0 -> 35315 bytes
 img/wiki-edit-lang-plot.png                        |   Bin 0 -> 48391 bytes
 img/yklogo.png                                     |   Bin 0 -> 5554 bytes
 index.html                                         |   122 +
 libraries.md                                       |    93 +
 licensing.md                                       |    39 +
 package-lock.json                                  |  3872 +++++
 package.json                                       |    24 +
 robots.txt                                         |     7 +
 scss/blogs.scss                                    |    82 +
 scss/bootstrap-pure.scss                           |  2470 +++
 scss/docs.scss                                     |   176 +
 scss/footer.scss                                   |    49 +
 scss/header.scss                                   |   150 +
 scss/index.scss                                    |    85 +
 scss/main.scss                                     |   257 +
 scss/news-list.scss                                |    89 +
 scss/reset.scss                                    |    71 +
 scss/syntax.scss                                   |    69 +
 scss/variables.scss                                |    12 +
 technology.md                                      |   184 +
 thanks.md                                          |    13 +
 use-cases.md                                       |   113 +
 version/stable                                     |     1 +
 4571 files changed, 787113 insertions(+), 1 deletion(-)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..8cc65fe
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,7 @@
+_site
+Gemfile.lock
+.DS_store
+.idea
+.jekyll-metadata
+node_modules
+private
diff --git a/404.html b/404.html
new file mode 100644
index 0000000..b871aba
--- /dev/null
+++ b/404.html
@@ -0,0 +1,12 @@
+---
+layout: html_page
+---
+
+<div class="druid-header">
+  <div class="container">
+    <h1>Whoops! We couldn't find that page…</h1>
+    <h3><a href="/">Try this one instead?</a></h3>
+  </div>
+</div>
+
+
diff --git a/CNAME b/CNAME
new file mode 100644
index 0000000..0262f41
--- /dev/null
+++ b/CNAME
@@ -0,0 +1 @@
+druid.io
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..5bcc218
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,14 @@
+# License
+
+By contributing to this repository you agree to license your contribution under the [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
+
+# How to Contribute
+
+When submitting a pull request (PR), please use the following guidelines:
+
+- First verify your changes by running `bundle exec jekyll serve -w`.
+- For most webpage changes, please submit pull requests to this repository based off the "src" branch.
+- For any documentation changes (in the "docs" folder), please submit pull requests to the [main Druid
+  repo](https://github.com/druid-io/druid-io.github.io). All Druid
+  documentation is hosted under
+  [https://github.com/apache/incubator-druid/tree/master/docs/content](https://github.com/apache/incubator-druid/tree/master/docs/content).
diff --git a/Gemfile b/Gemfile
new file mode 100644
index 0000000..7e8ea25
--- /dev/null
+++ b/Gemfile
@@ -0,0 +1,5 @@
+source 'https://rubygems.org'
+gem 'pygments.rb'
+gem 'jekyll', '3.1.6'
+gem 'redcarpet'
+gem 'jekyll-textile-converter'
diff --git a/README.md b/README.md
index 61f2eb3..0cb042f 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,29 @@
-Repository for source files for the druid.apache.org site.
+Druid Project Website
+=====================
+
+http://druid.io/
+
+## Building
+
+Setup (you only need to do this once):
+
+```
+npm install
+bundle install
+```
+
+Every time you want to run the site:
+
+```
+gulp
+npm start
+```
+
+## Notes
+
+Ideally we would not check the `css` directory in at all, and would instead build it as part of the deploy process.
+
+## Contributing
+
+See [CONTRIBUTING.md](https://github.com/druid-io/druid-io.github.io/blob/src/CONTRIBUTING.md).
+
diff --git a/_config.yml b/_config.yml
new file mode 100644
index 0000000..5a5bd54
--- /dev/null
+++ b/_config.yml
@@ -0,0 +1,69 @@
+name: Druid
+
+permalink: /blog/:year/:month/:day/:title.html
+markdown: redcarpet
+
+redcarpet:
+  extensions: ["tables", "no_intra_emphasis", "fenced_code_blocks", "with_toc_data"]
+
+exclude:
+  - CNAME
+  - Gemfile
+  - Gemfile.lock
+  - README.md
+  - node_modules
+  - scss
+  - private
+
+highlighter: pygments
+
+title: 'Druid'
+description: 'Real²time Exploratory Analytics on Large Datasets'
+
+
+druid_versions:
+  - release: 0.14
+    versions:
+      - version: 0.14.2-incubating
+        date: 2019-05-27
+      - version: 0.14.1-incubating
+        date: 2019-05-09
+      - version: 0.14.0-incubating
+        date: 2019-04-09
+  - release: 0.13
+    versions:
+      - version: 0.13.0-incubating
+        date: 2018-12-18
+
+
+tranquility_stable_version: 0.8.3
+
+gems:
+  - jekyll-textile-converter
+
+prose:
+  metadata:
+    siteurl: 'http://druid.io'
+
+    _posts:
+      - name: "author"
+        field:
+          element: "text"
+          label: "Author"
+          value: ""
+      - name: "tags"
+        field:
+          element: "text"
+          label: "Tags"
+          value: ""
+      - name: "image"
+        field:
+          element: "text"
+          label: "Image"
+          value: ""
+      - name: "layout"
+        field:
+          element: "text"
+          label: "Layout"
+          value: "post"
+
diff --git a/_data/events.yml b/_data/events.yml
new file mode 100644
index 0000000..b251cb7
--- /dev/null
+++ b/_data/events.yml
@@ -0,0 +1,14 @@
+- date: 2019-06-12
+  name: "Druid NYC Meetup @ Tumblr"
+  info: "Inside Apache Druid: Deep Dive, Personalized Related Blog Recommendation with User Feedback"
+  link: https://www.meetup.com/Apache-Druid-NYC/events/260925189/
+
+- date: 2019-09-09
+  name: "ApacheCon North America, Las Vegas"
+  info: "2019-09-09 to 2019-09-13 Flamingo Las Vegas, 3555 S Las Vegas Blvd, Las Vegas, NV 89109, USA"
+  link: https://apachecon.com/acna19/
+  
+- date: 2019-10-22
+  name: "ApacheCon Europe, Berlin"
+  info: "2019-10-22 to 2019-10-25 Kulturbrauerei, Schönhauser Allee 36, 10435 Berlin, Germany"
+  link: https://aceu19.apachecon.com/
diff --git a/_data/featured.yml b/_data/featured.yml
new file mode 100644
index 0000000..2ec6f05
--- /dev/null
+++ b/_data/featured.yml
@@ -0,0 +1,47 @@
+  - date: 2019-05-29
+    title: "Monitoring at eBay with Druid"
+    name: "Mohan Garadi"
+    link: https://www.ebayinc.com/stories/blogs/tech/monitoring-at-ebay-with-druid/
+    company: ebay
+    
+  - date: 2019-05-22
+    title: "Setting the stage for fast analytics with Druid"
+    name: "Surekha Saharan and Benjamin Hopp"
+    link: https://speakerdeck.com/implydatainc/setting-the-stage-for-fast-analytics-with-druid
+    company: Imply
+    
+  - date: 2019-03-15
+    title: "Data Engineering At Booking.com Case Study | #064"
+    name: "Andreas Kretz"
+    link: https://youtu.be/9GE3yiVo1FM
+    company: Booking.com
+  
+  - date: 2018-11-14
+    title: "How Druid enables analytics at Airbnb"
+    name: "Pala Muthiah and Jinyang Li"
+    link: https://medium.com/airbnb-engineering/druid-airbnb-data-platform-601c312f2a4c
+    company: Airbnb
+
+  - date: 2018-09-25
+    title: "Data Analytics and Processing at Snap"
+    name: "Charles Allen"
+    link: https://www.slideshare.net/CharlesAllen9/data-analytics-and-processing-at-snap-druid-meetup-la-september-2018
+    company: Snap, Inc.
+
+  - date: 2018-09-13
+    title: "Securing Druid"
+    name: "Jon Wei"
+    link: https://imply.io/post/securing-druid
+    company: Imply
+
+  - date: 2018-08-30
+    title: "Streaming SQL and Druid"
+    name: "Arup Malakar"
+    link: https://youtu.be/ovZ9iAkQllo
+    company: Lyft
+
+  - date: 2018-06-19
+    title: "PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase"
+    name: "Deepika Khera & Kasi Natarajan"
+    link: https://dataworkssummit.com/san-jose-2018/session/paypal-merchant-ecosystem-using-apache-spark-hive-druid-and-hbase/
+    company: Paypal
diff --git a/_images/druid_explorer_chart.png b/_images/druid_explorer_chart.png
new file mode 100644
index 0000000..ac416c0
Binary files /dev/null and b/_images/druid_explorer_chart.png differ
diff --git a/_images/map-usgs-napa.png b/_images/map-usgs-napa.png
new file mode 100644
index 0000000..c3ed269
Binary files /dev/null and b/_images/map-usgs-napa.png differ
diff --git a/_includes/event-list.html b/_includes/event-list.html
new file mode 100644
index 0000000..ba64652
--- /dev/null
+++ b/_includes/event-list.html
@@ -0,0 +1,26 @@
+<link rel="stylesheet" href="/css/news-list.css">
+
+<div class="item-list">
+  <h3>
+    Upcoming Events
+  </h3>
+  {% for event in site.data.events limit: 7 %}
+  <div class="event">
+    <div class="mini-cal">
+      <div class="date-month">
+        {{ event.date | date: "%b" }}
+      </div>
+      <div class="date-day">
+        {{ event.date | date: "%e" }}
+      </div>
+    </div>
+    <p>
+      <a href="{{ event.link }}">
+        <span class ="title">{{ event.name }}</span><br>
+        <span class="text-muted">{{ event.info }}</span>
+      </a>
+    </p>
+  </div>
+  {% endfor %}
+  <a class="btn btn-default btn-xs" href="https://www.meetup.com/topics/apache-druid/">Join a Druid Meetup!</a>
+</div>
diff --git a/_includes/featured-list.html b/_includes/featured-list.html
new file mode 100644
index 0000000..4798937
--- /dev/null
+++ b/_includes/featured-list.html
@@ -0,0 +1,17 @@
+<link rel="stylesheet" href="/css/news-list.css">
+
+<div class="item-list">
+  <h3>
+    Featured Content
+  </h3>
+  {% for feature in site.data.featured limit: 5 %}
+  <p>
+    <a href="{{ feature.link }}">
+      <span class="title">{{ feature.title }}</span><br>
+      <span class="text-muted">{{ feature.name }} - </span>
+      <span class="text-muted">{{ feature.company }}</span><br>
+      <span class="text-muted">{{ feature.date | date: "%b %e %Y" }}</span>
+    </a>
+  </p>
+  {% endfor %}
+</div>
\ No newline at end of file
diff --git a/_includes/news-list.html b/_includes/news-list.html
new file mode 100644
index 0000000..1247177
--- /dev/null
+++ b/_includes/news-list.html
@@ -0,0 +1,24 @@
+<link rel="stylesheet" href="/css/news-list.css">
+
+<div class="item-list">
+  <h3>
+    Latest releases
+  </h3>
+  {% assign ctr = 0 %}
+  {% assign max = 5 %}
+  {% for branch in site.druid_versions %}
+  {% if ctr < max %}
+  {% for release in branch.versions %}
+  {% if ctr < max %}
+  {% assign ctr = ctr | plus:1 %}
+  <p>
+    <a href="https://github.com/apache/incubator-druid/releases/tag/druid-{{ release.version }}">
+      <span class="title">Apache Druid (incubating) {{ release.version | remove: "-incubating"}} Released</span><br>
+      <span class="text-muted">{{ release.date | date: "%b %e %Y" }}</span>
+    </a>
+  </p>
+  {% endif %}
+  {% endfor %}
+  {% endif %}
+  {% endfor %}
+</div>
diff --git a/_includes/page_footer.html b/_includes/page_footer.html
new file mode 100644
index 0000000..9bd83ed
--- /dev/null
+++ b/_includes/page_footer.html
@@ -0,0 +1,46 @@
+<!-- Start page_footer include -->
+<footer class="druid-footer">
+<div class="container">
+  <div class="text-center">
+    <p>
+    <a href="/technology">Technology</a>&ensp;·&ensp;
+    <a href="/use-cases">Use Cases</a>&ensp;·&ensp;
+    <a href="/druid-powered">Powered by Druid</a>&ensp;·&ensp;
+    <a href="/docs/latest">Docs</a>&ensp;·&ensp;
+    <a href="https://druid.apache.org/community/">Community</a>&ensp;·&ensp;
+    <a href="/downloads.html">Download</a>&ensp;·&ensp;
+    <a href="/faq">FAQ</a>
+    </p>
+  </div>
+  <div class="text-center">
+    <a title="Join the user group" href="https://groups.google.com/forum/#!forum/druid-user" target="_blank"><span class="fa fa-comments"></span></a>&ensp;·&ensp;
+    <a title="Follow Druid" href="https://twitter.com/druidio" target="_blank"><span class="fab fa-twitter"></span></a>&ensp;·&ensp;
+    <a title="Download via Apache" href="https://www.apache.org/dyn/closer.cgi?path=/incubator/druid/{{ site.druid_versions[0].version }}/apache-druid-{{ site.druid_versions[0].version }}-bin.tar.gz" target="_blank"><span class="fas fa-feather"></span></a>&ensp;·&ensp;
+    <a title="GitHub" href="https://github.com/apache/incubator-druid" target="_blank"><span class="fab fa-github"></span></a>
+  </div>
+  <div class="text-center license">
+    Except where otherwise noted, licensed under <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA 4.0</a>
+  </div>
+</div>
+</footer>
+
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+  ga('create', 'UA-40280432-1', 'auto');
+  ga('set', 'anonymizeIp', true);
+  ga('send', 'pageview');
+
+</script>
+<script>
+  function trackDownload(type, url) {
+    ga('send', 'event', 'download', type, url);
+  }
+</script>
+<script src="//code.jquery.com/jquery.min.js"></script>
+<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js"></script>
+<script src="/assets/js/druid.js"></script>
+<!-- stop page_footer include -->
diff --git a/_includes/page_header.html b/_includes/page_header.html
new file mode 100644
index 0000000..881d2ba
--- /dev/null
+++ b/_includes/page_header.html
@@ -0,0 +1,63 @@
+<!-- Start page_header include -->
+<script src="//ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
+
+<div class="top-navigator">
+  <div class="container">
+    <div class="left-cont">
+      <a class="logo" href="/"><span class="druid-logo"></span></a>
+    </div>
+    <div class="right-cont">
+      <ul class="links">
+        <li class="{% if page.sectionid == 'technology' %} active{% endif %}"><a href="/technology">Technology</a></li>
+        <li class="{% if page.sectionid == 'use-cases' %} active{% endif %}"><a href="/use-cases">Use Cases</a></li>
+        <li class="{% if page.sectionid == 'powered-by' %} active{% endif %}"><a href="/druid-powered">Powered By</a></li>
+        <li class="{% if page.sectionid == 'docs' %} active{% endif %}"><a href="/docs/latest/design/">Docs</a></li>
+        <li class="{% if page.sectionid == 'community' %} active{% endif %}"><a href="https://druid.apache.org/community/">Community</a></li>
+        <li class="{% if page.sectionid == 'download' %} active{% endif %} button-link"><a href="/downloads.html">Download</a></li>
+      </ul>
+    </div>
+  </div>
+  <div class="action-button menu-icon">
+    <span class="fa fa-bars"></span> MENU
+  </div>
+  <div class="action-button menu-icon-close">
+    <span class="fa fa-times"></span> MENU
+  </div>
+</div>
+
+<script type="text/javascript">
+  var $menu = $('.right-cont');
+  var $menuIcon = $('.menu-icon');
+  var $menuIconClose = $('.menu-icon-close');
+
+  function showMenu() {
+    $menu.fadeIn(100);
+    $menuIcon.fadeOut(100);
+    $menuIconClose.fadeIn(100);
+  }
+
+  $menuIcon.click(showMenu);
+
+  function hideMenu() {
+    $menu.fadeOut(100);
+    $menuIconClose.fadeOut(100);
+    $menuIcon.fadeIn(100);
+  }
+
+  $menuIconClose.click(hideMenu);
+
+  $(window).resize(function() {
+    if ($(window).width() >= 840) {
+      $menu.fadeIn(100);
+      $menuIcon.fadeOut(100);
+      $menuIconClose.fadeOut(100);
+    }
+    else {
+      $menu.fadeOut(100);
+      $menuIcon.fadeIn(100);
+      $menuIconClose.fadeOut(100);
+    }
+  });
+</script>
+
+<!-- Stop page_header include -->
diff --git a/_includes/site_head.html b/_includes/site_head.html
new file mode 100644
index 0000000..2f03c1e
--- /dev/null
+++ b/_includes/site_head.html
@@ -0,0 +1,35 @@
+<meta charset="UTF-8" />
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
+<meta name="description" content="Apache Druid">
+<meta name="keywords" content="druid,kafka,database,analytics,streaming,real-time,real time,apache,open source">
+<meta name="author" content="Apache Software Foundation">
+
+<title>Druid | {{page.title}}</title>
+{% if page.canonical %}<link rel="canonical" href="{{page.canonical}}" />{% endif %}
+<link rel="alternate" type="application/atom+xml" href="/feed">
+<link rel="shortcut icon" href="/img/favicon.png">
+
+<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.2/css/all.css" integrity="sha384-fnmOCqbTlWIlj8LyTjo7mOUStjsKC4pOpQbqyi7RrhN7udi9RwhKkMHpvLbHG9Sr" crossorigin="anonymous">
+
+<link href='//fonts.googleapis.com/css?family=Open+Sans+Condensed:300,700,300italic|Open+Sans:300italic,400italic,600italic,400,300,600,700' rel='stylesheet' type='text/css'>
+
+<link rel="stylesheet" href="/css/bootstrap-pure.css?v=1.0">
+<link rel="stylesheet" href="/css/main.css?v=1.0">
+<link rel="stylesheet" href="/css/header.css?v=1.0">
+<link rel="stylesheet" href="/css/footer.css?v=1.0">
+<link rel="stylesheet" href="/css/syntax.css?v=1.0">
+<link rel="stylesheet" href="/css/docs.css?v=1.0">
+
+<script>
+  (function() {
+    var cx = '000162378814775985090:molvbm0vggm';
+    var gcse = document.createElement('script');
+    gcse.type = 'text/javascript';
+    gcse.async = true;
+    gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
+        '//cse.google.com/cse.js?cx=' + cx;
+    var s = document.getElementsByTagName('script')[0];
+    s.parentNode.insertBefore(gcse, s);
+  })();
+</script>
+
diff --git a/_layouts/doc_page.html b/_layouts/doc_page.html
new file mode 100644
index 0000000..3ebad08
--- /dev/null
+++ b/_layouts/doc_page.html
@@ -0,0 +1,60 @@
+---
+title: Documentation
+sectionid: docs
+---
+
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    {% include site_head.html %}
+  </head>
+
+  <body>
+    {% include page_header.html %}
+
+    <div class="container doc-container">
+      {% assign parts = (page.url | split: '/') %}
+      {% assign version = parts[2] %}
+
+      {% if version != site.druid_versions[0].version and version != 'latest' %}
+      <p> Looking for the <a href="/docs/{{ site.druid_versions[0].version }}/">latest stable documentation</a>?</p>
+      {% endif %}
+
+      <div class="row">
+        <div class="col-md-9 doc-content">
+          <p>
+            <a class="btn btn-default btn-xs visible-xs-inline-block visible-sm-inline-block" href="#toc">Table of Contents</a>
+          </p>
+          {{ content }}
+        </div>
+        <div class="col-md-3">
+          <div class="searchbox">
+            <gcse:searchbox-only></gcse:searchbox-only>
+          </div>
+          <div id="toc" class="nav toc hidden-print">
+          </div>
+        </div>
+      </div>
+    </div>
+
+    {% include page_footer.html %}
+
+    <script>
+    $(function() {
+      $(".toc").load("/docs/{{ version }}/toc.html");
+
+      // There is no way to tell when .gsc-input will be async loaded into the page so just try to set a placeholder until it works
+      var tries = 0;
+      var timer = setInterval(function() {
+        tries++;
+        if (tries > 300) clearInterval(timer);
+        var searchInput = $('input.gsc-input');
+        if (searchInput.length) {
+          searchInput.attr('placeholder', 'Search');
+          clearInterval(timer);
+        }
+      }, 100);
+    });
+    </script>
+  </body>
+</html>
diff --git a/_layouts/html_page.html b/_layouts/html_page.html
new file mode 100644
index 0000000..0121ba7
--- /dev/null
+++ b/_layouts/html_page.html
@@ -0,0 +1,14 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>	
+    {% include site_head.html %}
+  </head>
+  <body>	
+    {% include page_header.html %}
+
+    {{ content }}
+    
+    {% include page_footer.html %}
+    
+  </body>
+</html>
diff --git a/_layouts/post.html b/_layouts/post.html
new file mode 100644
index 0000000..44ac8d8
--- /dev/null
+++ b/_layouts/post.html
@@ -0,0 +1,32 @@
+---
+layout: html_page
+sectionid: blog
+---
+
+<link rel="stylesheet" href="/css/blogs.css">
+
+<div class="blog druid-header">
+  <div class="row">
+    <div class="col-md-8 col-md-offset-2">
+      <div class="title-image-wrap">
+        {% if page.image %}
+        <div class="title-spacer"></div>
+        <img class="title-image" src="{{ page.image }}" alt="{{ page.title }}"/>
+        {% endif %}
+      </div>
+    </div>
+  </div>
+</div>
+
+<div class="container blog">
+  <div class="row">
+    <div class="col-md-8 col-md-offset-2">
+      <div class="blog-entry">
+        <h1>{{ page.title }}</h1>
+        <p class="text-muted">{% if page.author %}by <span class="author text-uppercase">{{ page.author }}</span>{% endif %} · {{ page.date | date: "%B %e, %Y" }}</p>
+
+        {{ content }}
+      </div>
+    </div>
+  </div>
+</div>
diff --git a/_layouts/redirect_page.html b/_layouts/redirect_page.html
new file mode 100644
index 0000000..4d85abf
--- /dev/null
+++ b/_layouts/redirect_page.html
@@ -0,0 +1,8 @@
+<!DOCTYPE html>
+<meta charset="utf-8">
+<title>Redirecting...</title>
+<link rel="canonical" href="{{ page.redirect_target }}">
+<meta http-equiv="refresh" content="0; url={{ page.redirect_target }}">
+<h1>Redirecting...</h1>
+<a href="{{ page.redirect_target }}">Click here if you are not redirected.</a>
+<script>location="{{ page.redirect_target }}"</script>
diff --git a/_layouts/simple_page.html b/_layouts/simple_page.html
new file mode 100644
index 0000000..0415169
--- /dev/null
+++ b/_layouts/simple_page.html
@@ -0,0 +1,18 @@
+---
+layout: html_page
+---
+
+<div class="druid-header">
+  <div class="container">
+    <h1>{{ page.title }}</h1>
+    <h4>{{ page.subtitle }}</h4>
+  </div>
+</div>
+
+<div class="container">
+  <div class="row">
+    <div class="col-md-10 col-md-offset-1">
+      {{ content }}
+    </div>
+  </div>
+</div>
diff --git a/_layouts/toc.html b/_layouts/toc.html
new file mode 100644
index 0000000..e89eccf
--- /dev/null
+++ b/_layouts/toc.html
@@ -0,0 +1,7 @@
+---
+---
+
+{% assign parts = (page.url | split: '/') %}
+{% assign version = parts[2] %}
+
+{{ content | replace:'VERSION',version }}
diff --git a/_posts/2011-04-30-introducing-druid.md b/_posts/2011-04-30-introducing-druid.md
new file mode 100644
index 0000000..a1c1888
--- /dev/null
+++ b/_posts/2011-04-30-introducing-druid.md
@@ -0,0 +1,199 @@
+---
+title: "Introducing Druid: Real-Time Analytics at a Billion Rows Per Second"
+layout: post
+author: Eric Tschetter
+image: http://metamarkets.com/wp-content/uploads/2011/04/fastcar-sized-470x288.jpg
+---
+
+Here at Metamarkets we have developed a web-based analytics console that
+supports drill-downs and roll-ups of high dimensional data sets – comprising
+billions of events – in real-time.  This is the first of two blog posts
+introducing Druid, the data store that powers our console.  Over the last twelve
+months, we tried and failed to achieve scale and speed with relational databases
+(Greenplum, InfoBright, MySQL) and NoSQL offerings (HBase). So instead we did
+something crazy: we rolled our own database. Druid is the distributed, in-memory
+OLAP data store that resulted.
+
+**The Challenge: Fast Roll-Ups Over Big Data**
+
+To frame our discussion, let’s begin with an illustration of what our raw impression event logs look 
+like, containing many dimensions and two metrics (click and price).
+
+
+    timestamp             publisher          advertiser  gender  country  dimensions  click  price
+    2011-01-01T01:01:35Z  bieberfever.com    google.com  Male    USA                  0      0.65
+    2011-01-01T01:03:53Z  bieberfever.com    google.com  Male    USA                  0      0.62
+    2011-01-01T01:04:51Z  bieberfever.com    google.com  Male    USA                  1      0.45
+    ...
+    2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Female  UK                   0      0.87
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK                   0      0.99
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK                   1      1.53
+    ...
+
+
+We call this our *alpha* data set. We perform a first-level aggregation operation over a selected set of 
+dimensions, equivalent to (in pseudocode):
+
+
+    GROUP BY timestamp, publisher, advertiser, gender, country
+      :: impressions = COUNT(1),  clicks = SUM(click),  revenue = SUM(price)
+
+to yield a compacted version:
+
+     timestamp             publisher          advertiser  gender country impressions clicks revenue
+     2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male   USA     1800        25     15.70
+     2011-01-01T01:00:00Z  bieberfever.com    google.com  Male   USA     2912        42     29.18
+     2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male   UK      1953        17     17.31
+     2011-01-01T02:00:00Z  bieberfever.com    google.com  Male   UK      3194        170    34.01
+
+This is our *beta* data set, filtered for five selected dimensions and compacted. In the limit, as we group 
+by all available dimensions, the size of this aggregated beta converges to the original *alpha*. In practice, 
+it is dramatically smaller (often by a factor of 100). Our *beta* data comprises three distinct parts:
+
+> **Timestamp column**: We treat timestamp separately because all of our queries
+> center around the time axis. Timestamps are faceted by varying granularities
+> (hourly, in the example above).
+> 
+> **Dimension columns**: Here we have four dimensions of publisher, advertiser,
+> gender, and country. They each represent an axis of the data that we’ve chosen
+> to slice across.
+> 
+> **Metric columns**: These are impressions, clicks and revenue. These represent
+> values, usually numeric, which are derived from an aggregation operation – such
+> as count, sum, and mean (we also run variance and higher moment calculations).
+> For example, in the first row, the revenue metric of 15.70 is the sum of 1800
+> event-level prices.
+
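To make the first-level aggregation concrete, here is a minimal Python sketch of the alpha-to-beta roll-up (illustrative only: the field names follow the example tables above, and this stands in for, rather than reproduces, our actual pipeline):

```python
from collections import defaultdict

# Two raw alpha events from the example above (extra dimensions trimmed).
alpha = [
    {"timestamp": "2011-01-01T01:01:35Z", "publisher": "bieberfever.com",
     "advertiser": "google.com", "gender": "Male", "country": "USA",
     "click": 0, "price": 0.65},
    {"timestamp": "2011-01-01T01:04:51Z", "publisher": "bieberfever.com",
     "advertiser": "google.com", "gender": "Male", "country": "USA",
     "click": 1, "price": 0.45},
]

def rollup(events):
    """GROUP BY hourly timestamp + dimensions; impressions = COUNT(1),
    clicks = SUM(click), revenue = SUM(price)."""
    beta = defaultdict(lambda: {"impressions": 0, "clicks": 0, "revenue": 0.0})
    for e in events:
        hour = e["timestamp"][:13] + ":00:00Z"  # facet timestamps by hour
        key = (hour, e["publisher"], e["advertiser"], e["gender"], e["country"])
        beta[key]["impressions"] += 1
        beta[key]["clicks"] += e["click"]
        beta[key]["revenue"] += e["price"]
    return dict(beta)

beta = rollup(alpha)  # both events collapse into a single beta row
```

Both events land in one row keyed by their shared hour and dimensions, with 2 impressions, 1 click, and 1.10 revenue – the same compaction shown in the beta table.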
+Our goal is to rapidly compute drill-downs and roll-ups over this data set. We
+want to answer questions like “How many impressions from males were on
+bieberfever.com?” and “What is the average cost to advertise to women at
+ultratrimfast.com?”  But we have a hard requirement to meet: we want queries
+over any arbitrary combination of dimensions at sub-second latencies.
+
+Performance of such a system is dependent on the size of our beta set, and
+there are two ways that this becomes large: (i) when we include additional
+dimensions, and (ii) when we include a dimension whose cardinality is large.
+Using our example, for every hour’s worth of data we calculate the maximum
+number of rows as:
+
+    number_of_publishers * number_of_advertisers * number_of_genders * number_of_countries
+
+If we have 10 publishers, 50 advertisers, 2 genders, and 120 countries, that
+would yield a maximum of 120,000 rows.  If there had been 1,000,000 possible
+publishers, it would become a maximum of 12 billion rows. If we add 10 more
+dimensions of cardinality 10, then it becomes a maximum of 1.2 quadrillion (1.2
+x 10^15) rows.
+
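The worst-case arithmetic above is easy to check (a throwaway sketch; the numbers are the ones used in the text):

```python
def max_rows(*cardinalities):
    """Worst-case beta rows per time bucket: the product of dimension cardinalities."""
    total = 1
    for c in cardinalities:
        total *= c
    return total

small = max_rows(10, 50, 2, 120)                   # 120,000 rows
many_pubs = max_rows(1_000_000, 50, 2, 120)        # 12 billion rows
extra_dims = max_rows(10, 50, 2, 120, *[10] * 10)  # 1.2 x 10^15 rows
```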
+Luckily for us, these data sets are generally sparse, as dimension values are
+not conditionally independent (few Kazakhstanis visit bieberfever.com, for
+example). Thus the combinatorial explosion is far less than the theoretical
+worst-case. Nonetheless, as a rule, more dimensions and more cardinality
+dramatically inflate the size of the data set.
+
+**Failed Solution I: Dynamic Roll-Ups with a RDBMS**
+
+Our stated goals of aggregation and drill-down are well suited to a classical
+relational architecture. So about a year ago, we fired up a RDBMS instance
+(actually, the Greenplum Community Edition, running on an m1.large EC2 box),
+and began loading our data into it. It worked and we were able to build the
+initial version of our console on this system. However, we had two problems:
+
+1. We stored the data in a star schema, which meant that there was operational
+   overhead maintaining dimension and fact tables.
+
+2. Whenever we needed to do a full table scan, for things like global counts,
+   the queries ran slow. For example, naive benchmarks showed scanning 33
+   million rows took 3 seconds.
+
+We initially just decided to eat the operational overhead of (1) because that’s
+how these systems work and we benefited from having the database to do our
+storage and computation. But, (2) was painful. We started materializing all
+dimensional roll-ups of a certain depth, and began routing queries to these
+pre-aggregated tables. We also implemented a caching layer in front of our
+queries.
+
+This approach generally worked and is, I believe, a fairly common strategy in
+the space. Except, when things weren’t in the cache and a query couldn’t be
+mapped to a pre-aggregated table, we were back to full scans and slow
+performance.  We tried indexing our way out of it, but given that we are
+allowing arbitrary combinations of dimensions, we couldn’t really take
+advantage of composite indexes. Additionally, index merge strategies are not
+always implemented, or only implemented for bitmap indexes, depending on the
+flavor of RDBMS.
+
+We also benchmarked plain Postgres, MySQL, and InfoBright, but did not observe
+dramatically better performance. Seeing no path ahead for our relational
+database, we turned to one of those new-fangled, massively scalable NOSQL
+solutions.
+
+**Failed Solution II: Pre-compute the World in NoSQL**
+
+We used a data storage schema very similar to Twitter’s
+[Rainbird](http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011).
+
+In short, we took all of our data and pre-computed aggregates for every
+combination of dimensions. At query time we need only locate the specific
+pre-computed aggregate and return it: an O(1) key-value lookup. This made
+things fast and worked wonderfully when we had a six-dimension beta data set.
+But when we added five more dimensions – giving us 11 dimensions total – the
+time to pre-compute all aggregates became unmanageably large (such that we
+never saw a run finish, even after waiting more than 24 hours).
+
+So we decided to limit the depth that we aggregated to. By only pre-computing
+aggregates of five dimensions or less, we were able to limit some of the
+exponential expansion of the data. The data became manageable again, meaning it
+only took about 4 hours on 15 machines to expand 500k beta rows into the
+full multi-billion-entry output data set.
+
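The effect of the depth cap shows up even in the count of dimension combinations (illustrative arithmetic, not the actual job): with 11 dimensions there are 2^11 = 2,048 subsets to pre-aggregate, and capping at depth five drops exactly the deepest half of them.

```python
from math import comb

def combos_up_to(n_dims, max_depth):
    """Number of dimension subsets pre-aggregated at depth <= max_depth."""
    return sum(comb(n_dims, k) for k in range(max_depth + 1))

uncapped = combos_up_to(11, 11)  # 2048 subsets for 11 dimensions
capped = combos_up_to(11, 5)     # 1024 subsets when capped at depth 5
```

The real savings are far larger than 2x, since the discarded high-depth combinations are the ones whose pre-aggregates blow up combinatorially.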
+Then we added three more dimensions, bringing us up to 14. This turned into 9
+hours on 25 machines. We realized that we were [doing it
+wrong](http://knowyourmeme.com/memes/youre-doing-it-wrong).
+
+Lesson learned: massively scalable counter systems like Rainbird are intended
+for high cardinality data sets with pre-defined hierarchical drill-downs. But
+they break down when supporting arbitrary drill-downs across all dimensions.
+
+**Introducing Druid: A Distributed, In-Memory OLAP Store**
+
+Stepping back from our two failures, let’s examine why these systems failed to
+scale for our needs:
+
+1. Relational Database Architectures
+  * Full table scans were slow, regardless of the storage engine used
+  * Maintaining proper dimension tables, indexes and aggregate tables was painful
+  * Parallelization of queries was not always supported or non-trivial
+2. Massive NOSQL With Pre-Computation
+  * Supporting high dimensional OLAP requires pre-computing an exponentially large amount of data
+
+Looking at the problems with these solutions, it looks like the first,
+RDBMS-style architecture has a simpler issue to tackle: namely, how to scan
+tables fast.  When we were looking at our 500k-row data set, someone remarked,
+“Dude, I can store that in memory”. That was the answer.
+
+Keeping everything in memory provides fast scans, but it does introduce a new
+problem: machine memory is limited. The corollary thus was: distribute the data
+over multiple machines. Thus, our requirements were:
+
+* Ability to load up, store, and query data sets in memory
+* Parallelized architecture that allows us to add more machines in order to relieve memory pressure
+
+And then we threw in a couple more that seemed like good ideas:
+
+* Parallelized queries to speed up full scan processing
+* No dimensional tables to manage
+
+These are the requirements we used to implement Druid. The system makes a
+number of simplifying assumptions that fit our use case (namely that all
+analytics are time-based) and integrates access to real-time and historical
+data for a configurable amount of time into the past.
+
+The [next
+installment](http://metamarketsgroup.com/blog/druid-part-deux-three-principles-for-fast-distributed-olap/)
+will go into the architecture of Druid, how queries work and how the system can
+scale out to handle query hotspots and high cardinality data sets. For now, we
+leave you with a benchmark:
+
+* Our 40-instance (m2.2xlarge) cluster can scan, filter, and aggregate 1 billion rows in 950 milliseconds.
+
+
+[CONTINUE TO PART II…](http://metamarkets.com/2011/druid-part-deux-three-principles-for-fast-distributed-olap/)
diff --git a/_posts/2011-05-20-druid-part-deux.md b/_posts/2011-05-20-druid-part-deux.md
new file mode 100644
index 0000000..c0660c8
--- /dev/null
+++ b/_posts/2011-05-20-druid-part-deux.md
@@ -0,0 +1,107 @@
+---
+published: true
+title: "Druid, Part Deux: Three Principles for Fast, Distributed OLAP"
+author: Eric Tschetter
+image: "http://metamarkets.com/wp-content/uploads/2011/05/toyota-sized-470x288.jpg"
+layout: post
+---
+
+In a [previous blog
+post](http://druid.io/blog/2011/04/30/introducing-druid.html) we introduced the
+distributed indexing and query processing infrastructure we call Druid. In that
+post, we characterized the performance and scaling challenges that motivated us
+to build this system in the first place. Here, we discuss three design
+principles underpinning its architecture.
+
+**1. Partial Aggregates + In-Memory + Indexes => Fast Queries** 
+
+We work with two representations of our data: *alpha* represents the raw,
+unaggregated event logs, while *beta* is its partially aggregated derivative.
+This *beta* is the basis against which all further queries are evaluated:
+
+    2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male    USA  1800  25  15.70 
+    2011-01-01T01:00:00Z  bieberfever.com    google.com  Male    USA  2912  42  29.18 
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male    UK   1953  17  17.31 
+    2011-01-01T02:00:00Z  bieberfever.com    google.com  Male    UK   3194  170 34.01 
+
+This is the most compact representation that preserves the finest grain of data,
+while enabling on-the-fly computation of all O(2^n) possible dimensional
+roll-ups.
+
+The key to Druid’s speed is maintaining the _beta_ data entirely in memory. Full
+scans are several orders of magnitude faster in memory than via disk. What we
+lose in having to compute roll-ups on the fly, we make up for with speed.
+
+To support drill-downs on specific dimensions (such as results for only
+‘bieberfever.com’), we maintain a set of inverted indices. This allows for fast
+calculation (using AND & OR operations) of rows matching a search query. The
+inverted index enables us to scan a limited subset of rows to compute final
+query results – and these scans are themselves distributed, as we discuss next.
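The inverted-index idea can be sketched as follows (a minimal toy model with invented row data; Druid's actual index structures are compressed bitmaps, not Python sets):

```python
# Each dimension value maps to the set of row ids containing it, so a
# Boolean filter becomes cheap set algebra instead of a full table scan.
rows = [
    ("ultratrimfast.com", "USA"),
    ("bieberfever.com",   "USA"),
    ("ultratrimfast.com", "UK"),
    ("bieberfever.com",   "UK"),
]

# Build one posting set per (column, value) pair.
index = {}
for row_id, (publisher, country) in enumerate(rows):
    index.setdefault(("publisher", publisher), set()).add(row_id)
    index.setdefault(("country", country), set()).add(row_id)

# publisher = 'bieberfever.com' AND country = 'UK'
matches = index[("publisher", "bieberfever.com")] & index[("country", "UK")]
assert matches == {3}
```

Only the rows in `matches` need to be scanned to finish the query, which is what limits the work per node.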
+
+**2. Distributed Data + Parallelizable Queries => Horizontal Scalability** 
+
+Druid’s performance depends on having memory — lots of it. We achieve the requisite
+memory scale by dynamically distributing data across a cluster of nodes. As the
+data set grows, we can horizontally expand by adding more machines.
+
+To facilitate rebalancing, we take chunks of *beta* data and index them into
+segments based on time ranges. For high cardinality dimensions, distributing by
+time isn’t enough (we generally try to keep segments no larger than 20M rows),
+so we have introduced partitioning. We store metadata about segments within the
+query layer and partitioning logic within the segment generation code.
+
+We persist these segments in a storage system (currently S3) that is accessible
+from all nodes. If a node goes down, [Zookeeper](http://zookeeper.apache.org/)
+coordinates the remaining live nodes to reconstitute the missing *beta* set.
+
+Downstream clients of the API are insulated from this rebalancing: Druid’s
+query API seamlessly handles changes in cluster topology.
+
+Queries against the Druid cluster parallelize perfectly across nodes. We
+limited the aggregation operations we support to those that are inherently
+parallelizable – count, mean, variance, and other parametric statistics. While
+less parallelizable operations, such as median, are not supported, this
+limitation is offset by rich support of histogram and higher-order moment
+stores. The co-location of processing with in-memory data on each node reduces
+network load and dramatically improves performance.
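The reason count and mean parallelize well can be shown with a toy sketch (our own illustration, not Druid code): each node returns a small partial aggregate, and merging partials is associative, so the query layer can combine them in any order.

```python
def partial(rows):
    # Computed locally on one node: a fixed-size (count, sum) pair.
    return (len(rows), sum(rows))

def merge(a, b):
    # Merging partials is associative and commutative.
    return (a[0] + b[0], a[1] + b[1])

node1 = partial([1.0, 2.0, 3.0])
node2 = partial([4.0, 5.0])
count, total = merge(node1, node2)
mean = total / count
assert count == 5
assert mean == 3.0
```

An exact median has no such fixed-size partial form, which is why it falls on the unsupported side of the line.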
+
+This architecture provides a number of extra benefits:
+
+* Segments are read-only, so multiple servers can serve them simultaneously.
+  If we have a hotspot in a particular index, we can replicate that index to
+  multiple servers and load balance across them.
+* We can provide tiered classes of service for our data, with servers occupying
+  different points in the “query latency vs. data size” spectrum.
+* Our clusters can span data center boundaries.
+
+
+
+**3. Real-Time Analytics: Immutable Past, Append-Only Future** 
+
+Our system for real-time analytics is centered, naturally, on time. Because past events
+happen once and never change, they need not be re-writable. We need only be
+able to append new events.
+
+For real-time analytics, we have an event stream that flows into a set of
+real-time indexers. These are servers that advertise responsibility for the
+most recent 60 minutes of data and nothing more. They aggregate the real-time
+feed and periodically push an index segment to our storage system. The segment
+then gets loaded into memory of a standard server, and is flushed from the
+real-time indexer.
+
+Similarly, for long-range historical data that we want to make available, but
+not keep hot, we have deep-history servers. These use a memory mapping strategy
+for addressing segments, rather than loading them all into memory. This
+provides access to long-range data while maintaining the high-performance that
+our customers expect for near-term data.
+
+
+##Summary
+Druid’s power resides in providing users fast, arbitrarily deep
+exploration of large-scale transaction data. Queries over billions of rows,
+that previously took minutes or hours to run, can now be investigated directly
+with sub-second response times.
+
+We believe that the performance, scalability, and unification of real-time and
+historical data that Druid provides could be of broader interest. As such, we
+plan to open source our code base in the coming year.
diff --git a/_posts/2012-01-19-scaling-the-druid-data-store.md b/_posts/2012-01-19-scaling-the-druid-data-store.md
new file mode 100644
index 0000000..07ee7c8
--- /dev/null
+++ b/_posts/2012-01-19-scaling-the-druid-data-store.md
@@ -0,0 +1,215 @@
+---
+published: true
+title: Scaling the Druid Data Store
+layout: post
+author: Eric Tschetter
+image: "http://metamarkets.com/wp-content/uploads/2012/01/scaling2.jpg"
+---
+
+> *“Give me a lever long enough… and I shall move the world”*
+> — Archimedes
+
+Parallelism is computing’s leverage, a force multiplier acting against the
+weight of big data.  Cloud-hosted, horizontally scalable systems have the power
+to move even planetary sized data sets with speed.
+
+This blog post discusses our efforts to lift one such data set, achieving a
+scan rate of 26 billion records per second, with our distributed, in-memory
+data store called Druid.  Our main conclusions are:
+
+* Horizontally-scalable architectures are an ideal fit for the Cloud
+* Our data store’s performance scales up well to a 6TB in-memory cluster and
+  degrades gracefully under memory pressure
+* The flexibility of a Cloud environment enables pain-free tuning of cost
+  versus performance
+* Benchmarking our infrastructure against a big data set in the wild provides
+  validation of the power achievable on a Cloud computing fabric of commodity
+  hardware
+
+For those who are curious as to what our infrastructure powers, Metamarkets
+offers a SaaS analytics solution to gaming, social, and digital media firms.  A
+public example is our dashboard for exploring Wikipedia edits.
+
+###I) The Data
+
+We began our experiment with 6TB of uncompressed data, representing tens of
+billions of fact rows, which we aimed to host and make fully explorable through
+our dashboard.  By way of comparison, the Wikipedia edit feed we host consists
+of 6GB of uncompressed data, representing ~36 million fact rows.
+
+The first hurdle to overcome with a data set of this scale is co-locating the
+data with the compute power.  Most of the trillions of events we’ve analyzed on
+our platform have been delivered over months of parallel, continuous feeds.  In
+rare cases, we have had to transform the data locally and sneaker-net the disks
+to our data center.  Pushing terabytes over a standard office uplink can take
+weeks.
+
+Once on the cloud, we performed some cardinality analysis to make sure we
+understood the parameters of the data.  There were more than a dozen
+dimensions, with cardinalities ranging from tens of millions, to hundreds of
+thousands, all the way down to tens.  This kind of Zipfian distribution in
+cardinalities is common in naturally occurring data.  We then computed four
+metrics for each row (consisting of counts, sums, and averages) and loaded the
+data up into Druid.
+
+We sharded the data into chunks and then sub-sharded those chunks by the
+dimension with cardinality >> 1M, creating thousands of shards of roughly 8M
+fact rows apiece.
+
+###II) The Cluster
+
+We then spun up a cluster of compute nodes to load the data up and keep it in
+memory for querying.  The cluster consisted of 100 nodes, each with 16 cores,
+60GB of RAM, 10 GigE ethernet, and 1TB of disk space.  So, collectively the
+cluster comprised 1600 cores, 6TB of RAM, fast ethernet and more than enough
+disk space.
+
+With this first cluster, we were successful in delivering an interactive
+experience on our front-end dashboard, scanning billions of records per second,
+as the benchmarks below attest.
+
+During the course of our testing, we also reconfigured the cluster in multiple
+different ways, switching from pure in memory to using memory mapping and
+pulling back the number of servers to see how performance degrades as we
+changed the ratio of data served to available RAM.
+
+###III) The Benchmarks
+
+First, we’ll provide some benchmarks for our 100-node configuration on simple
+aggregation queries.  SQL is included to describe what the query is doing.
+
+
+    Select count(*) from _table_ where timestamp >= ? and timestamp < ?
+
+    cluster                         cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory   26,610,386,635                  17,740,258
+    15-core,  75 nodes, mmap        25,224,873,928                  22,422,110
+    15-core,  50 nodes, mmap        20,387,152,160                  27,182,870
+    15-core,  25 nodes, mmap        11,910,388,894                  31,761,037
+    4-core,  131 nodes, in-memory   10,008,730,163                  19,100,630
+    4-core,  131 nodes, mmap        10,129,695,120                  19,331,479
+    4-core,   50 nodes, mmap         6,626,570,688                  33,132,853
+
+
+* The timestamp range encompasses all data.
+* 15-core is a 16-core machine with 60GB RAM and 1TB of local disk. The machine was configured to only use 15
+threads for processing queries.
+* 4-core is a 4-core machine with 32GB RAM and 1TB of local disk.
+* in-memory means that the machine was configured to load all data up into the Java heap and have it available for querying
+* mmap means that the machine was configured to mmap the data instead of load it into the Java heap
+
+<br/>
+
+    Select count(*), sum(metric1) from _table_ where timestamp >= ? and timestamp < ?
+
+    cluster                         cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory   16,223,081,703                  10,815,388
+    15-core,  75 nodes, mmap        9,860,968,285                   8,765,305
+    15-core,  50 nodes, mmap        8,093,611,909                   10,791,483
+    15-core,  25 nodes, mmap        4,126,502,352                   11,004,006
+    4-core,  131 nodes, in-memory   5,755,274,389                   10,983,348
+    4-core,  131 nodes, mmap        5,032,185,657                   9,603,408
+    4-core,   50 nodes, mmap        1,720,238,609                   8,601,193
+
+
+    Select count(*), sum(metric1), sum(metric2), sum(metric3), sum(metric4)
+    where timestamp >= ? and timestamp < ?
+
+    cluster                         cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory   7,591,604,822                   5,061,070
+    15-core,  75 nodes, mmap        4,319,179,995                   3,839,271
+    15-core,  50 nodes, mmap        3,406,554,102                   4,542,072
+    15-core,  25 nodes, mmap        1,826,451,888                   4,870,538
+    4-core,  131 nodes, in-memory   1,936,648,601                   3,695,894
+    4-core,  131 nodes, mmap        2,210,367,152                   4,218,258
+    4-core,   50 nodes, mmap        1,002,291,562                   5,011,458
+
+
+The first query is just a count, and we see the best performance out of our
+system with it, achieving scan rates of 33M rows/second/core.  At first glance
+it looks like fewer nodes might actually be outperforming more nodes in the
+rows/sec/core metric, but that’s just because 100 nodes is overprovisioned for
+the data set.  Druid’s concurrency model is based on shards: one thread scans
+one shard.  If a node has 15 cores, for example, and handles a query that
+requires scanning 16 shards, and we assume each shard takes 1 second to
+process, the total time to finish the query will be 2 seconds (1 second for
+the first 15 shards and 1 second for the 16th shard).  This decreases the
+global scan rate because a number of cores sit idle during the second pass.
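The shard-wave arithmetic behind that effect can be sketched as a back-of-the-envelope model (illustrative only; real per-shard times vary):

```python
import math

# One thread scans one shard, so query wall time is driven by the number
# of scan "waves" needed to get through all shards on a node.
def query_seconds(shards, cores, seconds_per_shard=1.0):
    return math.ceil(shards / cores) * seconds_per_shard

# 16 shards on a 15-core node: 15 shards in the first wave, 1 in the second.
assert query_seconds(16, 15) == 2.0
# A perfectly provisioned node finishes in a single wave.
assert query_seconds(15, 15) == 1.0
```

The 16th shard leaves 14 cores idle for a full second, which is exactly what drags down the rows/sec/core metric on overprovisioned clusters.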
+
+As we move on to include more aggregations we see performance degrade. This is
+because of the column-oriented storage format Druid employs. For the `count *`
+queries, it only has to check the timestamp column to satisfy the where clause.
+As we add metrics, it has to also load those metric values and scan over them,
+increasing the amount of memory scanned.  Next, we’ll do a top 100 query on our
+high cardinality dimension:
+
+
+    Select high_card_dimension, count(*) AS cnt from _table_ where timestamp >= ?
+    and timestamp < ? group by high_card_dimension order by cnt limit 100;
+
+    cluster                             cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory       10,241,183,745                  6,827,456
+    15-core,  75 nodes, mmap             4,891,097,559                  4,347,642
+    15-core,  50 nodes, mmap             3,616,707,511                  4,822,277
+    15-core,  25 nodes, mmap             1,665,053,263                  4,440,142
+    4-core,  131 nodes, in-memory        4,388,159,569                  8,374,350
+    4-core,  131 nodes, mmap             2,444,344,232                  4,664,779
+    4-core,   50 nodes, mmap             1,215,737,558                  6,078,688
+
+
+    Select high_card_dimension, count(*), sum(metric1) AS cnt from _table_
+    where timestamp >= ? and timestamp < ? group by high_card_dimension order by
+    cnt limit 100;
+
+    cluster                             cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory       7,309,984,688                   4,873,323
+    15-core,  75 nodes, mmap            3,333,628,777                   2,963,226
+    15-core,  50 nodes, mmap            2,555,300,237                   3,407,067
+    15-core,  25 nodes, mmap            1,384,674,717                   3,692,466
+    4-core,  131 nodes, in-memory       3,237,907,984                   6,179,214
+    4-core,  131 nodes, mmap            1,740,481,380                   3,321,529
+    4-core,   50 nodes, mmap              863,170,420                   4,315,852
+
+
+    Select high_card_dimension, count(*), sum(metric1), sum(metric2),
+    sum(metric3), sum(metric4) AS cnt from _table_ where timestamp >= ? and
+    timestamp < ? group by high_card_dimension order by cnt limit 100;
+    
+    cluster                             cluster scan rate (rows/sec)    core scan rate
+    15-core, 100 nodes, in-memory       4,064,424,274                   2,709,616
+    15-core,  75 nodes, mmap            2,014,067,386                   1,790,282
+    15-core,  50 nodes, mmap            1,499,452,617                   1,999,270
+    15-core,  25 nodes, mmap              810,143,518                   2,160,383
+    4-core,  131 nodes, in-memory       1,670,214,695                   3,187,433
+    4-core,  131 nodes, mmap            1,116,635,690                   2,130,984
+    4-core,   50 nodes, mmap              531,389,163                   2,656,946
+
+Here we see the superior performance of the in-memory representation when doing
+top lists versus when doing simple time-based aggregations.  This is an
+implementation detail, but it’s largely because of the differences in accessing
+simple in-memory pointers, versus scanning and seeking through a flattened data
+structure (even though it is already largely paged into memory).
+
+###IV) Conclusions
+
+Our conclusions are three-fold.  First, we demonstrate that it is possible to
+provide real-time, fully interactive exploration of 6TB of data on
+distributed, cloud-hosted commodity hardware.
+
+Second, we highlight the flexibility offered by the cloud.  Letting us stick to
+our core engineering competencies and having someone else deal with the
+overhead of running an actual data center is huge.  The fact that we were able
+to spin up 100 machines, run our benchmarks, kill 25, wait a bit, run
+benchmarks, kill another 25, wait a bit, run benchmarks, rinse and repeat was
+just awesome.
+
+Finally, designing an architecture that horizontally scales for performance
+opens up a set of knobs for trading cost against performance.  If we can
+tolerate response times of 10 seconds instead of 1 second, we can pay less
+for our processing.
+If we can tolerate response times of 1 minute, we pay even less.  Conversely,
+if we need answers in milliseconds, this is achievable at a higher price point.
+
+###V) Using Druid
+
+We currently offer Druid as a hosted service, but are exploring steps to open
+up the platform to a developer community.  If you would like to explore either
+using our hosted service or being part of a developer community, please drop us
+a note.
diff --git a/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md b/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md
new file mode 100644
index 0000000..7f30692
--- /dev/null
+++ b/_posts/2012-05-04-fast-cheap-and-98-right-cardinality-estimation-for-big-data.md
@@ -0,0 +1,128 @@
+---
+title: "Fast, Cheap, and 98% Right: Cardinality Estimation for Big Data"
+author: Fangjin Yang
+layout: post
+image: http://metamarkets.com/wp-content/uploads/2012/05/cardinality1.jpg
+---
+
+The nascent era of big data brings new challenges, which in turn require new
+tools and algorithms. At Metamarkets, one such challenge focuses on cardinality
+estimation: efficiently determining the number of distinct elements within a
+dimension of a large-scale data set. Cardinality estimations have a wide range
+of applications from monitoring network traffic to data mining. If leveraged
+correctly, these algorithms can also be used to provide insights into user
+engagement and growth, via metrics such as “daily active users.”
+
+### The HyperLogLog Algorithm:  Every Bit is Great
+
+It is well known that the cardinality of a large data set can be precisely
+calculated if the storage complexity is proportional to the number of elements
+in the data set. However, given the scale and complexity of some Druid data
+sets (with record counts routinely in the billions), the data ensemble is often
+far too large to be kept in core memory. Furthermore, because Druid data sets
+can be arbitrarily queried with varying time granularities and filter sets, we
+needed the ability to estimate dimension cardinalities on the fly across
+multiple granular buckets. To address our requirements, we opted to implement
+the [HyperLogLog](http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf)
+algorithm, originally described by Flajolet and colleagues in 2007. The
+HyperLogLog algorithm can estimate cardinalities well beyond 10^9 with a
+relative accuracy (standard error) of 2% while only using 1.5kb of memory.
+Other [companies](http://www.addthis.com/blog/2012/03/26/probabilistic-counting/#.T5nYl8SpnIZ) have
+also leveraged variations of this algorithm in their cardinality estimations.
+
+HyperLogLog takes advantage of the randomized distribution of bits from hashing
+functions in order to estimate how many things you would’ve needed to see in
+order to experience a specific phenomenon.  But as that sentence probably made
+little sense to any reader, let’s try a simple example to explain what it does.
+
+### An Example:  Making a Hash of Things
+
+First, there’s a fundamental mental model shift that is important to realize. 
+A hash function is generally understood as a function that maps a value from
+one (larger) space onto another (smaller) space.  In order to randomly hash on
+a computer system, which is binary at its core, you can view the input value as
+a series of bits. The hash function acts to contort the input value in some
+meaningful way such that an output value that is N bits long is produced. A
+good hash function should assure that the bits of the output value are
+independent and each have an equal probability (50%) of occurring.
+
+Given a random uniform distribution for likelihoods of N 0s and 1s, you can
+extract a probability distribution for the likelihood of a specific
+phenomenon.  The phenomenon we care about is the maximum index of a 1 bit. 
+Specifically, we expect the following to be true:
+
+50% of hashed values will look like this: 1xxxxxxx…x  
+25% of hashed values will look like this: 01xxxxxx…x  
+12.5% of hashed values will look like this: 001xxxxxxxx…x  
+6.25% of hashed values will look like this: 0001xxxxxxxx…x  
+…
+
+So, naively speaking, we expect that if we were to hash 8 unique things, one of
+them will start with 001.  If we were to hash 4 unique things, we would expect
+one to start with 01.  This expectation can also be inverted: if the “highest”
+index of a 1 is 2 (we start counting with index 1 as the leftmost bit
+location), then we probably saw ~4 unique values.  If the highest index is
+4, we probably saw ~16 unique values.  This level of approximation is pretty
+coarse and it is pretty easy to see that it is only approximate at best, but it
+is the basic idea behind HyperLogLog.
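That basic idea can be tried out in a few lines (a toy illustration using MD5 purely as a convenient hash, not a production estimator): hash each item, find the 1-based index of the first 1 bit from the left, and invert the expectation.

```python
import hashlib

def first_one_index(value, bits=32):
    # Take the top 32 bits of an MD5 hash and scan from the most
    # significant bit for the first 1.
    h = int.from_bytes(hashlib.md5(value.encode()).digest()[:4], "big")
    for i in range(1, bits + 1):
        if h & (1 << (bits - i)):
            return i
    return bits

items = ["user-%d" % i for i in range(1000)]
r = max(first_one_index(x) for x in items)
estimate = 2 ** r  # max index r suggests roughly 2**r distinct items
```

A single register like this is very coarse (right order of magnitude at best), which is exactly the weakness the bucketing described next addresses.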
+
+### Buckets and Bits:  Tuning Precision and Scale
+
+The adjustment HyperLogLog makes is that it essentially takes the above
+algorithm and introduces multiple “buckets”.  That is, you can take the first k
+bits of the hashed value and use that as a bucket index, then you keep track of
+the max(index of 1) for the remaining bits in that bucket.  The authors then
+provide some math for converting the values in all of the buckets back into an
+approximate cardinality.
+
+Another interesting thing about this algorithm is that it introduces two
+parameters to adjust the accuracy of the approximation:
+
+* Increasing the number of buckets (2^k, by taking more index bits k) increases the accuracy of the approximation
+* Increasing the number of bits of your hash increases the highest possible number you can accurately approximate
+
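A hedged sketch of the bucketing step (the harmonic-mean correction from the paper is omitted for brevity, so no cardinality number is produced here; parameter choices are invented for illustration):

```python
import hashlib

K = 6                      # first k bits choose one of 2**k buckets
BUCKETS = 2 ** K

def hash64(value):
    # Use the top 64 bits of an MD5 hash as a stand-in hash function.
    return int.from_bytes(hashlib.md5(value.encode()).digest()[:8], "big")

def add(registers, value):
    h = hash64(value)
    bucket = h >> (64 - K)               # top k bits pick the bucket
    rest = h & ((1 << (64 - K)) - 1)     # remaining 58 bits feed the estimator
    # 1-based index of the first 1 bit (from the left) in the 58-bit field
    index = (64 - K) - rest.bit_length() + 1 if rest else (64 - K)
    registers[bucket] = max(registers[bucket], index)

registers = [0] * BUCKETS
for i in range(50000):
    add(registers, "item-%d" % i)

# At this cardinality every bucket has seen at least one item.
assert all(r > 0 for r in registers)
```

Each bucket behaves like the single-register estimator above, and the paper's math combines the 2^k register values into one estimate with the stated ~2% error.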
+
+### Now, Do it in Parallel
+
+So how exactly is all of this useful?  When working with large data sets, it is
+common to maintain a summarization of the data set inside of a data warehouse
+and run analytical queries against that summarization.  Often, including
+information like user ids, user cookies or IP addresses (things that are used
+to compute unique users) in these summarizations forces a tradeoff between the
+potential reduction of data volume gained by summarizing and the ability to
+compute cardinalities.  We wanted to be able to take advantage of the space
+savings and row reduction of summarization while still being able to compute
+cardinalities:  this is where HyperLogLog comes in.
+
+In [Druid](http://druid.io/), our summarization process applies the hash
+function ([Murmur 128](http://sites.google.com/site/murmurhash/)) and computes
+the intermediate HyperLogLog format (i.e. the list of buckets of
+`max(index of 1)`) and stores that in a column.  Thus, for every row in our
+summarized dataset, we have a HyperLogLog “sketch” of the unique users that
+were seen in the original event rows comprising that summarized line.  These
+sketches are combinable in an additive/commutative way, just like sum, max, and
+min.  In other words, this intermediate format fits in perfectly with the
+hierarchical scatter/gather query distribution and processing paradigm employed
+by Druid, allowing us to provide granular time-series and top lists of unique
+users, with the full arbitrary slicing and dicing power of Druid.
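The reason the sketches slot into scatter/gather so cleanly can be shown in miniature (register values invented for illustration): merging two sketches is just an elementwise max over their registers, which is commutative and associative.

```python
def merge(sketch_a, sketch_b):
    # Elementwise max: the merged sketch is what a single node would have
    # computed had it seen both input streams.
    return [max(a, b) for a, b in zip(sketch_a, sketch_b)]

s1 = [3, 0, 5, 2]           # registers from one summarized row / one node
s2 = [1, 4, 2, 2]
s3 = [0, 1, 6, 1]

# Order and grouping don't matter, so partial merges can happen anywhere
# in the query tree.
left_first = merge(merge(s1, s2), s3)
right_first = merge(s1, merge(s2, s3))
assert left_first == right_first == [3, 4, 6, 2]
```

This is the same algebraic property that sum, max, and min enjoy, which is why the query layer can treat them uniformly.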
+
+We don’t just end there though.  We also further optimize the storage format of
+the intermediate data structure depending on whether the set of buckets is
+sparse or dense. Stored densely, the data structure is just n buckets of 1 byte
+apiece (an array of n bytes; the max(index of 1) value in a bucket is always
+less than 256, so it can be represented in one byte).  However, in the sparse
+case, we only need to store buckets with valid index values in them.  This
+means that instead of storing n buckets of 1 byte apiece, we can just store
+the (index, value) pairs.
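A rough sizing sketch of that dense-vs-sparse choice (the 3-byte pair width is an assumption for illustration; the real pair encoding is an implementation detail):

```python
def best_encoding(registers, pair_bytes=3):
    # Dense: one byte per bucket. Sparse: one (index, value) pair
    # per non-zero bucket.
    dense_bytes = len(registers)
    nonzero = sum(1 for r in registers if r)
    sparse_bytes = nonzero * pair_bytes
    return "sparse" if sparse_bytes < dense_bytes else "dense"

# A sketch over a low-cardinality row touches few buckets: store it sparsely.
mostly_empty = [0] * 2048
mostly_empty[17] = 5
mostly_empty[900] = 2
assert best_encoding(mostly_empty) == "sparse"

# A sketch over many uniques fills most buckets: dense wins.
assert best_encoding([4] * 2048) == "dense"
```

Since each summarized row carries its own sketch, rows with few uniques pay almost nothing in storage.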
+
+### We Are the 99% (ok, the 98.5%)
+
+Given our implementation of the algorithm, the theoretical average amount of
+error is 1.5% (i.e. the values will be off by an average of 1.5%). The graph
+below shows the benchmark results for a loop that ran from 0 to
+`Integer.MAX_VALUE` and added the result of a `Random.nextLong()` to the
+HyperLogLog.  For this particular benchmark, the average error rate was found
+to be 1.202526%.
+
+![](/assets/hll-cardinality-error.png)
+
+#### Looking for more Druid information? [Learn more about our core technology.](http://metamarkets.com/product/technology/)
diff --git a/_posts/2012-09-21-druid-bitmap-compression.md b/_posts/2012-09-21-druid-bitmap-compression.md
new file mode 100644
index 0000000..4e54511
--- /dev/null
+++ b/_posts/2012-09-21-druid-bitmap-compression.md
@@ -0,0 +1,1204 @@
+---
+published: true
+title: "Maximum Performance with Minimum Storage: Data Compression in Druid"
+layout: post
+author: Fangjin Yang
+image: "http://metamarkets.com/wp-content/uploads/2012/09/Computer_Chip-470x3401-470x288.jpeg"
+tags: "algorithms, druid, technology"
+---
+
+The Metamarkets solution allows for arbitrary exploration of massive data sets. Powered by Druid, our in-house distributed data store and processor, users can filter time series and top list queries based on Boolean expressions of dimension values. Given that some of our dataset dimensions contain millions of unique values, the subset of things that may match a particular filter expression may be quite large. To design for these challenges, we needed a fast and accurate (not a fast and a [...]
+
+##From Justin Bieber to Ones and Zeros
+
+To better understand how Druid stores dimension values, consider the following data set:
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="104">Timestamp</td>
+<td valign="top" width="68">Publisher</td>
+<td valign="top" width="52">Advertiser</td>
+<td valign="top" width="49">Gender</td>
+<td valign="top" width="50">Country</td>
+<td valign="top" width="52">Impressions</td>
+<td valign="top" width="48">Clicks</td>
+<td valign="top" width="50">Revenue</td>
+</tr>
+<tr>
+<td valign="top" width="104">
+<pre>2011-01-01T01:00:00Z</pre>
+</td>
+<td valign="top" width="68">
+<pre>bieberfever.com</pre>
+</td>
+<td valign="top" width="52">
+<pre>google.com</pre>
+</td>
+<td valign="top" width="49">
+<pre>Male</pre>
+</td>
+<td valign="top" width="50">
+<pre>USA</pre>
+</td>
+<td valign="top" width="52">
+<pre>1800</pre>
+</td>
+<td valign="top" width="48">
+<pre>25</pre>
+</td>
+<td valign="top" width="50">
+<pre>15.70</pre>
+</td>
+</tr>
+<tr>
+<td valign="top" width="104">
+<pre>2011-01-01T01:00:00Z</pre>
+</td>
+<td valign="top" width="68">
+<pre>bieberfever.com</pre>
+</td>
+<td valign="top" width="52">
+<pre>google.com</pre>
+</td>
+<td valign="top" width="49">
+<pre>Male</pre>
+</td>
+<td valign="top" width="50">
+<pre>USA</pre>
+</td>
+<td valign="top" width="52">
+<pre>2912</pre>
+</td>
+<td valign="top" width="48">
+<pre>42</pre>
+</td>
+<td valign="top" width="50">
+<pre>29.18</pre>
+</td>
+</tr>
+<tr>
+<td valign="top" width="104">
+<pre>2011-01-01T02:00:00Z</pre>
+</td>
+<td valign="top" width="68">
+<pre>ultratrimfast.com</pre>
+</td>
+<td valign="top" width="52">
+<pre>google.com</pre>
+</td>
+<td valign="top" width="49">
+<pre>Male</pre>
+</td>
+<td valign="top" width="50">
+<pre>USA</pre>
+</td>
+<td valign="top" width="52">
+<pre>1953</pre>
+</td>
+<td valign="top" width="48">
+<pre>17</pre>
+</td>
+<td valign="top" width="50">
+<pre>17.31</pre>
+</td>
+</tr>
+<tr>
+<td valign="top" width="104">
+<pre>2011-01-01T02:00:00Z</pre>
+</td>
+<td valign="top" width="68">
+<pre>ultratrimfast.com</pre>
+</td>
+<td valign="top" width="52">
+<pre>google.com</pre>
+</td>
+<td valign="top" width="49">
+<pre>Male</pre>
+</td>
+<td valign="top" width="50">
+<pre>USA</pre>
+</td>
+<td valign="top" width="52">
+<pre>3194</pre>
+</td>
+<td valign="top" width="48">
+<pre>170</pre>
+</td>
+<td valign="top" width="50">
+<pre>34.01</pre>
+</td>
+</tr>
+</tbody>
+</table>
+
+Consider the publisher dimension (column) in the table above. For each unique publisher, we can form some representation indicating in which table rows a particular publisher is seen. We can store this information in a binary array where the array indices represent our rows. If a particular publisher is seen in a certain row, that array index is marked as ‘1’. For example:
+
+Bieberfever.com -> `[1, 2]` -> `[1][1][0][0]`
+
+Ultratrimfast.com -> `[3, 4]` -> `[0][0][1][1]`
+
+In the example above bieberfever.com is seen in rows 1 and 2. This mapping of dimension values to row indices forms an [inverted index](http://en.wikipedia.org/wiki/Inverted_index) and is in fact how we store dimension information in Druid. If we want to know which rows contain bieberfever.com OR ultratrimfast.com, we can OR together the bieberfever.com and ultratrimfast.com arrays.
+
+`[1][1][0][0] OR [0][0][1][1] = [1][1][1][1]`
+
+This idea forms the basis of how to perform Boolean operations on large bitmap sets. A challenge still remains in that if each array consisted of millions or billions of entries and if we had to OR together millions of such arrays, performance can potentially become a major issue. Thankfully for us, most bitmap indices are either very sparse or very dense, which is something that can be leveraged for compression.
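A minimal sketch of these Boolean operations, using Python integers as uncompressed bitmaps (bit i set means the value appears in row i; this deliberately ignores compression, which the rest of this post is about):

```python
def bitmap(row_ids):
    # Rows are 1-indexed in the example above.
    bits = 0
    for r in row_ids:
        bits |= 1 << (r - 1)
    return bits

bieberfever = bitmap([1, 2])       # rows 1 and 2
ultratrimfast = bitmap([3, 4])     # rows 3 and 4

either = bieberfever | ultratrimfast   # OR filter
both = bieberfever & ultratrimfast     # AND filter
assert either == 0b1111            # all four rows match the OR
assert both == 0                   # no row matches the AND
```

With billions of rows, these raw bitmaps become enormous, which motivates the compressed representations discussed next.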
+
+Bit arrays, or bitmaps, are frequently employed in areas such as data warehousing and data mining to significantly reduce storage costs. Bitmap compression algorithms are a well-defined area of research and often utilize run-length encoding. Well known algorithms include Byte-aligned Bitmap Code, Word-Aligned Hybrid (WAH) code, and Partitioned Word-Aligned Hybrid (PWAH) compression.
+
+##A Concise Solution
+
+Most word-aligned run-length encoding algorithms represent long sequences of ones and zeros in a single word. The word contains the length of the sequence and some information about whether it is a one fill or a zero fill. Sequences that contain a mixture of 0 and 1 bits are stored in 32 bit blocks known as literals. An example of word-aligned hybrid compression is shown below:
+
+Given a bitstream: `[10110...1][000...010][010...011]`
+
+There are three separate 32 bit sequences in the bitstream.
+
+1. `[1]0110...1` - 31 "dirty" bits (a literal)
+
+2. `[00]0...010` - 31 x 2 zeros (a sequence of zeros)
+
+3. `[01]0...011` - 31 x 3 ones (a sequence of ones)
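A hedged sketch of decoding such a word, following the 32-bit convention used in this post (real implementations vary in their exact bit layouts): a leading 1 marks a literal of 31 dirty bits, while a leading 00/01 marks a run of zeros/ones whose low 30 bits give the run length in 31-bit blocks.

```python
def decode_word(word):
    if word >> 31:                       # top bit set: a literal
        return ("literal", word & 0x7FFFFFFF)
    fill_bit = (word >> 30) & 1          # second bit: 0-fill or 1-fill
    run_blocks = word & 0x3FFFFFFF       # run length, in 31-bit blocks
    return ("ones" if fill_bit else "zeros", 31 * run_blocks)

assert decode_word((0b00 << 30) | 2) == ("zeros", 62)   # 31 x 2 zeros
assert decode_word((0b01 << 30) | 3) == ("ones", 93)    # 31 x 3 ones
assert decode_word((1 << 31) | 0b1011)[0] == "literal"
```

The payoff is that long runs of either fill collapse to a single word, which is why sparse and dense bitmaps both compress so well.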
+
+[Concise](http://ricerca.mat.uniroma3.it/users/colanton/docs/concise.pdf) bitmap compression introduces the concept of a mixed fill, where fills and literals can be represented in a single word. The author of the original Concise paper claims that Concise outperforms WAH by reducing the size of the compressed bitmaps by up to 50%. For mixed fill sequences, the first 2 bits indicate the type of fill (0 or 1). The next 5 bits can be used to indicate the position where bits flip from 0 to 1 [...]
+
+1. `[1]0...101000`
+
+2. `[01][00000]0...01`
+
+3. `[00][00001]0...11101`
+
+4. `[1]0...100010`
+
+5. `[00][00000]1...1011101`
+
+6. `[1]10...0`
+
+##Efficiency at Scale
+
+Although Concise compression can greatly reduce the size of resulting bitmaps, we still have the problem of performing efficient Boolean operations on top of a large number of Concise sets. Luckily, Concise sets share a very important property with other bitmap compression schemes: they can be operated on in their compressed form.  The Boolean operations we care about are AND, OR, and NOT. The NOT operation is the most straightforward to implement. Literals are directly complemented and  [...]
+
+Consider ORing two sets where one set is a long sequence of ones and the other set contains a shorter sequence of ones, a sequence of zeros, and some literals. If the sequence of ones in the first set is sufficiently long enough to encompass the second set, we don’t need to care about the second set at all (yay for Boolean logic!). Hence, when ORing sets, sequences of ones always have priority over sequences of zeros and literals. Similarly, a sequence of zeros can be ignored; the sequen [...]
+
+##Results
+
+The following results were generated on a cc2.8xlarge system with a single thread, 2G heap, 512m young gen, and a forced GC between each run. The data set is a single day's worth of data collected from the [Twitter garden hose](https://dev.twitter.com/docs/streaming-apis/streams/public) data stream. The data set contains 2,272,295 rows. The table below demonstrates a size comparison between Concise compressed sets and regular integer arrays for different dimensions.
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="106">Dimension</td>
+<td valign="top" width="93">Cardinality</td>
+<td valign="top" width="94">Concise compressed size (bytes)</td>
+<td valign="top" width="92">Integer array size (bytes)</td>
+<td valign="top" width="94">Concise size as a % of integer array size</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_mention</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">586,400</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">6.451627</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_links</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">580,872</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">6.390808</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_geo</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">144,004</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">1.584345</td>
+</tr>
+<tr>
+<td valign="top" width="106">Is_retweet</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">584,592</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">6.431735</td>
+</tr>
+<tr>
+<td valign="top" width="106">Is_viral</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">358,380</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">3.942930</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_lang</td>
+<td valign="top" width="93">21</td>
+<td valign="top" width="94">1,414,000</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">15.556959</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_time_zone</td>
+<td valign="top" width="93">142</td>
+<td valign="top" width="94">3,876,244</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">42.646795</td>
+</tr>
+<tr>
+<td valign="top" width="106">URL_domain</td>
+<td valign="top" width="93">31,165</td>
+<td valign="top" width="94">1,562,428</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">17.189978</td>
+</tr>
+<tr>
+<td valign="top" width="106">First_hashtag</td>
+<td valign="top" width="93">100,728</td>
+<td valign="top" width="94">1,837,144</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">20.212428</td>
+</tr>
+<tr>
+<td valign="top" width="106">Rt_name</td>
+<td valign="top" width="93">182,704</td>
+<td valign="top" width="94">2,235,288</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">24.592846</td>
+</tr>
+<tr>
+<td valign="top" width="106">Reply_to_name</td>
+<td valign="top" width="93">620,421</td>
+<td valign="top" width="94">5,673,504</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">62.420416</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_location</td>
+<td valign="top" width="93">637,774</td>
+<td valign="top" width="94">9,511,844</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">104.650188</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_mention_name</td>
+<td valign="top" width="93">923,842</td>
+<td valign="top" width="94">9,086,416</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">99.969590</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_name</td>
+<td valign="top" width="93">1,784,369</td>
+<td valign="top" width="94">16,000,028</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">176.033790</td>
+</tr>
+</tbody>
+</table>
+
+Total Concise compressed size = 53,451,144 bytes
+
+Total integer array size = 127,248,520 bytes
+
+Overall, the Concise compressed sets occupy about 42.01% of the space of the integer arrays.
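For quick verification, that overall percentage follows directly from the two totals:

```python
# ratio of total Concise size to total integer-array size, from the totals above
concise_total = 53_451_144
int_array_total = 127_248_520
print(f"{100 * concise_total / int_array_total:.2f}%")  # → 42.01%
```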
+
+We also re-sorted the rows of the data set to maximize compression and measured how the results were affected.
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="106">Dimension</td>
+<td valign="top" width="93">Cardinality</td>
+<td valign="top" width="94">Concise compressed size (bytes)</td>
+<td valign="top" width="92">Integer array size (bytes)</td>
+<td valign="top" width="94">Concise size as a % of integer array size</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_mention</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">744</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.008186</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_links</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">1,504</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.016547</td>
+</tr>
+<tr>
+<td valign="top" width="106">Has_geo</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">2,840</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.031246</td>
+</tr>
+<tr>
+<td valign="top" width="106">Is_retweet</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">1,616</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.017779</td>
+</tr>
+<tr>
+<td valign="top" width="106">Is_viral</td>
+<td valign="top" width="93">2</td>
+<td valign="top" width="94">1,488</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.016371</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_lang</td>
+<td valign="top" width="93">21</td>
+<td valign="top" width="94">38,416</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">0.422656</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_time_zone</td>
+<td valign="top" width="93">142</td>
+<td valign="top" width="94">319,644</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">3.516753</td>
+</tr>
+<tr>
+<td valign="top" width="106">URL_domain</td>
+<td valign="top" width="93">31,165</td>
+<td valign="top" width="94">700,752</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">7.709738</td>
+</tr>
+<tr>
+<td valign="top" width="106">First_hashtag</td>
+<td valign="top" width="93">100,728</td>
+<td valign="top" width="94">1,505,292</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">16.561362</td>
+</tr>
+<tr>
+<td valign="top" width="106">Rt_name</td>
+<td valign="top" width="93">182,704</td>
+<td valign="top" width="94">1,874,180</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">20.619902</td>
+</tr>
+<tr>
+<td valign="top" width="106">Reply_to_name</td>
+<td valign="top" width="93">620,421</td>
+<td valign="top" width="94">5,404,108</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">59.456497</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_location</td>
+<td valign="top" width="93">637,774</td>
+<td valign="top" width="94">9,091,016</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">100.075340</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_mention_name</td>
+<td valign="top" width="93">923,842</td>
+<td valign="top" width="94">8,686,384</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">95.568401</td>
+</tr>
+<tr>
+<td valign="top" width="106">User_name</td>
+<td valign="top" width="93">1,784,369</td>
+<td valign="top" width="94">16,204,900</td>
+<td valign="top" width="92">9,089,180</td>
+<td valign="top" width="94">178.287810</td>
+</tr>
+</tbody>
+</table>
+
+Total Concise compressed size = 43,832,884 bytes
+
+Total integer array size = 127,248,520 bytes
+
+Interestingly, re-sorting improved the overall compression only modestly: the total Concise set size dropped to 34.448031% of the total integer array size.
+
+To understand the performance implications of using Concise sets versus integer arrays, we chose several dimensions from our data set with varying cardinalities and generated Concise sets for every dimension value of every selected dimension. The histograms below indicate the size distribution of the generated Concise sets for a given dimension. Each test run randomly picked a given number of Concise sets and performed Boolean operations with them.  Integer array representations of thes [...]
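The benchmark procedure can be outlined as follows. This is a hedged sketch only: the set contents, trial count, and timing granularity are placeholders, and plain Python `frozenset`s stand in for the actual Concise and integer-array implementations.

```python
# Hedged sketch of the benchmark loop: randomly pick n sets for a dimension,
# then time a chained OR over them. Names and parameters are assumptions.
import random
import time

def benchmark_or(sets, n, trials=10):
    """Time a chained OR over n randomly chosen sets; return the best run (seconds)."""
    best = float("inf")
    for _ in range(trials):
        chosen = random.sample(sets, n)
        start = time.perf_counter()
        acc = chosen[0]
        for s in chosen[1:]:
            acc = acc | s             # stand-in for an OR on the compressed form
        best = min(best, time.perf_counter() - start)
    return best

# toy postings lists: one set of row ids per dimension value
sets = [frozenset(random.sample(range(100_000), 500)) for _ in range(100)]
elapsed = benchmark_or(sets, 10)
```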
+
+### Dimension: User_time_zone
+
+Cardinality: 142
+
+![user_time_zone](http://metamarkets.com/wp-content/uploads/2012/09/user_time_zone1-1024x768.png)
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">31</td>
+<td valign="top" width="96">20</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">66</td>
+<td valign="top" width="96">53</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">159</td>
+<td valign="top" width="96">153</td>
+<td valign="top" width="96">4</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">339</td>
+<td valign="top" width="96">322</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">0</td>
+</tr>
+</tbody>
+</table>
+
+Always including the largest Concise set of the dimension:
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">44</td>
+<td valign="top" width="96">77</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">92</td>
+<td valign="top" width="96">141</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">184</td>
+<td valign="top" width="96">223</td>
+<td valign="top" width="96">4</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">398</td>
+<td valign="top" width="96">419</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">0</td>
+</tr>
+</tbody>
+</table>
+
+### Dimension: URL_domain
+
+Cardinality: 31,165
+
+![url_domain](http://metamarkets.com/wp-content/uploads/2012/09/url_domain-1024x768.png)
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">24</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">54</td>
+<td valign="top" width="96">132</td>
+<td valign="top" width="96">3</td>
+<td valign="top" width="96">57</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">111</td>
+<td valign="top" width="96">286</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">284</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">348</td>
+<td valign="top" width="96">779</td>
+<td valign="top" width="96">22</td>
+<td valign="top" width="96">1,925</td>
+</tr>
+</tbody>
+</table>
+
+Always including the largest Concise set of the dimension:
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">14</td>
+<td valign="top" width="96">172</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">17</td>
+<td valign="top" width="96">242</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">19</td>
+<td valign="top" width="96">298</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">22</td>
+<td valign="top" width="96">356</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">35</td>
+<td valign="top" width="96">569</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">89</td>
+<td valign="top" width="96">865</td>
+<td valign="top" width="96">4</td>
+<td valign="top" width="96">59</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">158</td>
+<td valign="top" width="96">1,050</td>
+<td valign="top" width="96">9</td>
+<td valign="top" width="96">289</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">382</td>
+<td valign="top" width="96">1,618</td>
+<td valign="top" width="96">21</td>
+<td valign="top" width="96">1,949</td>
+</tr>
+</tbody>
+</table>
+
+### Dimension: RT_name
+
+Cardinality: 182,704
+
+![rt_name](http://metamarkets.com/wp-content/uploads/2012/09/rt_name1-1024x768.png)
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">11</td>
+<td valign="top" width="96">31</td>
+<td valign="top" width="96">3</td>
+<td valign="top" width="96">57</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">68</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">284</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">98</td>
+<td valign="top" width="96">118</td>
+<td valign="top" width="96">20</td>
+<td valign="top" width="96">1,925</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">224</td>
+<td valign="top" width="96">292</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">100,000</td>
+<td valign="top" width="96">521</td>
+<td valign="top" width="96">727</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
+
+**Note:** for AND operations on 50,000+ items, our implementation of the array-based approach produced StackOverflow exceptions. Rather than rework the implementation, we simply skipped comparisons beyond that point.
+
+Always including the largest Concise set of the dimension:
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">14</td>
+<td valign="top" width="96">168</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">16</td>
+<td valign="top" width="96">236</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">18</td>
+<td valign="top" width="96">289</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">20</td>
+<td valign="top" width="96">348</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">29</td>
+<td valign="top" width="96">551</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">44</td>
+<td valign="top" width="96">712</td>
+<td valign="top" width="96">4</td>
+<td valign="top" width="96">59</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">69</td>
+<td valign="top" width="96">817</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">289</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">161</td>
+<td valign="top" width="96">986</td>
+<td valign="top" width="96">20</td>
+<td valign="top" width="96">1,949</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">303</td>
+<td valign="top" width="96">1,182</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
+
+### Dimension: User_location
+
+Cardinality: 637,774
+
+![user_location](http://metamarkets.com/wp-content/uploads/2012/09/user_location-1024x768.png)
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">15</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">3</td>
+<td valign="top" width="96">57</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">34</td>
+<td valign="top" width="96">16</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">284</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">138</td>
+<td valign="top" width="96">54</td>
+<td valign="top" width="96">21</td>
+<td valign="top" width="96">1,927</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">298</td>
+<td valign="top" width="96">128</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">100,000</td>
+<td valign="top" width="96">650</td>
+<td valign="top" width="96">271</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">250,000</td>
+<td valign="top" width="96">1,695</td>
+<td valign="top" width="96">881</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">500,000</td>
+<td valign="top" width="96">3,433</td>
+<td valign="top" width="96">2,311</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
+
+Always including the largest Concise set of the dimension:
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">14</td>
+<td valign="top" width="96">47</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">16</td>
+<td valign="top" width="96">67</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">18</td>
+<td valign="top" width="96">80</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">20</td>
+<td valign="top" width="96">97</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">29</td>
+<td valign="top" width="96">153</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">48</td>
+<td valign="top" width="96">206</td>
+<td valign="top" width="96">4</td>
+<td valign="top" width="96">59</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">81</td>
+<td valign="top" width="96">233</td>
+<td valign="top" width="96">9</td>
+<td valign="top" width="96">290</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">190</td>
+<td valign="top" width="96">294</td>
+<td valign="top" width="96">21</td>
+<td valign="top" width="96">1,958</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">359</td>
+<td valign="top" width="96">378</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
+
+### Dimension: User_name
+
+Cardinality: 1,784,369
+
+![user_name](http://metamarkets.com/wp-content/uploads/2012/09/user_name-1024x768.png)
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">3</td>
+<td valign="top" width="96">57</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">17</td>
+<td valign="top" width="96">6</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">283</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">74</td>
+<td valign="top" width="96">19</td>
+<td valign="top" width="96">21</td>
+<td valign="top" width="96">1,928</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">177</td>
+<td valign="top" width="96">45</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">100,000</td>
+<td valign="top" width="96">440</td>
+<td valign="top" width="96">108</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">250,000</td>
+<td valign="top" width="96">1,225</td>
+<td valign="top" width="96">379</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">500,000</td>
+<td valign="top" width="96">2,504</td>
+<td valign="top" width="96">978</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000,000</td>
+<td valign="top" width="96">5,076</td>
+<td valign="top" width="96">2,460</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,250,000</td>
+<td valign="top" width="96">6,331</td>
+<td valign="top" width="96">3,265</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,500,000</td>
+<td valign="top" width="96">7,622</td>
+<td valign="top" width="96">4,036</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,750,000</td>
+<td valign="top" width="96">8,911</td>
+<td valign="top" width="96">4,982</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
+
+Always including the largest Concise set of the dimension:
+
+<table border="1" cellspacing="0" cellpadding="5px">
+<tbody>
+<tr>
+<td valign="top" width="96">Number of filter elements</td>
+<td valign="top" width="96">OR operation with Concise set (ms)</td>
+<td valign="top" width="96">OR operation with integer arrays (ms)</td>
+<td valign="top" width="96">AND operation with Concise set (ms)</td>
+<td valign="top" width="96">AND operation with integer arrays (ms)</td>
+</tr>
+<tr>
+<td valign="top" width="96">10</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">25</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">50</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">100</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+</tr>
+<tr>
+<td valign="top" width="96">1,000</td>
+<td valign="top" width="96">1</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">0</td>
+<td valign="top" width="96">1</td>
+</tr>
+<tr>
+<td valign="top" width="96">5,000</td>
+<td valign="top" width="96">8</td>
+<td valign="top" width="96">2</td>
+<td valign="top" width="96">3</td>
+<td valign="top" width="96">59</td>
+</tr>
+<tr>
+<td valign="top" width="96">10,000</td>
+<td valign="top" width="96">22</td>
+<td valign="top" width="96">6</td>
+<td valign="top" width="96">7</td>
+<td valign="top" width="96">289</td>
+</tr>
+<tr>
+<td valign="top" width="96">25,000</td>
+<td valign="top" width="96">77</td>
+<td valign="top" width="96">19</td>
+<td valign="top" width="96">21</td>
+<td valign="top" width="96">1,954</td>
+</tr>
+<tr>
+<td valign="top" width="96">50,000</td>
+<td valign="top" width="96">196</td>
+<td valign="top" width="96">45</td>
+<td valign="top" width="96">-</td>
+<td valign="top" width="96">-</td>
+</tr>
+</tbody>
+</table>
\ No newline at end of file
diff --git a/_posts/2012-10-24-beyond-hadoop-fast-ad-hoc-queries-on-big-data.md b/_posts/2012-10-24-beyond-hadoop-fast-ad-hoc-queries-on-big-data.md
new file mode 100644
index 0000000..a99c89a
--- /dev/null
+++ b/_posts/2012-10-24-beyond-hadoop-fast-ad-hoc-queries-on-big-data.md
@@ -0,0 +1,9 @@
+---
+title: "Beyond Hadoop: Fast Ad-Hoc Queries on Big Data (Video)"
+layout: post
+author: Eric Tschetter
+---
+
+Eric Tschetter (lead architect of Druid)
+
+<iframe width="640" height="360" src="//www.youtube.com/embed/eCbXoGSyHbg?rel=0" frameborder="0" allowfullscreen=""></iframe>
diff --git a/_posts/2012-10-24-introducing-druid.md b/_posts/2012-10-24-introducing-druid.md
new file mode 100644
index 0000000..4098f26
--- /dev/null
+++ b/_posts/2012-10-24-introducing-druid.md
@@ -0,0 +1,104 @@
+---
+title: Introducing Druid
+layout: post
+author: Eric Tschetter
+image: http://metamarkets.com/wp-content/uploads/2012/10/Druid.jpg
+---
+
+In [April 2011](http://metamarkets.com/2011/druid-part-i-real-time-analytics-at-a-billion-rows-per-second/),
+we introduced Druid, our distributed, real-time data store.  Today I am
+extremely proud to announce that we are releasing the Druid data store to the
+community as an open source project. To mark this special occasion, I wanted to
+recap why we built Druid, and why we believe there is broader utility for Druid
+beyond [Metamarkets' analytical SaaS offering](http://metamarkets.com/product).
+
+When we started to build Metamarkets’ analytics solution, we tried several
+commercially available data stores. Our requirements were driven by our online
+advertising customers who have data volumes often upwards of hundreds of
+billions of events per month, and need highly interactive queries on the latest
+data as well as an ability to arbitrarily filter across any dimension – with
+data sets that contain 30 dimensions or more.  For example, a typical query
+might be “find me how many advertisements were seen by female executives, aged
+35 to 44, from the US, UK, and Canada, reading sports blogs on weekends.”
+
+First, we went the traditional database route. Companies have successfully used
+data warehouses to manage the cost and overhead of querying historical data,
+and the architecture aligned with our goals of data aggregation and drill down.
+For our data volumes (reaching billions of records), we found that the scan
+rates were not fast enough to support our interactive dashboard, and caching
+could not be used to reliably speed up queries due to the arbitrary drill-downs
+we need to support. In addition, because RDBMS data updates are inherently
+batch, updates made via inserts lead to locking of rows for queries.
+
+Next, we investigated a NoSQL architecture. To support our use case of allowing
+users to drill down on arbitrary dimensions, we pre-computed dimensional
+aggregations and wrote them into a NoSQL key-value store.  While this approach
+provided fast query times, pre-aggregations required hours of processing time
+for just millions of records (on a ~10-node Hadoop cluster).  More
+problematically, as the number of dimensions increased, the aggregation and
+processing time increased exponentially, exceeding 24 hours.  Beyond its cost,
+this time created an unacceptably high latency between when events occurred and
+when they were available for querying – negating any possibility of supporting
+our customers’ desire for real-time visibility into their data.
+
+We thus decided to build Druid, to better meet the needs of analytics workloads
+requiring fast, real-time access to data at scale.
+
+Druid’s key features are:
+
+- **Distributed architecture.** Swappable read-only data segments using an MVCC
+swapping protocol. Per-segment replication relieves load on hot segments.
+Supports both in-memory and memory-mapped versions.
+
+- **Real-time ingestion.** Real-time ingestion coupled with broker servers to
+query across real-time and historical data. Automated migration of real-time to
+historical as it ages.
+
+- **Column-oriented for speed.**  Data is laid out in columns so that scans are
+limited to specific data being searched. Compression decreases overall data
+footprint.
+
+- **Fast filtering.** Bitmap indices with CONCISE compression.
+
+- **Operational simplicity.** Fault tolerant due to replication. Supports
+rolling deployments and restarts. Allows simple scale up and scale down – just
+add or remove nodes.
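
The fast-filtering point is worth unpacking. Below is a minimal sketch of the idea behind bitmap-index filtering, using plain Python sets in place of compressed bitmaps (the rows and dimension names are invented for illustration; Druid additionally compresses its bitmaps with CONCISE):

```python
# Illustrative sketch only: a bitmap index maps each (dimension, value)
# pair to the set of row ids containing it, so an arbitrary Boolean filter
# becomes cheap set algebra instead of a full row scan.
rows = [
    {"gender": "female", "country": "US"},
    {"gender": "male",   "country": "UK"},
    {"gender": "female", "country": "UK"},
    {"gender": "female", "country": "DE"},
]

# Build one "bitmap" (here, a set of row ids) per dimension value.
index = {}
for row_id, row in enumerate(rows):
    for dim, value in row.items():
        index.setdefault((dim, value), set()).add(row_id)

# Filter "gender = female AND (country = US OR country = UK)":
# intersect the gender bitmap with the union of the two country bitmaps.
matches = index[("gender", "female")] & (
    index[("country", "US")] | index[("country", "UK")]
)
print(sorted(matches))  # prints [0, 2]
```

Because the bitmaps are immutable alongside the read-only segments, intersections and unions parallelize cleanly across segments.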
+
+From a query perspective, Druid supports arbitrary Boolean filters as well as
+Group By, time series roll-ups, aggregation functions and regular expression
+searches.
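
Druid queries are JSON documents posted to the cluster. As a rough illustration of the query types above, a groupBy query combining a Boolean filter with an aggregation might look like the following (the `impressions` datasource and its columns are invented for this example):

```json
{
  "queryType": "groupBy",
  "dataSource": "impressions",
  "granularity": "day",
  "dimensions": ["gender", "country"],
  "filter": {
    "type": "and",
    "fields": [
      {"type": "selector", "dimension": "gender", "value": "female"},
      {"type": "or", "fields": [
        {"type": "selector", "dimension": "country", "value": "US"},
        {"type": "selector", "dimension": "country", "value": "UK"}
      ]}
    ]
  },
  "aggregations": [
    {"type": "longSum", "name": "impressions", "fieldName": "count"}
  ],
  "intervals": ["2012-10-01/2012-10-24"]
}
```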
+
+In terms of performance, Druid scans 33M rows per second per core and can
+ingest up to 10K incoming records per second per node. We have
+horizontally scaled Druid to support [scan speeds of 26B records per
+second](http://metamarkets.com/2012/scaling-druid/).
+
+Now that more people have hands-on experience with Hadoop, there is a
+broadening realization that while it is ideal for batch processing of large
+data volumes, tools for real-time data queries are lacking. Hence there is
+growing interest in tools like Google’s Dremel and PowerDrill, as evidenced by
+the new Apache Drill project. We believe that Druid addresses a gap in the
+existing big data ecosystem for a real-time analytical data store, and we are
+excited to make it available to the open source community.
+
+Metamarkets has engaged with multiple large internet properties like Netflix,
+providing early access to the code for evaluation purposes. Netflix is
+assessing Druid for operational monitoring of real-time metrics across their
+streaming media business.
+
+Sudhir Tonse, Manager of Cloud Platform Infrastructure, says, “Netflix manages
+billions of streaming events each day, so we need a highly scalable data store
+for operational reporting. We are so far impressed with the speed and
+scalability of Druid, and are continuing to evaluate it for providing critical
+real-time transparency into our operational metrics.”
+
+Metamarkets anticipates that open sourcing Druid will also help other
+organizations solve their real-time data analysis and processing needs. We are
+excited to see how the open source community benefits from using Druid in their
+own applications, and hopeful that Druid improves through their feedback and
+usage.
+
+Druid is available for download on GitHub at <https://github.com/metamx/druid>,
+and more information can be found on the [Druid project
+website](http://metamarkets.com/druid).
+
diff --git a/_posts/2013-02-28-interactive-queries-meet-real-time-data.md b/_posts/2013-02-28-interactive-queries-meet-real-time-data.md
new file mode 100644
index 0000000..94affb8
--- /dev/null
+++ b/_posts/2013-02-28-interactive-queries-meet-real-time-data.md
@@ -0,0 +1,9 @@
+---
+title: "Druid: Interactive Queries Meet Real-time Data (Video)"
+layout: post
+author: Eric Tschetter
+---
+
+Eric Tschetter (lead architect of Druid) and Danny Yuan (Netflix Platform Engineering Team) co-presented at the 2013 Strata conference in Santa Clara, CA.
+
+<iframe width="640" height="360" src="//www.youtube.com/embed/Dlqj34l2upk?rel=0" frameborder="0" allowfullscreen=""></iframe>
diff --git a/_posts/2013-04-03-15-minutes-to-live-druid.md b/_posts/2013-04-03-15-minutes-to-live-druid.md
new file mode 100644
index 0000000..8abf52f
--- /dev/null
+++ b/_posts/2013-04-03-15-minutes-to-live-druid.md
@@ -0,0 +1,93 @@
+---
+title: 15 Minutes to Live Druid
+layout: post
+author: Jaypal Sethi
+image: http://metamarkets.com/wp-content/uploads/2013/04/Druid-Cluster1.jpg
+---
+
+Big Data reflects today’s world where data generating events are measured in
+the billions and business decisions based on insight derived from this data are
+measured in seconds. There are few tools that provide deep insight into both
+live and stationary data as business events are occurring; Druid was designed
+specifically to serve this purpose.
+
+If you’re not familiar with Druid, it’s a powerful, open source, real-time
+analytics database designed to allow queries on large quantities of streaming
+data – that means querying data as it’s being ingested into the system (see
+previous [blog post](http://metamarkets.com/2012/metamarkets-open-sources-druid/)).
+Many databases claim they are real-time because they are
+“real fast;” this usually works for smaller workloads or for customers with
+infinite IT budgets. For companies like Netflix, whose engineers use Druid to
+cull through [70 billion log events per day, ingesting over 2 TB per hour at
+peak times](http://www.slideshare.net/g9yuayon/netflix-druidstrata2013)
+(more on this in a later blog post), real-time means they have to
+query data as it’s being ingested into the system.
+
+Taking Druid a step further, the database provides benefits for both real-time
+and non-real-time uses by allowing arbitrary drill-downs and n-dimensional
+filtering without any impact on performance. Beyond being a key feature used by
+[Metamarkets](http://www.metamarkets.com/) (average query times of less than
+500 milliseconds), it’s also a valuable capability for Netflix, and a key use
+case for the R community.
+
+Outside of features and functionality, the value of so many successful open
+source projects can be attributed to their user community. As a sponsor of this
+project, one of our core goals here at Metamarkets is to support our growing
+Druid Community. In fact, this blog post is a good example of responding to
+community feedback to make Druid immediately accessible to users who want to
+explore and become familiar with the database.
+
+Today, we’re excited to announce a ready-to-run Druid Personal Demo Cluster
+with a pre-loaded test workload: the Wikipedia edit stream. The DPDC (Druid
+Personal Demo Cluster) is available via AWS as a StackTemplate and is free to
+use and run; all that’s required is your own AWS account and 15 minutes.
+
+The DPDC is designed to provide a small, but realistic and fully functional
+Druid environment, allowing users to become familiar with a working example of
+a Druid system, write queries and understand how to manage the environment. The
+DPDC is also extensible; once users are familiar with Druid, we encourage them
+to load their own data and to continue learning. While the DPDC is far from an
+actual deployment, it’s designed to be an educational tool and an on-ramp
+towards your own deployment.
+
+The AWS (Amazon Web Services) [CloudFormation](http://aws.amazon.com/cloudformation/)
+Template pulls together two Druid AMIs and creates a pre-configured Druid
+Cluster preloaded with the Wikipedia edit stream, and a basic query interface
+to help you become familiar with Druid capabilities like drill-downs on
+arbitrary dimensions and filters.
+
+What’s in this Druid Demo Cluster?
+
+1. A single Master node is based on a preconfigured AWS AMI (Amazon Machine
+Image) and also contains the Zookeeper broker, the Indexer, and a MySQL
+instance which keeps track of system metadata. You can read more about Druid
+architecture [here](https://github.com/metamx/druid/wiki/Design).
+
+2. Three compute nodes based on another AWS AMI; these compute nodes have been
+pre-configured to work with the Master node and already contain the Wikipedia
+edit stream data (no specific setup is required).
+
+How to Get Started:
+
+Our quick start guide is located on the Druid Github wiki:
+<https://github.com/metamx/druid/wiki/Druid-Personal-Demo-Cluster>
+
+For support, please join our mailing list (Google Groups):
+<https://groups.google.com/d/forum/druid-development>. We welcome your feedback
+and contributions as we consider adding more content for the DPDC.
+
+Need more?
+
+Try out our connectors – we recently open-sourced our RDruid connector and will
+be holding a Druid Meetup where we’ll conduct a hands-on mini-lab to get
+attendees working with Druid.
+
+The community also contributed a Ruby client
+(<https://github.com/madvertise/ruby-druid>) and is rumored to be working on
+Python and SQL clients. And, a massive thanks to the team at
+[SkilledAnalysts](http://skilledanalysts.com/) for their contributions to the
+DPDC and their continued involvement in the Druid community.
+
+Finally, if you’re looking for more information on Druid, you can find it on
+our [technology page](http://metamarkets.com/product/technology/).
+
+IMAGE: [PEDRO MIGUEL SOUSA](http://www.shutterstock.com/gallery-86570p1.html) / [SHUTTERSTOCK](http://www.shutterstock.com/)
+
diff --git a/_posts/2013-04-03-druid-r-meetup.md b/_posts/2013-04-03-druid-r-meetup.md
new file mode 100644
index 0000000..ca9bc75
--- /dev/null
+++ b/_posts/2013-04-03-druid-r-meetup.md
@@ -0,0 +1,32 @@
+---
+published: true
+title: "Druid, R, Pizza and massively large data sets (Video)"
+author: Xavier Léauté
+tags: meetup R druid
+layout: post
+---
+
+On April 3rd, 2013, we held our first Meetup, hosted by Metamarkets. The description and video follow.
+
+#### [Meetup Description](https://github.com/metamx/RDruid)
+Since Metamarkets open-sourced Druid, our real-time analytics database, it’s
+been used within many different industries outside the online advertising space
+(gaming, on-line entertainment, enterprise CRM). Our community is growing, but
+no part of it is as fast-growing or as vocal as the R community.
+
+Outside of our production environment, Metamarkets leverages Druid for research - querying and drilling down on extremely large data sets to provide insight
+on everything from internal metrics to customer usage patterns. As the key
+sponsor and creator of Druid, our data scientists rely on the combination of R
+and Druid for developing analytics to serve customers in online advertising.
+
+In response to the interest, we recently decided to open source our [R connector
+for Druid](https://github.com/metamx/RDruid) and we thought we'd create a Meetup
+to show you how we've successfully used this powerful combination. Please join
+us for our first Open Druid user group at our new office, bring your laptops
+and learn how to query large data sets in Druid with our recently open sourced
+Druid R connector.
+
+[Follow this link for the original meetup](http://www.meetup.com/Open-Druid/events/109420402/) 
+
+
+<iframe src="http://player.vimeo.com/video/63512886" width="500" height="281" frameborder="0" webkitAllowFullScreen="" mozallowfullscreen="" allowFullScreen=""></iframe>
diff --git a/_posts/2013-04-26-meet-the-druid.md b/_posts/2013-04-26-meet-the-druid.md
new file mode 100644
index 0000000..592bc87
--- /dev/null
+++ b/_posts/2013-04-26-meet-the-druid.md
@@ -0,0 +1,111 @@
+---
+title: Meet the Druid and Find Out Why We Set Him Free
+layout: post
+author: Steve Harris
+image: http://metamarkets.com/wp-content/uploads/2013/04/wordle_from_open_source_book-4f737c9-intro.jpg
+---
+
+Before jumping straight into why [Metamarkets](http://metamarkets.com/) open
+sourced [Druid](https://github.com/metamx/druid/wiki), I thought I would give a
+brief dive into what Druid is and how it came about. For more details, check
+out the [Druid white paper](http://static.druid.io/docs/druid.pdf).
+
+### Introduction
+
+We are lucky to be developing software in a period of extreme innovation.
+Fifteen years ago, if a developer or ops person went into his or her boss’s
+office and suggested using a non-relational/non-SQL/non-ACID/non-Oracle
+approach to storing data, they would pretty much get sent on their way. All
+problems at all companies were believed to be solved just fine using relational
+databases.
+
+Skip forward a few years and the scale, latency and uptime requirements of the
+Internet really started hitting the
+[Googles](http://research.google.com/archive/spanner.html) and
+[Amazons](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf)
+of the world. It was quickly realized that some compromises needed to be made
+to manage the challenging data issues they were having in a cost-effective way.
+It was also finally acknowledged that different use cases might benefit from
+different solutions.
+
+Druid was born out of this era of data stores, purpose-built for a specific set
+of trade-offs and use cases. We believe that taking part in and keeping pace
+with this period of innovation requires more than a company. It requires a
+community.
+
+### So What Are Druid's Core Values?
+
+Druid was built as an analytics data store for Metamarkets’ as-it-happens,
+interactive SaaS platform targeted at the online advertising industry. It
+fundamentally needs to ingest tens of billions of events per day per customer
+and provide sub-second, interactive, slicing and dicing on arbitrary queries.
+It has to do this in an efficient and cost effective way.
+
+Values:
+- 24x7x365x10 (hours a day, days a week, days a year, for ten years)
+- User speed responses (millis not micros) on arbitrary analytics queries
+- Billions of events per day per customer as they happen (fast append)
+- Cost-effective data management
+- Linear scale-out
+- Predictable responses
+- Community/Adoption wins
+
+Non-Values:
+- Not a key-value store
+- Not focused on fast update or delete
+
+We looked at a lot of options; many of them had some of these properties, but none had all of them.
+
+### Cost Is No Joke
+
+When looking at the success of data management platforms like Hadoop, it is
+important not to underestimate how important cost is. While Hadoop is powerful,
+it was certainly true that other platforms were managing huge amounts of data
+before its existence. One of Hadoop’s fundamental innovations was being able to
+manage that data for a much lower cost per gigabyte compared to existing
+solutions. This was achieved by a combination of the hardware it could run on,
+the flexible programming model, and of course, the fact that it’s open source
+and can be used for free.
+
+Druid also takes the value proposition of cost seriously. It compresses rolled
+up data to use as little CPU and storage space as it can. It also runs well on
+commodity boxes and is open source. The combination of these factors makes it
+a cost-effective solution for user-speed querying of hundreds of terabytes of
+data, making the difficult and expensive practical.
+
+### Druid Today?
+
+Druid has been in production living up to its core values at Metamarkets for a
+few years now. Since going open source, we’ve had the pleasure of seeing
+adoption in a number of different organizations and for different use cases.
+Not least among them: co-presenting on [how Netflix engineers use
+Druid](http://www.youtube.com/watch?v=Dlqj34l2upk) at Strata in February 2013.
+It has proven to be an excellent platform, processing 10s of billions of
+events/day, storing 100s of TB of data, and providing fast, predictable
+arbitrary querying.
+
+So why did we open source it?
+
+### Why OSS?
+
+I'm glad you asked. It might seem counterintuitive to open source something so
+valuable. We feel like we have some good reasoning.
+
+- While we have some very specific use cases for Druid, we felt like it was
+  broadly applicable. Opening it up helps us learn what those other use cases
+  are.
+- Having others put pressure on it from other verticals is an excellent way to
+  keep the data store ahead of our needs. Since the platform is so important to
+  us, we want to make sure it has momentum and life.
+- We hope that by open sourcing it, we will get outside contributions both in
+  code and ideas.
+
+Druid is a very important piece of the Metamarkets platform. That said, it will
+always be cheaper and easier for people to use the Metamarkets SaaS solution
+rather than building and managing a cluster oneself. However, for those who
+have use cases not directly covered by what Metamarkets offers, open source
+Druid helps users create software that can leverage the power of a real-time,
+scalable, analytics-oriented data store.
+
+Looking for more Druid information?
+[Learn more about our core technology](http://metamarkets.com/product/technology/).
+
+[PHOTOGRAPH BY NICOLE C. ENGARD](http://www.flickr.com/photos/nengard/5755231610/)
diff --git a/_posts/2013-05-10-real-time-for-real.md b/_posts/2013-05-10-real-time-for-real.md
new file mode 100644
index 0000000..b2b72f0
--- /dev/null
+++ b/_posts/2013-05-10-real-time-for-real.md
@@ -0,0 +1,140 @@
+---
+title: "Real Real-Time. For Realz."
+layout: post
+author: Eric Tschetter
+image: "http://metamarkets.com/wp-content/uploads/2013/05/Clocks.jpg"
+published: true
+---
+
+_Danny Yuan, Cloud System Architect at Netflix, and I recently co-presented at
+the Strata Conference in Santa Clara. [The
+presentation](http://www.youtube.com/watch?v=Dlqj34l2upk) discussed how Netflix
+engineers leverage [Druid](http://metamarkets.com/product/technology/),
+Metamarkets’ open-source, distributed, real-time, analytical data store, to
+ingest 150,000 events per second (billions per day), equating to about 500MB/s
+of data at peak (terabytes per hour) while still maintaining real-time,
+exploratory querying capabilities. Before and after the presentation, we had
+some interesting chats with conference attendees. One common theme from those
+discussions was curiosity around the definition of “real-time” in the real
+world and how Netflix could possibly achieve it at those volumes. This post is
+a summary of the learnings from those conversations and a response to some of
+those questions._
+
+### What is Real-time?
+
+Real-time has become a heavily overloaded term so it is important to properly
+define it. I will limit our discussion of the term to its usage in the data
+space as it takes on different meanings in other arenas. In the data space, it
+is now commonly used to refer to one of two kinds of latency: query latency and
+data ingestion latency.
+
+Query latency is the rate of return of queries. It assumes a static data set
+and refers to the speed at which you can ask questions of that data set. Right
+now, the vast majority of “real-time” systems are co-opting the word real-time
+to refer to “fast query latency.” I do not agree with this definition of
+“real-time” and prefer “interactive queries,” but it is the most prevalent use
+of real-time and thus is worth noting.
+
+Data ingestion latency is the amount of time it takes for an event to be
+reflected in your query results. An example of this would be the amount of time
+it takes from when someone visits your website to when you can run a query that
+tells you about that person’s activity on your site. When that latency is close
+to a few seconds, you feel like you are seeing what is going on right now or
+that you are seeing things in “real-time.” This is what I believe most people
+assume when they hear about “real-time data.” However, rapid data ingestion
+latency is the lesser-used definition due to the lack of infrastructure to
+support it at scale (tens of billions of events/terabytes of data per day),
+while the infrastructure to support fast query latencies is easier to create
+and readily available.
+
+### What’s Considered Real-Time?
+
+Okay, now that we have a definition of real-time and that definition depends on
+latency, there’s the remaining question of which latencies are good enough to
+earn the “real-time” moniker. The truth is that it’s up to interpretation. The
+key point is that the people who see the output of the queries feel like they
+are looking at what is going on “right now.” I don’t have any
+scientifically-driven methods of understanding where this boundary is, but I do
+have experience from interacting with customers at Metamarkets.
+
+Conclusions first, descriptions second. To be considered real-time, query
+latency must be below 5 seconds and data ingestion latency must be below 15
+seconds.
+
+### Why Druid?
+
+Of course, in my infinite bias, I’m going to tell you about how Druid is able
+to handle data ingestion latencies in the sub-15 second range. If I didn’t tell
+you about that, then the blog post would be quite pointless. If you are
+interested in how Druid is able to handle the query latency side of the
+endeavor, please [watch the video](http://www.youtube.com/watch?v=eCbXoGSyHbg)
+from my October talk at Strata NY. I will continue with a discussion of the
+data ingestion side of the story.
+
+### How does Druid do it?
+
+If you want to deeply understand Druid, then a great place to start is its
+[whitepaper](http://static.druid.io/docs/druid.pdf),
+but we will provide a brief overview here of how the real-time ingestion piece
+achieves its goals. Druid handles real-time data ingestion by having a separate
+node type: the descriptively-named “real-time” node. Real-time nodes
+encapsulate the functionality to ingest and query data streams. Therefore, data
+indexed via these nodes is immediately available for querying. Typically, for
+data durability purposes, a message bus such as
+[Kafka](http://kafka.apache.org/) sits between the event creation point and the
+real-time node.
+
+The purpose of the message bus is to act as a buffer for incoming events. In an
+event stream, the message bus maintains offsets indicating the point a
+real-time node has read up to. Then, the real-time nodes can update these
+offsets periodically.
+
+Real-time nodes pull data in from the message bus and buffer it in indexes that
+do not hit disk. To minimize the impact of losing a node, the nodes will
+persist their indexes to disk either periodically or after some maximum size
+threshold is reached. After each persist, a real-time node updates the message
+bus, informing it of everything it has consumed so far (this is done by
+“committing the offset” in Kafka). If a real-time node fails and recovers, it
+can simply reload any indexes that were persisted to disk and continue reading
+the message bus from the point the last offset was committed.
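
The ingest/persist/commit cycle described above can be sketched as a toy simulation. The names here (`Bus`, `RealtimeNode`, `poll`) are invented for illustration; this is not Druid's actual code or API:

```python
# Toy simulation of a real-time node reading from a message bus,
# persisting its in-memory index, and only then committing its offset.

class Bus:
    """Stands in for Kafka: an append-only event log plus a committed offset."""
    def __init__(self, events):
        self.events = list(events)
        self.committed = 0  # highest offset the consumer has durably processed

class RealtimeNode:
    def __init__(self, bus, persist_threshold=3):
        self.bus = bus
        self.offset = bus.committed  # on restart, resume from the last commit
        self.buffer = []             # in-memory index, lost on a crash
        self.persisted = []          # indexes that have been flushed to disk
        self.persist_threshold = persist_threshold

    def poll(self):
        # Pull available events into the in-memory buffer, persisting
        # whenever the buffer reaches the size threshold.
        while self.offset < len(self.bus.events):
            self.buffer.append(self.bus.events[self.offset])
            self.offset += 1
            if len(self.buffer) >= self.persist_threshold:
                self.persist()

    def persist(self):
        # Persist first, commit second: committing before persisting could
        # lose buffered events if the node crashed in between.
        self.persisted.append(list(self.buffer))
        self.buffer.clear()
        self.bus.committed = self.offset

    def query_view(self):
        # Consolidated view: everything persisted plus the live buffer.
        return [e for index in self.persisted for e in index] + self.buffer

bus = Bus(range(7))
node = RealtimeNode(bus)
node.poll()
print(node.query_view())  # every event is queryable, even the unpersisted one
print(bus.committed)      # only persisted offsets have been committed
```

After a simulated crash, a new node constructed from the same bus would resume at the committed offset and re-read only the events that were never persisted.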
+
+Real-time nodes expose a consolidated view of the current and updated buffer
+and of all of the indexes persisted to disk. This allows for a consistent view
+of the data on the query side, while still allowing us to incrementally append
+data. On a periodic basis, the nodes will schedule a background task that takes
+all of the persisted indexes of a data source, merges them together to build a
+segment and uploads it to deep storage. It then signals for the historical
+compute nodes to begin serving the segment. Once the compute nodes load up the
+data and start serving requests against it, the real-time node no longer needs
+to maintain its older data. The real-time nodes then clean up the older segment
+of data and begin work on their new segment(s). The intricate and ongoing
+sequence of ingest, persist, merge, and handoff is completely fluid. The people
+querying the system are unaware of what is going on behind the scenes and they
+simply have a system that works.
+
+### TL;DR, and yet you somehow made it to the end of the post:
+
+A deep understanding of the problem, specifically the end-user’s expectations
+and how that will affect their interactions, is key to designing a
+technological solution to a problem. When dealing with transparency and
+analytical needs for large quantities of data, the big questions around user
+experience that must be answered are how soon data needs to be available and
+how quickly queries need to return.
+
+Hopefully this post helped clarify the considerations around these two key
+components and how infrastructure can be developed to handle them.
+
+Lastly, the shameless plug for Druid: you should use Druid.
+
+Druid is open source, you can download it and run it on your own infrastructure
+for your own problems. If you are interested in learning more about Druid or
+trying it out, the code is available on
+[GitHub](https://github.com/metamx/druid) and our [wiki with documentation is
+available here](https://github.com/metamx/druid/wiki). Finally, to complete
+the link soup at the bottom of our post,
+[here is our introductory
+presentation at Strata](http://www.youtube.com/watch?v=eCbXoGSyHbg) and [here
+is our most recent Strata talk](http://www.youtube.com/watch?v=Dlqj34l2upk) with Danny about real-time in Santa Clara.
+
+[Clocks photograph by Image Club Graphics via Sean
+Turvey](http://www.flickr.com/photos/74586726@N00/4176786834/)
\ No newline at end of file
diff --git a/_posts/2013-07-11-booting-ec2.md b/_posts/2013-07-11-booting-ec2.md
new file mode 100644
index 0000000..a41a3a5
--- /dev/null
+++ b/_posts/2013-07-11-booting-ec2.md
@@ -0,0 +1,25 @@
+---
+published: false
+layout: post
+title: Booting a Druid EC2 Instance
+---
+
+## About Druid ##
+
+Druid is a rockin' analytical data store that forms the basis for the Metamarkets platform. Metamarkets is dedicated to developing Druid in [open source](https://github.com/metamx/druid/wiki).
+
+## Booting a Druid EC2 Instance ##
+
+[Loading Your Data]() and [Querying Your Data]() contain recipes to boot a small Druid cluster on localhost. Here we will boot a small cluster on EC2. You can check out the code, or download a tarball from [here](http://static.druid.io/artifacts/druid-services-0.5.6-SNAPSHOT-bin.tar.gz).
+
+The [ec2 run script](https://github.com/metamx/druid/blob/master/examples/bin/run_ec2.sh), run_ec2.sh, is located at 'examples/bin' if you have checked out the code, or at the root of the project if you've downloaded a tarball. The script relies on the [Amazon EC2 API Tools](http://aws.amazon.com/developertools/351), and you will need to set the following environment variables:
+
+```bash
+# Setup environment for ec2-api-tools
+export EC2_HOME=/path/to/ec2-api-tools-1.6.7.4/
+export PATH=$PATH:$EC2_HOME/bin
+export AWS_ACCESS_KEY=
+export AWS_SECRET_KEY=
+```
+
+Then, booting an ec2 instance running one node of each type is as simple as running the script, run_ec2.sh :) You will see an Ubuntu 12.04 machine bootstrap and it will tell you when it is ready and how to ssh to the machine.
\ No newline at end of file
diff --git a/_posts/2013-08-06-twitter-tutorial.md b/_posts/2013-08-06-twitter-tutorial.md
new file mode 100644
index 0000000..fcdafad
--- /dev/null
+++ b/_posts/2013-08-06-twitter-tutorial.md
@@ -0,0 +1,333 @@
+---
+published: true
+layout: post
+title: Understanding Druid Via Twitter Data
+author: Russell Jurney
+tags: "druid, bigdata, analytics, datastore, exploratory, kafka, storm, hadoop, zookeeper"
+---
+
+Druid is a rockin' exploratory analytical data store capable of offering interactive queries on big data in real time - as data is ingested. Druid drives tens of billions of events per day for the [Metamarkets](http://www.metamarkets.com) platform, and Metamarkets is committed to building Druid in open source.
+
+## About Druid ##
+
+Thanks for taking an interest in Druid. This tutorial will help clarify some core Druid concepts. We will go through one of the Real-time examples and issue some basic Druid queries. The data source we'll be working with is the [Twitter spritzer stream](https://dev.twitter.com/docs/streaming-apis/streams/public). If you are ready to explore Druid, brave its challenges, and maybe learn a thing or two, read on!
+
+## Setting Up ##
+
+There are two ways to setup Druid: download a tarball, or build it from source.
+
+### Download a Tarball ###
+
+We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.5.6-SNAPSHOT-bin.tar.gz).
+Download this to a directory of your choosing.
+
+You can extract the awesomeness within by issuing:
+<pre>tar -zxvf druid-services-0.5.6-SNAPSHOT-bin.tar.gz</pre>
+
+If you cd into the directory:
+<pre>cd druid-services-0.5.6-SNAPSHOT</pre>
+
+You should see a bunch of files:
+* run_example_server.sh
+* run_example_client.sh
+* run_ec2.sh
+* LICENSE, config, examples, lib directories
+
+### Clone and Build from Source ###
+
+The other way to setup Druid is from source via git. To do so, run these commands:
+
+```
+git clone git@github.com:metamx/druid.git
+cd druid
+./build.sh
+```
+
+You should see a bunch of files: 
+
+```
+DruidCorporateCLA.pdf README      common      examples    indexer     pom.xml     server
+DruidIndividualCLA.pdf  build.sh    doc     group_by.body   install     publications    services
+LICENSE     client      eclipse_formatting.xml  index-common    merger      realtime
+```
+
+You can find the example executables in the examples/bin directory:
+* run_example_server.sh
+* run_example_client.sh
+
+## Running Example Scripts ##
+
+Let's start doing stuff. You can start a Druid Realtime node by issuing:
+<pre>./run_example_server.sh</pre>
+
+Select "twitter". 
+
+You'll need to register a new application with the Twitter API, which only takes a minute. Go to [https://dev.twitter.com/apps/new](https://dev.twitter.com/apps/new), fill out the form, and submit. Don't worry, the home page and callback URL can be anything. This will generate keys for the Twitter example application. Take note of the values for consumer key/secret and access token/secret.
+
+Enter your credentials when prompted.
+
+Once the node starts up you will see a bunch of logs about setting up properties and connecting to the data source. If you see crazy exceptions, you probably typed in your login information incorrectly. If the server started properly you will see a message like this that repeats periodically:
+
+<pre><code>
+2013-05-17 23:04:59,793 INFO [chief-twitterstream] druid.examples.twitter.TwitterSpritzerFirehoseFactory - nextRow() has returned 1,000 InputRows
+</code></pre>
+
+These messages indicate you are ingesting events. The Druid real-time node ingests events into an in-memory buffer. Periodically, these events will be persisted to disk. Persisting to disk generates a whole bunch of logs:
+<pre><code>
+2013-05-17 23:06:40,918 INFO [chief-twitterstream] com.metamx.druid.realtime.plumber.RealtimePlumberSchool - Submitting persist runnable for dataSource[twitterstream]
+2013-05-17 23:06:40,920 INFO [twitterstream-incremental-persist] com.metamx.druid.realtime.plumber.RealtimePlumberSchool - DataSource[twitterstream], Interval[2013-05-17T23:00:00.000Z/2013-05-18T00:00:00.000Z], persisting Hydrant[FireHydrant{index=com.metamx.druid.index.v1.IncrementalIndex@126212dd, queryable=com.metamx.druid.index.IncrementalIndexSegment@64c47498, count=0}]
+2013-05-17 23:06:40,937 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Starting persist for interval[2013-05-17T23:00:00.000Z/2013-05-17T23:07:00.000Z], rows[4,666]
+2013-05-17 23:06:41,039 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - outDir[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0/v8-tmp] completed index.drd in 11 millis.
+2013-05-17 23:06:41,070 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - outDir[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0/v8-tmp] completed dim conversions in 31 millis.
+2013-05-17 23:06:41,275 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.CompressedPools - Allocating new chunkEncoder[1]
+2013-05-17 23:06:41,332 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - outDir[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0/v8-tmp] completed walk through of 4,666 rows in 262 millis.
+2013-05-17 23:06:41,334 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Starting dimension[htags] with cardinality[634]
+2013-05-17 23:06:41,381 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Completed dimension[htags] in 49 millis.
+2013-05-17 23:06:41,382 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Starting dimension[lang] with cardinality[19]
+2013-05-17 23:06:41,398 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Completed dimension[lang] in 17 millis.
+2013-05-17 23:06:41,398 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Starting dimension[utc_offset] with cardinality[32]
+2013-05-17 23:06:41,413 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - Completed dimension[utc_offset] in 15 millis.
+2013-05-17 23:06:41,413 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexMerger - outDir[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0/v8-tmp] completed inverted.drd in 81 millis.
+2013-05-17 23:06:41,425 INFO [twitterstream-incremental-persist] com.metamx.druid.index.v1.IndexIO$DefaultIndexIOHandler - Converting v8[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0/v8-tmp] to v9[/tmp/example/twitter_realtime/basePersist/twitterstream/2013-05-17T23:00:00.000Z_2013-05-18T00:00:00.000Z/0]
+2013-05-17 23:06:41,426 INFO [twitterstream-incremental-persist] 
+... ETC
+</code></pre>
+
+The logs describe the building of different columns, which is probably not the most exciting stuff (they might as well be in Vulcan) if you are learning about Druid for the first time. Nevertheless, if you are interested in the details of our real-time architecture and why we persist indexes to disk, I suggest you read our [White Paper](http://static.druid.io/docs/druid.pdf).
+
+Okay, things are about to get real (-time). To query the real-time node you've spun up, you can issue:
+<pre>./run_example_client.sh</pre>
+
+Select "twitter" once again. This script issues GroupByQueries against the Twitter data we've been ingesting. The query looks like this:
+
+```json
+{
+    "queryType": "groupBy",
+    "dataSource": "twitterstream",
+    "granularity": "all",
+    "dimensions": ["lang", "utc_offset"],
+    "aggregations":[
+      { "type": "count", "name": "rows"},
+      { "type": "doubleSum", "fieldName": "tweets", "name": "tweets"}
+    ],
+    "filter": { "type": "selector", "dimension": "lang", "value": "en" },
+    "intervals":["2012-10-01T00:00/2020-01-01T00"]
+}
+```
+
+This is a **groupBy** query, which you may be familiar with from SQL. We are grouping, or aggregating, via the **dimensions** field: \["lang", "utc_offset"\]. We are **filtering** via the **"lang"** dimension, to look only at English tweets. Our **aggregations** are what we are calculating: a row count, and the sum of the tweets in our data.
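To make the grouping concrete, here is a minimal Python sketch that applies the same selector filter, dimensions, and aggregations to a few hypothetical pre-parsed tweet rows (the rows are invented for illustration; Druid, of course, does this work server-side):

```python
from collections import defaultdict

# Hypothetical pre-parsed rows: (lang, utc_offset, tweets)
rows = [
    ("en", "-10800", 2), ("en", "-10800", 3),
    ("en", "-14400", 5), ("fr", "-10800", 4),
]

groups = defaultdict(lambda: {"rows": 0, "tweets": 0})
for lang, utc_offset, tweets in rows:
    if lang != "en":          # the "selector" filter on the "lang" dimension
        continue
    g = groups[(lang, utc_offset)]
    g["rows"] += 1            # the "count" aggregator
    g["tweets"] += tweets     # the "doubleSum" aggregator

for key, agg in sorted(groups.items()):
    print(key, agg)
```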
+
+The result looks something like this:
+
+```json
+[
+    {
+        "version": "v1",
+        "timestamp": "2012-10-01T00:00:00.000Z",
+        "event": {
+            "utc_offset": "-10800",
+            "tweets": 90,
+            "lang": "en",
+            "rows": 81
+        }
+    },
+    {
+        "version": "v1",
+        "timestamp": "2012-10-01T00:00:00.000Z",
+        "event": {
+            "utc_offset": "-14400",
+            "tweets": 177,
+            "lang": "en",
+            "rows": 154
+        }
+    },
+...
+```
+
+This data, plotted in a time series/distribution, looks something like this:
+
+
+![Timezone / Tweets Scatter Plot](http://metamarkets.com/wp-content/uploads/2013/06/tweets_timezone_offset.png)
+
+This groupBy query is a bit complicated and we'll return to it later. For the time being, just make sure you are getting some blocks of data back. If you are having problems, make sure you have [curl](http://curl.haxx.se/) installed. Control+C to break out of the client script.
+
+## Querying Druid ##
+
+In your favorite editor, create the file:
+<pre>time_boundary_query.body</pre>
+
+Druid queries are JSON blobs which are relatively painless to create programmatically, but an absolute pain to write by hand. So anyway, we are going to create a Druid query by hand. Add the following to the file you just created:
+<pre><code>{
+  "queryType"  : "timeBoundary",
+  "dataSource" : "twitterstream"
+}
+</code></pre>
+
+The [TimeBoundaryQuery](https://github.com/metamx/druid/wiki/TimeBoundaryQuery) is one of the simplest Druid queries. To run the query, you can issue:
+<pre><code> curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'content-type: application/json'  -d @time_boundary_query.body</code></pre>
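The same POST can be issued programmatically. Here is a sketch using only Python's standard library; it assumes the example realtime node from earlier is listening on localhost:8080, so the actual network call is left commented out:

```python
import json
import urllib.request

query = {"queryType": "timeBoundary", "dataSource": "twitterstream"}

# Build the same POST that the curl command issues
req = urllib.request.Request(
    "http://localhost:8080/druid/v2/?pretty",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a realtime node running, this would print the JSON result:
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```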
+
+We get something like this JSON back:
+
+```json
+[ {
+  "timestamp" : "2013-06-10T19:09:00.000Z",
+  "result" : {
+    "minTime" : "2013-06-10T19:09:00.000Z",
+    "maxTime" : "2013-06-10T20:50:00.000Z"
+  }
+} ]
+```
+That's the result. What information do you think the result is conveying? 
+...
+If you said the result indicates the minimum and maximum timestamps of the events we've ingested thus far (summarized to a minutely granularity), you are absolutely correct. I can see you are a person legitimately interested in learning about Druid. Let's explore a bit further.
+
+Return to your favorite editor and create the file:
+<pre>timeseries_query.body</pre>
+
+We are going to make a slightly more complicated query, the [TimeseriesQuery](https://github.com/metamx/druid/wiki/TimeseriesQuery). Copy and paste the following into the file:
+<pre><code>{
+  "queryType":"timeseries",
+  "dataSource":"twitterstream",
+  "intervals":["2010-01-01/2020-01-01"],
+  "granularity":"all",
+  "aggregations":[
+      { "type": "count", "name": "rows"},
+      { "type": "doubleSum", "fieldName": "tweets", "name": "tweets"}
+  ]
+}
+</code></pre>
+
+You are probably wondering, what are these granularity and aggregations things? The query is aggregating a set of metrics over a span of time.
+To issue the query and get some results, run the following in your command line:
+<pre><code>curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'content-type: application/json'  -d @timeseries_query.body</code></pre>
+
+Once again, you should get a JSON blob of text back with your results, that looks something like this:
+
+```json
+[ {
+  "timestamp" : "2013-06-10T19:09:00.000Z",
+  "result" : {
+    "tweets" : 358562.0,
+    "rows" : 272271
+  }
+} ]
+```
+
+If you issue the query again, you should notice your results updating.
+
+Right now all the results you are getting back are being aggregated into a single timestamp bucket. What if we wanted to see our aggregations on a per minute basis? What field can we change in the query to accomplish this?
+
+If you loudly exclaimed "we can change granularity to minute", you are absolutely correct again! We can specify different granularities to bucket our results, like so:
+
+```json
+{
+  "queryType":"timeseries",
+  "dataSource":"twitterstream",
+  "intervals":["2010-01-01/2020-01-01"],
+  "granularity":"minute",
+  "aggregations":[
+      { "type": "count", "name": "rows"},
+      { "type": "doubleSum", "fieldName": "tweets", "name": "tweets"}
+  ]
+}
+```
+
+This gives us something like the following:
+
+```json
+[ {
+  "timestamp" : "2013-06-10T19:09:00.000Z",
+  "result" : {
+    "tweets" : 2650.0,
+    "rows" : 2120
+  }
+}, {
+  "timestamp" : "2013-06-10T19:10:00.000Z",
+  "result" : {
+    "tweets" : 3401.0,
+    "rows" : 2609
+  }
+}, {
+  "timestamp" : "2013-06-10T19:11:00.000Z",
+  "result" : {
+    "tweets" : 3472.0,
+    "rows" : 2610
+  }
+},
+...
+```
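Conceptually, the granularity setting just controls how each event's timestamp is truncated before aggregation. Here is a rough Python sketch of "minute" bucketing, using invented events (Druid performs this bucketing itself during ingestion and querying):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical ingested events: (timestamp, tweets)
events = [
    ("2013-06-10T19:09:12Z", 3), ("2013-06-10T19:09:48Z", 2),
    ("2013-06-10T19:10:05Z", 4),
]

buckets = defaultdict(lambda: {"rows": 0, "tweets": 0.0})
for ts, tweets in events:
    t = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    minute = t.replace(second=0)          # truncate to "minute" granularity
    b = buckets[minute.isoformat() + ".000Z"]
    b["rows"] += 1
    b["tweets"] += tweets

for ts in sorted(buckets):
    print(ts, buckets[ts])
```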
+
+## Solving a Problem ##
+
+One of Druid's main powers is to provide answers to problems, so let's pose a problem. What if we wanted to know the top hashtags, ordered by the number of tweets, where the language is English, over the last few minutes you've been reading this tutorial? To solve this problem, we have to return to the query we introduced at the very beginning of this tutorial, the [GroupByQuery](https://github.com/metamx/druid/wiki/GroupByQuery). It would be nice if we could group our results by  [...]
+
+Let's create the file:
+<pre>group_by_query.body</pre>
+and put the following in there:
+<pre><code>{
+    "queryType": "groupBy",
+    "dataSource": "twitterstream",
+    "granularity": "all",
+    "dimensions": ["htags"],
+    "orderBy": {"type":"default", "columns":[{"dimension": "tweets", "direction":"DESCENDING"}], "limit":5},
+    "aggregations":[
+      { "type": "longSum", "fieldName": "tweets", "name": "tweets"}
+    ],
+    "filter": {"type": "selector", "dimension": "lang", "value": "en" },
+    "intervals":["2012-10-01T00:00/2020-01-01T00"]
+}
+</code></pre>
+
+Whoa! Our query just got way more complicated. Now we have these [Filters](https://github.com/metamx/druid/wiki/Filters) things and this [OrderBy](https://github.com/metamx/druid/wiki/OrderBy) thing. Fear not, it turns out the new objects we've introduced to our query help define the format of our results and provide an answer to our question.
+
+If you issue the query:
+<pre><code>curl -X POST 'http://localhost:8080/druid/v2/?pretty' -H 'content-type: application/json'  -d @group_by_query.body</code></pre>
+
+You should hopefully see an answer to our question. For my twitter stream, it looks like this:
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2012-10-01T00:00:00.000Z",
+  "event" : {
+    "tweets" : 2660,
+    "htags" : "android"
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2012-10-01T00:00:00.000Z",
+  "event" : {
+    "tweets" : 1944,
+    "htags" : "E3"
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2012-10-01T00:00:00.000Z",
+  "event" : {
+    "tweets" : 1927,
+    "htags" : "15SueñosPendientes"
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2012-10-01T00:00:00.000Z",
+  "event" : {
+    "tweets" : 1717,
+    "htags" : "ipad"
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2012-10-01T00:00:00.000Z",
+  "event" : {
+    "tweets" : 1515,
+    "htags" : "IDidntTextYouBackBecause"
+  }
+} ]
+```
+
+Feel free to tweak other query parameters to answer other questions you may have about the data.
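The orderBy clause is what turns this groupBy into a top-N: after filtering and summing, results are sorted descending by the tweets metric and truncated to 5. The same logic in a small Python sketch over invented rows:

```python
from collections import Counter

# Hypothetical rows: (lang, htag, tweets)
rows = [
    ("en", "android", 5), ("en", "E3", 4), ("en", "android", 3),
    ("es", "E3", 9), ("en", "ipad", 2), ("en", "E3", 1),
]

totals = Counter()
for lang, htag, tweets in rows:
    if lang == "en":               # selector filter on the "lang" dimension
        totals[htag] += tweets     # longSum on "tweets"

# orderBy: descending by tweets, limit 5
top5 = totals.most_common(5)
print(top5)
```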
+
+## Additional Information ##
+
+This tutorial is merely showcasing a small fraction of what Druid can do. If you are interested in more information about Druid, including setting up a more sophisticated Druid cluster, please read the other links in our wiki.
+
+And thus concludes our journey! Hopefully you learned a thing or two about Druid real-time ingestion, querying Druid, and how Druid can be used to solve problems. If you have additional questions, feel free to post in our [Google Groups page](http://www.groups.google.com/forum/#!forum/druid-development).
diff --git a/_posts/2013-08-30-loading-data.md b/_posts/2013-08-30-loading-data.md
new file mode 100644
index 0000000..12cff7e
--- /dev/null
+++ b/_posts/2013-08-30-loading-data.md
@@ -0,0 +1,184 @@
+---
+published: true
+layout: post
+title: "Understanding Druid Real-time Ingestion"
+author: Russell Jurney
+tags: "druid, analytics, datastore, olap"
+---
+
+In our last post, we got a realtime node working with example Twitter data. Now it's time to load our own data to see how Druid performs. Druid can ingest data in three ways: via Kafka and a realtime node, via the indexing service, and via the Hadoop batch loader. Data is ingested in realtime using a [Firehose](https://github.com/metamx/druid/wiki/Firehose). In this post we'll outline how to ingest data from Kafka in realtime using the Kafka Firehose.
+
+## About Druid ##
+Druid is a rockin' exploratory analytical data store capable of offering interactive queries over big data in realtime - as data is ingested. Druid drives tens of billions of events per day for the [Metamarkets](http://www.metamarkets.com) platform, and Metamarkets is committed to building Druid in open source.
+
+To learn more, check out the first post in this series: [Understanding Druid Via Twitter Data](http://druid.io/blog/2013/08/06/twitter-tutorial.html)
+
+Check out Druid at XLDB on Sept 9th: [XLDB](https://conf-slac.stanford.edu/xldb-2013/tutorials#amC)
+
+Druid is available [here](https://github.com/metamx/druid).
+
+## Create Config Directories ##
+Each type of node needs its own config file and directory, so create these subdirectories under the Druid directory.
+
+    mkdir config
+    mkdir config/realtime
+
+## Loading Data with Kafka ##
+
+[KafkaFirehoseFactory](https://github.com/metamx/druid/blob/master/realtime/src/main/java/com/metamx/druid/realtime/firehose/KafkaFirehoseFactory.java) is how Druid communicates with Kafka. Using this Firehose with the right configuration, we can import data into Druid in realtime without writing any code. To load data into a realtime node via Kafka, we'll first need to initialize Zookeeper and Kafka, and then configure and initialize a Realtime node.
+
+### Booting Kafka ###
+
+Instructions for booting a Zookeeper and then Kafka cluster are available [here](http://kafka.apache.org/07/quickstart.html).
+
+**Download Apache Kafka** 0.7.2 from [http://static.druid.io/artifacts/kafka-0.7.2-incubating-bin.tar.gz](http://static.druid.io/artifacts/kafka-0.7.2-incubating-bin.tar.gz)
+
+
+    wget http://static.druid.io/artifacts/kafka-0.7.2-incubating-bin.tar.gz
+    tar -xvzf kafka-0.7.2-incubating-bin.tar.gz
+    cd kafka-0.7.2-incubating-bin
+
+**Boot Zookeeper and Kafka**
+
+    cat config/zookeeper.properties
+    bin/zookeeper-server-start.sh config/zookeeper.properties
+    # in a new console
+    bin/kafka-server-start.sh config/server.properties
+
+**Launch Kafka**
+
+In a new console, launch the Kafka console producer (so you can type in JSON Kafka messages in a bit):
+
+    bin/kafka-console-producer.sh --zookeeper localhost:2181 --topic druidtest
+
+### Launching a Realtime Node
+
+**Download Druid**
+
+
+    wget http://static.druid.io/artifacts/releases/druid-services-0.5.50-bin.tar.gz
+    tar -xvzf druid-services-0.5.50-bin.tar.gz
+    cd druid-services-0.5.50-bin
+
+**Create a valid configuration file** similar to this called config/realtime/runtime.properties:
+
+    druid.host=127.0.0.1
+    druid.port=8083
+
+    com.metamx.emitter.logging=true
+
+    druid.processing.formatString=processing_%s
+    druid.processing.numThreads=1
+    druid.processing.buffer.sizeBytes=10000000
+
+    druid.service=example
+
+    druid.request.logging.dir=/tmp/example/log
+    druid.realtime.specFile=realtime.spec
+    com.metamx.emitter.logging=true
+    com.metamx.emitter.logging.level=info
+
+    com.metamx.aws.accessKey=dummy_access_key
+    com.metamx.aws.secretKey=dummy_secret_key
+    druid.pusher.s3.bucket=dummy_s3_bucket
+
+    druid.zk.service.host=localhost
+    druid.server.maxSize=300000000000
+    druid.zk.paths.base=/druid
+    druid.database.segmentTable=prod_segments
+    druid.database.user=user
+    druid.database.password=diurd
+    druid.database.connectURI=
+    druid.host=127.0.0.1:8083
+
+
+**Create a valid realtime configuration file** similar to this called realtime.spec in the current directory:
+
+
+    [{
+      "schema" : { "dataSource":"druidtest",
+                   "aggregators":[ {"type":"count", "name":"impressions"},
+                                      {"type":"doubleSum","name":"wp","fieldName":"wp"}],
+                   "indexGranularity":"minute",
+               "shardSpec" : { "type": "none" } },
+      "config" : { "maxRowsInMemory" : 500000,
+                   "intermediatePersistPeriod" : "PT10m" },
+      "firehose" : { "type" : "kafka-0.7.2",
+                     "consumerProps" : { "zk.connect" : "localhost:2181",
+                                         "zk.connectiontimeout.ms" : "15000",
+                                         "zk.sessiontimeout.ms" : "15000",
+                                         "zk.synctime.ms" : "5000",
+                                         "groupid" : "topic-pixel-local",
+                                         "fetch.size" : "1048586",
+                                         "autooffset.reset" : "largest",
+                                         "autocommit.enable" : "false" },
+                     "feed" : "druidtest",
+                     "parser" : { "timestampSpec" : { "column" : "utcdt", "format" : "iso" },
+                                  "data" : { "format" : "json" },
+                                  "dimensionExclusions" : ["wp"] } },
+      "plumber" : { "type" : "realtime",
+                    "windowPeriod" : "PT10m",
+                    "segmentGranularity":"hour",
+                    "basePersistDirectory" : "/tmp/realtime/basePersist",
+                    "rejectionPolicy": {"type": "messageTime"} }
+    }]
+
+**Launch the realtime node**
+
+
+    java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
+    -Ddruid.realtime.specFile=realtime.spec \
+    -classpath services/target/druid-services-0.5.6-SNAPSHOT-selfcontained.jar:config/realtime \
+    com.metamx.druid.realtime.RealtimeMain
+
+**Paste data into the Kafka console producer**
+
+
+    {"utcdt": "2010-01-01T01:01:01", "wp": 1000, "gender": "male", "age": 100}
+    {"utcdt": "2010-01-01T01:01:02", "wp": 2000, "gender": "female", "age": 50}
+    {"utcdt": "2010-01-01T01:01:03", "wp": 3000, "gender": "male", "age": 20}
+    {"utcdt": "2010-01-01T01:01:04", "wp": 4000, "gender": "female", "age": 30}
+    {"utcdt": "2010-01-01T01:01:05", "wp": 5000, "gender": "male", "age": 40}
+    
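Each line fed to the producer must be a standalone JSON object whose utcdt field matches the timestampSpec in realtime.spec (ISO format). A small sketch that generates such lines programmatically (the field values are arbitrary examples):

```python
import json
from datetime import datetime, timedelta

base = datetime(2010, 1, 1, 1, 1, 1)
events = []
for i, (wp, gender, age) in enumerate(
    [(1000, "male", 100), (2000, "female", 50), (3000, "male", 20)]
):
    events.append({
        # ISO-format timestamp, matching the "utcdt" timestampSpec
        "utcdt": (base + timedelta(seconds=i)).strftime("%Y-%m-%dT%H:%M:%S"),
        "wp": wp,
        "gender": gender,
        "age": age,
    })

# One JSON object per line, ready to paste into the console producer
lines = [json.dumps(e) for e in events]
print("\n".join(lines))
```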
+**Watch the events as they are ingested** in the Druid realtime node console
+
+    ...
+    2013-06-17 21:41:55,569 INFO [Global--0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2013-06-17T21:41:55.569Z","service":"example","host":"127.0.0.1","metric":"events/processed","value":5,"user2":"druidtest"}]
+    ...
+
+**Create a query**
+In a new console, edit a file called query.body:
+
+
+    {
+        "queryType": "groupBy",
+        "dataSource": "druidtest",
+        "granularity": "all",
+        "dimensions": [],
+        "aggregations": [
+            { "type": "count", "name": "rows" },
+            {"type": "longSum", "name": "imps", "fieldName": "impressions"},
+            {"type": "doubleSum", "name": "wp", "fieldName": "wp"}
+        ],
+        "intervals": ["2010-01-01T00:00/2020-01-01T00"]
+    }
+
+**Submit the query via curl**
+
+
+    curl -X POST "http://localhost:8083/druid/v2/?pretty" \
+    -H 'content-type: application/json' -d @query.body
+
+**View Result!**
+
+
+    [ {
+      "timestamp" : "2010-01-01T01:01:00.000Z",
+      "result" : {
+        "imps" : 20,
+        "wp" : 60000.0,
+        "rows" : 5
+      }
+    } ]
+
+Congratulations, you've queried the data we just loaded! In our next post, we'll move on to Querying our Data.
\ No newline at end of file
diff --git a/_posts/2013-09-12-the-art-of-approximating-distributions.md b/_posts/2013-09-12-the-art-of-approximating-distributions.md
new file mode 100644
index 0000000..bdad5e5
--- /dev/null
+++ b/_posts/2013-09-12-the-art-of-approximating-distributions.md
@@ -0,0 +1,303 @@
+---
+title: "The Art of Approximating Distributions: Histograms and Quantiles at Scale"
+layout: post
+author: Nelson Ray
+image: http://metamarkets.com/wp-content/uploads/2013/06/atlas-600x402.jpeg
+---
+
+_I’d like to acknowledge Xavier Léauté for his extensive contributions (in
+particular, for suggesting several algorithmic improvements and work on
+implementation), helpful comments, and fruitful discussions.  Featured image
+courtesy of CERN._
+
+Many businesses care about accurately computing quantiles over their key
+metrics, which can pose several interesting challenges at scale. For example,
+many service level agreements hinge on these metrics, such as guaranteeing that
+95% of queries return in < 500ms. Internet service providers routinely use
+burstable billing, a fact that Google famously exploited to transfer terabytes
+of data across the US for free. Quantile calculations just involve sorting the
+data, which can be easily parallelized. However, this requires storing the raw
+values, which is at odds with a pre-aggregation step that helps Druid achieve
+such dizzying speed. Instead, we store smaller, adaptive approximations of
+these values as the building blocks of our “approximate histograms.” In this
+post, we explore the related problems of accurate estimation of quantiles and
+building histogram visualizations that enable the live exploration of
+distributions of values. Our solution is capable of scaling out to aggregate
+billions of values in seconds.
+
+## Druid Summarization
+
+When we first [met
+Druid](http://metamarkets.com/2011/druid-part-i-real-time-analytics-at-a-billion-rows-per-second/),
+we considered the following example of a raw impression event log:
+
+    timestamp             publisher          advertiser  gender  country  dimensions  click  price
+    2011-01-01T01:01:35Z  bieberfever.com    google.com  Male    USA                  0      0.65
+    2011-01-01T01:03:63Z  bieberfever.com    google.com  Male    USA                  0      0.62
+    2011-01-01T01:04:51Z  bieberfever.com    google.com  Male    USA                  1      0.45
+    ...
+    2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Female  UK                   0      0.87
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK                   0      0.99
+    2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Female  UK                   1      1.53
+    ...
+
+By giving up some resolution in the timestamp column (e.g., by truncating the
+timestamps to the hour), we can produce a summarized dataset by grouping by the
+dimensions and aggregating the metrics. We also introduce the “impressions”
+column, which counts the rows from the raw data with that combination of
+dimensions:
+
+     timestamp             publisher          advertiser  gender country impressions clicks revenue
+     2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male   USA     1800        25     15.70
+     2011-01-01T01:00:00Z  bieberfever.com    google.com  Male   USA     2912        42     29.18
+     2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male   UK      1953        17     17.31
+     2011-01-01T02:00:00Z  bieberfever.com    google.com  Male   UK      3194        170    34.01
+
+All is well and good if we content ourselves with computations that can be
+distributed efficiently such as summing hourly revenue to produce daily
+revenue, or calculating click-through rates. In the language of [Gray et
+al.](http://paul.rutgers.edu/~aminabdu/cs541/cube_op.pdf), the former
+calculation is _distributive_: we can sum the raw event prices to produce hourly
+revenue over each combination of dimensions and in turn sum this intermediary
+for further coarsening into daily and quarterly totals. The latter is
+algebraic: it is a combination of a fixed number of distributive statistics, in
+particular, clicks / impressions.
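To make the roll-up concrete, here is a small Python sketch of the summarization step: truncate timestamps to the hour, group by the remaining dimensions, and apply the distributive aggregations (the raw rows are abbreviated stand-ins for the example table):

```python
from collections import defaultdict

# Abbreviated raw impression events: (timestamp, publisher, click, price)
raw = [
    ("2011-01-01T01:01:35Z", "bieberfever.com", 0, 0.65),
    ("2011-01-01T01:03:03Z", "bieberfever.com", 0, 0.62),
    ("2011-01-01T01:04:51Z", "bieberfever.com", 1, 0.45),
    ("2011-01-01T02:00:00Z", "ultratrimfast.com", 1, 1.53),
]

summary = defaultdict(lambda: {"impressions": 0, "clicks": 0, "revenue": 0.0})
for ts, publisher, click, price in raw:
    hour = ts[:13] + ":00:00Z"        # truncate timestamp to the hour
    row = summary[(hour, publisher)]
    row["impressions"] += 1           # distributive
    row["clicks"] += click            # distributive
    row["revenue"] += price           # distributive

# Algebraic: click-through rate from two distributive statistics
for key, row in sorted(summary.items()):
    row["ctr"] = row["clicks"] / row["impressions"]
    print(key, row)
```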
+
+However, sums and averages are of very little use when one wants to ask certain
+questions of bid-level data. Exchanges may wish to visualize the [bid
+landscape](http://users.cis.fiu.edu/~lzhen001/activities/KDD2011Program/docs/p265.pdf)
+so as to provide guidance to publishers on how to set floor prices. Because of
+our data-summarization process, we have lost the individual bid prices–and
+knowing that the 20 total bids sum to $5 won’t tell us how many exceed $1 or
+$2. Quantiles, by contrast, are holistic: there is no constant bound on the
+size of the storage needed to exactly describe a sub-aggregate.
+
+Although the raw data contain the unadulterated prices–with which we can answer
+these bid landscape questions exactly–let’s recall why we much prefer the
+summarized dataset. In the above example, each raw row corresponds to an
+impression, and the summarized data represent an average compression ratio of
+~2500:1 (in practice, we see ratios in the 1 to 3 digit range). Less data is
+both cheaper to store in memory and faster to scan through. In effect, we are
+trading off increased ETL effort against less storage and faster queries with
+this pre-aggregation.
+
+One solution to support quantile queries is to store the entire array of ~2500
+prices in each row:
+
+     timestamp             publisher          advertiser  gender country impressions clicks prices
+     2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male   USA     1800        25     [0.64, 1.93, 0.93, ...]
+     2011-01-01T01:00:00Z  bieberfever.com    google.com  Male   USA     2912        42     [0.65, 0.62, 0.45, ...]
+     2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male   UK      1953        17     [0.07, 0.34, 1.23, ...]
+     2011-01-01T02:00:00Z  bieberfever.com    google.com  Male   UK      3194        170    [0.53, 0.92, 0.12, ...]
+
+But the storage requirements for this approach are prohibitive. If we can
+accept _approximate_ quantiles, then we can replace the complete array of prices
+with a data structure that is sublinear in storage–similar to our sketch-based
+approach to cardinality estimation.
+
+## Approximate Histograms
+
+[Ben-Haim and
+Tom-Tov](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf) suggest
+summarizing the unbounded-length arrays with a fixed number of (count,
+centroid) pairs. Suppose we attempt to summarize a set of numbers with a single
+pair. The mean (centroid) has the nice property of minimizing the sum of the
+squared differences between it and each value, but it is sensitive to outliers
+because of the squaring. The median is the minimizer of the sum of the absolute
+differences and for an odd number of observations, corresponds to an actual bid
+price. Bid prices tend to be skewed due to the mechanics of second price
+auctions–some bidders have no problem bidding $100, knowing that they will
+likely only have to pay $2. So a median of $1 is more representative of the
+“average” bid price than a mean of $20. However, with the (count, median)
+representation, there is no way to merge medians: knowing that 8 prices have a
+median of $.43 and 10 prices have a median of $.59 doesn’t tell you that the
+median of all 18 prices is $.44. Merging centroids is simple–just use the
+weighted mean. Given some approximate histogram representation of (count,
+centroid) pairs, we can make _online_ updates as we scan through data.
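The mergeability claim is easy to check: the weighted mean of two (count, centroid) pairs is exactly the centroid of the union of the two underlying sets, which is not true of medians. A minimal sketch:

```python
def merge(pair_a, pair_b):
    """Merge two (count, centroid) pairs via the weighted mean."""
    (na, ca), (nb, cb) = pair_a, pair_b
    n = na + nb
    return (n, (na * ca + nb * cb) / n)

a = [1.0, 2.0, 3.0]          # count 3, centroid 2.0
b = [10.0, 20.0]             # count 2, centroid 15.0
merged = merge((len(a), sum(a) / len(a)), (len(b), sum(b) / len(b)))
exact = (len(a) + len(b), sum(a + b) / len(a + b))
print(merged, exact)         # the two agree
```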
+
+Of course, there is no way to accurately summarize an arbitrary number of
+prices with a single pair, so we are confronted with a classical
+accuracy/storage/speed tradeoff. We can fix the number of pairs that we store
+like so:
+
+     timestamp             publisher          advertiser  gender country impressions clicks prices
+     2011-01-01T01:00:00Z  ultratrimfast.com  google.com  Male   USA     1800        25     [(1, .16), (48, .62), (83, .71), ...]
+     2011-01-01T01:00:00Z  bieberfever.com    google.com  Male   USA     2912        42     [(1, .12), (3, .15), (30, 1.41), ...]
+     2011-01-01T02:00:00Z  ultratrimfast.com  google.com  Male   UK      1953        17     [(2, .03), (1, .62), (20, .93), ...]
+     2011-01-01T02:00:00Z  bieberfever.com    google.com  Male   UK      3194        170    [(1, .05), (94, .84), (1, 1.14), ...]
+
+In the first row, there is one bid at $.16, 48 bids with an average price of
+$.62, and so on. But given a set of prices, how do we summarize them as (count,
+centroid) pairs? This is a special case of the k-means clustering problem,
+which in general is [NP-hard](http://dl.acm.org/citation.cfm?id=1519389), even
+in the [plane](http://cseweb.ucsd.edu/~avattani/papers/kmeans_hardness.pdf).
+Fortunately, however, the one-dimensional case is tractable and admits a
+[solution via dynamic
+programming](http://journal.r-project.org/archive/2011-2/RJournal_2011-2_Wang+Song.pdf).
+The [B-H/T-T](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf)
+approach is to iteratively combine the closest two pairs together by taking
+weighted means until we reach our desired size.
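A bare-bones sketch of that update rule: keep the centroids sorted, insert each new value as a (count 1, value) pair, and whenever the size bound is exceeded, merge the two closest adjacent centroids by weighted mean. (This is a simplification of the paper's algorithm for illustration, not Druid's actual implementation.)

```python
import bisect

def add_value(centroids, value, max_size):
    """centroids: sorted list of [centroid, count] pairs, mutated in place."""
    bisect.insort(centroids, [value, 1])
    while len(centroids) > max_size:
        # find the adjacent pair of centroids with the smallest gap
        i = min(range(len(centroids) - 1),
                key=lambda j: centroids[j + 1][0] - centroids[j][0])
        (c1, n1), (c2, n2) = centroids[i], centroids[i + 1]
        # replace the two pairs with their weighted mean
        centroids[i:i + 2] = [[(n1 * c1 + n2 * c2) / (n1 + n2), n1 + n2]]
    return centroids

hist = []
for v in [1, 2, 3, 10, 11, 30]:
    add_value(hist, v, max_size=3)
print(hist)
```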
+
+Here we illustrate the B-H/T-T summarization process for the integers 1 through
+10, 15 and 20, and 12 and 25 each repeated 3 times, for 3 different choices of
+the number of (count, centroid) pairs.
+
+<img src="http://metamarkets.com/wp-content/uploads/2013/06/histogram_pairs-1024x614.png" alt="Histogram Pairs"/>
+
+There are 4 salient operations on these approximate histogram objects:
+
+1. Adding new values to the histogram: add a new pair, (1, value), and merge
+the closest pair if we exceed the size parameter 
+
+2. Merging two histograms together: repeatedly add all pairs of values from one
+histogram to another 
+
+3. Estimating the count of values below some reference value: build trapezoids
+between the pairs and look at the various areas 
+
+4. Estimating the quantiles of the values represented in a histogram: walk
+along the trapezoids until you reach the desired quantile We apply operation 1
+during our ETL phase, as we group by the dimensions and build a histogram on
+the resulting prices, serializing this object into a Druid data segment. The
+[compute nodes](http://static.druid.io/docs/druid.pdf) repeat operation 2 in
+parallel, each emitting an intermediate histogram to the [query
+broker](http://static.druid.io/docs/druid.pdf) for combination (another
+application of operation 2). Finally, we can apply operation 3 repeatedly to
+estimate counts in between various breakpoints, producing a histogram plot. Or
+we can estimate quantiles of interest with operation 4.
+
+Here we review the trapezoidal estimation of [Ben-Haim and
+Tom-Tov](http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf) with an
+example. Suppose we wanted to estimate the number of values less than or equal
+to 10 (the exact answer is 10) knowing that there are 10 points with mean 5.5,
+4 with mean 12.8, and 4 with mean 23.8. We assume that half of the values lie
+to the left and half lie to the right (we shall improve upon this assumption in
+the next section) of the centroid. So we mark off that 5 values are smaller
+than the first centroid (this turns out to be correct). We then draw a
+trapezoid connecting the next two centroids and assume that the number of
+values between 5.5 and 10 is proportional to the area that this sub-trapezoid
+occupies (the latter half of which is marked in blue). We assume that half of
+the 10 values near 5.5 lie to its right, and half of the 4 values near 12.8 lie
+to its left and multiply the sum of 7 by the ratio of areas to come up with our
+estimate of 5.05 in this region (the exact answer is 5). Therefore, we estimate
+that there are 10.05 values less than or equal to 10.
+
+<img src="http://metamarkets.com/wp-content/uploads/2013/06/ah_trapezoid.png" alt="AH Trapezoid"/>
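
A Python sketch of the same trapezoidal estimate (`count_below` is our name, not Druid's, and boundary cases are omitted). On the worked example it lands at roughly 10.02, in line with the post's rounded figure:

```python
def count_below(pairs, b):
    """Estimate how many values are <= b, given centroid-sorted
    (count, centroid) pairs; assumes b falls between the first
    and last centroids."""
    counts = [c for c, _ in pairs]
    cents = [m for _, m in pairs]
    # Locate the trapezoid containing b: cents[i] <= b < cents[i + 1].
    i = max(j for j in range(len(cents) - 1) if cents[j] <= b)
    frac = (b - cents[i]) / (cents[i + 1] - cents[i])
    # Interpolated trapezoid height at b.
    mb = counts[i] + (counts[i + 1] - counts[i]) * frac
    # All earlier pairs, plus half of pair i, lie to the left of cents[i];
    # the sub-trapezoid area covers the stretch from cents[i] to b.
    return sum(counts[:i]) + counts[i] / 2.0 + (counts[i] + mb) / 2.0 * frac

# The worked example: 10 values near 5.5, 4 near 12.8, 4 near 23.8.
est = count_below([(10, 5.5), (4, 12.8), (4, 23.8)], 10)
print(round(est, 2))  # ~10.02, against the exact answer of 10
```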
+
+## Improvements
+
+Here we describe some improvements and efficiencies specific to our
+implementation of the B-H/T-T approximate histogram.
+
+Computational efficiency at query time (operation 2) is more
+[dear](http://metamarkets.com/2012/scaling-druid/) to us than at ETL time
+(operation 1). That is, we can spend a few more cycles in building the
+histograms if it allows for a very efficient means of combination. Our
+Java-based implementation of operation 2 using a heap to keep track of the
+differences between pairs can combine roughly 200K (size 50) histograms per
+second per core (on an i7-3615QM). This compares unfavorably with [core scan
+rates](http://metamarkets.com/2012/scaling-druid/) an order of magnitude or two
+higher for count, sum, and group by queries, although, to be fair, a histogram
+contains 1-2 orders of magnitude more information than a single count or sum.
+Still, we sought a faster solution. If we know ahead of time the proper
+threshold below which to merge pairs, then we can do a single linear scan
+through the sorted pairs (the sorting we can do at ETL time), choosing to
+merge or not based on that threshold. Determining this threshold exactly is
+difficult to do efficiently, but eschewing the heap-based solution for this
+approximation results in core aggregation rates of ~1.3M (size 50) histograms
+per second.
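
A sketch of the single-pass idea in Python (the hard part, choosing the threshold, is assumed solved here):

```python
def merge_below_threshold(pairs, delta):
    # One left-to-right pass over centroid-sorted (count, centroid) pairs:
    # fold a pair into its left neighbor whenever the gap is under delta.
    out = [pairs[0]]
    for c, m in pairs[1:]:
        c0, m0 = out[-1]
        if m - m0 < delta:
            out[-1] = (c0 + c, (c0 * m0 + c * m) / (c0 + c))
        else:
            out.append((c, m))
    return out
```

With a well-chosen `delta`, this replaces the heap bookkeeping with an O(n) scan, which is roughly the difference between the 200K/s and ~1.3M/s rates quoted above.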
+
+When indexing, we choose among 3 different serialization formats depending on
+the nature of the data, using the most efficient encoding:
+
+1. a dense format, storing all counts and centroids up to the configurable size
+parameter
+
+2. a sparse format, storing some number of pairs below the limit
+
+3. a compact format, storing the individual values themselves
+
+It is important to emphasize that we can specify different levels of accuracy
+hierarchically. The above formats come into play when we index the data,
+turning the arrays of raw values into (count, centroid) pairs. Because indexing
+is slow and expensive and Druid segments are
+[immutable](http://static.druid.io/docs/druid.pdf), it’s better at this level
+to err on the side of accuracy. So we might, for example, specify a maximum of
+100 (count, centroid) pairs in indexing, which will allow for greater
+flexibility at query time, when we aggregate these together into some possibly
+different number of (count, centroid) pairs.
+
+We use the superfluous sign bit of the count to determine whether a (count,
+centroid) pair with count > 1 is exact or not. Does a value of (2, 1.51)
+indicate 2 bid prices of $1.51, or 2 unequal bid prices that average to $1.51?
+The trapezoid method of count estimation makes no such distinction and will
+“spread out” its uncertainty equally. This can be problematic for the discrete,
+multimodal distributions characteristic of bid data. But given knowledge of
+which (count, centroid) pairs are exact, we can make more accurate estimates.
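
A toy Python illustration of the trick (the actual serialized layout differs):

```python
# Reuse the count's otherwise-unused sign bit as an "exact" flag:
# a negative stored count means every value in the pair equals the
# centroid exactly, rather than merely averaging to it.
def pack(count, centroid, exact):
    return (-count if exact else count, centroid)

def unpack(stored):
    count, centroid = stored
    return abs(count), centroid, count < 0

print(unpack(pack(2, 1.51, True)))   # two bids of exactly $1.51
print(unpack(pack(2, 1.51, False)))  # two unequal bids averaging $1.51
```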
+
+Recall that our data typically exhibit high skewness. Because the closest
+histogram pairs are continuously merged until the number of pairs is small
+enough, the remaining pairs are necessarily (relatively) far apart. It is
+logical to summarize 12 prices around $.10 and 6 prices around $.12 as 18
+prices around $.11, but we wouldn’t want to merge all prices under $2 because
+of the influence of 49 wildly-high prices–unless we are particularly interested
+in those outliers, that is. At the very least, we would like to be able to
+control our “area of interest”: do we care about the majority of the data or
+about those few outliers? When we aggregate millions or billions of values,
+even with the tiniest skew, we would otherwise end up summarizing the bulk of
+the distribution with a single (count, centroid) pair. Our solution is to
+define special limits, inside of which we maintain the accuracy of our
+estimates. This typically jibes well with setting x-axis limits for our
+histogram visualization.
+
+## Accuracy
+
+Here, we plot a histogram over ~18M prices, using default settings for the
+x-axis limits and bin widths. Due to the high degree of skew, the inferred
+limits are suboptimal, as they include prices ~$100. In addition, there are
+even negative bid prices (which could be erroneous or a way of expressing
+disinterest in the auction)!
+
+<img src="https://metamarkets.com/wp-content/uploads/2013/06/histogram_skew-1024x614.png" alt="Histogram Skew"/>
+
+Below, we set our resolution limits to $0 and $1 and vary the number of (count,
+centroid) pairs in our approximate histogram data structure. The accuracy using
+only 5 pairs is abysmal and doesn’t even capture the second mode in the $.20 to
+$.25 bucket. 50 pairs fare much better, and 200 are very accurate.
+
+<img src="https://metamarkets.com/wp-content/uploads/2013/06/histogram_accuracy-1024x614.png" alt="Histogram Accuracy"/>
+
+## Speed
+
+Let’s take a look at some benchmarks on our modest demo cluster (4 m2.2xlarge
+compute nodes) with some wikipedia data. We’ll look at the performance of the
+following aggregators:
+
+1. a count aggregator, which simply counts the number of rows 
+
+2. a uniques aggregator, which implements a version of the HyperLogLog algorithm
+
+3. approximate histogram aggregators, varying the resolution from 10 pairs to
+50 pairs to 200 pairs
+
+We get about 1-3M summarized rows of data per week from Wikipedia, and the
+benchmarks over the full 32-week period cover 84M rows. There appears to be a
+roughly linear relationship between the query time and the quantity of data:
+
+<img src="https://metamarkets.com/wp-content/uploads/2013/06/ah_speed-1024x614.png" alt="AH Speed"/>
+
+Indeed, the cluster scan rates tend to flatten out once we hit enough data:
+
+<img src="http://metamarkets.com/wp-content/uploads/2013/06/ah_scan_rate.png" alt="AH Scan Rate"/>
+
+We previously obtained cluster scan rates of [26B rows per second](http://metamarkets.com/2012/scaling-druid/) on a beefier
+cluster. Very roughly speaking, the approximate histogram aggregator is 1/10
+the speed of the count aggregator, so we might expect speeds of 2-3B rows per
+second on such a cluster. Recall that our summarization step compacts 10-100
+rows of data into 1, for typical datasets. This means that it is possible to
+construct histograms representing tens to hundreds of billions of prices in
+seconds.
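
Spelling out the arithmetic behind that claim, using the rough figures quoted above:

```python
count_scan_rate = 26e9                 # rows/sec on the beefier cluster
histogram_rate = count_scan_rate / 10  # histogram aggregator ~1/10 the speed
compaction = (10, 100)                 # raw rows summarized into each Druid row
prices_per_sec = [histogram_rate * c for c in compaction]
print(prices_per_sec)  # 26 billion to 260 billion raw prices per second
```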
+
+Finally, my colleague Fangjin Yang and I will continue the discussion in
+October in New York at the Strata Conference where we will present, [“Not
+Exactly! Fast Queries via Approximation
+Algorithms.”](http://strataconf.com/stratany2013/public/schedule/detail/30045)
+
diff --git a/_posts/2013-09-16-upcoming-events.md b/_posts/2013-09-16-upcoming-events.md
new file mode 100644
index 0000000..911212c
--- /dev/null
+++ b/_posts/2013-09-16-upcoming-events.md
@@ -0,0 +1,17 @@
+---
+published: true
+layout: post
+---
+
+Druid hits the road this fall, with presentations in the Bay Area, North Carolina and New York!
+
+## About Druid ##
+Druid is a rockin' exploratory analytical data store capable of offering interactive query of big data in realtime - as data is ingested. Druid cost-effectively drives tens of billions of events per day for the [Metamarkets](http://www.metamarkets.com) platform, and Metamarkets is committed to building Druid in open source.
+
+## Upcoming Druid Events in 2013 ##
+
+* 9/24 at Flurry in SF, at the [Real-time Big Data meetup](http://www.meetup.com/Real-time-Big-Data/events/139221542/)
+* 10/23-24 in Raleigh, NC at [All Things Open](http://www.allthingsopen.org)
+* 10/26-31 at Strata in [New York City](http://strataconf.com/stratany2013?intcmp=il-strata-stny13-franchise-page)
+
+Come learn the way of the Druid at these events!
\ No newline at end of file
diff --git a/_posts/2013-09-19-launching-druid-with-apache-whirr.md b/_posts/2013-09-19-launching-druid-with-apache-whirr.md
new file mode 100644
index 0000000..a6001f4
--- /dev/null
+++ b/_posts/2013-09-19-launching-druid-with-apache-whirr.md
@@ -0,0 +1,66 @@
+---
+published: true
+layout: post
+author: Russell Jurney
+---
+
+Without Whirr, to launch a Druid cluster, you'd have to provision machines yourself, and then install each node type manually. This process is outlined [here](https://github.com/metamx/druid/wiki/Tutorial%3A-The-Druid-Cluster). With Whirr, you can boot a druid cluster by editing a simple configuration file and then issuing a single command!
+
+## About Druid ##
+Druid is a rockin' exploratory analytical data store capable of offering interactive query of big data in realtime - as data is ingested. Druid cost-effectively drives tens of billions of events per day for the [Metamarkets](http://www.metamarkets.com) platform, and Metamarkets is committed to building Druid in open source.
+
+## About Apache Whirr ##
+Apache Whirr is a set of libraries for running cloud services. It allows you to use simple commands to boot clusters of distributed systems for testing and experimentation. Apache Whirr makes booting clusters easy.
+
+## Installing Whirr ##
+Until Druid is part of an official Apache Whirr release (expected in a month or two), you'll need to clone the code from [https://github.com/rjurney/whirr/tree/trunk](https://github.com/rjurney/whirr/tree/trunk) and build Whirr yourself.
+
+    git clone git@github.com:rjurney/whirr.git
+    cd whirr
+    git checkout trunk
+    mvn clean install -Dmaven.test.failure.ignore=true
+
+## Configuring your Cloud Provider ##
+
+You'll need to set these environment variables:
+
+    export WHIRR_PROVIDER=aws-ec2
+    export WHIRR_IDENTITY=$AWS_ACCESS_KEY_ID
+    export WHIRR_CREDENTIAL=$AWS_SECRET_ACCESS_KEY
+
+## build.properties ##
+
+    cat recipes/druid.properties
+
+Much of the configuration is self-explanatory:
+
+    # Change the cluster name here
+    whirr.cluster-name=druid
+
+    # Change the number of machines in the cluster here
+    whirr.instance-templates=1 zookeeper+druid-mysql+druid-master+druid-broker+druid-compute+druid-realtime
+    # whirr.instance-templates=3 zookeeper,1 druid-mysql,2 druid-realtime,2 druid-broker,2 druid-master,5 druid-compute
+
+    # Which version of druid to load
+    whirr.druid.version=0.5.54
+
+    # S3 bucket to store segments in
+    whirr.druid.pusher.s3.bucket=dummy_s3_bucket
+
+    # The realtime.spec file to use to configure a realtime node
+    # whirr.druid.realtime.spec.path=/path/to/druid/examples/config/realtime/realtime.spec
+
+
+You can change a cluster's configuration with the whirr.instance-templates parameter, which enables you to boot clusters large or small. Note that at least one zookeeper node and one druid-mysql node are required.
+
+## Launching a Druid Cluster with Whirr ##
+
+    bin/whirr launch-cluster --config recipes/druid.properties
+
+When the cluster is ready, SSH instructions will be printed and we can connect to and use the cluster. For more instructions on using a Druid cluster, see [here](https://github.com/metamx/druid/wiki/Querying-your-data). To destroy the cluster when we're done, run:
+
+
+    bin/whirr destroy-cluster --config recipes/druid.properties
+
+
+We hope Apache Whirr makes experimenting with Druid easier than ever!
\ No newline at end of file
diff --git a/_posts/2013-09-20-druid-at-xldb.md b/_posts/2013-09-20-druid-at-xldb.md
new file mode 100644
index 0000000..ece264a
--- /dev/null
+++ b/_posts/2013-09-20-druid-at-xldb.md
@@ -0,0 +1,19 @@
+---
+title: Druid at XLDB
+published: true
+layout: post
+author: Russell Jurney
+tags: "#xldb #druidio #analytics #olap"
+---
+
+We recently attended [Stanford XLDB](http://www.xldb.org/) and the experience was a blast. Once a year, XLDB invites speakers from different organizations to discuss the challenges of and solutions to dealing with Xtreme (with an X!) data sets. This year, Jeff Dean dropped knowledge bombs about architecting scalable systems, Michael Stonebraker provided inspiring advice about growing open source projects, CERN explained how they found the Higgs Boson, and several organizations spoke abou [...]
+
+We attended XLDB to teach our very first Druid tutorial session. Battling an alarm clock that went off far too early (for engineers anyway) and braving the insanity that is highway 101 morning traffic, most of us _almost_ managed to show up on time for our session.
+
+![Druid Users at XLDB](http://distilleryimage3.ak.instagram.com/ce5ff7c4197111e3b2e322000a1f9a5c_7.jpg)
+
+The focus of our tutorial is to educate people on why we built Druid, how Druid is architected, and how to build applications on top of Druid. The tutorial has several hands-on sections about how to spin up and load data into a Druid cluster. For [R](http://www.r-project.org/) enthusiasts out there, there is a section about building an R application for data analysis using Druid. Check out our slides below:
+
+<script async="" class="speakerdeck-embed" data-id="50c52830fc7301302f630ada113e7e19" data-ratio="1.72972972972973" src="//speakerdeck.com/assets/embed.js"></script>
+
+We are constantly trying to improve the Druid educational process. In the future, we hope to refine and repeat this talk at other cool conferences.
\ No newline at end of file
diff --git a/_posts/2013-10-18-R-applications.md b/_posts/2013-10-18-R-applications.md
new file mode 100644
index 0000000..b54b395
--- /dev/null
+++ b/_posts/2013-10-18-R-applications.md
@@ -0,0 +1,44 @@
+---
+published: false
+---
+
+
+In this post, we'll look at building Druid Applications in the [R language](http://www.r-project.org/). RDruid is a Druid library for R, available here: [https://github.com/metamx/RDruid](https://github.com/metamx/RDruid) 
+
+To setup Druid's webstream example, grab the Druid tarball at [http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz](http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz)
+
+    tar -zxvf druid-services-*-bin.tar.gz
+    cd druid-services-0.5.54
+    ./run_example_server.sh
+    Enter webstream
+
+To install RDruid and the other dependencies for this tutorial, simply run these commands in R:
+
+	install.packages("devtools")
+	library(devtools)
+ 
+	install.packages(c("shiny", "ggplot2"))
+	install_github("RDruid", "metamx")
+
+Let's query Druid and make a simple chart:
+
+    library(RDruid)
+    library(lubridate)  # provides ymd() and interval() used below, in case RDruid does not attach it
+    library(ggplot2)
+    
+    url <- druid.url(host="localhost", port="8083")
+    datasource <- "wikipedia"
+    timespan <- interval(ymd(20130101), ymd(20200101))
+
+    tsdata <- druid.query.timeseries(url=url, dataSource=datasource,
+                            intervals = timespan,
+                            aggregations = sum(metric("count")),
+                            granularity = granularity("PT1M")
+    )
+    
+	print(ggplot(data=tsdata, aes_string(x="timestamp", y="rows")) + geom_line())
+    
+Which results in:
+
+![Druid GGPlot Time Series](/_posts/r_druid_ggplot.png)
+  
+A more complicated Shiny web application is available on [github here](https://github.com/rjurney/druid-application-development/tree/master/R).
\ No newline at end of file
diff --git a/_posts/2013-10-18-python-applications.md b/_posts/2013-10-18-python-applications.md
new file mode 100644
index 0000000..5b6137e
--- /dev/null
+++ b/_posts/2013-10-18-python-applications.md
@@ -0,0 +1,56 @@
+---
+published: false
+layout: post
+---
+
+In this post we will demonstrate building a Druid application in Python. Code for this example is available [on github](https://github.com/rjurney/druid-application-development).
+
+## Webstream Example
+
+To setup Druid's webstream example, grab the Druid tarball at [http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz](http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz)
+
+    tar -zxvf druid-services-*-bin.tar.gz
+    cd druid-services-0.5.54
+    ./run_example_server.sh
+    Enter webstream
+
+## Installing pyDruid
+
+Druid's python library is called pyDruid, and can be installed via:
+
+	pip install pydruid
+
+The source to pydruid is available on github: [https://github.com/metamx/pydruid](https://github.com/metamx/pydruid)
+
+## Working with pyDruid
+
+A simple example of querying Druid with pyDruid looks like this:
+
+	#!/usr/bin/env python
+
+	from pydruid.client import *
+
+	# Druid Config
+	endpoint = 'druid/v2/?pretty'
+	demo_bard_url =  'http://localhost:8083'
+	dataSource = 'webstream'
+	intervals = ["2013-01-01/p1y"]
+
+	query = pyDruid(demo_bard_url, endpoint)
+
+	counts = query.timeseries(dataSource = dataSource, 
+	              granularity = "minute", 
+	              intervals = intervals, 
+	              aggregations = {"count" : doubleSum("rows")}
+	              )
+
+	print counts
+    
+Which results in this:
+
+	[{'timestamp': '2013-09-30T23:31:00.000Z', 'result': {'count': 0.0}}, {'timestamp': '2013-09-30T23:32:00.000Z', 'result': {'count': 0.0}}, {'timestamp': '2013-09-30T23:33:00.000Z', 'result': {'count': 0.0}}, {'timestamp': '2013-09-30T23:34:00.000Z', 'result': {'count': 0.0}}]
+
+## Conclusion
+
+In our next post, we'll build a full-blown Druid Python web application!
+
diff --git a/_posts/2013-10-18-realtime-web-applications.md b/_posts/2013-10-18-realtime-web-applications.md
new file mode 100644
index 0000000..9819b06
--- /dev/null
+++ b/_posts/2013-10-18-realtime-web-applications.md
@@ -0,0 +1,157 @@
+---
+published: false
+layout: post
+---
+
+In this post, we will cover the creation of web applications with realtime visualizations using Druid, Ruby/Python and D3.js. Complete code in Ruby and Python for this example is available at [https://github.com/rjurney/druid-application-development](https://github.com/rjurney/druid-application-development).
+
+For more information on the Ruby and Python Druid clients, see here and here. For more information on starting a Druid realtime node, see here.
+
+![Druid Explorer Chart](/_images/druid_explorer_chart.png)
+
+## Web App in Python/Flask/pyDruid
+
+Our Python [Flask](http://flask.pocoo.org/) application is simple enough. One route serves our HTML/CSS/Javascript, and another serves JSON to our chart. The fetch_data method runs our Druid query via the [pyDruid package](https://github.com/metamx/pydruid).
+
+	from flask import Flask, render_template
+	import json
+	import re
+	from pydruid.client import *
+
+	# Setup Flask
+	app = Flask(__name__)
+
+	# Druid Config
+	endpoint = 'druid/v2/?pretty'
+	demo_bard_url =  'http://localhost:8083'
+	dataSource = 'webstream'
+
+	# Boot a Druid 
+	query = pyDruid(demo_bard_url, endpoint)
+	
+	# Display our HTML Template
+	@app.route("/time_series")
+	def time_series():
+	    return render_template('index.html')
+	
+	# Fetch our data from Druid
+	def fetch_data(start_iso_date, end_iso_date):
+	    intervals = [start_iso_date + "/" + end_iso_date]
+	    counts = query.timeseries(dataSource = dataSource, 
+	    	                      granularity = "second", 
+	    						  intervals = intervals, 
+	    						  aggregations = {"count" : doubleSum("rows")}
+	    					     )				     
+	    json_data = json.dumps(counts)
+	    return json_data
+	
+	# Deliver data in JSON to our chart
+	@app.route("/time_series_data/<start_iso_date>/<end_iso_date>")
+	def time_series_data(start_iso_date, end_iso_date):
+	    return fetch_data(start_iso_date, end_iso_date)
+	
+	if __name__ == "__main__":
+	    app.run(debug=True)
+
+## Web App in Ruby/Sinatra/ruby-druid
+
+Our Ruby application using Sinatra and ruby-druid is similar. First we setup some Sinatra configuration variables, and then repeat the work above:
+
+	# index.rb
+	require 'sinatra'
+	require 'druid'
+	require 'json'
+	
+	set :public_folder, File.dirname(__FILE__) + '/static'
+	set :views, 'templates'
+	
+	client = Druid::Client.new('', {:static_setup => { 'realtime/webstream' => 'http://localhost:8083/druid/v2/' }})
+
+	def fetch_data(client, start_iso_date, end_iso_date)
+	  query = Druid::Query.new('realtime/webstream').time_series().double_sum(:rows).granularity(:second).interval(start_iso_date, end_iso_date)
+	  result = client.send(query)
+	  counts = result.map {|r| {'timestamp' => r.timestamp, 'result' => r.row}}
+	  json = JSON.generate(counts)
+	end
+
+	get '/time_series' do
+	  erb :index
+	end
+	
+	get '/time_series_data/:start_iso_date/:end_iso_date' do |start_iso_date, end_iso_date|
+	  fetch_data(client, start_iso_date, end_iso_date)
+	end
+
+## Javascript - D3.js
+
+The meat of our application is in Javascript, using the [d3.js](http://d3js.org/) library. The complete code is [here](https://github.com/rjurney/druid-application-development/blob/master/python/templates/index.html) and a working JSFiddle is [here](http://jsfiddle.net/CBsgU/). Commented code highlights are below:
+
+	// Made possible only with help from Vadim Ogievetsky
+	var data = [];
+    var maxDataPoints = 20; // Max number of points to keep in the graph
+    var nextData = data;
+    var dataToShow = [];
+    setInterval(function() { 
+        data = nextData;
+
+        // Skip when nothing more to show
+        if (dataToShow.length == 0) return;
+
+        // Take one datum from the new data and add it to the data
+        // (pretend like the data is arriving one at a time)
+        data.push(dataToShow.shift());
+
+        // once we get too many things in data, remove some
+        // use nextData to train the scales but use the untrimmed data
+        // for rendering so that it looks smooth
+        nextData = data.length > maxDataPoints ? data.slice(data.length - maxDataPoints) : data;
+
+        // cannot show the area unless we have at least 2 points
+        if (data.length < 2) return;
+
+        // This is a key step that needs to be done because of the
+        // peculiarity of area / line charts
+        // (they have one element that represents N data points - unlike a bar chart):
+        // reapply the old area function (with the old scale) to the new data
+        dPath.attr("d", area(data))        
+
+        // Update the scale domains
+        x.domain(d3.extent(nextData, function(d) { return d.date; }));
+        y.domain([0, d3.max(nextData, function(d) { return d.close; })]);
+
+        // reapply the axis selection (now that the scales have been updated)
+        // yay for transition!
+        xAxisSel.transition().duration(900).call(xAxis);        
+        yAxisSel.transition().duration(900).call(yAxis);
+
+        // reapply the updated area function to animate the area
+        dPath.transition().duration(900).attr("d", area(data))
+
+    }, 1000);
+
+    function convert(ds) { 
+        return ds.map(function(d) {   
+            return {
+                date: new Date(d['timestamp']),
+                close: d['result']['count']
+            }
+        });
+    }
+
+    lastQueryTime = new Date(Date.now() - 60 * 1000) // start from one minute ago
+    lastQueryTime.setUTCMilliseconds(0)
+    function doQuery() {
+        now = new Date()
+        now.setUTCMilliseconds(0)
+        console.log('query!')
+        druidQuery(lastQueryTime, now, function(err, results) {
+            // add results to the data to be shown
+            lastQueryTime = now
+            dataToShow = dataToShow.concat(convert(results)) 
+            console.log('dataToShow length', dataToShow.length)
+        })
+    }
+    doQuery()
+    setInterval(doQuery, 10000)
+
+This chart highlights Druid's dual realtime abilities: rapidly consuming and querying large streams. We hope it helps illustrate how to use Druid with realtime visualizations!
\ No newline at end of file
diff --git a/_posts/2013-10-18-ruby-applications.md b/_posts/2013-10-18-ruby-applications.md
new file mode 100644
index 0000000..b871c55
--- /dev/null
+++ b/_posts/2013-10-18-ruby-applications.md
@@ -0,0 +1,50 @@
+---
+published: false
+layout: post
+---
+
+In this post we will demonstrate building a Druid application in Ruby. Code for this example is available [on github](https://github.com/rjurney/druid-application-development).
+
+## Webstream Example
+
+To setup Druid's webstream example, grab the Druid tarball at [http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz](http://static.druid.io/artifacts/releases/druid-services-0.5.54-bin.tar.gz)
+
+    tar -zxvf druid-services-*-bin.tar.gz
+    cd druid-services-0.5.54
+    ./run_example_server.sh
+    Enter webstream
+
+## ruby-druid
+
+The [ruby-druid project](https://github.com/madvertise/ruby-druid) from Madvertise provides Ruby connectivity with Druid. To install ruby-druid, you'll need to get the source:
+
+	git clone git@github.com:madvertise/ruby-druid.git
+
+Then use bundler to build ruby-druid:
+
+	gem install bundler
+    bundle install
+
+Next you'll need to copy the file dot_driplrc_example to .dripl and edit this file to include this line:
+    
+	options :static_setup => { 'realtime/webstream' => 'http://localhost:8083/druid/v2/' }
+
+To launch the repl, run:
+
+	bundle exec bin/dripl
+
+Now you can query the webstream example:
+
+	long_sum(:added)[-7.days].granularity(:minute)
+
+Or, to query in raw Ruby, run something like this:
+
+	bundle exec irb
+
+	client = Druid::Client.new('', {:static_setup => { 'realtime/webstream' => 'http://localhost:8083/druid/v2/' }})
+	query = Druid::Query.new('realtime/webstream').double_sum(:rows).granularity(:minute)
+	result = client.send(query)
+	puts result
+	["2013-10-03T23:29:00.000Z":{"rows"=>3124.0}, "2013-10-03T23:30:00.000Z":{"rows"=>73508.0}, "2013-10-03T23:31:00.000Z":{"rows"=>26791.0}, "2013-10-03T23:32:00.000Z":{"rows"=>29966.0}, "2013-10-03T23:33:00.000Z":{"rows"=>21450.0}]
+
+That's it! Simple enough. In our next post we'll look at building a full-blown web application over Druid.
\ No newline at end of file
diff --git a/_posts/2013-11-04-querying-your-data.md b/_posts/2013-11-04-querying-your-data.md
new file mode 100644
index 0000000..0e5d5ba
--- /dev/null
+++ b/_posts/2013-11-04-querying-your-data.md
@@ -0,0 +1,336 @@
+---
+published: true
+layout: post
+author: Russell Jurney
+tags: "#druidio #analytics #olap"
+---
+
+Before we start querying Druid, we're going to finish setting up a complete cluster on localhost. In our previous posts, we set up a Realtime node. In this tutorial we will also set up the other Druid node types: Compute, Master and Broker.
+
+## Booting a Broker Node ##
+
+1. Setup a config file at config/broker/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818837](https://gist.github.com/rjurney/5818837)
+2. Run the broker node:
+
+```bash
+java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
+-Ddruid.realtime.specFile=realtime.spec \
+-classpath services/target/druid-services-0.5.6-SNAPSHOT-selfcontained.jar:config/broker \
+com.metamx.druid.http.BrokerMain
+```
+
+## Booting a Master Node ##
+
+1. Setup a config file at config/master/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818870](https://gist.github.com/rjurney/5818870)
+2. Run the master node:
+
+```bash
+java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
+-classpath services/target/druid-services-0.5.6-SNAPSHOT-selfcontained.jar:config/master \
+com.metamx.druid.http.MasterMain
+```
+
+## Booting a Realtime Node ##
+
+1. Setup a config file at config/realtime/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818774](https://gist.github.com/rjurney/5818774)
+
+2. Setup a realtime.spec file like this: [https://gist.github.com/rjurney/5818779](https://gist.github.com/rjurney/5818779)
+3. Run the realtime node:
+
+```bash
+java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
+-Ddruid.realtime.specFile=realtime.spec \
+-classpath services/target/druid-services-0.5.6-SNAPSHOT-selfcontained.jar:config/realtime \
+com.metamx.druid.realtime.RealtimeMain
+```
+
+## Booting a Compute Node ##
+
+1. Setup a config file at config/compute/runtime.properties that looks like this: [https://gist.github.com/rjurney/5818885](https://gist.github.com/rjurney/5818885)
+2. Run the compute node:
+
+```bash
+java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
+-classpath services/target/druid-services-0.5.6-SNAPSHOT-selfcontained.jar:config/compute \
+com.metamx.druid.http.ComputeMain
+```
+
+# Querying Your Data #
+
+Now that we have a complete cluster set up on localhost, we need to load data. To do so, refer to [Loading Your Data](http://druid.io/blog/2013/08/30/loading-data.html). Having done that, it's time to query our data!
+
+## Querying Different Nodes ##
+
+Druid is a shared-nothing system, and there are three ways to query it: against a Realtime, Compute or Broker node. Querying a Realtime node returns only realtime data, while querying a Compute node returns only historical segments. Querying the Broker queries both realtime and historical segments and composes an overall result; this is the normal mode of operation for queries in Druid.
+
+### Construct a Query ###
+
+For how this query is constructed, see the section "Querying Against the realtime.spec" below.
+
+```json
+{
+    "queryType": "groupBy",
+    "dataSource": "druidtest",
+    "granularity": "all",
+    "dimensions": [],
+    "aggregations": [
+        {"type": "count", "name": "rows"},
+        {"type": "longSum", "name": "imps", "fieldName": "impressions"},
+        {"type": "doubleSum", "name": "wp", "fieldName": "wp"}
+    ],
+    "intervals": ["2010-01-01T00:00/2020-01-01T00"]
+}
+```
+
+### Querying the Realtime Node ###
+
+Run our query against port 8080:
+
+```bash
+curl -X POST "http://localhost:8080/druid/v2/?pretty" \
+-H 'content-type: application/json' -d @query.body
+```
+
+See our result:
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 5,
+    "wp" : 15000.0,
+    "rows" : 5
+  }
+} ]
+```
+
+### Querying the Compute Node ###
+Run the query against port 8082:
+
+```bash
+curl -X POST "http://localhost:8082/druid/v2/?pretty" \
+-H 'content-type: application/json' -d @query.body
+```
+
+And get (similar to):
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 27,
+    "wp" : 77000.0,
+    "rows" : 9
+  }
+} ]
+```
+
+### Querying both Nodes via the Broker ###
+Run the query against port 8083:
+
+```bash
+curl -X POST "http://localhost:8083/druid/v2/?pretty" \
+-H 'content-type: application/json' -d @query.body
+```
+
+And get:
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 5,
+    "wp" : 15000.0,
+    "rows" : 5
+  }
+} ]
+```
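The three curl commands above differ only in the port they hit. As a rough sketch of the same requests in Python (hypothetical helper code, not part of the original tutorial; the ports and the `/druid/v2/` path come from the examples above):

```python
import json

# Ports from this tutorial's localhost cluster:
# 8080 = Realtime, 8082 = Compute, 8083 = Broker.
NODE_PORTS = {"realtime": 8080, "compute": 8082, "broker": 8083}

def druid_request(node, query):
    """Build (url, headers, body) for POSTing a Druid query to a node type."""
    url = "http://localhost:%d/druid/v2/?pretty" % NODE_PORTS[node]
    headers = {"content-type": "application/json"}
    return url, headers, json.dumps(query)

query = {
    "queryType": "groupBy",
    "dataSource": "druidtest",
    "granularity": "all",
    "dimensions": [],
    "aggregations": [
        {"type": "count", "name": "rows"},
        {"type": "longSum", "name": "imps", "fieldName": "impressions"},
        {"type": "doubleSum", "name": "wp", "fieldName": "wp"},
    ],
    "intervals": ["2010-01-01T00:00/2020-01-01T00"],
}

url, headers, body = druid_request("broker", query)
# To actually send it (requires the running cluster), pass these three values
# to any HTTP client, e.g. urllib.request.Request(url, body.encode(), headers).
```

Actually sending the request, of course, requires the cluster from the earlier sections to be running.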
+
+Now that we know which nodes can be queried (although you should usually use the Broker node), let's learn what queries are available.
+
+## Querying Against the realtime.spec ##
+
+How are we to know what queries we can run? Although [Querying](http://druid.io/docs/latest/Querying.html) is a helpful index, to get a handle on querying our data we need to look at our Realtime node's realtime.spec file:
+
+```json
+[{
+  "schema" : { "dataSource":"druidtest",
+               "aggregators":[ {"type":"count", "name":"impressions"},
+                                  {"type":"doubleSum","name":"wp","fieldName":"wp"}],
+               "indexGranularity":"minute",
+           "shardSpec" : { "type": "none" } },
+  "config" : { "maxRowsInMemory" : 500000,
+               "intermediatePersistPeriod" : "PT10m" },
+  "firehose" : { "type" : "kafka-0.7.2",
+                 "consumerProps" : { "zk.connect" : "localhost:2181",
+                                     "zk.connectiontimeout.ms" : "15000",
+                                     "zk.sessiontimeout.ms" : "15000",
+                                     "zk.synctime.ms" : "5000",
+                                     "groupid" : "topic-pixel-local",
+                                     "fetch.size" : "1048586",
+                                     "autooffset.reset" : "largest",
+                                     "autocommit.enable" : "false" },
+                 "feed" : "druidtest",
+                 "parser" : { "timestampSpec" : { "column" : "utcdt", "format" : "iso" },
+                              "data" : { "format" : "json" },
+                              "dimensionExclusions" : ["wp"] } },
+  "plumber" : { "type" : "realtime",
+                "windowPeriod" : "PT10m",
+                "segmentGranularity":"hour",
+                "basePersistDirectory" : "/tmp/realtime/basePersist",
+                "rejectionPolicy": {"type": "messageTime"} }
+
+}]
+```
+
+### dataSource ###
+
+```json
+"dataSource":"druidtest"
+```
+Our dataSource tells us the name of the relation/table, or 'source of data', to query. It must match between our realtime.spec and query.body.
+
+### aggregations ###
+
+Note the [aggregations](http://druid.io/docs/latest/Aggregations.html) in our query:
+
+```json
+    "aggregations": [
+        {"type": "count", "name": "rows"},
+        {"type": "longSum", "name": "imps", "fieldName": "impressions"},
+        {"type": "doubleSum", "name": "wp", "fieldName": "wp"}
+    ],
+```
+
+These match up with the aggregators in the schema of our realtime.spec:
+
+```json
+"aggregators":[ {"type":"count", "name":"impressions"},
+                                  {"type":"doubleSum","name":"wp","fieldName":"wp"}],
+```
+
+### dimensions ###
+
+Let's look back at our actual records (from [Loading Your Data](http://druid.io/blog/2013/08/30/loading-data.html)):
+
+```json
+{"utcdt": "2010-01-01T01:01:01", "wp": 1000, "gender": "male", "age": 100}
+{"utcdt": "2010-01-01T01:01:02", "wp": 2000, "gender": "female", "age": 50}
+{"utcdt": "2010-01-01T01:01:03", "wp": 3000, "gender": "male", "age": 20}
+{"utcdt": "2010-01-01T01:01:04", "wp": 4000, "gender": "female", "age": 30}
+{"utcdt": "2010-01-01T01:01:05", "wp": 5000, "gender": "male", "age": 40}
+```
+
+Note that our data has two dimensions beyond the primary metric wp: 'gender' and 'age'. We can specify these in our query. Below, we have added the dimension age.
+
+```json
+{
+    "queryType": "groupBy",
+    "dataSource": "druidtest",
+    "granularity": "all",
+    "dimensions": ["age"],
+    "aggregations": [
+        {"type": "count", "name": "rows"},
+        {"type": "longSum", "name": "imps", "fieldName": "impressions"},
+        {"type": "doubleSum", "name": "wp", "fieldName": "wp"}
+    ],
+    "intervals": ["2010-01-01T00:00/2020-01-01T00"]
+}
+```
+
+Which gets us grouped data in return!
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 1,
+    "age" : "100",
+    "wp" : 1000.0,
+    "rows" : 1
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 1,
+    "age" : "20",
+    "wp" : 3000.0,
+    "rows" : 1
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 1,
+    "age" : "30",
+    "wp" : 4000.0,
+    "rows" : 1
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 1,
+    "age" : "40",
+    "wp" : 5000.0,
+    "rows" : 1
+  }
+}, {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 1,
+    "age" : "50",
+    "wp" : 2000.0,
+    "rows" : 1
+  }
+} ]
+```
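To see why each age comes back as its own single-row group, we can replay the aggregation by hand. A small sketch (illustrative Python, not how Druid computes it internally) over the five sample records:

```python
from collections import defaultdict

# The five sample records from "Loading Your Data".
records = [
    {"utcdt": "2010-01-01T01:01:01", "wp": 1000, "gender": "male",   "age": 100},
    {"utcdt": "2010-01-01T01:01:02", "wp": 2000, "gender": "female", "age": 50},
    {"utcdt": "2010-01-01T01:01:03", "wp": 3000, "gender": "male",   "age": 20},
    {"utcdt": "2010-01-01T01:01:04", "wp": 4000, "gender": "female", "age": 30},
    {"utcdt": "2010-01-01T01:01:05", "wp": 5000, "gender": "male",   "age": 40},
]

# groupBy on "age" with granularity "all": sum wp and count rows per age value.
groups = defaultdict(lambda: {"wp": 0.0, "rows": 0})
for r in records:
    g = groups[str(r["age"])]   # dimension values come back as strings
    g["wp"] += r["wp"]
    g["rows"] += 1

for age in sorted(groups):
    print(age, groups[age])
```

Each age value appears exactly once, so every group has one row, matching Druid's response above.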
+
+### filtering ###
+
+Now that we've observed our dimensions, we can also filter:
+
+```json
+{
+    "queryType": "groupBy",
+    "dataSource": "druidtest",
+    "granularity": "all",
+    "filter": {
+        "type": "selector",
+        "dimension": "gender",
+        "value": "male"
+    },
+    "aggregations": [
+        {"type": "count", "name": "rows"},
+        {"type": "longSum", "name": "imps", "fieldName": "impressions"},
+        {"type": "doubleSum", "name": "wp", "fieldName": "wp"}
+    ],
+    "intervals": ["2010-01-01T00:00/2020-01-01T00"]
+}
+```
+
+Which gets us only the rows where gender is 'male':
+
+```json
+[ {
+  "version" : "v1",
+  "timestamp" : "2010-01-01T00:00:00.000Z",
+  "event" : {
+    "imps" : 3,
+    "wp" : 9000.0,
+    "rows" : 3
+  }
+} ]
+```
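As a quick sanity check (an illustrative sketch, not part of the original post), applying the same selector filter to the five sample records reproduces these totals:

```python
# The five sample records from "Loading Your Data" (timestamps omitted).
records = [
    {"wp": 1000, "gender": "male",   "age": 100},
    {"wp": 2000, "gender": "female", "age": 50},
    {"wp": 3000, "gender": "male",   "age": 20},
    {"wp": 4000, "gender": "female", "age": 30},
    {"wp": 5000, "gender": "male",   "age": 40},
]

# Selector filter: keep rows where gender == "male", then aggregate.
males = [r for r in records if r["gender"] == "male"]
result = {"rows": len(males), "wp": float(sum(r["wp"] for r in males))}
print(result)  # {'rows': 3, 'wp': 9000.0}
```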
+
+Check out [Filters](http://druid.io/docs/latest/Filters.html) for more.
+
+## Learn More ##
+
+Finally, you can learn more about querying at [Querying](http://druid.io/docs/latest/Querying.html)!
diff --git a/_posts/2014-02-03-rdruid-and-twitterstream.md b/_posts/2014-02-03-rdruid-and-twitterstream.md
new file mode 100644
index 0000000..4fce8dd
--- /dev/null
+++ b/_posts/2014-02-03-rdruid-and-twitterstream.md
@@ -0,0 +1,292 @@
+---
+title: RDruid and Twitterstream
+published: true
+layout: post
+author: Igal Levy
+tags: #R #druid #analytics #querying #bigdata #datastore
+---
+
+What if you could combine a statistical analysis language with the power of an analytics database for instant insights into realtime data? You'd be able to draw conclusions from analyzing data streams at the speed of now. That's what combining the prowess of a [Druid database](http://druid.io) with the power of [R](http://www.r-project.org) can do.
+
+In this blog, we'll look at how to bring streamed realtime data into R using nothing more than a laptop, an Internet connection, and open-source applications. And we'll do it with *only one* Druid node.
+
+## What You'll Need
+
+You'll need to download and unpack [Druid](http://static.druid.io/artifacts/releases/druid-services-0.6.52-bin.tar.gz).
+
+Get the [R application](http://www.r-project.org/) for your platform.
+We also recommend using [RStudio](http://www.rstudio.com/) as the R IDE, which is what we used to run R.
+    
+You'll also need a free Twitter account to be able to get a sample of streamed Twitter data.
+    
+
+## Set Up the Twitterstream
+
+First, register with the Twitter API. Log in at the [Twitter developer's site](https://dev.twitter.com/apps/new) (you can use your normal Twitter credentials) and fill out the form for creating an application; use any website and callback URL to complete the form. 
+
+Make note of the API credentials that are then generated. Later you'll need to enter them when prompted by the Twitter-example startup script, or save them in a `twitter4j.properties` file (nicer if you ever restart the server). If using a properties file, save it under `$DRUID_HOME/examples/twitter`. The file should contain the following (using your real keys):
+
+~~~
+oauth.consumerKey=<yourTwitterConsumerKey>
+oauth.consumerSecret=<yourTwitterConsumerSecret>
+oauth.accessToken=<yourTwitterAccessToken>
+oauth.accessTokenSecret=<yourTwitterAccessTokenSecret>
+~~~
+
+
+## Start Up the Realtime Node
+
+From the Druid home directory, start the Druid Realtime node:
+
+    $DRUID_HOME/run_example_server.sh
+    
+When prompted, you'll choose the "twitter" example. If you're using the properties file, the server should start right up. Otherwise, you'll have to answer the prompts with the credentials you obtained from Twitter. 
+
+After the Realtime node starts successfully, you should see "Connected_to_Twitter" printed, as well as messages similar to the following:
+
+    2014-01-13 19:35:59,646 INFO [chief-twitterstream] druid.examples.twitter.TwitterSpritzerFirehoseFactory - nextRow() has returned 1,000 InputRows
+
+This indicates that the Druid Realtime node is ingesting tweets in realtime.
+
+
+## Set Up R
+
+Install and load the following packages:
+
+~~~
+install.packages("devtools")
+install.packages("ggplot2")
+
+library("devtools")
+
+install_github("RDruid", "metamx")
+
+library(RDruid)
+library(ggplot2)
+~~~
+
+Now tell RDruid where to find the Realtime node:
+
+```
+druid <- druid.url("localhost:8083")
+```
+
+## Querying the Realtime Node
+
+[Druid queries](http://druid.io/docs/latest/Tutorial:-All-About-Queries.html) are JSON objects, but in R they take a different form. Let's look at this with a simple query that returns the time range of the Twitter data currently in our Druid node:
+
+```
+> druid.query.timeBoundary(druid, dataSource="twitterstream", intervals=interval(ymd(20140101), ymd(20141231)), verbose="true")
+```
+
+Let's break this query down:
+
+* `druid.query.timeBoundary` &ndash; The RDruid query that finds the earliest and latest timestamps on data in Druid, within a specified interval.
+* `druid` and `dataSource` &ndash; Specify the location of the Druid node and the name of the Twitter data stream.
+* `intervals` &ndash; The interval we're looking in. Our choice is more than wide enough to encompass any data we've received from Twitter.
+* `verbose` &ndash; The response should also print the JSON object that is posted to the Realtime node, that node's HTTP response, and possibly other information besides the actual response to the query.
+
+By making this a verbose query, we can take a look at the JSON object that RDruid creates from our R query and posts to the Druid node:
+
+{
+	"dataSource" : "twitterstream",
+	"intervals" : [
+		"2014-01-01T00:00:00.000+00:00/2014-12-31T00:00:00.000+00:00"
+	],
+	"queryType" : "timeBoundary"
+}
+
+This is the type of query that Druid can understand. Now let's look at the rest of the post and response:
+
+```
+* Adding handle: conn: 0x7fa1eb723800
+* Adding handle: send: 0
+* Adding handle: recv: 0
+* Curl_addHandleToPipeline: length: 1
+* - Conn 2 (0x7fa1eb723800) send_pipe: 1, recv_pipe: 0
+* About to connect() to localhost port 8083 (#2)
+*   Trying ::1...
+* Connected to localhost (::1) port 8083 (#2)
+> POST /druid/v2/ HTTP/1.1
+Host: localhost:8083
+Accept: */*
+Accept-Encoding: gzip
+Content-Type: application/json
+Content-Length: 151
+
+* upload completely sent off: 151 out of 151 bytes
+< HTTP/1.1 200 OK
+< Content-Type: application/x-javascript
+< Transfer-Encoding: chunked
+* Server Jetty(8.1.11.v20130520) is not blacklisted
+< Server: Jetty(8.1.11.v20130520)
+< 
+* Connection #2 to host localhost left intact
+                  minTime                   maxTime 
+"2014-01-25 00:52:00 UTC" "2014-01-25 01:35:00 UTC" 
+```
+
+At the very end comes the response to our query: a minTime and maxTime, the boundaries of our data set.
+
+### More Complex Queries
+Now let's look at some real Twitter data. Say we're interested in the number of tweets per language during that time period. We need to do an aggregation via a groupBy query (see the RDruid help in RStudio):
+
+```
+druid.query.groupBy(druid, dataSource="twitterstream", 
+                    interval(ymd("2014-01-01"), ymd("2015-01-01")), 
+                    granularity=granularity("P1D"), 
+                    aggregations = (tweets = sum(metric("tweets"))), 
+                    dimensions = "lang", 
+                    verbose="true")
+```
+
+We see some new arguments in this query:
+
+* `granularity` &ndash; This sets the time period for each aggregation (in ISO 8601). Since all our data is in one day and we don't care about breaking down by hour or minute, we choose per-day granularity.
+* `aggregations` &ndash; This is where we specify and name the metrics that we're interested in summing up. We want tweets, and it just so happens that this metric is named "tweets" as it's mapped from the Twitter API, so we'll keep that name as the column head for our output.
+* `dimensions` &ndash; Here's the actual type of data we're interested in. Tweets are identified by language in their metadata (using ISO 639 language codes). We use the name of the dimension, "lang," to slice the data by language.
+
+Here's the actual output:
+
+```
+{
+	"intervals" : [
+		"2014-01-01T00:00:00.000+00:00/2015-01-01T00:00:00.000+00:00"
+	],
+	"aggregations" : [
+		{
+			"type" : "doubleSum",
+			"name" : "tweets",
+			"fieldName" : "tweets"
+		}
+	],
+	"dataSource" : "twitterstream",
+	"filter" : null,
+	"having" : null,
+	"granularity" : {
+		"type" : "period",
+		"period" : "P1D",
+		"origin" : null,
+		"timeZone" : null
+	},
+	"dimensions" : [
+		"lang"
+	],
+	"postAggregations" : null,
+	"limitSpec" : null,
+	"queryType" : "groupBy",
+	"context" : null
+}
+* Adding handle: conn: 0x7fa1eb767600
+* Adding handle: send: 0
+* Adding handle: recv: 0
+* Curl_addHandleToPipeline: length: 1
+* - Conn 3 (0x7fa1eb767600) send_pipe: 1, recv_pipe: 0
+* About to connect() to localhost port 8083 (#3)
+*   Trying ::1...
+* Connected to localhost (::1) port 8083 (#3)
+> POST /druid/v2/ HTTP/1.1
+Host: localhost:8083
+Accept: */*
+Accept-Encoding: gzip
+Content-Type: application/json
+Content-Length: 489
+
+* upload completely sent off: 489 out of 489 bytes
+< HTTP/1.1 200 OK
+< Content-Type: application/x-javascript
+< Transfer-Encoding: chunked
+* Server Jetty(8.1.11.v20130520) is not blacklisted
+< Server: Jetty(8.1.11.v20130520)
+< 
+* Connection #3 to host localhost left intact
+    timestamp tweets  lang
+1  2014-01-25   6476    ar
+2  2014-01-25      1    bg
+3  2014-01-25     22    ca
+4  2014-01-25     10    cs
+5  2014-01-25     21    da
+6  2014-01-25    311    de
+7  2014-01-25     23    el
+8  2014-01-25  74842    en
+9  2014-01-25     20 en-GB
+10 2014-01-25    690 en-gb
+11 2014-01-25  22920    es
+12 2014-01-25      2    eu
+13 2014-01-25      2    fa
+14 2014-01-25     10    fi
+15 2014-01-25     36   fil
+16 2014-01-25   1521    fr
+17 2014-01-25      9    gl
+18 2014-01-25     15    he
+19 2014-01-25      1    hi
+20 2014-01-25      5    hu
+21 2014-01-25   3809    id
+22 2014-01-25      4    in
+23 2014-01-25    256    it
+24 2014-01-25  19748    ja
+25 2014-01-25   1079    ko
+26 2014-01-25      1    ms
+27 2014-01-25     19   msa
+28 2014-01-25    243    nl
+29 2014-01-25     24    no
+30 2014-01-25    113    pl
+31 2014-01-25  12707    pt
+32 2014-01-25      3    ro
+33 2014-01-25   1606    ru
+34 2014-01-25      1    sr
+35 2014-01-25     76    sv
+36 2014-01-25    532    th
+37 2014-01-25   1415    tr
+38 2014-01-25     30    uk
+39 2014-01-25      6 xx-lc
+40 2014-01-25      1 zh-CN
+41 2014-01-25     30 zh-cn
+42 2014-01-25     34 zh-tw
+```
+
+This gives an idea of which languages dominate Twitter (at least for the given time range). For visualization, you can use a library like ggplot2. Try the `geom_bar` function to quickly produce a basic bar chart of the data. First, assign the result of the query above to a data frame (let's call it `tweet_langs` in this example), then subset it to keep languages with more than a thousand tweets:
+
+    major_tweet_langs <- subset(tweet_langs, tweets > 1000)
+
+Then create the chart:
+
+    ggplot(major_tweet_langs, aes(x=lang, y=tweets)) + geom_bar(stat="identity")
+
+You can refine this query with more aggregations and post aggregations (math within the results). For example, to find out how many rows in Druid the data for each of those languages takes, use:
+
+```
+druid.query.groupBy(druid, dataSource="twitterstream", 
+                    interval(ymd("2014-01-01"), ymd("2015-01-01")), 
+                    granularity=granularity("all"), 
+                    aggregations = list(rows = druid.count(), 
+                                        tweets = sum(metric("tweets"))), 
+                    dimensions = "lang", 
+                    verbose="true")
+```
+
+## Metrics and Dimensions
+How do you find out what metrics and dimensions are available to query? You can find the metrics in `$DRUID_HOME/examples/twitter/twitter_realtime.spec`. The dimensions are not as apparent. There's an easy way to query for them from a certain type of Druid node, but not from a Realtime node, which leaves the less-appetizing approach of digging through [code](https://github.com/metamx/druid/blob/druid-0.5.x/examples/src/main/java/druid/examples/twitter/TwitterSpritzerFirehoseFactory.java) [...]
+
+* "first_hashtag"
+* "user_time_zone"
+* "user_location"
+* "is_retweet"
+* "is_viral"
+
+Some interesting analyses on current events could be done using these dimensions and metrics. For example, you could filter on specific hashtags for events that happen to be spiking at the time:
+
+```
+druid.query.groupBy(druid, dataSource="twitterstream", 
+                interval(ymd("2014-01-01"), ymd("2015-01-01")), 
+                granularity=granularity("P1D"), 
+                aggregations = (tweets = sum(metric("tweets"))), 
+                filter =
+                    dimension("first_hashtag") %~% "academyawards" |
+                    dimension("first_hashtag") %~% "oscars",
+                dimensions   = list("first_hashtag"))
+```
+
+See the [RDruid wiki](https://github.com/metamx/RDruid/wiki/Examples) for more examples.
+
+The point to remember is that this data is being streamed into Druid and brought into R via RDruid in realtime. For example, with an R script the data could be continuously queried, updated, and analyzed. 
diff --git a/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md b/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md
new file mode 100644
index 0000000..4c7f681
--- /dev/null
+++ b/_posts/2014-02-18-hyperloglog-optimizations-for-real-world-systems.md
@@ -0,0 +1,189 @@
+---
+title: "How We Scaled HyperLogLog: Three Real-World Optimizations"
+author: NELSON RAY AND FANGJIN YANG
+image: http://metamarkets.com/wp-content/uploads/2014/02/sequoia-600x400.jpg
+layout: post
+---
+
+At Metamarkets, we specialize in converting mountains of programmatic ad data
+into real-time, explorable views. Because these datasets are so large and
+complex, we’re always looking for ways to maximize the speed and efficiency of
+how we deliver them to our clients.  In this post, we’re going to continue our
+discussion of some of the techniques we use to calculate critical metrics such
+as unique users and device IDs with maximum performance and accuracy.
+
+Approximation algorithms are rapidly gaining traction as the preferred way to
+determine the unique number of elements in high cardinality sets. In the space
+of cardinality estimation algorithms, HyperLogLog has quickly emerged as the
+de-facto standard. Widely discussed by [technology companies][pub40671] and
+[popular blogs][highscalability-count], HyperLogLog trades
+accuracy in data and query results for massive reductions in data storage and
+vastly improved [system performance][strata-talk].
+
+In our [previous][previous-hll-post] investigation of HyperLogLog, we briefly
+discussed our motivations for using approximate algorithms and how we leveraged
+HyperLogLog in [Druid][druid], Metamarkets’ open source, distributed data
+store.  Since implementing and deploying HyperLogLog last year, we’ve made
+several optimizations to further improve performance and reduce storage cost.
+This blog post will share some of those optimizations. This blog post assumes
+that you are already familiar with how HyperLogLog works. If you are not
+familiar with the algorithm, there are plenty of resources [online][flajolet].
+
+## Compacting Registers
+
+In our initial implementation of HLL, we allocated 8 bits of memory for each
+register. Recall that each value stored in a register indicates the position of
+the first ‘1’ bit of a hashed input. Given that 2^255 ~== 10^76, a single 8 bit
+register could approximate (not well, though) a cardinality close to the number
+of atoms in the entire [observable universe][atoms-in-the-universe]. Martin
+Traverso, et al. of [Facebook’s Presto][presto], realized that this was a bit
+wasteful and proposed an optimization, exploiting the fact that the registers
+increment in near lockstep.
+
+Each register is initialized with value 0, so with 0 uniques,
+there is no change in any of the registers. Let’s say we have 8 registers. Then
+with 8 * 2^10 uniques, each register will have values ~ 10. Of course, there
+will be some variance, which can be calculated exactly if one were so inclined,
+given that the distribution in each register is an independent maximum of
+[Negative Binomial][negative-binomial] (1, .5) draws.
+
+With 4 bit registers, each register can only approximate up to 2^15 = 32,768
+uniques. In fact, the reality is worse because the higher numbers cannot be
+represented and are lost, impacting accuracy. Even with 2,048 registers, we
+can’t do much better than ~60M, which is one or two orders of magnitude lower
+than what we need.
+
+Since the register values tend to increase together, the FB folks decided to
+introduce an offset counter and only store positive differences from it in the
+registers. That is, if we have register values of 8, 7, and 9, this corresponds
+to having an offset of 7 and using register difference values of 1, 0, and 2.
+Given the smallish spread that we expect to see, we typically won’t observe a
+difference of more than 15 among register values. So we feel comfortable using
+2,048 4-bit registers with an 8-bit offset, for 1025 bytes of storage, versus the
+2,048 bytes needed for plain 8-bit registers with no offset.
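The offset scheme is easy to sketch (an illustration, not Druid's actual implementation): rebase all registers on a shared offset and clip each difference to what fits in 4 bits:

```python
def offset_encode(registers):
    """Store an 8-bit offset plus 4-bit differences, clipping at 15."""
    offset = min(registers)
    return offset, [min(v - offset, 15) for v in registers]

def offset_decode(offset, diffs):
    """Recover register values (exact as long as nothing was clipped)."""
    return [offset + d for d in diffs]

regs = [8, 7, 9]
offset, diffs = offset_encode(regs)
print(offset, diffs)            # 7 [1, 0, 2]
assert offset_decode(offset, diffs) == regs

# 2,048 registers at 4 bits each is 1,024 bytes, plus one byte of offset:
print(2048 * 4 // 8 + 1)        # 1025
```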
+
+In fact, others have commented on the concentrated distribution of the register
+values as well. In her [thesis][durand-thesis], Marianne Durand suggested using
+a variable bit prefix encoding. Researchers at [Google][google-40671] have had
+success with difference encodings and variable length encodings.
+
+### Problem
+
+This optimization has served us well, with no appreciable loss in accuracy when
+streaming many uniques into a single HLL object, because the offset increments
+when all the registers get hit. Similarly, we can combine many HLL objects of
+moderate size together and watch the offsets increase. However, a curious
+phenomenon occurs when we try to combine many “small” HLL objects together.
+
+Suppose each HLL object stores a single unique value. Then its offset will be
+0, one register will have a value between 1 and 15, and the remaining registers
+will be 0. No matter how many of these we combine together, our aggregate HLL
+object will never be able to exceed a value of 15 in each register with a 0
+offset, which is equivalent to an offset of 15 with 0’s in each register. Using
+2,048 registers, this means we won’t be able to produce estimates greater than
+~ .7 * 2048^2 * 1 / (2048 / 2^15) ~ 47M. ([*Flajolet, et al. 2007*][flajolet])
+
+Not good, because this means our estimates are capped at 10^7 instead of 10^80,
+irrespective of the number of true uniques. And this isn’t just some
+pathological edge case. Its untimely appearance in production a while ago was
+no fun to fix.
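The ~47M ceiling quoted above can be reproduced with a back-of-the-envelope version of the standard HLL estimator (alpha ≈ 0.7 here is an approximation of the bias-correction constant, as in the formula above):

```python
m = 2048                      # number of registers
alpha = 0.7                   # approximate HLL bias-correction constant

# Raw HLL estimate: alpha * m^2 / sum(2^-register).
# With every register clipped at offset 0 + 15, each of the m terms is 2^-15.
max_estimate = alpha * m * m / (m * 2 ** -15)
print(max_estimate)           # roughly 47 million
```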
+
+### Floating Max
+
+The root problem in the above scenario is that the high values (&gt; 15) are
+being clipped, with no hope of making it into a “small” HLL object, since the
+offset is 0. Although they are rare, many cumulative misses can have a
+noticeably large effect. Our solution involves storing one additional pair, a
+“floating max” bucket with higher resolution. Previously, a value of 20 in
+bucket 94 would be clipped to 15. Now, we store (20, 94) as the floating max,
+requiring at most an additional 2 bytes, bringing our total up to 1027 bytes.
+With enough small HLL objects so that each position is covered by a floating
+max, the combined HLL object can exceed the previous limit of 15 in each
+position. It also turns out that just one floating max is sufficient to largely
+fix the problem.
+
+Let’s take a look at one measure of the accuracy of our approximations. We
+simulate 1,000 runs of streaming 1B uniques into an HLL object and look at the
+proportion of cases in which we observed clipping with the offset approximation
+(black) and the addition of the floating max (red). So for 1e9 uniques, the max
+reduced clipping from 95%+ to ~15%. That is, in 85% of cases, the much smaller
+HLL objects with the floating max agreed with HLL versus less than 5% without
+the floating max.
+
+![Clipping on Cardinality](http://metamarkets.com/wp-content/uploads/2014/02/FJblogpost-600x560.png "Clipping on Cardinality")
+
+For the cost of only 2 bytes, the floating max register allowed us to union
+millions of HLL objects with minimal measurable loss in accuracy.
+
+## Sparse and Dense Storage
+
+We first discussed the concept of representing HLL buckets in either a sparse
+or dense format in our [first blog post][previous-hll-post]. Since that time,
+Google has also written a [great paper][pub40671] on the matter. Data undergoes
+a [summarization process][druid-part-deux] when it is ingested in Druid. It is
+unnecessarily expensive to store raw event data and instead, Druid rolls
+ingested data up to some time granularity.
+
+![](https://lh6.googleusercontent.com/O2YefUQdRdmCTXzh6xdxthD0VJY0Vq96DTXkhhPVAL_JXaJ1JuAWfFaxZDSmf9NDZgrmHS61RMFLqivacqsOw7evy1Ff73KNb1MdjoLchpCwc-YE8d9eCLiAAA)
+
+In practice, we see tremendous reductions in data volume by summarizing our
+[data][strata-talk]. For a given summarized row, we can maintain HLL objects
+where each object represents the estimated number of unique elements for a
+column of that row.
+
+When the summarization granularity is sufficiently small, only a limited number
+of unique elements may be seen for a dimension. In this case, a given HLL
+object may have registers that contain no values. The HLL registers are thus
+‘sparsely’ populated.
+
+Our normal storage representation of HLL stores 2 register values per byte. In
+the sparse representation, we instead store the explicit indexes of buckets
+that have valid values in them as (index, value) pairs. When the sparse
+representation exceeds the size of the normal or ‘dense’ representation (1027
+bytes), we can switch to using only the dense representation. Our actual
+implementation uses a heuristic to determine when this switch occurs, but the
+idea is the same. In practice, many dimensions in real world data sets are of
+low cardinality, and this optimization can greatly reduce storage versus only
+storing the dense representation.
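A sketch of the crossover logic (the sizes here are illustrative assumptions, and Druid's real heuristic may differ): assuming each sparse (index, value) pair costs 2 bytes, the sparse form wins only while relatively few registers are occupied:

```python
DENSE_BYTES = 1027          # 1,024 bytes of registers + offset + floating max
BYTES_PER_PAIR = 2          # assumed: an 11-bit index + 4-bit value fits in 2 bytes

def storage_bytes(num_occupied):
    """Pick the cheaper of sparse (index, value) pairs vs the dense layout."""
    sparse = num_occupied * BYTES_PER_PAIR
    return min(sparse, DENSE_BYTES)

print(storage_bytes(10))     # 20 bytes: sparse is far cheaper
print(storage_bytes(2048))   # 1027 bytes: dense wins once sparse would exceed it
```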
+
+## Faster Lookups
+
+One of the simpler optimizations that we implemented for faster cardinality
+calculations was to use lookups for register values. Instead of computing the
+actual register value by summing the register offset with the stored register
+value, we perform a lookup into a precalculated map. Similarly, to
+determine the number of zeros in a register value, we created a secondary
+lookup table. Given the number of registers we have, the cost of storing these
+lookup tables is near trivial. This problem is often known as the [Hamming
+Weight problem][hamming-weight].
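The register-value lookup can be sketched as follows (illustrative, not Druid's actual code): precompute 2^-(offset + stored value) for every possible combination, so the estimator's inner loop becomes a table read instead of arithmetic:

```python
# Each stored register value is a 4-bit nibble; with an 8-bit offset, the true
# register value is offset + nibble. Precompute the inverse powers of two the
# HLL estimator sums over, for every (offset, nibble) combination.
INV_POW = [[2.0 ** -(offset + nibble) for nibble in range(16)]
           for offset in range(256)]

def harmonic_sum(offset, nibbles):
    """Sum of 2^-register over all registers, via table lookups."""
    row = INV_POW[offset]
    return sum(row[n] for n in nibbles)

print(harmonic_sum(7, [1, 0, 2]))  # 2^-8 + 2^-7 + 2^-9
```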
+
+## Lessons
+
+Many of our optimizations came out of necessity, both to provide the
+interactive query latencies that Druid users have come to expect, and to keep
+our storage costs reasonable. If you have any further improvements to our
+optimizations, please share them with us! We strongly believe that as data sets
+get increasingly larger, estimation algorithms are key to keeping query times
+acceptable. The approximate algorithm space remains relatively new, but it is
+something we can build together.
+
+For more information on Druid, please visit [druid.io][druid] and follow
+[@druidio][twitter]. We’d also like to thank Eric Tschetter and Xavier Léauté
+for their contributions to this work.  Featured image courtesy of [Donna L
+Martin][image-credits].
+
+[druid]: http://druid.io/
+[twitter]: https://twitter.com/druidio
+[pub40671]: http://research.google.com/pubs/pub40671.html
+[highscalability-count]: http://highscalability.com/blog/2012/4/5/big-data-counting-how-to-count-a-billion-distinct-objects-us.html
+[flajolet]: http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf
+[previous-hll-post]: http://metamarkets.com/2012/fast-cheap-and-98-right-cardinality-estimation-for-big-data
+[atoms-in-the-universe]: http://www.universetoday.com/36302/atoms-in-the-universe/
+[presto]: https://www.facebook.com/notes/facebook-engineering/presto-interacting-with-petabytes-of-data-at-facebook/10151786197628920
+[negative-binomial]: http://en.wikipedia.org/wiki/Negative_binomial_distribution
+[durand-thesis]: http://algo.inria.fr/durand/Articles/these.ps
+[google-40671]: http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf
+[strata-talk]: http://strataconf.com/stratany2013/public/schedule/detail/30045
+[druid-part-deux]: http://druid.io/blog/2011/05/20/druid-part-deux.html
+[hamming-weight]: http://en.wikipedia.org/wiki/Hamming_weight
+[image-credits]: http://donasdays.blogspot.com/2012/10/are-you-sprinter-or-long-distance-runner.html
diff --git a/_posts/2014-03-12-batch-ingestion.md b/_posts/2014-03-12-batch-ingestion.md
new file mode 100644
index 0000000..0dd3a97
--- /dev/null
+++ b/_posts/2014-03-12-batch-ingestion.md
@@ -0,0 +1,202 @@
+---
+title: Batch-Loading Sensor Data into Druid
+published: true
+layout: post 
+author: Igal Levy
+tags: #sensors #usgs #druid #analytics #querying #bigdata #datastore
+---
+
+Sensors are everywhere these days, and that means sensor data is big data. Ingesting and analyzing sensor data at speed is an interesting problem, especially when scale is desired. In this post, we'll access some real-world sensor data, and show how Druid can be used to store that data and make it available for immediate querying.
+
+## Finding Sensor Data
+The United States Geological Survey (USGS) has millions of sensors for all types of physical and natural phenomena, many of which concern water. If you live anywhere where water is a concern, which is pretty much everywhere (considering that both too little or too much H<sub>2</sub>O can be an issue), this is interesting data. You can learn about USGS sensors in a variety of ways, one of which is an [interactive map](http://maps.waterdata.usgs.gov/mapper/index.html).
+
+We used this map to get the sensor info for the Napa River in Napa County, California.
+
+<img src="{{ relative }}/img/map-usgs-napa.png" alt="USGS map showing Napa River sensor location and information" title="USGS Napa River Sensor Information">
+
+We decided to first import the data into [R (the statistical programming language)](http://www.r-project.org/) for two reasons:
+
+* The R package `waterData` from USGS. This package allows us to retrieve and analyze hydrologic data from USGS. We can then export that data from within the R environment, then set up Druid to ingest it.
+* The R package `RDruid` which we've [blogged about before](http://druid.io/blog/2014/02/03/rdruid-and-twitterstream.html). This package allows us to query Druid from within the R environment.
+
+## Extracting the Streamflow Data
+In R, load the waterData package, then run `importDVs()`:
+
+```r
+> install.packages("waterData")
+...
+> library(waterData)
+...
+> napa_flow <- importDVs("11458000", code="00060", stat="00003", sdate="1963-01-01", edate="2013-12-31")
+```
+The last line uses `importDVs()` from the waterData package to get sensor (or "streamgage") data directly from the USGS datasource. This function has the following arguments:
+
+* staid, or site identification number, which is entered as a string because some IDs have leading 0s. This value was obtained from the interactive map discussed above.
+* code, which specifies the type of sensor data we're interested in (if available). Our chosen code specifies measurement of discharge, in cubic feet per second. You can learn about codes at the [USGS Water Resources site](http://nwis.waterdata.usgs.gov/usa/nwis/pmcodes).
+* stat, which specifies the type of statistic we're looking for&mdash;in this case, the mean daily flow (mean is the default statistic). The USGS provides [a page summarizing various types of codes and parameters](http://help.waterdata.usgs.gov/codes-and-parameters).
+* start and end dates. 
+
+The page for a specific site and sensor should list the types of data available and the start and end dates of the full historical record.
+
+You can now analyze and visualize the streamflow data. For example, we ran:
+
+```r
+> myWater.plot <- plotParam(napa_flow)
+> print(myWater.plot)
+```
+
+to get:
+
+<img src="{{ relative }}/img/napa_streamflow_plot.png" alt="Napa River streamflow historical data" title="Napa River streamflow historical data" >
+
+Reflected in the flow of the Napa River, you can see the severe drought California experienced in the late 1970s, the very wet years that followed, a less severe drought beginning in the late 1980s, and the beginning of the current drought.
+
+## Transforming the Data for Druid
+We first want to have a look at the content of the data frame:
+
+```r
+> head(napa_flow)
+     staid val      dates qualcode
+1 11458000  90 1963-01-01        A
+2 11458000  87 1963-01-02        A
+3 11458000  85 1963-01-03        A
+4 11458000  80 1963-01-04        A
+5 11458000  76 1963-01-05        A
+6 11458000  75 1963-01-06        A
+```
+
+We have no use for the qualcode column (the [Daily Value Qualification Code](http://help.waterdata.usgs.gov/codes-and-parameters/daily-value-qualification-code-dv_rmk_cd)), so we drop it:
+
+```r
+> napa_flow_subset <- napa_flow[,c(1:3)]
+```
+
+It may look like we don't need the staid column either, since every row has the same sensor ID. However, we'll keep it because we may later want to load similar data from other sensors.
+
+Now we can export the data to a file, removing the header and row names:
+
+```r
+write.table(napa_flow_subset, file="~/napa-flow.tsv", sep="\t", col.names = F, row.names = F)
+```
+
+And here's our file:
+
+```bash
+$ head ~/napa-flow.tsv 
+"11458000"	90	1963-01-01
+"11458000"	87	1963-01-02
+"11458000"	85	1963-01-03
+"11458000"	80	1963-01-04
+"11458000"	76	1963-01-05
+"11458000"	75	1963-01-06
+"11458000"	73	1963-01-07
+"11458000"	71	1963-01-08
+"11458000"	65	1963-01-09
+"11458000"	59	1963-01-10
+```
+
+## Loading the Data into Druid
+Loading the data into Druid involves setting up Druid's indexing service to ingest the data into the Druid cluster, where specialized nodes will manage it.
+
+### Configure the Indexing Task
+Druid has an indexing service that can load data. Since there's a relatively small amount of data to ingest, we're going to use the [basic Druid indexing service](http://druid.io/docs/latest/Batch-ingestion.html) to ingest it. (Another option to ingest data uses a Hadoop cluster and is set up in a similar way, but that is more than we need for this job.) We must create a task (in JSON format) that specifies the work the indexing service will do:
+
+```json
+{
+  "type" : "index",
+  "dataSource" : "usgs",
+  "granularitySpec" : {
+    "type" : "uniform",
+    "gran" : "MONTH",
+    "intervals" : [ "1963-01-01/2013-12-31" ]
+  },
+  "aggregators" : [{
+     "type" : "count",
+     "name" : "count"
+    }, {
+     "type" : "doubleSum",
+     "name" : "avgFlowCuFtsec",
+     "fieldName" : "val"
+  }],
+  "firehose" : {
+    "type" : "local",
+    "baseDir" : "examples/usgs/",
+    "filter" : "napa-flow.tsv",
+    "parser" : {
+      "timestampSpec" : {
+        "column" : "dates"
+      },
+      "data" : {
+        "type" : "tsv",
+        "columns" : ["staid","val","dates"],
+        "dimensions" : ["staid","val"]
+      }
+    }
+  }
+}
+``` 
+
+The task is saved to a file, `usgs_index_task.json`. Note a few things about this task:
+
+* granularitySpec sets [segment](http://druid.io/docs/latest/Concepts-and-Terminology.html) granularity to MONTH, rather than using the default DAY, even though each row of our data is a daily reading. We do this to avoid having Druid create a segment per row of data. That's a lot of extra work (note the interval is "1963-01-01/2013-12-31"), and we simply don't need that much granularity to make sense of this data for a broad view. Setting the granularity to MONTH causes Druid to roll up [...]
+
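To make the effect of MONTH-granularity rollup concrete, here is a small sketch in plain Python (not Druid code, and a simplification: real Druid rollup also groups on the declared dimensions, including `val`). It applies the task's two aggregators, the `count` and the `doubleSum` on `val`, to a handful of daily rows in the same layout as `napa-flow.tsv`:

```python
from collections import defaultdict

# Daily rows in the napa-flow.tsv layout: (staid, val, date). Values are
# illustrative samples from the exported file, not the full dataset.
rows = [
    ("11458000", 90, "1963-01-01"),
    ("11458000", 87, "1963-01-02"),
    ("11458000", 85, "1963-01-03"),
    ("11458000", 80, "1963-02-01"),
    ("11458000", 76, "1963-02-02"),
]

# Roll up to MONTH granularity: truncate each timestamp to its month and
# accumulate the task's aggregators ("count" and a doubleSum over val).
rollup = defaultdict(lambda: {"count": 0, "avgFlowCuFtsec": 0.0})
for staid, val, date in rows:
    month = date[:7]  # e.g. "1963-01"
    rollup[(month, staid)]["count"] += 1
    rollup[(month, staid)]["avgFlowCuFtsec"] += val

for (month, staid), aggs in sorted(rollup.items()):
    # Mean daily flow for a month can later be derived at query time
    # as doubleSum / count.
    print(month, staid, aggs["count"], aggs["avgFlowCuFtsec"])
```

Fifty-one years of daily readings thus collapse into one stored row per month (per dimension combination), which is plenty of resolution for a broad view of the data.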
... 808818 lines suppressed ...


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org
For additional commands, e-mail: commits-help@druid.apache.org