Posted to dev@druid.apache.org by GitBox <gi...@apache.org> on 2018/07/17 16:55:41 UTC

[GitHub] fjy closed pull request #6010: Update

URL: https://github.com/apache/incubator-druid/pull/6010
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/.travis.yml b/.travis.yml
index ef27a62391b..6d9068c56b8 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,11 +1,10 @@
 language: java
 
 jdk:
-  - oraclejdk7
   - oraclejdk8
 
-after_success:
-  - mvn clean cobertura:cobertura coveralls:report -pl '!benchmarks,!distribution'
+script:
+  - mvn test -B -Pparallel-test -Dmaven.fork.count=2 && mvn clean -Pstrict compile test-compile -B
 
 sudo: false
 
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 1624072e2a2..6427d905907 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -3,9 +3,9 @@
 When submitting a pull request (PR), please use the following guidelines:
 
 - Make sure your code respects existing formatting conventions. In general, follow
-  the same coding style as the code that you are modifying. If you are using
-  IntelliJ, you can import our code style settings jar:
-  [intellij_formatting.jar](https://github.com/druid-io/druid/raw/master/intellij_formatting.jar).
+  the same coding style as the code that you are modifying.
+- For IntelliJ, you can import our code style settings XML: [druid_intellij_formatting.xml](https://github.com/druid-io/druid/raw/master/druid_intellij_formatting.xml).
+- For Eclipse, you can import our code style settings XML: [eclipse_formatting.xml](https://github.com/druid-io/druid/raw/master/eclipse_formatting.xml).
 - Do add/update documentation appropriately for the change you are making.
 - If you are introducing a new feature you may want to first submit your idea
   for feedback to the [mailing list](mailto:druid-development@googlegroups.com).
@@ -58,13 +58,13 @@ When submitting a pull request (PR), please use the following guidelines:
   git commit -a
   ```
 
-1. Periodically rebase your changes
+1. Before submitting a pull request, periodically rebase your changes
 
   ```
   git pull --rebase
   ```
 
-1. When done, combine ("squash") related commits into a single one
+1. Before submitting a pull request, combine ("squash") related commits into a single one
 
   ```
   git rebase -i upstream/master
@@ -96,24 +96,24 @@ When submitting a pull request (PR), please use the following guidelines:
 
 1. Addressing code review comments
 
-  Repeat steps 5. through 7. to address any code review comments and
-  rebase your changes if necessary.
-
-  Push your updated changes to update the pull request
+  Address code review comments by committing changes and pushing them to your feature
+  branch.
 
   ```
-  git push origin [--force] feature-xxx
+  git push origin feature-xxx
   ```
 
-  `--force` may be necessary to overwrite your existing pull request in case your
-  commit history was changed when performing the rebase.
-
-  Note: Be careful when using `--force` since you may lose data if you are not careful.
+  If your pull request shows conflicts with master, merge master into your feature branch
+  and resolve the conflicts. After resolving conflicts, push your branch again.
 
   ```
-  git push origin --force feature-xxx
+  git merge master
   ```
 
+  Avoid rebasing and force pushes after submitting a pull request, since these make it
+  difficult for reviewers to see what you've changed in response to their reviews. The Druid
+  committer that merges your change will rebase and squash it into a single commit before
+  committing it to master.
 
 # FAQ
 
diff --git a/NOTICE b/NOTICE
index 9460784b640..fb36a21373f 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,6 +1,7 @@
 Druid - a distributed column store.
 Copyright 2012-2016 Metamarkets Group Inc.
 Copyright 2015-2016 Yahoo! Inc.
+Copyright 2015-2016 Imply Data, Inc.
 
 -------------------------------------------------------------------------------
 
@@ -9,3 +10,70 @@ This product contains a modified version of Andrew Duffy's java-alphanum library
     * https://github.com/amjjd/java-alphanum/blob/5c036e2e492cc7f3b7bcdebd46b8f9e2a87927e5/LICENSE.txt (Apache License, Version 2.0)
   * HOMEPAGE:
     * https://github.com/amjjd/java-alphanum
+
+This product contains conjunctive normal form conversion code and a variance aggregator algorithm adapted from Apache Hive
+  * LICENSE:
+    * https://github.com/apache/hive/blob/branch-2.0/LICENSE (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://github.com/apache/hive
+
+This product contains variable length long deserialization code adapted from Apache Lucene
+  * LICENSE:
+    * https://github.com/apache/lucene-solr/blob/master/lucene/LICENSE.txt (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://github.com/apache/lucene-solr
+
+This product contains a modified version of Metamarkets java-util library
+  * LICENSE:
+    * https://github.com/metamx/java-util/blob/master/LICENSE (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://github.com/metamx/java-util
+  * COMMIT TAG:
+    * https://github.com/metamx/java-util/commit/826021f
+
+This product contains a modified version of TestNG 6.8.7
+  * LICENSE:
+    * http://testng.org/license/ (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * http://testng.org/
+
+This product contains a modified version of Metamarkets bytebuffer-collections library
+  * LICENSE:
+    * https://github.com/metamx/bytebuffer-collections/blob/master/LICENSE (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://github.com/metamx/bytebuffer-collections
+  * COMMIT TAG:
+    * https://github.com/metamx/bytebuffer-collections/commit/3d1e7c8
+
+This product contains SQL query planning code adapted from Apache Calcite
+  * LICENSE:
+    * https://github.com/apache/calcite/blob/master/LICENSE (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://calcite.apache.org/
+
+This product contains a modified version of Metamarkets extendedset library
+  * LICENSE:
+    * https://github.com/metamx/extendedset/blob/master/LICENSE (Apache License, Version 2.0)
+  * HOMEPAGE:
+    * https://github.com/metamx/extendedset
+  * COMMIT TAG:
+    * https://github.com/metamx/extendedset/commit/c9d647d
+
+This product contains a modified version of Alessandro Colantonio's CONCISE
+(COmpressed 'N' Composable Integer SEt) library, extending the functionality of
+ConciseSet to use IntBuffers.
+  * (c) 2010 Alessandro Colantonio
+  * <ma...@mat.uniroma3.it>
+  * <http://ricerca.mat.uniroma3.it/users/colanton>
+  * LICENSE:
+    * Apache License, Version 2.0
+  * HOMEPAGE:
+    * https://sourceforge.net/projects/concise/
+
+This product contains a modified version of The Guava Authors's Closer class from Guava library:
+ * LICENSE:
+   * https://github.com/google/guava/blob/c462d69329709f72a17a64cb229d15e76e72199c/COPYING (Apache License, Version 2.0)
+ * HOMEPAGE:
+   * https://github.com/google/guava
+ * COMMIT TAG:
+   * https://github.com/google/guava/blob/c462d69329709f72a17a64cb229d15e76e72199c
diff --git a/README.md b/README.md
index 7e2c65610d2..a0211adf7cc 100644
--- a/README.md
+++ b/README.md
@@ -1,17 +1,8 @@
 [![Build Status](https://travis-ci.org/druid-io/druid.svg?branch=master)](https://travis-ci.org/druid-io/druid) [![Coverage Status](https://coveralls.io/repos/druid-io/druid/badge.svg?branch=master)](https://coveralls.io/r/druid-io/druid?branch=master)
 
-## Druid
+## Apache Druid (incubating)
 
-Druid is a distributed, column-oriented, real-time analytics data store
-that is commonly used to power exploratory dashboards in multi-tenant
-environments.
-
-Druid excels as a data warehousing solution for fast aggregate queries on
-petabyte sized data sets. Druid supports a variety of flexible filters, exact
-calculations, approximate algorithms, and other useful calculations.
-
-Druid can load both streaming and batch data and integrates with
-Samza, Kafka, Storm, and Hadoop.
+Apache Druid (incubating) is a high performance analytics data store for event-driven data.
 
 ### License
 
@@ -23,28 +14,30 @@ More information about Druid can be found on <http://www.druid.io>.
 
 ### Documentation
 
-You can find the [latest Druid Documentation](http://druid.io/docs/latest/) on
+You can find the [documentation for the latest Druid release](http://druid.io/docs/latest/) on
 the [project website](http://druid.io/docs/latest/).
 
 If you would like to contribute documentation, please do so under
 `/docs/content` in this repository and submit a pull request.
 
-### Tutorials
+### Getting Started
 
-We have a series of tutorials to get started with Druid.  If you are just
-getting started, we suggest going over the [first Druid
-tutorial](http://druid.io/docs/latest/Tutorial:-A-First-Look-at-Druid.html).
+You can get started with Druid with our [quickstart](http://druid.io/docs/latest/tutorials/quickstart.html).
 
 ### Reporting Issues
 
-If you find any bugs, please file a [GitHub issue](https://github.com/druid-io/druid/issues).
+If you find any bugs, please file a [GitHub issue](https://github.com/apache/incubator-druid/issues).
 
 ### Community
 
 Community support is available on the [druid-user mailing
 list](https://groups.google.com/forum/#!forum/druid-user)(druid-user@googlegroups.com).
 
-Development discussions occur on the [druid-development list](https://groups.google.com/forum/#!forum/druid-development)(druid-development@googlegroups.com).
+Development discussions occur on the [druid-development list](mailto:dev@druid.apache.org) (dev@druid.apache.org).
 
 We also have a couple people hanging out on IRC in `#druid-dev` on
 `irc.freenode.net`.
+
+### Contributing
+
+Please follow the guidelines listed [here](http://druid.io/community/).
diff --git a/api/pom.xml b/api/pom.xml
new file mode 100644
index 00000000000..d494af174e4
--- /dev/null
+++ b/api/pom.xml
@@ -0,0 +1,139 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ ~ Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ ~ or more contributor license agreements.  See the NOTICE file
+ ~ distributed with this work for additional information
+ ~ regarding copyright ownership.  Metamarkets licenses this file
+ ~ to you under the Apache License, Version 2.0 (the
+ ~ "License"); you may not use this file except in compliance
+ ~ with the License.  You may obtain a copy of the License at
+ ~
+ ~   http://www.apache.org/licenses/LICENSE-2.0
+ ~
+ ~ Unless required by applicable law or agreed to in writing,
+ ~ software distributed under the License is distributed on an
+ ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ ~ KIND, either express or implied.  See the License for the
+ ~ specific language governing permissions and limitations
+ ~ under the License.
+ -->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>druid-api</artifactId>
+    <name>druid-api</name>
+    <description>Druid Extensions API</description>
+
+    <parent>
+        <groupId>io.druid</groupId>
+        <artifactId>druid</artifactId>
+        <version>0.10.0-SNAPSHOT</version>
+    </parent>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.druid</groupId>
+            <artifactId>java-util</artifactId>
+            <version>${project.parent.version}</version>
+                <exclusions>
+                    <exclusion>
+                        <groupId>org.slf4j</groupId>
+                        <artifactId>slf4j-api</artifactId>
+                    </exclusion>
+                </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>com.google.inject</groupId>
+            <artifactId>guice</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.google.inject.extensions</groupId>
+            <artifactId>guice-multibindings</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.airlift</groupId>
+            <artifactId>airline</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-annotations</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-core</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.core</groupId>
+            <artifactId>jackson-databind</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.fasterxml.jackson.dataformat</groupId>
+            <artifactId>jackson-dataformat-smile</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.hibernate</groupId>
+            <artifactId>hibernate-validator</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>javax.validation</groupId>
+            <artifactId>validation-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>com.google.code.findbugs</groupId>
+            <artifactId>jsr305</artifactId>
+        </dependency>
+
+        <!-- Tests -->
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-simple</artifactId>
+            <scope>test</scope>
+            <optional>true</optional>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-api</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-core</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-slf4j-impl</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-1.2-api</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-jul</artifactId>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-release-plugin</artifactId>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>
diff --git a/api/src/main/java/io/druid/cli/CliCommandCreator.java b/api/src/main/java/io/druid/cli/CliCommandCreator.java
new file mode 100644
index 00000000000..24b8379fe1b
--- /dev/null
+++ b/api/src/main/java/io/druid/cli/CliCommandCreator.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.cli;
+
+import io.airlift.airline.Cli;
+
+/**
+ */
+public interface CliCommandCreator
+{
+  public void addCommands(Cli.CliBuilder builder);
+}
diff --git a/api/src/main/java/io/druid/cli/CliRunnable.java b/api/src/main/java/io/druid/cli/CliRunnable.java
new file mode 100644
index 00000000000..1982e7ce1c0
--- /dev/null
+++ b/api/src/main/java/io/druid/cli/CliRunnable.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.cli;
+
+/**
+ */
+public interface CliRunnable extends Runnable
+{
+}
diff --git a/api/src/main/java/io/druid/data/input/ByteBufferInputRowParser.java b/api/src/main/java/io/druid/data/input/ByteBufferInputRowParser.java
new file mode 100644
index 00000000000..67c74d937fe
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/ByteBufferInputRowParser.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import io.druid.data.input.impl.InputRowParser;
+import io.druid.data.input.impl.ParseSpec;
+
+import java.nio.ByteBuffer;
+
+public interface ByteBufferInputRowParser extends InputRowParser<ByteBuffer>
+{
+  @Override
+  public ByteBufferInputRowParser withParseSpec(ParseSpec parseSpec);
+}
diff --git a/api/src/main/java/io/druid/data/input/Committer.java b/api/src/main/java/io/druid/data/input/Committer.java
new file mode 100644
index 00000000000..04dbe96707e
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/Committer.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+/**
+ * Committer includes a Runnable and a Jackson-serialized metadata object containing the offset
+ */
+public interface Committer extends Runnable
+{
+    /**
+     * @return the commit metadata object, which will be serialized and
+     * deserialized by Jackson.
+     * Commit metadata can be a complex type, but we recommend keeping it to List/Map/"Primitive JSON" types
+     * */
+    public Object getMetadata();
+}
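
A minimal sketch of the contract described above: a hypothetical Committer that carries a stream offset as its metadata. The offset field and the map layout are illustrative assumptions, not part of this PR.

```java
import io.druid.data.input.Committer;

import java.util.Collections;

// Hypothetical example: a Committer whose metadata is a one-entry offset map.
public class OffsetCommitter implements Committer
{
  private final long offset; // assumed: highest offset read so far

  public OffsetCommitter(long offset)
  {
    this.offset = offset;
  }

  @Override
  public Object getMetadata()
  {
    // Kept to "Primitive JSON" types, as the javadoc above recommends.
    return Collections.singletonMap("offset", offset);
  }

  @Override
  public void run()
  {
    // Invoked after the batch is persisted; a real implementation might
    // acknowledge the source or release buffered data here.
  }
}
```
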
diff --git a/api/src/main/java/io/druid/data/input/Firehose.java b/api/src/main/java/io/druid/data/input/Firehose.java
new file mode 100644
index 00000000000..a768e778d81
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/Firehose.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import java.io.Closeable;
+
+/**
+ * This is an interface that holds onto the stream of incoming data.  Realtime data ingestion is built around this
+ * abstraction.  In order to add a new type of source for realtime data ingestion, all you need to do is implement
+ * one of these and register it with the Main.
+ *
+ * This object acts a lot like an Iterator, but it doesn't extend the Iterator interface because it extends
+ * Closeable and it is very important that the close() method doesn't get forgotten, which is easy to do if this
+ * gets passed around as an Iterator.
+ * <p>
+ * The implementation of this interface only needs to be minimally thread-safe. The three methods ##hasMore(),
+ * ##nextRow() and ##commit() are all called from the same thread.  ##commit(), however, returns a callback
+ * which will be called on another thread, so the operations inside of that callback must be thread-safe.
+ * </p>
+ */
+public interface Firehose extends Closeable
+{
+  /**
+   * Returns whether there are more rows to process.  This is used to indicate that another item is immediately
+   * available via ##nextRow().  Thus, if the stream is still available but there are no new messages on it, this call
+   * should block until a new message is available.
+   *
+   * If something happens such that the stream is no longer available, this should return false.
+   *
+   * @return true if and when there is another row available, false if the stream has dried up
+   */
+  public boolean hasMore();
+
+  /**
+   * The next row available.  Should only be called if hasMore returns true.
+   *
+   * @return The next row
+   */
+  public InputRow nextRow();
+
+  /**
+   * Returns a runnable that will "commit" everything read up to the point at which commit() is called.  This is
+   * often equivalent to everything that has been read since the last commit() call (or instantiation of the object),
+   * but doesn't necessarily have to be.
+   *
+   * This method is called when the main processing loop starts to persist its current batch of things to process.
+   * The returned runnable will be run when the current batch has been successfully persisted, there is usually
+   * some time lag between when this method is called and when the runnable is run.  The Runnable is also run on
+   * a separate thread so its operation should be thread-safe.
+   *
+   * The Runnable is essentially just a lambda/closure that is run() after data supplied by this instance has
+   * been committed on the writer side of this interface protocol.
+   * <p>
+   * A simple implementation of this interface might do nothing when run() is called 
+   * (in which case the same do-nothing instance can be returned every time), or 
+   * a more complex implementation might clean up temporary resources that are no longer needed 
+   * because of InputRows delivered by prior calls to ##nextRow().
+   * </p>
+   */
+  public Runnable commit();
+}
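
To make the protocol concrete, here is a minimal sketch (not part of this PR) of a Firehose backed by an in-memory iterator. It returns the same do-nothing commit Runnable every time, as the commit() javadoc permits for simple implementations.

```java
import io.druid.data.input.Firehose;
import io.druid.data.input.InputRow;

import java.io.IOException;
import java.util.Iterator;

// Illustrative only: a Firehose over a fixed, in-memory set of rows.
public class IteratorFirehose implements Firehose
{
  private static final Runnable NOOP = new Runnable()
  {
    @Override
    public void run()
    {
      // Nothing to clean up; the same do-nothing instance is reused.
    }
  };

  private final Iterator<InputRow> rows;

  public IteratorFirehose(Iterator<InputRow> rows)
  {
    this.rows = rows;
  }

  @Override
  public boolean hasMore()
  {
    return rows.hasNext(); // never blocks, since the stream is finite
  }

  @Override
  public InputRow nextRow()
  {
    return rows.next(); // only valid after hasMore() returns true
  }

  @Override
  public Runnable commit()
  {
    return NOOP;
  }

  @Override
  public void close() throws IOException
  {
    // No resources to release in this sketch.
  }
}
```
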
diff --git a/api/src/main/java/io/druid/data/input/FirehoseFactory.java b/api/src/main/java/io/druid/data/input/FirehoseFactory.java
new file mode 100644
index 00000000000..43cbf01111f
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/FirehoseFactory.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+
+import io.druid.data.input.impl.InputRowParser;
+import io.druid.java.util.common.parsers.ParseException;
+
+import java.io.IOException;
+
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
+public interface FirehoseFactory<T extends InputRowParser>
+{
+  /**
+   * Initialization method that connects up the fire hose.  If this method returns successfully it should be safe to
+   * call hasMore() on the returned Firehose (which might subsequently block).
+   * <p/>
+   * If this method returns null, then any attempt to call hasMore(), nextRow(), commit() and close() on the return
+   * value will throw a surprising NPE.   Throwing IOException on connection failure or runtime exception on
+   * invalid configuration is preferred over returning null.
+   */
+  public Firehose connect(T parser) throws IOException, ParseException;
+
+}
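
The consuming side of the contract can be sketched as follows; the factory and parser are assumed to come from configuration, and the println is a stand-in for real row processing.

```java
import io.druid.data.input.Firehose;
import io.druid.data.input.FirehoseFactory;
import io.druid.data.input.InputRow;
import io.druid.data.input.impl.InputRowParser;
import io.druid.java.util.common.parsers.ParseException;

import java.io.IOException;

public class FirehoseConsumer
{
  // Sketch of a consumer honoring the connect()/hasMore()/nextRow()/commit()
  // contract described above.
  public static void consume(FirehoseFactory<InputRowParser> factory, InputRowParser parser)
      throws IOException, ParseException
  {
    try (Firehose firehose = factory.connect(parser)) {
      while (firehose.hasMore()) {
        InputRow row = firehose.nextRow();
        System.out.println(row);  // stand-in for actual row processing
      }
      firehose.commit().run();    // in real usage, run only after a successful persist
    }
  }
}
```
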
diff --git a/api/src/main/java/io/druid/data/input/FirehoseFactoryV2.java b/api/src/main/java/io/druid/data/input/FirehoseFactoryV2.java
new file mode 100644
index 00000000000..a0fc5e2468f
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/FirehoseFactoryV2.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+
+import io.druid.data.input.impl.InputRowParser;
+import io.druid.java.util.common.parsers.ParseException;
+
+import java.io.IOException;
+/**
+ * Initialization method that connects up the FirehoseV2.  If this method returns successfully it should be safe to
+ * call start() on the returned FirehoseV2 (which might subsequently block).
+ *
+ * In contrast to the V1 version, FirehoseFactoryV2 can pass an additional JSON-serialized object to FirehoseV2,
+ * which contains the last commit metadata.
+ *
+ * <p/>
+ * If this method returns null, then any attempt to call start(), advance(), currRow(), makeCommitter() and close() on the return
+ * value will throw a surprising NPE.   Throwing IOException on connection failure or runtime exception on
+ * invalid configuration is preferred over returning null.
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
+public interface FirehoseFactoryV2<T extends InputRowParser>
+{
+  public FirehoseV2 connect(T parser, Object lastCommit) throws IOException, ParseException;
+
+}
diff --git a/api/src/main/java/io/druid/data/input/FirehoseV2.java b/api/src/main/java/io/druid/data/input/FirehoseV2.java
new file mode 100644
index 00000000000..69ead44276d
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/FirehoseV2.java
@@ -0,0 +1,88 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import java.io.Closeable;
+/**
+ * This is an interface that holds onto the stream of incoming data.  Realtime data ingestion is built around this
+ * abstraction.  In order to add a new type of source for realtime data ingestion, all you need to do is implement
+ * one of these and register it with the Main.
+ *
+ * In contrast to the V1 Firehose, FirehoseV2 always operates in a "peek, then advance" manner.
+ * The intended usage pattern is:
+ * 1. Call start()
+ * 2. Read currRow()
+ * 3. Call advance()
+ * 4. If index should be committed: commit()
+ * 5. GOTO 2
+ *
+ * Note that commit() is being called *after* advance.
+ * 
+ * This object acts a lot like an Iterator, but it doesn't extend the Iterator interface because it extends
+ * Closeable and it is very important that the close() method doesn't get forgotten, which is easy to do if this
+ * gets passed around as an Iterator.
+ * <p>
+ * The implementation of this interface only needs to be minimally thread-safe. The methods ##start(), ##advance(),
+ * ##currRow() and ##makeCommitter() are all called from the same thread.  ##makeCommitter(), however, returns a callback
+ * which will be called on another thread, so the operations inside of that callback must be thread-safe.
+ * </p>
+ */
+public interface FirehoseV2 extends Closeable
+{
+    /**
+     * For initial start
+     * */
+    void start() throws Exception;
+
+    /**
+     * Advance the firehose to the next offset.  Implementations of this interface should make sure that
+     * if advance() throws an exception, the next call to currRow() returns a
+     * null value.
+     * 
+     * @return true if and when there is another row available, false if the stream has dried up
+     */
+    public boolean advance();
+
+    /**
+     * @return The current row
+     */
+    public InputRow currRow();
+
+    /**
+     * Returns a Committer that will "commit" everything read up to the point at which makeCommitter() is called.
+     *
+     * This method is called when the main processing loop starts to persist its current batch of things to process.
+     * The returned committer will be run when the current batch has been successfully persisted
+     * and the metadata the committer carries can also be persisted along with segment data. There is usually
+     * some time lag between when this method is called and when the runnable is run.  The Runnable is also run on
+     * a separate thread so its operation should be thread-safe.
+     * 
+     * Note that "correct" usage of this interface will always call advance() before commit() if the current row
+     * is considered in the commit.
+     *
+     * The Runnable is essentially just a lambda/closure that is run() after data supplied by this instance has
+     * been committed on the writer side of this interface protocol.
+     * <p>
+     * A simple implementation of this interface might do nothing when run() is called,
+     * and save proper commit information in metadata
+     * </p>
+     */
+    public Committer makeCommitter();
+}
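
The numbered usage pattern above, written out as a loop. This consumer is a sketch: the processing step is a stand-in, and real code would only run the Committer after its batch is safely persisted.

```java
import io.druid.data.input.Committer;
import io.druid.data.input.FirehoseV2;
import io.druid.data.input.InputRow;

public class FirehoseV2Consumer
{
  // Sketch of the "peek, then advance" loop documented above: start(), read
  // currRow(), advance(), then commit if desired, and repeat. makeCommitter()
  // is called only after advance(), as the interface requires.
  public static void consume(FirehoseV2 firehose) throws Exception
  {
    try {
      firehose.start();
      boolean hasMore = true;
      while (hasMore) {
        InputRow row = firehose.currRow();
        System.out.println(row);     // stand-in for real row processing
        hasMore = firehose.advance();
        Committer committer = firehose.makeCommitter();
        committer.run();             // in real usage, run after a persist succeeds
      }
    }
    finally {
      firehose.close();
    }
  }
}
```
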
diff --git a/api/src/main/java/io/druid/data/input/InputRow.java b/api/src/main/java/io/druid/data/input/InputRow.java
new file mode 100644
index 00000000000..40164571bc1
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/InputRow.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import java.util.List;
+
+/**
+ * An InputRow is the interface definition of an event being input into the data ingestion layer.
+ *
+ * An InputRow is a Row with a self-describing list of the dimensions available.  This list is used to
+ * implement "schema-less" data ingestion that allows the system to add new dimensions as they appear.
+ *
+ */
+public interface InputRow extends Row
+{
+  /**
+   * Returns the dimensions that exist in this row.
+   *
+   * @return the dimensions that exist in this row.
+   */
+  public List<String> getDimensions();
+}
diff --git a/api/src/main/java/io/druid/data/input/MapBasedInputRow.java b/api/src/main/java/io/druid/data/input/MapBasedInputRow.java
new file mode 100644
index 00000000000..61fe512e2fc
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/MapBasedInputRow.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import org.joda.time.DateTime;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ */
+public class MapBasedInputRow extends MapBasedRow implements InputRow
+{
+  private final List<String> dimensions;
+
+  public MapBasedInputRow(
+      long timestamp,
+      List<String> dimensions,
+      Map<String, Object> event
+  )
+  {
+    super(timestamp, event);
+    this.dimensions = dimensions;
+  }
+
+  public MapBasedInputRow(
+      DateTime timestamp,
+      List<String> dimensions,
+      Map<String, Object> event
+  )
+  {
+    super(timestamp, event);
+    this.dimensions = dimensions;
+  }
+
+  @Override
+  public List<String> getDimensions()
+  {
+    return dimensions;
+  }
+
+  @Override
+  public String toString()
+  {
+    return "MapBasedInputRow{" +
+           "timestamp=" + new DateTime(getTimestampFromEpoch()) +
+           ", event=" + getEvent() +
+           ", dimensions=" + dimensions +
+           '}';
+  }
+}
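
For reference, constructing such a row directly; the event values below are made up for illustration.

```java
import com.google.common.collect.ImmutableMap;
import io.druid.data.input.InputRow;
import io.druid.data.input.MapBasedInputRow;

import java.util.Arrays;
import java.util.Map;

public class MapBasedInputRowDemo
{
  public static void main(String[] args)
  {
    Map<String, Object> event = ImmutableMap.<String, Object>of(
        "page", "Foo",
        "language", "en",
        "added", 25
    );
    // Dimensions are self-described per row, enabling "schema-less" ingestion.
    InputRow row = new MapBasedInputRow(
        System.currentTimeMillis(),
        Arrays.asList("page", "language"),
        event
    );
    System.out.println(row.getDimensions());     // [page, language]
    System.out.println(row.getDimension("page")); // [Foo]
  }
}
```
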
diff --git a/api/src/main/java/io/druid/data/input/MapBasedRow.java b/api/src/main/java/io/druid/data/input/MapBasedRow.java
new file mode 100644
index 00000000000..77e4fd9f3d3
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/MapBasedRow.java
@@ -0,0 +1,205 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
+import io.druid.java.util.common.logger.Logger;
+import io.druid.java.util.common.parsers.ParseException;
+
+import org.joda.time.DateTime;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.regex.Pattern;
+
+/**
+ */
+public class MapBasedRow implements Row
+{
+  private static final Logger log = new Logger(MapBasedRow.class);
+  private static final Function<Object, String> TO_STRING_INCLUDING_NULL = new Function<Object, String>() {
+    @Override
+    public String apply(final Object o)
+    {
+      return String.valueOf(o);
+    }
+  };
+
+  private final DateTime timestamp;
+  private final Map<String, Object> event;
+
+  private static final Pattern LONG_PAT = Pattern.compile("[-|+]?\\d+");
+
+  @JsonCreator
+  public MapBasedRow(
+      @JsonProperty("timestamp") DateTime timestamp,
+      @JsonProperty("event") Map<String, Object> event
+  )
+  {
+    this.timestamp = timestamp;
+    this.event = event;
+  }
+
+  public MapBasedRow(
+      long timestamp,
+      Map<String, Object> event
+  )
+  {
+    this(new DateTime(timestamp), event);
+  }
+
+  @Override
+  public long getTimestampFromEpoch()
+  {
+    return timestamp.getMillis();
+  }
+
+  @JsonProperty
+  public DateTime getTimestamp()
+  {
+    return timestamp;
+  }
+
+  @JsonProperty
+  public Map<String, Object> getEvent()
+  {
+    return event;
+  }
+
+  @Override
+  public List<String> getDimension(String dimension)
+  {
+    final Object dimValue = event.get(dimension);
+
+    if (dimValue == null) {
+      return Collections.emptyList();
+    } else if (dimValue instanceof List) {
+      // guava's toString function fails on null objects, so please do not use it
+      return Lists.transform(
+          (List) dimValue,
+          TO_STRING_INCLUDING_NULL);
+    } else {
+      return Collections.singletonList(String.valueOf(dimValue));
+    }
+  }
+
+  @Override
+  public Object getRaw(String dimension)
+  {
+    return event.get(dimension);
+  }
+
+  @Override
+  public float getFloatMetric(String metric)
+  {
+    Object metricValue = event.get(metric);
+
+    if (metricValue == null) {
+      return 0.0f;
+    }
+
+    if (metricValue instanceof Number) {
+      return ((Number) metricValue).floatValue();
+    } else if (metricValue instanceof String) {
+      try {
+        return Float.valueOf(((String) metricValue).replace(",", ""));
+      }
+      catch (Exception e) {
+        throw new ParseException(e, "Unable to parse metrics[%s], value[%s]", metric, metricValue);
+      }
+    } else {
+      throw new ParseException("Unknown type[%s]", metricValue.getClass());
+    }
+  }
+
+  @Override
+  public long getLongMetric(String metric)
+  {
+    Object metricValue = event.get(metric);
+
+    if (metricValue == null) {
+      return 0L;
+    }
+
+    if (metricValue instanceof Number) {
+      return ((Number) metricValue).longValue();
+    } else if (metricValue instanceof String) {
+      try {
+        String s = ((String) metricValue).replace(",", "");
+        return LONG_PAT.matcher(s).matches() ? Long.valueOf(s) : Double.valueOf(s).longValue();
+      }
+      catch (Exception e) {
+        throw new ParseException(e, "Unable to parse metrics[%s], value[%s]", metric, metricValue);
+      }
+    } else {
+      throw new ParseException("Unknown type[%s]", metricValue.getClass());
+    }
+  }
+
+  @Override
+  public String toString()
+  {
+    return "MapBasedRow{" +
+           "timestamp=" + timestamp +
+           ", event=" + event +
+           '}';
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    MapBasedRow that = (MapBasedRow) o;
+
+    if (!event.equals(that.event)) {
+      return false;
+    }
+    if (!timestamp.equals(that.timestamp)) {
+      return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = timestamp.hashCode();
+    result = 31 * result + event.hashCode();
+    return result;
+  }
+
+  @Override
+  public int compareTo(Row o)
+  {
+    return timestamp.compareTo(o.getTimestamp());
+  }
+}
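
A short demonstration of the metric coercion rules implemented above: nulls become zero, comma-grouped numeric strings are cleaned before parsing, and decimal strings are parsed as Double and truncated. Values are illustrative.

```java
import com.google.common.collect.ImmutableMap;
import io.druid.data.input.MapBasedRow;

public class MetricCoercionDemo
{
  public static void main(String[] args)
  {
    MapBasedRow row = new MapBasedRow(
        System.currentTimeMillis(),
        ImmutableMap.<String, Object>of("views", "1,234", "score", "2.9")
    );
    System.out.println(row.getLongMetric("views"));   // 1234 (comma stripped)
    System.out.println(row.getLongMetric("score"));   // 2    (parsed as Double, truncated)
    System.out.println(row.getFloatMetric("score"));  // 2.9
    System.out.println(row.getLongMetric("missing")); // 0    (null -> 0L)
  }
}
```
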
diff --git a/api/src/main/java/io/druid/data/input/Row.java b/api/src/main/java/io/druid/data/input/Row.java
new file mode 100644
index 00000000000..2c3daa2afba
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/Row.java
@@ -0,0 +1,93 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import org.joda.time.DateTime;
+
+import java.util.List;
+
+/**
+ * A Row of data.  This can be used for both input and output into various parts of the system.  It assumes
+ * that the user already knows the schema of the row and can query for the parts that they care about.
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "version", defaultImpl = MapBasedRow.class)
+@JsonSubTypes(value = {
+    @JsonSubTypes.Type(name = "v1", value = MapBasedRow.class)
+})
+public interface Row extends Comparable<Row>
+{
+  /**
+   * Returns the timestamp from the epoch in milliseconds.  If the event happened _right now_, this would return the
+   * same thing as System.currentTimeMillis();
+   *
+   * @return the timestamp from the epoch in milliseconds.
+   */
+  public long getTimestampFromEpoch();
+
+  /**
+   * Returns the timestamp from the epoch as an org.joda.time.DateTime.  If the event happened _right now_, this would return the
+   * same thing as new DateTime();
+   *
+   * @return the timestamp from the epoch as an org.joda.time.DateTime object.
+   */
+  public DateTime getTimestamp();
+
+  /**
+   * Returns the list of dimension values for the given column name.
+   * <p/>
+   *
+   * @param dimension the column name of the dimension requested
+   *
+   * @return the list of values for the provided column name
+   */
+  public List<String> getDimension(String dimension);
+
+  /**
+   * Returns the raw dimension value for the given column name. This is different from #getDimension, which
+   * converts all values to strings before returning them.
+   *
+   * @param dimension the column name of the dimension requested
+   *
+   * @return the value of the provided column name
+   */
+  public Object getRaw(String dimension);
+
+  /**
+   * Returns the float value of the given metric column.
+   * <p/>
+   *
+   * @param metric the column name of the metric requested
+   *
+   * @return the float value for the provided column name.
+   */
+  public float getFloatMetric(String metric);
+
+  /**
+   * Returns the long value of the given metric column.
+   * <p/>
+   *
+   * @param metric the column name of the metric requested
+   *
+   * @return the long value for the provided column name.
+   */
+  public long getLongMetric(String metric);
+}
diff --git a/api/src/main/java/io/druid/data/input/Rows.java b/api/src/main/java/io/druid/data/input/Rows.java
new file mode 100644
index 00000000000..05e1aeec4f9
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/Rows.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableSortedSet;
+import com.google.common.collect.Maps;
+
+import io.druid.java.util.common.ISE;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+
+/**
+ */
+public class Rows
+{
+  public static InputRow toCaseInsensitiveInputRow(final Row row, final List<String> dimensions)
+  {
+    if (row instanceof MapBasedRow) {
+      MapBasedRow mapBasedRow = (MapBasedRow) row;
+
+      TreeMap<String, Object> caseInsensitiveMap = Maps.newTreeMap(String.CASE_INSENSITIVE_ORDER);
+      caseInsensitiveMap.putAll(mapBasedRow.getEvent());
+      return new MapBasedInputRow(
+          mapBasedRow.getTimestamp(),
+          dimensions,
+          caseInsensitiveMap
+      );
+    }
+    throw new ISE("Can only convert MapBasedRow objects because we are ghetto like that.");
+  }
+
+  /**
+   * @param timeStamp rollup up timestamp to be used to create group key
+   * @param inputRow input row
+   * @return groupKey for the given input row
+   */
+  public static List<Object> toGroupKey(long timeStamp, InputRow inputRow)
+  {
+    final Map<String, Set<String>> dims = Maps.newTreeMap();
+    for (final String dim : inputRow.getDimensions()) {
+      final Set<String> dimValues = ImmutableSortedSet.copyOf(inputRow.getDimension(dim));
+      if (dimValues.size() > 0) {
+        dims.put(dim, dimValues);
+      }
+    }
+    return ImmutableList.of(
+        timeStamp,
+        dims
+    );
+  }
+}
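
An illustration of toGroupKey(): multi-valued dimensions are deduplicated and sorted, and dimensions with no values are dropped from the key. The inputs are made up.

```java
import com.google.common.collect.ImmutableMap;
import io.druid.data.input.MapBasedInputRow;
import io.druid.data.input.Rows;

import java.util.Arrays;

public class GroupKeyDemo
{
  public static void main(String[] args)
  {
    MapBasedInputRow row = new MapBasedInputRow(
        0L,
        Arrays.asList("tags", "empty"),
        ImmutableMap.<String, Object>of("tags", Arrays.asList("b", "a", "b"))
    );
    // "tags" is deduplicated and sorted; "empty" has no values, so it is dropped.
    System.out.println(Rows.toGroupKey(0L, row)); // [0, {tags=[a, b]}]
  }
}
```
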
diff --git a/api/src/main/java/io/druid/data/input/impl/CSVParseSpec.java b/api/src/main/java/io/druid/data/input/impl/CSVParseSpec.java
new file mode 100644
index 00000000000..bbe1fc4d228
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/CSVParseSpec.java
@@ -0,0 +1,102 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import io.druid.java.util.common.parsers.CSVParser;
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.List;
+
+/**
+ */
+public class CSVParseSpec extends ParseSpec
+{
+  private final String listDelimiter;
+  private final List<String> columns;
+
+  @JsonCreator
+  public CSVParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
+      @JsonProperty("listDelimiter") String listDelimiter,
+      @JsonProperty("columns") List<String> columns
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+
+    this.listDelimiter = listDelimiter;
+    Preconditions.checkNotNull(columns, "columns");
+    for (String column : columns) {
+      Preconditions.checkArgument(!column.contains(","), "Column[%s] has a comma, it cannot", column);
+    }
+
+    this.columns = columns;
+
+    verify(dimensionsSpec.getDimensionNames());
+  }
+
+  @JsonProperty
+  public String getListDelimiter()
+  {
+    return listDelimiter;
+  }
+
+  @JsonProperty("columns")
+  public List<String> getColumns()
+  {
+    return columns;
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+    for (String columnName : usedCols) {
+      Preconditions.checkArgument(columns.contains(columnName), "column[%s] not in columns.", columnName);
+    }
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    return new CSVParser(Optional.fromNullable(listDelimiter), columns);
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new CSVParseSpec(spec, getDimensionsSpec(), listDelimiter, columns);
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new CSVParseSpec(getTimestampSpec(), spec, listDelimiter, columns);
+  }
+
+  public ParseSpec withColumns(List<String> cols)
+  {
+    return new CSVParseSpec(getTimestampSpec(), getDimensionsSpec(), listDelimiter, cols);
+  }
+}
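
A hedged construction example. The TimestampSpec and DimensionsSpec classes used below are part of druid-api but are not shown in this diff, so their constructor signatures (and Parser.parse returning a Map) are assumptions based on the 0.10.0-era code, not something this PR establishes.

```java
import io.druid.data.input.impl.CSVParseSpec;
import io.druid.data.input.impl.DimensionsSpec;
import io.druid.data.input.impl.TimestampSpec;
import io.druid.java.util.common.parsers.Parser;

import java.util.Arrays;
import java.util.Map;

public class CsvSpecDemo
{
  public static void main(String[] args)
  {
    // Assumed signatures: TimestampSpec(column, format, missingValue) and
    // DimensionsSpec.getDefaultSchemas(...) from druid-api, not shown in this diff.
    CSVParseSpec spec = new CSVParseSpec(
        new TimestampSpec("timestamp", "auto", null),
        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("page")), null, null),
        null,                                        // no list delimiter
        Arrays.asList("timestamp", "page", "added")  // columns; must cover all dimensions
    );
    Parser<String, Object> parser = spec.makeParser();
    Map<String, Object> event = parser.parse("2016-01-01T00:00:00Z,Foo,25");
    System.out.println(event); // {timestamp=2016-01-01T00:00:00Z, page=Foo, added=25}
  }
}
```
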
diff --git a/api/src/main/java/io/druid/data/input/impl/DelimitedParseSpec.java b/api/src/main/java/io/druid/data/input/impl/DelimitedParseSpec.java
new file mode 100644
index 00000000000..6d7096d7921
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/DelimitedParseSpec.java
@@ -0,0 +1,125 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import io.druid.java.util.common.parsers.DelimitedParser;
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.List;
+
+/**
+ */
+public class DelimitedParseSpec extends ParseSpec
+{
+  private final String delimiter;
+  private final String listDelimiter;
+  private final List<String> columns;
+
+  @JsonCreator
+  public DelimitedParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
+      @JsonProperty("delimiter") String delimiter,
+      @JsonProperty("listDelimiter") String listDelimiter,
+      @JsonProperty("columns") List<String> columns
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+
+    this.delimiter = delimiter;
+    this.listDelimiter = listDelimiter;
+    Preconditions.checkNotNull(columns, "columns");
+    this.columns = columns;
+    for (String column : this.columns) {
+      Preconditions.checkArgument(!column.contains(","), "Column[%s] has a comma, it cannot", column);
+    }
+
+    verify(dimensionsSpec.getDimensionNames());
+  }
+
+  @JsonProperty("delimiter")
+  public String getDelimiter()
+  {
+    return delimiter;
+  }
+
+  @JsonProperty("listDelimiter")
+  public String getListDelimiter()
+  {
+    return listDelimiter;
+  }
+
+  @JsonProperty("columns")
+  public List<String> getColumns()
+  {
+    return columns;
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+    for (String columnName : usedCols) {
+      Preconditions.checkArgument(columns.contains(columnName), "column[%s] not in columns.", columnName);
+    }
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    Parser<String, Object> retVal = new DelimitedParser(
+        Optional.fromNullable(delimiter),
+        Optional.fromNullable(listDelimiter)
+    );
+    retVal.setFieldNames(columns);
+    return retVal;
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new DelimitedParseSpec(spec, getDimensionsSpec(), delimiter, listDelimiter, columns);
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new DelimitedParseSpec(getTimestampSpec(), spec, delimiter, listDelimiter, columns);
+  }
+
+  public ParseSpec withDelimiter(String delim)
+  {
+    return new DelimitedParseSpec(getTimestampSpec(), getDimensionsSpec(), delim, listDelimiter, columns);
+  }
+
+  public ParseSpec withListDelimiter(String delim)
+  {
+    return new DelimitedParseSpec(getTimestampSpec(), getDimensionsSpec(), delimiter, delim, columns);
+  }
+
+  public ParseSpec withColumns(List<String> cols)
+  {
+    return new DelimitedParseSpec(getTimestampSpec(), getDimensionsSpec(), delimiter, listDelimiter, cols);
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/DimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/DimensionSchema.java
new file mode 100644
index 00000000000..69dbb0cd0cb
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/DimensionSchema.java
@@ -0,0 +1,152 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.fasterxml.jackson.annotation.JsonValue;
+import com.google.common.base.Preconditions;
+
+/**
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = StringDimensionSchema.class)
+@JsonSubTypes(value = {
+    @JsonSubTypes.Type(name = DimensionSchema.STRING_TYPE_NAME, value = StringDimensionSchema.class),
+    @JsonSubTypes.Type(name = DimensionSchema.LONG_TYPE_NAME, value = LongDimensionSchema.class),
+    @JsonSubTypes.Type(name = DimensionSchema.FLOAT_TYPE_NAME, value = FloatDimensionSchema.class),
+    @JsonSubTypes.Type(name = DimensionSchema.SPATIAL_TYPE_NAME, value = NewSpatialDimensionSchema.class),
+})
+public abstract class DimensionSchema
+{
+  public static final String STRING_TYPE_NAME = "string";
+  public static final String LONG_TYPE_NAME = "long";
+  public static final String FLOAT_TYPE_NAME = "float";
+  public static final String SPATIAL_TYPE_NAME = "spatial";
+
+
+  // main druid and druid-api should really use the same ValueType enum.
+  // merge them when druid-api is merged back into the main repo
+  public enum ValueType
+  {
+    FLOAT,
+    LONG,
+    STRING,
+    COMPLEX;
+
+    @JsonValue
+    @Override
+    public String toString()
+    {
+      return this.name().toUpperCase();
+    }
+
+    @JsonCreator
+    public static ValueType fromString(String name)
+    {
+      return valueOf(name.toUpperCase());
+    }
+  }
+
+  public enum MultiValueHandling
+  {
+    SORTED_ARRAY,
+    SORTED_SET,
+    ARRAY {
+      @Override
+      public boolean needSorting() { return false; }
+    };
+
+    public boolean needSorting()
+    {
+      return true;
+    }
+
+    @Override
+    @JsonValue
+    public String toString()
+    {
+      return name().toUpperCase();
+    }
+
+    @JsonCreator
+    public static MultiValueHandling fromString(String name)
+    {
+      return name == null ? ofDefault() : valueOf(name.toUpperCase());
+    }
+
+    // this can be system configuration
+    public static MultiValueHandling ofDefault()
+    {
+      return SORTED_ARRAY;
+    }
+  }
+
+  private final String name;
+  private final MultiValueHandling multiValueHandling;
+
+  protected DimensionSchema(String name, MultiValueHandling multiValueHandling)
+  {
+    this.name = Preconditions.checkNotNull(name, "Dimension name cannot be null.");
+    this.multiValueHandling = multiValueHandling;
+  }
+
+  @JsonProperty
+  public String getName()
+  {
+    return name;
+  }
+
+  @JsonProperty
+  public MultiValueHandling getMultiValueHandling()
+  {
+    return multiValueHandling;
+  }
+
+  @JsonIgnore
+  public abstract String getTypeName();
+
+  @JsonIgnore
+  public abstract ValueType getValueType();
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    DimensionSchema that = (DimensionSchema) o;
+
+    return name.equals(that.name);
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return name.hashCode();
+  }
+}
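
Because the "type" property selects the subclass (with StringDimensionSchema as
the default), a plain Jackson ObjectMapper can deserialize these schemas
directly. A minimal sketch:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import io.druid.data.input.impl.DimensionSchema;

    class DimensionSchemaDemo
    {
      public static void main(String[] args) throws Exception
      {
        ObjectMapper mapper = new ObjectMapper();
        // "type" picks the subclass registered in @JsonSubTypes above.
        DimensionSchema longDim =
            mapper.readValue("{\"type\":\"long\",\"name\":\"added\"}", DimensionSchema.class);
        // With no "type", defaultImpl yields a StringDimensionSchema.
        DimensionSchema strDim =
            mapper.readValue("{\"name\":\"page\"}", DimensionSchema.class);
        System.out.println(longDim.getValueType() + " / " + strDim.getTypeName());  // LONG / string
      }
    }
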
diff --git a/api/src/main/java/io/druid/data/input/impl/DimensionsSpec.java b/api/src/main/java/io/druid/data/input/impl/DimensionsSpec.java
new file mode 100644
index 00000000000..8132309ac86
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/DimensionsSpec.java
@@ -0,0 +1,250 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+
+import io.druid.java.util.common.parsers.ParserUtils;
+
+import javax.annotation.Nullable;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+
+public class DimensionsSpec
+{
+  private final List<DimensionSchema> dimensions;
+  private final Set<String> dimensionExclusions;
+  private final Map<String, DimensionSchema> dimensionSchemaMap;
+
+  public static DimensionsSpec ofEmpty()
+  {
+    return new DimensionsSpec(null, null, null);
+  }
+
+  public static List<DimensionSchema> getDefaultSchemas(List<String> dimNames)
+  {
+    return getDefaultSchemas(dimNames, DimensionSchema.MultiValueHandling.ofDefault());
+  }
+
+  public static List<DimensionSchema> getDefaultSchemas(
+      final List<String> dimNames,
+      final DimensionSchema.MultiValueHandling multiValueHandling
+  )
+  {
+    return Lists.transform(
+        dimNames,
+        new Function<String, DimensionSchema>()
+        {
+          @Override
+          public DimensionSchema apply(String input)
+          {
+            return new StringDimensionSchema(input, multiValueHandling);
+          }
+        }
+    );
+  }
+
+  public static DimensionSchema convertSpatialSchema(SpatialDimensionSchema spatialSchema)
+  {
+    return new NewSpatialDimensionSchema(spatialSchema.getDimName(), spatialSchema.getDims());
+  }
+
+  @JsonCreator
+  public DimensionsSpec(
+      @JsonProperty("dimensions") List<DimensionSchema> dimensions,
+      @JsonProperty("dimensionExclusions") List<String> dimensionExclusions,
+      @Deprecated @JsonProperty("spatialDimensions") List<SpatialDimensionSchema> spatialDimensions
+  )
+  {
+    this.dimensions = dimensions == null
+                      ? Lists.<DimensionSchema>newArrayList()
+                      : Lists.newArrayList(dimensions);
+
+    this.dimensionExclusions = (dimensionExclusions == null)
+                               ? Sets.<String>newHashSet()
+                               : Sets.newHashSet(dimensionExclusions);
+
+    List<SpatialDimensionSchema> spatialDims = (spatialDimensions == null)
+                                               ? Lists.<SpatialDimensionSchema>newArrayList()
+                                               : spatialDimensions;
+
+    verify(spatialDims);
+
+    // Map for easy dimension name-based schema lookup
+    this.dimensionSchemaMap = new HashMap<>();
+    for (DimensionSchema schema : this.dimensions) {
+      dimensionSchemaMap.put(schema.getName(), schema);
+    }
+
+    for (SpatialDimensionSchema spatialSchema : spatialDims) {
+      DimensionSchema newSchema = DimensionsSpec.convertSpatialSchema(spatialSchema);
+      this.dimensions.add(newSchema);
+      dimensionSchemaMap.put(newSchema.getName(), newSchema);
+    }
+  }
+
+
+  @JsonProperty
+  public List<DimensionSchema> getDimensions()
+  {
+    return dimensions;
+  }
+
+  @JsonProperty
+  public Set<String> getDimensionExclusions()
+  {
+    return dimensionExclusions;
+  }
+
+  @Deprecated @JsonIgnore
+  public List<SpatialDimensionSchema> getSpatialDimensions()
+  {
+    Iterable<NewSpatialDimensionSchema> filteredList = Iterables.filter(
+        dimensions, NewSpatialDimensionSchema.class
+    );
+
+    Iterable<SpatialDimensionSchema> transformedList = Iterables.transform(
+        filteredList,
+        new Function<NewSpatialDimensionSchema, SpatialDimensionSchema>()
+        {
+          @Nullable
+          @Override
+          public SpatialDimensionSchema apply(NewSpatialDimensionSchema input)
+          {
+            return new SpatialDimensionSchema(input.getName(), input.getDims());
+          }
+        }
+    );
+
+    return Lists.newArrayList(transformedList);
+  }
+
+
+  @JsonIgnore
+  public List<String> getDimensionNames()
+  {
+    return Lists.transform(
+        dimensions,
+        new Function<DimensionSchema, String>()
+        {
+          @Override
+          public String apply(DimensionSchema input)
+          {
+            return input.getName();
+          }
+        }
+    );
+  }
+
+  public DimensionSchema getSchema(String dimension)
+  {
+    return dimensionSchemaMap.get(dimension);
+  }
+
+  public boolean hasCustomDimensions()
+  {
+    return !(dimensions == null || dimensions.isEmpty());
+  }
+
+  public DimensionsSpec withDimensions(List<DimensionSchema> dims)
+  {
+    return new DimensionsSpec(dims, ImmutableList.copyOf(dimensionExclusions), null);
+  }
+
+  public DimensionsSpec withDimensionExclusions(Set<String> dimExs)
+  {
+    return new DimensionsSpec(
+        dimensions,
+        ImmutableList.copyOf(Sets.union(dimensionExclusions, dimExs)),
+        null
+    );
+  }
+
+  @Deprecated
+  public DimensionsSpec withSpatialDimensions(List<SpatialDimensionSchema> spatials)
+  {
+    return new DimensionsSpec(dimensions, ImmutableList.copyOf(dimensionExclusions), spatials);
+  }
+
+  private void verify(List<SpatialDimensionSchema> spatialDimensions)
+  {
+    List<String> dimNames = getDimensionNames();
+    Preconditions.checkArgument(
+        Sets.intersection(this.dimensionExclusions, Sets.newHashSet(dimNames)).isEmpty(),
+        "dimensions and dimensions exclusions cannot overlap"
+    );
+
+    ParserUtils.validateFields(dimNames);
+    ParserUtils.validateFields(dimensionExclusions);
+
+    List<String> spatialDimNames = Lists.transform(
+        spatialDimensions,
+        new Function<SpatialDimensionSchema, String>()
+        {
+          @Override
+          public String apply(SpatialDimensionSchema input)
+          {
+            return input.getDimName();
+          }
+        }
+    );
+
+    // Don't allow duplicates between main list and deprecated spatial list
+    ParserUtils.validateFields(Iterables.concat(dimNames, spatialDimNames));
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    DimensionsSpec that = (DimensionsSpec) o;
+
+    if (!dimensions.equals(that.dimensions)) {
+      return false;
+    }
+
+    return dimensionExclusions.equals(that.dimensionExclusions);
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = dimensions.hashCode();
+    result = 31 * result + dimensionExclusions.hashCode();
+    return result;
+  }
+}
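
Two common shapes of the spec above, sketched in Java: a schemaless spec driven
purely by exclusions, and an explicit spec with a fixed dimension list.

    import com.google.common.collect.ImmutableList;
    import io.druid.data.input.impl.DimensionsSpec;

    class DimensionsSpecDemo
    {
      public static void main(String[] args)
      {
        // Schemaless: nothing declared; exclusions filter whatever columns show up.
        DimensionsSpec schemaless = new DimensionsSpec(null, ImmutableList.of("ignore_me"), null);
        System.out.println(schemaless.hasCustomDimensions());  // false

        // Explicit: dimension names, order, and handling are fixed up front.
        DimensionsSpec explicit = new DimensionsSpec(
            DimensionsSpec.getDefaultSchemas(ImmutableList.of("page", "language")), null, null);
        System.out.println(explicit.getDimensionNames());  // [page, language]
      }
    }
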
diff --git a/api/src/main/java/io/druid/data/input/impl/FileIteratingFirehose.java b/api/src/main/java/io/druid/data/input/impl/FileIteratingFirehose.java
new file mode 100644
index 00000000000..97e33f04a89
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/FileIteratingFirehose.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.base.Throwables;
+import io.druid.data.input.Firehose;
+import io.druid.data.input.InputRow;
+import io.druid.utils.Runnables;
+import org.apache.commons.io.LineIterator;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+/**
+ */
+public class FileIteratingFirehose implements Firehose
+{
+  private final Iterator<LineIterator> lineIterators;
+  private final StringInputRowParser parser;
+
+  private LineIterator lineIterator = null;
+
+  public FileIteratingFirehose(
+      Iterator<LineIterator> lineIterators,
+      StringInputRowParser parser
+  )
+  {
+    this.lineIterators = lineIterators;
+    this.parser = parser;
+  }
+
+  @Override
+  public boolean hasMore()
+  {
+    while ((lineIterator == null || !lineIterator.hasNext()) && lineIterators.hasNext()) {
+      lineIterator = lineIterators.next();
+    }
+
+    return lineIterator != null && lineIterator.hasNext();
+  }
+
+  @Override
+  public InputRow nextRow()
+  {
+    try {
+      if (lineIterator == null || !lineIterator.hasNext()) {
+        // Close the previous iterator, if one was open, before advancing.
+        if (lineIterator != null) {
+          lineIterator.close();
+        }
+
+        lineIterator = lineIterators.next();
+      }
+
+      return parser.parse(lineIterator.next());
+    }
+    catch (Exception e) {
+      throw Throwables.propagate(e);
+    }
+  }
+
+  @Override
+  public Runnable commit()
+  {
+    return Runnables.getNoopRunnable();
+  }
+
+  @Override
+  public void close() throws IOException
+  {
+    if (lineIterator != null) {
+      lineIterator.close();
+    }
+  }
+}
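
A sketch of how this firehose is typically driven: wrap each input file in a
commons-io LineIterator and pull rows until every iterator is drained. The
StringInputRowParser construction is elided; that class appears later in this
diff.

    import io.druid.data.input.impl.FileIteratingFirehose;
    import io.druid.data.input.impl.StringInputRowParser;
    import org.apache.commons.io.FileUtils;
    import org.apache.commons.io.LineIterator;

    import java.io.File;
    import java.util.Arrays;
    import java.util.Iterator;

    class FirehoseDemo
    {
      public static void runThrough(StringInputRowParser parser, File... files) throws Exception
      {
        final Iterator<File> fileIt = Arrays.asList(files).iterator();
        // Lazily open one LineIterator per file as the firehose advances.
        Iterator<LineIterator> lines = new Iterator<LineIterator>()
        {
          @Override
          public boolean hasNext() { return fileIt.hasNext(); }

          @Override
          public LineIterator next()
          {
            try {
              return FileUtils.lineIterator(fileIt.next(), "UTF-8");
            }
            catch (Exception e) {
              throw new RuntimeException(e);
            }
          }
        };
        FileIteratingFirehose firehose = new FileIteratingFirehose(lines, parser);
        try {
          while (firehose.hasMore()) {
            System.out.println(firehose.nextRow());
          }
        }
        finally {
          firehose.close();
        }
      }
    }
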
diff --git a/api/src/main/java/io/druid/data/input/impl/FloatDimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/FloatDimensionSchema.java
new file mode 100644
index 00000000000..a457a226ee4
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/FloatDimensionSchema.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class FloatDimensionSchema extends DimensionSchema
+{
+  @JsonCreator
+  public FloatDimensionSchema(
+      @JsonProperty("name") String name
+  )
+  {
+    super(name, null);
+  }
+
+  @Override
+  public String getTypeName()
+  {
+    return DimensionSchema.FLOAT_TYPE_NAME;
+  }
+
+  @Override
+  @JsonIgnore
+  public ValueType getValueType()
+  {
+    return ValueType.FLOAT;
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/InputRowParser.java b/api/src/main/java/io/druid/data/input/impl/InputRowParser.java
new file mode 100644
index 00000000000..c9850bebde9
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/InputRowParser.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import io.druid.data.input.InputRow;
+
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = StringInputRowParser.class)
+@JsonSubTypes(value = {
+    @JsonSubTypes.Type(name = "string", value = StringInputRowParser.class),
+    @JsonSubTypes.Type(name = "map", value = MapInputRowParser.class),
+    @JsonSubTypes.Type(name = "noop", value = NoopInputRowParser.class)
+})
+public interface InputRowParser<T>
+{
+  InputRow parse(T input);
+
+  ParseSpec getParseSpec();
+
+  InputRowParser withParseSpec(ParseSpec parseSpec);
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/JSONLowercaseParseSpec.java b/api/src/main/java/io/druid/data/input/impl/JSONLowercaseParseSpec.java
new file mode 100644
index 00000000000..17600ee18f4
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JSONLowercaseParseSpec.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import io.druid.java.util.common.parsers.JSONToLowerParser;
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.List;
+
+/**
+ * This class is only here for backwards compatibility
+ */
+@Deprecated
+public class JSONLowercaseParseSpec extends ParseSpec
+{
+  private final ObjectMapper objectMapper;
+
+  @JsonCreator
+  public JSONLowercaseParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+    this.objectMapper = new ObjectMapper();
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    return new JSONToLowerParser(objectMapper, null, null);
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new JSONLowercaseParseSpec(spec, getDimensionsSpec());
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new JSONLowercaseParseSpec(getTimestampSpec(), spec);
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/JSONParseSpec.java b/api/src/main/java/io/druid/data/input/impl/JSONParseSpec.java
new file mode 100644
index 00000000000..81ce73b94a4
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JSONParseSpec.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.core.JsonParser.Feature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import io.druid.java.util.common.parsers.JSONPathParser;
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ */
+public class JSONParseSpec extends ParseSpec
+{
+  private final ObjectMapper objectMapper;
+  private final JSONPathSpec flattenSpec;
+  private final Map<String, Boolean> featureSpec;
+
+  @JsonCreator
+  public JSONParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
+      @JsonProperty("flattenSpec") JSONPathSpec flattenSpec,
+      @JsonProperty("featureSpec") Map<String, Boolean> featureSpec
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+    this.objectMapper = new ObjectMapper();
+    this.flattenSpec = flattenSpec != null ? flattenSpec : new JSONPathSpec(true, null);
+    this.featureSpec = (featureSpec == null) ? new HashMap<String, Boolean>() : featureSpec;
+    for (Map.Entry<String, Boolean> entry : this.featureSpec.entrySet()) {
+      Feature feature = Feature.valueOf(entry.getKey());
+      objectMapper.configure(feature, entry.getValue());
+    }
+  }
+
+  @Deprecated
+  public JSONParseSpec(TimestampSpec ts, DimensionsSpec dims)
+  {
+    this(ts, dims, null, null);
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    return new JSONPathParser(
+        convertFieldSpecs(flattenSpec.getFields()),
+        flattenSpec.isUseFieldDiscovery(),
+        objectMapper
+    );
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new JSONParseSpec(spec, getDimensionsSpec(), getFlattenSpec(), getFeatureSpec());
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new JSONParseSpec(getTimestampSpec(), spec, getFlattenSpec(), getFeatureSpec());
+  }
+
+  @JsonProperty
+  public JSONPathSpec getFlattenSpec()
+  {
+    return flattenSpec;
+  }
+
+  @JsonProperty
+  public Map<String, Boolean> getFeatureSpec()
+  {
+    return featureSpec;
+  }
+
+  private List<JSONPathParser.FieldSpec> convertFieldSpecs(List<JSONPathFieldSpec> druidFieldSpecs)
+  {
+    List<JSONPathParser.FieldSpec> newSpecs = new ArrayList<>();
+    for (JSONPathFieldSpec druidSpec : druidFieldSpecs) {
+      JSONPathParser.FieldType type;
+      switch (druidSpec.getType()) {
+        case ROOT:
+          type = JSONPathParser.FieldType.ROOT;
+          break;
+        case PATH:
+          type = JSONPathParser.FieldType.PATH;
+          break;
+        default:
+          throw new IllegalArgumentException("Invalid type for field " + druidSpec.getName());
+      }
+
+      JSONPathParser.FieldSpec newSpec = new JSONPathParser.FieldSpec(
+          type,
+          druidSpec.getName(),
+          druidSpec.getExpr()
+      );
+      newSpecs.add(newSpec);
+    }
+    return newSpecs;
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/JSONPathFieldSpec.java b/api/src/main/java/io/druid/data/input/impl/JSONPathFieldSpec.java
new file mode 100644
index 00000000000..2825d92652f
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JSONPathFieldSpec.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class JSONPathFieldSpec
+{
+  private final JSONPathFieldType type;
+  private final String name;
+  private final String expr;
+
+  @JsonCreator
+  public JSONPathFieldSpec(
+      @JsonProperty("type") JSONPathFieldType type,
+      @JsonProperty("name") String name,
+      @JsonProperty("expr") String expr
+  )
+  {
+    this.type = type;
+    this.name = name;
+    this.expr = expr;
+  }
+
+  @JsonProperty
+  public JSONPathFieldType getType()
+  {
+    return type;
+  }
+
+  @JsonProperty
+  public String getName()
+  {
+    return name;
+  }
+
+  @JsonProperty
+  public String getExpr()
+  {
+    return expr;
+  }
+
+  @JsonCreator
+  public static JSONPathFieldSpec fromString(String name)
+  {
+    return JSONPathFieldSpec.createRootField(name);
+  }
+
+  public static JSONPathFieldSpec createNestedField(String name, String expr)
+  {
+    return new JSONPathFieldSpec(JSONPathFieldType.PATH, name, expr);
+  }
+
+  public static JSONPathFieldSpec createRootField(String name)
+  {
+    return new JSONPathFieldSpec(JSONPathFieldType.ROOT, name, null);
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/JSONPathFieldType.java b/api/src/main/java/io/druid/data/input/impl/JSONPathFieldType.java
new file mode 100644
index 00000000000..d99ad77c44e
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JSONPathFieldType.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonValue;
+
+public enum JSONPathFieldType
+{
+  ROOT,
+  PATH;
+
+  @JsonValue
+  @Override
+  public String toString()
+  {
+    return this.name().toLowerCase();
+  }
+
+  @JsonCreator
+  public static JSONPathFieldType fromString(String name)
+  {
+    return valueOf(name.toUpperCase());
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/JSONPathSpec.java b/api/src/main/java/io/druid/data/input/impl/JSONPathSpec.java
new file mode 100644
index 00000000000..8ba57a7cd0d
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JSONPathSpec.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.collect.ImmutableList;
+
+import java.util.List;
+
+public class JSONPathSpec
+{
+  private final boolean useFieldDiscovery;
+  private final List<JSONPathFieldSpec> fields;
+
+  @JsonCreator
+  public JSONPathSpec(
+      @JsonProperty("useFieldDiscovery") Boolean useFieldDiscovery,
+      @JsonProperty("fields") List<JSONPathFieldSpec> fields
+  )
+  {
+    this.useFieldDiscovery = useFieldDiscovery == null ? true : useFieldDiscovery;
+    this.fields = fields == null ? ImmutableList.<JSONPathFieldSpec>of() : fields;
+  }
+
+  @JsonProperty
+  public boolean isUseFieldDiscovery()
+  {
+    return useFieldDiscovery;
+  }
+
+  @JsonProperty
+  public List<JSONPathFieldSpec> getFields()
+  {
+    return fields;
+  }
+}
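
Tying the JSON pieces together: JSONParseSpec passes a JSONPathSpec's field
specs to a JSONPathParser, so nested values can be surfaced as flat columns. A
minimal sketch, again assuming the TimestampSpec constructor noted earlier:

    import com.google.common.collect.ImmutableList;
    import io.druid.data.input.impl.*;
    import java.util.Map;

    class FlattenDemo
    {
      public static void main(String[] args)
      {
        JSONPathSpec flatten = new JSONPathSpec(
            true,  // also auto-discover top-level fields
            ImmutableList.of(JSONPathFieldSpec.createNestedField("city", "$.user.address.city"))
        );
        ParseSpec spec = new JSONParseSpec(
            new TimestampSpec("ts", "auto", null),  // assumed constructor
            DimensionsSpec.ofEmpty(),
            flatten,
            null
        );
        Map<String, Object> row = spec.makeParser()
            .parse("{\"ts\":\"2018-01-01\",\"user\":{\"address\":{\"city\":\"Oakland\"}}}");
        System.out.println(row.get("city"));  // Oakland
      }
    }
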
diff --git a/api/src/main/java/io/druid/data/input/impl/JavaScriptParseSpec.java b/api/src/main/java/io/druid/data/input/impl/JavaScriptParseSpec.java
new file mode 100644
index 00000000000..620f8109bd1
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/JavaScriptParseSpec.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import io.druid.java.util.common.ISE;
+import io.druid.java.util.common.parsers.JavaScriptParser;
+import io.druid.java.util.common.parsers.Parser;
+import io.druid.js.JavaScriptConfig;
+
+import java.util.List;
+
+/**
+ */
+public class JavaScriptParseSpec extends ParseSpec
+{
+  private final String function;
+  private final JavaScriptConfig config;
+
+  @JsonCreator
+  public JavaScriptParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
+      @JsonProperty("function") String function,
+      @JacksonInject JavaScriptConfig config
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+
+    this.function = function;
+    this.config = config;
+  }
+
+  @JsonProperty("function")
+  public String getFunction()
+  {
+    return function;
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    if (!config.isEnabled()) {
+      throw new ISE("JavaScript is disabled");
+    }
+
+    return new JavaScriptParser(function);
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new JavaScriptParseSpec(spec, getDimensionsSpec(), function, config);
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new JavaScriptParseSpec(getTimestampSpec(), spec, function, config);
+  }
+
+  public ParseSpec withFunction(String fn)
+  {
+    return new JavaScriptParseSpec(getTimestampSpec(), getDimensionsSpec(), fn, config);
+  }
+}
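
For completeness, a sketch of the JavaScript path. In production the
JavaScriptConfig is injected; here it is built directly, assuming a
JavaScriptConfig(boolean enabled) constructor that is not part of this diff.
A disabled config makes makeParser() throw, as the code above shows.

    import io.druid.data.input.impl.*;
    import io.druid.js.JavaScriptConfig;
    import java.util.Map;

    class JsParseDemo
    {
      public static void main(String[] args)
      {
        ParseSpec spec = new JavaScriptParseSpec(
            new TimestampSpec("ts", "auto", null),  // assumed constructor
            DimensionsSpec.ofEmpty(),
            "function(line) { var parts = line.split('|'); return {ts: parts[0], page: parts[1]}; }",
            new JavaScriptConfig(true)  // assumed constructor
        );
        Map<String, Object> row = spec.makeParser().parse("2018-01-01|Main_Page");
        System.out.println(row);  // {ts=2018-01-01, page=Main_Page}
      }
    }
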
diff --git a/api/src/main/java/io/druid/data/input/impl/LongDimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/LongDimensionSchema.java
new file mode 100644
index 00000000000..64af529360b
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/LongDimensionSchema.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class LongDimensionSchema extends DimensionSchema
+{
+  @JsonCreator
+  public LongDimensionSchema(
+      @JsonProperty("name") String name
+  )
+  {
+    super(name, null);
+  }
+
+  @Override
+  public String getTypeName()
+  {
+    return DimensionSchema.LONG_TYPE_NAME;
+  }
+
+  @Override
+  @JsonIgnore
+  public ValueType getValueType()
+  {
+    return ValueType.LONG;
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/MapInputRowParser.java b/api/src/main/java/io/druid/data/input/impl/MapInputRowParser.java
new file mode 100644
index 00000000000..8847dea0278
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/MapInputRowParser.java
@@ -0,0 +1,93 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
+
+import io.druid.data.input.InputRow;
+import io.druid.data.input.MapBasedInputRow;
+import io.druid.java.util.common.parsers.ParseException;
+
+import org.joda.time.DateTime;
+
+import java.util.List;
+import java.util.Map;
+
+public class MapInputRowParser implements InputRowParser<Map<String, Object>>
+{
+  private final ParseSpec parseSpec;
+
+  @JsonCreator
+  public MapInputRowParser(
+      @JsonProperty("parseSpec") ParseSpec parseSpec
+  )
+  {
+    this.parseSpec = parseSpec;
+  }
+
+  @Override
+  public InputRow parse(Map<String, Object> theMap)
+  {
+    final List<String> dimensions = parseSpec.getDimensionsSpec().hasCustomDimensions()
+                                    ? parseSpec.getDimensionsSpec().getDimensionNames()
+                                    : Lists.newArrayList(
+                                        Sets.difference(
+                                            theMap.keySet(),
+                                            parseSpec.getDimensionsSpec()
+                                                     .getDimensionExclusions()
+                                        )
+                                    );
+
+    final DateTime timestamp;
+    try {
+      timestamp = parseSpec.getTimestampSpec().extractTimestamp(theMap);
+      if (timestamp == null) {
+        final String input = theMap.toString();
+        throw new NullPointerException(
+            String.format(
+                "Null timestamp in input: %s",
+                input.length() < 100 ? input : input.substring(0, 100) + "..."
+            )
+        );
+      }
+    }
+    catch (Exception e) {
+      throw new ParseException(e, "Unparseable timestamp found!");
+    }
+
+    return new MapBasedInputRow(timestamp.getMillis(), dimensions, theMap);
+  }
+
+  @JsonProperty
+  @Override
+  public ParseSpec getParseSpec()
+  {
+    return parseSpec;
+  }
+
+  @Override
+  public InputRowParser withParseSpec(ParseSpec parseSpec)
+  {
+    return new MapInputRowParser(parseSpec);
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/NewSpatialDimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/NewSpatialDimensionSchema.java
new file mode 100644
index 00000000000..c5b823e39a9
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/NewSpatialDimensionSchema.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.List;
+
+/**
+ * NOTE: 
+ * This class should be deprecated after Druid supports configurable index types on dimensions.
+ * When that exists, this should be the implementation: https://github.com/druid-io/druid/issues/2622
+ * 
+ * This is a stop-gap solution to consolidate the dimension specs and remove the separate spatial 
+ * section in DimensionsSpec.
+ */
+public class NewSpatialDimensionSchema extends DimensionSchema
+{
+  private final List<String> dims;
+
+  @JsonCreator
+  public NewSpatialDimensionSchema(
+      @JsonProperty("name") String name,
+      @JsonProperty("dims") List<String> dims
+  )
+  {
+    super(name, null);
+    this.dims = dims;
+  }
+
+  @JsonProperty
+  public List<String> getDims()
+  {
+    return dims;
+  }
+
+  @Override
+  public String getTypeName()
+  {
+    return DimensionSchema.SPATIAL_TYPE_NAME;
+  }
+
+  @Override
+  @JsonIgnore
+  public ValueType getValueType()
+  {
+    return ValueType.STRING;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    NewSpatialDimensionSchema that = (NewSpatialDimensionSchema) o;
+
+    return dims != null ? dims.equals(that.dims) : that.dims == null;
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return dims != null ? dims.hashCode() : 0;
+  }
+}
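
Under the consolidated scheme, a spatial dimension now rides in the main
dimensions list rather than the deprecated spatialDimensions section. A sketch:

    import com.google.common.collect.ImmutableList;
    import io.druid.data.input.impl.DimensionSchema;
    import io.druid.data.input.impl.DimensionsSpec;
    import io.druid.data.input.impl.NewSpatialDimensionSchema;

    class SpatialDemo
    {
      public static void main(String[] args)
      {
        DimensionsSpec dims = new DimensionsSpec(
            ImmutableList.<DimensionSchema>of(
                new NewSpatialDimensionSchema("coordinates", ImmutableList.of("lat", "lon"))),
            null,
            null
        );
        // The deprecated accessor still surfaces it for callers that predate the change.
        System.out.println(dims.getSpatialDimensions().get(0).getDimName());  // coordinates
      }
    }
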
diff --git a/api/src/main/java/io/druid/data/input/impl/NoopInputRowParser.java b/api/src/main/java/io/druid/data/input/impl/NoopInputRowParser.java
new file mode 100644
index 00000000000..772024223c8
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/NoopInputRowParser.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import io.druid.data.input.InputRow;
+
+/**
+ */
+public class NoopInputRowParser implements InputRowParser<InputRow>
+{
+  private final ParseSpec parseSpec;
+
+  @JsonCreator
+  public NoopInputRowParser(
+      @JsonProperty("parseSpec") ParseSpec parseSpec
+  )
+  {
+    this.parseSpec = parseSpec != null ? parseSpec : new TimeAndDimsParseSpec(null, null);
+  }
+
+  @Override
+  public InputRow parse(InputRow input)
+  {
+    return input;
+  }
+
+  @Override
+  public ParseSpec getParseSpec()
+  {
+    return parseSpec;
+  }
+
+  @Override
+  public InputRowParser withParseSpec(ParseSpec parseSpec)
+  {
+    return new NoopInputRowParser(parseSpec);
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    NoopInputRowParser that = (NoopInputRowParser) o;
+
+    return parseSpec.equals(that.parseSpec);
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return parseSpec.hashCode();
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/ParseSpec.java b/api/src/main/java/io/druid/data/input/impl/ParseSpec.java
new file mode 100644
index 00000000000..96c06237d9b
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/ParseSpec.java
@@ -0,0 +1,114 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.List;
+
+/**
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "format", defaultImpl = DelimitedParseSpec.class)
+@JsonSubTypes(value = {
+    @JsonSubTypes.Type(name = "json", value = JSONParseSpec.class),
+    @JsonSubTypes.Type(name = "csv", value = CSVParseSpec.class),
+    @JsonSubTypes.Type(name = "tsv", value = DelimitedParseSpec.class),
+    @JsonSubTypes.Type(name = "jsonLowercase", value = JSONLowercaseParseSpec.class),
+    @JsonSubTypes.Type(name = "timeAndDims", value = TimeAndDimsParseSpec.class),
+    @JsonSubTypes.Type(name = "regex", value = RegexParseSpec.class),
+    @JsonSubTypes.Type(name = "javascript", value = JavaScriptParseSpec.class)
+
+})
+public abstract class ParseSpec
+{
+  private final TimestampSpec timestampSpec;
+  private final DimensionsSpec dimensionsSpec;
+
+  protected ParseSpec(TimestampSpec timestampSpec, DimensionsSpec dimensionsSpec)
+  {
+    this.timestampSpec = timestampSpec;
+    this.dimensionsSpec = dimensionsSpec;
+  }
+
+  @JsonProperty
+  public TimestampSpec getTimestampSpec()
+  {
+    return timestampSpec;
+  }
+
+  @JsonProperty
+  public DimensionsSpec getDimensionsSpec()
+  {
+    return dimensionsSpec;
+  }
+
+  public void verify(List<String> usedCols)
+  {
+    // do nothing
+  }
+
+  public Parser<String, Object> makeParser()
+  {
+    return null;
+  }
+
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    throw new UnsupportedOperationException();
+  }
+
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    ParseSpec parseSpec = (ParseSpec) o;
+
+    if (timestampSpec != null ? !timestampSpec.equals(parseSpec.timestampSpec) : parseSpec.timestampSpec != null) {
+      return false;
+    }
+    return dimensionsSpec != null
+           ? dimensionsSpec.equals(parseSpec.dimensionsSpec)
+           : parseSpec.dimensionsSpec == null;
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = timestampSpec != null ? timestampSpec.hashCode() : 0;
+    result = 31 * result + (dimensionsSpec != null ? dimensionsSpec.hashCode() : 0);
+    return result;
+  }
+}
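
Since the "format" property selects the subtype, parse specs normally arrive
through Jackson inside an ingestion spec. A minimal sketch; the field names
inside timestampSpec are assumed from TimestampSpec's annotations, which are
not shown in this diff:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import io.druid.data.input.impl.ParseSpec;

    class ParseSpecJsonDemo
    {
      public static void main(String[] args) throws Exception
      {
        String json = "{\"format\":\"json\","
                      + "\"timestampSpec\":{\"column\":\"ts\",\"format\":\"auto\"},"
                      + "\"dimensionsSpec\":{\"dimensions\":[\"page\"]}}";
        ParseSpec spec = new ObjectMapper().readValue(json, ParseSpec.class);
        System.out.println(spec.getClass().getSimpleName());  // JSONParseSpec
      }
    }
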
diff --git a/api/src/main/java/io/druid/data/input/impl/RegexParseSpec.java b/api/src/main/java/io/druid/data/input/impl/RegexParseSpec.java
new file mode 100644
index 00000000000..a90978bf2b1
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/RegexParseSpec.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Optional;
+import com.google.common.base.Preconditions;
+
+import io.druid.java.util.common.parsers.Parser;
+import io.druid.java.util.common.parsers.RegexParser;
+
+import java.util.List;
+
+/**
+ */
+public class RegexParseSpec extends ParseSpec
+{
+  private final String listDelimiter;
+  private final List<String> columns;
+  private final String pattern;
+
+  @JsonCreator
+  public RegexParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec,
+      @JsonProperty("listDelimiter") String listDelimiter,
+      @JsonProperty("columns") List<String> columns,
+      @JsonProperty("pattern") String pattern
+  )
+  {
+    super(timestampSpec, dimensionsSpec);
+
+    this.listDelimiter = listDelimiter;
+    this.columns = columns;
+    this.pattern = pattern;
+
+    verify(dimensionsSpec.getDimensionNames());
+  }
+
+  @JsonProperty
+  public String getListDelimiter()
+  {
+    return listDelimiter;
+  }
+
+  @JsonProperty("pattern")
+  public String getPattern()
+  {
+    return pattern;
+  }
+
+  @JsonProperty
+  public List<String> getColumns()
+  {
+    return columns;
+  }
+
+  @Override
+  public void verify(List<String> usedCols)
+  {
+    if (columns != null) {
+      for (String columnName : usedCols) {
+        Preconditions.checkArgument(columns.contains(columnName), "column[%s] not in columns.", columnName);
+      }
+    }
+  }
+
+  @Override
+  public Parser<String, Object> makeParser()
+  {
+    if (columns == null) {
+      return new RegexParser(pattern, Optional.fromNullable(listDelimiter));
+    }
+    return new RegexParser(pattern, Optional.fromNullable(listDelimiter), columns);
+  }
+
+  @Override
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new RegexParseSpec(spec, getDimensionsSpec(), listDelimiter, columns, pattern);
+  }
+
+  @Override
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new RegexParseSpec(getTimestampSpec(), spec, listDelimiter, columns, pattern);
+  }
+
+  public ParseSpec withColumns(List<String> cols)
+  {
+    return new RegexParseSpec(getTimestampSpec(), getDimensionsSpec(), listDelimiter, cols, pattern);
+  }
+
+  public ParseSpec withPattern(String pat)
+  {
+    return new RegexParseSpec(getTimestampSpec(), getDimensionsSpec(), listDelimiter, columns, pat);
+  }
+}
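
A sketch of the regex path: capture groups map positionally onto the declared
columns, and the same TimestampSpec constructor assumption applies.

    import com.google.common.collect.ImmutableList;
    import io.druid.data.input.impl.*;
    import java.util.Map;

    class RegexDemo
    {
      public static void main(String[] args)
      {
        ParseSpec spec = new RegexParseSpec(
            new TimestampSpec("ts", "auto", null),  // assumed constructor
            new DimensionsSpec(
                DimensionsSpec.getDefaultSchemas(ImmutableList.of("level", "msg")), null, null),
            null,  // no list delimiter
            ImmutableList.of("ts", "level", "msg"),
            "^(\\S+) \\[(\\w+)\\] (.*)$"
        );
        Map<String, Object> row = spec.makeParser().parse("2018-01-01T00:00:00Z [INFO] started");
        System.out.println(row);  // {ts=2018-01-01T00:00:00Z, level=INFO, msg=started}
      }
    }
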
diff --git a/api/src/main/java/io/druid/data/input/impl/SpatialDimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/SpatialDimensionSchema.java
new file mode 100644
index 00000000000..60a9224707d
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/SpatialDimensionSchema.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import java.util.List;
+
+/**
+ */
+@Deprecated
+public class SpatialDimensionSchema
+{
+  private final String dimName;
+  private final List<String> dims;
+
+  @JsonCreator
+  public SpatialDimensionSchema(
+      @JsonProperty("dimName") String dimName,
+      @JsonProperty("dims") List<String> dims
+  )
+  {
+    this.dimName = dimName;
+    this.dims = dims;
+  }
+
+  @JsonProperty
+  public String getDimName()
+  {
+    return dimName;
+  }
+
+  @JsonProperty
+  public List<String> getDims()
+  {
+    return dims;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    SpatialDimensionSchema that = (SpatialDimensionSchema) o;
+
+    if (dimName != null ? !dimName.equals(that.dimName) : that.dimName != null) {
+      return false;
+    }
+    return dims != null ? dims.equals(that.dims) : that.dims == null;
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = dimName != null ? dimName.hashCode() : 0;
+    result = 31 * result + (dims != null ? dims.hashCode() : 0);
+    return result;
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/StringDimensionSchema.java b/api/src/main/java/io/druid/data/input/impl/StringDimensionSchema.java
new file mode 100644
index 00000000000..dd6ffd40163
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/StringDimensionSchema.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class StringDimensionSchema extends DimensionSchema
+{
+  @JsonCreator
+  public static StringDimensionSchema create(String name)
+  {
+    return new StringDimensionSchema(name);
+  }
+
+  @JsonCreator
+  public StringDimensionSchema(
+      @JsonProperty("name") String name,
+      @JsonProperty("multiValueHandling") MultiValueHandling multiValueHandling
+  )
+  {
+    super(name, multiValueHandling);
+  }
+
+  public StringDimensionSchema(String name)
+  {
+    this(name, null);
+  }
+
+  @Override
+  public String getTypeName()
+  {
+    return DimensionSchema.STRING_TYPE_NAME;
+  }
+
+  @Override
+  @JsonIgnore
+  public ValueType getValueType()
+  {
+    return ValueType.STRING;
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/StringInputRowParser.java b/api/src/main/java/io/druid/data/input/impl/StringInputRowParser.java
new file mode 100644
index 00000000000..6a13fcb7bd1
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/StringInputRowParser.java
@@ -0,0 +1,141 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Charsets;
+
+import io.druid.data.input.ByteBufferInputRowParser;
+import io.druid.data.input.InputRow;
+import io.druid.java.util.common.parsers.ParseException;
+import io.druid.java.util.common.parsers.Parser;
+
+import java.nio.ByteBuffer;
+import java.nio.CharBuffer;
+import java.nio.charset.Charset;
+import java.nio.charset.CoderResult;
+import java.nio.charset.CodingErrorAction;
+import java.util.Map;
+
+/**
+ */
+public class StringInputRowParser implements ByteBufferInputRowParser
+{
+  private static final Charset DEFAULT_CHARSET = Charsets.UTF_8;
+
+  private final ParseSpec parseSpec;
+  private final MapInputRowParser mapParser;
+  private final Parser<String, Object> parser;
+  private final Charset charset;
+
+  private CharBuffer chars = null;
+
+  @JsonCreator
+  public StringInputRowParser(
+      @JsonProperty("parseSpec") ParseSpec parseSpec,
+      @JsonProperty("encoding") String encoding
+  )
+  {
+    this.parseSpec = parseSpec;
+    this.mapParser = new MapInputRowParser(parseSpec);
+    this.parser = parseSpec.makeParser();
+
+    if (encoding != null) {
+      this.charset = Charset.forName(encoding);
+    } else {
+      this.charset = DEFAULT_CHARSET;
+    }
+  }
+
+  @Deprecated
+  public StringInputRowParser(ParseSpec parseSpec)
+  {
+    this(parseSpec, null);
+  }
+
+  @Override
+  public InputRow parse(ByteBuffer input)
+  {
+    return parseMap(buildStringKeyMap(input));
+  }
+
+  @JsonProperty
+  @Override
+  public ParseSpec getParseSpec()
+  {
+    return parseSpec;
+  }
+
+  @JsonProperty
+  public String getEncoding()
+  {
+    return charset.name();
+  }
+
+  @Override
+  public StringInputRowParser withParseSpec(ParseSpec parseSpec)
+  {
+    return new StringInputRowParser(parseSpec, getEncoding());
+  }
+
+  private Map<String, Object> buildStringKeyMap(ByteBuffer input)
+  {
+    int payloadSize = input.remaining();
+
+    if (chars == null || chars.remaining() < payloadSize) {
+      chars = CharBuffer.allocate(payloadSize);
+    }
+
+    final CoderResult coderResult = charset.newDecoder()
+                                           .onMalformedInput(CodingErrorAction.REPLACE)
+                                           .onUnmappableCharacter(CodingErrorAction.REPLACE)
+                                           .decode(input, chars, true);
+
+    Map<String, Object> theMap;
+    if (coderResult.isUnderflow()) {
+      chars.flip();
+      try {
+        theMap = parseString(chars.toString());
+      }
+      finally {
+        chars.clear();
+      }
+    } else {
+      throw new ParseException("Failed with CoderResult[%s]", coderResult);
+    }
+    return theMap;
+  }
+
+  private Map<String, Object> parseString(String inputString)
+  {
+    return parser.parse(inputString);
+  }
+
+  public InputRow parse(String input)
+  {
+    return parseMap(parseString(input));
+  }
+
+  private InputRow parseMap(Map<String, Object> theMap)
+  {
+    return mapParser.parse(theMap);
+  }
+}
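
For reference, the byte-to-string handling in buildStringKeyMap above can be illustrated with a JDK-only sketch: a CharBuffer sized to the payload is reused, and malformed or unmappable bytes are replaced rather than failing the row. The payload below is illustrative:

```
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class DecodeSketch
{
  public static void main(String[] args)
  {
    ByteBuffer input = ByteBuffer.wrap(
        "{\"timestamp\":\"2018-07-17T16:55:41Z\",\"page\":\"home\"}".getBytes(StandardCharsets.UTF_8)
    );

    // Allocate once per payload size; the parser reuses the buffer for subsequent rows.
    CharBuffer chars = CharBuffer.allocate(input.remaining());

    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                                                   .onMalformedInput(CodingErrorAction.REPLACE)
                                                   .onUnmappableCharacter(CodingErrorAction.REPLACE);

    CoderResult result = decoder.decode(input, chars, true);
    if (result.isUnderflow()) {
      chars.flip();
      // This string is what gets handed to the ParseSpec's Parser.
      System.out.println(chars.toString());
    } else {
      throw new IllegalStateException("Failed with CoderResult[" + result + "]");
    }
  }
}
```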
diff --git a/api/src/main/java/io/druid/data/input/impl/TimeAndDimsParseSpec.java b/api/src/main/java/io/druid/data/input/impl/TimeAndDimsParseSpec.java
new file mode 100644
index 00000000000..e6740cb63f4
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/TimeAndDimsParseSpec.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+import io.druid.java.util.common.parsers.Parser;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ */
+public class TimeAndDimsParseSpec extends ParseSpec
+{
+  @JsonCreator
+  public TimeAndDimsParseSpec(
+      @JsonProperty("timestampSpec") TimestampSpec timestampSpec,
+      @JsonProperty("dimensionsSpec") DimensionsSpec dimensionsSpec
+  )
+  {
+    super(
+        timestampSpec != null ? timestampSpec : new TimestampSpec(null, null, null),
+        dimensionsSpec != null ? dimensionsSpec : new DimensionsSpec(null, null, null)
+    );
+  }
+
+  public Parser<String, Object> makeParser()
+  {
+    return new Parser<String, Object>()
+    {
+      @Override
+      public Map<String, Object> parse(String input)
+      {
+        throw new UnsupportedOperationException("not supported");
+      }
+
+      @Override
+      public void setFieldNames(Iterable<String> fieldNames)
+      {
+        throw new UnsupportedOperationException("not supported");
+      }
+
+      @Override
+      public List<String> getFieldNames()
+      {
+        throw new UnsupportedOperationException("not supported");
+      }
+    };
+  }
+
+  public ParseSpec withTimestampSpec(TimestampSpec spec)
+  {
+    return new TimeAndDimsParseSpec(spec, getDimensionsSpec());
+  }
+
+  public ParseSpec withDimensionsSpec(DimensionsSpec spec)
+  {
+    return new TimeAndDimsParseSpec(getTimestampSpec(), spec);
+  }
+}
diff --git a/api/src/main/java/io/druid/data/input/impl/TimestampSpec.java b/api/src/main/java/io/druid/data/input/impl/TimestampSpec.java
new file mode 100644
index 00000000000..e956abef6ad
--- /dev/null
+++ b/api/src/main/java/io/druid/data/input/impl/TimestampSpec.java
@@ -0,0 +1,163 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.base.Function;
+
+import io.druid.java.util.common.parsers.TimestampParser;
+
+import org.joda.time.DateTime;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+
+/**
+ */
+public class TimestampSpec
+{
+  private static class ParseCtx
+  {
+    Object lastTimeObject = null;
+    DateTime lastDateTime = null;
+  }
+
+  private static final String DEFAULT_COLUMN = "timestamp";
+  private static final String DEFAULT_FORMAT = "auto";
+  private static final DateTime DEFAULT_MISSING_VALUE = null;
+
+  private final String timestampColumn;
+  private final String timestampFormat;
+  private final Function<Object, DateTime> timestampConverter;
+  // this value should never be set for production data
+  private final DateTime missingValue;
+
+  // remember last value parsed
+  private ParseCtx parseCtx = new ParseCtx();
+
+  @JsonCreator
+  public TimestampSpec(
+      @JsonProperty("column") String timestampColumn,
+      @JsonProperty("format") String format,
+      // this value should never be set for production data
+      @JsonProperty("missingValue") DateTime missingValue
+  )
+  {
+    this.timestampColumn = (timestampColumn == null) ? DEFAULT_COLUMN : timestampColumn;
+    this.timestampFormat = format == null ? DEFAULT_FORMAT : format;
+    this.timestampConverter = TimestampParser.createObjectTimestampParser(timestampFormat);
+    this.missingValue = missingValue == null
+                        ? DEFAULT_MISSING_VALUE
+                        : missingValue;
+  }
+
+  @JsonProperty("column")
+  public String getTimestampColumn()
+  {
+    return timestampColumn;
+  }
+
+  @JsonProperty("format")
+  public String getTimestampFormat()
+  {
+    return timestampFormat;
+  }
+
+  @JsonProperty("missingValue")
+  public DateTime getMissingValue()
+  {
+    return missingValue;
+  }
+
+  public DateTime extractTimestamp(Map<String, Object> input)
+  {
+    return parseDateTime(input.get(timestampColumn));
+  }
+
+  public DateTime parseDateTime(Object input)
+  {
+    DateTime extracted = missingValue;
+    if (input != null) {
+      if (input.equals(parseCtx.lastTimeObject)) {
+        extracted = parseCtx.lastDateTime;
+      } else {
+        ParseCtx newCtx = new ParseCtx();
+        newCtx.lastTimeObject = input;
+        extracted = timestampConverter.apply(input);
+        newCtx.lastDateTime = extracted;
+        parseCtx = newCtx;
+      }
+    }
+    return extracted;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    TimestampSpec that = (TimestampSpec) o;
+
+    if (!timestampColumn.equals(that.timestampColumn)) {
+      return false;
+    }
+    if (!timestampFormat.equals(that.timestampFormat)) {
+      return false;
+    }
+    return Objects.equals(missingValue, that.missingValue);
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = timestampColumn.hashCode();
+    result = 31 * result + timestampFormat.hashCode();
+    result = 31 * result + (missingValue != null ? missingValue.hashCode() : 0);
+    return result;
+  }
+
+  // Simple merge strategy for TimestampSpecs: returns the common spec if all non-null entries are
+  // equal, otherwise null. This can be improved in the future but is good enough for most use cases.
+  public static TimestampSpec mergeTimestampSpec(List<TimestampSpec> toMerge)
+  {
+    if (toMerge == null || toMerge.size() == 0) {
+      return null;
+    }
+
+    TimestampSpec result = toMerge.get(0);
+    for (int i = 1; i < toMerge.size(); i++) {
+      if (toMerge.get(i) == null) {
+        continue;
+      }
+      if (!Objects.equals(result, toMerge.get(i))) {
+        return null;
+      }
+    }
+
+    return result;
+  }
+}
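
A brief usage sketch of the class above, with illustrative input; null column and format fall back to "timestamp" and "auto", and the "auto" parser from TimestampParser accepts ISO-8601 strings:

```
import com.google.common.collect.ImmutableMap;
import io.druid.data.input.impl.TimestampSpec;
import java.util.Arrays;
import org.joda.time.DateTime;

public class TimestampSpecSketch
{
  public static void main(String[] args)
  {
    // Defaults: column "timestamp", format "auto", no missingValue fallback.
    TimestampSpec spec = new TimestampSpec(null, null, null);

    DateTime time = spec.extractTimestamp(
        ImmutableMap.<String, Object>of("timestamp", "2018-07-17T16:55:41Z", "page", "home")
    );
    System.out.println(time);

    // mergeTimestampSpec returns the shared spec only when all non-null entries are equal.
    TimestampSpec merged = TimestampSpec.mergeTimestampSpec(
        Arrays.asList(spec, new TimestampSpec(null, null, null))
    );
    System.out.println(merged != null); // true: both specs are equal
  }
}
```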
diff --git a/api/src/main/java/io/druid/guice/Binders.java b/api/src/main/java/io/druid/guice/Binders.java
new file mode 100644
index 00000000000..1d6b220ffa7
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/Binders.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Binder;
+import com.google.inject.Key;
+import com.google.inject.multibindings.MapBinder;
+import io.druid.segment.loading.DataSegmentArchiver;
+import io.druid.segment.loading.DataSegmentFinder;
+import io.druid.segment.loading.DataSegmentMover;
+import io.druid.segment.loading.DataSegmentKiller;
+import io.druid.segment.loading.DataSegmentPuller;
+import io.druid.segment.loading.DataSegmentPusher;
+import io.druid.tasklogs.TaskLogs;
+
+/**
+ */
+public class Binders
+{
+  public static MapBinder<String, DataSegmentPuller> dataSegmentPullerBinder(Binder binder)
+  {
+    return MapBinder.newMapBinder(binder, String.class, DataSegmentPuller.class);
+  }
+
+  public static MapBinder<String, DataSegmentKiller> dataSegmentKillerBinder(Binder binder)
+  {
+    return MapBinder.newMapBinder(binder, String.class, DataSegmentKiller.class);
+  }
+
+  public static MapBinder<String, DataSegmentMover> dataSegmentMoverBinder(Binder binder)
+  {
+    return MapBinder.newMapBinder(binder, String.class, DataSegmentMover.class);
+  }
+
+  public static MapBinder<String, DataSegmentArchiver> dataSegmentArchiverBinder(Binder binder)
+  {
+    return MapBinder.newMapBinder(binder, String.class, DataSegmentArchiver.class);
+  }
+
+  public static MapBinder<String, DataSegmentPusher> dataSegmentPusherBinder(Binder binder)
+  {
+    return PolyBind.optionBinder(binder, Key.get(DataSegmentPusher.class));
+  }
+
+  public static MapBinder<String, DataSegmentFinder> dataSegmentFinderBinder(Binder binder)
+  {
+    return PolyBind.optionBinder(binder, Key.get(DataSegmentFinder.class));
+  }
+
+  public static MapBinder<String, TaskLogs> taskLogsBinder(Binder binder)
+  {
+    return PolyBind.optionBinder(binder, Key.get(TaskLogs.class));
+  }
+}
diff --git a/api/src/main/java/io/druid/guice/ConditionalMultibind.java b/api/src/main/java/io/druid/guice/ConditionalMultibind.java
new file mode 100644
index 00000000000..2846977944c
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/ConditionalMultibind.java
@@ -0,0 +1,244 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.base.Predicate;
+import com.google.inject.Binder;
+import com.google.inject.TypeLiteral;
+import com.google.inject.multibindings.Multibinder;
+
+import java.lang.annotation.Annotation;
+import java.util.Properties;
+
+/**
+ * Provides the ability to conditionally bind an item to a set. The condition is based on the value set in the
+ * runtime.properties.
+ *
+ * Usage example:
+ *
+ * ConditionalMultibind.create(props, binder, Animal.class)
+ *                     .addConditionBinding("animal.type", Predicates.equalTo("cat"), Cat.class)
+ *                     .addConditionBinding("animal.type", Predicates.equalTo("dog"), Dog.class);
+ *
+ * At binding time, this will check the value set for property "animal.type" in props. If the value is "cat", it will
+ * add a binding to Cat.class. If the value is "dog", it will add a binding to Dog.class.
+ *
+ * At injection time, you will get the items that satisfy their corresponding predicates by calling
+ * injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>(){}))
+ */
+public class ConditionalMultibind<T>
+{
+
+  /**
+   * Create a ConditionalMultibind that resolves items to be added to the set at "binding" time.
+   *
+   * @param properties the runtime properties.
+   * @param binder     the binder for the injector that is being configured.
+   * @param type       the type that will be injected.
+   * @param <T>        interface type.
+   *
+   * @return An instance of ConditionalMultibind that can be used to add conditional bindings.
+   */
+  public static <T> ConditionalMultibind<T> create(Properties properties, Binder binder, Class<T> type)
+  {
+    return new ConditionalMultibind<T>(properties, Multibinder.<T>newSetBinder(binder, type));
+  }
+
+  /**
+   * Create a ConditionalMultibind that resolves items to be added to the set at "binding" time.
+   *
+   * @param properties     the runtime properties.
+   * @param binder         the binder for the injector that is being configured.
+   * @param type           the type that will be injected.
+   * @param <T>            interface type.
+   * @param annotationType the binding annotation.
+   *
+   * @return An instance of ConditionalMultibind that can be used to add conditional bindings.
+   */
+  public static <T> ConditionalMultibind<T> create(
+      Properties properties,
+      Binder binder,
+      Class<T> type,
+      Class<? extends Annotation> annotationType
+  )
+  {
+    return new ConditionalMultibind<T>(properties, Multibinder.<T>newSetBinder(binder, type, annotationType));
+  }
+
+  /**
+   * Create a ConditionalMultibind that resolves items to be added to the set at "binding" time.
+   *
+   * @param properties the runtime properties.
+   * @param binder     the binder for the injector that is being configured.
+   * @param type       the type that will be injected.
+   * @param <T>        interface type.
+   *
+   * @return An instance of ConditionalMultibind that can be used to add conditional bindings.
+   */
+  public static <T> ConditionalMultibind<T> create(Properties properties, Binder binder, TypeLiteral<T> type)
+  {
+    return new ConditionalMultibind<T>(properties, Multibinder.<T>newSetBinder(binder, type));
+  }
+
+  /**
+   * Create a ConditionalMultibind that resolves items to be added to the set at "binding" time.
+   *
+   * @param properties     the runtime properties.
+   * @param binder         the binder for the injector that is being configured.
+   * @param type           the type that will be injected.
+   * @param <T>            interface type.
+   * @param annotationType the binding annotation.
+   *
+   * @return An instance of ConditionalMultibind that can be used to add conditional bindings.
+   */
+  public static <T> ConditionalMultibind<T> create(
+      Properties properties,
+      Binder binder,
+      TypeLiteral<T> type,
+      Class<? extends Annotation> annotationType
+  )
+  {
+    return new ConditionalMultibind<T>(properties, Multibinder.<T>newSetBinder(binder, type, annotationType));
+  }
+
+
+  private final Properties properties;
+  private final Multibinder<T> multibinder;
+
+  public ConditionalMultibind(Properties properties, Multibinder<T> multibinder)
+  {
+    this.properties = properties;
+    this.multibinder = multibinder;
+  }
+
+  /**
+   * Unconditionally bind target to the set.
+   *
+   * @param target the target class to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more bindings.
+   */
+  public ConditionalMultibind<T> addBinding(Class<? extends T> target)
+  {
+    multibinder.addBinding().to(target);
+    return this;
+  }
+
+  /**
+   * Unconditionally bind target to the set.
+   *
+   * @param target the target instance to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more bindings.
+   */
+  public ConditionalMultibind<T> addBinding(T target)
+  {
+    multibinder.addBinding().toInstance(target);
+    return this;
+  }
+
+  /**
+   * Unconditionally bind target to the set.
+   *
+   * @param target the target type to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more bindings.
+   */
+  public ConditionalMultibind<T> addBinding(TypeLiteral<T> target)
+  {
+    multibinder.addBinding().to(target);
+    return this;
+  }
+
+  /**
+   * Conditionally bind target to the set. If "condition" returns true, add a binding to "target".
+   *
+   * @param property  the property to inspect
+   * @param condition the predicate used to decide whether to add a binding to "target"
+   * @param target    the target class to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more conditional bindings.
+   */
+  public ConditionalMultibind<T> addConditionBinding(
+      String property,
+      Predicate<String> condition,
+      Class<? extends T> target
+  )
+  {
+    final String value = properties.getProperty(property);
+    if (value == null) {
+      return this;
+    }
+    if (condition.apply(value)) {
+      multibinder.addBinding().to(target);
+    }
+    return this;
+  }
+
+  /**
+   * Conditionally bind target to the set. If "condition" returns true, add a binding to "target".
+   *
+   * @param property  the property to inspect
+   * @param condition the predicate used to decide whether to add a binding to "target"
+   * @param target    the target instance to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more conditional bindings.
+   */
+  public ConditionalMultibind<T> addConditionBinding(
+      String property,
+      Predicate<String> condition,
+      T target
+  )
+  {
+    final String value = properties.getProperty(property);
+    if (value == null) {
+      return this;
+    }
+    if (condition.apply(value)) {
+      multibinder.addBinding().toInstance(target);
+    }
+    return this;
+  }
+
+  /**
+   * Conditionally bind target to the set. If "condition" returns true, add a binding to "target".
+   *
+   * @param property  the property to inspect
+   * @param condition the predicate used to decide whether to add a binding to "target"
+   * @param target    the target type to which it adds a binding.
+   *
+   * @return self to support a fluent syntax for adding more conditional bindings.
+   */
+  public ConditionalMultibind<T> addConditionBinding(
+      String property,
+      Predicate<String> condition,
+      TypeLiteral<T> target
+  )
+  {
+    final String value = properties.getProperty(property);
+    if (value == null) {
+      return this;
+    }
+    if (condition.apply(value)) {
+      multibinder.addBinding().to(target);
+    }
+    return this;
+  }
+}
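
The Animal/Cat/Dog example from the javadoc above, expanded into a runnable sketch (the classes and property values are illustrative):

```
import com.google.common.base.Predicates;
import com.google.inject.Binder;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Key;
import com.google.inject.Module;
import com.google.inject.TypeLiteral;
import io.druid.guice.ConditionalMultibind;
import java.util.Properties;
import java.util.Set;

public class ConditionalMultibindSketch
{
  interface Animal {}
  static class Cat implements Animal {}
  static class Dog implements Animal {}

  public static void main(String[] args)
  {
    final Properties props = new Properties();
    props.setProperty("animal.type", "cat");

    Injector injector = Guice.createInjector(new Module()
    {
      @Override
      public void configure(Binder binder)
      {
        ConditionalMultibind.create(props, binder, Animal.class)
                            .addConditionBinding("animal.type", Predicates.equalTo("cat"), Cat.class)
                            .addConditionBinding("animal.type", Predicates.equalTo("dog"), Dog.class);
      }
    });

    // Only the Cat binding matched, so the injected set has a single element.
    Set<Animal> animals = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>() {}));
    System.out.println(animals.size()); // 1
  }
}
```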
diff --git a/api/src/main/java/io/druid/guice/DruidGuiceExtensions.java b/api/src/main/java/io/druid/guice/DruidGuiceExtensions.java
new file mode 100644
index 00000000000..149f72c9be7
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/DruidGuiceExtensions.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Binder;
+import com.google.inject.Module;
+
+/**
+ */
+public class DruidGuiceExtensions implements Module
+{
+  @Override
+  public void configure(Binder binder)
+  {
+    binder.bindScope(LazySingleton.class, DruidScopes.SINGLETON);
+  }
+}
diff --git a/api/src/main/java/io/druid/guice/DruidScopes.java b/api/src/main/java/io/druid/guice/DruidScopes.java
new file mode 100644
index 00000000000..a837928a2a7
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/DruidScopes.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Key;
+import com.google.inject.Provider;
+import com.google.inject.Scope;
+import com.google.inject.Scopes;
+
+/**
+ */
+public class DruidScopes
+{
+  public static final Scope SINGLETON = new Scope()
+  {
+    @Override
+    public <T> Provider<T> scope(Key<T> key, Provider<T> unscoped)
+    {
+      return Scopes.SINGLETON.scope(key, unscoped);
+    }
+
+    @Override
+    public String toString()
+    {
+      return "DruidScopes.SINGLETON";
+    }
+  };
+}
diff --git a/api/src/main/java/io/druid/guice/Jerseys.java b/api/src/main/java/io/druid/guice/Jerseys.java
new file mode 100644
index 00000000000..9c0163a4fb5
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/Jerseys.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Binder;
+import com.google.inject.TypeLiteral;
+import com.google.inject.multibindings.Multibinder;
+import io.druid.guice.annotations.JSR311Resource;
+
+/**
+ */
+public class Jerseys
+{
+  public static void addResource(Binder binder, Class<?> resourceClazz)
+  {
+    Multibinder.newSetBinder(binder, new TypeLiteral<Class<?>>(){}, JSR311Resource.class)
+               .addBinding()
+               .toInstance(resourceClazz);
+  }
+}
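
A sketch of how a module might use addResource; the resource class here is illustrative, and the @JSR311Resource set it lands in is consumed wherever the HTTP server is wired up:

```
import com.google.inject.Binder;
import com.google.inject.Module;
import io.druid.guice.Jerseys;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

public class ExampleResourceModule implements Module
{
  @Path("/example/status")
  public static class ExampleResource
  {
    @GET
    public String getStatus()
    {
      return "ok";
    }
  }

  @Override
  public void configure(Binder binder)
  {
    // Adds ExampleResource to the set bound under @JSR311Resource.
    Jerseys.addResource(binder, ExampleResource.class);
  }
}
```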
diff --git a/api/src/main/java/io/druid/guice/JsonConfigProvider.java b/api/src/main/java/io/druid/guice/JsonConfigProvider.java
new file mode 100644
index 00000000000..f9017af2ff9
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/JsonConfigProvider.java
@@ -0,0 +1,213 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.inject.Binder;
+import com.google.inject.Inject;
+import com.google.inject.Key;
+import com.google.inject.Provider;
+import com.google.inject.util.Types;
+
+import java.lang.annotation.Annotation;
+import java.lang.reflect.ParameterizedType;
+import java.util.Properties;
+
+
+/**
+ * Provides a singleton value of type {@code <T>} from {@code Properties} bound in guice.
+ * <br/>
+ * <h3>Usage</h3>
+ * To install this provider, bind it in your guice module, like below.
+ *
+ * <pre>
+ * JsonConfigProvider.bind(binder, "druid.server", DruidServerConfig.class);
+ * </pre>
+ * <br/>
+ * In the above case, {@code druid.server} should be a prefix of keys found in the {@code Properties} bound elsewhere.
+ * The value of that key should directly relate to the fields in {@code DruidServerConfig.class}.
+ *
+ * <h3>Implementation</h3>
+ * <br/>
+ * The state of {@code <T>} is defined by the value of the property {@code propertyBase}.
+ * This value is a json structure, decoded via {@link JsonConfigurator#configurate(java.util.Properties, String, Class)}.
+ * <br/>
+ *
+ * An example might be if DruidServerConfig.class were
+ *
+ * <pre>
+ *   public class DruidServerConfig
+ *   {
+ *     @JsonProperty @NotNull public String hostname = null;
+ *     @JsonProperty @Min(1025) public int port = 8080;
+ *   }
+ * </pre>
+ *
+ * And your Properties object had in it
+ *
+ * <pre>
+ *   druid.server.hostname=0.0.0.0
+ *   druid.server.port=3333
+ * </pre>
+ *
+ * Then this would bind a singleton instance of a DruidServerConfig object with hostname = "0.0.0.0" and port = 3333.
+ *
+ * If the port weren't set in the properties, then the default of 8080 would be taken.  Essentially, it is the same as
+ * subtracting the "druid.server" prefix from the properties and building a Map which is then passed into
+ * ObjectMapper.convertValue()
+ *
+ * @param <T> type of config object to provide.
+ */
+public class JsonConfigProvider<T> implements Provider<Supplier<T>>
+{
+  @SuppressWarnings("unchecked")
+  public static <T> void bind(Binder binder, String propertyBase, Class<T> classToProvide)
+  {
+    bind(
+        binder,
+        propertyBase,
+        classToProvide,
+        Key.get(classToProvide),
+        (Key) Key.get(Types.newParameterizedType(Supplier.class, classToProvide))
+    );
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <T> void bind(Binder binder, String propertyBase, Class<T> classToProvide, Annotation annotation)
+  {
+    bind(
+        binder,
+        propertyBase,
+        classToProvide,
+        Key.get(classToProvide, annotation),
+        (Key) Key.get(Types.newParameterizedType(Supplier.class, classToProvide), annotation)
+    );
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <T> void bind(
+      Binder binder,
+      String propertyBase,
+      Class<T> classToProvide,
+      Class<? extends Annotation> annotation
+  )
+  {
+    bind(
+        binder,
+        propertyBase,
+        classToProvide,
+        Key.get(classToProvide, annotation),
+        (Key) Key.get(Types.newParameterizedType(Supplier.class, classToProvide), annotation)
+    );
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <T> void bind(
+      Binder binder,
+      String propertyBase,
+      Class<T> clazz,
+      Key<T> instanceKey,
+      Key<Supplier<T>> supplierKey
+  )
+  {
+    binder.bind(supplierKey).toProvider((Provider) of(propertyBase, clazz)).in(LazySingleton.class);
+    binder.bind(instanceKey).toProvider(new SupplierProvider<T>(supplierKey));
+  }
+
+  @SuppressWarnings("unchecked")
+  public static <T> void bindInstance(
+      Binder binder,
+      Key<T> bindKey,
+      T instance
+  )
+  {
+    binder.bind(bindKey).toInstance(instance);
+
+    final ParameterizedType supType = Types.newParameterizedType(Supplier.class, bindKey.getTypeLiteral().getType());
+    final Key supplierKey;
+
+    if (bindKey.getAnnotationType() != null) {
+      supplierKey = Key.get(supType, bindKey.getAnnotationType());
+    }
+    else if (bindKey.getAnnotation() != null) {
+      supplierKey = Key.get(supType, bindKey.getAnnotation());
+    }
+    else {
+      supplierKey = Key.get(supType);
+    }
+
+    binder.bind(supplierKey).toInstance(Suppliers.<T>ofInstance(instance));
+  }
+
+  public static <T> JsonConfigProvider<T> of(String propertyBase, Class<T> classToProvide)
+  {
+    return new JsonConfigProvider<T>(propertyBase, classToProvide);
+  }
+
+  private final String propertyBase;
+  private final Class<T> classToProvide;
+
+  private Properties props;
+  private JsonConfigurator configurator;
+
+  private Supplier<T> retVal = null;
+
+  public JsonConfigProvider(
+      String propertyBase,
+      Class<T> classToProvide
+  )
+  {
+    this.propertyBase = propertyBase;
+    this.classToProvide = classToProvide;
+  }
+
+  @Inject
+  public void inject(
+      Properties props,
+      JsonConfigurator configurator
+  )
+  {
+    this.props = props;
+    this.configurator = configurator;
+  }
+
+  @Override
+  public Supplier<T> get()
+  {
+    if (retVal != null) {
+      return retVal;
+    }
+
+    try {
+      final T config = configurator.configurate(props, propertyBase, classToProvide);
+      retVal = Suppliers.ofInstance(config);
+    }
+    catch (RuntimeException e) {
+      // When a runtime exception gets thrown out, this provider will get called again if the object is asked for again.
+      // That call would produce the same failure, because nothing it depends on will have changed in the meantime.
+      // Guice would then report the same error multiple times, which is pretty annoying. Instead, cache a supplier of
+      // null and rethrow; subsequent calls return the cached supplier.  This is technically enforcing a singleton, but such is life.
+      retVal = Suppliers.ofInstance(null);
+      throw e;
+    }
+    return retVal;
+  }
+}
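
A self-contained sketch of the wiring described in the javadoc above. The config class and property values are illustrative; the Properties, ObjectMapper, and Validator bindings are supplied inline here (Druid's core modules normally provide them), and building the Validator assumes a Bean Validation implementation such as hibernate-validator on the classpath:

```
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.inject.Binder;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;
import io.druid.guice.DruidGuiceExtensions;
import io.druid.guice.JsonConfigProvider;
import java.util.Properties;
import javax.validation.Validation;
import javax.validation.Validator;

public class JsonConfigProviderSketch
{
  public static class DruidServerConfig
  {
    @JsonProperty public String hostname = null;
    @JsonProperty public int port = 8080;
  }

  public static void main(String[] args)
  {
    final Properties props = new Properties();
    props.setProperty("druid.server.hostname", "0.0.0.0");
    props.setProperty("druid.server.port", "3333");

    Injector injector = Guice.createInjector(
        new DruidGuiceExtensions(), // provides the LazySingleton scope used by bind()
        new Module()
        {
          @Override
          public void configure(Binder binder)
          {
            binder.bind(Properties.class).toInstance(props);
            binder.bind(ObjectMapper.class).toInstance(new ObjectMapper());
            binder.bind(Validator.class)
                  .toInstance(Validation.buildDefaultValidatorFactory().getValidator());
            JsonConfigProvider.bind(binder, "druid.server", DruidServerConfig.class);
          }
        }
    );

    DruidServerConfig config = injector.getInstance(DruidServerConfig.class);
    System.out.println(config.hostname + ":" + config.port); // 0.0.0.0:3333
  }
}
```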
diff --git a/api/src/main/java/io/druid/guice/JsonConfigurator.java b/api/src/main/java/io/druid/guice/JsonConfigurator.java
new file mode 100644
index 00000000000..dd829505db0
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/JsonConfigurator.java
@@ -0,0 +1,186 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.fasterxml.jackson.annotation.JacksonInject;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.introspect.AnnotatedField;
+import com.fasterxml.jackson.databind.introspect.BeanPropertyDefinition;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Function;
+import com.google.common.base.Strings;
+import com.google.common.base.Throwables;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.inject.Inject;
+import com.google.inject.ProvisionException;
+import com.google.inject.spi.Message;
+import io.druid.java.util.common.logger.Logger;
+
+import javax.validation.ConstraintViolation;
+import javax.validation.ElementKind;
+import javax.validation.Path;
+import javax.validation.Validator;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ */
+public class JsonConfigurator
+{
+  private static final Logger log = new Logger(JsonConfigurator.class);
+
+  private final ObjectMapper jsonMapper;
+  private final Validator validator;
+
+  @Inject
+  public JsonConfigurator(
+      ObjectMapper jsonMapper,
+      Validator validator
+  )
+  {
+    this.jsonMapper = jsonMapper;
+    this.validator = validator;
+  }
+
+  public <T> T configurate(Properties props, String propertyPrefix, Class<T> clazz) throws ProvisionException
+  {
+    verifyClazzIsConfigurable(jsonMapper, clazz);
+
+    // Make the prefix end with a period so we only pick up properties nested under it.
+    final String propertyBase = propertyPrefix.endsWith(".") ? propertyPrefix : propertyPrefix + ".";
+
+    Map<String, Object> jsonMap = Maps.newHashMap();
+    for (String prop : props.stringPropertyNames()) {
+      if (prop.startsWith(propertyBase)) {
+        final String propValue = props.getProperty(prop);
+        Object value;
+        try {
+          // Jackson expects plain strings to be quoted, so quote any value that does not look like a JSON object or array.
+          String modifiedPropValue = propValue;
+          if (! (modifiedPropValue.startsWith("[") || modifiedPropValue.startsWith("{"))) {
+            modifiedPropValue = jsonMapper.writeValueAsString(propValue);
+          }
+          value = jsonMapper.readValue(modifiedPropValue, Object.class);
+        }
+        catch (IOException e) {
+          log.info(e, "Unable to parse [%s]=[%s] as a json object, using as is.", prop, propValue);
+          value = propValue;
+        }
+
+        jsonMap.put(prop.substring(propertyBase.length()), value);
+      }
+    }
+
+    final T config;
+    try {
+      config = jsonMapper.convertValue(jsonMap, clazz);
+    }
+    catch (IllegalArgumentException e) {
+      throw new ProvisionException(
+          String.format("Problem parsing object at prefix[%s]: %s.", propertyPrefix, e.getMessage()), e
+      );
+    }
+
+    final Set<ConstraintViolation<T>> violations = validator.validate(config);
+    if (!violations.isEmpty()) {
+      List<String> messages = Lists.newArrayList();
+
+      for (ConstraintViolation<T> violation : violations) {
+        String path = "";
+        try {
+          Class<?> beanClazz = violation.getRootBeanClass();
+          final Iterator<Path.Node> iter = violation.getPropertyPath().iterator();
+          while (iter.hasNext()) {
+            Path.Node next = iter.next();
+            if (next.getKind() == ElementKind.PROPERTY) {
+              final String fieldName = next.getName();
+              final Field theField = beanClazz.getDeclaredField(fieldName);
+
+              if (theField.getAnnotation(JacksonInject.class) != null) {
+                path = String.format(" -- Injected field[%s] not bound!?", fieldName);
+                break;
+              }
+
+              JsonProperty annotation = theField.getAnnotation(JsonProperty.class);
+              final boolean noAnnotationValue = annotation == null || Strings.isNullOrEmpty(annotation.value());
+              final String pathPart = noAnnotationValue ? fieldName : annotation.value();
+              if (path.isEmpty()) {
+                path += pathPart;
+              }
+              else {
+                path += "." + pathPart;
+              }
+            }
+          }
+        }
+        catch (NoSuchFieldException e) {
+          throw Throwables.propagate(e);
+        }
+
+        messages.add(String.format("%s - %s", path, violation.getMessage()));
+      }
+
+      throw new ProvisionException(
+          Iterables.transform(
+              messages,
+              new Function<String, Message>()
+              {
+                @Override
+                public Message apply(String input)
+                {
+                  return new Message(String.format("%s%s", propertyBase, input));
+                }
+              }
+          )
+      );
+    }
+
+    log.info("Loaded class[%s] from props[%s] as [%s]", clazz, propertyBase, config);
+
+    return config;
+  }
+
+  @VisibleForTesting
+  public static <T> void verifyClazzIsConfigurable(ObjectMapper mapper, Class<T> clazz)
+  {
+    final List<BeanPropertyDefinition> beanDefs = mapper.getSerializationConfig()
+                                                        .introspect(mapper.constructType(clazz))
+                                                        .findProperties();
+    for (BeanPropertyDefinition beanDef : beanDefs) {
+      final AnnotatedField field = beanDef.getField();
+      if (field == null || !field.hasAnnotation(JsonProperty.class)) {
+        throw new ProvisionException(
+            String.format(
+                "JsonConfigurator requires Jackson-annotated Config objects to have field annotations. %s doesn't",
+                clazz
+            )
+        );
+      }
+    }
+  }
+}
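
The prefix stripping and quoting behavior above can be exercised directly; the config class and property names here are illustrative, and the Validator again assumes a Bean Validation implementation on the classpath. A value starting with '[' or '{' is parsed as JSON, anything else is treated as a scalar:

```
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.druid.guice.JsonConfigurator;
import java.util.List;
import java.util.Properties;
import javax.validation.Validation;

public class JsonConfiguratorSketch
{
  public static class ExtensionsConfig
  {
    @JsonProperty public List<String> loadList = null;
    @JsonProperty public String directory = "extensions";
  }

  public static void main(String[] args)
  {
    Properties props = new Properties();
    props.setProperty("druid.extensions.loadList", "[\"druid-kafka\", \"druid-s3\"]"); // parsed as a JSON array
    props.setProperty("druid.extensions.directory", "/opt/druid/extensions");          // treated as a scalar

    JsonConfigurator configurator = new JsonConfigurator(
        new ObjectMapper(),
        Validation.buildDefaultValidatorFactory().getValidator()
    );

    ExtensionsConfig config = configurator.configurate(props, "druid.extensions", ExtensionsConfig.class);
    System.out.println(config.loadList);  // [druid-kafka, druid-s3]
    System.out.println(config.directory); // /opt/druid/extensions
  }
}
```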
diff --git a/api/src/main/java/io/druid/guice/KeyHolder.java b/api/src/main/java/io/druid/guice/KeyHolder.java
new file mode 100644
index 00000000000..24533fcdd5a
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/KeyHolder.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Key;
+
+/**
+ */
+public class KeyHolder<T>
+{
+  private final Key<? extends T> key;
+
+  public KeyHolder(
+      Key<? extends T> key
+  )
+  {
+    this.key = key;
+  }
+
+  public Key<? extends T> getKey()
+  {
+    return key;
+  }
+}
diff --git a/api/src/main/java/io/druid/guice/LazySingleton.java b/api/src/main/java/io/druid/guice/LazySingleton.java
new file mode 100644
index 00000000000..452621df812
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/LazySingleton.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.ScopeAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@Target({ElementType.TYPE, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+@ScopeAnnotation
+public @interface LazySingleton
+{
+}
diff --git a/api/src/main/java/io/druid/guice/LifecycleModule.java b/api/src/main/java/io/druid/guice/LifecycleModule.java
new file mode 100644
index 00000000000..eb65d7c2e28
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/LifecycleModule.java
@@ -0,0 +1,164 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Binder;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Module;
+import com.google.inject.Provides;
+import com.google.inject.TypeLiteral;
+import com.google.inject.multibindings.Multibinder;
+import com.google.inject.name.Names;
+
+import io.druid.java.util.common.lifecycle.Lifecycle;
+
+import java.lang.annotation.Annotation;
+import java.util.Set;
+
+/**
+ * A Module to add lifecycle management to the injector.  {@link DruidGuiceExtensions} must also be included.
+ */
+public class LifecycleModule implements Module
+{
+  private final LifecycleScope scope = new LifecycleScope(Lifecycle.Stage.NORMAL);
+  private final LifecycleScope lastScope = new LifecycleScope(Lifecycle.Stage.LAST);
+
+  /**
+   * Registers a class to instantiate eagerly.  Classes mentioned here will be pulled out of
+   * the injector with an injector.getInstance() call when the lifecycle is created.
+   *
+   * Eagerly loaded classes will *not* be automatically added to the Lifecycle unless they are bound to the proper
+   * scope.  That is, they are generally eagerly loaded because the loading operation will produce some beneficial
+   * side-effect even if nothing actually directly depends on the instance.
+   *
+   * This mechanism exists to allow the {@link io.druid.java.util.common.lifecycle.Lifecycle} to be the primary entry point from the injector, not to
+   * auto-register things with the {@link io.druid.java.util.common.lifecycle.Lifecycle}.  It is also possible to just bind things eagerly with Guice;
+   * it is not clear which approach is actually best.  This is more explicit, but eager bindings inside of modules
+   * are less error-prone.
+   *
+   * @param binder the binder to register with
+   * @param clazz  the class to instantiate
+   */
+  public static void register(Binder binder, Class<?> clazz)
+  {
+    registerKey(binder, Key.get(clazz));
+  }
+
+  /**
+   * Registers a class/annotation combination to instantiate eagerly.  Classes mentioned here will be pulled out of
+   * the injector with an injector.getInstance() call when the lifecycle is created.
+   *
+   * Eagerly loaded classes will *not* be automatically added to the Lifecycle unless they are bound to the proper
+   * scope.  That is, they are generally eagerly loaded because the loading operation will produce some beneficial
+   * side-effect even if nothing actually directly depends on the instance.
+   *
+   * This mechanism exists to allow the {@link io.druid.java.util.common.lifecycle.Lifecycle} to be the primary entry point from the injector, not to
+   * auto-register things with the {@link io.druid.java.util.common.lifecycle.Lifecycle}.  It is also possible to just bind things eagerly with Guice;
+   * it is not clear which approach is actually best.  This is more explicit, but eager bindings inside of modules
+   * are less error-prone.
+   *
+   * @param binder     the binder to register with
+   * @param clazz      the class to instantiate
+   * @param annotation the annotation instance to register with Guice, usually a Named annotation
+   */
+  public static void register(Binder binder, Class<?> clazz, Annotation annotation)
+  {
+    registerKey(binder, Key.get(clazz, annotation));
+  }
+
+  /**
+   * Registers a class/annotation combination to instantiate eagerly.  Classes mentioned here will be pulled out of
+   * the injector with an injector.getInstance() call when the lifecycle is created.
+   *
+   * Eagerly loaded classes will *not* be automatically added to the Lifecycle unless they are bound to the proper
+   * scope.  That is, they are generally eagerly loaded because the loading operation will produce some beneficial
+   * side-effect even if nothing actually directly depends on the instance.
+   *
+   * This mechanism exists to allow the {@link io.druid.java.util.common.lifecycle.Lifecycle} to be the primary entry point from the injector, not to
+   * auto-register things with the {@link io.druid.java.util.common.lifecycle.Lifecycle}.  It is also possible to just bind things eagerly with Guice;
+   * it is not clear which approach is actually best.  This is more explicit, but eager bindings inside of modules
+   * are less error-prone.
+   *
+   * @param binder     the binder to register with
+   * @param clazz      the class to instantiate
+   * @param annotation the annotation class to register with Guice
+   */
+  public static void register(Binder binder, Class<?> clazz, Class<? extends Annotation> annotation)
+  {
+    registerKey(binder, Key.get(clazz, annotation));
+  }
+
+  /**
+   * Registers a key to instantiate eagerly.  {@link com.google.inject.Key}s mentioned here will be pulled out of
+   * the injector with an injector.getInstance() call when the lifecycle is created.
+   *
+   * Eagerly loaded classes will *not* be automatically added to the Lifecycle unless they are bound to the proper
+   * scope.  That is, they are generally eagerly loaded because the loading operation will produce some beneficial
+   * side-effect even if nothing actually directly depends on the instance.
+   *
+   * This mechanism exists to allow the {@link io.druid.java.util.common.lifecycle.Lifecycle} to be the primary entry point
+   * from the injector, not to auto-register things with the {@link io.druid.java.util.common.lifecycle.Lifecycle}.  It is
+   * also possible to just bind things eagerly with Guice; it is not clear which approach is actually best.
+   * This is more explicit, but eager bindings inside of modules are less error-prone.
+   *
+   * @param binder the binder to register with
+   * @param key    the key to instantiate eagerly when the lifecycle is created
+   */
+  public static void registerKey(Binder binder, Key<?> key)
+  {
+    getEagerBinder(binder).addBinding().toInstance(new KeyHolder<Object>(key));
+  }
+
+  private static Multibinder<KeyHolder> getEagerBinder(Binder binder)
+  {
+    return Multibinder.newSetBinder(binder, KeyHolder.class, Names.named("lifecycle"));
+  }
+
+  @Override
+  public void configure(Binder binder)
+  {
+    getEagerBinder(binder); // Load up the eager binder so that it will inject the empty set at a minimum.
+
+    binder.bindScope(ManageLifecycle.class, scope);
+    binder.bindScope(ManageLifecycleLast.class, lastScope);
+  }
+
+  @Provides @LazySingleton
+  public Lifecycle getLifecycle(final Injector injector)
+  {
+    final Key<Set<KeyHolder>> keyHolderKey = Key.get(new TypeLiteral<Set<KeyHolder>>(){}, Names.named("lifecycle"));
+    final Set<KeyHolder> eagerClasses = injector.getInstance(keyHolderKey);
+
+    Lifecycle lifecycle = new Lifecycle()
+    {
+      @Override
+      public void start() throws Exception
+      {
+        for (KeyHolder<?> holder : eagerClasses) {
+          injector.getInstance(holder.getKey()); // Pull the key so as to "eagerly" load up the class.
+        }
+        super.start();
+      }
+    };
+    scope.setLifecycle(lifecycle);
+    lastScope.setLifecycle(lifecycle);
+
+    return lifecycle;
+  }
+}
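
A sketch of the registration flow end to end: a @ManageLifecycle-scoped service is registered eagerly, pulled from the injector when the Lifecycle starts, and started and stopped with it. The service class is illustrative; the @LifecycleStart/@LifecycleStop annotations come from io.druid.java.util.common.lifecycle:

```
import com.google.inject.Binder;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Module;
import io.druid.guice.DruidGuiceExtensions;
import io.druid.guice.LifecycleModule;
import io.druid.guice.ManageLifecycle;
import io.druid.java.util.common.lifecycle.Lifecycle;
import io.druid.java.util.common.lifecycle.LifecycleStart;
import io.druid.java.util.common.lifecycle.LifecycleStop;

public class LifecycleSketch
{
  @ManageLifecycle
  public static class HeartbeatService
  {
    @LifecycleStart
    public void start()
    {
      System.out.println("heartbeat started");
    }

    @LifecycleStop
    public void stop()
    {
      System.out.println("heartbeat stopped");
    }
  }

  public static void main(String[] args) throws Exception
  {
    Injector injector = Guice.createInjector(
        new DruidGuiceExtensions(), // binds the LazySingleton scope used by getLifecycle()
        new LifecycleModule(),
        new Module()
        {
          @Override
          public void configure(Binder binder)
          {
            // Eagerly pulled via injector.getInstance() when the Lifecycle is created.
            LifecycleModule.register(binder, HeartbeatService.class);
          }
        }
    );

    Lifecycle lifecycle = injector.getInstance(Lifecycle.class);
    lifecycle.start(); // prints "heartbeat started"
    lifecycle.stop();  // prints "heartbeat stopped"
  }
}
```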
diff --git a/api/src/main/java/io/druid/guice/LifecycleScope.java b/api/src/main/java/io/druid/guice/LifecycleScope.java
new file mode 100644
index 00000000000..95269baa95b
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/LifecycleScope.java
@@ -0,0 +1,93 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.collect.Lists;
+import com.google.inject.Key;
+import com.google.inject.Provider;
+import com.google.inject.Scope;
+
+import io.druid.java.util.common.lifecycle.Lifecycle;
+import io.druid.java.util.common.logger.Logger;
+
+import java.util.List;
+
+/**
+ * A scope that adds objects to the Lifecycle.  This is by definition also a lazy singleton scope.
+ */
+public class LifecycleScope implements Scope
+{
+  private static final Logger log = new Logger(LifecycleScope.class);
+  private final Lifecycle.Stage stage;
+
+  private Lifecycle lifecycle;
+  private final List<Object> instances = Lists.newLinkedList();
+
+  public LifecycleScope(Lifecycle.Stage stage)
+  {
+    this.stage = stage;
+  }
+
+  public void setLifecycle(Lifecycle lifecycle)
+  {
+    synchronized (instances) {
+      this.lifecycle = lifecycle;
+      for (Object instance : instances) {
+        lifecycle.addManagedInstance(instance, stage);
+      }
+    }
+  }
+
+  @Override
+  public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped)
+  {
+    return new Provider<T>()
+    {
+      private volatile T value = null;
+
+      @Override
+      public synchronized T get()
+      {
+        if (value == null) {
+          final T retVal = unscoped.get();
+
+          synchronized (instances) {
+            if (lifecycle == null) {
+              instances.add(retVal);
+            }
+            else {
+              try {
+                lifecycle.addMaybeStartManagedInstance(retVal, stage);
+              }
+              catch (Exception e) {
+                log.warn(e, "Caught exception when trying to create a[%s]", key);
+                return null;
+              }
+            }
+          }
+
+          value = retVal;
+        }
+
+        return value;
+      }
+    };
+  }
+}
diff --git a/api/src/main/java/io/druid/guice/ManageLifecycle.java b/api/src/main/java/io/druid/guice/ManageLifecycle.java
new file mode 100644
index 00000000000..4256467d7f6
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/ManageLifecycle.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.ScopeAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Marks the object to be managed by {@link io.druid.java.util.common.lifecycle.Lifecycle}
+ *
+ * This Scope gets defined by {@link io.druid.guice.LifecycleModule}
+ */
+@Target({ ElementType.TYPE, ElementType.METHOD })
+@Retention(RetentionPolicy.RUNTIME)
+@ScopeAnnotation
+public @interface ManageLifecycle
+{
+}
diff --git a/api/src/main/java/io/druid/guice/ManageLifecycleLast.java b/api/src/main/java/io/druid/guice/ManageLifecycleLast.java
new file mode 100644
index 00000000000..e542ad10c5e
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/ManageLifecycleLast.java
@@ -0,0 +1,39 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.ScopeAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * Marks the object to be managed by {@link io.druid.java.util.common.lifecycle.Lifecycle} and set to be on Stage.LAST
+ *
+ * This Scope gets defined by {@link io.druid.guice.LifecycleModule}
+ */
+@Target({ ElementType.TYPE, ElementType.METHOD })
+@Retention(RetentionPolicy.RUNTIME)
+@ScopeAnnotation
+public @interface ManageLifecycleLast
+{
+}
diff --git a/api/src/main/java/io/druid/guice/PolyBind.java b/api/src/main/java/io/druid/guice/PolyBind.java
new file mode 100644
index 00000000000..01c44aefd31
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/PolyBind.java
@@ -0,0 +1,184 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.inject.Binder;
+import com.google.inject.Inject;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Provider;
+import com.google.inject.ProvisionException;
+import com.google.inject.TypeLiteral;
+import com.google.inject.binder.ScopedBindingBuilder;
+import com.google.inject.multibindings.MapBinder;
+import com.google.inject.util.Types;
+
+import java.lang.reflect.ParameterizedType;
+import java.util.Map;
+import java.util.Properties;
+
+/**
+ * Provides the ability to create "polymorphic" bindings, where the polymorphism is actually just a decision
+ * based on a value in a Properties.
+ *
+ * The workflow is that you first create a choice by calling createChoice().  Then you create options using the binder
+ * returned by the optionBinder() method.  Multiple different modules can call optionBinder and all options will be
+ * reflected at injection time as long as equivalent interface Key objects are passed into the various methods.
+ */
+public class PolyBind
+{
+  /**
+   * Sets up a "choice" for the injector to resolve at injection time.
+   *
+   * @param binder the binder for the injector that is being configured
+   * @param property the property that will be checked to determine the implementation choice
+   * @param interfaceKey the interface that will be injected using this choice
+   * @param defaultKey the default instance to be injected if the property doesn't match a choice.  Can be null
+   * @param <T> interface type
+   * @return A ScopedBindingBuilder so that scopes can be added to the binding, if required.
+   */
+  public static <T> ScopedBindingBuilder createChoice(
+      Binder binder,
+      String property,
+      Key<T> interfaceKey,
+      Key<? extends T> defaultKey
+  )
+  {
+    return createChoiceWithDefault(binder, property, interfaceKey, defaultKey, null);
+  }
+
+  /**
+   * Sets up a "choice" for the injector to resolve at injection time.
+   *
+   * @param binder the binder for the injector that is being configured
+   * @param property the property that will be checked to determine the implementation choice
+   * @param interfaceKey the interface that will be injected using this choice
+   * @param defaultKey the default instance to be injected if the property doesn't match a choice.  Can be null
+   * @param defaultPropertyValue the default property value to use if the property is not set.
+   * @param <T> interface type
+   * @return A ScopedBindingBuilder so that scopes can be added to the binding, if required.
+   */
+  public static <T> ScopedBindingBuilder createChoiceWithDefault(
+      Binder binder,
+      String property,
+      Key<T> interfaceKey,
+      Key<? extends T> defaultKey,
+      String defaultPropertyValue
+  )
+  {
+    return binder.bind(interfaceKey).toProvider(new ConfiggedProvider<T>(interfaceKey, property, defaultKey, defaultPropertyValue));
+  }
+
+  /**
+   * Binds an option for a specific choice.  The choice must already be registered on the injector for this to work.
+   *
+   * @param binder the binder for the injector that is being configured
+   * @param interfaceKey the interface that will have an option added to it.  This must equal the
+   *                     Key provided to createChoice
+   * @param <T> interface type
+   * @return A MapBinder that can be used to create the actual option bindings.
+   */
+  public static <T> MapBinder<String, T> optionBinder(Binder binder, Key<T> interfaceKey)
+  {
+    final TypeLiteral<T> interfaceType = interfaceKey.getTypeLiteral();
+
+    if (interfaceKey.getAnnotation() != null) {
+      return MapBinder.newMapBinder(
+          binder, TypeLiteral.get(String.class), interfaceType, interfaceKey.getAnnotation()
+      );
+    }
+    else if (interfaceKey.getAnnotationType() != null) {
+      return MapBinder.newMapBinder(
+          binder, TypeLiteral.get(String.class), interfaceType, interfaceKey.getAnnotationType()
+      );
+    }
+    else {
+      return MapBinder.newMapBinder(binder, TypeLiteral.get(String.class), interfaceType);
+    }
+  }
+
+  static class ConfiggedProvider<T> implements Provider<T>
+  {
+    private final Key<T> key;
+    private final String property;
+    private final Key<? extends T> defaultKey;
+    private final String defaultPropertyValue;
+
+    private Injector injector;
+    private Properties props;
+
+    ConfiggedProvider(
+        Key<T> key,
+        String property,
+        Key<? extends T> defaultKey,
+        String defaultPropertyValue
+    )
+    {
+      this.key = key;
+      this.property = property;
+      this.defaultKey = defaultKey;
+      this.defaultPropertyValue = defaultPropertyValue;
+    }
+
+    @Inject
+    void configure(Injector injector, Properties props)
+    {
+      this.injector = injector;
+      this.props = props;
+    }
+
+    @Override
+    @SuppressWarnings("unchecked")
+    public T get()
+    {
+      final ParameterizedType mapType = Types.mapOf(
+          String.class, Types.newParameterizedType(Provider.class, key.getTypeLiteral().getType())
+      );
+
+      final Map<String, Provider<T>> implsMap;
+      if (key.getAnnotation() != null) {
+        implsMap = (Map<String, Provider<T>>) injector.getInstance(Key.get(mapType, key.getAnnotation()));
+      }
+      else if (key.getAnnotationType() != null) {
+        implsMap = (Map<String, Provider<T>>) injector.getInstance(Key.get(mapType, key.getAnnotationType()));
+      }
+      else {
+        implsMap = (Map<String, Provider<T>>) injector.getInstance(Key.get(mapType));
+      }
+
+      String implName = props.getProperty(property);
+      if (implName == null) {
+        implName = defaultPropertyValue;
+      }
+      final Provider<T> provider = implsMap.get(implName);
+
+      if (provider == null) {
+        if (defaultKey == null) {
+          throw new ProvisionException(
+              String.format("Unknown provider[%s] of %s, known options[%s]", implName, key, implsMap.keySet())
+          );
+        }
+        return injector.getInstance(defaultKey);
+      }
+
+      return provider.get();
+    }
+  }
+}
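
The workflow from the class comment, sketched inside a Guice Module's configure(Binder binder). The property name follows Druid convention, but LocalDataSegmentPusher is just a stand-in for any concrete implementation; DataSegmentPusher itself appears later in this diff:

  // In the module that owns the interface:
  PolyBind.createChoiceWithDefault(
      binder,
      "druid.storage.type",               // property consulted at injection time
      Key.get(DataSegmentPusher.class),   // the "choice" key
      null,                               // no default Key; fall back via the property value instead
      "local"                             // value assumed when the property is unset
  );

  // In any module (including extensions) that contributes an option:
  final MapBinder<String, DataSegmentPusher> pushers =
      PolyBind.optionBinder(binder, Key.get(DataSegmentPusher.class));
  pushers.addBinding("local").to(LocalDataSegmentPusher.class);
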
diff --git a/api/src/main/java/io/druid/guice/SupplierProvider.java b/api/src/main/java/io/druid/guice/SupplierProvider.java
new file mode 100644
index 00000000000..32afa505d03
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/SupplierProvider.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.base.Supplier;
+import com.google.inject.Inject;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Provider;
+
+/**
+ */
+public class SupplierProvider<T> implements Provider<T>
+{
+  private final Key<Supplier<T>> supplierKey;
+
+  private Provider<Supplier<T>> supplierProvider;
+
+  public SupplierProvider(
+      Key<Supplier<T>> supplierKey
+  )
+  {
+    this.supplierKey = supplierKey;
+  }
+
+  @Inject
+  public void configure(Injector injector)
+  {
+    this.supplierProvider = injector.getProvider(supplierKey);
+  }
+
+  @Override
+  public T get()
+  {
+    return supplierProvider.get().get();
+  }
+}
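
Put differently, this adapter lets callers inject T directly when only a Supplier<T> is bound. A sketch, with Foo standing in for any bound type:

  binder.bind(Foo.class).toProvider(
      new SupplierProvider<Foo>(Key.get(new TypeLiteral<Supplier<Foo>>() {}))
  );
  // Guice member-injects the provider instance, so configure(Injector)
  // runs before the first call to get().
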
diff --git a/api/src/main/java/io/druid/guice/annotations/Global.java b/api/src/main/java/io/druid/guice/annotations/Global.java
new file mode 100644
index 00000000000..25222ce4bf3
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/annotations/Global.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice.annotations;
+
+import com.google.inject.BindingAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@BindingAnnotation
+@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+public @interface Global
+{
+}
diff --git a/api/src/main/java/io/druid/guice/annotations/JSR311Resource.java b/api/src/main/java/io/druid/guice/annotations/JSR311Resource.java
new file mode 100644
index 00000000000..465840cc7d0
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/annotations/JSR311Resource.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice.annotations;
+
+import com.google.inject.BindingAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@BindingAnnotation
+@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+public @interface JSR311Resource
+{
+}
diff --git a/api/src/main/java/io/druid/guice/annotations/Json.java b/api/src/main/java/io/druid/guice/annotations/Json.java
new file mode 100644
index 00000000000..73dac864e9a
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/annotations/Json.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice.annotations;
+
+import com.google.inject.BindingAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+@BindingAnnotation
+public @interface Json
+{
+}
diff --git a/api/src/main/java/io/druid/guice/annotations/Self.java b/api/src/main/java/io/druid/guice/annotations/Self.java
new file mode 100644
index 00000000000..e6123fbe188
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/annotations/Self.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice.annotations;
+
+import com.google.inject.BindingAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+@BindingAnnotation
+public @interface Self
+{
+}
diff --git a/api/src/main/java/io/druid/guice/annotations/Smile.java b/api/src/main/java/io/druid/guice/annotations/Smile.java
new file mode 100644
index 00000000000..136885a4f46
--- /dev/null
+++ b/api/src/main/java/io/druid/guice/annotations/Smile.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice.annotations;
+
+import com.google.inject.BindingAnnotation;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ */
+@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+@Retention(RetentionPolicy.RUNTIME)
+@BindingAnnotation
+public @interface Smile
+{
+}
diff --git a/api/src/main/java/io/druid/initialization/DruidModule.java b/api/src/main/java/io/druid/initialization/DruidModule.java
new file mode 100644
index 00000000000..9015dca45be
--- /dev/null
+++ b/api/src/main/java/io/druid/initialization/DruidModule.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.initialization;
+
+import com.fasterxml.jackson.databind.Module;
+
+import java.util.List;
+
+/**
+ */
+public interface DruidModule extends com.google.inject.Module
+{
+  public List<? extends Module> getJacksonModules();
+}
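
Extensions implement this to contribute both Guice bindings and Jackson modules in one place. A hypothetical sketch (MyExtensionModule is invented here):

  import com.fasterxml.jackson.databind.Module;
  import com.fasterxml.jackson.databind.module.SimpleModule;
  import com.google.common.collect.ImmutableList;
  import com.google.inject.Binder;

  import java.util.List;

  public class MyExtensionModule implements DruidModule
  {
    @Override
    public List<? extends Module> getJacksonModules()
    {
      // Register custom serializers/deserializers or named subtypes here.
      return ImmutableList.of(new SimpleModule("MyExtensionModule"));
    }

    @Override
    public void configure(Binder binder)
    {
      // Guice bindings for the extension go here.
    }
  }
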
diff --git a/api/src/main/java/io/druid/jackson/CommaListJoinDeserializer.java b/api/src/main/java/io/druid/jackson/CommaListJoinDeserializer.java
new file mode 100644
index 00000000000..883c701bff4
--- /dev/null
+++ b/api/src/main/java/io/druid/jackson/CommaListJoinDeserializer.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.jackson;
+
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.DeserializationContext;
+import com.fasterxml.jackson.databind.deser.std.StdScalarDeserializer;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ */
+public class CommaListJoinDeserializer extends StdScalarDeserializer<List<String>>
+{
+  protected CommaListJoinDeserializer()
+  {
+    super(List.class);
+  }
+
+  @Override
+  public List<String> deserialize(JsonParser jsonParser, DeserializationContext deserializationContext)
+      throws IOException, JsonProcessingException
+  {
+    return Arrays.asList(jsonParser.getText().split(","));
+  }
+}
diff --git a/api/src/main/java/io/druid/jackson/CommaListJoinSerializer.java b/api/src/main/java/io/druid/jackson/CommaListJoinSerializer.java
new file mode 100644
index 00000000000..7e39b7c72ba
--- /dev/null
+++ b/api/src/main/java/io/druid/jackson/CommaListJoinSerializer.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.jackson;
+
+import com.fasterxml.jackson.core.JsonGenerationException;
+import com.fasterxml.jackson.core.JsonGenerator;
+import com.fasterxml.jackson.databind.SerializerProvider;
+import com.fasterxml.jackson.databind.ser.std.StdScalarSerializer;
+import com.google.common.base.Joiner;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ */
+public class CommaListJoinSerializer extends StdScalarSerializer<List<String>>
+{
+  private static final Joiner JOINER = Joiner.on(",");
+
+  protected CommaListJoinSerializer()
+  {
+    super(List.class, true);
+  }
+
+  @Override
+  public void serialize(List<String> value, JsonGenerator jgen, SerializerProvider provider)
+      throws IOException, JsonGenerationException
+  {
+    jgen.writeString(JOINER.join(value));
+  }
+}
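
These two classes pair up on a bean property so a List<String> round-trips as one comma-joined string; DataSegment later in this diff uses exactly this pattern for its dimension and metric lists:

  @JsonProperty
  @JsonSerialize(using = CommaListJoinSerializer.class)
  @JsonDeserialize(using = CommaListJoinDeserializer.class)
  private List<String> dimensions;  // written as "a,b,c" rather than ["a","b","c"]

Note that the deserializer splits naively on commas, so the individual values must not contain them.
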
diff --git a/api/src/main/java/io/druid/js/JavaScriptConfig.java b/api/src/main/java/io/druid/js/JavaScriptConfig.java
new file mode 100644
index 00000000000..6b62431aa88
--- /dev/null
+++ b/api/src/main/java/io/druid/js/JavaScriptConfig.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.js;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class JavaScriptConfig
+{
+  public static final int DEFAULT_OPTIMIZATION_LEVEL = 9;
+
+  private static final JavaScriptConfig ENABLED_INSTANCE = new JavaScriptConfig(true);
+
+  @JsonProperty
+  private boolean enabled = false;
+
+  @JsonCreator
+  public JavaScriptConfig(
+      @JsonProperty("enabled") Boolean enabled
+  )
+  {
+    if (enabled != null) {
+      this.enabled = enabled.booleanValue();
+    }
+  }
+
+  public boolean isEnabled()
+  {
+    return enabled;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    JavaScriptConfig that = (JavaScriptConfig) o;
+
+    return enabled == that.enabled;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return (enabled ? 1 : 0);
+  }
+
+  @Override
+  public String toString()
+  {
+    return "JavaScriptConfig{" +
+           "enabled=" + enabled +
+           '}';
+  }
+
+  public static JavaScriptConfig getEnabledInstance()
+  {
+    return ENABLED_INSTANCE;
+  }
+}
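
Round-tripping the config through Jackson shows the default in action (a sketch; jsonMapper is any ObjectMapper):

  JavaScriptConfig on = jsonMapper.readValue("{\"enabled\": true}", JavaScriptConfig.class);
  JavaScriptConfig off = jsonMapper.readValue("{}", JavaScriptConfig.class);
  // on.isEnabled() == true; off.isEnabled() == false, because a missing
  // "enabled" property leaves the Boolean null and the field at its false default.
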
diff --git a/api/src/main/java/io/druid/query/SegmentDescriptor.java b/api/src/main/java/io/druid/query/SegmentDescriptor.java
new file mode 100644
index 00000000000..ca7dfb2767c
--- /dev/null
+++ b/api/src/main/java/io/druid/query/SegmentDescriptor.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.query;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import org.joda.time.Interval;
+
+/**
+ */
+public class SegmentDescriptor
+{
+  private final Interval interval;
+  private final String version;
+  private final int partitionNumber;
+
+  @JsonCreator
+  public SegmentDescriptor(
+      @JsonProperty("itvl") Interval interval,
+      @JsonProperty("ver") String version,
+      @JsonProperty("part") int partitionNumber
+  )
+  {
+    this.interval = interval;
+    this.version = version;
+    this.partitionNumber = partitionNumber;
+  }
+
+  @JsonProperty("itvl")
+  public Interval getInterval()
+  {
+    return interval;
+  }
+
+  @JsonProperty("ver")
+  public String getVersion()
+  {
+    return version;
+  }
+
+  @JsonProperty("part")
+  public int getPartitionNumber()
+  {
+    return partitionNumber;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    SegmentDescriptor that = (SegmentDescriptor) o;
+
+    if (partitionNumber != that.partitionNumber) {
+      return false;
+    }
+    if (interval != null ? !interval.equals(that.interval) : that.interval != null) {
+      return false;
+    }
+    if (version != null ? !version.equals(that.version) : that.version != null) {
+      return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = interval != null ? interval.hashCode() : 0;
+    result = 31 * result + (version != null ? version.hashCode() : 0);
+    result = 31 * result + partitionNumber;
+    return result;
+  }
+
+  @Override
+  public String toString()
+  {
+    return "SegmentDescriptor{" +
+           "interval=" + interval +
+           ", version='" + version + '\'' +
+           ", partitionNumber=" + partitionNumber +
+           '}';
+  }
+}
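
The shortened property names keep per-segment query payloads small. A sketch of the wire form (assumes an ObjectMapper with Joda-Time support registered, as Druid's own mappers are):

  SegmentDescriptor descriptor =
      new SegmentDescriptor(Interval.parse("2018-01-01/2018-01-02"), "v1", 0);
  // serializes as, roughly:
  // {"itvl":"2018-01-01T00:00:00.000Z/2018-01-02T00:00:00.000Z","ver":"v1","part":0}
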
diff --git a/api/src/main/java/io/druid/segment/SegmentUtils.java b/api/src/main/java/io/druid/segment/SegmentUtils.java
new file mode 100644
index 00000000000..88a28095f43
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/SegmentUtils.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment;
+
+import com.google.common.io.Files;
+import com.google.common.primitives.Ints;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+
+/**
+ */
+public class SegmentUtils
+{
+  public static int getVersionFromDir(File inDir) throws IOException
+  {
+    File versionFile = new File(inDir, "version.bin");
+    if (versionFile.exists()) {
+      return Ints.fromByteArray(Files.toByteArray(versionFile));
+    }
+
+    final File indexFile = new File(inDir, "index.drd");
+    int version;
+    if (indexFile.exists()) {
+      try (InputStream in = new FileInputStream(indexFile)) {
+        version = in.read();
+      }
+      return version;
+    }
+
+    throw new IOException(
+        String.format(
+            "Invalid segment dir [%s]. Can't find either of version.bin or index.drd.",
+            inDir
+        )
+    );
+  }
+}
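
Callers only pass the directory; the fallback order (version.bin first, then the first byte of index.drd) stays internal. A sketch with an illustrative path:

  // Throws IOException if neither version.bin nor index.drd exists in the directory.
  final int version = SegmentUtils.getVersionFromDir(new File("/tmp/druid/some-segment"));
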
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentArchiver.java b/api/src/main/java/io/druid/segment/loading/DataSegmentArchiver.java
new file mode 100644
index 00000000000..b08365cce9e
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentArchiver.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import javax.annotation.Nullable;
+
+public interface DataSegmentArchiver
+{
+  /**
+   * Perform an archive task on the segment and return the resulting segment or null if there was no action needed.
+   *
+   * @param segment The source segment
+   *
+   * @return The segment after archiving or `null` if there was no archiving performed.
+   *
+   * @throws SegmentLoadingException on error
+   */
+  @Nullable
+  DataSegment archive(DataSegment segment) throws SegmentLoadingException;
+
+  /**
+   * Perform the restore from an archived segment and return the resulting segment or null if there was no action
+   * needed.
+   *
+   * @param segment The source (archived) segment
+   *
+   * @return The segment after it has been unarchived
+   *
+   * @throws SegmentLoadingException on error
+   */
+  @Nullable
+  DataSegment restore(DataSegment segment) throws SegmentLoadingException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentFinder.java b/api/src/main/java/io/druid/segment/loading/DataSegmentFinder.java
new file mode 100644
index 00000000000..ef4dafbdba9
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentFinder.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import java.util.Set;
+
+/**
+ * A DataSegmentFinder is responsible for finding Druid segments underneath a specified directory and, optionally,
+ * updating all descriptor.json files on deep storage with the correct loadSpec.
+ */
+public interface DataSegmentFinder
+{
+  /**
+   * This method should first recursively look for descriptor.json (partitionNum_descriptor.json for HDFS data storage) underneath
+   * workingDirPath and then verify that index.zip (partitionNum_index.zip for HDFS data storage) exists in the same folder.
+   * If not, it should throw SegmentLoadingException to let the caller know that descriptor.json exists
+   * while index.zip doesn't. If a segment is found and updateDescriptor is set, then this method should update the
+   * loadSpec in descriptor.json to reflect the location from where it was found. After the search, this method
+   * should return the set of segments that were found.
+   *
+   * @param workingDirPath   the String representation of the working directory path
+   * @param updateDescriptor if true, update loadSpec in descriptor.json if loadSpec's location is different from where
+   *                         descriptor.json was found
+   *
+   * @return a set of segments that were found underneath workingDirPath
+   */
+  Set<DataSegment> findSegments(String workingDirPath, boolean updateDescriptor) throws SegmentLoadingException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentKiller.java b/api/src/main/java/io/druid/segment/loading/DataSegmentKiller.java
new file mode 100644
index 00000000000..ba9b879587a
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentKiller.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import java.io.IOException;
+
+/**
+ */
+public interface DataSegmentKiller
+{
+  void kill(DataSegment segment) throws SegmentLoadingException;
+  void killAll() throws IOException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentMover.java b/api/src/main/java/io/druid/segment/loading/DataSegmentMover.java
new file mode 100644
index 00000000000..81080585cdf
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentMover.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import java.util.Map;
+
+public interface DataSegmentMover
+{
+  public DataSegment move(DataSegment segment, Map<String, Object> targetLoadSpec) throws SegmentLoadingException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentPuller.java b/api/src/main/java/io/druid/segment/loading/DataSegmentPuller.java
new file mode 100644
index 00000000000..46f051138a5
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentPuller.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import java.io.File;
+
+/**
+ * A DataSegmentPuller is responsible for pulling data for a particular segment into a particular directory
+ */
+public interface DataSegmentPuller
+{
+  /**
+   * Pull down segment files for the given DataSegment and put them in the given directory.
+   *
+   * @param segment The segment to pull down files for
+   * @param dir     The directory to store the files in
+   *
+   * @throws SegmentLoadingException if there are any errors
+   */
+  public void getSegmentFiles(DataSegment segment, File dir) throws SegmentLoadingException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentPusher.java b/api/src/main/java/io/druid/segment/loading/DataSegmentPusher.java
new file mode 100644
index 00000000000..f77aa198c8a
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentPusher.java
@@ -0,0 +1,33 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import io.druid.timeline.DataSegment;
+
+import java.io.File;
+import java.io.IOException;
+
+public interface DataSegmentPusher
+{
+  @Deprecated
+  String getPathForHadoop(String dataSource);
+  String getPathForHadoop();
+  DataSegment push(File file, DataSegment segment) throws IOException;
+}
diff --git a/api/src/main/java/io/druid/segment/loading/DataSegmentPusherUtil.java b/api/src/main/java/io/druid/segment/loading/DataSegmentPusherUtil.java
new file mode 100644
index 00000000000..7daa125088d
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/DataSegmentPusherUtil.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import com.google.common.base.Joiner;
+import io.druid.timeline.DataSegment;
+import org.joda.time.format.ISODateTimeFormat;
+
+/**
+ */
+public class DataSegmentPusherUtil
+{
+  private static final Joiner JOINER = Joiner.on("/").skipNulls();
+
+  // Note: storage directory structure format = .../dataSource/interval/version/partitionNumber/
+  // If above format is ever changed, make sure to change it appropriately in other places
+  // e.g. HDFSDataSegmentKiller uses this information to clean the version, interval and dataSource directories
+  // on segment deletion if segment being deleted was the only segment
+  public static String getStorageDir(DataSegment segment)
+  {
+    return JOINER.join(
+        segment.getDataSource(),
+        String.format(
+            "%s_%s",
+            segment.getInterval().getStart(),
+            segment.getInterval().getEnd()
+        ),
+        segment.getVersion(),
+        segment.getShardSpec().getPartitionNum()
+    );
+  }
+
+  /**
+   * Due to https://issues.apache.org/jira/browse/HDFS-13, ":" is not allowed in
+   * path names, so we format paths differently for HDFS.
+   */
+  public static String getHdfsStorageDir(DataSegment segment)
+  {
+    return JOINER.join(
+        segment.getDataSource(),
+        String.format(
+            "%s_%s",
+            segment.getInterval().getStart().toString(ISODateTimeFormat.basicDateTime()),
+            segment.getInterval().getEnd().toString(ISODateTimeFormat.basicDateTime())
+        ),
+        segment.getVersion().replaceAll(":", "_")
+    );
+  }
+}
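
For concreteness, roughly what the two layouts look like for a hypothetical segment with datasource "wiki", interval 2018-01-01/2018-01-02 (UTC), version "v1", and partition 3:

  // segment: a DataSegment built with the values above
  final String dir = DataSegmentPusherUtil.getStorageDir(segment);
  // -> wiki/2018-01-01T00:00:00.000Z_2018-01-02T00:00:00.000Z/v1/3
  final String hdfsDir = DataSegmentPusherUtil.getHdfsStorageDir(segment);
  // -> wiki/20180101T000000.000Z_20180102T000000.000Z/v1   (no ':' characters, no partition directory)
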
diff --git a/api/src/main/java/io/druid/segment/loading/LoadSpec.java b/api/src/main/java/io/druid/segment/loading/LoadSpec.java
new file mode 100644
index 00000000000..3adef9c4513
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/LoadSpec.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+
+import java.io.File;
+
+/**
+ * A means of pulling segment files into a destination directory
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
+public interface LoadSpec
+{
+  /**
+   * Method should put the segment files in the directory passed
+   * @param destDir The destination directory
+   * @return A LoadSpecResult reporting the byte count of data put in the destination directory
+   */
+  public LoadSpecResult loadSegment(File destDir) throws SegmentLoadingException;
+
+  // Hold interesting data about the results of the segment load
+  public static class LoadSpecResult
+  {
+    private final long size;
+
+    public LoadSpecResult(long size)
+    {
+      this.size = size;
+    }
+
+    public long getSize()
+    {
+      return this.size;
+    }
+  }
+}
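
Because the interface is @JsonTypeInfo-polymorphic, concrete specs register a type name and carry their own location fields. A hypothetical sketch (LocalLoadSpec and its "local" type name are invented here; real implementations ship with each deep-storage module):

  import com.fasterxml.jackson.annotation.JsonCreator;
  import com.fasterxml.jackson.annotation.JsonProperty;
  import com.fasterxml.jackson.annotation.JsonTypeName;
  import com.google.common.io.Files;

  import java.io.File;
  import java.io.IOException;

  @JsonTypeName("local")
  public class LocalLoadSpec implements LoadSpec
  {
    private final File path;

    @JsonCreator
    public LocalLoadSpec(@JsonProperty("path") String path)
    {
      this.path = new File(path);
    }

    @Override
    public LoadSpecResult loadSegment(File destDir) throws SegmentLoadingException
    {
      try {
        // Copy the (illustrative) single segment file into the destination directory.
        Files.copy(path, new File(destDir, path.getName()));
        return new LoadSpecResult(path.length());
      }
      catch (IOException e) {
        throw new SegmentLoadingException(e, "Failed to copy [%s] into [%s]", path, destDir);
      }
    }
  }
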
diff --git a/api/src/main/java/io/druid/segment/loading/SegmentLoadingException.java b/api/src/main/java/io/druid/segment/loading/SegmentLoadingException.java
new file mode 100644
index 00000000000..061375c9c18
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/SegmentLoadingException.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+/**
+ */
+public class SegmentLoadingException extends Exception
+{
+  public SegmentLoadingException(
+      String formatString,
+      Object... objs
+  )
+  {
+    super(String.format(formatString, objs));
+  }
+
+  public SegmentLoadingException(
+      Throwable cause,
+      String formatString,
+      Object... objs
+  )
+  {
+    super(String.format(formatString, objs), cause);
+  }
+}
diff --git a/api/src/main/java/io/druid/segment/loading/URIDataPuller.java b/api/src/main/java/io/druid/segment/loading/URIDataPuller.java
new file mode 100644
index 00000000000..9ae45a3ac42
--- /dev/null
+++ b/api/src/main/java/io/druid/segment/loading/URIDataPuller.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import com.google.common.base.Predicate;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+
+/**
+ * A URIDataPuller handles pulling data referenced by a URI
+ */
+public interface URIDataPuller
+{
+  /**
+   * Create a new InputStream based on the URI
+   *
+   * @param uri The URI to open an Input Stream to
+   *
+   * @return A new InputStream which streams the URI in question
+   *
+   * @throws IOException on error opening the stream
+   */
+  public InputStream getInputStream(URI uri) throws IOException;
+
+  /**
+   * Returns an abstract "version" for the URI. The exact meaning of the version is left up to the implementation.
+   *
+   * @param uri The URI to check
+   *
+   * @return A "version" as interpreted by the URIDataPuller implementation
+   *
+   * @throws IOException on error
+   */
+  public String getVersion(URI uri) throws IOException;
+
+  /**
+   * Evaluates a Throwable to see if it is recoverable. This is expected to be used in conjunction with the other
+   * methods on this interface to determine whether anything they throw should be retried.
+   *
+   * @return Predicate function indicating if the Throwable is recoverable
+   */
+  public Predicate<Throwable> shouldRetryPredicate();
+}
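
The predicate is meant to drive a retry loop around the other two methods. A sketch of that pattern (Druid has RetryUtils-style helpers for this elsewhere; the loop is inlined here for clarity):

  static InputStream openWithRetries(URIDataPuller puller, URI uri, int maxTries) throws IOException
  {
    for (int attempt = 1; ; attempt++) {
      try {
        return puller.getInputStream(uri);
      }
      catch (IOException e) {
        // Retry only failures the puller itself considers recoverable.
        if (attempt >= maxTries || !puller.shouldRetryPredicate().apply(e)) {
          throw e;
        }
      }
    }
  }
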
diff --git a/api/src/main/java/io/druid/tasklogs/NoopTaskLogs.java b/api/src/main/java/io/druid/tasklogs/NoopTaskLogs.java
new file mode 100644
index 00000000000..4ba760a79b8
--- /dev/null
+++ b/api/src/main/java/io/druid/tasklogs/NoopTaskLogs.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.tasklogs;
+
+import com.google.common.base.Optional;
+import com.google.common.io.ByteSource;
+
+import io.druid.java.util.common.logger.Logger;
+
+import java.io.File;
+import java.io.IOException;
+
+public class NoopTaskLogs implements TaskLogs
+{
+  private final Logger log = new Logger(TaskLogs.class);
+
+  @Override
+  public Optional<ByteSource> streamTaskLog(String taskid, long offset) throws IOException
+  {
+    return Optional.absent();
+  }
+
+  @Override
+  public void pushTaskLog(String taskid, File logFile) throws IOException
+  {
+    log.info("Not pushing logs for task: %s", taskid);
+  }
+
+  @Override
+  public void killAll() throws IOException
+  {
+    log.info("Noop: No task logs are deleted.");
+  }
+
+  @Override
+  public void killOlderThan(long timestamp) throws IOException
+  {
+    log.info("Noop: No task logs are deleted.");
+  }
+}
diff --git a/api/src/main/java/io/druid/tasklogs/TaskLogKiller.java b/api/src/main/java/io/druid/tasklogs/TaskLogKiller.java
new file mode 100644
index 00000000000..f03e46ad0c4
--- /dev/null
+++ b/api/src/main/java/io/druid/tasklogs/TaskLogKiller.java
@@ -0,0 +1,30 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.tasklogs;
+
+import java.io.IOException;
+
+/**
+ */
+public interface TaskLogKiller
+{
+  void killAll() throws IOException;
+  void killOlderThan(long timestamp) throws IOException;
+}
diff --git a/api/src/main/java/io/druid/tasklogs/TaskLogPusher.java b/api/src/main/java/io/druid/tasklogs/TaskLogPusher.java
new file mode 100644
index 00000000000..3fc16d46f98
--- /dev/null
+++ b/api/src/main/java/io/druid/tasklogs/TaskLogPusher.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.tasklogs;
+
+import java.io.File;
+import java.io.IOException;
+
+/**
+ * Something that knows how to persist local task logs to some form of long-term storage.
+ */
+public interface TaskLogPusher
+{
+  public void pushTaskLog(String taskid, File logFile) throws IOException;
+}
diff --git a/api/src/main/java/io/druid/tasklogs/TaskLogStreamer.java b/api/src/main/java/io/druid/tasklogs/TaskLogStreamer.java
new file mode 100644
index 00000000000..ccd9a99cdcb
--- /dev/null
+++ b/api/src/main/java/io/druid/tasklogs/TaskLogStreamer.java
@@ -0,0 +1,41 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.tasklogs;
+
+import com.google.common.base.Optional;
+import com.google.common.io.ByteSource;
+
+import java.io.IOException;
+
+/**
+ * Something that knows how to stream logs for tasks.
+ */
+public interface TaskLogStreamer
+{
+  /**
+   * Stream log for a task.
+   *
+   * @param offset If zero, stream the entire log. If positive, attempt to read from this position onwards. If
+   *               negative, attempt to read this many bytes from the end of the file (like <tt>tail -n</tt>).
+   *
+   * @return input supplier for this log, if available from this provider
+   */
+  public Optional<ByteSource> streamTaskLog(String taskid, long offset) throws IOException;
+}
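
To make the offset contract concrete, here is a minimal file-backed sketch; FileTaskLogStreamer is hypothetical (the real implementations live elsewhere in Druid), and it uses the same Guava types as the interface above:

    import com.google.common.base.Optional;
    import com.google.common.io.ByteSource;
    import com.google.common.io.Files;

    import java.io.File;
    import java.io.IOException;

    public class FileTaskLogStreamer implements TaskLogStreamer
    {
      private final File logDir;

      public FileTaskLogStreamer(File logDir)
      {
        this.logDir = logDir;
      }

      @Override
      public Optional<ByteSource> streamTaskLog(String taskid, long offset) throws IOException
      {
        final File logFile = new File(logDir, taskid + ".log");
        if (!logFile.isFile()) {
          return Optional.absent();
        }
        final long length = logFile.length();
        // Zero streams everything; positive seeks forward; negative reads that many bytes from the end.
        final long start = offset >= 0 ? Math.min(offset, length) : Math.max(0, length + offset);
        return Optional.of(Files.asByteSource(logFile).slice(start, length - start));
      }
    }
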
diff --git a/api/src/main/java/io/druid/tasklogs/TaskLogs.java b/api/src/main/java/io/druid/tasklogs/TaskLogs.java
new file mode 100644
index 00000000000..db76b924e9e
--- /dev/null
+++ b/api/src/main/java/io/druid/tasklogs/TaskLogs.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.tasklogs;
+
+public interface TaskLogs extends TaskLogStreamer, TaskLogPusher, TaskLogKiller
+{
+}
diff --git a/api/src/main/java/io/druid/timeline/DataSegment.java b/api/src/main/java/io/druid/timeline/DataSegment.java
new file mode 100644
index 00000000000..74322c8c3c6
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/DataSegment.java
@@ -0,0 +1,423 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
+import com.fasterxml.jackson.databind.annotation.JsonSerialize;
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Predicate;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Interner;
+import com.google.common.collect.Interners;
+import com.google.common.collect.Iterables;
+import io.druid.jackson.CommaListJoinDeserializer;
+import io.druid.jackson.CommaListJoinSerializer;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.query.SegmentDescriptor;
+import io.druid.timeline.partition.NoneShardSpec;
+import io.druid.timeline.partition.ShardSpec;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ */
+public class DataSegment implements Comparable<DataSegment>
+{
+  public static String delimiter = "_";
+  private final Integer binaryVersion;
+  private static final Interner<String> interner = Interners.newWeakInterner();
+  private static final Function<String, String> internFun = new Function<String, String>()
+  {
+    @Override
+    public String apply(String input)
+    {
+      return interner.intern(input);
+    }
+  };
+
+  public static String makeDataSegmentIdentifier(
+      String dataSource,
+      DateTime start,
+      DateTime end,
+      String version,
+      ShardSpec shardSpec
+  )
+  {
+    StringBuilder sb = new StringBuilder();
+
+    sb.append(dataSource).append(delimiter)
+      .append(start).append(delimiter)
+      .append(end).append(delimiter)
+      .append(version);
+
+    if (shardSpec.getPartitionNum() != 0) {
+      sb.append(delimiter).append(shardSpec.getPartitionNum());
+    }
+
+    return sb.toString();
+  }
+
+  private final String dataSource;
+  private final Interval interval;
+  private final String version;
+  private final Map<String, Object> loadSpec;
+  private final List<String> dimensions;
+  private final List<String> metrics;
+  private final ShardSpec shardSpec;
+  private final long size;
+  private final String identifier;
+
+  @JsonCreator
+  public DataSegment(
+      @JsonProperty("dataSource") String dataSource,
+      @JsonProperty("interval") Interval interval,
+      @JsonProperty("version") String version,
+      // use `Map` *NOT* `LoadSpec` because we want to do lazy materialization to prevent dependency pollution
+      @JsonProperty("loadSpec") Map<String, Object> loadSpec,
+      @JsonProperty("dimensions") @JsonDeserialize(using = CommaListJoinDeserializer.class) List<String> dimensions,
+      @JsonProperty("metrics") @JsonDeserialize(using = CommaListJoinDeserializer.class) List<String> metrics,
+      @JsonProperty("shardSpec") ShardSpec shardSpec,
+      @JsonProperty("binaryVersion") Integer binaryVersion,
+      @JsonProperty("size") long size
+  )
+  {
+    final Predicate<String> nonEmpty = new Predicate<String>()
+    {
+      @Override
+      public boolean apply(String input)
+      {
+        return input != null && !input.isEmpty();
+      }
+    };
+
+    // dataSource, dimensions & metrics are stored as canonical string values to decrease memory required for storing large numbers of segments.
+    this.dataSource = interner.intern(dataSource);
+    this.interval = interval;
+    this.loadSpec = loadSpec;
+    this.version = version;
+    this.dimensions = dimensions == null
+                      ? ImmutableList.<String>of()
+                      : ImmutableList.copyOf(Iterables.transform(Iterables.filter(dimensions, nonEmpty), internFun));
+    this.metrics = metrics == null
+                   ? ImmutableList.<String>of()
+                   : ImmutableList.copyOf(Iterables.transform(Iterables.filter(metrics, nonEmpty), internFun));
+    this.shardSpec = (shardSpec == null) ? NoneShardSpec.instance() : shardSpec;
+    this.binaryVersion = binaryVersion;
+    this.size = size;
+
+    this.identifier = makeDataSegmentIdentifier(
+        this.dataSource,
+        this.interval.getStart(),
+        this.interval.getEnd(),
+        this.version,
+        this.shardSpec
+    );
+  }
+
+  /**
+   * Get dataSource
+   *
+   * @return the dataSource
+   */
+  @JsonProperty
+  public String getDataSource()
+  {
+    return dataSource;
+  }
+
+  @JsonProperty
+  public Interval getInterval()
+  {
+    return interval;
+  }
+
+  @JsonProperty
+  public Map<String, Object> getLoadSpec()
+  {
+    return loadSpec;
+  }
+
+  @JsonProperty
+  public String getVersion()
+  {
+    return version;
+  }
+
+  @JsonProperty
+  @JsonSerialize(using = CommaListJoinSerializer.class)
+  public List<String> getDimensions()
+  {
+    return dimensions;
+  }
+
+  @JsonProperty
+  @JsonSerialize(using = CommaListJoinSerializer.class)
+  public List<String> getMetrics()
+  {
+    return metrics;
+  }
+
+  @JsonProperty
+  public ShardSpec getShardSpec()
+  {
+    return shardSpec;
+  }
+
+  @JsonProperty
+  public Integer getBinaryVersion()
+  {
+    return binaryVersion;
+  }
+
+  @JsonProperty
+  public long getSize()
+  {
+    return size;
+  }
+
+  @JsonProperty
+  public String getIdentifier()
+  {
+    return identifier;
+  }
+
+  public SegmentDescriptor toDescriptor()
+  {
+    return new SegmentDescriptor(interval, version, shardSpec.getPartitionNum());
+  }
+
+  public DataSegment withLoadSpec(Map<String, Object> loadSpec)
+  {
+    return builder(this).loadSpec(loadSpec).build();
+  }
+
+  public DataSegment withDimensions(List<String> dimensions)
+  {
+    return builder(this).dimensions(dimensions).build();
+  }
+
+  public DataSegment withMetrics(List<String> metrics)
+  {
+    return builder(this).metrics(metrics).build();
+  }
+
+  public DataSegment withSize(long size)
+  {
+    return builder(this).size(size).build();
+  }
+
+  public DataSegment withVersion(String version)
+  {
+    return builder(this).version(version).build();
+  }
+
+  public DataSegment withBinaryVersion(int binaryVersion)
+  {
+    return builder(this).binaryVersion(binaryVersion).build();
+  }
+
+  @Override
+  public int compareTo(DataSegment dataSegment)
+  {
+    return getIdentifier().compareTo(dataSegment.getIdentifier());
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (o instanceof DataSegment) {
+      return getIdentifier().equals(((DataSegment) o).getIdentifier());
+    }
+    return false;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return getIdentifier().hashCode();
+  }
+
+  @Override
+  public String toString()
+  {
+    return "DataSegment{" +
+           "size=" + size +
+           ", shardSpec=" + shardSpec +
+           ", metrics=" + metrics +
+           ", dimensions=" + dimensions +
+           ", version='" + version + '\'' +
+           ", loadSpec=" + loadSpec +
+           ", interval=" + interval +
+           ", dataSource='" + dataSource + '\'' +
+           ", binaryVersion='" + binaryVersion + '\'' +
+           '}';
+  }
+
+  public static Comparator<DataSegment> bucketMonthComparator()
+  {
+    return new Comparator<DataSegment>()
+    {
+      @Override
+      public int compare(DataSegment lhs, DataSegment rhs)
+      {
+        int retVal;
+
+        DateTime lhsMonth = Granularities.MONTH.bucketStart(lhs.getInterval().getStart());
+        DateTime rhsMonth = Granularities.MONTH.bucketStart(rhs.getInterval().getStart());
+
+        retVal = lhsMonth.compareTo(rhsMonth);
+
+        if (retVal != 0) {
+          return retVal;
+        }
+
+        return lhs.compareTo(rhs);
+      }
+    };
+  }
+
+  public static Builder builder()
+  {
+    return new Builder();
+  }
+
+  public static Builder builder(DataSegment segment)
+  {
+    return new Builder(segment);
+  }
+
+  public static class Builder
+  {
+    private String dataSource;
+    private Interval interval;
+    private String version;
+    private Map<String, Object> loadSpec;
+    private List<String> dimensions;
+    private List<String> metrics;
+    private ShardSpec shardSpec;
+    private Integer binaryVersion;
+    private long size;
+
+    public Builder()
+    {
+      this.loadSpec = ImmutableMap.of();
+      this.dimensions = ImmutableList.of();
+      this.metrics = ImmutableList.of();
+      this.shardSpec = NoneShardSpec.instance();
+      this.size = -1;
+    }
+
+    public Builder(DataSegment segment)
+    {
+      this.dataSource = segment.getDataSource();
+      this.interval = segment.getInterval();
+      this.version = segment.getVersion();
+      this.loadSpec = segment.getLoadSpec();
+      this.dimensions = segment.getDimensions();
+      this.metrics = segment.getMetrics();
+      this.shardSpec = segment.getShardSpec();
+      this.binaryVersion = segment.getBinaryVersion();
+      this.size = segment.getSize();
+    }
+
+    public Builder dataSource(String dataSource)
+    {
+      this.dataSource = dataSource;
+      return this;
+    }
+
+    public Builder interval(Interval interval)
+    {
+      this.interval = interval;
+      return this;
+    }
+
+    public Builder version(String version)
+    {
+      this.version = version;
+      return this;
+    }
+
+    public Builder loadSpec(Map<String, Object> loadSpec)
+    {
+      this.loadSpec = loadSpec;
+      return this;
+    }
+
+    public Builder dimensions(List<String> dimensions)
+    {
+      this.dimensions = dimensions;
+      return this;
+    }
+
+    public Builder metrics(List<String> metrics)
+    {
+      this.metrics = metrics;
+      return this;
+    }
+
+    public Builder shardSpec(ShardSpec shardSpec)
+    {
+      this.shardSpec = shardSpec;
+      return this;
+    }
+
+    public Builder binaryVersion(Integer binaryVersion)
+    {
+      this.binaryVersion = binaryVersion;
+      return this;
+    }
+
+    public Builder size(long size)
+    {
+      this.size = size;
+      return this;
+    }
+
+    public DataSegment build()
+    {
+      // Check stuff that goes into the identifier, at least.
+      Preconditions.checkNotNull(dataSource, "dataSource");
+      Preconditions.checkNotNull(interval, "interval");
+      Preconditions.checkNotNull(version, "version");
+      Preconditions.checkNotNull(shardSpec, "shardSpec");
+
+      return new DataSegment(
+          dataSource,
+          interval,
+          version,
+          loadSpec,
+          dimensions,
+          metrics,
+          shardSpec,
+          binaryVersion,
+          size
+      );
+    }
+  }
+}
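
As a usage sketch of the builder and the identifier format produced by makeDataSegmentIdentifier (all values below are invented for illustration):

    import com.google.common.collect.ImmutableList;
    import com.google.common.collect.ImmutableMap;
    import io.druid.timeline.DataSegment;
    import org.joda.time.Interval;

    public class DataSegmentExample
    {
      public static void main(String[] args)
      {
        DataSegment segment = DataSegment.builder()
            .dataSource("wikipedia")
            .interval(new Interval("2000-01-01/2000-01-02"))
            .version("v1")
            .loadSpec(ImmutableMap.<String, Object>of("type", "local", "path", "/tmp/segment.zip"))
            .dimensions(ImmutableList.of("page", "language"))
            .metrics(ImmutableList.of("count"))
            .size(1024)
            .build();

        // The identifier joins dataSource, interval start, interval end, and version with '_',
        // appending the partition number only when it is non-zero. In UTC this prints
        // wikipedia_2000-01-01T00:00:00.000Z_2000-01-02T00:00:00.000Z_v1
        System.out.println(segment.getIdentifier());
      }
    }
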
diff --git a/api/src/main/java/io/druid/timeline/DataSegmentUtils.java b/api/src/main/java/io/druid/timeline/DataSegmentUtils.java
new file mode 100644
index 00000000000..aa110d11d02
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/DataSegmentUtils.java
@@ -0,0 +1,208 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline;
+
+import com.google.common.base.Function;
+
+import io.druid.java.util.common.IAE;
+import io.druid.java.util.common.logger.Logger;
+
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+import org.joda.time.format.DateTimeFormatter;
+import org.joda.time.format.ISODateTimeFormat;
+
+import java.util.Objects;
+
+/**
+ * Utilities for working with DataSegment identifiers.
+ */
+public class DataSegmentUtils
+{
+  private static final Logger LOGGER = new Logger(DataSegmentUtils.class);
+
+  public static Function<String, Interval> INTERVAL_EXTRACTOR(final String datasource)
+  {
+    return new Function<String, Interval>()
+    {
+      @Override
+      public Interval apply(String identifier)
+      {
+        SegmentIdentifierParts segmentIdentifierParts = valueOf(datasource, identifier);
+        if (segmentIdentifierParts == null) {
+          throw new IAE("Invalid identifier [%s]", identifier);
+        }
+
+        return segmentIdentifierParts.getInterval();
+      }
+    };
+  }
+
+  /**
+   * Parses a segment identifier into its components: dataSource, interval, version, and any trailing tags. Ignores
+   * shard spec.
+   *
+   * This method may incorrectly parse an identifier if the dataSource name in the identifier itself contains a
+   * DateTime-parseable string, for example a dataSource named 'datasource_2000-01-01T00:00:00.000Z' when dataSource
+   * was provided as 'datasource'. The desired behavior in that case would be to return null, since the identifier
+   * does not actually belong to the provided dataSource, but a non-null result is returned instead. This edge case
+   * currently only affects paged select queries with a union dataSource of two similarly-named dataSources, as in
+   * the given example.
+   *
+   * @param dataSource the dataSource corresponding to this identifier
+   * @param identifier segment identifier
+   * @return a {@link io.druid.timeline.DataSegmentUtils.SegmentIdentifierParts} object if the identifier could be
+   *         parsed, null otherwise
+   */
+  public static SegmentIdentifierParts valueOf(String dataSource, String identifier)
+  {
+    if (!identifier.startsWith(String.format("%s_", dataSource))) {
+      return null;
+    }
+
+    String remaining = identifier.substring(dataSource.length() + 1);
+    String[] splits = remaining.split(DataSegment.delimiter);
+    if (splits.length < 3) {
+      return null;
+    }
+
+    DateTimeFormatter formatter = ISODateTimeFormat.dateTime();
+
+    try {
+      DateTime start = formatter.parseDateTime(splits[0]);
+      DateTime end = formatter.parseDateTime(splits[1]);
+      String version = splits[2];
+      String trail = splits.length > 3 ? join(splits, DataSegment.delimiter, 3, splits.length) : null;
+
+      return new SegmentIdentifierParts(
+          dataSource,
+          new Interval(start.getMillis(), end.getMillis()),
+          version,
+          trail
+      );
+    } catch (IllegalArgumentException e) {
+      return null;
+    }
+  }
+
+  public static String withInterval(final String dataSource, final String identifier, Interval newInterval)
+  {
+    SegmentIdentifierParts segmentDesc = DataSegmentUtils.valueOf(dataSource, identifier);
+    if (segmentDesc == null) {
+      // Happens for test segments, which have invalid segment ids; ignore for now.
+      LOGGER.warn("Invalid segment identifier " + identifier);
+      return identifier;
+    }
+    return segmentDesc.withInterval(newInterval).toString();
+  }
+
+  static class SegmentIdentifierParts
+  {
+    private final String dataSource;
+    private final Interval interval;
+    private final String version;
+    private final String trail;
+
+    public SegmentIdentifierParts(String dataSource, Interval interval, String version, String trail)
+    {
+      this.dataSource = dataSource;
+      this.interval = interval;
+      this.version = version;
+      this.trail = trail;
+    }
+
+    public String getDataSource()
+    {
+      return dataSource;
+    }
+
+    public Interval getInterval()
+    {
+      return interval;
+    }
+
+    public String getVersion()
+    {
+      return version;
+    }
+
+    public SegmentIdentifierParts withInterval(Interval interval)
+    {
+      return new SegmentIdentifierParts(dataSource, interval, version, trail);
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+      if (this == o) {
+        return true;
+      }
+      if (o == null || getClass() != o.getClass()) {
+        return false;
+      }
+
+      SegmentIdentifierParts that = (SegmentIdentifierParts) o;
+
+      if (!Objects.equals(dataSource, that.dataSource)) {
+        return false;
+      }
+      if (!Objects.equals(interval, that.interval)) {
+        return false;
+      }
+      if (!Objects.equals(version, that.version)) {
+        return false;
+      }
+      if (!Objects.equals(trail, that.trail)) {
+        return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode()
+    {
+      return Objects.hash(dataSource, interval, version, trail);
+    }
+
+    @Override
+    public String toString()
+    {
+      return join(
+          new Object[]{dataSource, interval.getStart(), interval.getEnd(), version, trail},
+          DataSegment.delimiter, 0, version == null ? 3 : trail == null ? 4 : 5
+      );
+    }
+  }
+
+  private static String join(Object[] input, String delimiter, int start, int end)
+  {
+    StringBuilder builder = new StringBuilder();
+    for (int i = start; i < end; i++) {
+      if (i > start) {
+        builder.append(delimiter);
+      }
+      if (input[i] != null) {
+        builder.append(input[i]);
+      }
+    }
+    return builder.toString();
+  }
+}
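
A sketch of the parsing behavior documented on valueOf; the example lives in the io.druid.timeline package because SegmentIdentifierParts is package-private, and the identifier value is invented:

    package io.druid.timeline;

    public class SegmentIdParseExample
    {
      public static void main(String[] args)
      {
        final String id = "wikipedia_2000-01-01T00:00:00.000Z_2000-01-02T00:00:00.000Z_v1";

        // Recovers the interval and version; a shard-spec suffix, if present, would land in "trail".
        DataSegmentUtils.SegmentIdentifierParts parts = DataSegmentUtils.valueOf("wikipedia", id);
        System.out.println(parts.getInterval());  // the 2000-01-01/2000-01-02 interval
        System.out.println(parts.getVersion());   // v1

        // A mismatched dataSource yields null rather than an exception.
        System.out.println(DataSegmentUtils.valueOf("other", id));  // null
      }
    }
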
diff --git a/api/src/main/java/io/druid/timeline/partition/NoneShardSpec.java b/api/src/main/java/io/druid/timeline/partition/NoneShardSpec.java
new file mode 100644
index 00000000000..b8374a253a3
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/partition/NoneShardSpec.java
@@ -0,0 +1,102 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline.partition;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Range;
+import io.druid.data.input.InputRow;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ */
+public class NoneShardSpec implements ShardSpec
+{
+  private final static NoneShardSpec INSTANCE = new NoneShardSpec();
+
+  @JsonCreator
+  public static NoneShardSpec instance()
+  {
+    return INSTANCE;
+  }
+
+  // Use NoneShardSpec.instance() instead.
+  @Deprecated
+  public NoneShardSpec()
+  {
+  }
+
+  @Override
+  public <T> PartitionChunk<T> createChunk(T obj)
+  {
+    return new SingleElementPartitionChunk<T>(obj);
+  }
+
+  @Override
+  public boolean isInChunk(long timestamp, InputRow inputRow)
+  {
+    return true;
+  }
+
+  @Override
+  @JsonIgnore
+  public int getPartitionNum()
+  {
+    return 0;
+  }
+
+  @Override
+  public ShardSpecLookup getLookup(final List<ShardSpec> shardSpecs)
+  {
+    return new ShardSpecLookup()
+    {
+      @Override
+      public ShardSpec getShardSpec(long timestamp, InputRow row)
+      {
+        return shardSpecs.get(0);
+      }
+    };
+  }
+
+  @Override
+  public Map<String, Range<String>> getDomain()
+  {
+    return ImmutableMap.of();
+  }
+
+  @Override
+  public boolean equals(Object obj)
+  {
+    return obj instanceof NoneShardSpec;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return 0;
+  }
+
+  @Override
+  public String toString()
+  {
+    return "NoneShardSpec";
+  }
+}
diff --git a/api/src/main/java/io/druid/timeline/partition/PartitionChunk.java b/api/src/main/java/io/druid/timeline/partition/PartitionChunk.java
new file mode 100644
index 00000000000..1fcab99b575
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/partition/PartitionChunk.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline.partition;
+
+/**
+ * A PartitionChunk represents a chunk of a partitioned (sharded) space. It knows whether it is the start of the
+ * domain of partitions, whether it is the end of the domain, whether it abuts another partition, and where it
+ * stands inside a sorted collection of partitions.
+ *
+ * The ordering of PartitionChunks is based entirely upon the partition boundaries defined inside the concrete
+ * PartitionChunk class.  That is, the payload (the object returned by getObject()) should *not* be involved in
+ * comparisons between PartitionChunk objects.
+ */
+public interface PartitionChunk<T> extends Comparable<PartitionChunk<T>>
+{
+  /**
+   * Returns the payload, generally an object that can be used to perform some action against the shard.
+   *
+   * @return the payload
+   */
+  public T getObject();
+
+  /**
+   * Determines if this PartitionChunk abuts another PartitionChunk.  A sequence of abutting PartitionChunks should
+   * start with an object where isStart() == true and eventually end with an object where isEnd() == true.
+   *
+   * @param chunk input chunk
+   * @return true if this chunk abuts the input chunk
+   */
+  public boolean abuts(PartitionChunk<T> chunk);
+
+  /**
+   * Returns true if this chunk is the beginning of the partition. Most commonly, that means it represents the range
+   * [-infinity, X) for some concrete X.
+   *
+   * @return true if the chunk is the beginning of the partition
+   */
+  public boolean isStart();
+
+  /**
+   * Returns true if this chunk is the end of the partition.  Most commonly, that means it represents the range
+   * [X, infinity] for some concrete X.
+   *
+   * @return true if the chunk is the end of the partition
+   */
+  public boolean isEnd();
+
+  /**
+   * Returns the partition chunk number of this PartitionChunk.  I.e. if there are 4 partitions in total and this
+   * is the 3rd partition, it will return 2
+   *
+   * @return the sequential numerical id of this partition chunk
+   */
+  public int getChunkNumber();
+}
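
The abuts/isStart/isEnd contract above is what lets callers check whether a sorted set of chunks covers the whole partition space; a minimal sketch (the helper class is hypothetical):

    import io.druid.timeline.partition.PartitionChunk;

    import java.util.List;

    public class PartitionChunks
    {
      // True if the sorted chunks start at the beginning of the domain, end at the end,
      // and each chunk abuts the next, i.e. the partition space is fully covered.
      public static <T> boolean isComplete(List<PartitionChunk<T>> sortedChunks)
      {
        if (sortedChunks.isEmpty() || !sortedChunks.get(0).isStart()) {
          return false;
        }
        for (int i = 1; i < sortedChunks.size(); i++) {
          if (!sortedChunks.get(i - 1).abuts(sortedChunks.get(i))) {
            return false;
          }
        }
        return sortedChunks.get(sortedChunks.size() - 1).isEnd();
      }
    }
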
diff --git a/api/src/main/java/io/druid/timeline/partition/ShardSpec.java b/api/src/main/java/io/druid/timeline/partition/ShardSpec.java
new file mode 100644
index 00000000000..b76a01941cf
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/partition/ShardSpec.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline.partition;
+
+import com.fasterxml.jackson.annotation.JsonSubTypes;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import com.google.common.collect.Range;
+import io.druid.data.input.InputRow;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * An interface that ties the ShardSpec implementations together for Jackson polymorphic (de)serialization.
+ */
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
+@JsonSubTypes({
+                  @JsonSubTypes.Type(name = "none", value = NoneShardSpec.class),
+              })
+public interface ShardSpec
+{
+  public <T> PartitionChunk<T> createChunk(T obj);
+
+  public boolean isInChunk(long timestamp, InputRow inputRow);
+
+  public int getPartitionNum();
+
+  public ShardSpecLookup getLookup(List<ShardSpec> shardSpecs);
+
+  /**
+   * Get the possible range of each dimension for the rows this shard contains.
+   *
+   * @return map of dimension names to their possible value ranges; dimensions whose range is unknown are not mapped
+   */
+  public Map<String, Range<String>> getDomain();
+}
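
Since the @JsonSubTypes registry above maps "none" to NoneShardSpec, polymorphic deserialization can be sketched as follows (a plain ObjectMapper is assumed for illustration):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import io.druid.timeline.partition.NoneShardSpec;
    import io.druid.timeline.partition.ShardSpec;

    public class ShardSpecSerdeExample
    {
      public static void main(String[] args) throws Exception
      {
        final ObjectMapper mapper = new ObjectMapper();
        // "type":"none" routes through the subtype registry to NoneShardSpec's @JsonCreator factory.
        final ShardSpec spec = mapper.readValue("{\"type\":\"none\"}", ShardSpec.class);
        System.out.println(spec == NoneShardSpec.instance());  // true: the factory returns the singleton
      }
    }
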
diff --git a/api/src/main/java/io/druid/timeline/partition/ShardSpecLookup.java b/api/src/main/java/io/druid/timeline/partition/ShardSpecLookup.java
new file mode 100644
index 00000000000..721b83af3e7
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/partition/ShardSpecLookup.java
@@ -0,0 +1,27 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline.partition;
+
+import io.druid.data.input.InputRow;
+
+public interface ShardSpecLookup
+{
+  ShardSpec getShardSpec(long timestamp, InputRow row);
+}
diff --git a/api/src/main/java/io/druid/timeline/partition/SingleElementPartitionChunk.java b/api/src/main/java/io/druid/timeline/partition/SingleElementPartitionChunk.java
new file mode 100644
index 00000000000..e1f8e01feb7
--- /dev/null
+++ b/api/src/main/java/io/druid/timeline/partition/SingleElementPartitionChunk.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline.partition;
+
+/**
+ */
+public class SingleElementPartitionChunk<T> implements PartitionChunk<T>
+{
+  private final T element;
+
+  public SingleElementPartitionChunk(T element)
+  {
+    this.element = element;
+  }
+
+  @Override
+  public T getObject()
+  {
+    return element;
+  }
+
+  @Override
+  public boolean abuts(PartitionChunk<T> tPartitionChunk)
+  {
+    return false;
+  }
+
+  @Override
+  public boolean isStart()
+  {
+    return true;
+  }
+
+  @Override
+  public boolean isEnd()
+  {
+    return true;
+  }
+
+  @Override
+  public int getChunkNumber()
+  {
+    return 0;
+  }
+
+  /**
+   * The ordering of PartitionChunks is determined entirely by the partition boundaries and has nothing to do
+   * with the object.  Thus, if there are two SingleElementPartitionChunks, they are equal because they both
+   * represent the full partition space.
+   *
+   * SingleElementPartitionChunks are currently defined as less than every other type of PartitionChunk.  There
+   * is no good reason for it, nor is there a bad reason, that's just the way it is.  This is subject to change.
+   *
+   * @param chunk the chunk to compare against
+   * @return 0 if the given chunk is also a SingleElementPartitionChunk, -1 otherwise
+   */
+  @Override
+  public int compareTo(PartitionChunk<T> chunk)
+  {
+    return chunk instanceof SingleElementPartitionChunk ? 0 : -1;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode()
+  {
+    return element != null ? element.hashCode() : 0;
+  }
+
+  @Override
+  public String toString()
+  {
+    return "SingleElementPartitionChunk{" +
+           "element=" + element +
+           '}';
+  }
+}
diff --git a/api/src/main/java/io/druid/utils/CompressionUtils.java b/api/src/main/java/io/druid/utils/CompressionUtils.java
new file mode 100644
index 00000000000..3d628dce1cf
--- /dev/null
+++ b/api/src/main/java/io/druid/utils/CompressionUtils.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.utils;
+
+
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import io.druid.java.util.common.logger.Logger;
+
+/**
+ */
+public class CompressionUtils
+{
+  private static final Logger log = new Logger(CompressionUtils.class);
+
+
+  @Deprecated // Use io.druid.java.util.common.CompressionUtils.zip
+  public static long zip(File directory, File outputZipFile) throws IOException
+  {
+    return io.druid.java.util.common.CompressionUtils.zip(directory, outputZipFile);
+  }
+
+
+  @Deprecated // Use io.druid.java.util.common.CompressionUtils.zip
+  public static long zip(File directory, OutputStream out) throws IOException
+  {
+    return io.druid.java.util.common.CompressionUtils.zip(directory, out);
+  }
+
+  @Deprecated // Use io.druid.java.util.common.CompressionUtils.unzip
+  public static void unzip(File pulledFile, File outDir) throws IOException
+  {
+    io.druid.java.util.common.CompressionUtils.unzip(pulledFile, outDir);
+  }
+
+  @Deprecated // Use io.druid.java.util.common.CompressionUtils.unzip
+  public static void unzip(InputStream in, File outDir) throws IOException
+  {
+    io.druid.java.util.common.CompressionUtils.unzip(in, outDir);
+  }
+
+  /**
+   * Uncompresses the gzipped `pulledFile` into the `outDir`.
+   * Unlike `io.druid.java.util.common.CompressionUtils.gunzip`, this function takes an output *DIRECTORY* and tries to guess the file name.
+   * It is recommended that callers use `io.druid.java.util.common.CompressionUtils.gunzip` and specify the output file themselves to ensure names are as expected.
+   *
+   * @param pulledFile The source file
+   * @param outDir     The destination directory to put the resulting file
+   *
+   * @throws IOException on propagated IO exception; throws IAE if it cannot determine the proper new name for `pulledFile`
+   */
+  @Deprecated // See description for alternative
+  public static void gunzip(File pulledFile, File outDir) throws IOException
+  {
+    final File outFile = new File(outDir, io.druid.java.util.common.CompressionUtils.getGzBaseName(pulledFile.getName()));
+    io.druid.java.util.common.CompressionUtils.gunzip(pulledFile, outFile);
+    if (!pulledFile.delete()) {
+      log.error("Could not delete tmpFile[%s].", pulledFile);
+    }
+  }
+
+}
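
To make the deprecation note concrete, a sketch contrasting the two gunzip call styles (the paths are invented):

    import java.io.File;
    import java.io.IOException;

    public class GunzipExample
    {
      public static void main(String[] args) throws IOException
      {
        final File pulled = new File("/tmp/segment.bin.gz");

        // Deprecated style: pass an output *directory*; the base name is guessed,
        // and the source file is deleted afterwards.
        // io.druid.utils.CompressionUtils.gunzip(pulled, new File("/tmp/out"));

        // Preferred style: name the output file explicitly.
        io.druid.java.util.common.CompressionUtils.gunzip(pulled, new File("/tmp/out/segment.bin"));
      }
    }
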
diff --git a/api/src/main/java/io/druid/utils/Runnables.java b/api/src/main/java/io/druid/utils/Runnables.java
new file mode 100644
index 00000000000..793c9c1c3d0
--- /dev/null
+++ b/api/src/main/java/io/druid/utils/Runnables.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.utils;
+
+/**
+ */
+public class Runnables
+{
+  public static Runnable getNoopRunnable()
+  {
+    return new Runnable()
+    {
+      @Override
+      public void run() {}
+    };
+  }
+}
diff --git a/api/src/test/java/io/druid/TestObjectMapper.java b/api/src/test/java/io/druid/TestObjectMapper.java
new file mode 100644
index 00000000000..740176fb0c5
--- /dev/null
+++ b/api/src/test/java/io/druid/TestObjectMapper.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid;
+
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.DeserializationContext;
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.MapperFeature;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.SerializationFeature;
+import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
+import com.fasterxml.jackson.databind.module.SimpleModule;
+import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;
+import org.joda.time.Interval;
+
+import java.io.IOException;
+
+/**
+ */
+public class TestObjectMapper extends ObjectMapper
+{
+  public TestObjectMapper()
+  {
+    configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+    configure(MapperFeature.AUTO_DETECT_GETTERS, false);
+    configure(MapperFeature.AUTO_DETECT_FIELDS, false);
+    configure(MapperFeature.AUTO_DETECT_IS_GETTERS, false);
+    configure(MapperFeature.AUTO_DETECT_SETTERS, false);
+    configure(SerializationFeature.INDENT_OUTPUT, false);
+    registerModule(new TestModule());
+  }
+
+  public static class TestModule extends SimpleModule
+  {
+    TestModule()
+    {
+      addSerializer(Interval.class, ToStringSerializer.instance);
+      addDeserializer(
+          Interval.class, new StdDeserializer<Interval>(Interval.class)
+          {
+            @Override
+            public Interval deserialize(
+                JsonParser jsonParser, DeserializationContext deserializationContext
+            ) throws IOException, JsonProcessingException
+            {
+              return new Interval(jsonParser.getText());
+            }
+          }
+      );
+    }
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/MapBasedRowTest.java b/api/src/test/java/io/druid/data/input/MapBasedRowTest.java
new file mode 100644
index 00000000000..d3192abf201
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/MapBasedRowTest.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input;
+
+import org.joda.time.DateTime;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.collect.ImmutableMap;
+
+public class MapBasedRowTest
+{
+  @Test
+  public void testGetLongMetricFromString()
+  {
+    MapBasedRow row = new MapBasedRow(
+        new DateTime(),
+        ImmutableMap.<String,Object>builder()
+          .put("k0", "-1.2")
+          .put("k1", "1.23")
+          .put("k2", "1.8")
+          .put("k3", "1e5")
+          .put("k4", "9223372036854775806")
+          .put("k5", "-9223372036854775807")
+          .put("k6", "+9223372036854775802")
+          .build()
+    );
+    
+    Assert.assertEquals(-1, row.getLongMetric("k0"));
+    Assert.assertEquals(1, row.getLongMetric("k1"));
+    Assert.assertEquals(1, row.getLongMetric("k2"));
+    Assert.assertEquals(100000, row.getLongMetric("k3"));
+    Assert.assertEquals(9223372036854775806L, row.getLongMetric("k4"));
+    Assert.assertEquals(-9223372036854775807L, row.getLongMetric("k5"));
+    Assert.assertEquals(9223372036854775802L, row.getLongMetric("k6"));
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/CSVParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/CSVParseSpecTest.java
new file mode 100644
index 00000000000..7d99a2b804d
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/CSVParseSpecTest.java
@@ -0,0 +1,66 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.collect.Lists;
+import org.junit.Test;
+
+import java.util.Arrays;
+
+public class CSVParseSpecTest
+{
+  @Test(expected = IllegalArgumentException.class)
+  public void testColumnMissing() throws Exception
+  {
+    final ParseSpec spec = new CSVParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a", "b")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        Arrays.asList("a")
+    );
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testComma() throws Exception
+  {
+    final ParseSpec spec = new CSVParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a,", "b")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        Arrays.asList("a")
+    );
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/DelimitedParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/DelimitedParseSpecTest.java
new file mode 100644
index 00000000000..1ebad8c6414
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/DelimitedParseSpecTest.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.Lists;
+import io.druid.TestObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+public class DelimitedParseSpecTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Test
+  public void testSerde() throws IOException
+  {
+    DelimitedParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec("abc", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("abc")), null, null),
+        "\u0001",
+        "\u0002",
+        Arrays.asList("abc")
+    );
+    final DelimitedParseSpec serde = jsonMapper.readValue(
+        jsonMapper.writeValueAsString(spec),
+        DelimitedParseSpec.class
+    );
+    Assert.assertEquals("abc", serde.getTimestampSpec().getTimestampColumn());
+    Assert.assertEquals("iso", serde.getTimestampSpec().getTimestampFormat());
+
+    Assert.assertEquals(Arrays.asList("abc"), serde.getColumns());
+    Assert.assertEquals("\u0001", serde.getDelimiter());
+    Assert.assertEquals("\u0002", serde.getListDelimiter());
+    Assert.assertEquals(Arrays.asList("abc"), serde.getDimensionsSpec().getDimensionNames());
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testColumnMissing() throws Exception
+  {
+    final ParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a", "b")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        " ",
+        Arrays.asList("a")
+    );
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testComma() throws Exception
+  {
+    final ParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a,", "b")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        null,
+        Arrays.asList("a")
+    );
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testDefaultColumnList()
+  {
+    final DelimitedParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a", "b")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        null,
+        // passing null columns is not allowed
+        null
+    );
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/DimensionsSpecSerdeTest.java b/api/src/test/java/io/druid/data/input/impl/DimensionsSpecSerdeTest.java
new file mode 100644
index 00000000000..1512bc971ad
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/DimensionsSpecSerdeTest.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import junit.framework.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ */
+public class DimensionsSpecSerdeTest
+{
+  private final ObjectMapper mapper = new ObjectMapper();
+
+  @Test
+  public void testDimensionsSpecSerde() throws Exception
+  {
+    DimensionsSpec expected = new DimensionsSpec(
+        Arrays.asList(
+            new StringDimensionSchema("AAA"),
+            new StringDimensionSchema("BBB"),
+            new FloatDimensionSchema("C++"),
+            new NewSpatialDimensionSchema("DDT", null),
+            new LongDimensionSchema("EEE"),
+            new NewSpatialDimensionSchema("DDT2", Arrays.asList("A", "B")),
+            new NewSpatialDimensionSchema("IMPR", Arrays.asList("S", "P", "Q", "R"))
+        ),
+        Arrays.asList("FOO", "HAR"),
+        null
+    );
+
+    String jsonStr = "{\"dimensions\":"
+                     + "[\"AAA\", \"BBB\","
+                     + "{\"name\":\"C++\", \"type\":\"float\"},"
+                     + "{\"name\":\"DDT\", \"type\":\"spatial\"},"
+                     + "{\"name\":\"EEE\", \"type\":\"long\"},"
+                     + "{\"name\":\"DDT2\", \"type\": \"spatial\", \"dims\":[\"A\", \"B\"]}],"
+                     + "\"dimensionExclusions\": [\"FOO\", \"HAR\"],"
+                     + "\"spatialDimensions\": [{\"dimName\":\"IMPR\", \"dims\":[\"S\",\"P\",\"Q\",\"R\"]}]"
+                     + "}";
+
+    DimensionsSpec actual = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(jsonStr, DimensionsSpec.class)
+        ),
+        DimensionsSpec.class
+    );
+
+    List<SpatialDimensionSchema> expectedSpatials = Arrays.asList(
+        new SpatialDimensionSchema("DDT", null),
+        new SpatialDimensionSchema("DDT2", Arrays.asList("A","B")),
+        new SpatialDimensionSchema("IMPR", Arrays.asList("S","P","Q","R"))
+    );
+
+    Assert.assertEquals(expected, actual);
+    Assert.assertEquals(expectedSpatials, actual.getSpatialDimensions());
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/FileIteratingFirehoseTest.java b/api/src/test/java/io/druid/data/input/impl/FileIteratingFirehoseTest.java
new file mode 100644
index 00000000000..87f5d20fcd5
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/FileIteratingFirehoseTest.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.base.Function;
+import com.google.common.base.Joiner;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
+
+import io.druid.java.util.common.Pair;
+import junit.framework.Assert;
+
+import org.apache.commons.io.LineIterator;
+import org.junit.Test;
+
+import java.io.StringReader;
+import java.util.Arrays;
+import java.util.List;
+
+public class FileIteratingFirehoseTest
+{
+  private static final List<Pair<String[], ImmutableList<String>>> fixtures = ImmutableList.of(
+      Pair.of(new String[]{"2000,foo"}, ImmutableList.of("foo")),
+      Pair.of(new String[]{"2000,foo\n2000,bar\n"}, ImmutableList.of("foo", "bar")),
+      Pair.of(new String[]{"2000,foo\n2000,bar\n", "2000,baz"}, ImmutableList.of("foo", "bar", "baz")),
+      Pair.of(new String[]{"2000,foo\n2000,bar\n", "", "2000,baz"}, ImmutableList.of("foo", "bar", "baz")),
+      Pair.of(new String[]{"2000,foo\n2000,bar\n", "", "2000,baz", ""}, ImmutableList.of("foo", "bar", "baz")),
+      Pair.of(new String[]{""}, ImmutableList.<String>of()),
+      Pair.of(new String[]{}, ImmutableList.<String>of())
+  );
+
+  @Test
+  public void testFirehose() throws Exception
+  {
+    for (Pair<String[], ImmutableList<String>> fixture : fixtures) {
+      final List<LineIterator> lineIterators = Lists.transform(
+          Arrays.asList(fixture.lhs),
+          new Function<String, LineIterator>()
+          {
+            @Override
+            public LineIterator apply(String s)
+            {
+              return new LineIterator(new StringReader(s));
+            }
+          }
+      );
+
+      final StringInputRowParser parser = new StringInputRowParser(
+          new CSVParseSpec(
+              new TimestampSpec("ts", "auto", null),
+              new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("x")), null, null),
+              ",",
+              ImmutableList.of("ts", "x")
+          ),
+          null
+      );
+
+      final FileIteratingFirehose firehose = new FileIteratingFirehose(lineIterators.iterator(), parser);
+      final List<String> results = Lists.newArrayList();
+
+      while (firehose.hasMore()) {
+        results.add(Joiner.on("|").join(firehose.nextRow().getDimension("x")));
+      }
+
+      Assert.assertEquals(fixture.rhs, results);
+    }
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/InputRowParserSerdeTest.java b/api/src/test/java/io/druid/data/input/impl/InputRowParserSerdeTest.java
new file mode 100644
index 00000000000..cbc3b42b7f6
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/InputRowParserSerdeTest.java
@@ -0,0 +1,233 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Charsets;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import io.druid.TestObjectMapper;
+import io.druid.data.input.ByteBufferInputRowParser;
+import io.druid.data.input.InputRow;
+import org.joda.time.DateTime;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.nio.ByteBuffer;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.List;
+
+public class InputRowParserSerdeTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Test
+  public void testStringInputRowParserSerde() throws Exception
+  {
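+    // Round-trip the parser through JSON serde and verify the reconstructed parser still parses correctly.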
+    final StringInputRowParser parser = new StringInputRowParser(
+        new JSONParseSpec(
+            new TimestampSpec("timestamp", "iso", null),
+            new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("foo", "bar")), null, null),
+            null,
+            null
+        ),
+        null
+    );
+    final ByteBufferInputRowParser parser2 = jsonMapper.readValue(
+        jsonMapper.writeValueAsBytes(parser),
+        ByteBufferInputRowParser.class
+    );
+    final InputRow parsed = parser2.parse(
+        ByteBuffer.wrap(
+            "{\"foo\":\"x\",\"bar\":\"y\",\"qux\":\"z\",\"timestamp\":\"2000\"}".getBytes(Charsets.UTF_8)
+        )
+    );
+    Assert.assertEquals(ImmutableList.of("foo", "bar"), parsed.getDimensions());
+    Assert.assertEquals(ImmutableList.of("x"), parsed.getDimension("foo"));
+    Assert.assertEquals(ImmutableList.of("y"), parsed.getDimension("bar"));
+    Assert.assertEquals(new DateTime("2000").getMillis(), parsed.getTimestampFromEpoch());
+  }
+
+  @Test
+  public void testStringInputRowParserSerdeMultiCharset() throws Exception
+  {
+    Charset[] testCharsets = {
+        Charsets.US_ASCII, Charsets.ISO_8859_1, Charsets.UTF_8,
+        Charsets.UTF_16BE, Charsets.UTF_16LE, Charsets.UTF_16
+    };
+
+    for (Charset testCharset : testCharsets) {
+      InputRow parsed = testCharsetParseHelper(testCharset);
+      Assert.assertEquals(ImmutableList.of("foo", "bar"), parsed.getDimensions());
+      Assert.assertEquals(ImmutableList.of("x"), parsed.getDimension("foo"));
+      Assert.assertEquals(ImmutableList.of("y"), parsed.getDimension("bar"));
+      Assert.assertEquals(new DateTime("3000").getMillis(), parsed.getTimestampFromEpoch());
+    }
+  }
+
+  @Test
+  public void testMapInputRowParserSerde() throws Exception
+  {
+    final MapInputRowParser parser = new MapInputRowParser(
+        new JSONParseSpec(
+            new TimestampSpec("timeposix", "posix", null),
+            new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("foo", "bar")), ImmutableList.of("baz"), null),
+            null,
+            null
+        )
+    );
+    final MapInputRowParser parser2 = jsonMapper.readValue(
+        jsonMapper.writeValueAsBytes(parser),
+        MapInputRowParser.class
+    );
+    final InputRow parsed = parser2.parse(
+        ImmutableMap.<String, Object>of(
+            "foo", "x",
+            "bar", "y",
+            "qux", "z",
+            "timeposix", "1"
+        )
+    );
+    Assert.assertEquals(ImmutableList.of("foo", "bar"), parsed.getDimensions());
+    Assert.assertEquals(ImmutableList.of("x"), parsed.getDimension("foo"));
+    Assert.assertEquals(ImmutableList.of("y"), parsed.getDimension("bar"));
+    Assert.assertEquals(1000, parsed.getTimestampFromEpoch());
+  }
+
+  @Test
+  public void testMapInputRowParserNumbersSerde() throws Exception
+  {
+    final MapInputRowParser parser = new MapInputRowParser(
+        new JSONParseSpec(
+            new TimestampSpec("timemillis", "millis", null),
+            new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("foo", "values")), ImmutableList.of("toobig", "value"), null),
+            null,
+            null
+        )
+    );
+    final MapInputRowParser parser2 = jsonMapper.readValue(
+        jsonMapper.writeValueAsBytes(parser),
+        MapInputRowParser.class
+    );
+    final InputRow parsed = parser2.parse(
+        ImmutableMap.<String, Object>of(
+            "timemillis", 1412705931123L,
+            "toobig", 123E64,
+            "value", 123.456,
+            "long", 123456789000L,
+            "values", Lists.newArrayList(1412705931123L, 123.456, 123E45, "hello")
+        )
+    );
+    Assert.assertEquals(ImmutableList.of("foo", "values"), parsed.getDimensions());
+    Assert.assertEquals(ImmutableList.of(), parsed.getDimension("foo"));
+    Assert.assertEquals(
+        ImmutableList.of("1412705931123", "123.456", "1.23E47", "hello"),
+        parsed.getDimension("values")
+    );
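+    // Values beyond float range should overflow to infinity via getFloatMetric(), while getRaw() keeps the original objects.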
+    Assert.assertEquals(Float.POSITIVE_INFINITY, parsed.getFloatMetric("toobig"), 0.0);
+    Assert.assertEquals(123E64, parsed.getRaw("toobig"));
+    Assert.assertEquals(123.456f, parsed.getFloatMetric("value"), 0.0f);
+    Assert.assertEquals(123456789000L, parsed.getRaw("long"));
+    Assert.assertEquals(1.23456791E11f, parsed.getFloatMetric("long"), 0.0f);
+    Assert.assertEquals(1412705931123L, parsed.getTimestampFromEpoch());
+  }
+
+  private InputRow testCharsetParseHelper(Charset charset) throws Exception
+  {
+    final StringInputRowParser parser = new StringInputRowParser(
+        new JSONParseSpec(
+            new TimestampSpec("timestamp", "iso", null),
+            new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("foo", "bar")), null, null),
+            null,
+            null
+        ),
+        charset.name()
+    );
+
+    final ByteBufferInputRowParser parser2 = jsonMapper.readValue(
+        jsonMapper.writeValueAsBytes(parser),
+        ByteBufferInputRowParser.class
+    );
+
+    final InputRow parsed = parser2.parse(
+        ByteBuffer.wrap(
+            "{\"foo\":\"x\",\"bar\":\"y\",\"qux\":\"z\",\"timestamp\":\"3000\"}".getBytes(charset)
+        )
+    );
+
+    return parsed;
+  }
+
+  @Test
+  public void testFlattenParse() throws Exception
+  {
+    List<JSONPathFieldSpec> fields = new ArrayList<>();
+    fields.add(JSONPathFieldSpec.createNestedField("foobar1", "$.foo.bar1"));
+    fields.add(JSONPathFieldSpec.createNestedField("foobar2", "$.foo.bar2"));
+    fields.add(JSONPathFieldSpec.createNestedField("baz0", "$.baz[0]"));
+    fields.add(JSONPathFieldSpec.createNestedField("baz1", "$.baz[1]"));
+    fields.add(JSONPathFieldSpec.createNestedField("baz2", "$.baz[2]"));
+    fields.add(JSONPathFieldSpec.createNestedField("hey0barx", "$.hey[0].barx"));
+    fields.add(JSONPathFieldSpec.createNestedField("metA", "$.met.a"));
+    fields.add(JSONPathFieldSpec.createRootField("timestamp"));
+    fields.add(JSONPathFieldSpec.createRootField("foo.bar1"));
+
+    JSONPathSpec flattenSpec = new JSONPathSpec(true, fields);
+    final StringInputRowParser parser = new StringInputRowParser(
+        new JSONParseSpec(
+            new TimestampSpec("timestamp", "iso", null),
+            new DimensionsSpec(null, null, null),
+            flattenSpec,
+            null
+        ),
+        null
+    );
+
+    final StringInputRowParser parser2 = jsonMapper.readValue(
+        jsonMapper.writeValueAsBytes(parser),
+        StringInputRowParser.class
+    );
+
+    final InputRow parsed = parser2.parse(
+        "{\"blah\":[4,5,6], \"newmet\":5, \"foo\":{\"bar1\":\"aaa\", \"bar2\":\"bbb\"}, \"baz\":[1,2,3], \"timestamp\":\"2999\", \"foo.bar1\":\"Hello world!\", \"hey\":[{\"barx\":\"asdf\"}], \"met\":{\"a\":456}}"
+    );
+    Assert.assertEquals(ImmutableList.of("foobar1", "foobar2", "baz0", "baz1", "baz2", "hey0barx", "metA", "timestamp", "foo.bar1", "blah", "newmet", "baz"), parsed.getDimensions());
+    Assert.assertEquals(ImmutableList.of("aaa"), parsed.getDimension("foobar1"));
+    Assert.assertEquals(ImmutableList.of("bbb"), parsed.getDimension("foobar2"));
+    Assert.assertEquals(ImmutableList.of("1"), parsed.getDimension("baz0"));
+    Assert.assertEquals(ImmutableList.of("2"), parsed.getDimension("baz1"));
+    Assert.assertEquals(ImmutableList.of("3"), parsed.getDimension("baz2"));
+    Assert.assertEquals(ImmutableList.of("Hello world!"), parsed.getDimension("foo.bar1"));
+    Assert.assertEquals(ImmutableList.of("asdf"), parsed.getDimension("hey0barx"));
+    Assert.assertEquals(ImmutableList.of("456"), parsed.getDimension("metA"));
+    Assert.assertEquals(ImmutableList.of("5"), parsed.getDimension("newmet"));
+    Assert.assertEquals(new DateTime("2999").getMillis(), parsed.getTimestampFromEpoch());
+
+    String testSpec = "{\"enabled\": true,\"useFieldDiscovery\": true, \"fields\": [\"parseThisRootField\"]}";
+    final JSONPathSpec parsedSpec = jsonMapper.readValue(testSpec, JSONPathSpec.class);
+    List<JSONPathFieldSpec> fieldSpecs = parsedSpec.getFields();
+    Assert.assertEquals(JSONPathFieldType.ROOT, fieldSpecs.get(0).getType());
+    Assert.assertEquals("parseThisRootField", fieldSpecs.get(0).getName());
+    Assert.assertEquals(null, fieldSpecs.get(0).getExpr());
+  }
+
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/JSONLowercaseParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/JSONLowercaseParseSpecTest.java
new file mode 100644
index 00000000000..b2e6f4681ad
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/JSONLowercaseParseSpecTest.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.collect.Lists;
+
+import io.druid.java.util.common.parsers.Parser;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Map;
+
+public class JSONLowercaseParseSpecTest
+{
+  @Test
+  public void testLowercasing() throws Exception
+  {
+    JSONLowercaseParseSpec spec = new JSONLowercaseParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("A", "B")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        )
+    );
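+    // The lowercase spec folds all event keys to lower case, so the "A" field should be readable as "a".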
+    Parser<String, Object> parser = spec.makeParser();
+    Map<String, Object> event = parser.parse("{\"timestamp\":\"2015-01-01\",\"A\":\"foo\"}");
+    Assert.assertEquals("foo", event.get("a"));
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/JSONParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/JSONParseSpecTest.java
new file mode 100644
index 00000000000..3407496cd9c
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/JSONParseSpecTest.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import io.druid.TestObjectMapper;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashMap;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+
+public class JSONParseSpecTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Test
+  public void testSerde() throws IOException
+  {
+    HashMap<String, Boolean> feature = new HashMap<String, Boolean>();
+    feature.put("ALLOW_UNQUOTED_CONTROL_CHARS", true);
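+    // A Jackson parser feature passed through the featureSpec map; it should survive the serde round trip below.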
+    JSONParseSpec spec = new JSONParseSpec(
+        new TimestampSpec("timestamp", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("bar", "foo")), null, null),
+        null,
+        feature
+    );
+
+    final JSONParseSpec serde = jsonMapper.readValue(
+        jsonMapper.writeValueAsString(spec),
+        JSONParseSpec.class
+    );
+    Assert.assertEquals("timestamp", serde.getTimestampSpec().getTimestampColumn());
+    Assert.assertEquals("iso", serde.getTimestampSpec().getTimestampFormat());
+
+    Assert.assertEquals(Arrays.asList("bar", "foo"), serde.getDimensionsSpec().getDimensionNames());
+    Assert.assertEquals(feature, serde.getFeatureSpec());
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/JSONPathSpecTest.java b/api/src/test/java/io/druid/data/input/impl/JSONPathSpecTest.java
new file mode 100644
index 00000000000..5f405409d86
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/JSONPathSpecTest.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.druid.TestObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class JSONPathSpecTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Test
+  public void testSerde() throws IOException
+  {
+    List<JSONPathFieldSpec> fields = new ArrayList<>();
+    fields.add(JSONPathFieldSpec.createNestedField("foobar1", "$.foo.bar1"));
+    fields.add(JSONPathFieldSpec.createNestedField("baz0", "$.baz[0]"));
+    fields.add(JSONPathFieldSpec.createNestedField("hey0barx", "$.hey[0].barx"));
+    fields.add(JSONPathFieldSpec.createRootField("timestamp"));
+    fields.add(JSONPathFieldSpec.createRootField("foo.bar1"));
+
+    JSONPathSpec flattenSpec = new JSONPathSpec(true, fields);
+
+    final JSONPathSpec serde = jsonMapper.readValue(
+        jsonMapper.writeValueAsString(flattenSpec),
+        JSONPathSpec.class
+    );
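+    // Nested "path" fields should retain their JSONPath expressions, while "root" fields carry no expr.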
+    Assert.assertTrue(serde.isUseFieldDiscovery());
+    List<JSONPathFieldSpec> serdeFields = serde.getFields();
+    JSONPathFieldSpec foobar1 = serdeFields.get(0);
+    JSONPathFieldSpec baz0 = serdeFields.get(1);
+    JSONPathFieldSpec hey0barx = serdeFields.get(2);
+    JSONPathFieldSpec timestamp = serdeFields.get(3);
+    JSONPathFieldSpec foodotbar1 = serdeFields.get(4);
+
+    Assert.assertEquals(JSONPathFieldType.PATH, foobar1.getType());
+    Assert.assertEquals("foobar1", foobar1.getName());
+    Assert.assertEquals("$.foo.bar1", foobar1.getExpr());
+
+    Assert.assertEquals(JSONPathFieldType.PATH, baz0.getType());
+    Assert.assertEquals("baz0", baz0.getName());
+    Assert.assertEquals("$.baz[0]", baz0.getExpr());
+
+    Assert.assertEquals(JSONPathFieldType.PATH, hey0barx.getType());
+    Assert.assertEquals("hey0barx", hey0barx.getName());
+    Assert.assertEquals("$.hey[0].barx", hey0barx.getExpr());
+
+    Assert.assertEquals(JSONPathFieldType.ROOT, timestamp.getType());
+    Assert.assertEquals("timestamp", timestamp.getName());
+    Assert.assertEquals(null, timestamp.getExpr());
+
+    Assert.assertEquals(JSONPathFieldType.ROOT, foodotbar1.getType());
+    Assert.assertEquals("foo.bar1", foodotbar1.getName());
+    Assert.assertEquals(null, foodotbar1.getExpr());
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/JavaScriptParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/JavaScriptParseSpecTest.java
new file mode 100644
index 00000000000..0a806b61356
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/JavaScriptParseSpecTest.java
@@ -0,0 +1,104 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.InjectableValues;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableMap;
+
+import io.druid.TestObjectMapper;
+import io.druid.java.util.common.parsers.Parser;
+import io.druid.js.JavaScriptConfig;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Map;
+
+/**
+ */
+public class JavaScriptParseSpecTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Rule
+  public ExpectedException expectedException = ExpectedException.none();
+
+  @Test
+  public void testSerde() throws IOException
+  {
+    jsonMapper.setInjectableValues(
+        new InjectableValues.Std().addValue(
+            JavaScriptConfig.class,
+            JavaScriptConfig.getEnabledInstance()
+        )
+    );
+    JavaScriptParseSpec spec = new JavaScriptParseSpec(
+        new TimestampSpec("abc", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("abc")), null, null),
+        "abc",
+        JavaScriptConfig.getEnabledInstance()
+    );
+    final JavaScriptParseSpec serde = jsonMapper.readValue(
+        jsonMapper.writeValueAsString(spec),
+        JavaScriptParseSpec.class
+    );
+    Assert.assertEquals("abc", serde.getTimestampSpec().getTimestampColumn());
+    Assert.assertEquals("iso", serde.getTimestampSpec().getTimestampFormat());
+
+    Assert.assertEquals("abc", serde.getFunction());
+    Assert.assertEquals(Arrays.asList("abc"), serde.getDimensionsSpec().getDimensionNames());
+  }
+
+  @Test
+  public void testMakeParser()
+  {
+    final JavaScriptConfig config = JavaScriptConfig.getEnabledInstance();
+    JavaScriptParseSpec spec = new JavaScriptParseSpec(
+        new TimestampSpec("abc", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("abc")), null, null),
+        "function(str) { var parts = str.split(\"-\"); return { one: parts[0], two: parts[1] } }",
+        config
+    );
+
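+    // The compiled JavaScript function should split "x-y" into {one: "x", two: "y"}.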
+    final Parser<String, Object> parser = spec.makeParser();
+    final Map<String, Object> obj = parser.parse("x-y");
+    Assert.assertEquals(ImmutableMap.of("one", "x", "two", "y"), obj);
+  }
+
+  @Test
+  public void testMakeParserNotAllowed()
+  {
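+    // With JavaScript globally disabled, building the parser should fail with an IllegalStateException.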
+    final JavaScriptConfig config = new JavaScriptConfig(false);
+    JavaScriptParseSpec spec = new JavaScriptParseSpec(
+        new TimestampSpec("abc", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("abc")), null, null),
+        "abc",
+        config
+    );
+
+    expectedException.expect(IllegalStateException.class);
+    expectedException.expectMessage("JavaScript is disabled");
+    spec.makeParser();
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/NoopInputRowParserTest.java b/api/src/test/java/io/druid/data/input/impl/NoopInputRowParserTest.java
new file mode 100644
index 00000000000..d0faf6f963f
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/NoopInputRowParserTest.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ */
+public class NoopInputRowParserTest
+{
+  private final ObjectMapper mapper = new ObjectMapper();
+
+  @Test
+  public void testSerdeWithNullParseSpec() throws Exception
+  {
+    String jsonStr = "{ \"type\":\"noop\" }";
+
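+    // Read, write, and re-read to confirm the round trip yields an equal parser.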
+    InputRowParser actual = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(jsonStr, InputRowParser.class)
+        ),
+        InputRowParser.class
+    );
+
+    Assert.assertEquals(new NoopInputRowParser(null), actual);
+  }
+
+  @Test
+  public void testSerdeWithNonNullParseSpec() throws Exception
+  {
+    String jsonStr = "{"
+                     + "\"type\":\"noop\","
+                     + "\"parseSpec\":{ \"format\":\"timeAndDims\", \"dimensionsSpec\": { \"dimensions\": [\"host\"] } }"
+                     + "}";
+
+    InputRowParser actual = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(jsonStr, InputRowParser.class)
+        ),
+        InputRowParser.class
+    );
+
+    Assert.assertEquals(
+        new NoopInputRowParser(
+            new TimeAndDimsParseSpec(
+                null,
+                new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("host")), null, null)
+            )
+        ),
+        actual
+    );
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/ParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/ParseSpecTest.java
new file mode 100644
index 00000000000..4b38453fe35
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/ParseSpecTest.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.collect.Lists;
+
+import io.druid.java.util.common.parsers.ParseException;
+
+import org.junit.Test;
+
+import java.util.Arrays;
+
+public class ParseSpecTest
+{
+  @Test(expected = ParseException.class)
+  public void testDuplicateNames() throws Exception
+  {
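+    // Duplicate dimension names should be rejected at construction time with a ParseException.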
+    final ParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a", "b", "a")),
+            Lists.<String>newArrayList(),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        " ",
+        Arrays.asList("a", "b")
+    );
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testDimAndDimExcluOverlap() throws Exception
+  {
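+    // A name appearing both as a dimension and as an exclusion should trigger an IllegalArgumentException.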
+    final ParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a", "B")),
+            Lists.newArrayList("B"),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        null,
+        Arrays.asList("a", "B")
+    );
+  }
+
+  @Test
+  public void testDimExclusionDuplicate() throws Exception
+  {
+    final ParseSpec spec = new DelimitedParseSpec(
+        new TimestampSpec(
+            "timestamp",
+            "auto",
+            null
+        ),
+        new DimensionsSpec(
+            DimensionsSpec.getDefaultSchemas(Arrays.asList("a")),
+            Lists.newArrayList("B", "B"),
+            Lists.<SpatialDimensionSchema>newArrayList()
+        ),
+        ",",
+        null,
+        Arrays.asList("a", "B")
+    );
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/RegexParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/RegexParseSpecTest.java
new file mode 100644
index 00000000000..d3b86ee8f1d
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/RegexParseSpecTest.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.druid.TestObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+/**
+ */
+public class RegexParseSpecTest
+{
+  private final ObjectMapper jsonMapper = new TestObjectMapper();
+
+  @Test
+  public void testSerde() throws IOException
+  {
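+    // Serialize and re-read the spec to verify pattern, list delimiter, and dimensions survive serde.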
+    RegexParseSpec spec = new RegexParseSpec(
+        new TimestampSpec("abc", "iso", null),
+        new DimensionsSpec(DimensionsSpec.getDefaultSchemas(Arrays.asList("abc")), null, null),
+        "\u0001",
+        Arrays.asList("abc"),
+        "abc"
+    );
+    final RegexParseSpec serde = jsonMapper.readValue(
+        jsonMapper.writeValueAsString(spec),
+        RegexParseSpec.class
+    );
+    Assert.assertEquals("abc", serde.getTimestampSpec().getTimestampColumn());
+    Assert.assertEquals("iso", serde.getTimestampSpec().getTimestampFormat());
+
+    Assert.assertEquals("abc", serde.getPattern());
+    Assert.assertEquals("\u0001", serde.getListDelimiter());
+    Assert.assertEquals(Arrays.asList("abc"), serde.getDimensionsSpec().getDimensionNames());
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/TimeAndDimsParseSpecTest.java b/api/src/test/java/io/druid/data/input/impl/TimeAndDimsParseSpecTest.java
new file mode 100644
index 00000000000..8f36bacf490
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/TimeAndDimsParseSpecTest.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ */
+public class TimeAndDimsParseSpecTest
+{
+  private final ObjectMapper mapper = new ObjectMapper();
+
+  @Test
+  public void testSerdeWithNulls() throws Exception
+  {
+    String jsonStr = "{ \"format\":\"timeAndDims\" }";
+
+    ParseSpec actual = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(jsonStr, ParseSpec.class)
+        ),
+        ParseSpec.class
+    );
+
+    Assert.assertEquals(new TimeAndDimsParseSpec(null, null), actual);
+  }
+
+  @Test
+  public void testSerdeWithNonNulls() throws Exception
+  {
+    String jsonStr = "{"
+                     + "\"format\":\"timeAndDims\","
+                     + "\"timestampSpec\": { \"column\": \"tcol\" },"
+                     + "\"dimensionsSpec\": { \"dimensions\": [\"host\"] }"
+                     + "}";
+
+    ParseSpec actual = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(jsonStr, ParseSpec.class)
+        ),
+        ParseSpec.class
+    );
+
+    Assert.assertEquals(
+        new TimeAndDimsParseSpec(
+            new TimestampSpec("tcol", null, null),
+            new DimensionsSpec(DimensionsSpec.getDefaultSchemas(ImmutableList.of("host")), null, null)
+        ),
+        actual
+    );
+  }
+}
diff --git a/api/src/test/java/io/druid/data/input/impl/TimestampSpecTest.java b/api/src/test/java/io/druid/data/input/impl/TimestampSpecTest.java
new file mode 100644
index 00000000000..8bff2b9d634
--- /dev/null
+++ b/api/src/test/java/io/druid/data/input/impl/TimestampSpecTest.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.data.input.impl;
+
+import com.google.common.collect.ImmutableMap;
+import org.joda.time.DateTime;
+import org.joda.time.format.ISODateTimeFormat;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TimestampSpecTest
+{
+  @Test
+  public void testExtractTimestamp() throws Exception
+  {
+    TimestampSpec spec = new TimestampSpec("TIMEstamp", "yyyy-MM-dd", null);
+    Assert.assertEquals(
+        new DateTime("2014-03-01"),
+        spec.extractTimestamp(ImmutableMap.<String, Object>of("TIMEstamp", "2014-03-01"))
+    );
+  }
+
+  @Test
+  public void testExtractTimestampWithMissingTimestampColumn() throws Exception
+  {
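+    // With no timestamp column configured, the spec should fall back to the supplied missing-value default.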
+    TimestampSpec spec = new TimestampSpec(null, null, new DateTime(0));
+    Assert.assertEquals(
+        new DateTime("1970-01-01"),
+        spec.extractTimestamp(ImmutableMap.<String, Object>of("dim", "foo"))
+    );
+  }
+
+  @Test
+  public void testContextualTimestampList() throws Exception
+  {
+    final String dateFormat = "yyyy-MM-dd'T'HH:mm:ss";
+    String[] dates = new String[]{
+        "2000-01-01T05:00:00",
+        "2000-01-01T05:00:01",
+        "2000-01-01T05:00:01",
+        "2000-01-01T05:00:02",
+        "2000-01-01T05:00:03",
+    };
+    TimestampSpec spec = new TimestampSpec("TIMEstamp", dateFormat, null);
+
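+    // Repeated and closely spaced timestamps are meant to exercise the spec's contextual (cached) parsing path.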
+    for (int i = 0; i < dates.length; ++i) {
+      String date = dates[i];
+      DateTime dateTime = spec.extractTimestamp(ImmutableMap.<String, Object>of("TIMEstamp", date));
+      DateTime expectedDateTime = ISODateTimeFormat.dateHourMinuteSecond().parseDateTime(date);
+      Assert.assertEquals(expectedDateTime, dateTime);
+    }
+  }
+}
diff --git a/api/src/test/java/io/druid/guice/ConditionalMultibindTest.java b/api/src/test/java/io/druid/guice/ConditionalMultibindTest.java
new file mode 100644
index 00000000000..6b9a23491d8
--- /dev/null
+++ b/api/src/test/java/io/druid/guice/ConditionalMultibindTest.java
@@ -0,0 +1,477 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.base.Predicates;
+import com.google.common.collect.ImmutableSet;
+import com.google.inject.Binder;
+import com.google.inject.BindingAnnotation;
+import com.google.inject.Guice;
+import com.google.inject.Inject;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Module;
+import com.google.inject.TypeLiteral;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+import java.util.HashSet;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ */
+public class ConditionalMultibindTest
+{
+
+  private static final String ANIMAL_TYPE = "animal.type";
+
+  private Properties props;
+
+  @Before
+  public void setUp() throws Exception
+  {
+    props = new Properties();
+  }
+
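+  // addConditionBinding contributes an implementation only when the predicate on the property matches; addBinding is unconditional.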
+  @Test
+  public void testMultiConditionalBind_cat()
+  {
+    props.setProperty("animal.type", "cat");
+
+    Injector injector = Guice.createInjector(new Module()
+    {
+      @Override
+      public void configure(Binder binder)
+      {
+        ConditionalMultibind.create(props, binder, Animal.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("cat"), Cat.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("dog"), Dog.class);
+      }
+    });
+
+    Set<Animal> animalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }));
+
+    Assert.assertEquals(1, animalSet.size());
+    Assert.assertEquals(animalSet, ImmutableSet.<Animal>of(new Cat()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_cat_dog()
+  {
+    props.setProperty("animal.type", "pets");
+
+    Injector injector = Guice.createInjector(new Module()
+    {
+      @Override
+      public void configure(Binder binder)
+      {
+        ConditionalMultibind.create(props, binder, Animal.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Cat.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Dog.class);
+      }
+    });
+
+    Set<Animal> animalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }));
+
+    Assert.assertEquals(2, animalSet.size());
+    Assert.assertEquals(animalSet, ImmutableSet.<Animal>of(new Cat(), new Dog()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_cat_dog_non_continuous_syntax()
+  {
+    props.setProperty("animal.type", "pets");
+
+    Injector injector = Guice.createInjector(new Module()
+    {
+      @Override
+      public void configure(Binder binder)
+      {
+        ConditionalMultibind.create(props, binder, Animal.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Cat.class);
+
+        ConditionalMultibind.create(props, binder, Animal.class)
+                            .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Dog.class);
+
+      }
+    });
+
+    Set<Animal> animalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }));
+
+    Assert.assertEquals(2, animalSet.size());
+    Assert.assertEquals(animalSet, ImmutableSet.<Animal>of(new Cat(), new Dog()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_multiple_modules()
+  {
+    props.setProperty("animal.type", "pets");
+
+    Injector injector = Guice.createInjector(
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Cat.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Dog.class);
+          }
+        },
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("not_match"), Tiger.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Fish.class);
+          }
+        }
+    );
+
+    Set<Animal> animalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }));
+
+    Assert.assertEquals(3, animalSet.size());
+    Assert.assertEquals(animalSet, ImmutableSet.<Animal>of(new Cat(), new Dog(), new Fish()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_multiple_modules_with_annotation()
+  {
+    props.setProperty("animal.type", "pets");
+
+    Injector injector = Guice.createInjector(
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class, SanDiego.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Cat.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Dog.class);
+          }
+        },
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class, SanDiego.class)
+                                .addBinding(new Bird())
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Tiger.class);
+
+            ConditionalMultibind.create(props, binder, Animal.class, SanJose.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Fish.class);
+          }
+        }
+    );
+
+    Set<Animal> animalSet_1 = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }, SanDiego.class));
+    Assert.assertEquals(4, animalSet_1.size());
+    Assert.assertEquals(animalSet_1, ImmutableSet.<Animal>of(new Bird(), new Cat(), new Dog(), new Tiger()));
+
+    Set<Animal> animalSet_2 = injector.getInstance(Key.get(new TypeLiteral<Set<Animal>>()
+    {
+    }, SanJose.class));
+    Assert.assertEquals(1, animalSet_2.size());
+    Assert.assertEquals(animalSet_2, ImmutableSet.<Animal>of(new Fish()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_inject()
+  {
+    props.setProperty("animal.type", "pets");
+
+    Injector injector = Guice.createInjector(
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class)
+                                .addBinding(Bird.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Cat.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Dog.class);
+          }
+        },
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder, Animal.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("not_match"), Tiger.class)
+                                .addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), Fish.class);
+          }
+        }
+    );
+
+    PetShopAvails shop = new PetShopAvails();
+    injector.injectMembers(shop);
+
+    Assert.assertEquals(4, shop.animals.size());
+    Assert.assertEquals(shop.animals, ImmutableSet.<Animal>of(new Bird(), new Cat(), new Dog(), new Fish()));
+  }
+
+  @Test
+  public void testMultiConditionalBind_typeLiteral()
+  {
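+    // Conditional multibinding should also work for parameterized targets such as Set<Animal> and Zoo<Animal>.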
+    props.setProperty("animal.type", "pets");
+
+    final Set<Animal> set1 = ImmutableSet.<Animal>of(new Dog(), new Tiger());
+    final Set<Animal> set2 = ImmutableSet.<Animal>of(new Cat(), new Fish());
+    final Set<Animal> set3 = ImmutableSet.<Animal>of(new Cat());
+    final Set<Animal> union = new HashSet<>();
+    union.addAll(set1);
+    union.addAll(set2);
+
+    final Zoo<Animal> zoo1 = new Zoo<>(set1);
+    final Zoo<Animal> zoo2 = new Zoo<>();
+
+    Injector injector = Guice.createInjector(
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder,
+                                        new TypeLiteral<Set<Animal>>()
+                                        {
+                                        }
+            ).addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), set1
+            ).addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), set2);
+
+            ConditionalMultibind.create(props, binder,
+                                        new TypeLiteral<Zoo<Animal>>()
+                                        {
+                                        }
+            ).addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), zoo1);
+          }
+        },
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            ConditionalMultibind.create(props, binder,
+                                        new TypeLiteral<Set<Animal>>()
+                                        {
+                                        }
+            ).addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), set3);
+
+            ConditionalMultibind.create(props, binder,
+                                        new TypeLiteral<Set<Animal>>()
+                                        {
+                                        },
+                                        SanDiego.class
+            ).addConditionBinding(ANIMAL_TYPE, Predicates.equalTo("pets"), union);
+
+            ConditionalMultibind.create(props, binder,
+                                        new TypeLiteral<Zoo<Animal>>()
+                                        {
+                                        }
+            ).addBinding(new TypeLiteral<Zoo<Animal>>()
+            {
+            });
+
+          }
+        }
+    );
+
+    Set<Set<Animal>> actualAnimalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Set<Animal>>>()
+    {
+    }));
+    Assert.assertEquals(3, actualAnimalSet.size());
+    Assert.assertEquals(ImmutableSet.of(set1, set2, set3), actualAnimalSet);
+
+    actualAnimalSet = injector.getInstance(Key.get(new TypeLiteral<Set<Set<Animal>>>()
+    {
+    }, SanDiego.class));
+    Assert.assertEquals(1, actualAnimalSet.size());
+    Assert.assertEquals(ImmutableSet.of(union), actualAnimalSet);
+
+    final Set<Zoo<Animal>> actualZooSet = injector.getInstance(Key.get(new TypeLiteral<Set<Zoo<Animal>>>()
+    {
+    }));
+    Assert.assertEquals(2, actualZooSet.size());
+    Assert.assertEquals(ImmutableSet.of(zoo1, zoo2), actualZooSet);
+  }
+
+  static abstract class Animal
+  {
+    private final String type;
+
+    Animal(String type)
+    {
+      this.type = type;
+    }
+
+    @Override
+    public String toString()
+    {
+      return "Animal{" +
+             "type='" + type + '\'' +
+             '}';
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+      if (this == o) {
+        return true;
+      }
+      if (o == null || getClass() != o.getClass()) {
+        return false;
+      }
+
+      Animal animal = (Animal) o;
+
+      return type != null ? type.equals(animal.type) : animal.type == null;
+    }
+
+    @Override
+    public int hashCode()
+    {
+      return type != null ? type.hashCode() : 0;
+    }
+  }
+
+  static class PetShopAvails
+  {
+    @Inject
+    Set<Animal> animals;
+  }
+
+  static class Dog extends Animal
+  {
+    Dog()
+    {
+      super("dog");
+    }
+  }
+
+  static class Cat extends Animal
+  {
+    Cat()
+    {
+      super("cat");
+    }
+  }
+
+  static class Fish extends Animal
+  {
+    Fish()
+    {
+      super("fish");
+    }
+  }
+
+  static class Tiger extends Animal
+  {
+    Tiger()
+    {
+      super("tiger");
+    }
+  }
+
+  static class Bird extends Animal
+  {
+    Bird()
+    {
+      super("bird");
+    }
+  }
+
+  static class Zoo<T>
+  {
+    Set<T> animals;
+
+    public Zoo()
+    {
+      animals = new HashSet<>();
+    }
+
+    public Zoo(Set<T> animals)
+    {
+      this.animals = animals;
+    }
+
+    @Override
+    public boolean equals(Object o)
+    {
+      if (this == o) {
+        return true;
+      }
+      if (o == null || getClass() != o.getClass()) {
+        return false;
+      }
+
+      Zoo<?> zoo = (Zoo<?>) o;
+
+      return animals != null ? animals.equals(zoo.animals) : zoo.animals == null;
+    }
+
+    @Override
+    public int hashCode()
+    {
+      return animals != null ? animals.hashCode() : 0;
+    }
+
+    @Override
+    public String toString()
+    {
+      return "Zoo{" +
+             "animals=" + animals +
+             '}';
+    }
+  }
+
+  @Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+  @Retention(RetentionPolicy.RUNTIME)
+  @BindingAnnotation
+  @interface SanDiego
+  {
+  }
+
+  @Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
+  @Retention(RetentionPolicy.RUNTIME)
+  @BindingAnnotation
+  @interface SanJose
+  {
+  }
+
+}
diff --git a/api/src/test/java/io/druid/guice/JsonConfiguratorTest.java b/api/src/test/java/io/druid/guice/JsonConfiguratorTest.java
new file mode 100644
index 00000000000..0ce4f77a79a
--- /dev/null
+++ b/api/src/test/java/io/druid/guice/JsonConfiguratorTest.java
@@ -0,0 +1,201 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableSet;
+import io.druid.TestObjectMapper;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import javax.validation.ConstraintViolation;
+import javax.validation.Validator;
+import javax.validation.executable.ExecutableValidator;
+import javax.validation.metadata.BeanDescriptor;
+import java.util.List;
+import java.util.Properties;
+import java.util.Set;
+
+public class JsonConfiguratorTest
+{
+  private static final String PROP_PREFIX = "test.property.prefix.";
+  private final ObjectMapper mapper = new TestObjectMapper();
+  private final Properties properties = new Properties();
+
+  @Before
+  public void setUp()
+  {
+    mapper.registerSubtypes(MappableObject.class);
+  }
+
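+  // A no-op Validator stub; constraint validation is irrelevant to these property-mapping tests.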
+  final Validator validator = new Validator()
+  {
+    @Override
+    public <T> Set<ConstraintViolation<T>> validate(T object, Class<?>... groups)
+    {
+      return ImmutableSet.of();
+    }
+
+    @Override
+    public <T> Set<ConstraintViolation<T>> validateProperty(T object, String propertyName, Class<?>... groups)
+    {
+      return ImmutableSet.of();
+    }
+
+    @Override
+    public <T> Set<ConstraintViolation<T>> validateValue(
+        Class<T> beanType, String propertyName, Object value, Class<?>... groups
+    )
+    {
+      return ImmutableSet.of();
+    }
+
+    @Override
+    public BeanDescriptor getConstraintsForClass(Class<?> clazz)
+    {
+      return null;
+    }
+
+    @Override
+    public <T> T unwrap(Class<T> type)
+    {
+      return null;
+    }
+
+    @Override
+    public ExecutableValidator forExecutables()
+    {
+      return null;
+    }
+  };
+
+  @Test
+  public void testTest()
+  {
+    Assert.assertEquals(
+        new MappableObject("p1", ImmutableList.<String>of("p2")),
+        new MappableObject("p1", ImmutableList.<String>of("p2"))
+    );
+    Assert.assertEquals(new MappableObject("p1", null), new MappableObject("p1", ImmutableList.<String>of()));
+  }
+
+  @Test
+  public void testSimpleConfigurate() throws Exception
+  {
+    final JsonConfigurator configurator = new JsonConfigurator(mapper, validator);
+    properties.setProperty(PROP_PREFIX + "prop1", "prop1");
+    properties.setProperty(PROP_PREFIX + "prop1List", "[\"prop2\"]");
+    final MappableObject obj = configurator.configurate(properties, PROP_PREFIX, MappableObject.class);
+    Assert.assertEquals("prop1", obj.prop1);
+    Assert.assertEquals(ImmutableList.of("prop2"), obj.prop1List);
+  }
+
+  @Test
+  public void testMissingConfigList()
+  {
+    final JsonConfigurator configurator = new JsonConfigurator(mapper, validator);
+    properties.setProperty(PROP_PREFIX + "prop1", "prop1");
+    final MappableObject obj = configurator.configurate(properties, PROP_PREFIX, MappableObject.class);
+    Assert.assertEquals("prop1", obj.prop1);
+    Assert.assertEquals(ImmutableList.of(), obj.prop1List);
+  }
+
+  @Test
+  public void testMissingConfig()
+  {
+    final JsonConfigurator configurator = new JsonConfigurator(mapper, validator);
+    properties.setProperty(PROP_PREFIX + "prop1List", "[\"prop2\"]");
+    final MappableObject obj = configurator.configurate(properties, PROP_PREFIX, MappableObject.class);
+    Assert.assertNull(obj.prop1);
+    Assert.assertEquals(ImmutableList.of("prop2"), obj.prop1List);
+  }
+
+  @Test
+  public void testQuotedConfig()
+  {
+    final JsonConfigurator configurator = new JsonConfigurator(mapper, validator);
+    properties.setProperty(PROP_PREFIX + "prop1", "testing \"prop1\"");
+    final MappableObject obj = configurator.configurate(properties, PROP_PREFIX, MappableObject.class);
+    Assert.assertEquals("testing \"prop1\"", obj.prop1);
+    Assert.assertEquals(ImmutableList.of(), obj.prop1List);
+  }
+}
+
+class MappableObject
+{
+  @JsonProperty("prop1")
+  final String prop1;
+  @JsonProperty("prop1List")
+  final List<String> prop1List;
+
+  @JsonCreator
+  protected MappableObject(
+      @JsonProperty("prop1") final String prop1,
+      @JsonProperty("prop1List") final List<String> prop1List
+  )
+  {
+    this.prop1 = prop1;
+    this.prop1List = prop1List == null ? ImmutableList.<String>of() : prop1List;
+  }
+
+  @JsonProperty
+  public List<String> getProp1List()
+  {
+    return prop1List;
+  }
+
+  @JsonProperty
+  public String getProp1()
+  {
+    return prop1;
+  }
+
+  @Override
+  public boolean equals(Object o)
+  {
+    if (this == o) {
+      return true;
+    }
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    MappableObject object = (MappableObject) o;
+
+    if (prop1 != null ? !prop1.equals(object.prop1) : object.prop1 != null) {
+      return false;
+    }
+    return prop1List != null ? prop1List.equals(object.prop1List) : object.prop1List == null;
+
+  }
+
+  @Override
+  public int hashCode()
+  {
+    int result = prop1 != null ? prop1.hashCode() : 0;
+    result = 31 * result + (prop1List != null ? prop1List.hashCode() : 0);
+    return result;
+  }
+}
diff --git a/api/src/test/java/io/druid/guice/PolyBindTest.java b/api/src/test/java/io/druid/guice/PolyBindTest.java
new file mode 100644
index 00000000000..eaa2583563a
--- /dev/null
+++ b/api/src/test/java/io/druid/guice/PolyBindTest.java
@@ -0,0 +1,155 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.guice;
+
+import com.google.common.collect.Iterables;
+import com.google.inject.Binder;
+import com.google.inject.Guice;
+import com.google.inject.Injector;
+import com.google.inject.Key;
+import com.google.inject.Module;
+import com.google.inject.ProvisionException;
+import com.google.inject.multibindings.MapBinder;
+import com.google.inject.name.Names;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Properties;
+
+/**
+ */
+public class PolyBindTest
+{
+  private Properties props;
+  private Injector injector;
+
+  public void setUp(Module... modules) throws Exception
+  {
+    props = new Properties();
+    injector = Guice.createInjector(
+        Iterables.concat(
+            Arrays.asList(
+                new Module()
+                {
+                  @Override
+                  public void configure(Binder binder)
+                  {
+                    binder.bind(Properties.class).toInstance(props);
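+                    // "billy" selects the Gogo implementation at runtime; Key.get(GoA.class) is used when the property is unset.
+                    // "sally" does the same for GogoSally, falling back to the option named "b" when the property is unset.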
+                    PolyBind.createChoice(binder, "billy", Key.get(Gogo.class), Key.get(GoA.class));
+                    PolyBind.createChoiceWithDefault(binder, "sally", Key.get(GogoSally.class), null, "b");
+                  }
+                }
+            ),
+            Arrays.asList(modules)
+        )
+    );
+  }
+
+  @Test
+  public void testSanity() throws Exception
+  {
+    setUp(
+        new Module()
+        {
+          @Override
+          public void configure(Binder binder)
+          {
+            final MapBinder<String, Gogo> gogoBinder = PolyBind.optionBinder(binder, Key.get(Gogo.class));
+            gogoBinder.addBinding("a").to(GoA.class);
+            gogoBinder.addBinding("b").to(GoB.class);
+
+            final MapBinder<String, GogoSally> gogoSallyBinder = PolyBind.optionBinder(binder, Key.get(GogoSally.class));
+            gogoSallyBinder.addBinding("a").to(GoA.class);
+            gogoSallyBinder.addBinding("b").to(GoB.class);
+
+            PolyBind.createChoice(
+                binder, "billy", Key.get(Gogo.class, Names.named("reverse")), Key.get(GoB.class)
+            );
+            final MapBinder<String,Gogo> annotatedGogoBinder = PolyBind.optionBinder(
+                binder, Key.get(Gogo.class, Names.named("reverse"))
+            );
+            annotatedGogoBinder.addBinding("a").to(GoB.class);
+            annotatedGogoBinder.addBinding("b").to(GoA.class);
+          }
+        }
+    );
+
+    Assert.assertEquals("A", injector.getInstance(Gogo.class).go());
+    Assert.assertEquals("B", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
+    props.setProperty("billy", "b");
+    Assert.assertEquals("B", injector.getInstance(Gogo.class).go());
+    Assert.assertEquals("A", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
+    props.setProperty("billy", "a");
+    Assert.assertEquals("A", injector.getInstance(Gogo.class).go());
+    Assert.assertEquals("B", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
+    props.setProperty("billy", "b");
+    Assert.assertEquals("B", injector.getInstance(Gogo.class).go());
+    Assert.assertEquals("A", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
+    props.setProperty("billy", "c");
+    Assert.assertEquals("A", injector.getInstance(Gogo.class).go());
+    Assert.assertEquals("B", injector.getInstance(Key.get(Gogo.class, Names.named("reverse"))).go());
+
+    // test default property value
+    Assert.assertEquals("B", injector.getInstance(GogoSally.class).go());
+    props.setProperty("sally", "a");
+    Assert.assertEquals("A", injector.getInstance(GogoSally.class).go());
+    props.setProperty("sally", "b");
+    Assert.assertEquals("B", injector.getInstance(GogoSally.class).go());
+    props.setProperty("sally", "c");
+    try {
+      injector.getInstance(GogoSally.class).go();
+      Assert.fail(); // should never be reached
+    } catch (ProvisionException e) {
+      Assert.assertTrue(e.getMessage().contains("Unknown provider[c] of Key[type=io.druid.guice.PolyBindTest$GogoSally"));
+    }
+  }
+
+  public interface Gogo
+  {
+    String go();
+  }
+
+  public interface GogoSally
+  {
+    String go();
+  }
+
+  public static class GoA implements Gogo, GogoSally
+  {
+    @Override
+    public String go()
+    {
+      return "A";
+    }
+  }
+
+  public static class GoB implements Gogo, GogoSally
+  {
+    @Override
+    public String go()
+    {
+      return "B";
+    }
+  }
+}
diff --git a/api/src/test/java/io/druid/js/JavaScriptConfigTest.java b/api/src/test/java/io/druid/js/JavaScriptConfigTest.java
new file mode 100644
index 00000000000..6917caebfda
--- /dev/null
+++ b/api/src/test/java/io/druid/js/JavaScriptConfigTest.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.js;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class JavaScriptConfigTest
+{
+  private static ObjectMapper mapper = new ObjectMapper();
+
+  @Test
+  public void testSerde() throws Exception
+  {
+    String json = "{\"enabled\":true}";
+
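+    // Round-trip: deserialize, re-serialize, and deserialize again so both directions of serde are exercised.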
+    JavaScriptConfig config = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(
+                json,
+                JavaScriptConfig.class
+            )
+        ), JavaScriptConfig.class
+    );
+
+    Assert.assertTrue(config.isEnabled());
+  }
+
+  @Test
+  public void testSerdeWithDefaults() throws Exception
+  {
+    String json = "{}";
+
+    JavaScriptConfig config = mapper.readValue(
+        mapper.writeValueAsString(
+            mapper.readValue(
+                json,
+                JavaScriptConfig.class
+            )
+        ), JavaScriptConfig.class
+    );
+
+    Assert.assertFalse(config.isEnabled());
+  }
+}
diff --git a/api/src/test/java/io/druid/segment/SegmentUtilsTest.java b/api/src/test/java/io/druid/segment/SegmentUtilsTest.java
new file mode 100644
index 00000000000..eeb5fbfd9f2
--- /dev/null
+++ b/api/src/test/java/io/druid/segment/SegmentUtilsTest.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment;
+
+import com.google.common.primitives.Ints;
+import org.apache.commons.io.FileUtils;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.io.IOException;
+
+/**
+ */
+public class SegmentUtilsTest
+{
+  @Rule
+  public final TemporaryFolder tempFolder = new TemporaryFolder();
+
+  @Test
+  public void testVersionBin() throws Exception
+  {
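+    // version.bin stores the segment format version as a 4-byte big-endian int (Ints.toByteArray).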
+    File dir = tempFolder.newFolder();
+    byte[] bytes = Ints.toByteArray(9);
+    FileUtils.writeByteArrayToFile(new File(dir, "version.bin"), bytes);
+    Assert.assertEquals(9, SegmentUtils.getVersionFromDir(dir));
+  }
+
+  @Test
+  public void testIndexDrd() throws Exception
+  {
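+    // When version.bin is absent, the version is read from the first byte of index.drd.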
+    File dir = tempFolder.newFolder();
+    FileUtils.writeByteArrayToFile(new File(dir, "index.drd"), new byte[]{(byte) 0x8});
+    Assert.assertEquals(8, SegmentUtils.getVersionFromDir(dir));
+  }
+
+  @Test(expected = IOException.class)
+  public void testException() throws Exception
+  {
+    SegmentUtils.getVersionFromDir(tempFolder.newFolder());
+  }
+}
diff --git a/api/src/test/java/io/druid/segment/loading/DataSegmentPusherUtilTest.java b/api/src/test/java/io/druid/segment/loading/DataSegmentPusherUtilTest.java
new file mode 100644
index 00000000000..d2f1c6eab40
--- /dev/null
+++ b/api/src/test/java/io/druid/segment/loading/DataSegmentPusherUtilTest.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.segment.loading;
+
+import com.google.common.collect.ImmutableMap;
+import io.druid.timeline.DataSegment;
+import io.druid.timeline.partition.NoneShardSpec;
+import org.joda.time.Interval;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+
+public class DataSegmentPusherUtilTest
+{
+  @Test
+  public void shouldNotHaveColonsInHdfsStorageDir() throws Exception
+  {
+    Interval interval = new Interval("2011-10-01/2011-10-02");
+    ImmutableMap<String, Object> loadSpec = ImmutableMap.<String, Object>of("something", "or_other");
+
+    DataSegment segment = new DataSegment(
+        "something",
+        interval,
+        "brand:new:version",
+        loadSpec,
+        Arrays.asList("dim1", "dim2"),
+        Arrays.asList("met1", "met2"),
+        NoneShardSpec.instance(),
+        null,
+        1
+    );
+
+    String storageDir = DataSegmentPusherUtil.getHdfsStorageDir(segment);
+    Assert.assertEquals("something/20111001T000000.000Z_20111002T000000.000Z/brand_new_version", storageDir);
+  }
+}
diff --git a/api/src/test/java/io/druid/timeline/DataSegmentTest.java b/api/src/test/java/io/druid/timeline/DataSegmentTest.java
new file mode 100644
index 00000000000..5488e97d965
--- /dev/null
+++ b/api/src/test/java/io/druid/timeline/DataSegmentTest.java
@@ -0,0 +1,248 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Range;
+import com.google.common.collect.Sets;
+import io.druid.TestObjectMapper;
+import io.druid.data.input.InputRow;
+import io.druid.timeline.partition.NoneShardSpec;
+import io.druid.timeline.partition.PartitionChunk;
+import io.druid.timeline.partition.ShardSpec;
+import io.druid.timeline.partition.ShardSpecLookup;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ */
+public class DataSegmentTest
+{
+  private static final ObjectMapper mapper = new TestObjectMapper();
+  private static final int TEST_VERSION = 0x7;
+
+  private static ShardSpec getShardSpec(final int partitionNum)
+  {
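+    // Minimal stub: only getPartitionNum() matters for the identifier tests below; the other methods are unused.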
+    return new ShardSpec()
+    {
+      @Override
+      public <T> PartitionChunk<T> createChunk(T obj)
+      {
+        return null;
+      }
+
+      @Override
+      public boolean isInChunk(long timestamp, InputRow inputRow)
+      {
+        return false;
+      }
+
+      @Override
+      public int getPartitionNum()
+      {
+        return partitionNum;
+      }
+
+      @Override
+      public ShardSpecLookup getLookup(List<ShardSpec> shardSpecs)
+      {
+        return null;
+      }
+
+      @Override
+      public Map<String, Range<String>> getDomain()
+      {
+        return ImmutableMap.of();
+      }
+    };
+  }
+
+  @Test
+  public void testV1Serialization() throws Exception
+  {
+    final Interval interval = new Interval("2011-10-01/2011-10-02");
+    final ImmutableMap<String, Object> loadSpec = ImmutableMap.<String, Object>of("something", "or_other");
+
+    DataSegment segment = new DataSegment(
+        "something",
+        interval,
+        "1",
+        loadSpec,
+        Arrays.asList("dim1", "dim2"),
+        Arrays.asList("met1", "met2"),
+        NoneShardSpec.instance(),
+        TEST_VERSION,
+        1
+    );
+
+    final Map<String, Object> objectMap = mapper.readValue(
+        mapper.writeValueAsString(segment),
+        new TypeReference<Map<String, Object>>()
+        {
+        }
+    );
+
+    Assert.assertEquals(10, objectMap.size());
+    Assert.assertEquals("something", objectMap.get("dataSource"));
+    Assert.assertEquals(interval.toString(), objectMap.get("interval"));
+    Assert.assertEquals("1", objectMap.get("version"));
+    Assert.assertEquals(loadSpec, objectMap.get("loadSpec"));
+    Assert.assertEquals("dim1,dim2", objectMap.get("dimensions"));
+    Assert.assertEquals("met1,met2", objectMap.get("metrics"));
+    Assert.assertEquals(ImmutableMap.of("type", "none"), objectMap.get("shardSpec"));
+    Assert.assertEquals(TEST_VERSION, objectMap.get("binaryVersion"));
+    Assert.assertEquals(1, objectMap.get("size"));
+
+    DataSegment deserializedSegment = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
+
+    Assert.assertEquals(segment.getDataSource(), deserializedSegment.getDataSource());
+    Assert.assertEquals(segment.getInterval(), deserializedSegment.getInterval());
+    Assert.assertEquals(segment.getVersion(), deserializedSegment.getVersion());
+    Assert.assertEquals(segment.getLoadSpec(), deserializedSegment.getLoadSpec());
+    Assert.assertEquals(segment.getDimensions(), deserializedSegment.getDimensions());
+    Assert.assertEquals(segment.getMetrics(), deserializedSegment.getMetrics());
+    Assert.assertEquals(segment.getShardSpec(), deserializedSegment.getShardSpec());
+    Assert.assertEquals(segment.getSize(), deserializedSegment.getSize());
+    Assert.assertEquals(segment.getIdentifier(), deserializedSegment.getIdentifier());
+
+    deserializedSegment = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
+    Assert.assertEquals(0, segment.compareTo(deserializedSegment));
+
+    deserializedSegment = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
+    Assert.assertEquals(0, deserializedSegment.compareTo(segment));
+
+    deserializedSegment = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
+    Assert.assertEquals(segment.hashCode(), deserializedSegment.hashCode());
+  }
+
+  @Test
+  public void testIdentifier()
+  {
+    final DataSegment segment = DataSegment.builder()
+                                           .dataSource("foo")
+                                           .interval(new Interval("2012-01-01/2012-01-02"))
+                                           .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
+                                           .shardSpec(NoneShardSpec.instance())
+                                           .build();
+
+    Assert.assertEquals(
+        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z",
+        segment.getIdentifier()
+    );
+  }
+
+  @Test
+  public void testIdentifierWithZeroPartition()
+  {
+    final DataSegment segment = DataSegment.builder()
+                                           .dataSource("foo")
+                                           .interval(new Interval("2012-01-01/2012-01-02"))
+                                           .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
+                                           .shardSpec(getShardSpec(0))
+                                           .build();
+
+    Assert.assertEquals(
+        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z",
+        segment.getIdentifier()
+    );
+  }
+
+  @Test
+  public void testIdentifierWithNonzeroPartition()
+  {
+    final DataSegment segment = DataSegment.builder()
+                                           .dataSource("foo")
+                                           .interval(new Interval("2012-01-01/2012-01-02"))
+                                           .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
+                                           .shardSpec(getShardSpec(7))
+                                           .build();
+
+    Assert.assertEquals(
+        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z_7",
+        segment.getIdentifier()
+    );
+  }
+
+  @Test
+  public void testV1SerializationNullMetrics() throws Exception
+  {
+    final DataSegment segment = DataSegment.builder()
+                                           .dataSource("foo")
+                                           .interval(new Interval("2012-01-01/2012-01-02"))
+                                           .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
+                                           .build();
+
+    final DataSegment segment2 = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
+    Assert.assertEquals("empty dimensions", ImmutableList.of(), segment2.getDimensions());
+    Assert.assertEquals("empty metrics", ImmutableList.of(), segment2.getMetrics());
+  }
+
+  @Test
+  public void testBucketMonthComparator() throws Exception
+  {
+    DataSegment[] sortedOrder = {
+        makeDataSegment("test1", "2011-01-01/2011-01-02", "a"),
+        makeDataSegment("test1", "2011-01-02/2011-01-03", "a"),
+        makeDataSegment("test1", "2011-01-02/2011-01-03", "b"),
+        makeDataSegment("test2", "2011-01-01/2011-01-02", "a"),
+        makeDataSegment("test2", "2011-01-02/2011-01-03", "a"),
+        makeDataSegment("test1", "2011-02-01/2011-02-02", "a"),
+        makeDataSegment("test1", "2011-02-02/2011-02-03", "a"),
+        makeDataSegment("test1", "2011-02-02/2011-02-03", "b"),
+        makeDataSegment("test2", "2011-02-01/2011-02-02", "a"),
+        makeDataSegment("test2", "2011-02-02/2011-02-03", "a"),
+    };
+
+    List<DataSegment> shuffled = Lists.newArrayList(sortedOrder);
+    Collections.shuffle(shuffled);
+
+    Set<DataSegment> theSet = Sets.newTreeSet(DataSegment.bucketMonthComparator());
+    theSet.addAll(shuffled);
+
+    int index = 0;
+    for (DataSegment dataSegment : theSet) {
+      Assert.assertEquals(sortedOrder[index], dataSegment);
+      ++index;
+    }
+  }
+
+  private DataSegment makeDataSegment(String dataSource, String interval, String version)
+  {
+    return DataSegment.builder()
+                      .dataSource(dataSource)
+                      .interval(new Interval(interval))
+                      .version(version)
+                      .size(1)
+                      .build();
+  }
+}
diff --git a/api/src/test/java/io/druid/timeline/DataSegmentUtilsTest.java b/api/src/test/java/io/druid/timeline/DataSegmentUtilsTest.java
new file mode 100644
index 00000000000..5ae9d1dae69
--- /dev/null
+++ b/api/src/test/java/io/druid/timeline/DataSegmentUtilsTest.java
@@ -0,0 +1,123 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.timeline;
+
+import io.druid.timeline.DataSegmentUtils.SegmentIdentifierParts;
+import org.joda.time.Interval;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ */
+public class DataSegmentUtilsTest
+{
+  @Test
+  public void testBasic()
+  {
+    String datasource = "datasource";
+    SegmentIdentifierParts desc = new SegmentIdentifierParts(datasource, new Interval("2015-01-02/2015-01-03"), "ver", "0_0");
+    Assert.assertEquals("datasource_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver_0_0", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals("datasource_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver_0_0", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = new SegmentIdentifierParts(datasource, new Interval("2015-01-02/2015-01-03"), "ver", null);
+    Assert.assertEquals("datasource_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals("datasource_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+  }
+
+  @Test
+  public void testDataSourceWithUnderscore1()
+  {
+    String datasource = "datasource_1";
+    SegmentIdentifierParts desc = new SegmentIdentifierParts(datasource, new Interval("2015-01-02/2015-01-03"), "ver", "0_0");
+    Assert.assertEquals("datasource_1_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver_0_0", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals("datasource_1_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver_0_0", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = new SegmentIdentifierParts(datasource, new Interval("2015-01-02/2015-01-03"), "ver", null);
+    Assert.assertEquals("datasource_1_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals("datasource_1_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver", desc.toString());
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(datasource, desc.toString()));
+  }
+
+  @Test
+  public void testDataSourceWithUnderscore2()
+  {
+    String dataSource = "datasource_2015-01-01T00:00:00.000Z";
+    SegmentIdentifierParts desc = new SegmentIdentifierParts(dataSource, new Interval("2015-01-02/2015-01-03"), "ver", "0_0");
+    Assert.assertEquals(
+        "datasource_2015-01-01T00:00:00.000Z_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver_0_0",
+        desc.toString()
+    );
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(dataSource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals(
+        "datasource_2015-01-01T00:00:00.000Z_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver_0_0",
+        desc.toString()
+    );
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(dataSource, desc.toString()));
+
+    desc = new SegmentIdentifierParts(dataSource, new Interval("2015-01-02/2015-01-03"), "ver", null);
+    Assert.assertEquals(
+        "datasource_2015-01-01T00:00:00.000Z_2015-01-02T00:00:00.000Z_2015-01-03T00:00:00.000Z_ver",
+        desc.toString()
+    );
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(dataSource, desc.toString()));
+
+    desc = desc.withInterval(new Interval("2014-10-20T00:00:00Z/P1D"));
+    Assert.assertEquals(
+        "datasource_2015-01-01T00:00:00.000Z_2014-10-20T00:00:00.000Z_2014-10-21T00:00:00.000Z_ver",
+        desc.toString()
+    );
+    Assert.assertEquals(desc, DataSegmentUtils.valueOf(dataSource, desc.toString()));
+  }
+
+  @Test
+  public void testInvalidFormat0()
+  {
+    Assert.assertNull(DataSegmentUtils.valueOf("ds", "datasource_2015-01-02T00:00:00.000Z_2014-10-20T00:00:00.000Z_version"));
+  }
+
+  @Test
+  public void testInvalidFormat1()
+  {
+    Assert.assertNull(DataSegmentUtils.valueOf("datasource", "datasource_invalid_interval_version"));
+  }
+
+  @Test
+  public void testInvalidFormat2()
+  {
+    Assert.assertNull(DataSegmentUtils.valueOf("datasource", "datasource_2015-01-02T00:00:00.000Z_version"));
+  }
+}
diff --git a/api/src/test/java/io/druid/timeline/partition/NoneShardSpecTest.java b/api/src/test/java/io/druid/timeline/partition/NoneShardSpecTest.java
new file mode 100644
index 00000000000..bc9925a4ddc
--- /dev/null
+++ b/api/src/test/java/io/druid/timeline/partition/NoneShardSpecTest.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.timeline.partition;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import io.druid.TestObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class NoneShardSpecTest
+{
+  @Test
+  public void testEqualsAndHashCode()
+  {
+    final ShardSpec one = NoneShardSpec.instance();
+    final ShardSpec two = NoneShardSpec.instance();
+    Assert.assertEquals(one, two);
+    Assert.assertEquals(one.hashCode(), two.hashCode());
+  }
+
+  @Test
+  public void testSerde() throws Exception
+  {
+    final NoneShardSpec one = NoneShardSpec.instance();
+    ObjectMapper mapper = new TestObjectMapper();
+    NoneShardSpec serde1 = mapper.readValue(mapper.writeValueAsString(one), NoneShardSpec.class);
+    NoneShardSpec serde2 = mapper.readValue(mapper.writeValueAsString(one), NoneShardSpec.class);
+
+    // Serde should return the same singleton instance instead of creating a new one every time.
+    Assert.assertSame(serde1, serde2);
+    Assert.assertSame(one, serde1);
+  }
+
+  @Test
+  public void testPartitionFieldIgnored() throws IOException
+  {
+    final String jsonStr = "{\"type\": \"none\",\"partitionNum\": 2}";
+    ObjectMapper mapper = new TestObjectMapper();
+    final ShardSpec noneShardSpec = mapper.readValue(jsonStr, ShardSpec.class);
+    Assert.assertEquals(NoneShardSpec.instance(), noneShardSpec);
+  }
+}
diff --git a/api/src/test/resources/log4j2.xml b/api/src/test/resources/log4j2.xml
new file mode 100644
index 00000000000..625f9fe1516
--- /dev/null
+++ b/api/src/test/resources/log4j2.xml
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+  ~ Licensed to Metamarkets Group Inc. (Metamarkets) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  Metamarkets licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+<Configuration status="WARN">
+  <Appenders>
+    <Console name="Console" target="SYSTEM_OUT">
+      <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/>
+    </Console>
+  </Appenders>
+  <Loggers>
+    <Root level="info">
+      <AppenderRef ref="Console"/>
+    </Root>
+    <Logger level="debug" name="io.druid" additivity="false">
+      <AppenderRef ref="Console"/>
+    </Logger>
+  </Loggers>
+</Configuration>
diff --git a/aws-common/pom.xml b/aws-common/pom.xml
index 4584043cddc..7c1fafdca46 100644
--- a/aws-common/pom.xml
+++ b/aws-common/pom.xml
@@ -26,7 +26,7 @@
     <parent>
         <groupId>io.druid</groupId>
         <artifactId>druid</artifactId>
-        <version>0.9.0-SNAPSHOT</version>
+        <version>0.10.0-SNAPSHOT</version>
     </parent>
 
     <dependencies>
diff --git a/benchmarks/pom.xml b/benchmarks/pom.xml
index b2ae71e27cc..bd3216f5b3d 100644
--- a/benchmarks/pom.xml
+++ b/benchmarks/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>io.druid</groupId>
     <artifactId>druid</artifactId>
-    <version>0.9.0-SNAPSHOT</version>
+    <version>0.10.0-SNAPSHOT</version>
   </parent>
 
   <prerequisites>
@@ -51,6 +51,28 @@
       <artifactId>druid-processing</artifactId>
       <version>${project.parent.version}</version>
     </dependency>
+    <dependency>
+      <groupId>io.druid</groupId>
+      <artifactId>druid-server</artifactId>
+      <version>${project.parent.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>io.druid</groupId>
+      <artifactId>druid-sql</artifactId>
+      <version>${project.parent.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>io.druid</groupId>
+      <artifactId>druid-processing</artifactId>
+      <version>${project.parent.version}</version>
+      <type>test-jar</type>
+    </dependency>
+    <dependency>
+      <groupId>io.druid</groupId>
+      <artifactId>druid-sql</artifactId>
+      <version>${project.parent.version}</version>
+      <type>test-jar</type>
+    </dependency>
     <dependency>
       <groupId>com.github.wnameless</groupId>
       <artifactId>json-flattener</artifactId>
@@ -65,7 +87,7 @@
 
   <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    <jmh.version>1.9.2</jmh.version>
+    <jmh.version>1.17.2</jmh.version>
     <javac.target>1.7</javac.target>
     <uberjar.name>benchmarks</uberjar.name>
   </properties>
@@ -154,7 +176,6 @@
         </plugin>
         <plugin>
           <artifactId>maven-surefire-plugin</artifactId>
-          <version>2.17</version>
         </plugin>
       </plugins>
     </pluginManagement>
diff --git a/benchmarks/src/main/java/io/druid/benchmark/BitmapIterationBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/BitmapIterationBenchmark.java
new file mode 100644
index 00000000000..f68f025764b
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/BitmapIterationBenchmark.java
@@ -0,0 +1,282 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import io.druid.collections.bitmap.BitSetBitmapFactory;
+import io.druid.collections.bitmap.BitmapFactory;
+import io.druid.collections.bitmap.ConciseBitmapFactory;
+import io.druid.collections.bitmap.ImmutableBitmap;
+import io.druid.collections.bitmap.MutableBitmap;
+import io.druid.collections.bitmap.RoaringBitmapFactory;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.roaringbitmap.IntIterator;
+
+import java.util.Arrays;
+import java.util.Random;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+
+/**
+ * Benchmarks of bitmap iteration and iteration + something (cumulative cost), the latter is useful for comparing total
+ * "usage cost" of different {@link io.druid.segment.data.BitmapSerdeFactory}.
+ *
+ * @see #iter(IterState)
+ * @see #constructAndIter(ConstructAndIterState)
+ * @see #intersectionAndIter(BitmapsForIntersection)
+ * @see #unionAndIter(BitmapsForUnion)
+ */
+@State(Scope.Benchmark)
+@Fork(1)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.NANOSECONDS)
+@Warmup(iterations = 5)
+@Measurement(iterations = 5)
+public class BitmapIterationBenchmark
+{
+  @Param({"bitset", "concise", "roaring"})
+  public String bitmapAlgo;
+
+  /**
+   * Fraction of set bits in the bitmaps to iterate. For {@link #intersectionAndIter} and
+   * {@link #unionAndIter}, this is the fraction of set bits in the final result of intersection or union.
+   */
+  @Param({"0.0", "0.001", "0.1", "0.5", "0.99", "1.0"})
+  public double prob;
+
+  /**
+   * The size of all bitmaps, i.e. the number of rows in a segment for most bitmap use cases.
+   */
+  @Param({"1000000"})
+  public int size;
+
+  private BitmapFactory makeFactory()
+  {
+    switch (bitmapAlgo) {
+      case "bitset":
+        return new BitSetBitmapFactory();
+      case "concise":
+        return new ConciseBitmapFactory();
+      case "roaring":
+        return new RoaringBitmapFactory();
+      default:
+        throw new IllegalStateException();
+    }
+  }
+
+  private BitmapFactory factory;
+
+  @Setup
+  public void setup()
+  {
+    factory = makeFactory();
+  }
+
+  private ImmutableBitmap makeBitmap(double prob)
+  {
+    MutableBitmap mutableBitmap = factory.makeEmptyMutableBitmap();
+    Random random = ThreadLocalRandom.current();
+    for (int bit = 0; bit < size; bit++) {
+      if (random.nextDouble() < prob) {
+        mutableBitmap.add(bit);
+      }
+    }
+    return factory.makeImmutableBitmap(mutableBitmap);
+  }
+
+  @State(Scope.Benchmark)
+  public static class IterState
+  {
+    private ImmutableBitmap bitmap;
+
+    @Setup
+    public void setup(BitmapIterationBenchmark state)
+    {
+      bitmap = state.makeBitmap(state.prob);
+    }
+  }
+
+  /**
+   * General benchmark of bitmap iteration, this is a part of {@link io.druid.segment.IndexMerger#merge} and
+   * query processing on both realtime and historical nodes.
+   */
+  @Benchmark
+  public int iter(IterState state)
+  {
+    ImmutableBitmap bitmap = state.bitmap;
+    return iter(bitmap);
+  }
+
+  private static int iter(ImmutableBitmap bitmap)
+  {
+    int consume = 0;
+    for (IntIterator it = bitmap.iterator(); it.hasNext();) {
+      consume ^= it.next();
+    }
+    return consume;
+  }
+
+  @State(Scope.Benchmark)
+  public static class ConstructAndIterState
+  {
+    private int dataSize;
+    private int[] data;
+
+    @Setup
+    public void setup(BitmapIterationBenchmark state)
+    {
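+      // 2x headroom: the number of set bits is binomial around size * prob, so the exact count varies per run.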
+      data = new int[(int) (state.size * state.prob) * 2];
+      dataSize = 0;
+      Random random = ThreadLocalRandom.current();
+      for (int bit = 0; bit < state.size; bit++) {
+        if (random.nextDouble() < state.prob) {
+          data[dataSize] = bit;
+          dataSize++;
+        }
+      }
+    }
+  }
+
+  /**
+   * Benchmark of cumulative cost of construction of an immutable bitmap and then iterating over it. This is a pattern
+   * from realtime nodes, see {@link io.druid.segment.StringDimensionIndexer#fillBitmapsFromUnsortedEncodedKeyComponent}.
+   * However, this benchmark is still approximate and should be improved to better reflect the actual workloads of realtime nodes.
+   */
+  @Benchmark
+  public int constructAndIter(ConstructAndIterState state)
+  {
+    int dataSize = state.dataSize;
+    int[] data = state.data;
+    MutableBitmap mutableBitmap = factory.makeEmptyMutableBitmap();
+    for (int i = 0; i < dataSize; i++) {
+      mutableBitmap.add(data[i]);
+    }
+    ImmutableBitmap bitmap = factory.makeImmutableBitmap(mutableBitmap);
+    return iter(bitmap);
+  }
+
+  @State(Scope.Benchmark)
+  public static class BitmapsForIntersection
+  {
+    /**
+     * Number of bitmaps to intersect.
+     */
+    @Param({"2", "10", "100"})
+    public int n;
+
+    private ImmutableBitmap[] bitmaps;
+
+    @Setup
+    public void setup(BitmapIterationBenchmark state)
+    {
+      // For independent bitmaps, the fraction of set bits in the intersection is the product of the per-bitmap
+      // fractions: prob = intersectedBitmapProb ^ n, hence intersectedBitmapProb = prob ^ (1 / n).
+      double intersectedBitmapProb = Math.pow(state.prob, 1.0 / n);
+      bitmaps = new ImmutableBitmap[n];
+      for (int i = 0; i < n; i++) {
+        bitmaps[i] = state.makeBitmap(intersectedBitmapProb);
+      }
+    }
+  }
+
+  /**
+   * Benchmark of cumulative cost of bitmap intersection with subsequent iteration over the result. This is a pattern
+   * from query processing of historical nodes, when {@link io.druid.segment.filter.AndFilter} is used.
+   */
+  @Benchmark
+  public int intersectionAndIter(BitmapsForIntersection state)
+  {
+    ImmutableBitmap intersection = factory.intersection(Arrays.asList(state.bitmaps));
+    return iter(intersection);
+  }
+
+  @State(Scope.Benchmark)
+  public static class BitmapsForUnion
+  {
+    /**
+     * Number of bitmaps to union.
+     */
+    @Param({"2", "10", "100"})
+    public int n;
+
+    private ImmutableBitmap[] bitmaps;
+
+    @Setup
+    public void setup(BitmapIterationBenchmark state)
+    {
+      double prob = Math.pow(state.prob, 1.0 / n);
+      MutableBitmap[] mutableBitmaps = new MutableBitmap[n];
+      for (int i = 0; i < n; i++) {
+        mutableBitmaps[i] = state.factory.makeEmptyMutableBitmap();
+      }
+      Random r = ThreadLocalRandom.current();
+      for (int i = 0; i < state.size; i++) {
+        // unions are usually search/filter/select of multiple values of one dimension, so making bitmaps disjoint will
+        // make benchmarks closer to actual workloads
+        MutableBitmap bitmap = mutableBitmaps[r.nextInt(n)];
+        // In one selected bitmap, set the bit with probability=prob, to have the same fraction of set bit in the union
+        if (r.nextDouble() < prob) {
+          bitmap.add(i);
+        }
+      }
+      bitmaps = new ImmutableBitmap[n];
+      for (int i = 0; i < n; i++) {
+        bitmaps[i] = state.factory.makeImmutableBitmap(mutableBitmaps[i]);
+      }
+    }
+  }
+
+  /**
+   * Benchmark of cumulative cost of bitmap union with subsequent iteration over the result. This is a pattern from
+   * query processing on historical nodes, when filters like {@link io.druid.segment.filter.DimensionPredicateFilter},
+   * {@link io.druid.query.filter.RegexDimFilter}, {@link io.druid.query.filter.SearchQueryDimFilter} and similar are
+   * used.
+   */
+  @Benchmark
+  public int unionAndIter(BitmapsForUnion state)
+  {
+    ImmutableBitmap intersection = factory.union(Arrays.asList(state.bitmaps));
+    return iter(intersection);
+  }
+
+  /**
+   * This main() is for debugging from the IDE.
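+   * For actual measurements, run the benchmark through the JMH harness instead (typically by building this module's
+   * uberjar, named "benchmarks" in the pom, and passing the benchmark class name as an argument).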
+   */
+  public static void main(String[] args)
+  {
+    BitmapIterationBenchmark state = new BitmapIterationBenchmark();
+    state.bitmapAlgo = "concise";
+    state.prob = 0.001;
+    state.size = 1000000;
+    state.setup();
+
+    BitmapsForIntersection state2 = new BitmapsForIntersection();
+    state2.setup(state);
+    state.intersectionAndIter(state2);
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/BoundFilterBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/BoundFilterBenchmark.java
new file mode 100644
index 00000000000..a8a3679d812
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/BoundFilterBenchmark.java
@@ -0,0 +1,299 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.FluentIterable;
+import io.druid.collections.bitmap.BitmapFactory;
+import io.druid.collections.bitmap.ImmutableBitmap;
+import io.druid.collections.bitmap.MutableBitmap;
+import io.druid.collections.bitmap.RoaringBitmapFactory;
+import io.druid.collections.spatial.ImmutableRTree;
+import io.druid.extendedset.intset.ConciseSetUtils;
+import io.druid.query.filter.BitmapIndexSelector;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.ordering.StringComparators;
+import io.druid.segment.column.BitmapIndex;
+import io.druid.segment.data.BitmapSerdeFactory;
+import io.druid.segment.data.GenericIndexed;
+import io.druid.segment.data.Indexed;
+import io.druid.segment.data.RoaringBitmapSerdeFactory;
+import io.druid.segment.filter.BoundFilter;
+import io.druid.segment.serde.BitmapIndexColumnPartSupplier;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 10)
+public class BoundFilterBenchmark
+{
+  private static final int START_INT = 1_000_000_000;
+  private static final int END_INT = ConciseSetUtils.MAX_ALLOWED_INTEGER;
+
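+  // Three selectivity tiers per comparator: a bound that matches no values, roughly half of them, and all of them.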
+  private static final BoundFilter NOTHING_LEXICOGRAPHIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT),
+          String.valueOf(START_INT),
+          true,
+          false,
+          false,
+          null,
+          StringComparators.LEXICOGRAPHIC
+      )
+  );
+
+  private static final BoundFilter HALF_LEXICOGRAPHIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT + (END_INT - START_INT) / 2),
+          String.valueOf(END_INT),
+          false,
+          false,
+          false,
+          null,
+          StringComparators.LEXICOGRAPHIC
+      )
+  );
+
+  private static final BoundFilter EVERYTHING_LEXICOGRAPHIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT),
+          String.valueOf(END_INT),
+          false,
+          false,
+          false,
+          null,
+          StringComparators.LEXICOGRAPHIC
+      )
+  );
+
+  private static final BoundFilter NOTHING_ALPHANUMERIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT),
+          String.valueOf(START_INT),
+          true,
+          false,
+          true,
+          null,
+          StringComparators.ALPHANUMERIC
+      )
+  );
+
+  private static final BoundFilter HALF_ALPHANUMERIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT + (END_INT - START_INT) / 2),
+          String.valueOf(END_INT),
+          false,
+          false,
+          true,
+          null,
+          StringComparators.ALPHANUMERIC
+      )
+  );
+
+  private static final BoundFilter EVERYTHING_ALPHANUMERIC = new BoundFilter(
+      new BoundDimFilter(
+          "foo",
+          String.valueOf(START_INT),
+          String.valueOf(END_INT),
+          false,
+          false,
+          true,
+          null,
+          StringComparators.ALPHANUMERIC
+      )
+  );
+
+  // cardinality of the dictionary; it will contain evenly spaced integers between START_INT and END_INT
+  @Param({"1000", "100000", "1000000"})
+  int cardinality;
+
+  int step;
+
+  // the selector will contain `cardinality` bitmaps; each one contains a single row id
+  BitmapIndexSelector selector;
+
+  @Setup
+  public void setup() throws IOException
+  {
+    step = (END_INT - START_INT) / cardinality;
+    final BitmapFactory bitmapFactory = new RoaringBitmapFactory();
+    final BitmapSerdeFactory serdeFactory = new RoaringBitmapSerdeFactory(null);
+    final List<Integer> ints = generateInts();
+    final GenericIndexed<String> dictionary = GenericIndexed.fromIterable(
+        FluentIterable.from(ints)
+                      .transform(
+                          new Function<Integer, String>()
+                          {
+                            @Override
+                            public String apply(Integer i)
+                            {
+                              return i.toString();
+                            }
+                          }
+                      ),
+        GenericIndexed.STRING_STRATEGY
+    );
+    final BitmapIndex bitmapIndex = new BitmapIndexColumnPartSupplier(
+        bitmapFactory,
+        GenericIndexed.fromIterable(
+            FluentIterable.from(ints)
+                          .transform(
+                              new Function<Integer, ImmutableBitmap>()
+                              {
+                                @Override
+                                public ImmutableBitmap apply(Integer i)
+                                {
+                                  final MutableBitmap mutableBitmap = bitmapFactory.makeEmptyMutableBitmap();
+                                  mutableBitmap.add((i - START_INT) / step);
+                                  return bitmapFactory.makeImmutableBitmap(mutableBitmap);
+                                }
+                              }
+                          ),
+            serdeFactory.getObjectStrategy()
+        ),
+        dictionary
+    ).get();
+    selector = new BitmapIndexSelector()
+    {
+      @Override
+      public Indexed<String> getDimensionValues(String dimension)
+      {
+        return dictionary;
+      }
+
+      @Override
+      public int getNumRows()
+      {
+        throw new UnsupportedOperationException();
+      }
+
+      @Override
+      public BitmapFactory getBitmapFactory()
+      {
+        return bitmapFactory;
+      }
+
+      @Override
+      public ImmutableBitmap getBitmapIndex(String dimension, String value)
+      {
+        return bitmapIndex.getBitmap(bitmapIndex.getIndex(value));
+      }
+
+      @Override
+      public BitmapIndex getBitmapIndex(String dimension)
+      {
+        return bitmapIndex;
+      }
+
+      @Override
+      public ImmutableRTree getSpatialIndex(String dimension)
+      {
+        throw new UnsupportedOperationException();
+      }
+    };
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchNothingLexicographic()
+  {
+    final ImmutableBitmap bitmapIndex = NOTHING_LEXICOGRAPHIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() == 0);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchHalfLexicographic()
+  {
+    final ImmutableBitmap bitmapIndex = HALF_LEXICOGRAPHIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() > 0 && bitmapIndex.size() < cardinality);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchEverythingLexicographic()
+  {
+    final ImmutableBitmap bitmapIndex = EVERYTHING_LEXICOGRAPHIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() == cardinality);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchNothingAlphaNumeric()
+  {
+    final ImmutableBitmap bitmapIndex = NOTHING_ALPHANUMERIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() == 0);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchHalfAlphaNumeric()
+  {
+    final ImmutableBitmap bitmapIndex = HALF_ALPHANUMERIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() > 0 && bitmapIndex.size() < cardinality);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchEverythingAlphaNumeric()
+  {
+    final ImmutableBitmap bitmapIndex = EVERYTHING_ALPHANUMERIC.getBitmapIndex(selector);
+    Preconditions.checkState(bitmapIndex.size() == cardinality);
+  }
+
+  private List<Integer> generateInts()
+  {
+    final List<Integer> ints = new ArrayList<>(cardinality);
+
+    for (int i = 0; i < cardinality; i++) {
+      ints.add(START_INT + step * i);
+    }
+
+    return ints;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/CompressedIndexedIntsBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/CompressedIndexedIntsBenchmark.java
index 8d97705ca11..9a5bf336e65 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/CompressedIndexedIntsBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/CompressedIndexedIntsBenchmark.java
@@ -76,7 +76,9 @@ public void setup() throws IOException
         )
     );
     this.compressed = CompressedVSizeIntsIndexedSupplier.fromByteBuffer(
-        bufferCompressed, ByteOrder.nativeOrder()
+        bufferCompressed,
+        ByteOrder.nativeOrder(),
+        null
     ).get();
 
     final ByteBuffer bufferUncompressed = serialize(
diff --git a/benchmarks/src/main/java/io/druid/benchmark/CompressedVSizeIndexedBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/CompressedVSizeIndexedBenchmark.java
index 431eb2c2abe..106854cd038 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/CompressedVSizeIndexedBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/CompressedVSizeIndexedBenchmark.java
@@ -99,7 +99,9 @@ public IndexedInts apply(int[] input)
         )
     );
     this.compressed = CompressedVSizeIndexedSupplier.fromByteBuffer(
-        bufferCompressed, ByteOrder.nativeOrder()
+        bufferCompressed,
+        ByteOrder.nativeOrder(),
+        null
     ).get();
 
     final ByteBuffer bufferUncompressed = serialize(
diff --git a/benchmarks/src/main/java/io/druid/benchmark/ConciseComplementBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/ConciseComplementBenchmark.java
index dc69035acf9..a31a3f713f3 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/ConciseComplementBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/ConciseComplementBenchmark.java
@@ -20,7 +20,7 @@
 package io.druid.benchmark;
 
 
-import it.uniroma3.mat.extendedset.intset.ImmutableConciseSet;
+import io.druid.extendedset.intset.ImmutableConciseSet;
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
 import org.openjdk.jmh.annotations.Mode;
diff --git a/benchmarks/src/main/java/io/druid/benchmark/DimensionPredicateFilterBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/DimensionPredicateFilterBenchmark.java
new file mode 100644
index 00000000000..0a22e8c1267
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/DimensionPredicateFilterBenchmark.java
@@ -0,0 +1,208 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Predicate;
+import com.google.common.collect.FluentIterable;
+import io.druid.collections.bitmap.BitmapFactory;
+import io.druid.collections.bitmap.ImmutableBitmap;
+import io.druid.collections.bitmap.MutableBitmap;
+import io.druid.collections.bitmap.RoaringBitmapFactory;
+import io.druid.collections.spatial.ImmutableRTree;
+import io.druid.query.filter.BitmapIndexSelector;
+import io.druid.query.filter.DruidFloatPredicate;
+import io.druid.query.filter.DruidLongPredicate;
+import io.druid.query.filter.DruidPredicateFactory;
+import io.druid.segment.column.BitmapIndex;
+import io.druid.segment.data.BitmapSerdeFactory;
+import io.druid.segment.data.GenericIndexed;
+import io.druid.segment.data.Indexed;
+import io.druid.segment.data.RoaringBitmapSerdeFactory;
+import io.druid.segment.filter.DimensionPredicateFilter;
+import io.druid.segment.serde.BitmapIndexColumnPartSupplier;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 10)
+public class DimensionPredicateFilterBenchmark
+{
+  private static final int START_INT = 1_000_000_000;
+
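+  // Matches dictionary values that parse to even integers; the long and float paths never match.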
+  private static final DimensionPredicateFilter IS_EVEN = new DimensionPredicateFilter(
+      "foo",
+      new DruidPredicateFactory()
+      {
+        @Override
+        public Predicate<String> makeStringPredicate()
+        {
+          return new Predicate<String>()
+          {
+            @Override
+            public boolean apply(String input)
+            {
+              if (input == null) {
+                return false;
+              }
+              return Integer.parseInt(input) % 2 == 0;
+            }
+          };
+        }
+
+        @Override
+        public DruidLongPredicate makeLongPredicate()
+        {
+          return DruidLongPredicate.ALWAYS_FALSE;
+        }
+
+        @Override
+        public DruidFloatPredicate makeFloatPredicate()
+        {
+          return DruidFloatPredicate.ALWAYS_FALSE;
+        }
+      },
+      null
+  );
+
+  // dictionary cardinality; the dictionary contains this many consecutive integers starting from START_INT
+  @Param({"1000", "100000", "1000000"})
+  int cardinality;
+
+  // selector will contain one bitmap per dictionary entry; each bitmap holds a single row id equal to its dictionary position
+  BitmapIndexSelector selector;
+
+  @Setup
+  public void setup() throws IOException
+  {
+    final BitmapFactory bitmapFactory = new RoaringBitmapFactory();
+    final BitmapSerdeFactory serdeFactory = new RoaringBitmapSerdeFactory(null);
+    final List<Integer> ints = generateInts();
+    final GenericIndexed<String> dictionary = GenericIndexed.fromIterable(
+        FluentIterable.from(ints)
+                      .transform(
+                          new Function<Integer, String>()
+                          {
+                            @Override
+                            public String apply(Integer i)
+                            {
+                              return i.toString();
+                            }
+                          }
+                      ),
+        GenericIndexed.STRING_STRATEGY
+    );
+    final BitmapIndex bitmapIndex = new BitmapIndexColumnPartSupplier(
+        bitmapFactory,
+        GenericIndexed.fromIterable(
+            FluentIterable.from(ints)
+                          .transform(
+                              new Function<Integer, ImmutableBitmap>()
+                              {
+                                @Override
+                                public ImmutableBitmap apply(Integer i)
+                                {
+                                  final MutableBitmap mutableBitmap = bitmapFactory.makeEmptyMutableBitmap();
+                                  mutableBitmap.add(i - START_INT);
+                                  return bitmapFactory.makeImmutableBitmap(mutableBitmap);
+                                }
+                              }
+                          ),
+            serdeFactory.getObjectStrategy()
+        ),
+        dictionary
+    ).get();
+    selector = new BitmapIndexSelector()
+    {
+      @Override
+      public Indexed<String> getDimensionValues(String dimension)
+      {
+        return dictionary;
+      }
+
+      @Override
+      public int getNumRows()
+      {
+        throw new UnsupportedOperationException();
+      }
+
+      @Override
+      public BitmapFactory getBitmapFactory()
+      {
+        return bitmapFactory;
+      }
+
+      @Override
+      public ImmutableBitmap getBitmapIndex(String dimension, String value)
+      {
+        return bitmapIndex.getBitmap(bitmapIndex.getIndex(value));
+      }
+
+      @Override
+      public BitmapIndex getBitmapIndex(String dimension)
+      {
+        return bitmapIndex;
+      }
+
+      @Override
+      public ImmutableRTree getSpatialIndex(String dimension)
+      {
+        throw new UnsupportedOperationException();
+      }
+    };
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchIsEven()
+  {
+    final ImmutableBitmap bitmapIndex = IS_EVEN.getBitmapIndex(selector);
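+    // START_INT is even and the dictionary values are consecutive, so exactly half of them match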
+    Preconditions.checkState(bitmapIndex.size() == cardinality / 2);
+  }
+
+  private List<Integer> generateInts()
+  {
+    final List<Integer> ints = new ArrayList<>(cardinality);
+
+    for (int i = 0; i < cardinality; i++) {
+      ints.add(START_INT + i);
+    }
+
+    return ints;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FilterPartitionBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/FilterPartitionBenchmark.java
new file mode 100644
index 00000000000..62de8735200
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/FilterPartitionBenchmark.java
@@ -0,0 +1,651 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import com.google.common.base.Predicate;
+import com.google.common.base.Strings;
+import com.google.common.collect.Lists;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.js.JavaScriptConfig;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.extraction.ExtractionFn;
+import io.druid.query.extraction.JavaScriptExtractionFn;
+import io.druid.query.filter.AndDimFilter;
+import io.druid.query.filter.BitmapIndexSelector;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.filter.DimFilter;
+import io.druid.query.filter.DruidFloatPredicate;
+import io.druid.query.filter.DruidLongPredicate;
+import io.druid.query.filter.DruidPredicateFactory;
+import io.druid.query.filter.Filter;
+import io.druid.query.filter.OrDimFilter;
+import io.druid.query.filter.SelectorDimFilter;
+import io.druid.query.ordering.StringComparators;
+import io.druid.segment.Cursor;
+import io.druid.segment.DimensionSelector;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.LongColumnSelector;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexStorageAdapter;
+import io.druid.segment.StorageAdapter;
+import io.druid.segment.VirtualColumns;
+import io.druid.segment.column.Column;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.data.IndexedInts;
+import io.druid.segment.filter.AndFilter;
+import io.druid.segment.filter.BoundFilter;
+import io.druid.segment.filter.DimensionPredicateFilter;
+import io.druid.segment.filter.Filters;
+import io.druid.segment.filter.OrFilter;
+import io.druid.segment.filter.SelectorFilter;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.joda.time.Interval;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class FilterPartitionBenchmark
+{
+  @Param({"750000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  private static final Logger log = new Logger(FilterPartitionBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+  private IncrementalIndex incIndex;
+  private QueryableIndex qIndex;
+  private File indexFile;
+  private File tmpDir;
+
+  private Filter timeFilterNone;
+  private Filter timeFilterHalf;
+  private Filter timeFilterAll;
+
+  private BenchmarkSchemaInfo schemaInfo;
+
+  private static final String JS_FN = "function(str) { return 'super-' + str; }";
+  private static final ExtractionFn JS_EXTRACTION_FN = new JavaScriptExtractionFn(JS_FN, false, JavaScriptConfig.getEnabledInstance());
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    incIndex = makeIncIndex();
+
+    for (int j = 0; j < rowsPerSegment; j++) {
+      InputRow row = gen.nextRow();
+      if (j % 10000 == 0) {
+        log.info(j + " rows generated.");
+      }
+      incIndex.add(row);
+    }
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+
+    indexFile = INDEX_MERGER_V9.persist(
+        incIndex,
+        tmpDir,
+        new IndexSpec()
+    );
+    qIndex = INDEX_IO.loadIndex(indexFile);
+
+    Interval interval = schemaInfo.getDataInterval();
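+    // Bound filters on the __time column matching no rows, the first half of the interval, and the full interval.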
+    timeFilterNone = new BoundFilter(new BoundDimFilter(
+        Column.TIME_COLUMN_NAME,
+        String.valueOf(Long.MAX_VALUE),
+        String.valueOf(Long.MAX_VALUE),
+        true,
+        true,
+        null,
+        null,
+        StringComparators.ALPHANUMERIC
+    ));
+
+    long halfEnd = (interval.getEndMillis() + interval.getStartMillis()) / 2;
+    timeFilterHalf = new BoundFilter(new BoundDimFilter(
+        Column.TIME_COLUMN_NAME,
+        String.valueOf(interval.getStartMillis()),
+        String.valueOf(halfEnd),
+        true,
+        true,
+        null,
+        null,
+        StringComparators.ALPHANUMERIC
+    ));
+
+    timeFilterAll = new BoundFilter(new BoundDimFilter(
+        Column.TIME_COLUMN_NAME,
+        String.valueOf(interval.getStartMillis()),
+        String.valueOf(interval.getEndMillis()),
+        true,
+        true,
+        null,
+        null,
+        StringComparators.ALPHANUMERIC
+    ));
+  }
+
+  @TearDown
+  public void tearDown() throws IOException
+  {
+    FileUtils.deleteDirectory(tmpDir);
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void stringRead(Blackhole blackhole) throws Exception
+  {
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, null);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void longRead(Blackhole blackhole) throws Exception
+  {
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, null);
+
+    Sequence<List<Long>> longListSeq = readCursorsLong(cursors, blackhole);
+    List<Long> longs = Sequences.toList(Sequences.limit(longListSeq, 1), Lists.<List<Long>>newArrayList()).get(0);
+    for (Long longval : longs) {
+      blackhole.consume(longval);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void timeFilterNone(Blackhole blackhole) throws Exception
+  {
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, timeFilterNone);
+
+    Sequence<List<Long>> longListSeq = readCursorsLong(cursors, blackhole);
+    List<Long> longs = Sequences.toList(Sequences.limit(longListSeq, 1), Lists.<List<Long>>newArrayList()).get(0);
+    for (Long longval : longs) {
+      blackhole.consume(longval);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void timeFilterHalf(Blackhole blackhole) throws Exception
+  {
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, timeFilterHalf);
+
+    Sequence<List<Long>> longListSeq = readCursorsLong(cursors, blackhole);
+    List<Long> longs = Sequences.toList(Sequences.limit(longListSeq, 1), Lists.<List<Long>>newArrayList()).get(0);
+    for (Long longval : longs) {
+      blackhole.consume(longval);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void timeFilterAll(Blackhole blackhole) throws Exception
+  {
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, timeFilterAll);
+
+    Sequence<List<Long>> longListSeq = readCursorsLong(cursors, blackhole);
+    List<Long> longs = Sequences.toList(Sequences.limit(longListSeq, 1), Lists.<List<Long>>newArrayList()).get(0);
+    for (Long longval : longs) {
+      blackhole.consume(longval);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readWithPreFilter(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new SelectorFilter("dimSequential", "199");
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, filter);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readWithPostFilter(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new NoBitmapSelectorFilter("dimSequential", "199");
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, filter);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readWithExFnPreFilter(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new SelectorDimFilter("dimSequential", "super-199", JS_EXTRACTION_FN).toFilter();
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, filter);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readWithExFnPostFilter(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new NoBitmapSelectorDimFilter("dimSequential", "super-199", JS_EXTRACTION_FN).toFilter();
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, filter);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readOrFilter(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new NoBitmapSelectorFilter("dimSequential", "199");
+    Filter filter2 = new AndFilter(Arrays.<Filter>asList(new SelectorFilter("dimMultivalEnumerated2", "Corundum"), new NoBitmapSelectorFilter("dimMultivalEnumerated", "Bar")));
+    Filter orFilter = new OrFilter(Arrays.<Filter>asList(filter, filter2));
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, orFilter);
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readOrFilterCNF(Blackhole blackhole) throws Exception
+  {
+    Filter filter = new NoBitmapSelectorFilter("dimSequential", "199");
+    Filter filter2 = new AndFilter(Arrays.<Filter>asList(new SelectorFilter("dimMultivalEnumerated2", "Corundum"), new NoBitmapSelectorFilter("dimMultivalEnumerated", "Bar")));
+    Filter orFilter = new OrFilter(Arrays.<Filter>asList(filter, filter2));
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, Filters.convertToCNF(orFilter));
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readComplexOrFilter(Blackhole blackhole) throws Exception
+  {
+    DimFilter dimFilter1 = new OrDimFilter(Arrays.<DimFilter>asList(
+        new SelectorDimFilter("dimSequential", "199", null),
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Corundum", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "Bar", null)
+        )
+        ))
+    );
+    DimFilter dimFilter2 = new OrDimFilter(Arrays.<DimFilter>asList(
+        new SelectorDimFilter("dimSequential", "299", null),
+        new SelectorDimFilter("dimSequential", "399", null),
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Xylophone", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "Foo", null)
+        )
+        ))
+    );
+    DimFilter dimFilter3 = new OrDimFilter(Arrays.<DimFilter>asList(
+        dimFilter1,
+        dimFilter2,
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Orange", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "World", null)
+        )
+        ))
+    );
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, dimFilter3.toFilter());
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readComplexOrFilterCNF(Blackhole blackhole) throws Exception
+  {
+    DimFilter dimFilter1 = new OrDimFilter(Arrays.<DimFilter>asList(
+        new SelectorDimFilter("dimSequential", "199", null),
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Corundum", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "Bar", null)
+        )
+        ))
+    );
+    DimFilter dimFilter2 = new OrDimFilter(Arrays.<DimFilter>asList(
+        new SelectorDimFilter("dimSequential", "299", null),
+        new SelectorDimFilter("dimSequential", "399", null),
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Xylophone", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "Foo", null)
+        )
+        ))
+    );
+    DimFilter dimFilter3 = new OrDimFilter(Arrays.<DimFilter>asList(
+        dimFilter1,
+        dimFilter2,
+        new AndDimFilter(Arrays.<DimFilter>asList(
+            new NoBitmapSelectorDimFilter("dimMultivalEnumerated2", "Orange", null),
+            new SelectorDimFilter("dimMultivalEnumerated", "World", null)
+        )
+        ))
+    );
+
+    StorageAdapter sa = new QueryableIndexStorageAdapter(qIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, Filters.convertToCNF(dimFilter3.toFilter()));
+
+    Sequence<List<String>> stringListSeq = readCursors(cursors, blackhole);
+    List<String> strings = Sequences.toList(Sequences.limit(stringListSeq, 1), Lists.<List<String>>newArrayList()).get(0);
+    for (String st : strings) {
+      blackhole.consume(st);
+    }
+  }
+
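+  // All cursors scan the full data interval at ALL granularity; only the filter under test varies.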
+  private Sequence<Cursor> makeCursors(StorageAdapter sa, Filter filter)
+  {
+    return sa.makeCursors(filter, schemaInfo.getDataInterval(), VirtualColumns.EMPTY, Granularities.ALL, false);
+  }
+
+  private Sequence<List<String>> readCursors(Sequence<Cursor> cursors, final Blackhole blackhole)
+  {
+    return Sequences.map(
+        cursors,
+        new Function<Cursor, List<String>>()
+        {
+          @Override
+          public List<String> apply(Cursor input)
+          {
+            List<String> strings = new ArrayList<String>();
+            List<DimensionSelector> selectors = new ArrayList<>();
+            selectors.add(input.makeDimensionSelector(new DefaultDimensionSpec("dimSequential", null)));
+            //selectors.add(input.makeDimensionSelector(new DefaultDimensionSpec("dimB", null)));
+            while (!input.isDone()) {
+              for (DimensionSelector selector : selectors) {
+                IndexedInts row = selector.getRow();
+                blackhole.consume(selector.lookupName(row.get(0)));
+                //strings.add(selector.lookupName(row.get(0)));
+              }
+              input.advance();
+            }
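+            // values are consumed through the blackhole above; the returned list intentionally stays empty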
+            return strings;
+          }
+        }
+    );
+  }
+
+  private Sequence<List<Long>> readCursorsLong(Sequence<Cursor> cursors, final Blackhole blackhole)
+  {
+    return Sequences.map(
+        cursors,
+        new Function<Cursor, List<Long>>()
+        {
+          @Override
+          public List<Long> apply(Cursor input)
+          {
+            List<Long> longvals = new ArrayList<Long>();
+            LongColumnSelector selector = input.makeLongColumnSelector("sumLongSequential");
+            while (!input.isDone()) {
+              long rowval = selector.get();
+              blackhole.consume(rowval);
+              input.advance();
+            }
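+            // values are consumed through the blackhole above; the returned list intentionally stays empty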
+            return longvals;
+          }
+        }
+    );
+  }
+
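+  // The NoBitmap* variants below report no bitmap index support, forcing the cursor-based post-filtering path.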
+  private class NoBitmapSelectorFilter extends SelectorFilter
+  {
+    public NoBitmapSelectorFilter(
+        String dimension,
+        String value
+    )
+    {
+      super(dimension, value);
+    }
+
+    @Override
+    public boolean supportsBitmapIndex(BitmapIndexSelector selector)
+    {
+      return false;
+    }
+  }
+
+  private class NoBitmapDimensionPredicateFilter extends DimensionPredicateFilter
+  {
+    public NoBitmapDimensionPredicateFilter(
+        final String dimension,
+        final DruidPredicateFactory predicateFactory,
+        final ExtractionFn extractionFn
+    )
+    {
+      super(dimension, predicateFactory, extractionFn);
+    }
+
+    @Override
+    public boolean supportsBitmapIndex(BitmapIndexSelector selector)
+    {
+      return false;
+    }
+  }
+
+  private class NoBitmapSelectorDimFilter extends SelectorDimFilter
+  {
+    public NoBitmapSelectorDimFilter(
+        String dimension,
+        String value,
+        ExtractionFn extractionFn
+    )
+    {
+      super(dimension, value, extractionFn);
+    }
+
+    @Override
+    public Filter toFilter()
+    {
+      ExtractionFn extractionFn = getExtractionFn();
+      String dimension = getDimension();
+      final String value = getValue();
+      if (extractionFn == null) {
+        return new NoBitmapSelectorFilter(dimension, value);
+      } else {
+        final String valueOrNull = Strings.emptyToNull(value);
+
+        final DruidPredicateFactory predicateFactory = new DruidPredicateFactory()
+        {
+          @Override
+          public Predicate<String> makeStringPredicate()
+          {
+            return new Predicate<String>()
+            {
+              @Override
+              public boolean apply(String input)
+              {
+                return Objects.equals(valueOrNull, input);
+              }
+            };
+          }
+
+          @Override
+          public DruidLongPredicate makeLongPredicate()
+          {
+            return DruidLongPredicate.ALWAYS_FALSE;
+          }
+
+          @Override
+          public DruidFloatPredicate makeFloatPredicate()
+          {
+            return DruidFloatPredicate.ALWAYS_FALSE;
+          }
+        };
+
+        return new NoBitmapDimensionPredicateFilter(dimension, predicateFactory, extractionFn);
+      }
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FilteredAggregatorBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/FilteredAggregatorBenchmark.java
new file mode 100644
index 00000000000..4c2537fd9de
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/FilteredAggregatorBenchmark.java
@@ -0,0 +1,304 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.benchmark.query.QueryBenchmarkUtil;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.js.JavaScriptConfig;
+import io.druid.query.Druids;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.Result;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.FilteredAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.extraction.ExtractionFn;
+import io.druid.query.extraction.JavaScriptExtractionFn;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.filter.DimFilter;
+import io.druid.query.filter.InDimFilter;
+import io.druid.query.filter.JavaScriptDimFilter;
+import io.druid.query.filter.OrDimFilter;
+import io.druid.query.filter.RegexDimFilter;
+import io.druid.query.filter.SearchQueryDimFilter;
+import io.druid.query.ordering.StringComparators;
+import io.druid.query.search.search.ContainsSearchQuerySpec;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.query.timeseries.TimeseriesQuery;
+import io.druid.query.timeseries.TimeseriesQueryEngine;
+import io.druid.query.timeseries.TimeseriesQueryQueryToolChest;
+import io.druid.query.timeseries.TimeseriesQueryRunnerFactory;
+import io.druid.query.timeseries.TimeseriesResultValue;
+import io.druid.segment.IncrementalIndexSegment;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class FilteredAggregatorBenchmark
+{
+  @Param({"75000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  private static final Logger log = new Logger(FilteredAggregatorBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+  private IncrementalIndex incIndex;
+  private IncrementalIndex incIndexFilteredAgg;
+  private AggregatorFactory[] filteredMetrics;
+  private QueryableIndex qIndex;
+  private File indexFile;
+  private DimFilter filter;
+  private List<InputRow> inputRows;
+  private QueryRunnerFactory factory;
+  private BenchmarkSchemaInfo schemaInfo;
+  private TimeseriesQuery query;
+  private File tmpDir;
+
+  private static final String JS_FN = "function(str) { return 'super-' + str; }";
+  private static final ExtractionFn JS_EXTRACTION_FN = new JavaScriptExtractionFn(JS_FN, false, JavaScriptConfig.getEnabledInstance());
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    incIndex = makeIncIndex(schemaInfo.getAggsArray());
+
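+    // OR of five filter types, none of which matches any row, so each matcher is exercised without selecting data.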
+    filter = new OrDimFilter(
+        Arrays.asList(
+            new BoundDimFilter("dimSequential", "-1", "-1", true, true, null, null, StringComparators.ALPHANUMERIC),
+            new JavaScriptDimFilter("dimSequential", "function(x) { return false }", null, JavaScriptConfig.getEnabledInstance()),
+            new RegexDimFilter("dimSequential", "X", null),
+            new SearchQueryDimFilter("dimSequential", new ContainsSearchQuerySpec("X", false), null),
+            new InDimFilter("dimSequential", Arrays.asList("X"), null)
+        )
+    );
+    filteredMetrics = new AggregatorFactory[1];
+    filteredMetrics[0] = new FilteredAggregatorFactory(new CountAggregatorFactory("rows"), filter);
+    incIndexFilteredAgg = makeIncIndex(filteredMetrics);
+
+    inputRows = new ArrayList<>();
+    for (int j = 0; j < rowsPerSegment; j++) {
+      InputRow row = gen.nextRow();
+      if (j % 10000 == 0) {
+        log.info(j + " rows generated.");
+      }
+      incIndex.add(row);
+      inputRows.add(row);
+    }
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+
+    indexFile = INDEX_MERGER_V9.persist(
+        incIndex,
+        tmpDir,
+        new IndexSpec()
+    );
+    qIndex = INDEX_IO.loadIndex(indexFile);
+
+    factory = new TimeseriesQueryRunnerFactory(
+        new TimeseriesQueryQueryToolChest(
+            QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()
+        ),
+        new TimeseriesQueryEngine(),
+        QueryBenchmarkUtil.NOOP_QUERYWATCHER
+    );
+
+    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+    QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+    List<AggregatorFactory> queryAggs = new ArrayList<>();
+    queryAggs.add(filteredMetrics[0]);
+
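+    // timeseries query over the full interval with the filtered count as its only aggregator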
+    query = Druids.newTimeseriesQueryBuilder()
+                  .dataSource("blah")
+                  .granularity(Granularities.ALL)
+                  .intervals(intervalSpec)
+                  .aggregators(queryAggs)
+                  .descending(false)
+                  .build();
+  }
+
+  @TearDown
+  public void tearDown() throws IOException
+  {
+    FileUtils.deleteDirectory(tmpDir);
+  }
+
+  private IncrementalIndex makeIncIndex(AggregatorFactory[] metrics)
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(metrics)
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
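+  // Wraps the runner with result merging and finalization, mirroring the standard query execution path.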
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+    QueryToolChest toolChest = factory.getToolchest();
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  // Filtered aggregation doesn't work at ingestion time; cardinality is not supported in the incremental index
+  // See https://github.com/druid-io/druid/issues/3164
+  // @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void ingest(Blackhole blackhole) throws Exception
+  {
+    incIndexFilteredAgg = makeIncIndex(filteredMetrics);
+    for (InputRow row : inputRows) {
+      int rv = incIndexFilteredAgg.add(row);
+      blackhole.consume(rv);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleIncrementalIndex(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TimeseriesResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "incIndex",
+        new IncrementalIndexSegment(incIndex, "incIndex")
+    );
+
+    List<Result<TimeseriesResultValue>> results = FilteredAggregatorBenchmark.runQuery(factory, runner, query);
+    for (Result<TimeseriesResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    final QueryRunner<Result<TimeseriesResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndex)
+    );
+
+    List<Result<TimeseriesResultValue>> results = FilteredAggregatorBenchmark.runQuery(factory, runner, query);
+    for (Result<TimeseriesResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmark.java
index e5b63dcf835..eedd9a530ea 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmark.java
@@ -1,25 +1,24 @@
 /*
  * Licensed to Metamarkets Group Inc. (Metamarkets) under one
- * or more contributor license agreements.  See the NOTICE file
+ * or more contributor license agreements. See the NOTICE file
  * distributed with this work for additional information
- * regarding copyright ownership.  Metamarkets licenses this file
+ * regarding copyright ownership. Metamarkets licenses this file
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
+ * with the License. You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing,
  * software distributed under the License is distributed on an
  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
+ * KIND, either express or implied. See the License for the
  * specific language governing permissions and limitations
  * under the License.
  */
 
 package io.druid.benchmark;
 
-import com.metamx.common.parsers.Parser;
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
 import org.openjdk.jmh.annotations.Mode;
@@ -32,6 +31,8 @@
 import org.openjdk.jmh.runner.options.Options;
 import org.openjdk.jmh.runner.options.OptionsBuilder;
 
+import io.druid.java.util.common.parsers.Parser;
+
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmarkUtil.java b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmarkUtil.java
index 513d82f4781..00fbcdc97b8 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmarkUtil.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONBenchmarkUtil.java
@@ -1,3 +1,22 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
 package io.druid.benchmark;
 
 import com.fasterxml.jackson.annotation.JsonAutoDetect;
@@ -7,13 +26,14 @@
 import com.fasterxml.jackson.annotation.PropertyAccessor;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.github.wnameless.json.flattener.JsonFlattener;
-import com.metamx.common.parsers.Parser;
+
 import io.druid.data.input.impl.DimensionsSpec;
 import io.druid.data.input.impl.JSONParseSpec;
 import io.druid.data.input.impl.JSONPathFieldSpec;
 import io.druid.data.input.impl.JSONPathSpec;
 import io.druid.data.input.impl.TimestampSpec;
 import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.parsers.Parser;
 
 import java.util.ArrayList;
 import java.util.List;
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONProfile.java b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONProfile.java
index 0e6662de87b..37e16c9b42d 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONProfile.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/FlattenJSONProfile.java
@@ -1,42 +1,25 @@
 /*
  * Licensed to Metamarkets Group Inc. (Metamarkets) under one
- * or more contributor license agreements.  See the NOTICE file
+ * or more contributor license agreements. See the NOTICE file
  * distributed with this work for additional information
- * regarding copyright ownership.  Metamarkets licenses this file
+ * regarding copyright ownership. Metamarkets licenses this file
  * to you under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
+ * with the License. You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing,
  * software distributed under the License is distributed on an
  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
+ * KIND, either express or implied. See the License for the
  * specific language governing permissions and limitations
  * under the License.
  */
 
 package io.druid.benchmark;
 
-import com.metamx.common.parsers.JSONPathParser;
-import com.metamx.common.parsers.Parser;
 //import com.yourkit.api.Controller;
-import io.druid.data.input.InputRow;
-import io.druid.data.input.impl.DimensionsSpec;
-import io.druid.data.input.impl.JSONParseSpec;
-import io.druid.data.input.impl.JSONPathFieldSpec;
-import io.druid.data.input.impl.JSONPathSpec;
-import io.druid.data.input.impl.StringInputRowParser;
-import io.druid.data.input.impl.TimestampSpec;
-import org.openjdk.jmh.runner.Runner;
-import org.openjdk.jmh.runner.RunnerException;
-import org.openjdk.jmh.runner.options.Options;
-import org.openjdk.jmh.runner.options.OptionsBuilder;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
 
 /**
  * Test app for profiling JSON parsing behavior. Uses the proprietary YourKit API, so this file
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmark.java
new file mode 100644
index 00000000000..b06ecb0657a
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmark.java
@@ -0,0 +1,104 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+// Run FloatCompressionBenchmarkFileGenerator to generate the required files before running this benchmark
+
+import com.google.common.base.Supplier;
+import com.google.common.io.Files;
+import io.druid.segment.data.CompressedFloatsIndexedSupplier;
+import io.druid.segment.data.IndexedFloats;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+public class FloatCompressionBenchmark
+{
+  @Param("floatCompress/")
+  private static String dirPath;
+
+  @Param({"enumerate", "zipfLow", "zipfHigh", "sequential", "uniform"})
+  private static String file;
+
+  @Param({"lz4", "none"})
+  private static String strategy;
+
+  private Random rand;
+  private Supplier<IndexedFloats> supplier;
+
+  @Setup
+  public void setup() throws Exception
+  {
+    File dir = new File(dirPath);
+    File compFile = new File(dir, file + "-" + strategy);
+    rand = new Random();
+    ByteBuffer buffer = Files.map(compFile);
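+    // the trailing null is presumably the optional smoosh file mapper; this benchmark reads a plain file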
+    supplier = CompressedFloatsIndexedSupplier.fromByteBuffer(buffer, ByteOrder.nativeOrder(), null);
+  }
+
+  @Benchmark
+  public void readContinuous(Blackhole bh) throws IOException
+  {
+    IndexedFloats indexedFloats = supplier.get();
+    int count = indexedFloats.size();
+    float sum = 0;
+    for (int i = 0; i < count; i++) {
+      sum += indexedFloats.get(i);
+    }
+    bh.consume(sum);
+    indexedFloats.close();
+  }
+
+  @Benchmark
+  public void readSkipping(Blackhole bh) throws IOException
+  {
+    IndexedFloats indexedFloats = supplier.get();
+    int count = indexedFloats.size();
+    float sum = 0;
+    for (int i = 0; i < count; i += rand.nextInt(2000)) {
+      sum += indexedFloats.get(i);
+    }
+    bh.consume(sum);
+    indexedFloats.close();
+  }
+
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmarkFileGenerator.java b/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmarkFileGenerator.java
new file mode 100644
index 00000000000..10f046e805b
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/FloatCompressionBenchmarkFileGenerator.java
@@ -0,0 +1,194 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.io.ByteSink;
+import io.druid.benchmark.datagen.BenchmarkColumnSchema;
+import io.druid.benchmark.datagen.BenchmarkColumnValueGenerator;
+import io.druid.segment.column.ValueType;
+import io.druid.segment.data.CompressedObjectStrategy;
+import io.druid.segment.data.CompressionFactory;
+import io.druid.segment.data.FloatSupplierSerializer;
+import io.druid.segment.data.TmpFileIOPeon;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.net.URISyntaxException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.channels.FileChannel;
+import java.nio.file.StandardOpenOption;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class FloatCompressionBenchmarkFileGenerator
+{
+  public static final int ROW_NUM = 5000000;
+  public static final List<CompressedObjectStrategy.CompressionStrategy> compressions =
+      ImmutableList.of(
+          CompressedObjectStrategy.CompressionStrategy.LZ4,
+          CompressedObjectStrategy.CompressionStrategy.NONE
+      );
+
+  private static String dirPath = "floatCompress/";
+
+  public static void main(String[] args) throws IOException, URISyntaxException
+  {
+    if (args.length >= 1) {
+      dirPath = args[0];
+    }
+
+    BenchmarkColumnSchema enumeratedSchema = BenchmarkColumnSchema.makeEnumerated("", ValueType.FLOAT, true, 1, 0d,
+                                                                                  ImmutableList.<Object>of(
+                                                                                      0f,
+                                                                                      1.1f,
+                                                                                      2.2f,
+                                                                                      3.3f,
+                                                                                      4.4f
+                                                                                  ),
+                                                                                  ImmutableList.of(
+                                                                                      0.95,
+                                                                                      0.001,
+                                                                                      0.0189,
+                                                                                      0.03,
+                                                                                      0.0001
+                                                                                  )
+    );
+    BenchmarkColumnSchema zipfLowSchema = BenchmarkColumnSchema.makeZipf(
+        "",
+        ValueType.FLOAT,
+        true,
+        1,
+        0d,
+        -1,
+        1000,
+        1d
+    );
+    BenchmarkColumnSchema zipfHighSchema = BenchmarkColumnSchema.makeZipf(
+        "",
+        ValueType.FLOAT,
+        true,
+        1,
+        0d,
+        -1,
+        1000,
+        3d
+    );
+    BenchmarkColumnSchema sequentialSchema = BenchmarkColumnSchema.makeSequential(
+        "",
+        ValueType.FLOAT,
+        true,
+        1,
+        0d,
+        1470187671,
+        2000000000
+    );
+    BenchmarkColumnSchema uniformSchema = BenchmarkColumnSchema.makeContinuousUniform(
+        "",
+        ValueType.FLOAT,
+        true,
+        1,
+        0d,
+        0,
+        1000
+    );
+
+    Map<String, BenchmarkColumnValueGenerator> generators = new HashMap<>();
+    generators.put("enumerate", new BenchmarkColumnValueGenerator(enumeratedSchema, 1));
+    generators.put("zipfLow", new BenchmarkColumnValueGenerator(zipfLowSchema, 1));
+    generators.put("zipfHigh", new BenchmarkColumnValueGenerator(zipfHighSchema, 1));
+    generators.put("sequential", new BenchmarkColumnValueGenerator(sequentialSchema, 1));
+    generators.put("uniform", new BenchmarkColumnValueGenerator(uniformSchema, 1));
+
+    File dir = new File(dirPath);
+    dir.mkdir();
+
+    // create data files using BenchmarkColumnValueGenerator
+    for (Map.Entry<String, BenchmarkColumnValueGenerator> entry : generators.entrySet()) {
+      final File dataFile = new File(dir, entry.getKey());
+      dataFile.delete();
+      try (Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(dataFile)))) {
+        for (int i = 0; i < ROW_NUM; i++) {
+          writer.write((Float) entry.getValue().generateRowValue() + "\n");
+        }
+      }
+    }
+
+    // create compressed files using each CompressionStrategy provided
+    for (Map.Entry<String, BenchmarkColumnValueGenerator> entry : generators.entrySet()) {
+      for (CompressedObjectStrategy.CompressionStrategy compression : compressions) {
+        String name = entry.getKey() + "-" + compression.toString();
+        System.out.print(name + ": ");
+        File compFile = new File(dir, name);
+        compFile.delete();
+        File dataFile = new File(dir, entry.getKey());
+
+        TmpFileIOPeon iopeon = new TmpFileIOPeon(true);
+        FloatSupplierSerializer writer = CompressionFactory.getFloatSerializer(
+            iopeon,
+            "float",
+            ByteOrder.nativeOrder(),
+            compression
+        );
+        BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(dataFile)));
+
+        try (FileChannel output = FileChannel.open(
+            compFile.toPath(),
+            StandardOpenOption.CREATE_NEW,
+            StandardOpenOption.WRITE
+        )) {
+          writer.open();
+          String line;
+          while ((line = br.readLine()) != null) {
+            writer.add(Float.parseFloat(line));
+          }
+          final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+          writer.closeAndConsolidate(
+              new ByteSink()
+              {
+                @Override
+                public OutputStream openStream() throws IOException
+                {
+                  return baos;
+                }
+              }
+          );
+          output.write(ByteBuffer.wrap(baos.toByteArray()));
+        }
+        finally {
+          iopeon.close();
+          br.close();
+        }
+        System.out.print(compFile.length() / 1024 + "\n");
+      }
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/GenericIndexedBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/GenericIndexedBenchmark.java
new file mode 100644
index 00000000000..4b5fb28574d
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/GenericIndexedBenchmark.java
@@ -0,0 +1,174 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.io.Files;
+import com.google.common.primitives.Ints;
+import io.druid.java.util.common.io.smoosh.FileSmoosher;
+import io.druid.java.util.common.io.smoosh.SmooshedFileMapper;
+import io.druid.segment.data.GenericIndexed;
+import io.druid.segment.data.GenericIndexedWriter;
+import io.druid.segment.data.ObjectStrategy;
+import io.druid.segment.data.TmpFileIOPeon;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OperationsPerInvocation;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.MappedByteBuffer;
+import java.nio.channels.FileChannel;
+import java.nio.file.StandardOpenOption;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
+
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.NANOSECONDS)
+@OperationsPerInvocation(GenericIndexedBenchmark.ITERATIONS)
+@Warmup(iterations = 5)
+@Measurement(iterations = 20)
+@Fork(1)
+@State(Scope.Benchmark)
+public class GenericIndexedBenchmark
+{
+  public static final int ITERATIONS = 10000;
+
+  static final ObjectStrategy<byte[]> byteArrayStrategy = new ObjectStrategy<byte[]>()
+  {
+    @Override
+    public Class<? extends byte[]> getClazz()
+    {
+      return byte[].class;
+    }
+
+    @Override
+    public byte[] fromByteBuffer(ByteBuffer buffer, int numBytes)
+    {
+      byte[] result = new byte[numBytes];
+      buffer.get(result);
+      return result;
+    }
+
+    @Override
+    public byte[] toBytes(byte[] val)
+    {
+      return val;
+    }
+
+    @Override
+    public int compare(byte[] o1, byte[] o2)
+    {
+      return Integer.compare(Ints.fromByteArray(o1), Ints.fromByteArray(o2));
+    }
+  };
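+  // Each element is an elementSize-byte array whose first four bytes encode an
+  // int key, so compare() orders elements by key and indexOf() can binary-search.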
+
+  @Param({"10000"})
+  public int n;
+  @Param({"8"})
+  public int elementSize;
+
+  private File file;
+  private File smooshDir;
+  private GenericIndexed<byte[]> genericIndexed;
+  private int[] iterationIndexes;
+  private byte[][] elementsToSearch;
+
+  @Setup(Level.Trial)
+  public void createGenericIndexed() throws IOException
+  {
+    GenericIndexedWriter<byte[]> genericIndexedWriter = new GenericIndexedWriter<>(
+        new TmpFileIOPeon(),
+        "genericIndexedBenchmark",
+        byteArrayStrategy
+    );
+    genericIndexedWriter.open();
+
+    // GenericIndexedWriter caches prevObject for comparison, so two alternating
+    // buffers are needed for a correct objectsSorted computation.
+    ByteBuffer[] elements = new ByteBuffer[2];
+    elements[0] = ByteBuffer.allocate(elementSize);
+    elements[1] = ByteBuffer.allocate(elementSize);
+    for (int i = 0; i < n; i++) {
+      ByteBuffer element = elements[i & 1];
+      element.putInt(0, i);
+      genericIndexedWriter.write(element.array());
+    }
+    genericIndexedWriter.close();
+    smooshDir = Files.createTempDir();
+    file = File.createTempFile("genericIndexedBenchmark", "meta");
+
+    try (FileChannel fileChannel =
+             FileChannel.open(file.toPath(), StandardOpenOption.CREATE, StandardOpenOption.WRITE);
+         FileSmoosher fileSmoosher = new FileSmoosher(smooshDir)) {
+      genericIndexedWriter.writeToChannel(fileChannel, fileSmoosher);
+    }
+
+    FileChannel fileChannel = FileChannel.open(file.toPath());
+    MappedByteBuffer byteBuffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, file.length());
+    genericIndexed = GenericIndexed.read(byteBuffer, byteArrayStrategy, SmooshedFileMapper.load(smooshDir));
+  }
+
+  @Setup(Level.Trial)
+  public void createIterationIndexes()
+  {
+    iterationIndexes = new int[ITERATIONS];
+    for (int i = 0; i < ITERATIONS; i++) {
+      iterationIndexes[i] = ThreadLocalRandom.current().nextInt(n);
+    }
+  }
+
+  @Setup(Level.Trial)
+  public void createElementsToSearch()
+  {
+    elementsToSearch = new byte[ITERATIONS][];
+    for (int i = 0; i < ITERATIONS; i++) {
+      elementsToSearch[i] = Ints.toByteArray(ThreadLocalRandom.current().nextInt(n));
+    }
+  }
+
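+  // Random positional reads; iterationIndexes is pre-generated so the measurement
+  // covers get() itself rather than random-number generation.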
+  @Benchmark
+  public void get(Blackhole bh)
+  {
+    for (int i : iterationIndexes) {
+      bh.consume(genericIndexed.get(i));
+    }
+  }
+
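+  // The elements were written in ascending key order, so indexOf() can use binary
+  // search; XOR-folding results into the return value keeps the JIT from
+  // eliminating the lookups.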
+  @Benchmark
+  public int indexOf()
+  {
+    int r = 0;
+    for (byte[] elementToSearch : elementsToSearch) {
+      r ^= genericIndexed.indexOf(elementToSearch);
+    }
+    return r;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/GroupByTypeInterfaceBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/GroupByTypeInterfaceBenchmark.java
new file mode 100644
index 00000000000..d737508eaff
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/GroupByTypeInterfaceBenchmark.java
@@ -0,0 +1,858 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.dataformat.smile.SmileFactory;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.base.Throwables;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.benchmark.query.QueryBenchmarkUtil;
+import io.druid.collections.BlockingPool;
+import io.druid.collections.StupidPool;
+import io.druid.concurrent.Execs;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.Row;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.granularity.Granularity;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.offheap.OffheapBufferGenerator;
+import io.druid.query.DruidProcessingConfig;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.dimension.DimensionSpec;
+import io.druid.query.groupby.GroupByQuery;
+import io.druid.query.groupby.GroupByQueryConfig;
+import io.druid.query.groupby.GroupByQueryEngine;
+import io.druid.query.groupby.GroupByQueryQueryToolChest;
+import io.druid.query.groupby.GroupByQueryRunnerFactory;
+import io.druid.query.groupby.strategy.GroupByStrategySelector;
+import io.druid.query.groupby.strategy.GroupByStrategyV1;
+import io.druid.query.groupby.strategy.GroupByStrategyV2;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+// Benchmark for determining the interface overhead of GroupBy with multiple type implementations
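+// The "XThenY" benchmarks run two queries with different grouping key types back
+// to back, to check whether switching types between queries adds overhead (e.g.
+// from call sites becoming megamorphic).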
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 15)
+@Measurement(iterations = 30)
+public class GroupByTypeInterfaceBenchmark
+{
+  @Param({"4"})
+  private int numSegments;
+
+  @Param({"4"})
+  private int numProcessingThreads;
+
+  @Param({"-1"})
+  private int initialBuckets;
+
+  @Param({"100000"})
+  private int rowsPerSegment;
+
+  @Param({"v2"})
+  private String defaultStrategy;
+
+  @Param({"all"})
+  private String queryGranularity;
+
+  private static final Logger log = new Logger(GroupByTypeInterfaceBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private File tmpDir;
+  private IncrementalIndex anIncrementalIndex;
+  private List<QueryableIndex> queryableIndexes;
+
+  private QueryRunnerFactory<Row, GroupByQuery> factory;
+
+  private BenchmarkSchemaInfo schemaInfo;
+  private GroupByQuery stringQuery;
+  private GroupByQuery longFloatQuery;
+  private GroupByQuery floatQuery;
+  private GroupByQuery longQuery;
+
+  private ExecutorService executorService;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
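+
+  // A columnCacheSizeBytes of 0 disables the column cache, so the measurements
+  // below are not skewed by column caching.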
+
+  private static final Map<String, Map<String, GroupByQuery>> SCHEMA_QUERY_MAP = new LinkedHashMap<>();
+
+  private void setupQueries()
+  {
+    // queries for the basic schema
+    Map<String, GroupByQuery> basicQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+
+    { // basic.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "sumLongSequential",
+          "sumLongSequential"
+      ));
+      GroupByQuery queryString = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      GroupByQuery queryLongFloat = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("metLongUniform", null),
+              new DefaultDimensionSpec("metFloatNormal", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      GroupByQuery queryLong = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("metLongUniform", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      GroupByQuery queryFloat = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("metFloatNormal", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      basicQueries.put("string", queryString);
+      basicQueries.put("longFloat", queryLongFloat);
+      basicQueries.put("long", queryLong);
+      basicQueries.put("float", queryFloat);
+    }
+
+    { // basic.nested
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "sumLongSequential",
+          "sumLongSequential"
+      ));
+
+      GroupByQuery subqueryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null),
+              new DefaultDimensionSpec("dimZipf", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularities.DAY)
+          .build();
+
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource(subqueryA)
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularities.WEEK)
+          .build();
+
+      basicQueries.put("nested", queryA);
+    }
+
+    SCHEMA_QUERY_MAP.put("basic", basicQueries);
+  }
+
+  @Setup(Level.Trial)
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT %d", System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+    executorService = Execs.multiThreaded(numProcessingThreads, "GroupByThreadPool[%d]");
+
+    setupQueries();
+
+    String schemaName = "basic";
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schemaName);
+    stringQuery = SCHEMA_QUERY_MAP.get(schemaName).get("string");
+    longFloatQuery = SCHEMA_QUERY_MAP.get(schemaName).get("longFloat");
+    longQuery = SCHEMA_QUERY_MAP.get(schemaName).get("long");
+    floatQuery = SCHEMA_QUERY_MAP.get(schemaName).get("float");
+
+    final BenchmarkDataGenerator dataGenerator = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED + 1,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: %s", tmpDir.getAbsolutePath());
+
+    // queryableIndexes   -> numSegments worth of on-disk segments
+    // anIncrementalIndex -> the last incremental index
+    anIncrementalIndex = null;
+    queryableIndexes = new ArrayList<>(numSegments);
+
+    for (int i = 0; i < numSegments; i++) {
+      log.info("Generating rows for segment %d/%d", i + 1, numSegments);
+
+      final IncrementalIndex index = makeIncIndex();
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        final InputRow row = dataGenerator.nextRow();
+        if (j % 20000 == 0) {
+          log.info("%,d/%,d rows generated.", i * rowsPerSegment + j, rowsPerSegment * numSegments);
+        }
+        index.add(row);
+      }
+
+      log.info(
+          "%,d/%,d rows generated, persisting segment %d/%d.",
+          (i + 1) * rowsPerSegment,
+          rowsPerSegment * numSegments,
+          i + 1,
+          numSegments
+      );
+
+      final File file = INDEX_MERGER_V9.persist(
+          index,
+          new File(tmpDir, String.valueOf(i)),
+          new IndexSpec()
+      );
+
+      queryableIndexes.add(INDEX_IO.loadIndex(file));
+
+      if (i == numSegments - 1) {
+        anIncrementalIndex = index;
+      } else {
+        index.close();
+      }
+    }
+
+    StupidPool<ByteBuffer> bufferPool = new StupidPool<>(
+        "GroupByBenchmark-computeBufferPool",
+        new OffheapBufferGenerator("compute", 250_000_000),
+        0,
+        Integer.MAX_VALUE
+    );
+
+    // limit of 2 is required since we simulate both historical merge and broker merge in the same process
+    BlockingPool<ByteBuffer> mergePool = new BlockingPool<>(
+        new OffheapBufferGenerator("merge", 250_000_000),
+        2
+    );
+    final GroupByQueryConfig config = new GroupByQueryConfig()
+    {
+      @Override
+      public String getDefaultStrategy()
+      {
+        return defaultStrategy;
+      }
+
+      @Override
+      public int getBufferGrouperInitialBuckets()
+      {
+        return initialBuckets;
+      }
+
+      @Override
+      public long getMaxOnDiskStorage()
+      {
+        return 1_000_000_000L;
+      }
+    };
+    config.setSingleThreaded(false);
+    config.setMaxIntermediateRows(Integer.MAX_VALUE);
+    config.setMaxResults(Integer.MAX_VALUE);
+
+    DruidProcessingConfig druidProcessingConfig = new DruidProcessingConfig()
+    {
+      @Override
+      public int getNumThreads()
+      {
+        // Used by "v2" strategy for concurrencyHint
+        return numProcessingThreads;
+      }
+
+      @Override
+      public String getFormatString()
+      {
+        return null;
+      }
+    };
+
+    final Supplier<GroupByQueryConfig> configSupplier = Suppliers.ofInstance(config);
+    final GroupByStrategySelector strategySelector = new GroupByStrategySelector(
+        configSupplier,
+        new GroupByStrategyV1(
+            configSupplier,
+            new GroupByQueryEngine(configSupplier, bufferPool),
+            QueryBenchmarkUtil.NOOP_QUERYWATCHER,
+            bufferPool
+        ),
+        new GroupByStrategyV2(
+            druidProcessingConfig,
+            configSupplier,
+            bufferPool,
+            mergePool,
+            new ObjectMapper(new SmileFactory()),
+            QueryBenchmarkUtil.NOOP_QUERYWATCHER
+        )
+    );
+
+    factory = new GroupByQueryRunnerFactory(
+        strategySelector,
+        new GroupByQueryQueryToolChest(
+            strategySelector,
+            QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()
+        )
+    );
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @TearDown(Level.Trial)
+  public void tearDown()
+  {
+    try {
+      if (anIncrementalIndex != null) {
+        anIncrementalIndex.close();
+      }
+
+      if (queryableIndexes != null) {
+        for (QueryableIndex index : queryableIndexes) {
+          index.close();
+        }
+      }
+
+      if (tmpDir != null) {
+        FileUtils.deleteDirectory(tmpDir);
+      }
+    }
+    catch (IOException e) {
+      log.warn(e, "Failed to tear down, temp dir was: %s", tmpDir);
+      throw Throwables.propagate(e);
+    }
+  }
+
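+  // Wraps the per-segment runner with merge and finalize decoration, mimicking
+  // the broker-side query pipeline, then materializes the results into a list.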
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+    QueryToolChest toolChest = factory.getToolchest();
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexNumericOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longFloatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexNumericThenString(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longFloatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongThenString(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongThenFloat(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringThenNumeric(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longFloatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringThenLong(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatThenLong(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatThenString(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    results = GroupByTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/IncrementalIndexRowTypeBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/IncrementalIndexRowTypeBenchmark.java
new file mode 100644
index 00000000000..4b900568552
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/IncrementalIndexRowTypeBenchmark.java
@@ -0,0 +1,199 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.collect.ImmutableMap;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.MapBasedInputRow;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OperationsPerInvocation;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+public class IncrementalIndexRowTypeBenchmark
+{
+  private IncrementalIndex incIndex;
+  private IncrementalIndex incFloatIndex;
+  private IncrementalIndex incStrIndex;
+  private static AggregatorFactory[] aggs;
+  static final int dimensionCount = 8;
+  private Random rng;
+  static final int maxRows = 250000;
+
+  private ArrayList<InputRow> longRows = new ArrayList<InputRow>();
+  private ArrayList<InputRow> floatRows = new ArrayList<InputRow>();
+  private ArrayList<InputRow> stringRows = new ArrayList<InputRow>();
+
+
+  static {
+    final ArrayList<AggregatorFactory> ingestAggregatorFactories = new ArrayList<>(dimensionCount + 1);
+    ingestAggregatorFactories.add(new CountAggregatorFactory("rows"));
+    for (int i = 0; i < dimensionCount; ++i) {
+      ingestAggregatorFactories.add(
+          new LongSumAggregatorFactory(
+              String.format("sumResult%s", i),
+              String.format("Dim_%s", i)
+          )
+      );
+      ingestAggregatorFactories.add(
+          new DoubleSumAggregatorFactory(
+              String.format("doubleSumResult%s", i),
+              String.format("Dim_%s", i)
+          )
+      );
+    }
+    aggs = ingestAggregatorFactories.toArray(new AggregatorFactory[0]);
+  }
+
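+  // Each dimension is aggregated as both a long sum and a double sum; the row
+  // factories below emit identical dimension names with long, float, or string
+  // values, so the three benchmarks isolate per-type ingestion overhead.
+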
+  private MapBasedInputRow getLongRow(long timestamp, int rowID, int dimensionCount)
+  {
+    List<String> dimensionList = new ArrayList<String>(dimensionCount);
+    ImmutableMap.Builder<String, Object> builder = ImmutableMap.builder();
+    for (int i = 0; i < dimensionCount; i++) {
+      String dimName = String.format("Dim_%d", i);
+      dimensionList.add(dimName);
+      builder.put(dimName, rng.nextLong());
+    }
+    return new MapBasedInputRow(timestamp, dimensionList, builder.build());
+  }
+
+  private MapBasedInputRow getFloatRow(long timestamp, int rowID, int dimensionCount)
+  {
+    List<String> dimensionList = new ArrayList<String>(dimensionCount);
+    ImmutableMap.Builder<String, Object> builder = ImmutableMap.builder();
+    for (int i = 0; i < dimensionCount; i++) {
+      String dimName = String.format("Dim_%d", i);
+      dimensionList.add(dimName);
+      builder.put(dimName, rng.nextFloat());
+    }
+    return new MapBasedInputRow(timestamp, dimensionList, builder.build());
+  }
+
+  private MapBasedInputRow getStringRow(long timestamp, int rowID, int dimensionCount)
+  {
+    List<String> dimensionList = new ArrayList<String>(dimensionCount);
+    ImmutableMap.Builder<String, Object> builder = ImmutableMap.builder();
+    for (int i = 0; i < dimensionCount; i++) {
+      String dimName = String.format("Dim_%d", i);
+      dimensionList.add(dimName);
+      builder.put(dimName, String.valueOf(rng.nextLong()));
+    }
+    return new MapBasedInputRow(timestamp, dimensionList, builder.build());
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        0,
+        Granularities.NONE,
+        aggs,
+        false,
+        false,
+        true,
+        maxRows
+    );
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    rng = new Random(9999);
+
+    for (int i = 0; i < maxRows; i++) {
+      longRows.add(getLongRow(0, i, dimensionCount));
+    }
+
+    for (int i = 0; i < maxRows; i++) {
+      floatRows.add(getFloatRow(0, i, dimensionCount));
+    }
+
+    for (int i = 0; i < maxRows; i++) {
+      stringRows.add(getStringRow(0, i, dimensionCount));
+    }
+  }
+
+  @Setup(Level.Iteration)
+  public void setup2() throws IOException
+  {
+    incIndex = makeIncIndex();
+    incFloatIndex = makeIncIndex();
+    incStrIndex = makeIncIndex();
+  }
+
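+  // Indexes are rebuilt every iteration (Level.Iteration above), so each
+  // measurement starts from an empty index and never trips the maxRows limit.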
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  @OperationsPerInvocation(maxRows)
+  public void normalLongs(Blackhole blackhole) throws Exception
+  {
+    for (int i = 0; i < maxRows; i++) {
+      InputRow row = longRows.get(i);
+      int rv = incIndex.add(row);
+      blackhole.consume(rv);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  @OperationsPerInvocation(maxRows)
+  public void normalFloats(Blackhole blackhole) throws Exception
+  {
+    for (int i = 0; i < maxRows; i++) {
+      InputRow row = floatRows.get(i);
+      int rv = incFloatIndex.add(row);
+      blackhole.consume(rv);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  @OperationsPerInvocation(maxRows)
+  public void normalStrings(Blackhole blackhole) throws Exception
+  {
+    for (int i = 0; i < maxRows; i++) {
+      InputRow row = stringRows.get(i);
+      int rv = incStrIndex.add(row);
+      blackhole.consume(rv);
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/LikeFilterBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/LikeFilterBenchmark.java
new file mode 100644
index 00000000000..a5d0e54b210
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/LikeFilterBenchmark.java
@@ -0,0 +1,251 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.base.Function;
+import com.google.common.collect.FluentIterable;
+import io.druid.collections.bitmap.BitmapFactory;
+import io.druid.collections.bitmap.ImmutableBitmap;
+import io.druid.collections.bitmap.MutableBitmap;
+import io.druid.collections.bitmap.RoaringBitmapFactory;
+import io.druid.collections.spatial.ImmutableRTree;
+import io.druid.query.filter.BitmapIndexSelector;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.filter.Filter;
+import io.druid.query.filter.LikeDimFilter;
+import io.druid.query.filter.RegexDimFilter;
+import io.druid.query.filter.SelectorDimFilter;
+import io.druid.query.ordering.StringComparators;
+import io.druid.segment.column.BitmapIndex;
+import io.druid.segment.data.BitmapSerdeFactory;
+import io.druid.segment.data.GenericIndexed;
+import io.druid.segment.data.Indexed;
+import io.druid.segment.data.RoaringBitmapSerdeFactory;
+import io.druid.segment.serde.BitmapIndexColumnPartSupplier;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 10)
+public class LikeFilterBenchmark
+{
+  private static final int START_INT = 1_000_000;
+  private static final int END_INT = 9_999_999;
+
+  private static final Filter SELECTOR_EQUALS = new SelectorDimFilter(
+      "foo",
+      "1000000",
+      null
+  ).toFilter();
+
+  private static final Filter LIKE_EQUALS = new LikeDimFilter(
+      "foo",
+      "1000000",
+      null,
+      null
+  ).toFilter();
+
+  private static final Filter BOUND_PREFIX = new BoundDimFilter(
+      "foo",
+      "50",
+      "50\uffff",
+      false,
+      false,
+      null,
+      null,
+      StringComparators.LEXICOGRAPHIC
+  ).toFilter();
+
+  private static final Filter REGEX_PREFIX = new RegexDimFilter(
+      "foo",
+      "^50.*",
+      null
+  ).toFilter();
+
+  private static final Filter LIKE_PREFIX = new LikeDimFilter(
+      "foo",
+      "50%",
+      null,
+      null
+  ).toFilter();
+
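+  // SELECTOR_EQUALS and LIKE_EQUALS match the same single value, while
+  // BOUND_PREFIX, REGEX_PREFIX, and LIKE_PREFIX are three equivalent ways to
+  // match values with the prefix "50", so the benchmarks compare filter
+  // implementations rather than selectivity.
+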
+  // dictionary cardinality; the dictionary will contain this many evenly spaced integers
+  @Param({"1000", "100000", "1000000"})
+  int cardinality;
+
+  int step;
+
+  // selector will contain one bitmap per dictionary entry; each bitmap holds a
+  // single row id (the entry's position in the dictionary)
+  BitmapIndexSelector selector;
+
+  @Setup
+  public void setup() throws IOException
+  {
+    step = (END_INT - START_INT) / cardinality;
+    final BitmapFactory bitmapFactory = new RoaringBitmapFactory();
+    final BitmapSerdeFactory serdeFactory = new RoaringBitmapSerdeFactory(null);
+    final List<Integer> ints = generateInts();
+    final GenericIndexed<String> dictionary = GenericIndexed.fromIterable(
+        FluentIterable.from(ints)
+                      .transform(
+                          new Function<Integer, String>()
+                          {
+                            @Override
+                            public String apply(Integer i)
+                            {
+                              return i.toString();
+                            }
+                          }
+                      ),
+        GenericIndexed.STRING_STRATEGY
+    );
+    final BitmapIndex bitmapIndex = new BitmapIndexColumnPartSupplier(
+        bitmapFactory,
+        GenericIndexed.fromIterable(
+            FluentIterable.from(ints)
+                          .transform(
+                              new Function<Integer, ImmutableBitmap>()
+                              {
+                                @Override
+                                public ImmutableBitmap apply(Integer i)
+                                {
+                                  final MutableBitmap mutableBitmap = bitmapFactory.makeEmptyMutableBitmap();
+                                  mutableBitmap.add((i - START_INT) / step);
+                                  return bitmapFactory.makeImmutableBitmap(mutableBitmap);
+                                }
+                              }
+                          ),
+            serdeFactory.getObjectStrategy()
+        ),
+        dictionary
+    ).get();
+    selector = new BitmapIndexSelector()
+    {
+      @Override
+      public Indexed<String> getDimensionValues(String dimension)
+      {
+        return dictionary;
+      }
+
+      @Override
+      public int getNumRows()
+      {
+        throw new UnsupportedOperationException();
+      }
+
+      @Override
+      public BitmapFactory getBitmapFactory()
+      {
+        return bitmapFactory;
+      }
+
+      @Override
+      public ImmutableBitmap getBitmapIndex(String dimension, String value)
+      {
+        return bitmapIndex.getBitmap(bitmapIndex.getIndex(value));
+      }
+
+      @Override
+      public BitmapIndex getBitmapIndex(String dimension)
+      {
+        return bitmapIndex;
+      }
+
+      @Override
+      public ImmutableRTree getSpatialIndex(String dimension)
+      {
+        throw new UnsupportedOperationException();
+      }
+    };
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchLikeEquals(Blackhole blackhole)
+  {
+    final ImmutableBitmap bitmapIndex = LIKE_EQUALS.getBitmapIndex(selector);
+    blackhole.consume(bitmapIndex);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchSelectorEquals(Blackhole blackhole)
+  {
+    final ImmutableBitmap bitmapIndex = SELECTOR_EQUALS.getBitmapIndex(selector);
+    blackhole.consume(bitmapIndex);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchLikePrefix(Blackhole blackhole)
+  {
+    final ImmutableBitmap bitmapIndex = LIKE_PREFIX.getBitmapIndex(selector);
+    blackhole.consume(bitmapIndex);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchBoundPrefix(Blackhole blackhole)
+  {
+    final ImmutableBitmap bitmapIndex = BOUND_PREFIX.getBitmapIndex(selector);
+    blackhole.consume(bitmapIndex);
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void matchRegexPrefix(Blackhole blackhole)
+  {
+    final ImmutableBitmap bitmapIndex = REGEX_PREFIX.getBitmapIndex(selector);
+    blackhole.consume(bitmapIndex);
+  }
+
+  private List<Integer> generateInts()
+  {
+    final List<Integer> ints = new ArrayList<>(cardinality);
+
+    for (int i = 0; i < cardinality; i++) {
+      ints.add(START_INT + step * i);
+    }
+
+    return ints;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmark.java
new file mode 100644
index 00000000000..ac41c687457
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmark.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.base.Supplier;
+import com.google.common.io.Files;
+import io.druid.segment.data.CompressedLongsIndexedSupplier;
+import io.druid.segment.data.IndexedLongs;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+// Run LongCompressionBenchmarkFileGenerator to generate the required files before running this benchmark
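+//
+// For example (assuming the shaded benchmarks jar built by `mvn package`):
+//   java -cp benchmarks/target/benchmarks.jar io.druid.benchmark.LongCompressionBenchmarkFileGenerator
+//   java -jar benchmarks/target/benchmarks.jar LongCompressionBenchmark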
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+public class LongCompressionBenchmark
+{
+  @Param("longCompress/")
+  private static String dirPath;
+
+  @Param({"enumerate", "zipfLow", "zipfHigh", "sequential", "uniform"})
+  private static String file;
+
+  @Param({"auto", "longs"})
+  private static String format;
+
+  @Param({"lz4", "none"})
+  private static String strategy;
+
+  private Random rand;
+  private Supplier<IndexedLongs> supplier;
+
+  @Setup
+  public void setup() throws Exception
+  {
+    File dir = new File(dirPath);
+    File compFile = new File(dir, file + "-" + strategy + "-" + format);
+    rand = new Random();
+    ByteBuffer buffer = Files.map(compFile);
+    supplier = CompressedLongsIndexedSupplier.fromByteBuffer(buffer, ByteOrder.nativeOrder(), null);
+  }
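+
+  // The benchmark reads files named <file>-<strategy>-<format>, which is expected
+  // to match the naming convention used by LongCompressionBenchmarkFileGenerator.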
+
+  @Benchmark
+  public void readContinuous(Blackhole bh) throws IOException
+  {
+    IndexedLongs indexedLongs = supplier.get();
+    int count = indexedLongs.size();
+    long sum = 0;
+    for (int i = 0; i < count; i++) {
+      sum += indexedLongs.get(i);
+    }
+    bh.consume(sum);
+    indexedLongs.close();
+  }
+
+  @Benchmark
+  public void readSkipping(Blackhole bh) throws IOException
+  {
+    IndexedLongs indexedLongs = supplier.get();
+    int count = indexedLongs.size();
+    long sum = 0;
+    for (int i = 0; i < count; i += rand.nextInt(2000)) {
+      sum += indexedLongs.get(i);
+    }
+    bh.consume(sum);
+    indexedLongs.close();
+  }
+
+}
+
diff --git a/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmarkFileGenerator.java b/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmarkFileGenerator.java
new file mode 100644
index 00000000000..5d6215fa44e
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/LongCompressionBenchmarkFileGenerator.java
@@ -0,0 +1,188 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.io.ByteSink;
+import io.druid.benchmark.datagen.BenchmarkColumnSchema;
+import io.druid.benchmark.datagen.BenchmarkColumnValueGenerator;
+import io.druid.segment.column.ValueType;
+import io.druid.segment.data.CompressedObjectStrategy;
+import io.druid.segment.data.CompressionFactory;
+import io.druid.segment.data.LongSupplierSerializer;
+import io.druid.segment.data.TmpFileIOPeon;
+
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.net.URISyntaxException;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.nio.channels.FileChannel;
+import java.nio.file.StandardOpenOption;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class LongCompressionBenchmarkFileGenerator
+{
+  public static final int ROW_NUM = 5000000;
+  public static final List<CompressedObjectStrategy.CompressionStrategy> compressions =
+      ImmutableList.of(CompressedObjectStrategy.CompressionStrategy.LZ4,
+                       CompressedObjectStrategy.CompressionStrategy.NONE);
+  public static final List<CompressionFactory.LongEncodingStrategy> encodings =
+      ImmutableList.of(CompressionFactory.LongEncodingStrategy.AUTO, CompressionFactory.LongEncodingStrategy.LONGS);
+
+  private static String dirPath = "longCompress/";
+
+  public static void main(String[] args) throws IOException, URISyntaxException
+  {
+    if (args.length >= 1) {
+      dirPath = args[0];
+    }
+
+    BenchmarkColumnSchema enumeratedSchema = BenchmarkColumnSchema.makeEnumerated("", ValueType.LONG, true, 1, 0d,
+                                                                                  ImmutableList.<Object>of(
+                                                                                      0,
+                                                                                      1,
+                                                                                      2,
+                                                                                      3,
+                                                                                      4
+                                                                                  ),
+                                                                                  ImmutableList.of(
+                                                                                      0.95,
+                                                                                      0.001,
+                                                                                      0.0189,
+                                                                                      0.03,
+                                                                                      0.0001
+                                                                                  )
+    );
+    BenchmarkColumnSchema zipfLowSchema = BenchmarkColumnSchema.makeZipf("", ValueType.LONG, true, 1, 0d, -1, 1000, 1d);
+    BenchmarkColumnSchema zipfHighSchema = BenchmarkColumnSchema.makeZipf(
+        "",
+        ValueType.LONG,
+        true,
+        1,
+        0d,
+        -1,
+        1000,
+        3d
+    );
+    BenchmarkColumnSchema sequentialSchema = BenchmarkColumnSchema.makeSequential(
+        "",
+        ValueType.LONG,
+        true,
+        1,
+        0d,
+        1470187671,
+        2000000000
+    );
+    BenchmarkColumnSchema uniformSchema = BenchmarkColumnSchema.makeDiscreteUniform(
+        "",
+        ValueType.LONG,
+        true,
+        1,
+        0d,
+        0,
+        1000
+    );
+
+    Map<String, BenchmarkColumnValueGenerator> generators = new HashMap<>();
+    generators.put("enumerate", new BenchmarkColumnValueGenerator(enumeratedSchema, 1));
+    generators.put("zipfLow", new BenchmarkColumnValueGenerator(zipfLowSchema, 1));
+    generators.put("zipfHigh", new BenchmarkColumnValueGenerator(zipfHighSchema, 1));
+    generators.put("sequential", new BenchmarkColumnValueGenerator(sequentialSchema, 1));
+    generators.put("uniform", new BenchmarkColumnValueGenerator(uniformSchema, 1));
+
+    File dir = new File(dirPath);
+    dir.mkdir();
+
+    // create data files using BenchmarkColumnValueGenerator
+    for (Map.Entry<String, BenchmarkColumnValueGenerator> entry : generators.entrySet()) {
+      final File dataFile = new File(dir, entry.getKey());
+      dataFile.delete();
+      try (Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(dataFile)))) {
+        for (int i = 0; i < ROW_NUM; i++) {
+          writer.write((long) entry.getValue().generateRowValue() + "\n");
+        }
+      }
+    }
+
+    // create compressed files using all combinations of CompressionStrategy and LongEncoding provided
+    for (Map.Entry<String, BenchmarkColumnValueGenerator> entry : generators.entrySet()) {
+      for (CompressedObjectStrategy.CompressionStrategy compression : compressions) {
+        for (CompressionFactory.LongEncodingStrategy encoding : encodings) {
+          String name = entry.getKey() + "-" + compression.toString() + "-" + encoding.toString();
+          System.out.print(name + ": ");
+          File compFile = new File(dir, name);
+          compFile.delete();
+          File dataFile = new File(dir, entry.getKey());
+
+          TmpFileIOPeon iopeon = new TmpFileIOPeon(true);
+          LongSupplierSerializer writer = CompressionFactory.getLongSerializer(
+              iopeon,
+              "long",
+              ByteOrder.nativeOrder(),
+              encoding,
+              compression
+          );
+          BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(dataFile)));
+
+          try (FileChannel output = FileChannel.open(
+              compFile.toPath(),
+              StandardOpenOption.CREATE_NEW,
+              StandardOpenOption.WRITE
+          )) {
+            writer.open();
+            String line;
+            while ((line = br.readLine()) != null) {
+              writer.add(Long.parseLong(line));
+            }
+            final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+            writer.closeAndConsolidate(
+                new ByteSink()
+                {
+                  @Override
+                  public OutputStream openStream() throws IOException
+                  {
+                    return baos;
+                  }
+                }
+            );
+            output.write(ByteBuffer.wrap(baos.toByteArray()));
+          }
+          finally {
+            iopeon.close();
+            br.close();
+          }
+          System.out.print(compFile.length() / 1024 + "\n");
+        }
+      }
+    }
+  }
+}
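
A usage sketch for the generator above (the /tmp path is illustrative; with no argument the output directory defaults to "longCompress/"). It produces one raw data file per distribution plus one compressed file per distribution/compression/encoding combination, which LongCompressionBenchmark then maps in its setup().

public class GenerateLongCompressionFiles
{
  public static void main(String[] args) throws Exception
  {
    // Writes the enumerate, zipfLow, zipfHigh, sequential, and uniform data
    // files and their compression/encoding variants under /tmp/longCompress/.
    LongCompressionBenchmarkFileGenerator.main(new String[]{"/tmp/longCompress/"});
  }
}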
diff --git a/benchmarks/src/main/java/io/druid/benchmark/MergeSequenceBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/MergeSequenceBenchmark.java
index 5721bba87d3..54ad24b983e 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/MergeSequenceBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/MergeSequenceBenchmark.java
@@ -22,10 +22,12 @@
 import com.google.common.collect.Lists;
 import com.google.common.collect.Ordering;
 import com.google.common.primitives.Ints;
-import com.metamx.common.guava.Accumulator;
-import com.metamx.common.guava.MergeSequence;
-import com.metamx.common.guava.Sequence;
-import com.metamx.common.guava.Sequences;
+
+import io.druid.java.util.common.guava.Accumulator;
+import io.druid.java.util.common.guava.MergeSequence;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
 import org.openjdk.jmh.annotations.Mode;
diff --git a/benchmarks/src/main/java/io/druid/benchmark/StupidPoolConcurrencyBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/StupidPoolConcurrencyBenchmark.java
index b5391d626af..43ae737495c 100644
--- a/benchmarks/src/main/java/io/druid/benchmark/StupidPoolConcurrencyBenchmark.java
+++ b/benchmarks/src/main/java/io/druid/benchmark/StupidPoolConcurrencyBenchmark.java
@@ -20,9 +20,11 @@
 package io.druid.benchmark;
 
 import com.google.common.base.Supplier;
-import com.metamx.common.logger.Logger;
+
 import io.druid.collections.ResourceHolder;
 import io.druid.collections.StupidPool;
+import io.druid.java.util.common.logger.Logger;
+
 import org.openjdk.jmh.annotations.Benchmark;
 import org.openjdk.jmh.annotations.BenchmarkMode;
 import org.openjdk.jmh.annotations.Level;
@@ -63,6 +65,7 @@ public void teardown()
   {
     private final AtomicLong numPools = new AtomicLong(0L);
     private final StupidPool<Object> pool = new StupidPool<>(
+        "simpleObject pool",
         new Supplier<Object>()
         {
           @Override
diff --git a/benchmarks/src/main/java/io/druid/benchmark/TimeParseBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/TimeParseBenchmark.java
new file mode 100644
index 00000000000..126b7b81120
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/TimeParseBenchmark.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package io.druid.benchmark;
+
+import com.google.common.base.Function;
+
+import io.druid.java.util.common.parsers.TimestampParser;
+
+import org.joda.time.DateTime;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.infra.Blackhole;
+import org.openjdk.jmh.runner.Runner;
+import org.openjdk.jmh.runner.RunnerException;
+import org.openjdk.jmh.runner.options.Options;
+import org.openjdk.jmh.runner.options.OptionsBuilder;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+public class TimeParseBenchmark
+{
+  // 1 million rows
+  int numRows = 1000000;
+
+  // Number of batches; rows within a batch share the same timestamp
+  @Param({"10000", "100000", "500000", "1000000"})
+  int numBatches;
+
+  static final String DATA_FORMAT = "MM/dd/yyyy HH:mm:ss Z";
+
+  static Function<String, DateTime> timeFn = TimestampParser.createTimestampParser(DATA_FORMAT);
+
+  private String[] rows;
+
+  @Setup
+  public void setup()
+  {
+    SimpleDateFormat format = new SimpleDateFormat(DATA_FORMAT);
+    long start = System.currentTimeMillis();
+    int rowsPerBatch = numRows / numBatches;
+    int numRowInBatch = 0;
+    rows = new String[numRows];
+    for (int i = 0; i < numRows; ++i) {
+      if (numRowInBatch >= rowsPerBatch) {
+        numRowInBatch = 0;
+        start += 5000; // new batch, add 5 seconds
+      }
+      rows[i] = format.format(new Date(start));
+      numRowInBatch++;
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.NANOSECONDS)
+  public void parseNoContext(Blackhole blackhole)
+  {
+    for (int i = 0; i < rows.length; ++i) {
+      blackhole.consume(timeFn.apply(rows[i]).getMillis());
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.NANOSECONDS)
+  public void parseWithContext(Blackhole blackhole)
+  {
+    String lastTimeString = null;
+    long lastTime = 0L;
+    for (int i = 0; i < rows.length; ++i) {
+      if (!rows[i].equals(lastTimeString)) {
+        lastTimeString = rows[i];
+        lastTime = timeFn.apply(rows[i]).getMillis();
+      }
+      blackhole.consume(lastTime);
+    }
+  }
+
+  public static void main(String[] args) throws RunnerException
+  {
+    Options opt = new OptionsBuilder()
+        .include(TimeParseBenchmark.class.getSimpleName())
+        .warmupIterations(1)
+        .measurementIterations(10)
+        .forks(1)
+        .build();
+
+    new Runner(opt).run();
+  }
+}
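
The parseWithContext benchmark above measures a caching trick: while consecutive rows carry the same timestamp string, the parse is skipped and the previous result reused. A standalone sketch of that pattern (class and method names are illustrative, not from this PR):

import com.google.common.base.Function;
import org.joda.time.DateTime;

class CachingTimestampParser
{
  private final Function<String, DateTime> parser;
  private String lastInput;
  private long lastMillis;

  CachingTimestampParser(Function<String, DateTime> parser)
  {
    this.parser = parser;
  }

  long parseMillis(String input)
  {
    // Only re-parse when the incoming string differs from the previous one.
    if (!input.equals(lastInput)) {
      lastInput = input;
      lastMillis = parser.apply(input).getMillis();
    }
    return lastMillis;
  }
}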
diff --git a/benchmarks/src/main/java/io/druid/benchmark/TopNTypeInterfaceBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/TopNTypeInterfaceBenchmark.java
new file mode 100644
index 00000000000..de8f74111c3
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/TopNTypeInterfaceBenchmark.java
@@ -0,0 +1,645 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.benchmark.query.QueryBenchmarkUtil;
+import io.druid.collections.StupidPool;
+import io.druid.concurrent.Execs;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.offheap.OffheapBufferGenerator;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.Result;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.ExtractionDimensionSpec;
+import io.druid.query.extraction.IdentityExtractionFn;
+import io.druid.query.ordering.StringComparators;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.query.topn.DimensionTopNMetricSpec;
+import io.druid.query.topn.TopNQuery;
+import io.druid.query.topn.TopNQueryBuilder;
+import io.druid.query.topn.TopNQueryConfig;
+import io.druid.query.topn.TopNQueryQueryToolChest;
+import io.druid.query.topn.TopNQueryRunnerFactory;
+import io.druid.query.topn.TopNResultValue;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+// Benchmark for determining the interface overhead of TopN with multiple type implementations
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class TopNTypeInterfaceBenchmark
+{
+  @Param({"1"})
+  private int numSegments;
+
+  @Param({"750000"})
+  private int rowsPerSegment;
+
+  @Param({"basic.A"})
+  private String schemaAndQuery;
+
+  @Param({"10"})
+  private int threshold;
+
+  private static final Logger log = new Logger(TopNTypeInterfaceBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private List<IncrementalIndex> incIndexes;
+  private List<QueryableIndex> qIndexes;
+
+  private QueryRunnerFactory factory;
+  private BenchmarkSchemaInfo schemaInfo;
+  private TopNQueryBuilder queryBuilder;
+  private TopNQuery stringQuery;
+  private TopNQuery longQuery;
+  private TopNQuery floatQuery;
+
+  private ExecutorService executorService;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  private static final Map<String, Map<String, TopNQueryBuilder>> SCHEMA_QUERY_MAP = new LinkedHashMap<>();
+
+  private void setupQueries()
+  {
+    // queries for the basic schema
+    Map<String, TopNQueryBuilder> basicQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+
+    { // basic.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory("sumLongSequential", "sumLongSequential"));
+      queryAggs.add(new LongMaxAggregatorFactory("maxLongUniform", "maxLongUniform"));
+      queryAggs.add(new DoubleSumAggregatorFactory("sumFloatNormal", "sumFloatNormal"));
+      queryAggs.add(new DoubleMinAggregatorFactory("minFloatZipf", "minFloatZipf"));
+      queryAggs.add(new HyperUniquesAggregatorFactory("hyperUniquesMet", "hyper"));
+
+      // Use an IdentityExtractionFn to force usage of DimExtractionTopNAlgorithm
+      TopNQueryBuilder queryBuilderString = new TopNQueryBuilder()
+          .dataSource("blah")
+          .granularity(Granularities.ALL)
+          .dimension(new ExtractionDimensionSpec("dimSequential", "dimSequential", IdentityExtractionFn.getInstance()))
+          .metric("sumFloatNormal")
+          .intervals(intervalSpec)
+          .aggregators(queryAggs);
+
+      // DimExtractionTopNAlgorithm is always used for numeric columns
+      TopNQueryBuilder queryBuilderLong = new TopNQueryBuilder()
+          .dataSource("blah")
+          .granularity(Granularities.ALL)
+          .dimension("metLongUniform")
+          .metric("sumFloatNormal")
+          .intervals(intervalSpec)
+          .aggregators(queryAggs);
+
+      TopNQueryBuilder queryBuilderFloat = new TopNQueryBuilder()
+          .dataSource("blah")
+          .granularity(Granularities.ALL)
+          .dimension("metFloatNormal")
+          .metric("sumFloatNormal")
+          .intervals(intervalSpec)
+          .aggregators(queryAggs);
+
+      basicQueries.put("string", queryBuilderString);
+      basicQueries.put("long", queryBuilderLong);
+      basicQueries.put("float", queryBuilderFloat);
+    }
+    { // basic.numericSort
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory("sumLongSequential", "sumLongSequential"));
+
+      TopNQueryBuilder queryBuilderA = new TopNQueryBuilder()
+          .dataSource("blah")
+          .granularity(Granularities.ALL)
+          .dimension("dimUniform")
+          .metric(new DimensionTopNMetricSpec(null, StringComparators.NUMERIC))
+          .intervals(intervalSpec)
+          .aggregators(queryAggs);
+
+      basicQueries.put("numericSort", queryBuilderA);
+    }
+    { // basic.alphanumericSort
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory("sumLongSequential", "sumLongSequential"));
+
+      TopNQueryBuilder queryBuilderA = new TopNQueryBuilder()
+          .dataSource("blah")
+          .granularity(Granularities.ALL)
+          .dimension("dimUniform")
+          .metric(new DimensionTopNMetricSpec(null, StringComparators.ALPHANUMERIC))
+          .intervals(intervalSpec)
+          .aggregators(queryAggs);
+
+      basicQueries.put("alphanumericSort", queryBuilderA);
+    }
+
+    SCHEMA_QUERY_MAP.put("basic", basicQueries);
+  }
+
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    executorService = Execs.multiThreaded(numSegments, "TopNThreadPool");
+
+    setupQueries();
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+    queryBuilder = SCHEMA_QUERY_MAP.get("basic").get("string");
+    queryBuilder.threshold(threshold);
+    stringQuery = queryBuilder.build();
+
+    TopNQueryBuilder longBuilder = SCHEMA_QUERY_MAP.get("basic").get("long");
+    longBuilder.threshold(threshold);
+    longQuery = longBuilder.build();
+
+    TopNQueryBuilder floatBuilder = SCHEMA_QUERY_MAP.get("basic").get("float");
+    floatBuilder.threshold(threshold);
+    floatQuery = floatBuilder.build();
+
+    incIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      log.info("Generating rows for segment " + i);
+
+      BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+          schemaInfo.getColumnSchemas(),
+          RNG_SEED + i,
+          schemaInfo.getDataInterval(),
+          rowsPerSegment
+      );
+
+      IncrementalIndex incIndex = makeIncIndex();
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        InputRow row = gen.nextRow();
+        if (j % 10000 == 0) {
+          log.info(j + " rows generated.");
+        }
+        incIndex.add(row);
+      }
+      incIndexes.add(incIndex);
+    }
+
+    File tmpFile = Files.createTempDir();
+    log.info("Using temp dir: " + tmpFile.getAbsolutePath());
+    tmpFile.deleteOnExit();
+
+    qIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      File indexFile = INDEX_MERGER_V9.persist(
+          incIndexes.get(i),
+          tmpFile,
+          new IndexSpec()
+      );
+
+      QueryableIndex qIndex = INDEX_IO.loadIndex(indexFile);
+      qIndexes.add(qIndex);
+    }
+
+    factory = new TopNQueryRunnerFactory(
+        new StupidPool<>(
+            "TopNBenchmark-compute-bufferPool",
+            new OffheapBufferGenerator("compute", 250000000),
+            0,
+            Integer.MAX_VALUE
+        ),
+        new TopNQueryQueryToolChest(new TopNQueryConfig(), QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()),
+        QueryBenchmarkUtil.NOOP_QUERYWATCHER
+    );
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+
+    QueryToolChest toolChest = factory.getToolchest();
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringThenLong(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexStringThenFloat(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongThenString(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexLongThenFloat(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatOnly(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatTwice(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatThenString(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, stringQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndexFloatThenLong(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<TopNResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<TopNResultValue>> results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, floatQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+
+    runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    results = TopNTypeInterfaceBenchmark.runQuery(factory, runner, longQuery);
+    for (Result<TopNResultValue> result : results) {
+      blackhole.consume(result);
+    }
+  }
+}
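
The Twice/ThenLong/ThenFloat variants above exist to expose call-site pollution: after a second dimension type has passed through the shared TopN code paths, the JIT may no longer inline a single implementation at those call sites. A self-contained sketch of the general effect being probed (my reading of the benchmark's intent; the types below are illustrative):

interface ValueReader
{
  long read(int i);
}

class LongReader implements ValueReader
{
  @Override
  public long read(int i) { return i; }
}

class FloatReader implements ValueReader
{
  @Override
  public long read(int i) { return (long) (i * 1.5f); }
}

class CallSiteDemo
{
  static long sum(ValueReader reader, int n)
  {
    long total = 0;
    for (int i = 0; i < n; i++) {
      total += reader.read(i); // this call site degrades once several receiver types pass through it
    }
    return total;
  }

  public static void main(String[] args)
  {
    System.out.println(sum(new LongReader(), 1_000_000));  // monomorphic so far
    System.out.println(sum(new FloatReader(), 1_000_000)); // a second type pollutes the call site
  }
}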
diff --git a/benchmarks/src/main/java/io/druid/benchmark/VSizeSerdeBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/VSizeSerdeBenchmark.java
new file mode 100644
index 00000000000..490128cc5f6
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/VSizeSerdeBenchmark.java
@@ -0,0 +1,212 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark;
+
+import com.google.common.io.Files;
+import io.druid.segment.data.VSizeLongSerde;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.Writer;
+import java.net.URISyntaxException;
+import java.nio.ByteBuffer;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 10)
+@BenchmarkMode(Mode.AverageTime)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+public class VSizeSerdeBenchmark
+{
+  @Param({"500000"})
+  private int values;
+
+  private VSizeLongSerde.LongDeserializer d1;
+  private VSizeLongSerde.LongDeserializer d2;
+  private VSizeLongSerde.LongDeserializer d4;
+  private VSizeLongSerde.LongDeserializer d8;
+  private VSizeLongSerde.LongDeserializer d12;
+  private VSizeLongSerde.LongDeserializer d16;
+  private VSizeLongSerde.LongDeserializer d20;
+  private VSizeLongSerde.LongDeserializer d24;
+  private VSizeLongSerde.LongDeserializer d32;
+  private VSizeLongSerde.LongDeserializer d40;
+  private VSizeLongSerde.LongDeserializer d48;
+  private VSizeLongSerde.LongDeserializer d56;
+  private VSizeLongSerde.LongDeserializer d64;
+  private long sum;
+  private File dummy;
+
+  @Setup
+  public void setup() throws IOException, URISyntaxException
+  {
+    // Use a dummy file of sufficient size so the benchmark reads from a mapped ByteBuffer
+    // rather than a heap ByteBuffer from ByteBuffer.allocate(), since the two perform differently.
+    File base = new File(this.getClass().getClassLoader().getResource("").toURI());
+    dummy = new File(base, "dummy");
+    try (Writer writer = new BufferedWriter(new FileWriter(dummy))) {
+      String padding = "        "; // 8 bytes per entry, enough for the widest (64-bit) deserializer
+      for (int i = 0; i < values + 10; i++) {
+        writer.write(padding);
+      }
+    }
+    ByteBuffer buffer = Files.map(dummy);
+    d1 = VSizeLongSerde.getDeserializer(1, buffer, 10);
+    d2 = VSizeLongSerde.getDeserializer(2, buffer, 10);
+    d4 = VSizeLongSerde.getDeserializer(4, buffer, 10);
+    d8 = VSizeLongSerde.getDeserializer(8, buffer, 10);
+    d12 = VSizeLongSerde.getDeserializer(12, buffer, 10);
+    d16 = VSizeLongSerde.getDeserializer(16, buffer, 10);
+    d20 = VSizeLongSerde.getDeserializer(20, buffer, 10);
+    d24 = VSizeLongSerde.getDeserializer(24, buffer, 10);
+    d32 = VSizeLongSerde.getDeserializer(32, buffer, 10);
+    d40 = VSizeLongSerde.getDeserializer(40, buffer, 10);
+    d48 = VSizeLongSerde.getDeserializer(48, buffer, 10);
+    d56 = VSizeLongSerde.getDeserializer(56, buffer, 10);
+    d64 = VSizeLongSerde.getDeserializer(64, buffer, 10);
+  }
+
+  @TearDown
+  public void tearDown()
+  {
+    dummy.delete();
+    System.out.println(sum);
+  }
+
+  @Benchmark
+  public void read1()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d1.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read2()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d2.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read4()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d4.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read8()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d8.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read12()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d12.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read16()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d16.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read20()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d20.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read24()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d24.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read32()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d32.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read40()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d40.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read48()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d48.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read56()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d56.get(i);
+    }
+  }
+
+  @Benchmark
+  public void read64()
+  {
+    for (int i = 0; i < values; i++) {
+      sum += d64.get(i);
+    }
+  }
+}
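
On the dummy-file sizing in setup() above: each loop iteration writes eight spaces, so the file holds 8 * (values + 10) bytes, which covers the widest (64-bit) deserializer reading `values` entries from offset 10. A quick check of that arithmetic (illustrative only):

public class DummyFileSizeCheck
{
  public static void main(String[] args)
  {
    int values = 500000;
    long fileBytes = 8L * (values + 10); // setup() writes eight spaces per iteration
    long needed = 10 + 8L * values;      // offset 10 plus `values` 64-bit entries
    System.out.println(fileBytes >= needed); // true: the mapped buffer is large enough
  }
}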
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnSchema.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnSchema.java
new file mode 100644
index 00000000000..b477f84add5
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnSchema.java
@@ -0,0 +1,429 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import io.druid.segment.column.ValueType;
+
+import java.util.List;
+
+public class BenchmarkColumnSchema
+{
+  /**
+   * SEQUENTIAL:          Generate integer or enumerated values in sequence. Not random.
+   *
+   * DISCRETE_UNIFORM:    Discrete uniform distribution, generates integers or enumerated values.
+   *
+   * ROUNDED_NORMAL:      Discrete distribution that rounds sample values from an underlying normal
+   *                      distribution.
+   *
+   * ZIPF:                Discrete Zipf distribution.
+   *                      Lower numbers have higher probability.
+   *                      Can also generate Zipf distribution from a list of enumerated values.
+   *
+   * ENUMERATED:          Discrete distribution, generated from lists of values and associated probabilities.
+   *
+   * NORMAL:              Continuous normal distribution.
+   *
+   * UNIFORM:             Continuous uniform distribution.
+   */
+  public enum ValueDistribution
+  {
+    // discrete distributions
+    SEQUENTIAL,
+    DISCRETE_UNIFORM,
+    ROUNDED_NORMAL,
+    ZIPF,
+    ENUMERATED,
+
+    // continuous distributions
+    UNIFORM,
+    NORMAL
+  }
+
+  /**
+   * Generate values according to this distribution.
+   */
+  private ValueDistribution distributionType;
+
+  /**
+   * Name of the column.
+   */
+  private String name;
+
+  /**
+   * Value type of this column.
+   */
+  private ValueType type;
+
+  /**
+   * Is this column a metric or dimension?
+   */
+  private boolean isMetric;
+
+  /**
+   * Controls how many values are generated per row (use > 1 for multi-value dimensions).
+   */
+  private int rowSize;
+
+  /**
+   * Probability that a null row will be generated instead of a row with values sampled from the distribution.
+   */
+  private final Double nullProbability;
+
+  /**
+   * When used in discrete distributions, the set of possible values to be generated.
+   */
+  private List<Object> enumeratedValues;
+
+  /**
+   * When using ENUMERATED distribution, the probabilities associated with the set of values to be generated.
+   * The probabilities in this list must follow the same order as those in enumeratedValues.
+   * Probabilities do not need to sum to 1.0, they will be automatically normalized.
+   */
+  private List<Double> enumeratedProbabilities;
+
+  /**
+   * Range of integer values to generate in the SEQUENTIAL, DISCRETE_UNIFORM, and ZIPF distributions.
+   */
+  private Integer startInt;
+  private Integer endInt;
+
+  /**
+   * Range of double values to generate in the continuous UNIFORM distribution.
+   */
+  private Double startDouble;
+  private Double endDouble;
+
+  /**
+   * Exponent for the ZIPF distribution.
+   */
+  private Double zipfExponent;
+
+  /**
+   * Mean and standard deviation for the NORMAL and ROUNDED_NORMAL distributions.
+   */
+  private Double mean;
+  private Double standardDeviation;
+
+  private BenchmarkColumnSchema(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      ValueDistribution distributionType
+  )
+  {
+    this.name = name;
+    this.type = type;
+    this.isMetric = isMetric;
+    this.distributionType = distributionType;
+    this.rowSize = rowSize;
+    this.nullProbability = nullProbability;
+  }
+
+  public BenchmarkColumnValueGenerator makeGenerator(long seed)
+  {
+    return new BenchmarkColumnValueGenerator(this, seed);
+  }
+
+  public String getName()
+  {
+    return name;
+  }
+
+  public Double getNullProbability()
+  {
+    return nullProbability;
+  }
+
+  public ValueType getType()
+  {
+    return type;
+  }
+
+  public boolean isMetric()
+  {
+    return isMetric;
+  }
+
+  public ValueDistribution getDistributionType()
+  {
+    return distributionType;
+  }
+
+  public int getRowSize()
+  {
+    return rowSize;
+  }
+
+  public List<Object> getEnumeratedValues()
+  {
+    return enumeratedValues;
+  }
+
+  public List<Double> getEnumeratedProbabilities()
+  {
+    return enumeratedProbabilities;
+  }
+
+  public Integer getStartInt()
+  {
+    return startInt;
+  }
+
+  public Integer getEndInt()
+  {
+    return endInt;
+  }
+
+  public Double getStartDouble()
+  {
+    return startDouble;
+  }
+
+  public Double getEndDouble()
+  {
+    return endDouble;
+  }
+
+  public Double getZipfExponent()
+  {
+    return zipfExponent;
+  }
+
+  public Double getMean()
+  {
+    return mean;
+  }
+
+  public Double getStandardDeviation()
+  {
+    return standardDeviation;
+  }
+
+  public static BenchmarkColumnSchema makeSequential(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      int startInt,
+      int endInt
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.SEQUENTIAL
+    );
+    schema.startInt = startInt;
+    schema.endInt = endInt;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeEnumeratedSequential(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      List<Object> enumeratedValues
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.SEQUENTIAL
+    );
+    schema.enumeratedValues = enumeratedValues;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeDiscreteUniform(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      int startInt,
+      int endInt
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.DISCRETE_UNIFORM
+    );
+    schema.startInt = startInt;
+    schema.endInt = endInt;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeEnumeratedDiscreteUniform(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      List<Object> enumeratedValues
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.DISCRETE_UNIFORM
+    );
+    schema.enumeratedValues = enumeratedValues;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeContinuousUniform(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      double startDouble,
+      double endDouble
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.UNIFORM
+    );
+    schema.startDouble = startDouble;
+    schema.endDouble = endDouble;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeNormal(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      Double mean,
+      Double standardDeviation,
+      boolean useRounding
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        useRounding ? ValueDistribution.ROUNDED_NORMAL : ValueDistribution.NORMAL
+    );
+    schema.mean = mean;
+    schema.standardDeviation = standardDeviation;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeZipf(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      int startInt,
+      int endInt,
+      Double zipfExponent
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.ZIPF
+    );
+    schema.startInt = startInt;
+    schema.endInt = endInt;
+    schema.zipfExponent = zipfExponent;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeEnumeratedZipf(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      List<Object> enumeratedValues,
+      Double zipfExponent
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.ZIPF
+    );
+    schema.enumeratedValues = enumeratedValues;
+    schema.zipfExponent = zipfExponent;
+    return schema;
+  }
+
+  public static BenchmarkColumnSchema makeEnumerated(
+      String name,
+      ValueType type,
+      boolean isMetric,
+      int rowSize,
+      Double nullProbability,
+      List<Object> enumeratedValues,
+      List<Double> enumeratedProbabilities
+  )
+  {
+    BenchmarkColumnSchema schema = new BenchmarkColumnSchema(
+        name,
+        type,
+        isMetric,
+        rowSize,
+        nullProbability,
+        ValueDistribution.ENUMERATED
+    );
+    schema.enumeratedValues = enumeratedValues;
+    schema.enumeratedProbabilities = enumeratedProbabilities;
+    return schema;
+  }
+}
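
A construction sketch using the factory methods above, with the zipfLow parameters from the file generator earlier in this diff:

import io.druid.benchmark.datagen.BenchmarkColumnSchema;
import io.druid.benchmark.datagen.BenchmarkColumnValueGenerator;
import io.druid.segment.column.ValueType;

public class SchemaExample
{
  public static void main(String[] args)
  {
    // Single-valued long metric column, never null, Zipf over [-1, 1000) with exponent 1.
    BenchmarkColumnSchema zipfLow =
        BenchmarkColumnSchema.makeZipf("", ValueType.LONG, true, 1, 0d, -1, 1000, 1d);
    BenchmarkColumnValueGenerator gen = zipfLow.makeGenerator(1);
    System.out.println(gen.generateRowValue());
  }
}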
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnValueGenerator.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnValueGenerator.java
new file mode 100644
index 00000000000..966021b483b
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkColumnValueGenerator.java
@@ -0,0 +1,215 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import io.druid.segment.column.ValueType;
+import org.apache.commons.math3.distribution.AbstractIntegerDistribution;
+import org.apache.commons.math3.distribution.AbstractRealDistribution;
+import org.apache.commons.math3.distribution.EnumeratedDistribution;
+import org.apache.commons.math3.distribution.NormalDistribution;
+import org.apache.commons.math3.distribution.UniformRealDistribution;
+import org.apache.commons.math3.distribution.ZipfDistribution;
+import org.apache.commons.math3.util.Pair;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
+public class BenchmarkColumnValueGenerator
+{
+  private final BenchmarkColumnSchema schema;
+  private final long seed;
+
+  private Serializable distribution;
+  private Random simpleRng;
+
+  public BenchmarkColumnValueGenerator(
+      BenchmarkColumnSchema schema,
+      long seed
+  )
+  {
+    this.schema = schema;
+    this.seed = seed;
+
+    simpleRng = new Random(seed);
+    initDistribution();
+  }
+
+  public Object generateRowValue()
+  {
+    Double nullProbability = schema.getNullProbability();
+    int rowSize = schema.getRowSize();
+
+    if (nullProbability != null) {
+      Double randDouble = simpleRng.nextDouble();
+      if (randDouble <= nullProbability) {
+        return null;
+      }
+    }
+
+    if (rowSize == 1) {
+      return generateSingleRowValue();
+    } else {
+      List<Object> rowVals = new ArrayList<>(rowSize);
+      for (int i = 0; i < rowSize; i++) {
+        rowVals.add(generateSingleRowValue());
+      }
+      return rowVals;
+    }
+  }
+
+  public BenchmarkColumnSchema getSchema()
+  {
+    return schema;
+  }
+
+  public long getSeed()
+  {
+    return seed;
+  }
+
+  private Object generateSingleRowValue()
+  {
+    Object ret = null;
+    ValueType type = schema.getType();
+
+    if (distribution instanceof AbstractIntegerDistribution) {
+      ret = ((AbstractIntegerDistribution) distribution).sample();
+    } else if (distribution instanceof AbstractRealDistribution) {
+      ret = ((AbstractRealDistribution) distribution).sample();
+    } else if (distribution instanceof EnumeratedDistribution) {
+      ret = ((EnumeratedDistribution) distribution).sample();
+    }
+
+    ret = convertType(ret, type);
+    return ret;
+  }
+
+  private Object convertType(Object input, ValueType type)
+  {
+    if (input == null) {
+      return null;
+    }
+
+    Object ret;
+    switch (type) {
+      case STRING:
+        ret = input.toString();
+        break;
+      case LONG:
+        if (input instanceof Number) {
+          ret = ((Number) input).longValue();
+        } else {
+          ret = Long.parseLong(input.toString());
+        }
+        break;
+      case FLOAT:
+        if (input instanceof Number) {
+          ret = ((Number) input).floatValue();
+        } else {
+          ret = Float.parseFloat(input.toString());
+        }
+        break;
+      default:
+        throw new UnsupportedOperationException("Unknown data type: " + type);
+    }
+    return ret;
+  }
+
+  private void initDistribution()
+  {
+    BenchmarkColumnSchema.ValueDistribution distributionType = schema.getDistributionType();
+    ValueType type = schema.getType();
+    List<Object> enumeratedValues = schema.getEnumeratedValues();
+    List<Double> enumeratedProbabilities = schema.getEnumeratedProbabilities();
+    List<Pair<Object, Double>> probabilities = new ArrayList<>();
+
+    switch (distributionType) {
+      case SEQUENTIAL:
+        // not random; cycles through numbers from start to end, or through the enumerated values if provided
+        distribution = new SequentialDistribution(
+            schema.getStartInt(),
+            schema.getEndInt(),
+            schema.getEnumeratedValues()
+        );
+        break;
+      case UNIFORM:
+        distribution = new UniformRealDistribution(schema.getStartDouble(), schema.getEndDouble());
+        break;
+      case DISCRETE_UNIFORM:
+        if (enumeratedValues == null) {
+          enumeratedValues = new ArrayList<>();
+          for (int i = schema.getStartInt(); i < schema.getEndInt(); i++) {
+            Object val = convertType(i, type);
+            enumeratedValues.add(val);
+          }
+        }
+        // give them all equal weight (0.1 each); the library normalizes the probabilities to sum to 1.0
+        for (int i = 0; i < enumeratedValues.size(); i++) {
+          probabilities.add(new Pair<>(enumeratedValues.get(i), 0.1));
+        }
+        distribution = new EnumeratedTreeDistribution<>(probabilities);
+        break;
+      case NORMAL:
+        distribution = new NormalDistribution(schema.getMean(), schema.getStandardDeviation());
+        break;
+      case ROUNDED_NORMAL:
+        NormalDistribution normalDist = new NormalDistribution(schema.getMean(), schema.getStandardDeviation());
+        distribution = new RealRoundingDistribution(normalDist);
+        break;
+      case ZIPF:
+        int cardinality;
+        if (enumeratedValues == null) {
+          Integer startInt = schema.getStartInt();
+          cardinality = schema.getEndInt() - startInt;
+          ZipfDistribution zipf = new ZipfDistribution(cardinality, schema.getZipfExponent());
+          for (int i = 0; i < cardinality; i++) {
+            probabilities.add(new Pair<>((Object) (i + startInt), zipf.probability(i)));
+          }
+        } else {
+          cardinality = enumeratedValues.size();
+          ZipfDistribution zipf = new ZipfDistribution(enumeratedValues.size(), schema.getZipfExponent());
+          for (int i = 0; i < cardinality; i++) {
+            probabilities.add(new Pair<>(enumeratedValues.get(i), zipf.probability(i)));
+          }
+        }
+        distribution = new EnumeratedTreeDistribution<>(probabilities);
+        break;
+      case ENUMERATED:
+        for (int i = 0; i < enumeratedValues.size(); i++) {
+          probabilities.add(new Pair<>(enumeratedValues.get(i), enumeratedProbabilities.get(i)));
+        }
+        distribution = new EnumeratedTreeDistribution<>(probabilities);
+        break;
+
+      default:
+        throw new UnsupportedOperationException("Unknown distribution type: " + distributionType);
+    }
+
+    if (distribution instanceof AbstractIntegerDistribution) {
+      ((AbstractIntegerDistribution) distribution).reseedRandomGenerator(seed);
+    } else if (distribution instanceof AbstractRealDistribution) {
+      ((AbstractRealDistribution) distribution).reseedRandomGenerator(seed);
+    } else if (distribution instanceof EnumeratedDistribution) {
+      ((EnumeratedDistribution) distribution).reseedRandomGenerator(seed);
+    }
+  }
+}
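A minimal usage sketch for the generator above (illustration only, not part of the
patch; the makeZipf factory signature is taken from BenchmarkSchemas later in this diff):

    // Draw ten values from a Zipf-distributed string column with a fixed seed.
    BenchmarkColumnSchema schema =
        BenchmarkColumnSchema.makeZipf("dimZipf", ValueType.STRING, false, 1, null, 1, 101, 1.0);
    BenchmarkColumnValueGenerator generator = new BenchmarkColumnValueGenerator(schema, 9999);
    for (int i = 0; i < 10; i++) {
      System.out.println(generator.generateRowValue());
    }

Because the underlying distribution is reseeded with the constructor's seed, the same
seed always produces the same value sequence, keeping benchmark data reproducible.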
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkDataGenerator.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkDataGenerator.java
new file mode 100644
index 00000000000..d5989789629
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkDataGenerator.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import com.google.common.base.Function;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.MapBasedInputRow;
+import org.joda.time.Interval;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class BenchmarkDataGenerator
+{
+  private final List<BenchmarkColumnSchema> columnSchemas;
+  private final long seed;
+
+  private List<BenchmarkColumnValueGenerator> columnGenerators;
+  private final long startTime;
+  private final long endTime;
+  private final int numConsecutiveTimestamps;
+  private final double timestampIncrement;
+
+  private double currentTime;
+  private int timeCounter;
+  private List<String> dimensionNames;
+
+  public BenchmarkDataGenerator(
+      List<BenchmarkColumnSchema> columnSchemas,
+      final long seed,
+      long startTime,
+      int numConsecutiveTimestamps,
+      Double timestampIncrement
+  )
+  {
+    this.columnSchemas = columnSchemas;
+    this.seed = seed;
+
+    this.startTime = startTime;
+    this.endTime = Long.MAX_VALUE;
+    this.numConsecutiveTimestamps = numConsecutiveTimestamps;
+    this.timestampIncrement = timestampIncrement;
+    this.currentTime = startTime;
+
+    init();
+  }
+
+  public BenchmarkDataGenerator(
+      List<BenchmarkColumnSchema> columnSchemas,
+      final long seed,
+      Interval interval,
+      int numRows
+  )
+  {
+    this.columnSchemas = columnSchemas;
+    this.seed = seed;
+
+    this.startTime = interval.getStartMillis();
+    this.endTime = interval.getEndMillis() - 1;
+
+    Preconditions.checkArgument(endTime >= startTime, "endTime >= startTime");
+
+    long timeDelta = endTime - startTime;
+    this.timestampIncrement = timeDelta / (numRows * 1.0);
+    this.numConsecutiveTimestamps = 0;
+
+    init();
+  }
+
+  public InputRow nextRow()
+  {
+    Map<String, Object> event = new HashMap<>();
+    for (BenchmarkColumnValueGenerator generator : columnGenerators) {
+      event.put(generator.getSchema().getName(), generator.generateRowValue());
+    }
+    MapBasedInputRow row = new MapBasedInputRow(nextTimestamp(), dimensionNames, event);
+    return row;
+  }
+
+  private void init()
+  {
+    this.timeCounter = 0;
+    this.currentTime = startTime;
+
+    dimensionNames = new ArrayList<>();
+    for (BenchmarkColumnSchema schema : columnSchemas) {
+      if (!schema.isMetric()) {
+        dimensionNames.add(schema.getName());
+      }
+    }
+
+    columnGenerators = new ArrayList<>();
+    columnGenerators.addAll(
+        Lists.transform(
+            columnSchemas,
+            new Function<BenchmarkColumnSchema, BenchmarkColumnValueGenerator>()
+            {
+              @Override
+              public BenchmarkColumnValueGenerator apply(
+                  BenchmarkColumnSchema input
+              )
+              {
+                return input.makeGenerator(seed);
+              }
+            }
+        )
+    );
+  }
+
+  private long nextTimestamp()
+  {
+    timeCounter += 1;
+    if (timeCounter > numConsecutiveTimestamps) {
+      currentTime += timestampIncrement;
+      timeCounter = 0;
+    }
+    long newMillis = Math.round(currentTime);
+    if (newMillis > endTime) {
+      return endTime;
+    } else {
+      return newMillis;
+    }
+  }
+
+}
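A hedged sketch of driving the data generator (illustration only; the "basic" schema
key and the Interval-based constructor appear elsewhere in this diff):

    // Spread 1000 rows evenly across the "basic" schema's data interval.
    BenchmarkSchemaInfo info = BenchmarkSchemas.SCHEMA_MAP.get("basic");
    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
        info.getColumnSchemas(),
        9999,                     // RNG seed, matching the benchmarks below
        info.getDataInterval(),
        1000                      // numRows; timestampIncrement is derived from this
    );
    for (int i = 0; i < 1000; i++) {
      InputRow row = gen.nextRow(); // timestamps are non-decreasing and capped at endTime
    }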
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemaInfo.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemaInfo.java
new file mode 100644
index 00000000000..b83a4c9e46c
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemaInfo.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import io.druid.query.aggregation.AggregatorFactory;
+import org.joda.time.Interval;
+
+import java.util.List;
+
+public class BenchmarkSchemaInfo
+{
+  private List<BenchmarkColumnSchema> columnSchemas;
+  private List<AggregatorFactory> aggs;
+  private Interval dataInterval;
+  private boolean withRollup;
+
+  public BenchmarkSchemaInfo(
+      List<BenchmarkColumnSchema> columnSchemas,
+      List<AggregatorFactory> aggs,
+      Interval dataInterval,
+      boolean withRollup
+  )
+  {
+    this.columnSchemas = columnSchemas;
+    this.aggs = aggs;
+    this.dataInterval = dataInterval;
+    this.withRollup = withRollup;
+  }
+
+  public List<BenchmarkColumnSchema> getColumnSchemas()
+  {
+    return columnSchemas;
+  }
+
+  public List<AggregatorFactory> getAggs()
+  {
+    return aggs;
+  }
+
+  public AggregatorFactory[] getAggsArray()
+  {
+    return aggs.toArray(new AggregatorFactory[0]);
+  }
+
+  public Interval getDataInterval()
+  {
+    return dataInterval;
+  }
+
+  public boolean isWithRollup()
+  {
+    return withRollup;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemas.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemas.java
new file mode 100644
index 00000000000..f8b5da8dcc9
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/BenchmarkSchemas.java
@@ -0,0 +1,159 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import com.google.common.collect.ImmutableList;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.DoubleMinAggregatorFactory;
+import io.druid.query.aggregation.DoubleSumAggregatorFactory;
+import io.druid.query.aggregation.LongMaxAggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory;
+import io.druid.segment.column.ValueType;
+import org.joda.time.Interval;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+public class BenchmarkSchemas
+{
+  public static final Map<String, BenchmarkSchemaInfo> SCHEMA_MAP = new LinkedHashMap<>();
+
+  static { // basic schema
+    List<BenchmarkColumnSchema> basicSchemaColumns = ImmutableList.of(
+        // dims
+        BenchmarkColumnSchema.makeSequential("dimSequential", ValueType.STRING, false, 1, null, 0, 1000),
+        BenchmarkColumnSchema.makeZipf("dimZipf", ValueType.STRING, false, 1, null, 1, 101, 1.0),
+        BenchmarkColumnSchema.makeDiscreteUniform("dimUniform", ValueType.STRING, false, 1, null, 1, 100000),
+        BenchmarkColumnSchema.makeSequential("dimSequentialHalfNull", ValueType.STRING, false, 1, 0.5, 0, 1000),
+        BenchmarkColumnSchema.makeEnumerated(
+            "dimMultivalEnumerated",
+            ValueType.STRING,
+            false,
+            4,
+            null,
+            Arrays.<Object>asList("Hello", "World", "Foo", "Bar", "Baz"),
+            Arrays.<Double>asList(0.2, 0.25, 0.15, 0.10, 0.3)
+        ),
+        BenchmarkColumnSchema.makeEnumerated(
+            "dimMultivalEnumerated2",
+            ValueType.STRING,
+            false,
+            3,
+            null,
+            Arrays.<Object>asList("Apple", "Orange", "Xylophone", "Corundum", null),
+            Arrays.<Double>asList(0.2, 0.25, 0.15, 0.10, 0.3)
+        ),
+        BenchmarkColumnSchema.makeSequential("dimMultivalSequentialWithNulls", ValueType.STRING, false, 8, 0.15, 1, 11),
+        BenchmarkColumnSchema.makeSequential("dimHyperUnique", ValueType.STRING, false, 1, null, 0, 100000),
+        BenchmarkColumnSchema.makeSequential("dimNull", ValueType.STRING, false, 1, 1.0, 0, 1),
+
+        // metrics
+        BenchmarkColumnSchema.makeSequential("metLongSequential", ValueType.LONG, true, 1, null, 0, 10000),
+        BenchmarkColumnSchema.makeDiscreteUniform("metLongUniform", ValueType.LONG, true, 1, null, 0, 500),
+        BenchmarkColumnSchema.makeNormal("metFloatNormal", ValueType.FLOAT, true, 1, null, 5000.0, 1.0, true),
+        BenchmarkColumnSchema.makeZipf("metFloatZipf", ValueType.FLOAT, true, 1, null, 0, 1000, 1.0)
+    );
+
+    List<AggregatorFactory> basicSchemaIngestAggs = new ArrayList<>();
+    basicSchemaIngestAggs.add(new CountAggregatorFactory("rows"));
+    basicSchemaIngestAggs.add(new LongSumAggregatorFactory("sumLongSequential", "metLongSequential"));
+    basicSchemaIngestAggs.add(new LongMaxAggregatorFactory("maxLongUniform", "metLongUniform"));
+    basicSchemaIngestAggs.add(new DoubleSumAggregatorFactory("sumFloatNormal", "metFloatNormal"));
+    basicSchemaIngestAggs.add(new DoubleMinAggregatorFactory("minFloatZipf", "metFloatZipf"));
+    basicSchemaIngestAggs.add(new HyperUniquesAggregatorFactory("hyper", "dimHyperUnique"));
+
+    Interval basicSchemaDataInterval = new Interval(0, 1000000);
+
+    BenchmarkSchemaInfo basicSchema = new BenchmarkSchemaInfo(
+        basicSchemaColumns,
+        basicSchemaIngestAggs,
+        basicSchemaDataInterval,
+        true
+    );
+    SCHEMA_MAP.put("basic", basicSchema);
+  }
+
+  static { // simple single string column and count agg schema, no rollup
+    List<BenchmarkColumnSchema> basicSchemaColumns = ImmutableList.of(
+        // dims
+        BenchmarkColumnSchema.makeSequential("dimSequential", ValueType.STRING, false, 1, null, 0, 1000000)
+    );
+
+    List<AggregatorFactory> basicSchemaIngestAggs = new ArrayList<>();
+    basicSchemaIngestAggs.add(new CountAggregatorFactory("rows"));
+
+    Interval basicSchemaDataInterval = new Interval(0, 1000000);
+
+    BenchmarkSchemaInfo basicSchema = new BenchmarkSchemaInfo(
+        basicSchemaColumns,
+        basicSchemaIngestAggs,
+        basicSchemaDataInterval,
+        false
+    );
+    SCHEMA_MAP.put("simple", basicSchema);
+  }
+
+  static { // simple single long column and count agg schema, no rollup
+    List<BenchmarkColumnSchema> basicSchemaColumns = ImmutableList.of(
+        // dims, ingest as a metric for now with rollup off, until numeric dims at ingestion are supported
+        BenchmarkColumnSchema.makeSequential("dimSequential", ValueType.LONG, true, 1, null, 0, 1000000)
+    );
+
+    List<AggregatorFactory> basicSchemaIngestAggs = new ArrayList<>();
+    basicSchemaIngestAggs.add(new LongSumAggregatorFactory("dimSequential", "dimSequential"));
+    basicSchemaIngestAggs.add(new CountAggregatorFactory("rows"));
+
+    Interval basicSchemaDataInterval = new Interval(0, 1000000);
+
+    BenchmarkSchemaInfo basicSchema = new BenchmarkSchemaInfo(
+        basicSchemaColumns,
+        basicSchemaIngestAggs,
+        basicSchemaDataInterval,
+        false
+    );
+    SCHEMA_MAP.put("simpleLong", basicSchema);
+  }
+
+  static { // simple single float column and count agg schema, no rollup
+    List<BenchmarkColumnSchema> basicSchemaColumns = ImmutableList.of(
+        // dims, ingest as a metric for now with rollup off, until numeric dims at ingestion are supported
+        BenchmarkColumnSchema.makeSequential("dimSequential", ValueType.FLOAT, true, 1, null, 0, 1000000)
+    );
+
+    List<AggregatorFactory> basicSchemaIngestAggs = new ArrayList<>();
+    basicSchemaIngestAggs.add(new DoubleSumAggregatorFactory("dimSequential", "dimSequential"));
+    basicSchemaIngestAggs.add(new CountAggregatorFactory("rows"));
+
+    Interval basicSchemaDataInterval = new Interval(0, 1000000);
+
+    BenchmarkSchemaInfo basicSchema = new BenchmarkSchemaInfo(
+        basicSchemaColumns,
+        basicSchemaIngestAggs,
+        basicSchemaDataInterval,
+        false
+    );
+    SCHEMA_MAP.put("simpleFloat", basicSchema);
+  }
+}
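Schema lookup is a plain map access; for example (assumed usage, mirroring the
setup() methods of the benchmarks further down in this diff):

    BenchmarkSchemaInfo schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get("basic");
    AggregatorFactory[] aggs = schemaInfo.getAggsArray(); // ingestion-time aggregators
    Interval interval = schemaInfo.getDataInterval();     // 0..1000000 for every schema here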
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/EnumeratedTreeDistribution.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/EnumeratedTreeDistribution.java
new file mode 100644
index 00000000000..5cab5af7ee7
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/EnumeratedTreeDistribution.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import org.apache.commons.math3.distribution.EnumeratedDistribution;
+import org.apache.commons.math3.util.Pair;
+
+import java.util.List;
+import java.util.TreeMap;
+
+/*
+ * EnumeratedDistribution's sample() method does a linear scan through the array of probabilities.
+ *
+ * This is too slow with high-cardinality value sets, so this subclass overrides sample() to use
+ * a TreeMap instead.
+ */
+public class EnumeratedTreeDistribution<T> extends EnumeratedDistribution
+{
+  private TreeMap<Double, Integer> probabilityRanges;
+  private List<Pair<T, Double>> normalizedPmf;
+
+  public EnumeratedTreeDistribution(final List<Pair<T, Double>> pmf)
+  {
+    super(pmf);
+
+    // build the cumulative-probability map: each key marks where a value's range begins, for floorEntry() lookup
+    probabilityRanges = new TreeMap<Double, Integer>();
+    normalizedPmf = this.getPmf();
+    double cumulativep = 0.0;
+    for (int i = 0; i < normalizedPmf.size(); i++) {
+      probabilityRanges.put(cumulativep, i);
+      Pair<T, Double> pair = normalizedPmf.get(i);
+      cumulativep += pair.getSecond();
+    }
+  }
+
+  @Override
+  public T sample()
+  {
+    final double randomValue = random.nextDouble();
+    Integer valueIndex = probabilityRanges.floorEntry(randomValue).getValue();
+    return normalizedPmf.get(valueIndex).getFirst();
+  }
+}
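To see why the floorEntry() lookup is correct, consider a toy pmf (illustrative
values, not from the patch). Each TreeMap key is the cumulative probability at which
a value's range begins, so flooring a uniform draw lands in the owning range in
O(log n) instead of a linear scan:

    TreeMap<Double, Integer> ranges = new TreeMap<>();
    ranges.put(0.0, 0); // index 0 owns [0.0, 0.5)
    ranges.put(0.5, 1); // index 1 owns [0.5, 0.8)
    ranges.put(0.8, 2); // index 2 owns [0.8, 1.0)
    int index = ranges.floorEntry(0.65).getValue(); // 1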
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/RealRoundingDistribution.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/RealRoundingDistribution.java
new file mode 100644
index 00000000000..e913f3a8a0f
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/RealRoundingDistribution.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import org.apache.commons.math3.distribution.AbstractIntegerDistribution;
+import org.apache.commons.math3.distribution.AbstractRealDistribution;
+
+/*
+ * Rounds the output values from the sample() function of an AbstractRealDistribution.
+ */
+public class RealRoundingDistribution extends AbstractIntegerDistribution
+{
+  private AbstractRealDistribution realDist;
+
+  public RealRoundingDistribution(AbstractRealDistribution realDist)
+  {
+    this.realDist = realDist;
+  }
+
+  @Override
+  public double probability(int x)
+  {
+    return 0;
+  }
+
+  @Override
+  public double cumulativeProbability(int x)
+  {
+    return 0;
+  }
+
+  @Override
+  public double getNumericalMean()
+  {
+    return 0;
+  }
+
+  @Override
+  public double getNumericalVariance()
+  {
+    return 0;
+  }
+
+  @Override
+  public int getSupportLowerBound()
+  {
+    return 0;
+  }
+
+  @Override
+  public int getSupportUpperBound()
+  {
+    return 0;
+  }
+
+  @Override
+  public boolean isSupportConnected()
+  {
+    return false;
+  }
+
+  @Override
+  public void reseedRandomGenerator(long seed)
+  {
+    realDist.reseedRandomGenerator(seed);
+  }
+
+  @Override
+  public int sample()
+  {
+    double randomVal = realDist.sample();
+    return (int) Math.round(randomVal);
+  }
+}
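A hedged usage sketch (parameters borrowed from the metFloatNormal column defined
earlier in this diff): the zero-returning probability/mean/variance overrides are
never consulted by the data generator, which only calls sample() and
reseedRandomGenerator():

    RealRoundingDistribution dist =
        new RealRoundingDistribution(new NormalDistribution(5000.0, 1.0));
    dist.reseedRandomGenerator(9999);
    int sample = dist.sample(); // a draw from N(5000, 1), rounded to the nearest int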
diff --git a/benchmarks/src/main/java/io/druid/benchmark/datagen/SequentialDistribution.java b/benchmarks/src/main/java/io/druid/benchmark/datagen/SequentialDistribution.java
new file mode 100644
index 00000000000..b73d6253e52
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/datagen/SequentialDistribution.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.datagen;
+
+import org.apache.commons.math3.distribution.EnumeratedDistribution;
+import org.apache.commons.math3.util.Pair;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class SequentialDistribution extends EnumeratedDistribution
+{
+
+  private Integer start;
+  private Integer end;
+  private List<Object> enumeratedValues;
+  private int counter;
+
+  public SequentialDistribution(Integer start, Integer end, List<Object> enumeratedValues)
+  {
+    // just pass in some bogus probability mass function, we won't be using it
+    super(Arrays.asList(new Pair<Object, Double>(null, 1.0)));
+    this.start = start;
+    this.end = end;
+    this.enumeratedValues = enumeratedValues;
+    if (enumeratedValues == null) {
+      counter = start;
+    } else {
+      counter = 0;
+    }
+  }
+
+  @Override
+  public Object sample()
+  {
+    Object ret;
+    if (enumeratedValues != null) {
+      ret = enumeratedValues.get(counter);
+      counter = (counter + 1) % enumeratedValues.size();
+    } else {
+      ret = counter;
+      counter++;
+      if (counter >= end) {
+        counter = start;
+      }
+    }
+    return ret;
+  }
+}
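For illustration (not part of the patch), sample() cycles deterministically; with
start=0, end=3, and no enumerated values:

    SequentialDistribution seq = new SequentialDistribution(0, 3, null);
    for (int i = 0; i < 6; i++) {
      System.out.print(seq.sample() + " "); // prints: 0 1 2 0 1 2
    }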
diff --git a/benchmarks/src/main/java/io/druid/benchmark/indexing/IncrementalIndexReadBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/indexing/IncrementalIndexReadBenchmark.java
new file mode 100644
index 00000000000..9e54a79bcd1
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/indexing/IncrementalIndexReadBenchmark.java
@@ -0,0 +1,210 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.indexing;
+
+import com.google.common.collect.Lists;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.js.JavaScriptConfig;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.filter.DimFilter;
+import io.druid.query.filter.InDimFilter;
+import io.druid.query.filter.JavaScriptDimFilter;
+import io.druid.query.filter.OrDimFilter;
+import io.druid.query.filter.RegexDimFilter;
+import io.druid.query.filter.SearchQueryDimFilter;
+import io.druid.query.ordering.StringComparators;
+import io.druid.query.search.search.ContainsSearchQuerySpec;
+import io.druid.segment.Cursor;
+import io.druid.segment.DimensionSelector;
+import io.druid.segment.VirtualColumns;
+import io.druid.segment.data.IndexedInts;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.IncrementalIndexStorageAdapter;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class IncrementalIndexReadBenchmark
+{
+  @Param({"750000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  @Param({"true", "false"})
+  private boolean rollup;
+
+  private static final Logger log = new Logger(IncrementalIndexReadBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private IncrementalIndex incIndex;
+
+  private BenchmarkSchemaInfo schemaInfo;
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + +System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    incIndex = makeIncIndex();
+
+    for (int j = 0; j < rowsPerSegment; j++) {
+      InputRow row = gen.nextRow();
+      if (j % 10000 == 0) {
+        log.info(j + " rows generated.");
+      }
+      incIndex.add(row);
+    }
+
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .withRollup(rollup)
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void read(Blackhole blackhole) throws Exception
+  {
+    IncrementalIndexStorageAdapter sa = new IncrementalIndexStorageAdapter(incIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, null);
+    Cursor cursor = Sequences.toList(Sequences.limit(cursors, 1), Lists.<Cursor>newArrayList()).get(0);
+
+    List<DimensionSelector> selectors = new ArrayList<>();
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimSequential", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimZipf", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimUniform", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimSequentialHalfNull", null)));
+
+    cursor.reset();
+    while (!cursor.isDone()) {
+      for (DimensionSelector selector : selectors) {
+        IndexedInts row = selector.getRow();
+        blackhole.consume(selector.lookupName(row.get(0)));
+      }
+      cursor.advance();
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void readWithFilters(Blackhole blackhole) throws Exception
+  {
+    DimFilter filter = new OrDimFilter(
+        Arrays.asList(
+            new BoundDimFilter("dimSequential", "-1", "-1", true, true, null, null, StringComparators.ALPHANUMERIC),
+            new JavaScriptDimFilter("dimSequential", "function(x) { return false }", null, JavaScriptConfig.getEnabledInstance()),
+            new RegexDimFilter("dimSequential", "X", null),
+            new SearchQueryDimFilter("dimSequential", new ContainsSearchQuerySpec("X", false), null),
+            new InDimFilter("dimSequential", Arrays.asList("X"), null)
+        )
+    );
+
+    IncrementalIndexStorageAdapter sa = new IncrementalIndexStorageAdapter(incIndex);
+    Sequence<Cursor> cursors = makeCursors(sa, filter);
+    Cursor cursor = Sequences.toList(Sequences.limit(cursors, 1), Lists.<Cursor>newArrayList()).get(0);
+
+    List<DimensionSelector> selectors = new ArrayList<>();
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimSequential", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimZipf", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimUniform", null)));
+    selectors.add(cursor.makeDimensionSelector(new DefaultDimensionSpec("dimSequentialHalfNull", null)));
+
+    cursor.reset();
+    while (!cursor.isDone()) {
+      for (DimensionSelector selector : selectors) {
+        IndexedInts row = selector.getRow();
+        blackhole.consume(selector.lookupName(row.get(0)));
+      }
+      cursor.advance();
+    }
+  }
+
+  private Sequence<Cursor> makeCursors(IncrementalIndexStorageAdapter sa, DimFilter filter)
+  {
+    return sa.makeCursors(
+        filter == null ? null : filter.toFilter(), // read() passes a null filter
+        schemaInfo.getDataInterval(),
+        VirtualColumns.EMPTY,
+        Granularities.ALL,
+        false
+    );
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexIngestionBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexIngestionBenchmark.java
new file mode 100644
index 00000000000..015fe82b80e
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexIngestionBenchmark.java
@@ -0,0 +1,132 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.indexing;
+
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class IndexIngestionBenchmark
+{
+  @Param({"75000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  @Param({"true", "false"})
+  private boolean rollup;
+
+  private static final Logger log = new Logger(IndexIngestionBenchmark.class);
+  private static final int RNG_SEED = 9999;
+
+  private IncrementalIndex incIndex;
+  private ArrayList<InputRow> rows;
+  private BenchmarkSchemaInfo schemaInfo;
+
+  @Setup
+  public void setup() throws IOException
+  {
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    rows = new ArrayList<InputRow>();
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    for (int i = 0; i < rowsPerSegment; i++) {
+      InputRow row = gen.nextRow();
+      if (i % 10000 == 0) {
+        log.info(i + " rows generated.");
+      }
+      rows.add(row);
+    }
+  }
+
+  @Setup(Level.Invocation)
+  public void setup2() throws IOException
+  {
+    incIndex = makeIncIndex();
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .withRollup(rollup)
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment * 2
+    );
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void addRows(Blackhole blackhole) throws Exception
+  {
+    for (int i = 0; i < rowsPerSegment; i++) {
+      InputRow row = rows.get(i);
+      int rv = incIndex.add(row);
+      blackhole.consume(rv);
+    }
+  }
+}
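These classes follow the standard JMH pattern, so besides the shaded benchmarks jar
they can be launched from a small runner; a sketch using JMH's public API (the
class name BenchRunner is invented for illustration):

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class BenchRunner
    {
      public static void main(String[] args) throws Exception
      {
        new Runner(new OptionsBuilder()
            .include(IndexIngestionBenchmark.class.getSimpleName())
            .build()).run();
      }
    }

Note the Level.Invocation setup above: every addRows() call gets a fresh
OnheapIncrementalIndex, so the benchmark measures ingestion into an empty index
rather than repeated adds into an ever-growing one.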
diff --git a/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexMergeBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexMergeBenchmark.java
new file mode 100644
index 00000000000..3702f464305
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexMergeBenchmark.java
@@ -0,0 +1,232 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.indexing;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMerger;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class IndexMergeBenchmark
+{
+  @Param({"5"})
+  private int numSegments;
+
+  @Param({"75000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  @Param({"true", "false"})
+  private boolean rollup;
+
+  private static final Logger log = new Logger(IndexMergeBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMerger INDEX_MERGER;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private List<QueryableIndex> indexesToMerge;
+  private BenchmarkSchemaInfo schemaInfo;
+  private File tmpDir;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER = new IndexMerger(JSON_MAPPER, INDEX_IO);
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    indexesToMerge = new ArrayList<>();
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    for (int i = 0; i < numSegments; i++) {
+      BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+          schemaInfo.getColumnSchemas(),
+          RNG_SEED + i,
+          schemaInfo.getDataInterval(),
+          rowsPerSegment
+      );
+
+      IncrementalIndex incIndex = makeIncIndex();
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        InputRow row = gen.nextRow();
+        if (j % 10000 == 0) {
+          log.info(j + " rows generated.");
+        }
+        incIndex.add(row);
+      }
+
+      tmpDir = Files.createTempDir();
+      log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+
+      File indexFile = INDEX_MERGER_V9.persist(
+          incIndex,
+          tmpDir,
+          new IndexSpec()
+      );
+
+      QueryableIndex qIndex = INDEX_IO.loadIndex(indexFile);
+      indexesToMerge.add(qIndex);
+    }
+  }
+
+  @TearDown
+  public void tearDown() throws IOException
+  {
+    FileUtils.deleteDirectory(tmpDir);
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .withRollup(rollup)
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void merge(Blackhole blackhole) throws Exception
+  {
+    File tmpFile = File.createTempFile("IndexMergeBenchmark-MERGEDFILE-" + System.currentTimeMillis(), ".TEMPFILE");
+    tmpFile.delete();
+    tmpFile.mkdirs();
+    try {
+      log.info(tmpFile.getAbsolutePath() + " isFile: " + tmpFile.isFile() + " isDir:" + tmpFile.isDirectory());
+
+      File mergedFile = INDEX_MERGER.mergeQueryableIndex(
+          indexesToMerge,
+          rollup,
+          schemaInfo.getAggsArray(),
+          tmpFile,
+          new IndexSpec()
+      );
+
+      blackhole.consume(mergedFile);
+    }
+    finally {
+      tmpFile.delete();
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void mergeV9(Blackhole blackhole) throws Exception
+  {
+    File tmpFile = File.createTempFile("IndexMergeBenchmark-MERGEDFILE-V9-" + System.currentTimeMillis(), ".TEMPFILE");
+    tmpFile.delete();
+    tmpFile.mkdirs();
+    try {
+      log.info(tmpFile.getAbsolutePath() + " isFile: " + tmpFile.isFile() + " isDir:" + tmpFile.isDirectory());
+
+      File mergedFile = INDEX_MERGER_V9.mergeQueryableIndex(
+          indexesToMerge,
+          rollup,
+          schemaInfo.getAggsArray(),
+          tmpFile,
+          new IndexSpec()
+      );
+
+      blackhole.consume(mergedFile);
+    }
+    finally {
+      tmpFile.delete();
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexPersistBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexPersistBenchmark.java
new file mode 100644
index 00000000000..6e376261604
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/indexing/IndexPersistBenchmark.java
@@ -0,0 +1,215 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.indexing;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMerger;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class IndexPersistBenchmark
+{
+  @Param({"75000"})
+  private int rowsPerSegment;
+
+  @Param({"basic"})
+  private String schema;
+
+  @Param({"true", "false"})
+  private boolean rollup;
+
+  private static final Logger log = new Logger(IndexPersistBenchmark.class);
+  private static final int RNG_SEED = 9999;
+
+  private IncrementalIndex incIndex;
+  private ArrayList<InputRow> rows;
+  private BenchmarkSchemaInfo schemaInfo;
+
+  private static final IndexMerger INDEX_MERGER;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER = new IndexMerger(JSON_MAPPER, INDEX_IO);
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    rows = new ArrayList<InputRow>();
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schema);
+
+    BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    for (int i = 0; i < rowsPerSegment; i++) {
+      InputRow row = gen.nextRow();
+      if (i % 10000 == 0) {
+        log.info(i + " rows generated.");
+      }
+      rows.add(row);
+    }
+  }
+
+  @Setup(Level.Iteration)
+  public void setup2() throws IOException
+  {
+    incIndex = makeIncIndex();
+    for (int i = 0; i < rowsPerSegment; i++) {
+      InputRow row = rows.get(i);
+      incIndex.add(row);
+    }
+  }
+
+  @TearDown(Level.Iteration)
+  public void teardown() throws IOException
+  {
+    incIndex.close();
+    incIndex = null;
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .withRollup(rollup)
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void persist(Blackhole blackhole) throws Exception
+  {
+    File tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+    try {
+      File indexFile = INDEX_MERGER.persist(
+          incIndex,
+          tmpDir,
+          new IndexSpec()
+      );
+
+      blackhole.consume(indexFile);
+    }
+    finally {
+      FileUtils.deleteDirectory(tmpDir);
+    }
+
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void persistV9(Blackhole blackhole) throws Exception
+  {
+    File tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+    try {
+      File indexFile = INDEX_MERGER_V9.persist(
+          incIndex,
+          tmpDir,
+          new IndexSpec()
+      );
+
+      blackhole.consume(indexFile);
+
+    }
+    finally {
+      FileUtils.deleteDirectory(tmpDir);
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/GroupByBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/query/GroupByBenchmark.java
new file mode 100644
index 00000000000..0a1001e8a7c
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/GroupByBenchmark.java
@@ -0,0 +1,651 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.query;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.dataformat.smile.SmileFactory;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.base.Throwables;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.collections.BlockingPool;
+import io.druid.collections.StupidPool;
+import io.druid.concurrent.Execs;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.Row;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.granularity.Granularity;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.offheap.OffheapBufferGenerator;
+import io.druid.query.DruidProcessingConfig;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.LongSumAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.dimension.DimensionSpec;
+import io.druid.query.groupby.GroupByQuery;
+import io.druid.query.groupby.GroupByQueryConfig;
+import io.druid.query.groupby.GroupByQueryEngine;
+import io.druid.query.groupby.GroupByQueryQueryToolChest;
+import io.druid.query.groupby.GroupByQueryRunnerFactory;
+import io.druid.query.groupby.strategy.GroupByStrategySelector;
+import io.druid.query.groupby.strategy.GroupByStrategyV1;
+import io.druid.query.groupby.strategy.GroupByStrategyV2;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.segment.IncrementalIndexSegment;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.column.ValueType;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 15)
+@Measurement(iterations = 30)
+public class GroupByBenchmark
+{
+  @Param({"4"})
+  private int numSegments;
+
+  @Param({"2", "4"})
+  private int numProcessingThreads;
+
+  @Param({"-1"})
+  private int initialBuckets;
+
+  @Param({"100000"})
+  private int rowsPerSegment;
+
+  @Param({"basic.A", "basic.nested"})
+  private String schemaAndQuery;
+
+  @Param({"v1", "v2"})
+  private String defaultStrategy;
+
+  @Param({"all", "day"})
+  private String queryGranularity;
+
+  private static final Logger log = new Logger(GroupByBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private File tmpDir;
+  private IncrementalIndex anIncrementalIndex;
+  private List<QueryableIndex> queryableIndexes;
+
+  private QueryRunnerFactory<Row, GroupByQuery> factory;
+
+  private BenchmarkSchemaInfo schemaInfo;
+  private GroupByQuery query;
+
+  private ExecutorService executorService;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  private static final Map<String, Map<String, GroupByQuery>> SCHEMA_QUERY_MAP = new LinkedHashMap<>();
+
+  private void setupQueries()
+  {
+    // queries for the basic schema
+    Map<String, GroupByQuery> basicQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+
+    { // basic.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "sumLongSequential",
+          "sumLongSequential"
+      ));
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null),
+              new DefaultDimensionSpec("dimZipf", null)
+              //new DefaultDimensionSpec("dimUniform", null),
+              //new DefaultDimensionSpec("dimSequentialHalfNull", null)
+              //new DefaultDimensionSpec("dimMultivalEnumerated", null), // multi-value dims greatly increase running time; disabled for now
+              //new DefaultDimensionSpec("dimMultivalEnumerated2", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      basicQueries.put("A", queryA);
+    }
+
+    { // basic.nested
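+      // Nested groupBy: the inner query aggregates at DAY granularity and the outer query re-aggregates its results at WEEK granularity.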
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "sumLongSequential",
+          "sumLongSequential"
+      ));
+
+      GroupByQuery subqueryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null),
+              new DefaultDimensionSpec("dimZipf", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularities.DAY)
+          .build();
+
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource(subqueryA)
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", null)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularities.WEEK)
+          .build();
+
+      basicQueries.put("nested", queryA);
+    }
+    SCHEMA_QUERY_MAP.put("basic", basicQueries);
+
+    // simple one column schema, for testing performance difference between querying on numeric values as Strings and
+    // directly as longs
+    Map<String, GroupByQuery> simpleQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo simpleSchema = BenchmarkSchemas.SCHEMA_MAP.get("simple");
+
+    { // simple.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(simpleSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "rows",
+          "rows"
+      ));
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", "dimSequential", ValueType.STRING)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      simpleQueries.put("A", queryA);
+    }
+    SCHEMA_QUERY_MAP.put("simple", simpleQueries);
+
+
+    Map<String, GroupByQuery> simpleLongQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo simpleLongSchema = BenchmarkSchemas.SCHEMA_MAP.get("simpleLong");
+    { // simpleLong.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(simpleLongSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "rows",
+          "rows"
+      ));
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", "dimSequential", ValueType.LONG)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      simpleLongQueries.put("A", queryA);
+    }
+    SCHEMA_QUERY_MAP.put("simpleLong", simpleLongQueries);
+
+
+    Map<String, GroupByQuery> simpleFloatQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo simpleFloatSchema = BenchmarkSchemas.SCHEMA_MAP.get("simpleFloat");
+    { // simpleFloat.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(simpleFloatSchema.getDataInterval()));
+      List<AggregatorFactory> queryAggs = new ArrayList<>();
+      queryAggs.add(new LongSumAggregatorFactory(
+          "rows",
+          "rows"
+      ));
+      GroupByQuery queryA = GroupByQuery
+          .builder()
+          .setDataSource("blah")
+          .setQuerySegmentSpec(intervalSpec)
+          .setDimensions(Lists.<DimensionSpec>newArrayList(
+              new DefaultDimensionSpec("dimSequential", "dimSequential", ValueType.FLOAT)
+          ))
+          .setAggregatorSpecs(
+              queryAggs
+          )
+          .setGranularity(Granularity.fromString(queryGranularity))
+          .build();
+
+      simpleFloatQueries.put("A", queryA);
+    }
+    SCHEMA_QUERY_MAP.put("simpleFloat", simpleFloatQueries);
+  }
+
+  @Setup(Level.Trial)
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+    executorService = Execs.multiThreaded(numProcessingThreads, "GroupByThreadPool[%d]");
+
+    setupQueries();
+
+    String[] schemaQuery = schemaAndQuery.split("\\.");
+    String schemaName = schemaQuery[0];
+    String queryName = schemaQuery[1];
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schemaName);
+    query = SCHEMA_QUERY_MAP.get(schemaName).get(queryName);
+
+    final BenchmarkDataGenerator dataGenerator = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED + 1,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: %s", tmpDir.getAbsolutePath());
+
+    // queryableIndexes   -> numSegments worth of on-disk segments
+    // anIncrementalIndex -> the last incremental index
+    anIncrementalIndex = null;
+    queryableIndexes = new ArrayList<>(numSegments);
+
+    for (int i = 0; i < numSegments; i++) {
+      log.info("Generating rows for segment %d/%d", i + 1, numSegments);
+
+      final IncrementalIndex index = makeIncIndex(schemaInfo.isWithRollup());
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        final InputRow row = dataGenerator.nextRow();
+        if (j % 20000 == 0) {
+          log.info("%,d/%,d rows generated.", i * rowsPerSegment + j, rowsPerSegment * numSegments);
+        }
+        index.add(row);
+      }
+
+      log.info(
+          "%,d/%,d rows generated, persisting segment %d/%d.",
+          (i + 1) * rowsPerSegment,
+          rowsPerSegment * numSegments,
+          i + 1,
+          numSegments
+      );
+
+      final File file = INDEX_MERGER_V9.persist(
+          index,
+          new File(tmpDir, String.valueOf(i)),
+          new IndexSpec()
+      );
+
+      queryableIndexes.add(INDEX_IO.loadIndex(file));
+
+      if (i == numSegments - 1) {
+        anIncrementalIndex = index;
+      } else {
+        index.close();
+      }
+    }
+
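+    // Off-heap "compute" buffers feed the group-by engines; each buffer is 250MB and the pool size is effectively unbounded.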
+    StupidPool<ByteBuffer> bufferPool = new StupidPool<>(
+        "GroupByBenchmark-computeBufferPool",
+        new OffheapBufferGenerator("compute", 250_000_000),
+        0,
+        Integer.MAX_VALUE
+    );
+
+    // limit of 2 is required since we simulate both historical merge and broker merge in the same process
+    BlockingPool<ByteBuffer> mergePool = new BlockingPool<>(
+        new OffheapBufferGenerator("merge", 250_000_000),
+        2
+    );
+    final GroupByQueryConfig config = new GroupByQueryConfig()
+    {
+      @Override
+      public String getDefaultStrategy()
+      {
+        return defaultStrategy;
+      }
+
+      @Override
+      public int getBufferGrouperInitialBuckets()
+      {
+        return initialBuckets;
+      }
+
+      @Override
+      public long getMaxOnDiskStorage()
+      {
+        return 1_000_000_000L;
+      }
+    };
+    config.setSingleThreaded(false);
+    config.setMaxIntermediateRows(Integer.MAX_VALUE);
+    config.setMaxResults(Integer.MAX_VALUE);
+
+    DruidProcessingConfig druidProcessingConfig = new DruidProcessingConfig()
+    {
+      @Override
+      public int getNumThreads()
+      {
+        // Used by "v2" strategy for concurrencyHint
+        return numProcessingThreads;
+      }
+
+      @Override
+      public String getFormatString()
+      {
+        return null;
+      }
+    };
+
+    final Supplier<GroupByQueryConfig> configSupplier = Suppliers.ofInstance(config);
+    final GroupByStrategySelector strategySelector = new GroupByStrategySelector(
+        configSupplier,
+        new GroupByStrategyV1(
+            configSupplier,
+            new GroupByQueryEngine(configSupplier, bufferPool),
+            QueryBenchmarkUtil.NOOP_QUERYWATCHER,
+            bufferPool
+        ),
+        new GroupByStrategyV2(
+            druidProcessingConfig,
+            configSupplier,
+            bufferPool,
+            mergePool,
+            new ObjectMapper(new SmileFactory()),
+            QueryBenchmarkUtil.NOOP_QUERYWATCHER
+        )
+    );
+
+    factory = new GroupByQueryRunnerFactory(
+        strategySelector,
+        new GroupByQueryQueryToolChest(
+            strategySelector,
+            QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()
+        )
+    );
+  }
+
+  private IncrementalIndex makeIncIndex(boolean withRollup)
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .withRollup(withRollup)
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  @TearDown(Level.Trial)
+  public void tearDown()
+  {
+    try {
+      if (anIncrementalIndex != null) {
+        anIncrementalIndex.close();
+      }
+
+      if (queryableIndexes != null) {
+        for (QueryableIndex index : queryableIndexes) {
+          index.close();
+        }
+      }
+
+      if (tmpDir != null) {
+        FileUtils.deleteDirectory(tmpDir);
+      }
+    }
+    catch (IOException e) {
+      log.warn(e, "Failed to tear down, temp dir was: %s", tmpDir);
+      throw Throwables.propagate(e);
+    }
+  }
+
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+    QueryToolChest toolChest = factory.getToolchest();
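+    // Wrap the per-segment runner with result merging and finalization, mimicking broker-side post-processing.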
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleIncrementalIndex(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "incIndex",
+        new IncrementalIndexSegment(anIncrementalIndex, "incIndex")
+    );
+
+    List<Row> results = GroupByBenchmark.runQuery(factory, runner, query);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", queryableIndexes.get(0))
+    );
+
+    List<Row> results = GroupByBenchmark.runQuery(factory, runner, query);
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryMultiQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    QueryToolChest<Row, GroupByQuery> toolChest = factory.getToolchest();
+    QueryRunner<Row> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(
+            factory.mergeRunners(executorService, makeMultiRunners())
+        ),
+        (QueryToolChest) toolChest
+    );
+
+    Sequence<Row> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    List<Row> results = Sequences.toList(queryResult, Lists.<Row>newArrayList());
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryMultiQueryableIndexWithSpilling(Blackhole blackhole) throws Exception
+  {
+    QueryToolChest<Row, GroupByQuery> toolChest = factory.getToolchest();
+    QueryRunner<Row> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(
+            factory.mergeRunners(executorService, makeMultiRunners())
+        ),
+        (QueryToolChest) toolChest
+    );
+
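+    // Cap the in-memory buffer grouper at 4,000 entries to force spilling to disk (setup allows up to 1GB of on-disk storage).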
+    final GroupByQuery spillingQuery = query.withOverriddenContext(
+        ImmutableMap.<String, Object>of("bufferGrouperMaxSize", 4000)
+    );
+    Sequence<Row> queryResult = theRunner.run(spillingQuery, Maps.<String, Object>newHashMap());
+    List<Row> results = Sequences.toList(queryResult, Lists.<Row>newArrayList());
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryMultiQueryableIndexWithSerde(Blackhole blackhole) throws Exception
+  {
+    QueryToolChest<Row, GroupByQuery> toolChest = factory.getToolchest();
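+    // The SerializingQueryRunner between the two merge stages round-trips rows through smile, so serde overhead (as between historical and broker) is included in the timing.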
+    QueryRunner<Row> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(
+            new SerializingQueryRunner<>(
+                new DefaultObjectMapper(new SmileFactory()),
+                Row.class,
+                toolChest.mergeResults(
+                    factory.mergeRunners(executorService, makeMultiRunners())
+                )
+            )
+        ),
+        (QueryToolChest) toolChest
+    );
+
+    Sequence<Row> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    List<Row> results = Sequences.toList(queryResult, Lists.<Row>newArrayList());
+
+    for (Row result : results) {
+      blackhole.consume(result);
+    }
+  }
+
+  private List<QueryRunner<Row>> makeMultiRunners()
+  {
+    List<QueryRunner<Row>> runners = Lists.newArrayList();
+    for (int i = 0; i < numSegments; i++) {
+      String segmentName = "qIndex" + i;
+      QueryRunner<Row> runner = QueryBenchmarkUtil.makeQueryRunner(
+          factory,
+          segmentName,
+          new QueryableIndexSegment(segmentName, queryableIndexes.get(i))
+      );
+      runners.add(factory.getToolchest().preMergeQueryDecoration(runner));
+    }
+    return runners;
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/QueryBenchmarkUtil.java b/benchmarks/src/main/java/io/druid/benchmark/query/QueryBenchmarkUtil.java
new file mode 100644
index 00000000000..662b0ed71e8
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/QueryBenchmarkUtil.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.query;
+
+import com.google.common.util.concurrent.ListenableFuture;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.query.BySegmentQueryRunner;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.IntervalChunkingQueryRunnerDecorator;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.QueryWatcher;
+import io.druid.segment.Segment;
+
+import java.util.Map;
+
+public class QueryBenchmarkUtil
+{
+  public static <T, QueryType extends Query<T>> QueryRunner<T> makeQueryRunner(
+      QueryRunnerFactory<T, QueryType> factory,
+      String segmentId,
+      Segment adapter
+  )
+  {
+    return new FinalizeResultsQueryRunner<T>(
+        new BySegmentQueryRunner<T>(
+            segmentId, adapter.getDataInterval().getStart(),
+            factory.createRunner(adapter)
+        ),
+        (QueryToolChest<T, Query<T>>)factory.getToolchest()
+    );
+  }
+
+  public static IntervalChunkingQueryRunnerDecorator NoopIntervalChunkingQueryRunnerDecorator()
+  {
+    return new IntervalChunkingQueryRunnerDecorator(null, null, null) {
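+      // Pass-through decorator: runs the delegate directly instead of splitting the query interval into chunks.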
+      @Override
+      public <T> QueryRunner<T> decorate(final QueryRunner<T> delegate,
+                                         QueryToolChest<T, ? extends Query<T>> toolChest) {
+        return new QueryRunner<T>() {
+          @Override
+          public Sequence<T> run(Query<T> query, Map<String, Object> responseContext)
+          {
+            return delegate.run(query, responseContext);
+          }
+        };
+      }
+    };
+  }
+
+  public static final QueryWatcher NOOP_QUERYWATCHER = new QueryWatcher()
+  {
+    @Override
+    public void registerQuery(Query query, ListenableFuture future)
+    {
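+      // No-op: the benchmarks never cancel queries, so there is nothing to watch.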
+
+    }
+  };
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/SearchBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/query/SearchBenchmark.java
new file mode 100644
index 00000000000..b2584fc9545
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/SearchBenchmark.java
@@ -0,0 +1,490 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+
+package io.druid.benchmark.query;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Suppliers;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.concurrent.Execs;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.Row;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.Druids;
+import io.druid.query.Druids.SearchQueryBuilder;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.Result;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.extraction.DimExtractionFn;
+import io.druid.query.extraction.IdentityExtractionFn;
+import io.druid.query.extraction.LowerExtractionFn;
+import io.druid.query.extraction.StrlenExtractionFn;
+import io.druid.query.extraction.SubstringDimExtractionFn;
+import io.druid.query.extraction.UpperExtractionFn;
+import io.druid.query.filter.AndDimFilter;
+import io.druid.query.filter.BoundDimFilter;
+import io.druid.query.filter.DimFilter;
+import io.druid.query.filter.InDimFilter;
+import io.druid.query.filter.SelectorDimFilter;
+import io.druid.query.search.SearchQueryQueryToolChest;
+import io.druid.query.search.SearchQueryRunnerFactory;
+import io.druid.query.search.SearchResultValue;
+import io.druid.query.search.SearchStrategySelector;
+import io.druid.query.search.search.SearchHit;
+import io.druid.query.search.search.SearchQuery;
+import io.druid.query.search.search.SearchQueryConfig;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.segment.IncrementalIndexSegment;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class SearchBenchmark
+{
+  @Param({"1"})
+  private int numSegments;
+
+  @Param({"750000"})
+  private int rowsPerSegment;
+
+  @Param({"basic.A"})
+  private String schemaAndQuery;
+
+  @Param({"1000"})
+  private int limit;
+
+  private static final Logger log = new Logger(SearchBenchmark.class);
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private List<IncrementalIndex> incIndexes;
+  private List<QueryableIndex> qIndexes;
+
+  private QueryRunnerFactory factory;
+  private BenchmarkSchemaInfo schemaInfo;
+  private Druids.SearchQueryBuilder queryBuilder;
+  private SearchQuery query;
+  private File tmpDir;
+
+  private ExecutorService executorService;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  private static final Map<String, Map<String, Druids.SearchQueryBuilder>> SCHEMA_QUERY_MAP = new LinkedHashMap<>();
+
+  private void setupQueries()
+  {
+    // queries for the basic schema
+    final Map<String, SearchQueryBuilder> basicQueries = new LinkedHashMap<>();
+    final BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+
+    final List<String> queryTypes = ImmutableList.of("A", "B", "C", "D");
+    for (final String eachType : queryTypes) {
+      basicQueries.put(eachType, makeQuery(eachType, basicSchema));
+    }
+
+    SCHEMA_QUERY_MAP.put("basic", basicQueries);
+  }
+
+  private static SearchQueryBuilder makeQuery(final String name, final BenchmarkSchemaInfo basicSchema)
+  {
+    switch (name) {
+      case "A":
+        return basicA(basicSchema);
+      case "B":
+        return basicB(basicSchema);
+      case "C":
+        return basicC(basicSchema);
+      case "D":
+        return basicD(basicSchema);
+      default:
+        return null;
+    }
+  }
+
+  private static SearchQueryBuilder basicA(final BenchmarkSchemaInfo basicSchema)
+  {
+    final QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+    return Druids.newSearchQueryBuilder()
+                 .dataSource("blah")
+                 .granularity(Granularities.ALL)
+                 .intervals(intervalSpec)
+                 .query("123");
+  }
+
+  private static SearchQueryBuilder basicB(final BenchmarkSchemaInfo basicSchema)
+  {
+    final QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
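+    // Build IN-filter value lists matching roughly 10% of the 100,000 generated dimension values.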
+    final List<String> dimUniformFilterVals = Lists.newArrayList();
+    int resultNum = (int) (100000 * 0.1);
+    int step = 100000 / resultNum;
+    for (int i = 1; i < 100001 && dimUniformFilterVals.size() < resultNum; i += step) {
+      dimUniformFilterVals.add(String.valueOf(i));
+    }
+
+    List<String> dimHyperUniqueFilterVals = Lists.newArrayList();
+    resultNum = (int) (100000 * 0.1);
+    step = 100000 / resultNum;
+    for (int i = 0; i < 100001 && dimHyperUniqueFilterVals.size() < resultNum; i += step) {
+      dimHyperUniqueFilterVals.add(String.valueOf(i));
+    }
+
+    final List<DimFilter> dimFilters = Lists.newArrayList();
+    dimFilters.add(new InDimFilter("dimUniform", dimUniformFilterVals, null));
+    dimFilters.add(new InDimFilter("dimHyperUnique", dimHyperUniqueFilterVals, null));
+
+    return Druids.newSearchQueryBuilder()
+                 .dataSource("blah")
+                 .granularity(Granularities.ALL)
+                 .intervals(intervalSpec)
+                 .query("")
+                 .dimensions(Lists.newArrayList("dimUniform", "dimHyperUnique"))
+                 .filters(new AndDimFilter(dimFilters));
+  }
+
+  private static SearchQueryBuilder basicC(final BenchmarkSchemaInfo basicSchema)
+  {
+    final QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+    final List<String> dimUniformFilterVals = Lists.newArrayList();
+    final int resultNum = (int) (100000 * 0.1);
+    final int step = 100000 / resultNum;
+    for (int i = 1; i < 100001 && dimUniformFilterVals.size() < resultNum; i += step) {
+      dimUniformFilterVals.add(String.valueOf(i));
+    }
+
+    final String dimName = "dimUniform";
+    final List<DimFilter> dimFilters = Lists.newArrayList();
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, IdentityExtractionFn.getInstance()));
+    dimFilters.add(new SelectorDimFilter(dimName, "3", StrlenExtractionFn.instance()));
+    dimFilters.add(new BoundDimFilter(dimName, "100", "10000", true, true, true, new DimExtractionFn()
+    {
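+      // Ad-hoc extraction fn: parses each value as a long and adds one, exercising the extraction-fn filter path.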
+      @Override
+      public byte[] getCacheKey()
+      {
+        return new byte[]{0xF};
+      }
+
+      @Override
+      public String apply(String value)
+      {
+        return String.valueOf(Long.parseLong(value) + 1);
+      }
+
+      @Override
+      public boolean preservesOrdering()
+      {
+        return false;
+      }
+
+      @Override
+      public ExtractionType getExtractionType()
+      {
+        return ExtractionType.ONE_TO_ONE;
+      }
+    }, null));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, new LowerExtractionFn(null)));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, new UpperExtractionFn(null)));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, new SubstringDimExtractionFn(1, 3)));
+
+    return Druids.newSearchQueryBuilder()
+                 .dataSource("blah")
+                 .granularity(Granularities.ALL)
+                 .intervals(intervalSpec)
+                 .query("")
+                 .dimensions(Lists.newArrayList("dimUniform"))
+                 .filters(new AndDimFilter(dimFilters));
+  }
+
+  private static SearchQueryBuilder basicD(final BenchmarkSchemaInfo basicSchema)
+  {
+    final QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+    final List<String> dimUniformFilterVals = Lists.newArrayList();
+    final int resultNum = (int) (100000 * 0.1);
+    final int step = 100000 / resultNum;
+    for (int i = 1; i < 100001 && dimUniformFilterVals.size() < resultNum; i += step) {
+      dimUniformFilterVals.add(String.valueOf(i));
+    }
+
+    final String dimName = "dimUniform";
+    final List<DimFilter> dimFilters = Lists.newArrayList();
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, null));
+    dimFilters.add(new SelectorDimFilter(dimName, "3", null));
+    dimFilters.add(new BoundDimFilter(dimName, "100", "10000", true, true, true, null, null));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, null));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, null));
+    dimFilters.add(new InDimFilter(dimName, dimUniformFilterVals, null));
+
+    return Druids.newSearchQueryBuilder()
+                 .dataSource("blah")
+                 .granularity(Granularities.ALL)
+                 .intervals(intervalSpec)
+                 .query("")
+                 .dimensions(Lists.newArrayList("dimUniform"))
+                 .filters(new AndDimFilter(dimFilters));
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+    executorService = Execs.multiThreaded(numSegments, "SearchThreadPool");
+
+    setupQueries();
+
+    String[] schemaQuery = schemaAndQuery.split("\\.");
+    String schemaName = schemaQuery[0];
+    String queryName = schemaQuery[1];
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schemaName);
+    queryBuilder = SCHEMA_QUERY_MAP.get(schemaName).get(queryName);
+    queryBuilder.limit(limit);
+    query = queryBuilder.build();
+
+    incIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      log.info("Generating rows for segment " + i);
+      BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+          schemaInfo.getColumnSchemas(),
+          System.currentTimeMillis(),
+          schemaInfo.getDataInterval(),
+          rowsPerSegment
+      );
+
+      IncrementalIndex incIndex = makeIncIndex();
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        InputRow row = gen.nextRow();
+        if (j % 10000 == 0) {
+          log.info(j + " rows generated.");
+        }
+        incIndex.add(row);
+      }
+      incIndexes.add(incIndex);
+    }
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+
+    qIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      File indexFile = INDEX_MERGER_V9.persist(
+          incIndexes.get(i),
+          tmpDir,
+          new IndexSpec()
+      );
+
+      QueryableIndex qIndex = INDEX_IO.loadIndex(indexFile);
+      qIndexes.add(qIndex);
+    }
+
+    final SearchQueryConfig config = new SearchQueryConfig().withOverrides(query);
+    factory = new SearchQueryRunnerFactory(
+        new SearchStrategySelector(Suppliers.ofInstance(config)),
+        new SearchQueryQueryToolChest(
+            config,
+            QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator()
+        ),
+        QueryBenchmarkUtil.NOOP_QUERYWATCHER
+    );
+  }
+
+  @TearDown
+  public void tearDown() throws IOException
+  {
+    FileUtils.deleteDirectory(tmpDir);
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+    QueryToolChest toolChest = factory.getToolchest();
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleIncrementalIndex(Blackhole blackhole) throws Exception
+  {
+    QueryRunner<Result<SearchResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "incIndex",
+        new IncrementalIndexSegment(incIndexes.get(0), "incIndex")
+    );
+
+    List<Result<SearchResultValue>> results = SearchBenchmark.runQuery(factory, runner, query);
+    List<SearchHit> hits = results.get(0).getValue().getValue();
+    for (SearchHit hit : hits) {
+      blackhole.consume(hit);
+    }
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void querySingleQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    final QueryRunner<Result<SearchResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        "qIndex",
+        new QueryableIndexSegment("qIndex", qIndexes.get(0))
+    );
+
+    List<Result<SearchResultValue>> results = SearchBenchmark.runQuery(factory, runner, query);
+    List<SearchHit> hits = results.get(0).getValue().getValue();
+    for (SearchHit hit : hits) {
+      blackhole.consume(hit);
+    }
+  }
+
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryMultiQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    List<QueryRunner<Result<SearchResultValue>>> singleSegmentRunners = Lists.newArrayList();
+    QueryToolChest toolChest = factory.getToolchest();
+    for (int i = 0; i < numSegments; i++) {
+      String segmentName = "qIndex" + i;
+      final QueryRunner<Result<SearchResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+          factory,
+          segmentName,
+          new QueryableIndexSegment(segmentName, qIndexes.get(i))
+      );
+      singleSegmentRunners.add(toolChest.preMergeQueryDecoration(runner));
+    }
+
+    QueryRunner theRunner = toolChest.postMergeQueryDecoration(
+        new FinalizeResultsQueryRunner<>(
+            toolChest.mergeResults(factory.mergeRunners(executorService, singleSegmentRunners)),
+            toolChest
+        )
+    );
+
+    Sequence<Result<SearchResultValue>> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    List<Result<SearchResultValue>> results = Sequences.toList(
+        queryResult,
+        Lists.<Result<SearchResultValue>>newArrayList()
+    );
+
+    for (Result<SearchResultValue> result : results) {
+      List<SearchHit> hits = result.getValue().getValue();
+      for (SearchHit hit : hits) {
+        blackhole.consume(hit);
+      }
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/SelectBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/query/SelectBenchmark.java
new file mode 100644
index 00000000000..21033adc7a7
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/SelectBenchmark.java
@@ -0,0 +1,400 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.query;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.concurrent.Execs;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.Row;
+import io.druid.data.input.impl.DimensionsSpec;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.jackson.DefaultObjectMapper;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.Druids;
+import io.druid.query.FinalizeResultsQueryRunner;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+import io.druid.query.QueryRunnerFactory;
+import io.druid.query.QueryToolChest;
+import io.druid.query.Result;
+import io.druid.query.TableDataSource;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.select.EventHolder;
+import io.druid.query.select.PagingSpec;
+import io.druid.query.select.SelectQuery;
+import io.druid.query.select.SelectQueryConfig;
+import io.druid.query.select.SelectQueryEngine;
+import io.druid.query.select.SelectQueryQueryToolChest;
+import io.druid.query.select.SelectQueryRunnerFactory;
+import io.druid.query.select.SelectResultValue;
+import io.druid.query.spec.MultipleIntervalSegmentSpec;
+import io.druid.query.spec.QuerySegmentSpec;
+import io.druid.segment.IncrementalIndexSegment;
+import io.druid.segment.IndexIO;
+import io.druid.segment.IndexMergerV9;
+import io.druid.segment.IndexSpec;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.QueryableIndexSegment;
+import io.druid.segment.column.ColumnConfig;
+import io.druid.segment.incremental.IncrementalIndex;
+import io.druid.segment.incremental.IncrementalIndexSchema;
+import io.druid.segment.incremental.OnheapIncrementalIndex;
+import io.druid.segment.serde.ComplexMetrics;
+import org.apache.commons.io.FileUtils;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.TimeUnit;
+
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 10)
+@Measurement(iterations = 25)
+public class SelectBenchmark
+{
+  @Param({"1"})
+  private int numSegments;
+
+  @Param({"25000"})
+  private int rowsPerSegment;
+
+  @Param({"basic.A"})
+  private String schemaAndQuery;
+
+  @Param({"1000"})
+  private int pagingThreshold;
+
+  private static final Logger log = new Logger(SelectBenchmark.class);
+  private static final int RNG_SEED = 9999;
+  private static final IndexMergerV9 INDEX_MERGER_V9;
+  private static final IndexIO INDEX_IO;
+  public static final ObjectMapper JSON_MAPPER;
+
+  private List<IncrementalIndex> incIndexes;
+  private List<QueryableIndex> qIndexes;
+
+  private QueryRunnerFactory factory;
+
+  private BenchmarkSchemaInfo schemaInfo;
+  private Druids.SelectQueryBuilder queryBuilder;
+  private SelectQuery query;
+  private File tmpDir;
+
+  private ExecutorService executorService;
+
+  static {
+    JSON_MAPPER = new DefaultObjectMapper();
+    INDEX_IO = new IndexIO(
+        JSON_MAPPER,
+        new ColumnConfig()
+        {
+          @Override
+          public int columnCacheSizeBytes()
+          {
+            return 0;
+          }
+        }
+    );
+    INDEX_MERGER_V9 = new IndexMergerV9(JSON_MAPPER, INDEX_IO);
+  }
+
+  private static final Map<String, Map<String, Druids.SelectQueryBuilder>> SCHEMA_QUERY_MAP = new LinkedHashMap<>();
+
+  private void setupQueries()
+  {
+    // queries for the basic schema
+    Map<String, Druids.SelectQueryBuilder> basicQueries = new LinkedHashMap<>();
+    BenchmarkSchemaInfo basicSchema = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+
+    { // basic.A
+      QuerySegmentSpec intervalSpec = new MultipleIntervalSegmentSpec(Arrays.asList(basicSchema.getDataInterval()));
+
+      Druids.SelectQueryBuilder queryBuilderA =
+          Druids.newSelectQueryBuilder()
+                .dataSource(new TableDataSource("blah"))
+                .dimensionSpecs(DefaultDimensionSpec.toSpec(Arrays.<String>asList()))
+                .metrics(Arrays.<String>asList())
+                .intervals(intervalSpec)
+                .granularity(Granularities.ALL)
+                .descending(false);
+
+      basicQueries.put("A", queryBuilderA);
+    }
+
+    SCHEMA_QUERY_MAP.put("basic", basicQueries);
+  }
+
+  @Setup
+  public void setup() throws IOException
+  {
+    log.info("SETUP CALLED AT " + System.currentTimeMillis());
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    executorService = Execs.multiThreaded(numSegments, "SelectThreadPool");
+
+    setupQueries();
+
+    String[] schemaQuery = schemaAndQuery.split("\\.");
+    String schemaName = schemaQuery[0];
+    String queryName = schemaQuery[1];
+
+    schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get(schemaName);
+    queryBuilder = SCHEMA_QUERY_MAP.get(schemaName).get(queryName);
+    queryBuilder.pagingSpec(PagingSpec.newSpec(pagingThreshold));
+    query = queryBuilder.build();
+
+    incIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      BenchmarkDataGenerator gen = new BenchmarkDataGenerator(
+          schemaInfo.getColumnSchemas(),
+          RNG_SEED + i,
+          schemaInfo.getDataInterval(),
+          rowsPerSegment
+      );
+
+      IncrementalIndex incIndex = makeIncIndex();
+
+      for (int j = 0; j < rowsPerSegment; j++) {
+        InputRow row = gen.nextRow();
+        if (j % 10000 == 0) {
+          log.info(j + " rows generated.");
+        }
+        incIndex.add(row);
+      }
+      incIndexes.add(incIndex);
+    }
+
+    tmpDir = Files.createTempDir();
+    log.info("Using temp dir: " + tmpDir.getAbsolutePath());
+
+    qIndexes = new ArrayList<>();
+    for (int i = 0; i < numSegments; i++) {
+      File indexFile = INDEX_MERGER_V9.persist(
+          incIndexes.get(i),
+          tmpDir,
+          new IndexSpec()
+      );
+      QueryableIndex qIndex = INDEX_IO.loadIndex(indexFile);
+      qIndexes.add(qIndex);
+    }
+
+    final Supplier<SelectQueryConfig> selectConfigSupplier = Suppliers.ofInstance(new SelectQueryConfig(true));
+
+    factory = new SelectQueryRunnerFactory(
+        new SelectQueryQueryToolChest(
+            JSON_MAPPER,
+            QueryBenchmarkUtil.NoopIntervalChunkingQueryRunnerDecorator(),
+            selectConfigSupplier
+        ),
+        new SelectQueryEngine(selectConfigSupplier),
+        QueryBenchmarkUtil.NOOP_QUERYWATCHER
+    );
+  }
+
+  @TearDown
+  public void tearDown() throws IOException
+  {
+    FileUtils.deleteDirectory(tmpDir);
+  }
+
+  private IncrementalIndex makeIncIndex()
+  {
+    return new OnheapIncrementalIndex(
+        new IncrementalIndexSchema.Builder()
+            .withQueryGranularity(Granularities.NONE)
+            .withMetrics(schemaInfo.getAggsArray())
+            .withDimensionsSpec(new DimensionsSpec(null, null, null))
+            .build(),
+        true,
+        false,
+        true,
+        rowsPerSegment
+    );
+  }
+
+  private static <T> List<T> runQuery(QueryRunnerFactory factory, QueryRunner runner, Query<T> query)
+  {
+
+    QueryToolChest toolChest = factory.getToolchest();
+    QueryRunner<T> theRunner = new FinalizeResultsQueryRunner<>(
+        toolChest.mergeResults(toolChest.preMergeQueryDecoration(runner)),
+        toolChest
+    );
+
+    Sequence<T> queryResult = theRunner.run(query, Maps.<String, Object>newHashMap());
+    return Sequences.toList(queryResult, Lists.<T>newArrayList());
+  }
+
+  // Don't run this benchmark with a query granularity other than QueryGranularities.ALL;
+  // this pagination function likely does not work correctly in that case.
+  private SelectQuery incrementQueryPagination(SelectQuery query, SelectResultValue prevResult)
+  {
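+    // Advance each segment's paging offset one past the last returned row so the next run fetches the following page.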
+    Map<String, Integer> pagingIdentifiers = prevResult.getPagingIdentifiers();
+    Map<String, Integer> newPagingIdentifiers = new HashMap<>();
+
+    for (String segmentId : pagingIdentifiers.keySet()) {
+      int newOffset = pagingIdentifiers.get(segmentId) + 1;
+      newPagingIdentifiers.put(segmentId, newOffset);
+    }
+
+    return query.withPagingSpec(new PagingSpec(newPagingIdentifiers, pagingThreshold));
+  }
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryIncrementalIndex(Blackhole blackhole) throws Exception
+  {
+    SelectQuery queryCopy = query.withPagingSpec(PagingSpec.newSpec(pagingThreshold));
+
+    String segmentId = "incIndex";
+    QueryRunner<Result<SelectResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        segmentId,
+        new IncrementalIndexSegment(incIndexes.get(0), segmentId)
+    );
+
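+    // Page through the segment until an empty result page signals the end of the data.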
+    boolean done = false;
+    while (!done) {
+      List<Result<SelectResultValue>> results = SelectBenchmark.runQuery(factory, runner, queryCopy);
+      SelectResultValue result = results.get(0).getValue();
+      if (result.getEvents().size() == 0) {
+        done = true;
+      } else {
+        for (EventHolder eh : result.getEvents()) {
+          blackhole.consume(eh);
+        }
+        queryCopy = incrementQueryPagination(queryCopy, result);
+      }
+    }
+  }
+
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    SelectQuery queryCopy = query.withPagingSpec(PagingSpec.newSpec(pagingThreshold));
+
+    String segmentId = "qIndex";
+    QueryRunner<Result<SelectResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+        factory,
+        segmentId,
+        new QueryableIndexSegment(segmentId, qIndexes.get(0))
+    );
+
+    boolean done = false;
+    while (!done) {
+      List<Result<SelectResultValue>> results = SelectBenchmark.runQuery(factory, runner, queryCopy);
+      SelectResultValue result = results.get(0).getValue();
+      if (result.getEvents().size() == 0) {
+        done = true;
+      } else {
+        for (EventHolder eh : result.getEvents()) {
+          blackhole.consume(eh);
+        }
+        queryCopy = incrementQueryPagination(queryCopy, result);
+      }
+    }
+  }
+
+
+  @Benchmark
+  @BenchmarkMode(Mode.AverageTime)
+  @OutputTimeUnit(TimeUnit.MICROSECONDS)
+  public void queryMultiQueryableIndex(Blackhole blackhole) throws Exception
+  {
+    SelectQuery queryCopy = query.withPagingSpec(PagingSpec.newSpec(pagingThreshold));
+
+    String segmentName;
+    List<QueryRunner<Result<SelectResultValue>>> singleSegmentRunners = Lists.newArrayList();
+    QueryToolChest toolChest = factory.getToolchest();
+    for (int i = 0; i < numSegments; i++) {
+      segmentName = "qIndex" + i;
+      QueryRunner<Result<SelectResultValue>> runner = QueryBenchmarkUtil.makeQueryRunner(
+          factory,
+          segmentName,
+          new QueryableIndexSegment(segmentName, qIndexes.get(i))
+      );
+      singleSegmentRunners.add(toolChest.preMergeQueryDecoration(runner));
+    }
+
+    QueryRunner theRunner = toolChest.postMergeQueryDecoration(
+        new FinalizeResultsQueryRunner<>(
+            toolChest.mergeResults(factory.mergeRunners(executorService, singleSegmentRunners)),
+            toolChest
+        )
+    );
+
+
+    boolean done = false;
+    while (!done) {
+      Sequence<Result<SelectResultValue>> queryResult = theRunner.run(queryCopy, Maps.<String, Object>newHashMap());
+      List<Result<SelectResultValue>> results = Sequences.toList(queryResult, Lists.<Result<SelectResultValue>>newArrayList());
+
+      SelectResultValue result = results.get(0).getValue();
+
+      if (result.getEvents().size() == 0) {
+        done = true;
+      } else {
+        for (EventHolder eh : result.getEvents()) {
+          blackhole.consume(eh);
+        }
+        queryCopy = incrementQueryPagination(queryCopy, result);
+      }
+    }
+  }
+}
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/SerializingQueryRunner.java b/benchmarks/src/main/java/io/druid/benchmark/query/SerializingQueryRunner.java
new file mode 100644
index 00000000000..25655968552
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/SerializingQueryRunner.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.query;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.base.Function;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.query.Query;
+import io.druid.query.QueryRunner;
+
+import java.util.Map;
+
+public class SerializingQueryRunner<T> implements QueryRunner<T>
+{
+  private final ObjectMapper smileMapper;
+  private final QueryRunner<T> baseRunner;
+  private final Class<T> clazz;
+
+  public SerializingQueryRunner(
+      ObjectMapper smileMapper,
+      Class<T> clazz,
+      QueryRunner<T> baseRunner
+  )
+  {
+    this.smileMapper = smileMapper;
+    this.clazz = clazz;
+    this.baseRunner = baseRunner;
+  }
+
+  @Override
+  public Sequence<T> run(
+      final Query<T> query,
+      final Map<String, Object> responseContext
+  )
+  {
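+    // Round-trip every result through the smile ObjectMapper so serialization cost shows up in the measurement.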
+    return Sequences.map(
+        baseRunner.run(query, responseContext),
+        new Function<T, T>()
+        {
+          @Override
+          public T apply(T input)
+          {
+            try {
+              return smileMapper.readValue(smileMapper.writeValueAsBytes(input), clazz);
+            }
+            catch (Exception e) {
+              throw new RuntimeException(e);
+            }
+          }
+        }
+    );
+  }
+}
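
The wrapper above is meant to sit in a benchmark's runner chain so that per-row
serde overhead is measured alongside query execution. A hypothetical wiring,
assuming Druid's DefaultObjectMapper accepts a Jackson SmileFactory (the names
baseRunner and serializingRunner are placeholders, not from this PR):

    // Sketch only: wrap a per-segment runner so each result is written to
    // Smile bytes and read back before being handed to the blackhole.
    ObjectMapper smileMapper = new DefaultObjectMapper(new SmileFactory());
    QueryRunner<Row> serializingRunner =
        new SerializingQueryRunner<>(smileMapper, Row.class, baseRunner);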
diff --git a/benchmarks/src/main/java/io/druid/benchmark/query/SqlBenchmark.java b/benchmarks/src/main/java/io/druid/benchmark/query/SqlBenchmark.java
new file mode 100644
index 00000000000..07d4daeb829
--- /dev/null
+++ b/benchmarks/src/main/java/io/druid/benchmark/query/SqlBenchmark.java
@@ -0,0 +1,241 @@
+/*
+ * Licensed to Metamarkets Group Inc. (Metamarkets) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. Metamarkets licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package io.druid.benchmark.query;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.io.Files;
+import io.druid.benchmark.datagen.BenchmarkDataGenerator;
+import io.druid.benchmark.datagen.BenchmarkSchemaInfo;
+import io.druid.benchmark.datagen.BenchmarkSchemas;
+import io.druid.common.utils.JodaUtils;
+import io.druid.data.input.InputRow;
+import io.druid.data.input.Row;
+import io.druid.hll.HyperLogLogHash;
+import io.druid.java.util.common.granularity.Granularities;
+import io.druid.java.util.common.guava.Sequence;
+import io.druid.java.util.common.guava.Sequences;
+import io.druid.java.util.common.logger.Logger;
+import io.druid.query.QueryRunnerFactoryConglomerate;
+import io.druid.query.TableDataSource;
+import io.druid.query.aggregation.AggregatorFactory;
+import io.druid.query.aggregation.CountAggregatorFactory;
+import io.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
+import io.druid.query.dimension.DefaultDimensionSpec;
+import io.druid.query.dimension.DimensionSpec;
+import io.druid.query.groupby.GroupByQuery;
+import io.druid.segment.IndexBuilder;
+import io.druid.segment.QueryableIndex;
+import io.druid.segment.TestHelper;
+import io.druid.segment.column.ValueType;
+import io.druid.segment.serde.ComplexMetrics;
+import io.druid.sql.calcite.planner.Calcites;
+import io.druid.sql.calcite.planner.DruidPlanner;
+import io.druid.sql.calcite.planner.PlannerConfig;
+import io.druid.sql.calcite.planner.PlannerFactory;
+import io.druid.sql.calcite.planner.PlannerResult;
+import io.druid.sql.calcite.table.DruidTable;
+import io.druid.sql.calcite.table.RowSignature;
+import io.druid.sql.calcite.util.CalciteTests;
+import io.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
+import io.druid.timeline.DataSegment;
+import io.druid.timeline.partition.LinearShardSpec;
+import org.apache.calcite.schema.Schema;
+import org.apache.calcite.schema.Table;
+import org.apache.calcite.schema.impl.AbstractSchema;
+import org.apache.commons.io.FileUtils;
+import org.joda.time.Interval;
+import org.openjdk.jmh.annotations.Benchmark;
+import org.openjdk.jmh.annotations.BenchmarkMode;
+import org.openjdk.jmh.annotations.Fork;
+import org.openjdk.jmh.annotations.Level;
+import org.openjdk.jmh.annotations.Measurement;
+import org.openjdk.jmh.annotations.Mode;
+import org.openjdk.jmh.annotations.OutputTimeUnit;
+import org.openjdk.jmh.annotations.Param;
+import org.openjdk.jmh.annotations.Scope;
+import org.openjdk.jmh.annotations.Setup;
+import org.openjdk.jmh.annotations.State;
+import org.openjdk.jmh.annotations.TearDown;
+import org.openjdk.jmh.annotations.Warmup;
+import org.openjdk.jmh.infra.Blackhole;
+
+import java.io.File;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Benchmark that compares the same groupBy query through the native query layer and through the SQL layer.
+ */
+@State(Scope.Benchmark)
+@Fork(jvmArgsPrepend = "-server", value = 1)
+@Warmup(iterations = 15)
+@Measurement(iterations = 30)
+public class SqlBenchmark
+{
+  @Param({"10000", "100000", "200000"})
+  private int rowsPerSegment;
+
+  private static final Logger log = new Logger(SqlBenchmark.class);
+  private static final int RNG_SEED = 9999;
+
+  private File tmpDir;
+  private SpecificSegmentsQuerySegmentWalker walker;
+  private PlannerFactory plannerFactory;
+  private GroupByQuery groupByQuery;
+  private String sqlQuery;
+
+  @Setup(Level.Trial)
+  public void setup() throws Exception
+  {
+    tmpDir = Files.createTempDir();
+    log.info("Starting benchmark setup using tmpDir[%s], rows[%,d].", tmpDir, rowsPerSegment);
+
+    if (ComplexMetrics.getSerdeForType("hyperUnique") == null) {
+      ComplexMetrics.registerSerde("hyperUnique", new HyperUniquesSerde(HyperLogLogHash.getDefault()));
+    }
+
+    final BenchmarkSchemaInfo schemaInfo = BenchmarkSchemas.SCHEMA_MAP.get("basic");
+    final BenchmarkDataGenerator dataGenerator = new BenchmarkDataGenerator(
+        schemaInfo.getColumnSchemas(),
+        RNG_SEED + 1,
+        schemaInfo.getDataInterval(),
+        rowsPerSegment
+    );
+
+    final List<InputRow> rows = Lists.newArrayList();
+    for (int i = 0; i < rowsPerSegment; i++) {
+      final InputRow row = dataGenerator.nextRow();
+      if (i % 20000 == 0) {

  (This diff was longer than 20,000 lines, and has been truncated...)
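
The SqlBenchmark file is cut off by the truncation above. Going by its Javadoc
and imports, the benchmark methods presumably run the prepared native
GroupByQuery against the walker and plan/execute the equivalent SQL through the
PlannerFactory. The following is a guess at those bodies, not the PR's code;
the DruidPlanner and PlannerResult method names are assumed from the imports:

    // Sketch only: one JMH benchmark per query path.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void queryNative(Blackhole blackhole)
    {
      // Run the hand-built GroupByQuery through the native query stack.
      Sequence<Row> results = groupByQuery.run(walker, Maps.<String, Object>newHashMap());
      for (Row row : Sequences.toList(results, Lists.<Row>newArrayList())) {
        blackhole.consume(row);
      }
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void querySql(Blackhole blackhole) throws Exception
    {
      // Plan and execute the same query as SQL text through the Calcite planner.
      try (DruidPlanner planner = plannerFactory.createPlanner(null)) {
        PlannerResult plannerResult = planner.plan(sqlQuery);
        Sequence<Object[]> rows = plannerResult.run();
        for (Object[] row : Sequences.toList(rows, Lists.<Object[]>newArrayList())) {
          blackhole.consume(row);
        }
      }
    }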


 
