Posted to commits@phoenix.apache.org by ja...@apache.org on 2014/01/27 20:23:03 UTC

[02/51] [partial] Initial commit

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/markdown/tuning.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tuning.md b/src/site/markdown/tuning.md
new file mode 100644
index 0000000..91fe348
--- /dev/null
+++ b/src/site/markdown/tuning.md
@@ -0,0 +1,128 @@
+# Configuration and Tuning
+
+Phoenix provides many different knobs and dials to configure and tune the system to perform optimally on your cluster. The configuration is done through a series of Phoenix-specific properties specified for the most part in your client-side <code>hbase-site.xml</code> file. In addition to these properties, there are of course all the <a href="http://hbase.apache.org/book/config.files.html" target="_blank">HBase configuration</a> properties with the most important ones documented <a href="http://hbase.apache.org/book/important_configurations.html" target="_blank">here</a>. This page will focus on the Phoenix-specific properties and touch on some important considerations for maximizing Phoenix and HBase performance.<br />
+<br />
+The table below outlines the full set of Phoenix-specific configuration properties and their defaults. Of these, we'll talk in depth about some of the most important ones below.<br />
+<br />
+<table border="1">
+    <tbody>
+<tr><td><b>Property</b></td><td><b>Description</b></td><td><b>Default
+</b></td></tr>
+<tr><td><small>phoenix.query.timeoutMs</small></td><td style="text-align: left;">Number of milliseconds
+    after which a query will time out on the client. Default is 10 min.</td><td>600000
+</td></tr>
+<tr><td><small>phoenix.query.keepAliveMs</small></td><td style="text-align: left;">When the number of
+      threads in the client-side thread pool executor is greater than the core pool
+      size, this is the maximum time in milliseconds that excess idle
+      threads will wait for new tasks before
+terminating. Default is 60 sec.</td><td>60000</td></tr>
+<tr><td><small>phoenix.query.threadPoolSize</small></td><td style="text-align: left;">Number of threads
+      in client side thread pool executor. As the number of machines/cores
+      in the cluster grows, this value should be
+increased.</td><td>128</td></tr>
+<tr><td><small>phoenix.query.queueSize</small></td><td>Max queue depth
+of the
+      bounded round robin queue backing the client-side thread pool executor,
+      beyond which an attempt to queue additional work is
+      rejected by throwing an exception. If zero, a SynchronousQueue is used
+      instead of the bounded round robin queue.</td><td>500</td></tr>
+<tr><td><small>phoenix.query.spoolThresholdBytes</small></td><td style="text-align: left;">Threshold
+      size in bytes after which results from parallelly executed
+      queries are spooled to disk. Default is 20 MB.</td><td>20971520</td></tr>
+<tr><td><small>phoenix.query.maxSpoolToDiskBytes</small></td><td style="text-align: left;">Maximum
+      size in bytes up to which results from parallelly executed
+      queries may be spooled to disk; above this threshold the query will fail. Default is 1 GB.</td><td>1024000000</td></tr>
+<tr><td><small>phoenix.query.maxGlobalMemoryPercentage</small></td><td style="text-align: left;">Percentage of total heap memory (i.e. Runtime.getRuntime().totalMemory()) that all threads may use. Only coarse-grained memory usage is tracked, mainly accounting for memory usage in the intermediate map built during group by aggregation. When this limit is reached, clients block while attempting to get more memory, essentially throttling memory usage. Defaults to 50%.</td><td>50</td></tr>
+<tr><td><small>phoenix.query.maxGlobalMemoryWaitMs</small></td><td style="text-align: left;">Maximum
+      amount of time that a client will block while waiting for more memory
+      to become available.  After this amount of time, an
+<code>InsufficientMemoryException</code> is
+      thrown. Default is 10 sec.</td><td>10000</td></tr>
+<tr><td><small>phoenix.query.maxTenantMemoryPercentage</small></td><td style="text-align: left;">Maximum
+      percentage of <code>phoenix.query.maxGlobalMemoryPercentage</code> that
+any one tenant is allowed to consume. Beyond this percentage, an
+<code>InsufficientMemoryException</code> is
+      thrown. Default is 100%.</td><td>100</td></tr>
+<tr><td><small>phoenix.query.targetConcurrency</small></td><td style="text-align: left;">Target concurrent
+      threads to use for a query. It serves as a soft limit on the number of
+      scans into which a query may be split. The value should not exceed the hard limit imposed by <code>phoenix.query.maxConcurrency</code>.</td><td>32</td></tr>
+<tr><td><small>phoenix.query.maxConcurrency</small></td><td style="text-align: left;">Maximum concurrent
+      threads to use for a query. It serves as a hard limit on the number
+      of scans into which a query may be split. A soft limit is imposed by
+<code>phoenix.query.targetConcurrency</code>.</td><td>64</td></tr>
+<tr><td><small>phoenix.query.dateFormat</small></td><td style="text-align: left;">Default pattern to use
+      for conversion of a date to/from a string, whether through the
+      <code>TO_CHAR(&lt;date&gt;)</code> or
+<code>TO_DATE(&lt;date-string&gt;)</code> functions, or through
+<code>resultSet.getString(&lt;date-column&gt;)</code>.</td><td>yyyy-MM-dd HH:mm:ss</td></tr>
+<tr><td><small>phoenix.query.statsUpdateFrequency</small></td><td style="text-align: left;">The frequency
+      in milliseconds at which the stats for each table will be
+updated. Default is 15 min.</td><td>900000</td></tr>
+<tr><td><small>phoenix.query.maxStatsAge</small></td><td>The maximum age of
+      stats in milliseconds after which they will no longer be used (i.e. the stats could not be updated within this amount of time and are thus considered too old). Default is 1 day.</td><td>1</td></tr>
+<tr><td><small>phoenix.mutate.maxSize</small></td><td style="text-align: left;">The maximum number of rows
+      that may be batched on the client
+      before a commit or rollback must be called.</td><td>500000</td></tr>
+<tr><td><small>phoenix.mutate.batchSize</small></td><td style="text-align: left;">The number of rows that are batched together and automatically committed during the execution of an
+      <code>UPSERT SELECT</code> or <code>DELETE</code> statement. This property may be
+overridden at connection
+      time by specifying the <code>UpsertBatchSize</code>
+      property value. Note that the connection property value does not affect the batch size used by the coprocessor when these statements are executed completely on the server side.</td><td>1000</td></tr>
+<tr><td><small>phoenix.query.regionBoundaryCacheTTL</small></td><td style="text-align: left;">The time-to-live
+      in milliseconds of the region boundary cache used to guide the split
+      points for query parallelization. Default is 15 sec.</td><td>15000</td></tr>
+<tr><td><small>phoenix.query.maxIntraRegionParallelization</small></td><td style="text-align: left;">The maximum number of threads that will be spawned to process data within a single region during query execution.</td><td>64</td></tr>
+<tr><td><small>phoenix.query.rowKeyOrderSaltedTable</small></td><td style="text-align: left;">Whether or not a non-aggregate query returns rows in row key order for salted tables. If this option is turned on, split points may not be specified at table create time; instead the default splits on each salt bucket must be used. Default is true.</td><td>true</td></tr></tbody></table>
+<br />
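+For illustration only, the sketch below shows one way a client might override a few of these properties programmatically. It is a minimal, hypothetical example: the documented mechanism is to place the properties in your client-side <code>hbase-site.xml</code>, and the snippet assumes that properties passed to the Phoenix JDBC connection are honored the same way. The property values themselves are arbitrary and should be sized to your cluster.<br />
+<br />
+<pre><code>
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+public class PhoenixClientConfigExample {
+    public static void main(String[] args) throws Exception {
+        // Hypothetical overrides of client-side Phoenix properties; the values are
+        // illustrative, not recommendations.
+        Properties props = new Properties();
+        props.setProperty("phoenix.query.threadPoolSize", "256");
+        props.setProperty("phoenix.query.targetConcurrency", "64");
+        props.setProperty("phoenix.query.maxConcurrency", "128");
+
+        // "localhost" is a placeholder for your ZooKeeper quorum.
+        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
+        try {
+            // ... run queries ...
+        } finally {
+            conn.close();
+        }
+    }
+}
+</code></pre>
+<br />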
+<h4>
+Parallelization</h4>
+Phoenix breaks up aggregate queries into multiple scans and runs them in parallel through custom aggregating coprocessors to improve performance. Hari Kumar, from Ericsson Labs, did a good job of explaining the performance benefits of parallelization and coprocessors <a href="http://labs.ericsson.com/blog/hbase-performance-tuners" target="_blank">here</a>. One of the most important factors in getting good query performance with Phoenix is to ensure that table splits are well balanced. This includes having regions of equal size as well as an even distribution across region servers. There are open source tools such as <a href="http://www.sentric.ch/blog/hbase-split-visualisation-introducing-hannibal" target="_blank">Hannibal</a> that can help you monitor this. By having an even distribution of data, every thread spawned by the Phoenix client will have an equal amount of work to process, thus reducing the time it takes to get the results back.<br />
+<br />
+The <code>phoenix.query.targetConcurrency</code> and <code>phoenix.query.maxConcurrency</code> properties control how a query is broken up into multiple scans on the client side. The idea behind query parallelization is to align the scan boundaries with region boundaries. If rows are not evenly distributed across regions, this scheme compensates for regions that have more rows than others by applying tighter splits, and therefore spawning more scans, over the overloaded regions.<br />
+<br />
+The split points for parallelization are computed as follows. Let's suppose:<br />
+<ul>
+<li><code>t</code> is the target concurrency</li>
+<li><code>m</code> is the max concurrency</li>
+<li><code>r</code> is the number of regions we need to scan</li>
+</ul>
+<code>if r &gt;= t</code><br />
+&nbsp;&nbsp; scan using region boundaries<br />
+<code>else if r &gt; t/2</code><br />
+&nbsp;&nbsp; split each region into s splits such that: <code>s = max(x) where x * r &lt;= m</code><br />
+<code>else</code><br />
+&nbsp;&nbsp; split each region into s splits such that: <code>s = max(x) where x * r &lt;= t</code><br />
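+<br />
+For example, with the defaults of <code>t = 32</code> and <code>m = 64</code>, a query touching 20 regions falls into the second case (20 &gt; 32/2), so each region is split into 3 scans (3 being the largest <code>x</code> with <code>20 * x &lt;= 64</code>), yielding 60 parallel scans in total.<br />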
+<br />
+Depending on the number of cores in your client machine and the size of your cluster, the <code>phoenix.query.threadPoolSize</code>, <code>phoenix.query.queueSize</code>,<code> phoenix.query.maxConcurrency</code>, and <code>phoenix.query.targetConcurrency</code> may all be increased to allow more threads to process a query in parallel. This will allow Phoenix to divide up a query into more scans that may then be executed in parallel, thus reducing latency.<br />
+<br />
+This approach is not without its limitations. The primary issue is that Phoenix does not have sufficient information to divide up a region into equal data sizes. If the query results span many regions of data, this is not a problem, since regions are more or less of equal size. However, if a query accesses only a few regions, this can be an issue. The best Phoenix can do is to divide up the key space between the start and end key evenly. If there's any skew in the data, then some scans are bound to bear the brunt of the work. You can adjust <code>phoenix.query.maxIntraRegionParallelization</code> to a smaller number to decrease the number of threads spawned per region if you find that throughput is suffering.<br />
+<br />
+For example, let's say a row key is composed of a five-digit California zip code, declared as a CHAR(5). Phoenix only knows that the column has 5 characters. In theory, the byte array could vary from five 0x01 bytes to five 0xff bytes (or whatever is the largest valid UTF-8 encoded single byte character). In actuality, though, the range is from 90001 to 96162. Since Phoenix doesn't know this, it'll divide up the region based on the theoretical range and all of the work will end up being done by the single thread whose range encompasses the actual data. The same thing will occur with a DATE column, since the theoretical range is from 1970 to 2038, while in actuality the date is probably +/- a year from the current date. Even if Phoenix used better defaults for the start and end range rather than the theoretical min and max, it would not usually help - there's just too much variability across domains.<br />
+<br />
+One solution to this problem is to maintain statistics for a table to feed into the parallelization process to ensure an even data distribution. This is the solution we're working on, as described in more detail in this <a href="https://github.com/forcedotcom/phoenix/issues/49" target="_blank">issue</a>.<br />
+<h4>
+Batching</h4>
+An important HBase configuration property, <code>hbase.client.scanner.caching</code>, controls scanner caching, that is, how many rows are returned from the server in a single round trip when a scan is performed. Although this is less important for aggregate queries, since the Phoenix coprocessors perform the aggregation instead of returning all the data back to the client, it is important for non-aggregate queries. If unset, Phoenix defaults this property to 1000.<br />
+<br />
+On the DML side of the fence, performance may improve by turning on the connection's auto commit for multi-row mutations such as those that can occur with <code>DELETE</code> and <code>UPSERT SELECT</code>. In this case, if possible, the mutation will be performed completely on the server side without returning data back to the client. However, when performing single row mutations, such as <code>UPSERT VALUES</code>, the opposite is true: auto commit should be off and a reasonable number of rows should be batched together for a single commit to reduce RPC traffic.<br />
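+<br />
+To make the single-row batching concrete, here is a minimal sketch of batched <code>UPSERT VALUES</code> with auto commit off. The table name, columns, row count, and batch size are hypothetical and only meant to illustrate the pattern.<br />
+<br />
+<pre><code>
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+
+public class BatchedUpsertExample {
+    public static void main(String[] args) throws Exception {
+        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
+        conn.setAutoCommit(false); // batch single-row mutations on the client
+
+        // "my_table" with columns id and val is a made-up schema for illustration.
+        PreparedStatement stmt =
+            conn.prepareStatement("UPSERT INTO my_table (id, val) VALUES (?, ?)");
+        int batchSize = 1000; // compare with phoenix.mutate.batchSize
+        for (int i = 0; i &lt; 100000; i++) {
+            stmt.setInt(1, i);
+            stmt.setString(2, "value-" + i);
+            stmt.executeUpdate();
+            if ((i + 1) % batchSize == 0) {
+                conn.commit(); // send the accumulated mutations in one batch
+            }
+        }
+        conn.commit(); // flush any remaining rows
+        conn.close();
+    }
+}
+</code></pre>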
+<h3>
+Measuring Performance</h3>
+One way to get a feel for how to configure these properties is to use the <code>performance.sh</code> shell script provided in the bin directory of the installation tar.<br />
+<br />
+<b>Usage: </b><code>performance.sh &lt;zookeeper&gt; &lt;row count&gt;</code><br />
+<b>Example: </b><code>performance.sh localhost 1000000</code><br />
+<br />
+This will create a new table named <code>performance_1000000</code> and upsert 1000000 rows. The schema and data generated are similar to <code>examples/web_stat.sql</code> and <code>examples/web_stat.csv</code>. On the console it will measure the time it takes to:<br />
+<ul>
+<li>upsert these rows</li>
+<li>run queries that perform <code>COUNT</code>, <code>GROUP BY</code>, and <code>WHERE</code> clause filters</li>
+</ul>
+For convenience, an <code>hbase-site.xml</code> file is included in the bin directory and pre-configured to already be on the classpath during script execution.<br />
+<br />
+Here is a screenshot of the performance.sh script in action:<br />
+<div class="separator" style="clear: both; text-align: center;">
+<a href="http://1.bp.blogspot.com/-VhinivNOJmI/URWBGLYTiHI/AAAAAAAAAQU/Dp9lbH2CxYE/s1600/performance_script.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://1.bp.blogspot.com/-VhinivNOJmI/URWBGLYTiHI/AAAAAAAAAQU/Dp9lbH2CxYE/s640/performance_script.png" width="497" /></a></div>
+<h3>
+Conclusion</h3>
+Phoenix has many knobs and dials to tailor the system to your use case. From controlling the level of parallelization, to the size of batches, to the consumption of resources, <i>there's a knob for that</i>. These controls are not without their limitations, however. There's still more work to be done and we'd love to hear your ideas on what you'd like to see made more configurable.<br />
+<br />

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/css/site.css
----------------------------------------------------------------------
diff --git a/src/site/resources/css/site.css b/src/site/resources/css/site.css
new file mode 100644
index 0000000..99d6fbb
--- /dev/null
+++ b/src/site/resources/css/site.css
@@ -0,0 +1,65 @@
+/* You can override this file with your own styles */
+
+@media (min-width: 980px) {
+h1[id]:before,
+h2[id]:before,
+h3[id]:before,
+h4[id]:before,
+h5[id]:before,
+h6[id]:before,
+a[name]:before {
+    display:block; 
+    content:""; 
+    height:55px; 
+    margin:-55px 0 0; 
+}
+}
+
+@media (max-width: 979px) {
+h1[id]:before,
+h2[id]:before,
+h3[id]:before,
+h4[id]:before,
+h5[id]:before,
+h6[id]:before,
+a[name]:before {
+    display:block; 
+    content:""; 
+    height:0px; 
+    margin:0px 0 0; 
+}
+}
+
+.navbar-fixed-top {
+margin-bottom: 0px;
+}
+
+@media (min-width: 980px) {
+body {
+  padding-top: 40px;
+  padding-bottom: 20px;
+}
+}
+
+.page-header {
+padding-bottom: 0px;
+margin-top: 20px;
+margin-bottom: 10px;
+}
+
+
+@media (max-width: 479px) {
+	.xtoplogo {
+	height: 23px;
+	width: 183px;
+	background:url(../images/topbar-logo-small.png)
+	}
+}
+
+@media (min-width: 480px) {
+	.xtoplogo {
+	height: 23px;
+	width: 360px;
+	background:url(../images/topbar-logo.png)
+	}
+}

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/favicon.ico
----------------------------------------------------------------------
diff --git a/src/site/resources/favicon.ico b/src/site/resources/favicon.ico
new file mode 100644
index 0000000..ede81cb
Binary files /dev/null and b/src/site/resources/favicon.ico differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/PhoenixVsHive.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/PhoenixVsHive.png b/src/site/resources/images/PhoenixVsHive.png
new file mode 100644
index 0000000..b225df6
Binary files /dev/null and b/src/site/resources/images/PhoenixVsHive.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/PhoenixVsImpala.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/PhoenixVsImpala.png b/src/site/resources/images/PhoenixVsImpala.png
new file mode 100644
index 0000000..799fa7b
Binary files /dev/null and b/src/site/resources/images/PhoenixVsImpala.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/PhoenixVsOpenTSDB.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/PhoenixVsOpenTSDB.png b/src/site/resources/images/PhoenixVsOpenTSDB.png
new file mode 100644
index 0000000..de6dade
Binary files /dev/null and b/src/site/resources/images/PhoenixVsOpenTSDB.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/logo.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/logo.png b/src/site/resources/images/logo.png
new file mode 100644
index 0000000..ee28709
Binary files /dev/null and b/src/site/resources/images/logo.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/perf-esscf.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/perf-esscf.png b/src/site/resources/images/perf-esscf.png
new file mode 100644
index 0000000..7aa267c
Binary files /dev/null and b/src/site/resources/images/perf-esscf.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/perf-salted-read.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/perf-salted-read.png b/src/site/resources/images/perf-salted-read.png
new file mode 100644
index 0000000..7e950fd
Binary files /dev/null and b/src/site/resources/images/perf-salted-read.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/perf-salted-write.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/perf-salted-write.png b/src/site/resources/images/perf-salted-write.png
new file mode 100644
index 0000000..db6a79d
Binary files /dev/null and b/src/site/resources/images/perf-salted-write.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/perf-skipscan.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/perf-skipscan.png b/src/site/resources/images/perf-skipscan.png
new file mode 100644
index 0000000..c4cf3b4
Binary files /dev/null and b/src/site/resources/images/perf-skipscan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/perf-topn.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/perf-topn.png b/src/site/resources/images/perf-topn.png
new file mode 100644
index 0000000..87d3766
Binary files /dev/null and b/src/site/resources/images/perf-topn.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/psql.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/psql.png b/src/site/resources/images/psql.png
new file mode 100644
index 0000000..efd3fc3
Binary files /dev/null and b/src/site/resources/images/psql.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/sqlline.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/sqlline.png b/src/site/resources/images/sqlline.png
new file mode 100644
index 0000000..4c2c042
Binary files /dev/null and b/src/site/resources/images/sqlline.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/squirrel.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/squirrel.png b/src/site/resources/images/squirrel.png
new file mode 100644
index 0000000..59d5b2c
Binary files /dev/null and b/src/site/resources/images/squirrel.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/topbar-logo-small.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/topbar-logo-small.png b/src/site/resources/images/topbar-logo-small.png
new file mode 100644
index 0000000..bc54627
Binary files /dev/null and b/src/site/resources/images/topbar-logo-small.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/images/topbar-logo.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/topbar-logo.png b/src/site/resources/images/topbar-logo.png
new file mode 100644
index 0000000..3a6d5d3
Binary files /dev/null and b/src/site/resources/images/topbar-logo.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-d.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-d.png b/src/site/resources/language/images/div-d.png
new file mode 100644
index 0000000..18f65b4
Binary files /dev/null and b/src/site/resources/language/images/div-d.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-ke.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-ke.png b/src/site/resources/language/images/div-ke.png
new file mode 100644
index 0000000..dbcbc48
Binary files /dev/null and b/src/site/resources/language/images/div-ke.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-ks.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-ks.png b/src/site/resources/language/images/div-ks.png
new file mode 100644
index 0000000..70b1a8f
Binary files /dev/null and b/src/site/resources/language/images/div-ks.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-le.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-le.png b/src/site/resources/language/images/div-le.png
new file mode 100644
index 0000000..7f89f95
Binary files /dev/null and b/src/site/resources/language/images/div-le.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-ls.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-ls.png b/src/site/resources/language/images/div-ls.png
new file mode 100644
index 0000000..cda5042
Binary files /dev/null and b/src/site/resources/language/images/div-ls.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-te.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-te.png b/src/site/resources/language/images/div-te.png
new file mode 100644
index 0000000..00ce32a
Binary files /dev/null and b/src/site/resources/language/images/div-te.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/images/div-ts.png
----------------------------------------------------------------------
diff --git a/src/site/resources/language/images/div-ts.png b/src/site/resources/language/images/div-ts.png
new file mode 100644
index 0000000..0b178e8
Binary files /dev/null and b/src/site/resources/language/images/div-ts.png differ

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/resources/language/stylesheet.css
----------------------------------------------------------------------
diff --git a/src/site/resources/language/stylesheet.css b/src/site/resources/language/stylesheet.css
new file mode 100644
index 0000000..a68dc08
--- /dev/null
+++ b/src/site/resources/language/stylesheet.css
@@ -0,0 +1,139 @@
+
+
+
+table.index {
+    width: 60%;
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border: 0px none;
+    border-collapse: collapse;
+}
+
+td.index {
+    width: 20%;
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border: 0px none;
+    border-collapse: collapse;
+    vertical-align: top;
+}
+
+
+
+code {
+    background-color: #ece9d8;
+    padding: 0px 4px;
+    color: #000000;
+    font-family: "lucida grande", tahoma, verdana, arial, sans-serif;
+    font-size:small;
+    -moz-border-radius: 4px;
+    -webkit-border-radius: 4px;
+    -khtml-border-radius: 4px;
+    border-radius: 4px;
+}
+
+.railroad {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+}
+
+.c {
+    color: #000000;
+    padding: 1px 3px;
+    margin: 0px 0px;
+    border: 2px solid;
+    -moz-border-radius: 0.4em;
+    -webkit-border-radius: 0.4em;
+    -khtml-border-radius: 0.4em;
+    border-radius: 0.4em;
+    background-color: #fff;
+}
+
+.ts {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-ts.png);
+    background-size: 16px 512px;
+}
+
+.ls {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-ls.png);
+    background-size: 16px 512px;
+}
+
+.ks {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-ks.png);
+    background-size: 16px 512px;
+}
+
+.te {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-te.png);
+    background-size: 16px 512px;
+}
+
+.le {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-le.png);
+    background-size: 16px 512px;
+}
+
+.ke {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    width: 16px;
+    height: 24px;
+    background-image: url(images/div-ke.png);
+    background-size: 16px 512px;
+}
+
+.d {
+    border: 0px;
+    padding: 0px;
+    margin: 0px;
+    border-collapse: collapse;
+    vertical-align: top;
+    min-width: 16px;
+    height: 24px;
+    background-image: url(images/div-d.png);
+    background-size: 1024px 512px;
+}

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
new file mode 100644
index 0000000..37d584f
--- /dev/null
+++ b/src/site/site.xml
@@ -0,0 +1,96 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+
+<project name="Apache Phoenix">
+  <skin>
+    <groupId>lt.velykis.maven.skins</groupId>
+    <artifactId>reflow-maven-skin</artifactId>
+    <version>1.0.0</version>
+  </skin>
+
+  <custom>
+    <reflowSkin>
+    <smoothScroll>true</smoothScroll>
+    <bottomNav maxSpan="9" >
+        <column>About</column>
+        <column>Using</column>
+        <column>Reference</column>
+    </bottomNav>
+    <theme>bootswatch-united</theme>
+    <highlightJs>true</highlightJs>
+    <titleTemplate>%2$s | %1$s</titleTemplate>
+    <brand>
+        <name><![CDATA[<div class="xtoplogo"></div>]]></name>
+      <href>index.html</href>
+    </brand>
+    <skinAttribution>false</skinAttribution>
+    <breadcrumbs>true</breadcrumbs>
+      <bottomDescription quote="false">
+        <![CDATA[<form action="https://www.google.com/search" method="get"><input value="phoenix.incubator.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"></form>]]>
+      </bottomDescription>
+    </reflowSkin>
+  </custom>
+    <body>
+
+        <menu name="About">
+            <item href="http://phoenix.incubator.apache.org/" name="Overview"/>
+            <item href="http://phoenix.incubator.apache.org/recent.html" name="New Features"/>
+            <item href="http://phoenix.incubator.apache.org/roadmap.html" name="Roadmap"/>
+            <item href="http://phoenix.incubator.apache.org/performance.html" name="Performance"/>
+            <item href="http://phoenix.incubator.apache.org/team.html" name="Team"/>
+            <item href="http://phoenix.incubator.apache.org/mailing_list.html" name="Mailing Lists"/>
+            <item href="http://phoenix.incubator.apache.org/source.html" name="Source Repository"/>
+            <item href="http://phoenix.incubator.apache.org/issues.html" name="Issue Tracking"/>
+            <item href="http://phoenix.incubator.apache.org/download.html" name="Download" />
+            <item href="" name=""/>
+            <item href="http://www.apache.org/licenses/" name="License" />
+            <item href="http://www.apache.org/foundation/sponsorship.html" name="Sponsorship" />
+            <item href="http://www.apache.org/foundation/thanks.html" name="Thanks" />
+            <item href="http://www.apache.org/security/" name="Security" />
+
+        </menu>
+        <menu name="Using">
+            <item href="http://phoenix.incubator.apache.org/faq.html" name="F.A.Q."/>
+            <item href="http://phoenix.incubator.apache.org/Phoenix-in-15-minutes-or-less.html" name="Quick Start"/>
+            <item href="http://phoenix.incubator.apache.org/building.html" name="Building"/>
+            <item href="http://phoenix.incubator.apache.org/tuning.html" name="Tuning"/>
+            <item href="" name=""/>
+            <item href="http://phoenix.incubator.apache.org/secondary_indexing.html" name="Secondary Indexes"/>
+            <item href="http://phoenix.incubator.apache.org/sequences.html" name="Sequences"/>
+            <item href="http://phoenix.incubator.apache.org/salted.html" name="Salted Tables"/>
+            <item href="http://phoenix.incubator.apache.org/paged.html" name="Paged Queries"/>
+            <item href="http://phoenix.incubator.apache.org/dynamic_columns.html" name="Dynamic Columns"/>
+            <item href="http://phoenix.incubator.apache.org/skip_scan.html" name="Skip Scan"/>
+            <item href="http://phoenix.incubator.apache.org/mr_dataload.html" name="Bulk Loading"/>
+            <item href="" name=""/>
+            <item href="http://phoenix.incubator.apache.org/phoenix_on_emr.html" name="Amazon EMR Support"/>
+            <item href="http://phoenix.incubator.apache.org/flume.html" name="Apache Flume Plugin"/>
+
+        </menu>
+        <menu name="Reference">
+            <item href="http://phoenix.incubator.apache.org/language/index.html" name="Grammar"/>
+            <item href="http://phoenix.incubator.apache.org/language/functions.html" name="Functions"/>
+            <item href="http://phoenix.incubator.apache.org/language/datatypes.html" name="Datatypes"/>
+        </menu>
+
+    </body>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/xhtml/language/datatypes.xhtml
----------------------------------------------------------------------
diff --git a/src/site/xhtml/language/datatypes.xhtml b/src/site/xhtml/language/datatypes.xhtml
new file mode 100644
index 0000000..ab7906c
--- /dev/null
+++ b/src/site/xhtml/language/datatypes.xhtml
@@ -0,0 +1,5 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
+<head><link href="stylesheet.css" rel="stylesheet" /></head>
+<body><h1>Data Types</h1></body>update_here</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/xhtml/language/functions.xhtml
----------------------------------------------------------------------
diff --git a/src/site/xhtml/language/functions.xhtml b/src/site/xhtml/language/functions.xhtml
new file mode 100644
index 0000000..569c2f5
--- /dev/null
+++ b/src/site/xhtml/language/functions.xhtml
@@ -0,0 +1,5 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
+<head><link href="stylesheet.css" rel="stylesheet" /></head>
+<body><h1>Functions</h1></body>update_here</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/xhtml/language/index.xhtml
----------------------------------------------------------------------
diff --git a/src/site/xhtml/language/index.xhtml b/src/site/xhtml/language/index.xhtml
new file mode 100644
index 0000000..13ef62d
--- /dev/null
+++ b/src/site/xhtml/language/index.xhtml
@@ -0,0 +1,5 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
+<head><link href="stylesheet.css" rel="stylesheet" /></head>
+<body><h1>Grammar</h1></body>update_here</html>
+

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/site/xhtml/team.xhtml
----------------------------------------------------------------------
diff --git a/src/site/xhtml/team.xhtml b/src/site/xhtml/team.xhtml
new file mode 100644
index 0000000..a456681
--- /dev/null
+++ b/src/site/xhtml/team.xhtml
@@ -0,0 +1,115 @@
+<html>
+<body>
+<h1>Team</h1>
+<p>The following is a list of mentors and contributors with commit privileges that have directly contributed to the project in one way or another.</p> 
+<hr /> 
+
+   <h4 id="Mentors">Mentors</h4> 
+   <table border="0" class="bodyTable table table-striped table-hover"> 
+    <thead> 
+     <tr class="a"> 
+      <th>Name</th> 
+      <th>Company</th> 
+      <th>Email</th> 
+     </tr> 
+    </thead> 
+    <tbody> 
+     <tr class="b"> 
+      <td width="25%">Lars Hofhansl </td> 
+      <td width="25%">Salesforce</td> 
+      <td><a class="externalLink" href="mailto:larsh@apache.org">larsh@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Andrew Purtell </td> 
+      <td>Intel </td> 
+      <td><a class="externalLink" href="mailto:apurtell@apache.org">apurtell@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Enis Soztutar </td> 
+      <td>Hortonworks </td> 
+      <td><a class="externalLink" href="mailto:enis@apache.org">enis@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Devaraj Das </td> 
+      <td>Hortonworks </td> 
+      <td><a class="externalLink" href="mailto:ddas@apache.org">ddas@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Steven Noels </td> 
+      <td>NG Data </td> 
+      <td><a class="externalLink" href="mailto:stevenn@apache.org">stevenn@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Michael Stack </td> 
+      <td>Cloudera </td> 
+      <td><a class="externalLink" href="mailto:stack@apache.org">stack@apache.org</a></td> 
+     </tr> 
+    </tbody> 
+   </table> 
+
+<hr/>
+
+   <h4 id="Committers">Committers</h4> 
+   <table border="0" class="bodyTable table table-striped table-hover"> 
+    <thead> 
+     <tr class="a"> 
+      <th>Name</th> 
+      <th>Company</th> 
+      <th>Email</th> 
+     </tr> 
+    </thead> 
+    <tbody> 
+     <tr class="b"> 
+      <td width="25%">James Taylor </td> 
+      <td width="25%">Salesforce </td> 
+      <td><a class="externalLink" href="mailto:jamestaylor@apache.org">jamestaylor@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Jesse Yates </td> 
+      <td>Salesforce</td> 
+      <td><a class="externalLink" href="mailto:jyates@apache.org">jyates@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Eli Levine </td> 
+      <td>Salesforce</td> 
+      <td><a class="externalLink" href="mailto:elevine@apache.org">elevine@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Simon Toens </td> 
+      <td>Salesforce</td> 
+      <td><a class="externalLink" href="mailto:stoens@apache.org">stoens@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Maryann Xue </td> 
+      <td>Intel </td> 
+      <td><a class="externalLink" href="mailto:maryannxue@apache.org">maryannxue@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Anoop Sam John </td> 
+      <td>Intel </td> 
+      <td><a class="externalLink" href="mailto:anoopsamjohn@apache.org">anoopsamjohn@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Ramkrishna Vasudevan </td> 
+      <td>Intel </td> 
+      <td><a class="externalLink" href="mailto:ramkrishna@apache.org">ramkrishna@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Jeffrey Zhong </td> 
+      <td>Hortonworks </td> 
+      <td><a class="externalLink" href="mailto:jzhong@apache.org">jzhong@apache.org</a></td> 
+     </tr> 
+     <tr class="b"> 
+      <td>Nick Dimiduk </td> 
+      <td>Hortonworks </td> 
+      <td><a class="externalLink" href="mailto:ndimiduk@apache.org">ndimiduk@apache.org</a></td> 
+     </tr> 
+     <tr class="a"> 
+      <td>Mujtaba Chohan </td> 
+      <td>Salesforce</td> 
+      <td><a class="externalLink" href="mailto:mujtaba@apache.org">mujtaba@apache.org</a></td> 
+     </tr> 
+    </tbody> 
+   </table> 
+</body>
+</html>

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadWriteKeyValuesWithCodec.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadWriteKeyValuesWithCodec.java b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadWriteKeyValuesWithCodec.java
new file mode 100644
index 0000000..48513b3
--- /dev/null
+++ b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadWriteKeyValuesWithCodec.java
@@ -0,0 +1,155 @@
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import org.apache.hbase.index.IndexTestingUtils;
+import org.apache.hbase.index.wal.IndexedKeyValue;
+
+/**
+ * Simple test to read/write simple files via our custom {@link WALEditCodec} to ensure proper
+ * encoding/decoding without going through a cluster.
+ */
+public class TestReadWriteKeyValuesWithCodec {
+
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final byte[] ROW = Bytes.toBytes("row");
+  private static final byte[] FAMILY = Bytes.toBytes("family");
+
+  @BeforeClass
+  public static void setupCodec() {
+    Configuration conf = UTIL.getConfiguration();
+    IndexTestingUtils.setupConfig(conf);
+    conf.set(WALEditCodec.WAL_EDIT_CODEC_CLASS_KEY, IndexedWALEditCodec.class.getName());
+  }
+
+  @Test
+  public void testWithoutCompression() throws Exception {
+    // get the FS ready to read/write the edits
+    Path testDir = UTIL.getDataTestDir("TestReadWriteCustomEdits_withoutCompression");
+    Path testFile = new Path(testDir, "testfile");
+    FileSystem fs = UTIL.getTestFileSystem();
+
+    List<WALEdit> edits = getEdits();
+    WALEditCodec codec = WALEditCodec.create(UTIL.getConfiguration(), null);
+    writeReadAndVerify(codec, fs, edits, testFile);
+
+  }
+
+  @Test
+  public void testWithCompression() throws Exception {
+    // get the FS ready to read/write the edit
+    Path testDir = UTIL.getDataTestDir("TestReadWriteCustomEdits_withCompression");
+    Path testFile = new Path(testDir, "testfile");
+    FileSystem fs = UTIL.getTestFileSystem();
+
+    List<WALEdit> edits = getEdits();
+    CompressionContext compression = new CompressionContext(LRUDictionary.class);
+    WALEditCodec codec = WALEditCodec.create(UTIL.getConfiguration(), compression);
+    writeReadAndVerify(codec, fs, edits, testFile);
+  }
+
+  /**
+   * @return a bunch of {@link WALEdit}s that test a range of serialization possibilities.
+   */
+  private List<WALEdit> getEdits() {
+    // Build up a couple of edits
+    List<WALEdit> edits = new ArrayList<WALEdit>();
+    Put p = new Put(ROW);
+    p.add(FAMILY, null, Bytes.toBytes("v1"));
+
+    WALEdit withPut = new WALEdit();
+    addMutation(withPut, p, FAMILY);
+    edits.add(withPut);
+
+    Delete d = new Delete(ROW);
+    d.deleteColumn(FAMILY, null);
+    WALEdit withDelete = new WALEdit();
+    addMutation(withDelete, d, FAMILY);
+    edits.add(withDelete);
+    
+    WALEdit withPutsAndDeletes = new WALEdit();
+    addMutation(withPutsAndDeletes, d, FAMILY);
+    addMutation(withPutsAndDeletes, p, FAMILY);
+    edits.add(withPutsAndDeletes);
+    
+    WALEdit justIndexUpdates = new WALEdit();
+    byte[] table = Bytes.toBytes("targetTable");
+    IndexedKeyValue ikv = new IndexedKeyValue(table, p);
+    justIndexUpdates.add(ikv);
+    edits.add(justIndexUpdates);
+
+    WALEdit mixed = new WALEdit();
+    addMutation(mixed, d, FAMILY);
+    mixed.add(ikv);
+    addMutation(mixed, p, FAMILY);
+    edits.add(mixed);
+
+    return edits;
+  }
+
+  /**
+   * Add all the {@link KeyValue}s in the {@link Mutation}, for the passed family, to the given
+   * {@link WALEdit}.
+   */
+  private void addMutation(WALEdit edit, Mutation m, byte[] family) {
+    List<KeyValue> kvs = m.getFamilyMap().get(family);
+    for (KeyValue kv : kvs) {
+      edit.add(kv);
+    }
+  }
+
+  /**
+   * Write the edits to the specified path on the {@link FileSystem} using the given codec and then
+   * read them back in and ensure that we read the same thing we wrote.
+   */
+  private void writeReadAndVerify(WALEditCodec codec, FileSystem fs, List<WALEdit> edits,
+      Path testFile) throws IOException {
+    // write the edits out
+    FSDataOutputStream out = fs.create(testFile);
+    for (WALEdit edit : edits) {
+      edit.setCodec(codec);
+      edit.write(out);
+    }
+    out.close();
+
+    // read in the edits
+    FSDataInputStream in = fs.open(testFile);
+    List<WALEdit> read = new ArrayList<WALEdit>();
+    for (int i = 0; i < edits.size(); i++) {
+      WALEdit edit = new WALEdit();
+      edit.setCodec(codec);
+      edit.readFields(in);
+      read.add(edit);
+    }
+    in.close();
+
+    // make sure the read edits match the written
+    for(int i=0; i< edits.size(); i++){
+      WALEdit expected = edits.get(i);
+      WALEdit found = read.get(i);
+      for(int j=0; j< expected.getKeyValues().size(); j++){
+        KeyValue fkv = found.getKeyValues().get(j);
+        KeyValue ekv = expected.getKeyValues().get(j);
+        assertEquals("KV mismatch for edit! Expected: "+expected+", but found: "+found, ekv, fkv);
+      }
+    }
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndCompressedWAL.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndCompressedWAL.java b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndCompressedWAL.java
new file mode 100644
index 0000000..38be2ab
--- /dev/null
+++ b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndCompressedWAL.java
@@ -0,0 +1,275 @@
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionServerAccounting;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hbase.index.IndexTestingUtils;
+import org.apache.hbase.index.TableName;
+import org.apache.hbase.index.covered.example.ColumnGroup;
+import org.apache.hbase.index.covered.example.CoveredColumn;
+import org.apache.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder;
+import org.apache.hbase.index.covered.example.CoveredColumnIndexer;
+
+/**
+ * For pre-0.94.9 instances, this class tests correctly deserializing WALEdits w/o compression. Post
+ * 0.94.9 we can support a custom {@link WALEditCodec}, which handles reading/writing the compressed
+ * edits.
+ * <p>
+ * Most of the underlying work (creating/splitting the WAL, etc) is from
+ * org.apache.hadoop.hbase.regionserver.wal.TestWALReplay, copied here for completeness and ease of
+ * use.
+ * <p>
+ * This test should only have a single test - otherwise we will start/stop the minicluster multiple
+ * times, which is probably not what you want to do (mostly because it's so much effort).
+ */
+public class TestWALReplayWithIndexWritesAndCompressedWAL {
+
+  public static final Log LOG = LogFactory.getLog(TestWALReplay.class);
+  @Rule
+  public TableName table = new TableName();
+  private String INDEX_TABLE_NAME = table.getTableNameString() + "_INDEX";
+
+  final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private Path hbaseRootDir = null;
+  private Path oldLogDir;
+  private Path logDir;
+  private FileSystem fs;
+  private Configuration conf;
+
+  @Before
+  public void setUp() throws Exception {
+    setupCluster();
+    this.conf = HBaseConfiguration.create(UTIL.getConfiguration());
+    this.fs = UTIL.getDFSCluster().getFileSystem();
+    this.hbaseRootDir = new Path(this.conf.get(HConstants.HBASE_DIR));
+    this.oldLogDir = new Path(this.hbaseRootDir, HConstants.HREGION_OLDLOGDIR_NAME);
+    this.logDir = new Path(this.hbaseRootDir, HConstants.HREGION_LOGDIR_NAME);
+    // reset the log reader to ensure we pull the one from this config
+    HLog.resetLogReaderClass();
+  }
+
+  private void setupCluster() throws Exception {
+    configureCluster();
+    startCluster();
+  }
+
+  protected void configureCluster() throws Exception {
+    Configuration conf = UTIL.getConfiguration();
+    setDefaults(conf);
+
+    // enable WAL compression
+    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, true);
+  }
+
+  protected final void setDefaults(Configuration conf) {
+    // make sure writers fail quickly
+    conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 3);
+    conf.setInt(HConstants.HBASE_CLIENT_PAUSE, 1000);
+    conf.setInt("zookeeper.recovery.retry", 3);
+    conf.setInt("zookeeper.recovery.retry.intervalmill", 100);
+    conf.setInt(HConstants.ZK_SESSION_TIMEOUT, 30000);
+    conf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, 5000);
+    // enable appends
+    conf.setBoolean("dfs.support.append", true);
+    IndexTestingUtils.setupConfig(conf);
+  }
+
+  protected void startCluster() throws Exception {
+    UTIL.startMiniDFSCluster(3);
+    UTIL.startMiniZKCluster();
+    UTIL.startMiniHBaseCluster(1, 1);
+
+    Path hbaseRootDir = UTIL.getDFSCluster().getFileSystem().makeQualified(new Path("/hbase"));
+    LOG.info("hbase.rootdir=" + hbaseRootDir);
+    UTIL.getConfiguration().set(HConstants.HBASE_DIR, hbaseRootDir.toString());
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    UTIL.shutdownMiniHBaseCluster();
+    UTIL.shutdownMiniDFSCluster();
+    UTIL.shutdownMiniZKCluster();
+  }
+
+
+  private void deleteDir(final Path p) throws IOException {
+    if (this.fs.exists(p)) {
+      if (!this.fs.delete(p, true)) {
+        throw new IOException("Failed remove of " + p);
+      }
+    }
+  }
+
+  /**
+   * Test writing edits into an HRegion, closing it, splitting logs, opening Region again. Verify
+   * seqids.
+   * @throws Exception on failure
+   */
+  @Test
+  public void testReplayEditsWrittenViaHRegion() throws Exception {
+    final String tableNameStr = "testReplayEditsWrittenViaHRegion";
+    final HRegionInfo hri = new HRegionInfo(Bytes.toBytes(tableNameStr), null, null, false);
+    final Path basedir = new Path(this.hbaseRootDir, tableNameStr);
+    deleteDir(basedir);
+    final HTableDescriptor htd = createBasic3FamilyHTD(tableNameStr);
+    
+    //setup basic indexing for the table
+    // enable indexing to a non-existent index table
+    byte[] family = new byte[] { 'a' };
+    ColumnGroup fam1 = new ColumnGroup(INDEX_TABLE_NAME);
+    fam1.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
+    CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
+    builder.addIndexGroup(fam1);
+    builder.build(htd);
+
+    // create the region + its WAL
+    HRegion region0 = HRegion.createHRegion(hri, hbaseRootDir, this.conf, htd);
+    region0.close();
+    region0.getLog().closeAndDelete();
+    HLog wal = createWAL(this.conf);
+    RegionServerServices mockRS = Mockito.mock(RegionServerServices.class);
+    // mock out some of the internals of the RSS, so we can run CPs
+    Mockito.when(mockRS.getWAL()).thenReturn(wal);
+    RegionServerAccounting rsa = Mockito.mock(RegionServerAccounting.class);
+    Mockito.when(mockRS.getRegionServerAccounting()).thenReturn(rsa);
+    ServerName mockServerName = Mockito.mock(ServerName.class);
+    Mockito.when(mockServerName.getServerName()).thenReturn(tableNameStr + "-server-1234");
+    Mockito.when(mockRS.getServerName()).thenReturn(mockServerName);
+    HRegion region = new HRegion(basedir, wal, this.fs, this.conf, hri, htd, mockRS);
+    long seqid = region.initialize();
+    // HRegionServer usually does this. It knows the largest seqid across all regions.
+    wal.setSequenceNumber(seqid);
+    
+    //make an attempted write to the primary that should also be indexed
+    byte[] rowkey = Bytes.toBytes("indexed_row_key");
+    Put p = new Put(rowkey);
+    p.add(family, Bytes.toBytes("qual"), Bytes.toBytes("value"));
+    region.put(new Put[] { p });
+
+    // we should then see the server go down
+    Mockito.verify(mockRS, Mockito.times(1)).abort(Mockito.anyString(),
+      Mockito.any(Exception.class));
+    region.close(true);
+    wal.close();
+
+    // then create the index table so we are successful on WAL replay
+    CoveredColumnIndexer.createIndexTable(UTIL.getHBaseAdmin(), INDEX_TABLE_NAME);
+
+    // run the WAL split and setup the region
+    runWALSplit(this.conf);
+    HLog wal2 = createWAL(this.conf);
+    HRegion region1 = new HRegion(basedir, wal2, this.fs, this.conf, hri, htd, mockRS);
+
+    // initialize the region - this should replay the WALEdits from the WAL
+    region1.initialize();
+
+    // now check to ensure that we wrote to the index table
+    HTable index = new HTable(UTIL.getConfiguration(), INDEX_TABLE_NAME);
+    int indexSize = getKeyValueCount(index);
+    assertEquals("Index wasn't propertly updated from WAL replay!", 1, indexSize);
+    Get g = new Get(rowkey);
+    final Result result = region1.get(g);
+    assertEquals("Primary region wasn't updated from WAL replay!", 1, result.size());
+
+    // cleanup the index table
+    HBaseAdmin admin = UTIL.getHBaseAdmin();
+    admin.disableTable(INDEX_TABLE_NAME);
+    admin.deleteTable(INDEX_TABLE_NAME);
+    admin.close();
+  }
+
+  /**
+   * Create simple HTD with three families: 'a', 'b', and 'c'
+   * @param tableName name of the table descriptor
+   * @return a new table descriptor containing the three families
+   */
+  private HTableDescriptor createBasic3FamilyHTD(final String tableName) {
+    HTableDescriptor htd = new HTableDescriptor(tableName);
+    HColumnDescriptor a = new HColumnDescriptor(Bytes.toBytes("a"));
+    htd.addFamily(a);
+    HColumnDescriptor b = new HColumnDescriptor(Bytes.toBytes("b"));
+    htd.addFamily(b);
+    HColumnDescriptor c = new HColumnDescriptor(Bytes.toBytes("c"));
+    htd.addFamily(c);
+    return htd;
+  }
+
+  /*
+   * @param c
+   * @return WAL with retries set down from 5 to 1 only.
+   * @throws IOException
+   */
+  private HLog createWAL(final Configuration c) throws IOException {
+    HLog wal = new HLog(FileSystem.get(c), logDir, oldLogDir, c);
+    // Set down maximum recovery so the dfsclient doesn't linger retrying something
+    // long gone.
+    HBaseTestingUtility.setMaxRecoveryErrorCount(wal.getOutputStream(), 1);
+    return wal;
+  }
+
+  /*
+   * Run the split. Verify only a single split file is made.
+   * @param c
+   * @return The single split file made
+   * @throws IOException
+   */
+  private Path runWALSplit(final Configuration c) throws IOException {
+    FileSystem fs = FileSystem.get(c);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(c, this.hbaseRootDir, this.logDir,
+      this.oldLogDir, fs);
+    List<Path> splits = logSplitter.splitLog();
+    // Split should generate only 1 file since there's only 1 region
+    assertEquals("splits=" + splits, 1, splits.size());
+    // Make sure the file exists
+    assertTrue(fs.exists(splits.get(0)));
+    LOG.info("Split file=" + splits.get(0));
+    return splits.get(0);
+  }
+
+  private int getKeyValueCount(HTable table) throws IOException {
+    Scan scan = new Scan();
+    scan.setMaxVersions(Integer.MAX_VALUE - 1);
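+    // fetch every version so the count reflects all KeyValues written to the table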
+
+    ResultScanner results = table.getScanner(scan);
+    int count = 0;
+    for (Result res : results) {
+      count += res.list().size();
+      System.out.println(count + ") " + res);
+    }
+    results.close();
+
+    return count;
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndUncompressedWALInHBase_094_9.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndUncompressedWALInHBase_094_9.java b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndUncompressedWALInHBase_094_9.java
new file mode 100644
index 0000000..2f0bf36
--- /dev/null
+++ b/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayWithIndexWritesAndUncompressedWALInHBase_094_9.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+
+import org.apache.hbase.index.util.IndexManagementUtil;
+
+/**
+ * Do the WAL Replay test with the {@link IndexedHLogReader}, rather than the WALEditCodec, and
+ * with WAL compression disabled
+ */
+public class TestWALReplayWithIndexWritesAndUncompressedWALInHBase_094_9 extends TestWALReplayWithIndexWritesAndCompressedWAL {
+
+  @Override
+  protected void configureCluster() throws Exception {
+    Configuration conf = UTIL.getConfiguration();
+    setDefaults(conf);
+    LOG.info("Setting HLog impl to indexed log reader");
+    conf.set(IndexManagementUtil.HLOG_READER_IMPL_KEY, IndexedHLogReader.class.getName());
+
+    // disable WAL compression
+    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, false);
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/IndexTestingUtils.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/IndexTestingUtils.java b/src/test/java/org/apache/hbase/index/IndexTestingUtils.java
new file mode 100644
index 0000000..3738429
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/IndexTestingUtils.java
@@ -0,0 +1,96 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec;
+import org.apache.hadoop.hbase.regionserver.wal.WALEditCodec;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+
+/**
+ * Utility class for testing indexing
+ */
+public class IndexTestingUtils {
+
+  private static final Log LOG = LogFactory.getLog(IndexTestingUtils.class);
+  private static final String MASTER_INFO_PORT_KEY = "hbase.master.info.port";
+  private static final String RS_INFO_PORT_KEY = "hbase.regionserver.info.port";
+  
+  private IndexTestingUtils() {
+    // private ctor for util class
+  }
+
+  public static void setupConfig(Configuration conf) {
+    conf.setInt(MASTER_INFO_PORT_KEY, -1);
+    conf.setInt(RS_INFO_PORT_KEY, -1);
+    // setup our codec, so we get proper replay/write
+    conf.set(WALEditCodec.WAL_EDIT_CODEC_CLASS_KEY, IndexedWALEditCodec.class.getName());
+  }
+  /**
+   * Verify the state of the index table between the given key and time ranges against the list of
+   * expected keyvalues.
+   * @throws IOException
+   */
+  @SuppressWarnings("javadoc")
+  public static void verifyIndexTableAtTimestamp(HTable index1, List<KeyValue> expected,
+      long start, long end, byte[] startKey, byte[] endKey) throws IOException {
+    LOG.debug("Scanning " + Bytes.toString(index1.getTableName()) + " between times (" + start
+        + ", " + end + "] and keys: [" + Bytes.toString(startKey) + ", " + Bytes.toString(endKey)
+        + "].");
+    Scan s = new Scan(startKey, endKey);
+    // s.setRaw(true);
+    s.setMaxVersions();
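+    // fetch all versions within the time range so we can compare against the full expected list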
+    s.setTimeRange(start, end);
+    List<KeyValue> received = new ArrayList<KeyValue>();
+    ResultScanner scanner = index1.getScanner(s);
+    for (Result r : scanner) {
+      received.addAll(r.list());
+      LOG.debug("Received: " + r.list());
+    }
+    scanner.close();
+    assertEquals("Didn't get the expected kvs from the index table!", expected, received);
+  }
+
+  public static void verifyIndexTableAtTimestamp(HTable index1, List<KeyValue> expected, long ts,
+      byte[] startKey) throws IOException {
+    IndexTestingUtils.verifyIndexTableAtTimestamp(index1, expected, ts, startKey, HConstants.EMPTY_END_ROW);
+  }
+
+  public static void verifyIndexTableAtTimestamp(HTable index1, List<KeyValue> expected, long start,
+      byte[] startKey, byte[] endKey) throws IOException {
+    verifyIndexTableAtTimestamp(index1, expected, start, start + 1, startKey, endKey);
+  }
+}

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/StubAbortable.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/StubAbortable.java b/src/test/java/org/apache/hbase/index/StubAbortable.java
new file mode 100644
index 0000000..6ac63ad
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/StubAbortable.java
@@ -0,0 +1,43 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+
+/**
+ * Test helper to stub out an {@link Abortable} when needed.
+ */
+public class StubAbortable implements Abortable {
+  private static final Log LOG = LogFactory.getLog(StubAbortable.class);
+  private boolean abort;
+
+  @Override
+  public void abort(String reason, Throwable e) {
+    LOG.info("Aborting: " + reason, e);
+    abort = true;
+  }
+
+  @Override
+  public boolean isAborted() {
+    return abort;
+  }
+}

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/TableName.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/TableName.java b/src/test/java/org/apache/hbase/index/TableName.java
new file mode 100644
index 0000000..221c696
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/TableName.java
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.rules.TestWatcher;
+import org.junit.runner.Description;
+
+/**
+ * Returns a {@code byte[]} containing the name of the currently running test method.
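+ * <p>
+ * A minimal usage sketch (the rule field name is illustrative):
+ *
+ * <pre>
+ * &#64;Rule
+ * public TableName testName = new TableName();
+ * // ... inside a test method ...
+ * byte[] table = testName.getTableName();
+ * </pre>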
+ */
+public class TableName extends TestWatcher {
+  private String tableName;
+
+  /**
+   * Invoked when a test is about to start
+   */
+  @Override
+  protected void starting(Description description) {
+    tableName = description.getMethodName();
+  }
+
+  public byte[] getTableName() {
+    return Bytes.toBytes(tableName);
+  }
+
+  public String getTableNameString() {
+    return this.tableName;
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/TestFailForUnsupportedHBaseVersions.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/TestFailForUnsupportedHBaseVersions.java b/src/test/java/org/apache/hbase/index/TestFailForUnsupportedHBaseVersions.java
new file mode 100644
index 0000000..07eb95c
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/TestFailForUnsupportedHBaseVersions.java
@@ -0,0 +1,157 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.VersionInfo;
+import org.junit.Test;
+
+import org.apache.hbase.index.covered.example.ColumnGroup;
+import org.apache.hbase.index.covered.example.CoveredColumn;
+import org.apache.hbase.index.covered.example.CoveredColumnIndexSpecifierBuilder;
+
+/**
+ * Test that we correctly fail for versions of HBase that don't support current properties
+ */
+public class TestFailForUnsupportedHBaseVersions {
+  private static final Log LOG = LogFactory.getLog(TestFailForUnsupportedHBaseVersions.class);
+
+  /**
+   * We don't support WAL Compression for HBase &lt; 0.94.9, so we shouldn't even allow the server
+   * to start if both indexing and WAL Compression are enabled for the wrong versions.
+   */
+  @Test
+  public void testDoesNotSupportCompressedWAL() {
+    Configuration conf = HBaseConfiguration.create();
+    IndexTestingUtils.setupConfig(conf);
+    // get the current version
+    String version = VersionInfo.getVersion();
+    
+    // ensure WAL Compression not enabled
+    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, false);
+    
+    //we support all versions without WAL Compression
+    String supported = Indexer.validateVersion(version, conf);
+    assertNull(
+      "WAL Compression wasn't enabled, but version "+version+" of HBase wasn't supported! All versions should"
+          + " support writing without a compressed WAL. Message: "+supported, supported);
+
+    // enable WAL Compression
+    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, true);
+
+    // set the version to something we know isn't supported
+    version = "0.94.4";
+    supported = Indexer.validateVersion(version, conf);
+    assertNotNull("WAL Compression was enabled, but incorrectly marked version as supported",
+      supported);
+    
+    //make sure the first version of 0.94 that supports Indexing + WAL Compression works
+    version = "0.94.9";
+    supported = Indexer.validateVersion(version, conf);
+    assertNull(
+      "WAL Compression was enabled and version "+version+" of HBase should be supported, but wasn't! Message: "+supported, supported);
+    
+    //make sure we support snapshot builds too
+    version = "0.94.9-SNAPSHOT";
+    supported = Indexer.validateVersion(version, conf);
+    assertNull(
+      "WAL Compression was enabled and version "+version+" of HBase should be supported, but wasn't! Message: "+supported, supported);
+  }
+
+  /**
+   * Test that we correctly abort a RegionServer when we run tests with an unsupported HBase
+   * version. Fully exercising this test requires running it against both a version of HBase that
+   * doesn't support indexing with WAL Compression and one that does. Currently the unsupported
+   * version (0.94.4) is the default, so just running 'mvn test' covers that case; against a
+   * version of HBase that does support WAL Compression the test passes without aborting. Therefore,
+   * to fully test this functionality, we need to run the test against both a supported and an
+   * unsupported version of HBase (as long as we want to support a version of HBase that doesn't
+   * support custom WAL Codecs).
+   * @throws Exception on failure
+   */
+  @Test(timeout = 300000 /* 5 mins */)
+  public void testDoesNotStartRegionServerForUnsupportedCompressionAndVersion() throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    IndexTestingUtils.setupConfig(conf);
+    // enable WAL Compression
+    conf.setBoolean(HConstants.ENABLE_WAL_COMPRESSION, true);
+
+    // check the version to see if it isn't supported
+    String version = VersionInfo.getVersion();
+    boolean supported = false;
+    if (Indexer.validateVersion(version, conf) == null) {
+      supported = true;
+    }
+
+    // start the minicluster
+    HBaseTestingUtility util = new HBaseTestingUtility(conf);
+    util.startMiniCluster();
+
+    // setup the primary table
+    HTableDescriptor desc = new HTableDescriptor(
+        "testDoesNotStartRegionServerForUnsupportedCompressionAndVersion");
+    byte[] family = Bytes.toBytes("f");
+    desc.addFamily(new HColumnDescriptor(family));
+
+    // enable indexing to a non-existent index table
+    String indexTableName = "INDEX_TABLE";
+    ColumnGroup fam1 = new ColumnGroup(indexTableName);
+    fam1.add(new CoveredColumn(family, CoveredColumn.ALL_QUALIFIERS));
+    CoveredColumnIndexSpecifierBuilder builder = new CoveredColumnIndexSpecifierBuilder();
+    builder.addIndexGroup(fam1);
+    builder.build(desc);
+
+    // get a reference to the regionserver, so we can ensure it aborts
+    HRegionServer server = util.getMiniHBaseCluster().getRegionServer(0);
+
+    // create the primary table
+    HBaseAdmin admin = util.getHBaseAdmin();
+    if (supported) {
+      admin.createTable(desc);
+      assertFalse("Hosting regeion server failed, even the HBase version (" + version
+          + ") supports WAL Compression.", server.isAborted());
+    } else {
+      admin.createTableAsync(desc, null);
+
+      // wait for the regionserver to abort - if this doesn't occur in the timeout, assume it's
+      // broken.
+      while (!server.isAborted()) {
+        LOG.debug("Waiting on regionserver to abort..");
+      }
+    }
+
+    // cleanup
+    util.shutdownMiniCluster();
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/covered/CoveredIndexCodecForTesting.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/covered/CoveredIndexCodecForTesting.java b/src/test/java/org/apache/hbase/index/covered/CoveredIndexCodecForTesting.java
new file mode 100644
index 0000000..d0b77da
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/covered/CoveredIndexCodecForTesting.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index.covered;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+
+import org.apache.phoenix.index.BaseIndexCodec;
+
+/**
+ * An {@link IndexCodec} for testing that allows you to specify the index updates/deletes,
+ * regardless of the current table's state.
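+ * <p>
+ * Typical flow (a sketch, not prescriptive): seed the codec with {@link #addIndexUpserts(IndexUpdate...)}
+ * and/or {@link #addIndexDelete(IndexUpdate...)}, run the code under test, then call {@link #clear()}
+ * between cases.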
+ */
+public class CoveredIndexCodecForTesting extends BaseIndexCodec {
+
+  private List<IndexUpdate> deletes = new ArrayList<IndexUpdate>();
+  private List<IndexUpdate> updates = new ArrayList<IndexUpdate>();
+
+  public void addIndexDelete(IndexUpdate... deletes) {
+    this.deletes.addAll(Arrays.asList(deletes));
+  }
+  
+  public void addIndexUpserts(IndexUpdate... updates) {
+    this.updates.addAll(Arrays.asList(updates));
+  }
+
+  public void clear() {
+    this.deletes.clear();
+    this.updates.clear();
+  }
+  
+  @Override
+  public Iterable<IndexUpdate> getIndexDeletes(TableState state) {
+    return this.deletes;
+  }
+
+  @Override
+  public Iterable<IndexUpdate> getIndexUpserts(TableState state) {
+    return this.updates;
+  }
+
+  @Override
+  public void initialize(RegionCoprocessorEnvironment env) throws IOException {
+    // noop
+  }
+
+  @Override
+  public boolean isEnabled(Mutation m) {
+    return true;
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-phoenix/blob/c5b80246/src/test/java/org/apache/hbase/index/covered/TestCoveredColumns.java
----------------------------------------------------------------------
diff --git a/src/test/java/org/apache/hbase/index/covered/TestCoveredColumns.java b/src/test/java/org/apache/hbase/index/covered/TestCoveredColumns.java
new file mode 100644
index 0000000..9a15295
--- /dev/null
+++ b/src/test/java/org/apache/hbase/index/covered/TestCoveredColumns.java
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.index.covered;
+
+import static org.junit.Assert.assertEquals;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+import org.apache.hbase.index.covered.update.ColumnReference;
+
+public class TestCoveredColumns {
+
+  private static final byte[] fam = Bytes.toBytes("fam");
+  private static final byte[] qual = Bytes.toBytes("qual");
+
+  @Test
+  public void testCovering() {
+    ColumnReference ref = new ColumnReference(fam, qual);
+    CoveredColumns columns = new CoveredColumns();
+    assertEquals("Should have only found a single column to cover", 1, columns
+        .findNonCoveredColumns(Arrays.asList(ref)).size());
+
+    columns.addColumn(ref);
+    assertEquals("Shouldn't have any columns to cover", 0,
+      columns.findNonCoveredColumns(Arrays.asList(ref)).size());
+  }
+}
\ No newline at end of file