Posted to commits@phoenix.apache.org by mu...@apache.org on 2014/01/31 21:42:04 UTC

svn commit: r1563252 [5/6] - in /incubator/phoenix: ./ site/publish/ site/publish/language/ site/source/ site/source/src/ site/source/src/site/ site/source/src/site/bin/ site/source/src/site/markdown/ site/source/src/site/resources/ site/source/src/sit...

Modified: incubator/phoenix/site/publish/team.html
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/publish/team.html?rev=1563252&r1=1563251&r2=1563252&view=diff
==============================================================================
--- incubator/phoenix/site/publish/team.html (original)
+++ incubator/phoenix/site/publish/team.html Fri Jan 31 20:42:02 2014
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at Jan 28, 2014
+ Generated by Apache Maven Doxia at Jan 31, 2014
  Rendered using Maven Reflow Skin 1.0.0 (http://andriusvelykis.github.com/reflow-maven-skin)
 -->
 <html  xml:lang="en" lang="en">
@@ -32,7 +32,7 @@
 
 	</head>
 
-	<body class="page-team project-phoenix-core" data-spy="scroll" data-offset="60" data-target="#toc-scroll-target">
+	<body class="page-team project-phoenix-site" data-spy="scroll" data-offset="60" data-target="#toc-scroll-target">
 
 		<div class="navbar navbar-fixed-top">
 			<div class="navbar-inner">
@@ -73,6 +73,8 @@
 									<li><a href="http://phoenix.incubator.apache.org/tuning.html" title="Tuning" class="externalLink">Tuning </a></li>
 									<li class="divider"><a href="" title=""> </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/salted.html" title="Salted Tables" class="externalLink">Salted Tables </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/paged.html" title="Paged Queries" class="externalLink">Paged Queries </a></li>
@@ -307,6 +309,12 @@
 							<a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a>
 						</li>
 						<li>
+							<a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a>
+						</li>
+						<li>
 							<a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a>
 						</li>
 						<li>
@@ -361,7 +369,7 @@
 			<div class="span12">
 				<p class="pull-right"><a href="#">Back to top</a></p>
 				<p class="copyright">Copyright &copy;2014 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.</p>
-				<p class="version-date"><span class="projectVersion">Version: 3.0.0-SNAPSHOT. </span><span class="publishDate">Last Published: 2014-01-28. </span></p>
+				<p class="version-date"><span class="projectVersion">Version: 3.0.0-SNAPSHOT. </span><span class="publishDate">Last Published: 2014-01-31. </span></p>
 			</div>
 		</div>
 	</div>

Modified: incubator/phoenix/site/publish/tuning.html
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/publish/tuning.html?rev=1563252&r1=1563251&r2=1563252&view=diff
==============================================================================
--- incubator/phoenix/site/publish/tuning.html (original)
+++ incubator/phoenix/site/publish/tuning.html Fri Jan 31 20:42:02 2014
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at Jan 28, 2014
+ Generated by Apache Maven Doxia at Jan 31, 2014
  Rendered using Maven Reflow Skin 1.0.0 (http://andriusvelykis.github.com/reflow-maven-skin)
 -->
 <html  xml:lang="en" lang="en">
@@ -32,7 +32,7 @@
 
 	</head>
 
-	<body class="page-tuning project-phoenix-core" data-spy="scroll" data-offset="60" data-target="#toc-scroll-target">
+	<body class="page-tuning project-phoenix-site" data-spy="scroll" data-offset="60" data-target="#toc-scroll-target">
 
 		<div class="navbar navbar-fixed-top">
 			<div class="navbar-inner">
@@ -73,6 +73,8 @@
 									<li><a href="http://phoenix.incubator.apache.org/tuning.html" title="Tuning" class="externalLink">Tuning </a></li>
 									<li class="divider"><a href="" title=""> </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/salted.html" title="Salted Tables" class="externalLink">Salted Tables </a></li>
 									<li><a href="http://phoenix.incubator.apache.org/paged.html" title="Paged Queries" class="externalLink">Paged Queries </a></li>
@@ -385,6 +387,12 @@
 							<a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a>
 						</li>
 						<li>
+							<a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a>
+						</li>
+						<li>
 							<a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a>
 						</li>
 						<li>
@@ -439,7 +447,7 @@
 			<div class="span12">
 				<p class="pull-right"><a href="#">Back to top</a></p>
 				<p class="copyright">Copyright &copy;2014 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.</p>
-				<p class="version-date"><span class="projectVersion">Version: 3.0.0-SNAPSHOT. </span><span class="publishDate">Last Published: 2014-01-28. </span></p>
+				<p class="version-date"><span class="projectVersion">Version: 3.0.0-SNAPSHOT. </span><span class="publishDate">Last Published: 2014-01-31. </span></p>
 			</div>
 		</div>
 	</div>

Added: incubator/phoenix/site/publish/views.html
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/publish/views.html?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/publish/views.html (added)
+++ incubator/phoenix/site/publish/views.html Fri Jan 31 20:42:02 2014
@@ -0,0 +1,341 @@
+
+<!DOCTYPE html>
+<!--
+ Generated by Apache Maven Doxia at Jan 31, 2014
+ Rendered using Maven Reflow Skin 1.0.0 (http://andriusvelykis.github.com/reflow-maven-skin)
+-->
+<html  xml:lang="en" lang="en">
+
+	<head>
+		<meta charset="UTF-8" />
+		<title>Views | Apache Phoenix</title>
+		<meta name="viewport" content="width=device-width, initial-scale=1.0" />
+		<meta name="description" content="" />
+		<meta http-equiv="content-language" content="en" />
+
+		<link href="http://netdna.bootstrapcdn.com/bootswatch/2.2.2/united/bootstrap.min.css" rel="stylesheet" />
+		<link href="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.2.2/css/bootstrap-responsive.min.css" rel="stylesheet" />
+		<link href="./css/bootswatch.css" rel="stylesheet" />
+		<link href="./css/reflow-skin.css" rel="stylesheet" />
+		
+		<link href="http://yandex.st/highlightjs/7.3/styles/default.min.css" rel="stylesheet" />
+		
+		<link href="./css/lightbox.css" rel="stylesheet" />
+		
+		<link href="./css/site.css" rel="stylesheet" />
+		<link href="./css/print.css" rel="stylesheet" media="print" />
+		
+		<!-- Le HTML5 shim, for IE6-8 support of HTML5 elements -->
+		<!--[if lt IE 9]>
+			<script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
+		<![endif]-->
+
+	</head>
+
+	<body class="page-views project-phoenix-site" data-spy="scroll" data-offset="60" data-target="#toc-scroll-target">
+
+		<div class="navbar navbar-fixed-top">
+			<div class="navbar-inner">
+				<div class="container">
+					<a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
+						<span class="icon-bar"></span>
+						<span class="icon-bar"></span>
+						<span class="icon-bar"></span>
+					</a>
+					<a class="brand" href="index.html"><div class="xtoplogo"></div></a>
+					<div class="nav-collapse collapse">
+						<ul class="nav pull-right">
+							<li class="dropdown">
+								<a href="#" class="dropdown-toggle" data-toggle="dropdown">About <b class="caret"></b></a>
+								<ul class="dropdown-menu">
+									<li><a href="http://phoenix.incubator.apache.org/" title="Overview" class="externalLink">Overview </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/recent.html" title="New Features" class="externalLink">New Features </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/roadmap.html" title="Roadmap" class="externalLink">Roadmap </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/performance.html" title="Performance" class="externalLink">Performance </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/team.html" title="Team" class="externalLink">Team </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/mailing_list.html" title="Mailing Lists" class="externalLink">Mailing Lists </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/source.html" title="Source Repository" class="externalLink">Source Repository </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/issues.html" title="Issue Tracking" class="externalLink">Issue Tracking </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/download.html" title="Download" class="externalLink">Download </a></li>
+									<li class="divider"><a href="" title=""> </a></li>
+									<li><a href="http://www.apache.org/licenses/" title="License" class="externalLink">License </a></li>
+									<li><a href="http://www.apache.org/foundation/sponsorship.html" title="Sponsorship" class="externalLink">Sponsorship </a></li>
+									<li><a href="http://www.apache.org/foundation/thanks.html" title="Thanks" class="externalLink">Thanks </a></li>
+									<li><a href="http://www.apache.org/security/" title="Security" class="externalLink">Security </a></li>
+								</ul>
+							</li>
+							<li class="dropdown">
+								<a href="#" class="dropdown-toggle" data-toggle="dropdown">Using <b class="caret"></b></a>
+								<ul class="dropdown-menu">
+									<li><a href="http://phoenix.incubator.apache.org/faq.html" title="F.A.Q." class="externalLink">F.A.Q. </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/Phoenix-in-15-minutes-or-less.html" title="Quick Start" class="externalLink">Quick Start </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/building.html" title="Building" class="externalLink">Building </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/tuning.html" title="Tuning" class="externalLink">Tuning </a></li>
+									<li class="divider"><a href="" title=""> </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/salted.html" title="Salted Tables" class="externalLink">Salted Tables </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/paged.html" title="Paged Queries" class="externalLink">Paged Queries </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/dynamic_columns.html" title="Dynamic Columns" class="externalLink">Dynamic Columns </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/skip_scan.html" title="Skip Scan" class="externalLink">Skip Scan </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/mr_dataload.html" title="Bulk Loading" class="externalLink">Bulk Loading </a></li>
+									<li class="divider"><a href="" title=""> </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/phoenix_on_emr.html" title="Amazon EMR Support" class="externalLink">Amazon EMR Support </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/flume.html" title="Apache Flume Plugin" class="externalLink">Apache Flume Plugin </a></li>
+								</ul>
+							</li>
+							<li class="dropdown">
+								<a href="#" class="dropdown-toggle" data-toggle="dropdown">Reference <b class="caret"></b></a>
+								<ul class="dropdown-menu">
+									<li><a href="http://phoenix.incubator.apache.org/language/index.html" title="Grammar" class="externalLink">Grammar </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/language/functions.html" title="Functions" class="externalLink">Functions </a></li>
+									<li><a href="http://phoenix.incubator.apache.org/language/datatypes.html" title="Datatypes" class="externalLink">Datatypes </a></li>
+								</ul>
+							</li>
+						</ul>
+					</div><!--/.nav-collapse -->
+				</div>
+			</div>
+		</div>
+		
+	<div class="container">
+	
+	<!-- Masthead
+	================================================== -->
+	</header>
+
+	<div class="main-body">
+	<div class="row">
+		<div class="span12">
+			<div class="body-content">
+<div class="page-header">
+ <h1>Views</h1>
+</div> 
+<p>Phoenix now supports standard SQL view syntax (with some limitations), enabling multiple virtual tables to share the same underlying physical HBase table. This is especially important in HBase, as you cannot realistically expect to maintain more than perhaps a hundred physical tables and continue to get reasonable performance.</p> 
+<p>For example, given the following table definition that defines a base table to collect product metrics:</p> 
+<div class="source"> 
+ <pre>CREATE TABLE product_metrics (
+    metric_type CHAR(1),
+    created_by VARCHAR, 
+    created_date DATE, 
+    metric_id INTEGER
+    CONSTRAINT pk PRIMARY KEY (metric_type, created_by, created_date, metric_id));
+</pre> 
+</div> 
+<p>You may define the following view:</p> 
+<div class="source"> 
+ <pre>CREATE VIEW mobile_product_metrics (carrier VARCHAR, dropped_calls BIGINT) AS
+SELECT * FROM product_metrics
+WHERE metric_type = 'm';
+</pre> 
+</div> 
+<p>In this case, the same underlying physical HBase table (i.e. PRODUCT_METRICS) stores all of the data. Note that unlike standard SQL views, you may define additional columns for your view: the view inherits all of the columns from its base table and may optionally add new KeyValue columns of its own. You may also add these columns after the fact with an ALTER TABLE statement. </p> 
+<div class="section"> 
+ <h2 id="Updatable_Views">Updatable Views</h2> 
+ <p>If your view uses only simple equality expressions in the WHERE clause, you are also allowed to issue DML against the view. These views are termed <i>updatable views</i>. For example, in this case you could issue the following UPSERT statement:</p> 
+ <div class="source"> 
+  <pre>UPSERT INTO mobile_product_metrics(created_by, created_date, metric_id, carrier, dropped_calls)
+VALUES('John Doe', CURRENT_DATE(), NEXT VALUE FOR metric_seq, 'Verizon', 20);
+</pre> 
+ </div> 
+ <p>In this case, the row will be stored in the PRODUCT_METRICS HBase table, and the metric_type column value will be inferred to be 'm', since the VIEW defines it as such.</p> 
+ <p>Also, queries done through the view will automatically apply the WHERE clause filter. For example:</p> 
+ <div class="source"> 
+  <pre>SELECT sum(dropped_calls) FROM mobile_product_metrics WHERE carrier='Verizon'
+</pre> 
+ </div> 
+ <p>This would sum all of the dropped_calls across product_metrics rows with a metric_type of 'm' and a carrier of 'Verizon'.</p> 
+</div> 
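The filter-through-the-view behavior described above is standard SQL semantics. As an illustrative aside (not Phoenix code), a minimal sketch using Python's built-in sqlite3 module shows how a view's WHERE clause is applied automatically to queries made through it:

```python
import sqlite3

# Illustrative only: sqlite3 stands in for Phoenix here, to show the
# standard SQL behavior that a view's WHERE clause filters queries
# issued through the view.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE product_metrics (
    metric_type TEXT, created_by TEXT, metric_id INTEGER, dropped_calls INTEGER)""")
conn.execute("""CREATE VIEW mobile_product_metrics AS
    SELECT * FROM product_metrics WHERE metric_type = 'm'""")
conn.executemany(
    "INSERT INTO product_metrics VALUES (?, ?, ?, ?)",
    [("m", "John Doe", 1, 20),   # visible through the view
     ("w", "Jane Doe", 2, 99)])  # filtered out by metric_type = 'm'

total, = conn.execute(
    "SELECT SUM(dropped_calls) FROM mobile_product_metrics").fetchone()
print(total)  # 20: only the metric_type = 'm' row is counted
```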
+<div class="section"> 
+ <h2 id="Read-only_Views">Read-only Views</h2> 
+ <p>Views may also be defined with more complex WHERE clauses, but in that case you cannot issue DML against them, as you'll get a ReadOnlyException. You are still allowed to query through them, and their WHERE clauses will be in effect as with standard SQL views. </p> 
+ <p>As expected, you may also create a VIEW on another VIEW to further filter the data set. The same rules as above apply: if only simple equality expressions are used in the VIEW and its parent VIEW(s), the new view is updatable as well; otherwise it's read-only.</p> 
+ <p>Note that creating a read-only VIEW directly over an existing HBase table, as previously supported, continues to work.</p> 
+</div> 
+<div class="section"> 
+ <h2 id="Indexes_on_Views">Indexes on Views</h2> 
+ <p>In addition, you may create an INDEX over a VIEW, just as with a TABLE. This is particularly useful to improve query performance over newly added columns on a VIEW, since it provides a way of doing point lookups based on these column values.</p> 
+</div> 
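As an illustrative sketch (index and column names hypothetical, following Phoenix's CREATE INDEX syntax for tables), an index over the view's added columns might look like:

```sql
-- Hypothetical: enable point lookups by carrier over the view's new columns
CREATE INDEX mobile_metrics_carrier_idx
    ON mobile_product_metrics (carrier)
    INCLUDE (dropped_calls);
```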
+<div class="section"> 
+ <h2 id="Limitations">Limitations</h2> 
+ <p>In our Phoenix 3.0 release, views have the following restrictions:</p> 
+ <ol style="list-style-type: decimal"> 
+  <li>The primary key constraint may not be changed by a VIEW.</li> 
+  <li>A TABLE that has a VIEW may not be dropped; all of its VIEWs must be dropped first. In the future, we may support a CASCADE delete.</li> 
+  <li>Single-table only: you may not create a VIEW over multiple, joined tables. This will be supported in a future release.</li> 
+  <li>All columns must be projected into a VIEW (i.e. only the CREATE VIEW ... AS SELECT * syntax is supported). Note, however, that you may drop non-primary-key columns inherited from the base table from a VIEW after it is created, through the ALTER TABLE command. Providing a subset of columns and/or expressions in the SELECT clause will be supported in a future release.</li> 
+ </ol> 
+</div>
+			</div>
+		</div>
+	</div>
+	</div>
+
+	</div><!-- /container -->
+	
+	<!-- Footer
+	================================================== -->
+	<footer class="well">
+		<div class="container">
+			<div class="row">
+				<div class="span3 bottom-nav">
+					<ul class="nav nav-list">
+						<li class="nav-header">About</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/" title="Overview" class="externalLink">Overview </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/recent.html" title="New Features" class="externalLink">New Features </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/roadmap.html" title="Roadmap" class="externalLink">Roadmap </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/performance.html" title="Performance" class="externalLink">Performance </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/team.html" title="Team" class="externalLink">Team </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/mailing_list.html" title="Mailing Lists" class="externalLink">Mailing Lists </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/source.html" title="Source Repository" class="externalLink">Source Repository </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/issues.html" title="Issue Tracking" class="externalLink">Issue Tracking </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/download.html" title="Download" class="externalLink">Download </a>
+						</li>
+						<li class="divider">
+							<a href="#" title=""> </a>
+						</li>
+						<li>
+							<a href="http://www.apache.org/licenses/" title="License" class="externalLink">License </a>
+						</li>
+						<li>
+							<a href="http://www.apache.org/foundation/sponsorship.html" title="Sponsorship" class="externalLink">Sponsorship </a>
+						</li>
+						<li>
+							<a href="http://www.apache.org/foundation/thanks.html" title="Thanks" class="externalLink">Thanks </a>
+						</li>
+						<li>
+							<a href="http://www.apache.org/security/" title="Security" class="externalLink">Security </a>
+						</li>
+					</ul>
+				</div>
+				<div class="span3 bottom-nav">
+					<ul class="nav nav-list">
+						<li class="nav-header">Using</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/faq.html" title="F.A.Q." class="externalLink">F.A.Q. </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/Phoenix-in-15-minutes-or-less.html" title="Quick Start" class="externalLink">Quick Start </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/building.html" title="Building" class="externalLink">Building </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/tuning.html" title="Tuning" class="externalLink">Tuning </a>
+						</li>
+						<li class="divider">
+							<a href="#" title=""> </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/secondary_indexing.html" title="Secondary Indexes" class="externalLink">Secondary Indexes </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/views.html" title="Views" class="externalLink">Views </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/multi-tenancy.html" title="Multi tenancy" class="externalLink">Multi tenancy </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/sequences.html" title="Sequences" class="externalLink">Sequences </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/salted.html" title="Salted Tables" class="externalLink">Salted Tables </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/paged.html" title="Paged Queries" class="externalLink">Paged Queries </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/dynamic_columns.html" title="Dynamic Columns" class="externalLink">Dynamic Columns </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/skip_scan.html" title="Skip Scan" class="externalLink">Skip Scan </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/mr_dataload.html" title="Bulk Loading" class="externalLink">Bulk Loading </a>
+						</li>
+						<li class="divider">
+							<a href="#" title=""> </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/phoenix_on_emr.html" title="Amazon EMR Support" class="externalLink">Amazon EMR Support </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/flume.html" title="Apache Flume Plugin" class="externalLink">Apache Flume Plugin </a>
+						</li>
+					</ul>
+				</div>
+				<div class="span3 bottom-nav">
+					<ul class="nav nav-list">
+						<li class="nav-header">Reference</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/language/index.html" title="Grammar" class="externalLink">Grammar </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/language/functions.html" title="Functions" class="externalLink">Functions </a>
+						</li>
+						<li>
+							<a href="http://phoenix.incubator.apache.org/language/datatypes.html" title="Datatypes" class="externalLink">Datatypes </a>
+						</li>
+					</ul>
+				</div>
+				<div class="span3 bottom-description">
+					<form action="https://www.google.com/search" method="get"><input value="phoenix.incubator.apache.org" name="sitesearch" type="hidden"><input placeholder="Search the site&hellip;" required="required" style="width:170px;" size="18" name="q" id="query" type="search"></form>
+				</div>
+			</div>
+		</div>
+	</footer>
+		
+	<div class="container subfooter">
+		<div class="row">
+			<div class="span12">
+				<p class="pull-right"><a href="#">Back to top</a></p>
+				<p class="copyright">Copyright &copy;2014 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved.</p>
+				<p class="version-date"><span class="projectVersion">Version: 3.0.0-SNAPSHOT. </span><span class="publishDate">Last Published: 2014-01-31. </span></p>
+			</div>
+		</div>
+	</div>
+
+	<!-- Le javascript
+	================================================== -->
+	<!-- Placed at the end of the document so the pages load faster -->
+
+	<!-- Fallback jQuery loading from Google CDN:
+	     http://stackoverflow.com/questions/1014203/best-way-to-use-googles-hosted-jquery-but-fall-back-to-my-hosted-library-on-go -->
+	<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
+	<script type="text/javascript">
+		if (typeof jQuery == 'undefined')
+		{
+			document.write(unescape("%3Cscript src='./js/jquery-1.8.3.min.js' type='text/javascript'%3E%3C/script%3E"));
+		}
+	</script>
+	
+	<script src="http://netdna.bootstrapcdn.com/twitter-bootstrap/2.2.2/js/bootstrap.min.js"></script>
+	<script src="./js/lightbox.js"></script>
+	<script src="./js/jquery.smooth-scroll.min.js"></script>
+	<!-- back button support for smooth scroll -->
+	<script src="./js/jquery.ba-bbq.min.js"></script>
+	<script src="http://yandex.st/highlightjs/7.3/highlight.min.js"></script>
+
+	<script src="./js/reflow-skin.js"></script>
+	
+	</body>
+</html>

Added: incubator/phoenix/site/source/pom.xml
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/pom.xml?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/pom.xml (added)
+++ incubator/phoenix/site/source/pom.xml Fri Jan 31 20:42:02 2014
@@ -0,0 +1,82 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache</groupId>
+    <artifactId>phoenix</artifactId>
+    <version>3.0.0-SNAPSHOT</version>
+  </parent>
+  <artifactId>phoenix-site</artifactId>
+  <name>Phoenix</name>
+  <description>Phoenix site</description>
+
+  <licenses>
+      <license>
+          <name>The Apache Software License, Version 2.0</name>
+          <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+          <distribution>repo</distribution>
+          <comments />
+      </license>
+  </licenses>
+
+  <organization>
+      <name>Apache Software Foundation</name>
+      <url>http://www.apache.org</url>
+  </organization>
+
+  <build>
+  <directory>${project.basedir}/t1</directory>
+    <plugins>
+     <plugin>
+       <groupId>org.apache.maven.plugins</groupId>
+       <artifactId>maven-site-plugin</artifactId>
+       <version>3.2</version>
+       <dependencies>
+        <dependency>
+           <groupId>org.apache.maven.doxia</groupId>
+           <artifactId>doxia-module-markdown</artifactId>
+           <version>1.3</version>
+         </dependency>
+         <dependency>
+           <groupId>lt.velykis.maven.skins</groupId>
+           <artifactId>reflow-velocity-tools</artifactId>
+           <version>1.0.0</version>
+         </dependency>
+         <dependency>
+           <groupId>org.apache.velocity</groupId>
+           <artifactId>velocity</artifactId>
+           <version>1.7</version>
+         </dependency>
+       </dependencies>
+       <configuration>
+          <outputDirectory>${basedir}/../publish</outputDirectory>
+         <reportPlugins>
+           <plugin>
+             <groupId>org.codehaus.mojo</groupId>
+             <artifactId>findbugs-maven-plugin</artifactId>
+	         <version>2.5.2</version>
+           </plugin>
+         </reportPlugins>
+       </configuration>
+     </plugin>
+     <plugin>
+       <artifactId>exec-maven-plugin</artifactId>
+       <groupId>org.codehaus.mojo</groupId>
+       <version>1.2.1</version>
+       <executions>
+        <execution>
+          <id>Merge Language Reference</id>
+           <phase>site</phase>
+           <goals>
+             <goal>exec</goal>
+           </goals>
+           <configuration>
+             <executable>${basedir}/src/site/bin/merge.sh</executable>
+           </configuration>
+         </execution>
+       </executions>
+      </plugin>
+    </plugins>
+  </build>
+
+</project>

Added: incubator/phoenix/site/source/src/site/bin/merge.jar
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/bin/merge.jar?rev=1563252&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/phoenix/site/source/src/site/bin/merge.jar
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: incubator/phoenix/site/source/src/site/bin/merge.sh
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/bin/merge.sh?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/bin/merge.sh (added)
+++ incubator/phoenix/site/source/src/site/bin/merge.sh Fri Jan 31 20:42:02 2014
@@ -0,0 +1,11 @@
+current_dir=$(cd "$(dirname "$0")"; pwd)
+cd "$current_dir"
+DOC_SRC="../../../../../phoenix-docs/docs/html"
+SITE_TARGET="../../../../publish"
+java -jar merge.jar $DOC_SRC/index.html $SITE_TARGET/language/index.html
+java -jar merge.jar $DOC_SRC/functions.html $SITE_TARGET/language/functions.html
+java -jar merge.jar $DOC_SRC/datatypes.html $SITE_TARGET/language/datatypes.html
+cd $SITE_TARGET
+
+grep -rl class=\"nav-collapse\" . | xargs sed -i 's/class=\"nav-collapse\"/class=\"nav-collapse collapse\"/g';grep -rl class=\"active\" . | xargs sed -i 's/class=\"active\"/class=\"divider\"/g'
+grep -rl "dropdown active" . | xargs sed -i 's/dropdown active/dropdown/g'
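The two post-processing passes above rely on a `grep -rl | xargs sed -i` pipeline. A self-contained sketch of that idiom (GNU sed assumed; the file contents and names here are throwaway examples, not the real site files):

```shell
#!/bin/sh
# Demonstrate the grep -rl | xargs sed -i idiom used by merge.sh:
# find files containing a pattern, then rewrite that pattern in place.
tmpdir=$(mktemp -d)
printf '<div class="nav-collapse">\n' > "$tmpdir/page.html"
printf '<p>no match here</p>\n'       > "$tmpdir/other.html"

# grep -rl emits only the paths of files that contain the pattern,
# so sed never touches files without a match.
grep -rl 'class="nav-collapse"' "$tmpdir" \
  | xargs sed -i 's/class="nav-collapse"/class="nav-collapse collapse"/g'

grep -q 'class="nav-collapse collapse"' "$tmpdir/page.html" && echo rewritten
rm -rf "$tmpdir"
```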

Propchange: incubator/phoenix/site/source/src/site/bin/merge.sh
------------------------------------------------------------------------------
    svn:executable = *

Added: incubator/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/Phoenix-in-15-minutes-or-less.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,80 @@
+# Phoenix in 15 minutes or less
+
+*<strong>What is this new [Phoenix](index.html) thing I've been hearing about?</strong>*<br/>
+Phoenix is an open source SQL skin for HBase. You use the standard JDBC APIs instead of the regular HBase client APIs to create tables, insert data, and query your HBase data.
+
+*<strong>Doesn't putting an extra layer between my application and HBase just slow things down?</strong>*<br/>
+Actually, no. Phoenix achieves [performance](performance.html) as good as, or likely better than, what you would get hand-coding it yourself (not to mention with a heck of a lot less code) by:
+* compiling your SQL queries to native HBase scans
+* determining the optimal start and stop for your scan key
+* orchestrating the parallel execution of your scans
+* bringing the computation to the data by
+  * pushing the predicates in your where clause to a server-side filter
+  * executing aggregate queries through server-side hooks (called co-processors)
+
+In addition to these items, we've got some interesting enhancements in the works to further optimize performance:
+* secondary indexes to improve performance for queries on non row key columns 
+* stats gathering to improve parallelization and guide choices between optimizations 
+* skip scan filter to optimize IN, LIKE, and OR queries
+* optional salting of row keys to evenly distribute write load
+
+*<strong>Ok, so it's fast. But why SQL? It's so 1970s</strong>*<br/>
+Well, that's kind of the point: give folks something with which they're already familiar. What better way to spur the adoption of HBase? On top of that, using JDBC and SQL:
+* Reduces the amount of code users need to write
+* Allows for performance optimizations transparent to the user
+* Opens the door for leveraging and integrating lots of existing tooling
+
+*<strong>But how can SQL support my favorite HBase technique of x,y,z?</strong>*<br/>
+Didn't make it to the last HBase Meetup did you? SQL is just a way of expressing *<strong>what you want to get</strong>* not *<strong>how you want to get it</strong>*. Check out my [presentation](http://files.meetup.com/1350427/IntelPhoenixHBaseMeetup.ppt) for various existing and to-be-done Phoenix features to support your favorite HBase trick. Have ideas of your own? We'd love to hear about them: file an [issue](issues.html) for us and/or join our [mailing list](mailing_list.html).
+
+*<strong>Blah, blah, blah - I just want to get started!</strong>*<br/>
+Ok, great! Just follow our [install instructions](download.html#Installation):
+* [download](download.html) and expand our installation tar
+* copy the phoenix jar into the HBase lib directory of every region server
+* restart the region servers
+* add the phoenix client jar to the classpath of your HBase client
+* download and [setup SQuirrel](download.html#SQL-Client) as your SQL client so you can issue adhoc SQL against your HBase cluster
+
+*<strong>I don't want to download and setup anything else!</strong>*<br/>
+Ok, fair enough - you can create your own SQL scripts and execute them using our command line tool instead. Let's walk through an example now. In the bin directory of your install location:
+* Create us_population.sql file
+<pre><code>CREATE TABLE IF NOT EXISTS us_population (
+      state CHAR(2) NOT NULL,
+      city VARCHAR NOT NULL,
+      population BIGINT
+      CONSTRAINT my_pk PRIMARY KEY (state, city));</code></pre>
+* Create us_population.csv file
+<pre><code>NY,New York,8143197
+CA,Los Angeles,3844829
+IL,Chicago,2842518
+TX,Houston,2016582
+PA,Philadelphia,1463281
+AZ,Phoenix,1461575
+TX,San Antonio,1256509
+CA,San Diego,1255540
+TX,Dallas,1213825
+CA,San Jose,912332
+</code></pre>
+* Create us_population_queries.sql file
+<pre><code>SELECT state as "State",count(city) as "City Count",sum(population) as "Population Sum"
+FROM us_population
+GROUP BY state
+ORDER BY sum(population) DESC;
+</code></pre>
+* Execute the following command from a command terminal
+<pre><code>./psql.sh &lt;your_zookeeper_quorum&gt; us_population.sql us_population.csv us_population_queries.sql
+</code></pre>
+
+Congratulations! You've just created your first Phoenix table, inserted data into it, and executed an aggregate query with just a few lines of code in 15 minutes or less! 
+
+*<strong>Big deal - 10 rows! What else you got?</strong>*<br/>
+Ok, ok - tough crowd. Check out our <code>bin/performance.sh</code> script to create as many rows as you want, for any schema you come up with, and run timed queries against it.
+
+*<strong>Why is it called Phoenix anyway? Did some other project crash and burn and this is the next generation?</strong>*<br/>
+I'm sorry, but we're out of time and space, so we'll have to answer that next time!
+
+Thanks for your time,<br/>
+James Taylor<br/>
+http://phoenix-hbase.blogspot.com/
+<br/>
+@JamesPlusPlus<br/>

Added: incubator/phoenix/site/source/src/site/markdown/building.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/building.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/building.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/building.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,25 @@
+# Building Phoenix Project
+
+Phoenix is a fully mavenized project. That means you can build simply by doing:
+
+```
+$ mvn package
+```
+
+This builds, tests, and packages Phoenix, and puts the resulting jars (phoenix-[version].jar and phoenix-[version]-client.jar) in the generated phoenix-core/target/ and phoenix-assembly/target/ directories, respectively.
+
+To build, but skip running the tests, you can do:
+
+```
+ $ mvn package -DskipTests
+```
+
+To only build the generated parser (i.e. <code>PhoenixSQLLexer</code> and <code>PhoenixSQLParser</code>), you can do:
+
+```
+ $ mvn install -DskipTests
+ $ mvn process-sources
+```
+
+To build an Eclipse project, install the m2e plugin and do a File->Import...->Import Existing Maven Projects, selecting the root directory of Phoenix.
+

Added: incubator/phoenix/site/source/src/site/markdown/download.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/download.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/download.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/download.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,84 @@
+## Available Phoenix Downloads
+
+### Download link will be available soon.
+
+<br/>
+
+### Installation ###
+To install a pre-built phoenix, use these directions:
+
+* Download and expand the latest phoenix-[version]-install.tar
+* Add the phoenix-[version].jar to the classpath of every HBase region server. An easy way to do this is to copy it into the HBase lib directory.
+* Restart all region servers.
+* Add the phoenix-[version]-client.jar to the classpath of any Phoenix client.
+
+### Getting Started ###
+Want to get started quickly? Take a look at our [FAQs](faq.html) and take our quick start guide [here](Phoenix-in-15-minutes-or-less.html).
+
+<h4>Command Line</h4>
+
+A terminal interface to execute SQL from the command line is now bundled with Phoenix. To start it, execute the following from the bin directory:
+
+	$ sqlline.sh localhost
+
+To execute SQL scripts from the command line, you can include a SQL file argument like this:
+
+	$ sqlline.sh localhost ../examples/stock_symbol.sql
+
+![sqlline](images/sqlline.png)
+
+For more information, see the [manual](http://www.hydromatic.net/sqlline/manual.html).
+
+<h5>Loading Data</h5>
+
+In addition, you can use the bin/psql.sh to load CSV data or execute SQL scripts. For example:
+
+        $ psql.sh localhost ../examples/web_stat.sql ../examples/web_stat.csv ../examples/web_stat_queries.sql
+
+Other alternatives include:
+* Using our [map-reduce based CSV loader](mr_dataload.html) for bigger data sets
+* [Mapping an existing HBase table to a Phoenix table](index.html#Mapping-to-an-Existing-HBase-Table) and using the [UPSERT SELECT](language/index.html#upsert_select) command to populate a new table.
+* Populating the table through our [UPSERT VALUES](language/index.html#upsert_values) command.
+
+<h4>SQL Client</h4>
+
+If you'd rather use a client GUI to interact with Phoenix, download and install [SQuirrel](http://squirrel-sql.sourceforge.net/). Since Phoenix is a JDBC driver, integration with tools such as this is seamless. Here are the setup steps necessary:
+
+1. Remove prior phoenix-[version]-client.jar from the lib directory of SQuirrel
+2. Copy the phoenix-[version]-client.jar into the lib directory of SQuirrel (Note that on a Mac, this is the *internal* lib directory).
+3. Start SQuirrel and add new driver to SQuirrel (Drivers -> New Driver)
+4. In Add Driver dialog box, set Name to Phoenix
+5. Press List Drivers button and org.apache.phoenix.jdbc.PhoenixDriver should be automatically populated in the Class Name textbox. Press OK to close this dialog.
+6. Switch to Alias tab and create the new Alias (Aliases -> New Aliases)
+7. In the dialog box, Name: _any name_, Driver: Phoenix, User Name: _anything_, Password: _anything_
+8. Construct URL as follows: jdbc:phoenix: _zookeeper quorum server_. For example, to connect to a local HBase use: jdbc:phoenix:localhost
+9. Press Test (which should succeed if everything is setup correctly) and press OK to close.
+10. Now double click on your newly created Phoenix alias and click Connect. Now you are ready to run SQL queries against Phoenix.
+
+Through SQuirrel, you can issue SQL statements in the SQL tab (create tables, insert data, run queries), and inspect table metadata in the Object tab (i.e. list tables, their columns, primary keys, and types).
+
+![squirrel](images/squirrel.png)
+
+### Samples ###
+The best place to see samples are in our unit tests under src/test/java. The ones in the endToEnd package are tests demonstrating how to use all aspects of the Phoenix JDBC driver. We also have some examples in the examples directory.
+
+### Phoenix Client - Server Compatibility
+
+Major and minor versions should match between client and server (patch versions can mismatch). The following is the list of compatible client and server version(s). It is recommended that the same client and server versions be used.
+
+Phoenix Client Version | Compatible Server Versions
+-----------------------|---
+1.0.0 | 1.0.0
+1.1.0 | 1.1.0
+1.2.0 | 1.2.0, 1.2.1
+1.2.1 | 1.2.0, 1.2.1
+2.0.0 | 2.0.0, 2.0.1, 2.0.2
+2.0.1 | 2.0.0, 2.0.1, 2.0.2
+2.0.2 | 2.0.0, 2.0.1, 2.0.2
+2.1.0 | 2.1.0, 2.1.1, 2.1.2
+2.1.1 | 2.1.0, 2.1.1, 2.1.2
+2.1.2 | 2.1.0, 2.1.1, 2.1.2
+2.2.0 | 2.2.0, 2.2.1
+2.2.1 | 2.2.0, 2.2.1
+
+[![githalytics.com alpha](https://cruel-carlota.pagodabox.com/33878dc7c0522eed32d2d54db9c59f78 "githalytics.com")](http://githalytics.com/forcedotcom/phoenix.git)

Added: incubator/phoenix/site/source/src/site/markdown/dynamic_columns.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/dynamic_columns.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/dynamic_columns.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/dynamic_columns.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,17 @@
+# Dynamic Columns
+
+Sometimes defining a static schema up front is not feasible. Instead, a subset of columns may be specified at table [create](language/index.html#create) time while the rest would be specified at [query](language/index.html#select) time. As of Phoenix 1.2, specifying columns dynamically is supported by allowing column definitions to be included in parentheses after the table in the <code>FROM</code> clause of a <code>SELECT</code> statement. Although this is not standard SQL, it is useful to surface this type of functionality to leverage the late binding ability of HBase.
+
+For example:
+
+    SELECT eventTime, lastGCTime, usedMemory, maxMemory
+    FROM EventLog(lastGCTime TIME, usedMemory BIGINT, maxMemory BIGINT)
+    WHERE eventType = 'OOM' AND lastGCTime < eventTime - 1
+
+Where you may have defined only a subset of your event columns at create time, since each event type may have different properties:
+
+    CREATE TABLE EventLog (
+        eventId BIGINT NOT NULL,
+        eventTime TIME NOT NULL,
+        eventType CHAR(3) NOT NULL
+        CONSTRAINT pk PRIMARY KEY (eventId, eventTime))

Added: incubator/phoenix/site/source/src/site/markdown/faq.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/faq.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/faq.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/faq.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,279 @@
+# F.A.Q.
+
+* [I want to get started. Is there a Phoenix Hello World?](#I_want_to_get_started_Is_there_a_Phoenix_Hello_World)
+* [Is there a way to bulk load in Phoenix?](#Is_there_a_way_to_bulk_load_in_Phoenix)
+* [How do I create a VIEW in Phoenix? What's the difference between a VIEW and a TABLE?](#How_I_create_Views_in_Phoenix_Whatnulls_the_difference_between_ViewsTables)
+* [Are there any tips for optimizing Phoenix?](#Are_there_any_tips_for_optimizing_Phoenix)
+* [How do I create Secondary Index on a table?](#How_do_I_create_Secondary_Index_on_a_table)
+* [Why isn't my secondary index being used?](#Why_isnnullt_my_secondary_index_being_used)
+* [How fast is Phoenix? Why is it so fast?](#How_fast_is_Phoenix_Why_is_it_so_fast)
+* [How do I connect to secure HBase cluster?](#How_do_I_connect_to_secure_HBase_cluster)
+* [How do I connect with HBase running on Hadoop-2?](#How_do_I_connect_with_HBase_running_on_Hadoop-2)
+* [Can phoenix work on tables with arbitrary timestamp as flexible as HBase API?](#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API)
+* [Why isn't my query doing a RANGE SCAN?](#Why_isnnullt_my_query_doing_a_RANGE_SCAN)
+
+
+### I want to get started. Is there a Phoenix _Hello World_?
+
+*Pre-requisite:* Download latest Phoenix from [here](download.html)
+and copy phoenix-*.jar to the HBase lib folder and restart HBase.
+
+**1. Using console**
+
+1. Start Sqlline: `$ sqlline.sh [zookeeper]`
+2. Execute the following statements when Sqlline connects: 
+
+```
+create table test (mykey integer not null primary key, mycolumn varchar);
+upsert into test values (1,'Hello');
+upsert into test values (2,'World!');
+select * from test;
+```
+
+3. You should get the following output
+
+``` 
++-------+------------+
+| MYKEY |  MYCOLUMN  |
++-------+------------+
+| 1     | Hello      |
+| 2     | World!     |
++-------+------------+
+``` 
+
+
+**2. Using java**
+
+Create test.java file with the following content:
+
+``` 
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.PreparedStatement;
+import java.sql.Statement;
+
+public class test {
+
+	public static void main(String[] args) throws SQLException {
+		Statement stmt = null;
+		ResultSet rset = null;
+		
+		Connection con = DriverManager.getConnection("jdbc:phoenix:[zookeeper]");
+		stmt = con.createStatement();
+		
+		stmt.executeUpdate("create table test (mykey integer not null primary key, mycolumn varchar)");
+		stmt.executeUpdate("upsert into test values (1,'Hello')");
+		stmt.executeUpdate("upsert into test values (2,'World!')");
+		con.commit();
+		
+		PreparedStatement statement = con.prepareStatement("select * from test");
+		rset = statement.executeQuery();
+		while (rset.next()) {
+			System.out.println(rset.getString("mycolumn"));
+		}
+		statement.close();
+		con.close();
+	}
+}
+``` 
+Compile and execute on command line
+
+`$ javac test.java`
+
+`$ java -cp "../phoenix-[version]-client.jar:." test`
+
+
+You should get the following output
+
+`Hello`
+`World!`
+
+
+
+### Is there a way to bulk load in Phoenix?
+
+**Map Reduce**
+
+See the example [here](mr_dataload.html). Credit: Arun Singh
+
+**CSV**
+
+CSV data can be bulk loaded with the built in utility named psql. Typical upsert rates are 20K - 50K rows per second (depending on how wide the rows are).
+
+Usage example:  
+Create table using psql
+`$ psql.sh [zookeeper] ../examples/web_stat.sql`  
+
+Upsert CSV bulk data
+`$ psql.sh [zookeeper] ../examples/web_stat.csv`
+
+
+
+### How I create Views in Phoenix? What's the difference between Views/Tables?
+
+You can create either a Phoenix table or a view through the CREATE TABLE/CREATE VIEW DDL statement on a pre-existing HBase table. In both cases, we'll leave the HBase metadata as-is, except that with a TABLE we turn KEEP_DELETED_CELLS on. For CREATE TABLE, we'll create any metadata (table, column families) that doesn't already exist. We'll also add an empty key value for each row so that queries behave as expected (without requiring all columns to be projected during scans).
+
+The other caveat is that the way the bytes were serialized must match the way the bytes are serialized by Phoenix. For VARCHAR, CHAR, and UNSIGNED_* types, we use the HBase Bytes methods. The CHAR type expects only single-byte characters and the UNSIGNED types expect values greater than or equal to zero.
+
+Our composite row keys are formed by simply concatenating the values together, with a zero byte character used as a separator after a variable length type.
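
The concatenation scheme described above can be sketched as follows. This is a minimal illustration of the separator rule only, not Phoenix's actual serialization code:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

class CompositeKeySketch {

    // Concatenate VARCHAR values into one composite key, writing a zero-byte
    // separator after each variable-length value except the last.
    static byte[] encode(String... varcharValues) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < varcharValues.length; i++) {
            byte[] value = varcharValues[i].getBytes(StandardCharsets.UTF_8);
            out.write(value, 0, value.length);
            if (i < varcharValues.length - 1) {
                out.write(0); // separator after a variable-length type
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // "NY" (2 bytes) + separator (1 byte) + "New York" (8 bytes)
        System.out.println(encode("NY", "New York").length); // prints 11
    }
}
```

The zero byte works as a separator because it sorts below every other byte value and cannot appear inside an encoded VARCHAR value.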
+
+If you create an HBase table like this:
+
+`create 't1', {NAME => 'f1', VERSIONS => 5}`
+
+then you have an HBase table with a name of 't1' and a column family with a name of 'f1'. Remember, in HBase, you don't model the possible KeyValues or the structure of the row key. This is the information you specify in Phoenix above and beyond the table and column family.
+
+So in Phoenix, you'd create a view like this:
+
+`CREATE VIEW "t1" ( pk VARCHAR PRIMARY KEY, "f1".val VARCHAR )`
+
+The "pk" column declares that your row key is a VARCHAR (i.e. a string) while the "f1".val column declares that your HBase table will contain KeyValues with a column family and column qualifier of "f1":VAL and that their value will be a VARCHAR.
+
+Note that you don't need the double quotes if you create your HBase table with all caps names (since this is how Phoenix normalizes strings, by upper casing them). For example, with:
+
+`create 'T1', {NAME => 'F1', VERSIONS => 5}`
+
+you could create this Phoenix view:
+
+`CREATE VIEW t1 ( pk VARCHAR PRIMARY KEY, f1.val VARCHAR )`
+
+Or if you're creating new HBase tables, just let Phoenix do everything for you like this (No need to use the HBase shell at all.):
+
+`CREATE TABLE t1 ( pk VARCHAR PRIMARY KEY, val VARCHAR )`
+
+
+
+### Are there any tips for optimizing Phoenix?
+
+* Use **Salting** to increase read/write performance
+Salting can significantly increase read/write performance by pre-splitting the data into multiple regions, and will yield better performance in most scenarios.
+
+Example:
+
+` CREATE TABLE TEST (HOST VARCHAR NOT NULL PRIMARY KEY, DESCRIPTION VARCHAR) SALT_BUCKETS=16`
+
+Note: Ideally, for a 16 region server cluster with quad-core CPUs, choose between 32 and 64 salt buckets for optimal performance.
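
The idea behind salting can be sketched as prepending a leading byte derived from a hash of the row key, modulo the bucket count, so writes spread across the pre-split regions. The hash function below is illustrative only, not Phoenix's actual implementation:

```java
class SaltSketch {

    // Prepend a salt byte computed from a hash of the row key, modulo the
    // bucket count; rows then spread evenly over SALT_BUCKETS pre-split
    // regions. The simple polynomial hash here is illustrative only.
    static byte[] salt(byte[] rowKey, int saltBuckets) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b;
        }
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = (byte) ((hash & 0x7fffffff) % saltBuckets);
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);
        return salted;
    }

    public static void main(String[] args) {
        byte[] salted = salt("host1".getBytes(java.nio.charset.StandardCharsets.UTF_8), 16);
        System.out.println(salted[0]); // a bucket number in [0, 16)
    }
}
```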
+
+* **Pre-split** table
+Salting handles table splitting automatically, but if you want to control exactly where the table splits occur, without adding an extra byte or changing the row key order, you can pre-split the table yourself.
+
+Example: 
+
+` CREATE TABLE TEST (HOST VARCHAR NOT NULL PRIMARY KEY, DESCRIPTION VARCHAR) SPLIT ON ('CS','EU','NA')`
+
+* Use **multiple column families**
+
+Each column family stores its related data in a separate set of files. If your queries use only a subset of the columns, it makes sense to group those columns together in a column family to improve read performance.
+
+Example:
+
+The following CREATE TABLE DDL will create two column families, A and B.
+
+` CREATE TABLE TEST (MYKEY VARCHAR NOT NULL PRIMARY KEY, A.COL1 VARCHAR, A.COL2 VARCHAR, B.COL3 VARCHAR)`
+
+* Use **compression**
+On-disk compression improves performance on large tables.
+
+Example: 
+
+` CREATE TABLE TEST (HOST VARCHAR NOT NULL PRIMARY KEY, DESCRIPTION VARCHAR) COMPRESSION='GZ'`
+
+* Create **indexes**
+See [How do I create Secondary Index on a table?](faq.html#How_do_I_create_Secondary_Index_on_a_table)
+
+* **Optimize cluster** parameters
+See http://hbase.apache.org/book/performance.html
+
+* **Optimize Phoenix** parameters
+See [tuning.html](tuning.html)
+
+
+
+### How do I create Secondary Index on a table?
+
+Starting with version 2.1, Phoenix supports indexes over both mutable and immutable data. Note that Phoenix 2.0.x only supports indexes over immutable data. Index write performance with an immutable table is slightly faster than with a mutable table; however, data in an immutable table cannot be updated.
+
+Example
+
+* Create table
+
+Immutable table: `create table test (mykey varchar primary key, col1 varchar, col2 varchar) IMMUTABLE_ROWS=true;`
+
+Mutable table: `create table test (mykey varchar primary key, col1 varchar, col2 varchar);`
+
+* Creating index on col2
+
+`create index idx on test (col2)`
+
+* Creating index on col1 and a covered index on col2
+
+`create index idx on test (col1) include (col2)`
+
+Upsert rows in this test table and the Phoenix query optimizer will choose the correct index to use. You can see in the [explain plan](language/index.html#explain) whether Phoenix is using the index table. You can also give a [hint](language/index.html#hint) in a Phoenix query to use a specific index.
+
+
+
+### Why isn't my secondary index being used?
+
+The secondary index won't be used unless all columns used in the query are in it (as indexed or covered columns). All columns making up the primary key of the data table will automatically be included in the index.
+
+Example DDL: `create table usertable (id varchar primary key, firstname varchar, lastname varchar); create index idx_name on usertable (firstname);`
+
+Query: `select id, firstname, lastname from usertable where firstname = 'foo';`
+
+The index would not be used in this case because lastname is neither an indexed nor a covered column. This can be verified by looking at the explain plan. To fix this, create an index that includes lastname as either an indexed or a covered column. Example: `create index idx_name on usertable (firstname) include (lastname);`
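
The coverage rule above amounts to a simple set check. The sketch below is an illustration of the rule, not the query planner's actual logic:

```java
import java.util.HashSet;
import java.util.Set;

class IndexCoverageSketch {

    // An index can serve a query only if every column the query references is
    // available in the index: indexed columns, covered (INCLUDE) columns, or
    // the data table's PK columns, which are added to the index automatically.
    static boolean covers(Set<String> queryColumns, Set<String> indexedColumns,
                          Set<String> coveredColumns, Set<String> pkColumns) {
        Set<String> available = new HashSet<>(indexedColumns);
        available.addAll(coveredColumns);
        available.addAll(pkColumns);
        return available.containsAll(queryColumns);
    }

    public static void main(String[] args) {
        // The example above: lastname is neither indexed nor covered.
        System.out.println(covers(Set.of("id", "firstname", "lastname"),
                Set.of("firstname"), Set.of(), Set.of("id"))); // prints false
        // After recreating the index with INCLUDE (lastname):
        System.out.println(covers(Set.of("id", "firstname", "lastname"),
                Set.of("firstname"), Set.of("lastname"), Set.of("id"))); // prints true
    }
}
```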
+
+
+### How fast is Phoenix? Why is it so fast?
+
+Phoenix is fast. A full table scan of 100M rows usually completes in 20 seconds (narrow table on a medium sized cluster). This time comes down to a few milliseconds if the query contains a filter on key columns. For filters on non-key columns or non-leading key columns, you can add an index on these columns, which yields performance equivalent to filtering on a key column by making a copy of the table with the indexed column(s) part of the key.
+
+Why is Phoenix fast even when doing a full scan:
+
+1. Phoenix chunks up your query using the region boundaries and runs them in parallel on the client using a configurable number of threads 
+2. The aggregation will be done in a coprocessor on the server-side, collapsing the amount of data that gets returned back to the client rather than returning it all. 
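
Point 1 can be sketched as splitting the scan's key range at region boundaries so each chunk can run on its own client thread. This is a deliberate simplification using string keys, not Phoenix's parallel-scan code:

```java
import java.util.ArrayList;
import java.util.List;

class ScanChunkSketch {

    // Split [startKey, stopKey) at every region start key that falls inside
    // the range; each resulting chunk can be scanned by a separate thread.
    static List<String[]> chunks(String startKey, String stopKey, List<String> regionStartKeys) {
        List<String[]> result = new ArrayList<>();
        String current = startKey;
        for (String boundary : regionStartKeys) { // assumed sorted ascending
            if (boundary.compareTo(current) > 0 && boundary.compareTo(stopKey) < 0) {
                result.add(new String[] { current, boundary });
                current = boundary;
            }
        }
        result.add(new String[] { current, stopKey });
        return result;
    }

    public static void main(String[] args) {
        // Regions starting at "f" and "m" split [a, z) into three chunks.
        System.out.println(chunks("a", "z", List.of("f", "m")).size()); // prints 3
    }
}
```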
+
+
+
+### How do I connect to secure HBase cluster?
+Check out this excellent post by Anil Gupta:
+http://bigdatanoob.blogspot.com/2013/09/connect-phoenix-to-secure-hbase-cluster.html
+
+
+
+### How do I connect with HBase running on Hadoop-2?
+A Hadoop-2 profile exists in the Phoenix pom.xml.
+
+
+### Can phoenix work on tables with arbitrary timestamp as flexible as HBase API?
+By default, Phoenix lets HBase manage the timestamps and just shows you the latest values for everything. However, Phoenix also allows arbitrary timestamps to be supplied by the user. To do that you'd specify a "CurrentSCN" (or PhoenixRuntime.CURRENT_SCN_ATTRIB if you want to use our constant) at connection time, like this:
+
+    Properties props = new Properties();
+    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+    Connection conn = DriverManager.getConnection(myUrl, props);
+
+    conn.createStatement().execute("UPSERT INTO myTable VALUES ('a')");
+    conn.commit();
+The above is equivalent to doing this with the HBase API:
+
+    myTable.put(Bytes.toBytes('a'),ts);
+By specifying a CurrentSCN, you're telling Phoenix that you want everything for that connection to be done at that timestamp. Note that this applies to queries done on the connection as well - for example, a query over myTable above would not see the data it just upserted, since it only sees data that was created before its CurrentSCN property. This provides a way of doing snapshot, flashback, or point-in-time queries.
+
+Keep in mind that creating a new connection is *not* an expensive operation. The same underlying HConnection is used for all connections to the same cluster, so it's more or less like instantiating a few objects.
+
+
+### Why isn't my query doing a RANGE SCAN?
+
+DDL: `CREATE TABLE TEST (pk1 char(1) not null, pk2 char(1) not null, pk3 char(1) not null, non_pk varchar CONSTRAINT PK PRIMARY KEY(pk1, pk2, pk3));`
+
+RANGE SCAN means that only a subset of the rows in your table will be scanned over. This occurs if you use one or more leading columns from your primary key constraint. A query that does not filter on the leading PK columns, e.g. `select * from test where pk2='x' and pk3='y';`, will result in a full scan, whereas the following query will result in a range scan: `select * from test where pk1='x' and pk2='y';`. Note that you can add a secondary index on your "pk2" and "pk3" columns and that would cause a range scan to be done for the first query (over the index table).
+
+DEGENERATE SCAN means that a query can't possibly return any rows. If we can determine that at compile time, then we don't bother to even run the scan.
+
+FULL SCAN means that all rows of the table will be scanned over (potentially with a filter applied if you have a WHERE clause).
+
+SKIP SCAN means that either a subset or all rows in your table will be scanned over, however it will skip large groups of rows depending on the conditions in your filter. See this blog for more detail. We don't do a SKIP SCAN if you have no filter on the leading primary key columns, but you can force a SKIP SCAN by using the /*+ SKIP_SCAN */ hint. Under some conditions, namely when the cardinality of your leading primary key columns is low, it will be more efficient than a FULL SCAN.
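
The rule of thumb for the first of these cases can be sketched as a check on whether the filter touches the leading PK column. This is only an illustration of the heuristic described above; the real planner also weighs indexes, hints, and skip scans:

```java
import java.util.Set;

class ScanTypeSketch {

    // Classify a query from which PK columns it filters on: a filter that
    // includes the leading PK column gives a RANGE SCAN; a filter that skips
    // it (or no filter at all) falls back to a FULL SCAN.
    static String scanType(String[] pkColumns, Set<String> filteredColumns) {
        if (filteredColumns.isEmpty()) {
            return "FULL SCAN";
        }
        return filteredColumns.contains(pkColumns[0]) ? "RANGE SCAN" : "FULL SCAN";
    }

    public static void main(String[] args) {
        String[] pk = { "pk1", "pk2", "pk3" };
        System.out.println(scanType(pk, Set.of("pk2", "pk3"))); // prints FULL SCAN
        System.out.println(scanType(pk, Set.of("pk1", "pk2"))); // prints RANGE SCAN
    }
}
```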
+
+

Added: incubator/phoenix/site/source/src/site/markdown/flume.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/flume.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/flume.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/flume.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,42 @@
+# Apache Flume Plugin
+
+The plugin enables us to reliably and efficiently stream large amounts of data/logs onto HBase using the Phoenix API. The necessary configuration of the custom Phoenix sink and the Event Serializer has to be configured in the Flume configuration file for the Agent. Currently, the only supported Event serializer is a RegexEventSerializer which primarily breaks the Flume Event body based on the regex specified in the configuration file.   
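
A minimal sketch of what a regex-based serializer does with an event body: apply the configured pattern and pull out one column value per capture group. This is illustrative only, not the plugin's actual code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RegexSerializerSketch {

    // Apply the configured regex to the event body; each capture group becomes
    // one column value for the UPSERT.
    static String[] parse(String regex, String eventBody) {
        Matcher matcher = Pattern.compile(regex).matcher(eventBody);
        if (!matcher.matches()) {
            return new String[0]; // event doesn't match; nothing to upsert
        }
        String[] columns = new String[matcher.groupCount()];
        for (int i = 0; i < columns.length; i++) {
            columns[i] = matcher.group(i + 1);
        }
        return columns;
    }

    public static void main(String[] args) {
        String[] cols = parse("([^ ]*) ([^ ]*)", "GET /index.html");
        System.out.println(cols[0] + " " + cols[1]); // prints GET /index.html
    }
}
```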
+
+#### Prerequisites:
+
+* Phoenix v 3.0.0 SNAPSHOT +
+* Flume 1.4.0 +
+
+#### Installation & Setup:
+
+1. Download and build Phoenix v 3.0.0 SNAPSHOT
+2. Follow the instructions as specified [here](building.html) to build the project, as the Flume plugin is still in beta
+3. Create a directory plugins.d within the $FLUME_HOME directory. Within that, create the sub-directory phoenix-sink/lib
+4. Copy the generated phoenix-3.0.0-SNAPSHOT-client.jar into $FLUME_HOME/plugins.d/phoenix-sink/lib
+
+#### Configuration:
+  
+Property Name             |Default| Description
+--------------------------|-------|---
+type                      |       |org.apache.phoenix.flume.sink.PhoenixSink
+batchSize                 |100    |Default number of events per transaction 
+zookeeperQuorum           |       |Zookeeper quorum of the HBase cluster
+table                     |       |The name of the table in HBase to write to.
+ddl                       |       |The CREATE TABLE query for the HBase table where the events will be upserted to. If specified, the query will be executed. It is recommended to include the IF NOT EXISTS clause in the ddl.
+serializer                |regex  |Event serializer for processing the Flume Event. Currently, only regex is supported.
+serializer.regex          |(.*)   |The regular expression for parsing the event.
+serializer.columns        |       |The columns that will be extracted from the Flume event for inserting into HBase.
+serializer.headers        |       |Headers of the Flume Events that go as part of the UPSERT query. The data type for these columns is VARCHAR by default.
+serializer.rowkeyType     |       |A custom row key generator. Can be one of timestamp, date, uuid, random, and nanotimestamp. This should be configured in cases where we need a custom row key value to be auto generated and set for the primary key column.
+
+
+For an example configuration for ingesting Apache access logs onto Phoenix, see [this](https://github.com/forcedotcom/phoenix/blob/master/src/main/config/apache-access-logs.properties) property file. Here we are using UUID as a row key generator for the primary key.	
+		   	
+#### Starting the agent:
+       $ bin/flume-ng agent -f conf/flume-conf.properties -c ./conf -n agent
+
+#### Monitoring:
+   To monitor the agent and the sink process, enable JMX via the flume-env.sh ($FLUME_HOME/conf/flume-env.sh) script. Ensure you have the following line uncommented.
+   
+    JAVA_OPTS="-Xms1g -Xmx1g -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3141 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"   	
+	

Added: incubator/phoenix/site/source/src/site/markdown/index.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/index.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/index.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/index.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,69 @@
+# Overview
+
+Apache Phoenix is a SQL skin over HBase delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in [performance](performance.html) on the order of milliseconds for small queries, or seconds for tens of millions of rows. 
+
+## Mission
+Become the standard means of accessing HBase data through a well-defined, industry standard API.
+
+## Quick Start
+Tired of reading already and just want to get started? Take a look at our [FAQs](faq.html), listen to the Apache Phoenix talks from [Hadoop Summit 2013](http://www.youtube.com/watch?v=YHsHdQ08trg) and [HBaseConn 2013](http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/hbasecon-2013--how-and-why-phoenix-puts-the-sql-back-into-nosql-video.html), and jump over to our quick start guide [here](Phoenix-in-15-minutes-or-less.html).
+
+##SQL Support##
+To see what's supported, go to our [language reference](language/index.html). It includes all typical SQL query statement clauses, including `SELECT`, `FROM`, `WHERE`, `GROUP BY`, `HAVING`, `ORDER BY`, etc. It also supports a full set of DML commands as well as table creation and versioned incremental alterations through our DDL commands. We try to follow the SQL standards wherever possible.
+
+<a id="connStr"></a>Use JDBC to get a connection to an HBase cluster like this:
+
+        Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333");
+where the connection string is composed of:
+<code><small>jdbc:phoenix</small></code> [ <code><small>:&lt;zookeeper quorum&gt;</small></code> [ <code><small>:&lt;port number&gt;</small></code> ] [ <code><small>:&lt;root node&gt;</small></code> ] ]
+
+For any omitted part, the corresponding property value (hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent, respectively) is taken from the hbase-site.xml configuration file.
+
+Here's a list of what is currently **not** supported:
+
+* **Full Transaction Support**. Although we allow client-side batching and rollback as described [here](#transactions), we do not provide transaction semantics above and beyond what HBase gives you out-of-the-box.
+* **Derived tables**. Nested queries are coming soon.
+* **Relational operators**. Union, Intersect, Minus.
+* **Miscellaneous built-in functions**. These are easy to add - read this [blog](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html) for step by step instructions.
+
+##<a id="schema"></a>Schema##
+
+Apache Phoenix supports table creation and versioned incremental alterations through DDL commands. The table metadata is stored in an HBase table.
+
+A Phoenix table is created through the [CREATE TABLE](language/index.html#create) DDL command and can either be:
+
+1. **built from scratch**, in which case the HBase table and column families will be created automatically.
+2. **mapped to an existing HBase table**, by creating either a read-write TABLE or a read-only VIEW, with the caveat that the binary representation of the row key and key values must match that of the Phoenix data types (see the [Data Types reference](datatypes.html) for details on the binary representation).
+    * For a read-write TABLE, column families will be created automatically if they don't already exist. An empty key value will be added to the first column family of each existing row to minimize the size of the projection for queries.
+    * For a read-only VIEW, all column families must already exist. The only change made to the HBase table will be the addition of the Phoenix coprocessors used for query processing. The primary use case for a VIEW is to transfer existing data into a Phoenix table, since data modifications are not allowed on a VIEW and query performance will likely be lower than with a TABLE.
+
+All schema is versioned, and prior versions are stored forever. Thus, snapshot queries over older data will pick up and use the correct schema for each row.
+
+####Salting
+A table can also be declared salted to prevent HBase region hot spotting. You just need to declare how many salt buckets your table has, and Phoenix will transparently manage the salting for you. You'll find more detail on this feature [here](salted.html), along with a nice comparison of write throughput between salted and unsalted tables [here](performance.html#salting).
+
+####Schema at Read-time
+Another schema-related feature allows columns to be defined dynamically at query time. This is useful in situations where you don't know in advance all of the columns at create time. You'll find more details on this feature [here](dynamic_columns.html).
+
+####<a id="mapping"></a>Mapping to an Existing HBase Table
+Apache Phoenix supports mapping to an existing HBase table through the [CREATE TABLE](language/index.html#create) and [CREATE VIEW](language/index.html#create) DDL statements. In both cases, the HBase metadata is left as-is, except that with CREATE TABLE the [KEEP_DELETED_CELLS](http://hbase.apache.org/book/cf.keep.deleted.html) option is enabled to allow flashback queries to work correctly. For CREATE TABLE, any HBase metadata (table, column families) that doesn't already exist will be created. Note that the table and column family names are case sensitive, and Phoenix upper-cases all names. To make a name case sensitive in the DDL statement, surround it with double quotes as shown below:
+      <pre><code>CREATE VIEW "MyTable" ("a".ID VARCHAR PRIMARY KEY)</code></pre>
+
+For CREATE TABLE, an empty key value will also be added for each row so that queries behave as expected (without requiring all columns to be projected during scans). For CREATE VIEW, this will not be done, nor will any HBase metadata be created. Instead, the existing HBase metadata must match the metadata specified in the DDL statement or an <code>ERROR 505 (42000): Table is read only</code> will be thrown.
+
+The other caveat is that the way the bytes were serialized in HBase must match the way the bytes are expected to be serialized by Phoenix. For VARCHAR, CHAR, and UNSIGNED_* types, Phoenix uses the HBase Bytes utility methods to perform serialization. The CHAR type expects only single-byte characters and the UNSIGNED types expect values greater than or equal to zero.
+
+Our composite row keys are formed by simply concatenating the values together, with a zero byte character used as a separator after a variable length type. For more information on our type system, see the [Data Types reference](datatypes.html).
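As a rough sketch in plain Java (illustrative only; this is not Phoenix's actual serialization code, and Phoenix's real encodings for signed types differ), a variable-length VARCHAR value and a fixed-width 8-byte value could be concatenated into a composite key like this:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeySketch {
    // Concatenate a variable-length VARCHAR value and a fixed-width 8-byte
    // value into one composite key, with a zero byte terminating the
    // variable-length part, mirroring the separator rule described above.
    static byte[] compositeKey(String varcharPart, long fixedWidthPart) {
        ByteArrayOutputStream key = new ByteArrayOutputStream();
        byte[] v = varcharPart.getBytes(StandardCharsets.UTF_8);
        key.write(v, 0, v.length);
        key.write(0); // zero-byte separator after the variable-length type
        byte[] f = ByteBuffer.allocate(8).putLong(fixedWidthPart).array();
        key.write(f, 0, f.length);
        return key.toByteArray();
    }

    public static void main(String[] args) {
        byte[] key = compositeKey("acme", 42L);
        // "acme" (4 bytes) + separator (1 byte) + 8-byte value = 13 bytes
        System.out.println(key.length); // 13
        System.out.println(key[4]);     // 0 (the separator)
    }
}
```

Note that because the separator is a zero byte, variable-length values may not themselves contain embedded zero bytes.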
+
+##<a id="transactions"></a>Transactions##
+The DML commands of Apache Phoenix, [UPSERT VALUES](language/index.html#upsert_values), [UPSERT SELECT](language/index.html#upsert_select) and [DELETE](language/index.html#delete), batch pending changes to HBase tables on the client side. The changes are sent to the server when the transaction is committed and discarded when the transaction is rolled back. The only transaction isolation level we support is TRANSACTION_READ_COMMITTED; this includes not being able to see your own uncommitted data. Phoenix does not provide any additional transactional semantics beyond what HBase supports when a batch of mutations is submitted to the server. If auto commit is turned on for a connection, then Phoenix will, whenever possible, execute the entire DML command through a coprocessor on the server side, improving performance.
+
+Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the timestamps itself. In this case, a long-valued "CurrentSCN" property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.
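A minimal sketch of wiring this up (the connection URL is a placeholder and the connection call is commented out since it needs a live cluster; only the property handling is shown):

```java
import java.util.Properties;

public class SnapshotConnectionSketch {
    public static void main(String[] args) {
        // CurrentSCN is a long-valued timestamp; scans on a connection opened
        // with it will use this value as their max timestamp.
        long oneHourAgo = System.currentTimeMillis() - 60L * 60 * 1000;
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(oneHourAgo));
        // With a running cluster (placeholder URL), queries through this
        // connection would see row values as of one hour ago:
        // Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
        System.out.println(Long.parseLong(props.getProperty("CurrentSCN")) == oneHourAgo);
    }
}
```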
+
+## Metadata ##
+The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: `DatabaseMetaData`, `ParameterMetaData`, and `ResultSetMetaData`. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the \ character). The table catalog argument to the metadata APIs deviates from a more standard relational database model, and instead is used to specify a column family name (in particular to see all columns in a given column family).
+
+<hr/>
+## Disclaimer ##
+Apache Phoenix is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the [Apache Incubator PMC](http://incubator.apache.org/). Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
+<br/><br/><img src="http://incubator.apache.org/images/apache-incubator-logo.png"/>

Added: incubator/phoenix/site/source/src/site/markdown/issues.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/issues.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/issues.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/issues.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,9 @@
+# Issue Tracking
+
+This project uses the JIRA issue tracking and project management application. Issues, bugs, and feature requests should be submitted to the following:
+
+<hr/>
+
+https://issues.apache.org/jira/browse/PHOENIX
+
+<hr/>

Added: incubator/phoenix/site/source/src/site/markdown/mailing_list.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/mailing_list.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/mailing_list.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/mailing_list.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,14 @@
+# Mailing Lists
+
+These are the mailing lists that have been established for this project. For each list, there is a subscribe, unsubscribe and post link.
+
+<hr/>
+
+Name| Subscribe| Unsubscribe| Post | Archive
+--------------------------|----|----|----|----
+User List | [Subscribe](mailto:user-subscribe@phoenix.incubator.apache.org) | [Unsubscribe](mailto:user-unsubscribe@phoenix.incubator.apache.org) | [Post](mailto:user@phoenix.incubator.apache.org) | [Archive](http://mail-archives.apache.org/mod_mbox/incubator-phoenix-user/)
+Developer List | [Subscribe](mailto:dev-subscribe@phoenix.incubator.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@phoenix.incubator.apache.org) | [Post](mailto:dev@phoenix.incubator.apache.org) | [Archive](http://mail-archives.apache.org/mod_mbox/incubator-phoenix-dev/)
+Private List | [Subscribe](mailto:private-subscribe@phoenix.incubator.apache.org) | [Unsubscribe](mailto:private-unsubscribe@phoenix.incubator.apache.org) | [Post](mailto:private@phoenix.incubator.apache.org) | &nbsp;
+Commits List | [Subscribe](mailto:commits-subscribe@phoenix.incubator.apache.org) | [Unsubscribe](mailto:commits-unsubscribe@phoenix.incubator.apache.org) | [Post](mailto:commits@phoenix.incubator.apache.org) | [Archive](http://mail-archives.apache.org/mod_mbox/incubator-phoenix-commits/)
+
+<hr/>

Added: incubator/phoenix/site/source/src/site/markdown/mr_dataload.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/mr_dataload.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/mr_dataload.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/mr_dataload.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,63 @@
+# Bulk CSV Data Load using Map-Reduce
+
+Phoenix v 2.1 supports loading CSV data into a new or existing Phoenix table using Hadoop Map-Reduce. Loading CSV data in parallel through map-reduce yields better performance than the existing [psql csv loader](download.html#Loading-Data).
+
+####Sample input CSV data:
+
+```
+12345, John, Doe
+67890, Mary, Poppins
+```
+
+####Compatible Phoenix schema to hold the above CSV data:
+
+     CREATE TABLE ns.example (
+        my_pk bigint not null,
+        m.first_name varchar(50),
+        m.last_name varchar(50) 
+        CONSTRAINT pk PRIMARY KEY (my_pk))
+
+<table>
+<tr><td>Row Key</td><td colspan="2" bgcolor="#00FF00"><center>Column Family (m)</center></td></tr>
+<tr><td><strong>my_pk</strong> BIGINT</td><td><strong>first_name</strong> VARCHAR(50)</td><td><strong>last_name</strong> VARCHAR(50)</td></tr>
+<tr><td>12345</td><td>John</td><td>Doe</td></tr>
+<tr><td>67890</td><td>Mary</td><td>Poppins</td></tr>
+</table>
+
+
+####How to run?
+
+1- Please make sure that the Hadoop cluster is working correctly and that you are able to run a job such as [this](http://wiki.apache.org/hadoop/WordCount) one. 
+
+2- Copy the latest phoenix-[version].jar to the hadoop/lib folder on each node, or add it to the Hadoop classpath.
+
+3- Run the bulk loader job using the script /bin/csv-bulk-loader.sh as shown below:
+
+```
+./csv-bulk-loader.sh <option value>
+
+<option>  <value>
+-i        CSV data file path in hdfs (mandatory)
+-s        Phoenix schema name (mandatory if not default)
+-t        Phoenix table name (mandatory)
+-sql      Phoenix create table sql file path (mandatory)
+-zk       Zookeeper IP:<port> (mandatory)
+-mr       MapReduce Job Tracker IP:<port> (mandatory)
+-hd       HDFS NameNode IP:<port> (mandatory)
+-o        Output directory path in hdfs (optional)
+-idx      Phoenix index table name (optional, not yet supported)
+-error    Ignore error while reading rows from CSV ? (1-YES | 0-NO, default-1) (optional)
+-help     Print all options (optional)
+```
+Example
+
+```
+./csv-bulk-loader.sh -i hdfs://server:9000/mydir/data.csv -s ns -t example -sql ~/Documents/createTable.sql -zk server:2181 -hd hdfs://server:9000 -mr server:9001
+```
+
+This creates the Phoenix table "ns.example" as specified in createTable.sql and then loads the CSV data from the file "data.csv" located in HDFS into the table.
+
+##### Notes
+1. You must provide an explicit column family name in your CREATE TABLE statement for your non-primary-key columns, as the default column family used by Phoenix is treated specially by HBase because it starts with an underscore.
+2. The current bulk loader does not yet support the migration of index-related data. So, if you have created your Phoenix table with an index, please use the [psql CSV loader](download.html#Loading-Data) instead. 
+3. In case you want to further optimize map-reduce performance, please refer to the current map-reduce optimization parameters in the file "src/main/config/csv-bulk-load-config.properties". If you modify this list, please re-build the Phoenix jar and re-run the job as described above.

Added: incubator/phoenix/site/source/src/site/markdown/multi-tenancy.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/multi-tenancy.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/multi-tenancy.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/multi-tenancy.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,46 @@
+# Multi tenancy
+
+Support for multi-tenancy is built on top of the concept of a [VIEW](https://github.com/forcedotcom/phoenix/wiki/Views) in Phoenix. Users create a logical tenant-specific table as a VIEW and query and update it just like a regular Phoenix table.  Data in these tenant-specific tables resides in a shared, regular Phoenix table (and thus in a shared HBase table) that is declared at table creation time to support multi-tenancy. All tenant-specific Phoenix tables whose data resides in the same physical HBase table have the same primary key structure, but each tenant’s table can contain any number of non-PK columns unique to it. The main advantages afforded by this feature are:
+
+1. It implements physical tenant data isolation, automatically constraining tenants to work only with data that “belongs” to them.
+2. It prevents a proliferation of HBase tables, minimizing operational complexity.
+
+### Multi-tenant tables
+The first primary key column of the physical multi-tenant table must be used to identify the tenant. For example:
+
+    CREATE TABLE base.event (tenant_id VARCHAR, event_type CHAR(1), created_date DATE, event_id BIGINT)
+    MULTI_TENANT=true;
+
+In this case, the tenant_id column identifies the tenant and the table is declared to be multi-tenant. The column that identifies the tenant must be of type VARCHAR or CHAR.
+
+### Tenant-specific Tables
+Tenants are identified by the presence or absence of a TenantId property at JDBC connection time. A connection with a non-null TenantId is considered a tenant-specific connection. A connection with an unspecified or null TenantId is a regular connection.  A tenant-specific connection may only query:
+
+* **their own schema**, which is to say it only sees tenant-specific views that were created by that tenant.
+* **non multi-tenant global tables**, that is tables created with a regular connection without the MULTI_TENANT=TRUE declaration.
+
+Tenant-specific views may only be created using a tenant-specific connection and the base table must be a multi-tenant table.  Regular connections are used to create global tables, including those that can be used as base tables for tenant-specific tables.
+
+For example, a tenant-specific connection is established like this:
+
+    Properties props = new Properties();
+    props.setProperty("TenantId", "Acme");
+    Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
+
+through which a tenant-specific table may be defined like this:
+
+    CREATE VIEW acme.event AS
+    SELECT * FROM base.event;
+
+The tenant_id column is neither visible nor accessible to a tenant-specific view. Any reference to it will cause a ColumnNotFoundException.
+
+Alternately, a WHERE clause may be specified to further constrain the data as well:
+
+    CREATE VIEW acme.login_event AS
+    SELECT * FROM base.event
+    WHERE event_type='L';
+
+Just like any other Phoenix view, whether or not this view is updatable is based on the rules explained [here](https://github.com/forcedotcom/phoenix/wiki/Views#wiki-updatable-views). In addition, indexes may be added to tenant-specific tables just like with regular tables.
+
+### Tenant Data Isolation
+Any DML or query that is performed on a tenant-specific table is automatically constrained to only operate on the tenant’s data. For the upsert operation, this means that Phoenix automatically populates the tenant ID column with the tenant’s ID specified at connection time. For queries and deletes, a WHERE clause is transparently added to constrain the operations to only see data belonging to the current tenant.

Added: incubator/phoenix/site/source/src/site/markdown/paged.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/paged.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/paged.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/paged.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,31 @@
+# Paged Queries
+
+Phoenix v 2.1 supports the use of row value constructors in queries, a standard SQL construct that enables paged queries. A row value constructor is an ordered sequence of values delimited by parentheses. For example:
+
+    (4, 'foo', 3.5)
+    ('Doe', 'Jane')
+    (my_col1, my_col2, 'bar')
+
+Just like with regular values, row value constructors may be used in comparison expressions like this:
+
+    WHERE (x,y,z) >= ('foo','bar')
+    WHERE (last_name,first_name) = ('Jane','Doe')
+
+Row value constructors are compared by conceptually concatenating the values together and comparing them against each other, with the leftmost part being most significant. Section 8.2 (comparison predicates) of the SQL-92 standard explains this in detail, but here are a few examples of predicates that would evaluate to true:
+
+    (9, 5, 3) > (8, 8)
+    ('foo', 'bar') < 'g'
+    (1,2) = (1,2)
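The leftmost-significant comparison can be sketched in plain Java (illustrative only, not Phoenix's implementation; generics and null handling are simplified, and a shorter constructor that matches on all of its positions compares as equal here):

```java
import java.util.List;

public class RowValueComparison {
    // Compare two row value constructors position by position, with the
    // leftmost position most significant, as in SQL-92 section 8.2.
    @SuppressWarnings({"unchecked", "rawtypes"})
    static int compare(List<? extends Comparable> a, List<? extends Comparable> b) {
        int n = Math.min(a.size(), b.size());
        for (int i = 0; i < n; i++) {
            int c = a.get(i).compareTo(b.get(i));
            if (c != 0) {
                return c; // first differing position decides the comparison
            }
        }
        return 0; // all shared positions match
    }

    public static void main(String[] args) {
        System.out.println(compare(List.of(9, 5, 3), List.of(8, 8)) > 0); // true
        System.out.println(compare(List.of(1, 2), List.of(1, 2)) == 0);   // true
    }
}
```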
+
+Row value constructors may also be used in an IN list expression to efficiently query for a set of rows given the composite primary key columns. For example, the following would be optimized to be a point get of three rows:
+
+    WHERE (x,y) IN ((1,2),(3,4),(5,6))
+
+Another primary use case for row value constructors is to support query-more type functionality by enabling an ordered set of rows to be incrementally stepped through. For example, the following query would step through a set of rows, 20 rows at a time:
+
+    SELECT title, author, isbn, description 
+    FROM library 
+    WHERE published_date > 2010
+    AND (title, author, isbn) > (?, ?, ?)
+    ORDER BY title, author, isbn
+    LIMIT 20
+
+Assuming that the client binds the three bind variables to the values of the last row processed, the next invocation would find the next 20 rows that match the query. If the columns you supply in your row value constructor match in order the columns from your primary key (or from a secondary index), then Phoenix will be able to turn the row value constructor expression into the start row of your scan. This enables a very efficient mechanism to locate _at or after_ a row.
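The stepping loop can be simulated over an in-memory sorted key list (a plain-Java sketch assuming a single-column key; with Phoenix, the last-seen values would instead be bound to the `?` placeholders of the row value constructor):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class QueryMoreSketch {
    // Return the next `limit` keys strictly greater than `lastSeen`,
    // mimicking WHERE (pk) > (?) ... LIMIT n over a sorted key space.
    static List<String> nextPage(List<String> sortedKeys, String lastSeen, int limit) {
        return sortedKeys.stream()
                .filter(k -> k.compareTo(lastSeen) > 0)
                .limit(limit)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> keys = IntStream.range(0, 50)
                .mapToObj(i -> String.format("row%02d", i))
                .collect(Collectors.toList());
        String lastSeen = ""; // the empty string sorts before every key
        int pages = 0;
        List<String> page;
        while (!(page = nextPage(keys, lastSeen, 20)).isEmpty()) {
            lastSeen = page.get(page.size() - 1); // rebind with the last row seen
            pages++;
        }
        System.out.println(pages); // 50 rows at 20 per page -> 3 pages
    }
}
```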

Added: incubator/phoenix/site/source/src/site/markdown/performance.md
URL: http://svn.apache.org/viewvc/incubator/phoenix/site/source/src/site/markdown/performance.md?rev=1563252&view=auto
==============================================================================
--- incubator/phoenix/site/source/src/site/markdown/performance.md (added)
+++ incubator/phoenix/site/source/src/site/markdown/performance.md Fri Jan 31 20:42:02 2014
@@ -0,0 +1,86 @@
+# Performance
+
+Phoenix follows the philosophy of **bringing the computation to the data** by using:
+
+* **coprocessors** to perform operations on the server side, thus minimizing client/server data transfer
+* **custom filters** to prune data as close to the source as possible
+
+In addition, to minimize any startup costs, Phoenix uses native HBase APIs rather than going through the map/reduce framework.
+
+## Phoenix vs related products
+Below are charts showing relative performance between Phoenix and some other related products.
+
+### Phoenix vs Hive (running over HDFS and HBase)
+![Phoenix vs Hive](images/PhoenixVsHive.png)
+
+Query: select count(1) from table over 10M and 100M rows. Data is 5 narrow columns. Number of Region 
+Servers: 4 (HBase heap: 10GB, Processor: 6 cores @ 3.3GHz Xeon)
+
+### Phoenix vs Impala (running over HBase)
+![Phoenix vs Impala](images/PhoenixVsImpala.png)
+
+Query: select count(1) from table over 1M and 5M rows. Data is 3 narrow columns. Number of Region Server: 1 (Virtual Machine, HBase heap: 2GB, Processor: 2 cores @ 3.3GHz Xeon)
+
+***
+## Latest Automated Performance Run
+
+[Latest Automated Performance Run](http://phoenix-bin.github.io/client/performance/latest.htm) | 
+[Automated Performance Runs History](http://phoenix-bin.github.io/client/performance/)
+
+***
+
+## Performance improvements in Phoenix 1.2
+
+### Essential Column Family
+The Phoenix 1.2 query filter leverages the HBase filter [essential column family](http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html#isFamilyEssential%28byte%5B%5D%29) feature, which improves performance when a Phoenix query filters on data split across multiple column families (cf) by loading only the essential cf on the first pass. On a second pass, the remaining cf are loaded as needed.
+
+Consider the following schema, in which the data is split across two column families:
+
+`create table t (k varchar not null primary key, a.c1 integer, b.c2 varchar, b.c3 varchar, b.c4 varchar)`
+
+Running a query similar to the following shows a significant performance improvement when only a subset of the rows matches the filter:
+
+`select count(c2) from t where c1 = ?`
+
+The following chart shows the in-memory performance of running the above query over 10M rows on 4 region servers when 10% of the rows match the filter. Note: cf-a is approx. 8 bytes wide and cf-b is approx. 400 bytes wide.
+
+![Ess. CF](images/perf-esscf.png)
+
+
+### Skip Scan
+
+Skip Scan Filter leverages [SEEK_NEXT_USING_HINT](http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.ReturnCode.html#SEEK_NEXT_USING_HINT) of HBase Filter. It significantly improves point queries over key columns.
+
+Consider the following schema, in which the data is split across two column families:
+
+`create table t (k varchar not null primary key, a.c1 integer, b.c2 varchar, b.c3 varchar)`
+
+Running a query similar to the following shows a significant performance improvement when only a subset of the rows matches the filter:
+
+`select count(c1) from t where k in (1% random k's)`
+
+The following chart shows the in-memory performance of running the above query over 10M rows on 4 region servers, with 1% random keys over the entire range passed in the query's `IN` clause. Note: all varchar columns are approx. 15 bytes wide.
+
+![SkipScan](images/perf-skipscan.png)
+
+
+### Salting
+Salting in Phoenix 1.2 improves both read and write performance by adding an extra hash byte at the start of the key and pre-splitting the data into a number of regions. This eliminates hot-spotting on a single or a few region servers. Read more about this feature [here](salted.html).
+
+Consider the following schema
+
+`CREATE TABLE T (HOST CHAR(2) NOT NULL, DOMAIN VARCHAR NOT NULL,`
+`FEATURE VARCHAR NOT NULL, DATE DATE NOT NULL, USAGE.CORE BIGINT, USAGE.DB BIGINT,`
+`STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, FEATURE, DATE)) SALT_BUCKETS = 4`
+
+The following chart shows write performance with and without the use of salting, which splits the table into 4 regions running on a 4 region server cluster. (Note: for optimal performance, the number of salt buckets should match the number of region servers.)
+
+![Salted-Write](images/perf-salted-write.png)
+
+The following chart shows in-memory query performance for a 10M row table where the `host='NA'` filter matches 3.3M rows:
+
+`select count(1) from t where host='NA'`
+
+![Salted-Read](images/perf-salted-read.png)
+
+
+### Top-N 
+
+The following chart shows the in-memory query time of running the following Top-N query over 10M rows using Phoenix 1.2 and Hive over HBase:
+
+`select core from t order by core desc limit 10`
+
+![Phoenix vs Hive](images/perf-topn.png)