Posted to commits@phoenix.apache.org by ja...@apache.org on 2016/03/04 04:00:22 UTC

svn commit: r1733552 - in /phoenix/site: publish/faq.html publish/index.html source/src/site/markdown/faq.md source/src/site/markdown/index.md

Author: jamestaylor
Date: Fri Mar  4 03:00:22 2016
New Revision: 1733552

URL: http://svn.apache.org/viewvc?rev=1733552&view=rev
Log:
Updates to overview

Modified:
    phoenix/site/publish/faq.html
    phoenix/site/publish/index.html
    phoenix/site/source/src/site/markdown/faq.md
    phoenix/site/source/src/site/markdown/index.md

Modified: phoenix/site/publish/faq.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/faq.html?rev=1733552&r1=1733551&r2=1733552&view=diff
==============================================================================
--- phoenix/site/publish/faq.html (original)
+++ phoenix/site/publish/faq.html Fri Mar  4 03:00:22 2016
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at 2016-02-04
+ Generated by Apache Maven Doxia at 2016-03-03
  Rendered using Reflow Maven Skin 1.1.0 (http://andriusvelykis.github.io/reflow-maven-skin)
 -->
 <html  xml:lang="en" lang="en">
@@ -329,10 +329,10 @@ public class test {
  </div> 
  <div class="section"> 
   <h3 id="Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API">Can phoenix work on tables with arbitrary timestamp as flexible as HBase API?</h3> 
-  <p>By default, Phoenix let’s HBase manage the timestamps and just shows you the latest values for everything. However, Phoenix also allows arbitrary timestamps to be supplied by the user. To do that you’d specify a “CurrentSCN” (or PhoenixRuntime.CURRENT_SCN_ATTRIB if you want to use our constant) at connection time, like this:</p> 
+  <p>By default, Phoenix lets HBase manage the timestamps and just shows you the latest values for everything. However, Phoenix also allows arbitrary timestamps to be supplied by the user. To do that you’d specify a “CurrentSCN” at connection time, like this:</p> 
   <div class="source"> 
    <pre>Properties props = new Properties();
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+props.setProperty(&quot;CurrentSCN&quot;, Long.toString(ts));
 Connection conn = DriverManager.getConnection(myUrl, props);
 
 conn.createStatement().execute(&quot;UPSERT INTO myTable VALUES ('a')&quot;);
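For reference, the CurrentSCN wiring shown in the snippet above can be sketched as a self-contained helper. The class and method names here are hypothetical, and the actual DriverManager.getConnection call (which needs a running Phoenix cluster) is omitted:

```java
import java.util.Properties;

public class CurrentScnExample {
    // Builds the connection properties that pin a Phoenix connection to a
    // point in time. Phoenix treats the CurrentSCN value as the max
    // timestamp for scans, so reads see data as of that moment.
    static Properties propsForScn(long ts) {
        Properties props = new Properties();
        props.setProperty("CurrentSCN", Long.toString(ts));
        return props;
    }

    public static void main(String[] args) {
        // A hypothetical wall-clock timestamp; a real application would pass
        // these props to DriverManager.getConnection(myUrl, props).
        Properties props = propsForScn(1456970422000L);
        System.out.println(props.getProperty("CurrentSCN"));
    }
}
```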

Modified: phoenix/site/publish/index.html
URL: http://svn.apache.org/viewvc/phoenix/site/publish/index.html?rev=1733552&r1=1733551&r2=1733552&view=diff
==============================================================================
--- phoenix/site/publish/index.html (original)
+++ phoenix/site/publish/index.html Fri Mar  4 03:00:22 2016
@@ -1,7 +1,7 @@
 
 <!DOCTYPE html>
 <!--
- Generated by Apache Maven Doxia at 2016-02-04
+ Generated by Apache Maven Doxia at 2016-03-03
  Rendered using Reflow Maven Skin 1.1.0 (http://andriusvelykis.github.io/reflow-maven-skin)
 -->
 <html  xml:lang="en" lang="en">
@@ -184,7 +184,7 @@
  <div class="page-header">
   <h2 id="Overview">Overview</h2>
  </div> 
- <p>Apache Phoenix is a relational database layer over HBase delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in <a href="performance.html">performance</a> on the order of milliseconds for small queries, or seconds for tens of millions of rows. </p> 
+ <p>Apache Phoenix is a relational database layer over HBase supporting full ACID transactions and delivered as a client-embedded JDBC driver that targets low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in <a href="performance.html">performance</a> on the order of milliseconds for small queries, or seconds for tens of millions of rows. </p> 
  <p align="center"> <br />Who is using Apache Phoenix? Read more <a href="who_is_using.html">here...</a><br /> <img src="images/using/all.png" alt="" /> </p> 
 </div> 
 <div class="section"> 
@@ -193,25 +193,37 @@
 </div> 
 <div class="section"> 
  <h2 id="Quick_Start">Quick Start</h2> 
- <p>Tired of reading already and just want to get started? Take a look at our <a href="faq.html">FAQs</a>, listen to the Apache Phoenix talks from <a class="externalLink" href="https://www.youtube.com/watch?v=f4Nmh5KM6gI&amp;feature=youtu.be">Hadoop Summit 2014</a>, review the <a class="externalLink" href="http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf">overview presentation</a>, and jump over to our quick start guide <a href="Phoenix-in-15-minutes-or-less.html">here</a>.</p> 
+ <p>Tired of reading already and just want to get started? Take a look at our <a href="faq.html">FAQs</a>, listen to the Apache Phoenix talk from <a class="externalLink" href="https://www.youtube.com/watch?v=XGa0SyJMH94">Hadoop Summit 2015</a>, review the <a class="externalLink" href="http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf">overview presentation</a>, and jump over to our quick start guide <a href="Phoenix-in-15-minutes-or-less.html">here</a>.</p> 
 </div> 
 <div class="section"> 
  <h2 id="SQL_Support">SQL Support</h2> 
  <p>To see what’s supported, go to our <a href="language/index.html">language reference</a>. It includes all typical SQL query statement clauses, including <tt>SELECT</tt>, <tt>FROM</tt>, <tt>WHERE</tt>, <tt>GROUP BY</tt>, <tt>HAVING</tt>, <tt>ORDER BY</tt>, etc. It also supports a full set of DML commands as well as table creation and versioned incremental alterations through our DDL commands. We try to follow the SQL standards wherever possible.</p> 
  <p><a name="connStr" id="connStr"></a>Use JDBC to get a connection to an HBase cluster like this:</p> 
  <div> 
-  <pre><tt>Connection conn = DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333&quot;);</tt></pre> 
+  <pre><tt>Connection conn = DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333&quot;,props);</tt></pre> 
  </div> 
- <p>where the connection string is composed of: </p> 
+ <p>where <tt>props</tt> are optional properties which may include Phoenix and HBase configuration properties, and the connection string is composed of: </p> 
  <div> 
-  <pre><tt>jdbc:phoenix</tt> [ <tt>:&lt;zookeeper quorum&gt;</tt> [ <tt>:&lt;port number&gt;</tt> ] [ <tt>:&lt;root node&gt;</tt> ] ]</pre> 
+  <pre><tt>jdbc:phoenix</tt> [ <tt>:&lt;zookeeper quorum&gt;</tt> [ <tt>:&lt;port number&gt;</tt> ] [ <tt>:&lt;root node&gt;</tt> ] [ <tt>:&lt;principal&gt;</tt> ] [ <tt>:&lt;keytab file&gt;</tt> ] ] </pre> 
+ </div> 
+ <p>For any omitted part, the relevant property values (hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent) will be used from the hbase-site.xml configuration file. The optional <tt>principal</tt> and <tt>keytab file</tt> may be used to connect to a Kerberos secured cluster. If only <tt>principal</tt> is specified, then this defines the user name, with each distinct user having their own dedicated HBase connection (HConnection). This provides a means of having multiple, different connections each with different configuration properties on the same JVM.</p> 
+ <p>For example, the following connection string might be used for longer running queries, where the <tt>longRunningProps</tt> specifies Phoenix and HBase configuration properties with longer timeouts: </p> 
+ <div> 
+  <pre><tt>Connection conn = DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333:longRunning&quot;, longRunningProps);</tt></pre> 
+ </div> while the following connection string might be used for shorter running queries: 
+ <div> 
+  <pre><tt>Connection conn = DriverManager.getConnection(&quot;jdbc:phoenix:server1,server2:3333:shortRunning&quot;, shortRunningProps);</tt></pre> 
+ </div> 
+ <div class="section"> 
+  <div class="section"> 
+   <h4 id="Not_Supported">Not Supported</h4> 
+   <p>Here’s a list of what is currently <b>not</b> supported:</p> 
+   <ul> 
+    <li><b>Relational operators</b>. Intersect, Minus.</li> 
+    <li><b>Miscellaneous built-in functions</b>. These are easy to add - read this <a class="externalLink" href="http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html">blog</a> for step by step instructions.</li> 
+   </ul> 
+  </div> 
  </div> 
- <p>For any omitted part, the relevant property value, hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent will be used from hbase-site.xml configuration file.</p> 
- <p>Here’s a list of what is currently <b>not</b> supported:</p> 
- <ul> 
-  <li><b>Relational operators</b>. Union, Intersect, Minus.</li> 
-  <li><b>Miscellaneous built-in functions</b>. These are easy to add - read this <a class="externalLink" href="http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html">blog</a> for step by step instructions.</li> 
- </ul> 
 </div> 
 <div class="section"> 
  <h2 id="transactions">Transactions<a name="Transactions"></a></h2> 
@@ -220,7 +232,7 @@
  <div class="section"> 
   <div class="section"> 
    <h4 id="Timestamps">Timestamps</h4> 
-   <p>Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the timestamps itself. In this case, a long-valued “CurrentSCN” property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.</p> 
+   <p>Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the timestamps itself. In this case, the <a href="faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API">CurrentSCN</a> property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.</p> 
    <p>Timestamps may not be controlled for transactional tables. Instead, the transaction manager assigns timestamps which become the HBase cell timestamps after a commit. Timestamps still correspond to wall clock time, however they are multiplied by 1,000,000 to ensure enough granularity for uniqueness across the cluster.</p> 
   </div> 
  </div> 
@@ -228,7 +240,7 @@
 <div class="section"> 
  <h2 id="schema">Schema<a name="Schema"></a></h2> 
  <p>Apache Phoenix supports table creation and versioned incremental alterations through DDL commands. The table metadata is stored in an HBase table.</p> 
- <p>A Phoenix table is created through the <a href="language/index.html#create">CREATE TABLE</a> DDL command and can either be:</p> 
+ <p>A Phoenix table is created through the <a href="language/index.html#create">CREATE TABLE</a> command and can either be:</p> 
  <ol style="list-style-type: decimal"> 
   <li><b>built from scratch</b>, in which case the HBase table and column families will be created automatically.</li> 
   <li><b>mapped to an existing HBase table</b>, by creating either a read-write TABLE or a read-only VIEW, with the caveat that the binary representation of the row key and key values must match that of the Phoenix data types (see <a href="language/datatypes.html">Data Types reference</a> for the detail on the binary representation). 
@@ -237,11 +249,21 @@
    <li>For a read-only VIEW, all column families must already exist. The only change made to the HBase table will be the addition of the Phoenix coprocessors used for query processing. The primary use case for a VIEW is to transfer existing data into a Phoenix table, since data modifications are not allowed on a VIEW and query performance will likely be less than with a TABLE.</li> 
    </ul></li> 
  </ol> 
- <p>All schema is versioned, and prior versions are stored forever. Thus, snapshot queries over older data will pick up and use the correct schema for each row.</p> 
+ <p>All schema is versioned (with up to 1000 versions being kept). Snapshot queries over older data will pick up and use the correct schema based on the time at which you’ve connected (based on the <a href="faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API">CurrentSCN</a> property).</p> 
  <div class="section"> 
   <div class="section"> 
-   <h4 id="Salting">Salting</h4> 
-   <p>A table could also be declared as salted to prevent HBase region hot spotting. You just need to declare how many salt buckets your table has, and Phoenix will transparently manage the salting for you. You’ll find more detail on this feature <a href="salted.html">here</a>, along with a nice comparison on write throughput between salted and unsalted tables <a href="performance.html#salting">here</a>.</p> 
+   <h4 id="Altering">Altering</h4> 
+   <p>A Phoenix table may be altered through the <a href="language/index.html#alter">ALTER TABLE</a> command. When a SQL statement is run which references a table, Phoenix will by default check with the server to ensure it has the most up-to-date table metadata and statistics. This RPC may not be necessary when you know in advance that the structure of a table may never change. The UPDATE_CACHE_FREQUENCY property was added in Phoenix 4.7 to allow the user to declare how often the server will be checked for metadata updates (for example, the addition or removal of a table column or the updates of table statistics). Possible values are ALWAYS (the default), NEVER, and a millisecond numeric value. An ALWAYS value will cause the client to check with the server each time a statement is executed that references a table (or once per commit for an UPSERT VALUES statement). A millisecond value indicates how long the client will hold on to its cached version of the metadata before checking back with the server for updates.</p> 
+   <p>For example, the following DDL command would create table <tt>FOO</tt> and declare that a client should only check for updates to the table or its statistics every 15 minutes:</p> 
+   <p><tt> CREATE TABLE FOO (k BIGINT PRIMARY KEY, v VARCHAR) UPDATE_CACHE_FREQUENCY=900000; </tt></p> 
+  </div> 
+  <div class="section"> 
+   <h4 id="Views">Views</h4> 
+   <p>Phoenix supports updatable views on top of tables, with the unique feature (leveraging the schemaless capabilities of HBase) of being able to add columns to them. All views share the same underlying physical HBase table and may even be indexed independently. For more, read <a href="views.html">here</a>. </p> 
+  </div> 
+  <div class="section"> 
+   <h4 id="Multi-tenancy">Multi-tenancy</h4> 
+   <p>Built on top of view support, Phoenix also supports <a href="multi-tenancy.html">multi-tenancy</a>. Just as with views, a multi-tenant view may add columns which are defined solely for that user.</p> 
   </div> 
   <div class="section"> 
    <h4 id="Schema_at_Read-time">Schema at Read-time</h4> 
@@ -257,11 +279,15 @@
   <p>The other caveat is that the way the bytes were serialized in HBase must match the way the bytes are expected to be serialized by Phoenix. For VARCHAR, CHAR, and UNSIGNED_* types, Phoenix uses the HBase Bytes utility methods to perform serialization. The CHAR type expects only single-byte characters and the UNSIGNED types expect values greater than or equal to zero.</p> 
    <p>Our composite row keys are formed by simply concatenating the values together, with a zero byte character used as a separator after a variable length type. For more information on our type system, see the <a href="language/datatypes.html">Data Type</a>.</p> 
   </div> 
+  <div class="section"> 
+   <h4 id="Salting">Salting</h4> 
+   <p>A table could also be declared as salted to prevent HBase region hot spotting. You just need to declare how many salt buckets your table has, and Phoenix will transparently manage the salting for you. You’ll find more detail on this feature <a href="salted.html">here</a>, along with a nice comparison on write throughput between salted and unsalted tables <a href="performance.html#salting">here</a>.</p> 
+  </div> 
+  <div class="section"> 
+   <h4 id="APIs">APIs</h4> 
+   <p>The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: <tt>DatabaseMetaData</tt>, <tt>ParameterMetaData</tt>, and <tt>ResultSetMetaData</tt>. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the character). The table catalog argument in the metadata APIs is used to filter based on the tenant ID for multi-tenant tables.</p> 
+  </div> 
  </div> 
-</div> 
-<div class="section"> 
- <h2 id="Metadata">Metadata</h2> 
- <p>The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: <tt>DatabaseMetaData</tt>, <tt>ParameterMetaData</tt>, and <tt>ResultSetMetaData</tt>. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the character). The table catalog argument to the metadata APIs deviates from a more standard relational database model, and instead is used to specify a column family name (in particular to see all columns in a given column family).</p> 
 </div>
 			</div>
 		</div>
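The UPDATE_CACHE_FREQUENCY behavior described in the index page diff above can be sketched as the following client-side decision. The class and method names are illustrative, not Phoenix internals:

```java
public class UpdateCacheFrequency {
    // Decides whether the client should RPC the server for fresh table
    // metadata. ALWAYS -> check on every statement; NEVER -> never check;
    // a numeric value -> check only once the cached copy is at least that
    // many milliseconds old.
    static boolean shouldCheckServer(String frequency, long cacheAgeMs) {
        if ("ALWAYS".equals(frequency)) {
            return true;
        }
        if ("NEVER".equals(frequency)) {
            return false;
        }
        return cacheAgeMs >= Long.parseLong(frequency);
    }

    public static void main(String[] args) {
        // With UPDATE_CACHE_FREQUENCY=900000 (15 minutes), a 10-minute-old
        // cache entry is still trusted and no server check is made.
        System.out.println(shouldCheckServer("900000", 600000L));
    }
}
```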

Modified: phoenix/site/source/src/site/markdown/faq.md
URL: http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/faq.md?rev=1733552&r1=1733551&r2=1733552&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/faq.md (original)
+++ phoenix/site/source/src/site/markdown/faq.md Fri Mar  4 03:00:22 2016
@@ -249,10 +249,10 @@ Hadoop-2 profile exists in Phoenix pom.x
 
 
 ### Can phoenix work on tables with arbitrary timestamp as flexible as HBase API?
-By default, Phoenix let's HBase manage the timestamps and just shows you the latest values for everything. However, Phoenix also allows arbitrary timestamps to be supplied by the user. To do that you'd specify a "CurrentSCN" (or PhoenixRuntime.CURRENT_SCN_ATTRIB if you want to use our constant) at connection time, like this:
+By default, Phoenix lets HBase manage the timestamps and just shows you the latest values for everything. However, Phoenix also allows arbitrary timestamps to be supplied by the user. To do that you'd specify a "CurrentSCN" at connection time, like this:
 
     Properties props = new Properties();
-    props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts));
+    props.setProperty("CurrentSCN", Long.toString(ts));
    Connection conn = DriverManager.getConnection(myUrl, props);
 
     conn.createStatement().execute("UPSERT INTO myTable VALUES ('a')");

Modified: phoenix/site/source/src/site/markdown/index.md
URL: http://svn.apache.org/viewvc/phoenix/site/source/src/site/markdown/index.md?rev=1733552&r1=1733551&r2=1733552&view=diff
==============================================================================
--- phoenix/site/source/src/site/markdown/index.md (original)
+++ phoenix/site/source/src/site/markdown/index.md Fri Mar  4 03:00:22 2016
@@ -53,7 +53,7 @@ Announcing [transaction support](transac
 
 ## Overview
 
-Apache Phoenix is a relational database layer over HBase delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in [performance](performance.html) on the order of milliseconds for small queries, or seconds for tens of millions of rows. 
+Apache Phoenix is a relational database layer over HBase supporting full ACID transactions and delivered as a client-embedded JDBC driver that targets low latency queries over HBase data. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in [performance](performance.html) on the order of milliseconds for small queries, or seconds for tens of millions of rows. 
 
 <p align="center">
 <br/>Who is using Apache Phoenix? Read more <a href="who_is_using.html">here...</a><br/>
@@ -63,22 +63,36 @@ Apache Phoenix is a relational database
 Become the standard means of accessing HBase data through a well-defined, industry standard API.
 
 ## Quick Start
-Tired of reading already and just want to get started? Take a look at our [FAQs](faq.html), listen to the Apache Phoenix talks from [Hadoop Summit 2014](https://www.youtube.com/watch?v=f4Nmh5KM6gI&feature=youtu.be), review the [overview presentation](http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf), and jump over to our quick start guide [here](Phoenix-in-15-minutes-or-less.html).
+Tired of reading already and just want to get started? Take a look at our [FAQs](faq.html), listen to the Apache Phoenix talk from [Hadoop Summit 2015](https://www.youtube.com/watch?v=XGa0SyJMH94), review the [overview presentation](http://phoenix.apache.org/presentations/OC-HUG-2014-10-4x3.pdf), and jump over to our quick start guide [here](Phoenix-in-15-minutes-or-less.html).
 
 ## SQL Support
 To see what's supported, go to our [language reference](language/index.html). It includes all typical SQL query statement clauses, including `SELECT`, `FROM`, `WHERE`, `GROUP BY`, `HAVING`, `ORDER BY`, etc. It also supports a full set of DML commands as well as table creation and versioned incremental alterations through our DDL commands. We try to follow the SQL standards wherever possible.
 
 <a id="connStr"></a>Use JDBC to get a connection to an HBase cluster like this:
 
-<pre><code>Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333");</code></pre>
-where the connection string is composed of:
-<pre><code>jdbc:phoenix</code> [ <code>:&lt;zookeeper quorum&gt;</code> [ <code>:&lt;port number&gt;</code> ] [ <code>:&lt;root node&gt;</code> ] ]</pre>
+<pre><code>Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333",props);</code></pre>
+where <code>props</code> are optional properties which may include Phoenix and HBase configuration properties, and
+the connection string is composed of:
+<pre><code>jdbc:phoenix</code> [ <code>:&lt;zookeeper quorum&gt;</code> [ <code>:&lt;port number&gt;</code> ] [ <code>:&lt;root node&gt;</code> ] [ <code>:&lt;principal&gt;</code> ] [ <code>:&lt;keytab file&gt;</code> ] ]
+</pre>
+
+For any omitted part, the relevant property values (hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent)
+will be used from the hbase-site.xml configuration file. The optional <code>principal</code> and <code>keytab file</code> may be used to connect
+to a Kerberos secured cluster. If only <code>principal</code> is specified, then this defines the user name, with each distinct
+user having their own dedicated HBase connection (HConnection). This provides a means of having multiple, different connections each with different
+configuration properties on the same JVM.
+
+For example, the following connection string might be used for longer running queries, where the <code>longRunningProps</code> specifies Phoenix and HBase configuration properties with longer timeouts:
+<pre><code>Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333:longRunning", longRunningProps);</code></pre>
+while the following connection string might be used for shorter running queries:
 
-For any omitted part, the relevant property value, hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort, and zookeeper.znode.parent will be used from hbase-site.xml configuration file.
+<pre><code>Connection conn = DriverManager.getConnection("jdbc:phoenix:server1,server2:3333:shortRunning", shortRunningProps);</code></pre>
 
+
+####Not Supported
 Here's a list of what is currently **not** supported:
 
-* **Relational operators**. Union, Intersect, Minus.
+* **Relational operators**. Intersect, Minus.
 * **Miscellaneous built-in functions**. These are easy to add - read this [blog](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html) for step by step instructions.
 
 ##<a id="transactions"></a>Transactions##
@@ -87,7 +101,10 @@ To enable full ACID transactions, a beta
 Non transactional tables have no guarantees above and beyond the HBase guarantee of row level atomicity (see [here](https://hbase.apache.org/acid-semantics.html)). In addition, non transactional tables will not see their updates until after a commit has occurred. The DML commands of Apache Phoenix, UPSERT VALUES, UPSERT SELECT and DELETE, batch pending changes to HBase tables on the client side. The changes are sent to the server when the transaction is committed and discarded when the transaction is rolled back. If auto commit is turned on for a connection, then Phoenix will, whenever possible, execute the entire DML command through a coprocessor on the server-side, so performance will improve.
 
 ####Timestamps
-Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the timestamps itself. In this case, a long-valued “CurrentSCN” property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.
+Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the
+timestamps itself. In this case, the [CurrentSCN](faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API) 
+property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot
+queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.
 
 Timestamps may not be controlled for transactional tables. Instead, the transaction manager assigns timestamps which become the HBase cell timestamps after a commit. Timestamps still correspond to wall clock time, however they are multiplied by 1,000,000 to ensure enough granularity for uniqueness across the cluster.
 
@@ -95,17 +112,39 @@ Timestamps may not be controlled for tra
 
 Apache Phoenix supports table creation and versioned incremental alterations through DDL commands. The table metadata is stored in an HBase table.
 
-A Phoenix table is created through the [CREATE TABLE](language/index.html#create) DDL command and can either be:
+A Phoenix table is created through the [CREATE TABLE](language/index.html#create) command and can either be:
 
 1. **built from scratch**, in which case the HBase table and column families will be created automatically.
 2. **mapped to an existing HBase table**, by creating either a read-write TABLE or a read-only VIEW, with the caveat that the binary representation of the row key and key values must match that of the Phoenix data types (see [Data Types reference](language/datatypes.html) for the detail on the binary representation).
     * For a read-write TABLE, column families will be created automatically if they don't already exist. An empty key value will be added to the first column family of each existing row to minimize the size of the projection for queries.
    * For a read-only VIEW, all column families must already exist. The only change made to the HBase table will be the addition of the Phoenix coprocessors used for query processing. The primary use case for a VIEW is to transfer existing data into a Phoenix table, since data modifications are not allowed on a VIEW and query performance will likely be less than with a TABLE.
 
-All schema is versioned, and prior versions are stored forever. Thus, snapshot queries over older data will pick up and use the correct schema for each row.
+All schema is versioned (with up to 1000 versions being kept). Snapshot queries over older data will pick up and use the correct schema based on
+the time at which you've connected (based on the [CurrentSCN](faq.html#Can_phoenix_work_on_tables_with_arbitrary_timestamp_as_flexible_as_HBase_API) property).
 
-####Salting
-A table could also be declared as salted to prevent HBase region hot spotting. You just need to declare how many salt buckets your table has, and Phoenix will transparently manage the salting for you. You'll find more detail on this feature [here](salted.html), along with a nice comparison on write throughput between salted and unsalted tables [here](performance.html#salting).
+####Altering
+A Phoenix table may be altered through the [ALTER TABLE](language/index.html#alter) command. When a SQL statement is run which references
+a table, Phoenix will by default check with the server to ensure it has the most up-to-date table metadata and statistics. This RPC may not be
+necessary when you know in advance that the structure of a table may never change. The UPDATE_CACHE_FREQUENCY property was added in
+Phoenix 4.7 to allow the user to declare how often the server will be checked for metadata updates (for example, the addition or removal of a
+table column or the updates of table statistics). Possible values are ALWAYS (the default), NEVER, and a millisecond numeric value. An ALWAYS
+value will cause the client to check with the server each time a statement is executed that references a table (or once per commit for an
+UPSERT VALUES statement). A millisecond value indicates how long the client will hold on to its cached version of the metadata before checking
+back with the server for updates.
+
+For example, the following DDL command would create table <code>FOO</code> and declare that a client should only check for updates
+to the table or its statistics every 15 minutes:
+
+<code>
+CREATE TABLE FOO (k BIGINT PRIMARY KEY, v VARCHAR) UPDATE_CACHE_FREQUENCY=900000;
+</code>
+
+####Views
+Phoenix supports updatable views on top of tables, with the unique feature (leveraging the schemaless capabilities of HBase) of being able to
+add columns to them. All views share the same underlying physical HBase table and may even be indexed independently. For more, read [here](views.html). 
+
+####Multi-tenancy
+Built on top of view support, Phoenix also supports [multi-tenancy](multi-tenancy.html). Just as with views, a multi-tenant view may add columns which are defined solely for that user.
 
 ####Schema at Read-time
 Another schema-related feature allows columns to be defined dynamically at query time. This is useful in situations where you don't know in advance all of the columns at create time. You'll find more details on this feature [here](dynamic_columns.html).
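 
 For example (a sketch with hypothetical names), a column never declared in the DDL may be surfaced at query time by typing it inline:
 
 <code>
 SELECT event_id, last_gc_time FROM event_log(last_gc_time TIME) WHERE event_id = 1;
 </code>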
@@ -114,11 +153,22 @@ Another schema-related feature allows co
 Apache Phoenix supports mapping to an existing HBase table through the [CREATE TABLE](language/index.html#create) and [CREATE VIEW](language/index.html#create) DDL statements. In both cases, the HBase metadata is left as-is, except for with CREATE TABLE the [KEEP_DELETED_CELLS](http://hbase.apache.org/book/cf.keep.deleted.html) option is enabled to allow for flashback queries to work correctly. For CREATE TABLE, any HBase metadata (table, column families) that doesn't already exist will be created. Note that the table and column family names are case sensitive, with Phoenix upper-casing all names. To make a name case sensitive in the DDL statement, surround it with double quotes as shown below:
       <pre><code>CREATE VIEW "MyTable" ("a".ID VARCHAR PRIMARY KEY)</code></pre>
 
-For CREATE TABLE, an empty key value will also be added for each row so that queries behave as expected (without requiring all columns to be projected during scans). For CREATE VIEW, this will not be done, nor will any HBase metadata be created. Instead the existing HBase metadata must match the metadata specified in the DDL statement or a <code>ERROR 505 (42000): Table is read only</code> will be thrown.
+For CREATE TABLE, an empty key value will also be added for each row so that queries behave as expected (without requiring all columns to
+be projected during scans). For CREATE VIEW, this will not be done, nor will any HBase metadata be created. Instead the existing HBase
+metadata must match the metadata specified in the DDL statement or an <code>ERROR 505 (42000): Table is read only</code> error will be thrown.
+
+The other caveat is that the way the bytes were serialized in HBase must match the way the bytes are expected to be serialized by Phoenix.
+For VARCHAR, CHAR, and UNSIGNED_* types, Phoenix uses the HBase Bytes utility methods to perform serialization. The CHAR type expects only
+single-byte characters and the UNSIGNED types expect values greater than or equal to zero.
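+
+For instance (a sketch with hypothetical names), a value written through the HBase API with <code>Bytes.toBytes(1000L)</code> can be read
+back through a mapped view by declaring the column as UNSIGNED_LONG, since both sides use the same big-endian eight-byte encoding:
+
+<code>
+CREATE VIEW "t" (pk VARCHAR PRIMARY KEY, "f".counter UNSIGNED_LONG);
+</code>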
 
-The other caveat is that the way the bytes were serialized in HBase must match the way the bytes are expected to be serialized by Phoenix. For VARCHAR,CHAR, and UNSIGNED_* types, Phoenix uses the HBase Bytes utility methods to perform serialization. The CHAR type expects only single-byte characters and the UNSIGNED types expect values greater than or equal to zero.
+Our composite row keys are formed by simply concatenating the values together, with a zero byte character used as a separator after a
+variable length type. For more information on our type system, see the [Data Types](language/datatypes.html) page.
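+
+As an illustration (hypothetical names), for a table declared as
+
+<code>
+CREATE TABLE t (host VARCHAR, created_date DATE CONSTRAINT pk PRIMARY KEY (host, created_date));
+</code>
+
+a row with host 'a' would have an HBase row key consisting of the byte for 'a', then a zero byte separator (since VARCHAR is variable length), then the eight bytes of the date.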
 
-Our composite row keys are formed by simply concatenating the values together, with a zero byte character used as a separator after a variable length type. For more information on our type system, see the [Data Type](language/datatypes.html).
+####Salting
+A table may also be declared as salted to prevent HBase region hot spotting. You just need to declare how many salt buckets your table has, and Phoenix will transparently manage the salting for you. You'll find more detail on this feature [here](salted.html), along with a nice comparison of write throughput between salted and unsalted tables [here](performance.html#salting).
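+
+For example (a sketch), pre-splitting a table into four salt buckets is just:
+
+<code>
+CREATE TABLE logs (k VARCHAR PRIMARY KEY, v VARCHAR) SALT_BUCKETS=4;
+</code>
+
+Phoenix prepends a single salt byte (a hash of the row key modulo the bucket count) to each row key, spreading sequential writes across regions.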
 
-## Metadata ##
-The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: `DatabaseMetaData`, `ParameterMetaData`, and `ResultSetMetaData`. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the \ character). The table catalog argument to the metadata APIs deviates from a more standard relational database model, and instead is used to specify a column family name (in particular to see all columns in a given column family).
+####APIs
+The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: `DatabaseMetaData`,
+`ParameterMetaData`, and `ResultSetMetaData`. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema
+pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the \ character).
+The table catalog argument in the metadata APIs is used to filter based on the tenant ID for multi-tenant tables.
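+
+For example (a sketch assuming a local Phoenix connection), the columns of matching tables may be listed through the standard JDBC interface:
+
+<pre><code>Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
+DatabaseMetaData md = conn.getMetaData();
+// null catalog, any schema, tables starting with "MY", all columns
+ResultSet rs = md.getColumns(null, null, "MY%", null);
+while (rs.next()) {
+    System.out.println(rs.getString("COLUMN_NAME"));
+}</code></pre>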