Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/19 01:36:46 UTC

[23/31] drill git commit: add BB's clause pages, renamed sql command pages

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/configure-drill/070-configuring-user-impersonation.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/070-configuring-user-impersonation.md b/_docs/configure-drill/070-configuring-user-impersonation.md
index 6203ca1..dbcdaf0 100644
--- a/_docs/configure-drill/070-configuring-user-impersonation.md
+++ b/_docs/configure-drill/070-configuring-user-impersonation.md
@@ -43,7 +43,7 @@ The following table lists the clients, storage plugins, and types of queries tha
 </table>
 
 ## Impersonation and Views
-You can use views with impersonation to provide granular access to data and protect sensitive information. When you create a view, Drill stores the view definition in a file and suffixes the file with .drill.view. For example, if you create a view named myview, Drill creates a view file named myview.drill.view and saves it in the current workspace or the workspace specified, such as dfs.views.myview. See [CREATE VIEW]({{site.baseurl}}/docs/create-view-command/) Command.
+You can use views with impersonation to provide granular access to data and protect sensitive information. When you create a view, Drill stores the view definition in a file and suffixes the file with .view.drill. For example, if you create a view named myview, Drill creates a view file named myview.view.drill and saves it in the current workspace or the workspace specified, such as dfs.views.myview. See the [CREATE VIEW]({{site.baseurl}}/docs/create-view) command.
 
 You can create a view and grant read permissions on the view to give other users access to the data that the view references. When a user queries the view, Drill impersonates the view owner to access the underlying data. A user with read access to a view can create new views from the originating view to further restrict access on data.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 548b709..a793dbf 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -95,7 +95,7 @@ You can write data from Drill to a JSON file. The following setup is required:
         CREATE TABLE my_json AS
         SELECT my column from dfs.`<path_file_name>`;
 
-Drill performs the following actions, as shown in the complete [CTAS command example]({{ site.baseurl }}/docs/create-table-as-ctas-command):
+Drill performs the following actions, as shown in the complete [CTAS command example]({{ site.baseurl }}/docs/create-table-as-ctas/):
    
 * Creates a directory using table name.
 * Writes the JSON data to the directory in the workspace location.

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/040-operators.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/040-operators.md b/_docs/sql-reference/040-operators.md
index 2de460e..4eb83f9 100644
--- a/_docs/sql-reference/040-operators.md
+++ b/_docs/sql-reference/040-operators.md
@@ -60,7 +60,7 @@ You can use the following subquery operators in your Drill queries:
   * EXISTS
   * IN
 
-See [SELECT Statements]({{ site.baseurl }}/docs/select-statements).
+See [SELECT Statements]({{ site.baseurl }}/docs/select).
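+
+For example (an illustrative sketch; `hive.orders` and `dfs.views.customers_vw` are sample objects, not objects defined on this page), an IN subquery limits the outer query to rows whose cust_id appears in the subquery result:
+
+    SELECT o.order_id, o.order_total
+    FROM hive.orders o
+    WHERE o.cust_id IN (SELECT v.cust_id FROM dfs.views.customers_vw v);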
 
 ## String Concatenate Operator
 

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/005-supported-sql-commands.md b/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
index 233019f..37a08eb 100644
--- a/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
+++ b/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
@@ -6,4 +6,4 @@ The following table provides a list of the SQL commands that Drill supports,
 with their descriptions and example syntax:
 
 <table style='table-layout:fixed;width:100%'>
-    <tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" width="15%"><a href="/docs/alter-session-command">ALTER SESSION</a></td><td valign="top" width="60%">Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top"><pre>ALTER SESSION SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/alter-system-command">ALTER SYSTEM</a></td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top" ><pre>ALTER SYSTEM SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><p><a href="/docs/create-table-as--ctas-command">CREATE TABLE AS<br />(CTAS)</a></p></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains parquet or CSV files. Each workspace in a file system has a default file type.<br />You can specify which writer you want Drill to use when creating a table: parquet, CSV, or JSON (as specified with the <code>store.format</code> option).</td><td valign="top" ><pre class="programlisting">CREATE TABLE new_table_name AS &lt;query&gt;;</pre></td></tr><tr><td - valign="top" ><a href="/docs/create-view-command">CREATE VIEW </a></td><td - valign="top" >Creates a virtual structure for the result set of a stored query.-</td><td -valign="top" ><pre>CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS &lt;query&gt;;</pre></td></tr><tr><td  valign="top" ><a href="/docs/describe-command">DESCRIBE</a></td><td  valign="top" >Returns information about columns in a table or view.</td><td valign="top" ><pre>DESCRIBE [workspace.]table_name|view_name</pre></td></tr><tr><td valign="top" ><a href="/docs/drop-view-command">DROP VIEW</a></td><td valign="top" >Removes a view.</td><td valign="top" ><pre>DROP VIEW [workspace.]view_name ;</pre></td></tr><tr><td  valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN FOR</a></td><td valign="top" >Returns the physical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN FOR &lt;query&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" >Returns the logical plan for a particular query.</td><td  valign="top" ><pre>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR &lt;query&gt;;</pre></td></tr><tr><td colspan="1" valign="top" ><a href="/docs/select-statements" rel="nofollow">SELECT</a></td><td valign="top" >Retrieves data from tables and files.</td><td  valign="top" ><pre>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</pre></td></tr><tr><td  valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW DATABASES </a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" ><pre>SHOW DATABASES;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-files-command" >SHOW FILES</a></td><td valign="top" >Returns a list of files in a file system schema.</td><td valign="top" ><pre>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW SCHEMAS</a></td><td - valign="top" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" ><pre>SHOW SCHEMAS;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-tables-command">SHOW TABLES</a></td><td valign="top" >Returns a list of tables and views.</td><td valign="top" ><pre>SHOW TABLES;</pre></td></tr><tr><td valign="top" ><a href="/docs/use-command">USE</a></td><td valign="top" >Change to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" ><pre>USE schema_name;</pre></td></tr></table>
+    <tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" width="15%"><a href="/docs/alter-session">ALTER SESSION</a></td><td valign="top" width="60%">Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top"><pre>ALTER SESSION SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/alter-system">ALTER SYSTEM</a></td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top" ><pre>ALTER SYSTEM SET `&lt;option_name&gt;`=&lt;value&gt;;</pre></td></tr><tr><td valign="top" ><p><a href="/docs/create-table-as--ctas">CREATE TABLE AS<br />(CTAS)</a></p></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains parquet or CSV files. Each workspace in a file system has a default file type.<br />You can specify which writer you want Drill to use when creating a table: parquet, CSV, or JSON (as specified with the <code>store.format</code> option).</td><td valign="top" ><pre class="programlisting">CREATE TABLE new_table_name AS &lt;query&gt;;</pre></td></tr><tr><td - valign="top" ><a href="/docs/create-view">CREATE VIEW </a></td><td - valign="top" >Creates a virtual structure for the result set of a stored query.-</td><td -valign="top" ><pre>CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS &lt;query&gt;;</pre></td></tr><tr><td  valign="top" ><a href="/docs/describe">DESCRIBE</a></td><td  valign="top" >Returns information about columns in a table or view.</td><td valign="top" ><pre>DESCRIBE [workspace.]table_name|view_name</pre></td></tr><tr><td valign="top" ><a href="/docs/drop-view-command">DROP VIEW</a></td><td valign="top" >Removes a view.</td><td valign="top" ><pre>DROP VIEW [workspace.]view_name ;</pre></td></tr><tr><td  valign="top" ><a href="/docs/explain">EXPLAIN PLAN FOR</a></td><td valign="top" >Returns the physical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN FOR &lt;query&gt;;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" >Returns the logical plan for a particular query.</td><td  valign="top" ><pre>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR &lt;query&gt;;</pre></td></tr><tr><td colspan="1" valign="top" ><a href="/docs/select" rel="nofollow">SELECT</a></td><td valign="top" >Retrieves data from tables and files.</td><td  valign="top" ><pre>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</pre></td></tr><tr><td  valign="top" ><a href="/docs/show-databases-and-show-schemas">SHOW DATABASES </a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" ><pre>SHOW DATABASES;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-files" >SHOW FILES</a></td><td valign="top" >Returns a list of files in a file system schema.</td><td valign="top" ><pre>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas">SHOW SCHEMAS</a></td><td - valign="top" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" ><pre>SHOW SCHEMAS;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-tables">SHOW TABLES</a></td><td valign="top" >Returns a list of tables and views.</td><td valign="top" ><pre>SHOW TABLES;</pre></td></tr><tr><td valign="top" ><a href="/docs/use">USE</a></td><td valign="top" >Change to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" ><pre>USE schema_name;</pre></td></tr></table>

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/010-alter-session-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/010-alter-session-command.md b/_docs/sql-reference/sql-commands/010-alter-session-command.md
deleted file mode 100644
index c3bdc86..0000000
--- a/_docs/sql-reference/sql-commands/010-alter-session-command.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "ALTER SESSION Command"
-parent: "SQL Commands"
----
-The ALTER SESSION command changes a system setting for the duration of a
-session. Session level settings override system level settings.
-
-## Syntax
-
-The ALTER SESSION command supports the following syntax:
-
-    ALTER SESSION SET `<option_name>`=<value>;
-
-## Parameters
-
-*option_name*  
-This is the option name as it appears in the systems table.
-
-*value*  
-A value of the type listed in the sys.options table: number, string, boolean,
-or float. Use the appropriate value type for each option that you set.
-
-## Usage Notes
-
-Use the ALTER SESSION command to set Drill query planning and execution
-options per session in a cluster. The options that you set using the ALTER
-SESSION command only apply to queries that run during the current Drill
-connection. A session ends when you quit the Drill shell. You can set any of
-the system level options at the session level.
-
-You can run the following query to see a complete list of planning and
-execution options that are currently set at the system or session level:
-
-    0: jdbc:drill:zk=local> SELECT name, type FROM sys.options WHERE type in ('SYSTEM','SESSION') order by name;
-    +------------+----------------------------------------------+
-    |   name                                       |    type    |
-    +----------------------------------------------+------------+
-    | drill.exec.functions.cast_empty_string_to_null | SYSTEM   |
-    | drill.exec.storage.file.partition.column.label | SYSTEM   |
-    | exec.errors.verbose                          | SYSTEM     |
-    | exec.java_compiler                           | SYSTEM     |
-    | exec.java_compiler_debug                     | SYSTEM     |
-    …
-    +------------+----------------------------------------------+
-
-{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
-
-## Example
-
-This example demonstrates how to use the ALTER SESSION command to set the
-`store.json.all_text_mode` option to “true” for the current Drill session.
-Setting this option to “true” enables text mode so that Drill reads everything
-in JSON as a text object instead of trying to interpret data types. This
-allows complicated JSON to be read using CASE and CAST.
-
-    0: jdbc:drill:zk=local> alter session set `store.json.all_text_mode`= true;
-    +------------+------------+
-    |   ok  |  summary   |
-    +------------+------------+
-    | true      | store.json.all_text_mode updated. |
-    +------------+------------+
-    1 row selected (0.046 seconds)
-
-You can issue a query to see all of the session level settings. Note that the
-option type is case-sensitive.
-
-    0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SESSION' order by name;
-    +------------+------------+------------+
-    |   name    |   type    |  bool_val  |
-    +------------+------------+------------+
-    | store.json.all_text_mode | SESSION    | true      |
-    +------------+------------+------------+
-    1 row selected (0.176 seconds)
-

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/010-alter-session.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/010-alter-session.md b/_docs/sql-reference/sql-commands/010-alter-session.md
new file mode 100644
index 0000000..5173b57
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/010-alter-session.md
@@ -0,0 +1,74 @@
+---
+title: "ALTER SESSION"
+parent: "SQL Commands"
+---
+The ALTER SESSION command changes a system setting for the duration of a
+session. Session level settings override system level settings.
+
+## Syntax
+
+The ALTER SESSION command supports the following syntax:
+
+    ALTER SESSION SET `<option_name>`=<value>;
+
+## Parameters
+
+*option_name*  
+This is the option name as it appears in the sys.options table.
+
+*value*  
+A value of the type listed in the sys.options table: number, string, boolean,
+or float. Use the appropriate value type for each option that you set.
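+
+For illustration, the statements below set a numeric option and a string option (the option names are examples only; query the sys.options table for the options available in your installation):
+
+    ALTER SESSION SET `planner.slice_target` = 10000;
+    ALTER SESSION SET `store.format` = 'json';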
+
+## Usage Notes
+
+Use the ALTER SESSION command to set Drill query planning and execution
+options per session in a cluster. The options that you set using the ALTER
+SESSION command only apply to queries that run during the current Drill
+connection. A session ends when you quit the Drill shell. You can set any of
+the system level options at the session level.
+
+You can run the following query to see a complete list of planning and
+execution options that are currently set at the system or session level:
+
+    0: jdbc:drill:zk=local> SELECT name, type FROM sys.options WHERE type in ('SYSTEM','SESSION') order by name;
+    +------------+----------------------------------------------+
+    |   name                                       |    type    |
+    +----------------------------------------------+------------+
+    | drill.exec.functions.cast_empty_string_to_null | SYSTEM   |
+    | drill.exec.storage.file.partition.column.label | SYSTEM   |
+    | exec.errors.verbose                          | SYSTEM     |
+    | exec.java_compiler                           | SYSTEM     |
+    | exec.java_compiler_debug                     | SYSTEM     |
+    …
+    +------------+----------------------------------------------+
+
+{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
+
+## Example
+
+This example demonstrates how to use the ALTER SESSION command to set the
+`store.json.all_text_mode` option to “true” for the current Drill session.
+Setting this option to “true” enables text mode so that Drill reads everything
+in JSON as a text object instead of trying to interpret data types. This
+allows complicated JSON to be read using CASE and CAST.
+
+    0: jdbc:drill:zk=local> alter session set `store.json.all_text_mode`= true;
+    +------------+------------+
+    |   ok  |  summary   |
+    +------------+------------+
+    | true      | store.json.all_text_mode updated. |
+    +------------+------------+
+    1 row selected (0.046 seconds)
+
+You can issue a query to see all of the session level settings. Note that the
+option type is case-sensitive.
+
+    0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SESSION' order by name;
+    +------------+------------+------------+
+    |   name    |   type    |  bool_val  |
+    +------------+------------+------------+
+    | store.json.all_text_mode | SESSION    | true      |
+    +------------+------------+------------+
+    1 row selected (0.176 seconds)
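+
+To revert a session-level setting, set the option back to its previous value; for example (assuming the default for this option is false):
+
+    0: jdbc:drill:zk=local> alter session set `store.json.all_text_mode` = false;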
+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/020-alter-system.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/020-alter-system.md b/_docs/sql-reference/sql-commands/020-alter-system.md
index a351ac8..cc7a9f1 100644
--- a/_docs/sql-reference/sql-commands/020-alter-system.md
+++ b/_docs/sql-reference/sql-commands/020-alter-system.md
@@ -1,5 +1,5 @@
 ---
-title: "ALTER SYSTEM Command"
+title: "ALTER SYSTEM"
 parent: "SQL Commands"
 ---
 The ALTER SYSTEM command permanently changes a system setting. The new setting

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/030-create-table-as-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/030-create-table-as-command.md b/_docs/sql-reference/sql-commands/030-create-table-as-command.md
deleted file mode 100644
index 8e0a4e1..0000000
--- a/_docs/sql-reference/sql-commands/030-create-table-as-command.md
+++ /dev/null
@@ -1,134 +0,0 @@
----
-title: "CREATE TABLE AS (CTAS) Command"
-parent: "SQL Commands"
----
-You can create tables in Drill by using the CTAS command:
-
-    CREATE TABLE new_table_name AS <query>;
-
-where query is any valid Drill query. Each table you create must have a unique
-name. You can include an optional column list for the new table. For example:
-
-    create table logtable(transid, prodid) as select transaction_id, product_id from ...
-
-You can store table data in one of three formats:
-
-  * csv
-  * parquet
-  * json
-
-The parquet and json formats can be used to store complex data.
-
-To set the output format for a Drill table, set the `store.format` option with
-the ALTER SYSTEM or ALTER SESSION command. For example:
-
-    alter session set `store.format`='json';
-
-Table data is stored in the location specified by the workspace that is in use
-when you run the CTAS statement. By default, a directory is created, using the
-exact table name specified in the CTAS statement. A .json, .csv, or .parquet
-file inside that directory contains the data.
-
-You can only create new tables in workspaces. You cannot create tables in
-other storage plugins such as Hive and HBase.
-
-You must use a writable (mutable) workspace when creating Drill tables. For
-example:
-
-	"tmp": {
-	      "location": "/tmp",
-	      "writable": true,
-	       }
-
-## Example
-
-The following query returns one row from a JSON file:
-
-	0: jdbc:drill:zk=local> select id, type, name, ppu
-	from dfs.`/Users/brumsby/drill/donuts.json`;
-	+------------+------------+------------+------------+
-	|     id     |    type    |    name    |    ppu     |
-	+------------+------------+------------+------------+
-	| 0001       | donut      | Cake       | 0.55       |
-	+------------+------------+------------+------------+
-	1 row selected (0.248 seconds)
-
-To create and verify the contents of a table that contains this row:
-
-  1. Set the workspace to a writable workspace.
-  2. Set the `store.format` option appropriately.
-  3. Run a CTAS statement that contains the query.
-  4. Go to the directory where the table is stored and check the contents of the file.
-  5. Run a query against the new table.
-
-The following sqlline output captures this sequence of steps.
-
-### Workspace Definition
-
-	"tmp": {
-	      "location": "/tmp",
-	      "writable": true,
-	       }
-
-### ALTER SESSION Command
-
-    alter session set `store.format`='json';
-
-### USE Command
-
-	0: jdbc:drill:zk=local> use dfs.tmp;
-	+------------+------------+
-	|     ok     |  summary   |
-	+------------+------------+
-	| true       | Default schema changed to 'dfs.tmp' |
-	+------------+------------+
-	1 row selected (0.03 seconds)
-
-### CTAS Command
-
-	0: jdbc:drill:zk=local> create table donuts_json as
-	select id, type, name, ppu from dfs.`/Users/brumsby/drill/donuts.json`;
-	+------------+---------------------------+
-	|  Fragment  | Number of records written |
-	+------------+---------------------------+
-	| 0_0        | 1                         |
-	+------------+---------------------------+
-	1 row selected (0.107 seconds)
-
-### File Contents
-
-	administorsmbp7:tmp brumsby$ pwd
-	/tmp
-	administorsmbp7:tmp brumsby$ cd donuts_json
-	administorsmbp7:donuts_json brumsby$ more 0_0_0.json
-	{
-	 "id" : "0001",
-	  "type" : "donut",
-	  "name" : "Cake",
-	  "ppu" : 0.55
-	}
-
-### Query Against New Table
-
-	0: jdbc:drill:zk=local> select * from donuts_json;
-	+------------+------------+------------+------------+
-	|     id     |    type    |    name    |    ppu     |
-	+------------+------------+------------+------------+
-	| 0001       | donut      | Cake       | 0.55       |
-	+------------+------------+------------+------------+
-	1 row selected (0.053 seconds)
-
-### Use a Different Output Format
-
-You can run the same sequence again with a different storage format set for
-the system or session (csv or parquet). For example, if the format is set to
-csv, and you name the table donuts_csv, the resulting file would look like
-this:
-
-	administorsmbp7:tmp brumsby$ cd donuts_csv
-	administorsmbp7:donuts_csv brumsby$ ls
-	0_0_0.csv
-	administorsmbp7:donuts_csv brumsby$ more 0_0_0.csv
-	id,type,name,ppu
-	0001,donut,Cake,0.55
-

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/030-create-table-as.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/030-create-table-as.md b/_docs/sql-reference/sql-commands/030-create-table-as.md
new file mode 100644
index 0000000..f5ba9d3
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/030-create-table-as.md
@@ -0,0 +1,134 @@
+---
+title: "CREATE TABLE AS (CTAS)"
+parent: "SQL Commands"
+---
+You can create tables in Drill by using the CTAS command:
+
+    CREATE TABLE new_table_name AS <query>;
+
+where query is any valid Drill query. Each table you create must have a unique
+name. You can include an optional column list for the new table. For example:
+
+    create table logtable(transid, prodid) as select transaction_id, product_id from ...
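+
+As a complete (illustrative) variant of the statement above, the following CTAS renames the projected columns, assuming the sample donuts.json file used later on this page:
+
+    create table donuts_names(donut_id, donut_name) as
+    select id, name from dfs.`/Users/brumsby/drill/donuts.json`;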
+
+You can store table data in one of three formats:
+
+  * csv
+  * parquet
+  * json
+
+The parquet and json formats can be used to store complex data.
+
+To set the output format for a Drill table, set the `store.format` option with
+the ALTER SYSTEM or ALTER SESSION command. For example:
+
+    alter session set `store.format`='json';
+
+Table data is stored in the location specified by the workspace that is in use
+when you run the CTAS statement. By default, a directory is created, using the
+exact table name specified in the CTAS statement. A .json, .csv, or .parquet
+file inside that directory contains the data.
+
+You can only create new tables in workspaces. You cannot create tables in
+other storage plugins such as Hive and HBase.
+
+You must use a writable (mutable) workspace when creating Drill tables. For
+example:
+
+	"tmp": {
+	      "location": "/tmp",
+	      "writable": true,
+	       }
+
+## Example
+
+The following query returns one row from a JSON file:
+
+	0: jdbc:drill:zk=local> select id, type, name, ppu
+	from dfs.`/Users/brumsby/drill/donuts.json`;
+	+------------+------------+------------+------------+
+	|     id     |    type    |    name    |    ppu     |
+	+------------+------------+------------+------------+
+	| 0001       | donut      | Cake       | 0.55       |
+	+------------+------------+------------+------------+
+	1 row selected (0.248 seconds)
+
+To create and verify the contents of a table that contains this row:
+
+  1. Set the workspace to a writable workspace.
+  2. Set the `store.format` option appropriately.
+  3. Run a CTAS statement that contains the query.
+  4. Go to the directory where the table is stored and check the contents of the file.
+  5. Run a query against the new table.
+
+The following sqlline output captures this sequence of steps.
+
+### Workspace Definition
+
+	"tmp": {
+	      "location": "/tmp",
+	      "writable": true,
+	       }
+
+### ALTER SESSION Command
+
+    alter session set `store.format`='json';
+
+### USE Command
+
+	0: jdbc:drill:zk=local> use dfs.tmp;
+	+------------+------------+
+	|     ok     |  summary   |
+	+------------+------------+
+	| true       | Default schema changed to 'dfs.tmp' |
+	+------------+------------+
+	1 row selected (0.03 seconds)
+
+### CTAS Command
+
+	0: jdbc:drill:zk=local> create table donuts_json as
+	select id, type, name, ppu from dfs.`/Users/brumsby/drill/donuts.json`;
+	+------------+---------------------------+
+	|  Fragment  | Number of records written |
+	+------------+---------------------------+
+	| 0_0        | 1                         |
+	+------------+---------------------------+
+	1 row selected (0.107 seconds)
+
+### File Contents
+
+	administorsmbp7:tmp brumsby$ pwd
+	/tmp
+	administorsmbp7:tmp brumsby$ cd donuts_json
+	administorsmbp7:donuts_json brumsby$ more 0_0_0.json
+	{
+	 "id" : "0001",
+	  "type" : "donut",
+	  "name" : "Cake",
+	  "ppu" : 0.55
+	}
+
+### Query Against New Table
+
+	0: jdbc:drill:zk=local> select * from donuts_json;
+	+------------+------------+------------+------------+
+	|     id     |    type    |    name    |    ppu     |
+	+------------+------------+------------+------------+
+	| 0001       | donut      | Cake       | 0.55       |
+	+------------+------------+------------+------------+
+	1 row selected (0.053 seconds)
+
+### Use a Different Output Format
+
+You can run the same sequence again with a different storage format set for
+the system or session (csv or parquet). For example, if the format is set to
+csv, and you name the table donuts_csv, the resulting file would look like
+this:
+
+	administorsmbp7:tmp brumsby$ cd donuts_csv
+	administorsmbp7:donuts_csv brumsby$ ls
+	0_0_0.csv
+	administorsmbp7:donuts_csv brumsby$ more 0_0_0.csv
+	id,type,name,ppu
+	0001,donut,Cake,0.55
+
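+The parquet case follows the same pattern; for example (illustrative, reusing the sample file from the earlier steps):
+
+    alter session set `store.format`='parquet';
+    create table donuts_parquet as
+    select id, type, name, ppu from dfs.`/Users/brumsby/drill/donuts.json`;
+
+The resulting donuts_parquet directory contains a .parquet file instead of a .json or .csv file.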

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/050-create-view-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/050-create-view-command.md b/_docs/sql-reference/sql-commands/050-create-view-command.md
deleted file mode 100644
index 53cf3b8..0000000
--- a/_docs/sql-reference/sql-commands/050-create-view-command.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: "CREATE VIEW Command"
-parent: "SQL Commands"
----
-The CREATE VIEW command creates a virtual structure for the result set of a
-stored query. A view can combine data from multiple underlying data sources
-and provide the illusion that all of the data is from one source. You can use
-views to protect sensitive data, for data aggregation, and to hide data
-complexity from users. You can create Drill views from files in your local and
-distributed file systems, Hive, HBase, and MapR-DB tables, as well as from
-existing views or any other available storage plugin data sources.
-
-## Syntax
-
-The CREATE VIEW command supports the following syntax:
-
-    CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS <query>;
-
-Use CREATE VIEW to create a new view. Use CREATE OR REPLACE VIEW to replace an
-existing view with the same name. When you replace a view, the query must
-generate the same set of columns with the same column names and data types.
-
-**Note:** Follow Drill’s rules for identifiers when you name the view. See coming soon...
-
-## Parameters
-
-_workspace_  
-The location where you want the view to exist. By default, the view is created
-in the current workspace. See
-[Workspaces]({{ site.baseurl }}/docs/Workspaces).
-
-_view_name_  
-The name that you give the view. The view must have a unique name. It cannot
-have the same name as any other view or table in the workspace.
-
-_column_name_  
-Optional list of column names in the view. If you do not supply column names,
-they are derived from the query.
-
-_query_  
-A SELECT statement that defines the columns and rows in the view.
-
-## Usage Notes
-
-### Storage
-
-Drill stores views in the location specified by the workspace that you use
-when you run the CREATE VIEW command. If the workspace is not defined, Drill
-creates the view in the current workspace. You must use a writable workspace
-when you create a view. Currently, Drill only supports views created in the
-file system or distributed file system.
-
-The following example shows a writable workspace as defined within the storage
-plugin in the `/tmp` directory of the file system:
-
-    "tmp": {
-          "location": "/tmp",
-          "writable": true,
-           }
-
-Drill stores the view definition in JSON format with the name that you specify
-when you run the CREATE VIEW command, suffixed `by .view.drill`. For example,
-if you create a view named `myview`, Drill stores the view in the designated
-workspace as `myview.view.drill`.
-
-Data Sources
-
-Drill considers data sources to have either a strong schema or a weak schema.  
-
-##### Strong Schema
-
-With the exception of text file data sources, Drill verifies that data sources
-associated with a strong schema contain data types compatible with those used
-in the query. Drill also verifies that the columns referenced in the query
-exist in the underlying data sources. If the columns do not exist, CREATE VIEW
-fails.
-
-#### Weak Schema
-
-Drill does not verify that data sources associated with a weak schema contain
-data types compatible with those used in the query. Drill does not verify if
-columns referenced in a query on a Parquet data source exist, therefore CREATE
-VIEW always succeeds. In the case of JSON files, Drill does not verify if the
-files contain the maps specified in the view.
-
-The following table lists the current categories of schema and the data
-sources associated with each:
-
-<table>
-  <tr>
-    <th></th>
-    <th>Strong Schema</th>
-    <th>Weak Schema</th>
-  </tr>
-  <tr>
-    <td valign="top">Data Sources</td>
-    <td>views<br>hive tables<br>hbase column families<br>text</td>
-    <td>json<br>mongodb<br>hbase column qualifiers<br>parquet</td>
-  </tr>
-</table>
-  
-## Related Commands
-
-After you create a view using the CREATE VIEW command, you can issue the
-following commands against the view:
-
-  * SELECT 
-  * DESCRIBE 
-  * DROP 
-
-{% include startnote.html %}You cannot update, insert into, or delete from a view.{% include endnote.html %}
-
-## Example
-
-This example shows you some steps that you can follow when you want to create
-a view in Drill using the CREATE VIEW command. A workspace named “donuts” was
-created for the steps in this example.
-
-Complete the following steps to create a view in Drill:
-
-  1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces](https://cwiki.apache.org/confluence/display/DRILL/Workspaces) for more information.  
-  
-        "workspaces": {
-           "donuts": {
-             "location": "/home/donuts",
-             "writable": true,
-             "defaultInputFormat": null
-           }
-         },
-
-  2. Run SHOW DATABASES to verify that Drill recognizes the workspace.  
-
-        0: jdbc:drill:zk=local> show databases;
-        +-------------+
-        | SCHEMA_NAME |
-        +-------------+
-        | dfs.default |
-        | dfs.root  |
-        | dfs.donuts  |
-        | dfs.tmp   |
-        | cp.default  |
-        | sys       |
-        | INFORMATION_SCHEMA |
-        +-------------+
-
-  3. Use the writable workspace.  
-
-        0: jdbc:drill:zk=local> use dfs.donuts;
-        +------------+------------+
-        |     ok    |  summary   |
-        +------------+------------+
-        | true      | Default schema changed to 'dfs.donuts' |
-        +------------+------------+
-
-  4. Test run the query that you plan to use with the CREATE VIEW command.  
-
-        0: jdbc:drill:zk=local> select id, type, name, ppu from `donuts.json`;
-        +------------+------------+------------+------------+
-        |     id    |   type    |   name    |    ppu    |
-        +------------+------------+------------+------------+
-        | 0001      | donut      | Cake     | 0.55      |
-        +------------+------------+------------+------------+
-
-  5. Run the CREATE VIEW command with the query.  
-
-        0: jdbc:drill:zk=local> create view mydonuts as select id, type, name, ppu from `donuts.json`;
-        +------------+------------+
-        |     ok    |  summary   |
-        +------------+------------+
-        | true      | View 'mydonuts' created successfully in 'dfs.donuts' schema |
-        +------------+------------+
-
-  6. Create a new view in another workspace from the current workspace.  
-
-        0: jdbc:drill:zk=local> create view dfs.tmp.yourdonuts as select id, type, name from `donuts.json`;
-        +------------+------------+
-        |   ok  |  summary   |
-        +------------+------------+
-        | true      | View 'yourdonuts' created successfully in 'dfs.tmp' schema |
-        +------------+------------+
-
-  7. Query the view created in both workspaces.
-
-        0: jdbc:drill:zk=local> select * from mydonuts;
-        +------------+------------+------------+------------+
-        |     id    |   type    |   name    |    ppu    |
-        +------------+------------+------------+------------+
-        | 0001      | donut      | Cake     | 0.55      |
-        +------------+------------+------------+------------+
-         
-         
-        0: jdbc:drill:zk=local> select * from dfs.tmp.yourdonuts;
-        +------------+------------+------------+
-        |   id  |   type    |   name    |
-        +------------+------------+------------+
-        | 0001      | donut     | Cake      |
-        +------------+------------+------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/050-create-view.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/050-create-view.md b/_docs/sql-reference/sql-commands/050-create-view.md
new file mode 100644
index 0000000..4d71148
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/050-create-view.md
@@ -0,0 +1,197 @@
+---
+title: "CREATE VIEW"
+parent: "SQL Commands"
+---
+The CREATE VIEW command creates a virtual structure for the result set of a
+stored query. A view can combine data from multiple underlying data sources
+and provide the illusion that all of the data is from one source. You can use
+views to protect sensitive data, for data aggregation, and to hide data
+complexity from users. You can create Drill views from files in your local and
+distributed file systems, Hive, HBase, and MapR-DB tables, as well as from
+existing views or any other available storage plugin data sources.
+
+## Syntax
+
+The CREATE VIEW command supports the following syntax:
+
+    CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS <query>;
+
+Use CREATE VIEW to create a new view. Use CREATE OR REPLACE VIEW to replace an
+existing view with the same name. When you replace a view, the query must
+generate the same set of columns with the same column names and data types.
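+
+For example (illustrative; the view name is a placeholder), the following pair of statements creates a view and then replaces it with a definition that returns the same columns:
+
+    CREATE VIEW busy_donuts AS SELECT id, name FROM `donuts.json`;
+    CREATE OR REPLACE VIEW busy_donuts AS SELECT id, name FROM `donuts.json` WHERE ppu > 0.5;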
+
+**Note:** Follow Drill’s rules for identifiers when you name the view. See coming soon...
+
+## Parameters
+
+_workspace_  
+The location where you want the view to exist. By default, the view is created
+in the current workspace. See
+[Workspaces]({{ site.baseurl }}/docs/Workspaces).
+
+_view_name_  
+The name that you give the view. The view must have a unique name. It cannot
+have the same name as any other view or table in the workspace.
+
+_column_name_  
+Optional list of column names in the view. If you do not supply column names,
+they are derived from the query.
+
+_query_  
+A SELECT statement that defines the columns and rows in the view.
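+
+The following statement (illustrative; the view name is a placeholder) supplies the optional column list to rename the columns returned by the query:
+
+    CREATE VIEW donut_prices (donut_name, price) AS SELECT name, ppu FROM `donuts.json`;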
+
+## Usage Notes
+
+### Storage
+
+Drill stores views in the location specified by the workspace that you use
+when you run the CREATE VIEW command. If the workspace is not defined, Drill
+creates the view in the current workspace. You must use a writable workspace
+when you create a view. Currently, Drill only supports views created in the
+file system or distributed file system.
+
+The following example shows a writable workspace as defined within the storage
+plugin in the `/tmp` directory of the file system:
+
+    "tmp": {
+          "location": "/tmp",
+          "writable": true,
+           }
+
+Drill stores the view definition in JSON format with the name that you specify
+when you run the CREATE VIEW command, suffixed by `.view.drill`. For example,
+if you create a view named `myview`, Drill stores the view in the designated
+workspace as `myview.view.drill`.
+
+### Data Sources
+
+Drill considers data sources to have either a strong schema or a weak schema.  
+
+#### Strong Schema
+
+With the exception of text file data sources, Drill verifies that data sources
+associated with a strong schema contain data types compatible with those used
+in the query. Drill also verifies that the columns referenced in the query
+exist in the underlying data sources. If the columns do not exist, CREATE VIEW
+fails.
+
+#### Weak Schema
+
+Drill does not verify that data sources associated with a weak schema contain
+data types compatible with those used in the query. Drill does not verify if
+columns referenced in a query on a Parquet data source exist, therefore CREATE
+VIEW always succeeds. In the case of JSON files, Drill does not verify if the
+files contain the maps specified in the view.
+
+The following table lists the current categories of schema and the data
+sources associated with each:
+
+<table>
+  <tr>
+    <th></th>
+    <th>Strong Schema</th>
+    <th>Weak Schema</th>
+  </tr>
+  <tr>
+    <td valign="top">Data Sources</td>
+    <td>views<br>hive tables<br>hbase column families<br>text</td>
+    <td>json<br>mongodb<br>hbase column qualifiers<br>parquet</td>
+  </tr>
+</table>
+  
+## Related Commands
+
+After you create a view using the CREATE VIEW command, you can issue the
+following commands against the view:
+
+  * SELECT 
+  * DESCRIBE 
+  * DROP 
+
+{% include startnote.html %}You cannot update, insert into, or delete from a view.{% include endnote.html %}
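+
+For example (illustrative, using the mydonuts view created in the example below), you might follow a CREATE VIEW statement with:
+
+    SELECT * FROM mydonuts;
+    DESCRIBE mydonuts;
+    DROP VIEW mydonuts;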
+
+## Example
+
+This example shows you some steps that you can follow when you want to create
+a view in Drill using the CREATE VIEW command. A workspace named “donuts” was
+created for the steps in this example.
+
+Complete the following steps to create a view in Drill:
+
+  1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces](https://cwiki.apache.org/confluence/display/DRILL/Workspaces) for more information.  
+  
+        "workspaces": {
+           "donuts": {
+             "location": "/home/donuts",
+             "writable": true,
+             "defaultInputFormat": null
+           }
+         },
+
+  2. Run SHOW DATABASES to verify that Drill recognizes the workspace.  
+
+        0: jdbc:drill:zk=local> show databases;
+        +-------------+
+        | SCHEMA_NAME |
+        +-------------+
+        | dfs.default |
+        | dfs.root  |
+        | dfs.donuts  |
+        | dfs.tmp   |
+        | cp.default  |
+        | sys       |
+        | INFORMATION_SCHEMA |
+        +-------------+
+
+  3. Use the writable workspace.  
+
+        0: jdbc:drill:zk=local> use dfs.donuts;
+        +------------+------------+
+        |     ok    |  summary   |
+        +------------+------------+
+        | true      | Default schema changed to 'dfs.donuts' |
+        +------------+------------+
+
+  4. Test run the query that you plan to use with the CREATE VIEW command.  
+
+        0: jdbc:drill:zk=local> select id, type, name, ppu from `donuts.json`;
+        +------------+------------+------------+------------+
+        |     id    |   type    |   name    |    ppu    |
+        +------------+------------+------------+------------+
+        | 0001      | donut      | Cake     | 0.55      |
+        +------------+------------+------------+------------+
+
+  5. Run the CREATE VIEW command with the query.  
+
+        0: jdbc:drill:zk=local> create view mydonuts as select id, type, name, ppu from `donuts.json`;
+        +------------+------------+
+        |     ok    |  summary   |
+        +------------+------------+
+        | true      | View 'mydonuts' created successfully in 'dfs.donuts' schema |
+        +------------+------------+
+
+  6. Create a new view in another workspace from the current workspace.  
+
+        0: jdbc:drill:zk=local> create view dfs.tmp.yourdonuts as select id, type, name from `donuts.json`;
+        +------------+------------+
+        |   ok  |  summary   |
+        +------------+------------+
+        | true      | View 'yourdonuts' created successfully in 'dfs.tmp' schema |
+        +------------+------------+
+
+  7. Query the view created in both workspaces.
+
+        0: jdbc:drill:zk=local> select * from mydonuts;
+        +------------+------------+------------+------------+
+        |     id    |   type    |   name    |    ppu    |
+        +------------+------------+------------+------------+
+        | 0001      | donut      | Cake     | 0.55      |
+        +------------+------------+------------+------------+
+         
+         
+        0: jdbc:drill:zk=local> select * from dfs.tmp.yourdonuts;
+        +------------+------------+------------+
+        |   id  |   type    |   name    |
+        +------------+------------+------------+
+        | 0001      | donut     | Cake      |
+        +------------+------------+------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/055-drop-view.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/055-drop-view.md b/_docs/sql-reference/sql-commands/055-drop-view.md
new file mode 100644
index 0000000..4e8b477
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/055-drop-view.md
@@ -0,0 +1,47 @@
+---
+title: "DROP VIEW"
+parent: "SQL Commands"
+---
+
+The DROP VIEW command removes a view that was created in a workspace using the CREATE VIEW command.
+
+## Syntax
+
+The DROP VIEW command supports the following syntax:
+
+     DROP VIEW [workspace.]view_name;
+
+## Usage Notes
+
+When you drop a view, all information about the view is deleted from the workspace in which it was created. DROP VIEW applies to the view only, not to the underlying data sources used to create the view. However, if you drop a view that another view is dependent on, you can no longer use the dependent view. If the underlying tables or views change after a view is created, you may want to drop and re-create the view. Alternatively, you can use the CREATE OR REPLACE VIEW syntax to update the view.
+
+## Example
+
+This example shows you some steps to follow when you want to drop a view in Drill using the DROP VIEW command. A workspace named “donuts” was created for the steps in this example.
+
+Complete the following steps to drop a view in Drill:
+
+Use the writable workspace from which the view was created.
+
+    0: jdbc:drill:zk=local> use dfs.donuts;
+    +------------+------------+
+    |     ok    |  summary   |
+    +------------+------------+
+    | true      | Default schema changed to 'dfs.donuts' |
+    +------------+------------+
+ 
+Use the DROP VIEW command to remove a view created in the current workspace.
+
+    0: jdbc:drill:zk=local> drop view mydonuts;
+    +------------+------------+
+    |     ok    |  summary   |
+    +------------+------------+
+    | true      | View 'mydonuts' deleted successfully from 'dfs.donuts' schema |
+    +------------+------------+
+
+Use the DROP VIEW command to remove a view created in another workspace.
+
+    0: jdbc:drill:zk=local> drop view dfs.tmp.yourdonuts;
+    +------------+------------+
+    |   ok  |  summary   |
+    +------------+------------+
+    | true      | View 'yourdonuts' deleted successfully from 'dfs.tmp' schema |
+    +------------+------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/060-describe-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/060-describe-command.md b/_docs/sql-reference/sql-commands/060-describe-command.md
deleted file mode 100644
index 349f0ef..0000000
--- a/_docs/sql-reference/sql-commands/060-describe-command.md
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: "DESCRIBE Command"
-parent: "SQL Commands"
----
-The DESCRIBE command returns information about columns in a table or view.
-
-## Syntax
-
-The DESCRIBE command supports the following syntax:
-
-    DESCRIBE [workspace.]table_name|view_name
-
-## Usage Notes
-
-You can issue the DESCRIBE command against views created in a workspace and
-tables created in Hive, HBase, and MapR-DB. You can issue the DESCRIBE command
-on a table or view from any schema. For example, if you are working in the
-`dfs.myworkspace` schema, you can issue the DESCRIBE command on a view or
-table in another schema. Currently, DESCRIBE does not support tables created
-in a file system.
-
-Drill only supports SQL data types. Verify that all data types in an external
-data source, such as Hive or HBase, map to supported data types in Drill. See
-Drill Data Type Mapping for more information.
-
-## Example
-
-The following example demonstrates the steps that you can follow when you want
-to use the DESCRIBE command to see column information for a view and for Hive
-and HBase tables.
-
-Complete the following steps to use the DESCRIBE command:
-
-  1. Issue the USE command to switch to a particular schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> use hive;
-        +------------+------------+
-        |   ok  |  summary   |
-        +------------+------------+
-        | true      | Default schema changed to 'hive' |
-        +------------+------------+
-        1 row selected (0.025 seconds)
-
-  2. Issue the SHOW TABLES command to see the existing tables in the schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> show tables;
-        +--------------+------------+
-        | TABLE_SCHEMA | TABLE_NAME |
-        +--------------+------------+
-        | hive.default | orders     |
-        | hive.default | products   |
-        +--------------+------------+
-        2 rows selected (0.438 seconds)
-
-  3. Issue the DESCRIBE command on a table.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe orders;
-        +-------------+------------+-------------+
-        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-        +-------------+------------+-------------+
-        | order_id  | BIGINT    | YES       |
-        | month     | VARCHAR   | YES       |
-        | purchdate   | TIMESTAMP  | YES        |
-        | cust_id   | BIGINT    | YES       |
-        | state     | VARCHAR   | YES       |
-        | prod_id   | BIGINT    | YES       |
-        | order_total | INTEGER | YES       |
-        +-------------+------------+-------------+
-        7 rows selected (0.64 seconds)
-
-  4. Issue the DESCRIBE command on a table in another schema from the current schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe hbase.customers;
-        +-------------+------------+-------------+
-        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-        +-------------+------------+-------------+
-        | row_key   | ANY       | NO        |
-        | address   | (VARCHAR(1), ANY) MAP | NO        |
-        | loyalty   | (VARCHAR(1), ANY) MAP | NO        |
-        | personal  | (VARCHAR(1), ANY) MAP | NO        |
-        +-------------+------------+-------------+
-        4 rows selected (0.671 seconds)
-
-  5. Issue the DESCRIBE command on a view in another schema from the current schema.
-
-        0: jdbc:drill:zk=drilldemo:5181> describe dfs.views.customers_vw;
-        +-------------+------------+-------------+
-        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-        +-------------+------------+-------------+
-        | cust_id   | BIGINT    | NO        |
-        | name      | VARCHAR   | NO        |
-        | address   | VARCHAR   | NO        |
-        | gender    | VARCHAR   | NO        |
-        | age       | VARCHAR   | NO        |
-        | agg_rev   | VARCHAR   | NO        |
-        | membership  | VARCHAR | NO        |
-        +-------------+------------+-------------+
-        7 rows selected (0.403 seconds)
-

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/060-describe.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/060-describe.md b/_docs/sql-reference/sql-commands/060-describe.md
new file mode 100644
index 0000000..6623c8f
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/060-describe.md
@@ -0,0 +1,99 @@
+---
+title: "DESCRIBE"
+parent: "SQL Commands"
+---
+The DESCRIBE command returns information about columns in a table or view.
+
+## Syntax
+
+The DESCRIBE command supports the following syntax:
+
+    DESCRIBE [workspace.]table_name|view_name
+
+## Usage Notes
+
+You can issue the DESCRIBE command against views created in a workspace and
+tables created in Hive, HBase, and MapR-DB. You can issue the DESCRIBE command
+on a table or view from any schema. For example, if you are working in the
+`dfs.myworkspace` schema, you can issue the DESCRIBE command on a view or
+table in another schema. Currently, DESCRIBE does not support tables created
+in a file system.
+
+Drill only supports SQL data types. Verify that all data types in an external
+data source, such as Hive or HBase, map to supported data types in Drill. See
+Drill Data Type Mapping for more information.
+
+## Example
+
+The following example demonstrates the steps that you can follow when you want
+to use the DESCRIBE command to see column information for a view and for Hive
+and HBase tables.
+
+Complete the following steps to use the DESCRIBE command:
+
+  1. Issue the USE command to switch to a particular schema.
+
+        0: jdbc:drill:zk=drilldemo:5181> use hive;
+        +------------+------------+
+        |   ok  |  summary   |
+        +------------+------------+
+        | true      | Default schema changed to 'hive' |
+        +------------+------------+
+        1 row selected (0.025 seconds)
+
+  2. Issue the SHOW TABLES command to see the existing tables in the schema.
+
+        0: jdbc:drill:zk=drilldemo:5181> show tables;
+        +--------------+------------+
+        | TABLE_SCHEMA | TABLE_NAME |
+        +--------------+------------+
+        | hive.default | orders     |
+        | hive.default | products   |
+        +--------------+------------+
+        2 rows selected (0.438 seconds)
+
+  3. Issue the DESCRIBE command on a table.
+
+        0: jdbc:drill:zk=drilldemo:5181> describe orders;
+        +-------------+------------+-------------+
+        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
+        +-------------+------------+-------------+
+        | order_id  | BIGINT    | YES       |
+        | month     | VARCHAR   | YES       |
+        | purchdate   | TIMESTAMP  | YES        |
+        | cust_id   | BIGINT    | YES       |
+        | state     | VARCHAR   | YES       |
+        | prod_id   | BIGINT    | YES       |
+        | order_total | INTEGER | YES       |
+        +-------------+------------+-------------+
+        7 rows selected (0.64 seconds)
+
+  4. Issue the DESCRIBE command on a table in another schema from the current schema.
+
+        0: jdbc:drill:zk=drilldemo:5181> describe hbase.customers;
+        +-------------+------------------------+-------------+
+        | COLUMN_NAME | DATA_TYPE              | IS_NULLABLE |
+        +-------------+------------------------+-------------+
+        | row_key     | ANY                    | NO          |
+        | address     | (VARCHAR(1), ANY) MAP  | NO          |
+        | loyalty     | (VARCHAR(1), ANY) MAP  | NO          |
+        | personal    | (VARCHAR(1), ANY) MAP  | NO          |
+        +-------------+------------------------+-------------+
+        4 rows selected (0.671 seconds)
+
+  5. Issue the DESCRIBE command on a view in another schema from the current schema.
+
+        0: jdbc:drill:zk=drilldemo:5181> describe dfs.views.customers_vw;
+        +-------------+------------+-------------+
+        | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
+        +-------------+------------+-------------+
+        | cust_id     | BIGINT     | NO          |
+        | name        | VARCHAR    | NO          |
+        | address     | VARCHAR    | NO          |
+        | gender      | VARCHAR    | NO          |
+        | age         | VARCHAR    | NO          |
+        | agg_rev     | VARCHAR    | NO          |
+        | membership  | VARCHAR    | NO          |
+        +-------------+------------+-------------+
+        7 rows selected (0.403 seconds)
+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/070-explain-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/070-explain-commands.md b/_docs/sql-reference/sql-commands/070-explain-commands.md
deleted file mode 100644
index acf0825..0000000
--- a/_docs/sql-reference/sql-commands/070-explain-commands.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: "EXPLAIN Commands"
-parent: "SQL Commands"
----
-EXPLAIN is a useful tool for examining the steps that a query goes through
-when it is executed. You can use the EXPLAIN output to gain a deeper
-understanding of the parallel processing that Drill queries exploit. You can
-also look at costing information, troubleshoot performance issues, and
-diagnose routine errors that may occur when you run queries.
-
-Drill provides two variations on the EXPLAIN command, one that returns the
-physical plan and one that returns the logical plan. A logical plan takes the
-SQL query (as written by the user and accepted by the parser) and translates
-it into a logical series of operations that correspond to SQL language
-constructs (without defining the specific algorithms that will be implemented
-to run the query). A physical plan translates the logical plan into a specific
-series of steps that will be used when the query runs. For example, a logical
-plan may indicate a join step in general and classify it as inner or outer,
-but the corresponding physical plan will indicate the specific type of join
-operator that will run, such as a merge join or a hash join. The physical plan
-is operational and reveals the specific _access methods_ that will be used for
-the query.
-
-An EXPLAIN command for a query that is run repeatedly under the exact same
-conditions against the same data will return the same plan. However, if you
-change a configuration option, for example, or update the tables or files that
-you are selecting from, you are likely to see plan changes.
-
-## EXPLAIN Syntax
-
-The EXPLAIN command supports the following syntax:
-
-    explain plan [ including all attributes ] [ with implementation | without implementation ] for <query> ;
-
-where `query` is any valid SELECT statement supported by Drill.
-
-##### INCLUDING ALL ATTRIBUTES
-
-This option returns costing information. You can use this option for both
-physical and logical plans.
-
-#### WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION
-
-These options return the physical and logical plan information, respectively.
-The default is physical (WITH IMPLEMENTATION).
-
-## EXPLAIN for Physical Plans
-
-The EXPLAIN PLAN FOR <query> command returns the chosen physical execution
-plan for a query statement without running the query. You can use this command
-to see what kind of execution operators Drill implements. For example, you can
-find out what kind of join algorithm is chosen when tables or files are
-joined. You can also use this command to analyze errors and troubleshoot
-queries that do not run. For example, if you run into a casting error, the
-query plan text may help you isolate the problem.
-
-Use the following syntax:
-
-    explain plan for <query> ;
-
-The following set command increases the default text display (number of
-characters). By default, most of the plan output is not displayed.
-
-    0: jdbc:drill:zk=local> !set maxwidth 10000
-
-Do not use a semicolon to terminate set commands.
-
-For example, here is the top portion of the explain output for a
-COUNT(DISTINCT) query on a JSON file:
-
-    0: jdbc:drill:zk=local> !set maxwidth 10000
-	0: jdbc:drill:zk=local> explain plan for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
-	+------------+------------+
-	|   text    |   json    |
-	+------------+------------+
-	| 00-00 Screen
-	00-01   Project(t=[$0], EXPR$1=[$1])
-	00-02       Project(t=[$0], EXPR$1=[$1])
-	00-03       HashAgg(group=[{0}], EXPR$1=[COUNT($1)])
-	00-04           HashAgg(group=[{0, 1}])
-	00-05           SelectionVectorRemover
-	00-06               Filter(condition=[=($0, 'donut')])
-	00-07               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])...
-	...
-
-Read the text output from bottom to top to understand the sequence of
-operators that will execute the query. Note that the physical plan starts with
-a scan of the JSON file that is being queried. The selected columns are
-projected and filtered, then the aggregate function is applied.
-
-The EXPLAIN text output is followed by detailed JSON output, which is reusable
-for submitting the query via Drill APIs.
-
-	| {
-	  "head" : {
-	    "version" : 1,
-	    "generator" : {
-	      "type" : "ExplainHandler",
-	      "info" : ""
-	    },
-	    "type" : "APACHE_DRILL_PHYSICAL",
-	    "options" : [ ],
-	    "queue" : 0,
-	    "resultMode" : "EXEC"
-	  },
-	....
-
-## Costing Information
-
-Add the INCLUDING ALL ATTRIBUTES option to the EXPLAIN command to see cost
-estimates for the query plan. For example:
-
-	0: jdbc:drill:zk=local> !set maxwidth 10000
-	0: jdbc:drill:zk=local> explain plan including all attributes for select * from dfs.`/home/donuts/donuts.json` where type='donut';
-	+------------+------------+
-	|   text    |   json    |
-	+------------+------------+
-	| 00-00 Screen: rowcount = 1.0, cumulative cost = {5.1 rows, 21.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 889
-	00-01   Project(*=[$0]): rowcount = 1.0, cumulative cost = {5.0 rows, 21.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 888
-	00-02       Project(T1¦¦*=[$0]): rowcount = 1.0, cumulative cost = {4.0 rows, 17.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 887
-	00-03       SelectionVectorRemover: rowcount = 1.0, cumulative cost = {3.0 rows, 13.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 886
-	00-04           Filter(condition=[=($1, 'donut')]): rowcount = 1.0, cumulative cost = {2.0 rows, 12.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 885
-	00-05           Project(T1¦¦*=[$0], type=[$1]): rowcount = 1.0, cumulative cost = {1.0 rows, 8.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 884
-	00-06               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`*`], files=[file:/home/donuts/donuts.json]]]): rowcount = 1.0, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 883
-
-## EXPLAIN for Logical Plans
-
-To return the logical plan for a query (again, without actually running the
-query), use the EXPLAIN PLAN WITHOUT IMPLEMENTATION syntax:
-
-    explain plan without implementation for <query> ;
-
-For example:
-
-	0: jdbc:drill:zk=local> explain plan without implementation for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
-	+------------+------------+
-	|   text    |   json    |
-	+------------+------------+
-	| DrillScreenRel
-	  DrillProjectRel(t=[$0], EXPR$1=[$1])
-	    DrillAggregateRel(group=[{0}], EXPR$1=[COUNT($1)])
-	    DrillAggregateRel(group=[{0, 1}])
-	        DrillFilterRel(condition=[=($0, 'donut')])
-	        DrillScanRel(table=[[dfs, /home/donuts/donuts.json]], groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]]) | {
-	  | {
-	  "head" : {
-	    "version" : 1,
-	    "generator" : {
-	    "type" : "org.apache.drill.exec.planner.logical.DrillImplementor",
-	    "info" : ""
-	    },
-	    "type" : "APACHE_DRILL_LOGICAL",
-	    "options" : null,
-	    "queue" : 0,
-	    "resultMode" : "LOGICAL"
-	  },...

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/070-explain.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/070-explain.md b/_docs/sql-reference/sql-commands/070-explain.md
new file mode 100644
index 0000000..c5347a9
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/070-explain.md
@@ -0,0 +1,156 @@
+---
+title: "EXPLAIN"
+parent: "SQL Commands"
+---
+EXPLAIN is a useful tool for examining the steps that a query goes through
+when it is executed. You can use the EXPLAIN output to gain a deeper
+understanding of the parallel processing that Drill queries exploit. You can
+also look at costing information, troubleshoot performance issues, and
+diagnose routine errors that may occur when you run queries.
+
+Drill provides two variations on the EXPLAIN command, one that returns the
+physical plan and one that returns the logical plan. A logical plan takes the
+SQL query (as written by the user and accepted by the parser) and translates
+it into a logical series of operations that correspond to SQL language
+constructs (without defining the specific algorithms that will be implemented
+to run the query). A physical plan translates the logical plan into a specific
+series of steps that will be used when the query runs. For example, a logical
+plan may indicate a join step in general and classify it as inner or outer,
+but the corresponding physical plan will indicate the specific type of join
+operator that will run, such as a merge join or a hash join. The physical plan
+is operational and reveals the specific _access methods_ that will be used for
+the query.
+
+An EXPLAIN command for a query that is run repeatedly under the exact same
+conditions against the same data will return the same plan. However, if you
+change a configuration option, for example, or update the tables or files that
+you are selecting from, you are likely to see plan changes.
+
+## EXPLAIN Syntax
+
+The EXPLAIN command supports the following syntax:
+
+    explain plan [ including all attributes ] [ with implementation | without implementation ] for <query> ;
+
+where `query` is any valid SELECT statement supported by Drill.
+
+#### INCLUDING ALL ATTRIBUTES
+
+This option returns costing information. You can use this option for both
+physical and logical plans.
+
+#### WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION
+
+These options return the physical and logical plan information, respectively.
+The default is physical (WITH IMPLEMENTATION).
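+
+For quick reference, the three forms (a sketch; `<query>` stands for any SELECT statement that Drill supports) map to the sections that follow:
+
+    explain plan for <query>;                           -- physical plan (the default)
+    explain plan without implementation for <query>;    -- logical plan
+    explain plan including all attributes for <query>;  -- adds cost estimates to either form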
+
+## EXPLAIN for Physical Plans
+
+The `EXPLAIN PLAN FOR <query>` command returns the chosen physical execution
+plan for a query statement without running the query. You can use this command
+to see what kind of execution operators Drill implements. For example, you can
+find out what kind of join algorithm is chosen when tables or files are
+joined. You can also use this command to analyze errors and troubleshoot
+queries that do not run. For example, if you run into a casting error, the
+query plan text may help you isolate the problem.
+
+Use the following syntax:
+
+    explain plan for <query> ;
+
+The following set command increases the default text display (number of
+characters). By default, most of the plan output is not displayed.
+
+    0: jdbc:drill:zk=local> !set maxwidth 10000
+
+Do not use a semicolon to terminate set commands.
+
+For example, here is the top portion of the explain output for a
+COUNT(DISTINCT) query on a JSON file:
+
+    0: jdbc:drill:zk=local> !set maxwidth 10000
+	0: jdbc:drill:zk=local> explain plan for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
+	+------------+------------+
+	|   text    |   json    |
+	+------------+------------+
+	| 00-00 Screen
+	00-01   Project(t=[$0], EXPR$1=[$1])
+	00-02       Project(t=[$0], EXPR$1=[$1])
+	00-03       HashAgg(group=[{0}], EXPR$1=[COUNT($1)])
+	00-04           HashAgg(group=[{0, 1}])
+	00-05           SelectionVectorRemover
+	00-06               Filter(condition=[=($0, 'donut')])
+	00-07               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])...
+	...
+
+Read the text output from bottom to top to understand the sequence of
+operators that will execute the query. Note that the physical plan starts with
+a scan of the JSON file that is being queried. The selected columns are
+projected and filtered, then the aggregate function is applied.
+
+The EXPLAIN text output is followed by detailed JSON output, which is reusable
+for submitting the query via Drill APIs.
+
+	| {
+	  "head" : {
+	    "version" : 1,
+	    "generator" : {
+	      "type" : "ExplainHandler",
+	      "info" : ""
+	    },
+	    "type" : "APACHE_DRILL_PHYSICAL",
+	    "options" : [ ],
+	    "queue" : 0,
+	    "resultMode" : "EXEC"
+	  },
+	....
+
+## Costing Information
+
+Add the INCLUDING ALL ATTRIBUTES option to the EXPLAIN command to see cost
+estimates for the query plan. For example:
+
+	0: jdbc:drill:zk=local> !set maxwidth 10000
+	0: jdbc:drill:zk=local> explain plan including all attributes for select * from dfs.`/home/donuts/donuts.json` where type='donut';
+	+------------+------------+
+	|   text    |   json    |
+	+------------+------------+
+	| 00-00 Screen: rowcount = 1.0, cumulative cost = {5.1 rows, 21.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 889
+	00-01   Project(*=[$0]): rowcount = 1.0, cumulative cost = {5.0 rows, 21.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 888
+	00-02       Project(T1¦¦*=[$0]): rowcount = 1.0, cumulative cost = {4.0 rows, 17.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 887
+	00-03       SelectionVectorRemover: rowcount = 1.0, cumulative cost = {3.0 rows, 13.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 886
+	00-04           Filter(condition=[=($1, 'donut')]): rowcount = 1.0, cumulative cost = {2.0 rows, 12.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 885
+	00-05           Project(T1¦¦*=[$0], type=[$1]): rowcount = 1.0, cumulative cost = {1.0 rows, 8.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 884
+	00-06               Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`*`], files=[file:/home/donuts/donuts.json]]]): rowcount = 1.0, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 883
+
+## EXPLAIN for Logical Plans
+
+To return the logical plan for a query (again, without actually running the
+query), use the EXPLAIN PLAN WITHOUT IMPLEMENTATION syntax:
+
+    explain plan without implementation for <query> ;
+
+For example:
+
+	0: jdbc:drill:zk=local> explain plan without implementation for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
+	+------------+------------+
+	|   text    |   json    |
+	+------------+------------+
+	| DrillScreenRel
+	  DrillProjectRel(t=[$0], EXPR$1=[$1])
+	    DrillAggregateRel(group=[{0}], EXPR$1=[COUNT($1)])
+	    DrillAggregateRel(group=[{0, 1}])
+	        DrillFilterRel(condition=[=($0, 'donut')])
+	        DrillScanRel(table=[[dfs, /home/donuts/donuts.json]], groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]]) | {
+	  | {
+	  "head" : {
+	    "version" : 1,
+	    "generator" : {
+	    "type" : "org.apache.drill.exec.planner.logical.DrillImplementor",
+	    "info" : ""
+	    },
+	    "type" : "APACHE_DRILL_LOGICAL",
+	    "options" : null,
+	    "queue" : 0,
+	    "resultMode" : "LOGICAL"
+	  },...

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/080-select.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/080-select.md b/_docs/sql-reference/sql-commands/080-select.md
index 4fb9f5e..5ee5e25 100644
--- a/_docs/sql-reference/sql-commands/080-select.md
+++ b/_docs/sql-reference/sql-commands/080-select.md
@@ -1,5 +1,5 @@
 ---
-title: "SELECT Statements"
+title: "SELECT"
 parent: "SQL Commands"
 ---
 Drill supports the following ANSI standard clauses in the SELECT statement:
@@ -83,96 +83,3 @@ all return Boolean results.
 In general, correlated subqueries are supported. EXISTS and NOT EXISTS
 subqueries that do not contain a correlation join are not yet supported.
 
-## WITH Clause
-
-The WITH clause is an optional clause used to contain one or more common table
-expressions (CTE) where each CTE defines a temporary table that exists for the
-duration of the query. Each subquery in the WITH clause specifies a table
-name, an optional list of column names, and a SELECT statement.
-
-## Syntax
-
-The WITH clause supports the following syntax:
-
-    [ WITH with_subquery [, ...] ]
-    where with_subquery is:
-    with_subquery_table_name [ ( column_name [, ...] ) ] AS ( query ) 
-
-## Parameters
-
-_with_subquery_table_name_
-
-A unique name for a temporary table that defines the results of a WITH clause
-subquery. You cannot use duplicate names within a single WITH clause. You must
-give each subquery a table name that can be referenced in the FROM clause.
-
-_column_name_
-
-An optional list of output column names for the WITH clause subquery,
-separated by commas. The number of column names specified must be equal to or
-less than the number of columns defined by the subquery.
-
-_query_
-
-Any SELECT query that Drill supports. See
-[SELECT]({{ site.baseurl }}/docs/SELECT+Statements).
-
-## Usage Notes
-
-Use the WITH clause to efficiently define temporary tables that Drill can
-access throughout the execution of a single query. The WITH clause is
-typically a simpler alternative to using subqueries in the main body of the
-SELECT statement. In some cases, Drill can evaluate a WITH subquery once and
-reuse the results for query optimization.
-
-You can use a WITH clause in the following SQL statements:
-
-  * SELECT (including subqueries within SELECT statements)
-
-  * CREATE TABLE AS
-
-  * CREATE VIEW
-
-  * EXPLAIN
-
-You can reference the temporary tables in the FROM clause of the query. If the
-FROM clause does not reference any tables defined by the WITH clause, Drill
-ignores the WITH clause and executes the query as normal.
-
-Drill can only reference a table defined by a WITH clause subquery in the
-scope of the SELECT query that the WITH clause begins. For example, you can
-reference such a table in the FROM clause of a subquery in the SELECT list,
-WHERE clause, or HAVING clause. You cannot use a WITH clause in a subquery and
-reference its table in the FROM clause of the main query or another subquery.
-
-You cannot specify another WITH clause inside a WITH clause subquery.
-
-For example, the following query includes a forward reference to table t2 in
-the definition of table t1:
-
-## Example
-
-The following example shows the WITH clause used to create a WITH query named
-`emp_data` that selects all of the rows from the `employee.json` file. The
-main query selects the `full_name, position_title, salary`, and `hire_date`
-rows from the `emp_data` temporary table (created from the WITH subquery) and
-orders the results by the hire date. The `emp_data` table only exists for the
-duration of the query.
-
-**Note:** The `employee.json` file is included with the Drill installation. It is located in the `cp.default` workspace which is configured by default. 
-
-    0: jdbc:drill:zk=local> with emp_data as (select * from cp.`employee.json`) select full_name, position_title, salary, hire_date from emp_data order by hire_date limit 10;
-    +------------------+-------------------------+------------+-----------------------+
-    | full_name        | position_title          |   salary   | hire_date             |
-    +------------------+-------------------------+------------+-----------------------+
-    | Bunny McCown     | Store Assistant Manager | 8000.0     | 1993-05-01 00:00:00.0 |
-    | Danielle Johnson | Store Assistant Manager | 8000.0     | 1993-05-01 00:00:00.0 |
-    | Dick Brummer     | Store Assistant Manager | 7900.0     | 1993-05-01 00:00:00.0 |
-    | Gregory Whiting  | Store Assistant Manager | 10000.0    | 1993-05-01 00:00:00.0 |
-    | Juanita Sharp    | HQ Human Resources      | 6700.0     | 1994-01-01 00:00:00.0 |
-    | Sheri Nowmer     | President               | 80000.0    | 1994-12-01 00:00:00.0 |
-    | Rebecca Kanagaki | VP Human Resources      | 15000.0    | 1994-12-01 00:00:00.0 |
-    | Shauna Wyro      | Store Manager           | 15000.0    | 1994-12-01 00:00:00.0 |
-    | Roberta Damstra  | VP Information Systems  | 25000.0    | 1994-12-01 00:00:00.0 |
-    | Pedro Castillo   | VP Country Manager      | 35000.0    | 1994-12-01 00:00:00.0 |
-    +------------+----------------+--------------+------------------------------------+
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/081-select-from.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/081-select-from.md b/_docs/sql-reference/sql-commands/081-select-from.md
new file mode 100644
index 0000000..acaaa87
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/081-select-from.md
@@ -0,0 +1,87 @@
+---
+title: "SELECT FROM"
+parent: "SQL Commands"
+---
+The FROM clause lists the references (tables, views, and subqueries) that data is selected from. Drill expands the traditional concept of a “table reference” in a standard SQL FROM clause to refer to files and directories in a local or distributed file system.
+
+## Syntax
+The FROM clause supports the following syntax:
+
+       ... FROM table_expression [, …]
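+
+Because Drill treats files and directories as tables, the FROM clause can name a file path directly. A sketch (the path is illustrative only):
+
+       select * from dfs.`/home/donuts/donuts.json` limit 2;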
+
+## Parameters
+*table_expression* 
+
+Includes one or more *table_references* and is typically followed by the WHERE, GROUP BY, ORDER BY, or HAVING clause. 
+
+*table_reference*
+
+       with_subquery_table_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
+       table_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
+       ( subquery ) [ AS ] alias [ ( column_alias [, ...] ) ]
+       table_reference join_type table_reference [ ON join_condition ]
+
+   * *with\_subquery\_table_name*
+
+       A table defined by a subquery in the WITH clause.
+
+
+  * *table_name* 
+  
+    Name of a table or view. In Drill, you can also refer to a file system directory or a specific file.
+
+   * *alias* 
+
+    A temporary alternative name for a table or view that provides a convenient shortcut for identifying tables in other parts of a query, such as the WHERE clause. You must supply an alias for a table derived from a subquery. In other table references, aliases are optional. The AS keyword is always optional. Drill does not support referencing an alias in the GROUP BY clause.
+
+   * *column_alias*  
+     
+    A temporary alternative name for a column in a table or view.
+
+   * *subquery*  
+  
+     A query expression that evaluates to a table. The table exists only for the duration of the query and must be given an alias. You can also define column names for tables that derive from subqueries. Naming column aliases is important when you want to join the results of subqueries to other tables and when you want to select or constrain those columns elsewhere in the query. A subquery may contain an ORDER BY clause, but this clause may have no effect if a LIMIT or OFFSET clause is not also specified.
+
+   * *join_type*  
+ 
+    Specifies one of the following join types: 
+
+       [INNER] JOIN  
+       LEFT [OUTER] JOIN  
+       RIGHT [OUTER] JOIN  
+       FULL [OUTER] JOIN
+
+   * *ON join_condition*  
+
+       A type of join specification where the joining columns are stated as a condition that follows the ON keyword.  
+       Example:  
+      ` homes join listing on homes.listid=listing.listid and homes.homeid=listing.homeid`
+
+## Join Types
+INNER JOIN  
+
+Return matching rows only, based on the join condition or list of joining columns.  
+
+OUTER JOIN 
+
+Return all of the rows that the equivalent inner join would return plus non-matching rows from the "left" table, "right" table, or both tables. The left table is the first-listed table, and the right table is the second-listed table. The non-matching rows contain NULL values to fill the gaps in the output columns.
+
+## Usage Notes  
+   * Joined columns must have comparable data types.
+   * A join with the ON syntax retains both joining columns in its intermediate result set.
+
+
+## Examples
+The following query uses a workspace named `dfs.views` and joins a view named “custview” with a Hive table named “orders” to determine sales for each membership type:
+
+       0: jdbc:drill:> select membership, sum(order_total) as sales from hive.orders, custview
+       where orders.cust_id=custview.cust_id
+       group by membership order by 2;
+       +------------+------------+
+       | membership |   sales    |
+       +------------+------------+
+       | "basic"    | 380665     |
+       | "silver"   | 708438     |
+       | "gold"     | 2787682    |
+       +------------+------------+
+       3 rows selected
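+
+The next example is a sketch only, assuming the same “custview” view and `hive.orders` table. A subquery in the FROM clause must be given an alias (here, `t`), and a LEFT OUTER JOIN also returns memberships with no matching orders, with NULL sales for those rows:
+
+       0: jdbc:drill:> select c.membership, t.sales
+       from custview c
+       left outer join (select cust_id, sum(order_total) as sales
+                        from hive.orders group by cust_id) t
+       on c.cust_id = t.cust_id;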

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/082-select-group-by.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/082-select-group-by.md b/_docs/sql-reference/sql-commands/082-select-group-by.md
new file mode 100644
index 0000000..a1f799c
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/082-select-group-by.md
@@ -0,0 +1,51 @@
+---
+title: "SELECT GROUP BY"
+parent: "SQL Commands"
+---
+The GROUP BY clause identifies the grouping columns for the query. You typically use a GROUP BY clause in conjunction with an aggregate expression. Grouping columns must be declared when the query computes aggregates with standard functions such as SUM, AVG, and COUNT. Currently, Drill does not support grouping on aliases.
+
+
+## Syntax
+The GROUP BY clause supports the following syntax:  
+
+
+    GROUP BY expression [, ...]
+  
+
+## Parameters  
+*column_name*  
+
+Must be a column from the current scope of the query. For example, if a GROUP BY clause is in a subquery, it cannot refer to columns in the outer query.
+
+*expression*  
+
+The list of columns or expressions must match the list of non-aggregate expressions in the select list of the query.
+
+
+## Usage Notes
+The select list of a query that contains a GROUP BY clause can contain only aggregate expressions and grouping columns.
+
+
+## Examples
+The following query returns sales totals grouped by month:  
+
+       0: jdbc:drill:> select `month`, sum(order_total)
+       from orders group by `month` order by 2 desc;
+       +------------+------------+
+       |   month    |   EXPR$1   |
+       +------------+------------+
+       | June       | 950481     |
+       | May        | 947796     |
+       | March      | 836809     |
+       | April      | 807291     |
+       | July       | 757395     |
+       | October    | 676236     |
+       | August     | 572269     |
+       | February   | 532901     |
+       | September  | 373100     |
+       | January    | 346536     |
+       +------------+------------+
+
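+Because Drill does not support grouping on aliases, repeat the full expression in the GROUP BY clause rather than referring to its select-list alias. The following sketch assumes an `orders` table with `purchdate` and `order_total` columns:
+
+       select extract(year from purchdate) as order_year, sum(order_total) as sales
+       from orders
+       group by extract(year from purchdate);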
+
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/083-select-having.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/083-select-having.md b/_docs/sql-reference/sql-commands/083-select-having.md
new file mode 100644
index 0000000..76828c5
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/083-select-having.md
@@ -0,0 +1,51 @@
+---
+title: "SELECT HAVING"
+parent: "SQL Commands"
+---
+The HAVING clause filters group rows created by the GROUP BY clause. Drill applies the HAVING clause to each group of the grouped table, much as a WHERE clause filters individual rows. If there is no GROUP BY clause, the HAVING clause is applied to the entire result as a single group. When a query contains a GROUP BY clause, the select list can refer directly only to grouping columns; any other column must appear within an aggregate function.
+
+## Syntax
+The HAVING clause supports the following syntax:  
+
+`[ HAVING  boolean_expression ]`  
+
+## Expression  
+A *boolean expression* can include one or more of the following operators:  
+
+  * AND
+  * OR
+  * NOT
+  * IS NULL
+  * IS NOT NULL
+  * LIKE 
+  * BETWEEN
+  * IN
+  * Comparison operators
+  * Quantified comparison operators  
+
+## Usage Notes
+  * Any column referenced in a HAVING clause must be either a grouping column or a column that refers to the result of an aggregate function.
+  * In a HAVING clause, you cannot specify:
+   * An alias that was defined in the select list. You must repeat the original, unaliased expression. 
+   * An ordinal number that refers to a select list item. Only the GROUP BY and ORDER BY clauses accept ordinal numbers.
+
+## Examples
+The following example query uses the HAVING clause to constrain an aggregate result. Drill queries the `dfs.clicks` workspace and returns the total number of clicks for devices that indicate high click-throughs:
+
+       0: jdbc:drill:> select t.user_info.device, count(*) from `clicks/clicks.json` t
+       group by t.user_info.device having count(*) > 1000;
+       
+       +------------+------------+
+       |   EXPR$0   |   EXPR$1   |
+       +------------+------------+
+       | IOS5       | 11814      |
+       | AOS4.2     | 5986       |
+       | IOS6       | 4464       |
+       | IOS7       | 3135       |
+       | AOS4.4     | 1562       |
+       | AOS4.3     | 3039       |
+       +------------+------------+  
+
+The aggregate is a count of the records for each different mobile device in the clickstream data. Only the activity for the devices that registered more than 1000 transactions qualify for the result set.
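+
+As noted in the usage notes, a HAVING clause cannot refer to a select-list alias; repeat the aggregate expression instead. The following sketch reuses the same clickstream data (the alias `cnt` is illustrative only):
+
+       select t.user_info.device, count(*) as cnt from `clicks/clicks.json` t
+       group by t.user_info.device having count(*) > 1000;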
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/9700ff63/_docs/sql-reference/sql-commands/084-select-limit.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/084-select-limit.md b/_docs/sql-reference/sql-commands/084-select-limit.md
new file mode 100644
index 0000000..9a1d6b5
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/084-select-limit.md
@@ -0,0 +1,51 @@
+---
+title: "SELECT LIMIT"
+parent: "SQL Commands"
+---
+The LIMIT clause limits the result set to the specified number of rows. You can use LIMIT with or without an ORDER BY clause.
+
+
+## Syntax
+The LIMIT clause supports the following syntax:  
+
+       [ LIMIT { count | ALL } ]
+
+Specifying ALL returns all records, which is equivalent to omitting the LIMIT clause from the SELECT statement.
+
+## Parameters
+*count*  
+
+Specifies the maximum number of rows to return.
+If the count expression evaluates to NULL, Drill treats it as LIMIT ALL. 
+
+## Examples
+The following example query includes the ORDER BY and LIMIT clauses and returns the top 20 sales totals by month and state:  
+
+       0: jdbc:drill:> select `month`, state, sum(order_total) as sales from orders group by `month`, state
+       order by 3 desc limit 20;
+       +------------+------------+------------+
+       |   month    |   state    |   sales    |
+       +------------+------------+------------+
+       | May        | ca         | 119586     |
+       | June       | ca         | 116322     |
+       | April      | ca         | 101363     |
+       | March      | ca         | 99540      |
+       | July       | ca         | 90285      |
+       | October    | ca         | 80090      |
+       | June       | tx         | 78363      |
+       | May        | tx         | 77247      |
+       | March      | tx         | 73815      |
+       | August     | ca         | 71255      |
+       | April      | tx         | 68385      |
+       | July       | tx         | 63858      |
+       | February   | ca         | 63527      |
+       | June       | fl         | 62199      |
+       | June       | ny         | 62052      |
+       | May        | fl         | 61651      |
+       | May        | ny         | 59369      |
+       | October    | tx         | 55076      |
+       | March      | fl         | 54867      |
+       | March      | ny         | 52101      |
+       +------------+------------+------------+
+       20 rows selected
+
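+The following sketches, which assume the same `orders` table, show LIMIT without an ORDER BY clause, which returns an unspecified subset of rows, and LIMIT ALL, which is equivalent to omitting the clause:
+
+       select `month`, state, order_total from orders limit 5;
+
+       select `month`, state, order_total from orders limit all;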