Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/01 20:08:34 UTC
[38/50] [abbrv] drill git commit: commands reorg
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands-summary/120-use-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands-summary/120-use-command.md b/_docs/sql-reference/sql-commands-summary/120-use-command.md
deleted file mode 100644
index 1dc656c..0000000
--- a/_docs/sql-reference/sql-commands-summary/120-use-command.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-title: "USE Command"
-parent: "SQL Commands Summary"
----
-The USE command changes the schema context to the specified schema. When you
-issue the USE command to switch to a particular schema, Drill queries that
-schema only.
-
-## Syntax
-
-The USE command supports the following syntax:
-
- USE schema_name;
-
-## Parameters
-
-_schema_name_
-A unique name for a Drill schema. A schema in Drill is a configured storage
-plugin, such as hive, or a storage plugin and workspace. For example,
-`dfs.donuts` where `dfs` is an instance of the file system configured as a
-storage plugin, and `donuts` is a workspace configured to point to a directory
-within the file system. You can configure and use multiple storage plugins and
-workspaces in Drill. See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration) and
-[Workspaces]({{ site.baseurl }}/docs/Workspaces).
-
-## Usage Notes
-
-Issue the USE command to change to a particular schema. When you use a schema,
-you do not have to include the full path to a file or table in your query.
-
-For example, to query a file named `donuts.json` in the
-`/users/max/drill/json/` directory, you must include the full file path in
-your query if you do not use a defined workspace:
-
- SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
-
-If you create a schema that points to the `/users/max/drill/json` directory where the file is
-located and then use the schema, you can issue the query without explicitly
-stating the file path:
-
- USE dfs.json;
- SELECT * FROM `donuts.json` WHERE type='frosted';
-
-If you do not use a schema before querying a table, you must use absolute
-notation, such as `[schema.]table[.column]`, to query the table. If you switch
-to the schema where the table exists, you can just specify the table name in
-the query. For example, to query a table named "`products`" in the `hive`
-schema, tell Drill to use the hive schema and then issue your query with the
-table name only:
-
- USE hive;
- SELECT * FROM products limit 5;
-
-Before you issue the USE command, you may want to run SHOW DATABASES or SHOW
-SCHEMAS to see a list of the configured storage plugins and workspaces.
-
-## Example
-
-This example demonstrates how to use a file system and a hive schema to query
-a file and table in Drill.
-
-Issue the SHOW DATABASES or SHOW SCHEMAS command to see a list of the
-available schemas that you can use. Both commands return the same results.
-
- 0: jdbc:drill:zk=drilldemo:5181> show schemas;
- +-------------+
- | SCHEMA_NAME |
- +-------------+
- | hive.default |
- | dfs.reviews |
- | dfs.flatten |
- | dfs.default |
- | dfs.root |
- | dfs.logs |
- | dfs.myviews |
- | dfs.clicks |
- | dfs.tmp |
- | sys |
- | hbase |
- | INFORMATION_SCHEMA |
- | s3.twitter |
- | s3.reviews |
- | s3.default |
- +-------------+
- 15 rows selected (0.059 seconds)
-
-
-Issue the USE command with the schema that you want Drill to query.
-**Note:** If you use any of the Drill default schemas, such as `cp.default` or `dfs.default`, do not include `.default`. For example, if you want Drill to issue queries on files in its classpath, you can issue the following command:
-
- 0: jdbc:drill:zk=local> use cp;
- +------------+------------+
- | ok | summary |
- +------------+------------+
- | true | Default schema changed to 'cp' |
- +------------+------------+
- 1 row selected (0.04 seconds)
-
-Issue the USE command with a file system schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> use dfs.logs;
- +------------+------------+
- | ok | summary |
- +------------+------------+
- | true | Default schema changed to 'dfs.logs' |
- +------------+------------+
- 1 row selected (0.054 seconds)
-
-You can issue the SHOW FILES command to view the files and directories within
-the schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> show files;
- +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
- | name | isDirectory | isFile | length | owner | group | permissions | accessTime | modificationTime |
- +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
- | csv | true | false | 1 | mapr | mapr | rwxrwxr-x | 2015-02-09 06:49:17.0 | 2015-02-09 06:50:11.172 |
- | logs | true | false | 3 | mapr | mapr | rwxrwxr-x | 2014-12-16 18:58:26.0 | 2014-12-16 18:58:27.223 |
- +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
- 2 rows selected (0.156 seconds)
-
-Query a file or directory in the file system schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> select * from logs limit 5;
- +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
- | dir0 | dir1 | trans_id | date | time | cust_id | device | state | camp_id | keywords | prod_id | purch_flag |
- +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
- | 2014 | 8 | 24181 | 08/02/2014 | 09:23:52 | 0 | IOS5 | il | 2 | wait | 128 | false |
- | 2014 | 8 | 24195 | 08/02/2014 | 07:58:19 | 243 | IOS5 | mo | 6 | hmm | 107 | false |
- | 2014 | 8 | 24204 | 08/01/2014 | 12:10:27 | 12048 | IOS6 | il | 1 | marge | 324 | false |
- | 2014 | 8 | 24222 | 08/02/2014 | 16:28:37 | 2488 | IOS6 | pa | 2 | to | 391 | false |
- | 2014 | 8 | 24227 | 08/02/2014 | 07:14:00 | 154687 | IOS5 | wa | 2 | on | 376 | false |
- +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
-
-Issue the USE command to switch to the hive schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> use hive;
- +------------+------------+
- | ok | summary |
- +------------+------------+
- | true | Default schema changed to 'hive' |
- +------------+------------+
- 1 row selected (0.093 seconds)
-
-Issue the SHOW TABLES command to see the tables that exist within the schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> show tables;
- +--------------+------------+
- | TABLE_SCHEMA | TABLE_NAME |
- +--------------+------------+
- | hive.default | orders |
- | hive.default | products |
- +--------------+------------+
- 2 rows selected (0.421 seconds)
-
-Query a table within the schema.
-
- 0: jdbc:drill:zk=drilldemo:5181> select * from products limit 5;
- +------------+------------+------------+------------+
- | prod_id | name | category | price |
- +------------+------------+------------+------------+
- | 0 | Sony notebook | laptop | 959 |
- | 1 | #10-4 1/8 x 9 1/2 Premium Diagonal Seam Envelopes | Envelopes | 16 |
- | 2 | #10- 4 1/8 x 9 1/2 Recycled Envelopes | Envelopes | 9 |
- | 3 | #10- 4 1/8 x 9 1/2 Security-Tint Envelopes | Envelopes | 8 |
- | 4 | #10 Self-Seal White Envelopes | Envelopes | 11 |
- +------------+------------+------------+------------+
- 5 rows selected (0.211 seconds)
-
-
-
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/005-supported-sql-commands.md b/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
new file mode 100644
index 0000000..233019f
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/005-supported-sql-commands.md
@@ -0,0 +1,9 @@
+---
+title: Supported SQL Commands
+parent: "SQL Commands"
+---
+The following table provides a list of the SQL commands that Drill supports,
+with their descriptions and example syntax:
+
+<table style='table-layout:fixed;width:100%'>
+ <tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" width="15%"><a href="/docs/alter-session-command">ALTER SESSION</a></td><td valign="top" width="60%">Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top"><pre>ALTER SESSION SET `<option_name>`=<value>;</pre></td></tr><tr><td valign="top" ><a href="/docs/alter-system-command">ALTER SYSTEM</a></td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options">Planning and Execution Options</a>.</td><td valign="top" ><pre>ALTER SYSTEM SET `<option_name>`=<value>;</pre></td></tr><tr><td valign="top" ><p><a href="/docs/create-table-as--ctas-command">CREATE TABLE AS<br />(CTAS)</a></p></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains parquet or CSV files. Each workspace in a file system has a default file type.<br />You can specify which writer you want Drill to use when creating a table: parquet, CSV, or JSON (as specified with the <code>store.format</code> option).</td><td valign="top" ><pre class="programlisting">CREATE TABLE new_table_name AS <query>;</pre></td></tr><tr><td valign="top" ><a href="/docs/create-view-command">CREATE VIEW</a></td><td valign="top" >Creates a virtual structure for the result set of a stored query.</td><td valign="top" ><pre>CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS <query>;</pre></td></tr><tr><td valign="top" ><a href="/docs/describe-command">DESCRIBE</a></td><td valign="top" >Returns information about columns in a table or view.</td><td valign="top" ><pre>DESCRIBE [workspace.]table_name|view_name</pre></td></tr><tr><td valign="top" ><a href="/docs/drop-view-command">DROP VIEW</a></td><td valign="top" >Removes a view.</td><td valign="top" ><pre>DROP VIEW [workspace.]view_name;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN FOR</a></td><td valign="top" >Returns the physical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN FOR <query>;</pre></td></tr><tr><td valign="top" ><a href="/docs/explain-commands">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" >Returns the logical plan for a particular query.</td><td valign="top" ><pre>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR <query>;</pre></td></tr><tr><td colspan="1" valign="top" ><a href="/docs/select-statements" rel="nofollow">SELECT</a></td><td valign="top" >Retrieves data from tables and files.</td><td valign="top" ><pre>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW DATABASES</a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" ><pre>SHOW DATABASES;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-files-command" >SHOW FILES</a></td><td valign="top" >Returns a list of files in a file system schema.</td><td valign="top" ><pre>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-databases-and-show-schemas-commands">SHOW SCHEMAS</a></td><td valign="top" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" ><pre>SHOW SCHEMAS;</pre></td></tr><tr><td valign="top" ><a href="/docs/show-tables-command">SHOW TABLES</a></td><td valign="top" >Returns a list of tables and views.</td><td valign="top" ><pre>SHOW TABLES;</pre></td></tr><tr><td valign="top" ><a href="/docs/use-command">USE</a></td><td valign="top" >Changes to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" ><pre>USE schema_name;</pre></td></tr></table>
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/010-alter-session-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/010-alter-session-command.md b/_docs/sql-reference/sql-commands/010-alter-session-command.md
new file mode 100644
index 0000000..c3bdc86
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/010-alter-session-command.md
@@ -0,0 +1,74 @@
+---
+title: "ALTER SESSION Command"
+parent: "SQL Commands"
+---
+The ALTER SESSION command changes a system setting for the duration of a
+session. Session level settings override system level settings.
+
+## Syntax
+
+The ALTER SESSION command supports the following syntax:
+
+ ALTER SESSION SET `<option_name>`=<value>;
+
+## Parameters
+
+*option_name*
+This is the option name as it appears in the sys.options table.
+
+*value*
+A value of the type listed in the sys.options table: number, string, boolean,
+or float. Use the appropriate value type for each option that you set.
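+
+For example, string, number, and boolean options take correspondingly typed
+values. A sketch, using option names that appear in the sys.options listings
+in these docs (the values shown are illustrative):
+
+ ALTER SESSION SET `store.format`='json';
+ ALTER SESSION SET `planner.slice_target`=200000;
+ ALTER SESSION SET `exec.errors.verbose`=true;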
+
+## Usage Notes
+
+Use the ALTER SESSION command to set Drill query planning and execution
+options per session in a cluster. The options that you set using the ALTER
+SESSION command only apply to queries that run during the current Drill
+connection. A session ends when you quit the Drill shell. You can set any of
+the system level options at the session level.
+
+You can run the following query to see a complete list of planning and
+execution options that are currently set at the system or session level:
+
+ 0: jdbc:drill:zk=local> SELECT name, type FROM sys.options WHERE type in ('SYSTEM','SESSION') order by name;
+ +------------------------------------------------+------------+
+ | name                                           | type       |
+ +------------------------------------------------+------------+
+ | drill.exec.functions.cast_empty_string_to_null | SYSTEM     |
+ | drill.exec.storage.file.partition.column.label | SYSTEM     |
+ | exec.errors.verbose                            | SYSTEM     |
+ | exec.java_compiler                             | SYSTEM     |
+ | exec.java_compiler_debug                       | SYSTEM     |
+ …
+ +------------------------------------------------+------------+
+
+{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
+
+## Example
+
+This example demonstrates how to use the ALTER SESSION command to set the
+`store.json.all_text_mode` option to “true” for the current Drill session.
+Setting this option to “true” enables text mode so that Drill reads everything
+in JSON as a text object instead of trying to interpret data types. This
+allows complicated JSON to be read using CASE and CAST.
+
+ 0: jdbc:drill:zk=local> alter session set `store.json.all_text_mode`= true;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | store.json.all_text_mode updated. |
+ +------------+------------+
+ 1 row selected (0.046 seconds)
+
+You can issue a query to see all of the session level settings. Note that the
+option type is case-sensitive.
+
+ 0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SESSION' order by name;
+ +------------+------------+------------+
+ | name | type | bool_val |
+ +------------+------------+------------+
+ | store.json.all_text_mode | SESSION | true |
+ +------------+------------+------------+
+ 1 row selected (0.176 seconds)
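+
+With all-text mode enabled, every JSON field is read as text; you can convert
+fields back to typed values explicitly with CAST. A sketch using the donuts
+data that appears elsewhere in these docs:
+
+ SELECT CAST(ppu AS DOUBLE) AS ppu FROM `donuts.json`;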
+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/020-alter-system.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/020-alter-system.md b/_docs/sql-reference/sql-commands/020-alter-system.md
new file mode 100644
index 0000000..a351ac8
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/020-alter-system.md
@@ -0,0 +1,102 @@
+---
+title: "ALTER SYSTEM Command"
+parent: "SQL Commands"
+---
+The ALTER SYSTEM command permanently changes a system setting. The new setting
+persists across all sessions. Session level settings override system level
+settings.
+
+## Syntax
+
+The ALTER SYSTEM command supports the following syntax:
+
+ ALTER SYSTEM SET `<option_name>`=<value>;
+
+## Parameters
+
+*option_name*
+
+This is the option name as it appears in the sys.options table.
+
+*value*
+
+A value of the type listed in the sys.options table: number, string, boolean,
+or float. Use the appropriate value type for each option that you set.
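+
+For example, a numeric option takes a number and a string option takes a
+quoted string. A sketch using option names and values from the sys.options
+listing below:
+
+ ALTER SYSTEM SET `planner.width.max_per_node`=2;
+ ALTER SYSTEM SET `store.format`='parquet';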
+
+## Usage Notes
+
+Use the ALTER SYSTEM command to permanently set Drill query planning and
+execution options per cluster. Options set at the system level affect the
+entire system and persist between restarts.
+
+You can run the following query to see a complete list of planning and
+execution options that you can set at the system level:
+
+ 0: jdbc:drill:zk=local> select name, type, num_val, string_val, bool_val, float_val from sys.options where type like 'SYSTEM' order by name;
+ +------------+------------+------------+------------+------------+------------+
+ | name | type | num_val | string_val | bool_val | float_val |
+ +------------+------------+------------+------------+------------+------------+
+ | drill.exec.functions.cast_empty_string_to_null | SYSTEM | null | null | false | null |
+ | drill.exec.storage.file.partition.column.label | SYSTEM | null | dir | null | null |
+ | exec.errors.verbose | SYSTEM | null | null | false | null |
+ | exec.java_compiler | SYSTEM | null | DEFAULT | null | null |
+ | exec.java_compiler_debug | SYSTEM | null | null | true | null |
+ | exec.java_compiler_janino_maxsize | SYSTEM | 262144 | null | null | null |
+ | exec.queue.timeout_millis | SYSTEM | 400000 | null | null | null |
+ | planner.add_producer_consumer | SYSTEM | null | null | true | null |
+ | planner.affinity_factor | SYSTEM | null | null | null | 1.2 |
+ | planner.broadcast_threshold | SYSTEM | 1000000 | null | null | null |
+ | planner.disable_exchanges | SYSTEM | null | null | false | null |
+ | planner.enable_broadcast_join | SYSTEM | null | null | true | null |
+ | planner.enable_hash_single_key | SYSTEM | null | null | true | null |
+ | planner.enable_hashagg | SYSTEM | null | null | true | null |
+ | planner.enable_hashjoin | SYSTEM | null | null | true | null |
+ | planner.slice_target | SYSTEM | 100000 | null | null | null |
+ | planner.width.max_per_node | SYSTEM | 2 | null | null | null |
+ | planner.width.max_per_query | SYSTEM | 1000 | null | null | null |
+ | store.format | SYSTEM | null | parquet | null | null |
+ | store.json.all_text_mode | SYSTEM | null | null | false | null |
+ | store.mongo.all_text_mode | SYSTEM | null | null | false | null |
+ | store.parquet.block-size | SYSTEM | 536870912 | null | null | null |
+ | store.parquet.use_new_reader | SYSTEM | null | null | false | null |
+ | store.parquet.vector_fill_check_threshold | SYSTEM | 10 | null | null | null |
+ | store.parquet.vector_fill_threshold | SYSTEM | 85 | null | null | null |
+ +------------+------------+------------+------------+------------+------------+
+
+{% include startnote.html %}This is a truncated version of the list.{% include endnote.html %}
+
+## Example
+
+This example demonstrates how to use the ALTER SYSTEM command to set the
+`planner.add_producer_consumer` option to “true.” This option enables a
+secondary reading thread to prefetch data from disk.
+
+ 0: jdbc:drill:zk=local> alter system set `planner.add_producer_consumer` = true;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | planner.add_producer_consumer updated. |
+ +------------+------------+
+ 1 row selected (0.046 seconds)
+
+You can issue a query to see all of the system level settings set to “true.”
+Note that the option type is case-sensitive.
+
+ 0: jdbc:drill:zk=local> SELECT name, type, bool_val FROM sys.options WHERE type = 'SYSTEM' and bool_val=true;
+ +------------+------------+------------+
+ | name | type | bool_val |
+ +------------+------------+------------+
+ | exec.java_compiler_debug | SYSTEM | true |
+ | planner.enable_mergejoin | SYSTEM | true |
+ | planner.enable_broadcast_join | SYSTEM | true |
+ | planner.enable_hashagg | SYSTEM | true |
+ | planner.add_producer_consumer | SYSTEM | true |
+ | planner.enable_hash_single_key | SYSTEM | true |
+ | planner.enable_multiphase_agg | SYSTEM | true |
+ | planner.enable_streamagg | SYSTEM | true |
+ | planner.enable_hashjoin | SYSTEM | true |
+ +------------+------------+------------+
+ 9 rows selected (0.159 seconds)
+
+
+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/030-create-table-as-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/030-create-table-as-command.md b/_docs/sql-reference/sql-commands/030-create-table-as-command.md
new file mode 100644
index 0000000..5bab011
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/030-create-table-as-command.md
@@ -0,0 +1,134 @@
+---
+title: "CREATE TABLE AS (CTAS) Command"
+parent: "SQL Commands"
+---
+You can create tables in Drill by using the CTAS command:
+
+ CREATE TABLE new_table_name AS <query>;
+
+where query is any valid Drill query. Each table you create must have a unique
+name. You can include an optional column list for the new table. For example:
+
+ create table logtable(transid, prodid) as select transaction_id, product_id from ...
+
+You can store table data in one of three formats:
+
+ * csv
+ * parquet
+ * json
+
+The parquet and json formats can be used to store complex data.
+
+To set the output format for a Drill table, set the `store.format` option with
+the ALTER SYSTEM or ALTER SESSION command. For example:
+
+ alter session set `store.format`='json';
+
+Table data is stored in the location specified by the workspace that is in use
+when you run the CTAS statement. By default, a directory is created, using the
+exact table name specified in the CTAS statement. A .json, .csv, or .parquet
+file inside that directory contains the data.
+
+You can only create new tables in workspaces. You cannot create tables in
+other storage plugins such as Hive and HBase.
+
+You must use a writable (mutable) workspace when creating Drill tables. For
+example:
+
+ "tmp": {
+ "location": "/tmp",
+ "writable": true
+ }
+
+## Example
+
+The following query returns one row from a JSON file:
+
+ 0: jdbc:drill:zk=local> select id, type, name, ppu
+ from dfs.`/Users/brumsby/drill/donuts.json`;
+ +------------+------------+------------+------------+
+ | id | type | name | ppu |
+ +------------+------------+------------+------------+
+ | 0001 | donut | Cake | 0.55 |
+ +------------+------------+------------+------------+
+ 1 row selected (0.248 seconds)
+
+To create and verify the contents of a table that contains this row:
+
+ 1. Set the workspace to a writable workspace.
+ 2. Set the `store.format` option appropriately.
+ 3. Run a CTAS statement that contains the query.
+ 4. Go to the directory where the table is stored and check the contents of the file.
+ 5. Run a query against the new table.
+
+The following sqlline output captures this sequence of steps.
+
+### Workspace Definition
+
+ "tmp": {
+ "location": "/tmp",
+ "writable": true
+ }
+
+### ALTER SESSION Command
+
+ alter session set `store.format`='json';
+
+### USE Command
+
+ 0: jdbc:drill:zk=local> use dfs.tmp;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'dfs.tmp' |
+ +------------+------------+
+ 1 row selected (0.03 seconds)
+
+### CTAS Command
+
+ 0: jdbc:drill:zk=local> create table donuts_json as
+ select id, type, name, ppu from dfs.`/Users/brumsby/drill/donuts.json`;
+ +------------+---------------------------+
+ | Fragment | Number of records written |
+ +------------+---------------------------+
+ | 0_0 | 1 |
+ +------------+---------------------------+
+ 1 row selected (0.107 seconds)
+
+### File Contents
+
+ administorsmbp7:tmp brumsby$ pwd
+ /tmp
+ administorsmbp7:tmp brumsby$ cd donuts_json
+ administorsmbp7:donuts_json brumsby$ more 0_0_0.json
+ {
+ "id" : "0001",
+ "type" : "donut",
+ "name" : "Cake",
+ "ppu" : 0.55
+ }
+
+### Query Against New Table
+
+ 0: jdbc:drill:zk=local> select * from donuts_json;
+ +------------+------------+------------+------------+
+ | id | type | name | ppu |
+ +------------+------------+------------+------------+
+ | 0001 | donut | Cake | 0.55 |
+ +------------+------------+------------+------------+
+ 1 row selected (0.053 seconds)
+
+### Use a Different Output Format
+
+You can run the same sequence again with a different storage format set for
+the system or session (csv or parquet). For example, if the format is set to
+csv, and you name the table donuts_csv, the resulting file would look like
+this:
+
+ administorsmbp7:tmp brumsby$ cd donuts_csv
+ administorsmbp7:donuts_csv brumsby$ ls
+ 0_0_0.csv
+ administorsmbp7:donuts_csv brumsby$ more 0_0_0.csv
+ id,type,name,ppu
+ 0001,donut,Cake,0.55
+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/050-create-view-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/050-create-view-command.md b/_docs/sql-reference/sql-commands/050-create-view-command.md
new file mode 100644
index 0000000..d21ea12
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/050-create-view-command.md
@@ -0,0 +1,197 @@
+---
+title: "CREATE VIEW Command"
+parent: "SQL Commands"
+---
+The CREATE VIEW command creates a virtual structure for the result set of a
+stored query. A view can combine data from multiple underlying data sources
+and provide the illusion that all of the data is from one source. You can use
+views to protect sensitive data, for data aggregation, and to hide data
+complexity from users. You can create Drill views from files in your local and
+distributed file systems, Hive, HBase, and MapR-DB tables, as well as from
+existing views or any other available storage plugin data sources.
+
+## Syntax
+
+The CREATE VIEW command supports the following syntax:
+
+ CREATE [OR REPLACE] VIEW [workspace.]view_name [ (column_name [, ...]) ] AS <query>;
+
+Use CREATE VIEW to create a new view. Use CREATE OR REPLACE VIEW to replace an
+existing view with the same name. When you replace a view, the query must
+generate the same set of columns with the same column names and data types.
+
+**Note:** Follow Drill’s rules for identifiers when you name the view. See coming soon...
+
+## Parameters
+
+_workspace_
+The location where you want the view to exist. By default, the view is created
+in the current workspace. See
+[Workspaces]({{ site.baseurl }}/docs/Workspaces).
+
+_view_name_
+The name that you give the view. The view must have a unique name. It cannot
+have the same name as any other view or table in the workspace.
+
+_column_name_
+Optional list of column names in the view. If you do not supply column names,
+they are derived from the query.
+
+_query_
+A SELECT statement that defines the columns and rows in the view.
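+
+Putting these parameters together, a sketch of a view created in a named
+workspace with an explicit column list (the names are illustrative and reuse
+the donuts data from the example later in this page):
+
+ CREATE OR REPLACE VIEW dfs.tmp.mydonuts (id, type) AS
+ SELECT id, type FROM `donuts.json`;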
+
+## Usage Notes
+
+### Storage
+
+Drill stores views in the location specified by the workspace that you use
+when you run the CREATE VIEW command. If the workspace is not defined, Drill
+creates the view in the current workspace. You must use a writable workspace
+when you create a view. Currently, Drill only supports views created in the
+file system or distributed file system.
+
+The following example shows a writable workspace as defined within the storage
+plugin in the `/tmp` directory of the file system:
+
+ "tmp": {
+ "location": "/tmp",
+ "writable": true
+ }
+
+Drill stores the view definition in JSON format with the name that you specify
+when you run the CREATE VIEW command, suffixed by `.view.drill`. For example,
+if you create a view named `myview`, Drill stores the view in the designated
+workspace as `myview.view.drill`.
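+
+For instance, for a workspace mapped to `/tmp` (as in the storage example
+above), the stored definition would show up in a hypothetical directory
+listing as:
+
+ $ ls /tmp
+ myview.view.drill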
+
+### Data Sources
+
+Drill considers data sources to have either a strong schema or a weak schema.
+
+#### Strong Schema
+
+With the exception of text file data sources, Drill verifies that data sources
+associated with a strong schema contain data types compatible with those used
+in the query. Drill also verifies that the columns referenced in the query
+exist in the underlying data sources. If the columns do not exist, CREATE VIEW
+fails.
+
+#### Weak Schema
+
+Drill does not verify that data sources associated with a weak schema contain
+data types compatible with those used in the query. Drill does not verify
+whether columns referenced in a query on a Parquet data source exist;
+therefore, CREATE VIEW always succeeds. In the case of JSON files, Drill does
+not verify whether the files contain the maps specified in the view.
+
+The following table lists the current categories of schema and the data
+sources associated with each:
+
+<table>
+ <tr>
+ <th></th>
+ <th>Strong Schema</th>
+ <th>Weak Schema</th>
+ </tr>
+ <tr>
+ <td valign="top">Data Sources</td>
+ <td>views<br>hive tables<br>hbase column families<br>text</td>
+ <td>json<br>mongodb<br>hbase column qualifiers<br>parquet</td>
+ </tr>
+</table>
+
+## Related Commands
+
+After you create a view using the CREATE VIEW command, you can issue the
+following commands against the view:
+
+ * SELECT
+ * DESCRIBE
+ * DROP
+
+{% include startnote.html %}You cannot update, insert into, or delete from a view.{% include endnote.html %}
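+
+For a view named `mydonuts` (a name used in the example that follows), these
+commands take the form:
+
+ SELECT * FROM mydonuts;
+ DESCRIBE mydonuts;
+ DROP VIEW mydonuts;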
+
+## Example
+
+This example shows you some steps that you can follow when you want to create
+a view in Drill using the CREATE VIEW command. A workspace named “donuts” was
+created for the steps in this example.
+
+Complete the following steps to create a view in Drill:
+
+ 1. Decide which workspace you will use to create the view, and verify that the writable option is set to “true.” You can use an existing workspace, or you can create a new workspace. See [Workspaces](https://cwiki.apache.org/confluence/display/DRILL/Workspaces) for more information.
+
+ "workspaces": {
+ "donuts": {
+ "location": "/home/donuts",
+ "writable": true,
+ "defaultInputFormat": null
+ }
+ },
+
+ 2. Run SHOW DATABASES to verify that Drill recognizes the workspace.
+
+ 0: jdbc:drill:zk=local> show databases;
+ +-------------+
+ | SCHEMA_NAME |
+ +-------------+
+ | dfs.default |
+ | dfs.root |
+ | dfs.donuts |
+ | dfs.tmp |
+ | cp.default |
+ | sys |
+ | INFORMATION_SCHEMA |
+ +-------------+
+
+ 3. Use the writable workspace.
+
+ 0: jdbc:drill:zk=local> use dfs.donuts;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'dfs.donuts' |
+ +------------+------------+
+
+ 4. Test run the query that you plan to use with the CREATE VIEW command.
+
+ 0: jdbc:drill:zk=local> select id, type, name, ppu from `donuts.json`;
+ +------------+------------+------------+------------+
+ | id | type | name | ppu |
+ +------------+------------+------------+------------+
+ | 0001 | donut | Cake | 0.55 |
+ +------------+------------+------------+------------+
+
+ 5. Run the CREATE VIEW command with the query.
+
+ 0: jdbc:drill:zk=local> create view mydonuts as select id, type, name, ppu from `donuts.json`;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | View 'mydonuts' created successfully in 'dfs.donuts' schema |
+ +------------+------------+
+
+ 6. Create a new view in another workspace from the current workspace.
+
+ 0: jdbc:drill:zk=local> create view dfs.tmp.yourdonuts as select id, type, name from `donuts.json`;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | View 'yourdonuts' created successfully in 'dfs.tmp' schema |
+ +------------+------------+
+
+ 7. Query the view created in both workspaces.
+
+ 0: jdbc:drill:zk=local> select * from mydonuts;
+ +------------+------------+------------+------------+
+ | id | type | name | ppu |
+ +------------+------------+------------+------------+
+ | 0001 | donut | Cake | 0.55 |
+ +------------+------------+------------+------------+
+
+
+ 0: jdbc:drill:zk=local> select * from dfs.tmp.yourdonuts;
+ +------------+------------+------------+
+ | id | type | name |
+ +------------+------------+------------+
+ | 0001 | donut | Cake |
+ +------------+------------+------------+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/060-describe-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/060-describe-command.md b/_docs/sql-reference/sql-commands/060-describe-command.md
new file mode 100644
index 0000000..349f0ef
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/060-describe-command.md
@@ -0,0 +1,99 @@
+---
+title: "DESCRIBE Command"
+parent: "SQL Commands"
+---
+The DESCRIBE command returns information about columns in a table or view.
+
+## Syntax
+
+The DESCRIBE command supports the following syntax:
+
+ DESCRIBE [workspace.]table_name|view_name
+
+## Usage Notes
+
+You can issue the DESCRIBE command against views created in a workspace and
+tables created in Hive, HBase, and MapR-DB. You can issue the DESCRIBE command
+on a table or view from any schema. For example, if you are working in the
+`dfs.myworkspace` schema, you can issue the DESCRIBE command on a view or
+table in another schema. Currently, DESCRIBE does not support tables created
+in a file system.
+
+Drill only supports SQL data types. Verify that all data types in an external
+data source, such as Hive or HBase, map to supported data types in Drill. See
+Drill Data Type Mapping for more information.
+
+## Example
+
+The following example demonstrates the steps that you can follow when you want
+to use the DESCRIBE command to see column information for a view and for Hive
+and HBase tables.
+
+Complete the following steps to use the DESCRIBE command:
+
+ 1. Issue the USE command to switch to a particular schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use hive;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'hive' |
+ +------------+------------+
+ 1 row selected (0.025 seconds)
+
+ 2. Issue the SHOW TABLES command to see the existing tables in the schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show tables;
+ +--------------+------------+
+ | TABLE_SCHEMA | TABLE_NAME |
+ +--------------+------------+
+ | hive.default | orders |
+ | hive.default | products |
+ +--------------+------------+
+ 2 rows selected (0.438 seconds)
+
+ 3. Issue the DESCRIBE command on a table.
+
+ 0: jdbc:drill:zk=drilldemo:5181> describe orders;
+ +-------------+------------+-------------+
+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE |
+ +-------------+------------+-------------+
+ | order_id | BIGINT | YES |
+ | month | VARCHAR | YES |
+ | purchdate | TIMESTAMP | YES |
+ | cust_id | BIGINT | YES |
+ | state | VARCHAR | YES |
+ | prod_id | BIGINT | YES |
+ | order_total | INTEGER | YES |
+ +-------------+------------+-------------+
+ 7 rows selected (0.64 seconds)
+
+ 4. Issue the DESCRIBE command on a table in another schema from the current schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> describe hbase.customers;
+ +-------------+------------+-------------+
+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE |
+ +-------------+------------+-------------+
+ | row_key | ANY | NO |
+ | address | (VARCHAR(1), ANY) MAP | NO |
+ | loyalty | (VARCHAR(1), ANY) MAP | NO |
+ | personal | (VARCHAR(1), ANY) MAP | NO |
+ +-------------+------------+-------------+
+ 4 rows selected (0.671 seconds)
+
+ 5. Issue the DESCRIBE command on a view in another schema from the current schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> describe dfs.views.customers_vw;
+ +-------------+------------+-------------+
+ | COLUMN_NAME | DATA_TYPE | IS_NULLABLE |
+ +-------------+------------+-------------+
+ | cust_id | BIGINT | NO |
+ | name | VARCHAR | NO |
+ | address | VARCHAR | NO |
+ | gender | VARCHAR | NO |
+ | age | VARCHAR | NO |
+ | agg_rev | VARCHAR | NO |
+ | membership | VARCHAR | NO |
+ +-------------+------------+-------------+
+ 7 rows selected (0.403 seconds)
+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/070-explain-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/070-explain-commands.md b/_docs/sql-reference/sql-commands/070-explain-commands.md
new file mode 100644
index 0000000..5aab0e9
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/070-explain-commands.md
@@ -0,0 +1,156 @@
+---
+title: "EXPLAIN Commands"
+parent: "SQL Commands"
+---
+EXPLAIN is a useful tool for examining the steps that a query goes through
+when it is executed. You can use the EXPLAIN output to gain a deeper
+understanding of the parallel processing that Drill queries exploit. You can
+also look at costing information, troubleshoot performance issues, and
+diagnose routine errors that may occur when you run queries.
+
+Drill provides two variations on the EXPLAIN command, one that returns the
+physical plan and one that returns the logical plan. A logical plan takes the
+SQL query (as written by the user and accepted by the parser) and translates
+it into a logical series of operations that correspond to SQL language
+constructs (without defining the specific algorithms that will be implemented
+to run the query). A physical plan translates the logical plan into a specific
+series of steps that will be used when the query runs. For example, a logical
+plan may indicate a join step in general and classify it as inner or outer,
+but the corresponding physical plan will indicate the specific type of join
+operator that will run, such as a merge join or a hash join. The physical plan
+is operational and reveals the specific _access methods_ that will be used for
+the query.
+
+An EXPLAIN command for a query that is run repeatedly under the exact same
+conditions against the same data will return the same plan. However, if you
+change a configuration option, for example, or update the tables or files that
+you are selecting from, you are likely to see plan changes.
+
+## EXPLAIN Syntax
+
+The EXPLAIN command supports the following syntax:
+
+ explain plan [ including all attributes ] [ with implementation | without implementation ] for <query> ;
+
+where `query` is any valid SELECT statement supported by Drill.
+
+#### INCLUDING ALL ATTRIBUTES
+
+This option returns costing information. You can use this option for both
+physical and logical plans.
+
+#### WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION
+
+These options return the physical and logical plan information, respectively.
+The default is physical (WITH IMPLEMENTATION).
+
+## EXPLAIN for Physical Plans
+
+The `EXPLAIN PLAN FOR <query>` command returns the chosen physical execution
+plan for a query statement without running the query. You can use this command
+to see what kind of execution operators Drill implements. For example, you can
+find out what kind of join algorithm is chosen when tables or files are
+joined. You can also use this command to analyze errors and troubleshoot
+queries that do not run. For example, if you run into a casting error, the
+query plan text may help you isolate the problem.
+
+Use the following syntax:
+
+ explain plan for <query> ;
+
+The following set command increases the default text display (number of
+characters). By default, most of the plan output is not displayed.
+
+ 0: jdbc:drill:zk=local> !set maxwidth 10000
+
+Do not use a semicolon to terminate `!set` commands.
+
+For example, here is the top portion of the explain output for a
+COUNT(DISTINCT) query on a JSON file:
+
+ 0: jdbc:drill:zk=local> !set maxwidth 10000
+ 0: jdbc:drill:zk=local> explain plan for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
+ +------------+------------+
+ | text | json |
+ +------------+------------+
+ | 00-00 Screen
+ 00-01 Project(t=[$0], EXPR$1=[$1])
+ 00-02 Project(t=[$0], EXPR$1=[$1])
+ 00-03 HashAgg(group=[{0}], EXPR$1=[COUNT($1)])
+ 00-04 HashAgg(group=[{0, 1}])
+ 00-05 SelectionVectorRemover
+ 00-06 Filter(condition=[=($0, 'donut')])
+ 00-07 Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])...
+ ...
+
+Read the text output from bottom to top to understand the sequence of
+operators that will execute the query. Note that the physical plan starts with
+a scan of the JSON file that is being queried. The selected columns are
+projected and filtered, then the aggregate function is applied.
+
+The EXPLAIN text output is followed by detailed JSON output, which is reusable
+for submitting the query via Drill APIs.
+
+ | {
+ "head" : {
+ "version" : 1,
+ "generator" : {
+ "type" : "ExplainHandler",
+ "info" : ""
+ },
+ "type" : "APACHE_DRILL_PHYSICAL",
+ "options" : [ ],
+ "queue" : 0,
+ "resultMode" : "EXEC"
+ },
+ ....
+
+## Costing Information
+
+Add the INCLUDING ALL ATTRIBUTES option to the EXPLAIN command to see cost
+estimates for the query plan. For example:
+
+ 0: jdbc:drill:zk=local> !set maxwidth 10000
+ 0: jdbc:drill:zk=local> explain plan including all attributes for select * from dfs.`/home/donuts/donuts.json` where type='donut';
+ +------------+------------+
+ | text | json |
+ +------------+------------+
+ | 00-00 Screen: rowcount = 1.0, cumulative cost = {5.1 rows, 21.1 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 889
+ 00-01 Project(*=[$0]): rowcount = 1.0, cumulative cost = {5.0 rows, 21.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 888
+ 00-02 Project(T1¦¦*=[$0]): rowcount = 1.0, cumulative cost = {4.0 rows, 17.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 887
+ 00-03 SelectionVectorRemover: rowcount = 1.0, cumulative cost = {3.0 rows, 13.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 886
+ 00-04 Filter(condition=[=($1, 'donut')]): rowcount = 1.0, cumulative cost = {2.0 rows, 12.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 885
+ 00-05 Project(T1¦¦*=[$0], type=[$1]): rowcount = 1.0, cumulative cost = {1.0 rows, 8.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 884
+ 00-06 Scan(groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`*`], files=[file:/home/donuts/donuts.json]]]): rowcount = 1.0, cumulative cost = {0.0 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 883
+
+## EXPLAIN for Logical Plans
+
+To return the logical plan for a query (again, without actually running the
+query), use the EXPLAIN PLAN WITHOUT IMPLEMENTATION syntax:
+
+ explain plan without implementation for <query> ;
+
+For example:
+
+ 0: jdbc:drill:zk=local> explain plan without implementation for select type t, count(distinct id) from dfs.`/home/donuts/donuts.json` where type='donut' group by type;
+ +------------+------------+
+ | text | json |
+ +------------+------------+
+ | DrillScreenRel
+ DrillProjectRel(t=[$0], EXPR$1=[$1])
+ DrillAggregateRel(group=[{0}], EXPR$1=[COUNT($1)])
+ DrillAggregateRel(group=[{0, 1}])
+ DrillFilterRel(condition=[=($0, 'donut')])
+ DrillScanRel(table=[[dfs, /home/donuts/donuts.json]], groupscan=[EasyGroupScan [selectionRoot=/home/donuts/donuts.json, numFiles=1, columns=[`type`, `id`], files=[file:/home/donuts/donuts.json]]])
+ | {
+ "head" : {
+ "version" : 1,
+ "generator" : {
+ "type" : "org.apache.drill.exec.planner.logical.DrillImplementor",
+ "info" : ""
+ },
+ "type" : "APACHE_DRILL_LOGICAL",
+ "options" : null,
+ "queue" : 0,
+ "resultMode" : "LOGICAL"
+ },...
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/080-select.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/080-select.md b/_docs/sql-reference/sql-commands/080-select.md
new file mode 100644
index 0000000..4fb9f5e
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/080-select.md
@@ -0,0 +1,178 @@
+---
+title: "SELECT Statements"
+parent: "SQL Commands"
+---
+Drill supports the following ANSI standard clauses in the SELECT statement:
+
+ * WITH clause
+ * SELECT list
+ * FROM clause
+ * WHERE clause
+ * GROUP BY clause
+ * HAVING clause
+ * ORDER BY clause (with an optional LIMIT clause)
+
+You can use the same SELECT syntax in the following commands:
+
+ * CREATE TABLE AS (CTAS)
+ * CREATE VIEW
+
+INSERT INTO SELECT is not yet supported.
+
+## Column Aliases
+
+You can use named column aliases in the SELECT list to provide meaningful
+names for regular columns and computed columns, such as the results of
+aggregate functions. See the section on running queries for examples.
+
+You cannot reference column aliases in the following clauses:
+
+ * WHERE
+ * GROUP BY
+ * HAVING
+
+Because Drill works with schema-less data sources, you cannot use positional
+aliases (1, 2, etc.) to refer to SELECT list columns, except in the ORDER BY
+clause.
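+
+For example, the following sketch (using the `employee.json` sample file that
+ships with Drill; the `annual_salary` alias is illustrative) names a computed
+column with an alias and uses a positional reference in the ORDER BY clause:
+
+ SELECT full_name, salary * 12 AS annual_salary
+ FROM cp.`employee.json`
+ ORDER BY 2 DESC
+ LIMIT 3;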
+
+## UNION ALL Set Operator
+
+Drill supports the UNION ALL set operator to combine two result sets. The
+distinct UNION operator is not yet supported.
+
+The EXCEPT, EXCEPT ALL, INTERSECT, and INTERSECT ALL operators are not yet
+supported.
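+
+A minimal UNION ALL sketch, assuming two union-compatible result sets drawn
+from the `employee.json` sample file:
+
+ SELECT full_name FROM cp.`employee.json` WHERE salary > 50000
+ UNION ALL
+ SELECT full_name FROM cp.`employee.json` WHERE position_title = 'President';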
+
+## Joins
+
+Drill supports ANSI standard joins in the FROM and WHERE clauses:
+
+ * Inner joins
+ * Left, full, and right outer joins
+
+The following types of join syntax are supported:
+
+Join type| Syntax
+---|---
+Join condition in WHERE clause|FROM table1, table2 WHERE table1.col1=table2.col1
+USING join in FROM clause|FROM table1 JOIN table2 USING(col1, ...)
+ON join in FROM clause|FROM table1 JOIN table2 ON table1.col1=table2.col1
+NATURAL JOIN in FROM clause|FROM table1 NATURAL JOIN table2
+
+Cross-joins are not yet supported. You must specify a join condition when more
+than one table is listed in the FROM clause.
+
+Non-equijoins are supported if the join also contains an equality condition on
+the same two tables as part of a conjunction:
+
+ table1.col1 = table2.col1 AND table1.c2 < table2.c2
+
+This restriction applies to both inner and outer joins.
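+
+For example, the following join sketch (table and column names are
+illustrative) is accepted because the non-equality condition is ANDed with an
+equality condition on the same two tables:
+
+ SELECT t1.col1, t2.c2
+ FROM table1 t1 JOIN table2 t2
+ ON t1.col1 = t2.col1 AND t1.c2 < t2.c2;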
+
+## Subqueries
+
+You can use the following subquery operators in Drill queries. These operators
+all return Boolean results.
+
+ * ALL
+ * ANY
+ * EXISTS
+ * IN
+ * SOME
+
+In general, correlated subqueries are supported. EXISTS and NOT EXISTS
+subqueries that do not contain a correlation join are not yet supported.
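+
+For example, a correlated EXISTS subquery sketch (the `orders` and `products`
+tables and the `prod_id` join column are assumed for illustration):
+
+ SELECT o.order_id
+ FROM orders o
+ WHERE EXISTS (
+ SELECT 1 FROM products p WHERE p.prod_id = o.prod_id);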
+
+## WITH Clause
+
+The WITH clause is an optional clause used to contain one or more common table
+expressions (CTE) where each CTE defines a temporary table that exists for the
+duration of the query. Each subquery in the WITH clause specifies a table
+name, an optional list of column names, and a SELECT statement.
+
+## Syntax
+
+The WITH clause supports the following syntax:
+
+ [ WITH with_subquery [, ...] ]
+ where with_subquery is:
+ with_subquery_table_name [ ( column_name [, ...] ) ] AS ( query )
+
+## Parameters
+
+_with_subquery_table_name_
+
+A unique name for a temporary table that defines the results of a WITH clause
+subquery. You cannot use duplicate names within a single WITH clause. You must
+give each subquery a table name that can be referenced in the FROM clause.
+
+_column_name_
+
+An optional list of output column names for the WITH clause subquery,
+separated by commas. The number of column names specified must be equal to or
+less than the number of columns defined by the subquery.
+
+_query_
+
+Any SELECT query that Drill supports. See
+[SELECT]({{ site.baseurl }}/docs/select-statements).
+
+## Usage Notes
+
+Use the WITH clause to efficiently define temporary tables that Drill can
+access throughout the execution of a single query. The WITH clause is
+typically a simpler alternative to using subqueries in the main body of the
+SELECT statement. In some cases, Drill can evaluate a WITH subquery once and
+reuse the results for query optimization.
+
+You can use a WITH clause in the following SQL statements:
+
+ * SELECT (including subqueries within SELECT statements)
+
+ * CREATE TABLE AS
+
+ * CREATE VIEW
+
+ * EXPLAIN
+
+You can reference the temporary tables in the FROM clause of the query. If the
+FROM clause does not reference any tables defined by the WITH clause, Drill
+ignores the WITH clause and executes the query as normal.
+
+Drill can only reference a table defined by a WITH clause subquery in the
+scope of the SELECT query that the WITH clause begins. For example, you can
+reference such a table in the FROM clause of a subquery in the SELECT list,
+WHERE clause, or HAVING clause. You cannot use a WITH clause in a subquery and
+reference its table in the FROM clause of the main query or another subquery.
+
+You cannot specify another WITH clause inside a WITH clause subquery.
+
+
+## Example
+
+The following example shows the WITH clause used to create a WITH query named
+`emp_data` that selects all of the rows from the `employee.json` file. The
+main query selects the `full_name, position_title, salary`, and `hire_date`
+rows from the `emp_data` temporary table (created from the WITH subquery) and
+orders the results by the hire date. The `emp_data` table only exists for the
+duration of the query.
+
+**Note:** The `employee.json` file is included with the Drill installation. It is located in the `cp.default` workspace which is configured by default.
+
+ 0: jdbc:drill:zk=local> with emp_data as (select * from cp.`employee.json`) select full_name, position_title, salary, hire_date from emp_data order by hire_date limit 10;
+ +------------------+-------------------------+------------+-----------------------+
+ | full_name | position_title | salary | hire_date |
+ +------------------+-------------------------+------------+-----------------------+
+ | Bunny McCown | Store Assistant Manager | 8000.0 | 1993-05-01 00:00:00.0 |
+ | Danielle Johnson | Store Assistant Manager | 8000.0 | 1993-05-01 00:00:00.0 |
+ | Dick Brummer | Store Assistant Manager | 7900.0 | 1993-05-01 00:00:00.0 |
+ | Gregory Whiting | Store Assistant Manager | 10000.0 | 1993-05-01 00:00:00.0 |
+ | Juanita Sharp | HQ Human Resources | 6700.0 | 1994-01-01 00:00:00.0 |
+ | Sheri Nowmer | President | 80000.0 | 1994-12-01 00:00:00.0 |
+ | Rebecca Kanagaki | VP Human Resources | 15000.0 | 1994-12-01 00:00:00.0 |
+ | Shauna Wyro | Store Manager | 15000.0 | 1994-12-01 00:00:00.0 |
+ | Roberta Damstra | VP Information Systems | 25000.0 | 1994-12-01 00:00:00.0 |
+ | Pedro Castillo | VP Country Manager | 35000.0 | 1994-12-01 00:00:00.0 |
+ +------------------+-------------------------+------------+-----------------------+
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md b/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
new file mode 100644
index 0000000..c000f32
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
@@ -0,0 +1,59 @@
+---
+title: "SHOW DATABASES and SHOW SCHEMAS Commands"
+parent: "SQL Commands"
+---
+The SHOW DATABASES and SHOW SCHEMAS commands generate a list of available Drill schemas that you can query.
+
+## Syntax
+
+The SHOW DATABASES and SHOW SCHEMAS commands support the following syntax:
+
+ SHOW DATABASES;
+ SHOW SCHEMAS;
+
+{% include startnote.html %}These commands generate the same results.{% include endnote.html %}
+
+## Usage Notes
+
+You may want to run the SHOW DATABASES or SHOW SCHEMAS command to see a list of the configured storage plugins and workspaces in Drill before you issue the USE command to switch to a particular schema for your queries.
+
+In Drill, a database or schema is a configured storage plugin instance or a configured storage plugin instance with a configured workspace. For example, dfs.donuts where dfs is the file system configured as a storage plugin instance, and donuts is a configured workspace.
+
+You can configure and use multiple storage plugins and workspaces in Drill. See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration) and [Workspaces]({{ site.baseurl }}/docs/workspaces).
+
+## Example
+
+The following example uses the SHOW DATABASES and SHOW SCHEMAS commands to generate a list of the available schemas in Drill. Some of the results that display are specific to all Drill installations, such as `cp.default` and `dfs.default`, while others vary based on your specific storage plugin and workspace configurations.
+
+ 0: jdbc:drill:zk=local> show databases;
+ +-------------+
+ | SCHEMA_NAME |
+ +-------------+
+ | dfs.default |
+ | dfs.root |
+ | dfs.donuts |
+ | dfs.tmp |
+ | dfs.customers |
+ | dfs.yelp |
+ | cp.default |
+ | sys |
+ | INFORMATION_SCHEMA |
+ +-------------+
+ 9 rows selected (0.07 seconds)
+
+
+ 0: jdbc:drill:zk=local> show schemas;
+ +-------------+
+ | SCHEMA_NAME |
+ +-------------+
+ | dfs.default |
+ | dfs.root |
+ | dfs.donuts |
+ | dfs.tmp |
+ | dfs.customers |
+ | dfs.yelp |
+ | cp.default |
+ | sys |
+ | INFORMATION_SCHEMA |
+ +-------------+
+ 9 rows selected (0.058 seconds)
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/100-show-files.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/100-show-files.md b/_docs/sql-reference/sql-commands/100-show-files.md
new file mode 100644
index 0000000..9651add
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/100-show-files.md
@@ -0,0 +1,65 @@
+---
+title: "SHOW FILES Command"
+parent: "SQL Commands"
+---
+The SHOW FILES command provides a quick report of the file systems that are
+visible to Drill for query purposes. This command is unique to Apache Drill.
+
+## Syntax
+
+The SHOW FILES command supports the following syntax.
+
+ SHOW FILES [ FROM filesystem.directory_name | IN filesystem.directory_name ];
+
+The FROM or IN clause is required if you do not specify a default file system
+first. You can do this with the USE command. FROM and IN are synonyms.
+
+The directory name is optional. (If the directory name is a Drill reserved
+word, you must enclose the name in backticks.)
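+
+For example, if a directory is named `default`, which is a reserved word,
+quote the name with backticks:
+
+ SHOW FILES FROM dfs.`default`;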
+
+The command returns standard Linux `stat` information for each file or
+directory, such as permissions, owner, and group values. This information is
+not specific to Drill.
+
+## Examples
+
+The following example returns information about directories and files in the
+local (`dfs`) file system.
+
+ 0: jdbc:drill:> use dfs;
+
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'dfs' |
+ +------------+------------+
+ 1 row selected (0.318 seconds)
+
+ 0: jdbc:drill:> show files;
+ +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
+ | name | isDirectory | isFile | length | owner | group | permissions | accessTime | modificationTime |
+ +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
+ | user | true | false | 1 | mapr | mapr | rwxr-xr-x | 2014-07-30 21:37:06.0 | 2014-07-31 22:15:53.193 |
+ | backup.tgz | false | true | 36272 | root | root | rw-r--r-- | 2014-07-31 22:09:13.0 | 2014-07-31 22:09:13.211 |
+ | JSON | true | false | 1 | root | root | rwxr-xr-x | 2014-07-31 15:22:42.0 | 2014-08-04 15:43:07.083 |
+ | scripts | true | false | 3 | root | root | rwxr-xr-x | 2014-07-31 22:10:51.0 | 2014-08-04 18:23:09.236 |
+ | temp | true | false | 2 | root | root | rwxr-xr-x | 2014-08-01 20:07:37.0 | 2014-08-01 20:09:42.595 |
+ | hbase | true | false | 10 | mapr | mapr | rwxr-xr-x | 2014-07-30 21:36:08.0 | 2014-08-04 18:31:13.778 |
+ | tables | true | false | 0 | root | root | rwxrwxrwx | 2014-07-31 22:14:35.0 | 2014-08-04 15:42:43.415 |
+ | CSV | true | false | 4 | root | root | rwxrwxrwx | 2014-07-31 17:34:53.0 | 2014-08-04
+ ...
+
+The following example shows the files in a specific directory in the `dfs`
+file system:
+
+ 0: jdbc:drill:> show files in dfs.CSV;
+
+ +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
+ | name | isDirectory | isFile | length | owner | group | permissions | accessTime | modificationTime |
+ +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
+ | customers.csv | false | true | 62011 | root | root | rw-r--r-- | 2014-08-04 18:30:39.0 | 2014-08-04 18:30:39.314 |
+ | products.csv.small | false | true | 34972 | root | root | rw-r--r-- | 2014-07-31 23:58:42.0 | 2014-07-31 23:59:16.849 |
+ | products.csv | false | true | 34972 | root | root | rw-r--r-- | 2014-08-01 06:39:34.0 | 2014-08-04 15:58:09.325 |
+ | products.csv.bad | false | true | 62307 | root | root | rw-r--r-- | 2014-08-04 15:58:02.0 | 2014-08-04 15:58:02.612 |
+ +------------+-------------+------------+------------+------------+------------+-------------+------------+------------------+
+ 4 rows selected (0.165 seconds)
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/110-show-tables-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/110-show-tables-command.md b/_docs/sql-reference/sql-commands/110-show-tables-command.md
new file mode 100644
index 0000000..560bde4
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/110-show-tables-command.md
@@ -0,0 +1,136 @@
+---
+title: "SHOW TABLES Command"
+parent: "SQL Commands"
+---
+The SHOW TABLES command returns a list of views created within a schema. It
+also returns the tables that exist in Hive, HBase, and MapR-DB when you have
+these data sources configured as storage plugin instances. See [Storage Plugin
+Registration]({{ site.baseurl }}/docs/storage-plugin-registration).
+
+## Syntax
+
+The SHOW TABLES command supports the following syntax:
+
+ SHOW TABLES;
+
+## Usage Notes
+
+First issue the USE command to identify the schema for which you want to view
+tables or views. For example, the following USE statement tells Drill that you
+only want information from the `dfs.myviews` schema:
+
+ USE dfs.myviews;
+
+In this example, `myviews` is a workspace created within an instance of the
+`dfs` storage plugin.
+
+When you use a particular schema and then issue the SHOW TABLES command, Drill
+returns the tables and views within that schema.
+
+#### Limitations
+
+ * You can create and query tables within the file system; however, Drill does not return these tables when you issue the SHOW TABLES command. You can issue the [SHOW FILES]({{ site.baseurl }}/docs/show-files-command) command to see a list of all files, tables, and views, including those created in Drill.
+
+ * You cannot create Hive, HBase, or MapR-DB tables in Drill.
+
+## Examples
+
+The following examples demonstrate the steps that you can follow when you want
+to issue the SHOW TABLES command on the file system, Hive, and HBase.
+
+Complete the following steps to see views that exist in a file system and
+tables that exist in Hive and HBase data sources:
+
+ 1. Issue the SHOW SCHEMAS command to see a list of available schemas.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show schemas;
+ +-------------+
+ | SCHEMA_NAME |
+ +-------------+
+ | hive.default |
+ | dfs.reviews |
+ | dfs.flatten |
+ | dfs.default |
+ | dfs.root |
+ | dfs.logs |
+ | dfs.myviews |
+ | dfs.clicks |
+ | dfs.tmp |
+ | sys |
+ | hbase |
+ | INFORMATION_SCHEMA |
+ | s3.twitter |
+ | s3.reviews |
+ | s3.default |
+ +-------------+
+ 15 rows selected (0.072 seconds)
+
+ 2. Issue the USE command to switch to a particular schema. When you use a particular schema, Drill searches or queries within that schema only.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use dfs.myviews;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'dfs.myviews' |
+ +------------+------------+
+ 1 row selected (0.025 seconds)
+
+ 3. Issue the SHOW TABLES command to see the views or tables that exist within the workspace.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show tables;
+ +--------------+------------+
+ | TABLE_SCHEMA | TABLE_NAME |
+ +--------------+------------+
+ | dfs.myviews | logs_vw |
+ | dfs.myviews | customers_vw |
+ | dfs.myviews | s3_review_vw |
+ | dfs.myviews | clicks_vw |
+ | dfs.myviews | nestedclickview |
+ | dfs.myviews | s3_user_vw |
+ | dfs.myviews | s3_bus_vw |
+ +--------------+------------+
+ 7 rows selected (0.499 seconds)
+ 0: jdbc:drill:zk=drilldemo:5181>
+
+ 4. Switch to the Hive schema and issue the SHOW TABLES command to see the Hive tables that exist.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use hive;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'hive' |
+ +------------+------------+
+ 1 row selected (0.043 seconds)
+
+ 0: jdbc:drill:zk=drilldemo:5181> show tables;
+ +--------------+------------+
+ | TABLE_SCHEMA | TABLE_NAME |
+ +--------------+------------+
+ | hive.default | orders |
+ | hive.default | products |
+ +--------------+------------+
+ 2 rows selected (0.552 seconds)
+
+ 5. Switch to the HBase schema and issue the SHOW TABLES command to see the HBase tables that exist within the schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use hbase;
+ +------------+------------+
+ | ok | summary |
+ +------------+------------+
+ | true | Default schema changed to 'hbase' |
+ +------------+------------+
+ 1 row selected (0.043 seconds)
+
+
+ 0: jdbc:drill:zk=drilldemo:5181> show tables;
+ +--------------+------------+
+ | TABLE_SCHEMA | TABLE_NAME |
+ +--------------+------------+
+ | hbase | customers |
+ +--------------+------------+
+ 1 row selected (0.412 seconds)
+
+
+
+
+
http://git-wip-us.apache.org/repos/asf/drill/blob/76352670/_docs/sql-reference/sql-commands/120-use-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/120-use-command.md b/_docs/sql-reference/sql-commands/120-use-command.md
new file mode 100644
index 0000000..b1bc24a
--- /dev/null
+++ b/_docs/sql-reference/sql-commands/120-use-command.md
@@ -0,0 +1,170 @@
+---
+title: "USE Command"
+parent: "SQL Commands"
+---
+The USE command changes the schema context to the specified schema. When you
+issue the USE command to switch to a particular schema, Drill queries that
+schema only.
+
+## Syntax
+
+The USE command supports the following syntax:
+
+ USE schema_name;
+
+## Parameters
+
+_schema_name_
+A unique name for a Drill schema. A schema in Drill is a configured storage
+plugin, such as `hive`, or a storage plugin and workspace. For example, in
+`dfs.donuts`, `dfs` is an instance of the file system configured as a
+storage plugin, and `donuts` is a workspace configured to point to a directory
+within the file system. You can configure and use multiple storage plugins and
+workspaces in Drill. See [Storage Plugin Registration]({{ site.baseurl }}/docs/storage-plugin-registration) and
+[Workspaces]({{ site.baseurl }}/docs/workspaces).
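+
+For example, either of the following commands sets the schema context, using
+the `dfs` storage plugin and the `donuts` workspace described above:
+
+ USE dfs;
+ USE dfs.donuts;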
+
+## Usage Notes
+
+Issue the USE command to change to a particular schema. When you use a schema,
+you do not have to include the full path to a file or table in your query.
+
+For example, to query a file named `donuts.json` in the
+`/users/max/drill/json/` directory, you must include the full file path in
+your query if you do not use a defined workspace:
+
+ SELECT * FROM dfs.`/users/max/drill/json/donuts.json` WHERE type='frosted';
+
+If you create a workspace that points to the directory where the file is
+located and then use that schema, you can issue the query without explicitly
+stating the file path:
+
+ USE dfs.json;
+ SELECT * FROM `donuts.json` WHERE type='frosted';
+
+If you do not use a schema before querying a table, you must use absolute
+notation, such as `[schema.]table[.column]`, to query the table. If you switch
+to the schema where the table exists, you can specify just the table name in
+the query. For example, to query a table named `products` in the `hive`
+schema, tell Drill to use the hive schema and then issue your query with the
+table name only:
+
+ USE hive;
+ SELECT * FROM products LIMIT 5;
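+
+If you prefer not to change the schema context, the same table can be queried
+with its fully qualified name instead:
+
+ SELECT * FROM hive.products LIMIT 5;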
+
+Before you issue the USE command, you may want to run SHOW DATABASES or SHOW
+SCHEMAS to see a list of the configured storage plugins and workspaces.
+
+## Example
+
+This example demonstrates how to use a file system schema and a Hive schema to
+query a file and a table in Drill.
+
+Issue the SHOW DATABASES or SHOW SCHEMAS command to see a list of the
+available schemas that you can use. Both commands return the same results.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show schemas;
+ +--------------------+
+ | SCHEMA_NAME        |
+ +--------------------+
+ | hive.default       |
+ | dfs.reviews        |
+ | dfs.flatten        |
+ | dfs.default        |
+ | dfs.root           |
+ | dfs.logs           |
+ | dfs.myviews        |
+ | dfs.clicks         |
+ | dfs.tmp            |
+ | sys                |
+ | hbase              |
+ | INFORMATION_SCHEMA |
+ | s3.twitter         |
+ | s3.reviews         |
+ | s3.default         |
+ +--------------------+
+ 15 rows selected (0.059 seconds)
+
+
+Issue the USE command with the schema that you want Drill to query.
+**Note:** If you use any of the Drill default schemas, such as `cp.default` or `dfs.default`, do not include `.default` in the USE command. For example, if you want Drill to query files in its classpath, you can issue the following command:
+
+ 0: jdbc:drill:zk=local> use cp;
+ +-------+--------------------------------+
+ | ok    | summary                        |
+ +-------+--------------------------------+
+ | true  | Default schema changed to 'cp' |
+ +-------+--------------------------------+
+ 1 row selected (0.04 seconds)
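+
+With `cp` as the default schema, a file on the Drill classpath can be queried
+by name alone. For example, using the `employee.json` sample file that ships
+with Drill (assuming a default installation):
+
+ SELECT * FROM `employee.json` LIMIT 3;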
+
+Issue the USE command with a file system schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use dfs.logs;
+ +-------+--------------------------------------+
+ | ok    | summary                              |
+ +-------+--------------------------------------+
+ | true  | Default schema changed to 'dfs.logs' |
+ +-------+--------------------------------------+
+ 1 row selected (0.054 seconds)
+
+You can issue the SHOW FILES command to view the files and directories within
+the schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show files;
+ +------+-------------+--------+--------+-------+-------+-------------+-----------------------+-------------------------+
+ | name | isDirectory | isFile | length | owner | group | permissions | accessTime            | modificationTime        |
+ +------+-------------+--------+--------+-------+-------+-------------+-----------------------+-------------------------+
+ | csv  | true        | false  | 1      | mapr  | mapr  | rwxrwxr-x   | 2015-02-09 06:49:17.0 | 2015-02-09 06:50:11.172 |
+ | logs | true        | false  | 3      | mapr  | mapr  | rwxrwxr-x   | 2014-12-16 18:58:26.0 | 2014-12-16 18:58:27.223 |
+ +------+-------------+--------+--------+-------+-------+-------------+-----------------------+-------------------------+
+ 2 rows selected (0.156 seconds)
+
+Query a file or directory in the file system schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> select * from logs limit 5;
+ +------+------+----------+------------+----------+---------+--------+-------+---------+----------+---------+------------+
+ | dir0 | dir1 | trans_id | date       | time     | cust_id | device | state | camp_id | keywords | prod_id | purch_flag |
+ +------+------+----------+------------+----------+---------+--------+-------+---------+----------+---------+------------+
+ | 2014 | 8    | 24181    | 08/02/2014 | 09:23:52 | 0       | IOS5   | il    | 2       | wait     | 128     | false      |
+ | 2014 | 8    | 24195    | 08/02/2014 | 07:58:19 | 243     | IOS5   | mo    | 6       | hmm      | 107     | false      |
+ | 2014 | 8    | 24204    | 08/01/2014 | 12:10:27 | 12048   | IOS6   | il    | 1       | marge    | 324     | false      |
+ | 2014 | 8    | 24222    | 08/02/2014 | 16:28:37 | 2488    | IOS6   | pa    | 2       | to       | 391     | false      |
+ | 2014 | 8    | 24227    | 08/02/2014 | 07:14:00 | 154687  | IOS5   | wa    | 2       | on       | 376     | false      |
+ +------+------+----------+------------+----------+---------+--------+-------+---------+----------+---------+------------+
+
+Issue the USE command to switch to the hive schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> use hive;
+ +-------+----------------------------------+
+ | ok    | summary                          |
+ +-------+----------------------------------+
+ | true  | Default schema changed to 'hive' |
+ +-------+----------------------------------+
+ 1 row selected (0.093 seconds)
+
+Issue the SHOW TABLES command to see the tables that exist within the schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> show tables;
+ +--------------+------------+
+ | TABLE_SCHEMA | TABLE_NAME |
+ +--------------+------------+
+ | hive.default | orders     |
+ | hive.default | products   |
+ +--------------+------------+
+ 2 rows selected (0.421 seconds)
+
+Query a table within the schema.
+
+ 0: jdbc:drill:zk=drilldemo:5181> select * from products limit 5;
+ +---------+---------------------------------------------------+-----------+-------+
+ | prod_id | name                                              | category  | price |
+ +---------+---------------------------------------------------+-----------+-------+
+ | 0       | Sony notebook                                     | laptop    | 959   |
+ | 1       | #10-4 1/8 x 9 1/2 Premium Diagonal Seam Envelopes | Envelopes | 16    |
+ | 2       | #10- 4 1/8 x 9 1/2 Recycled Envelopes             | Envelopes | 9     |
+ | 3       | #10- 4 1/8 x 9 1/2 Security-Tint Envelopes        | Envelopes | 8     |
+ | 4       | #10 Self-Seal White Envelopes                     | Envelopes | 11    |
+ +---------+---------------------------------------------------+-----------+-------+
+ 5 rows selected (0.211 seconds)
+
+
+