Posted to commits@drill.apache.org by br...@apache.org on 2015/03/06 20:55:00 UTC

drill git commit: DRILL-2381 lexical structure plus fixes

Repository: drill
Updated Branches:
  refs/heads/gh-pages-master 2b7773de2 -> eca98c77b


DRILL-2381 lexical structure plus fixes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/eca98c77
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/eca98c77
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/eca98c77

Branch: refs/heads/gh-pages-master
Commit: eca98c77b00e5d16a86817feed93dece4426f5b8
Parents: 2b7773d
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu Mar 5 16:41:25 2015 -0800
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Fri Mar 6 11:50:48 2015 -0800

----------------------------------------------------------------------
 _docs/data-sources/003-parquet-ref.md  | 100 +++----
 _docs/data-sources/004-json-ref.md     | 410 ++++++++++++++--------------
 _docs/sql-ref/002-lexical-structure.md | 141 ++++++++++
 _docs/sql-ref/002-operators.md         |  70 -----
 _docs/sql-ref/003-functions.md         | 186 -------------
 _docs/sql-ref/003-operators.md         |  70 +++++
 _docs/sql-ref/004-functions.md         | 186 +++++++++++++
 _docs/sql-ref/004-nest-functions.md    |  10 -
 _docs/sql-ref/005-cmd-summary.md       |   9 -
 _docs/sql-ref/005-nest-functions.md    |  10 +
 _docs/sql-ref/006-cmd-summary.md       |   9 +
 _docs/sql-ref/006-reserved-wds.md      |  16 --
 _docs/sql-ref/007-reserved-wds.md      |  16 ++
 13 files changed, 674 insertions(+), 559 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/data-sources/003-parquet-ref.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources/003-parquet-ref.md b/_docs/data-sources/003-parquet-ref.md
index aa2ff11..71ed851 100644
--- a/_docs/data-sources/003-parquet-ref.md
+++ b/_docs/data-sources/003-parquet-ref.md
@@ -18,30 +18,26 @@ Apache Drill includes the following support for Parquet:
 * Generating Parquet files that have evolving or changing schemas and querying the data on the fly
 * Handling Parquet scalar and complex data types, such as maps and arrays
 
-### Reading and Writing Parquet Files
-When a read of Parquet data occurs, Drill loads only the necessary columns of data, which reduces I/O. Reading only a small piece of the Parquet data from a data file or table, Drill can examine and analyze all values for a column across multiple files.
-Parquet is the default storage format for a [Create Table As Select (CTAS)](/docs/create-table-as-ctas-command) command. You can create a Drill table from one format and store the data in another format, including Parquet.
+### Reading Parquet Files
+When Drill reads Parquet data, it loads only the necessary columns, which reduces I/O. Because Drill reads only a small piece of the Parquet data from a file or table, it can examine and analyze all values for a column across multiple files. You can create a Drill table from one format and store the data in another format, including Parquet.
 
-CTAS can use any data source provided by the storage plugin. 
+### Writing Parquet Files
+CTAS can use any data source provided by the storage plugin. To write Parquet data using the CTAS command, set the `store.format` session option as shown in the next section. Alternatively, configure the storage plugin to point to the directory containing the Parquet files.
 
-Parquet data generally resides in multiple files that resemble MapReduce output having numbered file names,  such as 0_0_0.parquet in a directory.
-
-To read Parquet data, point Drill to a single file or directory. Drill merges all files in a directory, including subdirectories, to create a single table.
-
-To write Parquet data using the CTAS command, set the session store.format option as shown in the next section. Alternatively, configure the storage plugin to point to the directory containing the Parquet files.
+Although the data resides in a single table, Parquet output generally consists of multiple files in a directory that resemble MapReduce output, with numbered file names such as 0_0_0.parquet.
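+
+For example, after the CTAS example later in this section creates the dfs.tmp.sampleparquet table, you can query the resulting directory of files as a single table:
+
+    SELECT * FROM dfs.tmp.`sampleparquet`;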
 
 ### Configuring the Parquet Storage Format
-The default file type for writing data to a workspace is Parquet. You can change the default by setting a different format in the storage plugin definition. Use the store.format option to set the CTAS output format of a Parquet row group at the session or system level.
+To read or write Parquet data, you need to include the Parquet format in the storage plugin format definitions. The `dfs` plugin definition includes the Parquet format. 
+
+Use the `store.format` option to set the CTAS output format of a Parquet row group at the session or system level.
 
 Use the ALTER command to set the `store.format` option.
          
         ALTER SESSION SET `store.format` = 'parquet';
         ALTER SYSTEM SET `store.format` = 'parquet';
         
-Parquet is also the default Drill format for reading. For example, if you query a JSON file, Drill attempts to read the file in Parquet format first.
-
 ### Configuring the Size of Parquet Files
-Configuring the size of Parquet files by setting the store.parquet.block-size can improve write performance. The block size is the size of MFS, HDFS, or the file system. 
+Configuring the size of Parquet files by setting the `store.parquet.block-size` option can improve write performance. The block size should match the block size of the underlying file system, such as MFS or HDFS. 
 
 The larger the block size, the more memory Drill needs for buffering data. Parquet files that contain a single block maximize the amount of data Drill stores contiguously on disk. Given a single row group per file, Drill stores the entire Parquet file onto the block, avoiding network I/O.
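+
+For example, a sketch that sets the block size to 512 MB at the session level (the value is illustrative; match it to your file system block size):
+
+    ALTER SESSION SET `store.parquet.block-size` = 536870912;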
 
@@ -66,45 +62,15 @@ The following general process converts a file from JSON to Parquet:
 This example demonstrates a storage plugin definition, a sample row of data from a JSON file, and a Drill query that writes the JSON input to Parquet output. 
 
 #### Storage Plugin Definition
-The following example storage plugin defines these options
-
-* A connection to the home directory of the file system "file:///" instead of another location, such as the mapr-fs. 
-* A workspace named "home" that represents a location in the file system connection.
-* The path to home: "/home/mapr" 
-* The writable option set to true, so Drill can write the Parquet output.
-* A default input format of null. An error occurs if you read a file having an ambiguous extension. 
-* The storage formats in which Drill writes the data. 
-
-        {
-	      "type": "file",
-	      "enabled": true,
-	      "connection": "file:///",
-		  "workspaces": {
-		    "home": {
-		      "location": "/home/mapr",
-		      "writable": true,
-		      "defaultInputFormat": null
-		    }
-		  },
-		  "formats": {
-		    "parquet": {
-		      "type": "parquet"
-		    },
-		    "json": {
-		      "type": "json"
-		    }
-		  }
-		}
-
-First, the example storage plugin definition allows Drill to write data to Parquet or JSON. Next, the CTAS query gets the JSON file from and writes the Parquet directory of files to the home directory.
+You can use the default dfs storage plugin installed with Drill for reading and writing Parquet files. The storage plugin definition must set the writable option of the workspace to true so that Drill can write the Parquet output. The dfs storage plugin defines the writable tmp workspace, which you can use in the CTAS command to create a Parquet table.
 
 #### Sample Row of JSON Data
 A JSON file contains data consisting of strings, typical of JSON data. The following example shows one row of the JSON file:
 
         {"trans_id":0,"date":"2013-07-26","time":"04:56:59","amount":80.5,"user_info":
-          {"cust_id":28,"device":"IOS5","state":"mt"
+          {"cust_id":28,"device":"WEARABLE2","state":"mt"
           },"marketing_info":
-            {"camp_id":4,"keywords":            ["go","to","thing","watch","made","laughing","might","pay","in","your","hold"]
+            {"camp_id":4,"keywords": ["go","to","thing","watch","made","laughing","might","pay","in","your","hold"]
             },
             "trans_info":
               {"prod_id":[16],
@@ -116,22 +82,38 @@ A JSON file contains data consisting of strings, typical of JSON data. The follo
 #### CTAS Query      
 The following example shows a CTAS query that creates a table from JSON data. The command casts the date, time, and amount strings to SQL types DATE, TIME, and DOUBLE. String-to-VARCHAR casting of the other strings occurs automatically.
 
-    CREATE TABLE home.sampleparquet AS 
-      (SELECT trans_id, 
-        cast(`date` AS date) transdate, 
-        cast(`time` AS time) transtime, 
-        cast(amount AS double) amount,`
-        user_info`,`marketing_info`, `trans_info` 
-        FROM home.`sample.json`);
+    CREATE TABLE dfs.tmp.sampleparquet AS 
+    (SELECT trans_id, 
+    cast(`date` AS date) transdate, 
+    cast(`time` AS time) transtime, 
+    cast(amount AS double) amount,
+    user_info, marketing_info, trans_info 
+    FROM dfs.`/Users/drilluser/sample.json`);
         
-The output is a Parquet file:
+The CTAS query did not specify a file name extension, so Drill used the default .parquet file name extension. The output is a Parquet file:
 
     +------------+---------------------------+
-	|  Fragment  | Number of records written |
-	+------------+---------------------------+
-	| 0_0        | 5                         |
-	+------------+---------------------------+
-	1 row selected (1.369 seconds)
+    |  Fragment  | Number of records written |
+    +------------+---------------------------+
+    | 0_0        | 5                         |
+    +------------+---------------------------+
+    1 row selected (1.369 seconds)
+
+You can query the Parquet file to verify that Drill now interprets the converted string as a date.
+
+    SELECT extract(year from transdate) AS `Year`, t.user_info.cust_id AS Customer 
+    FROM dfs.tmp.`sampleparquet` t;
+
+    +------------+------------+
+    |    Year    |  Customer  |
+    +------------+------------+
+    | 2013       | 28         |
+    | 2013       | 86623      |
+    | 2013       | 11         |
+    | 2013       | 666        |
+    | 2013       | 999        |
+    +------------+------------+
+    5 rows selected (0.039 seconds)
 
 For more examples of and information about using Parquet data, see ["Evolving Parquet as self-describing data format – New paradigms for consumerization of Hadoop data"](https://www.mapr.com/blog/evolving-parquet-self-describing-data-format-new-paradigms-consumerization-hadoop-data#.VNeqQbDF_8f).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/data-sources/004-json-ref.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources/004-json-ref.md b/_docs/data-sources/004-json-ref.md
index 6119e2e..f865a8a 100644
--- a/_docs/data-sources/004-json-ref.md
+++ b/_docs/data-sources/004-json-ref.md
@@ -14,7 +14,7 @@ Using Drill you can natively query dynamic JSON data sets using SQL. Drill treat
 
 Drill 0.8 and higher can [query compressed .gz files](/docs/drill-default-input-format#querying-compressed-json) having JSON as well as uncompressed .json files. 
 
-In addition to the examples presented later in this section, see "How to Analyze Highly Dynamic Datasets with Apache Drill" (https://www.mapr.com/blog/how-analyze-highly-dynamic-datasets-apache-drill) for information about how to analyze a JSON data set.
+In addition to the examples presented later in this section, see ["How to Analyze Highly Dynamic Datasets with Apache Drill"](https://www.mapr.com/blog/how-analyze-highly-dynamic-datasets-apache-drill) for information about how to analyze a JSON data set.
 
 ## Data Type Mapping
 JSON data consists of the following types:
@@ -28,7 +28,7 @@ JSON data consists of the following types:
 * Value: a string, number, true, false, null
 * Whitespace: used between tokens
 
-JSON data consists of the following types: 
+The following table shows SQL and JSON data mapping: 
 
 <table>
   <tr>
@@ -82,28 +82,23 @@ Drill reads tuples defined in single objects, having no comma between objects. A
     { name: "Apples", desc: "Delicious" }
     { name: "Oranges", desc: "Florida Navel" }
     
-To read and [analyze complex JSON](/docs/json-data-model#analyzing-json) files, use the FLATTEN and KVGEN functions. Observe the following guidelines when reading JSON files:
-
-* Avoid queries that return objects larger than ??MB (16?).
-  These queries might be far less performant than those that return smaller objects.
-* Avoid queries that return portions of objects beyond the ??MB threshold. (16?)
-  These queries might be far less performant than queries that return ports of objects within the threshold.
-
+To read and [analyze complex JSON](/docs/json-data-model#analyzing-json) files, use the FLATTEN and KVGEN functions. 
 
 ## Writing JSON
 You can write data from Drill to a JSON file. The following setup is required:
 
 * In the storage plugin definition, include a writable (mutable) workspace. For example:
 
-         {
-         . . .
-            "workspaces": {
-            . . .
-               "myjsonstore": {
-               "location": "/tmp",
-               "writable": true,
-            }
-       
+        {
+        . . .
+          "workspaces": {
+        . . .
+            "myjsonstore": {
+              "location": "/tmp",
+              "writable": true
+            }
+        . . .
+
 * Set the output format to JSON. For example:
 
         ALTER SESSION SET `store.format`='json';
@@ -119,10 +114,9 @@ Drill performs the following actions, as shown in the complete [CTAS command exa
 * Creates a directory using the table name.
 * Writes the JSON data to the directory in the workspace location, as in the sketch below.
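+
+A minimal sketch (the samplejson table name is hypothetical; the source path is reused from earlier examples):
+
+    ALTER SESSION SET `store.format` = 'json';
+
+    CREATE TABLE dfs.tmp.samplejson AS
+    SELECT * FROM dfs.`/Users/drilluser/sample.json`;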
 
-
 ## Analyzing JSON
 
-Generally, you query JSON files using the following syntax, which includes a table qualifier. The qualifier is typically required for querying complex data:
+Generally, you query JSON files using the following syntax, which includes a table alias. The alias is typically required for querying complex data:
 
 * Dot notation to drill down into a JSON map.
 
@@ -168,15 +162,15 @@ This example uses the following data that represents unit sales of tickets to ev
 Take a look at the data in Drill:
 
     SELECT * FROM dfs.`/Users/drilluser/ticket_sales.json`;
-	+------------+------------+------------+------------+------------+
-	|    type    |  channel   |   month    |    day     |   sales    |
-	+------------+------------+------------+------------+------------+
-	| ticket     | 123455     | 12         | ["15","25","28","31"] | {"NY":"532806","PA":"112889","TX":"898999","UT":"10875"} |
-	| ticket     | 123456     | 12         | ["10","15","19","31"] | {"NY":"972880","PA":"857475","CA":"87350","OR":"49999"} |
-	+------------+------------+------------+------------+------------+
-	2 rows selected (0.041 seconds)
-
-### Flatten JSON Data
+    +------------+------------+------------+------------+------------+
+    |    type    |  channel   |   month    |    day     |   sales    |
+    +------------+------------+------------+------------+------------+
+    | ticket     | 123455     | 12         | ["15","25","28","31"] | {"NY":"532806","PA":"112889","TX":"898999","UT":"10875"} |
+    | ticket     | 123456     | 12         | ["10","15","19","31"] | {"NY":"972880","PA":"857475","CA":"87350","OR":"49999"} |
+    +------------+------------+------------+------------+------------+
+    2 rows selected (0.041 seconds)
+
+### Flatten Arrays
 The flatten function breaks the following _day arrays from the JSON example file shown earlier into separate rows.
 
     "_day": [ 15, 25, 28, 31 ] 
@@ -184,32 +178,33 @@ The flatten function breaks the following _day arrays from the JSON example file
 
 Flatten the sales column of the ticket data onto separate rows, one row for each day in the array, for a better view of the data. Flatten copies the related sales data in the JSON object onto each row. Using the all (*) wildcard as the argument to flatten is not supported and returns an error.
 
-SELECT flatten(tkt._day) AS `day`, tkt.sales FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
+    SELECT flatten(tkt._day) AS `day`, tkt.sales FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
+
     +------------+------------+
-	|    day     |   sales    |
-	+------------+------------+
-	| 15         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-	| 25         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-	| 28         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-	| 31         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
-	| 10         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-	| 15         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-	| 19         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-	| 31         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
-	+------------+------------+
-	8 rows selected (0.072 seconds)
+    |    day     |   sales    |
+    +------------+------------+
+    | 15         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
+    | 25         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
+    | 28         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
+    | 31         | {"NY":532806,"PA":112889,"TX":898999,"UT":10875} |
+    | 10         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
+    | 15         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
+    | 19         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
+    | 31         | {"NY":972880,"PA":857475,"CA":87350,"OR":49999} |
+    +------------+------------+
+    8 rows selected (0.072 seconds)
 
 ### Generate Key/Value Pairs
 Use the kvgen (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data by state. For example purposes, take a look at how kvgen breaks the sales data into keys and values representing the states and number of tickets sold:
 
     SELECT kvgen(tkt.sales) AS state_sales FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
-	+-------------+
-	| state_sales |
-	+-------------+
-	| [{"key":"NY","value":532806},{"key":"PA","value":112889},{"key":"TX","value":898999},{"key":"UT","value":10875}] |
-	| [{"key":"NY","value":972880},{"key":"PA","value":857475},{"key":"CA","value":87350},{"key":"OR","value":49999}] |
-	+-------------+
-	2 rows selected (0.039 seconds)
+    +-------------+
+    | state_sales |
+    +-------------+
+    | [{"key":"NY","value":532806},{"key":"PA","value":112889},{"key":"TX","value":898999},{"key":"UT","value":10875}] |
+    | [{"key":"NY","value":972880},{"key":"PA","value":857475},{"key":"CA","value":87350},{"key":"OR","value":49999}] |
+    +-------------+
+    2 rows selected (0.039 seconds)
 
 The purpose of using the kvgen function is to allow queries against maps where the keys themselves represent data rather than a schema, as shown in the next example.
 
@@ -217,24 +212,25 @@ The purpose of using kvgen function is to allow queries against maps where the k
 
 `Flatten` breaks the list of key-value pairs into separate rows on which you can apply analytic functions. The flatten function takes a JSON array, such as the output from kvgen(sales), as an argument. Using the all (*) wildcard as the argument is not supported and returns an error.
 
-	SELECT flatten(kvgen(sales)) Revenue 
-	FROM dfs.`/Users/drilluser/drill/apache-drill-0.8.0-SNAPSHOT/ticket_sales.json`;
-	+--------------+
-	|   Revenue    |
-	+--------------+
-	| {"key":"12-10","value":532806} |
-	| {"key":"12-11","value":112889} |
-	| {"key":"12-19","value":898999} |
-	| {"key":"12-21","value":10875} |
-	| {"key":"12-10","value":87350} |
-	| {"key":"12-19","value":49999} |
-	| {"key":"12-21","value":857475} |
-	| {"key":"12-15","value":972880} |
-	+--------------+
-	8 rows selected (0.171 seconds)
+    SELECT flatten(kvgen(sales)) Sales 
+    FROM dfs.`/Users/drilluser/drill/apache-drill-0.8.0-SNAPSHOT/ticket_sales.json`;
+
+    +--------------------------------+
+    |           Sales                |
+    +--------------------------------+
+    | {"key":"12-10","value":532806} |
+    | {"key":"12-11","value":112889} |
+    | {"key":"12-19","value":898999} |
+    | {"key":"12-21","value":10875}  |
+    | {"key":"12-10","value":87350}  |
+    | {"key":"12-19","value":49999}  |
+    | {"key":"12-21","value":857475} |
+    | {"key":"12-15","value":972880} |
+    +--------------------------------+
+    8 rows selected (0.171 seconds)
 
 ### Example: Aggregate Loosely Structured Data
-Use flatten and kvgen together to analyze the data. Continuing with the previous example, make sure all text mode is set to false to sum numerical values. Drill returns an error if you attempt to sum data in in all text mode. 
+Use flatten and kvgen together to aggregate the data. Continuing with the previous example, make sure all text mode is set to false to sum numerical values. Drill returns an error if you attempt to sum data in all text mode. 
 
     ALTER SYSTEM SET `store.json.all_text_mode` = false;
     
@@ -243,11 +239,11 @@ Sum the ticket sales by combining the `sum`, `flatten`, and `kvgen` functions in
     SELECT SUM(tkt.tot_sales.`value`) AS TotalSales FROM (SELECT flatten(kvgen(sales)) tot_sales FROM dfs.`/Users/drilluser/ticket_sales.json`) tkt;
 
     +------------+
-	| TotalSales |
-	+------------+
-	| 3523273    |
-	+------------+
-	1 row selected (0.081 seconds)
+    | TotalSales |
+    +------------+
+    | 3523273    |
+    +------------+
+    1 row selected (0.081 seconds)
 
 ### Example: Aggregate and Sort Data
 Sum the ticket sales by state and group by state and sort in ascending order. 
@@ -259,74 +255,72 @@ Sum the ticket sales by state and group by state and sort in ascending order.
     GROUP BY `right`(tkt.tot_sales.key,2) 
     ORDER BY TotalSales;
 
-	+---------------+--------------+
-	| December_Date | Revenue      |
-	+---------------+--------------+
-	| 11            | 112889       |
-	| 10            | 620156       |
-	| 21            | 868350       |
-	| 19            | 948998       |
-	| 15            | 972880       |
-	+---------------+--------------+
-	5 rows selected (0.203 seconds)
-
-### Example: Analyze a Map Field in an Array
+    +---------------+--------------+
+    | December_Date |  TotalSales  |
+    +---------------+--------------+
+    | 11            | 112889       |
+    | 10            | 620156       |
+    | 21            | 868350       |
+    | 19            | 948998       |
+    | 15            | 972880       |
+    +---------------+--------------+
+    5 rows selected (0.203 seconds)
+
+### Example: Access a Map Field in an Array
 To access a map field in an array, use dot notation to drill down through the hierarchy of the JSON data to the field. Examples are based on the following [City Lots San Francisco in .json](https://github.com/zemirco/sf-city-lots-json), modified slightly as described in the empty array workaround in ["Limitations and Workarounds."](/docs/json-data-model#empty-array)
 
-{
-"type": "FeatureCollection",
-"features": [
-   { 
-   	 "type": "Feature", 
-     "properties": 
-     { 
-       "MAPBLKLOT": "0001001", 
-       "BLKLOT": "0001001", 
-       "BLOCK_NUM": "0001", 
-       "LOT_NUM": "001", 
-       "FROM_ST": "0", 
-       "TO_ST": "0", 
-       "STREET": "UNKNOWN", 
-       "ST_TYPE": null, 
-       "ODD_EVEN": "E" }, 
-       "geometry": 
-       { 
-          "type": "Polygon", 
-          "coordinates": 
-          [ [ 
-          [ -122.422003528252475, 37.808480096967251, 0.0 ], 
-          [ -122.422076013325281, 37.808835019815085, 0.0 ], 
-          [ -122.421102174348633, 37.808803534992904, 0.0 ], 
-          [ -122.421062569067274, 37.808601056818148, 0.0 ], 
-          [ -122.422003528252475, 37.808480096967251, 0.0 ] 
-          ] ] 
-       } 
-     },
-   { 
-      "type": "Feature", 
-   . . .
-
-This example shows you how to drill down using array notation plus dot notation in features[0].properties.MAPBLKLOT to get the MAPBLKLOT property value in the San Francisco city lots data:
-
-        SELECT features[0].properties.MAPBLKLOT, FROM dfs.`/Users/drilluser/citylots.json`;
-          
-        +------------+
-		|   EXPR$0   |
-		+------------+
-		| 0001001    |
-		+------------+
-		1 row selected (0.163 seconds)
-		
+    {
+      "type": "FeatureCollection",
+      "features": [
+      { 
+        "type": "Feature", 
+        "properties": 
+        { 
+          "MAPBLKLOT": "0001001", 
+          "BLKLOT": "0001001", 
+          "BLOCK_NUM": "0001", 
+          "LOT_NUM": "001", 
+          "FROM_ST": "0", 
+          "TO_ST": "0", 
+          "STREET": "UNKNOWN", 
+          "ST_TYPE": null, 
+          "ODD_EVEN": "E" }, 
+          "geometry": 
+        { 
+            "type": "Polygon", 
+            "coordinates": 
+            [ [ 
+            [ -122.422003528252475, 37.808480096967251, 0.0 ], 
+            [ -122.422076013325281, 37.808835019815085, 0.0 ], 
+            [ -122.421102174348633, 37.808803534992904, 0.0 ], 
+            [ -122.421062569067274, 37.808601056818148, 0.0 ], 
+            [ -122.422003528252475, 37.808480096967251, 0.0 ] 
+            ] ] 
+        }
+      },
+    . . .
+
+This example shows how to drill down using array notation plus dot notation in features[0].properties.MAPBLKLOT to get the MAPBLKLOT property value in the San Francisco city lots data:
+
+    SELECT features[0].properties.MAPBLKLOT FROM dfs.`/Users/drilluser/citylots.json`;
+
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 0001001    |
+    +------------+
+    1 row selected (0.163 seconds)
+
 To access the second geometry coordinate of the first city lot in the San Francisco city lots, use array indexing notation for the coordinates as well as the features:
-		
-		SELECT features[0].geometry.coordinates[0][1] 
-        FROM dfs.`/Users/drilluser/citylots.json`;
-		+------------+
-		|   EXPR$0   |
-		+------------+
-		| 37.80848009696725 |
-		+------------+
-		1 row selected (0.19 seconds)
+
+    SELECT features[0].geometry.coordinates[0][1] 
+    FROM dfs.`/Users/drilluser/citylots.json`;
+    +-------------------+
+    |      EXPR$0       |
+    +-------------------+
+    | 37.80848009696725 |
+    +-------------------+
+    1 row selected (0.19 seconds)
 
 More examples of drilling down into an array are shown in ["Selecting Nested Data for a Column"](/docs/query-3-selecting-nested-data-for-a-column). 
 
@@ -337,63 +331,63 @@ By flattening the following JSON file, which contains an array of maps, you can
 
     SELECT flat.fill FROM (SELECT flatten(t.fillings) AS fill FROM dfs.flatten.`test.json` t) flat WHERE flat.fill.cal  > 300;
 
-    +------------+
-	|    fill    |
-	+------------+
-	| {"name":"sugar","cal":500} |
-	+------------+
-	1 row selected (0.421 seconds)
+    +----------------------------+
+    |           fill             |
+    +----------------------------+
+    | {"name":"sugar","cal":500} |
+    +----------------------------+
+    1 row selected (0.421 seconds)
 
-Use a table qualifier for column fields and functions when working with complex data sets. Currently, you must use a subquery when operating on a flattened column. Eliminating the subquery and table qualifier in the WHERE clause, for example `flat.fillings[0].cal > 300`, does not evaluate all records of the flattened data against the predicate and produces the wrong results.
+Use a table alias for column fields and functions when working with complex data sets. Currently, you must use a subquery when operating on a flattened column. Eliminating the subquery and table alias in the WHERE clause, for example `flat.fillings[0].cal > 300`, does not evaluate all records of the flattened data against the predicate and produces the wrong results.
 
-### Example: Analyze Map Fields in a Map
+### Example: Access Map Fields in a Map
 This example uses a WHERE clause to drill down to a third level of the following JSON hierarchy to get the max_hdl less than 160:
 
-       {
-	    "SOURCE": "Allegheny County",
-	    "TIMESTAMP": 1366369334989,
-	    "birth": {
-	        "id": 35731300,
-	        "firstname": "Jane",
-	        "lastname": "Doe",
-	        "weight": "CATEGORY_1",
-	        "bearer": {
-	            "father": "John Doe",
-	            "ss": "208-55-5983",
-	            "max_ldl": 180,
-	            "max_hdl": 200
-	        }
-	    }
-	}
-{
-            "SOURCE": "Marin County",
-            "TIMESTAMP": 1366369334,
-            "birth": {
-                "id": 35731309,
-                "firstname": "Somporn",
-                "lastname": "Thongnopneua",
-                "weight": "CATEGORY_2",
-                "bearer": {
-                    "father": "Jeiranan Thongnopneua",
-                    "ss": "208-25-2223",
-                    "max_ldl": 110,
-                    "max_hdl": 150
-                }
-            }
+    {
+      "SOURCE": "Allegheny County",
+      "TIMESTAMP": 1366369334989,
+      "birth": {
+        "id": 35731300,
+        "firstname": "Jane",
+        "lastname": "Doe",
+        "weight": "CATEGORY_1",
+        "bearer": {
+          "father": "John Doe",
+          "ss": "208-55-5983",
+          "max_ldl": 180,
+          "max_hdl": 200
         }
+      }
+    }
+    {
+      "SOURCE": "Marin County",
+      "TIMESTAMP": 1366369334,
+        "birth": {
+          "id": 35731309,
+          "firstname": "Somporn",
+          "lastname": "Thongnopneua",
+          "weight": "CATEGORY_2",
+          "bearer": {
+            "father": "Jeiranan Thongnopneua",
+            "ss": "208-25-2223",
+            "max_ldl": 110,
+            "max_hdl": 150
+        }
+      }
+    }
 
 Use dot notation, for example `t.birth.lastname` and `t.birth.bearer.max_hdl`, to drill down to the nested level:
 
-SELECT t.birth.lastname AS Name, t.birth.weight AS Weight 
-FROM dfs.`Users/drilluser/vitalstat.json` t 
-WHERE t.birth.bearer.max_hdl < 160;
+    SELECT t.birth.lastname AS Name, t.birth.weight AS Weight 
+    FROM dfs.`Users/drilluser/vitalstat.json` t 
+    WHERE t.birth.bearer.max_hdl < 160;
 
-+------------+------------+
-|    Name    |   Weight   |
-+------------+------------+
-| Thongneoupeanu | CATEGORY_2 |
-+------------+------------+
-1 row selected (0.142 seconds)
+    +----------------+------------+
+    |      Name      |   Weight   |
+    +----------------+------------+
+    | Thongnopneua   | CATEGORY_2 |
+    +----------------+------------+
+    1 row selected (0.142 seconds)
 
 ## Limitations and Workarounds
 In most cases, you can use a workaround, presented in the following sections, to overcome the following limitations:
@@ -415,29 +409,29 @@ Workaround: Remove square brackets at the root of the object, as shown in the fo
 ![drill query flow]({{ site.baseurl }}/docs/img/datasources-json-bracket.png)
 
 ### Complex nested data
-Drill cannot read some complex nested arrays unless you use a table qualifier.
+Drill cannot read some complex nested arrays unless you use a table alias.
 
-Workaround: To query n-level nested data, use the table qualifier to remove ambiguity; otherwise, column names such as user_info are parsed as table names by the SQL parser. The qualifier is not needed for data that is not nested, as shown in the following example:
+Workaround: To query n-level nested data, use the table alias to remove ambiguity; otherwise, column names such as user_info are parsed as table names by the SQL parser. The alias is not needed for data that is not nested, as shown in the following example:
 
     {"dev_id": 0,
-	 "date":"07/26/2013",
-	 "time":"04:56:59",
-	 "user_info":
-	   {"user_id":28,
-	    "device":"A306",
-	    "state":"mt"
-	   },
-	   "marketing_info":
-	     {"promo_id":4,
-	      "keywords":  
-	       ["stay","to","think","watch","glasses",
-	         "joining","might","pay","in","your","buy"]
-	     },
-	     "dev_info":
-	       {"prod_id":[16],"purch_flag":"false"
-	       }
-	 }
-	. . .
+      "date":"07/26/2013",
+      "time":"04:56:59",
+      "user_info":
+        {"user_id":28,
+         "device":"A306",
+         "state":"mt"
+        },
+        "marketing_info":
+          {"promo_id":4,
+           "keywords":  
+            ["stay","to","think","watch","glasses",
+             "joining","might","pay","in","your","buy"]
+          },
+          "dev_info":
+            {"prod_id":[16],"purch_flag":"false"
+            }
+    }
+    . . .
 
     SELECT dev_id, `date`, `time`, t.user_info.user_id, t.user_info.device, t.dev_info.prod_id 
     FROM dfs.`/Users/mypath/example.json` t;
@@ -456,14 +450,12 @@ For example, you cannot query the [City Lots San Francisco in .json](https://git
 After removing the extraneous square brackets in the coordinates array, you can drill down to query all the data for the lots.
 
 ### Lengthy JSON objects
-<<Jason will try to provide some statement about limits.>>
+TBD statement about limits.
 
 ### Complex JSON objects
 Complex arrays and maps can be difficult or impossible to query.
 
-Workaround: 
-
-Separate lengthy objects into objects delimited by curly braces using the following functions:
+Workaround: Separate lengthy objects into objects delimited by curly braces using the following functions:
  
 [flatten](/docs/json-data-model#flatten-json-data) separates a set of nested JSON objects into individual rows in a Drill table.
 [kvgen](/docs/json-data-model#generate-key-value-pairs) separates objects having more elements than optimal for querying.
@@ -475,7 +467,7 @@ You cannot use reserved words for nested column names because Drill returns null
 
 For example, the following object contains the reserved word key, which you need to rename to `_key` or some other non-reserved word:
 
-{
+    {
       "type": "ticket",
       "channel": 123455,
       "_month": 12,
@@ -487,7 +479,7 @@ For example, the following object contains the reserved word key, which you need
         "UT": 10875
         "key": [ 78946, 39107, 76311 ]
       }
-}
+    }
 
 ### Schema changes
 Drill cannot read JSON files containing changes in the schema. For example, attempting to query an object having array elements of different data types causes an error:

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/002-lexical-structure.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/002-lexical-structure.md b/_docs/sql-ref/002-lexical-structure.md
new file mode 100644
index 0000000..971bb9f
--- /dev/null
+++ b/_docs/sql-ref/002-lexical-structure.md
@@ -0,0 +1,141 @@
+---
+title: "Lexical Structure"
+parent: "SQL Reference"
+---
+
+A SQL statement used in Drill can include the following parts:
+
+* [Storage plugin and workspace references](/docs/lexical-structure#storage-plugin-and-workspace-references)
+* Literal values
+
+  * [Boolean](/docs/lexical-structure#boolean)
+  * [Identifier](/docs/lexical-structure#identifier)
+  * [Integer](/docs/lexical-structure#integer)
+  * [Numeric constant](/docs/lexical-structure#numeric-constant)
+  * [String](/docs/lexical-structure#string)
+
+* Expressions, such as t.metric * 3.1415927
+* Functions, such as count(*)
+* Names of commands and clauses, such as `SELECT * FROM myfile WHERE a > b`.
+
+The upper/lowercase sensitivity of the parts differs.
+
+## Case-sensitivity
+
+SQL function and command names are case-insensitive. Storage plugin and workspace names are case-sensitive. Column and table names are case-insensitive unless enclosed in double quotation marks. To include a double quotation mark in such a name, escape it with another double quotation mark.
+
+Keywords are case-insensitive. For example, the keywords SELECT and select are equivalent. This document shows keywords in uppercase.
+
+The sys.options table name and values are case-sensitive. The following query works:
+
+    SELECT * FROM sys.options where NAME like '%parquet%';
+
+When using the ALTER command, specify the option name in lowercase. For example:
+
+    ALTER SESSION SET `store.parquet.compression` = 'snappy';
+
+## Storage Plugin and Workspace References
+
+Storage plugin and workspace names are case-sensitive. The case of a name used in a query must match the case of the name in the storage plugin definition. For example, defining a storage plugin named `dfs` and then referring to the plugin as `DFS` fails, but this query succeeds:
+
+    SELECT * FROM dfs.`/Users/drilluser/ticket_sales.json`;
+
+## Literal values
+
+This section describes how to construct literals.
+
+### Boolean
+Boolean values are true or false and are case-insensitive. Do not enclose the values in quotation marks.
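+
+For example, a sketch that filters on a boolean literal (the clicks.json file and clicked column are hypothetical):
+
+    SELECT * FROM dfs.`/Users/drilluser/clicks.json` t WHERE t.clicked = true;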
+
+### Identifier
+An identifier is a letter followed by any sequence of letters, digits, or the underscore. For example, names of tables, columns, and aliases are identifiers. Maximum length is 1024 characters. Enclose the following identifiers in back ticks:
+
+* Keywords
+* Identifiers that SQL cannot parse. 
+
+For example, you enclose the SQL keywords date and time in back ticks in the example in the section ["Example: Read JSON, Write Parquet"](/docs/parquet-format#example-read-json-write-parquet):
+
+    CREATE TABLE dfs.tmp.sampleparquet AS 
+    (SELECT trans_id, 
+    cast(`date` AS date) transdate, 
+    cast(`time` AS time) transtime, 
+    cast(amount AS double) amount,
+    user_info, marketing_info, trans_info 
+    FROM dfs.`/Users/drilluser/sample.json`);
+
+Table and column names are case-insensitive. Use back ticks to enclose names that contain special characters. Special characters are those other than the 52 uppercase and lowercase Latin letters. For example, the space and @ characters are special characters. 
+
+The following example shows the keyword Year and the column alias "Customer Number" enclosed in back ticks. The column alias contains the special space character.
+
+    SELECT extract(year from transdate) AS `Year`, t.user_info.cust_id AS `Customer Number` FROM dfs.tmp.`sampleparquet` t;
+
+    +------------+-----------------+
+    |    Year    | Customer Number |
+    +------------+-----------------+
+    | 2013       | 28              |
+    | 2013       | 86623           |
+    | 2013       | 11              |
+    | 2013       | 666             |
+    | 2013       | 999             |
+    +------------+-----------------+
+    5 rows selected (0.051 seconds)
+
+### Integer
+An integer value consists of an optional minus sign, -, followed by one or more digits.
+
+### Numeric constant
+
+Numeric constants include integers, floats, and values in E notation.
+
+* Integers: one or more digits 0-9, optionally prefixed with a minus sign
+* Float: one or more digits, followed by a period (.), and one or more digits in the decimal places. An optional + sign is not supported. Digits are required both before and after the decimal point; for example, 0.52 and 52.0. 
+* E notation: Approximate-value numeric literals in scientific notation consist of a mantissa and exponent. Either or both parts can be signed. For example: 1.2E3, 1.2E-3, -1.2E3, -1.2E-3. Values consist of an optional negative sign (using -), a floating point number, letters e or E, a positive or negative sign (+ or -), and an integer exponent. For example, the following JSON file has data in E notation in two records.
+
+        {"trans_id":0,
+         "date":"2013-07-26",
+         "time":"04:56:59",
+         "amount":-2.6034345E+38,
+         "trans_info":{"prod_id":[16],
+         "purch_flag":"false"
+        }}
+
+        {"trans_id":1,
+         "date":"2013-05-16",
+         "time":"07:31:54",
+         "amount":1.8887898E+38,
+         "trans_info":{"prod_id":[],
+         "purch_flag":"false"
+        }}
+  Aggregating the data in Drill produces scientific notation in the output:
+
+        SELECT sum(amount) FROM dfs.`/Users/khahn/Documents/sample2.json`;
+
+        +--------------+
+        |    EXPR$0    |
+        +--------------+
+        | -7.146447E37 |
+        +--------------+
+        1 row selected (0.044 seconds)
+
+Drill represents invalid values, such as the square root of a negative number, as NaN.
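+
+For example, a sketch using the sample2.json file shown above, in which one amount is negative:
+
+    SELECT sqrt(amount) FROM dfs.`/Users/khahn/Documents/sample2.json` WHERE amount < 0;
+
+The square root of the negative amount appears as NaN in the result.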
+
+### String
+Strings are characters enclosed in single quotation marks. To use a single quotation mark itself (apostrophe) in a string, escape it using a single quotation mark. For example, the value Martha's Vineyard in the SOURCE column in the `vitalstat.json` file contains an apostrophe:
+
+    +-------------------+
+    |      SOURCE       |
+    +-------------------+
+    | Martha's Vineyard |
+    | Monroe County     |
+    +-------------------+
+    2 rows selected (0.053 seconds)
+
+To refer to the string Martha's Vineyard in a query, enclose the string in single quotation marks and escape the apostrophe with a second single quotation mark:
+
+    SELECT * FROM dfs.`/Users/drilluser/vitalstat.json` t 
+    WHERE t.source = 'Martha''s Vineyard';
+

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/002-operators.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/002-operators.md b/_docs/sql-ref/002-operators.md
deleted file mode 100644
index 074375a..0000000
--- a/_docs/sql-ref/002-operators.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title: "Operators"
-parent: "SQL Reference"
----
-You can use various types of operators in your Drill queries to perform
-operations on your data.
-
-## Logical Operators
-
-You can use the following logical operators in your Drill queries:
-
-  * AND
-  * BETWEEN
-  * IN
-  * LIKE
-  * NOT
-  * OR 
-
-## Comparison Operators
-
-You can use the following comparison operators in your Drill queries:
-
-  * <
-  * \>
-  * <=
-  * \>=
-  * =
-  * <>
-  * IS NULL
-  * IS NOT NULL
-  * IS FALSE 
-  * IS NOT FALSE
-  * IS TRUE 
-  * IS NOT TRUE
-
-## Pattern Matching Operators
-
-You can use the following pattern matching operators in your Drill queries:
-
-  * LIKE
-  * NOT LIKE
-  * SIMILAR TO
-  * NOT SIMILAR TO
-
-## Math Operators
-
-You can use the following math operators in your Drill queries:
-
-**Operator**| **Description**  
----|---  
-+| Addition  
--| Subtraction  
-*| Multiplication  
-/| Division  
-  
-## Subquery Operators
-
-You can use the following subquery operators in your Drill queries:
-
-  * EXISTS
-  * IN
-
-See [SELECT Statements](/docs/select-statements).
-
-## String Operators
-
-You can use the following string operators in your Drill queries:
-
-  * string || string
-  * string || non-string or non-string || string
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/003-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/003-functions.md b/_docs/sql-ref/003-functions.md
deleted file mode 100644
index 98ce701..0000000
--- a/_docs/sql-ref/003-functions.md
+++ /dev/null
@@ -1,186 +0,0 @@
----
-title: "SQL Functions"
-parent: "SQL Reference"
----
-You can use the following types of functions in your Drill queries:
-
-  * Math Functions
-  * String Functions
-  * Date/Time Functions
-  * Data Type Formatting Functions
-  * Aggregate Functions
-  * Aggregate Statistics Functions
-  * Convert and Cast Functions
-  * Nested Data Functions
-
-## Math
-
-You can use the following scalar math functions in your Drill queries:
-
-  * ABS
-  * CEIL
-  * CEILING
-  * DIV
-  * FLOOR
-  * MOD
-  * POWER 
-  * RANDOM
-  * ROUND
-  * SIGN
-  * SQRT
-  * TRUNC
-
-## String Functions
-
-The following table provides the string functions that you can use in your
-Drill queries:
-
-Function| Return Type  
---------|---  
-char_length(string) or character_length(string)| int  
-concat(str "any" [, str "any" [, ...] ])| text
-convert_from(string bytea, src_encoding name)| text 
-convert_to(string text, dest_encoding name)| bytea
-initcap(string)| text
-left(str text, n int)| text
-length(string)| int
-length(string bytes, encoding name )| int
-lower(string)| text
-lpad(string text, length int [, fill text])| text
-ltrim(string text [, characters text])| text
-position(substring in string)| int
-regexp_replace(string text, pattern text, replacement text [, flags text])|text
-replace(string text, from text, to text)| text
-right(str text, n int)| text
-rpad(string text, length int [, fill text])| text
-rtrim(string text [, characters text])| text
-strpos(string, substring)| int
-substr(string, from [, count])| text
-substring(string [from int] [for int])| text
-trim([leading | trailing | both] [characters] from string)| text
-upper(string)| text
-  
-  
-## Date/Time Functions
-
-The following table provides the date/time functions that you can use in your
-Drill queries:
-
-**Function**| **Return Type**  
----|---  
-current_date| date  
-current_time| time with time zone  
-current_timestamp| timestamp with time zone  
-date_add(date,interval expr type)| date/datetime  
-date_part(text, timestamp)| double precision  
-date_part(text, interval)| double precision  
-date_sub(date,INTERVAL expr type)| date/datetime  
-extract(field from interval)| double precision  
-extract(field from timestamp)| double precision  
-localtime| time  
-localtimestamp| timestamp  
-now()| timestamp with time zone  
-timeofday()| text  
-  
-## Data Type Formatting Functions
-
-The following table provides the data type formatting functions that you can
-use in your Drill queries:
-
-**Function**| **Return Type**  
----|---  
-to_char(timestamp, text)| text  
-to_char(int, text)| text  
-to_char(double precision, text)| text  
-to_char(numeric, text)| text  
-to_date(text, text)| date  
-to_number(text, text)| numeric  
-to_timestamp(text, text)| timestamp with time zone  
-to_timestamp(double precision)| timestamp with time zone  
-  
-## Aggregate Functions
-
-The following table provides the aggregate functions that you can use in your
-Drill queries:
-
-**Function** | **Argument Type** | **Return Type**  
-  --------   |   -------------   |   -----------
-avg(expression)| smallint, int, bigint, real, double precision, numeric, or interval| numeric for any integer-type argument, double precision for a floating-point argument, otherwise the same as the argument data type
-count(*)| _-_| bigint
-count([DISTINCT] expression)| any| bigint
-max(expression)| any array, numeric, string, or date/time type| same as argument type
-min(expression)| any array, numeric, string, or date/time type| same as argument type
-sum(expression)| smallint, int, bigint, real, double precision, numeric, or interval| bigint for smallint or int arguments, numeric for bigint arguments, double precision for floating-point arguments, otherwise the same as the argument data type
-  
-  
-## Aggregate Statistics Functions
-
-The following table provides the aggregate statistics functions that you can use in your Drill queries:
-
-**Function**| **Argument Type**| **Return Type**
-  --------  |   -------------  |   -----------
-stddev(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-stddev_pop(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-stddev_samp(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-variance(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-var_pop(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-var_samp(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
-  
-  
-## Convert and Cast Functions
-
-You can use the CONVERT_TO and CONVERT_FROM functions to encode and decode
-data when you query your data sources with Drill. For example, HBase stores
-data as encoded byte arrays (VARBINARY data). When you issue a query with the
-CONVERT_FROM function on HBase, Drill decodes the data and converts it to the
-specified data type. In instances where Drill sends data back to HBase during
-a query, you can use the CONVERT_TO function to change the data type to bytes.
-
-Although you can achieve the same results by using the CAST function for some
-data types (such as VARBINARY to VARCHAR conversions), in general it is more
-efficient to use CONVERT functions when your data sources return binary data.
-When your data sources return more conventional data types, you can use the
-CAST function.
-
-The following table provides the data types that you use with the CONVERT_TO
-and CONVERT_FROM functions:
-
-**Type**| **Input Type**| **Output Type**  
----|---|---  
-BOOLEAN_BYTE| bytes(1)| boolean  
-TINYINT_BE| bytes(1)| tinyint  
-TINYINT| bytes(1)| tinyint  
-SMALLINT_BE| bytes(2)| smallint  
-SMALLINT| bytes(2)| smallint  
-INT_BE| bytes(4)| int  
-INT| bytes(4)| int  
-BIGINT_BE| bytes(8)| bigint  
-BIGINT| bytes(8)| bigint  
-FLOAT| bytes(4)| float (float4)  
-DOUBLE| bytes(8)| double (float8)  
-INT_HADOOPV| bytes(1-9)| int  
-BIGINT_HADOOPV| bytes(1-9)| bigint  
-DATE_EPOCH_BE| bytes(8)| date  
-DATE_EPOCH| bytes(8)| date  
-TIME_EPOCH_BE| bytes(8)| time  
-TIME_EPOCH| bytes(8)| time  
-UTF8| bytes| varchar  
-UTF16| bytes| var16char  
-UINT8| bytes(8)| uint8  
-  
-A common use case for CONVERT_FROM is when a data source embeds complex data
-inside a column. For example, you may have an HBase or MapR-DB table with
-embedded JSON data:
-
-    select CONVERT_FROM(col1, 'JSON') 
-    FROM hbase.table1
-    ...
-
-## Nested Data Functions
-
-This section contains descriptions of SQL functions that you can use to
-analyze nested data:
-
-  * [FLATTEN Function](/docs/flatten-function)
-  * [KVGEN Function](/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/003-operators.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/003-operators.md b/_docs/sql-ref/003-operators.md
new file mode 100644
index 0000000..074375a
--- /dev/null
+++ b/_docs/sql-ref/003-operators.md
@@ -0,0 +1,70 @@
+---
+title: "Operators"
+parent: "SQL Reference"
+---
+You can use various types of operators in your Drill queries to perform
+operations on your data.
+
+## Logical Operators
+
+You can use the following logical operators in your Drill queries:
+
+  * AND
+  * BETWEEN
+  * IN
+  * LIKE
+  * NOT
+  * OR 
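+
+For example, a sketch combining several of these operators (the orders.json file and its columns are hypothetical):
+
+    SELECT * FROM dfs.`/tmp/orders.json`
+    WHERE amount BETWEEN 10 AND 100 AND status NOT LIKE 'cancel%';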
+
+## Comparison Operators
+
+You can use the following comparison operators in your Drill queries:
+
+  * <
+  * \>
+  * <=
+  * \>=
+  * =
+  * <>
+  * IS NULL
+  * IS NOT NULL
+  * IS FALSE 
+  * IS NOT FALSE
+  * IS TRUE 
+  * IS NOT TRUE
+
+## Pattern Matching Operators
+
+You can use the following pattern matching operators in your Drill queries:
+
+  * LIKE
+  * NOT LIKE
+  * SIMILAR TO
+  * NOT SIMILAR TO
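+
+For example, a sketch (the names.json file and name column are hypothetical):
+
+    SELECT name FROM dfs.`/tmp/names.json`
+    WHERE name LIKE 'Jo%' OR name SIMILAR TO 'J(oh|a)n';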
+
+## Math Operators
+
+You can use the following math operators in your Drill queries:
+
+**Operator**| **Description**  
+---|---  
++| Addition  
+-| Subtraction  
+*| Multiplication  
+/| Division  
+  
+## Subquery Operators
+
+You can use the following subquery operators in your Drill queries:
+
+  * EXISTS
+  * IN
+
+See [SELECT Statements](/docs/select-statements).
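+
+For example, a sketch using IN with a subquery (both files and their columns are hypothetical):
+
+    SELECT o.trans_id FROM dfs.`/tmp/orders.json` o
+    WHERE o.cust_id IN (SELECT c.cust_id FROM dfs.`/tmp/customers.json` c WHERE c.state = 'ca');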
+
+## String Operators
+
+You can use the following string operators in your Drill queries:
+
+  * string || string
+  * string || non-string or non-string || string
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/004-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/004-functions.md b/_docs/sql-ref/004-functions.md
new file mode 100644
index 0000000..c405a58
--- /dev/null
+++ b/_docs/sql-ref/004-functions.md
@@ -0,0 +1,186 @@
+---
+title: "SQL Functions"
+parent: "SQL Reference"
+---
+You can use the following types of functions in your Drill queries:
+
+  * Math Functions
+  * String Functions
+  * Date/Time Functions
+  * Data Type Formatting Functions
+  * Aggregate Functions
+  * Aggregate Statistics Functions
+  * Convert and Cast Functions
+  * Nested Data Functions
+
+## Math
+
+You can use the following scalar math functions in your Drill queries:
+
+  * ABS
+  * CEIL
+  * CEILING
+  * DIV
+  * FLOOR
+  * MOD
+  * POWER 
+  * RANDOM
+  * ROUND
+  * SIGN
+  * SQRT
+  * TRUNC
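+
+For example, a sketch applying a few of these functions; the sys.options system table serves only to supply a row:
+
+    SELECT abs(-5) abs_val, mod(8, 3) mod_val, sqrt(16) sqrt_val FROM sys.options LIMIT 1;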
+
+## String Functions
+
+The following table provides the string functions that you can use in your
+Drill queries:
+
+Function| Return Type  
+--------|---  
+char_length(string) or character_length(string)| int  
+concat(str "any" [, str "any" [, ...] ])| text
+convert_from(string text, src_encoding name)| text 
+convert_to(string text, dest_encoding name)| bytea
+initcap(string)| text
+left(str text, n int)| text
+length(string)| int
+length(string bytes, encoding name )| int
+lower(string)| text
+lpad(string text, length int [, fill text])| text
+ltrim(string text [, characters text])| text
+position(substring in string)| int
+regexp_replace(string text, pattern text, replacement text [, flags text])|text
+replace(string text, from text, to text)| text
+right(str text, n int)| text
+rpad(string text, length int [, fill text])| text
+rtrim(string text [, characters text])| text
+strpos(string, substring)| int
+substr(string, from [, count])| text
+substring(string [from int] [for int])| text
+trim([leading \| trailing \| both] [characters] from string)| text
+upper(string)| text
+  
+  
+## Date/Time Functions
+
+The following table provides the date/time functions that you can use in your
+Drill queries:
+
+**Function**| **Return Type**  
+---|---  
+current_date| date  
+current_time| time with time zone  
+current_timestamp| timestamp with time zone  
+date_add(date,interval expr type)| date/datetime  
+date_part(text, timestamp)| double precision  
+date_part(text, interval)| double precision  
+date_sub(date,INTERVAL expr type)| date/datetime  
+extract(field from interval)| double precision  
+extract(field from timestamp)| double precision  
+localtime| time  
+localtimestamp| timestamp  
+now()| timestamp with time zone  
+timeofday()| text  
+  
+## Data Type Formatting Functions
+
+The following table provides the data type formatting functions that you can
+use in your Drill queries:
+
+**Function**| **Return Type**  
+---|---  
+to_char(timestamp, text)| text  
+to_char(int, text)| text  
+to_char(double precision, text)| text  
+to_char(numeric, text)| text  
+to_date(text, text)| date  
+to_number(text, text)| numeric  
+to_timestamp(text, text)| timestamp with time zone  
+to_timestamp(double precision)| timestamp with time zone  
+  
+## Aggregate Functions
+
+The following table provides the aggregate functions that you can use in your
+Drill queries:
+
+**Function** | **Argument Type** | **Return Type**  
+  --------   |   -------------   |   -----------
+avg(expression)| smallint, int, bigint, real, double precision, numeric, or interval| numeric for any integer-type argument, double precision for a floating-point argument, otherwise the same as the argument data type
+count(*)| _-_| bigint
+count([DISTINCT] expression)| any| bigint
+max(expression)| any array, numeric, string, or date/time type| same as argument type
+min(expression)| any array, numeric, string, or date/time type| same as argument type
+sum(expression)| smallint, int, bigint, real, double precision, numeric, or interval| bigint for smallint or int arguments, numeric for bigint arguments, double precision for floating-point arguments, otherwise the same as the argument data type
+  
+  
+## Aggregate Statistics Functions
+
+The following table provides the aggregate statistics functions that you can use in your Drill queries:
+
+**Function**| **Argument Type**| **Return Type**
+  --------  |   -------------  |   -----------
+stddev(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+stddev_pop(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+stddev_samp(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+variance(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+var_pop(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+var_samp(expression)| smallint, int, bigint, real, double precision, or numeric| double precision for floating-point arguments, otherwise numeric
+  
+  
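+For example, the following query computes sample and population measures over
+the same hypothetical `salary` column used above:
+
+    SELECT STDDEV_SAMP(salary) AS sd_samp,
+           STDDEV_POP(salary) AS sd_pop,
+           VARIANCE(salary) AS var_samp
+    FROM dfs.`/tmp/emp.json`;
+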
+## Convert and Cast Functions
+
+You can use the CONVERT_TO and CONVERT_FROM functions to encode and decode
+data when you query your data sources with Drill. For example, HBase stores
+data as encoded byte arrays (VARBINARY data). When you issue a query with the
+CONVERT_FROM function on HBase, Drill decodes the data and converts it to the
+specified data type. In instances where Drill sends data back to HBase during
+a query, you can use the CONVERT_TO function to change the data type to bytes.
+
+Although you can achieve the same results by using the CAST function for some
+data types (such as VARBINARY to VARCHAR conversions), in general it is more
+efficient to use CONVERT functions when your data sources return binary data.
+When your data sources return more conventional data types, you can use the
+CAST function.
+
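+For example, both of the following queries decode a binary HBase row key into
+readable text; the `students` table name is hypothetical:
+
+    -- decode with CONVERT_FROM, preferred for binary sources
+    SELECT CONVERT_FROM(row_key, 'UTF8') AS studentid FROM hbase.students;
+    -- the same conversion expressed with CAST
+    SELECT CAST(row_key AS VARCHAR(20)) AS studentid FROM hbase.students;
+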
+The following table provides the data types that you use with the CONVERT_TO
+and CONVERT_FROM functions:
+
+**Type**| **Input Type**| **Output Type**  
+---|---|---  
+BOOLEAN_BYTE| bytes(1)| boolean  
+TINYINT_BE| bytes(1)| tinyint  
+TINYINT| bytes(1)| tinyint  
+SMALLINT_BE| bytes(2)| smallint  
+SMALLINT| bytes(2)| smallint  
+INT_BE| bytes(4)| int  
+INT| bytes(4)| int  
+BIGINT_BE| bytes(8)| bigint  
+BIGINT| bytes(8)| bigint  
+FLOAT| bytes(4)| float (float4)  
+DOUBLE| bytes(8)| double (float8)  
+INT_HADOOPV| bytes(1-9)| int  
+BIGINT_HADOOPV| bytes(1-9)| bigint  
+DATE_EPOCH_BE| bytes(8)| date  
+DATE_EPOCH| bytes(8)| date  
+TIME_EPOCH_BE| bytes(8)| time  
+TIME_EPOCH| bytes(8)| time  
+UTF8| bytes| varchar  
+UTF16| bytes| var16char  
+UINT8| bytes(8)| uint8  
+  
+A common use case for CONVERT_FROM is when a data source embeds complex data
+inside a column. For example, you may have an HBase or MapR-DB table with
+embedded JSON data:
+
+    SELECT CONVERT_FROM(col1, 'JSON')
+    FROM hbase.table1
+    ...
+
+## Nested Data Functions
+
+This section contains descriptions of SQL functions that you can use to
+analyze nested data:
+
+  * [FLATTEN Function](/docs/flatten-function)
+  * [KVGEN Function](/docs/kvgen-function)
+  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/004-nest-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/004-nest-functions.md b/_docs/sql-ref/004-nest-functions.md
deleted file mode 100644
index c6e7ff2..0000000
--- a/_docs/sql-ref/004-nest-functions.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Nested Data Functions"
-parent: "SQL Reference"
----
-This section contains descriptions of SQL functions that you can use to
-analyze nested data:
-
-  * [FLATTEN Function](/docs/flatten-function)
-  * [KVGEN Function](/docs/kvgen-function)
-  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/005-cmd-summary.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/005-cmd-summary.md b/_docs/sql-ref/005-cmd-summary.md
deleted file mode 100644
index 0a08af6..0000000
--- a/_docs/sql-ref/005-cmd-summary.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: "SQL Commands Summary"
-parent: "SQL Reference"
----
-The following table provides a list of the SQL commands that Drill supports,
-with their descriptions and example syntax:
-
-<table ><tbody><tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" >ALTER SESSION</td><td valign="top" >Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options" rel="nofollow">Planning and Execution Options</a>.</td><td valign="top" ><code>ALTER SESSION SET `option_name`='string';<br />ALTER SESSION SET `option_name`=TRUE | FALSE;</code></td></tr><tr><td valign="top" >ALTER SYSTEM</td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options/" rel="nofollow">Planning and Execution Options</a>.</td><td valign="top" ><code>ALTER SYSTEM `option_name`='string'<br />ALTER SYSTEM `option_name`=TRUE | FALSE;</code></td></tr><tr><td valign="top" ><a href="/docs
 /create-table-as-ctas-command">CREATE TABLE AS<br />(CTAS)</a></p></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains parquet or CSV files. Each workspace in a file system has a default file type.<p>You can specify which writer you want Drill to use when creating a table: parquet, CSV, or JSON (as specified with <span style="line-height: 1.4285715;">the </span><code>store.format</code><span style="line-height: 1.4285715;"> option<span><span>).</span></span></span></p></td><td valign="top" ><code>CREATE TABLE new_table_name AS &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" >CREATE VIEW</td><td valign="top" colspan="1" >Creates a new view based on the results of a SELECT query.</td><td valign="top" colspan="1" ><code>CREATE VIEW view_name [(column_list)] AS &lt;query&gt;;</cod
 e></td></tr><tr><td valign="top" colspan="1" >DROP VIEW</td><td valign="top" colspan="1" >Removes one or more views.</td><td valign="top" colspan="1" ><code>DROP VIEW view_name [, <em class="replaceable">view_name</em>] ...;     </code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/explain-commands" rel="nofollow">EXPLAIN PLAN FOR</a></td><td valign="top" colspan="1" >Returns the physical plan for a particular query.</td><td valign="top" colspan="1" ><code>EXPLAIN PLAN FOR &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/explain-commands/" rel="nofollow">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" colspan="1" >Returns the logical plan for a particular query.</td><td valign="top" colspan="1" ><code>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/select-statements" rel="nofollow">SELECT</a></td><td valign="top" colspan="1" >Retrieves data from tables 
 and files.</td><td valign="top" colspan="1" ><code>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</code></td></tr><tr><td valign="top" colspan="1" >SHOW DATABASES</td><td valign="top" colspan="1" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" colspan="1" ><code>SHOW DATABASES;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/show-files-command/" rel="nofollow">SHOW FILES</a></td><td valign="top" colspan="1" >Returns a list of files in a file system schema.</td><td valign="top" colspan="1" ><code>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</code></td></tr><tr><td valign="top" colspan="1" >SHOW SCHEMAS</td><td valign="top" colspan="1" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" colspan="1" ><code>SHOW SCHEMAS;</code></td></tr><tr><td valign="top" 
 colspan="1" >SHOW TABLES</td><td valign="top" colspan="1" >Returns a list of tables for all schemas. Optionally, you can first issue the <code>USE </code>command to identify the schema for which you want to view tables.<br />For example, the following <code>USE</code> statement tells Drill that you only want information from the <code>hive.default</code> schema:<br /><code>USE hive.`default`;</code></td><td valign="top" colspan="1" ><code>SHOW TABLES;</code></td></tr><tr><td valign="top" colspan="1" >USE</td><td valign="top" colspan="1" >Change to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" colspan="1" ><code>USE schema_name;</code></td></tr></tbody></table></div>  
-  

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/005-nest-functions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/005-nest-functions.md b/_docs/sql-ref/005-nest-functions.md
new file mode 100644
index 0000000..c6e7ff2
--- /dev/null
+++ b/_docs/sql-ref/005-nest-functions.md
@@ -0,0 +1,10 @@
+---
+title: "Nested Data Functions"
+parent: "SQL Reference"
+---
+This section contains descriptions of SQL functions that you can use to
+analyze nested data:
+
+  * [FLATTEN Function](/docs/flatten-function)
+  * [KVGEN Function](/docs/kvgen-function)
+  * [REPEATED_COUNT Function](/docs/repeated-count-function)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/006-cmd-summary.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/006-cmd-summary.md b/_docs/sql-ref/006-cmd-summary.md
new file mode 100644
index 0000000..0a08af6
--- /dev/null
+++ b/_docs/sql-ref/006-cmd-summary.md
@@ -0,0 +1,9 @@
+---
+title: "SQL Commands Summary"
+parent: "SQL Reference"
+---
+The following table provides a list of the SQL commands that Drill supports,
+with their descriptions and example syntax:
+
+<table ><tbody><tr><th >Command</th><th >Description</th><th >Syntax</th></tr><tr><td valign="top" >ALTER SESSION</td><td valign="top" >Changes a system setting for the duration of a session. A session ends when you quit the Drill shell. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options" rel="nofollow">Planning and Execution Options</a>.</td><td valign="top" ><code>ALTER SESSION SET `option_name`='string';<br />ALTER SESSION SET `option_name`=TRUE | FALSE;</code></td></tr><tr><td valign="top" >ALTER SYSTEM</td><td valign="top" >Permanently changes a system setting. The new settings persist across all sessions. For a list of Drill options and their descriptions, refer to <a href="/docs/planning-and-execution-options/" rel="nofollow">Planning and Execution Options</a>.</td><td valign="top" ><code>ALTER SYSTEM SET `option_name`='string';<br />ALTER SYSTEM SET `option_name`=TRUE | FALSE;</code></td></tr><tr><td valign="top" ><a href="/docs/create-table-as-ctas-command">CREATE TABLE AS<br />(CTAS)</a></td><td valign="top" >Creates a new table and populates the new table with rows returned from a SELECT query. Use the CREATE TABLE AS (CTAS) statement in place of INSERT INTO. When you issue the CTAS command, you create a directory that contains Parquet or CSV files. Each workspace in a file system has a default file type.<p>You can specify which writer you want Drill to use when creating a table: Parquet, CSV, or JSON (as specified with the <code>store.format</code> option).</p></td><td valign="top" ><code>CREATE TABLE new_table_name AS &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" >CREATE VIEW</td><td valign="top" colspan="1" >Creates a new view based on the results of a SELECT query.</td><td valign="top" colspan="1" ><code>CREATE VIEW view_name [(column_list)] AS &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" >DROP VIEW</td><td valign="top" colspan="1" >Removes one or more views.</td><td valign="top" colspan="1" ><code>DROP VIEW view_name [, <em class="replaceable">view_name</em>] ...;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/explain-commands" rel="nofollow">EXPLAIN PLAN FOR</a></td><td valign="top" colspan="1" >Returns the physical plan for a particular query.</td><td valign="top" colspan="1" ><code>EXPLAIN PLAN FOR &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/explain-commands/" rel="nofollow">EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR</a></td><td valign="top" colspan="1" >Returns the logical plan for a particular query.</td><td valign="top" colspan="1" ><code>EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR &lt;query&gt;;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/select-statements" rel="nofollow">SELECT</a></td><td valign="top" colspan="1" >Retrieves data from tables and files.</td><td valign="top" colspan="1" ><code>[WITH subquery]<br />SELECT column_list FROM table_name <br />[WHERE clause]<br />[GROUP BY clause]<br />[HAVING clause]<br />[ORDER BY clause];</code></td></tr><tr><td valign="top" colspan="1" >SHOW DATABASES</td><td valign="top" colspan="1" >Returns a list of available schemas. Equivalent to SHOW SCHEMAS.</td><td valign="top" colspan="1" ><code>SHOW DATABASES;</code></td></tr><tr><td valign="top" colspan="1" ><a href="/docs/show-files-command/" rel="nofollow">SHOW FILES</a></td><td valign="top" colspan="1" >Returns a list of files in a file system schema.</td><td valign="top" colspan="1" ><code>SHOW FILES IN filesystem.`schema_name`;<br />SHOW FILES FROM filesystem.`schema_name`;</code></td></tr><tr><td valign="top" colspan="1" >SHOW SCHEMAS</td><td valign="top" colspan="1" >Returns a list of available schemas. Equivalent to SHOW DATABASES.</td><td valign="top" colspan="1" ><code>SHOW SCHEMAS;</code></td></tr><tr><td valign="top" colspan="1" >SHOW TABLES</td><td valign="top" colspan="1" >Returns a list of tables for all schemas. Optionally, you can first issue the <code>USE </code>command to identify the schema for which you want to view tables.<br />For example, the following <code>USE</code> statement tells Drill that you only want information from the <code>hive.default</code> schema:<br /><code>USE hive.`default`;</code></td><td valign="top" colspan="1" ><code>SHOW TABLES;</code></td></tr><tr><td valign="top" colspan="1" >USE</td><td valign="top" colspan="1" >Changes to a particular schema. When you opt to use a particular schema, Drill issues queries on that schema only.</td><td valign="top" colspan="1" ><code>USE schema_name;</code></td></tr></tbody></table>
+  
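+For example, the following pair of commands sets the session output format and
+then creates a table from a query. This is a minimal sketch: it assumes a
+writable `dfs.tmp` workspace and a hypothetical source file.
+
+    ALTER SESSION SET `store.format`='parquet';
+    CREATE TABLE dfs.tmp.sample_table AS
+    SELECT * FROM dfs.`/tmp/sample.json`;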

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/006-reserved-wds.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/006-reserved-wds.md b/_docs/sql-ref/006-reserved-wds.md
deleted file mode 100644
index d3cd280..0000000
--- a/_docs/sql-ref/006-reserved-wds.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Reserved Keywords"
-parent: "SQL Reference"
----
-When you use a reserved keyword in a Drill query, enclose the word in
-backticks. For example, if you issue the following query to Drill,  
-you must include backticks around the word TABLES because TABLES is a reserved
-keyword:
-
-``SELECT * FROM INFORMATION_SCHEMA.`TABLES`;``
-
-The following table provides the Drill reserved keywords that require back
-ticks:
-
-<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br 
 />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<b
 r />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br />OUTER<br />OVER<
 br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQRT<br />START<br /
 >STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table></div>
-

http://git-wip-us.apache.org/repos/asf/drill/blob/eca98c77/_docs/sql-ref/007-reserved-wds.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/007-reserved-wds.md b/_docs/sql-ref/007-reserved-wds.md
new file mode 100644
index 0000000..d3cd280
--- /dev/null
+++ b/_docs/sql-ref/007-reserved-wds.md
@@ -0,0 +1,16 @@
+---
+title: "Reserved Keywords"
+parent: "SQL Reference"
+---
+When you use a reserved keyword in a Drill query, enclose the word in
+backticks. For example, if you issue the following query to Drill,  
+you must include backticks around the word TABLES because TABLES is a reserved
+keyword:
+
+``SELECT * FROM INFORMATION_SCHEMA.`TABLES`;``
+
+The following table provides the Drill reserved keywords that require
+backticks:
+
+<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<br />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br />OUTER<br />OVER<br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQRT<br />START<br />STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table>
+