Posted to commits@drill.apache.org by br...@apache.org on 2015/04/13 23:30:24 UTC

[1/2] drill git commit: DRILL-2736

Repository: drill
Updated Branches:
  refs/heads/gh-pages c8a79a519 -> d3328217e


http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/functions/003-date-time-fcns.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/003-date-time-fcns.md b/_docs/sql-ref/functions/003-date-time-fcns.md
index 71a0e12..97267a3 100644
--- a/_docs/sql-ref/functions/003-date-time-fcns.md
+++ b/_docs/sql-ref/functions/003-date-time-fcns.md
@@ -3,44 +3,26 @@ title: "Date/Time Functions and Arithmetic"
 parent: "SQL Functions"
 ---
 
-In addition to the TO_DATE, TO_TIME, and TO_TIMESTAMP functions, Drill supports a number of other date/time functions and arithmetic operators for use with dates, times, and intervals. The following table lists date/time functions described in this section:
+In addition to the TO_DATE, TO_TIME, and TO_TIMESTAMP functions, Drill supports a number of other date/time functions and arithmetic operators for use with dates, times, and intervals. Drill supports time functions based on the Gregorian calendar and in the range 1971 to 2037.
+
+This section defines the following date/time functions:
 
 **Function**| **Return Type**  
 ---|---  
 [AGE(TIMESTAMP)](/docs/date-time-functions-and-arithmetic#age)| INTERVAL
-[CURRENT_DATE](/docs/date-time-functions-and-arithmetic#current_date)| DATE  
-[CURRENT_TIME](/docs/date-time-functions-and-arithmetic#current_time)| TIME   
-[CURRENT_TIMESTAMP](/docs/date-time-functions-and-arithmetic#current_timestamp)| TIMESTAMP 
-[DATE_ADD(DATE,INTERVAL expr type)](/docs/date-time-functions-and-arithmetic#date_add)| date/datetime  
-[DATE_PART(text, time_expression)](/docs/date-time-functions-and-arithmetic#date_part)| double precision  
-[DATE_SUB(DATE,INTERVAL expr type)](/docs/date-time-functions-and-arithmetic#date_sub)| date/datetime  
-[EXTRACT(field from time_expression)](/docs/date-time-functions-and-arithmetic#extract)| double precision   
-[LOCALTIME](/docs/date-time-functions-and-arithmetic#localtime)| TIME  
-[LOCALTIMESTAMP](/docs/date-time-functions-and-arithmetic#localtimestamp)| TIMESTAMP  
-[NOW()](/docs/date-time-functions-and-arithmetic#now)| TIMESTAMP  
-[TIMEOFDAY()](/docs/date-time-functions-and-arithmetic#timeofday)| text  
-
-## Date/Time Functions and Utilities
-
-The following functions perform date/time-related operations:
-
-* AGE
-* EXTRACT
-* DATE_ADD
-* DATE_PART
-* DATE_SUB
-
-Drill supports the following utilities:
-
-* CURRENT_DATE
-* CURRENT_TIME
-* CURRENT_TIMESTAMP
-* LOCALTIME
-* LOCALTIMESTAMP
-* NOW
-* TIMEOFDAY
-
-### AGE
+[EXTRACT(field from time_expression)](/docs/date-time-functions-and-arithmetic#extract)| double precision
+[CURRENT_DATE](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| DATE  
+[CURRENT_TIME](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| TIME   
+[CURRENT_TIMESTAMP](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| TIMESTAMP 
+[DATE_ADD](/docs/date-time-functions-and-arithmetic#date_add)| date/datetime  
+[DATE_PART](/docs/date-time-functions-and-arithmetic#date_part)| double precision  
+[DATE_SUB](/docs/date-time-functions-and-arithmetic#date_sub)| date/datetime     
+[LOCALTIME](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| TIME  
+[LOCALTIMESTAMP](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| TIMESTAMP  
+[NOW](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| TIMESTAMP  
+[TIMEOFDAY](/docs/date-time-functions-and-arithmetic#current_*x*-local*x*-now-and-timeofday)| text  
+
+## AGE
 Returns the interval between two timestamps or subtracts a timestamp from midnight of the current date.
 
 #### Syntax
@@ -76,92 +58,96 @@ Find the interval between 11:10:10 PM on January 1, 2001 and 10:10:10 PM on Janu
 
 For information about how to read the interval data, see the [Interval section](/docs/date-time-and-timestamp#interval).
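The AGE example above (11:10:10 PM minus 10:10:10 PM on January 1, 2001) boils down to plain timestamp subtraction; a minimal stand-alone sketch in Python, as an illustration rather than Drill itself:

```python
from datetime import datetime

# AGE(t1, t2) returns the signed interval t1 - t2; here, one hour.
later = datetime(2001, 1, 1, 23, 10, 10)
earlier = datetime(2001, 1, 1, 22, 10, 10)
print(later - earlier)  # 1:00:00
```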
 
-### EXTRACT
-
-Returns a component of a timestamp, time, date, or interval.
+### DATE_ADD
+Returns the sum of a date and a number of days, or of a date/time and an interval.
 
 #### Syntax
 
-    EXTRACT (expression);
+    DATE_ADD(date literal_date, integer);
 
-*expression* is:
+    DATE_ADD(keyword literal, interval expr); 
 
-    component FROM (timestamp | time | date | interval)
+*date* is the keyword date.  
+*literal_date* is a date in yyyy-mm-dd format enclosed in single quotation marks.  
+*integer* is a number of days to add to the date/time.  
 
-*component* is a year, month, day, hour, minute, or second value.
+
+*keyword* is the word date, time, or timestamp.  
+*literal* is a date, time, or timestamp literal.  
+*interval* is a keyword.  
+*expr* is an interval expression.  
 
 #### Examples
 
-On the third day of the month, run the following function:
+Add two days to the date May 15, 2015.
 
-    SELECT EXTRACT(day FROM NOW()), EXTRACT(day FROM CURRENT_DATE) FROM sys.version;
-
-    +------------+------------+
-    |   EXPR$0   |   EXPR$1   |
-    +------------+------------+
-    | 3          | 3          |
-    +------------+------------+
-    1 row selected (0.208 seconds)
-
-At 8:00 am, extract the hour from the value of CURRENT_DATE.
+    SELECT DATE_ADD(date '2015-05-15', 2) from sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-05-17 |
+    +------------+
+    1 row selected (0.07 seconds)
 
-    SELECT EXTRACT(hour FROM CURRENT_DATE) FROM sys.version;
+Add two months to April 15, 2015.
 
+    SELECT DATE_ADD(date '2015-04-15', interval '2' month) FROM sys.version;
     +------------+
     |   EXPR$0   |
     +------------+
-    | 8          |
+    | 2015-06-15 00:00:00.0 |
     +------------+
+    1 row selected (0.073 seconds)
 
-What is the hour component of this time: 17:12:28.5?
-
-    SELECT EXTRACT(hour FROM TIME '17:12:28.5') from sys.version;
+Add 10 hours to the timestamp 2015-04-15 22:55:55.
 
+    SELECT DATE_ADD(timestamp '2015-04-15 22:55:55', interval '10' hour) FROM sys.version;
     +------------+
     |   EXPR$0   |
     +------------+
-    | 17         |
+    | 2015-04-16 08:55:55.0 |
     +------------+
-    1 row selected (0.056 seconds)
-
-What is the second component of this timestamp: 2001-02-16 20:38:40
+    1 row selected (0.068 seconds)
 
-    SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40') from sys.version;
+Add 10 hours to the time 22 hours, 55 minutes, 55 seconds.
 
+    SELECT DATE_ADD(time '22:55:55', interval '10' hour) FROM sys.version;
     +------------+
     |   EXPR$0   |
     +------------+
-    | 40.0       |
+    | 08:55:55   |
     +------------+
-    1 row selected (0.062 seconds)
+    1 row selected (0.085 seconds)
 
-### DATE_ADD
-Returns the sum of a date and an interval.
-
-#### Syntax
+Add 1 year and 2 months to the timestamp 2015-04-15 22:55:55.
 
-    DATE_ADD(date, interval);
+    SELECT DATE_ADD(timestamp '2015-04-15 22:55:55', interval '1-2' year to month) FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2016-06-15 22:55:55.0 |
+    +------------+
+    1 row selected (0.065 seconds)
 
-#### Example
+Add 1 day, 2 1/2 hours, and 45.1 seconds to the time 22:55:55.
 
-    SELECT CAST(DATE_ADD(datetype(2008, 2, 27), intervaltype(0, 1, 0, 0, 0, 0, 0)) as VARCHAR(100)) FROM sys.version;
+    SELECT DATE_ADD(time '22:55:55', interval '1 2:30:45.100' day to second) FROM sys.version;
     +------------+
     |   EXPR$0   |
     +------------+
-    | 2008-03-27 00:00:00.000 |
+    | 01:26:40.100 |
     +------------+
-    1 row selected (0.247 seconds)
+    1 row selected (0.07 seconds)
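The examples above mix two kinds of arithmetic: day and hour intervals are fixed-length, while month intervals need calendar-aware math. A minimal stand-alone sketch in Python of the same arithmetic (not Drill's implementation; `add_months` is a hypothetical helper that ignores end-of-month clamping):

```python
from datetime import datetime, timedelta

# Fixed-length intervals (days, hours) map directly onto timedelta.
ts = datetime(2015, 4, 15, 22, 55, 55)
print(ts + timedelta(hours=10))  # 2015-04-16 08:55:55

def add_months(dt, months):
    # Hypothetical helper: month intervals need calendar math, since
    # timedelta has no notion of a month; keeps the day-of-month as-is.
    total = dt.month - 1 + months
    return dt.replace(year=dt.year + total // 12, month=total % 12 + 1)

print(add_months(datetime(2015, 4, 15), 2).date())  # 2015-06-15
print(add_months(ts, 14))  # interval '1-2' year to month: 2016-06-15 22:55:55
```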
 
 ### DATE_PART
 Returns a field of a date, time, timestamp, or interval.
 
 #### Syntax 
 
-    date_part(component, expression);
+    date_part(keyword, expression);
 
-*component* is year, month, day, hour, minute, second, enclosed in single quotation marks.
-
-*expression* is date, time, timestamp, or interval enclosed in single quotation marks.
+*keyword* is year, month, day, hour, minute, or second enclosed in single quotation marks.  
+*expression* is date, time, timestamp, or interval literal enclosed in single quotation marks.
 
 #### Usage Notes
 Use Unix Epoch timestamp in milliseconds as the expression to get the field of a timestamp.
@@ -205,24 +191,89 @@ Return the day part of the one year, 2 months, 10 days interval.
     1 row selected (0.069 seconds)
 
 ### DATE_SUB
-Returns the sum of a date and an interval.
+Returns the difference between a date and a number of days, or between a date/time and an interval.
 
 #### Syntax
 
-    DATE_SUB(date, interval);
+    DATE_SUB(date literal_date, integer);
+
+    DATE_SUB(keyword literal, interval expr); 
+
+*date* is the keyword date.  
+*literal_date* is a date in yyyy-mm-dd format enclosed in single quotation marks.  
+*integer* is a number of days to subtract from the date/time.  
+
+
+*keyword* is the word date, time, or timestamp.  
+*literal* is a date, time, or timestamp literal.  
+*interval* is a keyword.  
+*expr* is an interval expression.
+
+#### Examples
+
+Subtract two days from the date May 15, 2015.
+
+    SELECT DATE_SUB(date '2015-05-15', 2) from sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-05-13 |
+    +------------+
+    1 row selected (0.088 seconds)
 
-#### Example
+Subtract two months from April 15, 2015.
 
-    SELECT CAST(DATE_SUB(datetype(2008, 2, 27), intervaltype(0, 1, 0, 0, 0, 0, 0)) as VARCHAR(100)) FROM sys.version;
+    SELECT DATE_SUB(date '2015-04-15', interval '2' month) FROM sys.version;
     +------------+
     |   EXPR$0   |
     +------------+
-    | 2008-01-27 |
+    | 2015-02-15 |
     +------------+
-    1 row selected (0.199 seconds)
+    1 row selected (0.088 seconds)
 
-### Date/Time Utilities
-The utilities are:
+Subtract 10 hours from the timestamp 2015-04-15 22:55:55.
+
+    SELECT DATE_SUB(timestamp '2015-04-15 22:55:55', interval '10' hour) FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2015-04-15 12:55:55.0 |
+    +------------+
+    1 row selected (0.068 seconds)
+
+Subtract 10 hours from the time 22 hours, 55 minutes, 55 seconds.
+
+    SELECT DATE_SUB(time '22:55:55', interval '10' hour) FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 12:55:55   |
+    +------------+
+    1 row selected (0.079 seconds)
+
+Subtract 1 year and 2 months from the timestamp 2015-04-15 22:55:55.
+
+    SELECT DATE_SUB(timestamp '2015-04-15 22:55:55', interval '1-2' year to month) FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 2014-02-15 22:55:55.0 |
+    +------------+
+    1 row selected (0.073 seconds)
+
+Subtract 1 day, 2 1/2 hours, and 45.1 seconds from the time 22:55:55.
+
+    SELECT DATE_SUB(time '22:55:55', interval '1 2:30:45.100' day to second) FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 20:25:09.900 |
+    +------------+
+    1 row selected (0.073 seconds)
+
+### CURRENT_*x*, LOCAL*x*, NOW, and TIMEOFDAY
+
+The following examples show how to use these functions:
 
 * CURRENT_DATE
 * CURRENT_TIME
@@ -232,8 +283,6 @@ The utilities are:
 * NOW
 * TIMEOFDAY
 
-The following examples show how to use the utilities:
-
     SELECT CURRENT_DATE FROM sys.version;
     +--------------+
     | current_date |
@@ -304,8 +353,71 @@ If you did not set up Drill for UTC time, TIMEOFDAY returns the local date and t
     +------------+
     1 row selected (1.199 seconds)
 
+### EXTRACT
+
+Returns a component of a timestamp, time, date, or interval.
+
+#### Syntax
+
+    EXTRACT (extract_expression);
+
+*extract_expression* is:
+
+    component FROM (timestamp | time | date | interval)
+
+*component* is a supported time unit.
+
+#### Usage Notes
+
+The EXTRACT function supports the following time units: YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND.
+
+#### Examples
+
+On the third day of the month, run the following function:
+
+    SELECT EXTRACT(day FROM NOW()), EXTRACT(day FROM CURRENT_DATE) FROM sys.version;
+
+    +------------+------------+
+    |   EXPR$0   |   EXPR$1   |
+    +------------+------------+
+    | 3          | 3          |
+    +------------+------------+
+    1 row selected (0.208 seconds)
+
+At 8:00 am, extract the hour from the value of CURRENT_DATE.
+
+    SELECT EXTRACT(hour FROM CURRENT_DATE) FROM sys.version;
+
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 8          |
+    +------------+
+
+What is the hour component of this time: 17:12:28.5?
+
+    SELECT EXTRACT(hour FROM TIME '17:12:28.5') from sys.version;
+
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 17         |
+    +------------+
+    1 row selected (0.056 seconds)
+
+What is the second component of this timestamp: 2001-02-16 20:38:40?
+
+    SELECT EXTRACT(SECOND FROM TIMESTAMP '2001-02-16 20:38:40') from sys.version;
+
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 40.0       |
+    +------------+
+    1 row selected (0.062 seconds)
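The components these examples pull out correspond one-to-one to fields of the timestamp value; a quick stand-alone illustration in Python rather than Drill:

```python
from datetime import datetime

# EXTRACT(component FROM expr) reads a single field of the value.
ts = datetime(2001, 2, 16, 20, 38, 40)
print(ts.second)  # 40
print(ts.hour)    # 20
print(ts.year)    # 2001
```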
+
 
-### Date, Time, and Interval Arithmetic Functions
+## Date, Time, and Interval Arithmetic Functions
 <!-- date +/- integer
 date + interval  -->
 

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/functions/004-string.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/004-string.md b/_docs/sql-ref/functions/004-string.md
index 1367272..6e2b0cf 100644
--- a/_docs/sql-ref/functions/004-string.md
+++ b/_docs/sql-ref/functions/004-string.md
@@ -7,22 +7,22 @@ You can use the following string functions in Drill queries:
 
 Function| Return Type  
 --------|---  
-[BYTE_SUBSTR(string, start [, length])](/docs/string-manipulation#byte_substr)|byte array or text
-[CHAR_LENGTH(string) or character_length(string)](/docs/string-manipulation#char_length)| int  
-[CONCAT(str "any" [, str "any" [, ...] ])](/docs/string-manipulation#concat)| text
-[INITCAP(string)](/docs/string-manipulation#initcap)| text
-[LENGTH(string [, encoding name ])](/docs/string-manipulation#length)| int
-[LOWER(string)](/docs/string-manipulation#lower)| text
-[LPAD(string, length [, fill])](/docs/string-manipulation#lpad)| text
-[LTRIM(string [, characters])](/docs/string-manipulation#ltrim)| text
-[POSITION(substring in string)](/docs/string-manipulation#position)| int
-[REGEXP_REPLACE(string, pattern, replacement](/docs/string-manipulation#regexp_replace)|text
-[RPAD(string, length [, fill ])](/docs/string-manipulation#rpad)| text
-[RTRIM(string [, characters])](/docs/string-manipulation#rtrim)| text
-[STRPOS(string, substring)](/docs/string-manipulation#strpos)| int
-[SUBSTR(string, from [, count])](/docs/string-manipulation#substr)| text
-[TRIM([position_option] [characters] from string)](/docs/string-manipulation#trim)| text
-[UPPER(string)](/docs/string-manipulation#upper)| text
+[BYTE_SUBSTR](/docs/string-manipulation#byte_substr)|byte array or text
+[CHAR_LENGTH](/docs/string-manipulation#char_length)| int  
+[CONCAT](/docs/string-manipulation#concat)| text
+[INITCAP](/docs/string-manipulation#initcap)| text
+[LENGTH](/docs/string-manipulation#length)| int
+[LOWER](/docs/string-manipulation#lower)| text
+[LPAD](/docs/string-manipulation#lpad)| text
+[LTRIM](/docs/string-manipulation#ltrim)| text
+[POSITION](/docs/string-manipulation#position)| int
+[REGEXP_REPLACE](/docs/string-manipulation#regexp_replace)|text
+[RPAD](/docs/string-manipulation#rpad)| text
+[RTRIM](/docs/string-manipulation#rtrim)| text
+[STRPOS](/docs/string-manipulation#strpos)| int
+[SUBSTR](/docs/string-manipulation#substr)| text
+[TRIM](/docs/string-manipulation#trim)| text
+[UPPER](/docs/string-manipulation#upper)| text
 
 ## BYTE_SUBSTR
 Returns in binary format a substring of a string.
@@ -65,7 +65,10 @@ Returns the number of characters in a string.
 
 ### Syntax
 
-    ( CHAR_LENGTH | CHARACTER_LENGTH ) (string);
+    CHAR_LENGTH(string);
+
+### Usage Notes
+You can use the alias CHARACTER_LENGTH.
 
 ### Example
 
@@ -183,13 +186,13 @@ Pads the string to the length specified by prepending the fill or a space. Trunc
     1 row selected (0.112 seconds)
 
 ## LTRIM
-Removes the longest string having only characters specified in the second argument string from the beginning of the string.
+Removes any characters from the beginning of string1 that match the characters in string2. 
 
 ### Syntax
 
-    LTRIM(string, string);
+    LTRIM(string1, string2);
 
-### Example
+### Examples
 
     SELECT LTRIM('Apache Drill', 'Apache ') FROM sys.version;
 
@@ -200,6 +203,15 @@ Removes the longest string having only characters specified in the second argume
     +------------+
     1 row selected (0.131 seconds)
 
+    SELECT LTRIM('A powerful tool Apache Drill', 'Apache ') FROM sys.version;
+
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | owerful tool Apache Drill |
+    +------------+
+    1 row selected (0.07 seconds)
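The second argument is treated as a set of characters rather than a literal prefix, which is why the second example strips through the leading "A p" of "A powerful". Python's `str.lstrip` has the same set semantics; a stand-alone illustration (not Drill itself):

```python
# 'Apache ' acts as the character set {A, p, a, c, h, e, ' '}; stripping
# stops at the first leading character not in the set.
print('Apache Drill'.lstrip('Apache '))                  # Drill
print('A powerful tool Apache Drill'.lstrip('Apache '))  # owerful tool Apache Drill
```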
+
 ## POSITION
 Returns the location of a substring.
 
@@ -220,7 +232,7 @@ Returns the location of a substring.
 
 ## REGEXP_REPLACE
 
-Substitutes new text for substrings that match POSIX regular expression patterns.
+Substitutes new text for substrings that match [POSIX regular expression patterns](http://www.regular-expressions.info/posix.html).
 
 ### Syntax
 
@@ -234,39 +246,29 @@ Substitutes new text for substrings that match POSIX regular expression patterns
 
 ### Examples
 
-Flatten and replace a's with b's in this JSON data.
+Replace a's with b's in this string.
 
-    {"id":1,"strs":["abc","acd"]}
-    {"id":2,"strs":["ade","aef"]}
-
-    SELECT id, REGEXP_REPLACE(FLATTEN(strs), 'a','b') FROM tmp.`regex-flatten.json`;
+    SELECT REGEXP_REPLACE('abc, acd, ade, aef', 'a', 'b') FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | bbc, bcd, bde, bef |
+    +------------+
 
-    +------------+------------+
-    |     id     |   EXPR$1   |
-    +------------+------------+
-    | 1          | bbc        |
-    | 1          | bcd        |
-    | 2          | bde        |
-    | 2          | bef        |
-    +------------+------------+
-    4 rows selected (0.186 seconds)
 
-Use the regular expression a. in the same query to replace all a's and the subsequent character.
+Use the regular expression *a* followed by a period (.) in the same query to replace all a's and the subsequent character.
 
-    SELECT ID, REGEXP_REPLACE(FLATTEN(strs), 'a.','b') FROM tmp.`regex-flatten.json`;
+    SELECT REGEXP_REPLACE('abc, acd, ade, aef', 'a.','b') FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | bc, bd, be, bf |
+    +------------+
+    1 row selected (0.099 seconds)
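For these simple patterns, Python's `re.sub` (Perl-style rather than POSIX regular expressions, but equivalent here) reproduces both substitutions; a stand-alone check, not Drill itself:

```python
import re

# 'a' replaces each literal a; 'a.' also consumes the character after it.
print(re.sub('a', 'b', 'abc, acd, ade, aef'))   # bbc, bcd, bde, bef
print(re.sub('a.', 'b', 'abc, acd, ade, aef'))  # bc, bd, be, bf
```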
 
-    +------------+------------+
-    |     id     |   EXPR$1   |
-    +------------+------------+
-    | 1          | bc         |
-    | 1          | bd         |
-    | 2          | be         |
-    | 2          | bf         |
-    +------------+------------+
-    4 rows selected (0.132 seconds)
 
 ## RPAD
-Pads the string to the length specified by appending the fill or a space. Truncates the string if longer than the specified length.
+Pads the string to the length specified by appending the fill text, or spaces if you provide no fill text. Truncates the string if longer than the specified length.
 
 ### Syntax
 
@@ -283,13 +285,13 @@ Pads the string to the length specified by appending the fill or a space. Trunca
     1 row selected (0.15 seconds)
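The pad-then-truncate behavior can be sketched in a few lines of Python; this is an illustration under the assumption that the fill text repeats as needed, not Drill's implementation:

```python
def rpad(s, length, fill=' '):
    # Append the fill repeatedly, then cut to the target length;
    # a string already longer than length is simply truncated.
    return (s + fill * length)[:length]

print(rpad('Drill', 8, '!'))    # Drill!!!
print(rpad('Apache Drill', 6))  # Apache
```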
 
 ## RTRIM
-Removes the longest string having only characters specified in the second argument string from the end of the string.
+Removes any characters from the end of string1 that match the characters in string2.  
 
 ### Syntax
 
-    RTRIM(string, string);
+    RTRIM(string1, string2);
 
-### Example
+### Examples
 
     SELECT RTRIM('Apache Drill', 'Drill ') FROM sys.version;
 
@@ -300,6 +302,14 @@ Removes the longest string having only characters specified in the second argume
     +------------+
     1 row selected (0.135 seconds)
 
+    SELECT RTRIM('1.0 Apache Tomcat 1.0', 'Drill 1.0') from sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | 1.0 Apache Tomcat |
+    +------------+
+    1 row selected (0.088 seconds)
+
 ## STRPOS
 Returns the location of the substring in a string.
 
@@ -323,7 +333,10 @@ Extracts characters from position 1 - x of the string an optional y times.
 
 ### Syntax
 
-(SUBSTR | SUBSTRING)(string, x, y)
+SUBSTR(string, x, y)
+
+### Usage Notes
+You can use the alias SUBSTRING for this function.
 
 
 ### Example
@@ -347,11 +360,11 @@ Extracts characters from position 1 - x of the string an optional y times.
     1 row selected (0.129 seconds)
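The 1-based position logic maps onto Python slicing; a stand-alone sketch in which `substr` is an illustrative helper, not Drill code:

```python
def substr(s, x, y=None):
    # SQL positions are 1-based: take y characters starting at position x,
    # or the rest of the string when y is omitted.
    start = x - 1
    return s[start:] if y is None else s[start:start + y]

print(substr('Apache Drill', 8))     # Drill
print(substr('Apache Drill', 8, 3))  # Dri
```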
 
 ## TRIM
-Removes the longest string having only the characters from the beginning, end, or both ends of the string.
+Removes any characters from the beginning, end, or both sides of string2 that match the characters in string1.  
 
 ### Syntax
 
-    TRIM ([leading | trailing | both] [characters] from string)
+    TRIM ([leading | trailing | both] [string1] from string2)
 
 ### Example
 
@@ -363,6 +376,22 @@ Removes the longest string having only the characters from the beginning, end, o
     +------------+
     1 row selected (0.172 seconds)
 
+    SELECT TRIM(both 'l' from 'long live Drill') FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | ong live Dri |
+    +------------+
+    1 row selected (0.087 seconds)
+
+    SELECT TRIM(leading 'l' from 'long live Drill') FROM sys.version;
+    +------------+
+    |   EXPR$0   |
+    +------------+
+    | ong live Drill |
+    +------------+
+    1 row selected (0.077 seconds)
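The leading/trailing/both options map onto Python's `lstrip`/`rstrip`/`strip`, which likewise treat the trim argument as a set of characters; a stand-alone illustration rather than Drill itself:

```python
# Each variant stops at the first character not in the trim set.
s = 'long live Drill'
print(s.strip('l'))   # ong live Dri
print(s.lstrip('l'))  # ong live Drill
print(s.rstrip('l'))  # long live Dri
```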
+
 ## UPPER
 Converts characters in the string to uppercase.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/functions/005-aggregate.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/005-aggregate.md b/_docs/sql-ref/functions/005-aggregate.md
index eda1049..3295b45 100644
--- a/_docs/sql-ref/functions/005-aggregate.md
+++ b/_docs/sql-ref/functions/005-aggregate.md
@@ -17,7 +17,7 @@ MAX(expression)| any array, numeric, string, or date/time type| same as argument
 MIN(expression)| any array, numeric, string, or date/time type| same as argument type
 SUM(expression)| smallint, int, bigint, real, double precision, numeric, or interval| bigint for smallint or int arguments, numeric for bigint arguments, double precision for floating-point arguments, otherwise the same as the argument data type
 
-MIN, MAX, COUNT, AVG, SUM accept ALL and DISTINCT keywords. The default is ALL.
+MIN, MAX, COUNT, AVG, and SUM accept ALL and DISTINCT keywords. The default is ALL.
 
 ### Examples
 

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/functions/006-nulls.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/006-nulls.md b/_docs/sql-ref/functions/006-nulls.md
index bc98b63..bbf7da0 100644
--- a/_docs/sql-ref/functions/006-nulls.md
+++ b/_docs/sql-ref/functions/006-nulls.md
@@ -3,7 +3,7 @@ title: "Functions for Handling Nulls"
 parent: "SQL Functions"
 ---
 
-Drill supports the following SQL functions:
+Drill supports the following functions for handling nulls:
 
 * COALESCE
 * NULLIF

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/tutorial/install-sandbox/001-install-mapr-vm.md
----------------------------------------------------------------------
diff --git a/_docs/tutorial/install-sandbox/001-install-mapr-vm.md b/_docs/tutorial/install-sandbox/001-install-mapr-vm.md
index 73daa6d..fc75f94 100644
--- a/_docs/tutorial/install-sandbox/001-install-mapr-vm.md
+++ b/_docs/tutorial/install-sandbox/001-install-mapr-vm.md
@@ -7,22 +7,23 @@ VMware Player or VMware Fusion:
 
 1. Download the MapR Sandbox with Drill file to a directory on your machine:  
    <https://www.mapr.com/products/mapr-sandbox-hadoop/download-sandbox-drill>
-2. Open the virtual machine player, and select the **Open a Virtual Machine **option.  
+2. Open the virtual machine player, and select the **Open a Virtual Machine** option.  
   
     **Tip for VMware Fusion**  
 
-    If you are running VMware Fusion, select** Import**.  
+    If you are running VMware Fusion, select **Import**.  
 
     ![drill query flow]({{ site.baseurl }}/docs/img/vmWelcome.png)
-3. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill file, and select `MapR-Sandbox-For-Apache-Drill-4.0.1_VM.ova`.
+3. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill file, and select `MapR-Sandbox-For-Apache-Drill-0.8.0-4.1.0-vmware`.  
+
+    The Import Virtual Machine dialog appears.
 
     ![drill query flow]({{ site.baseurl }}/docs/img/vmShare.png)
 
-    The Import Virtual Machine dialog appears.
 4. Click **Import**. The virtual machine player imports the sandbox.
 
     ![drill query flow]({{ site.baseurl }}/docs/img/vmLibrary.png)
-5. Select `MapR-Sandbox-For-Apache-Drill-4.0.1_VM`, and click **Play virtual machine**. It takes a few minutes for the MapR services to start.  
+5. Select `MapR-Sandbox-For-Apache-Drill-0.8.0-4.1.0-vmware`, and click **Play virtual machine**. It takes a few minutes for the MapR services to start.  
 
      After the MapR services start and installation completes, the following screen
 appears:
@@ -31,16 +32,14 @@ appears:
 
      Note the URL provided in the screen, which corresponds to the Web UI in Apache
 Drill.
-6. Verify that a DNS entry was created on the host machine for the virtual machine. If not, create the entry.
-    * For Linux and Mac, create the entry in `/etc/hosts`.  
-    * For Windows, create the entry in the `%WINDIR%\system32\drivers\etc\hosts` file.    
-     
-    For example: `127.0.1.1 <vm_hostname>`
+6. Verify that a DNS entry was created on the host machine for the virtual machine. If not, create the entry.  
+   * For Linux and Mac, create the entry in `/etc/hosts`.  
+   * For Windows, create the entry in the `%WINDIR%\system32\drivers\etc\hosts` file.  
+     For example: `127.0.1.1 <vm_hostname>`  
 
 7. You can navigate to the URL provided to experience Drill Web UI or you can login to the sandbox through the command line.  
-
-    a. To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
-    b. To login to the virtual machine and access the command line, press Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
+   * To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
+   * To login to the virtual machine and access the command line, press Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
 
 ## What's Next
 

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/tutorial/install-sandbox/002-install-mapr-vb.md
----------------------------------------------------------------------
diff --git a/_docs/tutorial/install-sandbox/002-install-mapr-vb.md b/_docs/tutorial/install-sandbox/002-install-mapr-vb.md
index e72abf9..886d894 100644
--- a/_docs/tutorial/install-sandbox/002-install-mapr-vb.md
+++ b/_docs/tutorial/install-sandbox/002-install-mapr-vb.md
@@ -22,40 +22,40 @@ VirtualBox:
 
      ![drill query flow]({{ site.baseurl }}/docs/img/vbNetwork.png)
 7. Select **Network**.  
-
-    The correct setting depends on your network connectivity when you run the
-Sandbox. In general, if you are going to use a wired Ethernet connection,
-select **NAT Networks **and **vboxnet0**. If you are going to use a wireless
+   The correct setting depends on your network connectivity when you run the
+Sandbox:  
+   * If you are going to use a wired Ethernet connection, it is generally best to
+select **NAT Network** and **vboxnet0**.  
+   * If you use ODBC or JDBC on a remote host, select **Bridged Adapter**.  
+   * If you are going to use a wireless
 network, select **Host-only Networks** and the **VirtualBox Host-Only Ethernet
-Adapter**. If no adapters appear, click the green** +** button to add the
+Adapter**.  
+
+    If no adapters appear, click the green **+** button to add the
 VirtualBox adapter.
 
-     ![drill query flow]({{ site.baseurl }}/docs/img/vbMaprSetting.png)
-8. Click **OK **to continue.
-9. Click Settings.
+![drill query flow]({{ site.baseurl }}/docs/img/vbMaprSetting.png)
+8. Click **OK** to continue.  
+9. Click **Settings**.  
+![settings icon]({{ site.baseurl }}/docs/img/settings.png)  
+   The MapR Settings dialog appears.     
+![drill query flow]({{ site.baseurl }}/docs/img/vbGenSettings.png)    
+10. Click **OK** to continue.  
+11. Click **Start**. It takes a few minutes for the MapR services to start.  
+   After the MapR services start and installation completes, the following screen appears:  
+![drill query flow]({{ site.baseurl }}/docs/img/vbloginSandBox.png)
+   Note the URL provided in the screen.  
+12. The client must be able to resolve the actual hostname of the Drill node(s) with the IP(s). Verify that a DNS entry was created on the client machine for the Drill node(s). If a DNS entry does not exist, create the entry for the Drill node(s):  
+
+   * For Windows, create the entry in the %WINDIR%\system32\drivers\etc\hosts file.  
+   * For Linux and Mac, create the entry in /etc/hosts.  
+     `<drill-machine-IP> <drill-machine-hostname>`  
+     Example: `127.0.1.1 maprdemo`  
 
-    ![settings icon]({{ site.baseurl }}/docs/img/settings.png)  
-   The MapR-Sandbox-For-Apache-Drill-0.6.0-r2-4.0.1 - Settings dialog appears.
-     
-     ![drill query flow]({{ site.baseurl }}/docs/img/vbGenSettings.png)    
-10. Click **OK** to continue.
-11. Click **Start**. It takes a few minutes for the MapR services to start.   
- 
-      After the MapR services start and installation completes, the following screen appears:
-      
-       ![drill query flow]({{ site.baseurl }}/docs/img/vbloginSandBox.png)
-12. The client must be able to resolve the actual hostname of the Drill node(s) with the IP(s). Verify that a DNS entry was created on the client machine for the Drill node(s).  
- 
-     If a DNS entry does not exist, create the entry for the Drill node(s).
-     * For Windows, create the entry in the %WINDIR%\system32\drivers\etc\hosts file.
-     * For Linux and Mac, create the entry in /etc/hosts.  
-<drill-machine-IP> <drill-machine-hostname>  
-  
-     Example: `127.0.1.1 maprdemo`
-13. You can navigate to the URL provided or to [localhost:8047](http://localhost:8047) to experience the Drill Web UI, or you can log into the sandbox through the command line.  
+13. You can navigate to the URL provided or to [localhost:8047](http://localhost:8047) to experience the Drill Web UI, or you can log into the sandbox through the command line.  
 
-    a. To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
-    b. To log into the virtual machine and access the command line, enter Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
+   * To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
+   * To log into the virtual machine and access the command line, enter Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
 
 # What's Next
 


[2/2] drill git commit: DRILL-2736

Posted by br...@apache.org.
DRILL-2736


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d3328217
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d3328217
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d3328217

Branch: refs/heads/gh-pages
Commit: d3328217e27183e6978a0b9ebb09317d666bfd0f
Parents: c8a79a5
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon Apr 13 13:17:36 2015 -0700
Committer: Bridget Bevens <bb...@maprtech.com>
Committed: Mon Apr 13 14:28:46 2015 -0700

----------------------------------------------------------------------
 _docs/0001-DRILL-2720.patch                     | 375 +++++++++++++++++++
 _docs/img/loginSandBox.png                      | Bin 53970 -> 67090 bytes
 _docs/img/vbApplSettings.png                    | Bin 45140 -> 115803 bytes
 _docs/img/vbGenSettings.png                     | Bin 56436 -> 56642 bytes
 _docs/img/vbImport.png                          | Bin 29075 -> 85744 bytes
 _docs/img/vbNetwork.png                         | Bin 32117 -> 30826 bytes
 _docs/img/vbloginSandBox.png                    | Bin 52169 -> 65477 bytes
 _docs/img/vmLibrary.png                         | Bin 68085 -> 85632 bytes
 _docs/img/vmShare.png                           | Bin 49069 -> 22898 bytes
 _docs/manage/conf/001-mem-alloc.md              |   8 +-
 _docs/manage/conf/002-startup-opt.md            |  20 +-
 _docs/manage/conf/003-plan-exec.md              |  52 ++-
 _docs/manage/conf/004-persist-conf.md           |   4 -
 _docs/sql-ref/001-data-types.md                 | 238 +++++++-----
 _docs/sql-ref/002-lexical-structure.md          |  11 +
 _docs/sql-ref/data-types/001-date.md            |  33 +-
 _docs/sql-ref/data-types/002-diff-data-types.md |   2 +-
 _docs/sql-ref/functions/002-conversion.md       | 219 ++++++-----
 _docs/sql-ref/functions/003-date-time-fcns.md   | 288 +++++++++-----
 _docs/sql-ref/functions/004-string.md           | 135 ++++---
 _docs/sql-ref/functions/005-aggregate.md        |   2 +-
 _docs/sql-ref/functions/006-nulls.md            |   2 +-
 .../install-sandbox/001-install-mapr-vm.md      |  25 +-
 .../install-sandbox/002-install-mapr-vb.md      |  58 +--
 24 files changed, 1046 insertions(+), 426 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/0001-DRILL-2720.patch
----------------------------------------------------------------------
diff --git a/_docs/0001-DRILL-2720.patch b/_docs/0001-DRILL-2720.patch
new file mode 100644
index 0000000..738f444
--- /dev/null
+++ b/_docs/0001-DRILL-2720.patch
@@ -0,0 +1,375 @@
+From ec19736742e2a66a5cf4dfe7779af2b46f6e7117 Mon Sep 17 00:00:00 2001
+From: Kristine Hahn <kh...@maprtech.com>
+Date: Wed, 8 Apr 2015 15:51:59 -0700
+Subject: [PATCH] DRILL-2720
+
+---
+ _docs/connect/009-mapr-db-plugin.md       |  2 +-
+ _docs/manage/conf/001-mem-alloc.md        | 88 ++++++++++++-------------------
+ _docs/sql-ref/004-functions.md            |  2 +-
+ _docs/sql-ref/functions/001-math.md       |  2 +-
+ _docs/sql-ref/functions/002-conversion.md | 14 ++---
+ 5 files changed, 43 insertions(+), 65 deletions(-)
+
+diff --git a/_docs/connect/009-mapr-db-plugin.md b/_docs/connect/009-mapr-db-plugin.md
+index bc06144..66d2a81 100644
+--- a/_docs/connect/009-mapr-db-plugin.md
++++ b/_docs/connect/009-mapr-db-plugin.md
+@@ -2,7 +2,7 @@
+ title: "MapR-DB Format"
+ parent: "Connect to a Data Source"
+ ---
+-Drill includes a `maprdb` format plugin for handling MapR-DB and HBase data. The Drill Sandbox also includes the following `maprdb` format plugin on a MapR node:
++Drill includes a `maprdb` format plugin for accessing data stored in MapR-DB. The Drill Sandbox also includes the following `maprdb` format plugin on a MapR node:
+ 
+     {
+       "type": "hbase",
+diff --git a/_docs/manage/conf/001-mem-alloc.md b/_docs/manage/conf/001-mem-alloc.md
+index 5d99015..df60b7f 100644
+--- a/_docs/manage/conf/001-mem-alloc.md
++++ b/_docs/manage/conf/001-mem-alloc.md
+@@ -2,7 +2,7 @@
+ title: "Overview"
+ parent: "Configuration Options"
+ ---
+-The sys.options table in Drill contains information about boot and system options described in the following tables. You configure some of the options to tune performance. You can configure the options using the ALTER SESSION or ALTER SYSTEM command.
++The sys.options table in Drill contains information about boot and system options listed in the following tables. To tune performance, you adjust some of the options to suit your application. Configure the options using the ALTER SESSION or ALTER SYSTEM command.
+ 
+ ## Boot Options
+ 
+@@ -10,7 +10,7 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <th>Name</th>
+     <th>Default</th>
+-    <th>Description</th>
++    <th>Comments</th>
+   </tr>
+   <tr>
+     <td>drill.exec.buffer.impl</td>
+@@ -128,9 +128,9 @@ The sys.options table in Drill contains information about boot and system option
+ 
+ <table>
+   <tr>
+-    <th>name</th>
++    <th>Name</th>
+     <th>Default</th>
+-    <th>Description</th>
++    <th>Comments</th>
+   </tr>
+   <tr>
+     <td>drill.exec.functions.cast_empty_string_to_null</td>
+@@ -140,12 +140,7 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>drill.exec.storage.file.partition.column.label</td>
+     <td>dir</td>
+-    <td></td>
+-  </tr>
+-  <tr>
+-    <td>drill.exec.testing.exception-injections</td>
+-    <td></td>
+-    <td></td>
++    <td>Accepts a string input.</td>
+   </tr>
+   <tr>
+     <td>exec.errors.verbose</td>
+@@ -155,27 +150,27 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>exec.java_compiler</td>
+     <td>DEFAULT</td>
+-    <td></td>
++    <td>Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.</td>
+   </tr>
+   <tr>
+     <td>exec.java_compiler_debug</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Toggles the output of debug-level compiler error messages in runtime generated code.</td>
+   </tr>
+   <tr>
+     <td>exec.java_compiler_janino_maxsize</td>
+     <td>262144</td>
+-    <td></td>
++    <td>See the exec.java_compiler option comment. Accepts inputs of type LONG.</td>
+   </tr>
+   <tr>
+     <td>exec.max_hash_table_size</td>
+     <td>1073741824</td>
+-    <td>Starting size for hash tables. Increase according to available memory to improve performance.</td>
++    <td>Ending size for hash tables. Range: 0 - 1073741824</td>
+   </tr>
+   <tr>
+     <td>exec.min_hash_table_size</td>
+     <td>65536</td>
+-    <td></td>
++    <td>Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824</td>
+   </tr>
+   <tr>
+     <td>exec.queue.enable</td>
+@@ -185,27 +180,22 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>exec.queue.large</td>
+     <td>10</td>
+-    <td></td>
++    <td>Range: 0-1000</td>
+   </tr>
+   <tr>
+     <td>exec.queue.small</td>
+     <td>100</td>
+-    <td></td>
++    <td>Range: 0-1001</td>
+   </tr>
+   <tr>
+     <td>exec.queue.threshold</td>
+     <td>30000000</td>
+-    <td></td>
++    <td>Range: 0-9223372036854775807</td>
+   </tr>
+   <tr>
+     <td>exec.queue.timeout_millis</td>
+     <td>300000</td>
+-    <td></td>
+-  </tr>
+-  <tr>
+-    <td>org.apache.drill.exec.compile.ClassTransformer.scalar_replacement</td>
+-    <td>try</td>
+-    <td></td>
++    <td>Range: 0-9223372036854775807</td>
+   </tr>
+   <tr>
+     <td>planner.add_producer_consumer</td>
+@@ -215,7 +205,7 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.affinity_factor</td>
+     <td>1.2</td>
+-    <td></td>
++    <td>Accepts inputs of type DOUBLE.</td>
+   </tr>
+   <tr>
+     <td>planner.broadcast_factor</td>
+@@ -225,22 +215,22 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.broadcast_threshold</td>
+     <td>10000000</td>
+-    <td></td>
++    <td>Threshold in number of rows that triggers a broadcast join for a query if the right side of the join contains fewer rows than the threshold. Avoids broadcasting too many rows to join. Range: 0-2147483647</td>
+   </tr>
+   <tr>
+     <td>planner.disable_exchanges</td>
+     <td>FALSE</td>
+-    <td></td>
++    <td>Toggles the state of hashing to a random exchange.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_broadcast_join</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Changes the state of aggregation and join operators. Do not disable.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_demux_exchange</td>
+     <td>FALSE</td>
+-    <td></td>
++    <td>Toggles the state of hashing to a demultiplexed exchange.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_hash_single_key</td>
+@@ -250,12 +240,12 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.enable_hashagg</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Enables hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enabling this option is recommended.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_hashjoin</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Enables the memory-hungry hash join. Does not write to disk.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_hashjoin_swap</td>
+@@ -265,7 +255,7 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.enable_mergejoin</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Sort-based operation. Writes to disk.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_multiphase_agg</td>
+@@ -275,12 +265,12 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.enable_mux_exchange</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Toggles the state of hashing to a multiplexed exchange.</td>
+   </tr>
+   <tr>
+     <td>planner.enable_streamagg</td>
+     <td>TRUE</td>
+-    <td></td>
++    <td>Sort-based operation. Writes to disk.</td>
+   </tr>
+   <tr>
+     <td>planner.identifier_max_length</td>
+@@ -325,7 +315,7 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.memory.non_blocking_operators_memory</td>
+     <td>64</td>
+-    <td></td>
++    <td>Range: 0-2048</td>
+   </tr>
+   <tr>
+     <td>planner.partitioner_sender_max_threads</td>
+@@ -345,27 +335,27 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>planner.producer_consumer_queue_size</td>
+     <td>10</td>
+-    <td></td>
++    <td>How much data to prefetch from disk (in record batches) out of band of query execution.</td>
+   </tr>
+   <tr>
+     <td>planner.slice_target</td>
+     <td>100000</td>
+-    <td></td>
++    <td>The number of records manipulated within a fragment before Drill parallelizes operations.</td>
+   </tr>
+   <tr>
+     <td>planner.width.max_per_node</td>
+     <td>3</td>
+-    <td></td>
++    <td>The maximum degree of distribution of a query across cores and cluster nodes.</td>
+   </tr>
+   <tr>
+     <td>planner.width.max_per_query</td>
+     <td>1000</td>
+-    <td></td>
++    <td>Same as planner.width.max_per_node, but applies to the query as executed by the entire cluster.</td>
+   </tr>
+   <tr>
+     <td>store.format</td>
+     <td>parquet</td>
+-    <td></td>
++    <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values: parquet, json, or text.</td>
+   </tr>
+   <tr>
+     <td>store.json.all_text_mode</td>
+@@ -375,17 +365,17 @@ The sys.options table in Drill contains information about boot and system option
+   <tr>
+     <td>store.mongo.all_text_mode</td>
+     <td>FALSE</td>
+-    <td></td>
++    <td>Similar to store.json.all_text_mode for MongoDB.</td>
+   </tr>
+   <tr>
+     <td>store.parquet.block-size</td>
+     <td>536870912</td>
+-    <td></td>
++    <td>Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.</td>
+   </tr>
+   <tr>
+     <td>store.parquet.compression</td>
+     <td>snappy</td>
+-    <td></td>
++    <td>Compression type for storing Parquet output. Allowed values: snappy, gzip, none</td>
+   </tr>
+   <tr>
+     <td>store.parquet.enable_dictionary_encoding</td>
+@@ -398,22 +388,14 @@ The sys.options table in Drill contains information about boot and system option
+     <td></td>
+   </tr>
+   <tr>
+-    <td>store.parquet.vector_fill_check_threshold</td>
+-    <td>10</td>
+-    <td></td>
+-  </tr>
+-  <tr>
+-    <td>store.parquet.vector_fill_threshold</td>
+-    <td>85</td>
+-    <td></td>
+-  </tr>
+-  <tr>
+     <td>window.enable</td>
+     <td>FALSE</td>
+     <td></td>
+   </tr>
+ </table>
+ 
++## Memory Allocation
++
+ You can configure the amount of direct memory allocated to a Drillbit for
+ query processing. The default limit is 8G, but Drill prefers 16G or more
+ depending on the workload. The total amount of direct memory that a Drillbit
+diff --git a/_docs/sql-ref/004-functions.md b/_docs/sql-ref/004-functions.md
+index a076920..2f1ee0b 100644
+--- a/_docs/sql-ref/004-functions.md
++++ b/_docs/sql-ref/004-functions.md
+@@ -12,4 +12,4 @@ You can use the following types of functions in your Drill queries:
+   * [Nested Data](/docs/nested-data-functions/)
+   * [Functions for Handling Nulls](/docs/functions-for-handling-nulls)
+ 
+-
++You need to use a FROM clause in Drill queries. Examples in this documentation often use `FROM sys.version` in the query for example purposes.
+diff --git a/_docs/sql-ref/functions/001-math.md b/_docs/sql-ref/functions/001-math.md
+index 718998a..4695e32 100644
+--- a/_docs/sql-ref/functions/001-math.md
++++ b/_docs/sql-ref/functions/001-math.md
+@@ -158,7 +158,7 @@ Exceptions are the LSHIFT and RSHIFT functions, which take all types except the
+ 
+ Examples in this section use the `input2.json` file. Download the `input2.json` file from the [Drill source code](https://github.com/apache/drill/tree/master/exec/java-exec/src/test/resources/jsoninput) page. 
+ 
+-You need to use a FROM clause in Drill queries. This document often uses the sys.version table in the FROM clause of the query for example purposes.
++You need to use a FROM clause in Drill queries. In addition to using `input2.json`, examples in this documentation often use `FROM sys.version` in the query for example purposes.
+ 
+ #### ABS Example
+ Get the absolute value of the integer key in `input2.json`. The following snippet of input2.json shows the relevant integer content:
+diff --git a/_docs/sql-ref/functions/002-conversion.md b/_docs/sql-ref/functions/002-conversion.md
+index 875de69..780b397 100644
+--- a/_docs/sql-ref/functions/002-conversion.md
++++ b/_docs/sql-ref/functions/002-conversion.md
+@@ -10,7 +10,7 @@ Drill supports the following functions for casting and converting data types:
+ 
+ ## CAST
+ 
+-The CAST function converts an entity having a single data value, such as a column name, from one type to another.
++The CAST function converts an expression from one type to another.
+ 
+ ### Syntax
+ 
+@@ -18,7 +18,7 @@ The CAST function converts an entity having a single data value, such as a colum
+ 
+ *expression*
+ 
+-An entity that evaluates to one or more values, such as a column name or literal
++A combination of one or more values, operators, and SQL functions that evaluate to a value
+ 
+ *data type*
+ 
+@@ -381,13 +381,9 @@ Currently Drill does not support conversion of a date, time, or timestamp from o
+         +------------+
+         1 row selected (1.199 seconds)
+ 
+-2. Configure the default time zone format in the drill-override.conf. For example:
++2. Configure the default time zone format in <drill installation directory>/conf/drill-env.sh by adding `-Duser.timezone=UTC` to DRILL_JAVA_OPTS. For example:
+ 
+-        drill.exec: {
+-          cluster-id: “xyz",
+-          zk.connect: “abc:5181",
+-          user.timezone: "UTC"
+-        }
++        export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea -Duser.timezone=UTC"
+ 
+ 3. Restart sqlline.
+ 
+@@ -416,7 +412,7 @@ TO_NUMBER(text, format)| numeric
+ TO_TIMESTAMP(text, format)| timestamp
+ TO_TIMESTAMP(double precision)| timestamp
+ 
+-Use the ‘z’ option to identify the time zone in TO_TIMESTAMP to make sure the timestamp has the timezone in it. Also, use the ‘z’ option to identify the time zone in a timestamp using the TO_CHAR function. For example:
++You can use the ‘z’ option to identify the time zone in TO_TIMESTAMP to make sure the timestamp has the timezone in it. Also, use the ‘z’ option to identify the time zone in a timestamp using the TO_CHAR function. For example:
+ 
+     SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z') AS Original, 
+            TO_CHAR(TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z'), 'z') AS TimeZone 
+-- 
+1.9.5 (Apple Git-50.3)
+
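The patch above recommends the 'z' option so that TO_TIMESTAMP output retains its time zone. As a rough cross-check outside Drill, here is a minimal Python sketch of zone-aware timestamp parsing; the helper name `parse_ts` is hypothetical, and a numeric `+0000` offset stands in for the literal `UTC` token, which Python's `strptime` does not accept:

```python
from datetime import datetime

def parse_ts(text: str) -> datetime:
    # Analogous to TO_TIMESTAMP(text, 'YYYY-MM-dd HH:mm:ss.s z'):
    # %z attaches the zone offset, producing a zone-aware datetime.
    return datetime.strptime(text, "%Y-%m-%d %H:%M:%S.%f %z")

ts = parse_ts("2015-03-30 20:49:59.0 +0000")
print(ts.isoformat())  # 2015-03-30T20:49:59+00:00
print(ts.tzname())     # UTC
```

Without the zone component the parsed value is naive, which mirrors why the docs advise including 'z' in the Drill format string.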

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/loginSandBox.png
----------------------------------------------------------------------
diff --git a/_docs/img/loginSandBox.png b/_docs/img/loginSandBox.png
index 30f73b2..5727ea4 100644
Binary files a/_docs/img/loginSandBox.png and b/_docs/img/loginSandBox.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vbApplSettings.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbApplSettings.png b/_docs/img/vbApplSettings.png
index 2f7451b..a8050ab 100644
Binary files a/_docs/img/vbApplSettings.png and b/_docs/img/vbApplSettings.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vbGenSettings.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbGenSettings.png b/_docs/img/vbGenSettings.png
index cae235f..9a4451d 100644
Binary files a/_docs/img/vbGenSettings.png and b/_docs/img/vbGenSettings.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vbImport.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbImport.png b/_docs/img/vbImport.png
index e2f6cfe..a8ed45b 100644
Binary files a/_docs/img/vbImport.png and b/_docs/img/vbImport.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vbNetwork.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbNetwork.png b/_docs/img/vbNetwork.png
index bbc1c7a..cbb36f1 100644
Binary files a/_docs/img/vbNetwork.png and b/_docs/img/vbNetwork.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vbloginSandBox.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbloginSandBox.png b/_docs/img/vbloginSandBox.png
index 69c31ab..5012f7d 100644
Binary files a/_docs/img/vbloginSandBox.png and b/_docs/img/vbloginSandBox.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vmLibrary.png
----------------------------------------------------------------------
diff --git a/_docs/img/vmLibrary.png b/_docs/img/vmLibrary.png
index c0b97a3..b3f2fd2 100644
Binary files a/_docs/img/vmLibrary.png and b/_docs/img/vmLibrary.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/img/vmShare.png
----------------------------------------------------------------------
diff --git a/_docs/img/vmShare.png b/_docs/img/vmShare.png
index 16ef052..803ffaf 100644
Binary files a/_docs/img/vmShare.png and b/_docs/img/vmShare.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/manage/conf/001-mem-alloc.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/001-mem-alloc.md b/_docs/manage/conf/001-mem-alloc.md
index df60b7f..608eef1 100644
--- a/_docs/manage/conf/001-mem-alloc.md
+++ b/_docs/manage/conf/001-mem-alloc.md
@@ -1,5 +1,5 @@
 ---
-title: "Overview"
+title: "Configuration Options Overview"
 parent: "Configuration Options"
 ---
 The sys.options table in Drill contains information about boot and system options listed in the following tables. To tune performance, you adjust some of the options to suit your application. Configure the options using the ALTER SESSION or ALTER SYSTEM command.
@@ -64,7 +64,7 @@ The sys.options table in Drill contains information about boot and system option
   </tr>
   <tr>
     <td>drill.exec.sys.store.provider.class</td>
-    <td>"org.apache.drill.exec.store.sys.zk.ZkPStoreProvider"</td>
+    <td>ZooKeeper: "org.apache.drill.exec.store.sys.zk.ZkPStoreProvider"</td>
     <td>The Pstore (Persistent Configuration Storage) provider to use. The Pstore holds configuration and profile data.</td>
   </tr>
   <tr>
@@ -358,7 +358,7 @@ The sys.options table in Drill contains information about boot and system option
    <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values: parquet, json, or text.</td>
   </tr>
   <tr>
-    <td>store.json.all_text_mode</td>
+    <td><a href="/docs/json-data-model#handling-type-differences">store.json.all_text_mode</a></td>
     <td>FALSE</td>
     <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>
   </tr>
@@ -368,7 +368,7 @@ The sys.options table in Drill contains information about boot and system option
     <td>Similar to store.json.all_text_mode for MongoDB.</td>
   </tr>
   <tr>
-    <td>store.parquet.block-size</td>
+    <td><a href="/docs/parquet-format#configuring-the-size-of-parquet-files">store.parquet.block-size</a></td>
     <td>536870912</td>
     <td>Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.</td>
   </tr>

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/manage/conf/002-startup-opt.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/002-startup-opt.md b/_docs/manage/conf/002-startup-opt.md
index e0b64bf..778957b 100644
--- a/_docs/manage/conf/002-startup-opt.md
+++ b/_docs/manage/conf/002-startup-opt.md
@@ -43,7 +43,21 @@ You can run the following query to see a list of Drill’s startup options:
 You can configure start-up options for each Drillbit in the `drill-
 override.conf` file located in Drill’s` /conf` directory.
 
-You may want to configure the following start-up options that control certain
-behaviors in Drill:
+The summary of start-up options, also known as boot options, lists default values. The following descriptions provide more detail on key options that are frequently reconfigured:
+
+* drill.exec.sys.store.provider.class  
+  
+  Defines the persistent storage (PStore) provider. The [PStore](/docs/persistent-configuration-storage) holds configuration and profile data. 
+
+* drill.exec.buffer.size
+
+  Defines the amount of memory available, in terms of record batches, to hold data on the downstream side of an operation. Drill pushes data downstream as quickly as possible to make data immediately available. This requires Drill to use memory to hold the data pending operations. When data on a downstream operation is required, that data is immediately available so Drill does not have to go over the network to process it. Providing more memory to this option increases the speed at which Drill completes a query.
+
+* drill.exec.sort.external.spill.directories
+
+  Tells Drill which directory to use when spooling. Drill uses a spool and sort operation for beyond memory operations. The sorting operation is designed to spool to a Hadoop file system. The default Hadoop file system is a local file system in the /tmp directory. Spooling performance (both writing and reading back from it) is constrained by the file system. For MapR clusters, use MapReduce volumes or set up local volumes to use for spooling purposes. Volumes improve performance and stripe data across as many disks as possible.
+
+* drill.exec.zk.connect  
+
+  Provides Drill with the ZooKeeper quorum to use to connect to data sources. Change this setting to point to the ZooKeeper quorum that you want Drill to use. You must configure this option on each Drillbit node.
 
-<table ><tbody><tr><th >Option</th><th >Default Value</th><th >Description</th></tr><tr><td valign="top" >drill.exec.sys.store.provider</td><td valign="top" >ZooKeeper</td><td valign="top" >Defines the persistent storage (PStore) provider. The PStore holds configuration and profile data. For more information about PStores, see <a href="/docs/persistent-configuration-storage" rel="nofollow">Persistent Configuration Storage</a>.</td></tr><tr><td valign="top" >drill.exec.buffer.size</td><td valign="top" > </td><td valign="top" >Defines the amount of memory available, in terms of record batches, to hold data on the downstream side of an operation. Drill pushes data downstream as quickly as possible to make data immediately available. This requires Drill to use memory to hold the data pending operations. When data on a downstream operation is required, that data is immediately available so Drill does not have to go over the network to process it. Providing more memory to this option increases the speed at which Drill completes a query.</td></tr><tr><td valign="top" >drill.exec.sort.external.directoriesdrill.exec.sort.external.fs</td><td valign="top" > </td><td valign="top" >These options control spooling. The drill.exec.sort.external.directories option tells Drill which directory to use when spooling. The drill.exec.sort.external.fs option tells Drill which file system to use when spooling beyond memory files. <span style="line-height: 1.4285715;background-color: transparent;"> </span>Drill uses a spool and sort operation for beyond memory operations. The sorting operation is designed to spool to a Hadoop file system. The default Hadoop file system is a local file system in the /tmp directory. Spooling performance (both writing and reading back from it) is constrained by the file system. <span style="line-height: 1.4285715;background-color: transparent;"> </span>For MapR clusters, use MapReduce volumes or set up local volumes to use for spooling purposes. Volumes improve performance and stripe data across as many disks as possible.</td></tr><tr><td valign="top" colspan="1" >drill.exec.debug.error_on_leak</td><td valign="top" colspan="1" >True</td><td valign="top" colspan="1" >Determines how Drill behaves when memory leaks occur during a query. By default, this option is enabled so that queries fail when memory leaks occur. If you disable the option, Drill issues a warning when a memory leak occurs and completes the query.</td></tr><tr><td valign="top" colspan="1" >drill.exec.zk.connect</td><td valign="top" colspan="1" >localhost:2181</td><td valign="top" colspan="1" >Provides Drill with the ZooKeeper quorum to use to connect to data sources. Change this setting to point to the ZooKeeper quorum that you want Drill to use. You must configure this option on each Drillbit node.</td></tr><tr><td valign="top" colspan="1" >drill.exec.cluster-id</td><td valign="top" colspan="1" >my_drillbit_cluster</td><td valign="top" colspan="1" >Identifies the cluster that corresponds with the ZooKeeper quorum indicated. It also provides Drill with the name of the cluster used during UDP multicast. You must change the default cluster-id if there are multiple clusters on the same subnet. If you do not change the ID, the clusters will try to connect to each other to create one cluster.</td></tr></tbody></table></div>

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/manage/conf/003-plan-exec.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/003-plan-exec.md b/_docs/manage/conf/003-plan-exec.md
index 56a1f69..40cd266 100644
--- a/_docs/manage/conf/003-plan-exec.md
+++ b/_docs/manage/conf/003-plan-exec.md
@@ -21,16 +21,42 @@ Use the` ALTER SYSTEM` or `ALTER SESSION` commands to set options. Typically,
 you set the options at the session level unless you want the setting to
 persist across all sessions.
 
-The following table contains planning and execution options that you can set
-at the system or session level:
-
-<table ><tbody><tr><th >Option name</th><th >Default value</th><th >Description</th></tr><tr><td valign="top" colspan="1" >exec.errors.verbose</td><td valign="top" colspan="1" ><p>false</p></td><td valign="top" colspan="1" ><p>This option enables or disables the verbose message that Drill returns when a query fails. When enabled, Drill provides additional information about failed queries.</p></td></tr><tr><td valign="top" colspan="1" ><span>exec.max_hash_table_size</span></td><td valign="top" colspan="1" >1073741824</td><td valign="top" colspan="1" ><span>The default maximum size for hash tables.</span></td></tr><tr><td valign="top" colspan="1" >exec.min_hash_table_size</td><td valign="top" colspan="1" >65536</td><td valign="top" colspan="1" >The default starting size for hash tables. Increasing this size is useful for very large aggregations or joins when you have large amounts of memory for Drill to use. Drill can spend a lot of time resizing the hash table as it finds new data. If you have large data sets, you can increase this hash table size to increase performance.</td></tr><tr><td valign="top" colspan="1" >planner.add_producer_consumer</td><td valign="top" colspan="1" ><p>false</p><p> </p></td><td valign="top" colspan="1" ><p>This option enables or disables a secondary reading thread that works out of band of the rest of the scanning fragment to prefetch data from disk. <span style="line-height: 1.4285715;background-color: transparent;">If you interact with a certain type of storage medium that is slow or does not prefetch much data, this option tells Drill to add a producer consumer reading thread to the operation. Drill can then assign one thread that focuses on a single reading fragment. </span></p><p>If Drill is using memory, you can disable this option to get better performance. If Drill is using disk space, you should enable this option and set a reasonable queue size for the planner.producer_consumer_queue_size option.</p></td></tr><tr><td valign="top" colspan="1" >planner.broadcast_threshold</td><td valign="top" colspan="1" >1000000</td><td valign="top" colspan="1" ><span style="color: rgb(34,34,34);">Threshold, in terms of a number of rows, that determines whether a broadcast join is chosen for a query. Regardless of the setting of the broadcast_join option (enabled or disabled), a broadcast join is not chosen unless the right side of the join is estimated to contain fewer rows than this threshold. The intent of this option is to avoid broadcasting too many rows for join purposes. Broadcasting involves sending data across nodes and is a network-intensive operation. (The &quot;right side&quot; of the join, which may itself be a join or simply a table, is determined by cost-based optimizations and heuristics during physical planning.)</span></td></tr><tr><td valign="top" colspan="1" ><p>planner.enable_broadcast_join<br />planner.enable_hashagg<br />planner.enable_hashjoin<br />planner.enable_mergejoin<br />planner.enable_multiphase_agg<br />planner.enable_streamagg</p></td><td valign="top" colspan="1" >true</td><td valign="top" colspan="1" ><p>These options enable or disable specific aggregation and join operators for queries. These operators are all enabled by default and in general should not be disabled.</p><p>Hash aggregation and hash join are hash-based operations. Streaming aggregation and merge join are sort-based operations. Both hash-based and sort-based operations consume memory; however, currently, hash-based operations do not spill to disk as needed, but the sort-based operations do. If large hash operations do not fit in memory on your system, you may need to disable these operations. Queries will continue to run, using alternative plans.</p></td></tr><tr><td valign="top" colspan="1" >planner.producer_consumer_queue_size</td><td valign="top" colspan="1" >10</td><td valign="top" colspan="1" >Determines how much data to prefetch from disk (in record batches) out of band of query execution. The larger the queue size, the greater the amount of memory that the queue and overall query execution consumes.</td></tr><tr><td valign="top" colspan="1" >planner.slice_target</td><td valign="top" colspan="1" >100000</td><td valign="top" colspan="1" >The number of records manipulated within a fragment before Drill parallelizes them.</td></tr><tr><td valign="top" colspan="1" ><p>planner.width.max_per_node</p><p> </p></td><td valign="top" colspan="1" ><p>The default depends on the number of cores on each node.</p></td><td valign="top" colspan="1" ><p>In this context &quot;width&quot; refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster.</p><p><span>A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.</span><span> </span></p><p>The maximum width per node defines the maximum degree of parallelism for any fragment of a query, but the setting applies at the level of a single node in the cluster.</p><p>The <em>default</em> maximum degree of parallelism per node is calculated as follows, with the theoretical maximum automatically scaled back (and rounded down) so that only 70% of the actual available capacity is taken into account:</p>
-<script type="syntaxhighlighter" class="theme: Default; brush: java; gutter: false"><![CDATA[number of active drillbits (typically one per node) 
-* number of cores per node
-* 0.7]]></script>
-<p>For example, on a single-node test system with 2 cores and hyper-threading enabled:</p><script type="syntaxhighlighter" class="theme: Default; brush: java; gutter: false"><![CDATA[1 * 4 * 0.7 = 3]]></script>
-<p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p></td></tr><tr><td valign="top" colspan="1" >planner.width.max_per_query</td><td valign="top" colspan="1" >1000</td><td valign="top" colspan="1" ><p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>:</p>
-<script type="syntaxhighlighter" class="theme: Default; brush: java; gutter: false"><![CDATA[min((number of nodes * width.max_per_node), width.max_per_query)]]></script>
-<p>For example, on a 4-node cluster where <span><code>width.max_per_node</code> is set to 6 and </span><span><code>width.max_per_query</code> is set to 30:</span></p>
-<script type="syntaxhighlighter" class="theme: Default; brush: java; gutter: false"><![CDATA[min((4 * 6), 30) = 24]]></script>
-<p>In this case, the effective maximum width per query is 24, not 30.</p></td></tr><tr><td valign="top" colspan="1" >store.format</td><td valign="top" colspan="1" > </td><td valign="top" colspan="1" >Output format for data that is written to tables with the CREATE TABLE AS (CTAS) command.</td></tr><tr><td valign="top" colspan="1" >store.json.all_text_mode</td><td valign="top" colspan="1" ><p>false</p></td><td valign="top" colspan="1" ><p>This option enables or disables text mode. When enabled, Drill reads everything in JSON as a text object instead of trying to interpret data types. This allows complicated JSON to be read using CASE and CAST.</p></td></tr><tr><td valign="top" >store.parquet.block-size</td><td valign="top" ><p>536870912</p></td><td valign="top" >T<span style="color: rgb(34,34,34);">arget size for a parquet row group, which should be equal to or less than the configured HDFS block size. </span></td></tr></tbody></table>
\ No newline at end of file
+The summary of system options lists default values. The following descriptions provide more detail on some of these options:
+
+* exec.min_hash_table_size
+
+  The default starting size for hash tables. Increasing this size is useful for very large aggregations or joins when you have large amounts of memory for Drill to use. Drill can spend a lot of time resizing the hash table as it finds new data. If you have large data sets, you can increase this hash table size to increase performance.
+
+* planner.add_producer_consumer
+
+  This option enables or disables a secondary reading thread that works out of band of the rest of the scanning fragment to prefetch data from disk. If you interact with a certain type of storage medium that is slow or does not prefetch much data, this option tells Drill to add a producer consumer reading thread to the operation. Drill can then assign one thread that focuses on a single reading fragment. If Drill is using memory, you can disable this option to get better performance. If Drill is using disk space, you should enable this option and set a reasonable queue size for the planner.producer_consumer_queue_size option.
+
+* planner.broadcast_threshold
+
+  Threshold, in terms of a number of rows, that determines whether a broadcast join is chosen for a query. Regardless of the setting of the broadcast_join option (enabled or disabled), a broadcast join is not chosen unless the right side of the join is estimated to contain fewer rows than this threshold. The intent of this option is to avoid broadcasting too many rows for join purposes. Broadcasting involves sending data across nodes and is a network-intensive operation. (The &quot;right side&quot; of the join, which may itself be a join or simply a table, is determined by cost-based optimizations and heuristics during physical planning.)
+
+* planner.enable_broadcast_join, planner.enable_hashagg, planner.enable_hashjoin, planner.enable_mergejoin, planner.enable_multiphase_agg, planner.enable_streamagg
+
+  These options enable or disable specific aggregation and join operators for queries. These operators are all enabled by default and in general should not be disabled. Hash aggregation and hash join are hash-based operations. Streaming aggregation and merge join are sort-based operations. Both hash-based and sort-based operations consume memory; however, currently, hash-based operations do not spill to disk as needed, but the sort-based operations do. If large hash operations do not fit in memory on your system, you may need to disable these operations. Queries will continue to run, using alternative plans.
+
+* planner.producer_consumer_queue_size
+
+  Determines how much data to prefetch from disk (in record batches) out of band of query execution. The larger the queue size, the greater the amount of memory that the queue and overall query execution consumes.
+
+* planner.width.max_per_node
+
+  In this context *width* refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query "fragments," that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.
+
+  The maximum width per node defines the maximum degree of parallelism for any fragment of a query, but the setting applies at the level of a single node in the cluster. The *default* maximum degree of parallelism per node is calculated as follows, with the theoretical maximum automatically scaled back (and rounded down) so that only 70% of the actual available capacity is taken into account: `number of active drillbits (typically one per node) * number of cores per node * 0.7`
+
+  For example, on a single-node test system with 2 cores and hyper-threading enabled: `1 * 4 * 0.7 = 3`
+
+  When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.
+
+* planner.width.max_per_query
+
+  The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the *minimum of two values*: `min((number of nodes * width.max_per_node), width.max_per_query)`
+
+  For example, on a 4-node cluster where `width.max_per_node` is set to 6 and `width.max_per_query` is set to 30: `min((4 * 6), 30) = 24`
+
+  In this case, the effective maximum width per query is 24, not 30.
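
The min() rule above can be sketched in a few lines of Python (the function name is illustrative, not a Drill API):

```python
def effective_width_per_query(nodes, width_max_per_node, width_max_per_query):
    """Effective maximum parallelism for a query: the minimum of the
    cluster-wide limit (nodes * width.max_per_node) and width.max_per_query."""
    return min(nodes * width_max_per_node, width_max_per_query)

# The 4-node example above: width.max_per_node = 6, width.max_per_query = 30
print(effective_width_per_query(4, 6, 30))  # min(4 * 6, 30) = 24
```

Because `width.max_per_query` caps the whole cluster, adding nodes beyond `30 / 6 = 5` would no longer increase parallelism for this particular query.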
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/manage/conf/004-persist-conf.md
----------------------------------------------------------------------
diff --git a/_docs/manage/conf/004-persist-conf.md b/_docs/manage/conf/004-persist-conf.md
index 3f11906..fc2dada 100644
--- a/_docs/manage/conf/004-persist-conf.md
+++ b/_docs/manage/conf/004-persist-conf.md
@@ -66,10 +66,6 @@ override.conf.`
 
 ## MapR-DB for Persistent Configuration Storage
 
-The MapR-DB plugin will be released soon. You can [compile Drill from
-source](/docs/compiling-drill-from-source) to try out this
-new feature.
-
 If you have MapR-DB in your cluster, you can use MapR-DB for persistent
 configuration storage. Using MapR-DB to store persistent configuration data
 can prevent memory strain on ZooKeeper in clusters running heavy workloads.

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/001-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/001-data-types.md b/_docs/sql-ref/001-data-types.md
index 88e0177..689dbf7 100644
--- a/_docs/sql-ref/001-data-types.md
+++ b/_docs/sql-ref/001-data-types.md
@@ -2,11 +2,95 @@
 title: "Data Types"
 parent: "SQL Reference"
 ---
-Depending on the data format, you might need to cast or convert data types when Drill reads/writes data.
 
-After Drill reads schema-less data into SQL tables, you need to cast data types explicitly to query the data. In some cases, Drill converts schema-less data to typed data implicitly. In this case, you do not need to cast. The file format of the data and the nature of your query determines the requirement for casting or converting. 
+## Supported Data Types
 
-Differences in casting depend on the data source. The following list describes how Drill treats data types from various data sources:
+Drill supports the following SQL data types:
+
+* BIGINT  
+  8-byte signed integer in the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
+
+* BINARY  
+  Variable-length byte string
+
+* BOOLEAN  
+  True or false  
+
+* DATE  
+  Years, months, and days in YYYY-MM-DD format since 4713 BC.
+
+* DECIMAL(p,s), or DEC(p,s), NUMERIC(p,s)**  
+  38-digit precision number, precision is p, and scale is s. Example: DECIMAL(6,2) has 4 digits before the decimal point and 2 digits after the decimal point. 
+
+* FLOAT  
+  4-byte floating point number
+
+* DOUBLE, DOUBLE PRECISION  
+  8-byte floating point number, precision-scalable 
+
+* INTEGER or INT  
+  4-byte signed integer in the range -2,147,483,648 to 2,147,483,647
+
+* INTERVALDAY  
+  A simple version of the interval type expressing a period of time in days, hours, minutes, and seconds only
+
+* INTERVALYEAR  
+  A simple version of interval representing a period of time in years and months only
+
+* SMALLINT*  
+  2-byte signed integer in the range -32,768 to 32,767
+
+* TIME  
+  24-hour based time before or after January 1, 2001 in hours, minutes, seconds format: HH:mm:ss 
+
+* TIMESTAMP  
+  JDBC timestamp in year, month, date hour, minute, second, and optional milliseconds format: yyyy-MM-dd HH:mm:ss.SSS
+
+* CHARACTER VARYING, CHARACTER, CHAR, or VARCHAR  
+  UTF8-encoded variable-length string. For example, CHAR(30) casts data to a 30-character string maximum. The default limit is 1 character. The maximum character limit is 255.
+
+\* Not currently supported.  
+\*\* You specify a DECIMAL using a precision and scale. The precision (p) is the total number of digits required to represent the number. The scale (s) is the number of decimal digits to the right of the decimal point. Subtract s from p to determine the maximum number of digits to the left of the decimal point. Scale is a value from 0 through p. Scale is specified only if precision is specified. The default scale is 0.  
+
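The precision and scale rules in the footnote above can be checked with a short sketch using Python's decimal module (the helper name `fits_decimal` is ours, not Drill's):

```python
from decimal import Decimal

def fits_decimal(value, p, s):
    """Check whether a value fits DECIMAL(p, s): at most s digits to the
    right of the decimal point and at most p - s digits to the left."""
    sign, digits, exponent = Decimal(str(value)).as_tuple()
    scale = max(0, -exponent)                      # digits right of the point
    integer_digits = max(0, len(digits) - scale)   # digits left of the point
    return scale <= s and integer_digits <= p - s

# DECIMAL(6,2): up to 4 digits before the point, 2 after
print(fits_decimal("1234.56", 6, 2))   # True
print(fits_decimal("12345.6", 6, 2))   # False: 5 digits left of the point
print(fits_decimal("1.234", 6, 2))     # False: scale 3 exceeds s = 2
```
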
+## CONVERT_TO and CONVERT_FROM Data Types
+
+The following table lists the data types for use with the CONVERT_TO
+and CONVERT_FROM functions:
+
+**Type**| **Input Type**| **Output Type**  
+---|---|---  
+BOOLEAN_BYTE| bytes(1)| boolean  
+TINYINT_BE| bytes(1)| tinyint  
+TINYINT| bytes(1)| tinyint  
+SMALLINT_BE| bytes(2)| smallint  
+SMALLINT| bytes(2)| smallint  
+INT_BE| bytes(4)| int  
+INT| bytes(4)| int  
+BIGINT_BE| bytes(8)| bigint  
+BIGINT| bytes(8)| bigint  
+FLOAT| bytes(4)| float (float4)  
+DOUBLE| bytes(8)| double (float8)  
+INT_HADOOPV| bytes(1-9)| int  
+BIGINT_HADOOPV| bytes(1-9)| bigint  
+DATE_EPOCH_BE| bytes(8)| date  
+DATE_EPOCH| bytes(8)| date  
+TIME_EPOCH_BE| bytes(8)| time  
+TIME_EPOCH| bytes(8)| time  
+UTF8| bytes| varchar  
+UTF16| bytes| var16char  
+UINT8| bytes(8)| uint8  
+JSON | bytes | varchar
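
In the table, the `_BE` suffix denotes big-endian byte order. The sketch below illustrates the difference for a 4-byte INT, assuming (as the naming suggests) that the non-`_BE` encodings are little-endian; it uses Python's struct module, not Drill itself:

```python
import struct

value = 1

# INT_BE: the int encoded as 4 bytes in big-endian order
print(struct.pack('>i', value).hex())  # 00000001
# INT: the same int in little-endian byte order (assumed for the non-BE form)
print(struct.pack('<i', value).hex())  # 01000000

# Decoding reverses the operation, as CONVERT_FROM does
print(struct.unpack('>i', bytes.fromhex('00000001'))[0])  # 1
```

Sources such as HBase commonly store keys big-endian so that byte-wise sort order matches numeric order, which is why both variants appear in the table.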
+
+## Using Drill Data Types
+
+If you understand how to use Drill data types, you have captured the essence of Drill. In Drill, you use data types in the following ways:
+
+* To cast or convert data to the required type for moving data from one data source to another
+* To cast or convert data to the required type for Drill analysis
+
+In Drill, you do not use data types as you do in database software to define the type of a column during table creation. 
+
+In some cases, Drill converts schema-less data to correctly-typed data implicitly. In this case, you do not need to cast the data. The file format of the data and the nature of your query determines the requirement for casting or converting. Differences in casting depend on the data source. The following list describes how Drill treats data types from various data sources:
 
 * HBase  
   Does not implicitly cast input to SQL types. Convert data to appropriate types as shown in ["Querying HBase."](/docs/querying-hbase/)
@@ -21,16 +105,13 @@ Differences in casting depend on the data source. The following list describes h
 * Text: CSV, TSV, and other text  
   Implicitly casts all textual data to VARCHAR.
 
-## Implicit Casting
-
+## Precedence of Data Types
 
-Generally, Drill performs implicit casting based on the order of precedence shown in the implicit casting preference table. Drill usually implicitly casts a type from a lower precedence to a type having higher precedence. For instance, NULL can be promoted to any other type; SMALLINT can be promoted into INT. INT is not promoted to SMALLINT due to possible precision loss. Drill might deviate from these precedence rules for performance reasons.
+Drill reads from and writes to data sources having a wide variety of types, more types than those [listed previously](/docs/data-type-conversion#supported-data-types). Drill uses data types at the RPC level that are not supported for query input, often implicitly casting data.
 
-Under certain circumstances, such as queries involving substr and concat functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT. 
+The following table lists data types in descending order of precedence. As the table shows, you can cast a NULL value, which has the lowest precedence, to any other type; you can cast a SMALLINT value to INT. You cannot cast an INT value to SMALLINT due to possible precision loss. Drill might deviate from these precedence rules for performance reasons. Under certain circumstances, such as queries involving SUBSTR and CONCAT functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT.
 
-The following table lists data types top to bottom, in descending order of precedence. Drill implicitly casts to more data types than are currently supported for explicit casting.
-
-### Implicit Casting Precedence
+### Casting Precedence
 
 <table>
   <tr>
@@ -41,79 +122,80 @@ The following table lists data types top to bottom, in descending order of prece
   </tr>
   <tr>
     <td>1</td>
-    <td>INTERVAL</td>
+    <td>INTERVALYEAR (highest)</td>
     <td>13</td>
-    <td>UINT4</td>
+    <td>INT</td>
   </tr>
   <tr>
     <td>2</td>
-    <td>INTERVALYEAR</td>
+    <td>INTERVALDAY</td>
     <td>14</td>
-    <td>INT</td>
+    <td>UINT2</td>
   </tr>
   <tr>
     <td>3</td>
-    <td>INTERVLADAY</td>
+    <td>TIMESTAMPTZ*</td>
     <td>15</td>
-    <td>UINT2</td>
+    <td>SMALLINT</td>
   </tr>
   <tr>
     <td>4</td>
-    <td>TIMESTAMPTZ</td>
+    <td>TIMETZ*</td>
     <td>16</td>
-    <td>SMALLINT</td>
+    <td>UINT1</td>
   </tr>
   <tr>
     <td>5</td>
-    <td>TIMETZ</td>
+    <td>TIMESTAMP</td>
     <td>17</td>
-    <td>UINT1</td>
+    <td>VAR16CHAR</td>
   </tr>
   <tr>
     <td>6</td>
-    <td>TIMESTAMP</td>
+    <td>DATE</td>
     <td>18</td>
-    <td>VAR16CHAR</td>
+    <td>FIXED16CHAR</td>
   </tr>
   <tr>
     <td>7</td>
-    <td>DATE</td>
+    <td>TIME</td>
     <td>19</td>
-    <td>FIXED16CHAR</td>
+    <td>VARCHAR</td>
   </tr>
   <tr>
     <td>8</td>
-    <td>TIME</td>
+    <td>DOUBLE</td>
     <td>20</td>
-    <td>VARCHAR</td>
+    <td>CHAR</td>
   </tr>
   <tr>
     <td>9</td>
-    <td>DOUBLE</td>
+    <td>DECIMAL</td>
     <td>21</td>
-    <td>CHAR</td>
+    <td>VARBINARY**</td>
   </tr>
   <tr>
     <td>10</td>
-    <td>DECIMAL</td>
+    <td>UINT8</td>
     <td>22</td>
-    <td>VARBINARY*</td>
+    <td>FIXEDBINARY**</td>
   </tr>
   <tr>
     <td>11</td>
-    <td>UINT8</td>
+    <td>BIGINT</td>
     <td>23</td>
-    <td>FIXEDBINARY*</td>
+    <td>NULL (lowest)</td>
   </tr>
   <tr>
     <td>12</td>
-    <td>BIGINT</td>
-    <td>24</td>
-    <td>NULL</td>
+    <td>UINT4</td>
+    <td></td>
+    <td></td>
   </tr>
 </table>
 
-\* The Drill Parquet reader supports these types.
+\* Currently not supported.  
+\*\* The Drill Parquet reader supports these types.
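
To illustrate the precision-loss rationale above, the following sketch (a hypothetical helper, not Drill code) shows what happens when a 4-byte INT value is forced into 2 bytes:

```python
import struct

def truncate_to_smallint(value):
    """Naively reinterpret a 4-byte signed int as a 2-byte signed int by
    keeping only the low-order bytes (little-endian)."""
    return struct.unpack('<h', struct.pack('<i', value)[:2])[0]

print(truncate_to_smallint(42))      # 42: fits in SMALLINT, survives intact
print(truncate_to_smallint(70000))   # 4464: 70000 does not fit in 2 bytes
```

This is why the precedence table only permits implicit widening (SMALLINT to INT), never narrowing.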
 
 ## Explicit Casting
 
@@ -134,54 +216,6 @@ In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as pr
 
 If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, cast both the value of the column and the comparison value in the WHERE clause.
 
-## Supported Data Types for Casting
-You use the following data types in queries that involve casting/converting data types:
-
-* BIGINT  
-  8-byte signed integer. the range is -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
-
-* BOOLEAN  
-  True or false  
-
-* DATE  
-  Years, months, and days in YYYY-MM-DD format
-
-* DECIMAL(p,s), or DEC(p,s), NUMERIC(p,s) 
-  38-digit precision number, precision is p, and scale is s. Example: DECIMAL(6,2) has 4 digits before the decimal point and 2 digits after the decimal point. 
-
-* FLOAT  
-  4-byte single precision floating point number
-
-* DOUBLE, DOUBLE PRECISION  
-  8-byte double precision floating point number. 
-
-* INTEGER or INT  
-  4-byte signed integer. The range is -2,147,483,648 to 2,147,483,647.
-
-* INTERVAL  
-  Integer fields representing a period of time in years, months, days hours, minutes, seconds and optional milliseconds using ISO 8601 format.
-
-* INTERVALDAY  
-  A simple version of the interval type expressing a period of time in days, hours, minutes, and seconds only.
-
-* INTERVALYEAR  
-  A simple version of interval representing a period of time in years and months only.
-
-* SMALLINT  
-  2-byte signed integer. The range is -32,768 to 32,767. Supported in Drill 0.9 and later. See DRILL-2135.
-
-* TIME  
-  Hours, minutes, seconds in the form HH:mm:ss, 24-hour based
-
-* TIMESTAMP  
-  JDBC timestamp in year, month, date hour, minute, second, and optional milliseconds: yyyy-MM-dd HH:mm:ss.SSS
-
-* CHARACTER VARYING, CHARACTER, CHAR, or VARCHAR  
-  Character string optionally declared with a length that indicates the maximum number of characters to use. For example, CHAR(30) casts data to a 30-character string maximum. The default limit is 1 character. The maximum character limit is 255.
-
-You specify a DECIMAL using a precision and scale. The precision (p) is the total number of digits required to represent the number.
-. The scale (s) is the number of decimal digits to the right of the decimal point. Subtract s from p to determine the maximum number of digits to the left of the decimal point. Scale is a value from 0 through p. Scale is specified only if precision is specified. The default scale is 0.
-
 For more information about and examples of casting, see [CAST]().
 
 ### Explicit Type Casting Maps
@@ -216,7 +250,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>VARBINARY</td>
   </tr>
   <tr>
-    <td>SMALLINT</td>
+    <td>SMALLINT*</td>
     <td></td>
     <td>yes</td>
     <td>yes</td>
@@ -300,7 +334,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>yes</td>
   </tr>
   <tr>
-    <td>FIXEDBINARY*</td>
+    <td>FIXEDBINARY**</td>
     <td>yes</td>
     <td>yes</td>
     <td>yes</td>
@@ -312,7 +346,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>yes</td>
   </tr>
   <tr>
-    <td>VARCHAR**</td>
+    <td>VARCHAR***</td>
     <td>yes</td>
     <td>yes</td>
     <td>yes</td>
@@ -324,7 +358,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>yes</td>
   </tr>
   <tr>
-    <td>VARBINARY*</td>
+    <td>VARBINARY**</td>
     <td>yes</td>
     <td>yes</td>
     <td>yes</td>
@@ -337,9 +371,11 @@ The following tables show data types that Drill can cast to/from other data type
   </tr>
 </table>
 
-\* For use with CONVERT_TO/FROM to cast binary data coming to/from sources such as MapR-DB/HBase.
+\* Not supported in this release.   
+
+\*\* Used to cast binary data coming to/from sources such as MapR-DB/HBase.   
 
-\*\* You cannot convert a character string having a decimal point to an INT or BIGINT.
+\*\*\* You cannot convert a character string having a decimal point to an INT or BIGINT.   
 
 #### Date and Time Data Types
 
@@ -360,7 +396,6 @@ The following tables show data types that Drill can cast to/from other data type
     <td>TIME</td>
     <td>TIMESTAMP</td>
     <td>TIMESTAMPTZ</td>
-    <td>INTERVAL</td>
     <td>INTERVALYEAR</td>
     <td>INTERVALDAY</td>
   </tr>
@@ -375,7 +410,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>Yes</td>
   </tr>
   <tr>
-    <td>FIXEDBINARY</td>
+    <td>FIXEDBINARY*</td>
     <td>No</td>
     <td>No</td>
     <td>No</td>
@@ -395,7 +430,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>Yes</td>
   </tr>
   <tr>
-    <td>VARBINARY</td>
+    <td>VARBINARY*</td>
     <td>No</td>
     <td>No</td>
     <td>Yes</td>
@@ -435,7 +470,7 @@ The following tables show data types that Drill can cast to/from other data type
     <td>No</td>
   </tr>
   <tr>
-    <td>TIMESTAMPTZ</td>
+    <td>TIMESTAMPTZ**</td>
     <td>Yes</td>
     <td>Yes</td>
     <td>Yes</td>
@@ -445,16 +480,6 @@ The following tables show data types that Drill can cast to/from other data type
     <td>No</td>
   </tr>
   <tr>
-    <td>INTERVAL</td>
-    <td>Yes</td>
-    <td>No</td>
-    <td>Yes</td>
-    <td>Yes</td>
-    <td>No</td>
-    <td>Yes</td>
-    <td>Yes</td>
-  </tr>
-  <tr>
     <td>INTERVALYEAR</td>
     <td>Yes</td>
     <td>No</td>
@@ -476,3 +501,8 @@ The following tables show data types that Drill can cast to/from other data type
   </tr>
 </table>
 
+\* Used to cast binary data coming to/from sources such as MapR-DB/HBase.   
+
+\*\* Not supported in this release.   
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/002-lexical-structure.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/002-lexical-structure.md b/_docs/sql-ref/002-lexical-structure.md
index 6046490..634cb42 100644
--- a/_docs/sql-ref/002-lexical-structure.md
+++ b/_docs/sql-ref/002-lexical-structure.md
@@ -62,6 +62,17 @@ This section describes how to construct literals.
 ### Boolean
 Boolean values are true or false and are case-insensitive. Do not enclose the values in quotation marks.
 
+### Date and Time
+Format dates using dashes (-) to separate year, month, and day. Format time using colons (:) to separate hours, minutes and seconds. Format timestamps using a date and a time. These literals are shown in the following examples:
+
+* Date: 2008-12-15
+
+* Time: 22:55:55.123...
+
+* Timestamp: 2008-12-15 22:55:55.12345
+
+If you have dates and times in other formats, use a [data type conversion function](/docs/data-type-conversion/#other-data-type-conversions) in your queries.
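
As a cross-check of the literal formats above, here are the same patterns written as Python strptime format strings (illustrative only; Drill parses these literals itself):

```python
from datetime import datetime

# Formats matching the date, time, and timestamp literals shown above
date_value = datetime.strptime("2008-12-15", "%Y-%m-%d").date()
time_value = datetime.strptime("22:55:55.123", "%H:%M:%S.%f").time()
timestamp_value = datetime.strptime("2008-12-15 22:55:55.12345",
                                    "%Y-%m-%d %H:%M:%S.%f")

print(date_value)       # 2008-12-15
print(time_value)       # 22:55:55.123000
print(timestamp_value)  # 2008-12-15 22:55:55.123450
```
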
+
 ### Identifier
 An identifier is a letter followed by any sequence of letters, digits, or the underscore. For example, names of tables, columns, and aliases are identifiers. Maximum length is 1024 characters. Enclose the following identifiers in back ticks:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/data-types/001-date.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/data-types/001-date.md b/_docs/sql-ref/data-types/001-date.md
index cadeba5..4c036b6 100644
--- a/_docs/sql-ref/data-types/001-date.md
+++ b/_docs/sql-ref/data-types/001-date.md
@@ -4,7 +4,9 @@ parent: "Data Types"
 ---
 Using familiar date and time formats, listed in the [SQL data types table](/docs/data-types/supported-data-types), you can construct query date and time data. You need to cast textual data to date and time data types. The format of date, time, and timestamp text in a textual data source needs to match the SQL query format for successful casting. 
 
-DATE, TIME, and TIMESTAMP store values in Coordinated Universal Time (UTC). Currently, Drill does not support casting a TIMESTAMP with time zone, but you can use the [TO_TIMESTAMP function](/docs/casting/converting-data-types#to_timestamp) in a query to use time stamp data having a time zone.
+DATE, TIME, and TIMESTAMP store values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.
+
+Currently, Drill does not support casting a TIMESTAMP with time zone, but you can use the [TO_TIMESTAMP function](/docs/casting/converting-data-types#to_timestamp) in a query to use time stamp data having a time zone.
 
 Next, use the following literals in a SELECT statement. 
 
@@ -36,9 +38,11 @@ Next, use the following literals in a SELECT statement.
         +------------+
         1 row selected (0.071 seconds)
 
-## INTERVAL
+## INTERVALYEAR and INTERVALDAY
+
+The INTERVALYEAR and INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
 
-The INTERVAL type represents a period of time. Use ISO 8601 syntax to format a value of this type:
+Use ISO 8601 syntax to format an interval:
 
     P [qty] Y [qty] M [qty] D T [qty] H [qty] M [qty] S
 
@@ -56,12 +60,23 @@ where:
 * M follows a number of minutes.
 * S follows a number of seconds and optional milliseconds to the right of a decimal point
 
-
-INTERVALYEAR (Year, Month) and INTERVALDAY (Day, Hours, Minutes, Seconds, Milliseconds) are a simpler version of INTERVAL with a subset of the fields.  You do not need to specify all fields.
-
-The format of INTERVAL data in the data source differs from the query format. 
-
-You can run the query described earlier to check the formatting of the fields. The input to the following SELECT statements show how to format INTERVAL data in the query. The output shows how to format the data in the data source.
+You can restrict the set of stored interval fields by using one of these phrases in the query:
+
+* YEAR
+* MONTH
+* DAY
+* HOUR
+* MINUTE
+* SECOND
+* YEAR TO MONTH
+* DAY TO HOUR
+* DAY TO MINUTE
+* DAY TO SECOND
+* HOUR TO MINUTE
+* HOUR TO SECOND
+* MINUTE TO SECOND
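
The ISO 8601 duration syntax shown above can be sketched as a small parser (the `parse_duration` helper is illustrative, not part of Drill):

```python
import re

# P [qty] Y [qty] M [qty] D T [qty] H [qty] M [qty] S, every field optional
DURATION = re.compile(
    r'P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?'
    r'(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?)?$'
)

def parse_duration(text):
    """Return a dict of the fields present in an ISO 8601 duration string."""
    match = DURATION.match(text)
    if not match:
        raise ValueError(f"not an ISO 8601 duration: {text!r}")
    keys = ('years', 'months', 'days', 'hours', 'minutes', 'seconds')
    return {k: float(v) for k, v in zip(keys, match.groups()) if v is not None}

# Literals like those stored in the JSON data source examples below
print(parse_duration("P1Y1M1DT1H1M"))
print(parse_duration("P2D"))
```
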
+
+The following examples show the input and output format of INTERVALYEAR (Year, Month) and INTERVALDAY (Day, Hours, Minutes, Seconds, Milliseconds). The SELECT statements show how to format the query input. The output shows how to format the data in the data source.
 
     SELECT INTERVAL '1 10:20:30.123' day to second FROM sys.version;
     +------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/data-types/002-diff-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/data-types/002-diff-data-types.md b/_docs/sql-ref/data-types/002-diff-data-types.md
index 539e9a9..e81c435 100644
--- a/_docs/sql-ref/data-types/002-diff-data-types.md
+++ b/_docs/sql-ref/data-types/002-diff-data-types.md
@@ -6,7 +6,7 @@ parent: "Data Types"
 To query HBase data in Drill, convert every column of an HBase table to/from byte arrays from/to an SQL data type using CONVERT_TO or CONVERT_FROM. For examples of how to use these functions, see "Convert and Cast Functions".
 
 ## Handling Textual Data
-In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as previously mentioned. In addition to using the CAST function, you can also use to_char, to_date, to_number, and to_timestamp. If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, cast both the value of the column and the comparison value in the WHERE clause.
+In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as previously mentioned. In addition to using the CAST function, you can also use TO_CHAR, TO_DATE, TO_NUMBER, and TO_TIMESTAMP. If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, cast both the value of the column and the comparison value in the WHERE clause.
 
 ## Handling JSON and Parquet Data
 Complex and nested data structures in JSON and Parquet files are of map and array types.

http://git-wip-us.apache.org/repos/asf/drill/blob/d3328217/_docs/sql-ref/functions/002-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-ref/functions/002-conversion.md b/_docs/sql-ref/functions/002-conversion.md
index 780b397..635bb6b 100644
--- a/_docs/sql-ref/functions/002-conversion.md
+++ b/_docs/sql-ref/functions/002-conversion.md
@@ -10,7 +10,7 @@ Drill supports the following functions for casting and converting data types:
 
 ## CAST
 
-The CAST function converts an expression from one type to another.
+The CAST function converts an entity, such as an expression that evaluates to a single value, from one type to another.
 
 ### Syntax
 
@@ -40,7 +40,7 @@ Refer to the following tables for information about the data types to use for ca
 
 ### Examples
 
-The following examples show how to cast a string to a number, a number to a string, and one numerical type to another.
+The following examples show how to cast a string to a number, a number to a string, and one type of number to another.
 
 #### Casting a character string to a number
 You cannot cast a character string that includes a decimal point to an INT or BIGINT. For example, if you have "1200.50" in a JSON file, attempting to select and cast the string to an INT fails. As a workaround, cast to a FLOAT or DECIMAL type, and then to an INT. 
@@ -55,15 +55,7 @@ The following example shows how to cast a character to a DECIMAL having two deci
     +------------+
 
 #### Casting a number to a character string
-The first example shows that Drill uses a default limit of 1 character if you omit the VARCHAR limit: The result is truncated to 1 character.  The second example casts the same number to a VARCHAR having a limit of 3 characters: The result is a 3-character string, 456. The third example shows that you can use CHAR as an alias for VARCHAR. You can also use CHARACTER or CHARACTER VARYING.
-
-    SELECT CAST(456 as VARCHAR) FROM sys.version;
-    +------------+
-    |   EXPR$0   |
-    +------------+
-    | 4          |
-    +------------+
-    1 row selected (0.063 seconds)
+The first example shows Drill casting a number to a VARCHAR having a length of 3 bytes: The result is a 3-character string, 456. Drill also supports the CHAR and CHARACTER VARYING aliases.
 
     SELECT CAST(456 as VARCHAR(3)) FROM sys.version;
     +------------+
@@ -81,7 +73,7 @@ The first example shows that Drill uses a default limit of 1 character if you om
     +------------+
     1 row selected (0.093 seconds)
 
-#### Casting from one numerical type to another
+#### Casting from one type of number to another
 
 Cast an integer to a decimal.
 
@@ -101,19 +93,29 @@ To cast INTERVAL data use the following syntax:
     CAST (column_name AS INTERVAL DAY)
     CAST (column_name AS INTERVAL YEAR)
 
-For example, a JSON file contains the following objects:
+For example, a JSON file named intervals.json contains the following objects:
 
     { "INTERVALYEAR_col":"P1Y", "INTERVALDAY_col":"P1D", "INTERVAL_col":"P1Y1M1DT1H1M" }
     { "INTERVALYEAR_col":"P2Y", "INTERVALDAY_col":"P2D", "INTERVAL_col":"P2Y2M2DT2H2M" }
     { "INTERVALYEAR_col":"P3Y", "INTERVALDAY_col":"P3D", "INTERVAL_col":"P3Y3M3DT3H3M" }
 
-The following CTAS statement casts text from a JSON file to INTERVAL data types in a Parquet table:
+First, set the storage format to Parquet.
+
+        ALTER SESSION SET `store.format` = 'parquet';
+
+        +------------+------------+
+        |     ok     |  summary   |
+        +------------+------------+
+        | true       | store.format updated. |
+        +------------+------------+
+        1 row selected (0.037 seconds)
+
+Next, use a CTAS statement to cast text from a JSON file to year and day intervals and to write the data to a Parquet table:
 
     CREATE TABLE dfs.tmp.parquet_intervals AS 
-    (SELECT cast (INTERVAL_col as interval),
-           cast( INTERVALYEAR_col as interval year) INTERVALYEAR_col, 
-           cast( INTERVALDAY_col as interval day) INTERVALDAY_col 
-    FROM `/user/root/intervals.json`);
+    (SELECT CAST( INTERVALYEAR_col as interval year) INTERVALYEAR_col, 
+            CAST( INTERVALDAY_col as interval day) INTERVALDAY_col 
+    FROM dfs.`/Users/drill/intervals.json`);
 
 <!-- Text and include output -->
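+
+A follow-up query can verify that the intervals were written (a sketch; the actual output depends on your data):
+
+    SELECT * FROM dfs.tmp.parquet_intervals;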
 
@@ -124,43 +126,15 @@ data to and from another data type.
 
 ## Syntax  
 
-CONVERT_TO (column, type)
+    CONVERT_TO (column, type)
 
-CONVERT_FROM(column, type)
+    CONVERT_FROM(column, type)
 
 *column* is the name of a column Drill reads.
 
-*type* is one of the data types listed in the CONVERT_TO/FROM Data Types table.
-
-
-The following table provides the data types that you use with the CONVERT_TO
-and CONVERT_FROM functions:
-
-### CONVERT_TO/FROM Data Types
-
-**Type**| **Input Type**| **Output Type**  
----|---|---  
-BOOLEAN_BYTE| bytes(1)| boolean  
-TINYINT_BE| bytes(1)| tinyint  
-TINYINT| bytes(1)| tinyint  
-SMALLINT_BE| bytes(2)| smallint  
-SMALLINT| bytes(2)| smallint  
-INT_BE| bytes(4)| int  
-INT| bytes(4)| int  
-BIGINT_BE| bytes(8)| bigint  
-BIGINT| bytes(8)| bigint  
-FLOAT| bytes(4)| float (float4)  
-DOUBLE| bytes(8)| double (float8)  
-INT_HADOOPV| bytes(1-9)| int  
-BIGINT_HADOOPV| bytes(1-9)| bigint  
-DATE_EPOCH_BE| bytes(8)| date  
-DATE_EPOCH| bytes(8)| date  
-TIME_EPOCH_BE| bytes(8)| time  
-TIME_EPOCH| bytes(8)| time  
-UTF8| bytes| varchar  
-UTF16| bytes| var16char  
-UINT8| bytes(8)| uint8  
-  
+*type* is one of the data types listed in the [CONVERT_TO/FROM Data Types](/docs/data-types#convert_to-and-convert_from-data-types) table.
+
+
 ### Usage Notes
 
 You can use the CONVERT_TO and CONVERT_FROM functions to encode and decode data that is binary or complex. For example, HBase stores
@@ -169,7 +143,7 @@ data as encoded VARBINARY data. To read HBase data in Drill, convert every colum
 Do not use the CAST function for converting binary data types to other types. Although CAST works for converting VARBINARY to VARCHAR, CAST does not work in some other binary conversion cases. CONVERT functions work for binary conversions and are also more efficient to use than CAST.
 
 ## Usage Notes
-Use the CONVERT_TO function to change the data type to binary when sending data back to a binary data source, such as HBase, MapR, and Parquet, from a Drill query. CONVERT_TO also converts an SQL data type to complex types, including Hbase byte arrays, JSON and Parquet arrays, and maps. CONVERT_FROM converts from complex types, including Hbase arrays, JSON and Parquet arrays and maps to an SQL data type. 
+Use the CONVERT_TO function to change the data type to binary when sending data back to a binary data source, such as HBase, MapR, and Parquet, from a Drill query. CONVERT_TO also converts an SQL data type to complex types, including HBase byte arrays, JSON and Parquet arrays, and maps. CONVERT_FROM converts from complex types, including HBase arrays, JSON and Parquet arrays and maps to an SQL data type. 
 
 ### Examples
 
@@ -189,7 +163,7 @@ This example shows how to use the CONVERT_FROM function to convert complex HBase
     +------------+------------+------------+
     4 rows selected (1.335 seconds)
 
-You use the CONVERT_FROM function to decode the binary data to render it readable:
+You use the CONVERT_FROM function to decode the binary data to render it readable, selecting a data type to use from the [list of supported types](/docs/data-type-conversion/#convert_to-and-convert_from-data-types). JSON supports strings. To convert binary to strings, use the UTF8 type:
 
     SELECT CONVERT_FROM(row_key, 'UTF8') AS studentid, 
            CONVERT_FROM(students.account.name, 'UTF8') AS name, 
@@ -207,6 +181,36 @@ You use the CONVERT_FROM function to decode the binary data to render it readabl
     +------------+------------+------------+------------+------------+
     4 rows selected (0.504 seconds)
 
+This example converts from VARCHAR to a JSON map:
+
+    SELECT CONVERT_FROM('{x:100, y:215.6}' ,'JSON') AS MYCOL FROM sys.version;
+    +------------+
+    |   MYCOL    |
+    +------------+
+    | {"x":100,"y":215.6} |
+    +------------+
+    1 row selected (0.073 seconds)
+
+This example uses a list of BIGINT as input and returns a repeated list vector:
+
+    SELECT CONVERT_FROM('[ [1, 2], [3, 4], [5]]' ,'JSON') AS MYCOL1 FROM sys.version;
+    +------------+
+    |   mycol1   |
+    +------------+
+    | [[1,2],[3,4],[5]] |
+    +------------+
+    1 row selected (0.054 seconds)
+
+This example uses an array of maps as input to return a repeated map (JSON).
+
+    SELECT CONVERT_FROM('[{a : 100, b: 200}, {a:300, b: 400}]' ,'JSON') AS MYCOL1  FROM sys.version;
+    +------------+
+    |   MYCOL1   |
+    +------------+
+    | [{"a":100,"b":200},{"a":300,"b":400}] |
+    +------------+
+    1 row selected (0.074 seconds)
+
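+To go the other direction, CONVERT_TO with the JSON type encodes a value as JSON bytes. This sketch round-trips the earlier map literal (the result is VARBINARY unless it is decoded again):
+
+    SELECT CONVERT_TO(CONVERT_FROM('{x:100, y:215.6}', 'JSON'), 'JSON') AS MYCOL FROM sys.version;
+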
 #### Set up a storage plugin for working with HBase files
 
 This example assumes you are working in the Drill Sandbox. The `maprdb` storage plugin definition is limited, so you modify the `dfs` storage plugin slightly and use that plugin for this example.
@@ -244,13 +248,14 @@ This example assumes you are working in the Drill Sandbox. The `maprdb` storage
           }
         }
 
-#### Convert the binary HBase students table to JSON data.
+#### Convert the binary HBase students table to JSON data
+First, you set the storage format to JSON. Next, you use the CREATE TABLE AS SELECT (CTAS) statement to convert from a selected file of a different format, HBase in this example, to the storage format. You then convert the JSON file to Parquet using a similar procedure. Set the storage format to Parquet, and use a CTAS statement to convert to Parquet from JSON. In each case, you [select UTF8](/docs/data-type-conversion/#convert_to-and-convert_from-data-types) as the conversion type because the data you are converting from and then to consists of strings.
 
 1. Start Drill on the Drill Sandbox and set the default storage format from Parquet to JSON.
 
         ALTER SESSION SET `store.format`='json';
 
-2. Use CONVERT_FROM queries to convert the VARBINARY data in the HBase students table to JSON, and store the JSON data in a file. 
+2. Use CONVERT_FROM queries to convert the binary data in the HBase students table to JSON, and store the JSON data in a file. You select a data type to use from the supported types. JSON supports strings. To convert binary to strings, use the UTF8 type.
 
         CREATE TABLE tmp.`to_json` AS SELECT 
             CONVERT_FROM(row_key, 'UTF8') AS `studentid`, 
@@ -274,7 +279,7 @@ This example assumes you are working in the Drill Sandbox. The `maprdb` storage
 
         0_0_0.json
 
-5. Take a look at the output om `to_json`:
+5. Take a look at the output of `to_json`:
 
         {
           "studentid" : "student1",
@@ -361,42 +366,17 @@ This example assumes you are working in the Drill Sandbox. The `maprdb` storage
         4 rows selected (0.182 seconds)
 
 ## Other Data Type Conversions
-In addition to the CAST, CONVERT_TO, and CONVERT_FROM functions, Drill supports data type conversion functions to perform the following conversions:
+Drill supports the formats for date and time literals shown in the following examples:
 
-* A timestamp, integer, decimal, or double to a character string.
-* A character string to a date
-* A character string to a number
-
-## Time Zone Limitation
-Currently Drill does not support conversion of a date, time, or timestamp from one time zone to another. The workaround is to configure Drill to use [UTC](http://www.timeanddate.com/time/aboututc.html)-based time, convert your data to UTC timestamps, and perform date/time operation in UTC.  
-
-1. Take a look at the Drill time zone configuration by running the TIMEOFDAY function. This function returns the local date and time with time zone information.
+* 2008-12-15
 
-        SELECT TIMEOFDAY() FROM sys.version;
+* 22:55:55.123...
 
-        +------------+
-        |   EXPR$0   |
-        +------------+
-        | 2015-04-02 15:01:31.114 America/Los_Angeles |
-        +------------+
-        1 row selected (1.199 seconds)
+If you have dates and times in other formats, use a data type conversion function to perform the following conversions:
 
-2. Configure the default time zone format in <drill installation directory>/conf/drill-env.sh by adding `-Duser.timezone=UTC` to DRILL_JAVA_OPTS. For example:
-
-        export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea -Duser.timezone=UTC"
-
-3. Restart sqlline.
-
-4. Confirm that Drill is now set to UTC:
-
-        SELECT TIMEOFDAY() FROM sys.version;
-
-        +------------+
-        |   EXPR$0   |
-        +------------+
-        | 2015-04-02 17:05:02.424 UTC |
-        +------------+
-        1 row selected (1.191 seconds)
+* A timestamp, integer, decimal, or double to a character string
+* A character string to a date
+* A character string to a number
 
 The following table lists data type formatting functions that you can
 use in your Drill queries as described in this section:
@@ -412,21 +392,8 @@ TO_NUMBER(text, format)| numeric
 TO_TIMESTAMP(text, format)| timestamp
 TO_TIMESTAMP(double precision)| timestamp
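+
+For instance, a hedged TO_NUMBER sketch (the pattern and input value are illustrative):
+
+    SELECT TO_NUMBER('1234.56', '####.##') FROM sys.version;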
 
-You can use the ‘z’ option to identify the time zone in TO_TIMESTAMP to make sure the timestamp has the timezone in it. Also, use the ‘z’ option to identify the time zone in a timestamp using the TO_CHAR function. For example:
-
-    SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z') AS Original, 
-           TO_CHAR(TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z'), 'z') AS TimeZone 
-           FROM sys.version;
-
-    +------------+------------+
-    |  Original  |  TimeZone  |
-    +------------+------------+
-    | 2015-03-30 20:49:00.0 | UTC        |
-    +------------+------------+
-    1 row selected (0.299 seconds)
-
 ### Format Specifiers for Numerical Conversions
-Use the following format specifiers for numerical conversions:
+Use the following format specifiers for converting numbers:
 <table >
      <tr >
           <th align=left>Symbol
@@ -637,7 +604,7 @@ For more information about specifying a format, refer to one of the following fo
 
 ## TO_CHAR
 
-TO_CHAR converts a date, time, timestamp, or numerical expression to a character string.
+TO_CHAR converts a number, date, time, or timestamp expression to a character string.
 
 ### Syntax
 
@@ -647,6 +614,9 @@ TO_CHAR converts a date, time, timestamp, or numerical expression to a character
 
 *'format'* is a format specifier enclosed in single quotation marks that sets a pattern for the output formatting. 
 
+### Usage Notes
+
+You can use the ‘z’ format specifier with TO_TIMESTAMP to include the time zone in the timestamp, as shown in the TO_TIMESTAMP description.
 
 ### Examples
 
@@ -910,6 +880,49 @@ Convert a UTC date to a timestamp offset from the UTC time zone code.
     +------------+------------+
     1 row selected (0.129 seconds)
 
+## Time Zone Limitation
+Currently Drill does not support conversion of a date, time, or timestamp from one time zone to another. The workaround is to configure Drill to use [UTC](http://www.timeanddate.com/time/aboututc.html)-based time, convert your data to UTC timestamps, and perform date/time operations in UTC.  
+
+1. Take a look at the Drill time zone configuration by running the TIMEOFDAY function. This function returns the local date and time with time zone information.
+
+        SELECT TIMEOFDAY() FROM sys.version;
+
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2015-04-02 15:01:31.114 America/Los_Angeles |
+        +------------+
+        1 row selected (1.199 seconds)
+
+2. Configure the default time zone format in <drill installation directory>/conf/drill-env.sh by adding `-Duser.timezone=UTC` to DRILL_JAVA_OPTS. For example:
+
+        export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea -Duser.timezone=UTC"
+
+3. Restart sqlline.
+
+4. Confirm that Drill is now set to UTC:
+
+        SELECT TIMEOFDAY() FROM sys.version;
+
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2015-04-02 17:05:02.424 UTC |
+        +------------+
+        1 row selected (1.191 seconds)
+
+You can use the ‘z’ option with TO_TIMESTAMP to make sure the timestamp includes the time zone. Similarly, use the ‘z’ option to extract the time zone from a timestamp using the TO_CHAR function. For example:
+
+    SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z') AS Original, 
+           TO_CHAR(TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z'), 'z') AS TimeZone 
+           FROM sys.version;
+
+    +------------+------------+
+    |  Original  |  TimeZone  |
+    +------------+------------+
+    | 2015-03-30 20:49:00.0 | UTC        |
+    +------------+------------+
+    1 row selected (0.299 seconds)
 
 <!-- DRILL-448 Support timestamp with time zone -->