Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/19 08:50:42 UTC

[1/2] drill git commit: sandbox updates

Repository: drill
Updated Branches:
  refs/heads/gh-pages cb925342f -> d22ac4af0


sandbox updates


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/e5d81270
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/e5d81270
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/e5d81270

Branch: refs/heads/gh-pages
Commit: e5d812708a9abcdb278217e8f34d55672719b35a
Parents: 134b6ff
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 18 23:44:18 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 18 23:44:18 2015 -0700

----------------------------------------------------------------------
 _docs/img/loginSandBox.png                      | Bin 67090 -> 26594 bytes
 _docs/img/vbImport.png                          | Bin 85744 -> 95211 bytes
 _docs/img/vbMaprSetting.png                     | Bin 56436 -> 77126 bytes
 _docs/img/vbNetwork.png                         | Bin 30826 -> 61818 bytes
 .../data-types/010-supported-data-types.md      |  63 +++--
 .../data-types/020-date-time-and-timestamp.md   |  70 +++---
 .../sql-functions/020-data-type-conversion.md   |   6 +-
 .../010-installing-the-apache-drill-sandbox.md  |  46 +---
 .../020-getting-to-know-the-drill-sandbox.md    |  45 ++--
 .../030-lesson-1-learn-about-the-data-set.md    | 249 ++++++++++---------
 .../040-lesson-2-run-queries-with-ansi-sql.md   | 223 +++++++++--------
 ...esson-3-run-queries-on-complex-data-types.md | 235 ++++++++---------
 12 files changed, 478 insertions(+), 459 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/img/loginSandBox.png
----------------------------------------------------------------------
diff --git a/_docs/img/loginSandBox.png b/_docs/img/loginSandBox.png
index 5727ea4..07a6dfc 100644
Binary files a/_docs/img/loginSandBox.png and b/_docs/img/loginSandBox.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/img/vbImport.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbImport.png b/_docs/img/vbImport.png
index a8ed45b..197f419 100644
Binary files a/_docs/img/vbImport.png and b/_docs/img/vbImport.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/img/vbMaprSetting.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbMaprSetting.png b/_docs/img/vbMaprSetting.png
index b7720e3..73e0304 100644
Binary files a/_docs/img/vbMaprSetting.png and b/_docs/img/vbMaprSetting.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/img/vbNetwork.png
----------------------------------------------------------------------
diff --git a/_docs/img/vbNetwork.png b/_docs/img/vbNetwork.png
index cbb36f1..db72a7f 100644
Binary files a/_docs/img/vbNetwork.png and b/_docs/img/vbNetwork.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 7ffa85e..e170f59 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -19,7 +19,7 @@ Drill reads from and writes to data sources having a wide variety of types. Dril
 | SMALLINT**                                        | 2-byte signed integer in the range -32,768 to 32,767                                                                 | 32000                                                                          |
 | TIME                                              | 24-hour based time before or after January 1, 2001 in hours, minutes, seconds format: HH:mm:ss                       | 22:55:55.23                                                                    |
 | TIMESTAMP                                         | JDBC timestamp in year, month, date, hour, minute, second, and optional milliseconds format: yyyy-MM-dd HH:mm:ss.SSS | 2015-12-30 22:55:55.23                                                         |
-| CHARACTER VARYING, CHARACTER, CHAR, or VARCHAR*** | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
+| CHARACTER VARYING, CHARACTER, CHAR,*** or VARCHAR | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
 
 
 \* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.  
@@ -28,7 +28,7 @@ Drill reads from and writes to data sources having a wide variety of types. Dril
 
 ## Enabling the DECIMAL Type
 
-To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`. Enable the DECIMAL data type if performance is not an issue.
+To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`. The DECIMAL type is an alpha feature and is not recommended for production use.
 
      ALTER SYSTEM SET `planner.enable_decimal_data_type` = true;
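
 A session-scoped alternative is also possible (a sketch; the same option set at session scope, as the tutorial lessons do):

      ALTER SESSION SET `planner.enable_decimal_data_type` = true;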
 
@@ -64,25 +64,29 @@ In some cases, Drill converts schema-less data to correctly-typed data implicitl
 * Text: CSV, TSV, and other text  
   Implicitly casts all textual data to VARCHAR.
 
-## Precedence of Data Types
+## Implicit Casting Precedence of Data Types
 
-The following list includes data types Drill uses in descending order of precedence. As shown in the table, you can cast a NULL value, which has the lowest precedence, to any other type; you can cast a SMALLINT (not supported in this release) value to INT. You cannot cast an INT value to SMALLINT due to possible precision loss. Drill might deviate from these precedence rules for performance reasons. Under certain circumstances, such as queries involving SUBSTR and CONCAT functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT.
+The following table lists data types in descending order of precedence. This precedence applies to the implicit casting that Drill performs. For example, Drill might implicitly cast data when a query includes a function or filter on mismatched data types:
+
+    SELECT myBigInt FROM mytable WHERE myBigInt = 2.5;
+
+As shown in the table, you can cast a NULL value, which has the lowest precedence, to any other type; you can cast a SMALLINT (not supported in this release) value to INT. Drill might deviate from these precedence rules for performance reasons. Under certain circumstances, such as queries involving SUBSTR and CONCAT functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT.
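+
+For example, in the following sketch (reusing the hypothetical mytable and a BIGINT column myBigInt), Drill casts the BIGINT value to VARCHAR so that SUBSTR can operate on it:
+
+    SELECT SUBSTR(myBigInt, 1, 2) FROM mytable;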
 
 ### Casting Precedence
 
-| Precedence | Data Type              | Precedence | Data Type   |
-|------------|------------------------|------------|-------------|
-| 1          | INTERVALYEAR (highest) | 11         | INT         |
-| 2          | INTERVLADAY            | 12         | UINT2       |
-| 3          | TIMESTAMP              | 13         | SMALLINT*   |
-| 4          | DATE                   | 14         | UINT1       |
-| 5          | TIME                   | 15         | VAR16CHAR   |
-| 6          | DOUBLE                 | 16         | FIXED16CHAR |
-| 7          | DECIMAL                | 17         | VARCHAR     |
-| 8          | UINT8                  | 18         | CHAR        |
-| 9          | BIGINT                 | 19         | VARBINARY   |
-| 10         | UINT4                  | 20         | FIXEDBINARY |
-| 21         | NULL (lowest)          |            |             |
+| Precedence | Data Type              | Precedence |    Data Type   |
+|------------|------------------------|------------|----------------|
+| 1          | INTERVALYEAR (highest) | 11         | INT            |
+| 2          | INTERVALDAY            | 12         | UINT2          |
+| 3          | TIMESTAMP              | 13         | SMALLINT*      |
+| 4          | DATE                   | 14         | UINT1          |
+| 5          | TIME                   | 15         | VAR16CHAR      |
+| 6          | DOUBLE                 | 16         | FIXED16CHAR    |
+| 7          | DECIMAL                | 17         | VARCHAR        |
+| 8          | UINT8                  | 18         | CHAR           |
+| 9          | BIGINT                 | 19         | VARBINARY      |
+| 10         | UINT4                  | 20         | FIXEDBINARY    |
+| 21         | NULL (lowest)          |            |                |
 
 \* Not supported in this release.
 
@@ -114,22 +118,26 @@ The following tables show data types that Drill can cast to/from other data type
 | To            | SMALLINT | INT | BIGINT | DECIMAL | FLOAT | CHAR | FIXEDBINARY | VARCHAR | VARBINARY |
 |---------------|----------|-----|--------|---------|-------|------|-------------|---------|-----------|
 | **From:**     |          |     |        |         |       |      |             |         |           |
-| SMALLINT*     |          | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
-| INT           | yes      | no  | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
+| SMALLINT*     | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
+| INT           | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
 | BIGINT        | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
 | DECIMAL       | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
 | DOUBLE        | yes      | yes | yes    | yes     | yes   | yes  | no          | yes     | no        |
-| FLOAT         | yes      | yes | yes    | yes     | no    | yes  | no          | yes     | no        |
-| CHAR          | yes      | yes | yes    | yes     | yes   | no   | yes         | yes     | yes       |
+| FLOAT         | yes      | yes | yes    | yes     | yes   | yes  | no          | yes     | no        |
+| CHAR          | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
 | FIXEDBINARY** | yes      | yes | yes    | yes     | yes   | no   | no          | yes     | yes       |
-| VARCHAR***    | yes      | yes | yes    | yes     | yes   | yes  | yes         | no      | yes       |
+| VARCHAR***    | yes      | yes | yes    | yes     | yes   | yes  | yes         | no      | no        |
 | VARBINARY**   | yes      | yes | yes    | yes     | yes   | no   | yes         | yes     | no        |
 
 
 \* Not supported in this release.   
-\*\* Used to cast binary data coming to/from sources such as MapR-DB/HBase.   
+\*\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase.   
 \*\*\* You cannot convert a character string having a decimal point to an INT or BIGINT.   
 
+{% include startnote.html %}The CAST function does not support all representations of FIXEDBINARY. Only the UTF-8 format is supported. {% include endnote.html %}
+
+If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, such as big endian, use the CONVERT_TO/FROM functions instead of CAST.
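+
+For example, a sketch that decodes a big endian integer (assuming a VARBINARY column named mycolumn in a table named mytable):
+
+    SELECT CONVERT_FROM(mycolumn, 'INT_BE') FROM mytable;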
+
 ### Date and Time Data Types
 
 | To:          | DATE | TIME | TIMESTAMP | INTERVALDAY | INTERVALYEAR | INTERVALDAY |
@@ -145,12 +153,15 @@ The following tables show data types that Drill can cast to/from other data type
 | INTERVALYEAR | Yes  | No   | Yes       | Yes         | No           | Yes         |
 | INTERVALDAY  | Yes  | No   | Yes       | Yes         | Yes          | No          |
 
-\* Used to cast binary data coming to/from sources such as MapR-DB/HBase.   
+\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase.   
 
 ## CONVERT_TO and CONVERT_FROM Data Types
 
-You use the CONVERT_TO and CONVERT_FROM data types as arguments to the CONVERT_TO
-and CONVERT_FROM functions. CONVERT_FROM and CONVERT_TO methods transform a known binary representation/encoding to a Drill internal format. 
+CONVERT_TO converts data to binary from the input type. CONVERT_FROM converts data from binary to the input type. For example, the following CONVERT_TO function converts an integer to VARBINARY in big endian format:
+
+    CONVERT_TO(mycolumn, 'INT_BE')
+
+The CONVERT_FROM and CONVERT_TO functions transform a known binary representation/encoding to a Drill internal format. 
 
 We recommend storing HBase/MapR-DB data in a binary representation rather than
 a string representation. Use the \*\_BE types to store integer data types in an HBase or MapR-DB table. INT is a 4-byte little endian signed integer. INT_BE is a 4-byte big endian signed integer. The comparison order of \*\_BE encoded bytes is the same as the integer value itself if the bytes are unsigned or positive. Using a *_BE type facilitates scan range pruning and filter pushdown into the HBase scan. 
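
 For example, a sketch of writing a big endian key with CTAS (the table and column names here are illustrative, not part of the sandbox):

      CREATE TABLE dfs.tmp.orders_be AS
      SELECT CONVERT_TO(CAST(order_id AS INT), 'INT_BE') AS order_id_be
      FROM dfs.tmp.orders;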

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
index 245f659..33f3cf8 100644
--- a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
+++ b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
@@ -4,40 +4,6 @@ parent: "Data Types"
 ---
 Using familiar date and time formats, listed in the [SQL data types table]({{ site.baseurl }}/docs/data-types/data-types), you can construct query date and time data. You need to cast textual data to date and time data types. The format of date, time, and timestamp text in a textual data source needs to match the SQL query format for successful casting. 
 
-DATE, TIME, and TIMESTAMP store values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.
-
-Currently, Drill does not support casting a TIMESTAMP with time zone, but you can use the [TO_TIMESTAMP function]({{ site.baseurl }}/docs/casting/converting-data-types#to_timestamp) in a query to use time stamp data having a time zone.
-
-Next, use the following literals in a SELECT statement. 
-
-* `date`
-* `time`
-* `timestamp`
-
-        SELECT date '2010-2-15' FROM sys.version;
-        +------------+
-        |   EXPR$0   |
-        +------------+
-        | 2010-02-15 |
-        +------------+
-        1 row selected (0.083 seconds)
-
-        SELECT time '15:20:30' from sys.version;
-        +------------+
-        |   EXPR$0   |
-        +------------+
-        | 15:20:30   |
-        +------------+
-        1 row selected (0.067 seconds)
-
-        SELECT timestamp '2015-03-11 6:50:08' FROM sys.version;
-        +------------+
-        |   EXPR$0   |
-        +------------+
-        | 2015-03-11 06:50:08.0 |
-        +------------+
-        1 row selected (0.071 seconds)
-
 ## INTERVALYEAR and INTERVALDAY
 
 The INTERVALYEAR and INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
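
 A sketch of an interval literal in a query (the value is illustrative):

      SELECT interval '1-2' year to month FROM sys.version;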
@@ -112,4 +78,40 @@ The following examples show the input and output format of INTERVALYEAR (Year, M
 
 For information about casting interval data, see the ["CAST"]({{ site.baseurl }}/docs/data-type-conversion#cast) function.
 
+## DATE, TIME, and TIMESTAMP
+
+DATE, TIME, and TIMESTAMP store values in Coordinated Universal Time (UTC). Drill supports time functions in the range 1971 to 2037.
+
+Drill does not support TIMESTAMP with time zone; however, if your data includes the time zone, use the [TO_TIMESTAMP function]({{ site.baseurl }}/docs/casting/converting-data-types#to_timestamp) and [Joda format specifiers]({{site.baseurl}}/docs/data-type-conversion/#format-specifiers-for-date/time-conversions), as shown in the examples in ["Time Zone Limitation"]({{site.baseurl}}/docs/data-type-conversion/#time-zone-limitation).
+
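+A sketch of the time-zone pattern (the string literal and Joda pattern here are illustrative):
+
+    SELECT TO_TIMESTAMP('2015-03-30 20:49:59.0 UTC', 'YYYY-MM-dd HH:mm:ss.s z') FROM sys.version;
+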
+Use the following literals in a SELECT statement:
+
+* `date`
+* `time`
+* `timestamp`
+
+        SELECT date '2010-2-15' FROM sys.version;
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2010-02-15 |
+        +------------+
+        1 row selected (0.083 seconds)
+
+        SELECT time '15:20:30' from sys.version;
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 15:20:30   |
+        +------------+
+        1 row selected (0.067 seconds)
+
+        SELECT timestamp '2015-03-11 6:50:08' FROM sys.version;
+        +------------+
+        |   EXPR$0   |
+        +------------+
+        | 2015-03-11 06:50:08.0 |
+        +------------+
+        1 row selected (0.071 seconds)
+
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/sql-reference/sql-functions/020-data-type-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/020-data-type-conversion.md b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
index 82e8931..55bd4fb 100644
--- a/_docs/sql-reference/sql-functions/020-data-type-conversion.md
+++ b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
@@ -390,7 +390,7 @@ use in your Drill queries as described in this section:
 [TO_TIMESTAMP](#TO_TIMESTAMP)(DOUBLE)| TIMESTAMP
 
 ### Format Specifiers for Numerical Conversions
-Use the following format specifiers for converting numbers:
+Use the following Java format specifiers for converting numbers:
 
 | Symbol     | Location            | Meaning                                                                                                                                                                                              |
 |------------|---------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -408,7 +408,7 @@ Use the following format specifiers for converting numbers:
 
 ### Format Specifiers for Date/Time Conversions
 
-Use the following format specifiers for date/time conversions:
+Use the following Joda format specifiers for date/time conversions:
 
 | Symbol | Meaning                                          | Presentation | Examples                           |
 |--------|--------------------------------------------------|--------------|------------------------------------|
@@ -438,7 +438,7 @@ Use the following format specifiers for date/time conversions:
 For more information about specifying a format, refer to one of the following format specifier documents:
 
 * [Java DecimalFormat class](http://docs.oracle.com/javase/7/docs/api/java/text/DecimalFormat.html) format specifiers 
-* [Java DateTimeFormat class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) format specifiers
+* [Joda DateTimeFormat class](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) format specifiers
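+
+For instance, a sketch that exercises one specifier from each family (the literals and patterns are illustrative):
+
+    SELECT TO_NUMBER('1,234,567', '#,###') FROM sys.version;
+    SELECT TO_DATE('2015-05-18', 'yyyy-MM-dd') FROM sys.version;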
 
 ## TO_CHAR
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
index 0081fc5..875dd4b 100755
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
@@ -44,15 +44,10 @@ VMware Player or VMware Fusion:
     If you are running VMware Fusion, select **Import**.  
 
     ![drill query flow]({{ site.baseurl }}/docs/img/vmWelcome.png)
-3. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill file, and select `MapR-Sandbox-For-Apache-Drill-4.0.1_VM.ova`.
-
-    ![drill query flow]({{ site.baseurl }}/docs/img/vmShare.png)
-
+3. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill file, and select `MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0-vmware.ova`.  
     The Import Virtual Machine dialog appears.
-4. Click **Import**. The virtual machine player imports the sandbox.
-
-    ![drill query flow]({{ site.baseurl }}/docs/img/vmLibrary.png)
-5. Select `MapR-Sandbox-For-Apache-Drill-4.0.1_VM`, and click **Play virtual machine**. It takes a few minutes for the MapR services to start.  
+4. Click **Import**. The virtual machine player imports the sandbox.  
+5. Select `MapR-Sandbox-For-Apache-Drill-4.1.0_VM`, and click **Play virtual machine**. It takes a few minutes for the MapR services to start.  
 
      After the MapR services start and installation completes, the following screen
 appears:
@@ -67,10 +62,7 @@ Drill.
      
     For example: `127.0.1.1 <vm_hostname>`
 
-7. You can navigate to the URL provided to experience Drill Web UI or you can login to the sandbox through the command line.  
-
-    a. To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
-    b. To login to the virtual machine and access the command line, press Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
+7. Navigate to [localhost:8047](http://localhost:8047) to experience the Drill Web UI, or log into the sandbox through the command line using ssh, as described in ["Getting to Know the Sandbox"]({{site.baseurl}}/docs/getting-to-know-the-drill-sandbox). When prompted, enter `mapr` as the login name and password. Alternatively, access the command line on the VM: press Alt+F2 on Windows or Option+F5 on Mac.  
 
 ### What's Next
 
@@ -92,48 +84,30 @@ VirtualBox:
 3. Select **File > Import Appliance**. The Import Virtual Appliance dialog appears.
 
      ![drill query flow]({{ site.baseurl }}/docs/img/vbImport.png)
-4. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill and click **Next**. The Appliance Settings window appears.
+4. Navigate to the directory where you downloaded the MapR Sandbox with Apache Drill, select `MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0.ova`, and click **Next**. The Appliance Settings window appears.
 
      ![drill query flow]({{ site.baseurl }}/docs/img/vbApplSettings.png)
 5. Select the check box at the bottom of the screen: **Reinitialize the MAC address of all network cards**, then click **Import**. The Import Appliance imports the sandbox.
-6. When the import completes, select **File > Preferences**. The VirtualBox - Settings dialog appears.
+6. When the import completes, select **Settings**. The VirtualBox - Settings dialog appears.
 
      ![drill query flow]({{ site.baseurl }}/docs/img/vbNetwork.png)
 7. Select **Network**.  
 
     The correct setting depends on your network connectivity when you run the
 Sandbox. In general, if you are going to use a wired Ethernet connection,
-select **NAT Networks** and **vboxnet0**. If you are going to use a wireless
-network, select **Host-only Networks** and the **VirtualBox Host-Only Ethernet
-Adapter**. If no adapters appear, click the **green** + **button** to add the
-VirtualBox adapter.
+select **NAT Network**. If you are going to use a wireless
+network, select **Host-only Networks** and the **Host-Only Ethernet Adapter**. 
 
      ![drill query flow]({{ site.baseurl }}/docs/img/vbMaprSetting.png)
 8. Click **OK** to continue.
-9. Click Settings.
 
-    ![settings icon]({{ site.baseurl }}/docs/img/settings.png)  
-   The MapR-Sandbox-For-Apache-Drill - Settings dialog appears.
-     
-     ![drill query flow]({{ site.baseurl }}/docs/img/vbGenSettings.png)    
-10. Click **OK** to continue.
-11. Click **Start**. It takes a few minutes for the MapR services to start.   
+9. Click **Start**. It takes a few minutes for the MapR services to start.   
  
       After the MapR services start and installation completes, the following screen appears:
       
        ![drill query flow]({{ site.baseurl }}/docs/img/vbloginSandBox.png)
-12. The client must be able to resolve the actual hostname of the Drill node(s) with the IP(s). Verify that a DNS entry was created on the client machine for the Drill node(s).  
- 
-     If a DNS entry does not exist, create the entry for the Drill node(s).
-     * For Windows, create the entry in the %WINDIR%\system32\drivers\etc\hosts file.
-     * For Linux and Mac, create the entry in /etc/hosts.  
-<drill-machine-IP> <drill-machine-hostname>  
-  
-     Example: `127.0.1.1 maprdemo`
-13. You can navigate to the URL provided or to [localhost:8047](http://localhost:8047) to experience the Drill Web UI, or you can log into the sandbox through the command line.  
 
-    a. To navigate to the MapR Sandbox with Apache Drill, enter the provided URL in your browser's address bar.  
-    b. To log into the virtual machine and access the command line, enter Alt+F2 on Windows or Option+F5 on Mac. When prompted, enter `mapr` as the login name and password.
+10. Navigate to [localhost:8047](http://localhost:8047) to experience the Drill Web UI, or log into the sandbox through the command line using ssh, as described in ["Getting to Know the Sandbox"]({{site.baseurl}}/docs/getting-to-know-the-drill-sandbox). When prompted, enter `mapr` as the login name and password. Alternatively, access the command line on the VM: press Alt+F2 on Windows or Option+F5 on Mac.  
 
 ### What's Next
 

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
index cb679bf..7959d81 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
@@ -12,25 +12,23 @@ example:
 
 Using the secure shell instead of the VM interface has some advantages. You can copy/paste commands from the tutorial and avoid mouse control problems.
 
-Drill includes a shell for connecting to relational databases and executing SQL commands. On the sandbox, the Drill shell runs in embedded mode. After logging into the sandbox,  use the `SQLLine` command to start SQLLine for executing Drill queries in embedded mode.  
+Drill includes a shell for connecting to relational databases and executing SQL commands. On the sandbox, the Drill shell runs in embedded mode. After logging into the sandbox, use the `sqlline` command. The Drill shell appears, and you can run Drill queries.  
 
-    [mapr@maprdemo ~]# sqlline
-    sqlline version 1.1.6
+    [mapr@maprdemo ~]$ sqlline
+    apache drill 1.0.0 
+    "the only truly happy people are children, the creative minority and drill users"
     0: jdbc:drill:>
 
 In this tutorial you query a number of data sets, including Hive and HBase, and files on the file system, such as CSV, JSON, and Parquet files. To access these diverse data sources, you connect Drill to storage plugins. 
 
 ## Storage Plugin Overview
-This section describes storage plugins included in the sandbox. For general information about Drill storage plugins, see ["Connect to a Data Source"]({{ site.baseurl }}/docs/connect-a-data-source-introduction).
-Take a look at the pre-configured storage plugins for the sandbox by opening the Storage tab in the Drill Web UI. Launch a web browser and go to: `http://<IP address>:8047/storage`. For example:
-
-    http://localhost:8047/storage
+You use a [storage plugin]({{ site.baseurl }}/docs/connect-a-data-source-introduction) to connect to a data source, such as a file or the Hive metastore. Take a look at the storage plugin definitions by opening the Storage tab in the Drill Web UI. Launch a web browser and go to `http://<IP address>:8047/storage`. For example: `http://localhost:8047/storage`.
 
 The control panel for managing storage plugins appears.
 
 ![sandbox plugin]({{ site.baseurl }}/docs/img/get2kno_plugin.png)
 
-You see that the following storage plugin controls:
+You see the following storage plugin controls:
 
 * cp
 * dfs
@@ -39,11 +37,9 @@ You see that the following storage plugin controls:
 * hbase
 * mongo
 
-Click Update to look at a configuration. 
-
-In some cases, the storage plugin defined for use in the sandbox differs from the default storage plugin of the same name in a Drill installation as described in the following sections. Typically you create a storage plugin or customize an existing one for analyzing a particular data source. 
+Click Update to examine a configuration. 
 
-The tutorial uses the dfs, hive, maprdb, and hbase storage plugins. 
+If you've used an installation of Drill before using the sandbox, you might notice that a few storage plugins in the sandbox differ from the plugins of the same name in a Drill installation. The sandbox versions of the dfs, hive, maprdb, and hbase storage plugin definitions play a role in simulating the cluster environment for running the tutorial. 
 
 ### dfs
 
@@ -85,11 +81,13 @@ The `dfs` definition includes format definitions.
          "delimiter": ","
       },
      . . .
-      "json": {
-       "type": "json"
-       }
+       "json": {
+          "type": "json"
+      },
+       "maprdb": {
+          "type": "maprdb"
       }
-    }
+     . . .
 
 ### maprdb
 
@@ -97,12 +95,13 @@ The maprdb storage plugin is a configuration for MapR-DB in the sandbox. You use
 information on how to configure Drill to query HBase.
 
     {
-      "type" : "hbase",
-      "enabled" : true,
-      "config" : {
-        "hbase.table.namespace.mappings" : "*:/tables"
-      }
-     }
+      "type": "hbase",
+      "config": {
+        "hbase.table.namespace.mappings": "*:/tables"
+      },
+      "size.calculator.enabled": false,
+      "enabled": true
+    }
 
 ### hive
 
@@ -110,7 +109,7 @@ The hive storage plugin is a configuration for a Hive data warehouse within the
 Drill connects to the Hive metastore by using the configured metastore thrift
 URI. Metadata for Hive tables is automatically available for users to query.
 
-     {
+    {
       "type": "hive",
       "enabled": true,
       "configProps": {

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
index 9344006..e15e878 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
@@ -24,41 +24,52 @@ This lesson consists of select * queries on each data source.
 
 ### Start the Drill Shell
 
-If the Drill shell is not already started, use a Terminal or Command window to log
-into the demo VM as root, then enter `sqlline`, as described in ["Getting to Know the Sandbox"]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox):
+If the Drill shell is not already started, use a Terminal or Command Prompt to log
+into the demo VM as mapr, then enter `sqlline`, as described in ["Getting to Know the Sandbox"]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox).
 
 You can run queries to complete the tutorial. To exit from
 the Drill shell, type:
 
-    0: jdbc:drill:> !quit
+`0: jdbc:drill:> !quit`
 
 Examples in this tutorial use the Drill shell. You can also execute queries using the Drill Web UI.
 
+### Enable the DECIMAL Data Type
+
+This tutorial uses the DECIMAL data type in some examples. The DECIMAL data type is disabled by default in this release, so enable it before proceeding:
+
+    alter session set `planner.enable_decimal_data_type`=true;
+
+    +-------+--------------------------------------------+
+    |  ok   |                  summary                   |
+    +-------+--------------------------------------------+
+    | true  | planner.enable_decimal_data_type updated.  |
+    +-------+--------------------------------------------+
+    1 row selected 
+
 ### List the available workspaces and databases:
 
     0: jdbc:drill:> show databases;
-    +-------------+
-    | SCHEMA_NAME |
-    +-------------+
-    | hive.default |
-    | dfs.default |
-    | dfs.logs    |
-    | dfs.root    |
-    | dfs.views   |
-    | dfs.clicks  |
-    | dfs.tmp     |
-    | sys         |
-    | maprdb      |
-    | cp.default  |
-    | INFORMATION_SCHEMA |
-    +-------------+
-    12 rows selected
+    +---------------------+
+    |     SCHEMA_NAME     |
+    +---------------------+
+    | INFORMATION_SCHEMA  |
+    | cp.default          |
+    | dfs.clicks          |
+    | dfs.default         |
+    | dfs.logs            |
+    | dfs.root            |
+    | dfs.tmp             |
+    | dfs.views           |
+    | hive.default        |
+    | maprdb              |
+    | sys                 |
+    +---------------------+
 
 This command exposes all the metadata available from the storage
 plugins configured with Drill as a set of schemas. The Hive and
 MapR-DB databases, file system, and other data are configured in the file system. As
-you run queries in the tutorial, you will switch among these schemas by
-submitting the USE command. This behavior resembles the ability to use
+you run queries in the tutorial, you issue the USE command to switch among these schemas. Switching schemas in this way resembles using
 different database schemas (namespaces) in a relational database system.
 
 ## Query Hive Tables
@@ -69,12 +80,13 @@ MapR file system. The orders table contains 122,000 rows.
 
 ### Set the schema to hive:
 
-    0: jdbc:drill:> use hive;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'hive' |
-    +------------+------------+
+    0: jdbc:drill:> use hive.`default`;
+    +-------+-------------------------------------------+
+    |  ok   |                  summary                  |
+    +-------+-------------------------------------------+
+    | true  | Default schema changed to [hive.default]  |
+    +-------+-------------------------------------------+
+    1 row selected
 
 You will run the USE command throughout this tutorial. The USE command sets
 the schema for the current session.
@@ -118,7 +130,7 @@ the standard LIMIT clause, which limits the result set to the specified number
 of rows. You can use LIMIT with or without an ORDER BY clause.
 
 Drill provides seamless integration with Hive by allowing queries on Hive
-tables defined in the metastore with no extra configuration. Note that Hive is
+tables defined in the metastore with no extra configuration. Hive is
 not a prerequisite for Drill, but simply serves as a storage plugin or data
 source for Drill. Drill also lets users query all Hive file formats (including
 custom serdes). Additionally, any UDFs defined in Hive can be leveraged as
@@ -136,11 +148,10 @@ development. Every MapR-DB table has a row_key, in addition to one or more
 column families. Each column family contains one or more specific columns. The
 row_key value is a primary key that uniquely identifies each row.
 
-Drill allows direct queries on MapR-DB and HBase tables. Unlike other SQL on
+Drill directly queries MapR-DB and HBase tables. Unlike other SQL on
 Hadoop options, Drill requires no overlay schema definitions in Hive to work
-with this data. Think about a MapR-DB or HBase table with thousands of
-columns, such as a time-series database, and the pain of having to manage
-duplicate schemas for it in Hive!
+with this data. Drill removes the pain of having to manage duplicate schemas in Hive when you have a MapR-DB or HBase table with thousands of
+columns, as is typical of a time-series database.
 
 ### Products Table
 
@@ -159,33 +170,36 @@ The customers table contains 993 rows.
 
 ### Set the workspace to maprdb:
 
-    0: jdbc:drill:> use maprdb;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'maprdb' |
-    +------------+------------+
+    0: jdbc:drill:> use maprdb;
+    +-------+-------------------------------------+
+    |  ok   |               summary               |
+    +-------+-------------------------------------+
+    | true  | Default schema changed to [maprdb]  |
+    +-------+-------------------------------------+
+    1 row selected
 
 ### Describe the tables:
 
     0: jdbc:drill:> describe customers;
-    +-------------+------------+-------------+
-    | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-    +-------------+------------+-------------+
-    | row_key     | ANY        | NO          |
-    | address     | (VARCHAR(1), ANY) MAP | NO          |
-    | loyalty     | (VARCHAR(1), ANY) MAP | NO          |
-    | personal    | (VARCHAR(1), ANY) MAP | NO          |
-    +-------------+------------+-------------+
+    +--------------+------------------------+--------------+
+    | COLUMN_NAME  |       DATA_TYPE        | IS_NULLABLE  |
+    +--------------+------------------------+--------------+
+    | row_key      | ANY                    | NO           |
+    | address      | (VARCHAR(1), ANY) MAP  | NO           |
+    | loyalty      | (VARCHAR(1), ANY) MAP  | NO           |
+    | personal     | (VARCHAR(1), ANY) MAP  | NO           |
+    +--------------+------------------------+--------------+
+    4 rows selected 
  
     0: jdbc:drill:> describe products;
-    +-------------+------------+-------------+
-    | COLUMN_NAME | DATA_TYPE  | IS_NULLABLE |
-    +-------------+------------+-------------+
-    | row_key     | ANY        | NO          |
-    | details     | (VARCHAR(1), ANY) MAP | NO          |
-    | pricing     | (VARCHAR(1), ANY) MAP | NO          |
-    +-------------+------------+-------------+
+    +--------------+------------------------+--------------+
+    | COLUMN_NAME  |       DATA_TYPE        | IS_NULLABLE  |
+    +--------------+------------------------+--------------+
+    | row_key      | ANY                    | NO           |
+    | details      | (VARCHAR(1), ANY) MAP  | NO           |
+    | pricing      | (VARCHAR(1), ANY) MAP  | NO           |
+    +--------------+------------------------+--------------+
+    3 rows selected 
 
 Unlike the Hive example, the DESCRIBE command does not return the full schema
 up to the column level. Wide-column NoSQL databases such as MapR-DB and HBase
@@ -201,14 +215,16 @@ ANY.
 ### Select 5 rows from the products table:
 
     0: jdbc:drill:> select * from products limit 5;
-    +------------+------------+------------+
-    | row_key | details | pricing |
-    +------------+------------+------------+
-    | [B@a1a3e25 | {"category":"bGFwdG9w","name":"IlNvbnkgbm90ZWJvb2si"} | {"price":"OTU5"} |
-    | [B@103a43af | {"category":"RW52ZWxvcGVz","name":"IzEwLTQgMS84IHggOSAxLzIgUHJlbWl1bSBEaWFnb25hbCBTZWFtIEVudmVsb3Blcw=="} | {"price":"MT |
-    | [B@61319e7b | {"category":"U3RvcmFnZSAmIE9yZ2FuaXphdGlvbg==","name":"MjQgQ2FwYWNpdHkgTWF4aSBEYXRhIEJpbmRlciBSYWNrc1BlYXJs"} | {"price" |
-    | [B@9bcf17 | {"category":"TGFiZWxz","name":"QXZlcnkgNDk4"} | {"price":"Mw=="} |
-    | [B@7538ef50 | {"category":"TGFiZWxz","name":"QXZlcnkgNDk="} | {"price":"Mw=="} |
+    +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
+    |   row_key    |                                                    details                                                     |      pricing      |
+    +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
+    | [B@b01c5f8   | {"category":"bGFwdG9w","name":"U29ueSBub3RlYm9vaw=="}                                                          | {"price":"OTU5"}  |
+    | [B@5edfe5ad  | {"category":"RW52ZWxvcGVz","name":"IzEwLTQgMS84IHggOSAxLzIgUHJlbWl1bSBEaWFnb25hbCBTZWFtIEVudmVsb3Blcw=="}      | {"price":"MTY="}  |
+    | [B@3d5ff184  | {"category":"U3RvcmFnZSAmIE9yZ2FuaXphdGlvbg==","name":"MjQgQ2FwYWNpdHkgTWF4aSBEYXRhIEJpbmRlciBSYWNrc1BlYXJs"}  | {"price":"MjEx"}  |
+    | [B@65e93096  | {"category":"TGFiZWxz","name":"QXZlcnkgNDk4"}                                                                  | {"price":"Mw=="}  |
+    | [B@3074fc1f  | {"category":"TGFiZWxz","name":"QXZlcnkgNDk="}                                                                  | {"price":"Mw=="}  |
+    +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
+    5 rows selected 
 
 Given that Drill requires no up front schema definitions indicating data
 types, the query returns the raw byte arrays for column values, just as they
@@ -221,17 +237,18 @@ In Lesson 2, you will use CAST functions to return typed data for each column.
 
 
     0: jdbc:drill:> select * from customers limit 5;
-    +------------+------------+------------+------------+
-    | row_key | address | loyalty | personal |
-    +------------+------------+------------+------------+
-    | [B@284bae62 | {"state":"Imt5Ig=="} | {"agg_rev":"IjEwMDEtMzAwMCI=","membership":"ImJhc2ljIg=="} | {"age":"IjI2LTM1Ig==","gender":"Ik1B |
-    | [B@7ffa4523 | {"state":"ImNhIg=="} | {"agg_rev":"IjAtMTAwIg==","membership":"ImdvbGQi"} | {"age":"IjI2LTM1Ig==","gender":"IkZFTUFMRSI= |
-    | [B@7d13e79 | {"state":"Im9rIg=="} | {"agg_rev":"IjUwMS0xMDAwIg==","membership":"InNpbHZlciI="} | {"age":"IjI2LTM1Ig==","gender":"IkZFT |
-    | [B@3a5c7df1 | {"state":"ImtzIg=="} | {"agg_rev":"IjMwMDEtMTAwMDAwIg==","membership":"ImdvbGQi"} | {"age":"IjUxLTEwMCI=","gender":"IkZF |
-    | [B@e507726 | {"state":"Im5qIg=="} | {"agg_rev":"IjAtMTAwIg==","membership":"ImJhc2ljIg=="} | {"age":"IjIxLTI1Ig==","gender":"Ik1BTEUi" |
-    +------------+------------+------------+------------+
-
-Again the table returns byte data that needs to be cast to readable data
+    +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
+    |   row_key    |        address        |                     loyalty                     |                                       personal                                        |
+    +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
+    | [B@3ed2649e  | {"state":"InZhIg=="}  | {"agg_rev":"MTk3","membership":"InNpbHZlciI="}  | {"age":"IjE1LTIwIg==","gender":"IkZFTUFMRSI=","name":"IkNvcnJpbmUgTWVjaGFtIg=="}      |
+    | [B@66cbe14a  | {"state":"ImluIg=="}  | {"agg_rev":"MjMw","membership":"InNpbHZlciI="}  | {"age":"IjI2LTM1Ig==","gender":"Ik1BTEUi","name":"IkJyaXR0YW55IFBhcmsi"}              |
+    | [B@5333f5ff  | {"state":"ImNhIg=="}  | {"agg_rev":"MjUw","membership":"InNpbHZlciI="}  | {"age":"IjI2LTM1Ig==","gender":"Ik1BTEUi","name":"IlJvc2UgTG9rZXki"}                  |
+    | [B@785b6305  | {"state":"Im1lIg=="}  | {"agg_rev":"MjYz","membership":"InNpbHZlciI="}  | {"age":"IjUxLTEwMCI=","gender":"IkZFTUFMRSI=","name":"IkphbWVzIEZvd2xlciI="}          |
+    | [B@37c21afe  | {"state":"Im1uIg=="}  | {"agg_rev":"MjAy","membership":"InNpbHZlciI="}  | {"age":"IjUxLTEwMCI=","gender":"Ik9USEVSIg==","name":"Ikd1aWxsZXJtbyBLb2VobGVyIg=="}  |
+    +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
+    5 rows selected
+
+Again, the table returns byte data that needs to be cast to readable data
 types.
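
 A sketch of the kind of cast you will apply in Lesson 2 (reusing the customers columns shown above):

      SELECT CAST(t.personal.name AS VARCHAR(20)) AS name FROM customers t LIMIT 5;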
 
 ## Query the File System
@@ -241,7 +258,7 @@ schemas (such as MapR-DB and HBase), Drill offers the unique capability to
 perform SQL queries directly on the file system. The file system could be a local
 file system, or a distributed file system such as MapR-FS, HDFS, or S3.
 
-In the context of Drill, a file or a directory is considered as synonymous to
+In the context of Drill, a file or a directory is synonymous with
 a relational database “table.” Therefore, you can perform SQL operations
 directly on files and directories without the need for up-front schema
 definitions or schema management for any model changes. The schema is
@@ -257,7 +274,7 @@ is in JSON format. The JSON files have the following structure:
 
 
 The clicks.json and clicks.campaign.json files contain metadata as part of the
-data itself (referred to as “self-describing” data). Also note that the data
+data itself (referred to as “self-describing” data). The data
 elements are complex, or nested. The initial queries below do not show how to
 unpack the nested data, but they show that easy access to the data requires no
 setup beyond the definition of a workspace.
@@ -266,12 +283,13 @@ setup beyond the definition of a workspace.
 
 #### Set the workspace to dfs.clicks:
 
-     0: jdbc:drill:> use dfs.clicks;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.clicks' |
-    +------------+------------+
+    0: jdbc:drill:> use dfs.clicks;
+    +-------+-----------------------------------------+
+    |  ok   |                 summary                 |
+    +-------+-----------------------------------------+
+    | true  | Default schema changed to [dfs.clicks]  |
+    +-------+-----------------------------------------+
+    1 row selected
 
 In this case, setting the workspace is a mechanism for making queries easier
 to write. When you specify a file system workspace, you can shorten references
@@ -279,7 +297,7 @@ to files in your queries. Instead of having to provide the
 complete path to a file, you can provide the path relative to a directory
 location specified in the workspace. For example:
 
-    "location": "/mapr/demo.mapr.com/data/nested"
+`"location": "/mapr/demo.mapr.com/data/nested"`
 
 Any file or directory that you want to query in this path can be referenced
 relative to this path. The clicks directory referred to in the following query
@@ -288,15 +306,15 @@ is directly below the nested directory.
 #### Select 2 rows from the clicks.json file:
 
     0: jdbc:drill:> select * from `clicks/clicks.json` limit 2;
-    +------------+------------+------------+------------+------------+
-    |  trans_id  |    date    |    time    | user_info  | trans_info |
-    +------------+------------+------------+------------+------------+
-    | 31920      | 2014-04-26 | 12:17:12   | {"cust_id":22526,"device":"IOS5","state":"il"} | {"prod_id":[174,2],"purch_flag":"false"} |
-    | 31026      | 2014-04-20 | 13:50:29   | {"cust_id":16368,"device":"AOS4.2","state":"nc"} | {"prod_id":[],"purch_flag":"false"} |
-    +------------+------------+------------+------------+------------+
-    2 rows selected
-
-Note that the FROM clause reference points to a specific file. Drill expands
+    +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
+    | trans_id  |    date     |   time    |                     user_info                     |                trans_info                 |
+    +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
+    | 31920     | 2014-04-26  | 12:17:12  | {"cust_id":22526,"device":"IOS5","state":"il"}    | {"prod_id":[174,2],"purch_flag":"false"}  |
+    | 31026     | 2014-04-20  | 13:50:29  | {"cust_id":16368,"device":"AOS4.2","state":"nc"}  | {"prod_id":[],"purch_flag":"false"}       |
+    +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
+    2 rows selected 
+
+The FROM clause reference points to a specific file. Drill expands
 the traditional concept of a “table reference” in a standard SQL FROM clause
 to refer to a file in a local or distributed file system.
 
@@ -307,13 +325,13 @@ or characters.
 #### Select 2 rows from the campaign.json file:
 
     0: jdbc:drill:> select * from `clicks/clicks.campaign.json` limit 2;
-    +------------+------------+------------+------------+------------+------------+
-    |  trans_id  |    date    |    time    | user_info  |  ad_info   | trans_info |
-    +------------+------------+------------+------------+------------+------------+
-    | 35232      | 2014-05-10 | 00:13:03   | {"cust_id":18520,"device":"AOS4.3","state":"tx"} | {"camp_id":"null"} | {"prod_id":[7,7],"purch_flag":"true"} |
-    | 31995      | 2014-05-22 | 16:06:38   | {"cust_id":17182,"device":"IOS6","state":"fl"} | {"camp_id":"null"} | {"prod_id":[],"purch_flag":"false"} |
-    +------------+------------+------------+------------+------------+------------+
-    2 rows selected
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
+    | trans_id  |    date     |   time    |                     user_info                     |       ad_info       |               trans_info               |
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
+    | 35232     | 2014-05-10  | 00:13:03  | {"cust_id":18520,"device":"AOS4.3","state":"tx"}  | {"camp_id":"null"}  | {"prod_id":[7,7],"purch_flag":"true"}  |
+    | 31995     | 2014-05-22  | 16:06:38  | {"cust_id":17182,"device":"IOS6","state":"fl"}    | {"camp_id":"null"}  | {"prod_id":[],"purch_flag":"false"}    |
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
+    2 rows selected 
 
 Notice that with a select * query, any complex data types such as maps and
 arrays return as JSON strings. You will see how to unpack this data using
@@ -339,22 +357,24 @@ data source, or to query a subset of the files.
 
 #### Set the workspace to dfs.logs:
 
-     0: jdbc:drill:> use dfs.logs;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.logs' |
-    +------------+------------+
+    0: jdbc:drill:> use dfs.logs;
+    +-------+---------------------------------------+
+    |  ok   |                summary                |
+    +-------+---------------------------------------+
+    | true  | Default schema changed to [dfs.logs]  |
+    +-------+---------------------------------------+
+    1 row selected
 
 #### Select 2 rows from the logs directory:
 
     0: jdbc:drill:> select * from logs limit 2;
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+----------+
-    | dir0 | dir1 | trans_id | date | time | cust_id | device | state | camp_id | keywords | prod_id | purch_fl |
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+----------+
-    | 2014 | 8 | 24181 | 08/02/2014 | 09:23:52 | 0 | IOS5 | il | 2 | wait | 128 | false |
-    | 2014 | 8 | 24195 | 08/02/2014 | 07:58:19 | 243 | IOS5 | mo | 6 | hmm | 107 | false |
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+----------+
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    | 2012  | 8     | 109       | 08/07/2012  | 20:33:13  | 144618   | IOS5    | ga     | 4        | hey       | 6        | false       |
+    | 2012  | 8     | 119       | 08/19/2012  | 03:37:50  | 17       | IOS5    | tx     | 16       | and       | 50       | false       |
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    2 rows selected 
 
 Note that this is flat JSON data. The dfs.clicks workspace location property
 points to a directory that contains the logs directory, making the FROM clause
@@ -368,11 +388,12 @@ queries that leverage these dynamic variables.
 #### Find the total number of rows in the logs directory (all files):
 
     0: jdbc:drill:> select count(*) from logs;
-    +------------+
-    | EXPR$0 |
-    +------------+
-    | 48000 |
-    +------------+
+    +---------+
+    | EXPR$0  |
+    +---------+
+    | 48000   |
+    +---------+
+    1 row selected 
 
 This query traverses all of the files in the logs directory and its
 subdirectories to return the total number of rows in those files.
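
 A sketch of narrowing the same count with the dir0 partition column (the value is illustrative):

      0: jdbc:drill:> select count(*) from logs where dir0 = '2014';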

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
index 8e53b7c..e2aa8c3 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
@@ -27,32 +27,33 @@ statement.
 
 ### Set the schema to hive:
 
-    0: jdbc:drill:> use hive;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
-    | true       | Default schema changed to 'hive' |
-    +------------+------------+
-    1 row selected
+    0: jdbc:drill:> use hive.`default`;
+    +-------+-------------------------------------------+
+    |  ok   |                  summary                  |
+    +-------+-------------------------------------------+
+    | true  | Default schema changed to [hive.default]  |
+    +-------+-------------------------------------------+
+    1 row selected 
 
 ### Return sales totals by month:
 
     0: jdbc:drill:> select `month`, sum(order_total)
     from orders group by `month` order by 2 desc;
-    +------------+------------+
-    | month | EXPR$1 |
-    +------------+------------+
-    | June | 950481 |
-    | May | 947796 |
-    | March | 836809 |
-    | April | 807291 |
-    | July | 757395 |
-    | October | 676236 |
-    | August | 572269 |
-    | February | 532901 |
-    | September | 373100 |
-    | January | 346536 |
-    +------------+------------+
+    +------------+---------+
+    |   month    | EXPR$1  |
+    +------------+---------+
+    | June       | 950481  |
+    | May        | 947796  |
+    | March      | 836809  |
+    | April      | 807291  |
+    | July       | 757395  |
+    | October    | 676236  |
+    | August     | 572269  |
+    | February   | 532901  |
+    | September  | 373100  |
+    | January    | 346536  |
+    +------------+---------+
+    10 rows selected 
 
 Drill supports SQL aggregate functions such as SUM, MAX, AVG, and MIN.
 Standard SQL clauses work in the same way in Drill queries as in relational
@@ -65,31 +66,31 @@ is a reserved word in SQL.
 
     0: jdbc:drill:> select `month`, state, sum(order_total) as sales from orders group by `month`, state
     order by 3 desc limit 20;
-    +------------+------------+------------+
-    |   month    |   state    |   sales    |
-    +------------+------------+------------+
-    | May        | ca         | 119586     |
-    | June       | ca         | 116322     |
-    | April      | ca         | 101363     |
-    | March      | ca         | 99540      |
-    | July       | ca         | 90285      |
-    | October    | ca         | 80090      |
-    | June       | tx         | 78363      |
-    | May        | tx         | 77247      |
-    | March      | tx         | 73815      |
-    | August     | ca         | 71255      |
-    | April      | tx         | 68385      |
-    | July       | tx         | 63858      |
-    | February   | ca         | 63527      |
-    | June       | fl         | 62199      |
-    | June       | ny         | 62052      |
-    | May        | fl         | 61651      |
-    | May        | ny         | 59369      |
-    | October    | tx         | 55076      |
-    | March      | fl         | 54867      |
-    | March      | ny         | 52101      |
-    +------------+------------+------------+
-    20 rows selected
+    +-----------+--------+---------+
+    |   month   | state  |  sales  |
+    +-----------+--------+---------+
+    | May       | ca     | 119586  |
+    | June      | ca     | 116322  |
+    | April     | ca     | 101363  |
+    | March     | ca     | 99540   |
+    | July      | ca     | 90285   |
+    | October   | ca     | 80090   |
+    | June      | tx     | 78363   |
+    | May       | tx     | 77247   |
+    | March     | tx     | 73815   |
+    | August    | ca     | 71255   |
+    | April     | tx     | 68385   |
+    | July      | tx     | 63858   |
+    | February  | ca     | 63527   |
+    | June      | fl     | 62199   |
+    | June      | ny     | 62052   |
+    | May       | fl     | 61651   |
+    | May       | ny     | 59369   |
+    | October   | tx     | 55076   |
+    | March     | fl     | 54867   |
+    | March     | ny     | 52101   |
+    +-----------+--------+---------+
+    20 rows selected 
 
 Note the alias for the result of the SUM function. Drill supports column
 aliases and table aliases.
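 
 For instance, the previous query can be rewritten with an explicit table
 alias (a sketch against the same orders table):
 
     0: jdbc:drill:> select o.`month`, o.state, sum(o.order_total) as sales
     from orders o group by o.`month`, o.state order by 3 desc limit 20;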
@@ -101,11 +102,11 @@ This query uses the HAVING clause to constrain an aggregate result.
 ### Set the workspace to dfs.clicks
 
     0: jdbc:drill:> use dfs.clicks;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
-    | true       | Default schema changed to 'dfs.clicks' |
-    +------------+------------+
+    +-------+-----------------------------------------+
+    |  ok   |                 summary                 |
+    +-------+-----------------------------------------+
+    | true  | Default schema changed to [dfs.clicks]  |
+    +-------+-----------------------------------------+
     1 row selected
 
 ### Return total number of clicks for devices that indicate high click-throughs:
@@ -113,16 +114,17 @@ This query uses the HAVING clause to constrain an aggregate result.
     0: jdbc:drill:> select t.user_info.device, count(*) from `clicks/clicks.json` t 
     group by t.user_info.device
     having count(*) > 1000;
-    +------------+------------+
-    |   EXPR$0   |   EXPR$1   |
-    +------------+------------+
-    | IOS5       | 11814      |
-    | AOS4.2     | 5986       |
-    | IOS6       | 4464       |
-    | IOS7       | 3135       |
-    | AOS4.4     | 1562       |
-    | AOS4.3     | 3039       |
-    +------------+------------+
+    +---------+---------+
+    | EXPR$0  | EXPR$1  |
+    +---------+---------+
+    | IOS5    | 11814   |
+    | AOS4.2  | 5986    |
+    | IOS6    | 4464    |
+    | IOS7    | 3135    |
+    | AOS4.4  | 1562    |
+    | AOS4.3  | 3039    |
+    +---------+---------+
+    6 rows selected
 
 The aggregate is a count of the records for each different mobile device in
 the clickstream data. Only the activity for the devices that registered more
@@ -154,20 +156,24 @@ duplicate rows from those files): `clicks.campaign.json` and `clicks.json`.
 
 ### Set the workspace to hive:
 
-    0: jdbc:drill:> use hive;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
-    | true       | Default schema changed to 'hive' |
-    +------------+------------+
+    0: jdbc:drill:> use hive.`default`;
+    +-------+-------------------------------------------+
+    |  ok   |                  summary                  |
+    +-------+-------------------------------------------+
+    | true  | Default schema changed to [hive.default]  |
+    +-------+-------------------------------------------+
+    1 row selected
     
 ### Compare order totals across states:
 
-    0: jdbc:drill:> select o1.cust_id, sum(o1.order_total) as ny_sales,
-    (select sum(o2.order_total) from hive.orders o2
-    where o1.cust_id=o2.cust_id and state='ca') as ca_sales
-    from hive.orders o1 where o1.state='ny' group by o1.cust_id
-    order by cust_id limit 20;
+    0: jdbc:drill:> select ny_sales.cust_id, ny_sales.total_orders as ny_sales, ca_sales.total_orders as ca_sales
+    from
+    (select o.cust_id, sum(o.order_total) as total_orders from hive.orders o where state = 'ny' group by o.cust_id) ny_sales
+    left outer join
+    (select o.cust_id, sum(o.order_total) as total_orders from hive.orders o where state = 'ca' group by o.cust_id) ca_sales
+    on ny_sales.cust_id = ca_sales.cust_id
+    order by ny_sales.cust_id
+    limit 20;
     +------------+------------+------------+
     |  cust_id   |  ny_sales  |  ca_sales  |
     +------------+------------+------------+
@@ -193,27 +199,19 @@ duplicate rows from those files): `clicks.campaign.json` and `clicks.json`.
     | 1024       | 233        | null       |
     +------------+------------+------------+
 
-This example demonstrates Drill support for correlated subqueries. This query
-uses a subquery in the select list and correlates the result of the subquery
-with the outer query, using the cust_id column reference. The subquery returns
-the sum of order totals for California, and the outer query returns the
-equivalent sum, for the same cust_id, for New York.
-
-The result set is sorted by the cust_id and presents the sales totals side by
-side for easy comparison. Null values indicate customer IDs that did not
-register any sales in that state.
+This example demonstrates Drill support for subqueries. Each subquery
+aggregates order totals for one state, and the left outer join preserves
+New York customers who have no matching California sales; null values in
+the ca_sales column indicate those customers.
 
 ## CAST Function
 
 ### Use the maprdb workspace:
 
     0: jdbc:drill:> use maprdb;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
-    | true       | Default schema changed to 'maprdb' |
-    +------------+------------+
-    1 row selected
+    +-------+-------------------------------------+
+    |  ok   |               summary               |
+    +-------+-------------------------------------+
+    | true  | Default schema changed to [maprdb]  |
+    +-------+-------------------------------------+
+    1 row selected (0.088 seconds)
 
 ### Return customer data with appropriate data types
 
@@ -222,16 +220,15 @@ register any sales in that state.
     cast(t.address.state as varchar(4)) as state, cast(t.loyalty.agg_rev as dec(7,2)) as agg_rev, 
     cast(t.loyalty.membership as varchar(20)) as membership
     from customers t limit 5;
-    +------------+------------+------------+------------+------------    +------------+------------+
-    |  cust_id   |    name    |   gender   |    age     |   state    |  agg_rev   | membership |
-    +------------+------------+------------+------------+------------+------------+------------+
-    | 10001      | "Corrine Mecham" | "FEMALE"   | "15-20"    | "va"       | 197.00     | "silver"   |
-    | 10005      | "Brittany Park" | "MALE"     | "26-35"    | "in"       | 230.00     | "silver"   |
-    | 10006      | "Rose Lokey" | "MALE"     | "26-35"    | "ca"       | 250.00     | "silver"   |
-    | 10007      | "James Fowler" | "FEMALE"   | "51-100"   | "me"       | 263.00     | "silver"   |
-    | 10010      | "Guillermo Koehler" | "OTHER"    | "51-100"   | "mn"       | 202.00     | "silver"   |
-    +------------+------------+------------+------------+------------+------------+------------+
-    5 rows selected
+    +----------+----------------------+-----------+-----------+--------+----------+-------------+
+    | cust_id  |         name         |  gender   |    age    | state  | agg_rev  | membership  |
+    +----------+----------------------+-----------+-----------+--------+----------+-------------+
+    | 10001    | "Corrine Mecham"     | "FEMALE"  | "15-20"   | "va"   | 197.00   | "silver"    |
+    | 10005    | "Brittany Park"      | "MALE"    | "26-35"   | "in"   | 230.00   | "silver"    |
+    | 10006    | "Rose Lokey"         | "MALE"    | "26-35"   | "ca"   | 250.00   | "silver"    |
+    | 10007    | "James Fowler"       | "FEMALE"  | "51-100"  | "me"   | 263.00   | "silver"    |
+    | 10010    | "Guillermo Koehler"  | "OTHER"   | "51-100"  | "mn"   | 202.00   | "silver"    |
+    +----------+----------------------+-----------+-----------+--------+----------+-------------+
+    5 rows selected
 
 Note the following features of this query:
 
@@ -257,11 +254,12 @@ of “va”:
 ## CREATE VIEW Command
 
     0: jdbc:drill:> use dfs.views;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.views' |
-    +------------+------------+
+    +-------+----------------------------------------+
+    |  ok   |                summary                 |
+    +-------+----------------------------------------+
+    | true  | Default schema changed to [dfs.views]  |
+    +-------+----------------------------------------+
+    1 row selected
 
 ### Use a mutable workspace:
 
@@ -279,11 +277,11 @@ can create Drill views and tables in mutable workspaces.
     cast(t.loyalty.agg_rev as dec(7,2)) as agg_rev,
     cast(t.loyalty.membership as varchar(20)) as membership
     from maprdb.customers t;
-    +------------+------------+
-    |     ok     |  summary   |
-    +------------+------------+
-    | true       | View 'custview' replaced successfully in 'dfs.views' schema |
-    +------------+------------+
+    +-------+-------------------------------------------------------------+
+    |  ok   |                           summary                           |
+    +-------+-------------------------------------------------------------+
+    | true  | View 'custview' created successfully in 'dfs.views' schema  |
+    +-------+-------------------------------------------------------------+
     1 row selected
 
 Drill provides CREATE OR REPLACE VIEW syntax similar to relational databases
@@ -306,11 +304,12 @@ supports the creation of metadata in the file system.
 ### Query data from the view:
 
     0: jdbc:drill:> select * from custview limit 1;
-    +------------+------------+------------+------------+------------+------------+------------+
-    |  cust_id   |    name    |   gender   |    age     |   state    |  agg_rev   | membership |
-    +------------+------------+------------+------------+------------+------------+------------+
-    | 10001      | "Corrine Mecham" | "FEMALE"   | "15-20"    | "va"       | 197.00     | "silver"   |
-    +------------+------------+------------+------------+------------+------------+------------+
+    +----------+-------------------+-----------+----------+--------+----------+-------------+
+    | cust_id  |       name        |  gender   |   age    | state  | agg_rev  | membership  |
+    +----------+-------------------+-----------+----------+--------+----------+-------------+
+    | 10001    | "Corrine Mecham"  | "FEMALE"  | "15-20"  | "va"   | 197.00   | "silver"    |
+    +----------+-------------------+-----------+----------+--------+----------+-------------+
+    1 row selected
 
 Once users get an idea of what data is available by exploring it directly
 from the file system, views can be used as a way to take the data in downstream

http://git-wip-us.apache.org/repos/asf/drill/blob/e5d81270/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
index bcb62ee..be5fae8 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
@@ -36,29 +36,31 @@ exist. Here is a visual example of how this works:
 ### Set workspace to dfs.logs:
 
     0: jdbc:drill:> use dfs.logs;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.logs' |
-    +------------+------------+
+    +-------+---------------------------------------+
+    |  ok   |                summary                |
+    +-------+---------------------------------------+
+    | true  | Default schema changed to [dfs.logs]  |
+    +-------+---------------------------------------+
+    1 row selected
 
 ### Query logs data for a specific year:
 
     0: jdbc:drill:> select * from logs where dir0='2013' limit 10;
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
-    |    dir0    |    dir1    |  trans_id  |    date    |    time    |  cust_id   |   device   |   state    |  camp_id   |  keywords  |  prod_id   | purch_flag |
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
-    | 2013       | 2          | 12115      | 02/23/2013 | 19:48:24   | 3          | IOS5       | az         | 5          | who's      | 6          | false      |
-    | 2013       | 2          | 12127      | 02/26/2013 | 19:42:03   | 11459      | IOS5       | wa         | 10         | for        | 331        | false      |
-    | 2013       | 2          | 12138      | 02/09/2013 | 05:49:01   | 1          | IOS6       | ca         | 7          | minutes    | 500        | false      |
-    | 2013       | 2          | 12139      | 02/23/2013 | 06:58:20   | 1          | AOS4.4     | ms         | 7          | i          | 20         | false      |
-    | 2013       | 2          | 12145      | 02/10/2013 | 10:14:56   | 10         | IOS5       | mi         | 6          | wrong      | 42         | false      |
-    | 2013       | 2          | 12157      | 02/15/2013 | 02:49:22   | 102        | IOS5       | ny         | 5          | want       | 95         | false      |
-    | 2013       | 2          | 12176      | 02/19/2013 | 08:39:02   | 28         | IOS5       | or         | 0          | and        | 351        | false      |
-    | 2013       | 2          | 12194      | 02/24/2013 | 08:26:17   | 125445     | IOS5       | ar         | 0          | say        | 500        | true       |
-    | 2013       | 2          | 12236      | 02/05/2013 | 01:40:05   | 10         | IOS5       | nj         | 2          | sir        | 393        | false      |
-    | 2013       | 2          | 12249      | 02/03/2013 | 04:45:47   | 21725      | IOS5       | nj         | 5          | no         | 414        | false      |
-    +------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+------------+
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    | 2013  | 8     | 12104     | 08/29/2013  | 09:34:37  | 962      | IOS5    | ma     | 3        | milhouse  | 17       | false       |
+    | 2013  | 8     | 12132     | 08/23/2013  | 01:11:25  | 4        | IOS7    | mi     | 11       | hi        | 439      | false       |
+    | 2013  | 8     | 12177     | 08/14/2013  | 13:48:50  | 23       | AOS4.2  | il     | 14       | give      | 382      | false       |
+    | 2013  | 8     | 12180     | 08/03/2013  | 20:48:45  | 1509     | IOS7    | ca     | 0        | i'm       | 340      | false       |
+    | 2013  | 8     | 12187     | 08/16/2013  | 10:28:07  | 0        | IOS5    | ny     | 16       | clicking  | 11       | false       |
+    | 2013  | 8     | 12190     | 08/10/2013  | 14:16:50  | 9        | IOS5    | va     | 3        | a         | 495      | false       |
+    | 2013  | 8     | 12200     | 08/02/2013  | 20:54:38  | 42219    | IOS5    | ia     | 0        | what's    | 346      | false       |
+    | 2013  | 8     | 12210     | 08/05/2013  | 20:12:24  | 8073     | IOS5    | sc     | 5        | if        | 33       | false       |
+    | 2013  | 8     | 12235     | 08/28/2013  | 07:49:45  | 595      | IOS5    | tx     | 2        | that      | 51       | false       |
+    | 2013  | 8     | 12239     | 08/13/2013  | 03:24:31  | 2        | IOS5    | or     | 6        | haw-haw   | 40       | false       |
+    +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
+    10 rows selected
 
 
 This query constrains files inside the subdirectory named 2013. The variable
@@ -73,13 +75,13 @@ an IOS5 device in August 2013.
     0: jdbc:drill:> select dir0 as yr, dir1 as mth, cust_id from logs
     where dir0='2013' and dir1='8' and device='IOS5' and purch_flag='true'
     order by `date`;
-    +------------+------------+------------+
-    |     yr     |    mth     |  cust_id   |
-    +------------+------------+------------+
-    | 2013       | 8          | 4          |
-    | 2013       | 8          | 521        |
-    | 2013       | 8          | 1          |
-    | 2013       | 8          | 2          |
+    +-------+------+----------+
+    |  yr   | mth  | cust_id  |
+    +-------+------+----------+
+    | 2013  | 8    | 4        |
+    | 2013  | 8    | 521      |
+    | 2013  | 8    | 1        |
+    | 2013  | 8    | 2        |
 
     ...
 
@@ -87,20 +89,20 @@ an IOS5 device in August 2013.
 
     0: jdbc:drill:> select cust_id, dir1 month_no, count(*) month_count from logs
     where dir0=2014 group by cust_id, dir1 order by cust_id, month_no limit 10;
-    +------------+------------+-------------+
-    |  cust_id   |  month_no  | month_count |
-    +------------+------------+-------------+
-    | 0          | 1          | 143         |
-    | 0          | 2          | 118         |
-    | 0          | 3          | 117         |
-    | 0          | 4          | 115         |
-    | 0          | 5          | 137         |
-    | 0          | 6          | 117         |
-    | 0          | 7          | 142         |
-    | 0          | 8          | 19          |
-    | 1          | 1          | 66          |
-    | 1          | 2          | 59          |
-    +------------+------------+-------------+
+    +----------+-----------+--------------+
+    | cust_id  | month_no  | month_count  |
+    +----------+-----------+--------------+
+    | 0        | 1         | 143          |
+    | 0        | 2         | 118          |
+    | 0        | 3         | 117          |
+    | 0        | 4         | 115          |
+    | 0        | 5         | 137          |
+    | 0        | 6         | 117          |
+    | 0        | 7         | 142          |
+    | 0        | 8         | 19           |
+    | 1        | 1         | 66           |
+    | 1        | 2         | 59           |
+    +----------+-----------+--------------+
     10 rows selected
 
 This query groups the aggregate function by customer ID and month for one
@@ -114,12 +116,14 @@ JavaScript notation, you will already know how some of these extensions work.
 
 ### Set the workspace to dfs.clicks:
 
+
     0: jdbc:drill:> use dfs.clicks;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.clicks' |
-    +------------+------------+
+    +-------+-----------------------------------------+
+    |  ok   |                 summary                 |
+    +-------+-----------------------------------------+
+    | true  | Default schema changed to [dfs.clicks]  |
+    +-------+-----------------------------------------+
+    1 row selected
 
 ### Explore clickstream data:
 
@@ -128,15 +132,16 @@ arrays within arrays. The following queries show how to access this complex
 data.
 
     0: jdbc:drill:> select * from `clicks/clicks.json` limit 5;
-    +------------+------------+------------+------------+------------+
-    | trans_id | date | time | user_info | trans_info |
-    +------------+------------+------------+------------+------------+
-    | 31920 | 2014-04-26 | 12:17:12 | {"cust_id":22526,"device":"IOS5","state":"il"} | {"prod_id":[174,2],"purch_flag":"false"} |
-    | 31026 | 2014-04-20 | 13:50:29 | {"cust_id":16368,"device":"AOS4.2","state":"nc"} | {"prod_id":[],"purch_flag":"false"} |
-    | 33848 | 2014-04-10 | 04:44:42 | {"cust_id":21449,"device":"IOS6","state":"oh"} | {"prod_id":[582],"purch_flag":"false"} |
-    | 32383 | 2014-04-18 | 06:27:47 | {"cust_id":20323,"device":"IOS5","state":"oh"} | {"prod_id":[710,47],"purch_flag":"false"} |
-    | 32359 | 2014-04-19 | 23:13:25 | {"cust_id":15360,"device":"IOS5","state":"ca"} | {"prod_id": [0,8,170,173,1,124,46,764,30,711,0,3,25],"purch_flag":"true"} |
-    +------------+------------+------------+------------+------------+
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
+    | trans_id  |    date     |   time    |                     user_info                     |                                trans_info                                 |
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
+    | 31920     | 2014-04-26  | 12:17:12  | {"cust_id":22526,"device":"IOS5","state":"il"}    | {"prod_id":[174,2],"purch_flag":"false"}                                  |
+    | 31026     | 2014-04-20  | 13:50:29  | {"cust_id":16368,"device":"AOS4.2","state":"nc"}  | {"prod_id":[],"purch_flag":"false"}                                       |
+    | 33848     | 2014-04-10  | 04:44:42  | {"cust_id":21449,"device":"IOS6","state":"oh"}    | {"prod_id":[582],"purch_flag":"false"}                                    |
+    | 32383     | 2014-04-18  | 06:27:47  | {"cust_id":20323,"device":"IOS5","state":"oh"}    | {"prod_id":[710,47],"purch_flag":"false"}                                 |
+    | 32359     | 2014-04-19  | 23:13:25  | {"cust_id":15360,"device":"IOS5","state":"ca"}    | {"prod_id":[0,8,170,173,1,124,46,764,30,711,0,3,25],"purch_flag":"true"}  |
+    +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
+    5 rows selected
 
 
 ### Unpack the user_info column:
@@ -144,15 +149,16 @@ data.
     0: jdbc:drill:> select t.user_info.cust_id as custid, t.user_info.device as device,
     t.user_info.state as state
     from `clicks/clicks.json` t limit 5;
-    +------------+------------+------------+
-    |   custid   |   device   |   state    |
-    +------------+------------+------------+
-    | 22526      | IOS5       | il         |
-    | 16368      | AOS4.2     | nc         |
-    | 21449      | IOS6       | oh         |
-    | 20323      | IOS5       | oh         |
-    | 15360      | IOS5       | ca         |
-    +------------+------------+------------+
+    +---------+---------+--------+
+    | custid  | device  | state  |
+    +---------+---------+--------+
+    | 22526   | IOS5    | il     |
+    | 16368   | AOS4.2  | nc     |
+    | 21449   | IOS6    | oh     |
+    | 20323   | IOS5    | oh     |
+    | 15360   | IOS5    | ca     |
+    +---------+---------+--------+
+    5 rows selected (0.171 seconds)
 
 This query uses a simple table.column.column notation to extract nested column
 data. For example:
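 
     t.user_info.cust_id
 
 Here t is the table alias used in the query above, and cust_id is a field
 nested inside the user_info map.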
@@ -170,15 +176,16 @@ parsed as table names by the SQL parser.
     0: jdbc:drill:> select t.trans_info.prod_id as prodid, t.trans_info.purch_flag as
     purchased
     from `clicks/clicks.json` t limit 5;
-    +------------+------------+
-    |   prodid   | purchased  |
-    +------------+------------+
-    | [174,2]    | false      |
-    | []         | false      |
-    | [582]      | false      |
-    | [710,47]   | false      |
-    | [0,8,170,173,1,124,46,764,30,711,0,3,25] | true       |
-        5 rows selected
+    +-------------------------------------------+------------+
+    |                  prodid                   | purchased  |
+    +-------------------------------------------+------------+
+    | [174,2]                                   | false      |
+    | []                                        | false      |
+    | [582]                                     | false      |
+    | [710,47]                                  | false      |
+    | [0,8,170,173,1,124,46,764,30,711,0,3,25]  | true       |
+    +-------------------------------------------+------------+
+    5 rows selected
 
 Note that this result reveals that the prod_id column contains an array of IDs
 (one or more product ID values per row, separated by commas). The next step
@@ -289,32 +296,34 @@ quickly create a Drill table from the results of the query.
 ### Continue to use the dfs.clicks workspace
 
     0: jdbc:drill:> use dfs.clicks;
-    +------------+------------+
-    | ok | summary |
-    +------------+------------+
-    | true | Default schema changed to 'dfs.clicks' |
-    +------------+------------+
+    +-------+-----------------------------------------+
+    |  ok   |                 summary                 |
+    +-------+-----------------------------------------+
+    | true  | Default schema changed to [dfs.clicks]  |
+    +-------+-----------------------------------------+
+    1 row selected (1.61 seconds)
 
 ### Return product searches for high-value customers:
 
-    0: jdbc:drill:> select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id 
-    from hive.orders as o, `clicks/clicks.json` t 
-    where o.cust_id=t.user_info.cust_id 
-    and o.order_total > (select avg(inord.order_total) 
-    from hive.orders inord where inord.state = o.state);
-    +------------+-------------+------------+
-    |  cust_id   | order_total |   prod_id  |
-    +------------+-------------+------------+
-    ...
-    | 9650       | 69          | 16         |
-    | 9650       | 69          | 560        |
-    | 9650       | 69          | 959        |
-    | 9654       | 76          | 768        |
-    | 9656       | 76          | 32         |
-    | 9656       | 76          | 16         |
-    ...
-    +------------+-------------+------------+
-    106,281 rows selected
+    0: jdbc:drill:> select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
+    from 
+    hive.orders as o
+    join `clicks/clicks.json` t
+    on o.cust_id=t.user_info.cust_id
+    where o.order_total > (select avg(inord.order_total)
+                          from hive.orders inord
+                          where inord.state = o.state);
+    +----------+--------------+----------+
+    | cust_id  | order_total  | prod_id  |
+    +----------+--------------+----------+
+    | 1328     | 73           | 26       |
+    | 1328     | 146          | 26       |
+    | 1328     | 56           | 26       |
+    | 1328     | 91           | 26       |
+    | 1328     | 74           | 26       |
+    ...
+    +----------+--------------+----------+
+    107,482 rows selected (14.863 seconds)
 
 This query returns a list of products that are being searched for by customers
 who have made transactions that are above the average in their states.
@@ -322,15 +331,19 @@ who have made transactions that are above the average in their states.
 ### Materialize the result of the previous query:
 
     0: jdbc:drill:> create table product_search as select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
-    from hive.orders as o, `clicks/clicks.json` t 
-    where o.cust_id=t.user_info.cust_id and o.order_total > (select avg(inord.order_total) 
-    from hive.orders inord where inord.state = o.state);
-    +------------+---------------------------+
-    |  Fragment  | Number of records written |
-    +------------+---------------------------+
-    | 0_0        | 106281                    |
-    +------------+---------------------------+
-    1 row selected
+    from
+    hive.orders as o
+    join `clicks/clicks.json` t
+    on o.cust_id=t.user_info.cust_id
+    where o.order_total > (select avg(inord.order_total)
+                          from hive.orders inord
+                          where inord.state = o.state);
+    +-----------+----------------------------+
+    | Fragment  | Number of records written  |
+    +-----------+----------------------------+
+    | 0_0       | 107482                     |
+    +-----------+----------------------------+
+    1 row selected (3.488 seconds)
 
 This example uses a CTAS statement to create a table based on a correlated
 subquery that you ran previously. This table contains all of the rows that the
@@ -344,12 +357,12 @@ This example simply checks that the CTAS statement worked by verifying the
 number of rows in the table.
 
     0: jdbc:drill:> select count(*) from product_search;
-    +------------+
-    |   EXPR$0   |
-    +------------+
-    | 106281     |
-    +------------+
-    1 row selected
+    +---------+
+    | EXPR$0  |
+    +---------+
+    | 107482  |
+    +---------+
+    1 row selected (0.155 seconds)
 
 ### Find the storage file for the table:
 
@@ -365,7 +378,7 @@ stored in the location defined by the dfs.clicks workspace:
 
     "location": "http://demo.mapr.com/data/nested"
 
-with a subdirectory that has the same name as the table you created.
+There is a subdirectory that has the same name as the table you created.
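+
+To confirm this from within Drill (a sketch; SHOW FILES lists the contents
+of a workspace directory), run:
+
+    0: jdbc:drill:> show files from dfs.clicks;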
 
 ## What's Next
 


[2/2] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d22ac4af
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d22ac4af
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d22ac4af

Branch: refs/heads/gh-pages
Commit: d22ac4af0b34fb33089f6ae035d4b41108b1fa86
Parents: e5d8127 cb92534
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 18 23:44:33 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 18 23:44:33 2015 -0700

----------------------------------------------------------------------
 _config.yml                                     |   8 +-
 _data/version.json                              |   8 +-
 _sass/_site-main.scss                           |  80 +++++++++----------
 _sass/_site-responsive.scss                     |  25 +++---
 blog/_drafts/drill-1.0-released.md              |  54 -------------
 blog/_posts/2015-05-19-drill-1.0-released.md    |  33 ++++++++
 ...are-foundation-announces-apache-drill-1.0.md |  47 +++++++++++
 images/home-coffee.jpg                          | Bin 33977 -> 35145 bytes
 index.html                                      |   6 +-
 9 files changed, 142 insertions(+), 119 deletions(-)
----------------------------------------------------------------------