Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/17 08:51:21 UTC

[01/26] drill git commit: Removing intro table background dividers at 768px max width responsive break point. Also making width 100% so it flows with the page size.

Repository: drill
Updated Branches:
  refs/heads/gh-pages a59289d16 -> 1b7072c5d


Removing intro table background dividers at 768px max width responsive
break point. Also making width 100% so it flows with the page size.


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7cf162a9
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7cf162a9
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7cf162a9

Branch: refs/heads/gh-pages
Commit: 7cf162a9943305ac6d22ab484e9b68c081b046fc
Parents: fcb4f41
Author: Danny <dk...@batchblue.com>
Authored: Tue May 12 23:32:15 2015 -0400
Committer: Danny <dk...@batchblue.com>
Committed: Tue May 12 23:32:15 2015 -0400

----------------------------------------------------------------------
 css/responsive.css | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/7cf162a9/css/responsive.css
----------------------------------------------------------------------
diff --git a/css/responsive.css b/css/responsive.css
index 81714a5..0f86cfa 100644
--- a/css/responsive.css
+++ b/css/responsive.css
@@ -128,7 +128,8 @@
     display: block;
   }
   table.intro {
-    width: 768px;
+    width: 100%;
+    background: none;
   }
   
   img {


[23/26] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7c6c47de
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7c6c47de
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7c6c47de

Branch: refs/heads/gh-pages
Commit: 7c6c47decd72594f9c08f01b82f4435d29031db3
Parents: 59bc915 6ea0c7a
Author: Tomer Shiran <ts...@gmail.com>
Authored: Sat May 16 22:42:12 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Sat May 16 22:42:12 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  97 ++++++-
 _docs/074-query-audit-logging.md                |   5 +
 _docs/075-getting-query-information.md          |  55 ++++
 .../020-configuring-drill-memory.md             |   4 +-
 .../010-configuration-options-introduction.md   |   2 +-
 .../020-start-up-options.md                     |   3 +-
 .../030-planning-and-exececution-options.md     |   5 -
 .../035-plugin-configuration-introduction.md    |   2 +-
 _docs/img/drill-bin.png                         | Bin 85005 -> 51164 bytes
 _docs/img/drill-directory.png                   | Bin 87661 -> 46151 bytes
 _docs/img/sqlline1.png                          | Bin 23074 -> 6633 bytes
 _docs/install/010-install-drill-introduction.md |  10 +-
 .../install/045-embedded-mode-prerequisites.md  |  10 +-
 .../047-installing-drill-on-the-cluster.md      |   7 +-
 .../050-starting-drill-in-distributed mode.md   |  83 +++---
 .../010-embedded-mode-prerequisites.md          |   9 +-
 ...20-installing-drill-on-linux-and-mac-os-x.md |   8 +-
 .../030-starting-drill-on-linux-and-mac-os-x.md |  25 +-
 .../040-installing-drill-on-windows.md          |   6 +-
 .../050-starting-drill-on-windows.md            |   8 +-
 .../010-interfaces-introduction.md              |   2 +-
 _docs/tutorials/020-drill-in-10-minutes.md      | 137 +++++----
 .../030-analyzing-the-yelp-academic-dataset.md  | 284 ++++++++++---------
 .../050-analyzing-highly-dynamic-datasets.md    |  53 ++--
 24 files changed, 477 insertions(+), 338 deletions(-)
----------------------------------------------------------------------



[07/26] drill git commit: spotfire server doc

Posted by ts...@apache.org.
spotfire server doc


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/6244940a
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/6244940a
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/6244940a

Branch: refs/heads/gh-pages
Commit: 6244940acf6163c255417c911fdc07dc3f33f9b8
Parents: 10e158f
Author: Bob Rumsby <br...@mapr.com>
Authored: Thu May 14 17:31:54 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Thu May 14 17:31:54 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  75 ++++++++++++++++---
 _docs/img/spotfire-server-client.png            | Bin 0 -> 48430 bytes
 _docs/img/spotfire-server-configtab.png         | Bin 0 -> 76152 bytes
 _docs/img/spotfire-server-connectionURL.png     | Bin 0 -> 47664 bytes
 _docs/img/spotfire-server-database.png          | Bin 0 -> 36204 bytes
 _docs/img/spotfire-server-datasources-tab.png   | Bin 0 -> 49236 bytes
 _docs/img/spotfire-server-deployment.png        | Bin 0 -> 22058 bytes
 _docs/img/spotfire-server-hiveorders.png        | Bin 0 -> 62537 bytes
 _docs/img/spotfire-server-importconfig.png      | Bin 0 -> 32739 bytes
 _docs/img/spotfire-server-infodesigner.png      | Bin 0 -> 69950 bytes
 _docs/img/spotfire-server-infodesigner2.png     | Bin 0 -> 36991 bytes
 _docs/img/spotfire-server-infolink.png          | Bin 0 -> 126884 bytes
 _docs/img/spotfire-server-new.png               | Bin 0 -> 23290 bytes
 _docs/img/spotfire-server-saveconfig.png        | Bin 0 -> 188740 bytes
 _docs/img/spotfire-server-saveconfig2.png       | Bin 0 -> 32622 bytes
 _docs/img/spotfire-server-start.png             | Bin 0 -> 54541 bytes
 _docs/img/spotfire-server-template.png          | Bin 0 -> 155705 bytes
 _docs/img/spotfire-server-tss.png               | Bin 0 -> 43564 bytes
 .../065-configuring-spotfire-server.md          |  61 +++++++++++++++
 19 files changed, 124 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 28fb349..ec323dc 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -1226,6 +1226,23 @@
             "title": "Configuring Resources for a Shared Drillbit", 
             "url": "/docs/configuring-resources-for-a-shared-drillbit/"
         }, 
+        "Configuring Tibco Spotfire Server with Drill": {
+            "breadcrumbs": [
+                {
+                    "title": "ODBC/JDBC Interfaces", 
+                    "url": "/docs/odbc-jdbc-interfaces/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "parent": "ODBC/JDBC Interfaces", 
+            "previous_title": "Using Tibco Spotfire with Drill", 
+            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md", 
+            "title": "Configuring Tibco Spotfire Server with Drill", 
+            "url": "/docs/configuring-tibco-spotfire-server-with-drill/"
+        }, 
         "Configuring User Authentication": {
             "breadcrumbs": [
                 {
@@ -4578,8 +4595,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+                    "next_title": "Configuring Tibco Spotfire Server with Drill", 
+                    "next_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
@@ -4595,11 +4612,28 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tableau 9 Server", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Tibco Spotfire with Drill", 
                     "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md", 
+                    "title": "Configuring Tibco Spotfire Server with Drill", 
+                    "url": "/docs/configuring-tibco-spotfire-server-with-drill/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Using Apache Drill with Tableau 9 Server", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Configuring Tibco Spotfire Server with Drill", 
+                    "previous_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
                     "title": "Using Apache Drill with Tableau 9 Desktop", 
                     "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
@@ -8172,8 +8206,8 @@
             "next_title": "Using Apache Drill with Tableau 9 Server", 
             "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
             "parent": "ODBC/JDBC Interfaces", 
-            "previous_title": "Using Tibco Spotfire with Drill", 
-            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+            "previous_title": "Configuring Tibco Spotfire Server with Drill", 
+            "previous_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
             "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
             "title": "Using Apache Drill with Tableau 9 Desktop", 
             "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
@@ -8605,8 +8639,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "next_title": "Configuring Tibco Spotfire Server with Drill", 
+            "next_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
             "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
@@ -10258,8 +10292,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+                    "next_title": "Configuring Tibco Spotfire Server with Drill", 
+                    "next_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
@@ -10275,11 +10309,28 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tableau 9 Server", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Tibco Spotfire with Drill", 
                     "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md", 
+                    "title": "Configuring Tibco Spotfire Server with Drill", 
+                    "url": "/docs/configuring-tibco-spotfire-server-with-drill/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Using Apache Drill with Tableau 9 Server", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Configuring Tibco Spotfire Server with Drill", 
+                    "previous_url": "/docs/configuring-tibco-spotfire-server-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
                     "title": "Using Apache Drill with Tableau 9 Desktop", 
                     "url": "/docs/using-apache-drill-with-tableau-9-desktop/"

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-client.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-client.png b/_docs/img/spotfire-server-client.png
new file mode 100644
index 0000000..183488c
Binary files /dev/null and b/_docs/img/spotfire-server-client.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-configtab.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-configtab.png b/_docs/img/spotfire-server-configtab.png
new file mode 100644
index 0000000..9dbcfcb
Binary files /dev/null and b/_docs/img/spotfire-server-configtab.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-connectionURL.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-connectionURL.png b/_docs/img/spotfire-server-connectionURL.png
new file mode 100644
index 0000000..950d8cc
Binary files /dev/null and b/_docs/img/spotfire-server-connectionURL.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-database.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-database.png b/_docs/img/spotfire-server-database.png
new file mode 100644
index 0000000..e572596
Binary files /dev/null and b/_docs/img/spotfire-server-database.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-datasources-tab.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-datasources-tab.png b/_docs/img/spotfire-server-datasources-tab.png
new file mode 100644
index 0000000..9fbaa87
Binary files /dev/null and b/_docs/img/spotfire-server-datasources-tab.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-deployment.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-deployment.png b/_docs/img/spotfire-server-deployment.png
new file mode 100644
index 0000000..f537f40
Binary files /dev/null and b/_docs/img/spotfire-server-deployment.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-hiveorders.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-hiveorders.png b/_docs/img/spotfire-server-hiveorders.png
new file mode 100644
index 0000000..53ff7ed
Binary files /dev/null and b/_docs/img/spotfire-server-hiveorders.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-importconfig.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-importconfig.png b/_docs/img/spotfire-server-importconfig.png
new file mode 100644
index 0000000..cc42c1d
Binary files /dev/null and b/_docs/img/spotfire-server-importconfig.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-infodesigner.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-infodesigner.png b/_docs/img/spotfire-server-infodesigner.png
new file mode 100644
index 0000000..ab7f04f
Binary files /dev/null and b/_docs/img/spotfire-server-infodesigner.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-infodesigner2.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-infodesigner2.png b/_docs/img/spotfire-server-infodesigner2.png
new file mode 100644
index 0000000..5534aa2
Binary files /dev/null and b/_docs/img/spotfire-server-infodesigner2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-infolink.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-infolink.png b/_docs/img/spotfire-server-infolink.png
new file mode 100644
index 0000000..7d4e2b7
Binary files /dev/null and b/_docs/img/spotfire-server-infolink.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-new.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-new.png b/_docs/img/spotfire-server-new.png
new file mode 100644
index 0000000..a52d39d
Binary files /dev/null and b/_docs/img/spotfire-server-new.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-saveconfig.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-saveconfig.png b/_docs/img/spotfire-server-saveconfig.png
new file mode 100644
index 0000000..466c2c8
Binary files /dev/null and b/_docs/img/spotfire-server-saveconfig.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-saveconfig2.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-saveconfig2.png b/_docs/img/spotfire-server-saveconfig2.png
new file mode 100644
index 0000000..563b973
Binary files /dev/null and b/_docs/img/spotfire-server-saveconfig2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-start.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-start.png b/_docs/img/spotfire-server-start.png
new file mode 100644
index 0000000..cdb4075
Binary files /dev/null and b/_docs/img/spotfire-server-start.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-template.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-template.png b/_docs/img/spotfire-server-template.png
new file mode 100644
index 0000000..3ddd482
Binary files /dev/null and b/_docs/img/spotfire-server-template.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/img/spotfire-server-tss.png
----------------------------------------------------------------------
diff --git a/_docs/img/spotfire-server-tss.png b/_docs/img/spotfire-server-tss.png
new file mode 100644
index 0000000..d79ea2e
Binary files /dev/null and b/_docs/img/spotfire-server-tss.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6244940a/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md b/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
new file mode 100644
index 0000000..674929c
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
@@ -0,0 +1,61 @@
+---
+title: "Configuring Tibco Spotfire Server with Drill"
+parent: "ODBC/JDBC Interfaces"
+---
+This document describes how to configure Tibco Spotfire Server (TSS) to integrate with Apache Drill and explore multiple data formats instantly on Hadoop. Users can combine these powerful platforms to rapidly gain analytical access to a wide variety of data types. 

Complete the following steps to configure and use Apache Drill with TSS: 

1. Install the Drill JDBC driver with TSS.
2. Configure the Drill Data Source Template in TSS with the TSS configuration tool.
3. Configure Drill data sources with Tibco Spotfire Desktop and Information Designer.
4. Query and analyze various data formats with Tibco Spotfire and Drill.
+
+
+----------
+
+
+### Step 1: Install and Configure the Drill JDBC Driver 
+
+
+Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.

For general instructions to install the Drill JDBC driver, see [Using JDBC](http://drill.apache.org/docs/using-jdbc/).
+
Complete the following steps to install and configure the JDBC driver for TSS:

1. Locate the JDBC driver in the Drill installation directory:  
   `<drill-home>/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar`  
   For example, on a MapR cluster:  
   `/opt/mapr/drill/drill-1.0.0/jars/jdbc-driver/drill-jdbc-all-1.0.0-SNAPSHOT.jar`

2. Locate the TSS library directory and copy the JDBC driver file to that directory: 
+   `<TSS-home-directory>/tomcat/lib`  
   For example, on a Linux server:  
+   `/usr/local/bin/tibco/tss/6.0.3/tomcat/lib`  
+   For example, on a Windows server:  
+   `C:\Program Files\apache-tomcat\lib`

3. Restart TSS to load the JDBC driver.
4. Verify that the TSS system can resolve the hostnames of the ZooKeeper nodes for the Drill cluster. You can do this by validating that DNS is properly configured for the TSS system and all the ZooKeeper nodes. Alternatively, you can add the hostnames and IP addresses of the ZooKeeper nodes to the TSS system hosts file.  
+   For Linux systems, the hosts file is located here: 
+   `/etc/hosts`  
+   For Windows systems, the hosts file is located here: 
+   `%WINDIR%\system32\drivers\etc\hosts`
+----------
+
+### Step 2: Configure the Drill Data Source Template in TSS
+
+The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.
+
+1. Click **Start > All Programs > TIBCO Spotfire Server > Configure TIBCO Spotfire Server**.
+2. Enter the Configuration Tool password that was specified when TSS was initially installed.
3. Once the Configuration Tool has connected to TSS, click the **Configuration** tab, then **Data Source Templates**.
4. In the Data Source Templates window, click the **New** button at the bottom of the window.
5. Provide a name for the data source template, then copy the following XML template into the **Data Source Template** box. When complete, click **OK**.
6. The new entry will now be available in the data source template. Check the box next to the new entry, then click **Save Configuration**.
+   
+#### XML Template
+
+Make sure that you enter the correct ZooKeeper node name instead of `<zk-node>`, as well as the correct Drill cluster name instead of `<drill-cluster-name>` in the example below. This is just a template that will appear whenever a data source is configured. The hostnames of ZooKeeper nodes and the Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster.
+     
+    <jdbc-type-settings>
    <type-name>drill</type-name>
    <driver>org.apache.drill.jdbc.Driver</driver> 
    <connection-url-pattern>jdbc:drill:zk=<zk-node>:5181/drill/<drill-cluster-name>-drillbits</connection-url-pattern> 
    <ping-command>SELECT 1 FROM sys.version</ping-command>
    <supports-catalogs>true</supports-catalogs>
    <supports-schemas>true</supports-schemas>
    <supports-procedures>false</supports-procedures>
    <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
 
    <column-name-pattern>`$$name$$`</column-name-pattern>
    <table-name-pattern>`$$name$$`</table-name-pattern>
    <schema-name-pattern>`$$name$$`</schema-name-pattern>
    <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
    <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
    <column-alias-pattern>`$$name$$`</column-alias-pattern>

    <java-to-sql-type-conversions>
     <type-mapping>
      <from max-length="32672">String</from>
      <to>VARCHAR($$value$$)</to>
     </type-mapping>
     <type-mapping>
      <from>String</from>
      <to>VARCHAR(32672)</to>
     </type-mapping>
     <type-mapping>
      <from>Integer</from>
      <to>INTEGER</to>
     </type-mapping>
    </java-to-sql-type-conversions>
+    </jdbc-type-settings>
+----------
+
+### Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop 
+
+To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.
+
+1. Open Tibco Spotfire Desktop.
+2. Log into TSS.
+3. Select the deployment area in TSS to be used.
+4. Click **Tools > Information Designer**.
+5. In the Information Designer, click **New > Data Source**.
+6. In the Data Source window, enter the name for the data source. Select the Drill Data Source template created in Step 2 as the type. Update the connection URL with the correct hostname of the ZooKeeper node(s) and the Drill cluster name. Note: The Zookeeper node(s) hostname(s) and Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster. Enter the username and password used to connect to Drill. When completed, click **Save**. 
+7. In the Save As window, verify the name and the folder where you want to save the new data source in TSS. Click **Save** when done. TSS will now validate the information and save the new data source in TSS.
8. When the data source is saved, it will appear in the **Data Sources** tab, and you will be able to navigate the schema.
+
+
+----------
+
+### Step 4: Query and Analyze the Data
+
+After the Drill data source has been configured in the Information Designer, the information elements can be defined. 

1.	In this example all the columns of a Hive table have been defined, using the Drill data source, and added to an information link.
+2.	The SQL syntax to retrieve the data can be validated by clicking the **SQL** button. Many other operations can be performed in Information Link,  including joins, filters, and so on. See the Tibco Spotfire documentation for details.
3.	You can now import the data of this table into TSS by clicking the **Open Data** button. 
+
The data is now available in Tibco Spotfire Desktop to create various reports and tables as needed, and to be shared. For more information about creating charts, tables and reports, see the Tibco Spotfire documentation.


...
+

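As a quick companion to Step 1 of the new Spotfire page above, here is a minimal shell sketch of installing the Drill JDBC driver into TSS and checking ZooKeeper name resolution. The Drill and TSS paths are the examples quoted in the doc; the `service tss restart` command and the `zk-node1.example.com` hostname are assumptions for illustration only.

    # Locate the Drill JDBC driver on a Drill node (example path from the doc: Drill 1.0.0 on MapR).
    DRILL_JDBC=/opt/mapr/drill/drill-1.0.0/jars/jdbc-driver/drill-jdbc-all-1.0.0-SNAPSHOT.jar

    # Copy it into the TSS library directory (example path from the doc: TSS 6.0.3 on Linux).
    # Use scp instead of cp if TSS runs on a different host than Drill.
    TSS_LIB=/usr/local/bin/tibco/tss/6.0.3/tomcat/lib
    cp "$DRILL_JDBC" "$TSS_LIB"

    # Restart TSS so the driver is loaded. "service tss restart" is an assumed service
    # name; use whatever mechanism manages TSS on your system.
    service tss restart

    # Confirm the TSS host can resolve the Drill cluster's ZooKeeper nodes; if not,
    # add them to /etc/hosts (or %WINDIR%\system32\drivers\etc\hosts on Windows).
    ping -c 1 zk-node1.example.com   # hypothetical ZooKeeper hostname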

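It can also help to confirm the connection URL and ping query from the XML template before saving it in TSS. The sketch below uses sqlline, which ships with Drill, against the same jdbc:drill URL pattern; the ZooKeeper hostname and Drill cluster name are placeholders to be replaced with the values from `$DRILL_HOME/conf/drill-override.conf`.

    # Validate the template's connection URL and ping command from a Drill install.
    # zk-node1.example.com and mycluster are placeholders; take the real values from
    # $DRILL_HOME/conf/drill-override.conf as described in the doc above.
    cd /opt/mapr/drill/drill-1.0.0   # example Drill home from the doc
    bin/sqlline -u "jdbc:drill:zk=zk-node1.example.com:5181/drill/mycluster-drillbits"

    # At the sqlline prompt, run the same query TSS will use as its ping command:
    #   SELECT 1 FROM sys.version;

If that query returns a row, the same URL should work in the TSS Data Source template.
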
[15/26] drill git commit: New homepage design

Posted by ts...@apache.org.
New homepage design


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/0b09f9b5
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/0b09f9b5
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/0b09f9b5

Branch: refs/heads/gh-pages
Commit: 0b09f9b5cf03546f5297b36813c817fe11a98558
Parents: ca346ee
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 15 20:46:34 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 15 20:46:34 2015 -0700

----------------------------------------------------------------------
 blog/_drafts/drill-1.0-released.md |  12 +++--
 css/code.css                       |   4 +-
 css/style.css                      |  78 +++++++++++++++++++------------
 images/home-any.png                | Bin 0 -> 55988 bytes
 images/home-bi.png                 | Bin 0 -> 42034 bytes
 images/home-coffee.jpg             | Bin 0 -> 33977 bytes
 images/home-json.png               | Bin 0 -> 54424 bytes
 index.html                         |  79 ++++++++++++++++++++++++++++++--
 8 files changed, 136 insertions(+), 37 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/blog/_drafts/drill-1.0-released.md
----------------------------------------------------------------------
diff --git a/blog/_drafts/drill-1.0-released.md b/blog/_drafts/drill-1.0-released.md
index b9cfbb7..c9fc5d8 100644
--- a/blog/_drafts/drill-1.0-released.md
+++ b/blog/_drafts/drill-1.0-released.md
@@ -31,12 +31,20 @@ Forest Hill, MD - 19 May 2015 - The Apache Software Foundation (ASF), the all-vo
 
 With the exponential growth of data in recent years, and the shift towards rapid application development, new data is increasingly being stored in non-relational, schema-free datastores including Hadoop, NoSQL and Cloud storage. Apache Drill enables analysts, business users, data scientists and developers to explore and analyze this data without sacrificing the flexibility and agility offered by these datastores. Drill processes the data in-situ without requiring users to define schemas or transform data.
 
-"Drill introduces the JSON document model to the world of SQL-based analytics and BI" said Jacques Nadeau, Vice President of Apache Drill. "This enables users to query fixed-schema, evolving-schema and schema-free data stored in a variety of formats and datastores. The architecture of relational query engines and databases is built on the assumption that all data has a simple and static structure that’s known in advance, and this 40-year-old assumption is simply no longer valid. We designed Drill from the ground up to address the new reality.”
+"Drill introduces the JSON document model to the world of SQL-based analytics and BI" said Jacques Nadeau, Vice President of Apache Drill. "This enables users to query fixed-schema, evolving-schema and schema-free data stored in a variety of formats and datastores. The architecture of relational query engines and databases is built on the assumption that all data has a simple and static structure that’s known in advance, and this 40-year-old assumption is simply no longer valid. We designed Drill from the ground up to address the new reality."
 
 Apache Drill's architecture is unique in many ways. It is the only columnar execution engine that supports complex and schema-free data, and the only execution engine that performs data-driven query compilation (and re-compilation, also known as schema discovery) during query execution. These unique capabilities enable Drill to achieve record-breaking performance with the flexibility offered by the JSON document model.
 
 "Drill's columnar execution engine and optimizer take full advantage of Apache Parquet's columnar storage to achieve maximum performance," said Julien Le Dem, Technical Lead of Data Processing at Twitter and Vice President of Apache Parquet. "The Drill team has been a key contributor to the Parquet project, including recent enhancements to Parquet types and vectorization. The Drill team’s involvement in the Parquet community is instrumental in driving the standard."
 
+"Apache Drill 1.0 raises the bar for secure, reliable and scalable SQL-on-Hadoop," said Piyush Bhargava, distinguished engineer, IT, Cisco Systems.  "Because Drill integrates with existing data virtualization and visualization tools, we expect it will improve adoption of self-service data exploration and large-scale BI queries on our advanced Hadoop platform at Cisco."  
+
+"Apache Drill closes a gap around self-service SQL queries in Hadoop, especially on complex, dynamic NoSQL data types," said Mike Foster, strategic alliances technology officer, Qlik.  "Drill's performance advantages for Hadoop data access, combined with the Qlik associative experience, enables our customers to continue discovering business value from a wide range of data. Congrats to the Apache Drill community."
+
+"Apache Drill empowers people to access data that is traditionally difficult to work with," said Jeff Feng, product manager, Tableau.  "Direct access within a centralized data repository and without pre-generating metadata definitions encourages data democracy which is essential for data-driven organizations. Additionally, Drill's instant and secure access to complex data formats, such as JSON, opens up extended analytical opportunities."
+
+"Congratulations to the Apache Drill community on the availability of 1.0," said Karl Van den Bergh, vice president, products and cloud, TIBCO. "Drill promises to bring low-latency access to data stored in Hadoop and HBase via standard SQL semantics. This innovation is in line with the value of Fast Data analysis, which TIBCO customers welcome and appreciate."
+
 Availability and Oversight
 Apache Drill 1.0 is available immediately as a free download from http://drill.apache.org/download/. Documentation is available at http://drill.apache.org/docs/. As with all Apache products, Apache Drill software is released under the Apache License v2.0, and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the project's day-to-day operations, including community development and product releases. For ways to become involved with Apache Drill, visit http://drill.apache.org/ and @ApacheDrill on Twitter.
 
@@ -44,5 +52,3 @@ About The Apache Software Foundation (ASF)
 Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 500 individual Members and 4,500 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Bloomberg, Budget Direct, Cerner, Citrix, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, IBM, InMotion Hosting, iSigma, Matt Mullenweg, Microsoft, Pivotal, Produban, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ or follow @TheASF on Twitter.
 
 © The Apache Software Foundation. "Apache", "Apache Drill", "Drill", "Apache Hadoop", "Hadoop", "Apache Parquet", "Parquet", and "ApacheCon", are registered trademarks or trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
-
-\# \# \#

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/css/code.css
----------------------------------------------------------------------
diff --git a/css/code.css b/css/code.css
index b06c8d2..b0de98d 100644
--- a/css/code.css
+++ b/css/code.css
@@ -3,7 +3,7 @@ div.highlight pre, code {
   border-radius: 0;
   border: none;
   border-left: 5px solid #494747;
-  font-family: 'Source Code Pro', monospace;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
   font-size: 14px;
   line-height: 24px;
   overflow: auto;
@@ -21,7 +21,7 @@ code {
   border-radius: 0;
   border: none;
   border-left: 5px;
-  font-family: 'Source Code Pro', monospace;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
   font-size: 14px;
   line-height: 24px;
   overflow: auto;

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
index cc85454..e7ab7ef 100755
--- a/css/style.css
+++ b/css/style.css
@@ -442,7 +442,6 @@ span.strong {
 
 .introWrapper {
   border-bottom:1px solid #CCC;
-  margin-bottom:50px;  
 }
 
 table.intro {
@@ -456,7 +455,7 @@ table.intro td {
   background-position:center 25px;
   background-repeat:no-repeat;
   background-size:25px auto;
-  padding:65px 0 40px 0;
+  padding:65px 0 0 0;
   position:relative;
   vertical-align:top;
 }
@@ -501,32 +500,6 @@ table.intro a {
   font-weight: bold;
 }
 
-.home_txt { 
-  text-align:center;
-  padding-bottom:25px;
-}
-
-.home_txt h1 {
-  font-size:36px;
-  font-weight:normal;
-  line-height:44px;  
-  margin:0;
-}
-
-.home_txt h2 {
-  font-size:16px;
-  font-weight:normal;
-  line-height:24px;  
-}
-
-.home_txt p {
-  font-size:16px;
-  font-weight:lighter;
-  line-height:24px;
-  margin:40px auto;
-  width:770px;  
-}
-
 #blu {
   display:table;
   font-size:12px;
@@ -851,4 +824,51 @@ div.alertbar a{
 div.alertbar div span{
   font-size:65%;
   color:#aa7;
-}
\ No newline at end of file
+}
+
+div.home-row{
+  border-bottom:solid 1px #ccc;
+  margin:0 auto;
+  text-align:center;
+}
+
+div.home-row div{
+  display:inline-block;
+  vertical-align:middle;
+  text-align:left;
+}
+
+div.home-row:nth-child(odd) div:nth-child(1){
+  width:300px;
+}
+
+div.home-row:nth-child(odd) div:nth-child(2){
+  margin-left:40px;
+  width:580px;
+}
+
+div.home-row:nth-child(even) div:nth-child(1){
+  width:580px;
+}
+
+div.home-row:nth-child(even) div:nth-child(2){
+  margin-left:40px;
+  width:300px;
+}
+
+.home-row h1 {
+  font-size:24px;
+  margin:24px 0;
+  font-weight:bold;
+}
+
+.home-row h2 {
+  font-size:20px;
+  margin:20px 0;
+  font-weight:bold;
+}
+
+.home-row p {
+  font-size:16px;
+  line-height:22px;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/images/home-any.png
----------------------------------------------------------------------
diff --git a/images/home-any.png b/images/home-any.png
new file mode 100644
index 0000000..71ddeb5
Binary files /dev/null and b/images/home-any.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/images/home-bi.png
----------------------------------------------------------------------
diff --git a/images/home-bi.png b/images/home-bi.png
new file mode 100644
index 0000000..80bfccb
Binary files /dev/null and b/images/home-bi.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/images/home-coffee.jpg
----------------------------------------------------------------------
diff --git a/images/home-coffee.jpg b/images/home-coffee.jpg
new file mode 100644
index 0000000..da4f6e0
Binary files /dev/null and b/images/home-coffee.jpg differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/images/home-json.png
----------------------------------------------------------------------
diff --git a/images/home-json.png b/images/home-json.png
new file mode 100644
index 0000000..b6cccea
Binary files /dev/null and b/images/home-json.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/0b09f9b5/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index a1eee46..03b787c 100755
--- a/index.html
+++ b/index.html
@@ -67,23 +67,95 @@ $(document).ready(function() {
         <td class="ag">
           <h1>Agility</h1>
           <p>Get faster insights without the overhead (data loading, schema creation and maintenance, transformations, etc.)</p>
-          <span><a href="#agility">LEARN MORE</a></span>
         </td>
         <td class="fl">
           <h1>Flexibility</h1>
           <p>Analyze the multi-structured and nested data in non-relational datastores directly without transforming or restricting the data</p>
-          <span><a href="#flexibility">LEARN MORE</a></span>
         </td>
         <td class="fam">
           <h1>Familiarity</h1>
           <p>Leverage your existing SQL skillsets and BI tools including Tableau, Qlikview, MicroStrategy, Spotfire, Excel and more</p>
-          <span><a href="#familiarity">LEARN MORE</a></span>
         </td>
       </tr>
     </tbody>
   </table>
 </div>
 
+<div class="home-row">
+  <div><img src="{{ site.baseurl }}/images/home-any.png" style="width:300px" /></div>
+  <div>
+    <h1>Query any non-relational datastore (well, almost...)</h1>
+    <p>Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files. A single query can join data from multiple datastores. For example, you could join a user profile collection in MongoDB with a directory of event logs in Hadoop.</p>
+    <p>Drill’s datastore-aware optimizer automatically restructures a query plan to leverage the datastore’s internal processing capabilities. In addition, Drill supports 'data locality', so it’s a good idea to co-locate Drill and the datastore on the same nodes.</p>
+  </div>
+</div>
+
+<div class="home-row">
+  <div>
+    <h1>Kiss the overhead goodbye and enjoy data agility</h1>
+    <p>Traditional query engines demand significant IT intervention before data can be queried. Drill gets rid of all that overhead so that users can just query the raw data in-situ. There's no need to load the data, create and maintain schemas, or transform the data before it can be processed. Instead, simply include the path to a Hadoop directory, MongoDB collection or S3 bucket in the SQL query.</p>
+    <p>Drill leverages advanced query compilation and re-compilation techniques to maximize performance without requiring up-front schema knowledge.</p>
+  </div>
+  <div><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
+  line-height: 1.5;'>SELECT * FROM <span style="font-weight:bold;color:#000;text-decoration: underline">dfs.root.`/web/logs`</span>;
+  
+SELECT country, count(*)
+  FROM <span style="font-weight:bold;color:#000;text-decoration: underline;">mongodb.web.users</span>
+  GROUP BY country;
+
+SELECT timestamp
+  FROM <span style="font-weight:bold;color:#000;text-decoration: underline">s3.root.`clicks.json`</span>
+  WHERE user_id = 'jdoe';</pre></div>
+</div>
+
+<div class="home-row">
+  <div><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
+  <div>
+    <h1>Treat your data like a table even when it’s not</h1>
+    <p>Drill features a JSON data model that enables it to query complex/nested data and rapidly evolving structure commonly seen in modern applications and non-relational datastores. Drill also provides intuitive extensions to SQL so that the user can easily query complex data.
+    <p>Drill is the only columnar query engine that supports complex data. It features an in-memory shredded columnar representation for complex data which allows Drill to achieve columnar speed with the flexibility of an internal JSON document model.</p>
+  </div>
+</div>
+
+<div class="home-row">
+  <div>
+    <h1>Keep using the BI tools you love</h1>
+    <p>Drill supports standard SQL. Business users, analysts and data scientists can use standard BI/analytics tools such as Tableau, Qlik, MicroStrategy, Spotfire, SAS and Excel to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations.</p>
+    <p>Drill’s virtual datasets allow even the most complex, non-relational data to be mapped into BI-friendly structures which users can explore and visualize using their tool of choice.</p>
+  </div>
+  <div><img src="{{ site.baseurl }}/images/home-bi.png" style="width:300px" /></div>
+</div>
+
+<div class="home-row">
+  <div><div><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
+  line-height: 1.5;'>$ curl j.mp/drill-latest -o drill.tgz
+$ tar xzf drill.tgz
+$ cd apache-drill-1.0.0
+$ bin/drill-embedded
+</pre></div></div>
+  <div>
+    <h1>Scale from one laptop to 1000s of servers</h1>
+    <p>We made it easy to download and run Drill on your laptop. It runs on Mac, Windows and Linux, and within a minute or two you’ll be exploring your data. When you’re ready for prime time, deploy Drill on a cluster of commodity servers and take advantage of the world’s most scalable and high performance execution engine.
+    <p>Drill’s symmetrical architecture (all nodes are the same) and simple installation makes it easy to deploy and operate very large clusters.</p>
+  </div>
+</div>
+
+<div class="home-row">
+  <div>
+    <h1>No more waiting for coffee</h1>
+    <p>Drill isn’t the world’s first query engine, but it’s the first that combines both flexibility and speed. To achieve this, Drill features a radically different architecture that enables record-breaking performance without sacrificing the flexibility offered by the JSON document model. For example:<ul>
+<li>Columnar execution engine (the first ever to support complex data!)</li>
+<li>Data-driven compilation and recompilation at execution time</li>
+<li>Specialized memory management that reduces memory footprint and eliminates garbage collections</li>
+<li>Locality-aware execution that reduces network traffic when Drill is co-located with the datastore</li>
+<li>Advanced cost-based optimizer that pushes processing into the datastore when possible</li></ul></p>
+  </div>
+  <div><img src="{{ site.baseurl }}/images/home-coffee.jpg" style="width:300px" /></div>
+</div>
+
+
+
+<!--
 <div class="home_txt mw">
   <p>The 40-year monopoly of the relational database is over. The explosion of data in recent years and the shift towards rapid application development have led to the rise of non-relational datastores including Hadoop, NoSQL and cloud storage. Organizations are increasingly leveraging these systems for new and existing applications due to their flexibility, scalability and price advantages. Drill is built from the ground up to enable business users, analysts, data scientists and developers to explore and analyze the data in these systems while maintaining their unique agility and flexibility advantages.</p>
 
@@ -103,3 +175,4 @@ $(document).ready(function() {
   <img src="images/home-img3.jpg" alt="familiarity" width="380" />
   <p>Drill supports standard SQL. Business users, analysts and data scientists can use standard BI/analytics tools such as Tableau, QlikView, MicroStrategy, Spotfire, SAS and Excel to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations based on data in their non-relational datastores. Users can also plug-and-play with Hive environments to enable ad-hoc low latency queries on existing Hive tables and reuse Hive's metadata, hundreds of file formats and UDFs out of the box.</p>
 </div>
+-->


[17/26] drill git commit: Fixes

Posted by ts...@apache.org.
Fixes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/af2096d2
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/af2096d2
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/af2096d2

Branch: refs/heads/gh-pages
Commit: af2096d213826da33ee0488dc80124a875386db2
Parents: 44f8fde
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 15 22:55:27 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 15 22:55:27 2015 -0700

----------------------------------------------------------------------
 css/style.css | 15 +++++++++++++++
 index.html    | 13 +++++--------
 2 files changed, 20 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/af2096d2/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
index fe9b855..414efa5 100755
--- a/css/style.css
+++ b/css/style.css
@@ -880,3 +880,18 @@ div.home-row:nth-child(even) div.big{
 .home-row div.big{
   display:inline-block;
 }
+
+div.home-row div pre{
+  background:#f3f5f7;
+  color:#2a333c;
+  border:solid 1px #aaa;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  font-size: 12px;
+  line-height: 1.5;
+}
+
+div.home-row div pre span.code-underline{
+  font-weight:bold;
+  color:#000;
+  text-decoration: underline;
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/af2096d2/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 6860277..8494f54 100755
--- a/index.html
+++ b/index.html
@@ -97,15 +97,14 @@ $(document).ready(function() {
     <p>Traditional query engines demand significant IT intervention before data can be queried. Drill gets rid of all that overhead so that users can just query the raw data in-situ. There's no need to load the data, create and maintain schemas, or transform the data before it can be processed. Instead, simply include the path to a Hadoop directory, MongoDB collection or S3 bucket in the SQL query.</p>
     <p>Drill leverages advanced query compilation and re-compilation techniques to maximize performance without requiring up-front schema knowledge.</p>
   </div>
-  <div class="small big"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
-  line-height: 1.5;'>SELECT * FROM <span style="font-weight:bold;color:#000;text-decoration: underline">dfs.root.`/web/logs`</span>;
+  <div class="small big"><pre>SELECT * FROM <span class="code-underline">dfs.root.`/web/logs`</span>;
   
 SELECT country, count(*)
-  FROM <span style="font-weight:bold;color:#000;text-decoration: underline;">mongodb.web.users</span>
+  FROM <span class="code-underline">mongodb.web.users</span>
   GROUP BY country;
 
 SELECT timestamp
-  FROM <span style="font-weight:bold;color:#000;text-decoration: underline">s3.root.`clicks.json`</span>
+  FROM <span class="code-underline">s3.root.`clicks.json`</span>
   WHERE user_id = 'jdoe';</pre></div>
 </div>
 
@@ -129,8 +128,7 @@ SELECT timestamp
 </div>
 
 <div class="home-row">
-  <div class="big"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
-  line-height: 1.5;'>$ curl j.mp/drill-latest -o drill.tgz
+  <div class="big"><pre>$ curl j.mp/drill-latest -o drill.tgz
 $ tar xzf drill.tgz
 $ cd apache-drill-1.0.0
 $ bin/drill-embedded
@@ -140,8 +138,7 @@ $ bin/drill-embedded
     <p>We made it easy to download and run Drill on your laptop. It runs on Mac, Windows and Linux, and within a minute or two you’ll be exploring your data. When you’re ready for prime time, deploy Drill on a cluster of commodity servers and take advantage of the world’s most scalable and high performance execution engine.
     <p>Drill’s symmetrical architecture (all nodes are the same) and simple installation makes it easy to deploy and operate very large clusters.</p>
   </div>
-  <div class="small"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
-    line-height: 1.5;'>$ curl j.mp/drill-latest -o drill.tgz
+  <div class="small"><pre>$ curl j.mp/drill-latest -o drill.tgz
   $ tar xzf drill.tgz
   $ cd apache-drill-1.0.0
   $ bin/drill-embedded


[03/26] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/0640c9ae
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/0640c9ae
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/0640c9ae

Branch: refs/heads/gh-pages
Commit: 0640c9aecb8e9db94d8576b7234d59ef1066f999
Parents: 2a06d65 fcb4f41
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Wed May 13 17:56:47 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Wed May 13 17:56:47 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 142 +++++++++----------
 _docs/img/tableau-server-authentication.png     | Bin 0 -> 42451 bytes
 _docs/img/tableau-server-publish-datasource.png | Bin 0 -> 79417 bytes
 .../img/tableau-server-publish-datasource2.png  | Bin 0 -> 38100 bytes
 .../img/tableau-server-publish-datasource3.png  | Bin 0 -> 42950 bytes
 _docs/img/tableau-server-publish1.png           | Bin 0 -> 20646 bytes
 _docs/img/tableau-server-publish2.png           | Bin 0 -> 51834 bytes
 _docs/img/tableau-server-signin1.png            | Bin 0 -> 21257 bytes
 _docs/img/tableau-server-signin2.png            | Bin 0 -> 19833 bytes
 ...-using-apache-drill-with-tableau-9-server.md |  76 ++++++++++
 _includes/authors.html                          |   1 +
 css/responsive.css                              |  19 +--
 css/style.css                                   |  28 ++++
 css/video-slider.css                            |  34 +++++
 images/play-mq.png                              | Bin 0 -> 1050 bytes
 images/thumbnail-65c42i7Xg7Q.jpg                | Bin 0 -> 12659 bytes
 images/thumbnail-6pGeQOXDdD8.jpg                | Bin 0 -> 13315 bytes
 images/thumbnail-MYY51kiFPTk.jpg                | Bin 0 -> 13058 bytes
 images/thumbnail-bhmNbH2yzhM.jpg                | Bin 0 -> 14299 bytes
 index.html                                      |  29 ++--
 20 files changed, 237 insertions(+), 92 deletions(-)
----------------------------------------------------------------------



[24/26] drill git commit: Updated download link

Posted by ts...@apache.org.
Updated download link


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/55a3ebb4
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/55a3ebb4
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/55a3ebb4

Branch: refs/heads/gh-pages
Commit: 55a3ebb4ec68cb045f7120d73965e0542562156a
Parents: 7c6c47d
Author: Tomer Shiran <ts...@gmail.com>
Authored: Sat May 16 22:58:57 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Sat May 16 22:58:57 2015 -0700

----------------------------------------------------------------------
 index.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/55a3ebb4/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 6637183..e64a70b 100755
--- a/index.html
+++ b/index.html
@@ -130,7 +130,7 @@ SELECT timestamp
 </div>
 
 <div class="home-row">
-  <div class="big"><pre>$ curl j.mp/drill-latest -o drill.tgz
+  <div class="big"><pre>$ curl j.mp/drill-1-0-0-rc1 -o drill.tgz
 $ tar xzf drill.tgz
 $ cd apache-drill-1.0.0
 $ bin/drill-embedded
@@ -140,7 +140,7 @@ $ bin/drill-embedded
     <p>We made it easy to download and run Drill on your laptop. It runs on Mac, Windows and Linux, and within a minute or two you’ll be exploring your data. When you’re ready for prime time, deploy Drill on a cluster of commodity servers and take advantage of the world’s most scalable and high performance execution engine.
     <p>Drill’s symmetrical architecture (all nodes are the same) and simple installation makes it easy to deploy and operate very large clusters.</p>
   </div>
-  <div class="small"><pre>$ curl j.mp/drill-latest -o drill.tgz
+  <div class="small"><pre>$ curl j.mp/drill-1-0-0-rc1 -o drill.tgz
   $ tar xzf drill.tgz
   $ cd apache-drill-1.0.0
   $ bin/drill-embedded
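
To see where a shortened link such as j.mp/drill-1-0-0-rc1 actually points, its redirect chain can be followed with curl; a sketch, assuming the short link simply redirects to an Apache mirror:

    $ curl -sIL j.mp/drill-1-0-0-rc1 | grep -i '^location'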


[11/26] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1c27ed53
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1c27ed53
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1c27ed53

Branch: refs/heads/gh-pages
Commit: 1c27ed53ff6f983bc81c71a95102114be7cebe78
Parents: c18b098 9fd0cca
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 15 15:02:19 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 15 15:02:19 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  75 ++++++++++++++++---
 _docs/img/spotfire-server-client.png            | Bin 0 -> 48430 bytes
 _docs/img/spotfire-server-configtab.png         | Bin 0 -> 76152 bytes
 _docs/img/spotfire-server-connectionURL.png     | Bin 0 -> 47664 bytes
 _docs/img/spotfire-server-database.png          | Bin 0 -> 36204 bytes
 _docs/img/spotfire-server-datasources-tab.png   | Bin 0 -> 49236 bytes
 _docs/img/spotfire-server-deployment.png        | Bin 0 -> 22058 bytes
 _docs/img/spotfire-server-hiveorders.png        | Bin 0 -> 62537 bytes
 _docs/img/spotfire-server-importconfig.png      | Bin 0 -> 32739 bytes
 _docs/img/spotfire-server-infodesigner.png      | Bin 0 -> 69950 bytes
 _docs/img/spotfire-server-infodesigner2.png     | Bin 0 -> 36991 bytes
 _docs/img/spotfire-server-infolink.png          | Bin 0 -> 126884 bytes
 _docs/img/spotfire-server-new.png               | Bin 0 -> 23290 bytes
 _docs/img/spotfire-server-saveconfig.png        | Bin 0 -> 188740 bytes
 _docs/img/spotfire-server-saveconfig2.png       | Bin 0 -> 32622 bytes
 _docs/img/spotfire-server-start.png             | Bin 0 -> 54541 bytes
 _docs/img/spotfire-server-template.png          | Bin 0 -> 155705 bytes
 _docs/img/spotfire-server-tss.png               | Bin 0 -> 43564 bytes
 .../065-configuring-spotfire-server.md          |  64 ++++++++++++++++
 19 files changed, 127 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/1c27ed53/_data/docs.json
----------------------------------------------------------------------


[14/26] drill git commit: add new tuts

Posted by ts...@apache.org.
add new tuts


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/4d19390b
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/4d19390b
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/4d19390b

Branch: refs/heads/gh-pages
Commit: 4d19390b4c16aca19e87005d57ae5807b419eeff
Parents: d5b22a4
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 15 16:05:08 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 15 16:05:08 2015 -0700

----------------------------------------------------------------------
 _docs/tutorials/010-tutorials-introduction.md | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/4d19390b/_docs/tutorials/010-tutorials-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/010-tutorials-introduction.md b/_docs/tutorials/010-tutorials-introduction.md
index f9c6263..178e721 100644
--- a/_docs/tutorials/010-tutorials-introduction.md
+++ b/_docs/tutorials/010-tutorials-introduction.md
@@ -12,10 +12,20 @@ If you've never used Drill, use these tutorials to download, install, and start
   Explore data using a Hadoop environment pre-configured with Drill.  
 * [Analyzing Highly Dynamic Datasets]({{site.baseurl}}/docs/analyzing-highly-dynamic-datasets)  
   Delve into changing data without creating a schema or going through an ETL phase.
+* [Analyzing Social Media]({{site.baseurl}}/docs/analyzing-social-media)  
+  Analyze Twitter data in native JSON format using Apache Drill.  
 * [Tableau Examples]({{site.baseurl}}/docs/tableau-examples)  
   Access Hive tables in Tableau.  
-* [Using MicroStrategy Analytics with Drill]({{site.baseurl}}/docs/using-microstrategy-analytics-with--apache-drill/)  
+* [Using MicroStrategy Analytics with Apache Drill]({{site.baseurl}}/docs/using-microstrategy-analytics-with--apache-drill/)  
   Use the Drill ODBC driver from MapR to analyze data and generate a report using Drill from the MicroStrategy UI.  
+* [Using Tibco Spotfire Desktop with Drill]({{site.baseurl}}/docs/using-tibco-spotfire-with-drill/)  
+  Use Apache Drill to query complex data structures from Tibco Spotfire Desktop.
+* [Configuring Tibco Spotfire Server with Drill]({{site.baseurl}}/docs/configuring-tibco-spotfire-server-with-drill)  
+  Integrate Tibco Spotfire Server with Apache Drill and explore multiple data formats on Hadoop.  
+* [Using Apache Drill with Tableau 9 Desktop]({{site.baseurl}}/docs/using-apache-drill-with-tableau-9-desktop)  
+  Connect Tableau 9 Desktop to Apache Drill, explore multiple data formats on Hadoop, and access semi-structured data.  
+* [Using Apache Drill with Tableau 9 Server]({{site.baseurl}}/docs/using-apache-drill-with-tableau-9-server)  
+  Connect Tableau 9 Server to Apache Drill, explore multiple data formats on Hadoop, access semi-structured data, and share Tableau visualizations with others.  
 * [Using Drill to Analyze Amazon Spot Prices](https://github.com/vicenteg/spot-price-history#drill-workshop---amazon-spot-prices)  
   A Drill workshop on github that covers views of JSON and Parquet data.  
 * [Running Drill Queries on S3 Data](http://drill.apache.org/blog/2014/12/09/running-sql-queries-on-amazon-s3/)  


[08/26] drill git commit: update spotfire server doc with captures

Posted by ts...@apache.org.
update spotfire server doc with captures


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/9fd0ccac
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/9fd0ccac
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/9fd0ccac

Branch: refs/heads/gh-pages
Commit: 9fd0ccac4a756a5c58874c82addd262c43a60971
Parents: 6244940
Author: Bob Rumsby <br...@mapr.com>
Authored: Thu May 14 18:04:17 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Thu May 14 18:04:17 2015 -0700

----------------------------------------------------------------------
 .../065-configuring-spotfire-server.md          | 25 +++++++++++---------
 1 file changed, 14 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/9fd0ccac/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md b/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
index 674929c..b79d378 100644
--- a/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
+++ b/_docs/odbc-jdbc-interfaces/065-configuring-spotfire-server.md
@@ -27,8 +27,11 @@ Drill provides standard JDBC connectivity, making it easy to integrate data expl
 
 The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.
 
-1. Click **Start > All Programs > TIBCO Spotfire Server > Configure TIBCO Spotfire Server**.
-2. Enter the Configuration Tool password that was specified when TSS was initially installed.
3. Once the Configuration Tool has connected to TSS, click the **Configuration** tab, then **Data Source Templates**.
4. In the Data Source Templates window, click the **New** button at the bottom of the window.
5. Provide a name for the data source template, then copy the following XML template into the **Data Source Template** box. When complete, click **OK**.
6. The new entry will now be available in the data source template. Check the box next to the new entry, then click **Save Configuration**.
+1. Click **Start > All Programs > TIBCO Spotfire Server > Configure TIBCO Spotfire Server**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-start.png)
+2. Enter the Configuration Tool password that was specified when TSS was initially installed.
3. Once the Configuration Tool has connected to TSS, click the **Configuration** tab, then **Data Source Templates**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-configtab.png)
4. In the Data Source Templates window, click the **New** button at the bottom of the window. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-new.png)
5. Provide a name for the data source template, then copy the following XML template into the **Data Source Template** box. When complete, click **OK**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-template.png)
6. The new entry will now be available in the data source template. Check the box next to the new entry, then click **Save Configuration**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-saveconfig.png)
7. Select Database as the destination and click Next. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-saveconfig2.png) 
+8. Add a comment to the updated configuration and click **Finish**. 
+9. A response window is displayed to state that the configuration was successfully uploaded to TSS. Click **OK**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-importconfig.png)
+10. Restart TSS to enable it to use the Drill data source template.
    
 #### XML Template
 
@@ -42,20 +45,20 @@ Make sure that you enter the correct ZooKeeper node name instead of `<zk-node>`,
 
 To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.
 
-1. Open Tibco Spotfire Desktop.
-2. Log into TSS.
-3. Select the deployment area in TSS to be used.
-4. Click **Tools > Information Designer**.
-5. In the Information Designer, click **New > Data Source**.
-6. In the Data Source window, enter the name for the data source. Select the Drill Data Source template created in Step 2 as the type. Update the connection URL with the correct hostname of the ZooKeeper node(s) and the Drill cluster name. Note: The Zookeeper node(s) hostname(s) and Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster. Enter the username and password used to connect to Drill. When completed, click **Save**. 
-7. In the Save As window, verify the name and the folder where you want to save the new data source in TSS. Click **Save** when done. TSS will now validate the information and save the new data source in TSS.
8. When the data source is saved, it will appear in the **Data Sources** tab, and you will be able to navigate the schema.
+1. Open Tibco Spotfire Desktop. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-client.png)
+2. Log into TSS. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-tss.png)
+3. Select the deployment area in TSS to be used. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-deployment.png)
+4. Click **Tools > Information Designer**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infodesigner.png)
+5. In the Information Designer, click **New > Data Source**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infodesigner2.png)
+6. In the Data Source window, enter the name for the data source. Select the Drill Data Source template created in Step 2 as the type. Update the connection URL with the correct hostname of the ZooKeeper node(s) and the Drill cluster name. Note: The Zookeeper node(s) hostname(s) and Drill cluster name can be found in the `$DRILL_HOME/conf/drill-override.conf` file on any of the Drill nodes in the cluster. Enter the username and password used to connect to Drill. When completed, click **Save**. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-connectionURL.png)
+7. In the Save As window, verify the name and the folder where you want to save the new data source in TSS. Click **Save** when done. TSS will now validate the information and save the new data source in TSS.
8. When the data source is saved, it will appear in the **Data Sources** tab, and you will be able to navigate the schema. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-datasources-tab.png)
 
 
 ----------
 
 ### Step 4: Query and Analyze the Data
 
-After the Drill data source has been configured in the Information Designer, the information elements can be defined. 

1.	In this example all the columns of a Hive table have been defined, using the Drill data source, and added to an information link.
-2.	The SQL syntax to retrieve the data can be validated by clicking the **SQL** button. Many other operations can be performed in Information Link,  including joins, filters, and so on. See the Tibco Spotfire documentation for details.
3.	You can now import the data of this table into TSS by clicking the **Open Data** button. 
+After the Drill data source has been configured in the Information Designer, the information elements can be defined. 

1.	In this example all the columns of a Hive table have been defined, using the Drill data source, and added to an information link. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-infolink.png)
+2.	The SQL syntax to retrieve the data can be validated by clicking the **SQL** button. Many other operations can be performed in Information Link,  including joins, filters, and so on. See the Tibco Spotfire documentation for details.
3.	You can now import the data of this table into TSS by clicking the **Open Data** button. ![drill query flow]({{ site.baseurl }}/docs/img/spotfire-server-hiveorders.png)
 
The data is now available in Tibco Spotfire Desktop to create various reports and tables as needed, and to be shared. For more information about creating charts, tables and reports, see the Tibco Spotfire documentation.
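
As background for step 6 above, the ZooKeeper quorum and cluster ID that go into the connection URL can be read straight out of drill-override.conf; a minimal sketch with hypothetical host names:

    $ grep -E 'cluster-id|zk.connect' $DRILL_HOME/conf/drill-override.conf
    #   drill.exec: { cluster-id: "drillbits1", zk.connect: "zk1:2181,zk2:2181,zk3:2181" }
    # which corresponds to a JDBC URL of the form
    #   jdbc:drill:zk=zk1:2181,zk2:2181,zk3:2181/drill/drillbits1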


...
 


[04/26] drill git commit: remove old Drill refs, finish config options

Posted by ts...@apache.org.
remove old Drill refs, finish config options


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/c38e6a18
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/c38e6a18
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/c38e6a18

Branch: refs/heads/gh-pages
Commit: c38e6a1886aa9ed6c9ff277ada028ff10d53c043
Parents: 0640c9a
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 14 12:47:53 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 14 12:47:53 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 32 ++++++++++----------
 .../080-drill-default-input-format.md           |  2 +-
 .../030-deploying-and-using-a-hive-udf.md       |  2 +-
 .../050-json-data-model.md                      |  4 +--
 .../010-interfaces-introduction.md              |  4 +--
 ...microstrategy-analytics-with-apache-drill.md |  9 +++---
 .../050-using-drill-explorer-on-windows.md      |  4 +--
 .../030-analyzing-the-yelp-academic-dataset.md  |  5 ++-
 .../040-learn-drill-with-the-mapr-sandbox.md    | 19 +-----------
 .../010-installing-the-apache-drill-sandbox.md  |  2 +-
 10 files changed, 32 insertions(+), 51 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 587c698..4fd7948 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -3,16 +3,16 @@ title: "Configuration Options Introduction"
 parent: "Configuration Options"
 ---
 Drill provides many configuration options that you can enable, disable, or
-modify. Modifying certain configuration options can impact Drill’s
-performance. Many of Drill's configuration options reside in the `drill-
-env.sh` and `drill-override.conf` files. Drill stores these files in the
+modify. Modifying certain configuration options can impact Drill
+performance. Many of the configuration options reside in the `drill-
+env.sh` and `drill-override.conf` files in the
 `/conf` directory. Drill sources` /etc/drill/conf` if it exists. Otherwise,
 Drill sources the local `<drill_installation_directory>/conf` directory.
 
-The sys.options table in Drill contains information about boot (start-up) and system options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The sys.options table also contains many system options, some of which are described in detail the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). The following table lists the options in alphabetical order and provides a brief description of supported options:
+The sys.options table contains information about boot (start-up), system, and session options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The following table lists the options in alphabetical order and provides a brief description of supported options:
 
 ## System Options
-The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options) 
+The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
 
 | Name                                           | Default          | Comments                                                                                                                                                                                                                                                                                                                                                         |
 |------------------------------------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -33,8 +33,8 @@ The sys.options table lists the following options that you can set as a system o
 | exec.storage.enable_new_text_reader            | TRUE             | Enables the text reader that complies with the RFC 4180 standard for text/csv files.                                                                                                                                                                                                                                                                             |
 | new_view_default_permissions                   | 700              | Sets view permissions using an octal code in the Unix tradition.                                                                                                                                                                                                                                                                                                 |
 | planner.add_producer_consumer                  | FALSE            | Increase prefetching of data from disk. Disable for in-memory reads.                                                                                                                                                                                                                                                                                             |
-| planner.affinity_factor                        | 1.2              | Factor by which a node with endpoint affinity  is favored while creating assignment. Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                              |
-| planner.broadcast_factor                       | 1                |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.affinity_factor                        | 1.2              | Factor by which a node with endpoint affinity is favored while creating assignment. Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                               |
+| planner.broadcast_factor                       | 1                | A heuristic parameter for influencing the broadcast of records as part of a query.                                                                                                                                                                                                                                                                               |
 | planner.broadcast_threshold                    | 10000000         | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647                                                                                                                                                            |
 | planner.disable_exchanges                      | FALSE            | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
 | planner.enable_broadcast_join                  | TRUE             | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable.                                                                                                                                   |
@@ -55,17 +55,17 @@ The sys.options table lists the following options that you can set as a system o
 | planner.identifier_max_length                  | 1024             | A minimum length is needed because option names are identifiers themselves.                                                                                                                                                                                                                                                                                      |
 | planner.join.hash_join_swap_margin_factor      | 10               | The number of join order sequences to consider during the planning phase.                                                                                                                                                                                                                                                                                        |
 | planner.join.row_count_estimate_factor         | 1                | The factor for adjusting the estimated row count when considering multiple join order sequences during the planning phase.                                                                                                                                                                                                                                       |
-| planner.memory.average_field_width             | 8                |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.average_field_width             | 8                | Used in estimating memory requirements.                                                                                                                                                                                                                                                                                                                          |
 | planner.memory.enable_memory_estimation        | FALSE            | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes these operators from the plan and negatively impacts performance.                                                                                                                                   |
-| planner.memory.hash_agg_table_factor           | 1.1              |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.memory.hash_join_table_factor          | 1.1              |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.hash_agg_table_factor           | 1.1              | A heuristic value for influencing the size of the hash aggregation table.                                                                                                                                                                                                                                                                                        |
+| planner.memory.hash_join_table_factor          | 1.1              | A heuristic value for influencing the size of the hash aggregation table.                                                                                                                                                                                                                                                                                        |
 | planner.memory.max_query_memory_per_node       | 2147483648 bytes | Sets the maximum estimate of memory for a query per node in bytes. If the estimate is too low, Drill re-plans the query without memory-constrained operators.                                                                                                                                                                                                    |
-| planner.memory.non_blocking_operators_memory   | 64               | Extra query memory per node foer non-blocking operators. This option is currently used only for memory estimation. Range: 0-2048 MB                                                                                                                                                                                                                              |
-| planner.nestedloopjoin_factor                  | 100              |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.non_blocking_operators_memory   | 64               | Extra query memory per node for non-blocking operators. This option is currently used only for memory estimation. Range: 0-2048 MB                                                                                                                                                                                                                               |
+| planner.nestedloopjoin_factor                  | 100              | A heuristic value for influencing the nested loop join.                                                                                                                                                                                                                                                                                                          |
 | planner.partitioner_sender_max_threads         | 8                | Upper limit of threads for outbound queuing.                                                                                                                                                                                                                                                                                                                     |
-| planner.partitioner_sender_set_threads         | -1               |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.partitioner_sender_threads_factor      | 2                |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.producer_consumer_queue_size           | 10               | How much data to prefetch from disk (in record batches) out of band of query execution                                                                                                                                                                                                                                                                           |
+| planner.partitioner_sender_set_threads         | -1               | Overwrites the number of threads used to send out batches of records. Set to -1 to disable. Typically not changed.                                                                                                                                                                                                                                               |
+| planner.partitioner_sender_threads_factor      | 2                | A heuristic param to use to influence final number of threads. The higher the value the fewer the number of threads.                                                                                                                                                                                                                                             |
+| planner.producer_consumer_queue_size           | 10               | How much data to prefetch from disk in record batches out-of-band of query execution.                                                                                                                                                                                                                                                                            |
 | planner.slice_target                           | 100000           | The number of records manipulated within a fragment before Drill parallelizes operations.                                                                                                                                                                                                                                                                        |
 | planner.width.max_per_node                     | 3                | Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.                                                                                                                                                |
 | planner.width.max_per_query                    | 1000             | Same as max per node but applies to the query as executed by the entire cluster. For example, this value might be the number of active Drillbits, or a higher number to return results faster.                                                                                                                                                                   |
@@ -77,7 +77,7 @@ The sys.options table lists the following options that you can set as a system o
 | store.mongo.read_numbers_as_double             | FALSE            | Similar to store.json.read_numbers_as_double.                                                                                                                                                                                                                                                                                                                    |
 | store.parquet.block-size                       | 536870912        | Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                               |
 | store.parquet.compression                      | snappy           | Compression type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                  |
-| store.parquet.enable_dictionary_encoding       | FALSE            | Do not change.                                                                                                                                                                                                                                                                                                                                                   |
+| store.parquet.enable_dictionary_encoding       | FALSE            | For internal use. Do not change.                                                                                                                                                                                                                                                                                                                                 |
 | store.parquet.use_new_reader                   | FALSE            | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
 | store.text.estimated_row_size_bytes            | 100              | Estimate of the row size in a delimited text file, such as csv. The closer to actual, the better the query plan. Used for all csv files in the system/session where the value is set. Impacts the decision to plan a broadcast join or not.                                                                                                                      |
 | window.enable                                  | FALSE            | Not supported in this release. Coming soon.                                                                                                                                                                                                                                                                                                                      |
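
The system options in the table above can be inspected and changed from SQLLine; a brief sketch, assuming the 1.0-era sys.options column names (name, type, status) and illustrative values:

    $ bin/sqlline -u jdbc:drill:zk=local
    0: jdbc:drill:zk=local> SELECT name, type, status FROM sys.options WHERE name LIKE 'planner.memory.%';
    0: jdbc:drill:zk=local> ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 4294967296;
    0: jdbc:drill:zk=local> ALTER SESSION SET `planner.enable_hashjoin` = true;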

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/connect-a-data-source/080-drill-default-input-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/080-drill-default-input-format.md b/_docs/connect-a-data-source/080-drill-default-input-format.md
index e817343..25a065b 100644
--- a/_docs/connect-a-data-source/080-drill-default-input-format.md
+++ b/_docs/connect-a-data-source/080-drill-default-input-format.md
@@ -61,7 +61,7 @@ steps:
 
 ## Querying Compressed JSON
 
-You can use Drill 0.8 and later to query compressed JSON in .gz files as well as uncompressed files having the .json extension. First, add the gz extension to a storage plugin, and then use that plugin to query the compressed file.
+You can query compressed JSON in .gz files as well as uncompressed files having the .json extension. First, add the gz extension to a storage plugin, and then use that plugin to query the compressed file.
 
       "extensions": [
         "json",

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
index 538c1e3..6a26376 100644
--- a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
+++ b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
@@ -22,7 +22,7 @@ After you export the custom UDF as a JAR, perform the UDF setup tasks so Drill c
 To set up the UDF:
 
 1. Register Hive. [Register a Hive storage plugin]({{ site.baseurl }}/docs/registering-hive/) that connects Drill to a Hive data source.
-2. In Drill 0.7 and later, add the JAR for the UDF to the Drill CLASSPATH. In earlier versions of Drill, place the JAR file in the `/jars/3rdparty` directory of the Drill installation on all nodes running a Drillbit.
+2. Add the JAR for the UDF to the Drill CLASSPATH. In earlier versions of Drill, place the JAR file in the `/jars/3rdparty` directory of the Drill installation on all nodes running a Drillbit.
 3. On each Drill node in the cluster, restart the Drillbit.
    `<drill installation directory>/bin/drillbit.sh restart`
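
A condensed shell view of the jar placement and restart called out above, assuming a hypothetical UDF jar name and the jars/3rdparty layout the step describes for older releases:

    # run on every node that hosts a Drillbit (jar name is hypothetical)
    $ cp my-hive-udf-1.0.jar $DRILL_HOME/jars/3rdparty/
    $ $DRILL_HOME/bin/drillbit.sh restart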
  

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 29efeb2..28ab921 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -12,7 +12,7 @@ Semi-structured JSON data often consists of complex, nested elements having sche
 
 Using Drill you can natively query dynamic JSON data sets using SQL. Drill treats a JSON object as a SQL record. One object equals one row in a Drill table. 
 
-Drill 0.8 and higher can [query compressed .gz files]({{ site.baseurl }}/docs/drill-default-input-format#querying-compressed-json) having JSON as well as uncompressed .json files. 
+You can also [query compressed .gz files]({{ site.baseurl }}/docs/drill-default-input-format#querying-compressed-json) having JSON as well as uncompressed .json files. 
 
 In addition to the examples presented later in this section, see ["How to Analyze Highly Dynamic Datasets with Apache Drill"](https://www.mapr.com/blog/how-analyze-highly-dynamic-datasets-apache-drill) for information about how to analyze a JSON data set.
 
@@ -56,7 +56,7 @@ When you set this option, Drill reads all numbers from the JSON files as DOUBLE.
 * Cast JSON values to [SQL types]({{ site.baseurl }}/docs/data-types), such as BIGINT, FLOAT, and INTEGER.
 * Cast JSON strings to [Drill Date/Time Data Type Formats]({{ site.baseurl }}/docs/supported-date-time-data-type-formats).
 
-Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql). “Query Complex Data” shows how to access nested arrays.
+Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql). [“Query Complex Data”]({{ site.baseurl }}/docs/querying-complex-data-introduction) shows how to access nested arrays.
 
 ## Reading JSON
 To read JSON data using Drill, use a [file system storage plugin]({{ site.baseurl }}/docs/connect-to-a-data-source) that defines the JSON format. You can use the `dfs` storage plugin, which includes the definition. 
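
Putting the casting and dfs points above together, a minimal sketch of reading a JSON file through the dfs plugin; the file path and field names are hypothetical:

    $ bin/sqlline -u jdbc:drill:zk=local
    0: jdbc:drill:zk=local> SELECT CAST(t.id AS BIGINT) AS id, t.account.balance AS balance FROM dfs.`/tmp/sample.json` t LIMIT 3;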

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md b/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
index fd4346e..d7bba62 100644
--- a/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
+++ b/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
@@ -18,8 +18,8 @@ MapR provides ODBC drivers for Windows, Mac OS X, and Linux. It is recommended
 that you install the latest version of Apache Drill with the latest version of
 the Drill ODBC driver.
 
-For example, if you have Apache Drill 0.5 and a Drill ODBC driver installed on
-your machine, and then you upgrade to Apache Drill 0.6, do not assume that the
+For example, if you have Apache Drill 0.8 and a Drill ODBC driver installed on
+your machine, and then you upgrade to Apache Drill 1.0, do not assume that the
 Drill ODBC driver installed on your machine will work with the new version of
 Apache Drill. Install the latest available Drill ODBC driver to ensure that
 the two components work together.

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
index d6140aa..cdade1c 100755
--- a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
+++ b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
@@ -142,12 +142,11 @@ In this scenario, you learned how to configure MicroStrategy Analytics Enterpris
 
 ### Certification Links
 
-MicroStrategy announced post certification of Drill 0.6 and 0.7 with MicroStrategy Analytics Enterprise 9.4.1
+* MicroStrategy certifies its analytics platform with Apache Drill: http://ir.microstrategy.com/releasedetail.cfm?releaseid=902795
 
+* http://community.microstrategy.com/t5/Database/TN225724-Post-Certification-of-MapR-Drill-0-6-and-0-7-with/ta-p/225724
 
-http://community.microstrategy.com/t5/Database/TN225724-Post-Certification-of-MapR-Drill-0-6-and-0-7-with/ta-p/225724
+* http://community.microstrategy.com/t5/Release-Notes/TN231092-Certified-Database-and-ODBC-configurations-for/ta-p/231092
 
-http://community.microstrategy.com/t5/Release-Notes/TN231092-Certified-Database-and-ODBC-configurations-for/ta-p/231092
-
-http://community.microstrategy.com/t5/Release-Notes/TN231094-Certified-Database-and-ODBC-configurations-for/ta-p/231094   
+* http://community.microstrategy.com/t5/Release-Notes/TN231094-Certified-Database-and-ODBC-configurations-for/ta-p/231094   
 

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/050-using-drill-explorer-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/050-using-drill-explorer-on-windows.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/050-using-drill-explorer-on-windows.md
index be0389d..3d84978 100644
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/050-using-drill-explorer-on-windows.md
+++ b/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/050-using-drill-explorer-on-windows.md
@@ -39,9 +39,9 @@ Preview again.
   9. Click **Create As**.  
      The _Create As_ dialog displays.
   10. In the **Schema** field, select the schema where you want to save the view.
-      As of 0.4.0, you can only save views to file-based schemas.
+      You can save views only to file-based schemas.
   11. In the **View Name** field, enter a descriptive name for the view.
-      As of 0.4.0, do not include spaces in the view name.
+      Do not include spaces in the view name.
   12. Click **Save**.   
       The status and any error message associated with the view creation displays in
 the Create As dialog. When a view saves successfully, the Save button changes
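
The same result can be reached from SQLLine, since Drill Explorer's view creation is effectively a CREATE VIEW against a writable file-based workspace such as the shipped dfs.tmp; a sketch using the bundled sample data:

    0: jdbc:drill:zk=local> CREATE VIEW dfs.tmp.`employee_names` AS SELECT full_name FROM cp.`employee.json`;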

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
index 37f0d37..c822ada 100644
--- a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
+++ b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
@@ -33,7 +33,7 @@ want to scale your environment.
 
 ### Step 2 : Open the Drill tar file
 
-    tar -xvf apache-drill-0.6.0-incubating.tar
+    tar -xvf apache-drill-1.0.0.tar.gz
 
 ### Step 3: Launch SQLLine, a JDBC application that ships with Drill
 
@@ -352,8 +352,7 @@ exploring data in ways we have never seen before with SQL technologies. The
 community is working on more exciting features around nested data and
 supporting data with changing schemas in upcoming releases.
 
-As an example, a new FLATTEN function is in development (an upcoming feature
-in 0.7). This function can be used to dynamically rationalize semi-structured
+The FLATTEN function can be used to dynamically rationalize semi-structured
 data so you can apply even deeper SQL functionality. Here is a sample query:
 
 #### Get a flattened list of categories for each business
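
Purely as an illustration of the FLATTEN usage described above (not the tutorial's own query, which falls outside this diff excerpt), a hypothetical sketch against the Yelp business file:

    0: jdbc:drill:zk=local> SELECT name, FLATTEN(categories) AS category FROM dfs.`/tmp/yelp_academic_dataset_business.json` LIMIT 20;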

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md b/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
index ab12d86..e64dc6f 100644
--- a/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
+++ b/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
@@ -15,23 +15,6 @@ the following pages in order:
   * [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lesson-3-run-queries-on-complex-data-types)
   * [Summary]({{ site.baseurl }}/docs/summary)
 
-## About Apache Drill
-
-Drill is an Apache open-source SQL query engine for Big Data exploration.
-Drill is designed from the ground up to support high-performance analysis on
-the semi-structured and rapidly evolving data coming from modern Big Data
-applications, while still providing the familiarity and ecosystem of ANSI SQL,
-the industry-standard query language. Drill provides plug-and-play integration
-with existing Apache Hive and Apache HBase deployments.Apache Drill 0.5 offers
-the following key features:
-
-  * Low-latency SQL queries
-  * Dynamic queries on self-describing data in files (such as JSON, Parquet, text) and MapR-DB/HBase tables, without requiring metadata definitions in the Hive metastore.
-  * ANSI SQL
-  * Nested data support
-  * Integration with Apache Hive (queries on Hive tables and views, support for all Hive file formats and Hive UDFs)
-  * BI/SQL tool integration using standard JDBC/ODBC drivers
-
 ## MapR Sandbox with Apache Drill
 
 MapR includes Apache Drill as part of the Hadoop distribution. The MapR
@@ -45,7 +28,7 @@ refer to the [Apache Drill web site](http://drill.apache.org) and
 ]({{ site.baseurl }}/docs)for more
 details.
 
-Note that Hadoop is not a prerequisite for Drill and users can start ramping
+Hadoop is not a prerequisite for Drill and users can start ramping
 up with Drill by running SQL queries directly on the local file system. Refer
 to [Apache Drill in 10 minutes]({{ site.baseurl }}/docs/drill-in-10-minutes) for an introduction to using Drill in local
 (embedded) mode.

http://git-wip-us.apache.org/repos/asf/drill/blob/c38e6a18/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
index e5ea95b..0081fc5 100755
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/010-installing-the-apache-drill-sandbox.md
@@ -113,7 +113,7 @@ VirtualBox adapter.
 9. Click Settings.
 
     ![settings icon]({{ site.baseurl }}/docs/img/settings.png)  
-   The MapR-Sandbox-For-Apache-Drill-0.6.0-r2-4.0.1 - Settings dialog appears.
+   The MapR-Sandbox-For-Apache-Drill - Settings dialog appears.
      
      ![drill query flow]({{ site.baseurl }}/docs/img/vbGenSettings.png)    
 10. Click **OK** to continue.


[09/26] drill git commit: Improved home page text and blog spacing

Posted by ts...@apache.org.
Improved home page text and blog spacing


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/ca346ee0
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/ca346ee0
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/ca346ee0

Branch: refs/heads/gh-pages
Commit: ca346ee0c4867ef298d49cdeeee952c62419d286
Parents: 7cf162a
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 15 00:06:52 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 15 00:06:52 2015 -0700

----------------------------------------------------------------------
 _layouts/post.html                 |  4 +--
 blog/_drafts/drill-1.0-released.md | 48 +++++++++++++++++++++++++++++++++
 css/style.css                      |  2 +-
 index.html                         | 36 +++++++++++--------------
 4 files changed, 67 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/ca346ee0/_layouts/post.html
----------------------------------------------------------------------
diff --git a/_layouts/post.html b/_layouts/post.html
index d98656c..454343a 100644
--- a/_layouts/post.html
+++ b/_layouts/post.html
@@ -19,9 +19,9 @@ layout: default
     {% else %}
       {% assign alias = page.authors[0] %}
       {% assign author = site.data.authors[alias] %}
-      <strong>Author:</strong> {{ author.name }} ({{ author.title}}, {{ author.org}})
+      <strong>Author:</strong> {{ author.name }} ({{ author.title}}, {{ author.org}})<br />
     {% endif %}
-{% unless page.nodate %}<br/><strong>Date:</strong> {{ page.date | date: "%b %-d, %Y" }}{% endunless %}
+{% unless page.nodate %}<strong>Date:</strong> {{ page.date | date: "%b %-d, %Y" }}{% endunless %}
 {% if page.meta %}{{ page.meta }}{% endif %}</p>
   </header>
   <div class="addthis_sharing_toolbox"></div>

http://git-wip-us.apache.org/repos/asf/drill/blob/ca346ee0/blog/_drafts/drill-1.0-released.md
----------------------------------------------------------------------
diff --git a/blog/_drafts/drill-1.0-released.md b/blog/_drafts/drill-1.0-released.md
new file mode 100644
index 0000000..b9cfbb7
--- /dev/null
+++ b/blog/_drafts/drill-1.0-released.md
@@ -0,0 +1,48 @@
+---
+layout: post
+title: "Drill 1.0 Released"
+code: drill-1.0-released
+excerpt: Drill 1.0 is now available, representing a major milestone for the Drill community. Drill is now production-ready, making it easier than ever to explore and analyze data in non-relational datastores.
+authors: ["tshiran", "jnadeau"]
+---
+We embarked on the Drill project in late 2012 with two primary objectives:
+
+* Revolutionize the query engine by enabling low-latency queries on Big Data while getting rid of all the 'overhead' - namely, the need to load data, create and maintain schemas, transform data, etc. We wanted to develop a system that would support the speed and agility at which modern organizations want (or need) to operate in this era.
+* Unlock the data housed in non-relational datastores like NoSQL, Hadoop and cloud storage, making it available not only to developers, but also business users, analysts, data scientists and anyone else who can write a SQL query or use a BI tool. Non-relational datastores are capturing an increasing share of the world's data, and it's incredibly hard to explore and analyze this data.
+
+Today we're happy to announce the availability of Drill 1.0, our first production-ready release. Drill 1.0 includes many performance and reliability enhancements over previous releases.
+
+We would not have been able to reach this milestone without the tremendous effort by all the [committers]({{ site.baseurl }}/team/) and contributors, and we would like to congratulate the entire community on achieving this milestone. While 1.0 is an exciting milestone, it's really just the beginning of the journey. We'll release 1.1 next month, and continue with our 4-6 week release cycle, so you can count on many additional enhancements over the coming months.
+
+We have included the press release issued by the Apache Software Foundation below.
+
+Happy Drilling!  
+Tomer Shiran and Jacques Nadeau
+
+<hr />
+
+# The Apache Software Foundation Announces Apache™ Drill™ 1.0
+
+## Open Source schema-free SQL query engine revolutionizes data exploration and analytics for Apache Hadoop®, NoSQL and Cloud storage 
+
+Forest Hill, MD - 19 May 2015 - The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache™ Drill™ 1.0, the schema-free SQL query engine for Apache Hadoop®, NoSQL and Cloud storage.
+
+"The production-ready 1.0 release represents a significant milestone for the Drill project," said Tomer Shiran, member of the Apache Drill Project Management Committee. "It is the outcome of almost three years of development involving dozens of engineers from numerous companies. Apache Drill's flexibility and ease-of-use have attracted thousands of users, and the enterprise-grade reliability, security and performance in the 1.0 release will further accelerate adoption."
+
+With the exponential growth of data in recent years, and the shift towards rapid application development, new data is increasingly being stored in non-relational, schema-free datastores including Hadoop, NoSQL and Cloud storage. Apache Drill enables analysts, business users, data scientists and developers to explore and analyze this data without sacrificing the flexibility and agility offered by these datastores. Drill processes the data in-situ without requiring users to define schemas or transform data.
+
+"Drill introduces the JSON document model to the world of SQL-based analytics and BI" said Jacques Nadeau, Vice President of Apache Drill. "This enables users to query fixed-schema, evolving-schema and schema-free data stored in a variety of formats and datastores. The architecture of relational query engines and databases is built on the assumption that all data has a simple and static structure that’s known in advance, and this 40-year-old assumption is simply no longer valid. We designed Drill from the ground up to address the new reality.”
+
+Apache Drill's architecture is unique in many ways. It is the only columnar execution engine that supports complex and schema-free data, and the only execution engine that performs data-driven query compilation (and re-compilation, also known as schema discovery) during query execution. These unique capabilities enable Drill to achieve record-breaking performance with the flexibility offered by the JSON document model.
+
+"Drill's columnar execution engine and optimizer take full advantage of Apache Parquet's columnar storage to achieve maximum performance," said Julien Le Dem, Technical Lead of Data Processing at Twitter and Vice President of Apache Parquet. "The Drill team has been a key contributor to the Parquet project, including recent enhancements to Parquet types and vectorization. The Drill team’s involvement in the Parquet community is instrumental in driving the standard."
+
+Availability and Oversight
+Apache Drill 1.0 is available immediately as a free download from http://drill.apache.org/download/. Documentation is available at http://drill.apache.org/docs/. As with all Apache products, Apache Drill software is released under the Apache License v2.0, and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the project's day-to-day operations, including community development and product releases. For ways to become involved with Apache Drill, visit http://drill.apache.org/ and @ApacheDrill on Twitter.
+
+About The Apache Software Foundation (ASF)
+Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 500 individual Members and 4,500 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Bloomberg, Budget Direct, Cerner, Citrix, Cloudera, Comcast, Facebook, Google, Hortonworks, HP, IBM, InMotion Hosting, iSigma, Matt Mullenweg, Microsoft, Pivotal, Produban, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ or follow @TheASF on Twitter.
+
+© The Apache Software Foundation. "Apache", "Apache Drill", "Drill", "Apache Hadoop", "Hadoop", "Apache Parquet", "Parquet", and "ApacheCon", are registered trademarks or trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
+
+\# \# \#

http://git-wip-us.apache.org/repos/asf/drill/blob/ca346ee0/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
index aa8cfb5..cc85454 100755
--- a/css/style.css
+++ b/css/style.css
@@ -316,7 +316,7 @@ a.anchor {
   padding:0;
 }
 #header .scroller .item .tc h2 {
-  font-size: 20px;
+  font-size: 18px;
 }
 
 #header .scroller .item .tc p {

http://git-wip-us.apache.org/repos/asf/drill/blob/ca346ee0/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 7ad2709..a1eee46 100755
--- a/index.html
+++ b/index.html
@@ -49,7 +49,7 @@ $(document).ready(function() {
         <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
       </div>
       <h1 class="main-headline">Apache Drill</h1>
-      <h2 id="sub-headline">Schema-free SQL Query Engine <br class="mobile-break" /> for Hadoop and NoSQL</h2>
+      <h2 id="sub-headline">Schema-free SQL Query Engine <br class="mobile-break" /> for Hadoop, NoSQL and Cloud Storage</h2>
       <a href="{{ site.baseurl }}/download/" class="download-headline btn btn-1 btn-1c"><span>DOWNLOAD NOW</span></a>
     </div>
   </div>
@@ -66,17 +66,17 @@ $(document).ready(function() {
       <tr>
         <td class="ag">
           <h1>Agility</h1>
-          <p>Get faster insights from big data with no IT intervention</p>
+          <p>Get faster insights without the overhead (data loading, schema creation and maintenance, transformations, etc.)</p>
           <span><a href="#agility">LEARN MORE</a></span>
         </td>
         <td class="fl">
           <h1>Flexibility</h1>
-          <p>Analyze semi-structured/nested data coming from NoSQL applications</p>
+          <p>Analyze the multi-structured and nested data in non-relational datastores directly without transforming or restricting the data</p>
           <span><a href="#flexibility">LEARN MORE</a></span>
         </td>
         <td class="fam">
           <h1>Familiarity</h1>
-          <p>Leverage existing SQL skillsets, BI tools and Apache Hive deployments</p>
+          <p>Leverage your existing SQL skillsets and BI tools including Tableau, Qlikview, MicroStrategy, Spotfire, Excel and more</p>
           <span><a href="#familiarity">LEARN MORE</a></span>
         </td>
       </tr>
@@ -85,25 +85,21 @@ $(document).ready(function() {
 </div>
 
 <div class="home_txt mw">
-  <h2>Apache Drill is an open source, low latency SQL query engine for Hadoop and NoSQL.</h2>
-  <p>Modern big data applications such as social, mobile, web and IoT deal with a larger number of users and larger amount of data than the traditional transactional applications. The datasets associated with these applications evolve rapidly, are often self-describing and can include complex types such as JSON and Parquet. Apache Drill is built from the ground up to provide low latency queries natively on such rapidly evolving multi-structured datasets at scale.</p>
+  <p>The 40-year monopoly of the relational database is over. The explosion of data in recent years and the shift towards rapid application development have led to the rise of non-relational datastores including Hadoop, NoSQL and cloud storage. Organizations are increasingly leveraging these systems for new and existing applications due to their flexibility, scalability and price advantages. Drill is built from the ground up to enable business users, analysts, data scientists and developers to explore and analyze the data in these systems while maintaining their unique agility and flexibility advantages.</p>
+
   <a name="agility" class="anchor"></a>
-  <h1>Day-zero analytics &amp; rapid<br>application development</h1>
-  <!-- <h2>Evolution towards Self-Service Data Exploration</h2> -->
-  <img src="images/home-img1.jpg" alt="Day-zero analytics & rapid application development" width="606">
+  <h1>Agility</h1>
+  <img src="images/home-img1.jpg" alt="Agility" width="606" />
 
-  <p>Apache Drill provides direct queries on self-describing and semi-structured data in files (such as JSON, Parquet) and HBase tables without needing to define and maintain schemas in a centralized store such as Hive metastore. This means that  users can explore live data on their own as it arrives versus spending weeks or months on data preparation, modeling, ETL and subsequent schema management.</p>
+  <p>Drill is unlike any other query engine. Traditional query engines demand significant IT intervention before data can be queried. Drill gets rid of all that overhead so that users can just query the raw data in-situ at record speeds. There's no need to load the data, create and maintain schemas, or transform the data before it can be processed. For example, the user can directly query Hadoop directories, MongoDB collections, S3 buckets and more. Drill leverages advanced query compilation and re-compilation techniques to maximize performance without requiring up-front schema knowledge.</p>
+  
   <a name="flexibility" class="anchor"></a>
-  <h1>Purpose-built for semi-structured/nested data</h1>
-  <!-- <h2>A Flexible Data Model for Modern Apps</h2> -->
-
-  <img src="images/home-img2.jpg" alt="Purpose-built for semi-structured/nested data" width="635">
-
-  <p>Drill provides a JSON-like internal data model to represent and process data. The flexibility of this data model allows Drill to query, without flattening, both simple and complex/nested data types as well as constantly changing application-driven schemas commonly seen with Hadoop/NoSQL applications. Drill also provides intuitive extensions to SQL to work with complex/nested data types.</p>
+  <h1>Flexibility</h1>
+  <img src="images/home-img2.jpg" alt="Flexibility" width="635" />
 
+  <p>Drill features a JSON data model that allows it to query, without flattening, both simple and complex/nested data as well as rapidly evolving structures commonly seen with modern applications and non-relational datastores. Drill also provides intuitive extensions to SQL to work with complex/nested data. Drill achieves high performance via an in-memory shredded columnar representation for complex data. In fact, Drill is the only columnar query engine that supports complex data.</p>
   <a name="familiarity" class="anchor"></a>
-  <h1>Compatibility with existing SQL environments<br>and Apache Hive deployments</h1>
-  <br><br>
-  <img src="images/home-img3.jpg" width="380" alt="Compatibility with existing SQL environments and Apache Hive deployments">
-  <p>With Drill, businesses can minimize switching costs and learning curves for users with the familiar ANSI SQL syntax. Analysts can continue to use familiar BI/analytics tools that assume and auto-generate ANSI SQL code to interact with Hadoop data by leveraging the standard JDBC/ODBC interfaces that Drill exposes. Users can also plug-and-play with Hive environments to enable ad-hoc low latency queries on existing Hive tables and reuse Hive's metadata, hundreds of file formats and UDFs out of the box.</p>
+  <h1>Familiarity</h1>
+  <img src="images/home-img3.jpg" alt="familiarity" width="380" />
+  <p>Drill supports standard SQL. Business users, analysts and data scientists can use standard BI/analytics tools such as Tableau, QlikView, MicroStrategy, Spotfire, SAS and Excel to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations based on data in their non-relational datastores. Users can also plug-and-play with Hive environments to enable ad-hoc low latency queries on existing Hive tables and reuse Hive's metadata, hundreds of file formats and UDFs out of the box.</p>
 </div>


[25/26] drill git commit: Website fixes

Posted by ts...@apache.org.
Website fixes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/58711e5b
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/58711e5b
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/58711e5b

Branch: refs/heads/gh-pages
Commit: 58711e5b3fa11928a511669e801c195977ac0b21
Parents: 55a3ebb
Author: Tomer Shiran <ts...@gmail.com>
Authored: Sat May 16 23:43:00 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Sat May 16 23:43:00 2015 -0700

----------------------------------------------------------------------
 _sass/_site-main.scss                                        | 8 ++++++++
 blog.html                                                    | 5 ++++-
 blog/_drafts/drill-1.0-released.md                           | 2 +-
 blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md | 5 +++--
 4 files changed, 16 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/58711e5b/_sass/_site-main.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-main.scss b/_sass/_site-main.scss
index 8c69c50..3660298 100644
--- a/_sass/_site-main.scss
+++ b/_sass/_site-main.scss
@@ -892,3 +892,11 @@ div.home-row div pre span.code-underline{
   color:#000;
   text-decoration: underline;
 }
+
+.int_text p a.post-link{
+  font-size:22px;
+}
+
+.int_text p span.post-date{
+  font-style: italic;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/58711e5b/blog.html
----------------------------------------------------------------------
diff --git a/blog.html b/blog.html
index 977a085..779c182 100644
--- a/blog.html
+++ b/blog.html
@@ -3,7 +3,10 @@ layout: page
 title: Blog
 ---
 {% for post in site.categories.blog %}<!-- previously: site.posts -->
-<p><a class="post-link" href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a>{% if post.date %} ({{ post.date | date: "%b %-d, %Y" }}){% endif %}{% if post.excerpt %}<br/>{{ post.excerpt }}{% endif %}</p>
+<p><a class="post-link" href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a><br/>
+<span class="post-date">Posted on{% if post.date %} {{ post.date | date: "%b %-d, %Y" }}{% endif %}
+{% if post.authors %}by {% include authors.html %}{% endif %}</span>
+{% if post.excerpt %}<br/>{{ post.excerpt }}{% endif %}</p>
 {% endfor %}
 <p class="info">Want to contribute a blog post? Check out the source for some of the <a href="https://github.com/apache/drill/tree/gh-pages/blog/_posts">existing posts</a> to see how it's done. When you're ready, email your Markdown file to <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>.</p>
 <h1>Third-Party Articles</h1>

http://git-wip-us.apache.org/repos/asf/drill/blob/58711e5b/blog/_drafts/drill-1.0-released.md
----------------------------------------------------------------------
diff --git a/blog/_drafts/drill-1.0-released.md b/blog/_drafts/drill-1.0-released.md
index c9fc5d8..e0c721a 100644
--- a/blog/_drafts/drill-1.0-released.md
+++ b/blog/_drafts/drill-1.0-released.md
@@ -2,7 +2,7 @@
 layout: post
 title: "Drill 1.0 Released"
 code: drill-1.0-released
-excerpt: Drill 1.0 is now available, representing a major milestone for the Drill community. Drill is now production-ready, making it easier than ever to explore and analyze data in non-relational datastores.
+excerpt: Drill 1.0 has been released, representing a major milestone for the Drill community. Drill is now production-ready, making it easier than ever to explore and analyze data in non-relational datastores.
 authors: ["tshiran", "jnadeau"]
 ---
 We embarked on the Drill project in late 2012 with two primary objectives:

http://git-wip-us.apache.org/repos/asf/drill/blob/58711e5b/blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md
----------------------------------------------------------------------
diff --git a/blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md b/blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md
index 9e1726d..f1e95d5 100644
--- a/blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md
+++ b/blog/_posts/2014-12-11-apache-drill-qa-panelist-spotlight.md
@@ -3,9 +3,10 @@ layout: post
 title: "Apache Drill Q&A Panelist Spotlight"
 code: apache-drill-qa-panelist-spotlight
 excerpt: Join us on Twitter for a live Q&A on Wednesday, December 17.
+authors: ["tshiran"]
 nodate: true
 ---
-<script type="text/javascript" src="https://addthisevent.com/libs/1.5.8/ate.min.js"></script>
+<script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
 <a href="{{ site.baseurl }}/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
@@ -15,7 +16,7 @@ nodate: true
     <span class="_description">Join us on Twitter for a one-hour, live SQL-on-Hadoop Q&A. Use the **hashtag #DrillQA** so the panelists can engage with your questions and comments. Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</span>
     <span class="_location">Twitter: #DrillQA</span>
     <span class="_organizer">Tomer Shiran</span>
-    <span class="_organizer_email">tshiran@apache.org</span>
+    <span class="_organizer_email">tshiran\@apache.org</span>
     <span class="_all_day_event">false</span>
     <span class="_date_format">MM-DD-YYYY</span>
 </a>


[05/26] drill git commit: decimal data type disabled

Posted by ts...@apache.org.
decimal data type disabled


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/f58d360e
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/f58d360e
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/f58d360e

Branch: refs/heads/gh-pages
Commit: f58d360e43331825ea7e9885c167078e2a1c4ff1
Parents: c38e6a1
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 14 14:40:08 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 14 14:40:08 2015 -0700

----------------------------------------------------------------------
 .../020-hive-to-drill-data-type-mapping.md      | 20 ++++++++++++++++----
 .../040-parquet-format.md                       |  4 +++-
 .../050-json-data-model.md                      |  8 ++++----
 _docs/getting-started/020-why-drill.md          |  2 +-
 .../040-tableau-examples.md                     |  2 +-
 .../060-querying-the-information-schema.md      |  1 +
 .../005-querying-complex-data-introduction.md   |  2 +-
 .../data-types/010-supported-data-types.md      | 15 ++++++++++++++-
 .../030-handling-different-data-types.md        |  9 ++++-----
 .../050-aggregate-and-aggregate-statistical.md  |  2 ++
 10 files changed, 47 insertions(+), 18 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
index ed66a7a..20a921f 100644
--- a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
+++ b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
@@ -11,8 +11,8 @@ Using Drill you can read tables created in Hive that use data types compatible w
 | BIGINT             | BIGINT    | 8-byte signed integer                                      |
 | BOOLEAN            | BOOLEAN   | TRUE (1) or FALSE (0)                                      |
 | CHAR               | CHAR      | Character string, fixed-length max 255                     |
-| DATE               | DATE      | Years months and days in the form in the form YYYY-­MM-­DD |
-| DECIMAL            | DECIMAL   | 38-digit precision                                         |
+| DATE               | DATE      | Years, months, and days in the form YYYY-MM-DD             |
+| DECIMAL*           | DECIMAL   | 38-digit precision                                         |
 | FLOAT              | FLOAT     | 4-byte single precision floating point number              |
 | DOUBLE             | DOUBLE    | 8-byte double precision floating point number              |
 | INT or INTEGER     | INT       | 4-byte signed integer                                      |
@@ -25,6 +25,8 @@ Using Drill you can read tables created in Hive that use data types compatible w
 | None               | STRING    | Binary string (16)                                         |
 | VARCHAR            | VARCHAR   | Character string variable length                           |
 
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`.
+
 ## Unsupported Types
 Drill does not support the following Hive types:
 
@@ -41,8 +43,14 @@ This example demonstrates the mapping of Hive data types to Drill data types. Us
 
      8223372036854775807,true,3.5,-1231.4,3.14,42,"SomeText",2015-03-25,2015-03-25 01:23:15 
 
-The example assumes that the CSV resides on the MapR file system (MapRFS) in the Drill sandbox: `/mapr/demo.mapr.com/data/`
- 
+### Example Assumptions
+The example makes the following assumptions:
+
+* The CSV resides on the MapR file system (MapRFS) in the Drill sandbox: `/mapr/demo.mapr.com/data/`  
+* You [enabled the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) in Drill.  
+
+### Define an External Table in Hive
+
 In Hive, you define an external table using the following query:
 
     hive> CREATE EXTERNAL TABLE types_demo ( 
@@ -59,6 +67,8 @@ In Hive, you define an external table using the following query:
           LINES TERMINATED BY '\n' 
           STORED AS TEXTFILE LOCATION '/mapr/demo.mapr.com/data/mytypes.csv';
 
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`.
+
 You check that Hive mapped the data from the CSV to the typed values as expected:
 
     hive> SELECT * FROM types_demo;
@@ -66,6 +76,8 @@ You check that Hive mapped the data from the CSV to the typed values as as expec
     8223372036854775807	true	3.5	-1231.4	3.14	42	"SomeText"	2015-03-25   2015-03-25 01:23:15
     Time taken: 0.524 seconds, Fetched: 1 row(s)
 
+### Connect Drill to Hive and Query the Data
+
 In Drill, you use the Hive storage plugin that has the following definition.
 
 	{

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/data-sources-and-file-formats/040-parquet-format.md
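As a side note to the Hive mapping example in the hunk above, a minimal sketch of the Drill side follows, assuming the DECIMAL type has been enabled and the Hive storage plugin is registered under the name `hive`; the plugin name is an assumption, not taken from the commit.

    -- Hypothetical session: enable DECIMAL so the Hive DECIMAL column maps cleanly
    ALTER SESSION SET `planner.enable_decimal_data_type` = true;

    -- Query the external table defined in Hive through the hive storage plugin
    SELECT * FROM hive.`types_demo`;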
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/040-parquet-format.md b/_docs/data-sources-and-file-formats/040-parquet-format.md
index 55c4e68..cd14359 100644
--- a/_docs/data-sources-and-file-formats/040-parquet-format.md
+++ b/_docs/data-sources-and-file-formats/040-parquet-format.md
@@ -144,11 +144,13 @@ Parquet also supports logical types, fully described on the [Apache Parquet site
 | None                         |                                                                                | UINT_16              | 16 bits, unsigned                                                                                                                          |
 | None                         |                                                                                | UINT_32              | 32 bits, unsigned                                                                                                                          |
 | None                         |                                                                                | UINT_64              | 64 bits, unsigned                                                                                                                          |
-| DECIMAL                      | 38-digit precision                                                             | DECIMAL              | Arbitrary-precision signed decimal numbers of the form unscaledValue * 10^(-scale)                                                         |
+| DECIMAL*                     | 38-digit precision                                                             | DECIMAL              | Arbitrary-precision signed decimal numbers of the form unscaledValue * 10^(-scale)                                                         |
 | TIME                         | Hours, minutes, seconds, milliseconds; 24-hour basis                           | TIME_MILLIS          | Logical time, not including the date. Annotates int32. Number of milliseconds after midnight.                                              |
 | TIMESTAMP                    | Year, month, day, and seconds                                                  | TIMESTAMP_MILLIS     | Logical date and time. Annotates an int64 that stores the number of milliseconds from the Unix epoch, 00:00:00.000 on 1 January 1970, UTC. |
 | INTERVALDAY and INTERVALYEAR | Integer fields representing a period of time depending on the type of interval | INTERVAL             | An interval of time. Annotates a fixed_len_byte_array of length 12. Months, days, and ms in unsigned little-endian format.                 |
 
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`.
+
 ## Data Description Language Support
 Parquet supports the following data description languages:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 28ab921..82b8a3a 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -40,11 +40,11 @@ The following table shows SQL-JSON data type mapping:
 By default, Drill does not support JSON lists of different types. For example, JSON does not enforce types or distinguish between integers and floating point values. When reading numerical values from a JSON file, Drill distinguishes integers from floating point numbers by the presence or lack of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. You use the following options to read JSON lists of different types:
 
 * `store.json.read_numbers_as_double`  
-  Reads numbers from JSON files with or without a decimal point as DOUBLE.
-* `store.json.all_text_mode`
-  Reads all data from JSON files as VARCHAR.
+  Reads numbers from JSON files with or without a decimal point as DOUBLE. You need to cast the DOUBLE values to other numerical data types, such as INTEGER or FLOAT, only if you cannot use the numbers as DOUBLE.
+* `store.json.all_text_mode`  
+  Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR to numerical data types, such as DOUBLE or INTEGER.
 
-The following session/system options for `store.json.all_text_mode` and `store.json.read_numbers_as_double` options is false. Enable the latter if the JSON contains integers and floating point numbers. Using either option prevents schema errors, but using `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`: You do not have to cast every number from VARCHAR to DOUBLE or BIGINT when you query the JSON file.
+The default setting of the `store.json.all_text_mode` and `store.json.read_numbers_as_double` options is false. Using either option prevents schema errors, but `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`: it typically involves less explicit casting because you can often use the numerical data as is (DOUBLE).
 
 ### Handling Type Differences
 Set the `store.json.read_numbers_as_double` property to true.

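The hunk above describes the two JSON reading options in prose; a minimal sketch of the `store.json.read_numbers_as_double` path follows. The file path and the `qty` column are hypothetical.

    -- Read all JSON numbers as DOUBLE to avoid schema change errors
    ALTER SESSION SET `store.json.read_numbers_as_double` = true;

    -- Numbers arrive as DOUBLE; cast only where another type is required
    SELECT CAST(t.qty AS INTEGER) AS qty
    FROM dfs.`/tmp/sample.json` t;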
http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/getting-started/020-why-drill.md
----------------------------------------------------------------------
diff --git a/_docs/getting-started/020-why-drill.md b/_docs/getting-started/020-why-drill.md
index f7f4495..d00d882 100644
--- a/_docs/getting-started/020-why-drill.md
+++ b/_docs/getting-started/020-why-drill.md
@@ -39,7 +39,7 @@ Drill's schema-free JSON model allows you to query complex, semi-structured data
 
 
 ## 4. Real SQL -- not "SQL-like"
-Drill supports the standard SQL:2003 syntax. No need to learn a new "SQL-like" language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVALDAY/INTERVALYEAR, TIMESTAMP, VARCHAR and DECIMAL, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill "as is":
+Drill supports the standard SQL:2003 syntax. No need to learn a new "SQL-like" language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVALDAY/INTERVALYEAR, TIMESTAMP, and VARCHAR, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill "as is":
 
 ### TPC-H query 4
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/040-tableau-examples.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/040-tableau-examples.md b/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/040-tableau-examples.md
index 85e8c88..9b73096 100644
--- a/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/040-tableau-examples.md
+++ b/_docs/odbc-jdbc-interfaces/using-odbc-on-windows/040-tableau-examples.md
@@ -14,7 +14,7 @@ This section includes the following examples:
 
 The steps and results of these examples assume pre-configured schemas and
 source data. You configure schemas as storage plugin instances on the Storage
-tab of the [Drill Web UI]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox#storage-plugin-overview).
+tab of the [Drill Web UI]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox#storage-plugin-overview). Also, the examples assume you [enabled the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) in Drill.  
 
 ## Example: Connect to a Hive Table in Tableau
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/query-data/060-querying-the-information-schema.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/060-querying-the-information-schema.md b/_docs/query-data/060-querying-the-information-schema.md
index 590fad2..fddb194 100644
--- a/_docs/query-data/060-querying-the-information-schema.md
+++ b/_docs/query-data/060-querying-the-information-schema.md
@@ -107,3 +107,4 @@ of those columns:
     | OrderTotal  | Decimal    |
     +-------------+------------+
 
+In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. [Enable the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) if performance is not an issue.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md b/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
index 2f59219..a6a8c84 100644
--- a/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
+++ b/_docs/query-data/querying-complex-data/005-querying-complex-data-introduction.md
@@ -43,7 +43,7 @@ The examples in this section operate on JSON data files. In order to write
 your own queries, you need to be aware of the basic data types in these files:
 
   * string (all data inside double quotes), such as `"0001"` or `"Cake"`
-  * number: integers, decimals, and floats, such as `0.55` or `10`
+  * number: integers and floats, such as `0.55` or `10`
   * null values
   * boolean values: true, false
 

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 5641dc4..c588a8b 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -22,10 +22,23 @@ Drill reads from and writes to data sources having a wide variety of types. Dril
 | CHARACTER VARYING, CHARACTER, CHAR, or VARCHAR*** | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
 
 
-\* In this release, the NUMERIC data type is an alias for the DECIMAL data type.  
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.
 \*\* Not currently supported.  
 \*\*\* Currently, Drill supports only variable-length strings.  
 
+## Enabling the DECIMAL Type
+
+To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`. Enable the DECIMAL data type if performance is not an issue.
+
+     ALTER SYSTEM SET `planner.enable_decimal_data_type` = true;
+
+     +------------+------------+
+     |     ok     |  summary   |
+     +------------+------------+
+     | true       | planner.enable_decimal_data_type updated. |
+     +------------+------------+
+     1 row selected (1.191 seconds)
+
 ## Casting and Converting Data Types
 
 In Drill, you cast or convert data to the required type for moving data from one data source to another or to make the data readable.

http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/sql-reference/data-types/030-handling-different-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/030-handling-different-data-types.md b/_docs/sql-reference/data-types/030-handling-different-data-types.md
index 64121dd..110d32d 100644
--- a/_docs/sql-reference/data-types/030-handling-different-data-types.md
+++ b/_docs/sql-reference/data-types/030-handling-different-data-types.md
@@ -11,8 +11,7 @@ In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as pr
 ## Handling JSON and Parquet Data
 Complex and nested data structures in JSON and Parquet files are of map and array types.
 
-A map is a set of name/value pairs. A value in a map can be a scalar type, such as string or int, or a complex type, such as an array or another map.
-An array is a repeated list of values. A value in an array can be a scalar type, such as string or int, or an array can be a complex type, such as a map or another array.
+A map is a set of name/value pairs. A value in a map can be a scalar type, such as string or int, or a complex type, such as an array or another map. An array is a repeated list of values. A value in an array can be a scalar type, such as string or int, or an array can be a complex type, such as a map or another array.
 
 Drill reads/writes maps and arrays from/to JSON and Parquet files. In Drill, you do not cast a map or array to another type.
 
@@ -58,15 +57,15 @@ The following example shows a JSON array having complex type values:
 
 ## Reading numbers of different types from JSON
 
-The `store.json.read_numbers_as_double` and `store.json.all_text_mode` system/session options control how Drill implicitly casts JSON data. By default, when reading numerical values from a JSON file, Drill implicitly casts a number to the DOUBLE or BIGINT type depending on the presence or absence a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. By default, Drill reads numbers without decimal point as BIGINT values by default. The range of BIGINT is -9223372036854775808 to 9223372036854775807. A BIGINT result outside this range produces an error. 
+The `store.json.read_numbers_as_double` and `store.json.all_text_mode` system/session options control how Drill implicitly casts JSON data. By default, when reading numerical values from a JSON file, Drill implicitly casts a number to the DOUBLE or BIGINT type depending on the presence or absence of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. By default, Drill reads numbers without decimal point as BIGINT values. The range of BIGINT is -9223372036854775808 to 9223372036854775807. A BIGINT result outside this range produces an error. 
 
-To prevent Drill from attempting to read such data, set `store.json.read_numbers_as_double` or `store.json.all_text_mode` to true. Using `store.json.all_text_mode` set to true, Drill implicitly casts JSON data to VARCHAR. You need to cast the VARCHAR values to other types you want the returned data to represent. Using `store.json.read_numbers_as_double` set to true, Drill casts numbers in the JSON file to DOUBLE. You need to cast the DOUBLE to any other types of numbers, such as FLOAT and INTEGER, you want the returned data to represent. Using `store.json.read_numbers_as_double` typically involves less casting on your part than using `store.json.all_text_mode`.
+To prevent Drill from attempting to read such data, set `store.json.read_numbers_as_double` or `store.json.all_text_mode` to true. Using `store.json.all_text_mode` set to true, Drill implicitly casts JSON data to VARCHAR. You need to cast the VARCHAR values to other types. Using `store.json.read_numbers_as_double` set to true, Drill implicitly casts numbers in the JSON file to DOUBLE. You need to cast the DOUBLE type to other types, such as FLOAT and INTEGER. Using `store.json.read_numbers_as_double` typically involves less explicit casting than using `store.json.all_text_mode` because you can often use the numerical data as is (DOUBLE).
 
 ## Guidelines for Using Float and Double
 
 FLOAT and DOUBLE yield approximate results. These are variable-precision numeric types. Drill does not cast/convert all values precisely to the internal format, but instead stores approximations. Slight differences can occur in the value stored and retrieved. The following guidelines are recommended:
 
-* For conversions involving monetary calculations, for example, that require precise results use the decimal type instead of float or double.
+* For conversions that require precise results, such as monetary calculations, use the DECIMAL type instead of FLOAT or DOUBLE. In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. [Enable the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) if performance is not an issue.
 * For complex calculations or mission-critical applications, especially those involving infinity and underflow situations, carefully consider the limitations of type casting that involves FLOAT or DOUBLE.
 * Equality comparisons between floating-point values can produce unexpected results.
 

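The guidelines in the hunk above note that `store.json.all_text_mode` reads every value as VARCHAR, so numeric columns need an explicit cast; a minimal sketch follows, with a hypothetical file path and `price` column.

    -- Read everything as VARCHAR, then cast explicitly
    ALTER SESSION SET `store.json.all_text_mode` = true;

    SELECT CAST(t.price AS DOUBLE) AS price
    FROM dfs.`/tmp/prices.json` t;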
http://git-wip-us.apache.org/repos/asf/drill/blob/f58d360e/_docs/sql-reference/sql-functions/050-aggregate-and-aggregate-statistical.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/050-aggregate-and-aggregate-statistical.md b/_docs/sql-reference/sql-functions/050-aggregate-and-aggregate-statistical.md
index d6e66de..21e6e40 100644
--- a/_docs/sql-reference/sql-functions/050-aggregate-and-aggregate-statistical.md
+++ b/_docs/sql-reference/sql-functions/050-aggregate-and-aggregate-statistical.md
@@ -17,6 +17,8 @@ MAX(expression)| BINARY, DECIMAL, VARCHAR, DATE, TIME, or TIMESTAMP| same as arg
 MIN(expression)| BINARY, DECIMAL, VARCHAR, DATE, TIME, or TIMESTAMP| same as argument type
 SUM(expression)| SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DECIMAL, INTERVALDAY, or INTERVALYEAR| BIGINT for SMALLINT or INTEGER arguments, DECIMAL for BIGINT arguments, DOUBLE for floating-point arguments, otherwise the same as the argument data type
 
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. [Enable the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) if performance is not an issue.
+
 MIN, MAX, COUNT, AVG, and SUM accept ALL and DISTINCT keywords. The default is ALL.
 
 ### Examples

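The hunk above ends just before the doc's Examples heading; a small illustrative aggregate query is sketched here, with a hypothetical file and column names, to show the ALL/DISTINCT behavior the note describes.

    -- DISTINCT and ALL (the default) apply to the aggregate functions listed above
    SELECT COUNT(DISTINCT region) AS regions,
           SUM(sales)             AS total_sales,
           AVG(sales)             AS avg_sales
    FROM dfs.`/tmp/sales.parquet`;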

[20/26] drill git commit: last-min features

Posted by ts...@apache.org.
last-min features


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/6ea0c7a8
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/6ea0c7a8
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/6ea0c7a8

Branch: refs/heads/gh-pages
Commit: 6ea0c7a88a8f1852c9fd9f363f21944a578d5998
Parents: d8c9599
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Sat May 16 19:17:13 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Sat May 16 19:17:13 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  97 ++++++-
 _docs/074-query-audit-logging.md                |   5 +
 _docs/075-getting-query-information.md          |  55 ++++
 .../020-configuring-drill-memory.md             |   4 +-
 .../010-configuration-options-introduction.md   |   2 +-
 .../020-start-up-options.md                     |   3 +-
 .../030-planning-and-exececution-options.md     |   5 -
 .../035-plugin-configuration-introduction.md    |   2 +-
 _docs/img/drill-bin.png                         | Bin 85005 -> 51164 bytes
 _docs/img/drill-directory.png                   | Bin 87661 -> 46151 bytes
 _docs/img/sqlline1.png                          | Bin 23074 -> 6633 bytes
 _docs/install/010-install-drill-introduction.md |  10 +-
 .../install/045-embedded-mode-prerequisites.md  |  10 +-
 .../047-installing-drill-on-the-cluster.md      |   7 +-
 .../050-starting-drill-in-distributed mode.md   |  83 +++---
 .../010-embedded-mode-prerequisites.md          |   9 +-
 ...20-installing-drill-on-linux-and-mac-os-x.md |   8 +-
 .../030-starting-drill-on-linux-and-mac-os-x.md |  25 +-
 .../040-installing-drill-on-windows.md          |   6 +-
 .../050-starting-drill-on-windows.md            |   8 +-
 .../010-interfaces-introduction.md              |   2 +-
 _docs/tutorials/020-drill-in-10-minutes.md      | 137 +++++----
 .../030-analyzing-the-yelp-academic-dataset.md  | 284 ++++++++++---------
 .../050-analyzing-highly-dynamic-datasets.md    |  53 ++--
 24 files changed, 477 insertions(+), 338 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index c34dc8f..2bf92ef 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -2937,6 +2937,23 @@
             "title": "Functions for Handling Nulls", 
             "url": "/docs/functions-for-handling-nulls/"
         }, 
+        "Getting Query Information": {
+            "breadcrumbs": [
+                {
+                    "title": "Query Audit Logging", 
+                    "url": "/docs/query-audit-logging/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "SQL Reference", 
+            "next_url": "/docs/sql-reference/", 
+            "parent": "Query Audit Logging", 
+            "previous_title": "Query Audit Logging", 
+            "previous_url": "/docs/query-audit-logging/", 
+            "relative_path": "_docs/075-getting-query-information.md", 
+            "title": "Getting Query Information", 
+            "url": "/docs/getting-query-information/"
+        }, 
         "Getting Started": {
             "breadcrumbs": [], 
             "children": [
@@ -4067,8 +4084,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "SQL Reference", 
-            "next_url": "/docs/sql-reference/", 
+            "next_title": "Query Audit Logging", 
+            "next_url": "/docs/query-audit-logging/", 
             "parent": "Query Data", 
             "previous_title": "Querying System Tables", 
             "previous_url": "/docs/querying-system-tables/", 
@@ -4876,6 +4893,36 @@
             "title": "Project Bylaws", 
             "url": "/docs/project-bylaws/"
         }, 
+        "Query Audit Logging": {
+            "breadcrumbs": [], 
+            "children": [
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Query Audit Logging", 
+                            "url": "/docs/query-audit-logging/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "SQL Reference", 
+                    "next_url": "/docs/sql-reference/", 
+                    "parent": "Query Audit Logging", 
+                    "previous_title": "Query Audit Logging", 
+                    "previous_url": "/docs/query-audit-logging/", 
+                    "relative_path": "_docs/075-getting-query-information.md", 
+                    "title": "Getting Query Information", 
+                    "url": "/docs/getting-query-information/"
+                }
+            ], 
+            "next_title": "Getting Query Information", 
+            "next_url": "/docs/getting-query-information/", 
+            "parent": "", 
+            "previous_title": "Monitoring and Canceling Queries in the Drill Web UI", 
+            "previous_url": "/docs/monitoring-and-canceling-queries-in-the-drill-web-ui/", 
+            "relative_path": "_docs/074-query-audit-logging.md", 
+            "title": "Query Audit Logging", 
+            "url": "/docs/query-audit-logging/"
+        }, 
         "Query Data": {
             "breadcrumbs": [], 
             "children": [
@@ -5239,8 +5286,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "SQL Reference", 
-                    "next_url": "/docs/sql-reference/", 
+                    "next_title": "Query Audit Logging", 
+                    "next_url": "/docs/query-audit-logging/", 
                     "parent": "Query Data", 
                     "previous_title": "Querying System Tables", 
                     "previous_url": "/docs/querying-system-tables/", 
@@ -7365,8 +7412,8 @@
             "next_title": "SQL Reference Introduction", 
             "next_url": "/docs/sql-reference-introduction/", 
             "parent": "", 
-            "previous_title": "Monitoring and Canceling Queries in the Drill Web UI", 
-            "previous_url": "/docs/monitoring-and-canceling-queries-in-the-drill-web-ui/", 
+            "previous_title": "Getting Query Information", 
+            "previous_url": "/docs/getting-query-information/", 
             "relative_path": "_docs/080-sql-reference.md", 
             "title": "SQL Reference", 
             "url": "/docs/sql-reference/"
@@ -10776,8 +10823,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "SQL Reference", 
-                    "next_url": "/docs/sql-reference/", 
+                    "next_title": "Query Audit Logging", 
+                    "next_url": "/docs/query-audit-logging/", 
                     "parent": "Query Data", 
                     "previous_title": "Querying System Tables", 
                     "previous_url": "/docs/querying-system-tables/", 
@@ -10801,6 +10848,36 @@
                 {
                     "breadcrumbs": [
                         {
+                            "title": "Query Audit Logging", 
+                            "url": "/docs/query-audit-logging/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "SQL Reference", 
+                    "next_url": "/docs/sql-reference/", 
+                    "parent": "Query Audit Logging", 
+                    "previous_title": "Query Audit Logging", 
+                    "previous_url": "/docs/query-audit-logging/", 
+                    "relative_path": "_docs/075-getting-query-information.md", 
+                    "title": "Getting Query Information", 
+                    "url": "/docs/getting-query-information/"
+                }
+            ], 
+            "next_title": "Getting Query Information", 
+            "next_url": "/docs/getting-query-information/", 
+            "parent": "", 
+            "previous_title": "Monitoring and Canceling Queries in the Drill Web UI", 
+            "previous_url": "/docs/monitoring-and-canceling-queries-in-the-drill-web-ui/", 
+            "relative_path": "_docs/074-query-audit-logging.md", 
+            "title": "Query Audit Logging", 
+            "url": "/docs/query-audit-logging/"
+        }, 
+        {
+            "breadcrumbs": [], 
+            "children": [
+                {
+                    "breadcrumbs": [
+                        {
                             "title": "SQL Reference", 
                             "url": "/docs/sql-reference/"
                         }
@@ -11582,8 +11659,8 @@
             "next_title": "SQL Reference Introduction", 
             "next_url": "/docs/sql-reference-introduction/", 
             "parent": "", 
-            "previous_title": "Monitoring and Canceling Queries in the Drill Web UI", 
-            "previous_url": "/docs/monitoring-and-canceling-queries-in-the-drill-web-ui/", 
+            "previous_title": "Getting Query Information", 
+            "previous_url": "/docs/getting-query-information/", 
             "relative_path": "_docs/080-sql-reference.md", 
             "title": "SQL Reference", 
             "url": "/docs/sql-reference/"

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/074-query-audit-logging.md
----------------------------------------------------------------------
diff --git a/_docs/074-query-audit-logging.md b/_docs/074-query-audit-logging.md
new file mode 100644
index 0000000..f305725
--- /dev/null
+++ b/_docs/074-query-audit-logging.md
@@ -0,0 +1,5 @@
+---
+title: "Query Audit Logging"
+---
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/075-getting-query-information.md
----------------------------------------------------------------------
diff --git a/_docs/075-getting-query-information.md b/_docs/075-getting-query-information.md
new file mode 100644
index 0000000..51c7fe4
--- /dev/null
+++ b/_docs/075-getting-query-information.md
@@ -0,0 +1,55 @@
+---
+title: "Getting Query Information"
+parent: "Query Audit Logging"
+---
+The query log provides audit logging for the queries executed by the various Drillbits in the cluster. To access the query log, open the `sqlline_queries.json` file in the `log` directory of the Drill installation. The log records important information about the queries executed on the Drillbit where Drill runs, including the query text, start time, end time, user, status, schema, and query id.
+
+You can query the `sqlline_queries.json` file using Drill to get audit logging information.
+
+## Checking the Most Recent Queries
+
+For example, to check the most recent queries, query the log using this command:
+
+    SELECT * FROM dfs.`default`.`/Users/drill-user/apache-drill-1.0.0/log/sqlline_queries.json` t ORDER BY `start` LIMIT 5;
+
+    +----------------+------------+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+----------------+------------+
+    |     finish     |  outcome   |                queryId                |                                                            queryText                                                                                                                                         | schema  |     start      |  username  |
+    +----------------+------------+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+----------------+------------+
+    | 1431752662216  | FAILED     |  2aa9302b-bf6f-a378-d66e-151834e87b16 | select * from dfs.`default`.`/Users/nrentachintala/Downloads/testgoogle.json` t limit 1                                                                                                                      |         | 1431752660376  |  anonymous |
+    | 1431752769079  | COMPLETED  |  2aa92fc1-b722-c27a-10f7-57a1cf0dd366 | SELECT KVGEN(checkin_info) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` LIMIT 2                                                                               |         | 1431752765303  |  anonymous |
+    | 1431752786341  | COMPLETED  |  2aa92faf-2103-047b-9761-32eedefba1e6 | SELECT FLATTEN(KVGEN(checkin_info)) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` LIMIT 20                                                                     |         | 1431752784532  |  anonymous |
+    | 1431752809084  | FAILED     |  2aa92f97-61d3-1e9a-97b0-c754f5b568d5 | SELECT SUM(checkintbl.checkins.`value`) AS TotalCheckins FROM (SELECT FLATTEN(KVGEN(checkin_info)) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` ) checkintbl  |         | 1431752808923  |  anonymous |
+    | 1431752853992  | COMPLETED  |  2aa92f87-0250-c6ac-3700-9ae1f98435b8 | SELECT SUM(checkintbl.checkins.`value`) AS TotalCheckins FROM (SELECT FLATTEN(KVGEN(checkin_info)) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` ) checkintbl  |         | 1431752824947  |  anonymous |
+    +----------------+------------+---------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+----------------+------------+
+    5 rows selected (0.532 seconds)
+
+{% include startnote.html %}This document aligns Drill output for example purposes. Drill output is not aligned in this case.{% include endnote.html %}
+
+## Checking Drillbit Traffic
+
+To check the total number of queries executed since the session started on the Drillbit, use the following command:
+
+    SELECT COUNT(*) FROM dfs.`default`.`/Users/drill-user/apache-drill-1.0.0/log/sqlline_queries.json`;
+
+    +---------+
+    | EXPR$0  |
+    +---------+
+    | 32      |
+    +---------+
+    1 row selected (0.144 seconds)
+
+## Getting Query Success Statistics
+
+To get the total number of successful and failed executions, run the following command:
+
+    SELECT outcome, COUNT(*) FROM dfs.`default`.`/Users/drill-user/apache-drill-1.0.0/log/sqlline_queries.json` GROUP BY outcome;
+
+    +------------+---------+
+    |  outcome   | EXPR$1  |
+    +------------+---------+
+    | COMPLETED  | 18      |
+    | FAILED     | 14      |
+    +------------+---------+
+    2 rows selected (0.219 seconds)
+
+Note that the `queryId` column in the audit log can be correlated with the query profiles for troubleshooting and diagnostic purposes.
\ No newline at end of file
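
Because the `start` and `finish` values in the log are epoch timestamps in milliseconds, you can also use the audit log to find long-running queries. The following is a minimal sketch that reuses the sample log path from the examples above; adjust the path to your installation:

    SELECT username, outcome,
           (`finish` - `start`) / 1000.0 AS duration_sec
    FROM dfs.`default`.`/Users/drill-user/apache-drill-1.0.0/log/sqlline_queries.json`
    ORDER BY (`finish` - `start`) DESC
    LIMIT 5;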

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/configure-drill/020-configuring-drill-memory.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/020-configuring-drill-memory.md b/_docs/configure-drill/020-configuring-drill-memory.md
index 81487d1..5948bcc 100644
--- a/_docs/configure-drill/020-configuring-drill-memory.md
+++ b/_docs/configure-drill/020-configuring-drill-memory.md
@@ -3,7 +3,7 @@ title: "Configuring Drill Memory"
 parent: "Configure Drill"
 ---
 
-This section describes how to configure the amount of direct memory allocated to a Drillbit for query processing in any Drill cluster, multitenant or not. The default memory for a Drillbit is 8G, but Drill prefers 16G or more depending on the workload. The total amount of direct memory that a Drillbit allocates to query operations cannot exceed the limit set.
+You can configure the amount of direct memory allocated to a Drillbit for query processing in any Drill cluster, multitenant or not. The default memory for a Drillbit is 8G, but Drill prefers 16G or more depending on the workload. The total amount of direct memory that a Drillbit allocates to query operations cannot exceed the limit set.
 
 Drill uses Java direct memory and performs well when executing
 operations in memory instead of storing the operations on disk. Drill does not
@@ -26,7 +26,7 @@ After you edit `<drill_installation_directory>/conf/drill-env.sh`, [restart the
 
 ## About the Drillbit startup script
 
-The drill-env.sh file contains the following options:
+The `drill-env.sh` file contains the following options:
 
     DRILL_MAX_DIRECT_MEMORY="8G"
     DRILL_MAX_HEAP="4G"
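
After editing these values and restarting the Drillbits, you may want to confirm the allocation that each Drillbit reports. A minimal sketch, assuming your Drill release provides the `sys.memory` system table (availability and column names vary by version):

    SELECT * FROM sys.memory;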

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 4fd7948..ee2ff9e 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -9,7 +9,7 @@ env.sh` and `drill-override.conf` files in the
 `/conf` directory. Drill sources` /etc/drill/conf` if it exists. Otherwise,
 Drill sources the local `<drill_installation_directory>/conf` directory.
 
-The sys.options table contains information about boot (start-up), system, and session options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The following table lists the options in alphabetical order and provides a brief description of supported options:
+The sys.options table contains information about system and session options. The sys.boot table contains information about Drill start-up options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The following table lists the system options in alphabetical order and provides a brief description of supported options:
 
 ## System Options
 The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
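
To look up a particular option rather than scanning the whole table, you can filter `sys.options` by name. A minimal sketch that uses only the `name` and `type` columns referenced elsewhere in these docs:

    SELECT name, type FROM sys.options WHERE name LIKE '%memory%';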

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/configure-drill/configuration-options/020-start-up-options.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/020-start-up-options.md b/_docs/configure-drill/configuration-options/020-start-up-options.md
index e525608..2f1ee5d 100644
--- a/_docs/configure-drill/configuration-options/020-start-up-options.md
+++ b/_docs/configure-drill/configuration-options/020-start-up-options.md
@@ -36,7 +36,7 @@ file tells Drill to scan that JAR file or associated object and include it.
 
 You can run the following query to see a list of Drill’s startup options:
 
-    SELECT * FROM sys.options WHERE type='BOOT';
+    SELECT * FROM sys.boot;
 
 ## Configuring Start-Up Options
 
@@ -57,7 +57,6 @@ The summary of start-up options, also known as boot options, lists default value
 
   Tells Drill which directory to use when spooling. Drill uses a spool and sort operation for beyond memory operations. The sorting operation is designed to spool to a Hadoop file system. The default Hadoop file system is a local file system in the /tmp directory. Spooling performance (both writing and reading back from it) is constrained by the file system. For MapR clusters, use MapReduce volumes or set up local volumes to use for spooling purposes. Volumes improve performance and stripe data across as many disks as possible.
 
-
 * drill.exec.zk.connect  
   Provides Drill with the ZooKeeper quorum to use to connect to data sources. Change this setting to point to the ZooKeeper quorum that you want Drill to use. You must configure this option on each Drillbit node.
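
To inspect a single boot setting, such as the ZooKeeper connection string, you can filter `sys.boot` the same way you filter `sys.options`. A minimal sketch, assuming `sys.boot` exposes a `name` column (column names may differ by release):

    SELECT * FROM sys.boot WHERE name LIKE 'drill.exec.zk%';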
 

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/configure-drill/configuration-options/030-planning-and-exececution-options.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/030-planning-and-exececution-options.md b/_docs/configure-drill/configuration-options/030-planning-and-exececution-options.md
index f7d3442..2608538 100644
--- a/_docs/configure-drill/configuration-options/030-planning-and-exececution-options.md
+++ b/_docs/configure-drill/configuration-options/030-planning-and-exececution-options.md
@@ -8,11 +8,6 @@ queries that you run during the current Drill connection. Options set at the
 system level affect the entire system and persist between restarts. Session
 level settings override system level settings.
 
-You can run the following query to see a list of the system and session
-planning and execution options:
-
-    SELECT name FROM sys.options WHERE type in (SYSTEM, SESSION);
-
 ## Configuring Planning and Execution Options
 
 Use the ALTER SYSTEM or ALTER SESSION commands to set options. Typically,
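
For example, a session-scoped change followed by a check of the option's current state might look like the following sketch; `planner.enable_hashjoin` is used here only as an illustrative option name:

    ALTER SESSION SET `planner.enable_hashjoin` = false;
    SELECT name, type FROM sys.options WHERE name = 'planner.enable_hashjoin';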

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md b/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
index 7744310..332966c 100644
--- a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
+++ b/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
@@ -7,7 +7,7 @@ cluster, Drill broadcasts the information to all of the other Drill nodes
 to have identical storage plugin configurations. You do not need to
 restart any of the Drillbits when you add or update a storage plugin instance.
 
-Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: `http://<IP address of the sandbox>:8047`, and then go to the Storage tab. 
+Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to `http://<IP address or host name>:8047`, and then select the Storage tab. 
 
 To create and configure a new storage plugin:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/img/drill-bin.png
----------------------------------------------------------------------
diff --git a/_docs/img/drill-bin.png b/_docs/img/drill-bin.png
old mode 100644
new mode 100755
index 6cbf7b8..a4c21d8
Binary files a/_docs/img/drill-bin.png and b/_docs/img/drill-bin.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/img/drill-directory.png
----------------------------------------------------------------------
diff --git a/_docs/img/drill-directory.png b/_docs/img/drill-directory.png
old mode 100644
new mode 100755
index f15e898..ab38a33
Binary files a/_docs/img/drill-directory.png and b/_docs/img/drill-directory.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/img/sqlline1.png
----------------------------------------------------------------------
diff --git a/_docs/img/sqlline1.png b/_docs/img/sqlline1.png
old mode 100644
new mode 100755
index 2588d91..5ea6b30
Binary files a/_docs/img/sqlline1.png and b/_docs/img/sqlline1.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/010-install-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/install/010-install-drill-introduction.md b/_docs/install/010-install-drill-introduction.md
index 09df476..0ca6fe5 100644
--- a/_docs/install/010-install-drill-introduction.md
+++ b/_docs/install/010-install-drill-introduction.md
@@ -4,9 +4,7 @@ parent: "Install Drill"
 ---
 
 
-You can install Drill in embedded mode or in distributed mode. Installing
-Drill in embedded mode does not require any configuration. If you want to use Drill in a
-clustered Hadoop environment, you can install Drill in distributed mode.
-Installing in distributed mode requires some configuration, however once you
-install you can connect Drill to your Hive, HBase, or distributed file system
-data sources and run queries on them.
\ No newline at end of file
+You can install Drill in either embedded mode or distributed mode. Installing
+Drill in embedded mode does not require any configuration. To use Drill in a
+clustered Hadoop environment, install Drill in distributed mode, which requires some configuration after installation. After you complete the configuration, connect Drill to your Hive, HBase, or distributed file system
+data sources, and run queries on them.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/045-embedded-mode-prerequisites.md
----------------------------------------------------------------------
diff --git a/_docs/install/045-embedded-mode-prerequisites.md b/_docs/install/045-embedded-mode-prerequisites.md
index e0289d2..73f4ab0 100644
--- a/_docs/install/045-embedded-mode-prerequisites.md
+++ b/_docs/install/045-embedded-mode-prerequisites.md
@@ -14,10 +14,10 @@ To install Apache Drill in distributed mode, complete the following steps:
 
 **Prerequisites**
 
-Before you install Apache Drill on nodes in your cluster, you must have the
-following software and services installed:
+Before you install Apache Drill on nodes in your cluster, install and configure the
+following software and services:
 
-  * [Oracle JDK version 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html)
-  * Configured and running ZooKeeper quorum
-  * Configured and running Hadoop cluster (Recommended)
+  * [Oracle JDK version 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html) (Required)
+  * A configured and running ZooKeeper quorum (Required)
+  * A configured and running Hadoop cluster (Recommended)
   * DNS (Recommended)

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/047-installing-drill-on-the-cluster.md
----------------------------------------------------------------------
diff --git a/_docs/install/047-installing-drill-on-the-cluster.md b/_docs/install/047-installing-drill-on-the-cluster.md
index eeef763..ac224b5 100644
--- a/_docs/install/047-installing-drill-on-the-cluster.md
+++ b/_docs/install/047-installing-drill-on-the-cluster.md
@@ -6,8 +6,8 @@ Complete the following steps to install Drill on designated nodes:
 
   1. Download the Drill tarball.
   
-        curl http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz
-  2. Explode the tarball to the directory of your choice. For example, to install Drill in `/opt`:
+        curl -o apache-drill-1.0.0.tar.gz http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz
+  2. Explode the tarball to the directory of your choice, such as `/opt`:
   
         tar -xzvf apache-drill-0.9.0.tar.gz -C /opt
   3. In `drill-override.conf,` create a unique Drill `cluster ID`, and provide Zookeeper host names and port numbers to configure a connection to your Zookeeper quorum.
@@ -21,6 +21,5 @@ Complete the following steps to install Drill on designated nodes:
           zk.connect: "<zkhostname1>:<port>,<zkhostname2>:<port>,<zkhostname3>:<port>"
          }
 
-You can connect Drill to various types of data sources. Refer to [Connect
-Apache Drill to Data Sources]({{ site.baseurl }}/docs/connect-a-data-source-introduction) to get configuration instructions for the
+You can connect Drill to various types of data sources. Refer to [Connect Apache Drill to Data Sources]({{ site.baseurl }}/docs/connect-a-data-source-introduction) to get configuration instructions for the
 particular type of data source that you want to connect to Drill.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/050-starting-drill-in-distributed mode.md
----------------------------------------------------------------------
diff --git a/_docs/install/050-starting-drill-in-distributed mode.md b/_docs/install/050-starting-drill-in-distributed mode.md
index ac730c7..4a66447 100644
--- a/_docs/install/050-starting-drill-in-distributed mode.md	
+++ b/_docs/install/050-starting-drill-in-distributed mode.md	
@@ -17,12 +17,44 @@ For example, to restart a Drillbit, navigate to the Drill installation directory
 
     bin/drillbit.sh restart
 
-## Invoking SQLLine
-SQLLine is used as the Drill shell. SQLLine connects to relational databases and executes SQL commands. You invoke SQLLine for Drill in embedded or distributed mode. If you want to use a particular storage plugin, you specify the plugin as a schema when you invoke SQLLine.
+## Starting the Drill Shell
+Using the Drill shell, you can connect to relational databases and execute SQL commands. To start the Drill shell, run one of the following scripts, which are located in the bin directory of the Drill installation:
 
-### SQLLine Command Syntax on Linux and Mac OS X
-To start SQLLine, use the following **sqlline command** syntax:
+* `drill-conf`  
+  Opens the Drill shell using the connection string to the ZooKeeper nodes specified in the `drill-conf` script.  
+* `drill-localhost`  
+  Opens the Drill shell using a connection to the ZooKeeper service running on the local host.
 
+Complete the following steps to start the Drill shell on the local node:
+
+  1. Navigate to the Drill installation directory, and issue the following command to start the Drillbit if necessary:
+  
+        bin/drillbit.sh restart
+  2. Issue the following command to start the Drill shell if ZooKeeper is running on the same node as the shell:
+  
+        bin/drill-localhost
+     
+     Alternatively, issue the following command to start the Drill shell using the connection string in `drill-conf`:
+
+         bin/drill-conf
+
+  3. Issue the following query to check the Drillbits running in the cluster:
+  
+        0: jdbc:drill:zk=<zk1host>:<port> select * from sys.drillbits;
+
+Drill provides a list of Drillbits that are running.
+
+    +----------------+--------------+--------------+--------------------+
+    |    host        | user_port    | control_port |      data_port     |
+    +----------------+--------------+--------------+--------------------+
+    | <host address> | <port number>| <port number>|   <port number>    |
+    +----------------+--------------+--------------+--------------------+
+
+Now you can run queries. The Drill installation includes sample data
+that you can query. Refer to [Querying Parquet Files]({{ site.baseurl }}/docs/querying-parquet-files/).
+
+### Using an Ad-Hoc Connection to Drill
+To use a custom connection to Drill without altering the connection string in `drill-conf` that you normally use, start the Drill shell on an ad-hoc basis using `sqlline`. For example, to start the Drill shell with a particular storage plugin as the schema, use the following command syntax: 
 
     sqlline –u jdbc:drill:[schema=<storage plugin>;]zk=<zk name>[:<port>][,<zk name2>[:<port>]... ]
 
@@ -31,51 +63,20 @@ To start SQLLine, use the following **sqlline command** syntax:
 * `-u` is the option that precedes a connection string. Required.  
 * `jdbc` is the connection protocol. Required.  
 * `schema` is the name of a [storage plugin]({{site.baseurl}}/docs/storage-plugin-registration) to use for queries. Optional.  
-* `Zk=zkname` is one or more ZooKeeper host names or IP addresses. Optional if you are running SQLLine and ZooKeeper on the local node.  
+* `zk=zkname` is one or more ZooKeeper host names or IP addresses.  
 * `port` is the ZooKeeper port number. Optional. Port 2181 is the default.  
 
-#### Examples of Starting Drill
-This example also starts SQLLine using the `dfs` storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query:
-
+For example, start the Drill shell using the `dfs` storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query:
 
     bin/sqlline –u jdbc:drill:schema=dfs;zk=centos26
 
-This command starts SQLLine in a cluster configured to run ZooKeeper on three nodes:
+This command starts the Drill shell in a cluster configured to run ZooKeeper on three nodes:
 
     bin/sqlline –u jdbc:drill:zk=cento23,zk=centos24,zk=centos26:5181
 
-## Procedure for Starting Drill in Distributed Mode
-
-Complete the following steps to start Drill:
-
-  1. Navigate to the Drill installation directory, and issue the following command to start a Drillbit:
-  
-        bin/drillbit.sh restart
-  2. Issue the following command to invoke SQLLine and start Drill if ZooKeeper is running on the same node as SQLLine:
-  
-        bin/sqlline -u jdbc:drill:
-     
-     If you cannot connect to Drill, invoke SQLLine with the ZooKeeper quorum:
-
-         bin/sqlline -u jdbc:drill:zk=<zk1host>:<port>,<zk2host>:<port>,<zk3host>:<port>
-  3. Issue the following query to Drill to verify that all Drillbits have joined the cluster:
-  
-        0: jdbc:drill:zk=<zk1host>:<port> select * from sys.drillbits;
-
-Drill provides a list of Drillbits that have joined.
-
-    +------------+------------+--------------+--------------------+
-    |    host        | user_port    | control_port | data_port    |
-    +------------+------------+--------------+--------------------+
-    | <host address> | <port number>| <port number>| <port number>|
-    +------------+------------+--------------+--------------------+
-
-Now you can run queries. The Drill installation includes sample data
-that you can query. Refer to [Querying Parquet Files]({{ site.baseurl }}/docs/querying-parquet-files/).
-
-## Exiting SQLLine
+## Exiting the Drill Shell
 
-To exit SQLLine, issue the following command:
+To exit the Drill shell, issue the following command:
 
     !quit
 
@@ -83,4 +84,4 @@ To exit SQLLine, issue the following command:
 
 In some cases, such as stopping while a query is in progress, the `!quit` command does not stop Drill running in embedded mode. In distributed mode, you stop the Drillbit service. Navigate to the Drill installation directory, and issue the following command to stop a Drillbit:
   
-        bin/drillbit.sh stop
+    bin/drillbit.sh stop
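
As a quick sanity check after starting the cluster, you can also count the registered Drillbits instead of listing them; this is a small variation on the `sys.drillbits` query shown above:

    SELECT COUNT(*) AS active_drillbits FROM sys.drillbits;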

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/installing-drill-in-embedded-mode/010-embedded-mode-prerequisites.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/010-embedded-mode-prerequisites.md b/_docs/install/installing-drill-in-embedded-mode/010-embedded-mode-prerequisites.md
index 87113c3..f0cd33f 100644
--- a/_docs/install/installing-drill-in-embedded-mode/010-embedded-mode-prerequisites.md
+++ b/_docs/install/installing-drill-in-embedded-mode/010-embedded-mode-prerequisites.md
@@ -3,16 +3,13 @@ title: "Embedded Mode Prerequisites"
 parent: "Installing Drill in Embedded Mode"
 ---
 Installing Drill in embedded mode installs Drill locally on your machine.
-Embedded mode is a quick, easy way to install and try Drill without having to
-perform any configuration tasks. When you install Drill in embedded mode, the
-Drillbit service is installed locally and starts automatically when you invoke
-SQLLine, the Drill shell. You can install Drill in embedded mode on a machine
+Embedded mode is a quick way to install and try Drill without having to
+perform any configuration tasks. Installing Drill in embedded mode configures the
+local Drillbit service to start automatically when you launch the Drill shell. You can install Drill in embedded mode on a machine
 running Linux, Mac OS X, or Windows.
 
 **Prerequisite:**
 
-You must have the following software installed on your machine to run Drill:
-
 You need to meet the following prerequisites to run Drill:
 
 * Linux, Mac OS X, and Windows: [Oracle Java SE Development (JDK) Kit 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html) installation  

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md b/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
index 2720fe7..51c80fe 100755
--- a/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
+++ b/_docs/install/installing-drill-in-embedded-mode/020-installing-drill-on-linux-and-mac-os-x.md
@@ -6,16 +6,16 @@ First, check that you [meet the prerequisites]({{site.baseurl}}/docs/embedded-mo
 
 Complete the following steps to install Drill:  
 
-1. Issue the following command in a terminal to download the latest, stable version of Apache Drill to a directory on your machine, or download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz):
+1. Issue the following command in a terminal to download the latest, stable version of Apache Drill to a directory on your machine, or download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz):
 
-        wget http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz  
+        wget http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz  
 
 2. Copy the downloaded file to the directory where you want to install Drill. 
 
 3. Extract the contents of the Drill tar.gz file. Use sudo if necessary:  
 
-        sudo tar -xvzf apache-drill-0.9.0..tar.gz  
+        sudo tar -xvzf apache-drill-1.0.0.tar.gz  
 
-The extraction process creates the installation directory named apache-drill-0.9.0 containing the Drill software.
+The extraction process creates the installation directory named apache-drill-1.0.0 containing the Drill software.
 
 At this point, you can [start Drill]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
index ae72664..2bbfede 100644
--- a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
+++ b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
@@ -2,38 +2,27 @@
 title: "Starting Drill on Linux and Mac OS X"
 parent: "Installing Drill in Embedded Mode"
 ---
-Launch SQLLine using the sqlline command to start to Drill in embedded mode. The command directs SQLLine to connect to Drill using jdbc. The zk=local means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
+Start the Drill shell using the `drill-embedded` command. The command uses a jdbc connection string and identifies the local node as the ZooKeeper node. Complete the following steps to start the Drill shell:
 
 1. Navigate to the Drill installation directory. For example:  
 
-        cd apache-drill-0.9.0  
+        cd apache-drill-1.0.0  
 
 2. Issue the following command to launch SQLLine:
 
-        bin/sqlline -u jdbc:drill:zk=local  
+        bin/drill-embedded  
 
    The `0: jdbc:drill:zk=local>`  prompt appears.  
 
-   At this point, you can [submit queries]({{site.baseurl}}/docs/drill-in-10-minutes#query-sample-data) to Drill.
+   At this point, you can [run queries]({{site.baseurl}}/docs/drill-in-10-minutes#query-sample-data).
 
-## Example of Starting Drill
-
-The simplest example of how to start SQLLine is to identify the protocol, JDBC, and ZooKeeper node or nodes in the **sqlline** command. This example starts SQLLine on a node in an embedded, single-node cluster:
-
-    sqlline -u jdbc:drill:zk=local
-
-This example also starts SQLLine using the `dfs` storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query:
-
-
-    bin/sqlline –u jdbc:drill:schema=dfs;zk=centos26
-    
-You can use the schema option in the **sqlline** command to specify a storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query: For example, this command specifies the `dfs` storage plugin.
+You can also use the **sqlline** command to start Drill using a custom connection string, as described in ["Using an Ad-Hoc Connection to Drill"]({{site.baseurl}}/docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill). For example, you can specify the storage plugin when you start the shell, which eliminates the need to specify the storage plugin in the query. This command specifies the `dfs` storage plugin:
 
     bin/sqlline –u jdbc:drill:schema=dfs;zk=local
 
-## Exiting SQLLine
+## Exiting the Drill Shell
 
-To exit SQLLine, issue the following command:
+To exit the Drill shell, issue the following command:
 
     !quit
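
Once the `0: jdbc:drill:zk=local>` prompt appears, a short query against the bundled classpath sample data confirms that the embedded Drillbit is working; the columns used here appear in the `employee.json` sample referenced elsewhere in these docs:

    SELECT full_name, position_title FROM cp.`employee.json` LIMIT 3;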
 

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md b/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
index f12e175..05328c7 100755
--- a/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
+++ b/_docs/install/installing-drill-in-embedded-mode/040-installing-drill-on-windows.md
@@ -4,9 +4,9 @@ parent: "Installing Drill in Embedded Mode"
 ---
 You can install Drill on Windows 7 or 8. First, check that you [meet the prerequisites]({{site.baseurl}}/docs/embedded-mode-prerequisites), including setting the JAVA_HOME environment variable, and then install Drill. Complete the following steps to install Drill:
 
-1. Click the following link to download the latest, stable version of Apache Drill:  [http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz](http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz)
-2. Move the `apache-drill-0.9.0.tar.gz` file to a directory where you want to install Drill.
-3. Unzip the `TAR.GZ` file using a third-party tool. If the tool you use does not unzip the TAR file as well as the `TAR.GZ` file, unzip the `apache-drill-0.9.0.tar` to extract the Drill software. The extraction process creates the installation directory named apache-drill-0.9.0 containing the Drill software. For example:
+1. Click the following link to download the latest, stable version of Apache Drill:  [http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz)
+2. Move the `apache-drill-1.0.0.tar.gz` file to a directory where you want to install Drill.
+3. Unzip the `TAR.GZ` file using a third-party tool. If the tool you use does not unzip the TAR file as well as the `TAR.GZ` file, unzip the `apache-drill-1.0.0.tar` to extract the Drill software. The extraction process creates the installation directory named apache-drill-1.0.0 containing the Drill software. For example:
    ![drill install dir]({{ site.baseurl }}/docs/img/drill-directory.png)
 
 At this point, you can [start Drill]({{site.baseurl}}/docs/starting-drill-on-windows). 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
index ba95059..e8ef3e4 100644
--- a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
+++ b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
@@ -2,9 +2,9 @@
 title: "Starting Drill on Windows"
 parent: "Installing Drill in Embedded Mode"
 ---
-Launch SQLLine using the **sqlline command** to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
+Start the Drill shell using the **sqlline command**. The `zk=local` means the local node is the ZooKeeper node. Complete the following steps to launch the Drill shell:
 
-1. Open the apache-drill-0.9.0 folder.  
+1. Open the apache-drill-1.0.0 folder.  
 2. Open the bin folder, and double-click the `sqlline.bat` file:
    ![drill bin dir]({{ site.baseurl }}/docs/img/drill-bin.png)
    The Windows command prompt opens.  
@@ -18,9 +18,9 @@ You can use the schema option in the **sqlline** command to specify a storage pl
 
     bin/sqlline –u jdbc:drill:schema=dfs;zk=local
 
-## Exiting SQLLine
+## Exiting the Drill Shell
 
-To exit SQLLine, issue the following command:
+To exit the Drill shell, issue the following command:
 
     !quit
 

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md b/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
index d7bba62..a427590 100644
--- a/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
+++ b/_docs/odbc-jdbc-interfaces/010-interfaces-introduction.md
@@ -4,7 +4,7 @@ parent: "ODBC/JDBC Interfaces"
 ---
 You can connect to Apache Drill through the following interfaces:
 
-  * Drill shell (SQLLine)
+  * Drill shell
   * Drill Web UI
   * [ODBC]({{ site.baseurl }}/docs/odbc-jdbc-interfaces#using-odbc-to-access-apache-drill-from-bi-tools)*
   * [JDBC]({{ site.baseurl }}/docs/using-jdbc/)

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/tutorials/020-drill-in-10-minutes.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/020-drill-in-10-minutes.md b/_docs/tutorials/020-drill-in-10-minutes.md
index 9f16044..98f5758 100755
--- a/_docs/tutorials/020-drill-in-10-minutes.md
+++ b/_docs/tutorials/020-drill-in-10-minutes.md
@@ -43,32 +43,32 @@ The output looks something like this:
 
 Complete the following steps to install Drill:  
 
-1. Issue the following command in a terminal to download the latest, stable version of Apache Drill to a directory on your machine, or download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz):
+1. Issue the following command in a terminal to download the latest version of Apache Drill to a directory on your machine, or download Drill from the [Drill web site](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz):
 
-        wget http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz  
+        wget http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz  
 
 2. Copy the downloaded file to the directory where you want to install Drill. 
 
 3. Extract the contents of the Drill tar.gz file. Use sudo if necessary:  
 
-        sudo tar -xvzf apache-drill-0.9.0..tar.gz  
+        sudo tar -xvzf apache-drill-1.0.0.tar.gz  
 
-The extraction process creates the installation directory named apache-drill-0.9.0 containing the Drill software.
+The extraction process creates the installation directory named apache-drill-1.0.0 containing the Drill software.
 
-At this point, you can [start Drill]({{site.baseurl}}/docs/drill-in-10-minutes/#start-drill).
+At this point, you can start Drill.
 
 ## Start Drill on Linux and Mac OS X
-Launch SQLLine using the sqlline command to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The zk=local means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
+Start Drill in embedded mode using the `drill-embedded` command:
 
 1. Navigate to the Drill installation directory. For example:  
 
-        cd apache-drill-0.9.0  
+        cd apache-drill-1.0.0  
 
-2. Issue the following command to launch SQLLine:
+2. Issue the following command to launch Drill in embedded mode:
 
-        bin/sqlline -u jdbc:drill:zk=local  
+        bin/drill-embedded  
 
-   The `0: jdbc:drill:zk=local>`  prompt appears.  
+   The message of the day followed by the `0: jdbc:drill:zk=local>`  prompt appears.  
 
    At this point, you can [submit queries]({{site.baseurl}}/docs/drill-in-10-minutes#query-sample-data) to Drill.
 
@@ -76,24 +76,25 @@ Launch SQLLine using the sqlline command to start to Drill in embedded mode. The
 
 You can install Drill on Windows 7 or 8. First, set the JAVA_HOME environment variable, and then install Drill. Complete the following steps to install Drill:
 
-1. Click the following link to download the latest, stable version of Apache Drill:  [http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz](http://getdrill.org/drill/download/apache-drill-0.9.0.tar.gz)
-2. Move the `apache-drill-0.9.0.tar.gz` file to a directory where you want to install Drill.
-3. Unzip the `TAR.GZ` file using a third-party tool. If the tool you use does not unzip the TAR file as well as the `TAR.GZ` file, unzip the `apache-drill-0.9.0.tar` to extract the Drill software. The extraction process creates the installation directory named apache-drill-0.9.0 containing the Drill software. For example:
+1. Click the following link to download the latest version of Apache Drill:  [http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz](http://getdrill.org/drill/download/apache-drill-1.0.0.tar.gz)  
+2. Move the `apache-drill-1.0.0.tar.gz` file to a directory where you want to install Drill.  
+3. Unzip the `TAR.GZ` file using a third-party tool. If the tool you use does not unzip the TAR file as well as the `TAR.GZ` file, unzip the `apache-drill-1.0.0.tar` to extract the Drill software. The extraction process creates the installation directory named apache-drill-1.0.0 containing the Drill software. For example:  
    ![drill install dir]({{ site.baseurl }}/docs/img/drill-directory.png)
-   At this point, you can start Drill.  
+
+At this point, you can start Drill.  
 
 ## Start Drill on Windows
-Launch SQLLine using the **sqlline command** to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
+Start Drill by running the sqlline.bat file and typing a connection string, as shown in the following procedure. The `zk=local` in the connection string means the local node is the ZooKeeper node:
 
-1. Open the apache-drill-0.9.0 folder.  
+1. Open the apache-drill-1.0.0 folder.  
 2. Open the bin folder, and double-click the `sqlline.bat` file:
    ![drill bin dir]({{ site.baseurl }}/docs/img/drill-bin.png)
    The Windows command prompt opens.  
 3. At the sqlline> prompt, type `!connect jdbc:drill:zk=local` and then press Enter:
    ![sqlline]({{ site.baseurl }}/docs/img/sqlline1.png)
-4. Enter the username, `admin`, and password, also `admin` when prompted.
+4. Enter the username, `admin`, and the password, also `admin`, when prompted.  
    The `0: jdbc:drill:zk=local>` prompt appears.
-At this point, you can [submit queries]({{ site.baseurl }}/docs/drill-in-10-minutes#query-sample-data) to Drill.
+At this point, you can [run queries]({{ site.baseurl }}/docs/drill-in-10-minutes#query-sample-data).
 
 ## Stopping Drill
 
@@ -118,35 +119,18 @@ A sample JSON file, `employee.json`, contains fictitious employee data.
 To view the data in the `employee.json` file, submit the following SQL query
 to Drill:
     
-    0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json`;
+    0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 3;
 
-The query returns the following results:
+The query output is:
 
-**Example of partial output**
-
-    +-------------+------------+------------+------------+-------------+-----------+
-    | employee_id | full_name  | first_name | last_name  | position_id | position_ |
-    +-------------+------------+------------+------------+-------------+-----------+
-    | 1101        | Steve Eurich | Steve      | Eurich         | 16          | Store T |
-    | 1102        | Mary Pierson | Mary       | Pierson    | 16          | Store T |
-    | 1103        | Leo Jones  | Leo        | Jones      | 16          | Store Tem |
-    | 1104        | Nancy Beatty | Nancy      | Beatty     | 16          | Store T |
-    | 1105        | Clara McNight | Clara      | McNight    | 16          | Store  |
-    | 1106        | Marcella Isaacs | Marcella   | Isaacs     | 17          | Stor |
-    | 1107        | Charlotte Yonce | Charlotte  | Yonce      | 17          | Stor |
-    | 1108        | Benjamin Foster | Benjamin   | Foster     | 17          | Stor |
-    | 1109        | John Reed  | John       | Reed       | 17          | Store Per |
-    | 1110        | Lynn Kwiatkowski | Lynn       | Kwiatkowski | 17          | St |
-    | 1111        | Donald Vann | Donald     | Vann       | 17          | Store Pe |
-    | 1112        | William Smith | William    | Smith      | 17          | Store  |
-    | 1113        | Amy Hensley | Amy        | Hensley    | 17          | Store Pe |
-    | 1114        | Judy Owens | Judy       | Owens      | 17          | Store Per |
-    | 1115        | Frederick Castillo | Frederick  | Castillo   | 17          | S |
-    | 1116        | Phil Munoz | Phil       | Munoz      | 17          | Store Per |
-    | 1117        | Lori Lightfoot | Lori       | Lightfoot  | 17          | Store |
-    +-------------+------------+------------+------------+-------------+-----------+
-    1,155 rows selected (0.762 seconds)
-    0: jdbc:drill:zk=local>
+    +--------------+------------------+-------------+------------+--------------+---------------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+    | employee_id  |    full_name     | first_name  | last_name  | position_id  |   position_title    | store_id  | department_id  | birth_date  |       hire_date        |  salary  | supervisor_id  | education_level  | marital_status  | gender  |  management_role   |
+    +--------------+------------------+-------------+------------+--------------+---------------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+    | 1            | Sheri Nowmer     | Sheri       | Nowmer     | 1            | President           | 0         | 1              | 1961-08-26  | 1994-12-01 00:00:00.0  | 80000.0  | 0              | Graduate Degree  | S               | F       | Senior Management  |
+    | 2            | Derrick Whelply  | Derrick     | Whelply    | 2            | VP Country Manager  | 0         | 1              | 1915-07-03  | 1994-12-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree  | M               | M       | Senior Management  |
+    | 4            | Michael Spence   | Michael     | Spence     | 2            | VP Country Manager  | 0         | 1              | 1969-06-20  | 1998-01-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree  | S               | M       | Senior Management  |
+    +--------------+------------------+-------------+------------+--------------+---------------------+-----------+----------------+-------------+------------------------+----------+----------------+------------------+-----------------+---------+--------------------+
+    3 rows selected (0.827 seconds)
 
 ### Querying a Parquet File
 
@@ -176,17 +160,16 @@ your operating system:
 
 The query returns the following results:
 
-    +------------+------------+
-    |   EXPR$0   |   EXPR$1   |
-    +------------+------------+
-    | AFRICA     | lar deposits. blithely final packages cajole. regular waters ar |
-    | AMERICA    | hs use ironic, even requests. s |
-    | ASIA       | ges. thinly even pinto beans ca |
-    | EUROPE     | ly final courts cajole furiously final excuse |
-    | MIDDLE EAST | uickly special accounts cajole carefully blithely close reques |
-    +------------+------------+
-    5 rows selected (0.165 seconds)
-    0: jdbc:drill:zk=local>
+    +--------------+--------------+-----------------------+
+    | R_REGIONKEY  |    R_NAME    |       R_COMMENT       |
+    +--------------+--------------+-----------------------+
+    | 0            | AFRICA       | lar deposits. blithe  |
+    | 1            | AMERICA      | hs use ironic, even   |
+    | 2            | ASIA         | ges. thinly even pin  |
+    | 3            | EUROPE       | ly final courts cajo  |
+    | 4            | MIDDLE EAST  | uickly special accou  |
+    +--------------+--------------+-----------------------+
+    5 rows selected (0.409 seconds)
 
 #### Nation File
 
@@ -194,7 +177,7 @@ If you followed the Apache Drill in 10 Minutes instructions to install Drill
 in embedded mode, the path to the parquet file varies between operating
 systems.
 
-**Note:** When you enter the query, include the version of Drill that you are currently running. 
+{% include startnote.html %}When you enter the query, include the version of Drill that you are currently running in the path to the parquet file.{% include endnote.html %}
 
 To view the data in the `nation.parquet` file, issue the query appropriate for
 your operating system:
@@ -212,14 +195,44 @@ your operating system:
 
 The query returns the following results:
 
+    SELECT * FROM dfs.`Users/khahn/drill/apache-drill-1.0.0-SNAPSHOT/sample-data/nation.parquet`;
+    +--------------+-----------------+--------------+-----------------------+
+    | N_NATIONKEY  |     N_NAME      | N_REGIONKEY  |       N_COMMENT       |
+    +--------------+-----------------+--------------+-----------------------+
+    | 0            | ALGERIA         | 0            |  haggle. carefully f  |
+    | 1            | ARGENTINA       | 1            | al foxes promise sly  |
+    | 2            | BRAZIL          | 1            | y alongside of the p  |
+    | 3            | CANADA          | 1            | eas hang ironic, sil  |
+    | 4            | EGYPT           | 4            | y above the carefull  |
+    | 5            | ETHIOPIA        | 0            | ven packages wake qu  |
+    | 6            | FRANCE          | 3            | refully final reques  |
+    | 7            | GERMANY         | 3            | l platelets. regular  |
+    | 8            | INDIA           | 2            | ss excuses cajole sl  |
+    | 9            | INDONESIA       | 2            |  slyly express asymp  |
+    | 10           | IRAN            | 4            | efully alongside of   |
+    | 11           | IRAQ            | 4            | nic deposits boost a  |
+    | 12           | JAPAN           | 2            | ously. final, expres  |
+    | 13           | JORDAN          | 4            | ic deposits are blit  |
+    | 14           | KENYA           | 0            |  pending excuses hag  |
+    | 15           | MOROCCO         | 0            | rns. blithely bold c  |
+    | 16           | MOZAMBIQUE      | 0            | s. ironic, unusual a  |
+    | 17           | PERU            | 1            | platelets. blithely   |
+    | 18           | CHINA           | 2            | c dependencies. furi  |
+    | 19           | ROMANIA         | 3            | ular asymptotes are   |
+    | 20           | SAUDI ARABIA    | 4            | ts. silent requests   |
+    | 21           | VIETNAM         | 2            | hely enticingly expr  |
+    | 22           | RUSSIA          | 3            |  requests against th  |
+    | 23           | UNITED KINGDOM  | 3            | eans boost carefully  |
+    | 24           | UNITED STATES   | 1            | y final packages. sl  |
+    +--------------+-----------------+--------------+-----------------------+
+    25 rows selected (0.101 seconds)
+
 ## Summary
 
-Now you know a bit about Apache Drill. To summarize, you have completed the
-following tasks:
+Now you have been introduced to Apache Drill, which supports nested data, schema-less execution, and decentralized metadata. To summarize, you have completed the following tasks:
 
-  * Learned that Apache Drill supports nested data, schema-less execution, and decentralized metadata.
   * Downloaded and installed Apache Drill.
-  * Invoked SQLLine with Drill in embedded mode.
+  * Started Drill in embedded mode.
   * Queried the sample JSON file, `employee.json`, to view its data.
   * Queried the sample `region.parquet` file to view its data.
   * Queried the sample `nation.parquet` file to view its data.
@@ -228,7 +241,7 @@ following tasks:
 
 Now that you have an idea about what Drill can do, you might want to:
 
-  * [Deploy Drill in a clustered environment.]({{ site.baseurl }}/docs/deploying-drill-in-a-cluster)
+  * [Install Drill on a cluster.]({{ site.baseurl }}/docs/installing-drill-on-the-cluster)
   * [Configure storage plugins to connect Drill to your data sources]({{ site.baseurl }}/docs/connect-a-data-source-introduction).
   * Query [Hive]({{ site.baseurl }}/docs/querying-hive) and [HBase]({{ site.baseurl }}/docs/hbase-storage-plugin) data.
   * [Query Complex Data]({{ site.baseurl }}/docs/querying-complex-data)
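
The employee sample output shown earlier in this tutorial also lends itself to a simple aggregate once the `LIMIT 3` query works; this sketch uses only columns that appear in that output:

    SELECT management_role, COUNT(*) AS employees, AVG(salary) AS avg_salary
    FROM cp.`employee.json`
    GROUP BY management_role
    ORDER BY COUNT(*) DESC;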


[18/26] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d8c95990
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d8c95990
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d8c95990

Branch: refs/heads/gh-pages
Commit: d8c95990a3914fe421f6b90b8eea3fcdc3c5eb7f
Parents: af2096d 4d19390
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 15 22:56:35 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 15 22:56:35 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 146 +++++++++++--
 .../010-configuration-options-introduction.md   | 137 ++++++------
 .../080-drill-default-input-format.md           |   2 +-
 .../020-hive-to-drill-data-type-mapping.md      |  20 +-
 .../030-deploying-and-using-a-hive-udf.md       |   2 +-
 .../040-parquet-format.md                       |   4 +-
 .../050-json-data-model.md                      |  27 ++-
 _docs/getting-started/020-why-drill.md          |   2 +-
 _docs/img/socialmed1.png                        | Bin 0 -> 90999 bytes
 _docs/img/socialmed10.png                       | Bin 0 -> 50143 bytes
 _docs/img/socialmed11.png                       | Bin 0 -> 21996 bytes
 _docs/img/socialmed12.png                       | Bin 0 -> 51774 bytes
 _docs/img/socialmed13.png                       | Bin 0 -> 209081 bytes
 _docs/img/socialmed2.png                        | Bin 0 -> 63683 bytes
 _docs/img/socialmed3.png                        | Bin 0 -> 37894 bytes
 _docs/img/socialmed4.png                        | Bin 0 -> 19875 bytes
 _docs/img/socialmed5.png                        | Bin 0 -> 53990 bytes
 _docs/img/socialmed6.png                        | Bin 0 -> 35748 bytes
 _docs/img/socialmed7.png                        | Bin 0 -> 59350 bytes
 _docs/img/socialmed8.png                        | Bin 0 -> 4234 bytes
 _docs/img/socialmed9.png                        | Bin 0 -> 11851 bytes
 _docs/img/spotfire-server-client.png            | Bin 0 -> 48430 bytes
 _docs/img/spotfire-server-configtab.png         | Bin 0 -> 76152 bytes
 _docs/img/spotfire-server-connectionURL.png     | Bin 0 -> 47664 bytes
 _docs/img/spotfire-server-database.png          | Bin 0 -> 36204 bytes
 _docs/img/spotfire-server-datasources-tab.png   | Bin 0 -> 49236 bytes
 _docs/img/spotfire-server-deployment.png        | Bin 0 -> 22058 bytes
 _docs/img/spotfire-server-hiveorders.png        | Bin 0 -> 62537 bytes
 _docs/img/spotfire-server-importconfig.png      | Bin 0 -> 32739 bytes
 _docs/img/spotfire-server-infodesigner.png      | Bin 0 -> 69950 bytes
 _docs/img/spotfire-server-infodesigner2.png     | Bin 0 -> 36991 bytes
 _docs/img/spotfire-server-infolink.png          | Bin 0 -> 126884 bytes
 _docs/img/spotfire-server-new.png               | Bin 0 -> 23290 bytes
 _docs/img/spotfire-server-saveconfig.png        | Bin 0 -> 188740 bytes
 _docs/img/spotfire-server-saveconfig2.png       | Bin 0 -> 32622 bytes
 _docs/img/spotfire-server-start.png             | Bin 0 -> 54541 bytes
 _docs/img/spotfire-server-template.png          | Bin 0 -> 155705 bytes
 _docs/img/spotfire-server-tss.png               | Bin 0 -> 43564 bytes
 .../010-interfaces-introduction.md              |   4 +-
 ...microstrategy-analytics-with-apache-drill.md |   9 +-
 .../065-configuring-spotfire-server.md          |  64 ++++++
 .../040-tableau-examples.md                     |   2 +-
 .../050-using-drill-explorer-on-windows.md      |   4 +-
 .../060-querying-the-information-schema.md      |   1 +
 .../005-querying-complex-data-introduction.md   |   2 +-
 _docs/sql-reference/090-sql-extensions.md       |   2 +-
 .../data-types/010-supported-data-types.md      |  15 +-
 .../030-handling-different-data-types.md        |  12 +-
 .../050-aggregate-and-aggregate-statistical.md  |   2 +
 _docs/tutorials/010-tutorials-introduction.md   |  12 +-
 .../030-analyzing-the-yelp-academic-dataset.md  |  11 +-
 .../040-learn-drill-with-the-mapr-sandbox.md    |  19 +-
 _docs/tutorials/060-analyzing-social-media.md   | 206 +++++++++++++++++++
 .../010-installing-the-apache-drill-sandbox.md  |   2 +-
 54 files changed, 557 insertions(+), 150 deletions(-)
----------------------------------------------------------------------



[21/26] drill git commit: Migrated site from CSS to SASS. Consolidated CSS and JS files. More CDN use.

Posted by ts...@apache.org.
http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/syntax.css
----------------------------------------------------------------------
diff --git a/css/syntax.css b/css/syntax.css
deleted file mode 100644
index 2774b76..0000000
--- a/css/syntax.css
+++ /dev/null
@@ -1,60 +0,0 @@
-.highlight  { background: #ffffff; }
-.highlight .c { color: #999988; font-style: italic } /* Comment */
-.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
-.highlight .k { font-weight: bold } /* Keyword */
-.highlight .o { font-weight: bold } /* Operator */
-.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
-.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
-.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
-.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
-.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
-.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
-.highlight .ge { font-style: italic } /* Generic.Emph */
-.highlight .gr { color: #aa0000 } /* Generic.Error */
-.highlight .gh { color: #999999 } /* Generic.Heading */
-.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
-.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
-.highlight .go { color: #888888 } /* Generic.Output */
-.highlight .gp { color: #555555 } /* Generic.Prompt */
-.highlight .gs { font-weight: bold } /* Generic.Strong */
-.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
-.highlight .gt { color: #aa0000 } /* Generic.Traceback */
-.highlight .kc { font-weight: bold } /* Keyword.Constant */
-.highlight .kd { font-weight: bold } /* Keyword.Declaration */
-.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
-.highlight .kr { font-weight: bold } /* Keyword.Reserved */
-.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
-.highlight .m { color: #009999 } /* Literal.Number */
-.highlight .s { color: #d14 } /* Literal.String */
-.highlight .na { color: #008080 } /* Name.Attribute */
-.highlight .nb { color: #0086B3 } /* Name.Builtin */
-.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
-.highlight .no { color: #008080 } /* Name.Constant */
-.highlight .ni { color: #800080 } /* Name.Entity */
-.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
-.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
-.highlight .nn { color: #555555 } /* Name.Namespace */
-.highlight .nt { color: #000080 } /* Name.Tag */
-.highlight .nv { color: #008080 } /* Name.Variable */
-.highlight .ow { font-weight: bold } /* Operator.Word */
-.highlight .w { color: #bbbbbb } /* Text.Whitespace */
-.highlight .mf { color: #009999 } /* Literal.Number.Float */
-.highlight .mh { color: #009999 } /* Literal.Number.Hex */
-.highlight .mi { color: #009999 } /* Literal.Number.Integer */
-.highlight .mo { color: #009999 } /* Literal.Number.Oct */
-.highlight .sb { color: #d14 } /* Literal.String.Backtick */
-.highlight .sc { color: #d14 } /* Literal.String.Char */
-.highlight .sd { color: #d14 } /* Literal.String.Doc */
-.highlight .s2 { color: #d14 } /* Literal.String.Double */
-.highlight .se { color: #d14 } /* Literal.String.Escape */
-.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
-.highlight .si { color: #d14 } /* Literal.String.Interpol */
-.highlight .sx { color: #d14 } /* Literal.String.Other */
-.highlight .sr { color: #009926 } /* Literal.String.Regex */
-.highlight .s1 { color: #d14 } /* Literal.String.Single */
-.highlight .ss { color: #990073 } /* Literal.String.Symbol */
-.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
-.highlight .vc { color: #008080 } /* Name.Variable.Class */
-.highlight .vg { color: #008080 } /* Name.Variable.Global */
-.highlight .vi { color: #008080 } /* Name.Variable.Instance */
-.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/video-box.css
----------------------------------------------------------------------
diff --git a/css/video-box.css b/css/video-box.css
deleted file mode 100644
index f343730..0000000
--- a/css/video-box.css
+++ /dev/null
@@ -1,55 +0,0 @@
-div#video-box{
-  position:relative;
-  float:right;
-  width:320px;
-  height:160px;
-}
-
-div#video-box div.background {
-  position:absolute;
-  background-color:#fff;
-  height:100%;
-  width:100%;
-  opacity:.15;
-}
-
-div#video-box div.row {
-  position:absolute;
-  height:40px;
-  width:100%;
-  border-bottom:dotted 1px #999;
-  top:0px;
-  font-size:12px;
-  color:black;
-  line-height:40px;
-  
-}
-
-div#video-box div.row.r1 {
-  top:40px;
-}
-
-div#video-box div.row.r2 {
-  top:80px;
-}
-
-div#video-box div.row.r3 {
-  top:120px;
-  border-bottom:none;
-}
-
-div#video-box div.row div {
-  overflow: hidden;
-  margin:5px;
-  height:30px;
-  float:left;
-}
-
-div#video-box div.row div img {
-  height:40px;
-  margin:-5px 0;
-}
-
-div#video-box a {
-  color:#006;
-}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/video-slider.css
----------------------------------------------------------------------
diff --git a/css/video-slider.css b/css/video-slider.css
deleted file mode 100644
index e80ece7..0000000
--- a/css/video-slider.css
+++ /dev/null
@@ -1,34 +0,0 @@
-div#video-slider{
-  width:260px;
-  float:right;
-}
-
-div.slide{
-  position:relative;
-  padding:0px 0px;
-}
-
-img.thumbnail {
-  width:100%;
-  margin:0 auto;
-}
-
-img.play{
-  position:absolute;
-  width:40px;
-  left:110px;
-  top:60px;
-}
-
-div.title{
-  layout:block;
-  bottom:0px;
-  left:0px;
-  width:100%;
-  line-height:20px;
-  color:#000;
-  opacity:.4;
-  text-align:center;
-  font-size:12px;
-  background-color:#fff;
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/images/home-json.png
----------------------------------------------------------------------
diff --git a/images/home-json.png b/images/home-json.png
index b6cccea..bfe3f91 100644
Binary files a/images/home-json.png and b/images/home-json.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 8494f54..6637183 100755
--- a/index.html
+++ b/index.html
@@ -1,12 +1,14 @@
 ---
 layout: default
 ---
-<link href="{{ site.baseurl }}/static/fancybox/jquery.fancybox.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/video-slider.css" rel="stylesheet" type="text/css">
-<script language="javascript" type="text/javascript" src="{{ site.baseurl }}/static/fancybox/jquery.fancybox.pack.js"></script>
-<link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick.css"/>
-<link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick-theme.css"/>
-<script type="text/javascript" src="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick.min.js"></script>
+<link href="//cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.css" rel="stylesheet" type="text/css"/>
+<link href="//cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.5.4/slick.min.css" rel="stylesheet" type="text/css"/>
+<link href="//cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.5.4/slick-theme.min.css" rel="stylesheet" type="text/css"/>
+
+<script src="//cdnjs.cloudflare.com/ajax/libs/fancybox/2.1.5/jquery.fancybox.min.js" language="javascript" type="text/javascript"></script>
+<script src="//cdnjs.cloudflare.com/ajax/libs/slick-carousel/1.5.4/slick.min.js" language="javascript" type="text/javascript"></script>
+
+<link href="{{ site.baseurl }}/css/home.css" rel="stylesheet" type="text/css"/>
 
 <script type="text/javascript">
 
@@ -109,13 +111,13 @@ SELECT timestamp
 </div>
 
 <div class="home-row">
-  <div class="big"><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
+  <div class="big"><img src="{{ site.baseurl }}/images/home-json.png" style="width:300px" /></div>
   <div class="description">
     <h1>Treat your data like a table even when it’s not</h1>
     <p>Drill features a JSON data model that enables it to query complex/nested data and rapidly evolving structure commonly seen in modern applications and non-relational datastores. Drill also provides intuitive extensions to SQL so that the user can easily query complex data.
     <p>Drill is the only columnar query engine that supports complex data. It features an in-memory shredded columnar representation for complex data which allows Drill to achieve columnar speed with the flexibility of an internal JSON document model.</p>
   </div>
-  <div class="small"><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
+  <div class="small"><img src="{{ site.baseurl }}/images/home-json.png" style="width:300px" /></div>
 </div>
 
 <div class="home-row">
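
The home-page copy in the hunk above describes Drill's JSON data model and its SQL extensions for nested data. As a rough, hypothetical sketch of what that looks like in practice (the file path and field names below are illustrative and do not come from this commit):

    -- Query nested JSON directly; the path and field names are illustrative only.
    SELECT t.user.name AS name,
           t.user.address.city AS city
    FROM dfs.`/data/profiles.json` t
    WHERE t.user.age > 21;

    -- FLATTEN expands a repeated (array) field into one row per element.
    SELECT FLATTEN(t.events) AS event
    FROM dfs.`/data/profiles.json` t;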

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/js/drill.js
----------------------------------------------------------------------
diff --git a/js/drill.js b/js/drill.js
index b484955..b17f31a 100644
--- a/js/drill.js
+++ b/js/drill.js
@@ -49,10 +49,10 @@ Drill.Site = {
   watchSearchBarMouseEnter: function() {
     $("#menu .search-bar input[type=text]").on({
       focus: function(){
-        $(this).animate({ width: '125px' });
+        $(this).animate({ width: '130px' });
       },
       blur: function() {
-        $(this).animate({ width: '44px' });
+        $(this).animate({ width: '50px' });
       }
     })
   },


[12/26] drill git commit: image tweaks

Posted by ts...@apache.org.
image tweaks


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/babf4925
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/babf4925
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/babf4925

Branch: refs/heads/gh-pages
Commit: babf4925457b9182985a04b5bf3e79850983b212
Parents: 1c27ed5
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 15 15:13:28 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 15 15:13:28 2015 -0700

----------------------------------------------------------------------
 _docs/img/socialmed1.png | Bin 91288 -> 90999 bytes
 _docs/img/socialmed2.png | Bin 58175 -> 63683 bytes
 _docs/img/socialmed3.png | Bin 37943 -> 37894 bytes
 3 files changed, 0 insertions(+), 0 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/babf4925/_docs/img/socialmed1.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed1.png b/_docs/img/socialmed1.png
index 86c4776..c6c5855 100644
Binary files a/_docs/img/socialmed1.png and b/_docs/img/socialmed1.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/babf4925/_docs/img/socialmed2.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed2.png b/_docs/img/socialmed2.png
index b5d78e3..c815bc3 100644
Binary files a/_docs/img/socialmed2.png and b/_docs/img/socialmed2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/babf4925/_docs/img/socialmed3.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed3.png b/_docs/img/socialmed3.png
index 72d651a..cf4957b 100644
Binary files a/_docs/img/socialmed3.png and b/_docs/img/socialmed3.png differ


[16/26] drill git commit: Home page responsive layout

Posted by ts...@apache.org.
Home page responsive layout


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/44f8fde8
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/44f8fde8
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/44f8fde8

Branch: refs/heads/gh-pages
Commit: 44f8fde8f3856f2a91d4becb6d21a8b029d67531
Parents: 0b09f9b
Author: Tomer Shiran <ts...@gmail.com>
Authored: Fri May 15 22:41:27 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Fri May 15 22:41:27 2015 -0700

----------------------------------------------------------------------
 css/responsive.css | 63 ++++++++++++++++++++++++++++++++-----------------
 css/style.css      | 16 +++++++++----
 index.html         | 57 +++++++++++++++++---------------------------
 3 files changed, 74 insertions(+), 62 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/44f8fde8/css/responsive.css
----------------------------------------------------------------------
diff --git a/css/responsive.css b/css/responsive.css
index 0f86cfa..17da16d 100644
--- a/css/responsive.css
+++ b/css/responsive.css
@@ -109,6 +109,43 @@
   div.post .post-header .int_title {
     margin-left: 0px;
   }
+  
+  
+  div.home-row{
+    width:100%;
+  }
+  
+  div.home-row:nth-child(odd) div.small{
+    width:300px;
+  }
+
+  div.home-row:nth-child(odd) div.description{
+    margin-left:20px;
+    width:auto;
+  }
+
+  div.home-row:nth-child(even) div.description{
+    margin-left:20px;
+    width:auto;
+  }
+
+  div.home-row:nth-child(even) div.small{
+    margin:0 0 15px 0;
+    width:300px;
+  }
+  
+  div.home-row div.big{
+    display:none;
+  }
+  
+  div.home-row div.small{
+    display:inline-block;
+  }
+  
+  table.intro {
+    width: 100%;
+    background: none;
+  }
 
 }
 @media (max-width: 768px) {
@@ -127,10 +164,6 @@
   br.mobile-break {
     display: block;
   }
-  table.intro {
-    width: 100%;
-    background: none;
-  }
   
   img {
     max-width: 100%;
@@ -219,38 +252,24 @@
   table.intro td h1 {
     font-size: 24px;
     text-align: left;
-    padding-left: 70px;
+    padding-left: 60px;
   }
   table.intro td.ag, table.intro td.fl, table.intro td.fam {
-    background-position: 30px 23%;
+    background-position: 20px 23%;
   }
   table.intro p {
     font-size: 16px;
     line-height: 24px;
-    padding: 2px 25px 15px 30px;
+    padding: 2px 25px 15px 20px;
     text-align: left;
   }
   table.intro span {
     position: relative;
     bottom: 10px;
-    padding-left: 30px;
+    padding-left: 20px;
     text-align: left;
   }
 
-  .home_txt p {
-    margin: 20px 10px 20px 10px;
-  }
-  .home_txt h1 {
-    font-size: 32px;
-    margin: 20px 10px 20px 10px;
-  }
-
-  .home_txt h2 {
-    font-size: 20px;
-    margin: 20px 10px 20px 10px;
-
-  }
-
 }
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/44f8fde8/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
index e7ab7ef..fe9b855 100755
--- a/css/style.css
+++ b/css/style.css
@@ -838,20 +838,20 @@ div.home-row div{
   text-align:left;
 }
 
-div.home-row:nth-child(odd) div:nth-child(1){
+div.home-row:nth-child(odd) div.big{
   width:300px;
 }
 
-div.home-row:nth-child(odd) div:nth-child(2){
+div.home-row:nth-child(odd) div.description{
   margin-left:40px;
   width:580px;
 }
 
-div.home-row:nth-child(even) div:nth-child(1){
+div.home-row:nth-child(even) div.description{
   width:580px;
 }
 
-div.home-row:nth-child(even) div:nth-child(2){
+div.home-row:nth-child(even) div.big{
   margin-left:40px;
   width:300px;
 }
@@ -872,3 +872,11 @@ div.home-row:nth-child(even) div:nth-child(2){
   font-size:16px;
   line-height:22px;
 }
+
+.home-row div.small{
+  display:none;
+}
+
+.home-row div.big{
+  display:inline-block;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/44f8fde8/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 03b787c..6860277 100755
--- a/index.html
+++ b/index.html
@@ -82,21 +82,22 @@ $(document).ready(function() {
 </div>
 
 <div class="home-row">
-  <div><img src="{{ site.baseurl }}/images/home-any.png" style="width:300px" /></div>
-  <div>
+  <div class="big"><img src="{{ site.baseurl }}/images/home-any.png" style="width:300px" /></div>
+  <div class="description">
     <h1>Query any non-relational datastore (well, almost...)</h1>
     <p>Drill supports a variety of NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files. A single query can join data from multiple datastores. For example, you could join a user profile collection in MongoDB with a directory of event logs in Hadoop.</p>
     <p>Drill’s datastore-aware optimizer automatically restructures a query plan to leverage the datastore’s internal processing capabilities. In addition, Drill supports 'data locality', so it’s a good idea to co-locate Drill and the datastore on the same nodes.</p>
   </div>
+  <div class="small"><img src="{{ site.baseurl }}/images/home-any.png" style="width:300px" /></div>
 </div>
 
 <div class="home-row">
-  <div>
+  <div class="description">
     <h1>Kiss the overhead goodbye and enjoy data agility</h1>
     <p>Traditional query engines demand significant IT intervention before data can be queried. Drill gets rid of all that overhead so that users can just query the raw data in-situ. There's no need to load the data, create and maintain schemas, or transform the data before it can be processed. Instead, simply include the path to a Hadoop directory, MongoDB collection or S3 bucket in the SQL query.</p>
     <p>Drill leverages advanced query compilation and re-compilation techniques to maximize performance without requiring up-front schema knowledge.</p>
   </div>
-  <div><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
+  <div class="small big"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
   line-height: 1.5;'>SELECT * FROM <span style="font-weight:bold;color:#000;text-decoration: underline">dfs.root.`/web/logs`</span>;
   
 SELECT country, count(*)
@@ -109,39 +110,46 @@ SELECT timestamp
 </div>
 
 <div class="home-row">
-  <div><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
-  <div>
+  <div class="big"><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
+  <div class="description">
     <h1>Treat your data like a table even when it’s not</h1>
     <p>Drill features a JSON data model that enables it to query complex/nested data and rapidly evolving structure commonly seen in modern applications and non-relational datastores. Drill also provides intuitive extensions to SQL so that the user can easily query complex data.
     <p>Drill is the only columnar query engine that supports complex data. It features an in-memory shredded columnar representation for complex data which allows Drill to achieve columnar speed with the flexibility of an internal JSON document model.</p>
   </div>
+  <div class="small"><img src="{{ site.baseurl }}/images/home-json.png" style="width:250px" /></div>
 </div>
 
 <div class="home-row">
-  <div>
+  <div class="description">
     <h1>Keep using the BI tools you love</h1>
     <p>Drill supports standard SQL. Business users, analysts and data scientists can use standard BI/analytics tools such as Tableau, Qlik, MicroStrategy, Spotfire, SAS and Excel to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations.</p>
     <p>Drill’s virtual datasets allow even the most complex, non-relational data to be mapped into BI-friendly structures which users can explore and visualize using their tool of choice.</p>
   </div>
-  <div><img src="{{ site.baseurl }}/images/home-bi.png" style="width:300px" /></div>
+  <div class="small big"><img src="{{ site.baseurl }}/images/home-bi.png" style="width:300px" /></div>
 </div>
 
 <div class="home-row">
-  <div><div><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
+  <div class="big"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
   line-height: 1.5;'>$ curl j.mp/drill-latest -o drill.tgz
 $ tar xzf drill.tgz
 $ cd apache-drill-1.0.0
 $ bin/drill-embedded
-</pre></div></div>
-  <div>
+</pre></div>
+  <div class="description">
     <h1>Scale from one laptop to 1000s of servers</h1>
     <p>We made it easy to download and run Drill on your laptop. It runs on Mac, Windows and Linux, and within a minute or two you’ll be exploring your data. When you’re ready for prime time, deploy Drill on a cluster of commodity servers and take advantage of the world’s most scalable and high performance execution engine.
     <p>Drill’s symmetrical architecture (all nodes are the same) and simple installation makes it easy to deploy and operate very large clusters.</p>
   </div>
+  <div class="small"><pre style='background:#f3f5f7;color:#2a333c;border:solid 1px #aaa;  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;  font-size: 12px;
+    line-height: 1.5;'>$ curl j.mp/drill-latest -o drill.tgz
+  $ tar xzf drill.tgz
+  $ cd apache-drill-1.0.0
+  $ bin/drill-embedded
+  </pre></div>
 </div>
 
 <div class="home-row">
-  <div>
+  <div class="description">
     <h1>No more waiting for coffee</h1>
     <p>Drill isn’t the world’s first query engine, but it’s the first that combines both flexibility and speed. To achieve this, Drill features a radically different architecture that enables record-breaking performance without sacrificing the flexibility offered by the JSON document model. For example:<ul>
 <li>Columnar execution engine (the first ever to support complex data!)</li>
@@ -150,29 +158,6 @@ $ bin/drill-embedded
<li>Locality-aware execution that reduces network traffic when Drill is co-located with the datastore</li>
 <li>Advanced cost-based optimizer that pushes processing into the datastore when possible</li></ul></p>
   </div>
-  <div><img src="{{ site.baseurl }}/images/home-coffee.jpg" style="width:300px" /></div>
+  <div class="small big"><img src="{{ site.baseurl }}/images/home-coffee.jpg" style="width:300px" /></div>
 </div>
 
-
-
-<!--
-<div class="home_txt mw">
-  <p>The 40-year monopoly of the relational database is over. The explosion of data in recent years and the shift towards rapid application development have led to the rise of non-relational datastores including Hadoop, NoSQL and cloud storage. Organizations are increasingly leveraging these systems for new and existing applications due to their flexibility, scalability and price advantages. Drill is built from the ground up to enable business users, analysts, data scientists and developers to explore and analyze the data in these systems while maintaining their unique agility and flexibility advantages.</p>
-
-  <a name="agility" class="anchor"></a>
-  <h1>Agility</h1>
-  <img src="images/home-img1.jpg" alt="Agility" width="606" />
-
-  <p>Drill is unlike any other query engine. Traditional query engines demand significant IT intervention before data can be queried. Drill gets rid of all that overhead so that users can just query the raw data in-situ at record speeds. There's no need to load the data, create and maintain schemas, or transform the data before it can be processed. For example, the user can directly query Hadoop directories, MongoDB collections, S3 buckets and more. Drill leverages advanced query compilation and re-compilation techniques to maximize performance without requiring up-front schema knowledge.</p>
-  
-  <a name="flexibility" class="anchor"></a>
-  <h1>Flexibility</h1>
-  <img src="images/home-img2.jpg" alt="Agility" width="635" />
-
-  <p>Drill features a JSON data model that allows it to query, without flattening, both simple and complex/nested data as well as rapidly evolving structures commonly seen with modern applications and non-relational datastores. Drill also provides intuitive extensions to SQL to work with complex/nested data. Drill achieves high performance via an in-memory shredded columnar representation for complex data. In fact, Drill is the only columnar query engine that supports complex data.</p>
-  <a name="familiarity" class="anchor"></a>
-  <h1>Familiarity</h1>
-  <img src="images/home-img3.jpg" alt="familiarity" width="380" />
-  <p>Drill supports standard SQL. Business users, analysts and data scientists can use standard BI/analytics tools such as Tableau, QlikView, MicroStrategy, Spotfire, SAS and Excel to interact with non-relational datastores by leveraging Drill's JDBC and ODBC drivers. Developers can leverage Drill's simple REST API in their custom applications to create beautiful visualizations based on data in their non-relational datastores. Users can also plug-and-play with Hive environments to enable ad-hoc low latency queries on existing Hive tables and reuse Hive's metadata, hundreds of file formats and UDFs out of the box.</p>
-</div>
--->
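
The index.html copy in this commit notes that a single query can join data from multiple datastores. A minimal sketch of such a cross-datastore join, assuming a mongo storage plugin with a hypothetical app.`users` collection alongside the dfs.root.`/web/logs` path shown on the page (column names are illustrative):

    -- Hypothetical join between a MongoDB collection and a directory of log files.
    SELECT u.user_id, COUNT(*) AS events
    FROM mongo.app.`users` u
    JOIN dfs.root.`/web/logs` l ON l.user_id = u.user_id
    GROUP BY u.user_id;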


[26/26] drill git commit: Merge branch 'gh-pages' of https://git-wip-us.apache.org/repos/asf/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://git-wip-us.apache.org/repos/asf/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1b7072c5
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1b7072c5
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1b7072c5

Branch: refs/heads/gh-pages
Commit: 1b7072c5d49b571779a7d3a00c8bb953e127aa06
Parents: 58711e5 a59289d
Author: Tomer Shiran <ts...@gmail.com>
Authored: Sat May 16 23:50:44 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Sat May 16 23:50:44 2015 -0700

----------------------------------------------------------------------
 _docs/configure-drill/075-configuring-user-authentication.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------



[02/26] drill git commit: read_numbers_as_double enhancement

Posted by ts...@apache.org.
read_numbers_as_double enhancement


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/2a06d657
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/2a06d657
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/2a06d657

Branch: refs/heads/gh-pages
Commit: 2a06d657edfaa62b733f9972a12ee93e12bb1a92
Parents: 1cec6cc
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Wed May 13 17:56:40 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Wed May 13 17:56:40 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 129 ++++++++++---------
 .../050-json-data-model.md                      |  23 ++--
 _docs/sql-reference/090-sql-extensions.md       |   2 +-
 .../030-handling-different-data-types.md        |   7 +-
 4 files changed, 88 insertions(+), 73 deletions(-)
----------------------------------------------------------------------
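
The files listed above document Drill's boot, system, and session options. As a sketch of how such options are typically applied, using the option name store.json.read_numbers_as_double implied by the commit title (verify the exact name against sys.options on your build):

    -- Session scope affects only the current connection; system scope is cluster-wide.
    ALTER SESSION SET `store.json.read_numbers_as_double` = true;
    ALTER SYSTEM SET `store.json.read_numbers_as_double` = true;

    -- Confirm the current value.
    SELECT * FROM sys.options WHERE name LIKE '%read_numbers%';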


http://git-wip-us.apache.org/repos/asf/drill/blob/2a06d657/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 37de559..587c698 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -12,68 +12,75 @@ Drill sources the local `<drill_installation_directory>/conf` directory.
 The sys.options table in Drill contains information about boot (start-up) and system options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The sys.options table also contains many system options, some of which are described in detail the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). The following table lists the options in alphabetical order and provides a brief description of supported options:
 
 ## System Options
-The sys.options table lists the following options that you can set at the session or system level as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options) 
+The sys.options table lists the following options that you can set as a system or session option as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options) 
 
-| Name                                           | Default    | Comments                                                                                                                                                                                                                                                                                                                                                         |
-|------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| drill.exec.functions.cast_empty_string_to_null | FALSE      | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
-| drill.exec.storage.file.partition.column.label | dir        | Accepts a string input.                                                                                                                                                                                                                                                                                                                                          |
-| exec.errors.verbose                            | FALSE      | Toggles verbose output of executable error messages                                                                                                                                                                                                                                                                                                              |
-| exec.java_compiler                             | DEFAULT    | Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.                                                                                                                                                |
-| exec.java_compiler_debug                       | TRUE       | Toggles the output of debug-level compiler error messages in runtime generated code.                                                                                                                                                                                                                                                                             |
-| exec.java_compiler_janino_maxsize              | 262144     | See the exec.java_compiler option comment. Accepts inputs of type LONG.                                                                                                                                                                                                                                                                                          |
-| exec.max_hash_table_size                       | 1073741824 | Ending size for hash tables. Range: 0 - 1073741824                                                                                                                                                                                                                                                                                                               |
-| exec.min_hash_table_size                       | 65536      | Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824                                                                                                                                                                                                                                              |
-| exec.queue.enable                              | FALSE      | Changes the state of query queues to control the number of queries that run simultaneously.                                                                                                                                                                                                                                                                      |
-| exec.queue.large                               | 10         | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000                                                                                                                                                                                                                                                                         |
-| exec.queue.small                               | 100        | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001                                                                                                                                                                                                                                                                         |
-| exec.queue.threshold                           | 30000000   | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807                                                                                                                                                          |
-| exec.queue.timeout_millis                      | 300000     | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807                                                                                                                                                                                                                                                                |
-| planner.add_producer_consumer                  | FALSE      | Increase prefetching of data from disk. Disable for in-memory reads.                                                                                                                                                                                                                                                                                             |
-| planner.affinity_factor                        | 1.2        | Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                                                                                                                   |
-| planner.broadcast_factor                       | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.broadcast_threshold                    | 10000000   | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647                                                                                                                                                            |
-| planner.disable_exchanges                      | FALSE      | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
-| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable.                                                                                                                                   |
-| planner.enable_constant_folding                | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.enable_demux_exchange                  | FALSE      | Toggles the state of hashing to a demulitplexed exchange.                                                                                                                                                                                                                                                                                                        |
-| planner.enable_hash_single_key                 | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.enable_hashagg                         | TRUE       | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.                                                                                                                                                                                                                                          |
-| planner.enable_hashjoin                        | TRUE       | Enable the memory hungry hash join. Drill assumes that a query with have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
-| planner.enable_hashjoin_swap                   | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.enable_mergejoin                       | TRUE       | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.                                                                                                                                       |
-| planner.enable_multiphase_agg                  | TRUE       | Each minor fragment does a local aggregation in phase 1, distributes on a hash basis using GROUP-BY keys partially aggregated results to other fragments, and all the fragments perform a total aggregation using this data.                                                                                                                                     |
-| planner.enable_mux_exchange                    | TRUE       | Toggles the state of hashing to a multiplexed exchange.                                                                                                                                                                                                                                                                                                          |
-| planner.enable_nestedloopjoin                  | TRUE       | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
-| planner.enable_nljoin_for_scalar_only          | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.enable_streamagg                       | TRUE       | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
-| planner.identifier_max_length                  | 1024       |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.join.hash_join_swap_margin_factor      | 10         |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.join.row_count_estimate_factor         | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.memory.average_field_width             | 8          |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.memory.enable_memory_estimation        | FALSE      | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes these operators from the plan and negatively impacts performance.                                                                                                                                   |
-| planner.memory.hash_agg_table_factor           | 1.1        |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.memory.hash_join_table_factor          | 1.1        |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.memory.max_query_memory_per_node       | 2147483648 | Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.                                                                                                                                                                                                             |
-| planner.memory.non_blocking_operators_memory   | 64         | Range: 0-2048                                                                                                                                                                                                                                                                                                                                                    |
-| planner.nestedloopjoin_factor                  | 100        |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.partitioner_sender_max_threads         | 8          | Upper limit of threads for outbound queuing.                                                                                                                                                                                                                                                                                                                     |
-| planner.partitioner_sender_set_threads         | -1         |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.partitioner_sender_threads_factor      | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
-| planner.producer_consumer_queue_size           | 10         | How much data to prefetch from disk (in record batches) out of band of query execution                                                                                                                                                                                                                                                                           |
-| planner.slice_target                           | 100000     | The number of records manipulated within a fragment before Drill parallelizes operations.                                                                                                                                                                                                                                                                        |
-| planner.width.max_per_node                     | 3          | Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.                                                                                                                                                |
-| planner.width.max_per_query                    | 1000       | Same as max per node but applies to the query as executed by the entire cluster.                                                                                                                                                                                                                                                                                 |
-| store.format                                   | parquet    | Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text. Allowed values: 0, -1, 1000000                                                                                                                                                                                                      |
-| store.json.all_text_mode                       | FALSE      | Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.                                                                                                                                                                                                                                                                              |
-| store.mongo.all_text_mode                      | FALSE      | Similar to store.json.all_text_mode for MongoDB.                                                                                                                                                                                                                                                                                                                 |
-| store.parquet.block-size                       | 536870912  | Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                               |
-| store.parquet.compression                      | snappy     | Compression type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                  |
-| store.parquet.enable_dictionary_encoding*      | FALSE      | Do not change.                                                                                                                                                                                                                                                                                                                                                   |
-| store.parquet.use_new_reader                   | FALSE      | Not supported                                                                                                                                                                                                                                                                                                                                                    |
-| window.enable*                                 | FALSE      | Coming soon.                                                                                                                                                                                                                                                                                                                                                     |
-
-\* Not supported in this release.
+| Name                                           | Default          | Comments                                                                                                                                                                                                                                                                                                                                                         |
+|------------------------------------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| drill.exec.functions.cast_empty_string_to_null | FALSE            | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
+| drill.exec.storage.file.partition.column.label | dir              | The column label for directory levels in results of queries of files in a directory. Accepts a string input.                                                                                                                                                                                                                                                     |
+| exec.errors.verbose                            | FALSE            | Toggles verbose output of executable error messages                                                                                                                                                                                                                                                                                                              |
+| exec.java_compiler                             | DEFAULT          | Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.                                                                                                                                                |
+| exec.java_compiler_debug                       | TRUE             | Toggles the output of debug-level compiler error messages in runtime generated code.                                                                                                                                                                                                                                                                             |
+| exec.java_compiler_janino_maxsize              | 262144           | See the exec.java_compiler option comment. Accepts inputs of type LONG.                                                                                                                                                                                                                                                                                          |
+| exec.max_hash_table_size                       | 1073741824       | Ending size for hash tables. Range: 0 - 1073741824. For internal use.                                                                                                                                                                                                                                                                                            |
+| exec.min_hash_table_size                       | 65536            | Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824. For internal use.                                                                                                                                                                                                                           |
+| exec.queue.enable                              | FALSE            | Changes the state of query queues to control the number of queries that run simultaneously.                                                                                                                                                                                                                                                                      |
+| exec.queue.large                               | 10               | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000                                                                                                                                                                                                                                                                         |
+| exec.queue.small                               | 100              | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001                                                                                                                                                                                                                                                                         |
+| exec.queue.threshold                           | 30000000         | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807                                                                                                                                                          |
+| exec.queue.timeout_millis                      | 300000           | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807                                                                                                                                                                                                                                                                |
+| exec.schedule.assignment.old                   | FALSE            | Used to prevent query failure when no work units are assigned to a minor fragment, particularly when the number of files is much larger than the number of leaf fragments.                                                                                                                                                                                       |
+| exec.storage.enable_new_text_reader            | TRUE             | Enables the text reader that complies with the RFC 4180 standard for text/csv files.                                                                                                                                                                                                                                                                             |
+| new_view_default_permissions                   | 700              | Sets view permissions using an octal code in the Unix tradition.                                                                                                                                                                                                                                                                                                 |
+| planner.add_producer_consumer                  | FALSE            | Increase prefetching of data from disk. Disable for in-memory reads.                                                                                                                                                                                                                                                                                             |
+| planner.affinity_factor                        | 1.2              | Factor by which a node with endpoint affinity  is favored while creating assignment. Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                              |
+| planner.broadcast_factor                       | 1                |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.broadcast_threshold                    | 10000000         | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647                                                                                                                                                            |
+| planner.disable_exchanges                      | FALSE            | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
+| planner.enable_broadcast_join                  | TRUE             | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable.                                                                                                                                   |
+| planner.enable_constant_folding                | TRUE             | If one side of a filter condition is a constant expression, constant folding evaluates the expression in the planning phase and replaces the expression with the constant value. For example, Drill can rewrite WHERE age + 5 < 42 as WHERE age < 37.                                                                                                            |
+| planner.enable_decimal_data_type               | FALSE            | False disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive.                                                                                                                                                                                                                                              |
+| planner.enable_demux_exchange                  | FALSE            | Toggles the state of hashing to a demultiplexed exchange.                                                                                                                                                                                                                                                                                                        |
+| planner.enable_hash_single_key                 | TRUE             | Each hash key is associated with a single value.                                                                                                                                                                                                                                                                                                                 |
+| planner.enable_hashagg                         | TRUE             | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.                                                                                                                                                                                                                                          |
+| planner.enable_hashjoin                        | TRUE             | Enable the memory hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
+| planner.enable_hashjoin_swap                   | TRUE             | Enables consideration of multiple join order sequences during the planning phase. Might negatively affect the performance of some queries due to inaccuracy of estimated row count especially after a filter, join, or aggregation.                                                                                                                              |
+| planner.enable_hep_join_opt                    |                  | Enables the heuristic planner for joins.                                                                                                                                                                                                                                                                                                                         |
+| planner.enable_mergejoin                       | TRUE             | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.                                                                                                                                       |
+| planner.enable_multiphase_agg                  | TRUE             | Each minor fragment does a local aggregation in phase 1, distributes on a hash basis using GROUP-BY keys partially aggregated results to other fragments, and all the fragments perform a total aggregation using this data.                                                                                                                                     |
+| planner.enable_mux_exchange                    | TRUE             | Toggles the state of hashing to a multiplexed exchange.                                                                                                                                                                                                                                                                                                          |
+| planner.enable_nestedloopjoin                  | TRUE             | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
+| planner.enable_nljoin_for_scalar_only          | TRUE             | Supports nested loop join planning where the right input is scalar in order to enable NOT-IN, Inequality, Cartesian, and uncorrelated EXISTS planning.                                                                                                                                                                                                           |
+| planner.enable_streamagg                       | TRUE             | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
+| planner.identifier_max_length                  | 1024             | A minimum length is needed because option names are identifiers themselves.                                                                                                                                                                                                                                                                                      |
+| planner.join.hash_join_swap_margin_factor      | 10               | The number of join order sequences to consider during the planning phase.                                                                                                                                                                                                                                                                                        |
+| planner.join.row_count_estimate_factor         | 1                | The factor for adjusting the estimated row count when considering multiple join order sequences during the planning phase.                                                                                                                                                                                                                                       |
+| planner.memory.average_field_width             | 8                |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.enable_memory_estimation        | FALSE            | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes memory-constrained operators from the plan, which can negatively impact performance.                                                                                                                |
+| planner.memory.hash_agg_table_factor           | 1.1              |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.hash_join_table_factor          | 1.1              |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.max_query_memory_per_node       | 2147483648 bytes | Sets the maximum estimate of memory for a query per node in bytes. If the estimate is too low, Drill re-plans the query without memory-constrained operators.                                                                                                                                                                                                    |
+| planner.memory.non_blocking_operators_memory   | 64               | Extra query memory per node for non-blocking operators. This option is currently used only for memory estimation. Range: 0-2048 MB                                                                                                                                                                                                                               |
+| planner.nestedloopjoin_factor                  | 100              |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.partitioner_sender_max_threads         | 8                | Upper limit of threads for outbound queuing.                                                                                                                                                                                                                                                                                                                     |
+| planner.partitioner_sender_set_threads         | -1               |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.partitioner_sender_threads_factor      | 2                |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.producer_consumer_queue_size           | 10               | How much data to prefetch from disk (in record batches) out of band of query execution                                                                                                                                                                                                                                                                           |
+| planner.slice_target                           | 100000           | The number of records manipulated within a fragment before Drill parallelizes operations.                                                                                                                                                                                                                                                                        |
+| planner.width.max_per_node                     | 3                | Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.                                                                                                                                                |
+| planner.width.max_per_query                    | 1000             | Same as max per node but applies to the query as executed by the entire cluster. For example, this value might be the number of active Drillbits, or a higher number to return results faster.                                                                                                                                                                   |
+| store.format                                   | parquet          | Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text.                                                                                                                                                                                                                                     |
+| store.json.all_text_mode                       | FALSE            | Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.                                                                                                                                                                                                                                                                              |
+| store.json.extended_types                      | FALSE            | Turns on special JSON structures that Drill serializes for storing more type information than the [four basic JSON types](http://docs.mongodb.org/manual/reference/mongodb-extended-json/).                                                                                                                                                                      |
+| store.json.read_numbers_as_double              | FALSE            | Reads numbers with or without a decimal point as DOUBLE. Prevents schema change errors.                                                                                                                                                                                                                                                                          |
+| store.mongo.all_text_mode                      | FALSE            | Similar to store.json.all_text_mode for MongoDB.                                                                                                                                                                                                                                                                                                                 |
+| store.mongo.read_numbers_as_double             | FALSE            | Similar to store.json.read_numbers_as_double.                                                                                                                                                                                                                                                                                                                    |
+| store.parquet.block-size                       | 536870912        | Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                               |
+| store.parquet.compression                      | snappy           | Compression type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                  |
+| store.parquet.enable_dictionary_encoding       | FALSE            | Do not change.                                                                                                                                                                                                                                                                                                                                                   |
+| store.parquet.use_new_reader                   | FALSE            | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
+| store.text.estimated_row_size_bytes            | 100              | Estimate of the row size in a delimited text file, such as csv. The closer to actual, the better the query plan. Used for all csv files in the system/session where the value is set. Impacts the decision to plan a broadcast join or not.                                                                                                                      |
+| window.enable                                  | FALSE            | Not supported in this release. Coming soon.                                                                                                                                                                                                                                                                                                                      |
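+
+For example (a quick sketch; the specific options shown here are chosen only as illustrations), you can inspect these options in the `sys.options` table and change them at the system or session level:
+
+    -- List the current queueing options
+    SELECT * FROM sys.options WHERE name LIKE 'exec.queue.%';
+    -- Enable query queueing for the whole cluster
+    ALTER SYSTEM SET `exec.queue.enable` = true;
+    -- Disable hash join for the current session only
+    ALTER SESSION SET `planner.enable_hashjoin` = false;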
 
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/2a06d657/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index f9ab8bf..29efeb2 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -28,7 +28,7 @@ JSON data consists of the following types:
 * Value: a string, number, true, false, null
 * Whitespace: used between tokens
 
-The following table shows SQL-JSON data type mapping, assuming you use the default `all_text_mode` option setting, false: 
+The following table shows SQL-JSON data type mapping: 
 
 | SQL Type | JSON Type | Description                                                                                   |
 |----------|-----------|-----------------------------------------------------------------------------------------------|
@@ -37,16 +37,23 @@ The following table shows SQL-JSON data type mapping, assuming you use the defau
 | DOUBLE   | Numeric   | Number having a decimal point in JSON, 8-byte double precision floating point number in Drill |
 | VARCHAR  | String    | Character string of variable length                                                           |
 
-Drill does not support JSON lists of different types. For example, JSON does not enforce types or distinguish between integers and floating point values. When reading numerical values from a JSON file, Drill distinguishes integers from floating point numbers by the presence or lack of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error.
+By default, Drill does not support JSON lists of different types. For example, JSON does not enforce types or distinguish between integers and floating point values. When reading numerical values from a JSON file, Drill distinguishes integers from floating point numbers by the presence or lack of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. You use the following options to read JSON lists of different types:
+
+* `store.json.read_numbers_as_double`  
+  Reads numbers from JSON files with or without a decimal point as DOUBLE.
+* `store.json.all_text_mode`
+  Reads all data from JSON files as VARCHAR.
+
+The default setting of the `store.json.all_text_mode` and `store.json.read_numbers_as_double` session/system options is false. Enable the latter if the JSON contains both integers and floating point numbers. Using either option prevents schema errors, but using `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`: You do not have to cast every number from VARCHAR to DOUBLE or BIGINT when you query the JSON file.
 
 ### Handling Type Differences
-Use the all text mode to prevent the schema change error that occurs from when a JSON list includes different types, as described in the previous section. Set the `store.json.all_text_mode` property to true.
+To prevent the schema change error that occurs when a JSON list includes numbers with and without a decimal point, as described in the previous section, set the `store.json.read_numbers_as_double` property to true.
 
-    ALTER SYSTEM SET `store.json.all_text_mode` = true;
+    ALTER SYSTEM SET `store.json.read_numbers_as_double` = true;
 
-When you set this option, Drill reads all data from the JSON files as VARCHAR. After reading the data, use a SELECT statement in Drill to cast data as follows:
+When you set this option, Drill reads all numbers from the JSON files as DOUBLE. After reading the data, use a SELECT statement in Drill to cast data as follows:
 
-* Cast JSON values to [SQL types]({{ site.baseurl }}/docs/data-types), such as BIGINT, DECIMAL, FLOAT, and INTEGER.
+* Cast JSON values to [SQL types]({{ site.baseurl }}/docs/data-types), such as BIGINT, FLOAT, and INTEGER.
 * Cast JSON strings to [Drill Date/Time Data Type Formats]({{ site.baseurl }}/docs/supported-date-time-data-type-formats).
 
 Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql). “Query Complex Data” shows how to access nested arrays.
@@ -472,9 +479,9 @@ Drill cannot read JSON files containing changes in the schema. For example, atte
 
 Drill interprets numbers that do not have a decimal point as BigInt values. In this example, Drill recognizes the first two coordinates as doubles and the third coordinate as a BigInt, which causes an error. 
                 
-Workaround: Set the `store.json.all_text_mode` property, described earlier, to true.
+Workaround: Set the `store.json.read_numbers_as_double` property, described earlier, to true.
 
-    ALTER SYSTEM SET `store.json.all_text_mode` = true;
+    ALTER SYSTEM SET `store.json.read_numbers_as_double` = true;
 
 ### Selecting all in a JSON directory query
 Drill currently returns only fields common to all the files in a [directory query]({{ site.baseurl }}/docs/querying-directories) that selects all (SELECT *) JSON files.

http://git-wip-us.apache.org/repos/asf/drill/blob/2a06d657/_docs/sql-reference/090-sql-extensions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/090-sql-extensions.md b/_docs/sql-reference/090-sql-extensions.md
index bb294f9..a30961c 100644
--- a/_docs/sql-reference/090-sql-extensions.md
+++ b/_docs/sql-reference/090-sql-extensions.md
@@ -13,7 +13,7 @@ Drill extends the SELECT statement for reading complex, multi-structured data. T
 Drill supports Hive and HBase as a plug-and-play data source. Drill can read tables created in Hive that use [data types compatible]({{ site.baseurl }}/docs/hive-to-drill-data-type-mapping) with Drill.  You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. 
 
 ## Extensions for JSON-related Data Sources
-For reading all JSON data as text, use the [all text mode](http://drill.apache.org/docs/handling-different-data-types/#all-text-mode-option) extension. Drill extends SQL to provide access to repeating values in arrays and arrays within arrays (array indexes). You can use these extensions to reach into deeply nested data. Drill extensions use standard JavaScript notation for referencing data elements in a hierarchy, as shown in ["Analyzing JSON."]({{ site.baseurl }}/docs/json-data-model#analyzing-json)
+For reading JSON numbers as DOUBLE or reading all JSON data as VARCHAR, use a [store.json option](http://drill.apache.org/docs/handling-different-data-types/#reading-numbers-of-different-types-from-json). Drill extends SQL to provide access to repeating values in arrays and arrays within arrays (array indexes). You can use these extensions to reach into deeply nested data. Drill extensions use standard JavaScript notation for referencing data elements in a hierarchy, as shown in ["Analyzing JSON."]({{ site.baseurl }}/docs/json-data-model#analyzing-json)
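+
+For example (a sketch only; the file path and field names are hypothetical), the JavaScript-style notation reaches into nested objects and arrays directly in a SELECT:
+
+    -- t.address.city reads a nested object member; t.phones[0] reads the first element of an array
+    SELECT t.address.city, t.phones[0] AS primary_phone
+    FROM dfs.`/tmp/people.json` t;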
 
 ## Extensions for Parquet Data Sources
 SQL does not support all Parquet data types, so Drill infers data types in many instances. Users [cast] ({{ site.baseurl }}/docs/sql-functions) data types to ensure getting a particular data type. Drill offers more liberal casting capabilities than SQL for Parquet conversions if the Parquet data is of a logical type. You can use the default dfs storage plugin installed with Drill for reading and writing Parquet files as shown in the section, [“Parquet Format.”]({{ site.baseurl }}/docs/parquet-format)

http://git-wip-us.apache.org/repos/asf/drill/blob/2a06d657/_docs/sql-reference/data-types/030-handling-different-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/030-handling-different-data-types.md b/_docs/sql-reference/data-types/030-handling-different-data-types.md
index a8f785e..64121dd 100644
--- a/_docs/sql-reference/data-types/030-handling-different-data-types.md
+++ b/_docs/sql-reference/data-types/030-handling-different-data-types.md
@@ -56,10 +56,11 @@ The following example shows a JSON array having complex type values:
       ]
   
 
-## All text mode option
-All text mode is a system option for controlling how Drill implicitly casts JSON data. When reading numerical values from a JSON file, Drill implicitly casts a number to the DOUBLE or BIGINT type depending on the presence or absence a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. To prevent Drill from attempting to read such data, [set all_text_mode]({{ site.baseurl }}/docs/json-data-model#handling-type-differences) to true. In all text mode, Drill implicitly casts JSON data to VARCHAR, which you can subsequently cast to desired types.
+## Reading numbers of different types from JSON
 
-Drill reads numbers without decimal point as BIGINT values by default. The range of BIGINT is -9223372036854775808 to 9223372036854775807. A BIGINT result outside this range produces an error. Use `all_text_mode` to select data as VARCHAR and then cast the data to a numerical type.
+The `store.json.read_numbers_as_double` and `store.json.all_text_mode` system/session options control how Drill implicitly casts JSON data. By default, when reading numerical values from a JSON file, Drill implicitly casts a number to the DOUBLE or BIGINT type depending on the presence or absence of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. Drill reads numbers without a decimal point as BIGINT values. The range of BIGINT is -9223372036854775808 to 9223372036854775807. A BIGINT result outside this range produces an error.
+
+To prevent Drill from attempting to read such data, set `store.json.read_numbers_as_double` or `store.json.all_text_mode` to true. With `store.json.all_text_mode` set to true, Drill implicitly casts JSON data to VARCHAR, and you then cast the VARCHAR values to the types you want the returned data to represent. With `store.json.read_numbers_as_double` set to true, Drill casts numbers in the JSON file to DOUBLE, and you cast the DOUBLE values to other numeric types, such as FLOAT and INTEGER, only as needed. Using `store.json.read_numbers_as_double` typically involves less casting on your part than using `store.json.all_text_mode`.
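+
+For example (a sketch only; the file and column names are hypothetical), the two options differ mainly in how much casting the query needs:
+
+    -- Alternative 1: read everything as VARCHAR, then cast the numbers you need
+    ALTER SESSION SET `store.json.all_text_mode` = true;
+    SELECT CAST(price AS DOUBLE) AS price FROM dfs.`/tmp/prices.json`;
+
+    -- Alternative 2: read numbers as DOUBLE and cast further only when another numeric type is needed
+    ALTER SESSION SET `store.json.all_text_mode` = false;
+    ALTER SESSION SET `store.json.read_numbers_as_double` = true;
+    SELECT CAST(price AS FLOAT) AS price FROM dfs.`/tmp/prices.json`;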
 
 ## Guidelines for Using Float and Double
 


[22/26] drill git commit: Migrated site from CSS to SASS. Consolidated CSS and JS files. More CDN use.

Posted by ts...@apache.org.
Migrated site from CSS to SASS. Consolidated CSS and JS files. More CDN use.


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/59bc9151
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/59bc9151
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/59bc9151

Branch: refs/heads/gh-pages
Commit: 59bc91517abf7f6b9334f366a42dc872e0d12917
Parents: d8c9599
Author: Tomer Shiran <ts...@gmail.com>
Authored: Sat May 16 22:41:22 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Sat May 16 22:41:22 2015 -0700

----------------------------------------------------------------------
 _config-prod.yml              |   2 +-
 _config.yml                   |   3 +
 _includes/head.html           |  20 +-
 _layouts/docpage.html         |  57 +--
 _layouts/post.html            |   3 +-
 _sass/_doc-breadcrumbs.scss   |  41 ++
 _sass/_doc-code.scss          |  77 ++++
 _sass/_doc-content.scss       | 423 +++++++++++++++++
 _sass/_doc-syntax.scss        |  60 +++
 _sass/_download.scss          |  33 ++
 _sass/_home-code.scss         |   5 +
 _sass/_home-video-box.scss    |  55 +++
 _sass/_home-video-slider.scss |  34 ++
 _sass/_site-arrows.scss       |  83 ++++
 _sass/_site-main.scss         | 894 ++++++++++++++++++++++++++++++++++++
 _sass/_site-responsive.scss   | 264 +++++++++++
 _sass/_site-search.scss       |   9 +
 css/arrows.css                |  86 ----
 css/breadcrumbs.css           |  40 --
 css/code.css                  |  69 ---
 css/content.scss              |   6 +
 css/docpage.css               | 389 ----------------
 css/download.css              |  33 --
 css/download.scss             |   3 +
 css/home.scss                 |   4 +
 css/responsive.css            | 275 ------------
 css/search.css                |   9 -
 css/site.scss                 |   6 +
 css/style.css                 | 897 -------------------------------------
 css/syntax.css                |  60 ---
 css/video-box.css             |  55 ---
 css/video-slider.css          |  34 --
 images/home-json.png          | Bin 54424 -> 41663 bytes
 index.html                    |  18 +-
 js/drill.js                   |   4 +-
 35 files changed, 2051 insertions(+), 2000 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_config-prod.yml
----------------------------------------------------------------------
diff --git a/_config-prod.yml b/_config-prod.yml
index a1f863d..f68c645 100644
--- a/_config-prod.yml
+++ b/_config-prod.yml
@@ -1,3 +1,3 @@
 # drill.apache.org
 baseurl: ""
-noindex: 0 # Make sure this gets indexed by Google
\ No newline at end of file
+noindex: 0 # Make sure this gets indexed by Google

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_config.yml
----------------------------------------------------------------------
diff --git a/_config.yml b/_config.yml
index 3b254bd..8ca9d9c 100644
--- a/_config.yml
+++ b/_config.yml
@@ -27,3 +27,6 @@ defaults:
       type: docs # This defines the default for anything in the docs collection. An alternative would be to use "path: _docs" here.
     values:
       layout: docpage
+
+sass:
+  style: :compressed
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_includes/head.html
----------------------------------------------------------------------
diff --git a/_includes/head.html b/_includes/head.html
index 528df5f..d2e1e97 100644
--- a/_includes/head.html
+++ b/_includes/head.html
@@ -6,22 +6,16 @@
 
 <title>{% if page.title %}{{ page.title }} - {{ site.title_suffix }}{% else %}{{ site.title }}{% endif %}</title>
 
-<link href="{{ site.baseurl }}/css/syntax.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/style.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/arrows.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/breadcrumbs.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/code.css" rel="stylesheet" type="text/css">
-<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css">
-<link href="{{ site.baseurl }}/css/responsive.css" rel="stylesheet" type="text/css">
+<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css" rel="stylesheet" type="text/css"/>
+<link href="{{ site.baseurl }}/css/site.css" rel="stylesheet" type="text/css"/>
 
-<link rel="shortcut icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon">
-<link rel="icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon">
+<link rel="shortcut icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon"/>
+<link rel="icon" href="{{ site.baseurl }}/favicon.ico" type="image/x-icon"/>
 
-<script language="javascript" type="text/javascript" src="{{ site.baseurl }}/js/lib/jquery-1.11.1.min.js"></script>
-<script language="javascript" type="text/javascript" src="{{ site.baseurl }}/js/lib/jquery.easing.1.3.js"></script>
+<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js" language="javascript" type="text/javascript"></script>
+<script src="//cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js" language="javascript" type="text/javascript"></script>
 <script language="javascript" type="text/javascript" src="{{ site.baseurl }}/js/modernizr.custom.js"></script>
 <script language="javascript" type="text/javascript" src="{{ site.baseurl }}/js/script.js"></script>
 <script language="javascript" type="text/javascript" src="{{ site.baseurl }}/js/drill.js"></script>
 
-
-</head>
\ No newline at end of file
+</head>

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_layouts/docpage.html
----------------------------------------------------------------------
diff --git a/_layouts/docpage.html b/_layouts/docpage.html
index de08169..ce64c77 100644
--- a/_layouts/docpage.html
+++ b/_layouts/docpage.html
@@ -2,44 +2,45 @@
 layout: default
 title: Documentation
 ---
+<link href="{{ site.baseurl }}/css/content.css" rel="stylesheet" type="text/css">
 
-  {% include doctoc.html %}
-  {% include breadcrumbs.html %}
+{% include doctoc.html %}
+{% include breadcrumbs.html %}
 
-  <div class="main-content-wrapper">
-    <div class="main-content">
+<div class="main-content-wrapper">
+  <div class="main-content">
 
-      {% if page.relative_path %}
-        <a class="edit-link" href="https://github.com/apache/drill/blob/gh-pages/{{ page.relative_path }}" target="_blank"><i class="fa fa-pencil-square-o"></i></a>
-      {% endif %}
+    {% if page.relative_path %}
+      <a class="edit-link" href="https://github.com/apache/drill/blob/gh-pages/{{ page.relative_path }}" target="_blank"><i class="fa fa-pencil-square-o"></i></a>
+    {% endif %}
 
-      <div class="int_title">
-        <h1>{{ page.title }}</h1>
+    <div class="int_title left">
+      <h1>{{ page.title }}</h1>
 
-      </div>
+    </div>
 
-      <link href="{{ site.baseurl }}/css/docpage.css" rel="stylesheet" type="text/css">
+    <link href="{{ site.baseurl }}/css/docpage.css" rel="stylesheet" type="text/css">
 
-      <div class="int_text" align="left">
-        {% if page_data.children.size == 0 or page.noChildren %}
-          {{ content }}
-      {% else %}
+    <div class="int_text" align="left">
+      {% if page_data.children.size == 0 or page.noChildren %}
+        {{ content }}
+    {% else %}
+        <ul>
+        {% for doc0 in page_data.children %}
+          <li><a href="{{ site.baseurl }}{{ doc0.url }}">{{ doc0.title }}</a></li>
+        {% if doc0.children.size > 0 %}
           <ul>
-          {% for doc0 in page_data.children %}
-            <li><a href="{{ site.baseurl }}{{ doc0.url }}">{{ doc0.title }}</a></li>
-          {% if doc0.children.size > 0 %}
-            <ul>
-            {% for doc1 in doc0.children %}
-              <li><a href="{{ site.baseurl }}{{ doc1.url }}">{{ doc1.title }}</a></li>
-          {% endfor %}
-          </ul>
-        {% endif %}
+          {% for doc1 in doc0.children %}
+            <li><a href="{{ site.baseurl }}{{ doc1.url }}">{{ doc1.title }}</a></li>
         {% endfor %}
         </ul>
       {% endif %}
-        {% unless page.noChildren %}
-          {% include docnav.html %}
-      {% endunless %}
-      </div>
+      {% endfor %}
+      </ul>
+    {% endif %}
+      {% unless page.noChildren %}
+        {% include docnav.html %}
+    {% endunless %}
     </div>
   </div>
+</div>

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_layouts/post.html
----------------------------------------------------------------------
diff --git a/_layouts/post.html b/_layouts/post.html
index 454343a..2d6f5d1 100644
--- a/_layouts/post.html
+++ b/_layouts/post.html
@@ -1,8 +1,9 @@
 ---
 layout: default
 ---
-<div class="post int_text">
+<link href="{{ site.baseurl }}/css/content.css" rel="stylesheet" type="text/css">
 
+<div class="post int_text">
   <header class="post-header">
     <div class="int_title">
       <h1 class="post-title">{{ page.title }}</h1>

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_doc-breadcrumbs.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-breadcrumbs.scss b/_sass/_doc-breadcrumbs.scss
new file mode 100644
index 0000000..f7cdf57
--- /dev/null
+++ b/_sass/_doc-breadcrumbs.scss
@@ -0,0 +1,41 @@
+.breadcrumbs{
+  display: block;
+  padding: 0.5625rem 0 0.5625rem 0;
+  overflow: hidden;
+  margin-top: 56px;
+  margin-left: 0px;
+  list-style: none;
+  border-bottom: solid 1px #E4E4E4;
+  width: 100%;
+}
+
+.breadcrumbs>*{
+  margin: 0;
+  float: left;
+  font-size: 0.6875rem;
+  line-height: 0.6875rem;
+  text-transform: uppercase;
+  color: #334D5C;
+}
+
+.breadcrumbs>*:before{
+  content: "/";
+  color: #aaa;
+  margin: 0 0.75rem;
+  position: relative;
+  top: 1px;
+}
+
+.breadcrumbs>.current{
+  font-weight: bold;
+}
+
+.breadcrumbs>*.current {
+  cursor: default;
+  color: #333;
+}
+
+.breadcrumbs>* a {
+  color: #1a6bc7;
+  text-decoration:none;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_doc-code.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-code.scss b/_sass/_doc-code.scss
new file mode 100644
index 0000000..91749c3
--- /dev/null
+++ b/_sass/_doc-code.scss
@@ -0,0 +1,77 @@
+div.highlight pre, code{
+  background: #f5f6f7 url(../images/code-block-bg.png) 0 0 repeat;
+  border-radius: 0;
+  border: none;
+  border-left: 5px solid #494747;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  font-size: 14px;
+  line-height: 24px;
+  overflow: auto;
+  word-wrap: normal;
+  white-space: pre;
+}
+
+pre{
+  padding: 24px 12px;
+  color: #222;
+  margin: 24px 0;
+}
+
+code{
+  background: #f5f6f7 url(../images/code-block-bg.png) 0 0 repeat;
+  border-radius: 0;
+  border: none;
+  border-left: 5px;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  font-size: 14px;
+  line-height: 24px;
+  overflow: auto;
+  word-wrap: normal;
+  white-space: pre;
+}
+
+div.admonition{
+  margin: 24px 0;
+  width: auto;
+  max-width: 100%;
+  padding: 2px 12px 22px 12px;
+  border-left: 5px solid transparent;
+}
+
+.admonition .admonition-title{
+  margin-bottom: 0;
+  font-size: 12px;
+  font-weight: bold;
+  text-transform: uppercase;
+  line-height: 24px;
+}
+
+.admonition > p{
+  margin: 0 0 12.5px 0;
+}
+
+.admonition p.first{
+  margin-top: 0 !important;
+}
+
+.admonition > p.last{
+  margin-bottom: 0;
+}
+.admonition .admonition-title:after{
+  content: ":";
+  font-weight: 900;
+}
+
+.admonition.important{
+  background-color: #fff2d5;
+  border-color: #ffb618;
+}
+
+.admonition.important .admonition-title{
+  color: #ffb618;
+}
+
+.admonition.note{
+  background-color: #edf4e8;
+  border-color: #6ba442;
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_doc-content.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-content.scss b/_sass/_doc-content.scss
new file mode 100644
index 0000000..300d4f9
--- /dev/null
+++ b/_sass/_doc-content.scss
@@ -0,0 +1,423 @@
+/* Navigation menu */
+
+#menu ul li.toc-categories a{
+  height:50px;
+  padding:0;
+  text-decoration: none;
+  width: 60px;
+  text-align: center;
+  color: #bababa;
+}
+
+#menu ul li.toc-categories{
+  float:left;
+  line-height: 45px;
+  font-size: 18px;
+  display: none;
+  overflow: auto;
+}
+
+/* Bottom navigation (left and right arrows) */
+
+div.doc-nav{
+  overflow: auto;
+  width: 100%;
+  margin-top: 30px;
+}
+
+div.doc-nav a{
+  text-decoration: none;
+}
+
+div.doc-nav a:hover{
+  text-decoration: underline;
+}
+
+div.doc-nav span.previous-toc{
+  float: left;
+  width: auto;
+}
+
+div.doc-nav span.back-to-toc{
+  float: left;
+  width: auto;
+  margin-left: 15%;
+}
+
+div.doc-nav span.next-toc{
+  float: right;
+}
+
+/* Main content area */
+
+.main-content .int_text{
+  margin-left: 0px;
+  margin-top: 0px;
+}
+
+.main-content .int_text img{
+  margin: 30px 0px;
+}
+
+.main-content .int_title{
+  text-align: left;
+  margin-left: 0px;
+  margin-top: 30px;
+}
+
+.int_title.left::after{
+  left: 0px;
+}
+
+.main-content .edit-link{
+  position: relative;
+  float: right;
+  margin-top: 13px;
+  margin-right: 20px;
+  text-decoration: none;
+  font-size: 24px;
+  color: #333333;
+}
+
+div.int_title h1, div.main-content h1, div.main-content h2, div.main-content h3, div.main-content h4, div.main-content h5, div.main-content h6{
+  font-weight: normal;
+  line-height: 24px;
+  color: #313030;
+  padding: 0;
+}
+div.main-content h1, div.main-content h2{
+  margin-top: 24px;
+  margin-bottom: 24px;
+}
+
+div.main-content h3{
+  margin-top: 24px;
+  font-weight: bold;
+}
+
+div.section > h2, div.section > h3, div.section > h4{
+  margin: 24px 0;
+}
+
+div.main-content h1, div.int_title h1{
+  border-top: none;
+  font-size: 36px;
+  line-height: 48px;
+  padding: 0;
+}
+
+/***************************
+/* Sidebar doc menu styles
+***************************/
+
+.sidebar{
+  position: fixed;
+  -webkit-transform: translateZ(0);
+  background-color: #f5f6f7;
+  width: 293px;
+  height: auto;
+  top: 51px;
+  bottom: 0;
+  left: 0;
+  overflow-y: auto;
+  overflow-x: hidden;
+  font-size: 0.85em;
+  z-index: 100;
+  transition: left 0.4s cubic-bezier(.02,.01,.47,1);
+  -moz-transition: left 0.4s cubic-bezier(.02,.01,.47,1);
+  -webkit-transition: left 0.4s cubic-bezier(.02,.01,.47,1);
+}
+
+.sidebar.force-expand{
+  left: 0px;
+}
+
+aside{
+  display:block;
+}
+
+div.docsidebar{
+  font-size: 14px;
+  height: 100%;
+}
+
+div.docsidebarwrapper{
+  padding: 0;
+  padding-top: 24px;
+  padding-bottom: 130px;
+  /*min-height: 100%;*/
+  position: relative;
+}
+div.docsidebar h3{
+  padding: 0 12px;
+  font-size: 14px;
+  line-height: 24px;
+  font-weight: bold;
+  margin: -3px 0 15px 0;
+}
+
+div.docsidebarwrapper ul{
+  margin: 12px 0 0 0;
+  padding: 0;
+}
+
+div.docsidebar ul{
+  list-style: none;
+  margin: 10px 0 10px 0;
+  padding: 0;
+  color: #000;
+}
+
+div.docsidebar ul li{
+  font-weight: normal;
+  line-height: 24px;
+}
+
+.docsidebarwrapper > ul > .toctree-l1.current_section{
+  background-color: #fff;
+  border-right: 1px solid #f5f6f7;
+}
+
+.docsidebarwrapper a{
+  color: #333333;
+  text-decoration: none;
+}
+
+.docsidebarwrapper > ul > ul.current_section{
+  background-color: #fff;
+  border-right: 1px solid #f5f6f7;
+  margin-right: 0px;
+}
+
+.docsidebarwrapper > ul > .toctree-l1{
+  padding: 11px 0 0 12px;
+  line-height: 24px;
+  border-top: 1px solid #ebebed;
+}
+
+.docsidebarwrapper > ul > .toctree-l1 > a{
+  font-size: 18px;
+  line-height: 24px;
+  width: 100%;
+  display: inline-block;
+}
+
+div.docsidebar ul ul, div.docsidebar ul.want-points{
+  list-style: none outside none;
+  margin-left: 0;
+}
+
+div.docsidebar ul ul{
+  margin-top: 0px;
+  margin-bottom: 0;
+}
+
+div.docsidebar ul ul ul{
+  margin-top: -3px;
+}
+
+.docsidebarwrapper li.toctree-l1 ul > li > a{
+  line-height: 24px;
+  display: inline-block;
+  width: 100%;
+}
+
+.docsidebarwrapper li.toctree-l1 ul > li{
+  font-size: 14px;
+}
+
+div.docsidebar li.toctree-l2{
+  text-indent: -12px;
+  padding-left: 47px;
+}
+
+div.docsidebar li.toctree-l1 a:hover, div.docsidebar li.toctree-l2 a:hover, div.docsidebar li.toctree-l3 a:hover{
+  text-decoration: underline;
+}
+
+div.docsidebar li.toctree-l3 {
+  padding-left: 57px;
+  text-indent: -12px;
+}
+
+span.expand, span.contract{
+  width: 0px;
+  cursor: pointer;
+  font-size: 80%;
+  display:none; 
+  position: relative;
+  right: 5px;
+}
+
+span.expand.show, span.contract.show{
+  display: inline-block;
+}
+
+.docsidebarwrapper li.toctree-l2.current, .docsidebarwrapper li.toctree-l3.current{
+  background-color: #1A6BC7;
+  color: white;
+}
+
+.docsidebarwrapper li.toctree-l2.current.current > a, .docsidebarwrapper li.toctree-l3.current.current > a{
+  color: white;
+}
+
+.docsidebarwrapper a:hover{
+  color: black;
+}
+
+.docsidebarwrapper a{
+  display: inline-block;
+  width: 100%;
+}
+
+.docsidebarwrapper ul{
+  font-size: 14px;
+}
+
+.permalink{
+  text-decoration:none;
+  font-size: 70%;
+  color: #1A6BC7;
+}
+
+.permalink.hide{
+  visibility: hidden;
+}
+
+/* Responsive */
+
+@media (max-width: 320px){
+  .int_text{
+    width: auto;
+  }
+  .int_title{
+    width: auto;
+  }
+}
+
+@media (max-width: 768px){
+  .int_text{
+    width: auto;
+  }
+  .int_title{
+    width: auto;
+  }
+}
+
+@media (max-width: 1024px){
+  div#footer{
+    margin-left: 0px;
+    width: 100%;
+  }
+  
+  div#footer .wrapper{
+    padding: 0 20px;
+  }
+  
+  .main-content .edit-link{
+    margin-right: 0px; /* container takes care of right margin */
+  }
+
+  #menu ul li.toc-categories{
+    display: inline-block;
+    width: 60px;
+    text-align: center;
+  }
+  
+  #menu ul li.logo{
+    padding-left: 0px;
+  }
+  
+  #menu.force-expand ul li.toc-categories{
+    display: inline-block;
+  }
+
+  .page-wrap div.int_title.margin_110{
+    margin-top:110px;
+  }
+
+  li.toc-categories .expand-toc-icon{
+    font-size: 24px;
+  }
+  
+  li.toc-categories a.expand-toc-icon{
+    color: black;
+  }
+
+  .expand-toc-icon:hover, .expand-toc-icon:active{
+    color: white;
+    text-decoration: none;
+  }
+
+  .sidebar{
+    left: -293px;
+    box-shadow: 0 0 13px rgba(0,0,0,0.3);
+  }
+
+  .sidebar.reveal{
+    left: 0;
+  }
+  
+  .main-content.force-expand{
+    margin-left: 313px;
+  }
+  
+  #footer.force-expand .wrapper{
+    margin-left: 313px;
+  }
+  
+  .main-content{
+    margin: 0px 20px 0px 20px;
+  }
+  
+  .int_title{
+    margin-top: 60px;
+  }
+  
+  .int_title h1{
+    font-size: 28px;
+  }
+  
+  .page-wrap #footer{
+    width: auto;
+  }
+  
+  .breadcrumbs.force-expand li:first-of-type{
+    margin-left: 301px;
+  }
+}
+
+@media (min-width: 1025px){
+  .main-content-wrapper{
+    width: 1092px; 
+  }
+  
+  .main-content{
+    margin-left: 313px;
+  }
+  
+  #menu ul li.logo{
+    padding-left: 30px;
+  }
+  
+  #footer .wrapper{
+    margin-left: 313px;
+  }
+  
+  #footer{
+    width: 100%;
+  }
+  
+ .breadcrumbs{
+    margin-left: 0px;
+  }
+  
+  .breadcrumbs li:first-of-type{
+    margin-left: 301px;
+  }
+}
+
+#footer{
+  width: 1092px;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_doc-syntax.scss
----------------------------------------------------------------------
diff --git a/_sass/_doc-syntax.scss b/_sass/_doc-syntax.scss
new file mode 100644
index 0000000..2774b76
--- /dev/null
+++ b/_sass/_doc-syntax.scss
@@ -0,0 +1,60 @@
+.highlight  { background: #ffffff; }
+.highlight .c { color: #999988; font-style: italic } /* Comment */
+.highlight .err { color: #a61717; background-color: #e3d2d2 } /* Error */
+.highlight .k { font-weight: bold } /* Keyword */
+.highlight .o { font-weight: bold } /* Operator */
+.highlight .cm { color: #999988; font-style: italic } /* Comment.Multiline */
+.highlight .cp { color: #999999; font-weight: bold } /* Comment.Preproc */
+.highlight .c1 { color: #999988; font-style: italic } /* Comment.Single */
+.highlight .cs { color: #999999; font-weight: bold; font-style: italic } /* Comment.Special */
+.highlight .gd { color: #000000; background-color: #ffdddd } /* Generic.Deleted */
+.highlight .gd .x { color: #000000; background-color: #ffaaaa } /* Generic.Deleted.Specific */
+.highlight .ge { font-style: italic } /* Generic.Emph */
+.highlight .gr { color: #aa0000 } /* Generic.Error */
+.highlight .gh { color: #999999 } /* Generic.Heading */
+.highlight .gi { color: #000000; background-color: #ddffdd } /* Generic.Inserted */
+.highlight .gi .x { color: #000000; background-color: #aaffaa } /* Generic.Inserted.Specific */
+.highlight .go { color: #888888 } /* Generic.Output */
+.highlight .gp { color: #555555 } /* Generic.Prompt */
+.highlight .gs { font-weight: bold } /* Generic.Strong */
+.highlight .gu { color: #aaaaaa } /* Generic.Subheading */
+.highlight .gt { color: #aa0000 } /* Generic.Traceback */
+.highlight .kc { font-weight: bold } /* Keyword.Constant */
+.highlight .kd { font-weight: bold } /* Keyword.Declaration */
+.highlight .kp { font-weight: bold } /* Keyword.Pseudo */
+.highlight .kr { font-weight: bold } /* Keyword.Reserved */
+.highlight .kt { color: #445588; font-weight: bold } /* Keyword.Type */
+.highlight .m { color: #009999 } /* Literal.Number */
+.highlight .s { color: #d14 } /* Literal.String */
+.highlight .na { color: #008080 } /* Name.Attribute */
+.highlight .nb { color: #0086B3 } /* Name.Builtin */
+.highlight .nc { color: #445588; font-weight: bold } /* Name.Class */
+.highlight .no { color: #008080 } /* Name.Constant */
+.highlight .ni { color: #800080 } /* Name.Entity */
+.highlight .ne { color: #990000; font-weight: bold } /* Name.Exception */
+.highlight .nf { color: #990000; font-weight: bold } /* Name.Function */
+.highlight .nn { color: #555555 } /* Name.Namespace */
+.highlight .nt { color: #000080 } /* Name.Tag */
+.highlight .nv { color: #008080 } /* Name.Variable */
+.highlight .ow { font-weight: bold } /* Operator.Word */
+.highlight .w { color: #bbbbbb } /* Text.Whitespace */
+.highlight .mf { color: #009999 } /* Literal.Number.Float */
+.highlight .mh { color: #009999 } /* Literal.Number.Hex */
+.highlight .mi { color: #009999 } /* Literal.Number.Integer */
+.highlight .mo { color: #009999 } /* Literal.Number.Oct */
+.highlight .sb { color: #d14 } /* Literal.String.Backtick */
+.highlight .sc { color: #d14 } /* Literal.String.Char */
+.highlight .sd { color: #d14 } /* Literal.String.Doc */
+.highlight .s2 { color: #d14 } /* Literal.String.Double */
+.highlight .se { color: #d14 } /* Literal.String.Escape */
+.highlight .sh { color: #d14 } /* Literal.String.Heredoc */
+.highlight .si { color: #d14 } /* Literal.String.Interpol */
+.highlight .sx { color: #d14 } /* Literal.String.Other */
+.highlight .sr { color: #009926 } /* Literal.String.Regex */
+.highlight .s1 { color: #d14 } /* Literal.String.Single */
+.highlight .ss { color: #990073 } /* Literal.String.Symbol */
+.highlight .bp { color: #999999 } /* Name.Builtin.Pseudo */
+.highlight .vc { color: #008080 } /* Name.Variable.Class */
+.highlight .vg { color: #008080 } /* Name.Variable.Global */
+.highlight .vi { color: #008080 } /* Name.Variable.Instance */
+.highlight .il { color: #009999 } /* Literal.Number.Integer.Long */

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_download.scss
----------------------------------------------------------------------
diff --git a/_sass/_download.scss b/_sass/_download.scss
new file mode 100644
index 0000000..0c92169
--- /dev/null
+++ b/_sass/_download.scss
@@ -0,0 +1,33 @@
+.table {
+  display: table;   /* Allow the centering to work */
+  margin: 0 auto;
+}
+
+ul#download_buttons {
+  list-style: none;
+  padding-left:0px;
+  margin-bottom:100px;
+}
+
+ul#download_buttons li {
+  float: left;
+  text-align: center;
+  background-color: #4aaf4c;
+  margin:0 10px 10px 10px;
+  width: 235px;
+  line-height: 60px;
+}
+
+ul#download_buttons li a{
+  text-decoration:none;
+  color:#fff;
+  display:block;
+}
+ 
+ul#download_buttons li a:hover {
+  background-color: #348436;
+}
+
+div#download_bar:after{
+  clear:both;
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_home-code.scss
----------------------------------------------------------------------
diff --git a/_sass/_home-code.scss b/_sass/_home-code.scss
new file mode 100644
index 0000000..11806e7
--- /dev/null
+++ b/_sass/_home-code.scss
@@ -0,0 +1,5 @@
+pre{
+  padding: 24px 12px;
+  color: #222;
+  margin: 24px 0;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_home-video-box.scss
----------------------------------------------------------------------
diff --git a/_sass/_home-video-box.scss b/_sass/_home-video-box.scss
new file mode 100644
index 0000000..6a46f37
--- /dev/null
+++ b/_sass/_home-video-box.scss
@@ -0,0 +1,55 @@
+div#video-box{
+  position:relative;
+  float:right;
+  width:320px;
+  height:160px;
+}
+
+div#video-box div.background{
+  position:absolute;
+  background-color:#fff;
+  height:100%;
+  width:100%;
+  opacity:.15;
+}
+
+div#video-box div.row{
+  position:absolute;
+  height:40px;
+  width:100%;
+  border-bottom:dotted 1px #999;
+  top:0px;
+  font-size:12px;
+  color:black;
+  line-height:40px;
+  
+}
+
+div#video-box div.row.r1{
+  top:40px;
+}
+
+div#video-box div.row.r2{
+  top:80px;
+}
+
+div#video-box div.row.r3{
+  top:120px;
+  border-bottom:none;
+}
+
+div#video-box div.row div{
+  overflow: hidden;
+  margin:5px;
+  height:30px;
+  float:left;
+}
+
+div#video-box div.row div img{
+  height:40px;
+  margin:-5px 0;
+}
+
+div#video-box a{
+  color:#006;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_home-video-slider.scss
----------------------------------------------------------------------
diff --git a/_sass/_home-video-slider.scss b/_sass/_home-video-slider.scss
new file mode 100644
index 0000000..7e72996
--- /dev/null
+++ b/_sass/_home-video-slider.scss
@@ -0,0 +1,34 @@
+div#video-slider{
+  width:260px;
+  float:right;
+}
+
+div.slide{
+  position:relative;
+  padding:0px 0px;
+}
+
+img.thumbnail{
+  width:100%;
+  margin:0 auto;
+}
+
+img.play{
+  position:absolute;
+  width:40px;
+  left:110px;
+  top:60px;
+}
+
+div.title{
+  display:block;
+  bottom:0px;
+  left:0px;
+  width:100%;
+  line-height:20px;
+  color:#000;
+  opacity:.4;
+  text-align:center;
+  font-size:12px;
+  background-color:#fff;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_site-arrows.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-arrows.scss b/_sass/_site-arrows.scss
new file mode 100644
index 0000000..c16b0e3
--- /dev/null
+++ b/_sass/_site-arrows.scss
@@ -0,0 +1,83 @@
+.nav-circlepop a{
+  width: 50px;
+  height: 50px;
+}
+
+.nav-circlepop a::before{
+  position: absolute;
+  top: 0;
+  left: 0;
+  width: 100%;
+  height: 100%;
+  border-radius: 50%;
+  background: #fff;
+  content: '';
+  opacity: 0;
+  -webkit-transition: -webkit-transform 0.3s, opacity 0.3s;
+  transition: transform 0.3s, opacity 0.3s;
+  -webkit-transform: scale(0.9);
+  transform: scale(0.9);
+}
+
+.nav-circlepop .icon-wrap{
+  position: relative;
+  display: block;
+  margin: 10% 0 0 10%;
+  width: 80%;
+  height: 80%;
+}
+
+.nav-circlepop a.next .icon-wrap{
+  -webkit-transform: rotate(180deg);
+  transform: rotate(180deg);
+}
+
+.nav-circlepop .icon-wrap::before,
+.nav-circlepop .icon-wrap::after{
+  position: absolute;
+  left: 25%;
+  width: 3px;
+  height: 50%;
+  background: #fff;
+  content: '';
+  -webkit-transition: -webkit-transform 0.3s, background-color 0.3s;
+  transition: transform 0.3s, background-color 0.3s;
+  -webkit-backface-visibility: hidden;
+  backface-visibility: hidden;
+}
+
+.nav-circlepop .icon-wrap::before{
+  -webkit-transform: translateX(-50%) rotate(30deg);
+  transform: translateX(-50%) rotate(30deg);
+  -webkit-transform-origin: 0 100%;
+  transform-origin: 0 100%;
+}
+
+.nav-circlepop .icon-wrap::after{
+  top: 50%;
+  -webkit-transform: translateX(-50%) rotate(-30deg);
+  transform: translateX(-50%) rotate(-30deg);
+  -webkit-transform-origin: 0 0;
+  transform-origin: 0 0;
+}
+
+.nav-circlepop a:hover::before{
+  opacity: 1;
+  -webkit-transform: scale(1);
+  transform: scale(1);
+}
+
+.nav-circlepop a:hover .icon-wrap::before,
+.nav-circlepop a:hover .icon-wrap::after{
+  background: #4aaf4c;
+}
+
+.nav-circlepop a:hover .icon-wrap::before{
+  -webkit-transform: translateX(-50%) rotate(45deg);
+  transform: translateX(-50%) rotate(45deg);
+}
+
+.nav-circlepop a:hover .icon-wrap::after{
+  -webkit-transform: translateX(-50%) rotate(-45deg);
+  transform: translateX(-50%) rotate(-45deg);
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_site-main.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-main.scss b/_sass/_site-main.scss
new file mode 100644
index 0000000..8c69c50
--- /dev/null
+++ b/_sass/_site-main.scss
@@ -0,0 +1,894 @@
+@charset "UTF-8";
+
+* {
+  outline:none;
+}
+
+html {
+  height: 100%;
+}
+
+body {
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
+  padding:0;
+  margin: 0;
+  height: 100%;
+}
+.page-wrap{
+  min-height: 100%;
+  margin-bottom: -60px; /* negative value of footer height */
+}
+
+.mw { min-width:999px; width:100%; }
+
+.nw { white-space:nowrap; }
+
+a.anchor {
+  display:none;
+  font-size:0px;
+  position:absolute;
+  margin-top:-50px;
+}
+
+.home_txt a.anchor {
+  margin-top:-90px;
+}
+
+#menu {
+  position:fixed;  
+  top:0;
+  width:100%;
+  z-index:5;
+}
+
+/* This seems to override the menu's fixed positioning. Fixed positioning keeps the menu available at the top of the viewport without requiring JS. */
+/*
+#menu.r {
+  position:absolute;
+}
+*/
+
+#menu ul {
+  background:#051221;
+  display:block;
+  font-size:0px;
+  list-style:none;
+  overflow:hidden;
+  padding:0;
+  text-align:right;
+  /*
+  -webkit-box-shadow: 1px 1px 1px 0px rgba(0, 0, 0, 0.4);
+  -moz-box-shadow:    1px 1px 1px 0px rgba(0, 0, 0, 0.4);
+  box-shadow:         1px 1px 1px 0px rgba(0, 0, 0, 0.4);
+  */
+}
+
+#menu ul li {
+  display:inline-block;
+  font-size:14px;
+  margin:0;
+  padding:0;
+}
+
+#menu ul li.logo {
+  float:left;
+  padding-left:30px;
+}
+
+#menu ul li.logo:hover { background:none; }
+
+#menu ul li.logo a {
+  background:url(../images/apachedrill.png) no-repeat center;
+  background-size:auto 27px;
+  display:block;
+  height:50px;
+  padding:0;
+  width:80px;
+}
+
+#menu ul li a {
+  color:#FFF;
+  text-decoration:none;
+  line-height:50px;
+  padding:14px 20px;
+}
+
+#menu ul li.d, #menu ul li.d:hover {
+  background-color: #4aaf4c;
+  font-size:12px;
+  text-transform:uppercase;
+}
+#menu ul li.d a .fam {
+  position: relative;
+  right: 8px;
+  font-size: 14px;
+}
+
+#menu ul li.d:hover {
+  background-color:#348436;
+}
+
+#menu ul li.d * {
+  cursor:pointer;
+}
+
+#menu ul li.d a {
+  padding:0px 30px 0 40px;
+  display:block;
+}
+
+
+#menu ul li.l {
+  cursor:pointer;  
+}
+
+#menu ul li.l span {
+  background:url(../images/len.png) no-repeat center;
+  background-size:auto 16px;
+  display:block;
+  line-height:50px;
+  padding:0 20px;
+  width:16px;
+}
+
+#menu ul li.l.open {
+  background-color:#145aa8;
+}
+
+#menu ul li#twitter-menu-item {
+  width:30px;
+  padding-left: 2px;
+  padding-right:10px;
+}
+
+#menu ul li#twitter-menu-item a {
+  padding: 10px;
+}
+
+#menu ul li#twitter-menu-item img {
+  width: 22px;
+}
+
+#menu ul li ul {
+  background:#1a6bc7;
+  display:none;
+  margin:0;
+  padding:0;
+  position:absolute;
+  text-align:left;
+}
+
+#menu ul li ul li {
+  display:block;
+}
+
+#menu ul li ul li a {
+  display:block;
+  line-height:30px;
+  padding:3px 20px;
+}
+
+#menu ul li ul li a:hover {
+  background:#145aa8;
+}
+
+#menu ul li:hover {
+  background:#1a6bc7;  
+}
+
+#menu ul li:hover ul {
+  display:block;
+}
+#menu ul li.clear-float{
+  display:none;
+}
+#subhead {
+  background:#145aa8;
+  color:#FFF;
+  font-size:12px;
+  font-weight:bold;
+  height:40px;
+  line-height:40px;
+  left:0px;
+  letter-spacing:1px;
+  right:0px;
+  position:fixed;  
+  text-align:center;
+  text-transform:uppercase;
+  top:10px;
+  z-index:4;  
+  
+  -webkit-transition: all 0.3s;
+  transition: all 0.3s;
+}
+
+#subhead.show {
+  top:50px;
+}
+
+#subhead ul {
+  list-style:none;
+  margin:0;
+  padding:0;
+}
+
+#subhead ul li {
+  display:inline-block;
+  list-style:none;
+  margin:0;
+  padding:0 35px 0 35px;
+}
+
+#subhead ul li a {
+  background-size:16px auto;
+  background-position:left center;
+  background-repeat:no-repeat;
+  color:#FFF;
+  display:block;
+  padding:0 0 0 25px;
+  text-decoration:none;
+}
+
+#subhead ul li.ag a {
+  background-image:url(../images/agility-w.png);
+}
+
+#subhead ul li.fl a {
+  background-image:url(../images/flexibility-w.png);
+}
+
+#subhead ul li.fam a {
+  background-image:url(../images/familiarity-w.png);
+}
+
+#header {
+  background:url(../images/reel-bg.png) no-repeat;
+  background-size:cover;
+  height:300px;
+  overflow:hidden;
+  position:relative;
+}
+
+#header .scroller {
+  margin-left:0px;  
+  overflow:hidden;
+}
+
+#header .scroller .item {
+  
+  float:left;
+  height:300px;
+  position:relative;
+  width:100%;  
+  z-index:1;
+}
+
+#header .scroller .item p a {
+  color:#FFF;
+  font-weight:bold;
+  overflow: hidden;
+  text-decoration:none;  
+  
+  position: relative;
+  display: inline-block;
+  outline: none;
+  vertical-align: bottom;
+  text-decoration: none;
+  white-space: nowrap;
+}
+
+#header .scroller .item p a::before {
+  position: absolute;
+  top: 0;
+  left: 0;
+  z-index: -1;
+  width: 100%;
+  height: 100%;
+  background: rgba(149,165,166,0.4);
+  content: '';
+  -webkit-transition: -webkit-transform 0.3s;
+  transition: transform 0.3s;
+  -webkit-transform: scaleY(0.618) translateX(-100%);
+  transform: scaleY(0.618) translateX(-100%);
+}
+
+#header .scroller .item p a:hover::before,
+#header .scroller .item p a:focus::before {
+  -webkit-transform: scaleY(0.618) translateX(0);
+  transform: scaleY(0.618) translateX(0);
+}
+
+
+#header .scroller .item .tc {
+  color:#FFF;
+  margin-left:80px;
+  position:relative;
+  width:900px;
+  margin:0 auto;
+}
+
+#header .scroller .item .tc h1, #header .scroller .item .tc h2 {
+  font-size:36px;
+  font-weight:lighter;
+  margin:0 0 8px 0;
+  padding:0;
+}
+#header .scroller .item .tc h2 {
+  font-size: 18px;
+}
+
+#header .scroller .item .tc p {
+  font-size:14px;
+  font-weight:lighter;
+  line-height:24px;
+  margin:0;
+  padding:0;
+}
+
+#header .scroller .item .btn {
+  background: none;
+  border: 2px solid #fff;
+  cursor: pointer;
+  color:#FFF;
+  display: inline-block;
+  font-size:12px;
+  font-weight: bold;
+  outline: none;
+  margin-top:18px;
+  position: relative;
+  padding: 5px 30px;
+  text-decoration:none;
+  text-transform: uppercase;
+  
+  -webkit-transition: all 0.3s;
+  -moz-transition: all 0.3s;
+  transition: all 0.3s;
+}
+
+#header .scroller .item .btn:after {
+  content: '';
+  position: absolute;
+  z-index: -1;
+  -webkit-transition: all 0.3s;
+  -moz-transition: all 0.3s;
+  transition: all 0.3s;
+}
+
+#header .scroller .item .btn-1c:after {
+  width: 0%;
+  height: 100%;
+  top: 0;
+  left: 0;
+  background: #fff;
+}
+
+#header .scroller .item .btn-1c:hover,
+#header .scroller .item .btn-1c:active {
+  color: #0e83cd;
+}
+
+#header .scroller .item .btn-1c:hover:after,
+#header .scroller .item .btn-1c:active:after {
+  width: 100%;
+}
+
+#header .aLeft {
+  cursor:pointer;
+  height:30px;
+  left:20px;
+  margin-top:-15px;
+  position:absolute;
+  top:50%;
+  width:30px;  
+  z-index:2;
+}
+
+#header .aRight {
+  cursor:pointer;
+  height:30px;
+  right:20px;
+  margin-top:-15px;
+  position:absolute;
+  top:50%;
+  width:30px;  
+  z-index:2;
+}
+
+.dots {
+  bottom:30px;
+  right:80px;
+  position:absolute;
+  z-index:2;  
+}
+
+.dots .dot {
+  border-radius: 50%;
+  background-color: transparent;
+  box-shadow: inset 0 0 0 2px white;
+  -webkit-transition: box-shadow 0.3s ease;
+  transition: box-shadow 0.3s ease;
+  
+  cursor:pointer;
+  display:inline-block;
+  height:10px;
+  margin-left:10px;
+  width:10px;
+}
+
+.dots .dot:hover,
+.dots .dot:focus {
+  box-shadow: inset 0 0 0 2px rgba(255, 255, 255, 0.6)
+}
+
+.dots .dot.sel {
+  box-shadow: inset 0 0 0 8px white;
+}
+div.alertbar
+  {
+    background-color:#ffc;
+    text-align: center;
+    display: block;
+    padding:10px;
+    border-bottom: solid 1px #cc9;
+  }
+div.alertbar .hor-bar:after {
+  content: "|";
+}
+span.strong {
+  font-weight: bold;
+}
+
+.introWrapper {
+  border-bottom:1px solid #CCC;
+}
+
+table.intro {
+  background:url(../images/intro-bg.gif) no-repeat center;
+  table-layout:fixed;
+  text-align:center;  
+  width: 940px;
+}
+
+table.intro td {
+  background-position:center 25px;
+  background-repeat:no-repeat;
+  background-size:25px auto;
+  padding:65px 0 0 0;
+  position:relative;
+  vertical-align:top;
+}
+
+table.intro td.ag {
+  background-image:url(../images/agility.png);
+}
+
+table.intro td.fl {
+  background-image:url(../images/flexibility.png);
+}
+
+table.intro td.fam {
+  background-image:url(../images/familiarity.png);
+}
+
+table.intro h1 {
+  font-size:36px;
+  font-weight:normal;
+  margin:0;
+  padding:0;
+}
+
+table.intro p {
+  font-size:16px;
+  font-weight:lighter;
+  line-height:22px;
+  margin:0;
+  padding:2px 35px 30px 35px;
+}
+
+table.intro span {
+  bottom:30px;
+  display:block;
+  position:absolute;
+  width:100%;
+}
+
+table.intro a {
+  color:#1a6bc7;
+  font-size:12px;  
+  font-weight: bold;
+}
+
+#blu {
+  display:table;
+  font-size:12px;
+  font-weight:lighter;
+  line-height:28px;
+  table-layout:fixed;
+}
+
+#blu a {
+  color:#FFF;
+  text-decoration:none;
+}
+
+#blu .cell {
+  color:#FFF;
+  display:table-cell;  
+  padding:40px 0;
+  overflow:hidden;
+  vertical-align:middle;
+}
+
+#blu .cell.left {
+  background:#1b2b3e;
+  padding-right:54px;
+}
+
+#blu .cell.left .wrapper {
+  float:right;  
+}
+
+#blu .cell.right {
+  background:#184f8d;
+  padding-left:54px;
+}
+
+#blu .cell.right .wrapper {
+  float:left;  
+}
+
+#blu .cell .wrapper {
+  width:425px;
+}
+
+#blu h2 {
+  font-size:24px;
+  font-weight:lighter;  
+  margin:0 0 10px 0;
+  padding:0;
+}
+
+.page-wrap:after {
+  display: block;
+  content: "";
+}
+#footer {
+  color: black;
+  background-color: white;
+  font-size:9px;
+  font-weight:lighter;
+  line-height:20px;
+  padding:30px 0;
+  text-align:center;
+}
+#footer, .page-wrap:after {
+  height: 60px;
+}
+
+#footer .wrapper {
+  padding:0 80px;
+}
+
+.bui {
+  display:none;
+  position:fixed;
+  top:0;
+  left:0;
+  right:0;
+  bottom:0;
+  background:rgba(0,0,0,0.8);
+  z-index:4;  
+}
+
+.disclaimer {
+  background:#f6f5f5;
+  font-size:12px;
+  font-weight:lighter;
+  line-height:24px;
+  text-align:center;
+}
+
+.disclaimer .wrapper {
+  margin:auto;
+  padding:50px 0 50px 0;
+  width:780px;
+}
+
+.disclaimer h2 {
+  font-size:24px;
+  font-weight:lighter;  
+  margin:0 0 10px 0;
+  padding:0;
+}
+
+.int_text {
+  margin:40px auto 30px auto;
+  width:780px;
+}
+
+/* Blog */
+div.post.int_text {
+  margin:40px auto 60px auto;
+}
+
+.int_text a, .int_title a {
+  color:#1a6bc7;
+  /* font-weight:normal;  */
+}
+
+.int_text p, .int_text ul, .int_text ol { 
+  font-size:16px;
+  line-height:28px;
+  
+}
+
+.int_text p.l1 {
+  padding-left:30px;  
+}
+
+.int_text h2 {
+  font-size:24px;
+  font-weight:normal;  
+  margin:30px 0 0 0;
+}
+
+.int_text img {
+  display:block;
+  margin:30px auto;  
+}
+
+ul.num {
+  list-style:decimal;  
+}
+
+.int_title {
+  font-size:16px;
+  font-weight:lighter;
+  margin:auto;
+  margin-top:80px;
+  padding:0 0 15px 0;
+  position:relative;
+  text-align:center;
+  width:600px;  
+}
+
+.int_title.int_title_img {
+  background-position:center top;
+  background-repeat:no-repeat;
+  background-size:25px auto;
+  padding-top:40px;  
+}
+
+.int_title.int_title_img.architecture {
+  background-image:url(../images/architecture.png);  
+}
+
+.int_title.int_title_img.community {
+  background-image:url(../images/community.png);  
+}
+
+.int_title.int_title_img.download {
+  background-image:url(../images/download.png);  
+}
+
+.int_title p {
+  line-height:30px;
+  margin:10px 0 25px 0;
+}
+
+.int_title h1 {
+  font-size:36px;
+  margin: 20px 0px 20px 0px;
+}
+
+.int_title:after {
+  background:#1a6bc7;
+  bottom:24px;
+  content:" ";
+  height:5px;
+  left:275px;
+  position:absolute;
+  width:50px;
+}
+
+table.intro a:before, table.intro a:after {
+    backface-visibility: hidden;
+    pointer-events: none;
+}
+
+table.intro a, .int_title a {
+  display:inline-block;
+    overflow: hidden;
+  outline: medium none;
+    position: relative;
+    text-decoration: none;
+    vertical-align: bottom;
+    white-space: nowrap;
+}
+
+#header .dots, .aLeft, .aRight { display:none; }
+
+p.info {
+  background-color: #ffc;
+  border: solid 1px #cc9;
+  padding: 5px;
+}
+
+/* This is to address an issue in Markdown processing which introduces <p> inside <li>. */
+li p {
+  margin-top: 0px;
+}
+
+.hidden {
+  display:none;
+}
+
+/******************
+ Search Bar
+******************/
+
+#menu .search-bar {
+  line-height: 30px;
+  margin: 0 20px 0 20px;
+}
+
+#menu .search-bar form {
+  border-radius: 6px;
+  border: solid 1px black;
+  background-color: #1A6BC7;
+}
+
+#menu .search-bar input[type='text'] {
+  border: none;
+  color: white;
+  background-color: transparent !important;
+  font-size: 14px;
+  font-weight: inherit;
+  padding: 0 0 0 8px;
+  line-height: 20px;
+  width: 50px;
+}
+#menu .search-bar input[placeholder] {
+  opacity: .7;
+}
+
+#menu .search-bar:hover {
+  background-color: black;
+}
+
+#menu .search-bar button[type='submit'] {
+  display: inline;
+  border: none;
+  background:none;
+  position: relative;
+  color: white;
+  font-size: 14px;
+  cursor: pointer;
+  width: 33px;
+}
+#menu .search-bar ::-webkit-input-placeholder {
+   color: white;
+}
+
+#menu .search-bar :-moz-placeholder { /* Firefox 18- */
+   color: white;  
+}
+
+#menu .search-bar ::-moz-placeholder {  /* Firefox 19+ */
+   color: white;  
+}
+
+#menu .search-bar :-ms-input-placeholder {  
+   color: white;
+}
+
+.int_text table{border-collapse:collapse;border-spacing:0;empty-cells:show;border:1px solid #cbcbcb}
+.int_text table caption{color:#000;font-style: italic;padding:1em 0;text-align:center}
+.int_text table td, .int_text table th{border-left:1px solid #cbcbcb;border-width:0 0 0 1px;font-size:inherit;margin:0;overflow:visible;padding:.5em 1em}
+.int_text table td:first-child, .int_text table th:first-child{border-left-width:0}
+.int_text table thead{background-color:#e0e0e0;color:#000;text-align:left;vertical-align:bottom}
+.int_text table td{background-color:transparent}
+.int_text table-odd td{background-color:#f2f2f2}
+.int_text table-striped tr:nth-child(2n-1) td{background-color:#f2f2f2}
+.int_text table-bordered td{border-bottom:1px solid #cbcbcb}
+.int_text table-bordered tbody>tr:last-child>td{border-bottom-width:0}
+.int_text table-horizontal td, .int_text table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #cbcbcb}
+.int_text table-horizontal tbody>tr:last-child>td{border-bottom-width:0}
+
+
+div.alertbar{
+  line-height:1;
+  text-align: center;
+}
+
+div.alertbar div{
+  display: inline-block;
+  vertical-align: middle;
+  padding:0 10px;
+}
+
+div.alertbar div:nth-child(2){
+  border-right:solid 1px #cc9;
+}
+
+div.alertbar div.news{
+  font-weight:bold;
+}
+
+div.alertbar a{
+  
+}
+div.alertbar div span{
+  font-size:65%;
+  color:#aa7;
+}
+
+div.home-row{
+  border-bottom:solid 1px #ccc;
+  margin:0 auto;
+  text-align:center;
+}
+
+div.home-row div{
+  display:inline-block;
+  vertical-align:middle;
+  text-align:left;
+}
+
+div.home-row:nth-child(odd) div.big{
+  width:300px;
+}
+
+div.home-row:nth-child(odd) div.description{
+  margin-left:40px;
+  width:580px;
+}
+
+div.home-row:nth-child(even) div.description{
+  width:580px;
+}
+
+div.home-row:nth-child(even) div.big{
+  margin-left:40px;
+  width:300px;
+}
+
+.home-row h1 {
+  font-size:24px;
+  margin:24px 0;
+  font-weight:bold;
+}
+
+.home-row h2 {
+  font-size:20px;
+  margin:20px 0;
+  font-weight:bold;
+}
+
+.home-row p {
+  font-size:16px;
+  line-height:22px;
+}
+
+.home-row div.small{
+  display:none;
+}
+
+.home-row div.big{
+  display:inline-block;
+}
+
+div.home-row div pre{
+  background:#f3f5f7;
+  color:#2a333c;
+  border:solid 1px #aaa;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  font-size: 12px;
+  line-height: 1.5;
+}
+
+div.home-row div pre span.code-underline{
+  font-weight:bold;
+  color:#000;
+  text-decoration: underline;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_site-responsive.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-responsive.scss b/_sass/_site-responsive.scss
new file mode 100644
index 0000000..1d3390d
--- /dev/null
+++ b/_sass/_site-responsive.scss
@@ -0,0 +1,264 @@
+#menu ul li.toc-categories {
+  display:none;
+}
+
+#menu ul li.menu-break {
+  display:none;
+}
+
+.mobile-break {
+  display: none;
+}
+
+@media (min-width: 1025px) {
+  #menu ul li.expand-menu {
+    display: none;
+  }
+
+}
+
+@media (max-width: 1024px) {
+  table.intro {
+    width: 940px;
+  }
+  .mw {
+    min-width: 0px;
+  }
+  .breadcrumbs li:first-of-type {
+    margin-left: 8px;
+  }
+  #menu ul li.logo {
+    padding-left: 30px;
+  }
+   #menu ul li.expand-menu {
+    display: none;
+  }
+
+  #menu ul li, #menu ul li.d{
+    display: none;
+  }
+  .home_txt p {
+    width: auto;
+  }
+  .int_title {
+    text-align: left;
+    width: auto;
+    margin: 80px 20px 0px 20px;
+  }
+  .int_title:after {
+    left: 0px;
+  }
+  .int_text {
+    width: auto;
+    margin: 20px 10px 100px 20px;
+  }
+  #menu.force-expand ul li, #menu.force-expand ul li.d {
+    display:inline-block;
+  }
+
+  #menu.force-expand ul li.toc-categories {
+    display: none;
+  }
+  #menu.force-expand ul li ul li{
+    display:block;
+  }
+  #menu.force-expand ul li.nav, #menu.force-expand ul li#twitter-menu-item {
+    clear:both;
+    margin: 0 auto;
+  }
+  #menu ul li.logo {
+    display: block;
+  }
+  #menu ul li.expand-menu {
+    display: inline-block;
+    float: right;
+  }
+  #menu ul br.menu-break {
+    display: block;
+  }
+  #menu ul li.expand-menu, #menu ul li.expand-menu a {
+    height: 50px;
+    width: 110px;
+  }
+  #menu ul li.expand-menu span.expand-icon{
+    font-size: 24px;
+  }
+  #menu ul li.expand-menu span.menu-text{
+    margin-right:7px;
+    position: relative;
+    bottom: 3px;
+  }
+  #menu ul li.pull-right {
+    float: right;
+    margin-right: 10px;
+  }
+
+  /* Blog Posts */
+
+  div.post.int_text {
+    margin: 40px 20px 20px 20px;
+  }
+  div.post .post-header .int_title {
+    margin-left: 0px;
+  }
+  
+  
+  div.home-row{
+    width:100%;
+  }
+  
+  div.home-row:nth-child(odd) div.small{
+    width:300px;
+  }
+
+  div.home-row:nth-child(odd) div.description{
+    margin-left:20px;
+    width:auto;
+  }
+
+  div.home-row:nth-child(even) div.description{
+    margin-left:20px;
+    width:auto;
+  }
+
+  div.home-row:nth-child(even) div.small{
+    margin:0 0 15px 0;
+    width:300px;
+  }
+  
+  div.home-row div.big{
+    display:none;
+  }
+  
+  div.home-row div.small{
+    display:inline-block;
+  }
+  
+  table.intro {
+    width: 100%;
+    background: none;
+  }
+
+}
+@media (max-width: 768px) {
+  #menu.force-expand ul li.nav {
+    clear:both;
+    margin: 0 auto;
+  }
+  #menu.force-expand ul li.search-bar {
+    clear:both;
+  }
+
+  /* Responsive Homepage 768 max width */
+  div.headlines.tc {
+    margin-left: 35px;
+  }
+  br.mobile-break {
+    display: block;
+  }
+  
+  img {
+    max-width: 100%;
+  }
+
+  /* Search page */
+  div.int_search {
+    width: 100%;
+  }
+}
+
+@media (max-width: 570px) {
+
+  /* Responsive Layout 570 max width */
+
+
+  /* Responsive Menu 570 max width */
+
+  #menu.force-expand ul li.nav, #menu.force-expand ul li#twitter-menu-item {
+    clear: both;
+    width: 100%;
+    margin: auto;
+  }
+  #menu.force-expand ul li {
+    display:block;
+  }
+  #menu ul li.clear-float{
+    clear:both;
+    width: 0px;
+    display:block;
+  }
+  #menu ul li.search-bar {
+    margin: 0 11px 0 20px;
+    float: right;
+  }
+  #menu ul li.d
+  {
+    margin: 10px;
+    clear: both;
+    float: right;
+  }
+  #menu.force-expand {
+    position: relative;
+  }
+
+  /* Responsive Homepage 570 max width */
+  #header .scroller .item .tc h1{
+    font-size: 40px;
+  }
+  #header .scroller .item .tc h2 {
+    font-size: 24px;
+    line-height: 32px;
+  }
+  #header .scroller .item div.headlines.tc {
+    margin-left: 30px;
+  }
+  div.headlines a.download-headline {
+    font-size: .7em;
+  }
+  #header .scroller .item div.headlines .btn { font-size: 16px; }
+
+  div.alertbar {
+    text-align: left;
+    padding:0 25px;
+  }
+  div.alertbar div {
+    display: block;
+    padding:10px 0;
+  }
+  div.alertbar div:nth-child(1){
+    border-right:none;
+    border-bottom:solid 1px #cc9;
+  }
+  div.alertbar div:nth-child(2){
+    border-right:none;
+    border-bottom:solid 1px #cc9;
+  }
+  table.intro {
+    width: 100%;
+    background: none;
+  }
+  table.intro td {
+    display: block;
+    padding: 25px 0 5px 0;
+  }
+  table.intro td h1 {
+    font-size: 24px;
+    text-align: left;
+    padding-left: 60px;
+  }
+  table.intro td.ag, table.intro td.fl, table.intro td.fam {
+    background-position: 20px 23%;
+  }
+  table.intro p {
+    font-size: 16px;
+    line-height: 24px;
+    padding: 2px 25px 15px 20px;
+    text-align: left;
+  }
+  table.intro span {
+    position: relative;
+    bottom: 10px;
+    padding-left: 20px;
+    text-align: left;
+  }
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/_sass/_site-search.scss
----------------------------------------------------------------------
diff --git a/_sass/_site-search.scss b/_sass/_site-search.scss
new file mode 100644
index 0000000..680cea5
--- /dev/null
+++ b/_sass/_site-search.scss
@@ -0,0 +1,9 @@
+.int_search {
+  margin:50px auto 0 auto;
+  width:780px;
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
+}
+
+.int_search .gsc-control-cse, .int_search .gsc-control-cse .gsc-table-result {
+  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
+}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/arrows.css
----------------------------------------------------------------------
diff --git a/css/arrows.css b/css/arrows.css
deleted file mode 100755
index 9906307..0000000
--- a/css/arrows.css
+++ /dev/null
@@ -1,86 +0,0 @@
-@charset "UTF-8";
-/* CSS Document */
-
-.nav-circlepop a {
-	width: 50px;
-	height: 50px;
-}
-
-.nav-circlepop a::before {
-	position: absolute;
-	top: 0;
-	left: 0;
-	width: 100%;
-	height: 100%;
-	border-radius: 50%;
-	background: #fff;
-	content: '';
-	opacity: 0;
-	-webkit-transition: -webkit-transform 0.3s, opacity 0.3s;
-	transition: transform 0.3s, opacity 0.3s;
-	-webkit-transform: scale(0.9);
-	transform: scale(0.9);
-}
-
-.nav-circlepop .icon-wrap {
-	position: relative;
-	display: block;
-	margin: 10% 0 0 10%;
-	width: 80%;
-	height: 80%;
-}
-
-.nav-circlepop a.next .icon-wrap {
-	-webkit-transform: rotate(180deg);
-	transform: rotate(180deg);
-}
-
-.nav-circlepop .icon-wrap::before,
-.nav-circlepop .icon-wrap::after {
-	position: absolute;
-	left: 25%;
-	width: 3px;
-	height: 50%;
-	background: #fff;
-	content: '';
-	-webkit-transition: -webkit-transform 0.3s, background-color 0.3s;
-	transition: transform 0.3s, background-color 0.3s;
-	-webkit-backface-visibility: hidden;
-	backface-visibility: hidden;
-}
-
-.nav-circlepop .icon-wrap::before {
-	-webkit-transform: translateX(-50%) rotate(30deg);
-	transform: translateX(-50%) rotate(30deg);
-	-webkit-transform-origin: 0 100%;
-	transform-origin: 0 100%;
-}
-
-.nav-circlepop .icon-wrap::after {
-	top: 50%;
-	-webkit-transform: translateX(-50%) rotate(-30deg);
-	transform: translateX(-50%) rotate(-30deg);
-	-webkit-transform-origin: 0 0;
-	transform-origin: 0 0;
-}
-
-.nav-circlepop a:hover::before {
-	opacity: 1;
-	-webkit-transform: scale(1);
-	transform: scale(1);
-}
-
-.nav-circlepop a:hover .icon-wrap::before,
-.nav-circlepop a:hover .icon-wrap::after {
-	background: #4aaf4c;
-}
-
-.nav-circlepop a:hover .icon-wrap::before {
-	-webkit-transform: translateX(-50%) rotate(45deg);
-	transform: translateX(-50%) rotate(45deg);
-}
-
-.nav-circlepop a:hover .icon-wrap::after {
-	-webkit-transform: translateX(-50%) rotate(-45deg);
-	transform: translateX(-50%) rotate(-45deg);
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/breadcrumbs.css
----------------------------------------------------------------------
diff --git a/css/breadcrumbs.css b/css/breadcrumbs.css
deleted file mode 100644
index e9503c4..0000000
--- a/css/breadcrumbs.css
+++ /dev/null
@@ -1,40 +0,0 @@
-.breadcrumbs
-{
-  display: block;
-  padding: 0.5625rem 0 0.5625rem 0;
-  overflow: hidden;
-  margin-top: 56px;
-  margin-left: 0px;
-  list-style: none;
-  border-bottom: solid 1px #E4E4E4;
-  width: 100%;
-}
-
-.breadcrumbs>*
-{
-  margin: 0;
-  float: left;
-  font-size: 0.6875rem;
-  line-height: 0.6875rem;
-  text-transform: uppercase;
-  color: #334D5C;
-}
-.breadcrumbs>*:before
-{
-  content: "/";
-  color: #aaa;
-  margin: 0 0.75rem;
-  position: relative;
-  top: 1px;
-}
-.breadcrumbs>.current {
-  font-weight: bold;
-}
-.breadcrumbs>*.current {
-  cursor: default;
-  color: #333;
-}
-.breadcrumbs>* a {
-  color: #1a6bc7;
-  text-decoration:none;
-}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/code.css
----------------------------------------------------------------------
diff --git a/css/code.css b/css/code.css
deleted file mode 100644
index b0de98d..0000000
--- a/css/code.css
+++ /dev/null
@@ -1,69 +0,0 @@
-div.highlight pre, code {
-  background: #f5f6f7 url(../images/code-block-bg.png) 0 0 repeat;
-  border-radius: 0;
-  border: none;
-  border-left: 5px solid #494747;
-  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
-  font-size: 14px;
-  line-height: 24px;
-  overflow: auto;
-  word-wrap: normal;
-  white-space: pre;
-}
-
-pre {
-  padding: 24px 12px;
-  color: #222;
-  margin: 24px 0;
-}
-code {
-  background: #f5f6f7 url(../images/code-block-bg.png) 0 0 repeat;
-  border-radius: 0;
-  border: none;
-  border-left: 5px;
-  font-family: font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
-  font-size: 14px;
-  line-height: 24px;
-  overflow: auto;
-  word-wrap: normal;
-  white-space: pre;
-}
-
-div.admonition {
-  margin: 24px 0;
-  width: auto;
-  max-width: 100%;
-  padding: 2px 12px 22px 12px;
-  border-left: 5px solid transparent;
-}
-.admonition .admonition-title {
-  margin-bottom: 0;
-  font-size: 12px;
-  font-weight: bold;
-  text-transform: uppercase;
-  line-height: 24px;
-}
-.admonition > p {
-  margin: 0 0 12.5px 0;
-}
-.admonition p.first {
-  margin-top: 0 !important;
-}
-.admonition > p.last {
-  margin-bottom: 0;
-}
-.admonition .admonition-title:after {
-  content: ":";
-  font-weight: 900;
-}
-.admonition.important{
-  background-color: #fff2d5;
-  border-color: #ffb618;
-}
-.admonition.important .admonition-title {
-  color: #ffb618;
-}
-.admonition.note{
-  background-color: #edf4e8;
-  border-color: #6ba442;
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/content.scss
----------------------------------------------------------------------
diff --git a/css/content.scss b/css/content.scss
new file mode 100644
index 0000000..895285d
--- /dev/null
+++ b/css/content.scss
@@ -0,0 +1,6 @@
+---
+---
+@import "doc-content",
+        "doc-breadcrumbs",
+        "doc-code",
+        "doc-syntax";
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/docpage.css
----------------------------------------------------------------------
diff --git a/css/docpage.css b/css/docpage.css
deleted file mode 100644
index d7cf704..0000000
--- a/css/docpage.css
+++ /dev/null
@@ -1,389 +0,0 @@
-/*****************
- Categories Bar
-*****************/
-#menu ul li.toc-categories a {
-	height:50px;
-	padding:0;
-  text-decoration: none;
-	width: 60px;
-  text-align: center;
-  color: #bababa;
-}
-#menu ul li.toc-categories {
-	float:left;
-  line-height: 45px;
-  font-size: 18px;
-  display: none;
-  overflow: auto;
-}
-
-/****************
- Docs Nav (Bottom)
-****************/
-
-div.doc-nav {
-  overflow: auto;
-  width: 100%;
-  margin-top: 30px;
-}
-div.doc-nav a{
-  text-decoration: none;
-}
-div.doc-nav a:hover{
-  text-decoration: underline;
-}
-div.doc-nav span.previous-toc {
-  float: left;
-  width: auto;
-}
-div.doc-nav span.back-to-toc {
-  float: left;
-  width: auto;
-  margin-left: 15%;
-}
-div.doc-nav span.next-toc {
-  float: right;
-}
-
-
-/*****************
- Main Area
-*****************/
-
-.main-content .int_text {
-  margin-left: 0px;
-  margin-top: 0px;
-}
-.main-content .int_text img {
-  margin: 30px 0px;
-}
-.main-content .int_title
-{
-  text-align: left;
-  margin-left: 0px;
-  margin-top: 30px;
-}
-.int_title::after{
-  left: 0px;
-}
-.main-content .edit-link {
-  position: relative;
-  float: right;
-  margin-top: 13px;
-  margin-right: 20px;
-  text-decoration: none;
-  font-size: 24px;
-  color: #333333;
-}
-div.int_title h1, div.main-content h1, div.main-content h2, div.main-content h3, div.main-content h4, div.main-content h5, div.main-content h6 {
-  /*font-family: "PT Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;*/
-  font-weight: normal;
-  line-height: 24px;
-  color: #313030;
-  padding: 0;
-}
-div.main-content h1, div.main-content h2 {
-  margin-top: 24px;
-  margin-bottom: 24px;
-}
-
-div.main-content h3 {
-  margin-top: 24px;
-  font-weight: bold;
-}
-
-div.section > h2, div.section > h3, div.section > h4 {
-  margin: 24px 0;
-}
-div.main-content h1, div.int_title h1 {
-  border-top: none;
-  font-size: 36px;
-  line-height: 48px;
-  padding: 0;
-}
-
-/***************************
-/* Sidebar doc menu styles
-***************************/
-
-.sidebar {
-  position: fixed;
-  -webkit-transform: translateZ(0);
-  background-color: #f5f6f7;
-  width: 293px;
-  height: auto;
-  top: 51px;
-  bottom: 0;
-  left: 0;
-  overflow-y: auto;
-  overflow-x: hidden;
-  font-size: 0.85em;
-  z-index: 100;
-  transition: left 0.4s cubic-bezier(.02,.01,.47,1);
-  -moz-transition: left 0.4s cubic-bezier(.02,.01,.47,1);
-  -webkit-transition: left 0.4s cubic-bezier(.02,.01,.47,1);
-}
-.sidebar.force-expand {
-  left: 0px;
-}
-aside {
-  display:block;
-}
-div.docsidebar {
-  font-size: 14px;
-  height: 100%;
-}
-div.docsidebarwrapper {
-  padding: 0;
-  padding-top: 24px;
-  padding-bottom: 130px;
-  /*min-height: 100%;*/
-  position: relative;
-}
-div.docsidebar h3 {
-  padding: 0 12px;
-  font-size: 14px;
-  line-height: 24px;
-  font-weight: bold;
-  margin: -3px 0 15px 0;
-}
-div.docsidebarwrapper ul {
-  margin: 12px 0 0 0;
-  padding: 0;
-}
-div.docsidebar ul {
-  list-style: none;
-  margin: 10px 0 10px 0;
-  padding: 0;
-  color: #000;
-}
-div.docsidebar ul li {
-  font-weight: normal;
-  line-height: 24px;
-}
-.docsidebarwrapper > ul > .toctree-l1.current_section {
-  background-color: #fff;
-  border-right: 1px solid #f5f6f7;
-}
-.docsidebarwrapper a
-{
-  color: #333333;
-  text-decoration: none;
-}
-.docsidebarwrapper > ul > ul.current_section {
-  background-color: #fff;
-  border-right: 1px solid #f5f6f7;
-  margin-right: 0px;
-}
-.docsidebarwrapper > ul > .toctree-l1 {
-  padding: 11px 0 0 12px;
-  line-height: 24px;
-  border-top: 1px solid #ebebed;
-}
-.docsidebarwrapper > ul > .toctree-l1 > a {
-  font-size: 18px;
-  line-height: 24px;
-  width: 100%;
-  display: inline-block;
-}
-div.docsidebar ul ul, div.docsidebar ul.want-points {
-  list-style: none outside none;
-  margin-left: 0;
-}
-div.docsidebar ul ul {
-  margin-top: 0px;
-  margin-bottom: 0;
-}
-div.docsidebar ul ul ul {
-  margin-top: -3px;
-}
-
-.docsidebarwrapper li.toctree-l1 ul > li > a {
-  line-height: 24px;
-  display: inline-block;
-  width: 100%;
-}
-.docsidebarwrapper li.toctree-l1 ul > li
-{
-  font-size: 14px;
-}
-div.docsidebar li.toctree-l2 {
-  text-indent: -12px;
-  padding-left: 47px;
-}
-div.docsidebar li.toctree-l1 a:hover, div.docsidebar li.toctree-l2 a:hover, div.docsidebar li.toctree-l3 a:hover {
-  text-decoration: underline;
-}
-
-div.docsidebar li.toctree-l3  {
-  padding-left: 57px;
-  text-indent: -12px;
-}
-span.expand, span.contract{
-  width: 0px;
-  cursor: pointer;
-  font-size: 80%;
-  display:none; 
-  position: relative;
-  right: 5px;
-}
-span.expand.show, span.contract.show {
-  display: inline-block;
-}
-
-.docsidebarwrapper li.toctree-l2.current, .docsidebarwrapper li.toctree-l3.current {
-  background-color: #1A6BC7;
-  color: white;
-}
-.docsidebarwrapper li.toctree-l2.current.current > a, .docsidebarwrapper li.toctree-l3.current.current > a {
-  color: white;
-}
-.docsidebarwrapper a:hover {
-  color: black;
-}
-.docsidebarwrapper a {
-  display: inline-block;
-  width: 100%;
-}
-
-.docsidebarwrapper ul {
-  font-size: 14px;
-}
-
-.permalink
-{
-  text-decoration:none;
-  font-size: 70%;
-  color: #1A6BC7;
-}
-.permalink.hide {
-  visibility: hidden;
-}
-/****************************
-/* Responsive media queries
-/***************************/
-
-@media (max-width: 320px) {
-  .int_text {
-    width: auto;
-  }
-  .int_title {
-    width: auto;
-  }
-}
-@media (max-width: 768px) {
-  .int_text {
-    width: auto;
-  }
-  .int_title {
-    width: auto;
-  }
-  
-}
-/* Browser styles when browser max width is the follow. 
-   Need menu click to make sidebar appear */
-
-@media (max-width: 1024px) {
-  div#footer {
-    margin-left: 0px;
-    width: 100%;
-  }
-  div#footer .wrapper {
-    padding: 0 20px;
-  }
-  .main-content .edit-link {
-    margin-right: 0px; /* container takes care of right margin */
-  }
-
-  #menu ul li.toc-categories {
-    display: inline-block;
-    width: 60px;
-    text-align: center;
-  }
-  #menu ul li.logo {
-    padding-left: 0px;
-  }
-  #menu.force-expand ul li.toc-categories {
-    display: inline-block;
-  }
-
-  .page-wrap div.int_title.margin_110 {
-    margin-top:110px;
-  }
-
-  li.toc-categories .expand-toc-icon {
-    font-size: 24px;
-    /*padding-right: 10px;*/
-  }
-  li.toc-categories a.expand-toc-icon {
-    color: black;
-  }
-
-  .expand-toc-icon:hover,
-  .expand-toc-icon:active {
-    color: white;
-    text-decoration: none;
-  }
-
-  .sidebar {
-    left: -293px;
-    box-shadow: 0 0 13px rgba(0,0,0,0.3);
-  }
-
-  .sidebar.reveal {
-    left: 0;
-  }
-  .main-content.force-expand {
-    margin-left: 313px;
-  }
-  #footer.force-expand .wrapper {
-    margin-left: 313px;
-  }
-  .main-content{
-    margin: 0px 20px 0px 20px;
-  }
-  .int_title {
-    margin-top: 60px;
-  }
-  .int_title h1 {
-    font-size: 28px;
-  }
-  .page-wrap #footer {
-    width: auto;
-  }
-  .breadcrumbs.force-expand li:first-of-type {
-    margin-left: 301px;
-  }
-}
-@media (min-width: 1025px) {
-  .main-content-wrapper {
-    width: 1092px; 
-  }
-  .main-content
-  {
-    margin-left: 313px;
-  }
-  #menu ul li.logo {
-    padding-left: 30px;
-  }
-  #footer .wrapper {
-    margin-left: 313px;
-  }
-  #footer {
-    width: 100%;
-  }
- .breadcrumbs {
-    margin-left: 0px;
-  }
-  .breadcrumbs li:first-of-type {
-    margin-left: 301px;
-  }
-}
-/*
-div.page-wrap:after {
-  height: 0px;
-}
-*/
-#footer {
-  width: 1092px;
-}

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/download.css
----------------------------------------------------------------------
diff --git a/css/download.css b/css/download.css
deleted file mode 100644
index 0c92169..0000000
--- a/css/download.css
+++ /dev/null
@@ -1,33 +0,0 @@
-.table {
-  display: table;   /* Allow the centering to work */
-  margin: 0 auto;
-}
-
-ul#download_buttons {
-  list-style: none;
-  padding-left:0px;
-  margin-bottom:100px;
-}
-
-ul#download_buttons li {
-  float: left;
-  text-align: center;
-  background-color: #4aaf4c;
-  margin:0 10px 10px 10px;
-  width: 235px;
-  line-height: 60px;
-}
-
-ul#download_buttons li a{
-  text-decoration:none;
-  color:#fff;
-  display:block;
-}
- 
-ul#download_buttons li a:hover {
-  background-color: #348436;
-}
-
-div#download_bar:after{
-  clear:both;
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/download.scss
----------------------------------------------------------------------
diff --git a/css/download.scss b/css/download.scss
new file mode 100644
index 0000000..bca3068
--- /dev/null
+++ b/css/download.scss
@@ -0,0 +1,3 @@
+---
+---
+@import "download";

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/home.scss
----------------------------------------------------------------------
diff --git a/css/home.scss b/css/home.scss
new file mode 100644
index 0000000..90148c0
--- /dev/null
+++ b/css/home.scss
@@ -0,0 +1,4 @@
+---
+---
+@import "home-video-slider",
+        "home-code";

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/responsive.css
----------------------------------------------------------------------
diff --git a/css/responsive.css b/css/responsive.css
deleted file mode 100644
index 17da16d..0000000
--- a/css/responsive.css
+++ /dev/null
@@ -1,275 +0,0 @@
-#menu ul li.toc-categories {
-  display:none;
-}
-
-#menu ul li.menu-break {
-  display:none;
-}
-.mobile-break {
-  display: none;
-}
-
-/*@media (max-width: 768px) {
-  #menu ul li{
-    display: block;
-  }
-  #menu ul li.expand-menu {
-    display: none;
-  }
-}
-*/
-@media (min-width: 1025px) {
-  #menu ul li.expand-menu {
-    display: none;
-  }
-
-}
-
-@media (max-width: 1024px) {
-  table.intro {
-    width: 940px;
-  }
-  .mw {
-    min-width: 0px;
-  }
-  .breadcrumbs li:first-of-type {
-    margin-left: 8px;
-  }
-  #menu ul li.logo {
-    padding-left: 30px;
-  }
-   #menu ul li.expand-menu {
-    display: none;
-  }
-
-  #menu ul li, #menu ul li.d{
-    display: none;
-  }
-  .home_txt p {
-    width: auto;
-  }
-  .int_title {
-    text-align: left;
-    width: auto;
-    margin: 80px 20px 0px 20px;
-  }
-  .int_title:after {
-    left: 0px;
-  }
-  .int_text {
-    width: auto;
-    margin: 20px 10px 100px 20px;
-  }
-  #menu.force-expand ul li, #menu.force-expand ul li.d {
-    display:inline-block;
-  }
-
-  #menu.force-expand ul li.toc-categories {
-    display: none;
-  }
-  #menu.force-expand ul li ul li{
-    display:block;
-  }
-  #menu.force-expand ul li.nav, #menu.force-expand ul li#twitter-menu-item {
-    clear:both;
-    margin: 0 auto;
-  }
-  #menu ul li.logo {
-    display: block;
-  }
-  #menu ul li.expand-menu {
-    display: inline-block;
-    float: right;
-  }
-  #menu ul br.menu-break {
-    display: block;
-  }
-  #menu ul li.expand-menu, #menu ul li.expand-menu a {
-    height: 50px;
-    width: 110px;
-  }
-  #menu ul li.expand-menu span.expand-icon{
-    font-size: 24px;
-  }
-  #menu ul li.expand-menu span.menu-text{
-    margin-right:7px;
-    position: relative;
-    bottom: 3px;
-  }
-  #menu ul li.pull-right {
-    float: right;
-    margin-right: 10px;
-  }
-
-  /* Blog Posts */
-
-  div.post.int_text {
-    margin: 40px 20px 20px 20px;
-  }
-  div.post .post-header .int_title {
-    margin-left: 0px;
-  }
-  
-  
-  div.home-row{
-    width:100%;
-  }
-  
-  div.home-row:nth-child(odd) div.small{
-    width:300px;
-  }
-
-  div.home-row:nth-child(odd) div.description{
-    margin-left:20px;
-    width:auto;
-  }
-
-  div.home-row:nth-child(even) div.description{
-    margin-left:20px;
-    width:auto;
-  }
-
-  div.home-row:nth-child(even) div.small{
-    margin:0 0 15px 0;
-    width:300px;
-  }
-  
-  div.home-row div.big{
-    display:none;
-  }
-  
-  div.home-row div.small{
-    display:inline-block;
-  }
-  
-  table.intro {
-    width: 100%;
-    background: none;
-  }
-
-}
-@media (max-width: 768px) {
-  #menu.force-expand ul li.nav {
-    clear:both;
-    margin: 0 auto;
-  }
-  #menu.force-expand ul li.search-bar {
-    clear:both;
-  }
-
-  /* Responsive Homepage 768 max width */
-  div.headlines.tc {
-    margin-left: 35px;
-  }
-  br.mobile-break {
-    display: block;
-  }
-  
-  img {
-    max-width: 100%;
-  }
-
-  /* Search page */
-  div.int_search {
-    width: 100%;
-  }
-}
-
-@media (max-width: 570px) {
-
-  /* Responsive Layout 570 max width */
-
-
-  /* Responsive Menu 570 max width */
-
-  #menu.force-expand ul li.nav, #menu.force-expand ul li#twitter-menu-item {
-    clear: both;
-    width: 100%;
-    margin: auto;
-  }
-  #menu.force-expand ul li {
-    display:block;
-  }
-  #menu ul li.clear-float{
-    clear:both;
-    width: 0px;
-    display:block;
-  }
-  #menu ul li.search-bar {
-    margin: 0 11px 0 20px;
-    float: right;
-  }
-  #menu ul li.d
-  {
-    margin: 10px;
-    clear: both;
-    float: right;
-  }
-  #menu.force-expand {
-    position: relative;
-  }
-
-  /* Responsive Homepage 570 max width */
-  #header .scroller .item .tc h1{
-    font-size: 40px;
-  }
-  #header .scroller .item .tc h2 {
-    font-size: 24px;
-    line-height: 32px;
-  }
-  #header .scroller .item div.headlines.tc {
-    margin-left: 30px;
-  }
-  div.headlines a.download-headline {
-    font-size: .7em;
-  }
-  #header .scroller .item div.headlines .btn { font-size: 16px; }
-
-  div.alertbar {
-    text-align: left;
-    padding:0 25px;
-  }
-  div.alertbar div {
-    display: block;
-    padding:10px 0;
-  }
-  div.alertbar div:nth-child(1){
-    border-right:none;
-    border-bottom:solid 1px #cc9;
-  }
-  div.alertbar div:nth-child(2){
-    border-right:none;
-    border-bottom:solid 1px #cc9;
-  }
-  table.intro {
-    width: 100%;
-    background: none;
-  }
-  table.intro td {
-    display: block;
-    padding: 25px 0 5px 0;
-  }
-  table.intro td h1 {
-    font-size: 24px;
-    text-align: left;
-    padding-left: 60px;
-  }
-  table.intro td.ag, table.intro td.fl, table.intro td.fam {
-    background-position: 20px 23%;
-  }
-  table.intro p {
-    font-size: 16px;
-    line-height: 24px;
-    padding: 2px 25px 15px 20px;
-    text-align: left;
-  }
-  table.intro span {
-    position: relative;
-    bottom: 10px;
-    padding-left: 20px;
-    text-align: left;
-  }
-
-}
-
-

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/search.css
----------------------------------------------------------------------
diff --git a/css/search.css b/css/search.css
deleted file mode 100644
index 64d25c8..0000000
--- a/css/search.css
+++ /dev/null
@@ -1,9 +0,0 @@
-.int_search {
-  margin:50px auto 0 auto;
-  width:780px;
-  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
-}
-
-.int_search .gsc-control-cse, .int_search .gsc-control-cse .gsc-table-result {
-  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/site.scss
----------------------------------------------------------------------
diff --git a/css/site.scss b/css/site.scss
new file mode 100644
index 0000000..382ff3b
--- /dev/null
+++ b/css/site.scss
@@ -0,0 +1,6 @@
+---
+---
+@import "site-main",
+        "site-responsive",
+        "site-search",
+        "site-arrows";

http://git-wip-us.apache.org/repos/asf/drill/blob/59bc9151/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
deleted file mode 100755
index 414efa5..0000000
--- a/css/style.css
+++ /dev/null
@@ -1,897 +0,0 @@
-@charset "UTF-8";
-
-@import url(http://fonts.googleapis.com/css?family=Lato:400,300,700);
-
-* {
-  outline:none;
-}
-
-html {
-  height: 100%;
-}
-
-body {
-  font-family: 'PT Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
-  padding:0;
-  margin: 0;
-  height: 100%;
-}
-.page-wrap{
-  min-height: 100%;
-  margin-bottom: -60px; /* negative value of footer height */
-}
-
-.mw { min-width:999px; width:100%; }
-
-.nw { white-space:nowrap; }
-
-a.anchor {
-  display:none;
-  font-size:0px;
-  position:absolute;
-  margin-top:-50px;
-}
-
-.home_txt a.anchor {
-  margin-top:-90px;
-}
-
-#menu {
-  position:fixed;  
-  top:0;
-  width:100%;
-  z-index:5;
-}
-
-/* This seems to override menu position fixed. Fixed positioning allows menu to always be available at the top of the viewport, and JS is not needed to do this */
-/*
-#menu.r {
-  position:absolute;
-}
-*/
-
-#menu ul {
-  background:#051221;
-  display:block;
-  font-size:0px;
-  list-style:none;
-  overflow:hidden;
-  padding:0;
-  text-align:right;
-  /*
-  -webkit-box-shadow: 1px 1px 1px 0px rgba(0, 0, 0, 0.4);
-  -moz-box-shadow:    1px 1px 1px 0px rgba(0, 0, 0, 0.4);
-  box-shadow:         1px 1px 1px 0px rgba(0, 0, 0, 0.4);
-  */
-}
-
-#menu ul li {
-  display:inline-block;
-  font-size:14px;
-  margin:0;
-  padding:0;
-}
-
-#menu ul li.logo {
-  float:left;
-  padding-left:30px;
-}
-
-#menu ul li.logo:hover { background:none; }
-
-#menu ul li.logo a {
-  background:url(../images/apachedrill.png) no-repeat center;
-  background-size:auto 27px;
-  display:block;
-  height:50px;
-  padding:0;
-  width:80px;
-}
-
-#menu ul li a {
-  color:#FFF;
-  text-decoration:none;
-  line-height:50px;
-  padding:14px 20px;
-}
-
-#menu ul li.d, #menu ul li.d:hover {
-  background-color: #4aaf4c;
-  font-size:12px;
-  text-transform:uppercase;
-}
-#menu ul li.d a .fam {
-  position: relative;
-  right: 8px;
-  font-size: 14px;
-}
-
-#menu ul li.d:hover {
-  background-color:#348436;
-}
-
-#menu ul li.d * {
-  cursor:pointer;
-}
-
-#menu ul li.d a {
-  padding:0px 30px 0 40px;
-  display:block;
-}
-
-
-#menu ul li.l {
-  cursor:pointer;  
-}
-
-#menu ul li.l span {
-  background:url(../images/len.png) no-repeat center;
-  background-size:auto 16px;
-  display:block;
-  line-height:50px;
-  padding:0 20px;
-  width:16px;
-}
-
-#menu ul li.l.open {
-  background-color:#145aa8;
-}
-
-#menu ul li#twitter-menu-item {
-  width:30px;
-  padding-left: 2px;
-  padding-right:10px;
-}
-
-#menu ul li#twitter-menu-item a {
-  padding: 10px;
-}
-
-#menu ul li#twitter-menu-item img {
-  width: 22px;
-}
-
-#menu ul li ul {
-  background:#1a6bc7;
-  display:none;
-  margin:0;
-  padding:0;
-  position:absolute;
-  text-align:left;
-}
-
-#menu ul li ul li {
-  display:block;
-}
-
-#menu ul li ul li a {
-  display:block;
-  line-height:30px;
-  padding:3px 20px;
-}
-
-#menu ul li ul li a:hover {
-  background:#145aa8;
-}
-
-#menu ul li:hover {
-  background:#1a6bc7;  
-}
-
-#menu ul li:hover ul {
-  display:block;
-}
-#menu ul li.clear-float{
-  display:none;
-}
-#subhead {
-  background:#145aa8;
-  color:#FFF;
-  font-size:12px;
-  font-weight:bold;
-  height:40px;
-  line-height:40px;
-  left:0px;
-  letter-spacing:1px;
-  right:0px;
-  position:fixed;  
-  text-align:center;
-  text-transform:uppercase;
-  top:10px;
-  z-index:4;  
-  
-  -webkit-transition: all 0.3s;
-  transition: all 0.3s;
-}
-
-#subhead.show {
-  top:50px;
-}
-
-#subhead ul {
-  list-style:none;
-  margin:0;
-  padding:0;
-}
-
-#subhead ul li {
-  display:inline-block;
-  list-style:none;
-  margin:0;
-  padding:0 35px 0 35px;
-}
-
-#subhead ul li a {
-  background-size:16px auto;
-  background-position:left center;
-  background-repeat:no-repeat;
-  color:#FFF;
-  display:block;
-  padding:0 0 0 25px;
-  text-decoration:none;
-}
-
-#subhead ul li.ag a {
-  background-image:url(../images/agility-w.png);
-}
-
-#subhead ul li.fl a {
-  background-image:url(../images/flexibility-w.png);
-}
-
-#subhead ul li.fam a {
-  background-image:url(../images/familiarity-w.png);
-}
-
-#header {
-  background:url(../images/reel-bg.png) no-repeat;
-  background-size:cover;
-  height:300px;
-  overflow:hidden;
-  position:relative;
-}
-
-#header .scroller {
-  margin-left:0px;  
-  overflow:hidden;
-}
-
-#header .scroller .item {
-  
-  float:left;
-  height:300px;
-  position:relative;
-  width:100%;  
-  z-index:1;
-}
-
-#header .scroller .item p a {
-  color:#FFF;
-  font-weight:bold;
-  overflow: hidden;
-  text-decoration:none;  
-  
-  position: relative;
-  display: inline-block;
-  outline: none;
-  vertical-align: bottom;
-  text-decoration: none;
-  white-space: nowrap;
-}
-
-#header .scroller .item p a::before {
-  position: absolute;
-  top: 0;
-  left: 0;
-  z-index: -1;
-  width: 100%;
-  height: 100%;
-  background: rgba(149,165,166,0.4);
-  content: '';
-  -webkit-transition: -webkit-transform 0.3s;
-  transition: transform 0.3s;
-  -webkit-transform: scaleY(0.618) translateX(-100%);
-  transform: scaleY(0.618) translateX(-100%);
-}
-
-#header .scroller .item p a:hover::before,
-#header .scroller .item p a:focus::before {
-  -webkit-transform: scaleY(0.618) translateX(0);
-  transform: scaleY(0.618) translateX(0);
-}
-
-
-#header .scroller .item .tc {
-  color:#FFF;
-  margin-left:80px;
-  position:relative;
-  width:900px;
-  margin:0 auto;
-}
-
-#header .scroller .item .tc h1, #header .scroller .item .tc h2 {
-  font-size:36px;
-  font-weight:lighter;
-  margin:0 0 8px 0;
-  padding:0;
-}
-#header .scroller .item .tc h2 {
-  font-size: 18px;
-}
-
-#header .scroller .item .tc p {
-  font-size:14px;
-  font-weight:lighter;
-  line-height:24px;
-  margin:0;
-  padding:0;
-}
-
-#header .scroller .item .btn {
-  background: none;
-  border: 2px solid #fff;
-  cursor: pointer;
-  color:#FFF;
-  display: inline-block;
-  font-size:12px;
-  font-weight: bold;
-  outline: none;
-  margin-top:18px;
-  position: relative;
-  padding: 5px 30px;
-  text-decoration:none;
-  text-transform: uppercase;
-  
-  -webkit-transition: all 0.3s;
-  -moz-transition: all 0.3s;
-  transition: all 0.3s;
-}
-
-#header .scroller .item .btn:after {
-  content: '';
-  position: absolute;
-  z-index: -1;
-  -webkit-transition: all 0.3s;
-  -moz-transition: all 0.3s;
-  transition: all 0.3s;
-}
-
-#header .scroller .item .btn-1c:after {
-  width: 0%;
-  height: 100%;
-  top: 0;
-  left: 0;
-  background: #fff;
-}
-
-#header .scroller .item .btn-1c:hover,
-#header .scroller .item .btn-1c:active {
-  color: #0e83cd;
-}
-
-#header .scroller .item .btn-1c:hover:after,
-#header .scroller .item .btn-1c:active:after {
-  width: 100%;
-}
-
-#header .aLeft {
-  cursor:pointer;
-  height:30px;
-  left:20px;
-  margin-top:-15px;
-  position:absolute;
-  top:50%;
-  width:30px;  
-  z-index:2;
-}
-
-#header .aRight {
-  cursor:pointer;
-  height:30px;
-  right:20px;
-  margin-top:-15px;
-  position:absolute;
-  top:50%;
-  width:30px;  
-  z-index:2;
-}
-
-.dots {
-  bottom:30px;
-  right:80px;
-  position:absolute;
-  z-index:2;  
-}
-
-.dots .dot {
-  border-radius: 50%;
-  background-color: transparent;
-  box-shadow: inset 0 0 0 2px white;
-  -webkit-transition: box-shadow 0.3s ease;
-  transition: box-shadow 0.3s ease;
-  
-  cursor:pointer;
-  display:inline-block;
-  height:10px;
-  margin-left:10px;
-  width:10px;
-}
-
-.dots .dot:hover,
-.dots .dot:focus {
-  box-shadow: inset 0 0 0 2px rgba(255, 255, 255, 0.6)
-}
-
-.dots .dot.sel {
-  box-shadow: inset 0 0 0 8px white;
-}
-div.alertbar
-  {
-    background-color:#ffc;
-    text-align: center;
-    display: block;
-    padding:10px;
-    border-bottom: solid 1px #cc9;
-  }
-div.alertbar .hor-bar:after {
-  content: "|";
-}
-span.strong {
-  font-weight: bold;
-}
-
-.introWrapper {
-  border-bottom:1px solid #CCC;
-}
-
-table.intro {
-  background:url(../images/intro-bg.gif) no-repeat center;
-  table-layout:fixed;
-  text-align:center;  
-  width: 940px;
-}
-
-table.intro td {
-  background-position:center 25px;
-  background-repeat:no-repeat;
-  background-size:25px auto;
-  padding:65px 0 0 0;
-  position:relative;
-  vertical-align:top;
-}
-
-table.intro td.ag {
-  background-image:url(../images/agility.png);
-}
-
-table.intro td.fl {
-  background-image:url(../images/flexibility.png);
-}
-
-table.intro td.fam {
-  background-image:url(../images/familiarity.png);
-}
-
-table.intro h1 {
-  font-size:36px;
-  font-weight:normal;
-  margin:0;
-  padding:0;
-}
-
-table.intro p {
-  font-size:16px;
-  font-weight:lighter;
-  line-height:22px;
-  margin:0;
-  padding:2px 35px 30px 35px;
-}
-
-table.intro span {
-  bottom:30px;
-  display:block;
-  position:absolute;
-  width:100%;
-}
-
-table.intro a {
-  color:#1a6bc7;
-  font-size:12px;  
-  font-weight: bold;
-}
-
-#blu {
-  display:table;
-  font-size:12px;
-  font-weight:lighter;
-  line-height:28px;
-  table-layout:fixed;
-}
-
-#blu a {
-  color:#FFF;
-  text-decoration:none;
-}
-
-#blu .cell {
-  color:#FFF;
-  display:table-cell;  
-  padding:40px 0;
-  overflow:hidden;
-  vertical-align:middle;
-}
-
-#blu .cell.left {
-  background:#1b2b3e;
-  padding-right:54px;
-}
-
-#blu .cell.left .wrapper {
-  float:right;  
-}
-
-#blu .cell.right {
-  background:#184f8d;
-  padding-left:54px;
-}
-
-#blu .cell.right .wrapper {
-  float:left;  
-}
-
-#blu .cell .wrapper {
-  width:425px;
-}
-
-#blu h2 {
-  font-size:24px;
-  font-weight:lighter;  
-  margin:0 0 10px 0;
-  padding:0;
-}
-
-.page-wrap:after {
-  display: block;
-  content: "";
-}
-#footer {
-  color: black;
-  background-color: white;
-  font-size:9px;
-  font-weight:lighter;
-  line-height:20px;
-  padding:30px 0;
-  text-align:center;
-}
-#footer, .page-wrap:after {
-  height: 60px;
-}
-
-#footer .wrapper {
-  padding:0 80px;
-}
-
-.bui {
-  display:none;
-  position:fixed;
-  top:0;
-  left:0;
-  right:0;
-  bottom:0;
-  background:rgba(0,0,0,0.8);
-  z-index:4;  
-}
-
-.disclaimer {
-  background:#f6f5f5;
-  font-size:12px;
-  font-weight:lighter;
-  line-height:24px;
-  text-align:center;
-}
-
-.disclaimer .wrapper {
-  margin:auto;
-  padding:50px 0 50px 0;
-  width:780px;
-}
-
-.disclaimer h2 {
-  font-size:24px;
-  font-weight:lighter;  
-  margin:0 0 10px 0;
-  padding:0;
-}
-
-.int_text {
-  margin:40px auto 30px auto;
-  width:780px;
-}
-
-/* Blog */
-div.post.int_text {
-  margin:40px auto 60px auto;
-}
-
-.int_text a, .int_title a {
-  color:#1a6bc7;
-  /* font-weight:normal;  */
-}
-
-.int_text p, .int_text ul, .int_text ol { 
-  font-size:16px;
-  line-height:28px;
-  
-}
-
-.int_text p.l1 {
-  padding-left:30px;  
-}
-
-.int_text h2 {
-  font-size:24px;
-  font-weight:normal;  
-  margin:30px 0 0 0;
-}
-
-.int_text img {
-  display:block;
-  margin:30px auto;  
-}
-
-ul.num {
-  list-style:decimal;  
-}
-
-.int_title {
-  font-size:16px;
-  font-weight:lighter;
-  margin:auto;
-  margin-top:80px;
-  padding:0 0 15px 0;
-  position:relative;
-  text-align:center;
-  width:600px;  
-}
-
-.int_title.int_title_img {
-  background-position:center top;
-  background-repeat:no-repeat;
-  background-size:25px auto;
-  padding-top:40px;  
-}
-
-.int_title.int_title_img.architecture {
-  background-image:url(../images/architecture.png);  
-}
-
-.int_title.int_title_img.community {
-  background-image:url(../images/community.png);  
-}
-
-.int_title.int_title_img.download {
-  background-image:url(../images/download.png);  
-}
-
-.int_title p {
-  line-height:30px;
-  margin:10px 0 25px 0;
-}
-
-.int_title h1 {
-  font-size:36px;
-  margin: 20px 0px 20px 0px;
-}
-
-.int_title:after {
-  background:#1a6bc7;
-  bottom:24px;
-  content:" ";
-  height:5px;
-  left:275px;
-  position:absolute;
-  width:50px;
-}
-
-table.intro a:before, table.intro a:after {
-    backface-visibility: hidden;
-    pointer-events: none;
-}
-
-table.intro a, .int_title a {
-  display:inline-block;
-    overflow: hidden;
-  outline: medium none;
-    position: relative;
-    text-decoration: none;
-    vertical-align: bottom;
-    white-space: nowrap;
-}
-
-#header .dots, .aLeft, .aRight { display:none; }
-
-p.info {
-  background-color: #ffc;
-  border: solid 1px #cc9;
-  padding: 5px;
-}
-
-/* This is to address an issue in Markdown processing which introduces <p> inside <li>. */
-li p {
-  margin-top: 0px;
-}
-
-.hidden {
-  display:none;
-}
-
-/******************
- Search Bar
-******************/
-
-#menu .search-bar {
-  line-height: 30px;
-  margin: 0 20px 0 20px;
-}
-
-#menu .search-bar form {
-  border-radius: 6px;
-  border: solid 1px black;
-  background-color: #1A6BC7;
-}
-
-#menu .search-bar input[type='text'] {
-  border: none;
-  color: white;
-  background-color: transparent !important;
-  font-size: 14px;
-  font-weight: inherit;
-  padding: 0 0 0 8px;
-  line-height: 20px;
-  font-family: "Lato";
-  width: 44px;
-}
-#menu .search-bar input[placeholder] {
-  opacity: .7;
-}
-
-#menu .search-bar:hover {
-  background-color: black;
-}
-
-#menu .search-bar button[type='submit'] {
-  display: inline;
-  border: none;
-  background:none;
-  position: relative;
-  color: white;
-  font-size: 14px;
-  cursor: pointer;
-  width: 33px;
-}
-#menu .search-bar ::-webkit-input-placeholder {
-   color: white;
-}
-
-#menu .search-bar :-moz-placeholder { /* Firefox 18- */
-   color: white;  
-}
-
-#menu .search-bar ::-moz-placeholder {  /* Firefox 19+ */
-   color: white;  
-}
-
-#menu .search-bar :-ms-input-placeholder {  
-   color: white;
-}
-
-.int_text table{border-collapse:collapse;border-spacing:0;empty-cells:show;border:1px solid #cbcbcb}
-.int_text table caption{color:#000;font-style: italic;padding:1em 0;text-align:center}
-.int_text table td, .int_text table th{border-left:1px solid #cbcbcb;border-width:0 0 0 1px;font-size:inherit;margin:0;overflow:visible;padding:.5em 1em}
-.int_text table td:first-child, .int_text table th:first-child{border-left-width:0}
-.int_text table thead{background-color:#e0e0e0;color:#000;text-align:left;vertical-align:bottom}
-.int_text table td{background-color:transparent}
-.int_text table-odd td{background-color:#f2f2f2}
-.int_text table-striped tr:nth-child(2n-1) td{background-color:#f2f2f2}
-.int_text table-bordered td{border-bottom:1px solid #cbcbcb}
-.int_text table-bordered tbody>tr:last-child>td{border-bottom-width:0}
-.int_text table-horizontal td, .int_text table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #cbcbcb}
-.int_text table-horizontal tbody>tr:last-child>td{border-bottom-width:0}
-
-
-div.alertbar{
-  line-height:1;
-  text-align: center;
-}
-
-div.alertbar div{
-  display: inline-block;
-  vertical-align: middle;
-  padding:0 10px;
-}
-
-div.alertbar div:nth-child(2){
-  border-right:solid 1px #cc9;
-}
-
-div.alertbar div.news{
-  font-weight:bold;
-}
-
-div.alertbar a{
-  
-}
-div.alertbar div span{
-  font-size:65%;
-  color:#aa7;
-}
-
-div.home-row{
-  border-bottom:solid 1px #ccc;
-  margin:0 auto;
-  text-align:center;
-}
-
-div.home-row div{
-  display:inline-block;
-  vertical-align:middle;
-  text-align:left;
-}
-
-div.home-row:nth-child(odd) div.big{
-  width:300px;
-}
-
-div.home-row:nth-child(odd) div.description{
-  margin-left:40px;
-  width:580px;
-}
-
-div.home-row:nth-child(even) div.description{
-  width:580px;
-}
-
-div.home-row:nth-child(even) div.big{
-  margin-left:40px;
-  width:300px;
-}
-
-.home-row h1 {
-  font-size:24px;
-  margin:24px 0;
-  font-weight:bold;
-}
-
-.home-row h2 {
-  font-size:20px;
-  margin:20px 0;
-  font-weight:bold;
-}
-
-.home-row p {
-  font-size:16px;
-  line-height:22px;
-}
-
-.home-row div.small{
-  display:none;
-}
-
-.home-row div.big{
-  display:inline-block;
-}
-
-div.home-row div pre{
-  background:#f3f5f7;
-  color:#2a333c;
-  border:solid 1px #aaa;
-  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
-  font-size: 12px;
-  line-height: 1.5;
-}
-
-div.home-row div pre span.code-underline{
-  font-weight:bold;
-  color:#000;
-  text-decoration: underline;
-}
\ No newline at end of file


[19/26] drill git commit: last-min features

Posted by ts...@apache.org.
http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
index 82ab745..7e638b6 100644
--- a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
+++ b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
@@ -58,15 +58,15 @@ analysis extremely easy.
         dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`
         limit 1;
 
-    +-------------+--------------+------------+------------+------------+------------+--------------+------------+------------+------------+------------+------------+------------+------------+---------------+
-    | business_id | full_address |   hours    |     open    | categories |            city    | review_count |        name   | longitude  |   state  |   stars          |  latitude  | attributes |          type    | neighborhoods |
-    +-------------+--------------+------------+------------+------------+------------+--------------+------------+------------+------------+------------+------------+------------+------------+---------------+
-    | vcNAWiLM4dR7D2nwwJ7nCA | 4840 E Indian School Rd
-    Ste 101
-    Phoenix, AZ 85018 | {"Tuesday":{"close":"17:00","open":"08:00"},"Friday":{"close":"17:00","open":"08:00"},"Monday":{"close":"17:00","open":"08:00"},"Wednesday":{"close":"17:00","open":"08:00"},"Thursday":{"close":"17:00","open":"08:00"},"Sunday":{},"Saturday":{}} | true              | ["Doctors","Health & Medical"] | Phoenix  | 7                   | Eric Goldberg, MD | -111.983758 | AZ       | 3.5                | 33.499313  | {"By Appointment Only":true,"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} | business   | []                  |
-    +-------------+--------------+------------+------------+------------+------------+--------------+------------+------------+------------+------------+------------+------------+------------+---------------+
+    +------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+--------------------------------+---------+--------------+-------------------+-------------+-------+-------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------------+
+    | business_id            | full_address                                       | hours                                                                                                                                                                                                                                                      | open | categories                     | city    | review_count | name              | longitude   | state | stars | latitude  | attributes                                                                                                                                                   | type     | neighborhoods |
+    +------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+--------------------------------+---------+--------------+-------------------+-------------+-------+-------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------------+
+    | vcNAWiLM4dR7D2nwwJ7nCA | 4840 E Indian School Rd Ste 101, Phoenix, AZ 85018 | {"Tuesday":{"close":"17:00","open":"08:00"},"Friday":{"close":"17:00","open":"08:00"},"Monday":{"close":"17:00","open":"08:00"},"Wednesday":{"close":"17:00","open":"08:00"},"Thursday":{"close":"17:00","open":"08:00"},"Sunday":{},"Saturday":{}}        | true | ["Doctors","Health & Medical"] | Phoenix | 7            | Eric Goldberg, MD | -111.983758 | AZ    | 3.5   | 33.499313 | {"By Appointment Only":true,"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} | business | []            |
+    +------------------------+----------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+--------------------------------+---------+--------------+-------------------+-------------+-------+-------+-----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------------+
 
-{% include startnote.html %}You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.{% include endnote.html %}
+{% include startnote.html %}This document aligns Drill output for example purposes. Drill output is not aligned in this case.{% include endnote.html %}
+
+You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.
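For instance, a Parquet or delimited text file in the same workspace can be queried with identical syntax. This is only a sketch with hypothetical file names (a CSV read this way comes back as a single `columns` array unless headers are extracted):

    0: jdbc:drill:zk=local> select * from dfs.`/users/nrentachintala/Downloads/sample.parquet` limit 1;
    0: jdbc:drill:zk=local> select columns[0], columns[1] from dfs.`/users/nrentachintala/Downloads/sample.csv` limit 1;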
 
 ### 2\. Explore the business data set further
 
@@ -128,20 +128,20 @@ analysis extremely easy.
     dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`
     where review_count > 1000 order by `review_count` desc limit 10;
 
-    +------------+------------+------------+----------------------------+
-    |    name                |   state     |    city     | review_count |
-    +------------+------------+------------+----------------------------+
-    | Mon Ami Gabi           | NV          | Las Vegas  | 4084          |
-    | Earl of Sandwich       | NV          | Las Vegas  | 3655          |
-    | Wicked Spoon           | NV          | Las Vegas  | 3408          |
-    | The Buffet             | NV          | Las Vegas  | 2791          |
-    | Serendipity 3          | NV          | Las Vegas  | 2682          |
-    | Bouchon                | NV          | Las Vegas  | 2419          |
-    | The Buffet at Bellagio | NV          | Las Vegas  | 2404          |
-    | Bacchanal Buffet       | NV          | Las Vegas  | 2369          |
-    | The Cosmopolitan of Las Vegas | NV   | Las Vegas  | 2253          |
-    | Aria Hotel & Casino    | NV          | Las Vegas  | 2224          |
-    +------------+------------+------------+----------------------------+
+    +-------------------------------+-------------+------------+---------------+
+    |           name                |   state     |    city    |  review_count |
+    +-------------------------------+-------------+------------+---------------+
+    | Mon Ami Gabi                  | NV          | Las Vegas  | 4084          |
+    | Earl of Sandwich              | NV          | Las Vegas  | 3655          |
+    | Wicked Spoon                  | NV          | Las Vegas  | 3408          |
+    | The Buffet                    | NV          | Las Vegas  | 2791          |
+    | Serendipity 3                 | NV          | Las Vegas  | 2682          |
+    | Bouchon                       | NV          | Las Vegas  | 2419          |
+    | The Buffet at Bellagio        | NV          | Las Vegas  | 2404          |
+    | Bacchanal Buffet              | NV          | Las Vegas  | 2369          |
+    | The Cosmopolitan of Las Vegas | NV          | Las Vegas  | 2253          |
+    | Aria Hotel & Casino           | NV          | Las Vegas  | 2224          |
+    +-------------------------------+-------------+------------+---------------+
 
 #### Saturday open and close times for a few businesses
 
@@ -151,9 +151,9 @@ analysis extremely easy.
     dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`
     b limit 10;
 
-    +------------+------------+----------------------------+
+    +----------------------------+------------+------------+
     |    name                    |   EXPR$1   |   EXPR$2   |
-    +------------+------------+----------------------------+
+    +----------------------------+------------+------------+
     | Eric Goldberg, MD          | 08:00      | 17:00      |
     | Pine Cone Restaurant       | null       | null       |
     | Deforest Family Restaurant | 06:00      | 22:00      |
@@ -164,7 +164,7 @@ analysis extremely easy.
     | McFarland Public Library   | 09:00      | 20:00      |
     | Green Lantern Restaurant   | 06:00      | 02:00      |
     | Spartan Animal Hospital    | 07:30      | 18:00      |
-    +------------+------------+----------------------------+
+    +----------------------------+------------+------------+
 
 Note how Drill can traverse and refer through multiple levels of nesting.
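The SELECT list of that query sits outside this hunk; a sketch of the kind of dotted, nested reference it relies on (map member access on the `hours` field, with names taken from the Yelp business schema) would look like:

    0: jdbc:drill:zk=local> select b.name, b.hours.Saturday.`open`, b.hours.Saturday.`close`
    from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` b limit 10;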
 
@@ -188,29 +188,33 @@ the data).
 Then, query the attribute’s data.
 
     0: jdbc:drill:zk=local> select attributes from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` limit 10;
-    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-    | attributes                                                                                                                                                                       |
-    +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-    | {"By Appointment Only":"true","Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} |
+
+    +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+    |                                                     attributes                                                                                                                    |
+    +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+    | {"By Appointment Only":"true","Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}}                    |
     | {"Take-out":"true","Good For":{"dessert":"false","latenight":"false","lunch":"true","dinner":"false","breakfast":"false","brunch":"false"},"Caters":"false","Noise Level":"averag |
     | {"Take-out":"true","Good For":{"dessert":"false","latenight":"false","lunch":"false","dinner":"false","breakfast":"false","brunch":"true"},"Caters":"false","Noise Level":"quiet" |
     | {"Take-out":"true","Good For":{},"Takes Reservations":"false","Delivery":"false","Ambience":{},"Parking":{"garage":"false","street":"false","validated":"false","lot":"true","val |
     | {"Take-out":"true","Good For":{},"Ambience":{},"Parking":{},"Has TV":"false","Outdoor Seating":"false","Attire":"casual","Music":{},"Hair Types Specialized In":{},"Payment Types |
-    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} |
-    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} |
-    | {"Good For":{},"Ambience":{},"Parking":{},"Wi-Fi":"free","Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} |
-    | {"Take-out":"true","Good For":{"dessert":"false","latenight":"false","lunch":"false","dinner":"true","breakfast":"false","brunch":"false"},"Noise Level":"average","Takes Reserva |
-    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}} |
-    +------------+
+    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}}                                                 |
+    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}}                                                 |
+    | {"Good For":{},"Ambience":{},"Parking":{},"Wi-Fi":"free","Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}}                                  |
+    | {"Take-out":"true","Good For":{"dessert":"false","latenight":"false","lunch":"false","dinner":"true","breakfast":"false","brunch":"false"},"Noise Level":"average"                |
+    | {"Good For":{},"Ambience":{},"Parking":{},"Music":{},"Hair Types Specialized In":{},"Payment Types":{},"Dietary Restrictions":{}}                                                 |
+    +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+{% include startnote.html %}This document aligns Drill output for example purposes. Drill output is not aligned in this case.{% include endnote.html %}
 
 Turn off the all text mode so we can continue to perform arithmetic operations
 on data.
 
     0: jdbc:drill:zk=local> alter system set `store.json.all_text_mode` = false;
-    +------------+------------+
-    |     ok             |  summary   |
-    +------------+------------+
-    | true              | store.json.all_text_mode updated. |
+    +-------+------------------------------------+
+    |  ok   |              summary               |
+    +-------+------------------------------------+
+    | true  | store.json.all_text_mode updated.  |
+    +-------+------------------------------------+
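With all text mode off, numeric fields are read as numbers again, so aggregate arithmetic works directly on them. A minimal sketch (not part of this commit's output) against the same business file:

    0: jdbc:drill:zk=local> select city, avg(stars) as avg_stars
    from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`
    group by city order by avg(stars) desc limit 5;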
 
 ### 4\. Explore the restaurant businesses in the data set
 
@@ -225,40 +229,43 @@ on data.
 
 #### Top restaurants in number of reviews
 
-    0: jdbc:drill:zk=local> select name,state,city,`review_count` from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,'Restaurants') order by `review_count` desc limit 10
-    . . . . . . . . . . . > ;
-    +------------+------------+------------+--------------+
-    |    name         |   state    |    city     | review_count |
-    +------------+------------+------------+--------------+
-    | Mon Ami Gabi | NV               | Las Vegas  | 4084         |
-    | Earl of Sandwich | NV         | Las Vegas  | 3655         |
-    | Wicked Spoon | NV             | Las Vegas  | 3408         |
-    | The Buffet | NV       | Las Vegas  | 2791         |
-    | Serendipity 3 | NV              | Las Vegas  | 2682         |
-    | Bouchon       | NV         | Las Vegas  | 2419           |
-    | The Buffet at Bellagio | NV             | Las Vegas  | 2404         |
-    | Bacchanal Buffet | NV        | Las Vegas  | 2369         |
-    | Hash House A Go Go | NV                | Las Vegas  | 2201         |
-    | Mesa Grill | NV         | Las Vegas  | 2004         |
-    +------------+------------+------------+--------------+
+    0: jdbc:drill:zk=local> select name,state,city,`review_count` from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,'Restaurants') order by `review_count` desc limit 10;
+
+    +------------------------+-------+-----------+--------------+
+    |          name          | state |    city   | review_count |
+    +------------------------+-------+-----------+--------------+
+    | Mon Ami Gabi           | NV    | Las Vegas | 4084         |
+    | Earl of Sandwich       | NV    | Las Vegas | 3655         |
+    | Wicked Spoon           | NV    | Las Vegas | 3408         |
+    | The Buffet             | NV    | Las Vegas | 2791         |
+    | Serendipity 3          | NV    | Las Vegas | 2682         |
+    | Bouchon                | NV    | Las Vegas | 2419         |
+    | The Buffet at Bellagio | NV    | Las Vegas | 2404         |
+    | Bacchanal Buffet       | NV    | Las Vegas | 2369         |
+    | Hash House A Go Go     | NV    | Las Vegas | 2201         |
+    | Mesa Grill             | NV    | Las Vegas | 2004         |
+    +------------------------+-------+-----------+--------------+
 
 #### Top restaurants in number of listed categories
 
     0: jdbc:drill:zk=local> select name,repeated_count(categories) as categorycount, categories from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,'Restaurants') order by repeated_count(categories) desc limit 10;
-    +------------+---------------+------------+
-    |    name         | categorycount | categories |
-    +------------+---------------+------------+
-    | Binion's Hotel & Casino | 10           | ["Arts & Entertainment","Restaurants","Bars","Casinos","Event Planning & Services","Lounges","Nightlife","Hotels & Travel","American (N |
-    | Stage Deli | 10        | ["Arts & Entertainment","Food","Hotels","Desserts","Delis","Casinos","Sandwiches","Hotels & Travel","Restaurants","Event Planning & Services"] |
-    | Jillian's  | 9               | ["Arts & Entertainment","American (Traditional)","Music Venues","Bars","Dance Clubs","Nightlife","Bowling","Active Life","Restaurants"] |
-    | Hotel Chocolat | 9               | ["Coffee & Tea","Food","Cafes","Chocolatiers & Shops","Specialty Food","Event Planning & Services","Hotels & Travel","Hotels","Restaurants"] |
-    | Hotel du Vin & Bistro Edinburgh | 9           | ["Modern European","Bars","French","Wine Bars","Event Planning & Services","Nightlife","Hotels & Travel","Hotels","Restaurants" |
-    | Elixir             | 9             | ["Arts & Entertainment","American (Traditional)","Music Venues","Bars","Cocktail Bars","Nightlife","American (New)","Local Flavor","Restaurants"] |
-    | Tocasierra Spa and Fitness | 8                  | ["Beauty & Spas","Gyms","Medical Spas","Health & Medical","Fitness & Instruction","Active Life","Day Spas","Restaurants"] |
-    | Costa Del Sol At Sunset Station | 8            | ["Steakhouses","Mexican","Seafood","Event Planning & Services","Hotels & Travel","Italian","Restaurants","Hotels"] |
-    | Scottsdale Silverado Golf Club | 8              | ["Fashion","Shopping","Sporting Goods","Active Life","Golf","American (New)","Sports Wear","Restaurants"] |
-    | House of Blues | 8               | ["Arts & Entertainment","Music Venues","Restaurants","Hotels","Event Planning & Services","Hotels & Travel","American (New)","Nightlife"] |
-    +------------+---------------+------------+
+
+    +---------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
+    | name                            | categorycount | categories                                                                                                                                        |
+    +---------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
+    | Binion's Hotel & Casino         | 10            | ["Arts & Entertainment","Restaurants","Bars","Casinos","Event Planning & Services","Lounges","Nightlife","Hotels & Travel","American]             |
+    | Stage Deli                      | 10            | ["Arts & Entertainment","Food","Hotels","Desserts","Delis","Casinos","Sandwiches","Hotels & Travel","Restaurants","Event Planning & Services"]    |
+    | Jillian's                       | 9             | ["Arts & Entertainment","American (Traditional)","Music Venues","Bars","Dance Clubs","Nightlife","Bowling","Active Life","Restaurants"]           |
+    | Hotel Chocolat                  | 9             | ["Coffee & Tea","Food","Cafes","Chocolatiers & Shops","Specialty Food","Event Planning & Services","Hotels & Travel","Hotels","Restaurants"]      |
+    | Hotel du Vin & Bistro Edinburgh | 9             | ["Modern European","Bars","French","Wine Bars","Event Planning & Services","Nightlife","Hotels & Travel","Hotels","Restaurants"]                  |
+    | Elixir                          | 9             | ["Arts & Entertainment","American (Traditional)","Music Venues","Bars","Cocktail Bars","Nightlife","American (New)","Local Flavor","Restaurants"] |
+    | Tocasierra Spa and Fitness      | 8             | ["Beauty & Spas","Gyms","Medical Spas","Health & Medical","Fitness & Instruction","Active Life","Day Spas","Restaurants"]                         |
+    | Costa Del Sol At Sunset Station | 8             | ["Steakhouses","Mexican","Seafood","Event Planning & Services","Hotels & Travel","Italian","Restaurants","Hotels"]                                |
+    | Scottsdale Silverado Golf Club  | 8             | ["Fashion","Shopping","Sporting Goods","Active Life","Golf","American (New)","Sports Wear","Restaurants"]                                         |
+    | House of Blues                  | 8             | ["Arts & Entertainment","Music Venues","Restaurants","Hotels","Event Planning & Services","Hotels & Travel","American (New)","Nightlife"]         |
+    +---------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
+
+{% include startnote.html %}This document aligns Drill output for example purposes. Drill output is not aligned in this case.{% include endnote.html %}
 
 #### Top first categories in number of review counts
 
@@ -266,20 +273,21 @@ on data.
     from dfs.`/users/nrentachintala/Downloads/yelp_academic_dataset_business.json` 
     group by categories[0] 
     order by count(categories[0]) desc limit 10;
-    +------------+---------------+
-    |   EXPR$0   | categorycount |
-    +------------+---------------+
-    | Food       | 4294          |
-    | Shopping   | 1885          |
-    | Active Life | 1676          |
-    | Bars       | 1366          |
-    | Local Services | 1351          |
-    | Mexican    | 1284          |
-    | Hotels & Travel | 1283          |
-    | Fast Food  | 963           |
+
+    +----------------------+---------------+
+    | EXPR$0               | categorycount |
+    +----------------------+---------------+
+    | Food                 | 4294          |
+    | Shopping             | 1885          |
+    | Active Life          | 1676          |
+    | Bars                 | 1366          |
+    | Local Services       | 1351          |
+    | Mexican              | 1284          |
+    | Hotels & Travel      | 1283          |
+    | Fast Food            | 963           |
     | Arts & Entertainment | 906           |
-    | Hair Salons | 901           |
-    +------------+---------------+
+    | Hair Salons          | 901           |
+    +----------------------+---------------+
 
 ### 5\. Explore the Yelp reviews dataset and combine with the businesses.
 
@@ -287,11 +295,11 @@ on data.
 
     0: jdbc:drill:zk=local> select * 
     from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_review.json` limit 1;
-    +------------+------------+------------+------------+------------+------------+------------+-------------+
-    |   votes          |  user_id   | review_id  |   stars    |            date    |    text           |          type    | business_id |
-    +------------+------------+------------+------------+------------+------------+------------+-------------+
-    | {"funny":0,"useful":2,"cool":1} | Xqd0DzHaiyRqVH3WRG7hzg | 15SdjuK7DmYqUAj6rjGowg | 5            | 2007-05-17 | dr. goldberg offers everything i look for in a general practitioner.  he's nice and easy to talk to without being patronizing; he's always on time in seeing his patients; he's affiliated with a top-notch hospital (nyu) which my parents have explained to me is very important in case something happens and you need surgery; and you can get referrals to see specialists without having to see him first.  really, what more do you need?  i'm sitting here trying to think of any complaints i have about him, but i'm really drawing a blank. | review | vcNAWiLM4dR7D2nwwJ7nCA |
-    +------------+------------+------------+------------+------------+------------+------------+-------------+
+    +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+
+    | votes                           | user_id                | review_id              | stars | date       | text                                                                 | type   | business_id            |
+    +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+
+    | {"funny":0,"useful":2,"cool":1} | Xqd0DzHaiyRqVH3WRG7hzg | 15SdjuK7DmYqUAj6rjGowg | 5     | 2007-05-17 | dr. goldberg offers everything i look for in a general practitioner. | review | vcNAWiLM4dR7D2nwwJ7nCA |
+    +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+
 
 #### Top businesses with cool rated reviews
 
@@ -305,14 +313,14 @@ of the reviews themselves.
     FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_review.json` r
     GROUP BY r.business_id having sum(r.votes.cool) > 2000 
     order by sum(r.votes.cool)  desc);
-    +------------+
-    |    name         |
-    +------------+
-    | Earl of Sandwich |
-    | XS Nightclub |
+    +-------------------------------+
+    |             name              |
+    +-------------------------------+
+    | Earl of Sandwich              |
+    | XS Nightclub                  |
     | The Cosmopolitan of Las Vegas |
-    | Wicked Spoon |
-    +------------+
+    | Wicked Spoon                  |
+    +-------------------------------+
 
 #### Create a view with the combined business and reviews data sets
 
@@ -326,19 +334,19 @@ instead of in a logical view, you can use CREATE TABLE AS SELECT syntax.
     Select b.name,b.stars,b.state,b.city,r.votes.funny,r.votes.useful,r.votes.cool, r.`date` 
     from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` b, dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_review.json` r 
     where r.business_id=b.business_id
-    +------------+------------+
-    |     ok             |  summary   |
-    +------------+------------+
-    | true              | View 'businessreviews' created successfully in 'dfs.tmp' schema |
-    +------------+------------+
+    +------------+-----------------------------------------------------------------+
+    |     ok     |                           summary                               |
+    +------------+-----------------------------------------------------------------+
+    | true       | View 'businessreviews' created successfully in 'dfs.tmp' schema |
+    +------------+-----------------------------------------------------------------+
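As the hunk header above notes, CREATE TABLE AS SELECT can be used instead of a view when you want to materialize the joined data. A hedged sketch only (the table name is hypothetical, aliases are added for the nested vote fields, and the target workspace must be writable):

    0: jdbc:drill:zk=local> create table dfs.tmp.businessreviews_tbl as
    select b.name, b.stars, b.state, b.city, r.votes.funny as funny, r.votes.useful as useful, r.votes.cool as cool, r.`date`
    from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json` b, dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_review.json` r
    where r.business_id = b.business_id;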
 
 Let’s get the total number of records from the view.
 
     0: jdbc:drill:zk=local> select count(*) as Total from dfs.tmp.businessreviews;
     +------------+
-    |   Total   |
+    |   Total    |
     +------------+
-    | 1125458       |
+    | 1125458    |
     +------------+
 
 In addition to these queries, you can get many more deeper insights using
@@ -359,30 +367,30 @@ data so you can apply even deeper SQL functionality. Here is a sample query:
 
     0: jdbc:drill:zk=local> select name, flatten(categories) as category 
     from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`  limit 20;
-    +------------+------------+
-    |    name         |   category   |
-    +------------+------------+
-    | Eric Goldberg, MD | Doctors          |
-    | Eric Goldberg, MD | Health & Medical |
-    | Pine Cone Restaurant | Restaurants |
-    | Deforest Family Restaurant | American (Traditional) |
-    | Deforest Family Restaurant | Restaurants |
-    | Culver's   | Food       |
-    | Culver's   | Ice Cream & Frozen Yogurt |
-    | Culver's   | Fast Food  |
-    | Culver's   | Restaurants |
-    | Chang Jiang Chinese Kitchen | Chinese    |
-    | Chang Jiang Chinese Kitchen | Restaurants |
-    | Charter Communications | Television Stations |
-    | Charter Communications | Mass Media |
-    | Air Quality Systems | Home Services |
-    | Air Quality Systems | Heating & Air Conditioning/HVAC |
-    | McFarland Public Library | Libraries  |
-    | McFarland Public Library | Public Services & Government |
-    | Green Lantern Restaurant | American (Traditional) |
-    | Green Lantern Restaurant | Restaurants |
-    | Spartan Animal Hospital | Veterinarians |
-    +------------+------------+
+    +-----------------------------+---------------------------------+
+    | name                        | category                        |
+    +-----------------------------+---------------------------------+
+    | Eric Goldberg, MD           | Doctors                         |
+    | Eric Goldberg, MD           | Health & Medical                |
+    | Pine Cone Restaurant        | Restaurants                     |
+    | Deforest Family Restaurant  | American (Traditional)          |
+    | Deforest Family Restaurant  | Restaurants                     |
+    | Culver's                    | Food                            |
+    | Culver's                    | Ice Cream & Frozen Yogurt       |
+    | Culver's                    | Fast Food                       |
+    | Culver's                    | Restaurants                     |
+    | Chang Jiang Chinese Kitchen | Chinese                         |
+    | Chang Jiang Chinese Kitchen | Restaurants                     |
+    | Charter Communications      | Television Stations             |
+    | Charter Communications      | Mass Media                      |
+    | Air Quality Systems         | Home Services                   |
+    | Air Quality Systems         | Heating & Air Conditioning/HVAC |
+    | McFarland Public Library    | Libraries                       |
+    | McFarland Public Library    | Public Services & Government    |
+    | Green Lantern Restaurant    | American (Traditional)          |
+    | Green Lantern Restaurant    | Restaurants                     |
+    | Spartan Animal Hospital     | Veterinarians                   |
+    +-----------------------------+---------------------------------+
 
 #### Top categories used in business reviews
 
@@ -390,20 +398,20 @@ data so you can apply even deeper SQL functionality. Here is a sample query:
     from (select flatten(categories) catl from dfs.`/yelp_academic_dataset_business.json` ) celltbl 
     group by celltbl.catl 
     order by count(celltbl.catl) desc limit 10 ;
-    +------------+-------------+
-    |    catl    | categorycnt |
-    +------------+-------------+
-    | Restaurants | 14303       |
-    | Shopping   | 6428        |
-    | Food       | 5209        |
-    | Beauty & Spas | 3421        |
-    | Nightlife  | 2870        |
-    | Bars       | 2378        |
+    +------------------+-------------+
+    | catl             | categorycnt |
+    +------------------+-------------+
+    | Restaurants      | 14303       |
+    | Shopping         | 6428        |
+    | Food             | 5209        |
+    | Beauty & Spas    | 3421        |
+    | Nightlife        | 2870        |
+    | Bars             | 2378        |
     | Health & Medical | 2351        |
-    | Automotive | 2241        |
-    | Home Services | 1957        |
-    | Fashion    | 1897        |
-    +------------+-------------+
+    | Automotive       | 2241        |
+    | Home Services    | 1957        |
+    | Fashion          | 1897        |
+    +------------------+-------------+
 
 Stay tuned for more features and upcoming activities in the Drill community.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/6ea0c7a8/_docs/tutorials/050-analyzing-highly-dynamic-datasets.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/050-analyzing-highly-dynamic-datasets.md b/_docs/tutorials/050-analyzing-highly-dynamic-datasets.md
index 1bb325f..ffbf1b3 100644
--- a/_docs/tutorials/050-analyzing-highly-dynamic-datasets.md
+++ b/_docs/tutorials/050-analyzing-highly-dynamic-datasets.md
@@ -49,35 +49,38 @@ Step 3: Start analyzing the data using SQL
 First, let’s take a look at the dataset:
 
     0: jdbc:drill:zk=local> SELECT * FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` limit 2;
-    +--------------+------------+-------------+
-    | checkin_info |    type    | business_id |
-    +--------------+------------+-------------+
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+------------------------+
+    |                                                                 checkin_info                                                                                                                                                             |    type    |      business_id       |
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+------------------------+
     | {"3-4":1,"13-5":1,"6-6":1,"14-5":1,"14-6":1,"14-2":1,"14-3":1,"19-0":1,"11-5":1,"13-2":1,"11-6":2,"11-3":1,"12-6":1,"6-5":1,"5-5":1,"9-2":1,"9-5":1,"9-6":1,"5-2":1,"7-6":1,"7-5":1,"7-4":1,"17-5":1,"8-5":1,"10-2":1,"10-5":1,"10-6":1} | checkin    | JwUE5GmEO-sH1FuwJgKBlQ |
-    | {"6-6":2,"6-5":1,"7-6":1,"7-5":1,"8-5":2,"10-5":1,"9-3":1,"12-5":1,"15-3":1,"15-5":1,"15-6":1,"16-3":1,"10-0":1,"15-4":1,"10-4":1,"8-2":1} | checkin    | uGykseHzyS5xAMWoN6YUqA |
-    +--------------+------------+-------------+
-You query the data in JSON files directly. Schema definitions in Hive store are no necessary. The names of the elements within the `checkin_info` column are different between the first and second row.
+    | {"6-6":2,"6-5":1,"7-6":1,"7-5":1,"8-5":2,"10-5":1,"9-3":1,"12-5":1,"15-3":1,"15-5":1,"15-6":1,"16-3":1,"10-0":1,"15-4":1,"10-4":1,"8-2":1}                                                                                               | checkin    | uGykseHzyS5xAMWoN6YUqA |
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------+------------------------+
 
-Drill provides a function called KVGEN (Key Value Generator) which is useful when working with complex data that contain arbitrary maps consisting of dynamic and unknown element names such as checkin_info. KVGEN turns the dynamic map into an array of key-value pairs where keys represent the dynamic element names.
+{% include startnote.html %}This document aligns Drill output for example purposes. Drill output is not aligned in this case.{% include endnote.html %}
+
+You query the data in JSON files directly. Schema definitions in the Hive metastore are not necessary. The names of the elements within the `checkin_info` column differ between the first and second rows.
+
+Drill provides a function called KVGEN (Key Value Generator) which is useful when working with complex data that contains arbitrary maps consisting of dynamic and unknown element names such as checkin_info. KVGEN turns the dynamic map into an array of key-value pairs where keys represent the dynamic element names.
 
 Let’s apply KVGEN on the `checkin_info` element to generate key-value pairs.
 
     0: jdbc:drill:zk=local> SELECT KVGEN(checkin_info) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` LIMIT 2;
-    +------------+
-    |  checkins  |
-    +------------+
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+    |                                                                    checkins                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
     | [{"key":"3-4","value":1},{"key":"13-5","value":1},{"key":"6-6","value":1},{"key":"14-5","value":1},{"key":"14-6","value":1},{"key":"14-2","value":1},{"key":"14-3","value":1},{"key":"19-0","value":1},{"key":"11-5","value":1},{"key":"13-2","value":1},{"key":"11-6","value":2},{"key":"11-3","value":1},{"key":"12-6","value":1},{"key":"6-5","value":1},{"key":"5-5","value":1},{"key":"9-2","value":1},{"key":"9-5","value":1},{"key":"9-6","value":1},{"key":"5-2","value":1},{"key":"7-6","value":1},{"key":"7-5","value":1},{"key":"7-4","value":1},{"key":"17-5","value":1},{"key":"8-5","value":1},{"key":"10-2","value":1},{"key":"10-5","value":1},{"key":"10-6","value":1}] |
-    | [{"key":"6-6","value":2},{"key":"6-5","value":1},{"key":"7-6","value":1},{"key":"7-5","value":1},{"key":"8-5","value":2},{"key":"10-5","value":1},{"key":"9-3","value":1},{"key":"12-5","value":1},{"key":"15-3","value":1},{"key":"15-5","value":1},{"key":"15-6","value":1},{"key":"16-3","value":1},{"key":"10-0","value":1},{"key":"15-4","value":1},{"key":"10-4","value":1},{"key":"8-2","value":1}] |
-    +------------+
+    | [{"key":"6-6","value":2},{"key":"6-5","value":1},{"key":"7-6","value":1},{"key":"7-5","value":1},{"key":"8-5","value":2},{"key":"10-5","value":1},{"key":"9-3","value":1},{"key":"12-5","value":1},{"key":"15-3","value":1},{"key":"15-5","value":1},{"key":"15-6","value":1},{"key":"16-3","value":1},{"key":"10-0","value":1},{"key":"15-4","value":1},{"key":"10-4","value":1},{"key":"8-2","value":1}]                                                                                                                                                                                                                                                                               |
+    +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 
 Drill provides another function to operate on complex data called ‘Flatten’ to break the list of key-value pairs resulting from ‘KVGen’ into separate rows to further apply analytic functions on it.
 
     0: jdbc:drill:zk=local> SELECT FLATTEN(KVGEN(checkin_info)) checkins FROM dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json` LIMIT 20;
-    +------------+
-    |  checkins  |
-    +------------+
-    | {"key":"3-4","value":1} |
+    +--------------------------+
+    |         checkins         |
+    +--------------------------+
+    | {"key":"3-4","value":1}  |
     | {"key":"13-5","value":1} |
-    | {"key":"6-6","value":1} |
+    | {"key":"6-6","value":1}  |
     | {"key":"14-5","value":1} |
     | {"key":"14-6","value":1} |
     | {"key":"14-2","value":1} |
@@ -88,14 +91,14 @@ Drill provides another function to operate on complex data called ‘Flatten’
     | {"key":"11-6","value":2} |
     | {"key":"11-3","value":1} |
     | {"key":"12-6","value":1} |
-    | {"key":"6-5","value":1} |
-    | {"key":"5-5","value":1} |
-    | {"key":"9-2","value":1} |
-    | {"key":"9-5","value":1} |
-    | {"key":"9-6","value":1} |
-    | {"key":"5-2","value":1} |
-    | {"key":"7-6","value":1} |
-    +------------+
+    | {"key":"6-5","value":1}  |
+    | {"key":"5-5","value":1}  |
+    | {"key":"9-2","value":1}  |
+    | {"key":"9-5","value":1}  |
+    | {"key":"9-6","value":1}  |
+    | {"key":"5-2","value":1}  |
+    | {"key":"7-6","value":1}  |
+    +--------------------------+
 
 You can get value from the data quickly by applying both KVGEN and FLATTEN functions on the datasets on the fly--no need for time-consuming schema definitions and data storage in intermediate formats.
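As a closing sketch, the two functions compose for aggregation over the flattened key-value pairs; the subquery shape below is illustrative rather than part of this commit:

    0: jdbc:drill:zk=local> select checkintbl.checkins.`key` as hourday, sum(checkintbl.checkins.`value`) as totalcheckins
    from (select flatten(kvgen(checkin_info)) checkins
    from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_checkin.json`) checkintbl
    group by checkintbl.checkins.`key`
    order by sum(checkintbl.checkins.`value`) desc limit 10;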
 


[13/26] drill git commit: minor edit

Posted by ts...@apache.org.
minor edit


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d5b22a44
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d5b22a44
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d5b22a44

Branch: refs/heads/gh-pages
Commit: d5b22a444f75e00293624977359d4e7340d5be89
Parents: babf492
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 15 15:21:19 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 15 15:21:19 2015 -0700

----------------------------------------------------------------------
 _docs/tutorials/060-analyzing-social-media.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/d5b22a44/_docs/tutorials/060-analyzing-social-media.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/060-analyzing-social-media.md b/_docs/tutorials/060-analyzing-social-media.md
index 80dde9b..0be61ab 100644
--- a/_docs/tutorials/060-analyzing-social-media.md
+++ b/_docs/tutorials/060-analyzing-social-media.md
@@ -86,7 +86,7 @@ MicroStrategy provides an AWS instance of various sizes. It comes with a free 30
 
 To provision the MicroStrategy node in AWS:
 
-1. On the [MicroStrategy website](http://www.microstrategy.com/us/analytics/analytics-on-aws), click *Get started*.  
+1. On the [MicroStrategy website](http://www.microstrategy.com/us/analytics/analytics-on-aws), click **Get started**.  
 2. Select some number of users, for example, select 25 users.  
 3. Select the AWS region. Using a MapR node and MicroStrategy instance in the same AWS region is highly recommended.
 4. Click **Continue**.  


[10/26] drill git commit: social media tutorial

Posted by ts...@apache.org.
social media tutorial


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/c18b098f
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/c18b098f
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/c18b098f

Branch: refs/heads/gh-pages
Commit: c18b098fdf07f77b0b22f574414a7604f50a88b1
Parents: 10e158f
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 15 15:01:57 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 15 15:01:57 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  71 ++++++-
 .../050-json-data-model.md                      |   2 +-
 _docs/img/socialmed1.png                        | Bin 0 -> 91288 bytes
 _docs/img/socialmed10.png                       | Bin 0 -> 50143 bytes
 _docs/img/socialmed11.png                       | Bin 0 -> 21996 bytes
 _docs/img/socialmed12.png                       | Bin 0 -> 51774 bytes
 _docs/img/socialmed13.png                       | Bin 0 -> 209081 bytes
 _docs/img/socialmed2.png                        | Bin 0 -> 58175 bytes
 _docs/img/socialmed3.png                        | Bin 0 -> 37943 bytes
 _docs/img/socialmed4.png                        | Bin 0 -> 19875 bytes
 _docs/img/socialmed5.png                        | Bin 0 -> 53990 bytes
 _docs/img/socialmed6.png                        | Bin 0 -> 35748 bytes
 _docs/img/socialmed7.png                        | Bin 0 -> 59350 bytes
 _docs/img/socialmed8.png                        | Bin 0 -> 4234 bytes
 _docs/img/socialmed9.png                        | Bin 0 -> 11851 bytes
 .../030-analyzing-the-yelp-academic-dataset.md  |   6 +-
 _docs/tutorials/060-analyzing-social-media.md   | 206 +++++++++++++++++++
 17 files changed, 271 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 28fb349..5f133d5 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -185,8 +185,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Install Drill", 
-            "next_url": "/docs/install-drill/", 
+            "next_title": "Analyzing Social Media", 
+            "next_url": "/docs/analyzing-social-media/", 
             "parent": "Tutorials", 
             "previous_title": "Summary", 
             "previous_url": "/docs/summary/", 
@@ -194,6 +194,23 @@
             "title": "Analyzing Highly Dynamic Datasets", 
             "url": "/docs/analyzing-highly-dynamic-datasets/"
         }, 
+        "Analyzing Social Media": {
+            "breadcrumbs": [
+                {
+                    "title": "Tutorials", 
+                    "url": "/docs/tutorials/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Install Drill", 
+            "next_url": "/docs/install-drill/", 
+            "parent": "Tutorials", 
+            "previous_title": "Analyzing Highly Dynamic Datasets", 
+            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "relative_path": "_docs/tutorials/060-analyzing-social-media.md", 
+            "title": "Analyzing Social Media", 
+            "url": "/docs/analyzing-social-media/"
+        }, 
         "Analyzing the Yelp Academic Dataset": {
             "breadcrumbs": [
                 {
@@ -3296,8 +3313,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Highly Dynamic Datasets", 
-            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "previous_title": "Analyzing Social Media", 
+            "previous_url": "/docs/analyzing-social-media/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"
@@ -8083,14 +8100,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
+                    "next_title": "Analyzing Social Media", 
+                    "next_url": "/docs/analyzing-social-media/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Tutorials", 
+                            "url": "/docs/tutorials/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
+                    "parent": "Tutorials", 
+                    "previous_title": "Analyzing Highly Dynamic Datasets", 
+                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+                    "relative_path": "_docs/tutorials/060-analyzing-social-media.md", 
+                    "title": "Analyzing Social Media", 
+                    "url": "/docs/analyzing-social-media/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -9111,14 +9145,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
+                    "next_title": "Analyzing Social Media", 
+                    "next_url": "/docs/analyzing-social-media/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Tutorials", 
+                            "url": "/docs/tutorials/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
+                    "parent": "Tutorials", 
+                    "previous_title": "Analyzing Highly Dynamic Datasets", 
+                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+                    "relative_path": "_docs/tutorials/060-analyzing-social-media.md", 
+                    "title": "Analyzing Social Media", 
+                    "url": "/docs/analyzing-social-media/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -9358,8 +9409,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Highly Dynamic Datasets", 
-            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "previous_title": "Analyzing Social Media", 
+            "previous_url": "/docs/analyzing-social-media/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 0e8b0d3..90b69a1 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -44,7 +44,7 @@ By default, Drill does not support JSON lists of different types. For example, J
 * `store.json.all_text_mode`  
   Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR to numerical data types, such as DOUBLE or INTEGER.
 
-The default setting of `store.json.all_text_mode` and `store.json.read_numbers_as_double` options is false. Using either option prevents schema errors, but using `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`. Using `store.json.read_numbers_as_double` typically involves less explicit casting than using `store.json.all_text_mode` because you can often use the numerical data as is -\-DOUBLE.
+The default setting of `store.json.all_text_mode` and `store.json.read_numbers_as_double` options is false. Using either option prevents schema errors, but using `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`. Using `store.json.read_numbers_as_double` typically involves less explicit casting than using `store.json.all_text_mode` because you can often use the numerical data as is-\-DOUBLE.
 
 ### Handling Type Differences
 Set the `store.json.read_numbers_as_double` property to true.

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed1.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed1.png b/_docs/img/socialmed1.png
new file mode 100644
index 0000000..86c4776
Binary files /dev/null and b/_docs/img/socialmed1.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed10.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed10.png b/_docs/img/socialmed10.png
new file mode 100644
index 0000000..978c951
Binary files /dev/null and b/_docs/img/socialmed10.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed11.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed11.png b/_docs/img/socialmed11.png
new file mode 100644
index 0000000..140529d
Binary files /dev/null and b/_docs/img/socialmed11.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed12.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed12.png b/_docs/img/socialmed12.png
new file mode 100644
index 0000000..f244838
Binary files /dev/null and b/_docs/img/socialmed12.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed13.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed13.png b/_docs/img/socialmed13.png
new file mode 100644
index 0000000..db7fc6a
Binary files /dev/null and b/_docs/img/socialmed13.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed2.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed2.png b/_docs/img/socialmed2.png
new file mode 100644
index 0000000..b5d78e3
Binary files /dev/null and b/_docs/img/socialmed2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed3.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed3.png b/_docs/img/socialmed3.png
new file mode 100644
index 0000000..72d651a
Binary files /dev/null and b/_docs/img/socialmed3.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed4.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed4.png b/_docs/img/socialmed4.png
new file mode 100644
index 0000000..52f69bc
Binary files /dev/null and b/_docs/img/socialmed4.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed5.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed5.png b/_docs/img/socialmed5.png
new file mode 100644
index 0000000..40f63d2
Binary files /dev/null and b/_docs/img/socialmed5.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed6.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed6.png b/_docs/img/socialmed6.png
new file mode 100644
index 0000000..7e15565
Binary files /dev/null and b/_docs/img/socialmed6.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed7.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed7.png b/_docs/img/socialmed7.png
new file mode 100644
index 0000000..38f2d89
Binary files /dev/null and b/_docs/img/socialmed7.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed8.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed8.png b/_docs/img/socialmed8.png
new file mode 100644
index 0000000..7cc0600
Binary files /dev/null and b/_docs/img/socialmed8.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/img/socialmed9.png
----------------------------------------------------------------------
diff --git a/_docs/img/socialmed9.png b/_docs/img/socialmed9.png
new file mode 100644
index 0000000..ec55ea7
Binary files /dev/null and b/_docs/img/socialmed9.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
index c822ada..82ab745 100644
--- a/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
+++ b/_docs/tutorials/030-analyzing-the-yelp-academic-dataset.md
@@ -106,7 +106,7 @@ analysis extremely easy.
 
     0: jdbc:drill:zk=local> select stars,trunc(avg(review_count)) reviewsavg 
     from dfs.`/users/nrentachintala/Downloads/yelp/yelp_academic_dataset_business.json`
-    group by stars order by stars desc;``
+    group by stars order by stars desc;
 
     +------------+------------+
     |   stars    | reviewsavg |
@@ -263,7 +263,7 @@ on data.
 #### Top first categories in number of review counts
 
     0: jdbc:drill:zk=local> select categories[0], count(categories[0]) as categorycount 
-    from dfs.`/users/nrentachintala/Downloads/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_business.json` 
+    from dfs.`/users/nrentachintala/Downloads/yelp_academic_dataset_business.json` 
     group by categories[0] 
     order by count(categories[0]) desc limit 10;
     +------------+---------------+
@@ -387,7 +387,7 @@ data so you can apply even deeper SQL functionality. Here is a sample query:
 #### Top categories used in business reviews
 
     0: jdbc:drill:zk=local> select celltbl.catl, count(celltbl.catl) categorycnt 
-    from (select flatten(categories) catl from dfs.`/users/nrentachintala/Downloads/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_business.json` ) celltbl 
+    from (select flatten(categories) catl from dfs.`/yelp_academic_dataset_business.json` ) celltbl 
     group by celltbl.catl 
     order by count(celltbl.catl) desc limit 10 ;
     +------------+-------------+

http://git-wip-us.apache.org/repos/asf/drill/blob/c18b098f/_docs/tutorials/060-analyzing-social-media.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/060-analyzing-social-media.md b/_docs/tutorials/060-analyzing-social-media.md
new file mode 100644
index 0000000..80dde9b
--- /dev/null
+++ b/_docs/tutorials/060-analyzing-social-media.md
@@ -0,0 +1,206 @@
+---
+title: "Analyzing Social Media"
+parent: "Tutorials"
+---
+
+This tutorial covers how to analyze Twitter data in native JSON format using Apache Drill. First, you configure an environment to stream the Twitter data filtered on keywords and languages using Apache Flume, and then you analyze the data using Drill. Finally, you run interactive reports and analysis using MicroStrategy.
+
+## Social Media Analysis Prerequisites
+
+* Twitter developer account
+* AWS account
+* A MapR node on AWS
+* A MicroStrategy AWS instance
+
+## Configuring the AWS environment
+
+Configuring the environment on Amazon Web Services (AWS) consists of these tasks:
+
+* Create a Twitter Dev account and register a Twitter application  
+* Provision a preconfigured AWS MapR node with Flume and Drill  
+* Provision a MicroStrategy AWS instance  
+* Configure MicroStrategy to run reports and analyses using Drill  
+
+This tutorial assumes you are familiar with MicroStrategy. For information about using MicroStrategy, see the [MicroStrategy documentation](http://www.microstrategy.com/Strategy/media/downloads/products/cloud/cloud_aws-user-guide.pdf).
+
+----------
+
+## Establishing a Twitter Feed and Flume Credentials
+
+The following steps establish a Twitter feed and get Twitter credentials required by Flume to set up Twitter as a data source:
+
+1. Go to dev.twitter.com and sign in with your Twitter account details.  
+2. Click **Manage Your Apps** under Tools in the page footer.  
+3. Click **Create New App** and fill in the form, then create the application.
+4. On the **Keys and Access Tokens** tab, click **Create My Access Token** to create an access token. Read-only access is sufficient for creating the token.
+5. Copy the following credentials for the Twitter App that will be used to configure Flume: 
+   * Consumer Key
+   * Consumer Secret
+   * Access Token
+   * Access Token Secret
+
+----------
+
+## Provision Preconfigured MapR Node on AWS
+
+You need to provision a preconfigured MapR node on AWS from the AMI ami-4dedc47d. The AMI is already configured with Flume, Drill, and specific elements to support data streaming from Twitter and Drill query views. The AMI is publicly available under Community AMIs and has a 6GB root drive and a 100GB data drive. Because this is a small node, very large volumes of data significantly increase the response time of Twitter data queries.
+
+1. In AWS, launch an instance.  
+   The AMI is preconfigured to use an m2.2xlarge instance type with 4 vCPUs and 32GB of memory.  
+2. Select the AMI id ami-4dedc47d.  
+3. Make sure that the instance has been assigned an external IP address; an Elastic IP is preferred, but not essential.  
+4. Verify that a security group is used with open TCP and UDP ports on the node. At this time, all ports are left open on the node.
+5. After provisioning and booting up the instance, reboot the node in the AWS EC2 management interface to finalize the configuration.
+
+The node is now configured with the required Flume and Drill installation. Next, update the Flume configuration files with the required credentials and keywords.
+
+----------
+
+## Update Flume Configuration Files
+
+1. Log in as the `ec2-user` using the AWS credentials.
+2. Switch to the `mapr` user on the node using `su - mapr`.
+3. Update the Flume configuration files `flume-env.sh` and `flume.conf` in the `<FLUME HOME>/conf` directory using the Twitter app credentials from the first section. See the [sample files](https://github.com/mapr/mapr-demos/tree/master/drill-twitter-MSTR/flume).
+4. Enter the desired keywords, separated by commas.  
+   Within a single keyword, separate multiple words with a space.  
+5. Filter tweets for specific languages, if needed, by entering the ISO 639-1 [language codes](http://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) separated by a comma. If you do not need language filtering, leave the parameter blank.  
+6. Go to the `<FLUME HOME>` directory and, as user `mapr`, type `screen` on the command line.  
+7. Start Flume by typing the following command:  
+
+        ./bin/flume-ng agent --conf ./conf/ -f ./conf/flume.conf -Dflume.root.logger=INFO,console -n TwitterAgent
+8. Enter `CTRL+a`, followed by `d`, to detach from the screen session.  
+   To go back to the screen terminal, enter `screen -r` to reattach.  
+   Twitter data streams into the system.  
+9. Run the following command to verify the volume of data:
+
+         du -h /mapr/drill_demo/twitter/feed
+
+You cannot run queries until data appears in the feed directory. Allow 20-30 minutes minimum. 
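+
+Once data appears, you can run a quick sanity check from Drill. This is a minimal sketch, assuming the `dfs` storage plugin can read the `/mapr/drill_demo/twitter/feed` path shown above and that the streamed files are JSON; if the AMI defines its own workspace or views for the feed, query those instead:
+
+    -- count the tweets streamed into the feed directory so far
+    SELECT COUNT(*) AS tweet_count
+    FROM dfs.`/mapr/drill_demo/twitter/feed`;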
+
+----------
+
+## Provision a MicroStrategy AWS Instance
+
+MicroStrategy provides AWS instances in various sizes and offers a free 30-day trial for the MicroStrategy instance. AWS charges still apply for the platform and OS.
+
+To provision the MicroStrategy node in AWS:
+
+1. On the [MicroStrategy website](http://www.microstrategy.com/us/analytics/analytics-on-aws), click **Get started**.  
+2. Select the number of users; for example, select 25 users.  
+3. Select the AWS region. Using a MapR node and MicroStrategy instance in the same AWS region is highly recommended.
+4. Click **Continue**.  
+5. On the Manual Launch tab, click **Launch with EC2 Console** next to the appropriate region, and select the **r3.large** instance type.  
+   An r3.large EC2 instance is sufficient for the 25-user version.  
+6. Click **Configure Instance Details**.
+7. Select an appropriate network and availability zone, ideally the same zone and network as the MapR node that you provisioned.
+   {% include startimportant.html %}Make sure that the MicroStrategy instance has a Public IP; an Elastic IP is preferred but not essential.{% include endimportant.html %}
+8. Keep the default storage.
+9. Assign a tag to identify the instance.
+10. Select a security group that allows sufficient access from external IPs; for this tutorial, all ports are left open because security is not a concern. 
+11. In the AWS Console, launch an instance, and when AWS reports that the instance is running, select it, and click **Connect**.
+12. Click **Get Password** to get the OS Administrator password.
+
+The instance is now accessible with RDP and uses the relevant AWS credentials and security settings.
+
+----------
+
+## Configure MicroStrategy
+
+You need to configure MicroStrategy to integrate with Drill using the ODBC driver. You install a MicroStrategy package with a number of useful, prebuilt reports for working with Twitter data. You can modify the reports or use the reports as a template to create new and more interesting reports and analysis models.
+
+1. Using the ODBC Administrator, configure a system DSN named `Twitter`. The quick start version of the MapR ODBC driver requires this DSN.  
+2. [Download the quick start version of the MapR ODBC driver for Drill](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v0.08.1.0618/MapRDrillODBC32.msi).  
+3. [Configure the ODBC driver](http://drill.apache.org/docs/using-microstrategy-analytics-with-apache-drill) for Drill on MicroStrategy Analytics.  
+    The Drill object is part of the package and doesn’t need to be configured.  
+4. Use the AWS Private IP if both the MapR node and the MicroStrategy instance are located in the same region (recommended).
+5. Download the [Drill and Twitter configuration](https://github.com/mapr/mapr-demos/blob/master/drill-twitter-MSTR/MSTR/DrillTwitterProjectPackage.mmp) package for MicroStrategy on the Windows system using Git for Windows or the full GitHub for Windows.
+
+----------
+
+## Import Reports
+
+1. In MicroStrategy Developer, select **Schema > Create New Project** to create a new project.  
+2. Click **Create Project** and type a name for the new project.  
+3. Click **OK**.  
+   The Project appears in MicroStrategy Developer.  
+4. Open MicroStrategy Object Manager.  
+5. Connect to the Project Source and log in as Administrator.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed1.png)
+6. In MicroStrategy Object Manager, under MicroStrategy Analytics Modules, select the project for the package. For example, select **Twitter analysis Apache Drill**.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed2.png)
+7. Select **Tools > Import Configuration Package**.  
+8. Open the configuration package file, and click **Proceed**.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed3.png)
+   The package with the reports is available in MicroStrategy.  
+
+You can test and modify the reports in MicroStrategy Developer. Configure permissions if necessary.
+
+----------
+
+## Update the Schema
+
+1. In MicroStrategy Developer, select **Schema > Update Schema**.  
+2. In Schema Update, select all check boxes, and click **Update**.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed4.png)
+
+----------
+
+## Create a User and Set the Password
+
+1. Expand Administration.  
+2. Expand User Manager, and click **Everyone**.  
+3. Right-click to create a new user, or click **Administrator** to edit the password.  
+
+----------
+
+## About the Reports
+
+There are 18 reports in the package. Most reports prompt you to specify date ranges, output limits, and terms as needed. The package contains reports in three main categories:
+
+* Volumes: A number of reports that show the total volume of Tweets by different date and time designations.
+* Top List: Displays the top Tweets, Retweets, hashtags, and users.
+* Specific Terms: Tweets and Retweets that can be measured or listed based on terms in the text of the Tweet itself.
+
+You can copy and modify the reports or use the reports as a template for querying Twitter data using Drill. 
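+
+For a rough idea of the kind of SQL such a report template runs, here is a hedged sketch of a top-hashtags query. It assumes the feed files contain standard Twitter JSON (an `entities.hashtags` array whose elements have a `text` field) and that the `dfs` plugin can read the feed path; the actual report definitions shipped in the package may differ:
+
+    -- top 10 hashtags across the streamed tweets
+    SELECT h.tag.`text` AS hashtag, COUNT(*) AS cnt
+    FROM (
+      SELECT FLATTEN(t.entities.hashtags) AS tag
+      FROM dfs.`/mapr/drill_demo/twitter/feed` t
+    ) h
+    GROUP BY h.tag.`text`
+    ORDER BY cnt DESC
+    LIMIT 10;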
+
+You can access reports through MicroStrategy Developer or the web interface. MicroStrategy Developer provides a more powerful interface than the web interface for modifying or adding reports, but it requires RDP access to the node.
+
+----------
+
+## Using the Web Interface
+
+1. Using a web browser, enter the URL for the web interface:  
+         http://<MSTR node name or IP address>/MicroStrategy/asp/Main.aspx
+2. Log in as the User you created or as Administrator, using the credentials created initially with Developer.  
+3. On the Welcome MicroStrategy Web User page, choose the project that was used to load the analysis package: **Drill Twitter Analysis**.  
+   ![choose project]({{ site.baseurl }}/docs/img/socialmed5.png)
+4. Select **Shared Reports**.  
+   The folders with the three main categories of the reports appear.
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed6.png)
+5. Select a report, and respond to any prompts. For example, to run the Top Tweet Languages by Date Range, enter the required Date_Start and Date_End.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed7.png)
+6. Click **Run Report**.  
+   A histogram report appears showing the top tweet languages by date range.
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed8.png)
+7. To refresh the data or re-enter prompt values, select **Data > Refresh** or **Data > Re-prompt**.
+
+## Browsing the Apache Drill Twitter Analysis Reports
+
+The MicroStrategy Developer reports are located in the Public Objects folder of the project you chose for installing the package.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed9.png)
+Many of the reports require you to respond to prompts to select the desired data. For example, select the Top Hashtags report in the right-hand column. This report prompts you for a Start Date and End Date to specify the date range for data of interest; by default, data for the last two months, ending with the current date, is selected. You can also specify the limit for the number of Top Hashtags to be returned; the default is the top 10 hashtags.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed10.png)
+When you click **Finish**, a bar chart appears showing each hashtag and the number of times it appeared in the specified date range.  
+   ![project sources]({{ site.baseurl }}/docs/img/socialmed11.png)
+
+Other reports are available in the bundle. For example, this report shows total tweets by hour:
+   ![tweets by hour]({{ site.baseurl }}/docs/img/socialmed12.png)
+This report shows the top Retweets for a date range, along with each original Tweet's date and its count within the range.  
+   ![retweets report]({{ site.baseurl }}/docs/img/socialmed13.png)
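+
+The following is a rough sketch of the SQL behind a volume report such as total tweets by hour. It assumes the tweets carry the standard Twitter `created_at` string (for example, `Wed Aug 27 13:08:45 +0000 2008`) and that the `dfs` plugin can read the feed path; the package's actual reports may be built differently:
+
+    -- total tweets grouped by hour of day
+    SELECT EXTRACT(hour FROM TO_TIMESTAMP(t.`created_at`, 'EEE MMM dd HH:mm:ss Z yyyy')) AS tweet_hour,
+           COUNT(*) AS tweets
+    FROM dfs.`/mapr/drill_demo/twitter/feed` t
+    GROUP BY EXTRACT(hour FROM TO_TIMESTAMP(t.`created_at`, 'EEE MMM dd HH:mm:ss Z yyyy'))
+    ORDER BY tweet_hour;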
+
+----------
+
+## Summary
+
+In this tutorial, you learned how to configure an environment to stream Twitter data using Apache Flume. You then learned how to analyze the data in native JSON format with SQL using Apache Drill, and how to run interactive reports and analysis using MicroStrategy.
\ No newline at end of file


[06/26] drill git commit: Jason's review

Posted by ts...@apache.org.
Jason's review


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/10e158f8
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/10e158f8
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/10e158f8

Branch: refs/heads/gh-pages
Commit: 10e158f886e614f9855b648772ef6d46ac5ecd23
Parents: f58d360
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 14 15:28:07 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 14 15:28:07 2015 -0700

----------------------------------------------------------------------
 _docs/data-sources-and-file-formats/050-json-data-model.md | 4 ++--
 _docs/sql-reference/data-types/010-supported-data-types.md | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/10e158f8/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 82b8a3a..0e8b0d3 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -40,9 +40,9 @@ The following table shows SQL-JSON data type mapping:
 By default, Drill does not support JSON lists of different types. For example, JSON does not enforce types or distinguish between integers and floating point values. When reading numerical values from a JSON file, Drill distinguishes integers from floating point numbers by the presence or lack of a decimal point. If some numbers in a JSON map or array appear with and without a decimal point, such as 0 and 0.0, Drill throws a schema change error. You use the following options to read JSON lists of different types:
 
 * `store.json.read_numbers_as_double`  
-  Reads numbers from JSON files with or without a decimal point as DOUBLE. You need to cast numbers from VARCHAR to numerical data types, such as DOUBLE or INTEGER.
+  Reads numbers from JSON files with or without a decimal point as DOUBLE. You need to cast numbers from DOUBLE to other numerical types only if you cannot use the numbers as DOUBLE.
 * `store.json.all_text_mode`  
-  Reads all data from JSON files as VARCHAR. you need to cast numbers from DOUBLE to other numerical types only if you cannot use the numbers as DOUBLE.
+  Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR to numerical data types, such as DOUBLE or INTEGER.
 
 The default setting of `store.json.all_text_mode` and `store.json.read_numbers_as_double` options is false. Using either option prevents schema errors, but using `store.json.read_numbers_as_double` has an advantage over `store.json.all_text_mode`. Using `store.json.read_numbers_as_double` typically involves less explicit casting than using `store.json.all_text_mode` because you can often use the numerical data as is -\-DOUBLE.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/10e158f8/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index c588a8b..5d7fa86 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -22,7 +22,7 @@ Drill reads from and writes to data sources having a wide variety of types. Dril
 | CHARACTER VARYING, CHARACTER, CHAR, or VARCHAR*** | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
 
 
-\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.
+\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.  
 \*\* Not currently supported.  
 \*\*\* Currently, Drill supports only variable-length strings.