Posted to commits@drill.apache.org by ts...@apache.org on 2015/05/12 07:56:43 UTC

[01/25] drill git commit: Fix heading and code problems

Repository: drill
Updated Branches:
  refs/heads/gh-pages 6b7b7aa1d -> fcb4f412b


Fix heading and code problems


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/6d0d5812
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/6d0d5812
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/6d0d5812

Branch: refs/heads/gh-pages
Commit: 6d0d58126dc9a84a689d209033129f9117c1170d
Parents: 3ababd8
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 12:50:53 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 12:50:53 2015 -0700

----------------------------------------------------------------------
 .../070-configuring-user-impersonation.md               | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/6d0d5812/_docs/configure-drill/070-configuring-user-impersonation.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/070-configuring-user-impersonation.md b/_docs/configure-drill/070-configuring-user-impersonation.md
index 0aa43d8..6203ca1 100644
--- a/_docs/configure-drill/070-configuring-user-impersonation.md
+++ b/_docs/configure-drill/070-configuring-user-impersonation.md
@@ -54,7 +54,8 @@ When users query a view, Drill accesses the underlying data as the user that cre
  
 The view owner or a superuser can modify permissions on the view file directly or they can set view permissions at the system or session level prior to creating any views. Any user that alters view permissions must have write access on the directory or workspace in which they are working. See Modifying Permissions on a View File and Modifying SYSTEM|SESSION Level View Permissions. 
 
-#### Modifying Permissions on a View File
+### Modifying Permissions on a View File
+
 Only a view owner or a super user can modify permissions on a view file to change them from owner to group or world readable. Before you grant permission to users to access a view, verify that they have access to the directory or workspace in which the view file is stored.
 
 Use the `chmod` and `chown` commands with the appropriate octal code to change permissions on a view file:
@@ -64,7 +65,8 @@ Use the `chmod` and `chown` commands with the appropriate octal code to change p
     hadoop fs -chown <user>:<group> <file_name>
 Example: `hadoop fs -chmod 750 employees.drill.view`
 
-####Modifying SYSTEM|SESSION Level View Permissions
+### Modifying SYSTEM|SESSION Level View Permissions
+
 Use the `ALTER SESSION|SYSTEM` command with the `new_view_default_permissions` parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.
  
     ALTER SESSION SET `new_view_default_permissions` = '<octal_code>';
@@ -91,7 +93,7 @@ In this scenario, when Chad queries Jane’s view Drill returns an error stating
 
 If users encounter this error, you can increase the maximum hop setting to accommodate users running queries on views. When configuring the maximum number of hops that Drill can make, consider that joined views increase the number of identity transitions required for Drill to access the underlying data.
 
-#### Configuring Impersonation and Chaining
+### Configuring Impersonation and Chaining
 Chaining is a system-wide setting that applies to all views. Currently, Drill does not provide an option to  allow different chain lengths for different views.
 
 Complete the following steps on each Drillbit node to enable user impersonation, and set the maximum number of chained user hops that Drill allows:
@@ -117,7 +119,6 @@ Complete the following steps on each Drillbit node to enable user impersonation,
    * In a non-MapR environment, run the following command:  
      <DRILLINSTALL_HOME>/bin/drillbit.sh restart
 
-
 ## Impersonation and Chaining Example
 Frank is a senior HR manager at a company. Frank has access to all of the employee data because he is a member of the hr group. Frank created a table named “employees” in his home directory to store the employee data he uses. Only Frank has access to this table.
  
@@ -131,6 +132,7 @@ Frank needs to share a subset of this information with Joe who is an HR manager
 rwxr-----     frank:mgr   /user/frank/emp_mgr_view.drill.view
  
 The emp_mgr_view.drill.view file contains the following view definition:
+
 (view definition: SELECT emp_id, emp_name, emp_salary, emp_addr, emp_phone FROM \`/user/frank/employee\` WHERE emp_mgr = user())
  
 When Joe issues SELECT * FROM emp_mgr_view, Drill impersonates Frank when accessing the employee data, and the query returns the data that Joe has permission to see based on the view definition. The query results do not include any sensitive data because the view protects that information. If Joe tries to query the employees table directly, Drill returns an error or null values.
@@ -143,7 +145,7 @@ rwxr-----     joe:joeteam   /user/joe/emp_team_view.drill.view
  
 The emp_team_view.drill.view file contains the following view definition:
  
-(view definition: SELECT emp_id, emp_name, emp_phone FROM `/user/frank/emp_mgr_view.drill`);
+(view definition: SELECT emp_id, emp_name, emp_phone FROM \`/user/frank/emp_mgr_view.drill\`);
  
 When anyone on Joe’s team issues SELECT * FROM emp_team_view, Drill impersonates Joe to access the emp_team_view and then impersonates Frank to access the emp_mgr_view and the employee data. Drill returns the data that Joe’s team can see based on the view definition. If anyone on Joe’s team tries to query the emp_mgr_view or employees table directly, Drill returns an error or null values.
  

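To make the session-level permission setting described in the file above concrete, here is a minimal sketch that combines it with a view definition along the lines of Frank's emp_mgr_view. The dfs.tmp workspace and table name are illustrative assumptions, not part of the commit:

    ALTER SESSION SET `new_view_default_permissions` = '750';
    -- views created later in this session are owner- and group-readable
    CREATE VIEW dfs.tmp.emp_mgr_view AS
      SELECT emp_id, emp_name, emp_salary, emp_addr, emp_phone
      FROM dfs.tmp.`employees`
      WHERE emp_mgr = user();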

[02/25] drill git commit: Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages

Posted by ts...@apache.org.
Merge branch 'gh-pages' of https://github.com/tshiran/drill into gh-pages


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/22d4df33
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/22d4df33
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/22d4df33

Branch: refs/heads/gh-pages
Commit: 22d4df338ede29f7ce2aba3f849a87e5d4627b25
Parents: 6d0d581 6b7b7aa
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 12:51:14 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 12:51:14 2015 -0700

----------------------------------------------------------------------
 css/style.css                                   |  714 +++----
 css/video-box.css                               |   55 +
 index.html                                      |   27 +
 static/fancybox/blank.gif                       |  Bin 0 -> 43 bytes
 static/fancybox/fancybox_loading.gif            |  Bin 0 -> 6567 bytes
 static/fancybox/fancybox_loading@2x.gif         |  Bin 0 -> 13984 bytes
 static/fancybox/fancybox_overlay.png            |  Bin 0 -> 1003 bytes
 static/fancybox/fancybox_sprite.png             |  Bin 0 -> 1362 bytes
 static/fancybox/fancybox_sprite@2x.png          |  Bin 0 -> 6553 bytes
 static/fancybox/helpers/fancybox_buttons.png    |  Bin 0 -> 1080 bytes
 .../helpers/jquery.fancybox-buttons.css         |   97 +
 .../fancybox/helpers/jquery.fancybox-buttons.js |  122 ++
 .../fancybox/helpers/jquery.fancybox-media.js   |  199 ++
 .../fancybox/helpers/jquery.fancybox-thumbs.css |   55 +
 .../fancybox/helpers/jquery.fancybox-thumbs.js  |  162 ++
 static/fancybox/jquery.fancybox.css             |  274 +++
 static/fancybox/jquery.fancybox.js              | 2020 ++++++++++++++++++
 static/fancybox/jquery.fancybox.pack.js         |   46 +
 18 files changed, 3414 insertions(+), 357 deletions(-)
----------------------------------------------------------------------



[07/25] drill git commit: typo

Posted by ts...@apache.org.
typo


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/d7bc3a65
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/d7bc3a65
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/d7bc3a65

Branch: refs/heads/gh-pages
Commit: d7bc3a6554a38b9e0ba5e494464d373fcd11c331
Parents: 9a8897a
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 09:16:43 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 09:16:43 2015 -0700

----------------------------------------------------------------------
 _docs/getting-started/020-why-drill.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/d7bc3a65/_docs/getting-started/020-why-drill.md
----------------------------------------------------------------------
diff --git a/_docs/getting-started/020-why-drill.md b/_docs/getting-started/020-why-drill.md
index 61dac30..f7f4495 100644
--- a/_docs/getting-started/020-why-drill.md
+++ b/_docs/getting-started/020-why-drill.md
@@ -81,7 +81,7 @@ Drill exposes a simple and high-performance Java API to build custom functions (
 
 
 ## 9. High performance
-Drill is designed fround the ground up for high throughput and low latency. It doesn't use a general purpose execution engine like MapReduce, Tez or Spark. As a result, Drill is able to deliver its unparalleled flexibility (schema-free JSON model) without compromising performance. Drill's optimizer leverages rule- and cost-based techniques, as well as data locality and operator push-down (the ability to push down query fragments into the back-end data sources). Drill also provides a columnar and vectorized execution engine, resulting in higher memory and CPU efficiency.
+Drill is designed from the ground up for high throughput and low latency. It doesn't use a general purpose execution engine like MapReduce, Tez or Spark. As a result, Drill is able to deliver its unparalleled flexibility (schema-free JSON model) without compromising performance. Drill's optimizer leverages rule- and cost-based techniques, as well as data locality and operator push-down (the ability to push down query fragments into the back-end data sources). Drill also provides a columnar and vectorized execution engine, resulting in higher memory and CPU efficiency.
 
 ## 10. Scales from a single laptop to a 1000-node cluster
 Drill is available as a simple download you can run on your laptop. When you're ready to analyze larger datasets, simply deploy Drill on your Hadoop cluster (up to 1000 commodity servers). Drill leverages the aggregate memory in the cluster to execute queries using an optimistic pipelined model, and automatically spills to disk when the working set doesn't fit in memory.


[08/25] drill git commit: add openkb blog reference

Posted by ts...@apache.org.
add openkb blog reference


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/b65acbba
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/b65acbba
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/b65acbba

Branch: refs/heads/gh-pages
Commit: b65acbba2d332395757bd0d94fcdbaadd8167dc9
Parents: d7bc3a6
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 10:52:57 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 10:52:57 2015 -0700

----------------------------------------------------------------------
 _docs/sql-reference/sql-functions/020-data-type-conversion.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/b65acbba/_docs/sql-reference/sql-functions/020-data-type-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/020-data-type-conversion.md b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
index 611d4cf..3ad20f7 100644
--- a/_docs/sql-reference/sql-functions/020-data-type-conversion.md
+++ b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
@@ -714,9 +714,9 @@ Convert a UTC date to a timestamp offset from the UTC time zone code.
     1 row selected (0.129 seconds)
 
 ## Time Zone Limitation
-Currently Drill does not support conversion of a date, time, or timestamp from one time zone to another. The workaround is to configure Drill to use [UTC](http://www.timeanddate.com/time/aboututc.html)-based time, convert your data to UTC timestamps, and perform date/time operation in UTC.  
+Currently Drill does not support conversion of a date, time, or timestamp from one time zone to another. Queries of data associated with a time zone can return inconsistent results or an error. For more information, see the ["Understanding Drill's Timestamp and Timezone"](http://www.openkb.info/2015/05/understanding-drills-timestamp-and.html#.VUzhotpVhHw) blog. The Drill time zone is based on the operating system time zone unless you override it. To work around the limitation, configure Drill to use [UTC](http://www.timeanddate.com/time/aboututc.html)-based time, convert your data to UTC timestamps, and perform date/time operations in UTC.  
 
-1. Take a look at the Drill time zone configuration by running the TIMEOFDAY function. This function returns the local date and time with time zone information.
+1. Take a look at the Drill time zone configuration by running the TIMEOFDAY function or by querying the sys.options table. The TIMEOFDAY function returns the local date and time with time zone information. 
 
         SELECT TIMEOFDAY() FROM sys.version;
 

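As a hedged illustration of the UTC workaround described in the change above, assuming the Drillbit's JVM time zone has already been switched to UTC (the sample literals are arbitrary):

    -- verify the time zone Drill is using
    SELECT TIMEOFDAY() FROM sys.version;
    -- convert raw data to UTC timestamps before doing date/time arithmetic
    SELECT TO_TIMESTAMP(1431417403) FROM sys.version;
    SELECT TO_TIMESTAMP('2015-05-12 07:56:43', 'yyyy-MM-dd HH:mm:ss') FROM sys.version;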

[04/25] drill git commit: rip out MapR stuff

Posted by ts...@apache.org.
rip out MapR stuff


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/913b998b
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/913b998b
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/913b998b

Branch: refs/heads/gh-pages
Commit: 913b998b531ee73fcb05084e55ec84381d4b4d3b
Parents: 80bbe06
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 16:06:15 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 16:06:15 2015 -0700

----------------------------------------------------------------------
 .../050-configuring-multitenant-resources.md    | 49 ++------------------
 1 file changed, 5 insertions(+), 44 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/913b998b/_docs/configure-drill/050-configuring-multitenant-resources.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/050-configuring-multitenant-resources.md b/_docs/configure-drill/050-configuring-multitenant-resources.md
index f8ca673..2acd0e7 100644
--- a/_docs/configure-drill/050-configuring-multitenant-resources.md
+++ b/_docs/configure-drill/050-configuring-multitenant-resources.md
@@ -2,25 +2,9 @@
 title: "Configuring Multitenant Resources"
 parent: "Configuring a Multitenant Cluster"
 ---
-Drill operations are memory and CPU-intensive. Currently, Drill resources are managed outside of any cluster management service, such as the MapR warden service. In a multitenant or any other type of cluster, YARN-enabled or not, you configure memory and memory usage limits for Drill by modifying drill-env.sh as described in ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory).
+Drill operations are memory and CPU-intensive. Currently, Drill resources are managed outside of any cluster management service. In a multitenant or any other type of cluster, YARN-enabled or not, you configure memory and memory usage limits for Drill by modifying drill-env.sh as described in ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory).
 
-Configure a multitenant cluster to account for resources required for Drill. For example, on a MapR cluster, ensure warden accounts for resources required for Drill. Configuring `drill-env.sh` allocates resources for Drill to use during query execution, while configuring the following properties in `warden-drill-bits.conf` prevents warden from committing the resources to other processes.
-
-    service.heapsize.min=<some value in MB>
-    service.heapsize.max=<some value in MB>
-    service.heapsize.percent=<a whole number>
-
-{% include startimportant.html %}Set the `service.heapsize` properties in `warden.drill-bits.conf` regardless of whether you changed defaults in `drill-env.sh` or not.{% include endimportant.html %}
-
-The section, ["Configuring Drill in a YARN-enabled MapR Cluster"]({{site.baseurl}}/docs/configuring-multitenant-resources#configuring-drill-in-a-yarn-enabled-mapr-cluster) shows an example of setting the `service.heapsize` properties. The `service.heapsize.percent` is the percentage of memory for the service bounded by minimum and maximum values. Typically, users change `service.heapsize.percent`. Using a percentage has the advantage of increasing or decreasing resources according to different node's configuration. For more information about the `service.heapsize` properties, see the section, ["warden.<servicename>.conf"](http://doc.mapr.com/display/MapR/warden.%3Cservicename%3E.conf).
-
-You need to statically partition the cluster to designate which partition handles which workload. To configure resources for Drill in a MapR cluster, modify one or more of the following files in `/opt/mapr/conf/conf.d` that the installation process creates. 
-
-* `warden.drill-bits.conf`
-* `warden.nodemanager.conf`
-* `warden.resourcemanager.conf`
-
-Configure Drill memory by modifying `warden.drill-bits.conf` in YARN and non-YARN clusters. Configure other resources by modifying `warden.nodemanager.conf `and `warden.resourcemanager.conf `in a YARN-enabled cluster.
+Configure a multitenant cluster manager to account for resources required for Drill. Configuring `drill-env.sh` allocates resources for Drill to use during query execution. It might be necessary to configure the cluster manager to prevent it from committing those resources to other processes.
 
 ## Configuring Drill in a YARN-enabled MapR Cluster
 
@@ -49,34 +33,11 @@ ResourceManager and NodeManager memory in `warden.resourcemanager.conf` and
     service.heapsize.max=325
     service.heapsize.percent=2
 
-Change these settings for NodeManager and ResourceManager to reconfigure the total memory required for YARN services to run. If you want to place an upper limit on memory set YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable in `/opt/mapr/hadoop/hadoop-2.5.1/etc/hadoop/yarn-env.sh`. The `-Xmx` option is not set, allowing memory on to grow as needed.
-
-### MapReduce v1 Resources
-
-The following default settings in `/opt/mapr/conf/warden.conf` control MapReduce v1 memory:
-
-    mr1.memory.percent=50
-    mr1.cpu.percent=50
-    mr1.disk.percent=50
-
-Modify these settings to reconfigure MapReduce v1 resources to suit your application needs, as described in section ["Resource Allocation for Jobs and Applications"](http://doc.mapr.com/display/MapR/Resource+Allocation+for+Jobs+and+Applications) of the MapR documentation. Remaining memory is given to YARN applications. 
-
-
-### MapReduce v2 and other Resources
-
-You configure memory for each service by setting three values in `warden.conf`.
-
-    service.command.<servicename>.heapsize.percent
-    service.command.<servicename>.heapsize.max
-    service.command.<servicename>.heapsize.min
-
-Configure memory for other services in the same manner, as described in [MapR documentation](http://doc.mapr.com/display/MapR/warden.%3Cservicename%3E.conf)
+Change these settings for NodeManager and ResourceManager to reconfigure the total memory required for YARN services to run. To place an upper limit on memory, set the YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable. Do not set the `-Xmx` option, so that the heap can grow as needed.
 
-For more information about managing memory in a MapR cluster, see the following sections in the MapR documentation:
+### MapReduce Resources
 
-* [Memory Allocation for Nodes](http://doc.mapr.com/display/MapR/Memory+Allocation+for+Nodes)  
-* [Cluster Resource Allocation](http://doc.mapr.com/display/MapR/Cluster+Resource+Allocation)  
-* [Customizing Memory Settings for MapReduce v1](http://doc.mapr.com/display/MapR/Customize+Memory+Settings+for+MapReduce+v1)  
+Modify MapReduce memory to suit your application needs. Remaining memory is typically given to YARN applications. 
 
 ## How to Manage Drill CPU Resources
 Currently, you do not manage CPU resources within Drill. [Use Linux `cgroups`](http://en.wikipedia.org/wiki/Cgroups) to manage the CPU resources.
\ No newline at end of file


[06/25] drill git commit: add link

Posted by ts...@apache.org.
add link


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/9a8897aa
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/9a8897aa
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/9a8897aa

Branch: refs/heads/gh-pages
Commit: 9a8897aae31bc53bfcd9b4293595964145e9716a
Parents: ef3572b
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 16:42:04 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 16:42:04 2015 -0700

----------------------------------------------------------------------
 _docs/configure-drill/060-configuring-a-shared-drillbit.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/9a8897aa/_docs/configure-drill/060-configuring-a-shared-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/060-configuring-a-shared-drillbit.md b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
index 69ae187..22f46d7 100644
--- a/_docs/configure-drill/060-configuring-a-shared-drillbit.md
+++ b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
@@ -53,7 +53,7 @@ A parallelizer in the Foreman transforms the physical plan into multiple phases.
 
 ## Data Isolation
 
-Tenants can share data on a cluster using Drill views and impersonation. ??Link to impersonation doc.??
+Tenants can share data on a cluster using Drill views and [impersonation]({{site.baseurl}}/docs/configuring-user-impersonation). 
 
 
 


[13/25] drill git commit: add bob's file to nav

Posted by ts...@apache.org.
add bob's file to nav


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/35b11d29
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/35b11d29
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/35b11d29

Branch: refs/heads/gh-pages
Commit: 35b11d29f922743715e64ba356751f456620f0fa
Parents: e1e95eb
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 16:21:11 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 16:21:11 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 71 +++++++++++++++++---
 .../060-configuring-a-shared-drillbit.md        |  2 +-
 .../020-start-up-options.md                     |  2 +-
 3 files changed, 63 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/35b11d29/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index deeb210..39107cd 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -4578,14 +4578,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Query Data", 
-                    "next_url": "/docs/query-data/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Query Data", 
+                    "next_url": "/docs/query-data/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Using Tibco Spotfire with Drill", 
+                    "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+                    "title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -5167,8 +5184,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Tibco Spotfire with Drill", 
-            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -8127,6 +8144,23 @@
             "title": "Useful Research", 
             "url": "/docs/useful-research/"
         }, 
+        "Using Apache Drill with Tableau 9 Desktop": {
+            "breadcrumbs": [
+                {
+                    "title": "ODBC/JDBC Interfaces", 
+                    "url": "/docs/odbc-jdbc-interfaces/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Query Data", 
+            "next_url": "/docs/query-data/", 
+            "parent": "ODBC/JDBC Interfaces", 
+            "previous_title": "Using Tibco Spotfire with Drill", 
+            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+            "title": "Using Apache Drill with Tableau 9 Desktop", 
+            "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
+        }, 
         "Using Custom Functions in Queries": {
             "breadcrumbs": [
                 {
@@ -8537,8 +8571,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Query Data", 
-            "next_url": "/docs/query-data/", 
+            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using MicroStrategy Analytics with Drill", 
             "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
@@ -10190,14 +10224,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Query Data", 
-                    "next_url": "/docs/query-data/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Query Data", 
+                    "next_url": "/docs/query-data/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Using Tibco Spotfire with Drill", 
+                    "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+                    "title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -10585,8 +10636,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Tibco Spotfire with Drill", 
-            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"

http://git-wip-us.apache.org/repos/asf/drill/blob/35b11d29/_docs/configure-drill/060-configuring-a-shared-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/060-configuring-a-shared-drillbit.md b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
index 22f46d7..52e3db4 100644
--- a/_docs/configure-drill/060-configuring-a-shared-drillbit.md
+++ b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
@@ -13,7 +13,7 @@ Set [options in sys.options]({{site.baseurl}}/docs/configuration-options-introdu
 
 ### Example Configuration
 
-For example, you configure the queue reserved for large queries to hold a 5-query maximum. You configure the queue reserved for small queries to hold 20 queries. Users start to run queries, and Drill receives the following query requests in this order:
+For example, you configure the queue reserved for large queries for a 5-query maximum. You configure the queue reserved for small queries for 20 queries. Users start to run queries, and Drill receives the following query requests in this order:
 
 * Query A (blue): 1 billion records, Drill estimates 10 million rows will be processed  
 * Query B (red): 2 billion records, Drill estimates 20 million rows will be processed  

http://git-wip-us.apache.org/repos/asf/drill/blob/35b11d29/_docs/configure-drill/configuration-options/020-start-up-options.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/020-start-up-options.md b/_docs/configure-drill/configuration-options/020-start-up-options.md
index 8a06232..e525608 100644
--- a/_docs/configure-drill/configuration-options/020-start-up-options.md
+++ b/_docs/configure-drill/configuration-options/020-start-up-options.md
@@ -36,7 +36,7 @@ file tells Drill to scan that JAR file or associated object and include it.
 
 You can run the following query to see a list of Drill’s startup options:
 
-    SELECT * FROM sys.options WHERE type='BOOT'
+    SELECT * FROM sys.options WHERE type='BOOT';
 
 ## Configuring Start-Up Options
 

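The queue example referenced in the shared-Drillbit change above (large queue capped at 5 concurrent queries, small queue at 20) can be expressed with ALTER SYSTEM. This is a rough sketch assuming Drill's exec.queue options; verify the exact option names against sys.options on your build:

    ALTER SYSTEM SET `exec.queue.enable` = true;
    ALTER SYSTEM SET `exec.queue.large` = 5;
    ALTER SYSTEM SET `exec.queue.small` = 20;
    -- confirm the settings (and, as in the start-up options change, list boot options with type='BOOT')
    SELECT name, num_val FROM sys.options WHERE name LIKE 'exec.queue%';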

[20/25] drill git commit: rename files, fix links

Posted by ts...@apache.org.
rename files, fix links


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/51704666
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/51704666
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/51704666

Branch: refs/heads/gh-pages
Commit: 517046660e8f79672d57fd32822a040b603f6a37
Parents: 0b4cb1f
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 11 10:03:08 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 11 10:03:08 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  83 ++++++++--
 ...microstrategy-analytics-with-apache-drill.md | 153 +++++++++++++++++++
 ...-using-microstrategy-analytics-with-drill.md | 153 -------------------
 .../060-tibco-spotfire with Drill.md            |  50 ------
 .../060-using-tibco-spotfire-with-drill.md      |  50 ++++++
 _docs/tutorials/010-tutorials-introduction.md   |   2 +-
 6 files changed, 271 insertions(+), 220 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 39107cd..9f4bdbe 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -185,8 +185,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Install Drill", 
-            "next_url": "/docs/install-drill/", 
+            "next_title": "Analyzing Social Media with MicroStrategy", 
+            "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
             "parent": "Tutorials", 
             "previous_title": "Summary", 
             "previous_url": "/docs/summary/", 
@@ -194,6 +194,23 @@
             "title": "Analyzing Highly Dynamic Datasets", 
             "url": "/docs/analyzing-highly-dynamic-datasets/"
         }, 
+        "Analyzing Social Media with MicroStrategy": {
+            "breadcrumbs": [
+                {
+                    "title": "Tutorials", 
+                    "url": "/docs/tutorials/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Install Drill", 
+            "next_url": "/docs/install-drill/", 
+            "parent": "Tutorials", 
+            "previous_title": "Analyzing Highly Dynamic Datasets", 
+            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
+            "title": "Analyzing Social Media with MicroStrategy", 
+            "url": "/docs/analyzing-social-media-with-microstrategy/"
+        }, 
         "Analyzing the Yelp Academic Dataset": {
             "breadcrumbs": [
                 {
@@ -3296,8 +3313,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Highly Dynamic Datasets", 
-            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "previous_title": "Analyzing Social Media with MicroStrategy", 
+            "previous_url": "/docs/analyzing-social-media-with-microstrategy/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"
@@ -4566,7 +4583,7 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
                     "title": "Using MicroStrategy Analytics with Drill", 
                     "url": "/docs/using-microstrategy-analytics-with-drill/"
                 }, 
@@ -4583,7 +4600,7 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"
                 }, 
@@ -8066,14 +8083,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
+                    "next_title": "Analyzing Social Media with MicroStrategy", 
+                    "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Tutorials", 
+                            "url": "/docs/tutorials/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
+                    "parent": "Tutorials", 
+                    "previous_title": "Analyzing Highly Dynamic Datasets", 
+                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+                    "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
+                    "title": "Analyzing Social Media with MicroStrategy", 
+                    "url": "/docs/analyzing-social-media-with-microstrategy/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -8229,7 +8263,7 @@
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using Drill Explorer on Windows", 
             "previous_url": "/docs/using-drill-explorer-on-windows/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
             "title": "Using MicroStrategy Analytics with Drill", 
             "url": "/docs/using-microstrategy-analytics-with-drill/"
         }, 
@@ -8576,7 +8610,7 @@
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using MicroStrategy Analytics with Drill", 
             "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
             "title": "Using Tibco Spotfire with Drill", 
             "url": "/docs/using-tibco-spotfire-with-drill/"
         }, 
@@ -9077,14 +9111,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
+                    "next_title": "Analyzing Social Media with MicroStrategy", 
+                    "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "Tutorials", 
+                            "url": "/docs/tutorials/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
+                    "parent": "Tutorials", 
+                    "previous_title": "Analyzing Highly Dynamic Datasets", 
+                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+                    "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
+                    "title": "Analyzing Social Media with MicroStrategy", 
+                    "url": "/docs/analyzing-social-media-with-microstrategy/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -9324,8 +9375,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Highly Dynamic Datasets", 
-            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
+            "previous_title": "Analyzing Social Media with MicroStrategy", 
+            "previous_url": "/docs/analyzing-social-media-with-microstrategy/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"
@@ -10212,7 +10263,7 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
                     "title": "Using MicroStrategy Analytics with Drill", 
                     "url": "/docs/using-microstrategy-analytics-with-drill/"
                 }, 
@@ -10229,7 +10280,7 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"
                 }, 

http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
new file mode 100755
index 0000000..95b847a
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
@@ -0,0 +1,153 @@
+---
+title: "Using MicroStrategy Analytics with Drill"
+parent: "ODBC/JDBC Interfaces"
+---
+Apache Drill is certified with the MicroStrategy Analytics Enterprise Platform™. You can connect MicroStrategy Analytics Enterprise to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data without having to rely on IT teams for schema creation.
+
+Complete the following steps to use Apache Drill with MicroStrategy Analytics Enterprise:
+ 
+1.  Install the Drill ODBC driver from MapR.
+2.	Configure the MicroStrategy Drill Object.
+3.	Create the MicroStrategy database connection for Drill.
+4.	Query and analyze the data.
+
+----------
+
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
+
+Complete the following steps to install and configure the driver:
+
+1.	Download the driver from the following location: 
+
+    http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/
+
+    {% include startnote.html %}Use the 32-bit Windows driver for MicroStrategy 9.4.1.{% include endnote.html %}
+
+2.	Complete steps 2-8 under *Installing the Driver* on the following page: 
+
+    https://cwiki.apache.org/confluence/display/DRILL/Using+the+MapR+ODBC+Driver+on+Windows
+3.	Complete the steps on the following page to configure the driver:
+
+    https://cwiki.apache.org/confluence/display/DRILL/Step+2.+Configure+ODBC+Connections+to+Drill+Data+Sources 
+
+    {% include startnote.html %}Verify that you are using the 32-bit driver since both drivers can coexist on the same machine.{% include endnote.html %} 
+
+	a.	Verify the version number of the driver.
+
+    	 
+	b.	Click Test to verify that the ODBC configuration works before using it with MicroStrategy.
+
+    ![]({{ site.baseurl }}/docs/img/image_2.png)
+
+----------
+
+
+### Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise 
+The steps listed in this section were created based on the MicroStrategy Technote for installing DBMS objects which you can reference at: 
+
+http://community.microstrategy.com/t5/Database/TN43537-How-to-install-DBMS-objects-provided-by-MicroStrategy/ta-p/193352
+
+
+Complete the following steps to install the Drill Object on MicroStrategy Analytics Enterprise:
+
+1. Obtain the Drill Object from MicroStrategy Technical Support. The Drill Object is contained in a file named `MapR_Drill.PDS`. When you get this file, store it locally in your Windows file system.
+2. Open MicroStrategy Developer. 
+3. Expand Administration, and open Configuration Manager.
+4. Select **Database Instances**.
+   ![]({{ site.baseurl }}/docs/img/image_3.png)
+5. Right-click in the area where the current database instances display. 
+   ![]({{ site.baseurl }}/docs/img/image_4.png)
+6. Select **New – Database Instance**. 
+7. Once the Database Instances window opens, select **Upgrade**.
+   ![]({{ site.baseurl }}/docs/img/image_5.png)
+8. Enter the path and file name for the Drill Object file in the DB types script file field. Alternatively, you can use the browse button next to the field to search for the file. 
+   ![]({{ site.baseurl }}/docs/img/image_6.png)
+9.  Click **Load**. 
+10.	Once loaded, select the MapR Drill database type in the left column.
+11.	Click **>** to load MapR Drill into **Existing database types**. 
+12.	Click **OK** to save the database type.
+13.	Restart MicroStrategy Intelligence Server if it is used for the project source.
+   ![]({{ site.baseurl }}/docs/img/image_7.png)
+
+MicroStrategy Analytics Enterprise can now access Apache Drill.
+
+
+----------
+
+### Step 3: Create the MicroStrategy database connection for Apache Drill
+Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:
+
+1. In MicroStrategy  Developer, select **Administration > Database Instance Wizard**.
+   ![]({{ site.baseurl }}/docs/img/image_8.png)
+2. Enter a name for the database, and select **MapR Drill** as the Database type from the drop-down menu.
+   ![]({{ site.baseurl }}/docs/img/image_9.png)
+3. Click **Next**. 
+4. Select the ODBC DSN that you configured with the ODBC Administrator.
+   ![]({{ site.baseurl }}/docs/img/image_10.png)
+5. Provide the login information for the connection and then click **Finish**.
+
+You can now use MicroStrategy Analytics Enterprise to access Drill as a database instance. 
+
+----------
+
+
+### Step 4: Query and Analyze the Data
+This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. 
+
+#### Scenario
+The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to the following page for more information about how to configure and use Drill to work with complex data:
+
+https://cwiki.apache.org/confluence/display/DRILL/Query+Data
+
+#### Part 1: Create a Project
+Complete the following steps to create a project:
+
+1. In MicroStrategy Developer, use the Project Creation Assistant to create a new project.
+   ![]({{ site.baseurl }}/docs/img/image_11.png)
+2.  Once the Assistant starts, click **Create Project**, and enter a name for the new project. 
+3.	Click **OK**. 
+4.	Click **Select tables from the Warehouse Catalog**. 
+5.	Select the Drill database instance connection from the drop down list, and click **OK**.	MicroStrategy queries Drill and displays all of the available tables and views.
+   ![]({{ site.baseurl }}/docs/img/image_12.png)
+6.	Select the two views created for the Twitter Data.
+7.	Use **>** to move the views to **Tables being used in the project**. 
+8.	Click **Save and Close**.
+9.	Click **OK**. The new project is created in MicroStrategy Developer. 
+
+#### Part 2: Create a Freeform Report to Analyze Data
+Complete the following steps to create a Freeform Report and analyze data:
+
+1.	In Developer, open the Project and then open Public Objects.
+2.	Click **Reports**.
+3.	Right-click in the pane on the right, and select **New > Report**.
+   ![]({{ site.baseurl }}/docs/img/image_13.png)
+4.	Click the **Freeform Sources** tab, and select the Drill data source.
+   ![]({{ site.baseurl }}/docs/img/image_14.png)
+5.	Verify that **Create Freeform SQL Report** is selected, and click **OK**. This allows you to enter a quick query to gather data. The Freeform SQL Editor window appears.
+   ![]({{ site.baseurl }}/docs/img/image_15.png)
+6.	Enter a SQL query in the field provided. Attributes specified display. 
+In this scenario, a simple query that selects and groups the tweet source and counts the number of times the same source appeared in a day is entered. The tweet source was added as a text metric and the count as a number. 
+7.	Click **Data/Run Report** to run the query. A bar chart displays the output.
+   ![]({{ site.baseurl }}/docs/img/image_16.png)
+
+You can see that there are three major sources for the captured tweets. You can change the view to tabular format and apply a filter to see that iPhone, Android, and Web Client are the three major sources of tweets for this specific data set.
+![]({{ site.baseurl }}/docs/img/image_17.png)
+
+In this scenario, you learned how to configure MicroStrategy Analytics Enterprise to work with Apache Drill. 
+
+----------
+
+### Certification Links
+
+MicroStrategy announced post certification of Drill 0.6 and 0.7 with MicroStrategy Analytics Enterprise 9.4.1
+
+
+http://community.microstrategy.com/t5/Database/TN225724-Post-Certification-of-MapR-Drill-0-6-and-0-7-with/ta-p/225724
+
+http://community.microstrategy.com/t5/Release-Notes/TN231092-Certified-Database-and-ODBC-configurations-for/ta-p/231092
+
+http://community.microstrategy.com/t5/Release-Notes/TN231094-Certified-Database-and-ODBC-configurations-for/ta-p/231094   
+

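The Freeform SQL report step in the file added above only describes the query in prose. A hedged sketch of what such a query could look like against a Drill view over the Twitter JSON data follows; the view name and column names are hypothetical:

    SELECT `source`, tweet_date, COUNT(*) AS tweet_count
    FROM dfs.views.`twitter_tweets_view`
    GROUP BY `source`, tweet_date
    ORDER BY tweet_count DESC;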
http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md
deleted file mode 100755
index 95b847a..0000000
--- a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-drill.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-title: "Using MicroStrategy Analytics with Drill"
-parent: "ODBC/JDBC Interfaces"
----
-Apache Drill is certified with the MicroStrategy Analytics Enterprise Platform™. You can connect MicroStrategy Analytics Enterprise to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data without having to rely on IT teams for schema creation.
-
-Complete the following steps to use Apache Drill with MicroStrategy Analytics Enterprise:
- 
-1.  Install the Drill ODBC driver from MapR.
-2.	Configure the MicroStrategy Drill Object.
-3.	Create the MicroStrategy database connection for Drill.
-4.	Query and analyze the data.
-
-----------
-
-
-### Step 1: Install and Configure the MapR Drill ODBC Driver 
-
-Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
-
-Complete the following steps to install and configure the driver:
-
-1.	Download the driver from the following location: 
-
-    http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/
-
-    {% include startnote.html %}Use the 32-bit Windows driver for MicroStrategy 9.4.1.{% include endnote.html %}
-
-2.	Complete steps 2-8 under *Installing the Driver* on the following page: 
-
-    https://cwiki.apache.org/confluence/display/DRILL/Using+the+MapR+ODBC+Driver+on+Windows
-3.	Complete the steps on the following page to configure the driver:
-
-    https://cwiki.apache.org/confluence/display/DRILL/Step+2.+Configure+ODBC+Connections+to+Drill+Data+Sources 
-
-    {% include startnote.html %}Verify that you are using the 32-bit driver since both drivers can coexist on the same machine.{% include endnote.html %} 
-
-	a.	Verify the version number of the driver.
-
-    	 
-	b.	Click Test to verify that the ODBC configuration works before using it with MicroStrategy.
-
-    ![]({{ site.baseurl }}/docs/img/image_2.png)
-
-----------
-
-
-### Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise 
-The steps listed in this section were created based on the MicroStrategy Technote for installing DBMS objects which you can reference at: 
-
-http://community.microstrategy.com/t5/Database/TN43537-How-to-install-DBMS-objects-provided-by-MicroStrategy/ta-p/193352
-
-
-Complete the following steps to install the Drill Object on MicroStrategy Analytics Enterprise:
-
-1. Obtain the Drill Object from MicroStrategy Technical Support. The Drill Object is contained in a file named `MapR_Drill.PDS`. When you get this file, store it locally in your Windows file system.
-2. Open MicroStrategy Developer. 
-3. Expand Administration, and open Configuration Manager.
-4. Select **Database Instances**.
-   ![]({{ site.baseurl }}/docs/img/image_3.png)
-5. Right-click in the area where the current database instances display. 
-   ![]({{ site.baseurl }}/docs/img/image_4.png)
-6. Select **New – Database Instance**. 
-7. Once the Database Instances window opens, select **Upgrade**.
-   ![]({{ site.baseurl }}/docs/img/image_5.png)
-8. Enter the path and file name for the Drill Object file in the DB types script file field. Alternatively, you can use the browse button next to the field to search for the file. 
-   ![]({{ site.baseurl }}/docs/img/image_6.png)
-9.  Click **Load**. 
-10.	Once loaded, select the MapR Drill database type in the left column.
-11.	Click **>** to load MapR Drill into **Existing database types**. 
-12.	Click **OK** to save the database type.
-13.	Restart MicroStrategy Intelligence Server if it is used for the project source.
-   ![]({{ site.baseurl }}/docs/img/image_7.png)
-
-MicroStrategy Analytics Enterprise can now access Apache Drill.
-
-
-----------
-
-### Step 3: Create the MicroStrategy database connection for Apache Drill
-Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:
-
-1. In MicroStrategy  Developer, select **Administration > Database Instance Wizard**.
-   ![]({{ site.baseurl }}/docs/img/image_8.png)
-2. Enter a name for the database, and select **MapR Drill** as the Database type from the drop-down menu.
-   ![]({{ site.baseurl }}/docs/img/image_9.png)
-3. Click **Next**. 
-4. Select the ODBC DSN that you configured with the ODBC Administrator.
-   ![]({{ site.baseurl }}/docs/img/image_10.png)
-5. Provide the login information for the connection and then click **Finish**.
-
-You can now use MicroStrategy Analytics Enterprise to access Drill as a database instance. 
-
-----------
-
-
-### Step 4: Query and Analyze the Data
-This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. 
-
-####Scenario
-The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to the following page for more information about how to configure and use Drill to work with complex data:
-
-https://cwiki.apache.org/confluence/display/DRILL/Query+Data
-
-####Part 1: Create a Project
-Complete the following steps to create a project:
-
-1. In MicroStrategy Developer, use the Project Creation Assistant to create a new project.
-   ![]({{ site.baseurl }}/docs/img/image_11.png)
-2.  Once the Assistant starts, click **Create Project**, and enter a name for the new project. 
-3.	Click **OK**. 
-4.	Click **Select tables from the Warehouse Catalog**. 
-5.	Select the Drill database instance connection from the drop down list, and click **OK**.	MicroStrategy queries Drill and displays all of the available tables and views.
-   ![]({{ site.baseurl }}/docs/img/image_12.png)
-6.	Select the two views created for the Twitter Data.
-7.	Use **>** to move the views to **Tables being used in the project**. 
-8.	Click **Save and Close**.
-9.	Click **OK**. The new project is created in MicroStrategy Developer. 
-
-####Part 2: Create a Freeform Report to Analyze Data
-Complete the following steps to create a Freeform Report and analyze data:
-
-1.	In Developer, open the Project and then open Public Objects.
-2.	Click **Reports**.
-3.	Right-click in the pane on the right, and select **New > Report**.
-   ![]({{ site.baseurl }}/docs/img/image_13.png)
-4.	Click the **Freeform Soures** tab, and select the Drill data source.
-   ![]({{ site.baseurl }}/docs/img/image_14.png)
-5.	Verify that **Create Freeform SQL Report** is selected, and click **OK**. This allows you to enter a quick query to gather data. The Freeform SQL Editor window appears.
-   ![]({{ site.baseurl }}/docs/img/image_15.png)
-6.	Enter a SQL query in the field provided. Attributes specified display. 
-In this scenario, a simple query that selects and groups the tweet source and counts the number of times the same source appeared in a day is entered. The tweet source was added as a text metric and the count as a number. 
-7.	Click **Data/Run Report** to run the query. A bar chart displays the output.
-   ![]({{ site.baseurl }}/docs/img/image_16.png)
-
-You can see that there are three major sources for the captured tweets. You can change the view to tabular format and apply a filter to see that iPhone, Android, and Web Client are the three major sources of tweets for this specific data set.
-![]({{ site.baseurl }}/docs/img/image_17.png)
-
-In this scenario, you learned how to configure MicroStrategy Analytics Enterprise to work with Apache Drill. 
-
-----------
-
-### Certification Links
-
-MicroStrategy announced post certification of Drill 0.6 and 0.7 with MicroStrategy Analytics Enterprise 9.4.1
-
-
-http://community.microstrategy.com/t5/Database/TN225724-Post-Certification-of-MapR-Drill-0-6-and-0-7-with/ta-p/225724
-
-http://community.microstrategy.com/t5/Release-Notes/TN231092-Certified-Database-and-ODBC-configurations-for/ta-p/231092
-
-http://community.microstrategy.com/t5/Release-Notes/TN231094-Certified-Database-and-ODBC-configurations-for/ta-p/231094   
-

http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md b/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md
deleted file mode 100755
index 65c5d64..0000000
--- a/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md	
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Using Tibco Spotfire with Drill"
-parent: "ODBC/JDBC Interfaces"
----
-Tibco Spotfire Desktop is a powerful analytic tool that enables SQL statements when connecting to data sources. Spotfire Desktop can utilize the powerful query capabilities of Apache Drill to query complex data structures. Use the MapR Drill ODBC Driver to configure Tibco Spotfire Desktop with Apache Drill.
-
-To use Spotfire Desktop with Apache Drill, complete the following steps:
-
-1.  Install the Drill ODBC Driver from MapR.
-2.	Configure the Spotfire Desktop data connection for Drill.
-
-----------
-
-
-### Step 1: Install and Configure the MapR Drill ODBC Driver 
-
-Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
-
-Complete the following steps to install and configure the driver:
-
-1.    Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
-**Note:** Spotfire Desktop 6.5.1 utilizes the 64-bit ODBC driver.
-2.    Complete steps 2-8 under on the following page to install the driver:<br> 
-[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
-3.    Complete the steps on the following page to configure the driver:<br>
-[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
-
-----------
-
-
-### Step 2: Configure the Spotfire Desktop Data Connection for Drill 
-Complete the following steps to configure a Drill data connection: 
-
-1. Select the **Add Data Connection** option or click the Add Data Connection button in the menu bar, as shown in the image below:![](http://i.imgur.com/p3LNNBs.png)
-2. When the dialog window appears, click the **Add** button, and select **Other/Database** from the dropdown list.![](http://i.imgur.com/u1g9kaT.png)
-3. In the Open Database window that appears, select **Odbc Data Provider** and then click **Configure**. ![](http://i.imgur.com/8Gu0GAZ.png)
-4. In the Configure Data Source Connection window that appears, select the Drill DSN that you configured in the ODBC administrator, and enter the relevant credentials for Drill.<br> ![](http://i.imgur.com/Yd6BKls.png) 
-5. Click **OK** to continue. The Spotfire Desktop queries the Drill metadata for available schemas, tables, and views. You can navigate the schemas in the left-hand column. After you select a specific view or table, the relevant SQL displays in the right-hand column. 
-![](http://i.imgur.com/wNBDs5q.png)
-6. Optionally, you can modify the SQL to work best with Drill. Simply change the schema.table.* notation in the SELECT statement to simply * or the relevant column names that are needed. 
-Note that Drill has certain reserved keywords that you must put in back ticks [ ` ] when needed. See [Drill Reserved Keywords](http://drill.apache.org/docs/reserved-keywords/).
-7. Once the SQL is complete, provide a name for the Data Source and click **OK**. Spotfire Desktop queries Drill and retrieves the data for analysis. You can use the functionality of Spotfire Desktop to work with the data.
-![](http://i.imgur.com/j0MWorh.png)
-
-**NOTE:** You can use the SQL statement column to query data and complex structures that do not display in the left-hand schema column. A good example is JSON files in the file system.
-
-**SQL Example:**<br>
-SELECT t.trans_id, t.`date`, t.user_info.cust_id as cust_id, t.user_info.device as device FROM dfs.clicks.`/clicks/clicks.campaign.json` t
-
-----------

http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md b/_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md
new file mode 100755
index 0000000..65c5d64
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md
@@ -0,0 +1,50 @@
+---
+title: "Using Tibco Spotfire with Drill"
+parent: "ODBC/JDBC Interfaces"
+---
+Tibco Spotfire Desktop is a powerful analytic tool that supports custom SQL statements when connecting to data sources. Spotfire Desktop can use the query capabilities of Apache Drill to explore complex data structures. Use the MapR Drill ODBC Driver to configure Tibco Spotfire Desktop with Apache Drill.
+
+To use Spotfire Desktop with Apache Drill, complete the following steps:
+
+1.  Install the Drill ODBC Driver from MapR.
+2.	Configure the Spotfire Desktop data connection for Drill.
+
+----------
+
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download corresponds to the Apache Drill version that you use. Ideally, you should upgrade to the latest versions of Apache Drill and the MapR Drill ODBC Driver. 
+
+Complete the following steps to install and configure the driver:
+
+1.    Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Spotfire Desktop 6.5.1 uses the 64-bit ODBC driver.
+2.    Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3.    Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+
+----------
+
+
+### Step 2: Configure the Spotfire Desktop Data Connection for Drill 
+Complete the following steps to configure a Drill data connection: 
+
+1. Select the **Add Data Connection** option or click the Add Data Connection button in the menu bar, as shown in the image below:![](http://i.imgur.com/p3LNNBs.png)
+2. When the dialog window appears, click the **Add** button, and select **Other/Database** from the dropdown list.![](http://i.imgur.com/u1g9kaT.png)
+3. In the Open Database window that appears, select **Odbc Data Provider** and then click **Configure**. ![](http://i.imgur.com/8Gu0GAZ.png)
+4. In the Configure Data Source Connection window that appears, select the Drill DSN that you configured in the ODBC administrator, and enter the relevant credentials for Drill.<br> ![](http://i.imgur.com/Yd6BKls.png) 
+5. Click **OK** to continue. Spotfire Desktop queries the Drill metadata for available schemas, tables, and views. You can navigate the schemas in the left-hand column. After you select a specific view or table, the relevant SQL displays in the right-hand column. 
+![](http://i.imgur.com/wNBDs5q.png)
+6. Optionally, you can modify the SQL to work best with Drill. Change the schema.table.* notation in the SELECT statement to * or to only the column names that you need. 
+Note that Drill has certain reserved keywords that you must enclose in back ticks [ ` ] when you use them. See [Drill Reserved Keywords](http://drill.apache.org/docs/reserved-keywords/).
+7. Once the SQL is complete, provide a name for the Data Source and click **OK**. Spotfire Desktop queries Drill and retrieves the data for analysis. You can use the functionality of Spotfire Desktop to work with the data.
+![](http://i.imgur.com/j0MWorh.png)
+
+**NOTE:** You can use the SQL statement column to query data and complex structures that do not display in the left-hand schema column. A good example is JSON files in the file system.
+
+**SQL Example:**
+
+    SELECT t.trans_id, t.`date`, t.user_info.cust_id AS cust_id, t.user_info.device AS device
+    FROM dfs.clicks.`/clicks/clicks.campaign.json` t
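+
+The following sketch (which assumes the same hypothetical `clicks.campaign.json` sample file and `dfs.clicks` workspace) shows the step 6 advice of selecting only the columns you need, with the reserved keyword `date` enclosed in back ticks:
+
+    -- Hypothetical example: select two columns instead of schema.table.*
+    SELECT t.trans_id, t.`date`
+    FROM dfs.clicks.`/clicks/clicks.campaign.json` t
+    LIMIT 10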
+
+----------

http://git-wip-us.apache.org/repos/asf/drill/blob/51704666/_docs/tutorials/010-tutorials-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/010-tutorials-introduction.md b/_docs/tutorials/010-tutorials-introduction.md
index 1c74a39..f9c6263 100644
--- a/_docs/tutorials/010-tutorials-introduction.md
+++ b/_docs/tutorials/010-tutorials-introduction.md
@@ -14,7 +14,7 @@ If you've never used Drill, use these tutorials to download, install, and start
   Delve into changing data without creating a schema or going through an ETL phase.
 * [Tableau Examples]({{site.baseurl}}/docs/tableau-examples)  
   Access Hive tables in Tableau.  
-* [Using MicroStrategy Analytics with Drill]({{site.baseurl}}/docs/using-microstrategy-analytics-with-drill/)  
+* [Using MicroStrategy Analytics with Apache Drill]({{site.baseurl}}/docs/using-microstrategy-analytics-with-apache-drill/)  
   Use the Drill ODBC driver from MapR to analyze data and generate a report using Drill from the MicroStrategy UI.  
 * [Using Drill to Analyze Amazon Spot Prices](https://github.com/vicenteg/spot-price-history#drill-workshop---amazon-spot-prices)  
   A Drill workshop on github that covers views of JSON and Parquet data.  


[25/25] drill git commit: Improved video widget and responsive bug fixes

Posted by ts...@apache.org.
Improved video widget and responsive bug fixes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/fcb4f412
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/fcb4f412
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/fcb4f412

Branch: refs/heads/gh-pages
Commit: fcb4f412bb27fc55a8a9525418cc151590b90259
Parents: 4f410f9
Author: Tomer Shiran <ts...@gmail.com>
Authored: Mon May 11 22:44:40 2015 -0700
Committer: Tomer Shiran <ts...@gmail.com>
Committed: Mon May 11 22:44:40 2015 -0700

----------------------------------------------------------------------
 _includes/authors.html           |   1 +
 css/responsive.css               |  19 +++++++++++--------
 css/style.css                    |  28 ++++++++++++++++++++++++++++
 css/video-slider.css             |  34 ++++++++++++++++++++++++++++++++++
 images/play-mq.png               | Bin 0 -> 1050 bytes
 images/thumbnail-65c42i7Xg7Q.jpg | Bin 0 -> 12659 bytes
 images/thumbnail-6pGeQOXDdD8.jpg | Bin 0 -> 13315 bytes
 images/thumbnail-MYY51kiFPTk.jpg | Bin 0 -> 13058 bytes
 images/thumbnail-bhmNbH2yzhM.jpg | Bin 0 -> 14299 bytes
 index.html                       |  29 ++++++++++++++++-------------
 10 files changed, 90 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/_includes/authors.html
----------------------------------------------------------------------
diff --git a/_includes/authors.html b/_includes/authors.html
new file mode 100644
index 0000000..30d358b
--- /dev/null
+++ b/_includes/authors.html
@@ -0,0 +1 @@
+{% for alias in post.authors %}{% assign author = site.data.authors[alias] %}{{ author.name }}{% unless forloop.last %}, {% endunless %}{% endfor %}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/css/responsive.css
----------------------------------------------------------------------
diff --git a/css/responsive.css b/css/responsive.css
index b5ba351..81714a5 100644
--- a/css/responsive.css
+++ b/css/responsive.css
@@ -191,18 +191,21 @@
   }
   #header .scroller .item div.headlines .btn { font-size: 16px; }
 
-  div.alertbar .hor-bar:after {
-    content: "";
-  }
   div.alertbar {
     text-align: left;
-    padding-left: 25px;
-  }
-  div.alertbar a {
-    display:block;
+    padding:0 25px;
   }
-  div.alertbar span.strong {
+  div.alertbar div {
     display: block;
+    padding:10px 0;
+  }
+  div.alertbar div:nth-child(1){
+    border-right:none;
+    border-bottom:solid 1px #cc9;
+  }
+  div.alertbar div:nth-child(2){
+    border-right:none;
+    border-bottom:solid 1px #cc9;
   }
   table.intro {
     width: 100%;

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/css/style.css
----------------------------------------------------------------------
diff --git a/css/style.css b/css/style.css
index 9828806..aa8cfb5 100755
--- a/css/style.css
+++ b/css/style.css
@@ -824,3 +824,31 @@ li p {
 .int_text table-bordered tbody>tr:last-child>td{border-bottom-width:0}
 .int_text table-horizontal td, .int_text table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #cbcbcb}
 .int_text table-horizontal tbody>tr:last-child>td{border-bottom-width:0}
+
+
+div.alertbar{
+  line-height:1;
+  text-align: center;
+}
+
+div.alertbar div{
+  display: inline-block;
+  vertical-align: middle;
+  padding:0 10px;
+}
+
+div.alertbar div:nth-child(2){
+  border-right:solid 1px #cc9;
+}
+
+div.alertbar div.news{
+  font-weight:bold;
+}
+
+div.alertbar a{
+  
+}
+div.alertbar div span{
+  font-size:65%;
+  color:#aa7;
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/css/video-slider.css
----------------------------------------------------------------------
diff --git a/css/video-slider.css b/css/video-slider.css
new file mode 100644
index 0000000..e80ece7
--- /dev/null
+++ b/css/video-slider.css
@@ -0,0 +1,34 @@
+div#video-slider{
+  width:260px;
+  float:right;
+}
+
+div.slide{
+  position:relative;
+  padding:0px 0px;
+}
+
+img.thumbnail {
+  width:100%;
+  margin:0 auto;
+}
+
+img.play{
+  position:absolute;
+  width:40px;
+  left:110px;
+  top:60px;
+}
+
+div.title{
+  display:block;
+  bottom:0px;
+  left:0px;
+  width:100%;
+  line-height:20px;
+  color:#000;
+  opacity:.4;
+  text-align:center;
+  font-size:12px;
+  background-color:#fff;
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/images/play-mq.png
----------------------------------------------------------------------
diff --git a/images/play-mq.png b/images/play-mq.png
new file mode 100644
index 0000000..c423b2a
Binary files /dev/null and b/images/play-mq.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/images/thumbnail-65c42i7Xg7Q.jpg
----------------------------------------------------------------------
diff --git a/images/thumbnail-65c42i7Xg7Q.jpg b/images/thumbnail-65c42i7Xg7Q.jpg
new file mode 100644
index 0000000..61f8b4f
Binary files /dev/null and b/images/thumbnail-65c42i7Xg7Q.jpg differ

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/images/thumbnail-6pGeQOXDdD8.jpg
----------------------------------------------------------------------
diff --git a/images/thumbnail-6pGeQOXDdD8.jpg b/images/thumbnail-6pGeQOXDdD8.jpg
new file mode 100644
index 0000000..af9bdbc
Binary files /dev/null and b/images/thumbnail-6pGeQOXDdD8.jpg differ

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/images/thumbnail-MYY51kiFPTk.jpg
----------------------------------------------------------------------
diff --git a/images/thumbnail-MYY51kiFPTk.jpg b/images/thumbnail-MYY51kiFPTk.jpg
new file mode 100644
index 0000000..6e6b0bb
Binary files /dev/null and b/images/thumbnail-MYY51kiFPTk.jpg differ

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/images/thumbnail-bhmNbH2yzhM.jpg
----------------------------------------------------------------------
diff --git a/images/thumbnail-bhmNbH2yzhM.jpg b/images/thumbnail-bhmNbH2yzhM.jpg
new file mode 100644
index 0000000..93747fb
Binary files /dev/null and b/images/thumbnail-bhmNbH2yzhM.jpg differ

http://git-wip-us.apache.org/repos/asf/drill/blob/fcb4f412/index.html
----------------------------------------------------------------------
diff --git a/index.html b/index.html
index 0ae78f2..7ad2709 100755
--- a/index.html
+++ b/index.html
@@ -2,8 +2,11 @@
 layout: default
 ---
 <link href="{{ site.baseurl }}/static/fancybox/jquery.fancybox.css" rel="stylesheet" type="text/css">
-<link href="{{ site.baseurl }}/css/video-box.css" rel="stylesheet" type="text/css">
+<link href="{{ site.baseurl }}/css/video-slider.css" rel="stylesheet" type="text/css">
 <script language="javascript" type="text/javascript" src="{{ site.baseurl }}/static/fancybox/jquery.fancybox.pack.js"></script>
+<link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick.css"/>
+<link rel="stylesheet" type="text/css" href="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick-theme.css"/>
+<script type="text/javascript" src="//cdn.jsdelivr.net/jquery.slick/1.5.0/slick.min.js"></script>
 
 <script type="text/javascript">
 
@@ -18,6 +21,12 @@ $(document).ready(function() {
       this.href = url
     }
   });
+  
+  $('div#video-slider').slick({
+    autoplay: true,
+    autoplaySpeed: 5000,
+    dots: true
+  });
 });
 
 </script>
@@ -33,12 +42,11 @@ $(document).ready(function() {
 <div class="scroller">
   <div class="item">
     <div class="headlines tc">
-      <div id="video-box">   
-        <div class="background"></div>
-        <div class="row r0"><div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=65c42i7Xg7Q"><img src="http://img.youtube.com/vi/65c42i7Xg7Q/1.jpg" /></a></div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=65c42i7Xg7Q">1. The Rise of the Non-Relational Datastore</a></div>
-        <div class="row r1"><div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=MYY51kiFPTk"><img src="http://img.youtube.com/vi/MYY51kiFPTk/1.jpg" /></a></div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=MYY51kiFPTk">2. Deployment Options and Client Access</a></div>
-        <div class="row r2"><div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=bhmNbH2yzhM"><img src="http://img.youtube.com/vi/bhmNbH2yzhM/1.jpg" /></a></div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=bhmNbH2yzhM">3. Connecting to Data Sources</a></div>
-        <div class="row r3"><div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=6pGeQOXDdD8"><img src="http://img.youtube.com/vi/6pGeQOXDdD8/1.jpg" /></a></div><a class="various fancybox.iframe" href="http://www.youtube.com/watch?v=6pGeQOXDdD8">4. High-Performance SQL with a JSON Model</a></div>
+      <div id="video-slider" class="slider">
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=65c42i7Xg7Q"><img src="{{ site.baseurl }}/images/thumbnail-65c42i7Xg7Q.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">The Rise of the Non-Relational Datastore</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=MYY51kiFPTk"><img src="{{ site.baseurl }}/images/thumbnail-MYY51kiFPTk.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Deployment Options and BI Tools</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=bhmNbH2yzhM"><img src="{{ site.baseurl }}/images/thumbnail-bhmNbH2yzhM.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">Connecting to Data Sources</div></div>
+        <div class="slide"><a class="various fancybox.iframe" href="//www.youtube.com/watch?v=6pGeQOXDdD8"><img src="{{ site.baseurl }}/images/thumbnail-6pGeQOXDdD8.jpg" class="thumbnail" /><img src="{{ site.baseurl }}/images/play-mq.png" class="play" /></a><div class="title">High Performance with a JSON Data Model</div></div>
       </div>
       <h1 class="main-headline">Apache Drill</h1>
       <h2 id="sub-headline">Schema-free SQL Query Engine <br class="mobile-break" /> for Hadoop and NoSQL</h2>
@@ -49,12 +57,7 @@ $(document).ready(function() {
 
 </div><!-- header -->
 <div class="alertbar">
-  <span class="strong">News:</span> 
-  {% assign post = site.categories.blog[0] %}<!-- previously: site.posts -->
-  <a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a>
-  <span class="hor-bar"></span>
-  {% assign post = site.categories.blog[1] %}<!-- previously: site.posts -->
-  <a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a>
+  <div class="news">News:</div>{% assign post = site.categories.blog[0] %}<div><a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a><br/><span>({% include authors.html %})</span></div>{% assign post = site.categories.blog[1] %}<div><a href="{{ post.url | prepend: site.baseurl }}">{{ post.title }}</a><br/><span>({% include authors.html %})</span></div>
 </div>
 
 <div class="mw introWrapper">


[21/25] drill git commit: title change

Posted by ts...@apache.org.
title change


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/fa1ee7fc
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/fa1ee7fc
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/fa1ee7fc

Branch: refs/heads/gh-pages
Commit: fa1ee7fc88a9cd8718e68de0d3d2be8e1988b49b
Parents: 5170466
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 11 10:06:57 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 11 10:06:57 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 42 ++++++++++----------
 ...microstrategy-analytics-with-apache-drill.md |  2 +-
 2 files changed, 22 insertions(+), 22 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/fa1ee7fc/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 9f4bdbe..3799e2d 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -4551,8 +4551,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Using MicroStrategy Analytics with Drill", 
-                            "next_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                            "next_title": "Using MicroStrategy Analytics with Apache Drill", 
+                            "next_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
                             "parent": "Using ODBC on Windows", 
                             "previous_title": "Tableau Examples", 
                             "previous_url": "/docs/tableau-examples/", 
@@ -4584,8 +4584,8 @@
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
-                    "title": "Using MicroStrategy Analytics with Drill", 
-                    "url": "/docs/using-microstrategy-analytics-with-drill/"
+                    "title": "Using MicroStrategy Analytics with Apache Drill", 
+                    "url": "/docs/using-microstrategy-analytics-with-apache-drill/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -4598,8 +4598,8 @@
                     "next_title": "Using Apache Drill with Tableau 9 Desktop", 
                     "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
-                    "previous_title": "Using MicroStrategy Analytics with Drill", 
-                    "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                    "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
+                    "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"
@@ -8224,8 +8224,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Using MicroStrategy Analytics with Drill", 
-            "next_url": "/docs/using-microstrategy-analytics-with-drill/", 
+            "next_title": "Using MicroStrategy Analytics with Apache Drill", 
+            "next_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
             "parent": "Using ODBC on Windows", 
             "previous_title": "Tableau Examples", 
             "previous_url": "/docs/tableau-examples/", 
@@ -8250,7 +8250,7 @@
             "title": "Using JDBC", 
             "url": "/docs/using-jdbc/"
         }, 
-        "Using MicroStrategy Analytics with Drill": {
+        "Using MicroStrategy Analytics with Apache Drill": {
             "breadcrumbs": [
                 {
                     "title": "ODBC/JDBC Interfaces", 
@@ -8264,8 +8264,8 @@
             "previous_title": "Using Drill Explorer on Windows", 
             "previous_url": "/docs/using-drill-explorer-on-windows/", 
             "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
-            "title": "Using MicroStrategy Analytics with Drill", 
-            "url": "/docs/using-microstrategy-analytics-with-drill/"
+            "title": "Using MicroStrategy Analytics with Apache Drill", 
+            "url": "/docs/using-microstrategy-analytics-with-apache-drill/"
         }, 
         "Using ODBC on Linux and Mac OS X": {
             "breadcrumbs": [
@@ -8557,8 +8557,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using MicroStrategy Analytics with Drill", 
-                    "next_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                    "next_title": "Using MicroStrategy Analytics with Apache Drill", 
+                    "next_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
                     "parent": "Using ODBC on Windows", 
                     "previous_title": "Tableau Examples", 
                     "previous_url": "/docs/tableau-examples/", 
@@ -8608,8 +8608,8 @@
             "next_title": "Using Apache Drill with Tableau 9 Desktop", 
             "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "parent": "ODBC/JDBC Interfaces", 
-            "previous_title": "Using MicroStrategy Analytics with Drill", 
-            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+            "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
+            "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
             "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
             "title": "Using Tibco Spotfire with Drill", 
             "url": "/docs/using-tibco-spotfire-with-drill/"
@@ -10231,8 +10231,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Using MicroStrategy Analytics with Drill", 
-                            "next_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                            "next_title": "Using MicroStrategy Analytics with Apache Drill", 
+                            "next_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
                             "parent": "Using ODBC on Windows", 
                             "previous_title": "Tableau Examples", 
                             "previous_url": "/docs/tableau-examples/", 
@@ -10264,8 +10264,8 @@
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md", 
-                    "title": "Using MicroStrategy Analytics with Drill", 
-                    "url": "/docs/using-microstrategy-analytics-with-drill/"
+                    "title": "Using MicroStrategy Analytics with Apache Drill", 
+                    "url": "/docs/using-microstrategy-analytics-with-apache-drill/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -10278,8 +10278,8 @@
                     "next_title": "Using Apache Drill with Tableau 9 Desktop", 
                     "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
-                    "previous_title": "Using MicroStrategy Analytics with Drill", 
-                    "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                    "previous_title": "Using MicroStrategy Analytics with Apache Drill", 
+                    "previous_url": "/docs/using-microstrategy-analytics-with-apache-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/060-using-tibco-spotfire-with-drill.md", 
                     "title": "Using Tibco Spotfire with Drill", 
                     "url": "/docs/using-tibco-spotfire-with-drill/"

http://git-wip-us.apache.org/repos/asf/drill/blob/fa1ee7fc/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
index 95b847a..d6140aa 100755
--- a/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
+++ b/_docs/odbc-jdbc-interfaces/050-using-microstrategy-analytics-with-apache-drill.md
@@ -1,5 +1,5 @@
 ---
-title: "Using MicroStrategy Analytics with Drill"
+title: "Using MicroStrategy Analytics with Apache Drill"
 parent: "ODBC/JDBC Interfaces"
 ---
 Apache Drill is certified with the MicroStrategy Analytics Enterprise Platform™. You can connect MicroStrategy Analytics Enterprise to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data without having to rely on IT teams for schema creation.


[22/25] drill git commit: spelling of lesson

Posted by ts...@apache.org.
spelling of lesson


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7317bf0c
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7317bf0c
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7317bf0c

Branch: refs/heads/gh-pages
Commit: 7317bf0c45ea421009a94be310dd1d8b1f0b0e5c
Parents: fa1ee7f
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 11 14:31:49 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 11 14:31:49 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 150 +++++++++----------
 .../030-lesson-1-learn-about-the-data-set.md    |   4 +-
 .../040-lesson-2-run-queries-with-ansi-sql.md   |   2 +-
 ...esson-3-run-queries-on-complex-data-types.md |   2 +-
 4 files changed, 79 insertions(+), 79 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/7317bf0c/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 3799e2d..08bfcd8 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -2979,8 +2979,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Lession 1: Learn about the Data Set", 
-            "next_url": "/docs/lession-1-learn-about-the-data-set/", 
+            "next_title": "Lesson 1: Learn about the Data Set", 
+            "next_url": "/docs/lesson-1-learn-about-the-data-set/", 
             "parent": "Learn Drill with the MapR Sandbox", 
             "previous_title": "Installing the Apache Drill Sandbox", 
             "previous_url": "/docs/installing-the-apache-drill-sandbox/", 
@@ -3804,8 +3804,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Lession 1: Learn about the Data Set", 
-                    "next_url": "/docs/lession-1-learn-about-the-data-set/", 
+                    "next_title": "Lesson 1: Learn about the Data Set", 
+                    "next_url": "/docs/lesson-1-learn-about-the-data-set/", 
                     "parent": "Learn Drill with the MapR Sandbox", 
                     "previous_title": "Installing the Apache Drill Sandbox", 
                     "previous_url": "/docs/installing-the-apache-drill-sandbox/", 
@@ -3825,14 +3825,14 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Lession 2: Run Queries with ANSI SQL", 
-                    "next_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                    "next_title": "Lesson 2: Run Queries with ANSI SQL", 
+                    "next_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                     "parent": "Learn Drill with the MapR Sandbox", 
                     "previous_title": "Getting to Know the Drill Sandbox", 
                     "previous_url": "/docs/getting-to-know-the-drill-sandbox/", 
                     "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md", 
-                    "title": "Lession 1: Learn about the Data Set", 
-                    "url": "/docs/lession-1-learn-about-the-data-set/"
+                    "title": "Lesson 1: Learn about the Data Set", 
+                    "url": "/docs/lesson-1-learn-about-the-data-set/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -3846,14 +3846,14 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Lession 3: Run Queries on Complex Data Types", 
-                    "next_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                    "next_title": "Lesson 3: Run Queries on Complex Data Types", 
+                    "next_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                     "parent": "Learn Drill with the MapR Sandbox", 
-                    "previous_title": "Lession 1: Learn about the Data Set", 
-                    "previous_url": "/docs/lession-1-learn-about-the-data-set/", 
+                    "previous_title": "Lesson 1: Learn about the Data Set", 
+                    "previous_url": "/docs/lesson-1-learn-about-the-data-set/", 
                     "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md", 
-                    "title": "Lession 2: Run Queries with ANSI SQL", 
-                    "url": "/docs/lession-2-run-queries-with-ansi-sql/"
+                    "title": "Lesson 2: Run Queries with ANSI SQL", 
+                    "url": "/docs/lesson-2-run-queries-with-ansi-sql/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -3870,11 +3870,11 @@
                     "next_title": "Summary", 
                     "next_url": "/docs/summary/", 
                     "parent": "Learn Drill with the MapR Sandbox", 
-                    "previous_title": "Lession 2: Run Queries with ANSI SQL", 
-                    "previous_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                    "previous_title": "Lesson 2: Run Queries with ANSI SQL", 
+                    "previous_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                     "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md", 
-                    "title": "Lession 3: Run Queries on Complex Data Types", 
-                    "url": "/docs/lession-3-run-queries-on-complex-data-types/"
+                    "title": "Lesson 3: Run Queries on Complex Data Types", 
+                    "url": "/docs/lesson-3-run-queries-on-complex-data-types/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -3891,8 +3891,8 @@
                     "next_title": "Analyzing Highly Dynamic Datasets", 
                     "next_url": "/docs/analyzing-highly-dynamic-datasets/", 
                     "parent": "Learn Drill with the MapR Sandbox", 
-                    "previous_title": "Lession 3: Run Queries on Complex Data Types", 
-                    "previous_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                    "previous_title": "Lesson 3: Run Queries on Complex Data Types", 
+                    "previous_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                     "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/060-summary.md", 
                     "title": "Summary", 
                     "url": "/docs/summary/"
@@ -3907,7 +3907,7 @@
             "title": "Learn Drill with the MapR Sandbox", 
             "url": "/docs/learn-drill-with-the-mapr-sandbox/"
         }, 
-        "Lession 1: Learn about the Data Set": {
+        "Lesson 1: Learn about the Data Set": {
             "breadcrumbs": [
                 {
                     "title": "Learn Drill with the MapR Sandbox", 
@@ -3919,16 +3919,16 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Lession 2: Run Queries with ANSI SQL", 
-            "next_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+            "next_title": "Lesson 2: Run Queries with ANSI SQL", 
+            "next_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
             "parent": "Learn Drill with the MapR Sandbox", 
             "previous_title": "Getting to Know the Drill Sandbox", 
             "previous_url": "/docs/getting-to-know-the-drill-sandbox/", 
             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md", 
-            "title": "Lession 1: Learn about the Data Set", 
-            "url": "/docs/lession-1-learn-about-the-data-set/"
+            "title": "Lesson 1: Learn about the Data Set", 
+            "url": "/docs/lesson-1-learn-about-the-data-set/"
         }, 
-        "Lession 2: Run Queries with ANSI SQL": {
+        "Lesson 2: Run Queries with ANSI SQL": {
             "breadcrumbs": [
                 {
                     "title": "Learn Drill with the MapR Sandbox", 
@@ -3940,16 +3940,16 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Lession 3: Run Queries on Complex Data Types", 
-            "next_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+            "next_title": "Lesson 3: Run Queries on Complex Data Types", 
+            "next_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
             "parent": "Learn Drill with the MapR Sandbox", 
-            "previous_title": "Lession 1: Learn about the Data Set", 
-            "previous_url": "/docs/lession-1-learn-about-the-data-set/", 
+            "previous_title": "Lesson 1: Learn about the Data Set", 
+            "previous_url": "/docs/lesson-1-learn-about-the-data-set/", 
             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md", 
-            "title": "Lession 2: Run Queries with ANSI SQL", 
-            "url": "/docs/lession-2-run-queries-with-ansi-sql/"
+            "title": "Lesson 2: Run Queries with ANSI SQL", 
+            "url": "/docs/lesson-2-run-queries-with-ansi-sql/"
         }, 
-        "Lession 3: Run Queries on Complex Data Types": {
+        "Lesson 3: Run Queries on Complex Data Types": {
             "breadcrumbs": [
                 {
                     "title": "Learn Drill with the MapR Sandbox", 
@@ -3964,11 +3964,11 @@
             "next_title": "Summary", 
             "next_url": "/docs/summary/", 
             "parent": "Learn Drill with the MapR Sandbox", 
-            "previous_title": "Lession 2: Run Queries with ANSI SQL", 
-            "previous_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+            "previous_title": "Lesson 2: Run Queries with ANSI SQL", 
+            "previous_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md", 
-            "title": "Lession 3: Run Queries on Complex Data Types", 
-            "url": "/docs/lession-3-run-queries-on-complex-data-types/"
+            "title": "Lesson 3: Run Queries on Complex Data Types", 
+            "url": "/docs/lesson-3-run-queries-on-complex-data-types/"
         }, 
         "Lexical Structure": {
             "breadcrumbs": [
@@ -7766,8 +7766,8 @@
             "next_title": "Analyzing Highly Dynamic Datasets", 
             "next_url": "/docs/analyzing-highly-dynamic-datasets/", 
             "parent": "Learn Drill with the MapR Sandbox", 
-            "previous_title": "Lession 3: Run Queries on Complex Data Types", 
-            "previous_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+            "previous_title": "Lesson 3: Run Queries on Complex Data Types", 
+            "previous_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/060-summary.md", 
             "title": "Summary", 
             "url": "/docs/summary/"
@@ -7972,8 +7972,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 1: Learn about the Data Set", 
-                            "next_url": "/docs/lession-1-learn-about-the-data-set/", 
+                            "next_title": "Lesson 1: Learn about the Data Set", 
+                            "next_url": "/docs/lesson-1-learn-about-the-data-set/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
                             "previous_title": "Installing the Apache Drill Sandbox", 
                             "previous_url": "/docs/installing-the-apache-drill-sandbox/", 
@@ -7993,14 +7993,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 2: Run Queries with ANSI SQL", 
-                            "next_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                            "next_title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "next_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
                             "previous_title": "Getting to Know the Drill Sandbox", 
                             "previous_url": "/docs/getting-to-know-the-drill-sandbox/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md", 
-                            "title": "Lession 1: Learn about the Data Set", 
-                            "url": "/docs/lession-1-learn-about-the-data-set/"
+                            "title": "Lesson 1: Learn about the Data Set", 
+                            "url": "/docs/lesson-1-learn-about-the-data-set/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -8014,14 +8014,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 3: Run Queries on Complex Data Types", 
-                            "next_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                            "next_title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "next_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 1: Learn about the Data Set", 
-                            "previous_url": "/docs/lession-1-learn-about-the-data-set/", 
+                            "previous_title": "Lesson 1: Learn about the Data Set", 
+                            "previous_url": "/docs/lesson-1-learn-about-the-data-set/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md", 
-                            "title": "Lession 2: Run Queries with ANSI SQL", 
-                            "url": "/docs/lession-2-run-queries-with-ansi-sql/"
+                            "title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "url": "/docs/lesson-2-run-queries-with-ansi-sql/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -8038,11 +8038,11 @@
                             "next_title": "Summary", 
                             "next_url": "/docs/summary/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 2: Run Queries with ANSI SQL", 
-                            "previous_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                            "previous_title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "previous_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md", 
-                            "title": "Lession 3: Run Queries on Complex Data Types", 
-                            "url": "/docs/lession-3-run-queries-on-complex-data-types/"
+                            "title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "url": "/docs/lesson-3-run-queries-on-complex-data-types/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -8059,8 +8059,8 @@
                             "next_title": "Analyzing Highly Dynamic Datasets", 
                             "next_url": "/docs/analyzing-highly-dynamic-datasets/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 3: Run Queries on Complex Data Types", 
-                            "previous_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                            "previous_title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "previous_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/060-summary.md", 
                             "title": "Summary", 
                             "url": "/docs/summary/"
@@ -9000,8 +9000,8 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 1: Learn about the Data Set", 
-                            "next_url": "/docs/lession-1-learn-about-the-data-set/", 
+                            "next_title": "Lesson 1: Learn about the Data Set", 
+                            "next_url": "/docs/lesson-1-learn-about-the-data-set/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
                             "previous_title": "Installing the Apache Drill Sandbox", 
                             "previous_url": "/docs/installing-the-apache-drill-sandbox/", 
@@ -9021,14 +9021,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 2: Run Queries with ANSI SQL", 
-                            "next_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                            "next_title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "next_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
                             "previous_title": "Getting to Know the Drill Sandbox", 
                             "previous_url": "/docs/getting-to-know-the-drill-sandbox/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md", 
-                            "title": "Lession 1: Learn about the Data Set", 
-                            "url": "/docs/lession-1-learn-about-the-data-set/"
+                            "title": "Lesson 1: Learn about the Data Set", 
+                            "url": "/docs/lesson-1-learn-about-the-data-set/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -9042,14 +9042,14 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "Lession 3: Run Queries on Complex Data Types", 
-                            "next_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                            "next_title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "next_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 1: Learn about the Data Set", 
-                            "previous_url": "/docs/lession-1-learn-about-the-data-set/", 
+                            "previous_title": "Lesson 1: Learn about the Data Set", 
+                            "previous_url": "/docs/lesson-1-learn-about-the-data-set/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md", 
-                            "title": "Lession 2: Run Queries with ANSI SQL", 
-                            "url": "/docs/lession-2-run-queries-with-ansi-sql/"
+                            "title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "url": "/docs/lesson-2-run-queries-with-ansi-sql/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -9066,11 +9066,11 @@
                             "next_title": "Summary", 
                             "next_url": "/docs/summary/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 2: Run Queries with ANSI SQL", 
-                            "previous_url": "/docs/lession-2-run-queries-with-ansi-sql/", 
+                            "previous_title": "Lesson 2: Run Queries with ANSI SQL", 
+                            "previous_url": "/docs/lesson-2-run-queries-with-ansi-sql/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md", 
-                            "title": "Lession 3: Run Queries on Complex Data Types", 
-                            "url": "/docs/lession-3-run-queries-on-complex-data-types/"
+                            "title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "url": "/docs/lesson-3-run-queries-on-complex-data-types/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -9087,8 +9087,8 @@
                             "next_title": "Analyzing Highly Dynamic Datasets", 
                             "next_url": "/docs/analyzing-highly-dynamic-datasets/", 
                             "parent": "Learn Drill with the MapR Sandbox", 
-                            "previous_title": "Lession 3: Run Queries on Complex Data Types", 
-                            "previous_url": "/docs/lession-3-run-queries-on-complex-data-types/", 
+                            "previous_title": "Lesson 3: Run Queries on Complex Data Types", 
+                            "previous_url": "/docs/lesson-3-run-queries-on-complex-data-types/", 
                             "relative_path": "_docs/tutorials/learn-drill-with-the-mapr-sandbox/060-summary.md", 
                             "title": "Summary", 
                             "url": "/docs/summary/"

http://git-wip-us.apache.org/repos/asf/drill/blob/7317bf0c/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
index ef6294a..a6153d1 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/030-lesson-1-learn-about-the-data-set.md
@@ -1,5 +1,5 @@
 ---
-title: "Lession 1: Learn about the Data Set"
+title: "Lesson 1: Learn about the Data Set"
 parent: "Learn Drill with the MapR Sandbox"
 ---
 ## Goal
@@ -380,7 +380,7 @@ subdirectories to return the total number of rows in those files.
 # What's Next
 
 Go to [Lesson 2: Run Queries with ANSI
-SQL]({{ site.baseurl }}/docs/lession-2-run-queries-with-ansi-sql).
+SQL]({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql).
 
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/7317bf0c/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
index a1faa76..353dfc9 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
@@ -1,5 +1,5 @@
 ---
-title: "Lession 2: Run Queries with ANSI SQL"
+title: "Lesson 2: Run Queries with ANSI SQL"
 parent: "Learn Drill with the MapR Sandbox"
 ---
 ## Goal

http://git-wip-us.apache.org/repos/asf/drill/blob/7317bf0c/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
index d5c53f2..bcb62ee 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/050-lesson-3-run-queries-on-complex-data-types.md
@@ -1,5 +1,5 @@
 ---
-title: "Lession 3: Run Queries on Complex Data Types"
+title: "Lesson 3: Run Queries on Complex Data Types"
 parent: "Learn Drill with the MapR Sandbox"
 ---
 ## Goal


[24/25] drill git commit: Tableau Server doc

Posted by ts...@apache.org.
Tableau Server doc


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/4f410f97
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/4f410f97
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/4f410f97

Branch: refs/heads/gh-pages
Commit: 4f410f975e99e446d76b4c6678cff026c46cd47f
Parents: 1cec6cc
Author: Bob Rumsby <br...@mapr.com>
Authored: Mon May 11 18:16:40 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Mon May 11 18:16:40 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 142 +++++++++----------
 _docs/img/tableau-server-authentication.png     | Bin 0 -> 42451 bytes
 _docs/img/tableau-server-publish-datasource.png | Bin 0 -> 79417 bytes
 .../img/tableau-server-publish-datasource2.png  | Bin 0 -> 38100 bytes
 .../img/tableau-server-publish-datasource3.png  | Bin 0 -> 42950 bytes
 _docs/img/tableau-server-publish1.png           | Bin 0 -> 20646 bytes
 _docs/img/tableau-server-publish2.png           | Bin 0 -> 51834 bytes
 _docs/img/tableau-server-signin1.png            | Bin 0 -> 21257 bytes
 _docs/img/tableau-server-signin2.png            | Bin 0 -> 19833 bytes
 ...-using-apache-drill-with-tableau-9-server.md |  76 ++++++++++
 10 files changed, 147 insertions(+), 71 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 08bfcd8..28fb349 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -185,8 +185,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Analyzing Social Media with MicroStrategy", 
-            "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
+            "next_title": "Install Drill", 
+            "next_url": "/docs/install-drill/", 
             "parent": "Tutorials", 
             "previous_title": "Summary", 
             "previous_url": "/docs/summary/", 
@@ -194,23 +194,6 @@
             "title": "Analyzing Highly Dynamic Datasets", 
             "url": "/docs/analyzing-highly-dynamic-datasets/"
         }, 
-        "Analyzing Social Media with MicroStrategy": {
-            "breadcrumbs": [
-                {
-                    "title": "Tutorials", 
-                    "url": "/docs/tutorials/"
-                }
-            ], 
-            "children": [], 
-            "next_title": "Install Drill", 
-            "next_url": "/docs/install-drill/", 
-            "parent": "Tutorials", 
-            "previous_title": "Analyzing Highly Dynamic Datasets", 
-            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
-            "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
-            "title": "Analyzing Social Media with MicroStrategy", 
-            "url": "/docs/analyzing-social-media-with-microstrategy/"
-        }, 
         "Analyzing the Yelp Academic Dataset": {
             "breadcrumbs": [
                 {
@@ -3313,8 +3296,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Social Media with MicroStrategy", 
-            "previous_url": "/docs/analyzing-social-media-with-microstrategy/", 
+            "previous_title": "Analyzing Highly Dynamic Datasets", 
+            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"
@@ -4612,14 +4595,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Query Data", 
-                    "next_url": "/docs/query-data/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Server", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Tibco Spotfire with Drill", 
                     "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
                     "title": "Using Apache Drill with Tableau 9 Desktop", 
                     "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Query Data", 
+                    "next_url": "/docs/query-data/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md", 
+                    "title": "Using Apache Drill with Tableau 9 Server", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-server/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -5201,8 +5201,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Server", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-server/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -8083,31 +8083,14 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Analyzing Social Media with MicroStrategy", 
-                    "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
-                }, 
-                {
-                    "breadcrumbs": [
-                        {
-                            "title": "Tutorials", 
-                            "url": "/docs/tutorials/"
-                        }
-                    ], 
-                    "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
-                    "parent": "Tutorials", 
-                    "previous_title": "Analyzing Highly Dynamic Datasets", 
-                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
-                    "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
-                    "title": "Analyzing Social Media with MicroStrategy", 
-                    "url": "/docs/analyzing-social-media-with-microstrategy/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -8186,8 +8169,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Query Data", 
-            "next_url": "/docs/query-data/", 
+            "next_title": "Using Apache Drill with Tableau 9 Server", 
+            "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using Tibco Spotfire with Drill", 
             "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
@@ -8195,6 +8178,23 @@
             "title": "Using Apache Drill with Tableau 9 Desktop", 
             "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
         }, 
+        "Using Apache Drill with Tableau 9 Server": {
+            "breadcrumbs": [
+                {
+                    "title": "ODBC/JDBC Interfaces", 
+                    "url": "/docs/odbc-jdbc-interfaces/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Query Data", 
+            "next_url": "/docs/query-data/", 
+            "parent": "ODBC/JDBC Interfaces", 
+            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md", 
+            "title": "Using Apache Drill with Tableau 9 Server", 
+            "url": "/docs/using-apache-drill-with-tableau-9-server/"
+        }, 
         "Using Custom Functions in Queries": {
             "breadcrumbs": [
                 {
@@ -9111,31 +9111,14 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Analyzing Social Media with MicroStrategy", 
-                    "next_url": "/docs/analyzing-social-media-with-microstrategy/", 
+                    "next_title": "Install Drill", 
+                    "next_url": "/docs/install-drill/", 
                     "parent": "Tutorials", 
                     "previous_title": "Summary", 
                     "previous_url": "/docs/summary/", 
                     "relative_path": "_docs/tutorials/050-analyzing-highly-dynamic-datasets.md", 
                     "title": "Analyzing Highly Dynamic Datasets", 
                     "url": "/docs/analyzing-highly-dynamic-datasets/"
-                }, 
-                {
-                    "breadcrumbs": [
-                        {
-                            "title": "Tutorials", 
-                            "url": "/docs/tutorials/"
-                        }
-                    ], 
-                    "children": [], 
-                    "next_title": "Install Drill", 
-                    "next_url": "/docs/install-drill/", 
-                    "parent": "Tutorials", 
-                    "previous_title": "Analyzing Highly Dynamic Datasets", 
-                    "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
-                    "relative_path": "_docs/tutorials/060-analyzing-social-media-with-microstrategy.md", 
-                    "title": "Analyzing Social Media with MicroStrategy", 
-                    "url": "/docs/analyzing-social-media-with-microstrategy/"
                 }
             ], 
             "next_title": "Tutorials Introduction", 
@@ -9375,8 +9358,8 @@
             "next_title": "Install Drill Introduction", 
             "next_url": "/docs/install-drill-introduction/", 
             "parent": "", 
-            "previous_title": "Analyzing Social Media with MicroStrategy", 
-            "previous_url": "/docs/analyzing-social-media-with-microstrategy/", 
+            "previous_title": "Analyzing Highly Dynamic Datasets", 
+            "previous_url": "/docs/analyzing-highly-dynamic-datasets/", 
             "relative_path": "_docs/040-install-drill.md", 
             "title": "Install Drill", 
             "url": "/docs/install-drill/"
@@ -10292,14 +10275,31 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Query Data", 
-                    "next_url": "/docs/query-data/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Server", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-server/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Tibco Spotfire with Drill", 
                     "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
                     "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
                     "title": "Using Apache Drill with Tableau 9 Desktop", 
                     "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
+                }, 
+                {
+                    "breadcrumbs": [
+                        {
+                            "title": "ODBC/JDBC Interfaces", 
+                            "url": "/docs/odbc-jdbc-interfaces/"
+                        }
+                    ], 
+                    "children": [], 
+                    "next_title": "Query Data", 
+                    "next_url": "/docs/query-data/", 
+                    "parent": "ODBC/JDBC Interfaces", 
+                    "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md", 
+                    "title": "Using Apache Drill with Tableau 9 Server", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-server/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -10687,8 +10687,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Server", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-server/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-authentication.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-authentication.png b/_docs/img/tableau-server-authentication.png
new file mode 100644
index 0000000..c15ec7d
Binary files /dev/null and b/_docs/img/tableau-server-authentication.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-publish-datasource.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-publish-datasource.png b/_docs/img/tableau-server-publish-datasource.png
new file mode 100644
index 0000000..f55796b
Binary files /dev/null and b/_docs/img/tableau-server-publish-datasource.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-publish-datasource2.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-publish-datasource2.png b/_docs/img/tableau-server-publish-datasource2.png
new file mode 100644
index 0000000..8cb00ca
Binary files /dev/null and b/_docs/img/tableau-server-publish-datasource2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-publish-datasource3.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-publish-datasource3.png b/_docs/img/tableau-server-publish-datasource3.png
new file mode 100644
index 0000000..324a309
Binary files /dev/null and b/_docs/img/tableau-server-publish-datasource3.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-publish1.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-publish1.png b/_docs/img/tableau-server-publish1.png
new file mode 100644
index 0000000..86d4ff9
Binary files /dev/null and b/_docs/img/tableau-server-publish1.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-publish2.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-publish2.png b/_docs/img/tableau-server-publish2.png
new file mode 100644
index 0000000..1a3297b
Binary files /dev/null and b/_docs/img/tableau-server-publish2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-signin1.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-signin1.png b/_docs/img/tableau-server-signin1.png
new file mode 100644
index 0000000..6157e07
Binary files /dev/null and b/_docs/img/tableau-server-signin1.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/img/tableau-server-signin2.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-server-signin2.png b/_docs/img/tableau-server-signin2.png
new file mode 100644
index 0000000..ecd332e
Binary files /dev/null and b/_docs/img/tableau-server-signin2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/4f410f97/_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md b/_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md
new file mode 100644
index 0000000..92e4dc2
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/080-using-apache-drill-with-tableau-9-server.md
@@ -0,0 +1,76 @@
+---
+title: "Using Apache Drill with Tableau 9 Server"
+parent: "ODBC/JDBC Interfaces"
+---
+
+This document describes how to connect Tableau 9 Server to Apache Drill and explore multiple data formats instantly on Hadoop, as well as share all the Tableau visualizations in a collaborative environment. Use the combined power of these tools to get direct access to semi-structured data, without having to rely on IT teams for schema creation and data manipulation. 
+
+To use Apache Drill with Tableau 9 Server, complete the following steps: 
+
+1.	Install the Drill ODBC driver from MapR on the Tableau Server system and configure ODBC data sources.
+2. Install the Tableau Data-connection Customization (TDC) file.
+3. Publish Tableau visualizations and data sources from Tableau Desktop to Tableau Server for collaboration.
+
+----------
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Server, Drill version 0.9 or higher is recommended.
+
+Complete the following steps to install and configure the driver:
+
+1. Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Tableau 9.0 Server works with the 64-bit ODBC driver.
+2. Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3. Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+4. If Drill authentication is enabled, select **Basic Authentication** as the authentication type. Enter a valid user and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup.png)
+
+Note: If you select **ZooKeeper Quorum** as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the `%WINDIR%\system32\drivers\etc\hosts` file. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup-2.png)
+
+Also make sure to test the ODBC connection to Drill before using it with Tableau.
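For example (an editorial sketch; the hostnames and IP addresses below are placeholders for your ZooKeeper nodes), the hosts file entries might look like this:

    10.10.100.101   zk-node1
    10.10.100.102   zk-node2
    10.10.100.103   zk-node3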
+
+----------
+
+### Step 2: Install the Tableau Data-connection Customization (TDC) File
+
+The MapR Drill ODBC Driver includes a file named `MapRDrillODBC.TDC`. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.
+
+For Tableau Server, you need to manually copy this file to the Server Datasources folder:
+
+1. Locate the `MapRDrillODBC.tdc` file in the `~\Program Files\MapR Drill ODBC Driver\Resources` folder.
+2. Copy the file to the `~\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Datasources` folder.
+3. Restart Tableau Server.
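The copy and restart can also be scripted from an administrator command prompt. This is only a sketch; it assumes the default installation paths shown in the steps above and the `tabadmin` utility that ships with Tableau Server 9:

    copy "C:\Program Files\MapR Drill ODBC Driver\Resources\MapRDrillODBC.tdc" ^
         "C:\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Datasources\"
    tabadmin restart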
+
+For more information about Tableau TDC configuration, see [Customizing and Tuning ODBC Connections](http://kb.tableau.com/articles/knowledgebase/customizing-odbc-connections).
+
+----------
+
+
+### Step 3: Publish Tableau Visualizations and Data Sources
+
+For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.
+
+#### Publishing Visualizations
+
+To publish a visualization from Tableau Desktop to Tableau Server:
+
+1. Configure Tableau Desktop by using the ODBC driver; see []()
+
+2. For best results, verify that the ODBC configuration and DSNs (data source names) are the same for both Tableau Desktop and Tableau Server.
+
+3. Create visualizations in Tableau Desktop using Drill as the data source.
+
+4. Connect to Tableau Server from Tableau Desktop. Select **Server > Sign In**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-signin1.png)
+
+5. Sign into Tableau Server using the server hostname or IP address, username, and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-signin2.png)
+
+6. You can now publish a workbook to Tableau Server. Select **Server > Publish Workbook**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish1.png)
+
+7. Select the project from the drop-down list. Enter a name for the visualization to be published and provide a description and tags as needed. Assign permissions and views to be shared. Then click **Authentication**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish2.png)
+
+8. In the Authentication window, select **Embedded Password**, then click **OK**. Then click **Publish** in the Publish Workbook window to publish the visualization to Tableau Server. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-authentication.png)
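As an alternative to the Desktop UI, Tableau's `tabcmd` command-line utility can also publish a workbook. This is only a sketch, not part of the committed page; the server URL, project, workbook name, and credentials are placeholders, and it assumes `tabcmd` is installed on the client:

    tabcmd publish "drill-visualizations.twbx" --name "Drill Visualizations" --project "Default" -s http://tableau-server -u admin -p password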
+
+#### Publishing Data Sources
+
+To publish only data sources to Tableau Server, follow these steps:
+
+1. Open the data source(s) in Tableau Desktop.
+2. In the workbook, select **Data > Data Source Name > Publish to Server**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource.png)
+
+3. If you are not already signed in, sign into Tableau Server.
+4. Select the project from the drop-down list and enter a name for the data source (or keep the same name that is used in the Desktop workbook). ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource2.png)
+
+5. In the **Authentication** drop-down list, select **Embedded Password**. Select permissions as needed, then click **Publish**. The data source is now published on Tableau Server and available for building visualizations. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-server-publish-datasource3.png)
+
+
+
+
+
+
+
+
+
+----------
+
+In this quick tutorial, you saw how to configure Tableau Server 9.0 to work with Tableau Desktop and Apache Drill. 
+


[19/25] drill git commit: zookeeper to ZooKeeper

Posted by ts...@apache.org.
zookeeper to ZooKeeper


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/0b4cb1ff
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/0b4cb1ff
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/0b4cb1ff

Branch: refs/heads/gh-pages
Commit: 0b4cb1ff594143408e77f3ffd9e215921f1b1d1c
Parents: a3c95b3
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Sat May 9 13:16:47 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Sat May 9 13:16:47 2015 -0700

----------------------------------------------------------------------
 _docs/connect-a-data-source/060-hbase-storage-plugin.md      | 4 ++--
 _docs/install/050-starting-drill-in-distributed mode.md      | 8 ++++----
 .../030-starting-drill-on-linux-and-mac-os-x.md              | 4 ++--
 .../050-starting-drill-on-windows.md                         | 2 +-
 _docs/tutorials/020-drill-in-10-minutes.md                   | 4 ++--
 5 files changed, 11 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/0b4cb1ff/_docs/connect-a-data-source/060-hbase-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/060-hbase-storage-plugin.md b/_docs/connect-a-data-source/060-hbase-storage-plugin.md
index 693e487..3677004 100644
--- a/_docs/connect-a-data-source/060-hbase-storage-plugin.md
+++ b/_docs/connect-a-data-source/060-hbase-storage-plugin.md
@@ -2,7 +2,7 @@
 title: "HBase Storage Plugin"
 parent: "Storage Plugin Configuration"
 ---
-Register a storage plugin instance and specify a zookeeper quorum to connect
+Register a storage plugin instance and specify a ZooKeeper quorum to connect
 Drill to an HBase data source. When you register a storage plugin instance for
 an HBase data source, provide a unique name for the instance, and identify the
 type as “hbase” in the Drill Web UI.
@@ -13,7 +13,7 @@ To register HBase with Drill, complete the following steps:
 
   1. Navigate to [http://localhost:8047](http://localhost:8047/), and select the **Storage** tab
   2. In the disabled storage plugins section, click **Update** next to the `hbase` instance.
-  3. In the Configuration window, specify the Zookeeper quorum and port. 
+  3. In the Configuration window, specify the ZooKeeper quorum and port. 
   
 
      **Example**  
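The example itself lies beyond this hunk. For orientation only, an HBase storage plugin configuration typically looks roughly like the following sketch; the ZooKeeper hostnames are placeholders:

    {
      "type": "hbase",
      "config": {
        "hbase.zookeeper.quorum": "zk-node1,zk-node2,zk-node3",
        "hbase.zookeeper.property.clientPort": "2181"
      },
      "enabled": true
    }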

http://git-wip-us.apache.org/repos/asf/drill/blob/0b4cb1ff/_docs/install/050-starting-drill-in-distributed mode.md
----------------------------------------------------------------------
diff --git a/_docs/install/050-starting-drill-in-distributed mode.md b/_docs/install/050-starting-drill-in-distributed mode.md
index 97c50df..ac730c7 100644
--- a/_docs/install/050-starting-drill-in-distributed mode.md	
+++ b/_docs/install/050-starting-drill-in-distributed mode.md	
@@ -31,8 +31,8 @@ To start SQLLine, use the following **sqlline command** syntax:
 * `-u` is the option that precedes a connection string. Required.  
 * `jdbc` is the connection protocol. Required.  
 * `schema` is the name of a [storage plugin]({{site.baseurl}}/docs/storage-plugin-registration) to use for queries. Optional.  
-* `Zk=zkname` is one or more zookeeper host names or IP addresses. Optional if you are running SQLLine and zookeeper on the local node.  
-* `port` is the zookeeper port number. Optional. Port 2181 is the default.  
+* `Zk=zkname` is one or more ZooKeeper host names or IP addresses. Optional if you are running SQLLine and ZooKeeper on the local node.  
+* `port` is the ZooKeeper port number. Optional. Port 2181 is the default.  
 
 #### Examples of Starting Drill
 This example also starts SQLLine using the `dfs` storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query:
@@ -40,7 +40,7 @@ This example also starts SQLLine using the `dfs` storage plugin. Specifying the
 
     bin/sqlline –u jdbc:drill:schema=dfs;zk=centos26
 
-This command starts SQLLine in a cluster configured to run zookeeper on three nodes:
+This command starts SQLLine in a cluster configured to run ZooKeeper on three nodes:
 
     bin/sqlline –u jdbc:drill:zk=cento23,zk=centos24,zk=centos26:5181
 
@@ -51,7 +51,7 @@ Complete the following steps to start Drill:
   1. Navigate to the Drill installation directory, and issue the following command to start a Drillbit:
   
         bin/drillbit.sh restart
-  2. Issue the following command to invoke SQLLine and start Drill if zookeeper is running on the same node as SQLLine:
+  2. Issue the following command to invoke SQLLine and start Drill if ZooKeeper is running on the same node as SQLLine:
   
         bin/sqlline -u jdbc:drill:
      

http://git-wip-us.apache.org/repos/asf/drill/blob/0b4cb1ff/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
index e19f224..ae72664 100644
--- a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
+++ b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
@@ -2,7 +2,7 @@
 title: "Starting Drill on Linux and Mac OS X"
 parent: "Installing Drill in Embedded Mode"
 ---
-Launch SQLLine using the sqlline command to start to Drill in embedded mode. The command directs SQLLine to connect to Drill using jdbc. The zk=local means the local node is the zookeeper node. Complete the following steps to launch SQLLine and start Drill:
+Launch SQLLine using the sqlline command to start Drill in embedded mode. The command directs SQLLine to connect to Drill using jdbc. The zk=local means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
 
 1. Navigate to the Drill installation directory. For example:  
 
@@ -18,7 +18,7 @@ Launch SQLLine using the sqlline command to start to Drill in embedded mode. The
 
 ## Example of Starting Drill
 
-The simplest example of how to start SQLLine is to identify the protocol, JDBC, and zookeeper node or nodes in the **sqlline** command. This example starts SQLLine on a node in an embedded, single-node cluster:
+The simplest example of how to start SQLLine is to identify the protocol, JDBC, and ZooKeeper node or nodes in the **sqlline** command. This example starts SQLLine on a node in an embedded, single-node cluster:
 
     sqlline -u jdbc:drill:zk=local
 

http://git-wip-us.apache.org/repos/asf/drill/blob/0b4cb1ff/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
index 75c25b6..ba95059 100644
--- a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
+++ b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
@@ -2,7 +2,7 @@
 title: "Starting Drill on Windows"
 parent: "Installing Drill in Embedded Mode"
 ---
-Launch SQLLine using the **sqlline command** to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the zookeeper node. Complete the following steps to launch SQLLine and start Drill:
+Launch SQLLine using the **sqlline command** to start Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
 
 1. Open the apache-drill-0.9.0 folder.  
 2. Open the bin folder, and double-click the `sqlline.bat` file:

http://git-wip-us.apache.org/repos/asf/drill/blob/0b4cb1ff/_docs/tutorials/020-drill-in-10-minutes.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/020-drill-in-10-minutes.md b/_docs/tutorials/020-drill-in-10-minutes.md
index 01f8bca..9f16044 100755
--- a/_docs/tutorials/020-drill-in-10-minutes.md
+++ b/_docs/tutorials/020-drill-in-10-minutes.md
@@ -58,7 +58,7 @@ The extraction process creates the installation directory named apache-drill-0.9
 At this point, you can [start Drill]({{site.baseurl}}/docs/drill-in-10-minutes/#start-drill).
 
 ## Start Drill on Linux and Mac OS X
-Launch SQLLine using the sqlline command to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The zk=local means the local node is the zookeeper node. Complete the following steps to launch SQLLine and start Drill:
+Launch SQLLine using the sqlline command to start Drill in embedded mode. The command directs SQLLine to connect to Drill. The zk=local means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
 
 1. Navigate to the Drill installation directory. For example:  
 
@@ -83,7 +83,7 @@ You can install Drill on Windows 7 or 8. First, set the JAVA_HOME environment va
    At this point, you can start Drill.  
 
 ## Start Drill on Windows
-Launch SQLLine using the **sqlline command** to start to Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the zookeeper node. Complete the following steps to launch SQLLine and start Drill:
+Launch SQLLine using the **sqlline command** to start Drill in embedded mode. The command directs SQLLine to connect to Drill. The `zk=local` means the local node is the ZooKeeper node. Complete the following steps to launch SQLLine and start Drill:
 
 1. Open the apache-drill-0.9.0 folder.  
 2. Open the bin folder, and double-click the `sqlline.bat` file:


[18/25] drill git commit: minor edit

Posted by ts...@apache.org.
minor edit


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/a3c95b37
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/a3c95b37
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/a3c95b37

Branch: refs/heads/gh-pages
Commit: a3c95b37028fe7b084b7c5e06e8a791b9283d918
Parents: 029d411
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Sat May 9 12:42:20 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Sat May 9 12:42:20 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md                      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/a3c95b37/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index c26dd6b..37de559 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -34,7 +34,7 @@ The sys.options table lists the following options that you can set at the sessio
 | planner.broadcast_factor                       | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
 | planner.broadcast_threshold                    | 10000000   | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647                                                                                                                                                            |
 | planner.disable_exchanges                      | FALSE      | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
-| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Broadcast join is very useful in situations where a large (fact) table is being joined to relatively smaller (dimension) tables. Do not disable.                                                                             |
+| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Use to join a large (fact) table to relatively smaller (dimension) tables. Do not disable.                                                                                                                                   |
 | planner.enable_constant_folding                | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
 | planner.enable_demux_exchange                  | FALSE      | Toggles the state of hashing to a demulitplexed exchange.                                                                                                                                                                                                                                                                                                        |
 | planner.enable_hash_single_key                 | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |


[14/25] drill git commit: remove config options table

Posted by ts...@apache.org.
remove config options table


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/314aae10
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/314aae10
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/314aae10

Branch: refs/heads/gh-pages
Commit: 314aae105125b6c0338b6de3bfd1ab220c2d4259
Parents: 35b11d2
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 16:51:13 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 16:51:13 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 398 +------------------
 1 file changed, 1 insertion(+), 397 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/314aae10/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index e2b184c..53fb5fb 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -9,402 +9,6 @@ env.sh` and `drill-override.conf` files. Drill stores these files in the
 `/conf` directory. Drill sources` /etc/drill/conf` if it exists. Otherwise,
 Drill sources the local `<drill_installation_directory>/conf` directory.
 
-The sys.options table in Drill contains information about boot (start-up) and system options listed in the tables on this page. 
+The sys.options table in Drill contains information about boot (start-up) and system options. The section ["Start-up Options"]({{site.baseurl}}/docs/start-up-options) covers how to configure and view these options. The sys.options table also contains many session- and system-level options, some of which are described in the section ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
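As a quick illustration of the paragraph added above (an editorial sketch, not part of this commit), these options can be inspected and changed from a SQLLine session; `planner.width.max_per_node` and `exec.errors.verbose` are option names taken from the table removed below:

    SELECT * FROM sys.options;
    ALTER SESSION SET `planner.width.max_per_node` = 2;
    ALTER SYSTEM SET `exec.errors.verbose` = true;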
 
-## Boot Options
-The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view these options. 
-
-<table>
-  <tr>
-    <th>Name</th>
-    <th>Default</th>
-    <th>Comments</th>
-  </tr>
-  <tr>
-    <td>drill.exec.buffer.impl</td>
-    <td>"org.apache.drill.exec.work.batch.UnlimitedRawBatchBuffer"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.buffer.size</td>
-    <td>6</td>
-    <td>Available memory in terms of record batches to hold data downstream of an operation. Increase this value to increase query speed.</td>
-  </tr>
-  <tr>
-    <td>drill.exec.compile.debug</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.http.enabled</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.operator.packages</td>
-    <td>"org.apache.drill.exec.physical.config"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.sort.external.batch.size</td>
-    <td>4000</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.sort.external.spill.directories</td>
-    <td>"/tmp/drill/spill"</td>
-    <td>Determines which directory to use for spooling</td>
-  </tr>
-  <tr>
-    <td>drill.exec.sort.external.spill.group.size</td>
-    <td>100</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.storage.file.text.batch.size</td>
-    <td>4000</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>drill.exec.storage.packages</td>
-    <td>"org.apache.drill.exec.store" "org.apache.drill.exec.store.mock"</td>
-    <td>Ignore or include this module, including supplementary configuraiton information when scanning the class path scanning. This file is in [HOCON format](https://github.com/typesafehub/config/blob/master/HOCON.md).</td>
-  </tr>
-  <tr>
-    <td>drill.exec.sys.store.provider.class</td>
-    <td>ZooKeeper: "org.apache.drill.exec.store.sys.zk.ZkPStoreProvider"</td>
-    <td>The Pstore (Persistent Configuration Storage) provider to use. The Pstore holds configuration and profile data.</td>
-  </tr>
-  <tr>
-    <td>drill.exec.zk.connect</td>
-    <td>"localhost:2181"</td>
-    <td>The ZooKeeper quorum that Drill uses to connect to data sources. Configure on each Drillbit node.</td>
-  </tr>
-  <tr>
-    <td>drill.exec.zk.refresh</td>
-    <td>500</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>file.separator</td>
-    <td>"/"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>java.specification.version</td>
-    <td>1.7</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>java.vm.name</td>
-    <td>"Java HotSpot(TM) 64-Bit Server VM"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>java.vm.specification.version</td>
-    <td>1.7</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>log.path</td>
-    <td>"/log/sqlline.log"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>sun.boot.library.path</td>
-    <td>/Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home/jre/lib</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>sun.java.command</td>
-    <td>"sqlline.SqlLine -d org.apache.drill.jdbc.Driver --maxWidth=10000 -u jdbc:drill:zk=local"</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>sun.os.patch.level</td>
-    <td>unknown</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>user</td>
-    <td>""</td>
-    <td></td>
-  </tr>
-</table>
-
-## System Options
-The sys.options table lists the following options that you can set at the session or system level as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options) 
-
-<table>
-  <tr>
-    <th>Name</th>
-    <th>Default</th>
-    <th>Comments</th>
-  </tr>
-  <tr>
-    <td>drill.exec.functions.cast_empty_string_to_null</td>
-    <td>FALSE</td>
-    <td>Not supported in this release.</td>
-  </tr>
-  <tr>
-    <td>drill.exec.storage.file.partition.column.label</td>
-    <td>dir</td>
-    <td>Accepts a string input.</td>
-  </tr>
-  <tr>
-    <td>exec.errors.verbose</td>
-    <td>FALSE</td>
-    <td>Toggles verbose output of executable error messages</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler</td>
-    <td>DEFAULT</td>
-    <td>Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler_debug</td>
-    <td>TRUE</td>
-    <td>Toggles the output of debug-level compiler error messages in runtime generated code.</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler_janino_maxsize</td>
-    <td>262144</td>
-    <td>See the exec.java_compiler option comment. Accepts inputs of type LONG.</td>
-  </tr>
-  <tr>
-    <td>exec.max_hash_table_size</td>
-    <td>1073741824</td>
-    <td>Ending size for hash tables. Range: 0 - 1073741824</td>
-  </tr>
-  <tr>
-    <td>exec.min_hash_table_size</td>
-    <td>65536</td>
-    <td>Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824</td>
-  </tr>
-  <tr>
-    <td>exec.queue.enable</td>
-    <td>FALSE</td>
-    <td>Changes the state of query queues to control the number of queries that run simultaneously.</td>
-  </tr>
-  <tr>
-    <td>exec.queue.large</td>
-    <td>10</td>
-    <td>Sets the number of large queries that can run concurrently in the cluster.
-Range: 0-1000</td>
-  </tr>
-  <tr>
-    <td>exec.queue.small</td>
-    <td>100</td>
-    <td>Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001</td>
-  </tr>
-  <tr>
-    <td>exec.queue.threshold</td>
-    <td>30000000</td>
-    <td>Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807</td>
-  </tr>
-  <tr>
-    <td>exec.queue.timeout_millis</td>
-    <td>300000</td>
-    <td>Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807</td>
-  </tr>
-  <tr>
-    <td>planner.add_producer_consumer</td>
-    <td>FALSE</td>
-    <td>Increase prefetching of data from disk. Disable for in-memory reads.</td>
-  </tr>
-  <tr>
-    <td>planner.affinity_factor</td>
-    <td>1.2</td>
-    <td>Accepts inputs of type DOUBLE.</td>
-  </tr>
-  <tr>
-    <td>planner.broadcast_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.broadcast_threshold</td>
-    <td>10000000</td>
-    <td>The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647</td>
-  </tr>
-  <tr>
-    <td>planner.disable_exchanges</td>
-    <td>FALSE</td>
-    <td>Toggles the state of hashing to a random exchange.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_broadcast_join</td>
-    <td>TRUE</td>
-    <td>Changes the state of aggregation and join operators. Do not disable.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_demux_exchange</td>
-    <td>FALSE</td>
-    <td>Toggles the state of hashing to a demulitplexed exchange.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hash_single_key</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashagg</td>
-    <td>TRUE</td>
-    <td>Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashjoin</td>
-    <td>TRUE</td>
-    <td>Enable the memory hungry hash join. Drill assumes that a query with have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashjoin_swap</td>
-    <td></td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_mergejoin</td>
-    <td>TRUE</td>
-    <td>Sort-based operation. A merge join is used for inner join, left and right outer joins.  Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_multiphase_agg</td>
-    <td>TRUE</td>
-    <td>Each minor fragment does a local aggregation in phase 1, distributes on a hash basis using GROUP-BY keys partially aggregated results to other fragments, and all the fragments perform a total aggregation using this data.  
- </td>
-  </tr>
-  <tr>
-    <td>planner.enable_mux_exchange</td>
-    <td>TRUE</td>
-    <td>Toggles the state of hashing to a multiplexed exchange.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_streamagg</td>
-    <td>TRUE</td>
-    <td>Sort-based operation. Writes to disk.</td>
-  </tr>
-  <tr>
-    <td>planner.identifier_max_length</td>
-    <td>1024</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.join.hash_join_swap_margin_factor</td>
-    <td>10</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.join.row_count_estimate_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.average_field_width</td>
-    <td>8</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.enable_memory_estimation</td>
-    <td>FALSE</td>
-    <td>Toggles the state of memory estimation and re-planning of the query.
-When enabled, Drill conservatively estimates memory requirements and typically excludes these operators from the plan and negatively impacts performance.</td>
-  </tr>
-  <tr>
-    <td>planner.memory.hash_agg_table_factor</td>
-    <td>1.1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.hash_join_table_factor</td>
-    <td>1.1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.max_query_memory_per_node</td>
-    <td>2147483648</td>
-    <td>Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.</td>
-  </tr>
-  <tr>
-    <td>planner.memory.non_blocking_operators_memory</td>
-    <td>64</td>
-    <td>Range: 0-2048</td>
-  </tr>
-  <tr>
-    <td>planner.partitioner_sender_max_threads</td>
-    <td>8</td>
-    <td>Upper limit of threads for outbound queuing.</td>
-  </tr>
-  <tr>
-    <td>planner.partitioner_sender_set_threads</td>
-    <td>-1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.partitioner_sender_threads_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.producer_consumer_queue_size</td>
-    <td>10</td>
-    <td>How much data to prefetch from disk (in record batches) out of band of query execution</td>
-  </tr>
-  <tr>
-    <td>planner.slice_target</td>
-    <td>100000</td>
-    <td>The number of records manipulated within a fragment before Drill parallelizes operations.</td>
-  </tr>
-  <tr>
-    <td>planner.width.max_per_node</td>
-    <td>3</td>
-    <td>Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.</td>
-  </tr>
-  <tr>
-    <td>planner.width.max_per_query</td>
-    <td>1000</td>
-    <td>Same as max per node but applies to the query as executed by the entire cluster.</td>
-  </tr>
-  <tr>
-    <td>store.format</td>
-    <td>parquet</td>
-    <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text.</td>
-  </tr>
-  <tr>
-    <td>store.json.all_text_mode</a></td>
-    <td>FALSE</td>
-    <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>
-  </tr>
-  <tr>
-    <td>store.mongo.all_text_mode</td>
-    <td>FALSE</td>
-    <td>Similar to store.json.all_text_mode for MongoDB.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.block-size</a></td>
-    <td>536870912</td>
-    <td>Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.compression</td>
-    <td>snappy</td>
-    <td>Compression type for storing Parquet output. Allowed values: snappy, gzip, none</td>
-  </tr>
-  <tr>
-    <td>store.parquet.enable_dictionary_encoding*</td>
-    <td>FALSE</td>
-    <td>Do not change.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.use_new_reader</td>
-    <td>FALSE</td>
-    <td>Not supported</td>
-  </tr>
-  <tr>
-    <td>window.enable*</td>
-    <td>FALSE</td>
-    <td>Coming soon.</td>
-  </tr>
-</table>
-
-\* Not supported in this release.
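
As background for the option table in this commit, the settings listed in sys.options are inspected and changed with plain SQL rather than by editing configuration files. A minimal sketch, assuming a running Drillbit; the option names come from the table above, and the values are purely illustrative:

    -- Inspect the current planner settings
    SELECT * FROM sys.options WHERE name LIKE 'planner.%';

    -- Raise per-node parallelism for this session only (example value)
    ALTER SESSION SET `planner.width.max_per_node` = 6;

    -- Or make the change cluster-wide
    ALTER SYSTEM SET `planner.width.max_per_node` = 6;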
 


[09/25] drill git commit: config options

Posted by ts...@apache.org.
config options


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1ad5dbde
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1ad5dbde
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1ad5dbde

Branch: refs/heads/gh-pages
Commit: 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
Parents: b65acbb
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 14:56:26 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 14:56:26 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 29 +++++++++++---------
 1 file changed, 16 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/1ad5dbde/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 0298006..e2b184c 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -144,7 +144,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>drill.exec.functions.cast_empty_string_to_null</td>
     <td>FALSE</td>
-    <td></td>
+    <td>Not supported in this release.</td>
   </tr>
   <tr>
     <td>drill.exec.storage.file.partition.column.label</td>
@@ -189,22 +189,23 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>exec.queue.large</td>
     <td>10</td>
-    <td>Range: 0-1000</td>
+    <td>Sets the number of large queries that can run concurrently in the cluster.
+Range: 0-1000</td>
   </tr>
   <tr>
     <td>exec.queue.small</td>
     <td>100</td>
-    <td>Range: 0-1001</td>
+    <td>Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001</td>
   </tr>
   <tr>
     <td>exec.queue.threshold</td>
     <td>30000000</td>
-    <td>Range: 0-9223372036854775807</td>
+    <td>Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807</td>
   </tr>
   <tr>
     <td>exec.queue.timeout_millis</td>
     <td>300000</td>
-    <td>Range: 0-9223372036854775807</td>
+    <td>Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807</td>
   </tr>
   <tr>
     <td>planner.add_producer_consumer</td>
@@ -224,7 +225,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.broadcast_threshold</td>
     <td>10000000</td>
-    <td>Threshold in number of rows that triggers a broadcast join for a query if the right side of the join contains fewer rows than the threshold. Avoids broadcasting too many rows to join. Range: 0-2147483647</td>
+    <td>The maximum number of records allowed to be broadcast as part of a query. Beyond this threshold, Drill reshuffles data rather than broadcasting it to one side of the join. Range: 0-2147483647</td>
   </tr>
   <tr>
     <td>planner.disable_exchanges</td>
@@ -254,7 +255,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.enable_hashjoin</td>
     <td>TRUE</td>
-    <td>Enable the memory hungry hash join. Does not write to disk.</td>
+    <td>Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint.</td>
   </tr>
   <tr>
     <td>planner.enable_hashjoin_swap</td>
@@ -264,12 +265,13 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.enable_mergejoin</td>
     <td>TRUE</td>
-    <td>Sort-based operation. Writes to disk.</td>
+    <td>Sort-based operation. A merge join is used for inner join, left and right outer joins.  Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.</td>
   </tr>
   <tr>
     <td>planner.enable_multiphase_agg</td>
     <td>TRUE</td>
-    <td></td>
+    <td>Each minor fragment does a local aggregation in phase 1, distributes the partially aggregated results to other fragments on a hash basis using the GROUP-BY keys, and then all the fragments perform a total aggregation using this data.
+ </td>
   </tr>
   <tr>
     <td>planner.enable_mux_exchange</td>
@@ -304,7 +306,8 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.memory.enable_memory_estimation</td>
     <td>FALSE</td>
-    <td></td>
+    <td>Toggles the state of memory estimation and re-planning of the query.
+When enabled, Drill conservatively estimates memory requirements and typically excludes memory-constrained operators from the plan, which can negatively impact performance.</td>
   </tr>
   <tr>
     <td>planner.memory.hash_agg_table_factor</td>
@@ -319,7 +322,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.memory.max_query_memory_per_node</td>
     <td>2147483648</td>
-    <td></td>
+    <td>Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.</td>
   </tr>
   <tr>
     <td>planner.memory.non_blocking_operators_memory</td>
@@ -329,7 +332,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.partitioner_sender_max_threads</td>
     <td>8</td>
-    <td></td>
+    <td>Upper limit of threads for outbound queuing.</td>
   </tr>
   <tr>
     <td>planner.partitioner_sender_set_threads</td>
@@ -354,7 +357,7 @@ The sys.options table lists the following options that you can set at the sessio
   <tr>
     <td>planner.width.max_per_node</td>
     <td>3</td>
-    <td>The maximum degree of distribution of a query across cores and cluster nodes.</td>
+    <td>Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.</td>
   </tr>
   <tr>
     <td>planner.width.max_per_query</td>
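
The exec.queue.* descriptions added in this commit work as a set: exec.queue.enable turns query queuing on, exec.queue.threshold classifies each query as small or large by its planned cost, and the remaining options bound concurrency and wait time. A short sketch of enabling throttling at the system level, with values chosen only for illustration:

    ALTER SYSTEM SET `exec.queue.enable` = true;
    ALTER SYSTEM SET `exec.queue.large` = 5;                 -- at most 5 large queries at once
    ALTER SYSTEM SET `exec.queue.small` = 20;                -- at most 20 small queries at once
    ALTER SYSTEM SET `exec.queue.timeout_millis` = 300000;   -- fail a queued query after 5 minutes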


[05/25] drill git commit: more mapr stuff removed

Posted by ts...@apache.org.
more mapr stuff removed


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/ef3572bf
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/ef3572bf
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/ef3572bf

Branch: refs/heads/gh-pages
Commit: ef3572bfcf7ff42de34bb51d565ac9096e998013
Parents: 913b998
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 16:16:13 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 16:16:13 2015 -0700

----------------------------------------------------------------------
 .../050-configuring-multitenant-resources.md           | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/ef3572bf/_docs/configure-drill/050-configuring-multitenant-resources.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/050-configuring-multitenant-resources.md b/_docs/configure-drill/050-configuring-multitenant-resources.md
index 2acd0e7..fcdab5c 100644
--- a/_docs/configure-drill/050-configuring-multitenant-resources.md
+++ b/_docs/configure-drill/050-configuring-multitenant-resources.md
@@ -2,11 +2,11 @@
 title: "Configuring Multitenant Resources"
 parent: "Configuring a Multitenant Cluster"
 ---
-Drill operations are memory and CPU-intensive. Currently, Drill resources are managed outside of any cluster management service. In a multitenant or any other type of cluster, YARN-enabled or not, you configure memory and memory usage limits for Drill by modifying drill-env.sh as described in ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory).
+Drill operations are memory and CPU-intensive. Currently, Drill resources are managed outside of any cluster management service. In a multitenant or any other type of cluster, YARN-enabled or not, you configure memory and memory usage limits for Drill by modifying the `drill-env.sh` file as described in ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory).
 
 Configure a multitenant cluster manager to account for resources required for Drill. Configuring `drill-env.sh` allocates resources for Drill to use during query execution. It might be necessary to configure the cluster manager to prevent it from committing these resources to other processes.
 
-## Configuring Drill in a YARN-enabled MapR Cluster
+## Configuring Drill in a YARN-enabled Cluster
 
 To add Drill to a YARN-enabled cluster, change memory resources to suit your application. For example, suppose you have 120G of available memory that you allocate to the following workloads in a YARN-enabled cluster:
 
@@ -26,14 +26,7 @@ YARN consists of two main services:
 * NodeManager  
   There is one instance per node. 
 
-ResourceManager and NodeManager memory in `warden.resourcemanager.conf` and
- `warden.nodemanager.conf` are set to the following defaults. 
-
-    service.heapsize.min=64
-    service.heapsize.max=325
-    service.heapsize.percent=2
-
-Change these settings for NodeManager and ResourceManager to reconfigure the total memory required for YARN services to run. If you want to place an upper limit on memory set YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable. Do not set the `-Xmx` option to allow the heap to grow as needed.
+Configure the NodeManager and ResourceManager memory settings to adjust the total memory required for YARN services to run. If you want to place an upper limit on memory, set the YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable. Do not set the `-Xmx` option, so that the heap can grow as needed.
 
 ### MapReduce Resources
 


[16/25] drill git commit: html > markdown table format

Posted by ts...@apache.org.
html > markdown table format


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/598be281
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/598be281
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/598be281

Branch: refs/heads/gh-pages
Commit: 598be2810054cb608cd4854c132f9df6165c6b3c
Parents: 7a40745
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Sat May 9 06:42:11 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Sat May 9 06:42:11 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 349 +++----------------
 1 file changed, 58 insertions(+), 291 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/598be281/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 8cfadf8..8443036 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -14,297 +14,64 @@ The sys.options table in Drill contains information about boot (start-up) and sy
 ## System Options
 The sys.options table lists the following options that you can set at the session or system level as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options) 
 
-<table>
-  <tr>
-    <th>Name</th>
-    <th>Default</th>
-    <th>Comments</th>
-  </tr>
-  <tr>
-    <td>drill.exec.functions.cast_empty_string_to_null</td>
-    <td>FALSE</td>
-    <td>Not supported in this release.</td>
-  </tr>
-  <tr>
-    <td>drill.exec.storage.file.partition.column.label</td>
-    <td>dir</td>
-    <td>Accepts a string input.</td>
-  </tr>
-  <tr>
-    <td>exec.errors.verbose</td>
-    <td>FALSE</td>
-    <td>Toggles verbose output of executable error messages</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler</td>
-    <td>DEFAULT</td>
-    <td>Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler_debug</td>
-    <td>TRUE</td>
-    <td>Toggles the output of debug-level compiler error messages in runtime generated code.</td>
-  </tr>
-  <tr>
-    <td>exec.java_compiler_janino_maxsize</td>
-    <td>262144</td>
-    <td>See the exec.java_compiler option comment. Accepts inputs of type LONG.</td>
-  </tr>
-  <tr>
-    <td>exec.max_hash_table_size</td>
-    <td>1073741824</td>
-    <td>Ending size for hash tables. Range: 0 - 1073741824</td>
-  </tr>
-  <tr>
-    <td>exec.min_hash_table_size</td>
-    <td>65536</td>
-    <td>Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824</td>
-  </tr>
-  <tr>
-    <td>exec.queue.enable</td>
-    <td>FALSE</td>
-    <td>Changes the state of query queues to control the number of queries that run simultaneously.</td>
-  </tr>
-  <tr>
-    <td>exec.queue.large</td>
-    <td>10</td>
-    <td>Sets the number of large queries that can run concurrently in the cluster.
-Range: 0-1000</td>
-  </tr>
-  <tr>
-    <td>exec.queue.small</td>
-    <td>100</td>
-    <td>Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001</td>
-  </tr>
-  <tr>
-    <td>exec.queue.threshold</td>
-    <td>30000000</td>
-    <td>Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807</td>
-  </tr>
-  <tr>
-    <td>exec.queue.timeout_millis</td>
-    <td>300000</td>
-    <td>Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807</td>
-  </tr>
-  <tr>
-    <td>planner.add_producer_consumer</td>
-    <td>FALSE</td>
-    <td>Increase prefetching of data from disk. Disable for in-memory reads.</td>
-  </tr>
-  <tr>
-    <td>planner.affinity_factor</td>
-    <td>1.2</td>
-    <td>Accepts inputs of type DOUBLE.</td>
-  </tr>
-  <tr>
-    <td>planner.broadcast_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.broadcast_threshold</td>
-    <td>10000000</td>
-    <td>The maximum number of records allowed to be broadcast as part of a query. Beyond this threshold, Drill reshuffles data rather than broadcasting it to one side of the join. Range: 0-2147483647</td>
-  </tr>
-  <tr>
-    <td>planner.disable_exchanges</td>
-    <td>FALSE</td>
-    <td>Toggles the state of hashing to a random exchange.</td>
-  </tr>
-  
-  <tr>
-    <td>planner.enable_broadcast_join</td>
-    <td>TRUE</td>
-    <td>Changes the state of aggregation and join operators. Do not disable.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_constant_folding</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_demux_exchange</td>
-    <td>FALSE</td>
-    <td>Toggles the state of hashing to a demultiplexed exchange.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hash_single_key</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashagg</td>
-    <td>TRUE</td>
-    <td>Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashjoin</td>
-    <td>TRUE</td>
-    <td>Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_hashjoin_swap</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_mergejoin</td>
-    <td>TRUE</td>
-    <td>Sort-based operation. A merge join is used for inner join, left and right outer joins.  Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_multiphase_agg</td>
-    <td>TRUE</td>
-    <td>Each minor fragment does a local aggregation in phase 1, distributes the partially aggregated results to other fragments on a hash basis using the GROUP-BY keys, and then all the fragments perform a total aggregation using this data.
- </td>
-  </tr>
-  <tr>
-    <td>planner.enable_mux_exchange</td>
-    <td>TRUE</td>
-    <td>Toggles the state of hashing to a multiplexed exchange.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_nestedloopjoin</td>
-    <td>TRUE</td>
-    <td>Sort-based operation. Writes to disk.</td>
-  </tr>
-  <tr>
-    <td>planner.enable_nljoin_for_scalar_only</td>
-    <td>TRUE</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.enable_streamagg</td>
-    <td>TRUE</td>
-    <td>Sort-based operation. Writes to disk.</td>
-  </tr>
-  <tr>
-    <td>planner.identifier_max_length</td>
-    <td>1024</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.join.hash_join_swap_margin_factor</td>
-    <td>10</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.join.row_count_estimate_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.average_field_width</td>
-    <td>8</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.enable_memory_estimation</td>
-    <td>FALSE</td>
-    <td>Toggles the state of memory estimation and re-planning of the query.
-When enabled, Drill conservatively estimates memory requirements and typically excludes memory-constrained operators from the plan, which can negatively impact performance.</td>
-  </tr>
-  <tr>
-    <td>planner.memory.hash_agg_table_factor</td>
-    <td>1.1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.hash_join_table_factor</td>
-    <td>1.1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.memory.max_query_memory_per_node</td>
-    <td>2147483648</td>
-    <td>Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.</td>
-  </tr>
-  <tr>
-    <td>planner.memory.non_blocking_operators_memory</td>
-    <td>64</td>
-    <td>Range: 0-2048</td>
-  </tr>
-  <tr>
-    <td>planner.nestedloopjoin_factor</td>
-    <td>100</td>
-    <td></td>
-  </tr>
-    <tr>
-    <td>planner.partitioner_sender_max_threads</td>
-    <td>8</td>
-    <td>Upper limit of threads for outbound queuing.</td>
-  </tr>
-  <tr>
-    <td>planner.partitioner_sender_set_threads</td>
-    <td>-1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.partitioner_sender_threads_factor</td>
-    <td>1</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>planner.producer_consumer_queue_size</td>
-    <td>10</td>
-    <td>How much data to prefetch from disk (in record batches) out of band of query execution</td>
-  </tr>
-  <tr>
-    <td>planner.slice_target</td>
-    <td>100000</td>
-    <td>The number of records manipulated within a fragment before Drill parallelizes operations.</td>
-  </tr>
-  <tr>
-    <td>planner.width.max_per_node</td>
-    <td>3</td>
-    <td>Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.</td>
-  </tr>
-  <tr>
-    <td>planner.width.max_per_query</td>
-    <td>1000</td>
-    <td>Same as max per node but applies to the query as executed by the entire cluster.</td>
-  </tr>
-  <tr>
-    <td>store.format</td>
-    <td>parquet</td>
-    <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text.</td>
-  </tr>
-  <tr>
-    <td>store.json.all_text_mode</a></td>
-    <td>FALSE</td>
-    <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>
-  </tr>
-  <tr>
-    <td>store.mongo.all_text_mode</td>
-    <td>FALSE</td>
-    <td>Similar to store.json.all_text_mode for MongoDB.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.block-size</a></td>
-    <td>536870912</td>
-    <td>Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.compression</td>
-    <td>snappy</td>
-    <td>Compression type for storing Parquet output. Allowed values: snappy, gzip, none</td>
-  </tr>
-  <tr>
-    <td>store.parquet.enable_dictionary_encoding*</td>
-    <td>FALSE</td>
-    <td>Do not change.</td>
-  </tr>
-  <tr>
-    <td>store.parquet.use_new_reader</td>
-    <td>FALSE</td>
-    <td>Not supported</td>
-  </tr>
-  <tr>
-    <td>window.enable*</td>
-    <td>FALSE</td>
-    <td>Coming soon.</td>
-  </tr>
-</table>
+| Name                                           | Default    | Comments                                                                                                                                                                                                                                                                                                                                                         |
+|------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| drill.exec.functions.cast_empty_string_to_null | FALSE      | Not supported in this release.                                                                                                                                                                                                                                                                                                                                   |
+| drill.exec.storage.file.partition.column.label | dir        | Accepts a string input.                                                                                                                                                                                                                                                                                                                                          |
+| exec.errors.verbose                            | FALSE      | Toggles verbose output of executable error messages                                                                                                                                                                                                                                                                                                              |
+| exec.java_compiler                             | DEFAULT    | Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.                                                                                                                                                |
+| exec.java_compiler_debug                       | TRUE       | Toggles the output of debug-level compiler error messages in runtime generated code.                                                                                                                                                                                                                                                                             |
+| exec.java_compiler_janino_maxsize              | 262144     | See the exec.java_compiler option comment. Accepts inputs of type LONG.                                                                                                                                                                                                                                                                                          |
+| exec.max_hash_table_size                       | 1073741824 | Ending size for hash tables. Range: 0 - 1073741824                                                                                                                                                                                                                                                                                                               |
+| exec.min_hash_table_size                       | 65536      | Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824                                                                                                                                                                                                                                              |
+| exec.queue.enable                              | FALSE      | Changes the state of query queues to control the number of queries that run simultaneously.                                                                                                                                                                                                                                                                      |
+| exec.queue.large                               | 10         | Sets the number of large queries that can run concurrently in the cluster. Range: 0-1000                                                                                                                                                                                                                                                                         |
+| exec.queue.small                               | 100        | Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001                                                                                                                                                                                                                                                                         |
+| exec.queue.threshold                           | 30000000   | Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807 |
+| exec.queue.timeout_millis                      | 300000     | Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807                                                                                                                                                                                                                                                                |
+| planner.add_producer_consumer                  | FALSE      | Increase prefetching of data from disk. Disable for in-memory reads.                                                                                                                                                                                                                                                                                             |
+| planner.affinity_factor                        | 1.2        | Accepts inputs of type DOUBLE.                                                                                                                                                                                                                                                                                                                                   |
+| planner.broadcast_factor                       | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.broadcast_threshold                    | 10000000   | The maximum number of records allowed to be broadcast as part of a query. Beyond this threshold, Drill reshuffles data rather than broadcasting it to one side of the join. Range: 0-2147483647 |
+| planner.disable_exchanges                      | FALSE      | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
+| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. Do not disable.                                                                                                                                                                                                                                                                                             |
+| planner.enable_constant_folding                | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.enable_demux_exchange                  | FALSE      | Toggles the state of hashing to a demultiplexed exchange. |
+| planner.enable_hash_single_key                 | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.enable_hashagg                         | TRUE       | Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enable is recommended.                                                                                                                                                                                                                                          |
+| planner.enable_hashjoin                        | TRUE       | Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint. |
+| planner.enable_hashjoin_swap                   | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.enable_mergejoin                       | TRUE       | Sort-based operation. A merge join is used for inner join, left and right outer joins. Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.                                                                                                                                       |
+| planner.enable_multiphase_agg                  | TRUE       | Each minor fragment does a local aggregation in phase 1, distributes the partially aggregated results to other fragments on a hash basis using the GROUP-BY keys, and then all the fragments perform a total aggregation using this data. |
+| planner.enable_mux_exchange                    | TRUE       | Toggles the state of hashing to a multiplexed exchange.                                                                                                                                                                                                                                                                                                          |
+| planner.enable_nestedloopjoin                  | TRUE       | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
+| planner.enable_nljoin_for_scalar_only          | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.enable_streamagg                       | TRUE       | Sort-based operation. Writes to disk.                                                                                                                                                                                                                                                                                                                            |
+| planner.identifier_max_length                  | 1024       |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.join.hash_join_swap_margin_factor      | 10         |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.join.row_count_estimate_factor         | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.average_field_width             | 8          |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.enable_memory_estimation        | FALSE      | Toggles the state of memory estimation and re-planning of the query. When enabled, Drill conservatively estimates memory requirements and typically excludes memory-constrained operators from the plan, which can negatively impact performance. |
+| planner.memory.hash_agg_table_factor           | 1.1        |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.hash_join_table_factor          | 1.1        |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.memory.max_query_memory_per_node       | 2147483648 | Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.                                                                                                                                                                                                             |
+| planner.memory.non_blocking_operators_memory   | 64         | Range: 0-2048                                                                                                                                                                                                                                                                                                                                                    |
+| planner.nestedloopjoin_factor                  | 100        |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.partitioner_sender_max_threads         | 8          | Upper limit of threads for outbound queuing.                                                                                                                                                                                                                                                                                                                     |
+| planner.partitioner_sender_set_threads         | -1         |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.partitioner_sender_threads_factor      | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
+| planner.producer_consumer_queue_size           | 10         | How much data to prefetch from disk (in record batches) out of band of query execution                                                                                                                                                                                                                                                                           |
+| planner.slice_target                           | 100000     | The number of records manipulated within a fragment before Drill parallelizes operations.                                                                                                                                                                                                                                                                        |
+| planner.width.max_per_node                     | 3          | Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.                                                                                                                                                |
+| planner.width.max_per_query                    | 1000       | Same as max per node but applies to the query as executed by the entire cluster.                                                                                                                                                                                                                                                                                 |
+| store.format                                   | parquet    | Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text. |
+| store.json.all_text_mode                       | FALSE      | Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.                                                                                                                                                                                                                                                                              |
+| store.mongo.all_text_mode                      | FALSE      | Similar to store.json.all_text_mode for MongoDB.                                                                                                                                                                                                                                                                                                                 |
+| store.parquet.block-size                       | 536870912  | Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.                                                                                                                                                                                                                               |
+| store.parquet.compression                      | snappy     | Compression type for storing Parquet output. Allowed values: snappy, gzip, none                                                                                                                                                                                                                                                                                  |
+| store.parquet.enable_dictionary_encoding*      | FALSE      | Do not change.                                                                                                                                                                                                                                                                                                                                                   |
+| store.parquet.use_new_reader                   | FALSE      | Not supported                                                                                                                                                                                                                                                                                                                                                    |
+| window.enable*                                 | FALSE      | Coming soon.                                                                                                                                                                                                                                                                                                                                                     |
 
 \* Not supported in this release.
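
To make the store.* rows in the converted table concrete: store.format only affects tables written with CREATE TABLE AS (CTAS). A brief sketch, assuming a writable dfs.tmp workspace and the cp.`employee.json` sample that ships with Drill; the output table name is made up for illustration:

    ALTER SESSION SET `store.format` = 'json';
    ALTER SESSION SET `store.json.all_text_mode` = true;   -- read all JSON fields as VARCHAR while copying

    CREATE TABLE dfs.tmp.`employee_copy` AS
    SELECT * FROM cp.`employee.json`;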
 


[11/25] drill git commit: Tableau 9 Desktop doc

Posted by ts...@apache.org.
Tableau 9 Desktop doc


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/4857f6c1
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/4857f6c1
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/4857f6c1

Branch: refs/heads/gh-pages
Commit: 4857f6c191e015f79aa189cde814463e3ba76420
Parents: 3f63669 1ad5dbd
Author: Bob Rumsby <br...@mapr.com>
Authored: Fri May 8 15:21:47 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Fri May 8 15:21:47 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 1954 +++++++++--------
 _docs/045-configure-drill.md                    |    6 +
 _docs/110-manage-drill.md                       |    6 -
 .../010-configure-drill-introduction.md         |   10 +
 .../020-configuring-drill-memory.md             |   40 +
 ...guring-a-multitenant-cluster-introduction.md |   20 +
 .../040-configuring-a-multitenant-cluster.md    |    5 +
 .../050-configuring-multitenant-resources.md    |   36 +
 .../060-configuring-a-shared-drillbit.md        |   65 +
 .../070-configuring-user-impersonation.md       |  152 ++
 .../075-configuring-user-authentication.md      |  157 ++
 .../080-configuration-options.md                |    9 +
 .../configure-drill/100-ports-used-by-drill.md  |   15 +
 _docs/configure-drill/110-partition-pruning.md  |   75 +
 .../010-configuration-options-introduction.md   |  410 ++++
 .../020-start-up-options.md                     |   63 +
 .../030-planning-and-exececution-options.md     |   60 +
 .../040-persistent-configuration-storage.md     |   92 +
 _docs/getting-started/020-why-drill.md          |    2 +-
 _docs/img/UserAuthProcess.PNG                   |  Bin 0 -> 30800 bytes
 _docs/img/UserAuth_ODBC_Driver.png              |  Bin 0 -> 83049 bytes
 .../050-starting-drill-in-distributed mode.md   |   14 +-
 .../030-starting-drill-on-linux-and-mac-os-x.md |   29 +
 .../050-starting-drill-on-windows.md            |   13 +-
 .../010-manage-drill-introduction.md            |    7 -
 ...-configuring-drill-in-a-dedicated-cluster.md |   30 -
 .../012-configuring-a-multitenant-cluster.md    |    5 -
 ...guring-a-multitenant-cluster-introduction.md |   22 -
 .../015-configuring-multitenant-resources.md    |   80 -
 .../017-configuring-a-shared-drillbit.md        |   65 -
 _docs/manage-drill/020-configuration-options.md |    9 -
 _docs/manage-drill/030-start-stop.md            |   42 -
 _docs/manage-drill/040-ports-used-by-drill.md   |   15 -
 _docs/manage-drill/050-partition-pruning.md     |   75 -
 ...and-canceling-queries-in-the-Drill-Web-UI.md |   30 -
 .../010-configuration-options-introduction.md   |  407 ----
 .../020-start-up-options.md                     |   63 -
 .../030-planning-and-exececution-options.md     |   60 -
 .../040-persistent-configuration-storage.md     |   92 -
 .../060-tibco-spotfire with Drill.md            |   50 +
 ...-apache-drill-with-tibco-spotfire-desktop.md |   50 -
 ...and-canceling-queries-in-the-Drill-Web-UI.md |   30 +
 .../sql-commands/030-create-table-as-command.md |    2 +-
 .../sql-commands/050-create-view-command.md     |    2 +-
 .../sql-commands/070-explain-commands.md        |    2 +-
 .../090-show-databases-and-show-schemas.md      |    2 +-
 .../sql-functions/020-data-type-conversion.md   |    4 +-
 css/style.css                                   |  714 +++----
 css/video-box.css                               |   55 +
 index.html                                      |   27 +
 static/fancybox/blank.gif                       |  Bin 0 -> 43 bytes
 static/fancybox/fancybox_loading.gif            |  Bin 0 -> 6567 bytes
 static/fancybox/fancybox_loading@2x.gif         |  Bin 0 -> 13984 bytes
 static/fancybox/fancybox_overlay.png            |  Bin 0 -> 1003 bytes
 static/fancybox/fancybox_sprite.png             |  Bin 0 -> 1362 bytes
 static/fancybox/fancybox_sprite@2x.png          |  Bin 0 -> 6553 bytes
 static/fancybox/helpers/fancybox_buttons.png    |  Bin 0 -> 1080 bytes
 .../helpers/jquery.fancybox-buttons.css         |   97 +
 .../fancybox/helpers/jquery.fancybox-buttons.js |  122 ++
 .../fancybox/helpers/jquery.fancybox-media.js   |  199 ++
 .../fancybox/helpers/jquery.fancybox-thumbs.css |   55 +
 .../fancybox/helpers/jquery.fancybox-thumbs.js  |  162 ++
 static/fancybox/jquery.fancybox.css             |  274 +++
 static/fancybox/jquery.fancybox.js              | 2020 ++++++++++++++++++
 static/fancybox/jquery.fancybox.pack.js         |   46 +
 65 files changed, 5786 insertions(+), 2362 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/4857f6c1/_data/docs.json
----------------------------------------------------------------------
diff --cc _data/docs.json
index 19f26f1,4ced7af..14aa8c6
--- a/_data/docs.json
+++ b/_data/docs.json
@@@ -4544,8 -4578,8 +4578,9 @@@
                          }
                      ], 
                      "children": [], 
 -                    "next_title": "Query Data", 
 -                    "next_url": "/docs/query-data/", 
++<<<<<<< HEAD
 +                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
 +                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                      "parent": "ODBC/JDBC Interfaces", 
                      "previous_title": "Using MicroStrategy Analytics with Drill", 
                      "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
@@@ -5128,15 -5145,32 +5163,37 @@@
                      "relative_path": "_docs/query-data/070-query-sys-tbl.md", 
                      "title": "Querying System Tables", 
                      "url": "/docs/querying-system-tables/"
-                 }
-             ], 
-             "next_title": "Query Data Introduction", 
-             "next_url": "/docs/query-data-introduction/", 
-             "parent": "", 
-             "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
-             "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-             "relative_path": "_docs/070-query-data.md", 
-             "title": "Query Data", 
+                 }, 
+                 {
+                     "breadcrumbs": [
+                         {
+                             "title": "Query Data", 
+                             "url": "/docs/query-data/"
+                         }
+                     ], 
+                     "children": [], 
+                     "next_title": "SQL Reference", 
+                     "next_url": "/docs/sql-reference/", 
+                     "parent": "Query Data", 
+                     "previous_title": "Querying System Tables", 
+                     "previous_url": "/docs/querying-system-tables/", 
+                     "relative_path": "_docs/query-data/080-monitoring-and-canceling-queries-in-the-Drill-Web-UI.md", 
+                     "title": "Monitoring and Canceling Queries in the Drill Web UI", 
+                     "url": "/docs/monitoring-and-canceling-queries-in-the-drill-web-ui/"
+                 }
+             ], 
+             "next_title": "Query Data Introduction", 
+             "next_url": "/docs/query-data-introduction/", 
+             "parent": "", 
++<<<<<<< HEAD
++            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
++            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
++=======
+             "previous_title": "Using Tibco Spotfire", 
+             "previous_url": "/docs/using-tibco-spotfire/", 
++>>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
+             "relative_path": "_docs/070-query-data.md", 
+             "title": "Query Data", 
              "url": "/docs/query-data/"
          }, 
          "Query Data Introduction": {
@@@ -8110,40 -8127,6 +8150,43 @@@
              "title": "Useful Research", 
              "url": "/docs/useful-research/"
          }, 
++<<<<<<< HEAD
 +        "Using Apache Drill with Tableau 9 Desktop": {
 +            "breadcrumbs": [
 +                {
 +                    "title": "ODBC/JDBC Interfaces", 
 +                    "url": "/docs/odbc-jdbc-interfaces/"
 +                }
 +            ], 
 +            "children": [], 
 +            "next_title": "Query Data", 
 +            "next_url": "/docs/query-data/", 
 +            "parent": "ODBC/JDBC Interfaces", 
 +            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
 +            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
 +            "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
 +            "title": "Using Apache Drill with Tableau 9 Desktop", 
 +            "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
 +        }, 
 +        "Using Apache Drill with Tibco Spotfire Desktop": {
 +            "breadcrumbs": [
 +                {
 +                    "title": "ODBC/JDBC Interfaces", 
 +                    "url": "/docs/odbc-jdbc-interfaces/"
 +                }
 +            ], 
 +            "children": [], 
 +            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
 +            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
 +            "parent": "ODBC/JDBC Interfaces", 
 +            "previous_title": "Using MicroStrategy Analytics with Drill", 
 +            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
 +            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
 +            "title": "Using Apache Drill with Tibco Spotfire Desktop", 
 +            "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
 +        }, 
++=======
++>>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
          "Using Custom Functions in Queries": {
              "breadcrumbs": [
                  {
@@@ -9875,8 -10173,8 +10233,13 @@@
                          }
                      ], 
                      "children": [], 
-                     "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                     "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
++<<<<<<< HEAD
++                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
++                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
++=======
+                     "next_title": "Using Tibco Spotfire", 
+                     "next_url": "/docs/using-tibco-spotfire/", 
++>>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
                      "parent": "ODBC/JDBC Interfaces", 
                      "previous_title": "Using Drill Explorer on Windows", 
                      "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@@ -9895,28 -10193,11 +10258,19 @@@
                      "next_title": "Query Data", 
                      "next_url": "/docs/query-data/", 
                      "parent": "ODBC/JDBC Interfaces", 
++<<<<<<< HEAD
 +                    "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
 +                    "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
 +                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
 +                    "title": "Using Apache Drill with Tableau 9 Desktop", 
 +                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
++=======
+                     "previous_title": "Using MicroStrategy Analytics with Drill", 
+                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                     "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
+                     "title": "Using Tibco Spotfire", 
+                     "url": "/docs/using-tibco-spotfire/"
++>>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
                  }
              ], 
              "next_title": "Interfaces Introduction", 
@@@ -10287,8 -10585,8 +10658,13 @@@
              "next_title": "Query Data Introduction", 
              "next_url": "/docs/query-data-introduction/", 
              "parent": "", 
++<<<<<<< HEAD
 +            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
 +            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
++=======
+             "previous_title": "Using Tibco Spotfire", 
+             "previous_url": "/docs/using-tibco-spotfire/", 
++>>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
              "relative_path": "_docs/070-query-data.md", 
              "title": "Query Data", 
              "url": "/docs/query-data/"
@@@ -12156,4 -12156,4 +12234,4 @@@
              "url": "/docs/project-bylaws/"
          }
      ]
--}
++}


[15/25] drill git commit: revive sys options table

Posted by ts...@apache.org.
revive sys options table


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/7a407455
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/7a407455
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/7a407455

Branch: refs/heads/gh-pages
Commit: 7a407455efe0f8349d5d7bd2d8686a01698b6294
Parents: 314aae1
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Fri May 8 18:03:23 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Fri May 8 18:03:23 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md   | 300 ++++++++++++++++++-
 1 file changed, 299 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/7a407455/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 53fb5fb..8cfadf8 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -9,6 +9,304 @@ env.sh` and `drill-override.conf` files. Drill stores these files in the
 `/conf` directory. Drill sources` /etc/drill/conf` if it exists. Otherwise,
 Drill sources the local `<drill_installation_directory>/conf` directory.
 
-The sys.options table in Drill contains information about boot (start-up) and system options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view these options. The sys.options also contains many session or system level options, some of whicha are described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). 
+The sys.options table in Drill contains information about boot (start-up) and system options. The section, ["Start-up Options"]({{site.baseurl}}/docs/start-up-options), covers how to configure and view key boot options. The sys.options table also contains many system options, some of which are described in detail in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options). The following table lists the options in alphabetical order and provides a brief description of supported options:
+
+## System Options
+The sys.options table lists the following options that you can set at the session or system level as described in the section, ["Planning and Execution Options"]({{site.baseurl}}/docs/planning-and-execution-options).
+
+<table>
+  <tr>
+    <th>Name</th>
+    <th>Default</th>
+    <th>Comments</th>
+  </tr>
+  <tr>
+    <td>drill.exec.functions.cast_empty_string_to_null</td>
+    <td>FALSE</td>
+    <td>Not supported in this release.</td>
+  </tr>
+  <tr>
+    <td>drill.exec.storage.file.partition.column.label</td>
+    <td>dir</td>
+    <td>Accepts a string input.</td>
+  </tr>
+  <tr>
+    <td>exec.errors.verbose</td>
+    <td>FALSE</td>
+    <td>Toggles verbose output of executable error messages</td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler</td>
+    <td>DEFAULT</td>
+    <td>Switches between DEFAULT, JDK, and JANINO mode for the current session. Uses Janino by default for generated source code of less than exec.java_compiler_janino_maxsize; otherwise, switches to the JDK compiler.</td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler_debug</td>
+    <td>TRUE</td>
+    <td>Toggles the output of debug-level compiler error messages in runtime generated code.</td>
+  </tr>
+  <tr>
+    <td>exec.java_compiler_janino_maxsize</td>
+    <td>262144</td>
+    <td>See the exec.java_compiler option comment. Accepts inputs of type LONG.</td>
+  </tr>
+  <tr>
+    <td>exec.max_hash_table_size</td>
+    <td>1073741824</td>
+    <td>Ending size for hash tables. Range: 0 - 1073741824</td>
+  </tr>
+  <tr>
+    <td>exec.min_hash_table_size</td>
+    <td>65536</td>
+    <td>Starting size for hash tables. Increase according to available memory to improve performance. Range: 0 - 1073741824</td>
+  </tr>
+  <tr>
+    <td>exec.queue.enable</td>
+    <td>FALSE</td>
+    <td>Changes the state of query queues to control the number of queries that run simultaneously.</td>
+  </tr>
+  <tr>
+    <td>exec.queue.large</td>
+    <td>10</td>
+    <td>Sets the number of large queries that can run concurrently in the cluster.
+Range: 0-1000</td>
+  </tr>
+  <tr>
+    <td>exec.queue.small</td>
+    <td>100</td>
+    <td>Sets the number of small queries that can run concurrently in the cluster. Range: 0-1001</td>
+  </tr>
+  <tr>
+    <td>exec.queue.threshold</td>
+    <td>30000000</td>
+    <td>Sets the cost threshold, which depends on the complexity of the queries in queue, for determining whether a query is large or small. Complex queries have higher thresholds. Range: 0-9223372036854775807</td>
+  </tr>
+  <tr>
+    <td>exec.queue.timeout_millis</td>
+    <td>300000</td>
+    <td>Indicates how long a query can wait in queue before the query fails. Range: 0-9223372036854775807</td>
+  </tr>
+  <tr>
+    <td>planner.add_producer_consumer</td>
+    <td>FALSE</td>
+    <td>Increase prefetching of data from disk. Disable for in-memory reads.</td>
+  </tr>
+  <tr>
+    <td>planner.affinity_factor</td>
+    <td>1.2</td>
+    <td>Accepts inputs of type DOUBLE.</td>
+  </tr>
+  <tr>
+    <td>planner.broadcast_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.broadcast_threshold</td>
+    <td>10000000</td>
+    <td>The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647</td>
+  </tr>
+  <tr>
+    <td>planner.disable_exchanges</td>
+    <td>FALSE</td>
+    <td>Toggles the state of hashing to a random exchange.</td>
+  </tr>
+  
+  <tr>
+    <td>planner.enable_broadcast_join</td>
+    <td>TRUE</td>
+    <td>Changes the state of aggregation and join operators. Do not disable.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_constant_folding</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_demux_exchange</td>
+    <td>FALSE</td>
+    <td>Toggles the state of hashing to a demultiplexed exchange.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_hash_single_key</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashagg</td>
+    <td>TRUE</td>
+    <td>Enable hash aggregation; otherwise, Drill does a sort-based aggregation. Does not write to disk. Enabling is recommended.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashjoin</td>
+    <td>TRUE</td>
+    <td>Enable the memory-hungry hash join. Drill assumes that a query will have adequate memory to complete and tries to use the fastest operations possible to complete the planned inner, left, right, or full outer joins using a hash table. Does not write to disk. Disabling hash join allows Drill to manage arbitrarily large data in a small memory footprint.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_hashjoin_swap</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_mergejoin</td>
+    <td>TRUE</td>
+    <td>Sort-based operation. A merge join is used for inner join, left and right outer joins.  Inputs to the merge join must be sorted. It reads the sorted input streams from both sides and finds matching rows. Writes to disk.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_multiphase_agg</td>
+    <td>TRUE</td>
+    <td>Each minor fragment does a local aggregation in phase 1, distributes its partially aggregated results to other fragments on a hash basis using the GROUP-BY keys, and then all the fragments perform a total aggregation using this data.  
+ </td>
+  </tr>
+  <tr>
+    <td>planner.enable_mux_exchange</td>
+    <td>TRUE</td>
+    <td>Toggles the state of hashing to a multiplexed exchange.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_nestedloopjoin</td>
+    <td>TRUE</td>
+    <td>Sort-based operation. Writes to disk.</td>
+  </tr>
+  <tr>
+    <td>planner.enable_nljoin_for_scalar_only</td>
+    <td>TRUE</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.enable_streamagg</td>
+    <td>TRUE</td>
+    <td>Sort-based operation. Writes to disk.</td>
+  </tr>
+  <tr>
+    <td>planner.identifier_max_length</td>
+    <td>1024</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.join.hash_join_swap_margin_factor</td>
+    <td>10</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.join.row_count_estimate_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.average_field_width</td>
+    <td>8</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.enable_memory_estimation</td>
+    <td>FALSE</td>
+    <td>Toggles the state of memory estimation and re-planning of the query.
+When enabled, Drill conservatively estimates memory requirements and typically excludes memory-constrained operators from the plan, which can negatively impact performance.</td>
+  </tr>
+  <tr>
+    <td>planner.memory.hash_agg_table_factor</td>
+    <td>1.1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.hash_join_table_factor</td>
+    <td>1.1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.memory.max_query_memory_per_node</td>
+    <td>2147483648</td>
+    <td>Sets the maximum estimate of memory for a query per node. If the estimate is too low, Drill re-plans the query without memory-constrained operators.</td>
+  </tr>
+  <tr>
+    <td>planner.memory.non_blocking_operators_memory</td>
+    <td>64</td>
+    <td>Range: 0-2048</td>
+  </tr>
+  <tr>
+    <td>planner.nestedloopjoin_factor</td>
+    <td>100</td>
+    <td></td>
+  </tr>
+    <tr>
+    <td>planner.partitioner_sender_max_threads</td>
+    <td>8</td>
+    <td>Upper limit of threads for outbound queuing.</td>
+  </tr>
+  <tr>
+    <td>planner.partitioner_sender_set_threads</td>
+    <td>-1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.partitioner_sender_threads_factor</td>
+    <td>1</td>
+    <td></td>
+  </tr>
+  <tr>
+    <td>planner.producer_consumer_queue_size</td>
+    <td>10</td>
+    <td>How much data to prefetch from disk (in record batches) out of band of query execution</td>
+  </tr>
+  <tr>
+    <td>planner.slice_target</td>
+    <td>100000</td>
+    <td>The number of records manipulated within a fragment before Drill parallelizes operations.</td>
+  </tr>
+  <tr>
+    <td>planner.width.max_per_node</td>
+    <td>3</td>
+    <td>Maximum number of threads that can run in parallel for a query on a node. A slice is an individual thread. This number indicates the maximum number of slices per query for the query’s major fragment on a node.</td>
+  </tr>
+  <tr>
+    <td>planner.width.max_per_query</td>
+    <td>1000</td>
+    <td>Same as max per node but applies to the query as executed by the entire cluster.</td>
+  </tr>
+  <tr>
+    <td>store.format</td>
+    <td>parquet</td>
+    <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, or text.</td>
+  </tr>
+  <tr>
+    <td>store.json.all_text_mode</td>
+    <td>FALSE</td>
+    <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>
+  </tr>
+  <tr>
+    <td>store.mongo.all_text_mode</td>
+    <td>FALSE</td>
+    <td>Similar to store.json.all_text_mode for MongoDB.</td>
+  </tr>
+  <tr>
+    <td>store.parquet.block-size</td>
+    <td>536870912</td>
+    <td>Sets the size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system.</td>
+  </tr>
+  <tr>
+    <td>store.parquet.compression</td>
+    <td>snappy</td>
+    <td>Compression type for storing Parquet output. Allowed values: snappy, gzip, none</td>
+  </tr>
+  <tr>
+    <td>store.parquet.enable_dictionary_encoding*</td>
+    <td>FALSE</td>
+    <td>Do not change.</td>
+  </tr>
+  <tr>
+    <td>store.parquet.use_new_reader</td>
+    <td>FALSE</td>
+    <td>Not supported</td>
+  </tr>
+  <tr>
+    <td>window.enable*</td>
+    <td>FALSE</td>
+    <td>Coming soon.</td>
+  </tr>
+</table>
+
+\* Not supported in this release.
+
 
 

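As a quick illustration of the sys.options introduction added above: you can inspect these options directly from the sys.options table and change one at the system or session level with the ALTER SYSTEM and ALTER SESSION commands covered in the linked sections. A minimal sketch (the 4 GB value below is illustrative only, not a recommendation):

    -- list the planner options and their current values
    SELECT * FROM sys.options WHERE name LIKE 'planner.%';

    -- raise the per-node memory estimate for queries cluster-wide (example value)
    ALTER SYSTEM SET `planner.memory.max_query_memory_per_node` = 4294967296;

    -- override the same option for the current session only
    ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 2147483648;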

[17/25] drill git commit: add Aman's info about broadcast

Posted by ts...@apache.org.
add Aman's info about broadcast


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/029d411f
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/029d411f
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/029d411f

Branch: refs/heads/gh-pages
Commit: 029d411fc0e39e33c5bebfb06714b3d203fb48be
Parents: 598be28
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Sat May 9 12:39:32 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Sat May 9 12:39:32 2015 -0700

----------------------------------------------------------------------
 .../010-configuration-options-introduction.md                      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/029d411f/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
index 8443036..c26dd6b 100644
--- a/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
+++ b/_docs/configure-drill/configuration-options/010-configuration-options-introduction.md
@@ -34,7 +34,7 @@ The sys.options table lists the following options that you can set at the sessio
 | planner.broadcast_factor                       | 1          |                                                                                                                                                                                                                                                                                                                                                                  |
 | planner.broadcast_threshold                    | 10000000   | The maximum number of records allowed to be broadcast as part of a query. After one million records, Drill reshuffles data rather than doing a broadcast to one side of the join. Range: 0-2147483647                                                                                                                                                            |
 | planner.disable_exchanges                      | FALSE      | Toggles the state of hashing to a random exchange.                                                                                                                                                                                                                                                                                                               |
-| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. Do not disable.                                                                                                                                                                                                                                                                                             |
+| planner.enable_broadcast_join                  | TRUE       | Changes the state of aggregation and join operators. The broadcast join can be used for hash join, merge join and nested loop join. Broadcast join is very useful in situations where a large (fact) table is being joined to relatively smaller (dimension) tables. Do not disable.                                                                             |
 | planner.enable_constant_folding                | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |
 | planner.enable_demux_exchange                  | FALSE      | Toggles the state of hashing to a demulitplexed exchange.                                                                                                                                                                                                                                                                                                        |
 | planner.enable_hash_single_key                 | TRUE       |                                                                                                                                                                                                                                                                                                                                                                  |

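A hedged sketch of the broadcast-join guidance above, assuming a session in which a large (fact) table is joined to smaller (dimension) tables; the threshold value is an arbitrary example, not a recommendation:

    -- allow dimension tables of up to 5 million rows (example value) to be
    -- broadcast to the fragments scanning the large fact table
    ALTER SESSION SET `planner.enable_broadcast_join` = true;
    ALTER SESSION SET `planner.broadcast_threshold` = 5000000;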

[03/25] drill git commit: title changes, multitenant doc review changes

Posted by ts...@apache.org.
title changes, multitenant doc review changes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/80bbe062
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/80bbe062
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/80bbe062

Branch: refs/heads/gh-pages
Commit: 80bbe0620885de7d1e9926d40693d9175ecb8a35
Parents: 22d4df3
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Thu May 7 15:45:58 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Thu May 7 15:45:58 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 | 170 +++++++++----------
 _docs/030-configuring-user-imperso.textClipping |   0
 .../010-configure-drill-introduction.md         |   9 +-
 .../020-configuring-drill-memory.md             |   5 +-
 ...guring-a-multitenant-cluster-introduction.md |   8 +-
 .../050-configuring-multitenant-resources.md    |  44 ++---
 .../060-configuring-a-shared-drillbit.md        |   6 +-
 .../060-tibco-spotfire with Drill.md            |  50 ++++++
 ...-apache-drill-with-tibco-spotfire-desktop.md |  50 ------
 .../sql-commands/030-create-table-as-command.md |   2 +-
 .../sql-commands/050-create-view-command.md     |   2 +-
 .../sql-commands/070-explain-commands.md        |   2 +-
 .../090-show-databases-and-show-schemas.md      |   2 +-
 13 files changed, 167 insertions(+), 183 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 91d3fe5..4ced7af 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -50,7 +50,7 @@
                 }
             ], 
             "children": [], 
-            "next_title": "CREATE TABLE AS (CTAS) command", 
+            "next_title": "CREATE TABLE AS (CTAS) Command", 
             "next_url": "/docs/create-table-as-ctas-command/", 
             "parent": "SQL Commands", 
             "previous_title": "ALTER SESSION Command", 
@@ -624,7 +624,7 @@
             "title": "CASE", 
             "url": "/docs/case/"
         }, 
-        "CREATE TABLE AS (CTAS) command": {
+        "CREATE TABLE AS (CTAS) Command": {
             "breadcrumbs": [
                 {
                     "title": "SQL Commands", 
@@ -636,16 +636,16 @@
                 }
             ], 
             "children": [], 
-            "next_title": "CREATE VIEW command", 
+            "next_title": "CREATE VIEW Command", 
             "next_url": "/docs/create-view-command/", 
             "parent": "SQL Commands", 
             "previous_title": "ALTER SYSTEM Command", 
             "previous_url": "/docs/alter-system-command/", 
             "relative_path": "_docs/sql-reference/sql-commands/030-create-table-as-command.md", 
-            "title": "CREATE TABLE AS (CTAS) command", 
+            "title": "CREATE TABLE AS (CTAS) Command", 
             "url": "/docs/create-table-as-ctas-command/"
         }, 
-        "CREATE VIEW command": {
+        "CREATE VIEW Command": {
             "breadcrumbs": [
                 {
                     "title": "SQL Commands", 
@@ -660,10 +660,10 @@
             "next_title": "DESCRIBE Command", 
             "next_url": "/docs/describe-command/", 
             "parent": "SQL Commands", 
-            "previous_title": "CREATE TABLE AS (CTAS) command", 
+            "previous_title": "CREATE TABLE AS (CTAS) Command", 
             "previous_url": "/docs/create-table-as-ctas-command/", 
             "relative_path": "_docs/sql-reference/sql-commands/050-create-view-command.md", 
-            "title": "CREATE VIEW command", 
+            "title": "CREATE VIEW Command", 
             "url": "/docs/create-view-command/"
         }, 
         "Compiling Drill from Source": {
@@ -1731,10 +1731,10 @@
                 }
             ], 
             "children": [], 
-            "next_title": "EXPLAIN commands", 
+            "next_title": "EXPLAIN Commands", 
             "next_url": "/docs/explain-commands/", 
             "parent": "SQL Commands", 
-            "previous_title": "CREATE VIEW command", 
+            "previous_title": "CREATE VIEW Command", 
             "previous_url": "/docs/create-view-command/", 
             "relative_path": "_docs/sql-reference/sql-commands/060-describe-command.md", 
             "title": "DESCRIBE Command", 
@@ -2760,7 +2760,7 @@
             "title": "Driver Configuration Options", 
             "url": "/docs/driver-configuration-options/"
         }, 
-        "EXPLAIN commands": {
+        "EXPLAIN Commands": {
             "breadcrumbs": [
                 {
                     "title": "SQL Commands", 
@@ -2778,7 +2778,7 @@
             "previous_title": "DESCRIBE Command", 
             "previous_url": "/docs/describe-command/", 
             "relative_path": "_docs/sql-reference/sql-commands/070-explain-commands.md", 
-            "title": "EXPLAIN commands", 
+            "title": "EXPLAIN Commands", 
             "url": "/docs/explain-commands/"
         }, 
         "Embedded Mode Prerequisites": {
@@ -4561,8 +4561,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+                    "next_title": "Using Tibco Spotfire", 
+                    "next_url": "/docs/using-tibco-spotfire/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -4583,9 +4583,9 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
-                    "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
+                    "title": "Using Tibco Spotfire", 
+                    "url": "/docs/using-tibco-spotfire/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -5167,8 +5167,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+            "previous_title": "Using Tibco Spotfire", 
+            "previous_url": "/docs/using-tibco-spotfire/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -5930,16 +5930,16 @@
                 }
             ], 
             "children": [], 
-            "next_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+            "next_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
             "next_url": "/docs/show-databases-and-show-schemas-command/", 
             "parent": "SQL Commands", 
-            "previous_title": "EXPLAIN commands", 
+            "previous_title": "EXPLAIN Commands", 
             "previous_url": "/docs/explain-commands/", 
             "relative_path": "_docs/sql-reference/sql-commands/080-select.md", 
             "title": "SELECT Statements", 
             "url": "/docs/select-statements/"
         }, 
-        "SHOW DATABASES AND SHOW SCHEMAS Command": {
+        "SHOW DATABASES and SHOW SCHEMAS Command": {
             "breadcrumbs": [
                 {
                     "title": "SQL Commands", 
@@ -5957,7 +5957,7 @@
             "previous_title": "SELECT Statements", 
             "previous_url": "/docs/select-statements/", 
             "relative_path": "_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md", 
-            "title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+            "title": "SHOW DATABASES and SHOW SCHEMAS Command", 
             "url": "/docs/show-databases-and-show-schemas-command/"
         }, 
         "SHOW FILES Command": {
@@ -5975,7 +5975,7 @@
             "next_title": "SHOW TABLES Command", 
             "next_url": "/docs/show-tables-command/", 
             "parent": "SQL Commands", 
-            "previous_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+            "previous_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
             "previous_url": "/docs/show-databases-and-show-schemas-command/", 
             "relative_path": "_docs/sql-reference/sql-commands/100-show-files.md", 
             "title": "SHOW FILES Command", 
@@ -6064,7 +6064,7 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "CREATE TABLE AS (CTAS) command", 
+                    "next_title": "CREATE TABLE AS (CTAS) Command", 
                     "next_url": "/docs/create-table-as-ctas-command/", 
                     "parent": "SQL Commands", 
                     "previous_title": "ALTER SESSION Command", 
@@ -6085,13 +6085,13 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "CREATE VIEW command", 
+                    "next_title": "CREATE VIEW Command", 
                     "next_url": "/docs/create-view-command/", 
                     "parent": "SQL Commands", 
                     "previous_title": "ALTER SYSTEM Command", 
                     "previous_url": "/docs/alter-system-command/", 
                     "relative_path": "_docs/sql-reference/sql-commands/030-create-table-as-command.md", 
-                    "title": "CREATE TABLE AS (CTAS) command", 
+                    "title": "CREATE TABLE AS (CTAS) Command", 
                     "url": "/docs/create-table-as-ctas-command/"
                 }, 
                 {
@@ -6109,10 +6109,10 @@
                     "next_title": "DESCRIBE Command", 
                     "next_url": "/docs/describe-command/", 
                     "parent": "SQL Commands", 
-                    "previous_title": "CREATE TABLE AS (CTAS) command", 
+                    "previous_title": "CREATE TABLE AS (CTAS) Command", 
                     "previous_url": "/docs/create-table-as-ctas-command/", 
                     "relative_path": "_docs/sql-reference/sql-commands/050-create-view-command.md", 
-                    "title": "CREATE VIEW command", 
+                    "title": "CREATE VIEW Command", 
                     "url": "/docs/create-view-command/"
                 }, 
                 {
@@ -6127,10 +6127,10 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "EXPLAIN commands", 
+                    "next_title": "EXPLAIN Commands", 
                     "next_url": "/docs/explain-commands/", 
                     "parent": "SQL Commands", 
-                    "previous_title": "CREATE VIEW command", 
+                    "previous_title": "CREATE VIEW Command", 
                     "previous_url": "/docs/create-view-command/", 
                     "relative_path": "_docs/sql-reference/sql-commands/060-describe-command.md", 
                     "title": "DESCRIBE Command", 
@@ -6154,7 +6154,7 @@
                     "previous_title": "DESCRIBE Command", 
                     "previous_url": "/docs/describe-command/", 
                     "relative_path": "_docs/sql-reference/sql-commands/070-explain-commands.md", 
-                    "title": "EXPLAIN commands", 
+                    "title": "EXPLAIN Commands", 
                     "url": "/docs/explain-commands/"
                 }, 
                 {
@@ -6169,10 +6169,10 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                    "next_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                     "next_url": "/docs/show-databases-and-show-schemas-command/", 
                     "parent": "SQL Commands", 
-                    "previous_title": "EXPLAIN commands", 
+                    "previous_title": "EXPLAIN Commands", 
                     "previous_url": "/docs/explain-commands/", 
                     "relative_path": "_docs/sql-reference/sql-commands/080-select.md", 
                     "title": "SELECT Statements", 
@@ -6196,7 +6196,7 @@
                     "previous_title": "SELECT Statements", 
                     "previous_url": "/docs/select-statements/", 
                     "relative_path": "_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md", 
-                    "title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                    "title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                     "url": "/docs/show-databases-and-show-schemas-command/"
                 }, 
                 {
@@ -6214,7 +6214,7 @@
                     "next_title": "SHOW TABLES Command", 
                     "next_url": "/docs/show-tables-command/", 
                     "parent": "SQL Commands", 
-                    "previous_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                    "previous_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                     "previous_url": "/docs/show-databases-and-show-schemas-command/", 
                     "relative_path": "_docs/sql-reference/sql-commands/100-show-files.md", 
                     "title": "SHOW FILES Command", 
@@ -6995,7 +6995,7 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "CREATE TABLE AS (CTAS) command", 
+                            "next_title": "CREATE TABLE AS (CTAS) Command", 
                             "next_url": "/docs/create-table-as-ctas-command/", 
                             "parent": "SQL Commands", 
                             "previous_title": "ALTER SESSION Command", 
@@ -7016,13 +7016,13 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "CREATE VIEW command", 
+                            "next_title": "CREATE VIEW Command", 
                             "next_url": "/docs/create-view-command/", 
                             "parent": "SQL Commands", 
                             "previous_title": "ALTER SYSTEM Command", 
                             "previous_url": "/docs/alter-system-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/030-create-table-as-command.md", 
-                            "title": "CREATE TABLE AS (CTAS) command", 
+                            "title": "CREATE TABLE AS (CTAS) Command", 
                             "url": "/docs/create-table-as-ctas-command/"
                         }, 
                         {
@@ -7040,10 +7040,10 @@
                             "next_title": "DESCRIBE Command", 
                             "next_url": "/docs/describe-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "CREATE TABLE AS (CTAS) command", 
+                            "previous_title": "CREATE TABLE AS (CTAS) Command", 
                             "previous_url": "/docs/create-table-as-ctas-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/050-create-view-command.md", 
-                            "title": "CREATE VIEW command", 
+                            "title": "CREATE VIEW Command", 
                             "url": "/docs/create-view-command/"
                         }, 
                         {
@@ -7058,10 +7058,10 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "EXPLAIN commands", 
+                            "next_title": "EXPLAIN Commands", 
                             "next_url": "/docs/explain-commands/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "CREATE VIEW command", 
+                            "previous_title": "CREATE VIEW Command", 
                             "previous_url": "/docs/create-view-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/060-describe-command.md", 
                             "title": "DESCRIBE Command", 
@@ -7085,7 +7085,7 @@
                             "previous_title": "DESCRIBE Command", 
                             "previous_url": "/docs/describe-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/070-explain-commands.md", 
-                            "title": "EXPLAIN commands", 
+                            "title": "EXPLAIN Commands", 
                             "url": "/docs/explain-commands/"
                         }, 
                         {
@@ -7100,10 +7100,10 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "next_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "next_url": "/docs/show-databases-and-show-schemas-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "EXPLAIN commands", 
+                            "previous_title": "EXPLAIN Commands", 
                             "previous_url": "/docs/explain-commands/", 
                             "relative_path": "_docs/sql-reference/sql-commands/080-select.md", 
                             "title": "SELECT Statements", 
@@ -7127,7 +7127,7 @@
                             "previous_title": "SELECT Statements", 
                             "previous_url": "/docs/select-statements/", 
                             "relative_path": "_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md", 
-                            "title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "url": "/docs/show-databases-and-show-schemas-command/"
                         }, 
                         {
@@ -7145,7 +7145,7 @@
                             "next_title": "SHOW TABLES Command", 
                             "next_url": "/docs/show-tables-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "previous_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "previous_url": "/docs/show-databases-and-show-schemas-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/100-show-files.md", 
                             "title": "SHOW FILES Command", 
@@ -8127,23 +8127,6 @@
             "title": "Useful Research", 
             "url": "/docs/useful-research/"
         }, 
-        "Using Apache Drill with Tibco Spotfire Desktop": {
-            "breadcrumbs": [
-                {
-                    "title": "ODBC/JDBC Interfaces", 
-                    "url": "/docs/odbc-jdbc-interfaces/"
-                }
-            ], 
-            "children": [], 
-            "next_title": "Query Data", 
-            "next_url": "/docs/query-data/", 
-            "parent": "ODBC/JDBC Interfaces", 
-            "previous_title": "Using MicroStrategy Analytics with Drill", 
-            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
-            "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
-        }, 
         "Using Custom Functions in Queries": {
             "breadcrumbs": [
                 {
@@ -8207,8 +8190,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+            "next_title": "Using Tibco Spotfire", 
+            "next_url": "/docs/using-tibco-spotfire/", 
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using Drill Explorer on Windows", 
             "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -8546,6 +8529,23 @@
             "title": "Using SQL Functions, Clauses, and Joins", 
             "url": "/docs/using-sql-functions-clauses-and-joins/"
         }, 
+        "Using Tibco Spotfire": {
+            "breadcrumbs": [
+                {
+                    "title": "ODBC/JDBC Interfaces", 
+                    "url": "/docs/odbc-jdbc-interfaces/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Query Data", 
+            "next_url": "/docs/query-data/", 
+            "parent": "ODBC/JDBC Interfaces", 
+            "previous_title": "Using MicroStrategy Analytics with Drill", 
+            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
+            "title": "Using Tibco Spotfire", 
+            "url": "/docs/using-tibco-spotfire/"
+        }, 
         "Using a Connection String": {
             "breadcrumbs": [
                 {
@@ -10173,8 +10173,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+                    "next_title": "Using Tibco Spotfire", 
+                    "next_url": "/docs/using-tibco-spotfire/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -10195,9 +10195,9 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
-                    "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
+                    "title": "Using Tibco Spotfire", 
+                    "url": "/docs/using-tibco-spotfire/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -10585,8 +10585,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+            "previous_title": "Using Tibco Spotfire", 
+            "previous_url": "/docs/using-tibco-spotfire/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -11093,7 +11093,7 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "CREATE TABLE AS (CTAS) command", 
+                            "next_title": "CREATE TABLE AS (CTAS) Command", 
                             "next_url": "/docs/create-table-as-ctas-command/", 
                             "parent": "SQL Commands", 
                             "previous_title": "ALTER SESSION Command", 
@@ -11114,13 +11114,13 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "CREATE VIEW command", 
+                            "next_title": "CREATE VIEW Command", 
                             "next_url": "/docs/create-view-command/", 
                             "parent": "SQL Commands", 
                             "previous_title": "ALTER SYSTEM Command", 
                             "previous_url": "/docs/alter-system-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/030-create-table-as-command.md", 
-                            "title": "CREATE TABLE AS (CTAS) command", 
+                            "title": "CREATE TABLE AS (CTAS) Command", 
                             "url": "/docs/create-table-as-ctas-command/"
                         }, 
                         {
@@ -11138,10 +11138,10 @@
                             "next_title": "DESCRIBE Command", 
                             "next_url": "/docs/describe-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "CREATE TABLE AS (CTAS) command", 
+                            "previous_title": "CREATE TABLE AS (CTAS) Command", 
                             "previous_url": "/docs/create-table-as-ctas-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/050-create-view-command.md", 
-                            "title": "CREATE VIEW command", 
+                            "title": "CREATE VIEW Command", 
                             "url": "/docs/create-view-command/"
                         }, 
                         {
@@ -11156,10 +11156,10 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "EXPLAIN commands", 
+                            "next_title": "EXPLAIN Commands", 
                             "next_url": "/docs/explain-commands/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "CREATE VIEW command", 
+                            "previous_title": "CREATE VIEW Command", 
                             "previous_url": "/docs/create-view-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/060-describe-command.md", 
                             "title": "DESCRIBE Command", 
@@ -11183,7 +11183,7 @@
                             "previous_title": "DESCRIBE Command", 
                             "previous_url": "/docs/describe-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/070-explain-commands.md", 
-                            "title": "EXPLAIN commands", 
+                            "title": "EXPLAIN Commands", 
                             "url": "/docs/explain-commands/"
                         }, 
                         {
@@ -11198,10 +11198,10 @@
                                 }
                             ], 
                             "children": [], 
-                            "next_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "next_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "next_url": "/docs/show-databases-and-show-schemas-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "EXPLAIN commands", 
+                            "previous_title": "EXPLAIN Commands", 
                             "previous_url": "/docs/explain-commands/", 
                             "relative_path": "_docs/sql-reference/sql-commands/080-select.md", 
                             "title": "SELECT Statements", 
@@ -11225,7 +11225,7 @@
                             "previous_title": "SELECT Statements", 
                             "previous_url": "/docs/select-statements/", 
                             "relative_path": "_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md", 
-                            "title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "url": "/docs/show-databases-and-show-schemas-command/"
                         }, 
                         {
@@ -11243,7 +11243,7 @@
                             "next_title": "SHOW TABLES Command", 
                             "next_url": "/docs/show-tables-command/", 
                             "parent": "SQL Commands", 
-                            "previous_title": "SHOW DATABASES AND SHOW SCHEMAS Command", 
+                            "previous_title": "SHOW DATABASES and SHOW SCHEMAS Command", 
                             "previous_url": "/docs/show-databases-and-show-schemas-command/", 
                             "relative_path": "_docs/sql-reference/sql-commands/100-show-files.md", 
                             "title": "SHOW FILES Command", 

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/030-configuring-user-imperso.textClipping
----------------------------------------------------------------------
diff --git a/_docs/030-configuring-user-imperso.textClipping b/_docs/030-configuring-user-imperso.textClipping
deleted file mode 100644
index e69de29..0000000

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/configure-drill/010-configure-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/010-configure-drill-introduction.md b/_docs/configure-drill/010-configure-drill-introduction.md
index 6efe513..62f17f3 100644
--- a/_docs/configure-drill/010-configure-drill-introduction.md
+++ b/_docs/configure-drill/010-configure-drill-introduction.md
@@ -2,6 +2,9 @@
 title: "Configure Drill Introduction"
 parent: "Configure Drill"
 ---
-When using Drill, you need to make sufficient memory available Drill and other workloads running on the cluster. You might want to modify options for performance or functionality. For example, the default storage format for CTAS
-statements is Parquet. Using a configuration option, you can modify the default setting so that output data
-is stored in CSV or JSON format. The section covers the many options you can configure and how to configure memory resources for Drill running along side other workloads. This section also includes ports used by Drill.
+When using Drill, you need to make sufficient memory available to Drill, whether you run Drill alone or alongside other workloads on the cluster. The next section, ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory), describes how to configure memory for a Drill cluster. Configuring other resources for [multitenant clusters]({{site.baseurl}}/docs/configuring-multitenant-resources) or for [sharing a Drillbit]({{site.baseurl}}/docs/configuring-a-shared-drillbit) on a cluster is covered later.
+
+You can also modify options for performance or functionality. For example, changing the default storage format is a typical functional change. The default storage format for CTAS
+statements is Parquet. Using a configuration option, you can modify Drill to store the output data in CSV or JSON format. 
+
+The section, ["Configuration Options Introduction"]({{site.baseurl}}/docs/configuration-options-introduction) summarizes the many options you can configure. 

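To illustrate the storage-format change mentioned in the introduction above, a session-level sketch (it assumes a writable dfs.tmp workspace and the cp.`employee.json` sample that ships with Drill):

    -- switch CTAS output from the default Parquet to JSON for this session
    ALTER SESSION SET `store.format` = 'json';

    -- subsequent CTAS statements now write JSON files
    CREATE TABLE dfs.tmp.`employee_sample` AS
    SELECT * FROM cp.`employee.json` LIMIT 10;
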
http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/configure-drill/020-configuring-drill-memory.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/020-configuring-drill-memory.md b/_docs/configure-drill/020-configuring-drill-memory.md
index 150ce1f..81487d1 100644
--- a/_docs/configure-drill/020-configuring-drill-memory.md
+++ b/_docs/configure-drill/020-configuring-drill-memory.md
@@ -33,5 +33,8 @@ The drill-env.sh file contains the following options:
 
     export DRILL_JAVA_OPTS="-Xms1G -Xmx$DRILL_MAX_HEAP -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=1G -ea"
 
-The DRILL_MAX_DIRECT_MEMORY is the Java direct memory. The DRILL_MAX_HEAP is the maximum theoretical heap limit for the JVM. Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM). Xms specifies the initial memory allocation pool.
+* DRILL_MAX_DIRECT_MEMORY is the Java direct memory. 
+* DRILL_MAX_HEAP is the maximum theoretical heap limit for the JVM. 
+* Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM). 
+* Xms specifies the initial memory allocation pool.
 
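A sketch of how the drill-env.sh variables above fit together, with purely illustrative values (8 GB of direct memory and 4 GB of heap are assumptions, not recommendations):

    # drill-env.sh excerpt -- example values only
    export DRILL_MAX_DIRECT_MEMORY="8G"
    export DRILL_MAX_HEAP="4G"
    # DRILL_JAVA_OPTS then passes these to the JVM via
    # -Xmx$DRILL_MAX_HEAP and -XX:MaxDirectMemorySize=$DRILL_MAX_DIRECT_MEMORY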

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/configure-drill/030-configuring-a-multitenant-cluster-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/030-configuring-a-multitenant-cluster-introduction.md b/_docs/configure-drill/030-configuring-a-multitenant-cluster-introduction.md
index 978d374..80edfc8 100644
--- a/_docs/configure-drill/030-configuring-a-multitenant-cluster-introduction.md
+++ b/_docs/configure-drill/030-configuring-a-multitenant-cluster-introduction.md
@@ -3,9 +3,7 @@ title: "Configuring a Multitenant Cluster Introduction"
 parent: "Configuring a Multitenant Cluster"
 ---
 
-Drill supports multiple users sharing a Drillbit. You can also run separate Drillbits running on different nodes in the cluster.
-
-Drill typically runs along side other workloads, including the following:  
+Drill supports multiple users sharing a Drillbit and running separate Drillbits on different nodes in the cluster. Drill typically runs alongside other workloads, including the following:  
 
 * Mapreduce  
 * Yarn  
@@ -13,10 +11,10 @@ Drill typically runs along side other workloads, including the following:
 * Hive and Pig  
 * Spark  
 
-You need to plan and configure these resources for use with Drill and other workloads: 
+You need to plan and configure the following resources for use with Drill and other workloads: 
 
 * [Memory]({{site.baseurl}}/docs/configuring-multitenant-resources)  
 * [CPU]({{site.baseurl}}/docs/configuring-multitenant-resources#how-to-manage-drill-cpu-resources)  
 * Disk  
 
-Configure, memory, queues, and parallelization when users [share a Drillbit]({{site.baseurl}}/docs/configuring-resources-for-a-shared-drillbit).
\ No newline at end of file
+When users share a Drillbit, [configure queues]({{site.baseurl}}/docs/configuring-resources-for-a-shared-drillbit#configuring-query-queuing) and [parallelization]({{site.baseurl}}/docs/configuring-resources-for-a-shared-drillbit#configuring-parallelization) in addition to memory.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/configure-drill/050-configuring-multitenant-resources.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/050-configuring-multitenant-resources.md b/_docs/configure-drill/050-configuring-multitenant-resources.md
index 53341fb..f8ca673 100644
--- a/_docs/configure-drill/050-configuring-multitenant-resources.md
+++ b/_docs/configure-drill/050-configuring-multitenant-resources.md
@@ -2,27 +2,17 @@
 title: "Configuring Multitenant Resources"
 parent: "Configuring a Multitenant Cluster"
 ---
-Drill operations are memory and CPU-intensive. Currently, Drill resources are managed outside MapR Warden service in terms of configuring the resources as well as enforcing the resource usage within the limit. Configure memory for Drill in a multitenant by modifying drill-env.sh. <Add detail on property names etc>
+Drill operations are memory- and CPU-intensive. Currently, Drill resources are managed outside of any cluster management service, such as the MapR warden service. In a multitenant or any other type of cluster, YARN-enabled or not, you configure memory and memory usage limits for Drill by modifying drill-env.sh as described in ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory).
 
-3. To ensure Warden account for resources required for Drill, make sure the following properties are set appropriately in warden.drill-bits.conf. For reference on the meaning of the properties refer to the following MapR doc <add link>
-
-service.heapsize.min
-service.heapsize.max
-service.heapsize.percent
-
-The percent should always be considered as the one to change, it is more intuitive to understand and grows/shrinks according to different node's configuration. 
-
-It will be good if someone in Drill QA could try it out and see if it fits Drill's needs. 
-
- 
-
-Note that these properties should be set in addition to configuring the resources the in drill-env.conf even if you didnt change the defaults in drill-env.sh. Setting up memory limit in drill-env.sh tells Drill how much resources to use during query execution and setting up these warden-drill-bits.conf tells warden not to commit these resources to some other process. In near future, we expect to provide a more deeper integration on these settings
-\<give an example>
-
-4. This configuration is same whether you use Drill is enabled cluster or non-YARN cluster.
+Configure a multitenant cluster to account for the resources that Drill requires. On a MapR cluster, for example, ensure that warden accounts for those resources. Configuring `drill-env.sh` allocates resources for Drill to use during query execution, while configuring the following properties in `warden.drill-bits.conf` prevents warden from committing those resources to other processes.
 
+    service.heapsize.min=<some value in MB>
+    service.heapsize.max=<some value in MB>
+    service.heapsize.percent=<a whole number>
 
+{% include startimportant.html %}Set the `service.heapsize` properties in `warden.drill-bits.conf` regardless of whether you changed the defaults in `drill-env.sh`.{% include endimportant.html %}
 
+The section, ["Configuring Drill in a YARN-enabled MapR Cluster"]({{site.baseurl}}/docs/configuring-multitenant-resources#configuring-drill-in-a-yarn-enabled-mapr-cluster) shows an example of setting the `service.heapsize` properties. The `service.heapsize.percent` is the percentage of memory for the service bounded by minimum and maximum values. Typically, users change `service.heapsize.percent`. Using a percentage has the advantage of increasing or decreasing resources according to different node's configuration. For more information about the `service.heapsize` properties, see the section, ["warden.<servicename>.conf"](http://doc.mapr.com/display/MapR/warden.%3Cservicename%3E.conf).
 
 You need to statically partition the cluster to designate which partition handles which workload. To configure resources for Drill in a MapR cluster, modify one or more of the following files in `/opt/mapr/conf/conf.d` that the installation process creates. 
 
@@ -32,16 +22,6 @@ You need to statically partition the cluster to designate which partition handle
 
 Configure Drill memory by modifying `warden.drill-bits.conf` in YARN and non-YARN clusters. Configure other resources by modifying `warden.nodemanager.conf `and `warden.resourcemanager.conf `in a YARN-enabled cluster.
 
-## Configuring Drill Memory in a Mixed Cluster
-
-Add the following lines to the `warden.drill-bits.conf` file to configure memory resources for Drill:
-
-    service.heapsize.min=<some value in MB>
-    service.heapsize.max=<some value in MB>
-    service.heapsize.percent=<a whole number>
-
-The service.heapsize.percent is the percentage of memory for the service bounded by minimum and maximum values.
-
 ## Configuring Drill in a YARN-enabled MapR Cluster
 
 To add Drill to a YARN-enabled cluster, change memory resources to suit your application. For example, you have 120G of available memory that you allocate to following workloads in a Yarn-enabled cluster:
@@ -69,11 +49,11 @@ ResourceManager and NodeManager memory in `warden.resourcemanager.conf` and
     service.heapsize.max=325
     service.heapsize.percent=2
 
-Change these settings for NodeManager and ResourceManager to reconfigure the total memory required for YARN services to run. If you want to place an upper limit on memory set YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable in /opt/mapr/hadoop/hadoop-2.5.1/etc/hadoop/yarn-env.sh. The -Xmx option is not set, allowing memory on to grow as needed.
+Change these settings for NodeManager and ResourceManager to reconfigure the total memory required for YARN services to run. If you want to place an upper limit on memory, set the YARN_NODEMANAGER_HEAPSIZE or YARN_RESOURCEMANAGER_HEAPSIZE environment variable in `/opt/mapr/hadoop/hadoop-2.5.1/etc/hadoop/yarn-env.sh`. The `-Xmx` option is not set, allowing memory to grow as needed.
 
 ### MapReduce v1 Resources
 
-The following default settings in /opt/mapr/conf/warden.conf control MapReduce v1 memory:
+The following default settings in `/opt/mapr/conf/warden.conf` control MapReduce v1 memory:
 
     mr1.memory.percent=50
     mr1.cpu.percent=50
@@ -94,9 +74,9 @@ Configure memory for other services in the same manner, as described in [MapR do
 
 For more information about managing memory in a MapR cluster, see the following sections in the MapR documentation:
 
-* [Memory Allocation for Nodes](http://doc.mapr.com/display/MapR40x/Memory+Allocation+for+Nodes)  
-* [Cluster Resource Allocation](http://doc.mapr.com/display/MapR40x/Cluster+Resource+Allocation)  
-* [Customizing Memory Settings for MapReduce v1](http://doc.mapr.com/display/MapR40x/Customize+Memory+Settings+for+MapReduce+v1)  
+* [Memory Allocation for Nodes](http://doc.mapr.com/display/MapR/Memory+Allocation+for+Nodes)  
+* [Cluster Resource Allocation](http://doc.mapr.com/display/MapR/Cluster+Resource+Allocation)  
+* [Customizing Memory Settings for MapReduce v1](http://doc.mapr.com/display/MapR/Customize+Memory+Settings+for+MapReduce+v1)  
 
 ## How to Manage Drill CPU Resources
 Currently, you do not manage CPU resources within Drill. [Use Linux `cgroups`](http://en.wikipedia.org/wiki/Cgroups) to manage the CPU resources.
\ No newline at end of file
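+
+As a rough, illustrative sketch (assuming cgroups v1 and the libcgroup `cgcreate`/`cgset`/`cgclassify` tools are installed), the following commands cap a Drillbit process at roughly half of one CPU core:
+
+    # create a cgroup for Drill and cap CPU at 50% of one core (50 ms of every 100 ms period)
+    sudo cgcreate -g cpu:/drillbit
+    sudo cgset -r cpu.cfs_quota_us=50000 drillbit
+    sudo cgset -r cpu.cfs_period_us=100000 drillbit
+    # move a running Drillbit process (replace <drillbit_pid>) into the cgroup
+    sudo cgclassify -g cpu:/drillbit <drillbit_pid>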

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/configure-drill/060-configuring-a-shared-drillbit.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/060-configuring-a-shared-drillbit.md b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
index 3f83736..69ae187 100644
--- a/_docs/configure-drill/060-configuring-a-shared-drillbit.md
+++ b/_docs/configure-drill/060-configuring-a-shared-drillbit.md
@@ -2,9 +2,9 @@
 title: "Configuring Resources for a Shared Drillbit"
 parent: "Configuring a Multitenant Cluster"
 ---
-To manage a cluster in which multiple users share a Drillbit, you configure Drill queuing and parallelization in addition to memory, as described in the previous section.
+To manage a cluster in which multiple users share a Drillbit, you configure Drill queuing and parallelization in addition to memory, as described in the previous section, ["Configuring Drill Memory"]({{site.baseurl}}/docs/configuring-drill-memory/).
 
-##Configuring Drill Query Queuing
+## Configuring Query Queuing
 
 Set [options in sys.options]({{site.baseurl}}/docs/configuration-options-introduction/) to enable and manage query queuing, which is turned off by default. There are two types of queues: large and small. You configure a maximum number of queries that each queue allows by configuring the following options in the `sys.options` table:
 
@@ -26,7 +26,7 @@ The exec.queue.threshold default is 30 million, which is the estimated rows to b
 
 The Drill queuing configuration in this example tends to give many users running small queries a rapid response. Users running a large query might experience some delay until an earlier-received large query returns, freeing space in the large queue to process queries that are waiting.
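+
+For reference, the following statements show one way to enable queuing and size the queues (option names as they appear in the `sys.options` table; values are illustrative only):
+
+    ALTER SYSTEM SET `exec.queue.enable` = true;
+    ALTER SYSTEM SET `exec.queue.large` = 5;
+    ALTER SYSTEM SET `exec.queue.small` = 20;
+    ALTER SYSTEM SET `exec.queue.threshold` = 30000000;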
 
-## Controlling Parallelization
+## Configuring Parallelization
 
 By default, Drill parallelizes operations when number of records manipulated within a fragment reaches 100,000. When parallelization of operations is high, the cluster operates as fast as possible, which is fine for a single user. In a contentious multi-tenant situation, however, you need to reduce parallelization to levels based on user needs.
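+
+As an illustration (assuming the `planner.width.max_per_node` and `planner.slice_target` option names in the `sys.options` table), you can reduce parallelization with statements such as the following; the values shown are examples only:
+
+    ALTER SYSTEM SET `planner.width.max_per_node` = 2;
+    ALTER SYSTEM SET `planner.slice_target` = 200000;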
 

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md b/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md
new file mode 100755
index 0000000..65c5d64
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md	
@@ -0,0 +1,50 @@
+---
+title: "Using Tibco Spotfire with Drill"
+parent: "ODBC/JDBC Interfaces"
+---
+Tibco Spotfire Desktop is a powerful analytic tool that supports SQL statements when connecting to data sources. Spotfire Desktop can utilize the powerful query capabilities of Apache Drill to query complex data structures. Use the MapR Drill ODBC Driver to configure Tibco Spotfire Desktop with Apache Drill.
+
+To use Spotfire Desktop with Apache Drill, complete the following steps:
+
+1.  Install the Drill ODBC Driver from MapR.
+2.	Configure the Spotfire Desktop data connection for Drill.
+
+----------
+
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
+
+Complete the following steps to install and configure the driver:
+
+1.    Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Spotfire Desktop 6.5.1 utilizes the 64-bit ODBC driver.
+2.    Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3.    Complete the steps on the following page to configure the driver:<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+
+----------
+
+
+### Step 2: Configure the Spotfire Desktop Data Connection for Drill 
+Complete the following steps to configure a Drill data connection: 
+
+1. Select the **Add Data Connection** option or click the Add Data Connection button in the menu bar, as shown in the image below:![](http://i.imgur.com/p3LNNBs.png)
+2. When the dialog window appears, click the **Add** button, and select **Other/Database** from the dropdown list.![](http://i.imgur.com/u1g9kaT.png)
+3. In the Open Database window that appears, select **Odbc Data Provider** and then click **Configure**. ![](http://i.imgur.com/8Gu0GAZ.png)
+4. In the Configure Data Source Connection window that appears, select the Drill DSN that you configured in the ODBC administrator, and enter the relevant credentials for Drill.<br> ![](http://i.imgur.com/Yd6BKls.png) 
+5. Click **OK** to continue. The Spotfire Desktop queries the Drill metadata for available schemas, tables, and views. You can navigate the schemas in the left-hand column. After you select a specific view or table, the relevant SQL displays in the right-hand column. 
+![](http://i.imgur.com/wNBDs5q.png)
+6. Optionally, you can modify the SQL to work best with Drill. Change the schema.table.* notation in the SELECT statement to * or to the relevant column names. 
+Note that Drill has certain reserved keywords that you must enclose in backticks [ ` ]. See [Drill Reserved Keywords](http://drill.apache.org/docs/reserved-keywords/).
+7. Once the SQL is complete, provide a name for the Data Source and click **OK**. Spotfire Desktop queries Drill and retrieves the data for analysis. You can use the functionality of Spotfire Desktop to work with the data.
+![](http://i.imgur.com/j0MWorh.png)
+
+**NOTE:** You can use the SQL statement column to query data and complex structures that do not display in the left-hand schema column. A good example is JSON files in the file system.
+
+**SQL Example:**
+
+    SELECT t.trans_id, t.`date`, t.user_info.cust_id as cust_id, t.user_info.device as device 
+    FROM dfs.clicks.`/clicks/clicks.campaign.json` t
+
+----------

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md b/_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md
deleted file mode 100755
index a559c71..0000000
--- a/_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Using Apache Drill with Tibco Spotfire Desktop"
-parent: "ODBC/JDBC Interfaces"
----
-Tibco Spotfire Desktop is a powerful analytic tool that enables SQL statements when connecting to data sources. Spotfire Desktop can utilize the powerful query capabilities of Apache Drill to query complex data structures. Use the MapR Drill ODBC Driver to configure Tibco Spotfire Desktop with Apache Drill.
-
-To use Spotfire Desktop with Apache Drill, complete the following steps:
-
-1.  Install the Drill ODBC Driver from MapR.
-2.	Configure the Spotfire Desktop data connection for Drill.
-
-----------
-
-
-### Step 1: Install and Configure the MapR Drill ODBC Driver 
-
-Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. 
-
-Complete the following steps to install and configure the driver:
-
-1.    Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
-**Note:** Spotfire Desktop 6.5.1 utilizes the 64-bit ODBC driver.
-2.    Complete steps 2-8 under on the following page to install the driver:<br> 
-[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
-3.    Complete the steps on the following page to configure the driver:<br>
-[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
-
-----------
-
-
-### Step 2: Configure the Spotfire Desktop Data Connection for Drill 
-Complete the following steps to configure a Drill data connection: 
-
-1. Select the **Add Data Connection** option or click the Add Data Connection button in the menu bar, as shown in the image below:![](http://i.imgur.com/p3LNNBs.png)
-2. When the dialog window appears, click the **Add** button, and select **Other/Database** from the dropdown list.![](http://i.imgur.com/u1g9kaT.png)
-3. In the Open Database window that appears, select **Odbc Data Provider** and then click **Configure**. ![](http://i.imgur.com/8Gu0GAZ.png)
-4. In the Configure Data Source Connection window that appears, select the Drill DSN that you configured in the ODBC administrator, and enter the relevant credentials for Drill.<br> ![](http://i.imgur.com/Yd6BKls.png) 
-5. Click **OK** to continue. The Spotfire Desktop queries the Drill metadata for available schemas, tables, and views. You can navigate the schemas in the left-hand column. After you select a specific view or table, the relevant SQL displays in the right-hand column. 
-![](http://i.imgur.com/wNBDs5q.png)
-6. Optionally, you can modify the SQL to work best with Drill. Simply change the schema.table.* notation in the SELECT statement to simply * or the relevant column names that are needed. 
-Note that Drill has certain reserved keywords that you must put in back ticks [ ` ] when needed. See [Drill Reserved Keywords](http://drill.apache.org/docs/reserved-keywords/).
-7. Once the SQL is complete, provide a name for the Data Source and click **OK**. Spotfire Desktop queries Drill and retrieves the data for analysis. You can use the functionality of Spotfire Desktop to work with the data.
-![](http://i.imgur.com/j0MWorh.png)
-
-**NOTE:** You can use the SQL statement column to query data and complex structures that do not display in the left-hand schema column. A good example is JSON files in the file system.
-
-**SQL Example:**<br>
-SELECT t.trans_id, t.`date`, t.user_info.cust_id as cust_id, t.user_info.device as device FROM dfs.clicks.`/clicks/clicks.campaign.json` t
-
-----------

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/sql-reference/sql-commands/030-create-table-as-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/030-create-table-as-command.md b/_docs/sql-reference/sql-commands/030-create-table-as-command.md
index 5bab011..8e0a4e1 100644
--- a/_docs/sql-reference/sql-commands/030-create-table-as-command.md
+++ b/_docs/sql-reference/sql-commands/030-create-table-as-command.md
@@ -1,5 +1,5 @@
 ---
-title: "CREATE TABLE AS (CTAS) command"
+title: "CREATE TABLE AS (CTAS) Command"
 parent: "SQL Commands"
 ---
 You can create tables in Drill by using the CTAS command:

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/sql-reference/sql-commands/050-create-view-command.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/050-create-view-command.md b/_docs/sql-reference/sql-commands/050-create-view-command.md
index d21ea12..53cf3b8 100644
--- a/_docs/sql-reference/sql-commands/050-create-view-command.md
+++ b/_docs/sql-reference/sql-commands/050-create-view-command.md
@@ -1,5 +1,5 @@
 ---
-title: "CREATE VIEW command"
+title: "CREATE VIEW Command"
 parent: "SQL Commands"
 ---
 The CREATE VIEW command creates a virtual structure for the result set of a

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/sql-reference/sql-commands/070-explain-commands.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/070-explain-commands.md b/_docs/sql-reference/sql-commands/070-explain-commands.md
index 5aab0e9..acf0825 100644
--- a/_docs/sql-reference/sql-commands/070-explain-commands.md
+++ b/_docs/sql-reference/sql-commands/070-explain-commands.md
@@ -1,5 +1,5 @@
 ---
-title: "EXPLAIN commands"
+title: "EXPLAIN Commands"
 parent: "SQL Commands"
 ---
 EXPLAIN is a useful tool for examining the steps that a query goes through

http://git-wip-us.apache.org/repos/asf/drill/blob/80bbe062/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md b/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
index c000f32..0f227ae 100644
--- a/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
+++ b/_docs/sql-reference/sql-commands/090-show-databases-and-show-schemas.md
@@ -1,5 +1,5 @@
 ---
-title: "SHOW DATABASES AND SHOW SCHEMAS Command"
+title: "SHOW DATABASES and SHOW SCHEMAS Command"
 parent: "SQL Commands"
 ---
 The SHOW DATABASES and SHOW SCHEMAS commands generate a list of available Drill schemas that you can query.


[12/25] drill git commit: try again

Posted by ts...@apache.org.
try again


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/e1e95eb7
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/e1e95eb7
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/e1e95eb7

Branch: refs/heads/gh-pages
Commit: e1e95eb7ad532598847a4b86fcf16ccac1cdaa53
Parents: 4857f6c
Author: Bob Rumsby <br...@mapr.com>
Authored: Fri May 8 16:01:04 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Fri May 8 16:01:04 2015 -0700

----------------------------------------------------------------------
 _data/docs.json | 124 ++++++++++-----------------------------------------
 1 file changed, 23 insertions(+), 101 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/e1e95eb7/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 14aa8c6..deeb210 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -4561,8 +4561,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Tibco Spotfire", 
-                    "next_url": "/docs/using-tibco-spotfire/", 
+                    "next_title": "Using Tibco Spotfire with Drill", 
+                    "next_url": "/docs/using-tibco-spotfire-with-drill/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -4578,32 +4578,14 @@
                         }
                     ], 
                     "children": [], 
-<<<<<<< HEAD
-                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-                    "parent": "ODBC/JDBC Interfaces", 
-                    "previous_title": "Using MicroStrategy Analytics with Drill", 
-                    "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
-                    "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
-                }, 
-                {
-                    "breadcrumbs": [
-                        {
-                            "title": "ODBC/JDBC Interfaces", 
-                            "url": "/docs/odbc-jdbc-interfaces/"
-                        }
-                    ], 
-                    "children": [], 
                     "next_title": "Query Data", 
                     "next_url": "/docs/query-data/", 
                     "parent": "ODBC/JDBC Interfaces", 
-                    "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
-                    "title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
+                    "previous_title": "Using MicroStrategy Analytics with Drill", 
+                    "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+                    "title": "Using Tibco Spotfire with Drill", 
+                    "url": "/docs/using-tibco-spotfire-with-drill/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -5185,13 +5167,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-<<<<<<< HEAD
-            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-=======
-            "previous_title": "Using Tibco Spotfire", 
-            "previous_url": "/docs/using-tibco-spotfire/", 
->>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
+            "previous_title": "Using Tibco Spotfire with Drill", 
+            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -8150,43 +8127,6 @@
             "title": "Useful Research", 
             "url": "/docs/useful-research/"
         }, 
-<<<<<<< HEAD
-        "Using Apache Drill with Tableau 9 Desktop": {
-            "breadcrumbs": [
-                {
-                    "title": "ODBC/JDBC Interfaces", 
-                    "url": "/docs/odbc-jdbc-interfaces/"
-                }
-            ], 
-            "children": [], 
-            "next_title": "Query Data", 
-            "next_url": "/docs/query-data/", 
-            "parent": "ODBC/JDBC Interfaces", 
-            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
-            "title": "Using Apache Drill with Tableau 9 Desktop", 
-            "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
-        }, 
-        "Using Apache Drill with Tibco Spotfire Desktop": {
-            "breadcrumbs": [
-                {
-                    "title": "ODBC/JDBC Interfaces", 
-                    "url": "/docs/odbc-jdbc-interfaces/"
-                }
-            ], 
-            "children": [], 
-            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-            "parent": "ODBC/JDBC Interfaces", 
-            "previous_title": "Using MicroStrategy Analytics with Drill", 
-            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
-            "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
-        }, 
-=======
->>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
         "Using Custom Functions in Queries": {
             "breadcrumbs": [
                 {
@@ -8250,8 +8190,8 @@
                 }
             ], 
             "children": [], 
-            "next_title": "Using Tibco Spotfire", 
-            "next_url": "/docs/using-tibco-spotfire/", 
+            "next_title": "Using Tibco Spotfire with Drill", 
+            "next_url": "/docs/using-tibco-spotfire-with-drill/", 
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using Drill Explorer on Windows", 
             "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -8589,7 +8529,7 @@
             "title": "Using SQL Functions, Clauses, and Joins", 
             "url": "/docs/using-sql-functions-clauses-and-joins/"
         }, 
-        "Using Tibco Spotfire": {
+        "Using Tibco Spotfire with Drill": {
             "breadcrumbs": [
                 {
                     "title": "ODBC/JDBC Interfaces", 
@@ -8602,9 +8542,9 @@
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using MicroStrategy Analytics with Drill", 
             "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
-            "title": "Using Tibco Spotfire", 
-            "url": "/docs/using-tibco-spotfire/"
+            "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+            "title": "Using Tibco Spotfire with Drill", 
+            "url": "/docs/using-tibco-spotfire-with-drill/"
         }, 
         "Using a Connection String": {
             "breadcrumbs": [
@@ -10233,13 +10173,8 @@
                         }
                     ], 
                     "children": [], 
-<<<<<<< HEAD
-                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-=======
-                    "next_title": "Using Tibco Spotfire", 
-                    "next_url": "/docs/using-tibco-spotfire/", 
->>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
+                    "next_title": "Using Tibco Spotfire with Drill", 
+                    "next_url": "/docs/using-tibco-spotfire-with-drill/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Drill Explorer on Windows", 
                     "previous_url": "/docs/using-drill-explorer-on-windows/", 
@@ -10258,19 +10193,11 @@
                     "next_title": "Query Data", 
                     "next_url": "/docs/query-data/", 
                     "parent": "ODBC/JDBC Interfaces", 
-<<<<<<< HEAD
-                    "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
-                    "title": "Using Apache Drill with Tableau 9 Desktop", 
-                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
-=======
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire.md", 
-                    "title": "Using Tibco Spotfire", 
-                    "url": "/docs/using-tibco-spotfire/"
->>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
+                    "relative_path": "_docs/odbc-jdbc-interfaces/060-tibco-spotfire with Drill.md", 
+                    "title": "Using Tibco Spotfire with Drill", 
+                    "url": "/docs/using-tibco-spotfire-with-drill/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -10658,13 +10585,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-<<<<<<< HEAD
-            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
-=======
-            "previous_title": "Using Tibco Spotfire", 
-            "previous_url": "/docs/using-tibco-spotfire/", 
->>>>>>> 1ad5dbde38cfa53bddb1f753051357bd4afdcb71
+            "previous_title": "Using Tibco Spotfire with Drill", 
+            "previous_url": "/docs/using-tibco-spotfire-with-drill/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -12234,4 +12156,4 @@
             "url": "/docs/project-bylaws/"
         }
     ]
-}
+}
\ No newline at end of file


[23/25] drill git commit: lesson spelling error

Posted by ts...@apache.org.
lesson spelling error


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/1cec6cc0
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/1cec6cc0
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/1cec6cc0

Branch: refs/heads/gh-pages
Commit: 1cec6cc0cdf671de91780e5683cc13c1384d1053
Parents: 7317bf0
Author: Kristine Hahn <kh...@maprtech.com>
Authored: Mon May 11 16:49:00 2015 -0700
Committer: Kristine Hahn <kh...@maprtech.com>
Committed: Mon May 11 16:49:00 2015 -0700

----------------------------------------------------------------------
 _docs/data-sources-and-file-formats/050-json-data-model.md     | 2 +-
 .../sql-reference/nested-data-functions/030-repeated-count.md  | 2 +-
 _docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md       | 6 +++---
 .../020-getting-to-know-the-drill-sandbox.md                   | 2 +-
 .../040-lesson-2-run-queries-with-ansi-sql.md                  | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/1cec6cc0/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index ee6da3e..f9ab8bf 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -49,7 +49,7 @@ When you set this option, Drill reads all data from the JSON files as VARCHAR. A
 * Cast JSON values to [SQL types]({{ site.baseurl }}/docs/data-types), such as BIGINT, DECIMAL, FLOAT, and INTEGER.
 * Cast JSON strings to [Drill Date/Time Data Type Formats]({{ site.baseurl }}/docs/supported-date-time-data-type-formats).
 
-Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl }}/docs/lession-2-run-queries-with-ansi-sql). “Query Complex Data” shows how to access nested arrays.
+Drill uses [map and array data types]({{ site.baseurl }}/docs/data-types) internally for reading complex and nested data structures from JSON. You can cast data in a map or array of data to return a value from the structure, as shown in [“Create a view on a MapR-DB table”] ({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql). “Query Complex Data” shows how to access nested arrays.
 
 ## Reading JSON
 To read JSON data using Drill, use a [file system storage plugin]({{ site.baseurl }}/docs/connect-to-a-data-source) that defines the JSON format. You can use the `dfs` storage plugin, which includes the definition. 

http://git-wip-us.apache.org/repos/asf/drill/blob/1cec6cc0/_docs/sql-reference/nested-data-functions/030-repeated-count.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/nested-data-functions/030-repeated-count.md b/_docs/sql-reference/nested-data-functions/030-repeated-count.md
index 0fb2d8d..79ce75e 100644
--- a/_docs/sql-reference/nested-data-functions/030-repeated-count.md
+++ b/_docs/sql-reference/nested-data-functions/030-repeated-count.md
@@ -41,4 +41,4 @@ file. The counts are restricted to rows that contain the string `pizza`.
 	7 rows selected (2.03 seconds)
 
 For another example of this function, see the following lesson in the Apache
-Drill Tutorial for Hadoop: [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lession-3-run-queries-on-complex-data-types/).
\ No newline at end of file
+Drill Tutorial for Hadoop: [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lesson-3-run-queries-on-complex-data-types/).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/1cec6cc0/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md b/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
index f53608f..ab12d86 100644
--- a/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
+++ b/_docs/tutorials/040-learn-drill-with-the-mapr-sandbox.md
@@ -10,9 +10,9 @@ the following pages in order:
 
   * [Installing the Apache Drill Sandbox]({{ site.baseurl }}/docs/installing-the-apache-drill-sandbox)
   * [Getting to Know the Drill Setup]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox)
-  * [Lesson 1: Learn About the Data Set]({{ site.baseurl }}/docs/lession-1-learn-about-the-data-set)
-  * [Lesson 2: Run Queries with ANSI SQL]({{ site.baseurl }}/docs/lession-2-run-queries-with-ansi-sql)
-  * [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lession-3-run-queries-on-complex-data-types)
+  * [Lesson 1: Learn About the Data Set]({{ site.baseurl }}/docs/lesson-1-learn-about-the-data-set)
+  * [Lesson 2: Run Queries with ANSI SQL]({{ site.baseurl }}/docs/lesson-2-run-queries-with-ansi-sql)
+  * [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lesson-3-run-queries-on-complex-data-types)
   * [Summary]({{ site.baseurl }}/docs/summary)
 
 ## About Apache Drill

http://git-wip-us.apache.org/repos/asf/drill/blob/1cec6cc0/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
index 6772e5a..ee5133f 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/020-getting-to-know-the-drill-sandbox.md
@@ -139,5 +139,5 @@ You have a number of data sources to explore.  For example, analyzing customer r
 # What's Next
 
 Start running queries by going to [Lesson 1: Learn About the Data
-Set]({{ site.baseurl }}/docs/lession-1-learn-about-the-data-set).
+Set]({{ site.baseurl }}/docs/lesson-1-learn-about-the-data-set).
 

http://git-wip-us.apache.org/repos/asf/drill/blob/1cec6cc0/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
----------------------------------------------------------------------
diff --git a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
index 353dfc9..8e53b7c 100644
--- a/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
+++ b/_docs/tutorials/learn-drill-with-the-mapr-sandbox/040-lesson-2-run-queries-with-ansi-sql.md
@@ -382,7 +382,7 @@ workspace, so the query specifies the full path to the file:
 
 ## What's Next
 
-Go to [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lession-3-run-queries-on-complex-data-types). 
+Go to [Lesson 3: Run Queries on Complex Data Types]({{ site.baseurl }}/docs/lesson-3-run-queries-on-complex-data-types). 
 
 
 


[10/25] drill git commit: Tableau 9 Desktop doc

Posted by ts...@apache.org.
Tableau 9 Desktop doc


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/3f63669b
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/3f63669b
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/3f63669b

Branch: refs/heads/gh-pages
Commit: 3f63669b7557580d46059b3c815531d0007d9bf8
Parents: d15c67c
Author: Bob Rumsby <br...@mapr.com>
Authored: Fri May 8 15:12:31 2015 -0700
Committer: Bob Rumsby <br...@mapr.com>
Committed: Fri May 8 15:12:31 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  49 ++++++----
 _docs/img/connect-list.png                      | Bin 0 -> 37212 bytes
 _docs/img/custom-sql-query.png                  | Bin 0 -> 41626 bytes
 _docs/img/edit-custom-sql.png                   | Bin 0 -> 49362 bytes
 _docs/img/install-tableau-tdc.png               | Bin 0 -> 54167 bytes
 _docs/img/new-data-source.png                   | Bin 0 -> 44304 bytes
 _docs/img/other-dbs-2.png                       | Bin 0 -> 64300 bytes
 _docs/img/other-dbs.png                         | Bin 0 -> 58118 bytes
 _docs/img/tableau-desktop-query.png             | Bin 0 -> 101358 bytes
 _docs/img/tableau-error.png                     | Bin 0 -> 57812 bytes
 _docs/img/tableau-join-key.png                  | Bin 0 -> 38948 bytes
 _docs/img/tableau-odbc-setup-2.png              | Bin 0 -> 78225 bytes
 _docs/img/tableau-odbc-setup.png                | Bin 0 -> 61459 bytes
 _docs/img/tableau-schemas.png                   | Bin 0 -> 66323 bytes
 _docs/img/tableau-select-schema.png             | Bin 0 -> 117436 bytes
 ...using-apache-drill-with-tableau-9-desktop.md |  96 +++++++++++++++++++
 16 files changed, 129 insertions(+), 16 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 95424c3..19f26f1 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -4544,8 +4544,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
@@ -4566,9 +4566,9 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
                     "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-spotfire.md", 
-                    "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
+                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+                    "title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -5133,8 +5133,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"
@@ -8110,7 +8110,7 @@
             "title": "Useful Research", 
             "url": "/docs/useful-research/"
         }, 
-        "Using Apache Drill with Tibco Spotfire Desktop": {
+        "Using Apache Drill with Tableau 9 Desktop": {
             "breadcrumbs": [
                 {
                     "title": "ODBC/JDBC Interfaces", 
@@ -8123,7 +8123,24 @@
             "parent": "ODBC/JDBC Interfaces", 
             "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
             "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-spotfire.md", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+            "title": "Using Apache Drill with Tableau 9 Desktop", 
+            "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
+        }, 
+        "Using Apache Drill with Tibco Spotfire Desktop": {
+            "breadcrumbs": [
+                {
+                    "title": "ODBC/JDBC Interfaces", 
+                    "url": "/docs/odbc-jdbc-interfaces/"
+                }
+            ], 
+            "children": [], 
+            "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
+            "parent": "ODBC/JDBC Interfaces", 
+            "previous_title": "Using MicroStrategy Analytics with Drill", 
+            "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
+            "relative_path": "_docs/odbc-jdbc-interfaces/060-using-apache-drill-with-tibco-spotfire-desktop.md", 
             "title": "Using Apache Drill with Tibco Spotfire Desktop", 
             "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
         }, 
@@ -9875,8 +9892,8 @@
                         }
                     ], 
                     "children": [], 
-                    "next_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "next_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+                    "next_title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "next_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using MicroStrategy Analytics with Drill", 
                     "previous_url": "/docs/using-microstrategy-analytics-with-drill/", 
@@ -9897,9 +9914,9 @@
                     "parent": "ODBC/JDBC Interfaces", 
                     "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
                     "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
-                    "relative_path": "_docs/odbc-jdbc-interfaces/060-using-spotfire.md", 
-                    "title": "Using Apache Drill with Tibco Spotfire Desktop", 
-                    "url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/"
+                    "relative_path": "_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md", 
+                    "title": "Using Apache Drill with Tableau 9 Desktop", 
+                    "url": "/docs/using-apache-drill-with-tableau-9-desktop/"
                 }
             ], 
             "next_title": "Interfaces Introduction", 
@@ -10270,8 +10287,8 @@
             "next_title": "Query Data Introduction", 
             "next_url": "/docs/query-data-introduction/", 
             "parent": "", 
-            "previous_title": "Using Apache Drill with Tibco Spotfire Desktop", 
-            "previous_url": "/docs/using-apache-drill-with-tibco-spotfire-desktop/", 
+            "previous_title": "Using Apache Drill with Tableau 9 Desktop", 
+            "previous_url": "/docs/using-apache-drill-with-tableau-9-desktop/", 
             "relative_path": "_docs/070-query-data.md", 
             "title": "Query Data", 
             "url": "/docs/query-data/"

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/connect-list.png
----------------------------------------------------------------------
diff --git a/_docs/img/connect-list.png b/_docs/img/connect-list.png
new file mode 100644
index 0000000..c812cb9
Binary files /dev/null and b/_docs/img/connect-list.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/custom-sql-query.png
----------------------------------------------------------------------
diff --git a/_docs/img/custom-sql-query.png b/_docs/img/custom-sql-query.png
new file mode 100644
index 0000000..4b53d58
Binary files /dev/null and b/_docs/img/custom-sql-query.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/edit-custom-sql.png
----------------------------------------------------------------------
diff --git a/_docs/img/edit-custom-sql.png b/_docs/img/edit-custom-sql.png
new file mode 100644
index 0000000..580f5d5
Binary files /dev/null and b/_docs/img/edit-custom-sql.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/install-tableau-tdc.png
----------------------------------------------------------------------
diff --git a/_docs/img/install-tableau-tdc.png b/_docs/img/install-tableau-tdc.png
new file mode 100644
index 0000000..21a1c53
Binary files /dev/null and b/_docs/img/install-tableau-tdc.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/new-data-source.png
----------------------------------------------------------------------
diff --git a/_docs/img/new-data-source.png b/_docs/img/new-data-source.png
new file mode 100644
index 0000000..fbd0556
Binary files /dev/null and b/_docs/img/new-data-source.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/other-dbs-2.png
----------------------------------------------------------------------
diff --git a/_docs/img/other-dbs-2.png b/_docs/img/other-dbs-2.png
new file mode 100644
index 0000000..b3f183f
Binary files /dev/null and b/_docs/img/other-dbs-2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/other-dbs.png
----------------------------------------------------------------------
diff --git a/_docs/img/other-dbs.png b/_docs/img/other-dbs.png
new file mode 100644
index 0000000..4e987cd
Binary files /dev/null and b/_docs/img/other-dbs.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-desktop-query.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-desktop-query.png b/_docs/img/tableau-desktop-query.png
new file mode 100644
index 0000000..842c2e3
Binary files /dev/null and b/_docs/img/tableau-desktop-query.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-error.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-error.png b/_docs/img/tableau-error.png
new file mode 100644
index 0000000..a443cf1
Binary files /dev/null and b/_docs/img/tableau-error.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-join-key.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-join-key.png b/_docs/img/tableau-join-key.png
new file mode 100644
index 0000000..acd66a2
Binary files /dev/null and b/_docs/img/tableau-join-key.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-odbc-setup-2.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-odbc-setup-2.png b/_docs/img/tableau-odbc-setup-2.png
new file mode 100644
index 0000000..46c8c53
Binary files /dev/null and b/_docs/img/tableau-odbc-setup-2.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-odbc-setup.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-odbc-setup.png b/_docs/img/tableau-odbc-setup.png
new file mode 100644
index 0000000..a09be18
Binary files /dev/null and b/_docs/img/tableau-odbc-setup.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-schemas.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-schemas.png b/_docs/img/tableau-schemas.png
new file mode 100644
index 0000000..4283922
Binary files /dev/null and b/_docs/img/tableau-schemas.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/img/tableau-select-schema.png
----------------------------------------------------------------------
diff --git a/_docs/img/tableau-select-schema.png b/_docs/img/tableau-select-schema.png
new file mode 100644
index 0000000..1f6b773
Binary files /dev/null and b/_docs/img/tableau-select-schema.png differ

http://git-wip-us.apache.org/repos/asf/drill/blob/3f63669b/_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md b/_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md
new file mode 100644
index 0000000..33f5221
--- /dev/null
+++ b/_docs/odbc-jdbc-interfaces/070-using-apache-drill-with-tableau-9-desktop.md
@@ -0,0 +1,96 @@
+---
+title: "Using Apache Drill with Tableau 9 Desktop"
+parent: "ODBC/JDBC Interfaces"
+---
+This document describes how to connect Tableau 9 Desktop to Apache Drill and explore multiple data formats instantly on Hadoop. Use the combined power of these tools to get direct access to semi-structured data, without having to rely on IT teams for schema creation.
+
+To use Apache Drill with Tableau 9 Desktop, complete the following steps: 
+
+1. Install the Drill ODBC driver from MapR.
+2. Install the Tableau Data-connection Customization (TDC) file.
+3. Connect Tableau to Drill Using ODBC.
+4. Query and analyze various data formats with Tableau and Drill.
+
+----------
+
+### Step 1: Install and Configure the MapR Drill ODBC Driver 
+
+Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill Version 0.9 or higher is recommended.
+
+Complete the following steps to install and configure the driver:
+
+1. Download the 64-bit MapR Drill ODBC Driver for Windows from the following location:<br> [http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/](http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/)     
+**Note:** Tableau 9.0 Desktop 64 bit can use either the 32-bit driver or the 64-bit driver.
+2. Complete steps 2-8 on the following page to install the driver:<br> 
+[http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/](http://drill.apache.org/docs/step-1-install-the-mapr-drill-odbc-driver-on-windows/)
+3. Complete the steps on the following page to configure the driver (an illustrative summary of the connection properties involved appears after this list):<br>
+[http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/](http://drill.apache.org/docs/step-2-configure-odbc-connections-to-drill-data-sources/)
+4. If Drill authentication is enabled, select **Basic Authentication** as the authentication type. Enter a valid user and password. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup.png)
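+
+For reference, the DSN captures a small set of connection properties. The sketch below is illustrative only: the key names follow the odbc.ini-style configuration documented for the driver on Linux and Mac OS X, the hostnames, quorum, and cluster ID are placeholders, and on Windows you enter the equivalent values in the ODBC setup dialog. Verify the exact property names against your driver version.
+
+    ConnectionType=Direct
+    HOST=drillbit1.example.com
+    PORT=31010
+
+Or, to connect through the ZooKeeper quorum instead of a single Drillbit:
+
+    ConnectionType=ZooKeeper
+    ZKQuorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+    ZKClusterID=drillbits1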
+
+Note: If you select **ZooKeeper Quorum** as the ODBC connection type, the client system must be able to resolve the hostnames of the ZooKeeper nodes. The simplest way is to add the hostnames and IP addresses for the ZooKeeper nodes to the `%WINDIR%\system32\drivers\etc\hosts` file. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-odbc-setup-2.png) Also make sure to test the ODBC connection to Drill before using it with Tableau.
+
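+For example, hosts entries for a three-node quorum might look like the following (the hostnames and IP addresses are placeholders; substitute your own ZooKeeper nodes):
+
+    # Added to %WINDIR%\system32\drivers\etc\hosts so the ODBC client can resolve the quorum
+    10.10.10.101   zk1.example.com
+    10.10.10.102   zk2.example.com
+    10.10.10.103   zk3.example.com
+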
+----------
+
+### Step 2: Install the Tableau Data-connection Customization (TDC) File
+
+The MapR Drill ODBC Driver includes a file named `MapRDrillODBC.TDC`. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.
+
+The MapR Drill ODBC Driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC Driver first and then installed Tableau, the TDC file is not installed automatically, and you need to install it manually. 
+
+**To install the MapRDrillODBC.TDC file manually:**
+
+  1. Click **Start > All Programs > MapR Drill ODBC Driver <version> (32|64-bit) > Install Tableau TDC File**. ![drill query flow]({{ site.baseurl }}/docs/img/install-tableau-tdc.png)
+  2. When the installation completes, press any key to continue.   
+For example, you can press the SPACEBAR key.
+
+If the installation of the TDC file fails, it is most likely because your My Tableau Repository is not in its default location. In this case, manually copy the My Tableau Repository folder to the default location, `C:\Users\<user>\Documents\My Tableau Repository`, and then repeat the procedure above to install the `MapRDrillODBC.TDC` file.
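+
+Alternatively, if you prefer to place the file yourself, Tableau reads `.tdc` files from the `Datasources` folder of the repository. A hypothetical copy command is sketched below; the source path is an assumption and depends on where the driver package put `MapRDrillODBC.TDC` on your system:
+
+    rem Hypothetical paths - substitute the actual location of MapRDrillODBC.TDC and your own user name
+    copy "C:\Program Files\MapR Drill ODBC Driver\MapRDrillODBC.TDC" "C:\Users\<user>\Documents\My Tableau Repository\Datasources\"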
+
+
+----------
+
+
+### Step 3: Connect Tableau to Drill via ODBC
+
+Complete the following steps to connect Tableau to Drill and configure an ODBC data connection:
+
+1.	In a Tableau Workbook click **Data > New Data Source**.![drill query flow]({{ site.baseurl }}/docs/img/new-data-source.png)
+2.	In the **Connect** list, select **Other Databases (ODBC)**. ![drill query flow]({{ site.baseurl }}/docs/img/connect-list.png)
+3.	On the Server Connection window, select the DSN configured in Step 1 from the drop-down list of ODBC data sources. Then click **Connect**. Note: You will be prompted to enter a username and password; these entries will be passed to the Server Connection window. 
+![drill query flow]({{ site.baseurl }}/docs/img/other-dbs.png) ![drill query flow]({{ site.baseurl }}/docs/img/other-dbs-2.png)
+Tableau is now connected to Drill, and you can select various tables and views. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-schemas.png)
+4.	Click the **Schema** drop-down list and then click the search icon to display all available Drill schemas. After you select a schema, click the search icon in the **Table** dialog box to display all available tables and views. 
+
+You can select tables and views to build a Tableau Visualization. You can also use custom SQL by clicking the **New Custom SQL** option. 
+
+Tableau can natively work with Hive tables and Drill views. For other Drill data sources, such as directly accessible file systems or HBase/MapR-DB tables, you can either use the Custom SQL option or create a view in Drill that presents the complex data to Tableau as a simple table (a hypothetical view is sketched below). For more information, see the following links:<br>
+[http://drill.apache.org/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool/](http://drill.apache.org/docs/step-3-connect-to-drill-data-sources-from-a-bi-tool/)<br>
+[http://drill.apache.org/docs/tableau-examples/](http://drill.apache.org/docs/tableau-examples/)
+
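+The following is a minimal sketch of the view approach, assuming a hypothetical HBase table named `customers` with an `address` column family (the storage plugin name, table, and column names are placeholders). `CONVERT_FROM` decodes the binary HBase values so that the view exposes plain, typed columns that Tableau can browse like an ordinary table:
+
+    -- Create the view in the writable dfs.tmp workspace (hypothetical table and columns)
+    CREATE OR REPLACE VIEW dfs.tmp.customers_vw AS
+    SELECT CONVERT_FROM(row_key, 'UTF8')         AS cust_id,
+           CONVERT_FROM(t.address.state, 'UTF8') AS state
+    FROM hbase.customers t;
+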
+Note: If Drill authentication and impersonation are enabled, only the views that the user has access to are displayed in the Table dialog box. Also, if custom SQL is used to access data sources that the user does not have permission to access, an error message is displayed. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-error.png)
+
+----------
+
+### Step 4: Query and Analyze the Data 
+
+Tableau Desktop can now use Drill to query various data sources and visualize the information.
+
+Assume that a retailer has weblog data stored as JSON files in a nested directory structure and product information stored in a Hive table. Using Drill and Tableau, users would like to understand the volume of product sold by state and product category.
+
+1.	Using the New Custom SQL function in Tableau, write a Drill query that reads the JSON files directly, with no ETL required. Casting data types is recommended when you are working directly with files (a hypothetical text version of such a query is sketched after this list). 
+For example: ![drill query flow]({{ site.baseurl }}/docs/img/edit-custom-sql.png)
+
+2.	Next select the Hive products table: ![drill query flow]({{ site.baseurl }}/docs/img/custom-sql-query.png)
+
+3.	Verify that Tableau is joining the two data sources (JSON files and Hive table) on the prod_id key:![drill query flow]({{ site.baseurl }}/docs/img/tableau-join-key.png)
+The data sources are now configured and ready to be used in the visualization.
+4.	Drag State from the Custom SQL (JSON files) to **Columns**, and drag Category from the Hive products table to **Rows**. 
+5.	Create a calculated field called `Total Number of Products` that computes `count(prod_id)` on the `prod_id` field from the weblog files (Custom SQL), and then drag it to **Rows** next to Category. The visualization now shows the total products by category and state. 
+6.	To filter out weblog data where products were not bought, drag the `purch_flag` field from the weblog files to **Filters**. Select only data where the `purch_flag` was true.
+7.	Finally, order the data from the state with the most products sold to the one with the least.
+8.	Add a grand total row by clicking **Analysis > Totals > Show Column Grand Totals**. ![drill query flow]({{ site.baseurl }}/docs/img/tableau-desktop-query.png)
+
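+As a rough sketch of the custom SQL from step 1, the following query casts nested JSON fields to typed columns. The workspace, file path, and nested field layout are assumptions for illustration only; see the screenshot in step 1 for the query used in this scenario.
+
+    -- Paste into Tableau's New Custom SQL dialog; the path and nested field names are hypothetical
+    SELECT CAST(t.user_info.state       AS VARCHAR(2)) AS state,
+           CAST(t.trans_info.prod_id    AS INTEGER)    AS prod_id,
+           CAST(t.trans_info.purch_flag AS VARCHAR(5)) AS purch_flag
+    FROM dfs.`/user/web/logs` t
+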
+----------
+
+In this quick tutorial, you saw how you can configure Tableau Desktop 9.0 to work with Apache Drill. 
+