Posted to commits@drill.apache.org by kr...@apache.org on 2015/12/10 03:45:31 UTC

[2/3] drill-site git commit: squash 4 commits

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-1-learn-about-the-data-set/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-1-learn-about-the-data-set/index.html b/docs/lesson-1-learn-about-the-data-set/index.html
index 2dc77f9..350b040 100644
--- a/docs/lesson-1-learn-about-the-data-set/index.html
+++ b/docs/lesson-1-learn-about-the-data-set/index.html
@@ -1079,7 +1079,7 @@ the Drill shell, type:</p>
 +-------+--------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="list-the-available-workspaces-and-databases:">List the available workspaces and databases:</h3>
+<h3 id="list-the-available-workspaces-and-databases">List the available workspaces and databases:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; show databases;
 +---------------------+
 |     SCHEMA_NAME     |
@@ -1109,7 +1109,7 @@ different database schemas (namespaces) in a relational database system.</p>
 This is a Hive external table pointing to the data stored in flat files on the
 MapR file system. The orders table contains 122,000 rows.</p>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1121,7 +1121,7 @@ MapR file system. The orders table contains 122,000 rows.</p>
 <p>You will run the USE command throughout this tutorial. The USE command sets
 the schema for the current session.</p>
 
-<h3 id="describe-the-table:">Describe the table:</h3>
+<h3 id="describe-the-table">Describe the table:</h3>
 
 <p>You can use the DESCRIBE command to show the columns and data types for a Hive
 table:</p>
@@ -1140,7 +1140,7 @@ table:</p>
 <p>The DESCRIBE command returns complete schema information for Hive tables based
 on the metadata available in the Hive metastore.</p>
 
-<h3 id="select-5-rows-from-the-orders-table:">Select 5 rows from the orders table:</h3>
+<h3 id="select-5-rows-from-the-orders-table">Select 5 rows from the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from orders limit 5;
 +------------+------------+------------+------------+------------+-------------+
 |  order_id  |   month    |  cust_id   |   state    |  prod_id   | order_total |
@@ -1198,7 +1198,7 @@ columns typical of a time-series database.</p>
 
 <p>The customers table contains 993 rows.</p>
 
-<h3 id="set-the-workspace-to-maprdb:">Set the workspace to maprdb:</h3>
+<h3 id="set-the-workspace-to-maprdb">Set the workspace to maprdb:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1207,7 +1207,7 @@ columns typical of a time-series database.</p>
 +-------+-------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="describe-the-tables:">Describe the tables:</h3>
+<h3 id="describe-the-tables">Describe the tables:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; describe customers;
 +--------------+------------------------+--------------+
 | COLUMN_NAME  |       DATA_TYPE        | IS_NULLABLE  |
@@ -1240,7 +1240,7 @@ structure, and “ANY” represents the fact that the column value can be of any
 data type. Observe the row_key, which is also simply bytes and has the type
 ANY.</p>
 
-<h3 id="select-5-rows-from-the-products-table:">Select 5 rows from the products table:</h3>
+<h3 id="select-5-rows-from-the-products-table">Select 5 rows from the products table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from products limit 5;
 +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
 |   row_key    |                                                    details                                                     |      pricing      |
@@ -1260,7 +1260,7 @@ and pricing) have the map data type and appear as JSON strings.</p>
 
 <p>In Lesson 2, you will use CAST functions to return typed data for each column.</p>
 
-<h3 id="select-5-rows-from-the-customers-table:">Select 5 rows from the customers table:</h3>
+<h3 id="select-5-rows-from-the-customers-table">Select 5 rows from the customers table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+0: jdbc:drill:&gt; select * from customers limit 5;
 +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
 |   row_key    |        address        |                     loyalty                     |                                       personal                                        |
@@ -1304,7 +1304,7 @@ setup beyond the definition of a workspace.</p>
 
 <h3 id="query-nested-clickstream-data">Query nested clickstream data</h3>
 
-<h4 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h4>
+<h4 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1325,7 +1325,7 @@ location specified in the workspace. For example:</p>
 relative to this path. The clicks directory referred to in the following query
 is directly below the nested directory.</p>
 
-<h4 id="select-2-rows-from-the-clicks.json-file:">Select 2 rows from the clicks.json file:</h4>
+<h4 id="select-2-rows-from-the-clicks-json-file">Select 2 rows from the clicks.json file:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |                trans_info                 |
@@ -1343,7 +1343,7 @@ to refer to a file in a local or distributed file system.</p>
 path. This is necessary whenever the file path contains Drill reserved words
 or characters.</p>
 
-<h4 id="select-2-rows-from-the-campaign.json-file:">Select 2 rows from the campaign.json file:</h4>
+<h4 id="select-2-rows-from-the-campaign-json-file">Select 2 rows from the campaign.json file:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.campaign.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |       ad_info       |               trans_info               |
@@ -1377,7 +1377,7 @@ for that month. The total number of records in all log files is 48000.</p>
 are many of these files, but you can use Drill to query them all as a single
 data source, or to query a subset of the files.</p>
 
-<h4 id="set-the-workspace-to-dfs.logs:">Set the workspace to dfs.logs:</h4>
+<h4 id="set-the-workspace-to-dfs-logs">Set the workspace to dfs.logs:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1386,7 +1386,7 @@ data source, or to query a subset of the files.</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h4 id="select-2-rows-from-the-logs-directory:">Select 2 rows from the logs directory:</h4>
+<h4 id="select-2-rows-from-the-logs-directory">Select 2 rows from the logs directory:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs limit 2;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1405,7 +1405,7 @@ directory path on the file system.</p>
 subdirectories below the logs directory. In Lesson 3, you will do more complex
 queries that leverage these dynamic variables.</p>
 
-<h4 id="find-the-total-number-of-rows-in-the-logs-directory-(all-files):">Find the total number of rows in the logs directory (all files):</h4>
+<h4 id="find-the-total-number-of-rows-in-the-logs-directory-all-files">Find the total number of rows in the logs directory (all files):</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select count(*) from logs;
 +---------+
 | EXPR$0  |
@@ -1417,7 +1417,7 @@ queries that leverage these dynamic variables.</p>
 <p>This query traverses all of the files in the logs directory and its
 subdirectories to return the total number of rows in those files.</p>
 
-<h1 id="what&#39;s-next">What&#39;s Next</h1>
+<h1 id="what-39-s-next">What&#39;s Next</h1>
 
 <p>Go to <a href="/docs/lesson-2-run-queries-with-ansi-sql">Lesson 2: Run Queries with ANSI
 SQL</a>.</p>
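
The dir* variables described above compose naturally with aggregation. A
minimal sketch, assuming the year/month layout of the tutorial's logs
directory (the aliases are illustrative):

    select dir0 as log_year, dir1 as log_month, count(*) as row_count
    from logs                     -- dir0/dir1 map to the year/month subdirectories
    group by dir0, dir1
    order by dir0, dir1;

Each group corresponds to one month subdirectory, and the counts should sum
to the 48000 total rows noted above.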

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-2-run-queries-with-ansi-sql/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-2-run-queries-with-ansi-sql/index.html b/docs/lesson-2-run-queries-with-ansi-sql/index.html
index d2acab0..506cb29 100644
--- a/docs/lesson-2-run-queries-with-ansi-sql/index.html
+++ b/docs/lesson-2-run-queries-with-ansi-sql/index.html
@@ -1057,7 +1057,7 @@ statement.</p>
 
 <h2 id="aggregation">Aggregation</h2>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1066,7 +1066,7 @@ statement.</p>
 +-------+-------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="return-sales-totals-by-month:">Return sales totals by month:</h3>
+<h3 id="return-sales-totals-by-month">Return sales totals by month:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, sum(order_total)
 from orders group by `month` order by 2 desc;
 +------------+---------+
@@ -1092,7 +1092,7 @@ database queries.</p>
 <p>Note that back ticks are required for the “month” column only because “month”
 is a reserved word in SQL.</p>
 
-<h3 id="return-the-top-20-sales-totals-by-month-and-state:">Return the top 20 sales totals by month and state:</h3>
+<h3 id="return-the-top-20-sales-totals-by-month-and-state">Return the top 20 sales totals by month and state:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, state, sum(order_total) as sales from orders group by `month`, state
 order by 3 desc limit 20;
 +-----------+--------+---------+
@@ -1128,7 +1128,7 @@ aliases and table aliases.</p>
 
 <p>This query uses the HAVING clause to constrain an aggregate result.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks">Set the workspace to dfs.clicks</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1137,7 +1137,7 @@ aliases and table aliases.</p>
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs:">Return total number of clicks for devices that indicate high click-throughs:</h3>
+<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs">Return total number of clicks for devices that indicate high click-throughs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.device, count(*) from `clicks/clicks.json` t 
 group by t.user_info.device
 having count(*) &gt; 1000;
@@ -1180,7 +1180,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 
 <h2 id="subqueries">Subqueries</h2>
 
-<h3 id="set-the-workspace-to-hive:">Set the workspace to hive:</h3>
+<h3 id="set-the-workspace-to-hive">Set the workspace to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1189,7 +1189,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 +-------+-------------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="compare-order-totals-across-states:">Compare order totals across states:</h3>
+<h3 id="compare-order-totals-across-states">Compare order totals across states:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select ny_sales.cust_id, ny_sales.total_orders, ca_sales.total_orders
 from
 (select o.cust_id, sum(o.order_total) as total_orders from hive.orders o where state = &#39;ny&#39; group by o.cust_id) ny_sales
@@ -1227,7 +1227,7 @@ limit 20;
 
 <h2 id="cast-function">CAST Function</h2>
 
-<h3 id="use-the-maprdb-workspace:">Use the maprdb workspace:</h3>
+<h3 id="use-the-maprdb-workspace">Use the maprdb workspace:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1260,7 +1260,7 @@ from customers t limit 5;
 <li>The table alias t is required; otherwise the column family names would be parsed as table names and the query would return an error.</li>
 </ul>
 
-<h3 id="remove-the-quotes-from-the-strings:">Remove the quotes from the strings:</h3>
+<h3 id="remove-the-quotes-from-the-strings">Remove the quotes from the strings:</h3>
 
 <p>You can use the regexp_replace function to remove the quotes around the
 strings in the query results. For example, to return a state name va instead
@@ -1283,7 +1283,7 @@ from customers t limit 1;
 +-------+----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="use-a-mutable-workspace:">Use a mutable workspace:</h3>
+<h3 id="use-a-mutable-workspace">Use a mutable workspace:</h3>
 
 <p>A mutable (or writable) workspace is a workspace that is enabled for “write”
 operations. This attribute is part of the storage plugin configuration. You
@@ -1322,7 +1322,7 @@ statement.</p>
 defined in data sources such as Hive, HBase, and the file system. Drill also
 supports the creation of metadata in the file system.</p>
 
-<h3 id="query-data-from-the-view:">Query data from the view:</h3>
+<h3 id="query-data-from-the-view">Query data from the view:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from custview limit 1;
 +----------+-------------------+-----------+----------+--------+----------+-------------+
 | cust_id  |       name        |  gender   |   age    | state  | agg_rev  | membership  |
@@ -1337,7 +1337,7 @@ supports the creation of metadata in the file system.</p>
 
 <p>Continue using <code>dfs.views</code> for this query.</p>
 
-<h3 id="join-the-customers-view-and-the-orders-table:">Join the customers view and the orders table:</h3>
+<h3 id="join-the-customers-view-and-the-orders-table">Join the customers view and the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select membership, sum(order_total) as sales from hive.orders, custview
 where orders.cust_id=custview.cust_id
 group by membership order by 2;
@@ -1363,7 +1363,7 @@ rows are wide, set the maximum width of the display to 10000:</p>
 
 <p>Do not use a semicolon for this SET command.</p>
 
-<h3 id="join-the-customers,-orders,-and-clickstream-data:">Join the customers, orders, and clickstream data:</h3>
+<h3 id="join-the-customers-orders-and-clickstream-data">Join the customers, orders, and clickstream data:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select custview.membership, sum(orders.order_total) as sales from hive.orders, custview,
 dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json` c 
 where orders.cust_id=custview.cust_id and orders.cust_id=c.user_info.cust_id 
@@ -1393,7 +1393,7 @@ hive.orders table is also visible to the query.</p>
 workspace, so the query specifies the full path to the file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json`
 </code></pre></div>
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Go to <a href="/docs/lesson-3-run-queries-on-complex-data-types">Lesson 3: Run Queries on Complex Data Types</a>. </p>
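
The CAST and regexp_replace techniques from this lesson combine in a single
query. A sketch against the tutorial's maprdb customers table (the projection
is illustrative, not verbatim from the lesson):

    select cast(t.row_key as int) as cust_id,
           regexp_replace(cast(t.address.state as varchar(10)), '"', '') as state
    from customers t              -- alias t keeps column families from parsing as table names
    limit 1;

As noted above, this returns va rather than "va" for the state value.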
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-3-run-queries-on-complex-data-types/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-3-run-queries-on-complex-data-types/index.html b/docs/lesson-3-run-queries-on-complex-data-types/index.html
index 8704785..e3e3fdd 100644
--- a/docs/lesson-3-run-queries-on-complex-data-types/index.html
+++ b/docs/lesson-3-run-queries-on-complex-data-types/index.html
@@ -1068,7 +1068,7 @@ exist. Here is a visual example of how this works:</p>
 
 <p><img src="/docs/img/example_query.png" alt="drill query flow"></p>
 
-<h3 id="set-workspace-to-dfs.logs:">Set workspace to dfs.logs:</h3>
+<h3 id="set-workspace-to-dfs-logs">Set workspace to dfs.logs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1077,7 +1077,7 @@ exist. Here is a visual example of how this works:</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="query-logs-data-for-a-specific-year:">Query logs data for a specific year:</h3>
+<h3 id="query-logs-data-for-a-specific-year">Query logs data for a specific year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs where dir0=&#39;2013&#39; limit 10;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1099,7 +1099,7 @@ exist. Here is a visual example of how this works:</p>
 dir0 refers to the first level down from logs, dir1 to the next level, and so
 on. So this query returned 10 of the rows for February 2013.</p>
 
-<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query:">Further constrain the results using multiple predicates in the query:</h3>
+<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query">Further constrain the results using multiple predicates in the query:</h3>
 
 <p>This query returns a list of customer IDs for people who made a purchase via
 an IOS5 device in August 2013.</p>
@@ -1116,7 +1116,7 @@ order by `date`;
 
 ...
 </code></pre></div>
-<h3 id="return-monthly-counts-per-customer-for-a-given-year:">Return monthly counts per customer for a given year:</h3>
+<h3 id="return-monthly-counts-per-customer-for-a-given-year">Return monthly counts per customer for a given year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select cust_id, dir1 month_no, count(*) month_count from logs
 where dir0=2014 group by cust_id, dir1 order by cust_id, month_no limit 10;
 +----------+-----------+--------------+
@@ -1144,7 +1144,7 @@ year: 2014.</p>
 analyze nested data natively without transformation. If you are familiar with
 JavaScript notation, you will already know how some of these extensions work.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1153,7 +1153,7 @@ JavaScript notation, you will already know how some of these extensions work.</p
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="explore-clickstream-data:">Explore clickstream data:</h3>
+<h3 id="explore-clickstream-data">Explore clickstream data:</h3>
 
 <p>Note that the user_info and trans_info columns contain nested data: arrays and
 arrays within arrays. The following queries show how to access this complex
@@ -1170,7 +1170,7 @@ data.</p>
 +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
 5 rows selected
 </code></pre></div>
-<h3 id="unpack-the-user_info-column:">Unpack the user_info column:</h3>
+<h3 id="unpack-the-user_info-column">Unpack the user_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.cust_id as custid, t.user_info.device as device,
 t.user_info.state as state
 from `clicks/clicks.json` t limit 5;
@@ -1195,7 +1195,7 @@ column name, and <code>cust_id</code> is a nested column name.</p>
 <p>The table alias is required; otherwise column names such as <code>user_info</code> are
 parsed as table names by the SQL parser.</p>
 
-<h3 id="unpack-the-trans_info-column:">Unpack the trans_info column:</h3>
+<h3 id="unpack-the-trans_info-column">Unpack the trans_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_info.prod_id as prodid, t.trans_info.purch_flag as
 purchased
 from `clicks/clicks.json` t limit 5;
@@ -1228,7 +1228,7 @@ notation to write interesting queries against nested array data.</p>
 </code></pre></div>
 <p>refers to the 21st value, assuming one exists.</p>
 
-<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction:">Find the first product that is searched for in each transaction:</h3>
+<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction">Find the first product that is searched for in each transaction:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[0] from `clicks/clicks.json` t limit 5;
 +------------+------------+
 |  trans_id  |   EXPR$1   |
@@ -1241,7 +1241,7 @@ notation to write interesting queries against nested array data.</p>
 +------------+------------+
 5 rows selected
 </code></pre></div>
-<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products?">For which transactions did customers search on at least 21 products?</h3>
+<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products">For which transactions did customers search on at least 21 products?</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[20]
 from `clicks/clicks.json` t
 where t.trans_info.prod_id[20] is not null
@@ -1260,7 +1260,7 @@ order by trans_id limit 5;
 <p>This query returns transaction IDs and product IDs for records that contain a
 non-null product ID at the 21st position in the array.</p>
 
-<h3 id="return-clicks-for-a-specific-product-range:">Return clicks for a specific product range:</h3>
+<h3 id="return-clicks-for-a-specific-product-range">Return clicks for a specific product range:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from (select t.trans_id, t.trans_info.prod_id[0] as prodid,
 t.trans_info.purch_flag as purchased
 from `clicks/clicks.json` t) sq
@@ -1283,7 +1283,7 @@ ordered list of products purchased rather than a random list).</p>
 
 <h2 id="perform-operations-on-arrays">Perform Operations on Arrays</h2>
 
-<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session:">Rank successful click conversions and count product searches for each session:</h3>
+<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session">Rank successful click conversions and count product searches for each session:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.`date` as session_date, t.user_info.cust_id as
 cust_id, t.user_info.device as device, repeated_count(t.trans_info.prod_id) as
 prod_count, t.trans_info.purch_flag as purch_flag
@@ -1309,7 +1309,7 @@ in descending order. Only clicks that have resulted in a purchase are counted.</
 <p>To facilitate additional analysis on this result set, you can easily and
 quickly create a Drill table from the results of the query.</p>
 
-<h3 id="continue-to-use-the-dfs.clicks-workspace">Continue to use the dfs.clicks workspace</h3>
+<h3 id="continue-to-use-the-dfs-clicks-workspace">Continue to use the dfs.clicks workspace</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1318,7 +1318,7 @@ quickly create a Drill table from the results of the query.</p>
 +-------+-----------------------------------------+
 1 row selected (1.61 seconds)
 </code></pre></div>
-<h3 id="return-product-searches-for-high-value-customers:">Return product searches for high-value customers:</h3>
+<h3 id="return-product-searches-for-high-value-customers">Return product searches for high-value customers:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from 
 hive.orders as o
@@ -1342,7 +1342,7 @@ where o.order_total &gt; (select avg(inord.order_total)
 <p>This query returns a list of products that are being searched for by customers
 who have made transactions that are above the average in their states.</p>
 
-<h3 id="materialize-the-result-of-the-previous-query:">Materialize the result of the previous query:</h3>
+<h3 id="materialize-the-result-of-the-previous-query">Materialize the result of the previous query:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; create table product_search as select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from
 hive.orders as o
@@ -1364,7 +1364,7 @@ query returns (107,482) and stores them in the format specified by the storage
 plugin (Parquet format in this example). You can create tables that store data
 in csv, parquet, and json formats.</p>
 
-<h3 id="query-the-new-table-to-verify-the-row-count:">Query the new table to verify the row count:</h3>
+<h3 id="query-the-new-table-to-verify-the-row-count">Query the new table to verify the row count:</h3>
 
 <p>This example simply checks that the CTAS statement worked by verifying the
 number of rows in the table.</p>
@@ -1376,7 +1376,7 @@ number of rows in the table.</p>
 +---------+
 1 row selected (0.155 seconds)
 </code></pre></div>
-<h3 id="find-the-storage-file-for-the-table:">Find the storage file for the table:</h3>
+<h3 id="find-the-storage-file-for-the-table">Find the storage file for the table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">[root@maprdemo product_search]# cd /mapr/demo.mapr.com/data/nested/product_search
 [root@maprdemo product_search]# ls -la
 total 451
@@ -1390,7 +1390,7 @@ stored in the location defined by the dfs.clicks workspace:</p>
 </code></pre></div>
 <p>There is a subdirectory that has the same name as the table you created.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Complete the tutorial with the <a href="/docs/summary">Summary</a>.</p>
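
The verification step described above is a single aggregate query (a sketch;
the expected figure comes from the CTAS summary quoted earlier):

    select count(*) from product_search;   -- expect 107482, matching the CTAS output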
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/mongodb-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/mongodb-storage-plugin/index.html b/docs/mongodb-storage-plugin/index.html
index 4f67e6c..25d9e59 100644
--- a/docs/mongodb-storage-plugin/index.html
+++ b/docs/mongodb-storage-plugin/index.html
@@ -1210,7 +1210,7 @@ Drill data sources, including MongoDB. </p>
 | -72.576142 |
 +------------+
 </code></pre></div>
-<h2 id="using-odbc/jdbc-drivers">Using ODBC/JDBC Drivers</h2>
+<h2 id="using-odbc-jdbc-drivers">Using ODBC/JDBC Drivers</h2>
 
 <p>You can query MongoDB through standard
 BI tools, such as Tableau and SQuirreL. For information about Drill ODBC and JDBC drivers, refer to <a href="/docs/odbc-jdbc-interfaces">Drill Interfaces</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/odbc-configuration-reference/index.html
----------------------------------------------------------------------
diff --git a/docs/odbc-configuration-reference/index.html b/docs/odbc-configuration-reference/index.html
index 89d0458..4c36445 100644
--- a/docs/odbc-configuration-reference/index.html
+++ b/docs/odbc-configuration-reference/index.html
@@ -1328,7 +1328,7 @@ The Simba ODBC Driver for Apache Drill produces two log files at the location yo
 <li>Save the mapr.drillodbc.ini configuration file.</li>
 </ol>
 
-<h4 id="what&#39;s-next?-go-to-connecting-to-odbc-data-sources.">What&#39;s Next? Go to <a href="/docs/connecting-to-odbc-data-sources">Connecting to ODBC Data Sources</a>.</h4>
+<h4 id="what-39-s-next-go-to-connecting-to-odbc-data-sources">What&#39;s Next? Go to <a href="/docs/connecting-to-odbc-data-sources">Connecting to ODBC Data Sources</a>.</h4>
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/parquet-format/index.html
----------------------------------------------------------------------
diff --git a/docs/parquet-format/index.html b/docs/parquet-format/index.html
index cb9efd1..19b403c 100644
--- a/docs/parquet-format/index.html
+++ b/docs/parquet-format/index.html
@@ -1097,7 +1097,7 @@
 <li>In the CTAS command, cast JSON string data to corresponding <a href="/docs/json-data-model/#data-type-mapping">SQL types</a>.</li>
 </ul>
 
-<h3 id="example:-read-json,-write-parquet">Example: Read JSON, Write Parquet</h3>
+<h3 id="example-read-json-write-parquet">Example: Read JSON, Write Parquet</h3>
 
 <p>This example demonstrates a storage plugin definition, a sample row of data from a JSON file, and a Drill query that writes the JSON input to Parquet output. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/partition-pruning/index.html
----------------------------------------------------------------------
diff --git a/docs/partition-pruning/index.html b/docs/partition-pruning/index.html
index 98c8378..b6196b9 100644
--- a/docs/partition-pruning/index.html
+++ b/docs/partition-pruning/index.html
@@ -1035,7 +1035,7 @@
 
 <p>The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.</p>
 
-<h2 id="migrating-partitioned-data-from-drill-1.1-1.2-to-drill-1.3">Migrating Partitioned Data from Drill 1.1-1.2 to Drill 1.3</h2>
+<h2 id="migrating-partitioned-data-from-drill-1-1-1-2-to-drill-1-3">Migrating Partitioned Data from Drill 1.1-1.2 to Drill 1.3</h2>
 
 <p>Use the <a href="https://github.com/parthchandra/drill-upgrade">drill-upgrade tool</a> to migrate Parquet data that you generated in Drill 1.1 or 1.2 before attempting to use the data with Drill 1.3 partition pruning.  This migration is mandatory because Parquet data generated by Drill 1.1 and 1.2 must be marked as Drill-generated, as described in <a href="https://issues.apache.org/jira/browse/DRILL-4070">DRILL-4070</a>. </p>
 
@@ -1055,7 +1055,7 @@
 
 <p>Unlike using the Drill 1.0 partitioning, no view query is subsequently required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a> after you use the Drill 1.1 PARTITION BY clause in a CTAS statement. </p>
 
-<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
+<h2 id="drill-1-0-partitioning">Drill 1.0 Partitioning</h2>
 
 <p>You perform the following steps to partition data in Drill 1.0.   </p>
 
@@ -1067,7 +1067,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
+<h3 id="drill-1-0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/plugin-configuration-basics/index.html
----------------------------------------------------------------------
diff --git a/docs/plugin-configuration-basics/index.html b/docs/plugin-configuration-basics/index.html
index 81d2df3..0cd5375 100644
--- a/docs/plugin-configuration-basics/index.html
+++ b/docs/plugin-configuration-basics/index.html
@@ -1123,13 +1123,13 @@ Using a copy of an existing configuration reduces the risk of JSON coding errors
   </tr>
   <tr>
     <td>&quot;formats&quot;</td>
-    <td>&quot;psv&quot;<br>&quot;csv&quot;<br>&quot;tsv&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;avro&quot;<br>&quot;maprdb&quot;<em><br>&quot;sequencefile&quot;</td>
+    <td>&quot;psv&quot;<br>&quot;csv&quot;<br>&quot;tsv&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;avro&quot;<br>&quot;maprdb&quot;<br>&quot;sequencefile&quot;</td>
     <td>yes</td>
-    <td>One or more valid file formats for reading. Drill implicitly detects formats of some files based on extension or bits of data in the file; others require configuration.</td>
+    <td>One or more valid file formats for reading. Drill detects formats of some files; others require configuration. The maprdb format is included only in installations of the mapr-drill package.</td>
   </tr>
   <tr>
     <td>&quot;formats&quot; . . . &quot;type&quot;</td>
-    <td>&quot;text&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;maprdb&quot;</em><br>&quot;avro&quot;<br>&quot;sequencefile&quot;</td>
+    <td>&quot;text&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;maprdb&quot;<br>&quot;avro&quot;<br>&quot;sequencefile&quot;</td>
     <td>yes</td>
     <td>Format type. You can define two formats, csv and psv, as type &quot;Text&quot;, but having different delimiters. </td>
   </tr>
@@ -1174,13 +1174,11 @@ Using a copy of an existing configuration reduces the risk of JSON coding errors
     <td>&quot;formats&quot; . . . &quot;extractHeader&quot;</td>
     <td>true</td>
     <td>no</td>
-    <td>Set to true to extract and use headers as column names when reading a delimited text file, false otherwise. Ensure skipFirstLine=false when extractHeader=true.
+    <td>Set to true to extract and use headers as column names when reading a delimited text file, false otherwise. Ensure skipFirstLine is not true when extractHeader=true.
     </td>
   </tr>
 </table></p>
 
-<p>* Pertains only to distributed Drill installations using the mapr-drill package.  </p>
-
 <h2 id="using-the-formats-attributes">Using the Formats Attributes</h2>
 
 <p>You set the formats attributes, such as skipFirstLine, in the <code>formats</code> area of the storage plugin configuration. When setting attributes for text files, such as CSV, you also need to set the <code>sys.options</code> property <code>exec.storage.enable_new_text_reader</code> to true (the default). For more information and examples of using formats for text files, see <a href="/docs/text-files-csv-tsv-psv/">&quot;Text Files: CSV, TSV, PSV&quot;</a>.  </p>
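
For reference, the sys.options property named above is set with ordinary SQL
(a sketch; true is already the default):

    alter session set `exec.storage.enable_new_text_reader` = true;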

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-hbase/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-hbase/index.html b/docs/querying-hbase/index.html
index 4b752d7..a50f791 100644
--- a/docs/querying-hbase/index.html
+++ b/docs/querying-hbase/index.html
@@ -1044,7 +1044,7 @@ How to use optimization features in Drill 1.2 and later<br></li>
 How to use Drill 1.2 to leverage new features introduced by <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a></li>
 </ul>
 
-<h2 id="tutorial--querying-hbase-data">Tutorial--Querying HBase Data</h2>
+<h2 id="tutorial-querying-hbase-data">Tutorial--Querying HBase Data</h2>
 
 <p>This tutorial shows how to connect Drill to an HBase data source, create simple HBase tables, and query the data using Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-json-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-json-files/index.html b/docs/querying-json-files/index.html
index 86b4d0a..8e557a0 100644
--- a/docs/querying-json-files/index.html
+++ b/docs/querying-json-files/index.html
@@ -1035,7 +1035,7 @@
       
         <p>To query complex JSON files, you need to understand the <a href="/docs/json-data-model/">&quot;JSON Data Model&quot;</a>. This section provides a trivial example of querying a sample file that Drill installs. </p>
 
-<h2 id="about-the-employee.json-file">About the employee.json File</h2>
+<h2 id="about-the-employee-json-file">About the employee.json File</h2>
 
 <p>The sample file, <code>employee.json</code>, is packaged in the Foodmart data JAR in Drill&#39;s
 classpath:  </p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-plain-text-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-plain-text-files/index.html b/docs/querying-plain-text-files/index.html
index 72200d5..3184f28 100644
--- a/docs/querying-plain-text-files/index.html
+++ b/docs/querying-plain-text-files/index.html
@@ -1064,7 +1064,7 @@ found&quot; error if references to files in queries do not match these condition
       &quot;delimiter&quot;: &quot;|&quot;
     }
 </code></pre></div>
-<h2 id="select-*-from-a-csv-file">SELECT * FROM a CSV File</h2>
+<h2 id="select-from-a-csv-file">SELECT * FROM a CSV File</h2>
 
 <p>The first query selects rows from a <code>.csv</code> text file. The file contains seven
 records:</p>
@@ -1095,7 +1095,7 @@ each row.</p>
 +-----------------------------------+
 7 rows selected (0.089 seconds)
 </code></pre></div>
-<h2 id="columns[n]-syntax">Columns[n] Syntax</h2>
+<h2 id="columns-n-syntax">Columns[n] Syntax</h2>
 
 <p>You can use the <code>COLUMNS[n]</code> syntax in the SELECT list to return these CSV
 rows in a more readable, column by column, format. (This syntax uses a zero-
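
A sketch of the COLUMNS[n] form, using a hypothetical example.csv in the same
workspace as the file queried above (the index is zero-based):

    select columns[0] as first_field, columns[1] as second_field
    from `example.csv`;           -- columns[0] is the first field of each row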

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-sequence-files/index.html b/docs/querying-sequence-files/index.html
index f1330a5..d4edaba 100644
--- a/docs/querying-sequence-files/index.html
+++ b/docs/querying-sequence-files/index.html
@@ -1036,7 +1036,7 @@
         <p>Sequence files are flat files storing binary key value pairs.
 Drill projects sequence files as table with two columns &#39;binary_key&#39;, &#39;binary_value&#39;.</p>
 
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h3 id="querying-sequence-file">Querying sequence file.</h3>
 
 <p>Start drill shell</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT *

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-system-tables/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-system-tables/index.html b/docs/querying-system-tables/index.html
index 7e733a2..0ebb252 100644
--- a/docs/querying-system-tables/index.html
+++ b/docs/querying-system-tables/index.html
@@ -1093,7 +1093,7 @@ requests.</p>
 
 <p>Query the drillbits, version, options, boot, threads, and memory tables in the sys database.</p>
 
-<h3 id="query-the-drillbits-table.">Query the drillbits table.</h3>
+<h3 id="query-the-drillbits-table">Query the drillbits table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from drillbits;
 +-------------------+------------+--------------+------------+---------+
 |   hostname        |  user_port | control_port | data_port  |  current|
@@ -1121,7 +1121,7 @@ True means the Drillbit is connected to the session or client running the
 query. This Drillbit is the Foreman for the current session.<br></li>
 </ul>
 
-<h3 id="query-the-version-table.">Query the version table.</h3>
+<h3 id="query-the-version-table">Query the version table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from version;
 +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
 |                 commit_id                 |                           commit_message                           |        commit_time         | build_email  |         build_time         |
@@ -1145,7 +1145,7 @@ example.</li>
 The time that the release was built.</li>
 </ul>
 
-<h3 id="query-the-options-table.">Query the options table.</h3>
+<h3 id="query-the-options-table">Query the options table.</h3>
 
 <p>Drill provides system, session, and boot options that you can query.</p>
 
@@ -1187,7 +1187,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-boot-table.">Query the boot table.</h3>
+<h3 id="query-the-boot-table">Query the boot table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from boot limit 10;
 +--------------------------------------+----------+-------+---------+------------+-------------------------+-----------+------------+
 |                 name                 |   kind   | type  | status  |  num_val   |       string_val        | bool_val  | float_val  |
@@ -1225,7 +1225,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-threads-table.">Query the threads table.</h3>
+<h3 id="query-the-threads-table">Query the threads table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from threads;
 +--------------------+------------+----------------+---------------+
 |       hostname     | user_port  | total_threads  | busy_threads  |
@@ -1248,7 +1248,7 @@ The peak thread count on the node.</li>
 The current number of live threads (daemon and non-daemon) on the node.</li>
 </ul>
 
-<h3 id="query-the-memory-table.">Query the memory table.</h3>
+<h3 id="query-the-memory-table">Query the memory table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from memory;
 +--------------------+------------+---------------+-------------+-----------------+---------------------+-------------+
 |       hostname     | user_port  | heap_current  |  heap_max   | direct_current  | jvm_direct_current  | direct_max  |

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/ranking-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/ranking-window-functions/index.html b/docs/ranking-window-functions/index.html
index 7c18e10..d9d0d83 100644
--- a/docs/ranking-window-functions/index.html
+++ b/docs/ranking-window-functions/index.html
@@ -1095,7 +1095,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
 
 <p>The following examples show queries that use each of the ranking window functions in Drill. See <a href="/docs/sql-window-functions-examples/">Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="cume_dist()">CUME_DIST()</h3>
+<h3 id="cume_dist">CUME_DIST()</h3>
 
 <p>The following query uses the CUME_DIST() window function to calculate the cumulative distribution of sales for each dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, cume_dist() over(order by sales) as cumedist from q1_sales;
@@ -1135,7 +1135,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+------------+
    10 rows selected (0.198 seconds)  
 </code></pre></div>
-<h3 id="ntile()">NTILE()</h3>
+<h3 id="ntile">NTILE()</h3>
 
 <p>The following example uses the NTILE window function to divide the Q1 sales into five groups and list the sales in ascending order.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_mgr, sales, ntile(5) over(order by sales) as ntilerank from q1_sales;
@@ -1173,7 +1173,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +-----------------+------------+--------+------------+
    10 rows selected (0.312 seconds)
 </code></pre></div>
-<h3 id="percent_rank()">PERCENT_RANK()</h3>
+<h3 id="percent_rank">PERCENT_RANK()</h3>
 
 <p>The following query uses the PERCENT_RANK() window function to calculate the percent rank for employee sales in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, percent_rank() over(order by sales) as perrank from q1_sales; 
@@ -1193,7 +1193,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+---------------------+
    10 rows selected (0.169 seconds)
 </code></pre></div>
-<h3 id="rank()">RANK()</h3>
+<h3 id="rank">RANK()</h3>
 
 <p>The following query uses the RANK() window function to rank the employee sales for Q1. The word rank in Drill is a reserved keyword and must be enclosed in back ticks (``).</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, rank() over(order by sales) as `rank` from q1_sales;
@@ -1213,7 +1213,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+-------+
    10 rows selected (0.174 seconds)
 </code></pre></div>
-<h3 id="row_number()">ROW_NUMBER()</h3>
+<h3 id="row_number">ROW_NUMBER()</h3>
 
 <p>The following query uses the ROW_NUMBER() window function to number the sales for each dealer_id. The word rownum contains the reserved keyword row and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    select dealer_id, emp_name, sales, row_number() over(partition by dealer_id order by sales) as `rownum` from q1_sales;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/rdbms-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/rdbms-storage-plugin/index.html b/docs/rdbms-storage-plugin/index.html
index 0c09b9b..ba91e69 100644
--- a/docs/rdbms-storage-plugin/index.html
+++ b/docs/rdbms-storage-plugin/index.html
@@ -1045,7 +1045,7 @@
 <li>Add a new storage configuration to Drill through the web ui. Example configurations for <a href="#Example-Oracle-Configuration">Oracle</a>, <a href="#Example-SQL-Server-Configuration">SQL Server</a>, <a href="#Example-MySQL-Configuration">MySQL</a> and <a href="#Example-Postgres-Configuration">Postgres</a> are provided below.</li>
 </ol>
 
-<h2 id="example:-working-with-mysql">Example: Working with MySQL</h2>
+<h2 id="example-working-with-mysql">Example: Working with MySQL</h2>
 
 <p>Drill communicates with MySQL through the JDBC driver using the configuration that you specify in the Web Console or through the <a href="/docs/plugin-configuration-basics/#storage-plugin-rest-api">REST API</a>.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/rest-api/index.html
----------------------------------------------------------------------
diff --git a/docs/rest-api/index.html b/docs/rest-api/index.html
index 148aee6..8495dcc 100644
--- a/docs/rest-api/index.html
+++ b/docs/rest-api/index.html
@@ -1086,7 +1086,7 @@
 
 <hr>
 
-<h3 id="post-/query.json">POST /query.json</h3>
+<h3 id="post-query-json">POST /query.json</h3>
 
 <p>Submit a query and return results.</p>
 
@@ -1127,7 +1127,7 @@
 
 <hr>
 
-<h3 id="get-/profiles.json">GET /profiles.json</h3>
+<h3 id="get-profiles-json">GET /profiles.json</h3>
 
 <p>Get the profiles of running and completed queries. </p>
 
@@ -1151,7 +1151,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/{queryid}.json">GET /profiles/{queryid}.json</h3>
+<h3 id="get-profiles-queryid-json">GET /profiles/{queryid}.json</h3>
 
 <p>Get the profile of the query that has the given queryid.</p>
 
@@ -1169,7 +1169,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/cancel/{queryid}">GET /profiles/cancel/{queryid}</h3>
+<h3 id="get-profiles-cancel-queryid">GET /profiles/cancel/{queryid}</h3>
 
 <p>Cancel the query that has the given queryid.</p>
 
@@ -1192,7 +1192,7 @@
 
 <hr>
 
-<h3 id="get-/storage.json">GET /storage.json</h3>
+<h3 id="get-storage-json">GET /storage.json</h3>
 
 <p>Get the list of storage plugin names and configurations.</p>
 
@@ -1226,7 +1226,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}.json">GET /storage/{name}.json</h3>
+<h3 id="get-storage-name-json">GET /storage/{name}.json</h3>
 
 <p>Get the definition of the named storage plugin.</p>
 
@@ -1250,7 +1250,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}/enable/{val}">Get /storage/{name}/enable/{val}</h3>
+<h3 id="get-storage-name-enable-val">Get /storage/{name}/enable/{val}</h3>
 
 <p>Enable or disable the named storage plugin.</p>
 
@@ -1273,7 +1273,7 @@
 
 <hr>
 
-<h3 id="post-/storage/{name}.json">POST /storage/{name}.json</h3>
+<h3 id="post-storage-name-json">POST /storage/{name}.json</h3>
 
 <p>Create or update a storage plugin configuration.</p>
 
@@ -1306,7 +1306,7 @@
 
 <hr>
 
-<h3 id="delete-/storage/{name}.json">DELETE /storage/{name}.json</h3>
+<h3 id="delete-storage-name-json">DELETE /storage/{name}.json</h3>
 
 <p>Delete a storage plugin configuration.</p>
 
@@ -1329,7 +1329,7 @@
 
 <hr>
 
-<h3 id="get-/stats.json">GET /stats.json</h3>
+<h3 id="get-stats-json">GET /stats.json</h3>
 
 <p>Get Drillbit information, such as ports numbers.</p>
 
@@ -1360,7 +1360,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status">GET /status</h3>
+<h3 id="get-status">GET /status</h3>
 
 <p>Get the status of Drill. </p>
 
@@ -1380,7 +1380,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status/metrics">GET /status/metrics</h3>
+<h3 id="get-status-metrics">GET /status/metrics</h3>
 
 <p>Get the current memory metrics.</p>
 
@@ -1399,7 +1399,7 @@
 
 <hr>
 
-<h3 id="get-/status/threads">GET /status/threads</h3>
+<h3 id="get-status-threads">GET /status/threads</h3>
 
 <p>Get the status of threads.</p>
 
@@ -1430,7 +1430,7 @@
 
 <hr>
 
-<h3 id="get-/options.json">GET /options.json</h3>
+<h3 id="get-options-json">GET /options.json</h3>
 
 <p>List the name, default, and data type of the system and session options.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/s3-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/s3-storage-plugin/index.html b/docs/s3-storage-plugin/index.html
index a6a00a8..1067e73 100644
--- a/docs/s3-storage-plugin/index.html
+++ b/docs/s3-storage-plugin/index.html
@@ -1039,7 +1039,7 @@
 
 <p>There are two simple steps to follow: (1) provide your AWS credentials (2) configure S3 storage plugin with S3 bucket</p>
 
-<h4 id="(1)-aws-credentials">(1) AWS credentials</h4>
+<h4 id="1-aws-credentials">(1) AWS credentials</h4>
 
 <p>To enable Drill&#39;s S3a support, edit the file conf/core-site.xml in your Drill install directory, replacing the text ENTER_YOUR_ACESSKEY and ENTER_YOUR_SECRETKEY with your AWS credentials.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&lt;configuration&gt;
@@ -1056,7 +1056,7 @@
 
 &lt;/configuration&gt;
 </code></pre></div>
-<h4 id="(2)-configure-s3-storage-plugin">(2) Configure S3 Storage Plugin</h4>
+<h4 id="2-configure-s3-storage-plugin">(2) Configure S3 Storage Plugin</h4>
 
 <p>Enable S3 storage plugin if you already have one configured or you can add a new plugin by following these steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/sequence-files/index.html b/docs/sequence-files/index.html
index 82d3a86..953e53d 100644
--- a/docs/sequence-files/index.html
+++ b/docs/sequence-files/index.html
@@ -1034,7 +1034,7 @@
         <p>Hadoop Sequence files (<a href="https://wiki.apache.org/hadoop/SequenceFile">https://wiki.apache.org/hadoop/SequenceFile</a>) are flat files storing binary key, value pairs.
 Drill projects sequence files as table with two columns - &#39;binary_key&#39;, &#39;binary_value&#39; of type VARBINARY.</p>
 
-<h3 id="storage-plugin-format-for-sequence-files.">Storage plugin format for sequence files.</h3>
+<h3 id="storage-plugin-format-for-sequence-files">Storage plugin format for sequence files.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">. . .
 &quot;sequencefile&quot;: {
   &quot;type&quot;: &quot;sequencefile&quot;,
@@ -1044,7 +1044,7 @@ Drill projects sequence files as table with two columns - &#39;binary_key&#39;,
 },
 . . .
 </code></pre></div>
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h3 id="querying-sequence-file">Querying sequence file.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT *
 FROM dfs.tmp.`simple.seq`
 LIMIT 1;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/sql-extensions/index.html
----------------------------------------------------------------------
diff --git a/docs/sql-extensions/index.html b/docs/sql-extensions/index.html
index bcbcbd3..34bf69a 100644
--- a/docs/sql-extensions/index.html
+++ b/docs/sql-extensions/index.html
@@ -1037,7 +1037,7 @@
 
 <p>Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS provides the capability to write data of complex/multi-structured data types. Drill extends the <a href="http://drill.apache.org/docs/lexical-structure">lexical rules</a> for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for query profiles and diagnostic information.</p>
 
-<h2 id="extensions-for-hive--and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
+<h2 id="extensions-for-hive-and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
 
 <p>Drill supports Hive and HBase as plug-and-play data sources. Drill can read tables created in Hive that use <a href="/docs/hive-to-drill-data-type-mapping">data types compatible</a> with Drill.  You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-in-distributed-mode/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-in-distributed-mode/index.html b/docs/starting-drill-in-distributed-mode/index.html
index d3e5832..ff8e8a7 100644
--- a/docs/starting-drill-in-distributed-mode/index.html
+++ b/docs/starting-drill-in-distributed-mode/index.html
@@ -1041,7 +1041,7 @@
 <li>Using an Ad-Hoc Connection to Drill</li>
 </ul>
 
-<h2 id="using-the-drillbit.sh-command">Using the drillbit.sh Command</h2>
+<h2 id="using-the-drillbit-sh-command">Using the drillbit.sh Command</h2>
 
 <p>To use Drill in distributed mode, you need to control a Drillbit. If you use Drill in embedded mode, you do not use the <strong>drillbit.sh</strong> command. </p>
 
@@ -1055,7 +1055,7 @@
 
 <p>You can use a configuration file to start Drill. Using such a file is handy for controlling Drillbits on multiple nodes.</p>
 
-<h3 id="drillbit.sh-command-syntax">drillbit.sh Command Syntax</h3>
+<h3 id="drillbit-sh-command-syntax">drillbit.sh Command Syntax</h3>
 
 <p><code>drillbit.sh [--config &lt;conf-dir&gt;] (start|stop|status|restart|autorestart)</code></p>
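
 <p>For example, to start a Drillbit with a custom configuration directory and then check it (a sketch; the directory path is hypothetical):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">bin/drillbit.sh --config /etc/drill/conf start
 bin/drillbit.sh --config /etc/drill/conf status
 </code></pre></div>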
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-on-linux-and-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-on-linux-and-mac-os-x/index.html b/docs/starting-drill-on-linux-and-mac-os-x/index.html
index b378624..8a40b31 100644
--- a/docs/starting-drill-on-linux-and-mac-os-x/index.html
+++ b/docs/starting-drill-on-linux-and-mac-os-x/index.html
@@ -1048,7 +1048,7 @@
 
 <p>To start Drill, you can also use the <strong>sqlline</strong> command and a custom connection string, as described in detail in <a href="/docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill">&quot;Using an Ad-Hoc Connection to Drill&quot;</a>. For example, you can specify the default storage plugin configuration when you start the shell. Doing so eliminates the need to specify the storage plugin configuration in the query. For example, this command specifies the <code>dfs</code> storage plugin:</p>
 
-<p><code>bin/sqlline –u jdbc:drill:schema=dfs;zk=local</code></p>
+<p><code>bin/sqlline -u &quot;jdbc:drill:zk=local;schema=dfs&quot;</code></p>
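
 <p>With <code>schema=dfs</code> set as the session default, queries can drop the plugin prefix; a sketch, using a hypothetical file path:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM `/tmp/sample.json` LIMIT 5;
 -- equivalent to: SELECT * FROM dfs.`/tmp/sample.json` LIMIT 5;
 </code></pre></div>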
 
 <p>If you start Drill on one network, and then want to use Drill on another network, such as your home network, restart Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-on-windows/index.html b/docs/starting-drill-on-windows/index.html
index 1337a01..6cff89e 100644
--- a/docs/starting-drill-on-windows/index.html
+++ b/docs/starting-drill-on-windows/index.html
@@ -1049,7 +1049,7 @@
 
 <p>You can use the schema option in the <strong>sqlline</strong> command to specify a storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query. For example, this command specifies the <code>dfs</code> storage plugin:</p>
 
-<p><code>C:\bin\sqlline sqlline.bat –u &quot;jdbc:drill:schema=dfs;zk=local&quot;</code></p>
+<p><code>C:\bin\sqlline sqlline.bat -u &quot;jdbc:drill:zk=local;schema=dfs&quot;</code></p>
 
 <p>If you start Drill on one network, and then want to use Drill on another network, such as your home network, restart Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-the-web-console/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-the-web-console/index.html b/docs/starting-the-web-console/index.html
index 78539a4..6daae2b 100644
--- a/docs/starting-the-web-console/index.html
+++ b/docs/starting-the-web-console/index.html
@@ -1033,7 +1033,7 @@
       
         <p>The Drill Web Console is one of several <a href="/docs/architecture-introduction/#drill-clients">client interfaces</a> you can use to access Drill. </p>
 
-<h2 id="drill-1.1-and-earlier">Drill 1.1 and Earlier</h2>
+<h2 id="drill-1-1-and-earlier">Drill 1.1 and Earlier</h2>
 
 <p>In Drill 1.1 and earlier, to open the Drill Web Console, launch a web browser, and go to the following URL:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/tableau-examples/index.html
----------------------------------------------------------------------
diff --git a/docs/tableau-examples/index.html b/docs/tableau-examples/index.html
index 5c842c2..7576983 100644
--- a/docs/tableau-examples/index.html
+++ b/docs/tableau-examples/index.html
@@ -1049,7 +1049,7 @@ DSN to a Drill data source and then access the data in Tableau 8.1.</p>
 source data. You define schemas by configuring storage plugins on the Storage
 tab of the <a href="/docs/getting-to-know-the-drill-sandbox/#storage-plugin-overview">Drill Web Console</a>. Also, the examples assume you <a href="/docs/supported-data-types/#enabling-the-decimal-type">enabled the DECIMAL data type</a> in Drill.  </p>
 
-<h2 id="example:-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
+<h2 id="example-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
 
 <p>To access Hive tables in Tableau 8.1, connect to the Hive schema using a DSN
 and then visualize the data in Tableau.<br>
@@ -1060,7 +1060,7 @@ and then visualize the data in Tableau.<br>
 
 <hr>
 
-<h2 id="step-1:-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
+<h2 id="step-1-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
 
 <p>In this step, we will create a DSN that accesses a Hive table.</p>
 
@@ -1082,7 +1082,7 @@ In this example, we are connecting to a Zookeeper Quorum. Verify that the Cluste
 
 <hr>
 
-<h2 id="step-2:-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
+<h2 id="step-2-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
 
 <p>Now, we can connect to Hive tables.</p>
 
@@ -1108,7 +1108,7 @@ configure the connection to the Hive table and click <strong>OK</strong>.</li>
 
 <hr>
 
-<h2 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
+<h2 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
 
 <p>Once you connect to the data, the columns appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>
@@ -1117,7 +1117,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 
 <p><img src="/docs/img/student_hive.png" alt=""></p>
 
-<h2 id="example:-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
+<h2 id="example-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
 
 <p>You can connect to self-describing data in Tableau in the following ways:</p>
 
@@ -1126,7 +1126,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 <li>Use Tableau’s Custom SQL to query the self-describing data directly. </li>
 </ol>
 
-<h3 id="option-1.-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
+<h3 id="option-1-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
 
 <p>The following example describes how to create a view of an HBase table and
 connect to that view in Tableau 8.1. You can also use these steps to access
@@ -1137,7 +1137,7 @@ data for other sources such as Hive, Parquet, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named hbase that contains a table named s_voters and a schema named dfs.default that points to a writable location.  </p>
 </div>
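
 <p>For reference, such a view can also be created directly from the Drill shell; a sketch, in which the column family <code>cf1</code> and qualifier <code>name</code> are hypothetical:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE VIEW dfs.`default`.s_voters_vw AS
 SELECT CONVERT_FROM(row_key, &#39;UTF8&#39;) AS voter_id,
        CONVERT_FROM(t.cf1.name, &#39;UTF8&#39;) AS voter_name
 FROM hbase.s_voters t;
 </code></pre></div>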
 
-<h4 id="step-1.-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
+<h4 id="step-1-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
 
 <p>In this step, we will use the ODBC Administrator to access the Drill Explorer
 where we can create a view of an HBase table. Then, we will use the ODBC
@@ -1191,7 +1191,7 @@ view.</p></li>
 <li><p>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</p></li>
 </ol>
 
-<h4 id="step-2.-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
+<h4 id="step-2-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
 
 <p>Now, we can connect to the view in Tableau.</p>
 
@@ -1214,7 +1214,7 @@ view.</p></li>
 <li>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data in Tableau, the columns appear in the Data
 window. To visualize the data, drag fields from the Data window to the
@@ -1224,7 +1224,7 @@ workspace view.</p>
 
 <p><img src="/docs/img/VoterContributions_hbaseview.png" alt=""></p>
 
-<h3 id="option-2.-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
+<h3 id="option-2-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
 
 <p>The following example describes how to use custom SQL to connect to a Parquet
 file and then visualize the data in Tableau 8.1. You can use the same steps to
@@ -1235,7 +1235,7 @@ access data from other sources such as Hive, HBase, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named dfs.default which contains a parquet file named region.parquet.  </p>
 </div>
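
 <p>The custom SQL for such a file can be as simple as the following sketch; note that <code>default</code> must be enclosed in back ticks because it is a reserved word:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM dfs.`default`.`region.parquet`;
 </code></pre></div>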
 
-<h4 id="step-1.-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
+<h4 id="step-1-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
 
 <p>In this step, we will create a DSN that accesses files on the DFS. We will
 also use Drill Explorer to preview the SQL that we want to use to connect to
@@ -1271,7 +1271,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</li>
 </ol>
 
-<h4 id="step-2.-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
+<h4 id="step-2-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
 
 <p>Now, we can create a connection to the Parquet file using the custom SQL.</p>
 
@@ -1300,7 +1300,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li><p>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</p></li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data, the fields appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/troubleshooting/index.html
----------------------------------------------------------------------
diff --git a/docs/troubleshooting/index.html b/docs/troubleshooting/index.html
index a09b7a5..a08f5e7 100644
--- a/docs/troubleshooting/index.html
+++ b/docs/troubleshooting/index.html
@@ -1142,7 +1142,7 @@ Symptom:   </p>
 </ul></li>
 </ul>
 
-<h3 id="access-nested-fields-without-table-name/alias">Access Nested Fields without Table Name/Alias</h3>
+<h3 id="access-nested-fields-without-table-name-alias">Access Nested Fields without Table Name/Alias</h3>
 
 <p>Symptom: </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   SELECT x.y …  
@@ -1216,7 +1216,7 @@ Symptom:   </p>
 <p>Solution: Make sure that the ODBC driver version is compatible with the server version. <a href="/docs/installing-the-odbc-driver">Driver installation instructions</a> include how to check the driver version. 
Turn on ODBC driver debug logging to better understand the failure.  </p>
 
-<h3 id="jdbc/odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
+<h3 id="jdbc-odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
 
 <p>Symptom: Client cannot resolve ZooKeeper host names for JDBC/ODBC.</p>
 
@@ -1240,13 +1240,13 @@ Turn on ODBC driver debug logging to better understand failure.  </p>
 
 <p>Solution: Verify that the column alias does not conflict with the storage type. See <a href="/docs/lexical-structure/#case-sensitivity">Lexical Structures</a>.  </p>
 
-<h3 id="list-(array)-contains-null">List (Array) Contains Null</h3>
+<h3 id="list-array-contains-null">List (Array) Contains Null</h3>
 
 <p>Symptom: UNSUPPORTED_OPERATION ERROR: Null values are not supported in lists by default. </p>
 
 <p>Solution: Avoid selecting fields that are arrays containing nulls, or enable all_text_mode for the session by setting store.json.all_text_mode to true, so that Drill treats JSON null values as a string containing the word &#39;null&#39;.</p>
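
 <p>For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.json.all_text_mode` = true;
 </code></pre></div>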
 
-<h3 id="select-count-(*)-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
+<h3 id="select-count-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
 
 <p>Solution: In some cases, the underlying storage format does not have a built-in capability to return a count of records in a table.  In these cases, Drill does a full scan of the data to verify the number of records.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/tutorial-develop-a-simple-function/index.html
----------------------------------------------------------------------
diff --git a/docs/tutorial-develop-a-simple-function/index.html b/docs/tutorial-develop-a-simple-function/index.html
index bf7bf48..7c14b99 100644
--- a/docs/tutorial-develop-a-simple-function/index.html
+++ b/docs/tutorial-develop-a-simple-function/index.html
@@ -1059,7 +1059,7 @@
 
 <hr>
 
-<h2 id="step-1:-add-dependencies">Step 1: Add dependencies</h2>
+<h2 id="step-1-add-dependencies">Step 1: Add dependencies</h2>
 
 <p>First, add the following Drill dependency to your maven project:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"> <span class="nt">&lt;dependency&gt;</span>
@@ -1070,7 +1070,7 @@
 </code></pre></div>
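
 <p>For illustration, the dependency a Drill UDF project typically declares is the <code>drill-java-exec</code> artifact; a sketch, in which the version is an assumption and should match your Drill release:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&lt;dependency&gt;
     &lt;groupId&gt;org.apache.drill.exec&lt;/groupId&gt;
     &lt;artifactId&gt;drill-java-exec&lt;/artifactId&gt;
     &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>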
 <hr>
 
-<h2 id="step-2:-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
+<h2 id="step-2-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
 
 <p>To start implementing the DrillSimpleFunc interface, add the following annotations to the @FunctionTemplate declaration:</p>
 
@@ -1106,7 +1106,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-3:-declare-input-parameters">Step 3: Declare input parameters</h2>
+<h2 id="step-3-declare-input-parameters">Step 3: Declare input parameters</h2>
 
 <p>The function will be generated dynamically, as you can see in the <a href="https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillSimpleFuncHolder.java/#L42">DrillSimpleFuncHolder</a>; the input parameters and the output are defined using holder classes declared through annotations. Define the input parameters using the @Param annotation. </p>
 
@@ -1138,7 +1138,7 @@
 
 <hr>
 
-<h2 id="step-4:-declare-the-return-value-type">Step 4: Declare the return value type</h2>
+<h2 id="step-4-declare-the-return-value-type">Step 4: Declare the return value type</h2>
 
 <p>Also, using the @Output annotation, define the return value as a VarCharHolder. Because you are manipulating a VarChar, you must also inject a buffer that Drill uses for the output. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">SimpleMaskFunc</span> <span class="kd">implements</span> <span class="n">DrillSimpleFunc</span> <span class="o">{</span>
@@ -1153,7 +1153,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-5:-implement-the-eval()-method">Step 5: Implement the eval() method</h2>
+<h2 id="step-5-implement-the-eval-method">Step 5: Implement the eval() method</h2>
 
 <p>The MASK function does not require any setup, so you do not need to define the setup() method. Define only the eval() method. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kt">void</span> <span class="nf">eval</span><span class="o">()</span> <span class="o">{</span>
@@ -1187,7 +1187,7 @@
 
 <p>Even to a seasoned Java developer, the eval() method might look a bit strange because Drill generates the final code on the fly to fulfill a query request. This technique leverages Java’s just-in-time (JIT) compiler for maximum speed.</p>
 
-Basic Coding Rules</h2>
+<h2 id="basic-coding-rules">Basic Coding Rules</h2>
 
 <p>To leverage Java’s just-in-time (JIT) compiler for maximum speed, you need to adhere to some basic rules.</p>
 
@@ -1221,9 +1221,11 @@ Basic Coding Rules</h2>
     <span class="nt">&lt;/executions&gt;</span>
 <span class="nt">&lt;/plugin&gt;</span>
 </code></pre></div>
-Add a drill-module.conf File to Resources</h2>
+<h2 id="add-a-drill-module-conf-file-to-resources">Add a drill-module.conf File to Resources</h2>
 
-<p>Add a <code>drill-module.conf</code> file in the resources folder of your project. The presence of this file tells Drill that your jar contains a custom function. If you have no specific configuration to set for your function, you can keep this file empty.</p>
+<p>Add a <code>drill-module.conf</code> file in the resources folder of your project. The presence of this file tells Drill that your jar contains a custom function. Put the following line in the <code>drill-module.conf</code> file:</p>
+
+<p><code>drill.classpath.scanning.packages += &quot;org.apache.drill.contrib.function&quot;</code></p>
 
 <h2 id="build-and-deploy-the-function">Build and Deploy the Function</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/useful-research/index.html
----------------------------------------------------------------------
diff --git a/docs/useful-research/index.html b/docs/useful-research/index.html
index d619555..3b0aa35 100644
--- a/docs/useful-research/index.html
+++ b/docs/useful-research/index.html
@@ -1068,7 +1068,7 @@
 <li>Design Proposal for Drill: <a href="http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739">http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739</a></li>
 </ul>
 
-<h2 id="dazo-(second-generation-opendremel)">Dazo (second generation OpenDremel)</h2>
+<h2 id="dazo-second-generation-opendremel">Dazo (second generation OpenDremel)</h2>
 
 <ul>
 <li>Dazo repos: <a href="https://github.com/Dazo-org">https://github.com/Dazo-org</a></li>
@@ -1082,7 +1082,7 @@
 <li><a href="https://github.com/rgrzywinski/field-stripe/">https://github.com/rgrzywinski/field-stripe/</a></li>
 </ul>
 
-Code generation / Physical plan generation</h2>
+<h2 id="code-generation-physical-plan-generation">Code generation / Physical plan generation</h2>
 
 <ul>
 <li><a href="http://www.vldb.org/pvldb/vol4/p539-neumann.pdf">http://www.vldb.org/pvldb/vol4/p539-neumann.pdf</a> (SLIDES: <a href="http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf">http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf</a>)</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-apache-drill-with-tableau-9-desktop/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-desktop/index.html b/docs/using-apache-drill-with-tableau-9-desktop/index.html
index ed83888..18ab6b9 100644
--- a/docs/using-apache-drill-with-tableau-9-desktop/index.html
+++ b/docs/using-apache-drill-with-tableau-9-desktop/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill Version 0.9 or higher is recommended.</p>
 
@@ -1066,13 +1066,13 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h3 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau. The MapR Drill ODBC Driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC Driver first and then installed Tableau, the TDC file is not installed automatically, and you need to <a href="/docs/installing-the-tdc-file-on-windows/">install the TDC file manually</a>. </p>
 
 <hr>
 
-<h3 id="step-3:-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h3>
+<h3 id="step-3-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h3>
 
 <p>Complete the following steps to configure an ODBC data connection: </p>
 
@@ -1097,7 +1097,7 @@ Tableau is now connected to Drill, and you can select various tables and views.
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>Tableau Desktop can now use Drill to query various data sources and visualize the information.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-apache-drill-with-tableau-9-server/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-server/index.html b/docs/using-apache-drill-with-tableau-9-server/index.html
index 3f6b1df..fc22b6b 100644
--- a/docs/using-apache-drill-with-tableau-9-server/index.html
+++ b/docs/using-apache-drill-with-tableau-9-server/index.html
@@ -1045,7 +1045,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Server, Drill Version 0.9 or higher is recommended.</p>
 
@@ -1065,7 +1065,7 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h3 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.</p>
 
@@ -1078,7 +1078,7 @@
 
 <hr>
 
-<h3 id="step-3:-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h3>
+<h3 id="step-3-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h3>
 
 <p>For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-jdbc-with-squirrel-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/using-jdbc-with-squirrel-on-windows/index.html b/docs/using-jdbc-with-squirrel-on-windows/index.html
index 3616de8..13591f2 100644
--- a/docs/using-jdbc-with-squirrel-on-windows/index.html
+++ b/docs/using-jdbc-with-squirrel-on-windows/index.html
@@ -1050,7 +1050,7 @@
 
 <hr>
 
-<h2 id="step-1:-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
+<h2 id="step-1-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
 
 <p>The Drill JDBC Driver <code>JAR</code> file must exist in a directory on your Windows
machine before you can configure the driver in the SQuirreL client.</p>
@@ -1069,7 +1069,7 @@ you can locate the driver in the following directory:</p>
 </code></pre></div>
 <hr>
 
-<h2 id="step-2:-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
+<h2 id="step-2-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
 
 <p>To install and start SQuirreL, complete the following steps:</p>
 
@@ -1082,14 +1082,14 @@ you can locate the driver in the following directory:</p>
 
 <hr>
 
-<h2 id="step-3:-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
+<h2 id="step-3-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
 
 <p>To add the Drill JDBC Driver to SQuirreL, define the driver and create a
 database alias. The alias is a specific instance of the driver configuration.
 SQuirreL uses the driver definition and alias to connect to Drill so you can
 access data sources that you have registered with Drill.</p>
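
 <p>The alias&#39;s connection URL takes the following form; a sketch, in which the ZooKeeper hosts and port are hypothetical and <code>drillbits1</code> is Drill&#39;s default cluster ID:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">jdbc:drill:zk=zkhost1:2181,zkhost2:2181,zkhost3:2181/drill/drillbits1
 </code></pre></div>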
 
-<h3 id="a.-define-the-driver">A. Define the Driver</h3>
+<h3 id="a-define-the-driver">A. Define the Driver</h3>
 
 <p>To define the Drill JDBC Driver, complete the following steps:</p>
 
@@ -1131,7 +1131,7 @@ access data sources that you have registered with Drill.</p>
 
 <p><img src="/docs/img/52.png" alt="drill query flow"></p>
 
-<h3 id="b.-create-an-alias">B. Create an Alias</h3>
+<h3 id="b-create-an-alias">B. Create an Alias</h3>
 
 <p>To create an alias, complete the following steps:</p>
 
@@ -1182,7 +1182,7 @@ access data sources that you have registered with Drill.</p>
 
 <hr>
 
-<h2 id="step-4:-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
+<h2 id="step-4-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
 
 <p>Once you have SQuirreL successfully connected to your cluster through the
 Drill JDBC Driver, you can issue queries from the SQuirreL client. You can run

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-microstrategy-analytics-with-apache-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-microstrategy-analytics-with-apache-drill/index.html b/docs/using-microstrategy-analytics-with-apache-drill/index.html
index 8575682..b47f6c0 100644
--- a/docs/using-microstrategy-analytics-with-apache-drill/index.html
+++ b/docs/using-microstrategy-analytics-with-apache-drill/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download matches the Apache Drill version that you use. Ideally, you should upgrade to the latest versions of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1082,7 +1082,7 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h3>
+<h3 id="step-2-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h3>
 
 <p>The steps in this section are based on the MicroStrategy Technote for installing DBMS objects, which you can reference at: </p>
 
@@ -1115,7 +1115,7 @@
 
 <hr>
 
-<h3 id="step-3:-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h3>
+<h3 id="step-3-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h3>
 
 <p>Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:</p>
 
@@ -1134,7 +1134,7 @@
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. </p>
 
@@ -1142,7 +1142,7 @@
 
 <p>The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to <a href="/docs/query-data-introduction/">Query Data</a> for more information about how to configure and use Drill to work with complex data:</p>
 
-<h4 id="part-1:-create-a-project">Part 1: Create a Project</h4>
+<h4 id="part-1-create-a-project">Part 1: Create a Project</h4>
 
 <p>Complete the following steps to create a project:</p>
 
@@ -1160,7 +1160,7 @@
 <li> Click <strong>OK</strong>. The new project is created in MicroStrategy Developer. </li>
 </ol>
 
-<h4 id="part-2:-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h4>
+<h4 id="part-2-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h4>
 
 <p>Complete the following steps to create a Freeform Report and analyze data:</p>