Posted to commits@drill.apache.org by kr...@apache.org on 2015/12/10 03:45:30 UTC

[1/3] drill-site git commit: squash 4 commits

Repository: drill-site
Updated Branches:
  refs/heads/asf-site e6f79da83 -> 0d7ffd4b0


http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-qlik-sense-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-qlik-sense-with-drill/index.html b/docs/using-qlik-sense-with-drill/index.html
index fadc5ed..835048e 100644
--- a/docs/using-qlik-sense-with-drill/index.html
+++ b/docs/using-qlik-sense-with-drill/index.html
@@ -1056,7 +1056,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-drill-odbc-driver">Step 1: Install and Configure the Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download matches the Apache Drill version that you use. Ideally, you should upgrade to the latest versions of Apache Drill and the MapR Drill ODBC Driver. </p>
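
On Windows the DSN is created in the ODBC Data Source Administrator; on Linux and Mac OS X the equivalent entry lives in odbc.ini. A representative sketch of a ZooKeeper-based DSN follows; the driver path and host names are illustrative, and the key names are the ones documented for the MapR Drill ODBC Driver:

    [MapR Drill]
    # Path to the installed driver library (varies by platform and version)
    Driver=/opt/mapr/drillodbc/lib/64/libmaprdrillodbc64.so
    # ZooKeeper mode lets the driver pick an available Drillbit in the cluster
    ConnectionType=ZooKeeper
    ZKQuorum=zkhost1:2181,zkhost2:2181
    ZKClusterID=drillbits1
    Catalog=DRILL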
 
@@ -1070,7 +1070,7 @@
 
 <hr>
 
-<h3 id="step-2:-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h3>
+<h3 id="step-2-configure-a-connection-in-qlik-sense">Step 2: Configure a Connection in Qlik Sense</h3>
 
 <p>Once you create an ODBC DSN, it shows up as another option when you create a connection from a new or existing Qlik Sense application. The steps for creating a connection from an application are the same in Qlik Sense Desktop and Qlik Sense Server. </p>
 
@@ -1086,7 +1086,7 @@
 
 <hr>
 
-<h3 id="step-3:-authenticate">Step 3: Authenticate</h3>
+<h3 id="step-3-authenticate">Step 3: Authenticate</h3>
 
 <p>After providing the credentials and saving the connection, click <strong>Select</strong> in the new connection to trigger the authentication against Drill.  </p>
 
@@ -1102,7 +1102,7 @@
 
 <hr>
 
-<h3 id="step-4:-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h3>
+<h3 id="step-4-select-tables-and-load-the-data-model">Step 4: Select Tables and Load the Data Model</h3>
 
 <p>Explore the various tables available in Drill, and select the tables of interest. For each table selected, Qlik Sense shows a preview of the logic used for the table.  </p>
 
@@ -1137,7 +1137,7 @@
 
 <hr>
 
-<h3 id="step-5:-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h3>
+<h3 id="step-5-analyze-data-with-qlik-sense-and-drill">Step 5: Analyze Data with Qlik Sense and Drill</h3>
 
 <p>After the data model is loaded into the application, use Qlik Sense to build a wide range of visualizations on top of the data that Drill delivers via ODBC. Qlik Sense specializes in self-service data visualization at the point of decision.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-the-jdbc-driver/index.html
----------------------------------------------------------------------
diff --git a/docs/using-the-jdbc-driver/index.html b/docs/using-the-jdbc-driver/index.html
index b28bb0f..31a973c 100644
--- a/docs/using-the-jdbc-driver/index.html
+++ b/docs/using-the-jdbc-driver/index.html
@@ -1072,8 +1072,8 @@ drill-1.0.0.tar.gz</a>. Extract the file. On Windows, you may need to use a deco
 
 <p>The format of the JDBC URL differs slightly, depending on the way you want to connect to the Drillbit: random, local, or direct. This section covers using the URL for a random or local connection. Using a URL to <a href="/docs/using-the-jdbc-driver/#using-the-jdbc-url-format-for-a-direct-drillbit-connection">directly connect to a Drillbit</a> is covered later. If you want ZooKeeper to randomly choose a Drillbit in the cluster, or if you want to connect to the local Drillbit, the format of the driver URL is:</p>
 
-<p><code>jdbc:drill:[schema=&lt;storage plugin&gt;;]zk=&lt;zk name&gt;[:&lt;port&gt;][,&lt;zk name2&gt;[:&lt;port&gt;]...</code><br>
-  <code>]&lt;directory&gt;/&lt;cluster ID&gt;</code></p>
+<p><code>jdbc:drill:zk=&lt;zk name&gt;[:&lt;port&gt;][,&lt;zk name2&gt;[:&lt;port&gt;]...</code><br>
+  <code>&lt;directory&gt;/&lt;cluster ID&gt;;[schema=&lt;storage plugin&gt;]</code></p>
 
 <p>where</p>
 
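As a concrete illustration, a minimal Java client using the random/ZooKeeper form of the URL might look like the following sketch. It assumes the drill-jdbc-all jar is on the classpath; the quorum hosts are illustrative, and drill/drillbits1 reflects the default zk.root and cluster-id values:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DrillJdbcExample {
        public static void main(String[] args) throws Exception {
            // zk=<quorum>/<directory>/<cluster ID>, with the optional schema
            // property appended after a semicolon
            String url = "jdbc:drill:zk=zkhost1:2181,zkhost2:2181/drill/drillbits1;schema=dfs";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT version FROM sys.version")) {
                while (rs.next()) {
                    System.out.println(rs.getString("version"));  // print the Drill version
                }
            }
        }
    }
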
@@ -1119,8 +1119,8 @@ drill.exec: {
 
 <p>If you want to connect directly to a Drillbit instead of using ZooKeeper to choose the Drillbit, replace <code>zk=&lt;zk name&gt;</code> with <code>drillbit=&lt;node&gt;</code> as shown in the following URL.</p>
 
-<p><code>jdbc:drill:[schema=&lt;storage plugin&gt;;]drillbit=&lt;node name&gt;[:&lt;port&gt;][,&lt;node name2&gt;[:&lt;port&gt;]...</code><br>
-  <code>]&lt;directory&gt;/&lt;cluster ID&gt;</code></p>
+<p><code>jdbc:drill:drillbit=&lt;node name&gt;[:&lt;port&gt;][,&lt;node name2&gt;[:&lt;port&gt;]...</code><br>
+  <code>&lt;directory&gt;/&lt;cluster ID&gt;[schema=&lt;storage plugin&gt;]</code></p>
 
 <p>where</p>
 
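For instance, with the default user port, a direct connection to a Drillbit on node1 that uses the dfs storage plugin as the default schema would be written as:

    jdbc:drill:drillbit=node1:31010;schema=dfs

Here node1 is an illustrative host name and 31010 is the Drillbit's default user port.
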

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-tibco-spotfire-desktop-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-tibco-spotfire-desktop-with-drill/index.html b/docs/using-tibco-spotfire-desktop-with-drill/index.html
index cc12089..f62ea40 100644
--- a/docs/using-tibco-spotfire-desktop-with-drill/index.html
+++ b/docs/using-tibco-spotfire-desktop-with-drill/index.html
@@ -1044,7 +1044,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download matches the Apache Drill version that you use. Ideally, you should upgrade to the latest versions of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1062,7 +1062,7 @@
 
 <hr>
 
-<h3 id="step-2:-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h3>
+<h3 id="step-2-configure-the-spotfire-desktop-data-connection-for-drill">Step 2: Configure the Spotfire Desktop Data Connection for Drill</h3>
 
 <p>Complete the following steps to configure a Drill data connection: </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/value-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/value-window-functions/index.html b/docs/value-window-functions/index.html
index 6892520..c92b79f 100644
--- a/docs/value-window-functions/index.html
+++ b/docs/value-window-functions/index.html
@@ -1073,12 +1073,12 @@
 
 <h2 id="syntax">Syntax</h2>
 
-<h3 id="lag-|-lead">LAG | LEAD</h3>
+<h3 id="lag-lead">LAG | LEAD</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   LAG | LEAD
    ( expression )
    OVER ( [ PARTITION BY expr_list ] [ ORDER BY order_list ] )  
 </code></pre></div>
-<h3 id="first_value-|-last_value">FIRST_VALUE | LAST_VALUE</h3>
+<h3 id="first_value-last_value">FIRST_VALUE | LAST_VALUE</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   FIRST_VALUE | LAST_VALUE
    ( expression ) OVER
    ( [ PARTITION BY expr_list ] [ ORDER BY order_list ][ frame_clause ] )  
@@ -1104,7 +1104,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
 
 <p>The following examples show queries that use each of the value window functions in Drill.  </p>
 
-<h3 id="lag()">LAG()</h3>
+<h3 id="lag">LAG()</h3>
 
 <p>The following example uses the LAG window function to show the quantity of records sold to the Tower Records customer with customer ID 8  and the dates that customer 8 purchased records. To compare each sale with the previous sale for customer 8, the query returns the previous quantity sold for each sale. Since there is no purchase before 1976-01-25, the first previous quantity sold value is null. Note that the term &quot;date&quot; in the query is enclosed in back ticks because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select cust_id, `date`, qty_sold, lag(qty_sold,1) over (order by cust_id, `date`) as prev_qtysold from sales where cust_id = 8 order by cust_id, `date`;  
@@ -1120,7 +1120,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +----------+-------------+-----------+---------------+
    5 rows selected (0.331 seconds)
 </code></pre></div>
-<h3 id="lead()">LEAD()</h3>
+<h3 id="lead">LEAD()</h3>
 
 <p>The following example uses the LEAD window function to provide the commission for concert tickets with show ID 172 and the next commission for subsequent ticket sales. Since there is no commission after 40.00, the last next_comm value is null. Note that the term &quot;date&quot; in the query is enclosed in back ticks because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select show_id, `date`, commission, lead(commission,1) over (order by `date`) as next_comm from commission where show_id = 172;
@@ -1142,7 +1142,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +----------+-------------+-------------+------------+
    12 rows selected (0.241 seconds)
 </code></pre></div>
-<h3 id="first_value()">FIRST_VALUE()</h3>
+<h3 id="first_value">FIRST_VALUE()</h3>
 
 <p>The following example uses the FIRST_VALUE window function to identify the employee with the lowest sales for each dealer in Q1:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, first_value(sales) over (partition by dealer_id order by sales) as dealer_low from q1_sales;
@@ -1162,7 +1162,7 @@ The frame clause refines the set of rows in a function&#39;s window, including o
    +-----------------+------------+--------+-------------+
    10 rows selected (0.299 seconds)
 </code></pre></div>
-<h3 id="last_value()">LAST_VALUE()</h3>
+<h3 id="last_value">LAST_VALUE()</h3>
 
 <p>The following example uses the LAST_VALUE window function to identify the last car sale each employee made at each dealership in 2013:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, `year`, last_value(sales) over (partition by  emp_name order by `year`) as last_sale from emp_sales where `year` = 2013;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/why-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/why-drill/index.html b/docs/why-drill/index.html
index 86910c6..3fdaeb7 100644
--- a/docs/why-drill/index.html
+++ b/docs/why-drill/index.html
@@ -1033,7 +1033,7 @@
       
         <h2 id="top-10-reasons-to-use-drill">Top 10 Reasons to Use Drill</h2>
 
-<h2 id="1.-get-started-in-minutes">1. Get started in minutes</h2>
+<h2 id="1-get-started-in-minutes">1. Get started in minutes</h2>
 
 <p>It takes just a few minutes to get started with Drill. Untar the Drill software on your Linux, Mac, or Windows laptop and run a query on a local file. No need to set up any infrastructure or to define schemas. Just point to the data, such as a file, directory, or HBase table, and drill.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ tar -xvf apache-drill-&lt;version&gt;.tar.gz
@@ -1047,11 +1047,11 @@ $ &lt;install directory&gt;/bin/drill-embedded
 | 4            | Michael Spence             | Michael             | Spence        | 2            | VP Country Manager         | 0         | 1              | 1969-06-20  | 1998-01-01 00:00:00.0  | 40000.0  | 1              | Graduate Degree      | S               | M       | Senior Management     |
 | 5            | Maya Gutierrez             | Maya                | Gutierrez     | 2            | VP Country Manager         | 0         | 1              | 1951-05-10  | 1998-01-01 00:00:00.0  | 35000.0  | 1              | Bachelors Degree     | M               | F       | Senior Management     |
 </code></pre></div>
-<h2 id="2.-schema-free-json-model">2. Schema-free JSON model</h2>
+<h2 id="2-schema-free-json-model">2. Schema-free JSON model</h2>
 
 <p>Drill is the world&#39;s first and only distributed SQL engine that doesn&#39;t require schemas. It shares the same schema-free JSON model as MongoDB and Elasticsearch. No need to define and maintain schemas or transform data (ETL). Drill automatically understands the structure of the data. </p>
 
-<h2 id="3.-query-complex,-semi-structured-data-in-situ">3. Query complex, semi-structured data in-situ</h2>
+<h2 id="3-query-complex-semi-structured-data-in-situ">3. Query complex, semi-structured data in-situ</h2>
 
 <p>Using Drill&#39;s schema-free JSON model, you can query complex, semi-structured data in situ. No need to flatten or transform the data prior to or during query execution. Drill also provides intuitive extensions to SQL to work with nested data. Here&#39;s a simple query on a JSON file demonstrating how to access nested elements and arrays:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM (SELECT t.trans_id,
@@ -1062,7 +1062,7 @@ WHERE sq.prod_id BETWEEN 700 AND 750 AND
       sq.purchased = &#39;true&#39;
 ORDER BY sq.prod_id;
 </code></pre></div>
-<h2 id="4.-real-sql----not-&quot;sql-like&quot;">4. Real SQL -- not &quot;SQL-like&quot;</h2>
+<h2 id="4-real-sql-not-quot-sql-like-quot">4. Real SQL -- not &quot;SQL-like&quot;</h2>
 
 <p>Drill supports the standard SQL:2003 syntax. No need to learn a new &quot;SQL-like&quot; language or struggle with a semi-functional BI tool. Drill supports many data types including DATE, INTERVAL, TIMESTAMP, and VARCHAR, as well as complex query constructs such as correlated sub-queries and joins in WHERE clauses. Here is an example of a TPC-H standard query that runs in Drill:</p>
 
@@ -1079,11 +1079,11 @@ WHERE o.o_orderdate &gt;= DATE &#39;1996-10-01&#39;
       GROUP BY o.o_orderpriority
       ORDER BY o.o_orderpriority;
 </code></pre></div>
-<h2 id="5.-leverage-standard-bi-tools">5. Leverage standard BI tools</h2>
+<h2 id="5-leverage-standard-bi-tools">5. Leverage standard BI tools</h2>
 
 <p>Drill works with standard BI tools. You can use your existing tools, such as Tableau, MicroStrategy, QlikView and Excel. </p>
 
-<h2 id="6.-interactive-queries-on-hive-tables">6. Interactive queries on Hive tables</h2>
+<h2 id="6-interactive-queries-on-hive-tables">6. Interactive queries on Hive tables</h2>
 
 <p>Apache Drill lets you leverage your investments in Hive. You can run interactive queries with Drill on your Hive tables and access all Hive input/output formats (including custom SerDes). You can join tables associated with different Hive metastores, and you can join a Hive table with an HBase table or a directory of log files. Here&#39;s a simple query in Drill on a Hive table:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `month`, state, sum(order_total) AS sales
@@ -1091,7 +1091,7 @@ FROM hive.orders
 GROUP BY `month`, state
 ORDER BY 3 DESC LIMIT 5;
 </code></pre></div>
-<h2 id="7.-access-multiple-data-sources">7. Access multiple data sources</h2>
+<h2 id="7-access-multiple-data-sources">7. Access multiple data sources</h2>
 
 <p>Drill is extensible. You can connect Drill out-of-the-box to file systems (local or distributed, such as S3 and HDFS), HBase and Hive. You can implement a storage plugin to make Drill work with any other data source. Drill can combine data from multiple data sources on the fly in a single query, with no centralized metadata definitions. Here&#39;s a query that combines data from a Hive table, an HBase table (view) and a JSON file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT custview.membership, sum(orders.order_total) AS sales
@@ -1100,15 +1100,15 @@ WHERE orders.cust_id = custview.cust_id AND orders.cust_id = c.user_info.cust_id
 GROUP BY custview.membership
 ORDER BY 2;
 </code></pre></div>
-<h2 id="8.-user-defined-functions-(udfs)-for-drill-and-hive">8. User-Defined Functions (UDFs) for Drill and Hive</h2>
+<h2 id="8-user-defined-functions-udfs-for-drill-and-hive">8. User-Defined Functions (UDFs) for Drill and Hive</h2>
 
 <p>Drill exposes a simple, high-performance Java API to build <a href="/docs/develop-custom-functions/">custom user-defined functions</a> (UDFs) for adding your own business logic to Drill.  Drill also supports Hive UDFs. If you have already built UDFs in Hive, you can reuse them with Drill with no modifications. </p>
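
To make the shape of that API concrete, here is a minimal sketch of a UDF that adds two integers, written against the DrillSimpleFunc interface and holder classes in the org.apache.drill.exec packages; the function name add_ints is illustrative:

    import org.apache.drill.exec.expr.DrillSimpleFunc;
    import org.apache.drill.exec.expr.annotations.FunctionTemplate;
    import org.apache.drill.exec.expr.annotations.Output;
    import org.apache.drill.exec.expr.annotations.Param;
    import org.apache.drill.exec.expr.holders.IntHolder;

    @FunctionTemplate(
        name = "add_ints",
        scope = FunctionTemplate.FunctionScope.SIMPLE,
        nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
    public class AddIntsFunction implements DrillSimpleFunc {

        @Param IntHolder left;   // first input value
        @Param IntHolder right;  // second input value
        @Output IntHolder out;   // result handed back to the query

        public void setup() {
            // one-time initialization; nothing needed for simple arithmetic
        }

        public void eval() {
            // called once per row: read the input holders, write the result
            out.value = left.value + right.value;
        }
    }

Packaged as a jar (together with its sources jar) and placed on Drill's classpath, such a function can be called from SQL like any built-in, for example select add_ints(2, 3) from (values(1)).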
 
-<h2 id="9.-high-performance">9. High performance</h2>
+<h2 id="9-high-performance">9. High performance</h2>
 
 <p>Drill is designed from the ground up for high throughput and low latency. It doesn&#39;t use a general purpose execution engine like MapReduce, Tez or Spark. As a result, Drill is flexible (schema-free JSON model) and performant. Drill&#39;s optimizer leverages rule- and cost-based techniques, as well as data locality and operator push-down, which is the capability to push down query fragments into the back-end data sources. Drill also provides a columnar and vectorized execution engine, resulting in higher memory and CPU efficiency.</p>
 
-<h2 id="10.-scales-from-a-single-laptop-to-a-1000-node-cluster">10. Scales from a single laptop to a 1000-node cluster</h2>
+<h2 id="10-scales-from-a-single-laptop-to-a-1000-node-cluster">10. Scales from a single laptop to a 1000-node cluster</h2>
 
 <p>Drill is available as a simple download you can run on your laptop. When you&#39;re ready to analyze larger datasets, deploy Drill on your Hadoop cluster (up to 1000 commodity servers). Drill leverages the aggregate memory in the cluster to execute queries using an optimistic pipelined model, and automatically spills to disk when the working set doesn&#39;t fit in memory.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/workspaces/index.html
----------------------------------------------------------------------
diff --git a/docs/workspaces/index.html b/docs/workspaces/index.html
index 725510c..a788a13 100644
--- a/docs/workspaces/index.html
+++ b/docs/workspaces/index.html
@@ -1082,7 +1082,7 @@ location of the data:</p>
 
 <p><code>&lt;plugin&gt;.&lt;workspace name&gt;.`&lt;location&gt;`</code></p>
 
-<h2 id="overriding-dfs.default">Overriding <code>dfs.default</code></h2>
+<h2 id="overriding-dfs-default">Overriding <code>dfs.default</code></h2>
 
 <p>You may want to override the hidden default workspace in scenarios where users do not have permissions to access the root directory. 
 Add the following workspace entry to the <code>dfs</code> storage plugin configuration to override the default workspace:</p>

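A representative override entry has the following shape; the location path and flag values are illustrative, and location should point to a directory the users can access:

    "workspaces": {
      "default": {
        "location": "/user/data",
        "writable": false,
        "defaultInputFormat": null
      }
    }
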
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/faq/index.html
----------------------------------------------------------------------
diff --git a/faq/index.html b/faq/index.html
index db5a1c7..3d0d5a2 100644
--- a/faq/index.html
+++ b/faq/index.html
@@ -115,11 +115,11 @@
 
 <div class="int_text" align="left"><h2 id="overview">Overview</h2>
 
-<h3 id="why-drill?">Why Drill?</h3>
+<h3 id="why-drill">Why Drill?</h3>
 
 <p>The 40-year monopoly of the RDBMS is over. With the exponential growth of data in recent years, and the shift towards rapid application development, new data is increasingly being stored in non-relational datastores including Hadoop, NoSQL and cloud storage. Apache Drill enables analysts, business users, data scientists and developers to explore and analyze this data without sacrificing the flexibility and agility offered by these datastores. Drill processes the data in-situ without requiring users to define schemas or transform data.</p>
 
-<h3 id="what-are-some-of-drill&#39;s-key-features?">What are some of Drill&#39;s key features?</h3>
+<h3 id="what-are-some-of-drill-39-s-key-features">What are some of Drill&#39;s key features?</h3>
 
 <p>Drill is an innovative distributed SQL engine designed to enable data exploration and analytics on non-relational datastores. Users can query the data using standard SQL and BI tools without having to create and manage schemas. Some of the key features are:</p>
 
@@ -130,7 +130,7 @@
 <li>Pluggable architecture enables connectivity to multiple datastores</li>
 </ul>
 
-<h3 id="how-does-drill-achieve-performance?">How does Drill achieve performance?</h3>
+<h3 id="how-does-drill-achieve-performance">How does Drill achieve performance?</h3>
 
 <p>Drill is built from the ground up to achieve high throughput and low latency. The following capabilities help accomplish that:</p>
 
@@ -142,7 +142,7 @@
 <li><strong>Optimistic/pipelined execution</strong>: Drill is able to stream data in memory between operators. Drill minimizes the use of disks unless needed to complete the query.</li>
 </ul>
 
-<h3 id="what-datastores-does-drill-support?">What datastores does Drill support?</h3>
+<h3 id="what-datastores-does-drill-support">What datastores does Drill support?</h3>
 
 <p>Drill is primarily focused on non-relational datastores, including Hadoop, NoSQL and cloud storage. The following datastores are currently supported:</p>
 
@@ -154,7 +154,7 @@
 
 <p>A new datastore can be added by developing a storage plugin. Drill&#39;s unique schema-free JSON data model enables it to query non-relational datastores in-situ (many of these systems store complex or schema-free data).</p>
 
-<h3 id="what-clients-are-supported?">What clients are supported?</h3>
+<h3 id="what-clients-are-supported">What clients are supported?</h3>
 
 <ul>
 <li><strong>BI tools</strong> via the ODBC and JDBC drivers (eg, Tableau, Excel, MicroStrategy, Spotfire, QlikView, Business Objects)</li>
@@ -164,7 +164,7 @@
 
 <h2 id="comparisons">Comparisons</h2>
 
-<h3 id="is-drill-a-&#39;sql-on-hadoop&#39;-engine?">Is  Drill a &#39;SQL-on-Hadoop&#39; engine?</h3>
+<h3 id="is-drill-a-39-sql-on-hadoop-39-engine">Is  Drill a &#39;SQL-on-Hadoop&#39; engine?</h3>
 
 <p>Drill supports a variety of non-relational datastores in addition to Hadoop. Drill takes a different approach compared to traditional SQL-on-Hadoop technologies like Hive and Impala. For example, users can directly query self-describing data (eg, JSON, Parquet) without having to create and manage schemas.</p>
 
@@ -219,11 +219,11 @@
 </tr>
 </tbody></table>
 
-<h3 id="is-spark-sql-similar-to-drill?">Is Spark SQL similar to Drill?</h3>
+<h3 id="is-spark-sql-similar-to-drill">Is Spark SQL similar to Drill?</h3>
 
 <p>No. Spark SQL is primarily designed to enable developers to incorporate SQL statements in Spark programs. Drill does not depend on Spark, and is targeted at business users, analysts, data scientists and developers. </p>
 
-<h3 id="does-drill-replace-hive?">Does Drill replace Hive?</h3>
+<h3 id="does-drill-replace-hive">Does Drill replace Hive?</h3>
 
 <p>Hive is a batch processing framework most suitable for long-running jobs. For data exploration and BI, Drill provides a much better experience than Hive.</p>
 
@@ -231,7 +231,7 @@
 
 <h2 id="metadata">Metadata</h2>
 
-<h3 id="how-does-drill-support-queries-on-self-describing-data?">How does Drill support queries on self-describing data?</h3>
+<h3 id="how-does-drill-support-queries-on-self-describing-data">How does Drill support queries on self-describing data?</h3>
 
 <p>Drill&#39;s flexible JSON data model and on-the-fly schema discovery enable it to query self-describing data.</p>
 
@@ -240,11 +240,11 @@
 <li><strong>On-the-fly schema discovery (or late binding)</strong>: Traditional query engines (eg, relational databases, Hive, Impala, Spark SQL) need to know the structure of the data before query execution. Drill, on the other hand, features a fundamentally different architecture, which enables execution to begin without knowing the structure of the data. The query is automatically compiled and re-compiled during the execution phase, based on the actual data flowing through the system. As a result, Drill can handle data with evolving schema or even no schema at all (eg, JSON files, MongoDB collections, HBase tables).</li>
 </ul>
 
-<h3 id="but-i-already-have-schemas-defined-in-hive-metastore?-can-i-use-that-with-drill?">But I already have schemas defined in the Hive Metastore. Can I use them with Drill?</h3>
+<h3 id="but-i-already-have-schemas-defined-in-hive-metastore-can-i-use-that-with-drill">But I already have schemas defined in the Hive Metastore. Can I use them with Drill?</h3>
 
 <p>Absolutely. Drill has a storage plugin for Hive tables, so you can simply point Drill to the Hive Metastore and start performing low-latency queries on Hive tables. In fact, a single Drill cluster can query data from multiple Hive Metastores, and even perform joins across these datasets.</p>
 
-<h3 id="is-drill-&quot;anti-schema&quot;-or-&quot;anti-dba&quot;?">Is Drill &quot;anti-schema&quot; or &quot;anti-DBA&quot;?</h3>
+<h3 id="is-drill-quot-anti-schema-quot-or-quot-anti-dba-quot">Is Drill &quot;anti-schema&quot; or &quot;anti-DBA&quot;?</h3>
 
 <p>Not at all. Drill actually takes advantage of schemas when available. For example, Drill leverages the schema information in Hive when querying Hive tables. However, when querying schema-free datastores like MongoDB, or raw files on S3 or Hadoop, schemas are not available, and Drill is still able to query that data.</p>
 
@@ -258,7 +258,7 @@
 
 <p>Drill is all about flexibility. The flexible schema management capabilities in Drill allow users to explore raw data and then create models/structure with <code>CREATE TABLE</code> or <code>CREATE VIEW</code> statements, or with Hive Metastore.</p>
 
-<h3 id="what-does-a-drill-query-look-like?">What does a Drill query look like?</h3>
+<h3 id="what-does-a-drill-query-look-like">What does a Drill query look like?</h3>
 
 <p>Drill uses a decentralized metadata model and relies on its storage plugins to provide metadata. There is a storage plugin associated with each data source that is supported by Drill.</p>
 
@@ -269,25 +269,25 @@
 <span class="k">SELECT</span> <span class="o">*</span> <span class="k">FROM</span> <span class="n">hive1</span><span class="p">.</span><span class="n">logs</span><span class="p">.</span><span class="n">frontend</span><span class="p">;</span>
 <span class="k">SELECT</span> <span class="o">*</span> <span class="k">FROM</span> <span class="n">hbase1</span><span class="p">.</span><span class="n">events</span><span class="p">.</span><span class="n">clicks</span><span class="p">;</span>
 </code></pre></div>
-<h3 id="what-sql-functionality-does-drill-support?">What SQL functionality does Drill support?</h3>
+<h3 id="what-sql-functionality-does-drill-support">What SQL functionality does Drill support?</h3>
 
 <p>Drill supports standard SQL (aka ANSI SQL). In addition, it features several extensions that help with complex data, such as the <code>KVGEN</code> and <code>FLATTEN</code> functions. For more details, refer to the <a href="/docs/sql-reference/">SQL Reference</a>.</p>
 
-<h3 id="do-i-need-to-load-data-into-drill-to-start-querying-it?">Do I need to load data into Drill to start querying it?</h3>
+<h3 id="do-i-need-to-load-data-into-drill-to-start-querying-it">Do I need to load data into Drill to start querying it?</h3>
 
 <p>No. Drill can query data &#39;in-situ&#39;.</p>
 
 <h2 id="getting-started">Getting Started</h2>
 
-<h3 id="what-is-the-best-way-to-get-started-with-drill?">What is the best way to get started with Drill?</h3>
+<h3 id="what-is-the-best-way-to-get-started-with-drill">What is the best way to get started with Drill?</h3>
 
 <p>The best way to get started is to try it out. It only takes a few minutes and all you need is a laptop (Mac, Windows or Linux). We&#39;ve compiled <a href="/docs/tutorials-introduction/">several tutorials</a> to help you get started.</p>
 
-<h3 id="how-can-i-ask-questions-and-provide-feedback?">How can I ask questions and provide feedback?</h3>
+<h3 id="how-can-i-ask-questions-and-provide-feedback">How can I ask questions and provide feedback?</h3>
 
 <p>Please post your questions and feedback to <a href="mailto:user@drill.apache.org">user@drill.apache.org</a>. We are happy to help!</p>
 
-<h3 id="how-can-i-contribute-to-drill?">How can I contribute to Drill?</h3>
+<h3 id="how-can-i-contribute-to-drill">How can I contribute to Drill?</h3>
 
 <p>The documentation has information on <a href="/docs/contribute-to-drill/">how to contribute</a>.</p>
 </div>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index 6734dce..1769470 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,9 +6,9 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Wed, 09 Dec 2015 15:17:14 -0800</pubDate>
-    <lastBuildDate>Wed, 09 Dec 2015 15:17:14 -0800</lastBuildDate>
-    <generator>Jekyll v2.5.2</generator>
+    <pubDate>Wed, 09 Dec 2015 18:34:28 -0800</pubDate>
+    <lastBuildDate>Wed, 09 Dec 2015 18:34:28 -0800</lastBuildDate>
+    <generator>Jekyll v2.4.0</generator>
     
       <item>
         <title>Drill 1.3 Released</title>
@@ -135,8 +135,9 @@ Jacques Nadeau&lt;/p&gt;
     
       <item>
         <title>Drill Tutorial at NoSQL Now! 2015</title>
-        <description>&lt;p&gt;&lt;script type=&quot;text/javascript&quot; src=&quot;//addthisevent.com/libs/1.5.8/ate.min.js&quot;&gt;&lt;/script&gt;
-&lt;a href=&quot;/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/&quot; title=&quot;Add to Calendar&quot; class=&quot;addthisevent&quot;&gt;
+        <description>&lt;script type=&quot;text/javascript&quot; src=&quot;//addthisevent.com/libs/1.5.8/ate.min.js&quot;&gt;&lt;/script&gt;
+
+&lt;p&gt;&lt;a href=&quot;/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/&quot; title=&quot;Add to Calendar&quot; class=&quot;addthisevent&quot;&gt;
     Add to Calendar
     &lt;span class=&quot;_start&quot;&gt;08-20-2015 13:00:00&lt;/span&gt;
    &lt;span class=&quot;_end&quot;&gt;08-20-2015 16:15:00&lt;/span&gt;
@@ -205,7 +206,7 @@ Jacques Nadeau&lt;/p&gt;
   &amp;lt;version&amp;gt;1.1.0&amp;lt;/version&amp;gt;
 &amp;lt;/dependency&amp;gt;
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-&lt;h2 id=&quot;mongodb-3.0-support&quot;&gt;MongoDB 3.0 Support&lt;/h2&gt;
+&lt;h2 id=&quot;mongodb-3-0-support&quot;&gt;MongoDB 3.0 Support&lt;/h2&gt;
 
 &lt;p&gt;Drill now uses MongoDB&amp;#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the &lt;a href=&quot;https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/&quot;&gt;MongoDB plugin&lt;/a&gt;.&lt;/p&gt;
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/js/script.js
----------------------------------------------------------------------
diff --git a/js/script.js b/js/script.js
index 8fc7a54..ea318e2 100644
--- a/js/script.js
+++ b/js/script.js
@@ -1,107 +1,107 @@
-var reelPointer = null;
-$(document).ready(function(e) {
-
-  $(".aLeft").click(function() {
-		moveReel("prev");
-	});
-	$(".aRight").click(function() {
-		moveReel("next");
-	});
-
-	if ($("#header .scroller .item").length == 1) {
-
-	} else {
-
-		$("#header .dots, .aLeft, .aRight").css({ display: 'block' });
-		$("#header .scroller .item").each(function(i) {
-			$("#header .dots").append("<div class='dot'></div>");
-			$("#header .dots .dot").eq(i).click(function() {
-				var index = $(this).prevAll(".dot").length;
-				moveReel(index);
-			});
-		});
-
-		reelPointer = setTimeout(function() { moveReel(1); },5000);
-	}
-	
-	$("#menu ul li").each(function(index, element) {
-        if ($(this).find("ul").length) {
-			$(this).addClass("parent");
-		}
-    });
-
-	$("#header .dots .dot:eq(0)").addClass("sel");
-
-	resized();
-
-	$(window).scroll(onScroll);
-
-    var pathname = window.location.pathname;
-    var pathSlashesReplaced = pathname.replace(/\//g, " ");
-    var pathSlashesReplacedNoFirstDash = pathSlashesReplaced.replace(" ","");
-    var newClass = pathSlashesReplacedNoFirstDash.replace(/(\.[\s\S]+)/ig, "");
-	$("body").addClass(newClass);
-    if ( $("body").attr("class") == "")
-    {
-         $("body").addClass("class");
-    }
-});
-
-var reel_currentIndex = 0;
-function resized() {
-
-	var WW = parseInt($(window).width(),10);
-	var IW = (WW < 999) ? 999 : WW;
-	var IH = parseInt($("#header .scroller .item").css("height"),10);
-	var IN = $("#header .scroller .item").length;
-
-	$("#header .scroller").css({ width: (IN * IW)+"px", marginLeft: -(reel_currentIndex * IW)+"px" });
-	$("#header .scroller .item").css({ width: IW+"px" });
-
-
-	$("#header .scroller .item").each(function(i) {
-		var th = parseInt($(this).find(".tc").height(),10);
-		var d = IH - th + 25;
-		$(this).find(".tc").css({ top: Math.round(d/2)+"px" });
-	});
-
-	if (WW < 999) $("#menu").addClass("r");
-	else $("#menu").removeClass("r");
-
-	onScroll();
-
-}
-
-function moveReel(direction) {
-
-	if (reelPointer) clearTimeout(reelPointer);
-
-	var IN = $("#header .scroller .item").length;
-	var IW = $("#header .scroller .item").width();
-	if (direction == "next") reel_currentIndex++;
-	else if (direction == "prev") reel_currentIndex--;
-	else reel_currentIndex = direction;
-
-	if (reel_currentIndex >= IN) reel_currentIndex = 0;
-	if (reel_currentIndex < 0) reel_currentIndex = IN-1;
-
-	$("#header .dots .dot").removeClass("sel");
-	$("#header .dots .dot").eq(reel_currentIndex).addClass("sel");
-
-	$("#header .scroller").stop(false,true,false).animate({ marginLeft: -(reel_currentIndex * IW)+"px" }, 1000, "easeOutQuart");
-
-	reelPointer = setTimeout(function() { moveReel(1); },5000);
-
-}
-
-function onScroll() {
-	var ST = document.body.scrollTop || document.documentElement.scrollTop;
-	//if ($("#menu.r").length) {
-	//	$("#menu.r").css({ top: ST+"px" });
-	//} else {
-	//	$("#menu").css({ top: "0px" });
-	//}
-
-	if (ST > 400) $("#subhead").addClass("show");
-	else $("#subhead").removeClass("show");
-}
+var reelPointer = null;
+$(document).ready(function(e) {
+
+  $(".aLeft").click(function() {
+		moveReel("prev");
+	});
+	$(".aRight").click(function() {
+		moveReel("next");
+	});
+
+	if ($("#header .scroller .item").length == 1) {
+
+	} else {
+
+		$("#header .dots, .aLeft, .aRight").css({ display: 'block' });
+		$("#header .scroller .item").each(function(i) {
+			$("#header .dots").append("<div class='dot'></div>");
+			$("#header .dots .dot").eq(i).click(function() {
+				var index = $(this).prevAll(".dot").length;
+				moveReel(index);
+			});
+		});
+
+		reelPointer = setTimeout(function() { moveReel(1); },5000);
+	}
+	
+	$("#menu ul li").each(function(index, element) {
+        if ($(this).find("ul").length) {
+			$(this).addClass("parent");
+		}
+    });
+
+	$("#header .dots .dot:eq(0)").addClass("sel");
+
+	resized();
+
+	$(window).scroll(onScroll);
+
+    var pathname = window.location.pathname;
+    var pathSlashesReplaced = pathname.replace(/\//g, " ");
+    var pathSlashesReplacedNoFirstDash = pathSlashesReplaced.replace(" ","");
+    var newClass = pathSlashesReplacedNoFirstDash.replace(/(\.[\s\S]+)/ig, "");
+	$("body").addClass(newClass);
+    if ( $("body").attr("class") == "")
+    {
+         $("body").addClass("class");
+    }
+});
+
+var reel_currentIndex = 0;
+function resized() {
+
+	var WW = parseInt($(window).width(),10);
+	var IW = (WW < 999) ? 999 : WW;
+	var IH = parseInt($("#header .scroller .item").css("height"),10);
+	var IN = $("#header .scroller .item").length;
+
+	$("#header .scroller").css({ width: (IN * IW)+"px", marginLeft: -(reel_currentIndex * IW)+"px" });
+	$("#header .scroller .item").css({ width: IW+"px" });
+
+
+	$("#header .scroller .item").each(function(i) {
+		var th = parseInt($(this).find(".tc").height(),10);
+		var d = IH - th + 25;
+		$(this).find(".tc").css({ top: Math.round(d/2)+"px" });
+	});
+
+	if (WW < 999) $("#menu").addClass("r");
+	else $("#menu").removeClass("r");
+
+	onScroll();
+
+}
+
+function moveReel(direction) {
+
+	if (reelPointer) clearTimeout(reelPointer);
+
+	var IN = $("#header .scroller .item").length;
+	var IW = $("#header .scroller .item").width();
+	if (direction == "next") reel_currentIndex++;
+	else if (direction == "prev") reel_currentIndex--;
+	else reel_currentIndex = direction;
+
+	if (reel_currentIndex >= IN) reel_currentIndex = 0;
+	if (reel_currentIndex < 0) reel_currentIndex = IN-1;
+
+	$("#header .dots .dot").removeClass("sel");
+	$("#header .dots .dot").eq(reel_currentIndex).addClass("sel");
+
+	$("#header .scroller").stop(false,true,false).animate({ marginLeft: -(reel_currentIndex * IW)+"px" }, 1000, "easeOutQuart");
+
+	reelPointer = setTimeout(function() { moveReel(1); },5000);
+
+}
+
+function onScroll() {
+	var ST = document.body.scrollTop || document.documentElement.scrollTop;
+	//if ($("#menu.r").length) {
+	//	$("#menu.r").css({ top: ST+"px" });
+	//} else {
+	//	$("#menu").css({ top: "0px" });
+	//}
+
+	if (ST > 400) $("#subhead").addClass("show");
+	else $("#subhead").removeClass("show");
+}


[3/3] drill-site git commit: squash 4 commits

Posted by kr...@apache.org.
squash 4 commits


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/0d7ffd4b
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/0d7ffd4b
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/0d7ffd4b

Branch: refs/heads/asf-site
Commit: 0d7ffd4b0185f530d35aa1c02aa1ee02198b4601
Parents: e6f79da
Author: Kris Hahn <kr...@apache.org>
Authored: Wed Dec 9 18:45:11 2015 -0800
Committer: Kris Hahn <kr...@apache.org>
Committed: Wed Dec 9 18:45:11 2015 -0800

----------------------------------------------------------------------
 blog/2014/11/19/sql-on-mongodb/index.html       |   4 +-
 .../12/02/drill-top-level-project/index.html    |   2 +-
 .../index.html                                  |  15 +-
 blog/2014/12/16/whats-coming-in-2015/index.html |   4 +-
 .../index.html                                  |   2 +-
 blog/2015/07/05/drill-1.1-released/index.html   |   2 +-
 .../drill-tutorial-at-nosql-now-2015/index.html |   5 +-
 docs/aggregate-window-functions/index.html      |  10 +-
 .../index.html                                  |  14 +-
 .../apache-drill-1-1-0-release-notes/index.html |   6 +-
 .../apache-drill-1-2-0-release-notes/index.html |   2 +-
 .../index.html                                  |   2 +-
 docs/apache-drill-contribution-ideas/index.html |   2 +-
 docs/compiling-drill-from-source/index.html     |   4 +-
 docs/configuring-jreport-with-drill/index.html  |   6 +-
 docs/configuring-odbc-on-linux/index.html       |  10 +-
 docs/configuring-odbc-on-mac-os-x/index.html    |  10 +-
 docs/configuring-odbc-on-windows/index.html     |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |  10 +-
 docs/configuring-user-impersonation/index.html  |   2 +-
 .../index.html                                  |   2 +-
 docs/custom-function-interfaces/index.html      |   6 +-
 docs/data-type-conversion/index.html            |   2 +-
 docs/date-time-and-timestamp/index.html         |   2 +-
 .../index.html                                  |   2 +-
 docs/drill-introduction/index.html              |   6 +-
 docs/drill-patch-review-tool/index.html         |  20 +-
 docs/drill-plan-syntax/index.html               |   2 +-
 docs/drop-table/index.html                      |  14 +-
 docs/explain/index.html                         |   2 +-
 .../index.html                                  |   2 +-
 .../index.html                                  |   6 +-
 docs/installing-the-driver-on-linux/index.html  |   6 +-
 .../index.html                                  |   6 +-
 .../installing-the-driver-on-windows/index.html |   8 +-
 docs/json-data-model/index.html                 |  18 +-
 docs/kvgen/index.html                           |   2 +-
 .../index.html                                  |  30 +--
 .../index.html                                  |  28 +--
 .../index.html                                  |  36 ++--
 docs/mongodb-storage-plugin/index.html          |   2 +-
 docs/odbc-configuration-reference/index.html    |   2 +-
 docs/parquet-format/index.html                  |   2 +-
 docs/partition-pruning/index.html               |   6 +-
 docs/plugin-configuration-basics/index.html     |  10 +-
 docs/querying-hbase/index.html                  |   2 +-
 docs/querying-json-files/index.html             |   2 +-
 docs/querying-plain-text-files/index.html       |   4 +-
 docs/querying-sequence-files/index.html         |   2 +-
 docs/querying-system-tables/index.html          |  12 +-
 docs/ranking-window-functions/index.html        |  10 +-
 docs/rdbms-storage-plugin/index.html            |   2 +-
 docs/rest-api/index.html                        |  28 +--
 docs/s3-storage-plugin/index.html               |   4 +-
 docs/sequence-files/index.html                  |   4 +-
 docs/sql-extensions/index.html                  |   2 +-
 .../index.html                                  |   4 +-
 .../index.html                                  |   2 +-
 docs/starting-drill-on-windows/index.html       |   2 +-
 docs/starting-the-web-console/index.html        |   2 +-
 docs/tableau-examples/index.html                |  26 +--
 docs/troubleshooting/index.html                 |   8 +-
 .../index.html                                  |  18 +-
 docs/useful-research/index.html                 |   4 +-
 .../index.html                                  |   8 +-
 .../index.html                                  |   6 +-
 .../index.html                                  |  12 +-
 .../index.html                                  |  12 +-
 docs/using-qlik-sense-with-drill/index.html     |  10 +-
 docs/using-the-jdbc-driver/index.html           |   8 +-
 .../index.html                                  |   4 +-
 docs/value-window-functions/index.html          |  12 +-
 docs/why-drill/index.html                       |  20 +-
 docs/workspaces/index.html                      |   2 +-
 faq/index.html                                  |  34 +--
 feed.xml                                        |  13 +-
 js/script.js                                    | 214 +++++++++----------
 79 files changed, 422 insertions(+), 419 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/11/19/sql-on-mongodb/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/11/19/sql-on-mongodb/index.html b/blog/2014/11/19/sql-on-mongodb/index.html
index 32301a9..5efc20b 100644
--- a/blog/2014/11/19/sql-on-mongodb/index.html
+++ b/blog/2014/11/19/sql-on-mongodb/index.html
@@ -149,7 +149,7 @@
 <li>Optimizations</li>
 </ul>
 
-<h2 id="drill-and-mongodb-setup-(standalone/replicated/sharded)">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
+<h2 id="drill-and-mongodb-setup-standalone-replicated-sharded">Drill and MongoDB Setup (Standalone/Replicated/Sharded)</h2>
 
 <h3 id="standalone">Standalone</h3>
 
@@ -190,7 +190,7 @@
 
 <p>In replicated mode, whichever drillbit receives the query connects to the nearest <code>mongod</code> (local <code>mongod</code>) to read the data.</p>
 
-<h3 id="sharded/sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
+<h3 id="sharded-sharded-with-replica-set">Sharded/Sharded with Replica Set</h3>
 
 <ul>
 <li>Start Mongo processes in sharded mode</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/02/drill-top-level-project/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/02/drill-top-level-project/index.html b/blog/2014/12/02/drill-top-level-project/index.html
index 70743fa..8eb74e2 100644
--- a/blog/2014/12/02/drill-top-level-project/index.html
+++ b/blog/2014/12/02/drill-top-level-project/index.html
@@ -160,7 +160,7 @@
 
 <p>After almost two years of research and development, we released Drill 0.4 in August, and continued with monthly releases since then.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Graduating to a top-level project is a significant milestone, but it&#39;s really just the beginning of the journey. In fact, we&#39;re currently wrapping up Drill 0.7, which includes hundreds of fixes and enhancements, and we expect to release that in the next couple weeks.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
index 2b6fe1d..4afef95 100644
--- a/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
+++ b/blog/2014/12/11/apache-drill-qa-panelist-spotlight/index.html
@@ -127,8 +127,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2014/12/11/apache-drill-qa-panelist-spotlight/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">12-17-2014 11:30:00</span>
     <span class="_end">12-17-2014 12:30:00</span>
@@ -152,23 +153,23 @@
 
 <p>Apache Drill committers Tomer Shiran, Jacques Nadeau, and Ted Dunning, as well as Tableau Product Manager Jeff Feng and Data Scientist Dr. Kirk Borne will be on hand to answer your questions.</p>
 
-<h4 id="tomer-shiran,-apache-drill-founder-(@tshiran)">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
+<h4 id="tomer-shiran-apache-drill-founder-tshiran">Tomer Shiran, Apache Drill Founder (@tshiran)</h4>
 
 <p>Tomer Shiran is the founder of Apache Drill, and a PMC member and committer on the project. He is VP Product Management at MapR, responsible for product strategy, roadmap and new feature development. Prior to MapR, Tomer held numerous product management and engineering roles at Microsoft, most recently as the product manager for Microsoft Internet Security &amp; Acceleration Server (now Microsoft Forefront). He is the founder of two websites that have served tens of millions of users, and received coverage in prestigious publications such as The New York Times, USA Today and The Times of London. Tomer is also the author of a 900-page programming book. He holds an MS in Computer Engineering from Carnegie Mellon University and a BS in Computer Science from Technion - Israel Institute of Technology.</p>
 
-<h4 id="jeff-feng,-product-manager-tableau-software-(@jtfeng)">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
+<h4 id="jeff-feng-product-manager-tableau-software-jtfeng">Jeff Feng, Product Manager Tableau Software (@jtfeng)</h4>
 
 <p>Jeff Feng is a Product Manager at Tableau and leads their Big Data product roadmap &amp; strategic vision.  In his role, he focuses on joint technology integration and partnership efforts with a number of Hadoop, NoSQL and web application partners in helping users see and understand their data.</p>
 
-<h4 id="ted-dunning,-apache-drill-comitter-(@ted_dunning)">Ted Dunning, Apache Drill Committer (@Ted_Dunning)</h4>
+<h4 id="ted-dunning-apache-drill-comitter-ted_dunning">Ted Dunning, Apache Drill Committer (@Ted_Dunning)</h4>
 
 <p>Ted Dunning is Chief Applications Architect at MapR Technologies and a committer and PMC member of the Apache Mahout, Apache ZooKeeper, and Apache Drill projects, and a mentor for Apache Storm. He contributed to Mahout clustering, classification, and matrix decomposition algorithms and helped expand the new version of the Mahout Math library. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems; he built fraud detection systems for ID Analytics (LifeLock) and has been issued 24 patents to date. Ted has a PhD in computing science from the University of Sheffield. When he’s not doing data science, he plays guitar and mandolin.</p>
 
-<h4 id="jacques-nadeau,-vice-president,-apache-drill-(@intjesus)">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
+<h4 id="jacques-nadeau-vice-president-apache-drill-intjesus">Jacques Nadeau, Vice President, Apache Drill (@intjesus)</h4>
 
 <p>Jacques Nadeau leads Apache Drill development efforts at MapR Technologies. He is an industry veteran with over 15 years of big data and analytics experience. Most recently, he was cofounder and CTO of search engine startup YapMap. Before that, he was director of new product engineering with Quigo (contextual advertising, acquired by AOL in 2007). He also built the Avenue A | Razorfish analytics data warehousing system and associated services practice (acquired by Microsoft).</p>
 
-<h4 id="dr.-kirk-borne,-george-mason-university-(@kirkdborne)">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
+<h4 id="dr-kirk-borne-george-mason-university-kirkdborne">Dr. Kirk Borne, George Mason University (@KirkDBorne)</h4>
 
 <p>Dr. Kirk Borne is a Transdisciplinary Data Scientist and an Astrophysicist. He is Professor of Astrophysics and Computational Science in the George Mason University School of Physics, Astronomy, and Computational Sciences. He has been at Mason since 2003, where he teaches and advises students in the graduate and undergraduate Computational Science, Informatics, and Data Science programs. Previously, he spent nearly 20 years in positions supporting NASA projects, including an assignment as NASA&#39;s Data Archive Project Scientist for the Hubble Space Telescope, and as Project Manager in NASA&#39;s Space Science Data Operations Office. He has extensive experience in big data and data science, including expertise in scientific data mining and data systems. He has published over 200 articles (research papers, conference papers, and book chapters), and given over 200 invited talks at conferences and universities worldwide.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2014/12/16/whats-coming-in-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2014/12/16/whats-coming-in-2015/index.html b/blog/2014/12/16/whats-coming-in-2015/index.html
index bc2fad6..596ff85 100644
--- a/blog/2014/12/16/whats-coming-in-2015/index.html
+++ b/blog/2014/12/16/whats-coming-in-2015/index.html
@@ -213,7 +213,7 @@
 
 <p>If you&#39;re interested in implementing a new storage plugin, I would encourage you to reach out to the Drill developer community on <a href="mailto:dev@drill.apache.org">dev@drill.apache.org</a>. I&#39;m looking forward to publishing an example of a single-query join across 10 data sources.</p>
 
-<h2 id="drill/spark-integration">Drill/Spark Integration</h2>
+<h2 id="drill-spark-integration">Drill/Spark Integration</h2>
 
 <p>We&#39;re seeing growing interest in Spark as an execution engine for data pipelines, providing an alternative to MapReduce. The Drill community is working on integrating Drill and Spark to address a few new use cases:</p>
 
@@ -239,7 +239,7 @@
 <li><strong>Workload management</strong>: A single cluster is often shared among many users and groups, and everyone expects answers in real-time. Workload management prioritizes the allocation of resources to ensure that the most important workloads get done first so that business demands can be met. Administrators need to be able to assign priorities and quotas at a fine granularity. We&#39;re working on enhancing Drill&#39;s workload management to provide these capabilities while providing tight integration with YARN and Mesos.</li>
 </ul>
 
-<h2 id="we-would-love-to-hear-from-you!">We Would Love to Hear From You!</h2>
+<h2 id="we-would-love-to-hear-from-you">We Would Love to Hear From You!</h2>
 
 <p>Are there other features you would like to see in Drill? We would love to hear from you:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
index a297e47..58be84b 100644
--- a/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
+++ b/blog/2015/01/27/schema-free-json-data-infrastructure/index.html
@@ -129,7 +129,7 @@
   <article class="post-content">
     <p>JSON has emerged in recent years as the de-facto standard data exchange format. It is being used everywhere. Front-end Web applications use JSON to maintain data and communicate with back-end applications. Web APIs are JSON-based (eg, <a href="https://dev.twitter.com/rest/public">Twitter REST APIs</a>, <a href="http://developers.marketo.com/documentation/rest/">Marketo REST APIs</a>, <a href="https://developer.github.com/v3/">GitHub API</a>). It&#39;s the format of choice for public datasets, operational log files and more.</p>
 
-<h1 id="why-is-json-a-convenient-data-exchange-format?">Why is JSON a Convenient Data Exchange Format?</h1>
+<h1 id="why-is-json-a-convenient-data-exchange-format">Why is JSON a Convenient Data Exchange Format?</h1>
 
 <p>While I won&#39;t dive into the historical roots of JSON (JavaScript Object Notation, <a href="http://en.wikipedia.org/wiki/JSON#JavaScript_eval.28.29"><code>eval()</code></a>, etc.), I do want to highlight several attributes of JSON that make it a convenient data exchange format:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/07/05/drill-1.1-released/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/05/drill-1.1-released/index.html b/blog/2015/07/05/drill-1.1-released/index.html
index 98ef6e6..64c88e9 100644
--- a/blog/2015/07/05/drill-1.1-released/index.html
+++ b/blog/2015/07/05/drill-1.1-released/index.html
@@ -167,7 +167,7 @@
   &lt;version&gt;1.1.0&lt;/version&gt;
 &lt;/dependency&gt;
 </code></pre></div>
-<h2 id="mongodb-3.0-support">MongoDB 3.0 Support</h2>
+<h2 id="mongodb-3-0-support">MongoDB 3.0 Support</h2>
 
 <p>Drill now uses MongoDB&#39;s latest Java driver and has enhanced connection pooling for better performance and resilience in large-scale deployments.  Learn more about using the <a href="https://drill.apache.org/docs/mongodb-plugin-for-apache-drill/">MongoDB plugin</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
----------------------------------------------------------------------
diff --git a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
index 3973da6..486f7ed 100644
--- a/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
+++ b/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/index.html
@@ -127,8 +127,9 @@
   <div class="addthis_sharing_toolbox"></div>
 
   <article class="post-content">
-    <p><script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
-<a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
+    <script type="text/javascript" src="//addthisevent.com/libs/1.5.8/ate.min.js"></script>
+
+<p><a href="/blog/2015/07/23/drill-tutorial-at-nosql-now-2015/" title="Add to Calendar" class="addthisevent">
     Add to Calendar
     <span class="_start">08-20-2015 13:00:00</span>
    <span class="_end">08-20-2015 16:15:00</span>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/aggregate-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/aggregate-window-functions/index.html b/docs/aggregate-window-functions/index.html
index 41bfc32..12dc386 100644
--- a/docs/aggregate-window-functions/index.html
+++ b/docs/aggregate-window-functions/index.html
@@ -1109,7 +1109,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
 
 <p>The following examples show queries that use each of the aggregate window functions in Drill. See <a href="/docs/sql-window-functions-examples/">SQL Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="avg()">AVG()</h3>
+<h3 id="avg">AVG()</h3>
 
 <p>The following query uses the AVG() window function with the PARTITION BY clause to calculate the average sales for each car dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, avg(sales) over (partition by dealer_id) as avgsales from q1_sales;
@@ -1129,7 +1129,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+-----------+
    10 rows selected (0.455 seconds)
 </code></pre></div>
-<h3 id="count()">COUNT()</h3>
+<h3 id="count">COUNT()</h3>
 
 <p>The following query uses the COUNT (*) window function to count the number of sales in Q1, ordered by dealer_id. The word count is enclosed in back ticks (``) because it is a reserved keyword in Drill.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, count(*) over(order by dealer_id) as `count` from q1_sales;
@@ -1167,7 +1167,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +------------+--------+--------+
    10 rows selected (0.249 seconds)
 </code></pre></div>
-<h3 id="max()">MAX()</h3>
+<h3 id="max">MAX()</h3>
 
 <p>The following query uses the MAX() window function with the PARTITION BY clause to identify the employee with the maximum number of car sales in Q1 at each dealership. The word max is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, max(sales) over(partition by dealer_id) as `max` from q1_sales;
@@ -1187,7 +1187,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+--------+
    10 rows selected (0.402 seconds)
 </code></pre></div>
-<h3 id="min()">MIN()</h3>
+<h3 id="min">MIN()</h3>
 
 <p>The following query uses the MIN() window function with the PARTITION BY clause to identify the employee with the minimum number of car sales in Q1 at each dealership. The word min is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_name, dealer_id, sales, min(sales) over(partition by dealer_id) as `min` from q1_sales;
@@ -1207,7 +1207,7 @@ If an ORDER BY clause is used for an aggregate function, an explicit frame claus
    +-----------------+------------+--------+-------+
    10 rows selected (0.194 seconds)
 </code></pre></div>
-<h3 id="sum()">SUM()</h3>
+<h3 id="sum">SUM()</h3>
 
 <p>The following query uses the SUM() window function to total the amount of sales for each dealer in Q1. The word sum is a reserved keyword in Drill and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, sum(sales) over(partition by dealer_id) as `sum` from q1_sales;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/analyzing-the-yelp-academic-dataset/index.html
----------------------------------------------------------------------
diff --git a/docs/analyzing-the-yelp-academic-dataset/index.html b/docs/analyzing-the-yelp-academic-dataset/index.html
index 1067bf3..61da50d 100644
--- a/docs/analyzing-the-yelp-academic-dataset/index.html
+++ b/docs/analyzing-the-yelp-academic-dataset/index.html
@@ -1068,7 +1068,7 @@ analysis extremely easy.</p>
 
 <h2 id="querying-data-with-drill">Querying Data with Drill</h2>
 
-<h3 id="1.-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
+<h3 id="1-view-the-contents-of-the-yelp-business-data">1. View the contents of the Yelp business data</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; !set maxwidth 10000
 
 0: jdbc:drill:zk=local&gt; select * from
@@ -1088,7 +1088,7 @@ analysis extremely easy.</p>
 
 <p>You can directly query self-describing files such as JSON, Parquet, and text. There is no need to create metadata definitions in the Hive metastore.</p>
 
-<h3 id="2.-explore-the-business-data-set-further">2. Explore the business data set further</h3>
+<h3 id="2-explore-the-business-data-set-further">2. Explore the business data set further</h3>
 
 <h4 id="total-reviews-in-the-data-set">Total reviews in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select sum(review_count) as totalreviews 
@@ -1139,7 +1139,7 @@ group by stars order by stars desc;
 | 1.0        | 4.0        |
 +------------+------------+
 </code></pre></div>
-<h4 id="top-businesses-with-high-review-counts-(&gt;-1000)">Top businesses with high review counts (&gt; 1000)</h4>
+<h4 id="top-businesses-with-high-review-counts-gt-1000">Top businesses with high review counts (&gt; 1000)</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select name, state, city, `review_count` from
 dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json`
 where review_count &gt; 1000 order by `review_count` desc limit 10;
@@ -1183,7 +1183,7 @@ b limit 10;
 </code></pre></div>
 <p>Note how Drill can traverse and refer to multiple levels of nesting.</p>
 
-<h3 id="3.-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
+<h3 id="3-get-the-amenities-of-each-business-in-the-data-set">3. Get the amenities of each business in the data set</h3>
 
 <p>Note that the attributes column in the Yelp business data set has a different
 element for every row, representing that businesses can have separate
@@ -1231,7 +1231,7 @@ on data.</p>
 | true  | store.json.all_text_mode updated.  |
 +-------+------------------------------------+
 </code></pre></div>
-<h3 id="4.-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
+<h3 id="4-explore-the-restaurant-businesses-in-the-data-set">4. Explore the restaurant businesses in the data set</h3>
 
 <h4 id="number-of-restaurants-in-the-data-set">Number of restaurants in the data set</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select count(*) as TotalRestaurants from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_business.json` where true=repeated_contains(categories,&#39;Restaurants&#39;);
@@ -1303,9 +1303,9 @@ order by count(categories[0]) desc limit 10;
 | Hair Salons          | 901           |
 +----------------------+---------------+
 </code></pre></div>
-<h3 id="5.-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses.">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
+<h3 id="5-explore-the-yelp-reviews-dataset-and-combine-with-the-businesses">5. Explore the Yelp reviews dataset and combine with the businesses.</h3>
 
-<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset.">Take a look at the contents of the Yelp reviews dataset.</h4>
+<h4 id="take-a-look-at-the-contents-of-the-yelp-reviews-dataset">Take a look at the contents of the Yelp reviews dataset.</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=local&gt; select * 
 from dfs.`/&lt;path-to-yelp-dataset&gt;/yelp/yelp_academic_dataset_review.json` limit 1;
 +---------------------------------+------------------------+------------------------+-------+------------+----------------------------------------------------------------------+--------+------------------------+

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-1-1-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-1-0-release-notes/index.html b/docs/apache-drill-1-1-0-release-notes/index.html
index 0ef3745..51d057d 100644
--- a/docs/apache-drill-1-1-0-release-notes/index.html
+++ b/docs/apache-drill-1-1-0-release-notes/index.html
@@ -1035,7 +1035,7 @@
 
 <p>It has been about 6 weeks since the release of Drill 1.0.0. Today we&#39;re happy to announce the availability of Drill 1.1.0, providing 119 additional enhancements and bug fixes. </p>
 
-<h2 id="noteworthy-new-features-in-drill-1.1.0">Noteworthy New Features in Drill 1.1.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-1-0">Noteworthy New Features in Drill 1.1.0</h2>
 
 <p>Drill now supports window functions, automatic partitioning, and Hive impersonation. </p>
 
@@ -1059,13 +1059,13 @@
 <li>AVG<br></li>
 </ul>
 
-<h3 id="automatic-partitioning-in-ctas-(drill-3333)"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
+<h3 id="automatic-partitioning-in-ctas-drill-3333"><a href="/docs/partition-pruning/#automatic-partitioning">Automatic Partitioning</a> in CTAS (DRILL-3333)</h3>
 
 <p>When a table is created with a partition by clause, the parquet writer will create separate files for the different partition values. The data will first be sorted by the partition keys, and the parquet writer will create a new file when it encounters a new value for the partition columns. </p>
 
 <p>When queries are issued against data that was created this way, partition pruning will work if the filter contains a partition column. Unlike directory-based partitioning, no view is required, nor is it necessary to reference the dir* column names. </p>
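 
 <p>A minimal sketch of such a CTAS (the workspace, table, and column names are hypothetical; any writable workspace works):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE TABLE dfs.tmp.sales_by_dealer
 PARTITION BY (dealer_id)
 AS SELECT dealer_id, emp_name, sales FROM dfs.tmp.`q1_sales.json`;
 </code></pre></div>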
 
-<h3 id="hive-impersonation-support-(drill-3203)"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
+<h3 id="hive-impersonation-support-drill-3203"><a href="/docs/configuring-user-impersonation-with-hive-authorization">Hive impersonation</a> support (DRILL-3203)</h3>
 
 <p>When impersonation is enabled, Drill now supports impersonating the user who issued the query when accessing Hive metadata/data (instead of accessing Hive as the user that started the drillbit). </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-1-2-0-release-notes/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-1-2-0-release-notes/index.html b/docs/apache-drill-1-2-0-release-notes/index.html
index fafd84d..49f6955 100644
--- a/docs/apache-drill-1-2-0-release-notes/index.html
+++ b/docs/apache-drill-1-2-0-release-notes/index.html
@@ -1040,7 +1040,7 @@
 <li><a href="/docs/apache-drill-1-2-0-release-notes/#important-unresolved-issues">Important unresolved issues</a></li>
 </ul>
 
-<h2 id="noteworthy-new-features-in-drill-1.2.0">Noteworthy New Features in Drill 1.2.0</h2>
+<h2 id="noteworthy-new-features-in-drill-1-2-0">Noteworthy New Features in Drill 1.2.0</h2>
 
 <p>This release of Drill introduces a number of enhancements, including the following ones:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-contribution-guidelines/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-guidelines/index.html b/docs/apache-drill-contribution-guidelines/index.html
index c4ae837..0268b7f 100644
--- a/docs/apache-drill-contribution-guidelines/index.html
+++ b/docs/apache-drill-contribution-guidelines/index.html
@@ -1187,7 +1187,7 @@ it easy to quickly view the contents of the patch in a web browser.</p>
 <li>Once your patch is accepted, be sure to upload a final version which grants rights to the ASF.</li>
 </ul>
 
-<h2 id="where-is-a-good-place-to-start-contributing?">Where is a good place to start contributing?</h2>
+<h2 id="where-is-a-good-place-to-start-contributing">Where is a good place to start contributing?</h2>
 
 <p>After getting the source code, building and running a few simple queries, one
 of the simplest places to start is to implement a DrillFunc.<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/apache-drill-contribution-ideas/index.html
----------------------------------------------------------------------
diff --git a/docs/apache-drill-contribution-ideas/index.html b/docs/apache-drill-contribution-ideas/index.html
index ab36dc4..9f5571d 100644
--- a/docs/apache-drill-contribution-ideas/index.html
+++ b/docs/apache-drill-contribution-ideas/index.html
@@ -1089,7 +1089,7 @@ own use case). Then try to implement one.</p>
 <li>Approximate aggregate functions (such as what is available in BlinkDB)</li>
 </ul>
 
-<h2 id="support-for-new-file-format-readers/writers">Support for new file format readers/writers</h2>
+<h2 id="support-for-new-file-format-readers-writers">Support for new file format readers/writers</h2>
 
 <p>Currently Drill supports text, JSON and Parquet file formats natively when
 interacting with file system. More readers/writers can be introduced by

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/compiling-drill-from-source/index.html
----------------------------------------------------------------------
diff --git a/docs/compiling-drill-from-source/index.html b/docs/compiling-drill-from-source/index.html
index a38addd..ad4d258 100644
--- a/docs/compiling-drill-from-source/index.html
+++ b/docs/compiling-drill-from-source/index.html
@@ -1050,10 +1050,10 @@ Maven and JDK installed:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">java -version
 mvn -version
 </code></pre></div>
-<h2 id="1.-clone-the-repository">1. Clone the Repository</h2>
+<h2 id="1-clone-the-repository">1. Clone the Repository</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">git clone https://git-wip-us.apache.org/repos/asf/drill.git
 </code></pre></div>
-<h2 id="2.-compile-the-code">2. Compile the Code</h2>
+<h2 id="2-compile-the-code">2. Compile the Code</h2>
 <div class="highlight"><pre><code class="language-text" data-lang="text">cd drill
 mvn clean install -DskipTests
 </code></pre></div>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-jreport-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-jreport-with-drill/index.html b/docs/configuring-jreport-with-drill/index.html
index aa268e0..12b0e26 100644
--- a/docs/configuring-jreport-with-drill/index.html
+++ b/docs/configuring-jreport-with-drill/index.html
@@ -1045,7 +1045,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
+<h3 id="step-1-install-the-drill-jdbc-driver-with-jreport">Step 1: Install the Drill JDBC Driver with JReport</h3>
 
 <p>Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
 For general instructions on installing the Drill JDBC driver, see <a href="/docs/using-the-jdbc-driver/">Using JDBC</a>.</p>
@@ -1065,7 +1065,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 
 <hr>
 
-<h3 id="step-2:-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
+<h3 id="step-2-create-a-new-jreport-catalog-to-manage-the-drill-connection">Step 2: Create a New JReport Catalog to Manage the Drill Connection</h3>
 
 <ol>
 <li> Click Create <strong>New -&gt; Catalog…</strong></li>
@@ -1080,7 +1080,7 @@ For example, on Windows, copy the Drill JDBC driver jar file into:</p>
 <li>Click <strong>Done</strong> when you have added all the tables you need. </li>
 </ol>
 
-<h3 id="step-3:-use-jreport-designer">Step 3: Use JReport Designer</h3>
+<h3 id="step-3-use-jreport-designer">Step 3: Use JReport Designer</h3>
 
 <ol>
 <li> In the Catalog Browser, right-click <strong>Queries</strong> and select <strong>Add Query…</strong></li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-linux/index.html b/docs/configuring-odbc-on-linux/index.html
index bcf5373..b77031d 100644
--- a/docs/configuring-odbc-on-linux/index.html
+++ b/docs/configuring-odbc-on-linux/index.html
@@ -1065,7 +1065,7 @@ on Linux, copy the following configuration files in <code>/opt/mapr/drillodbc/Se
 
 <hr>
 
-<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <ol>
 <li>Set the ODBCINI environment variable to point to the <code>.odbc.ini</code> in your home directory. For example:<br>
@@ -1085,7 +1085,7 @@ Only include the path to the shared libraries corresponding to the driver matchi
 
 <hr>
 
-<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. To use Drill in embedded mode, set the following properties:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ConnectionType=Direct
@@ -1171,7 +1171,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1193,7 +1193,7 @@ Driver=/opt/mapr/drillodbc/lib/64/libmaprdrillodbc64.so
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1216,7 +1216,7 @@ SwapFilePath=/tmp
 ODBCInstLib=libiodbcinst.so
 . . .
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-mac-os-x/index.html b/docs/configuring-odbc-on-mac-os-x/index.html
index af7c354..290f782 100644
--- a/docs/configuring-odbc-on-mac-os-x/index.html
+++ b/docs/configuring-odbc-on-mac-os-x/index.html
@@ -1079,7 +1079,7 @@ on Mac OS X, copy the following configuration files in <code>/opt/mapr/drillodbc
 
 <hr>
 
-<h2 id="step-1:-set-environment-variables">Step 1: Set Environment Variables</h2>
+<h2 id="step-1-set-environment-variables">Step 1: Set Environment Variables</h2>
 
 <p>Create or modify the <code>/etc/launchd.conf</code> file to set environment variables. Set the SIMBAINI variable to point to the <code>.mapr.drillodbc.ini</code> file, the ODBCSYSINI variable to the <code>.odbcinst.ini</code> file, the ODBCINI variable to the <code>.odbc.ini</code> file, and the DYLD_LIBRARY_PATH variable to the location of the dynamic linker (DYLD) libraries and to the MapR Drill ODBC Driver. If you installed the iODBC driver manager using the DMG, the DYLD libraries are installed in <code>/usr/local/iODBC/lib</code>. The launchd.conf file should look something like this:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">setenv SIMBAINI /Users/joeuser/.mapr.drillodbc.ini
@@ -1091,7 +1091,7 @@ setenv DYLD_LIBRARY_PATH /usr/local/iODBC/lib:/opt/mapr/drillodbc/lib/universal
 
 <hr>
 
-<h2 id="step-2:-define-the-odbc-data-sources-in-.odbc.ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
+<h2 id="step-2-define-the-odbc-data-sources-in-odbc-ini">Step 2: Define the ODBC Data Sources in .odbc.ini</h2>
 
 <p>Define the ODBC data sources in the <code>~/.odbc.ini</code> configuration file for your environment. </p>
 
@@ -1173,7 +1173,7 @@ behavior of DSNs using the MapR Drill ODBC Driver.</p>
 
 <hr>
 
-<h2 id="step-3:-(optional)-define-the-odbc-driver-in-.odbcinst.ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
+<h2 id="step-3-optional-define-the-odbc-driver-in-odbcinst-ini">Step 3: (Optional) Define the ODBC Driver in .odbcinst.ini</h2>
 
 <p>The <code>.odbcinst.ini</code> is an optional configuration file that defines the ODBC
 Drivers. This configuration file is optional because you can specify drivers
@@ -1189,7 +1189,7 @@ Driver=/opt/mapr/drillodbc/lib/universal/libmaprdrillodbc.dylib
 </code></pre></div>
 <hr>
 
-<h2 id="step-4:-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
+<h2 id="step-4-configure-the-mapr-drill-odbc-driver">Step 4: Configure the MapR Drill ODBC Driver</h2>
 
 <p>Configure the MapR Drill ODBC Driver for your environment by modifying the <code>.mapr.drillodbc.ini</code> configuration
 file. This configures the driver to work with your ODBC driver manager. The following sample shows a possible configuration, which you can use as is if you installed the default iODBC driver manager.</p>
@@ -1208,7 +1208,7 @@ SwapFilePath=/tmp
 # iODBC
 ODBCInstLib=libiodbcinst.dylib
 </code></pre></div>
-<h3 id="configuring-.mapr.drillodbc.ini">Configuring .mapr.drillodbc.ini</h3>
+<h3 id="configuring-mapr-drillodbc-ini">Configuring .mapr.drillodbc.ini</h3>
 
 <p>To configure the MapR Drill ODBC Driver in the <code>mapr.drillodbc.ini</code> configuration file, complete the following steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-odbc-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-odbc-on-windows/index.html b/docs/configuring-odbc-on-windows/index.html
index ac67c5e..ddb1e5d 100644
--- a/docs/configuring-odbc-on-windows/index.html
+++ b/docs/configuring-odbc-on-windows/index.html
@@ -1041,7 +1041,7 @@ sources:</p>
 <li>Create an ODBC Connection String</li>
 </ul>
 
-<h2 id="sample-odbc-configuration-(dsn)">Sample ODBC Configuration (DSN)</h2>
+<h2 id="sample-odbc-configuration-dsn">Sample ODBC Configuration (DSN)</h2>
 
 <p>You can see how to create a DSN to connect to Drill data sources by taking a look at the preconfigured sample that the installer sets up. If
 you want to create a DSN for a 32-bit application, you must use the 32-bit

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-resources-for-a-shared-drillbit/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-resources-for-a-shared-drillbit/index.html b/docs/configuring-resources-for-a-shared-drillbit/index.html
index 489fa2f..fdc6aff 100644
--- a/docs/configuring-resources-for-a-shared-drillbit/index.html
+++ b/docs/configuring-resources-for-a-shared-drillbit/index.html
@@ -1064,7 +1064,7 @@ The maximum degree of distribution of a query across cores and cluster nodes.</l
 Same as max per node but applies to the query as executed by the entire cluster.</li>
 </ul>
 
-<h3 id="planner.width.max_per_node">planner.width.max_per_node</h3>
+<h3 id="planner-width-max_per_node">planner.width.max_per_node</h3>
 
 <p>Configure the <code>planner.width.max_per_node</code> to achieve fine grained, absolute control over parallelization. In this context <em>width</em> refers to fanout or distribution potential: the ability to run a query in parallel across the cores on a node and the nodes on a cluster. A physical plan consists of intermediate operations, known as query &quot;fragments,&quot; that run concurrently, yielding opportunities for parallelism above and below each exchange operator in the plan. An exchange operator represents a breakpoint in the execution flow where processing can be distributed. For example, a single-process scan of a file may flow into an exchange operator, followed by a multi-process aggregation fragment.</p>
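 
 <p>For example, to cap per-node parallelism at 8 fragments (the value 8 is illustrative; choose a number that fits your core count and workload):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SYSTEM SET `planner.width.max_per_node` = 8;
 </code></pre></div>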
 
@@ -1074,7 +1074,7 @@ Same as max per node but applies to the query as executed by the entire cluster.
 
 <p>When you modify the default setting, you can supply any meaningful number. The system does not automatically scale down your setting.</p>
 
-<h3 id="planner.width.max_per_query">planner.width.max_per_query</h3>
+<h3 id="planner-width-max_per_query">planner.width.max_per_query</h3>
 
 <p>The max_per_query value also sets the maximum degree of parallelism for any given stage of a query, but the setting applies to the query as executed by the whole cluster (multiple nodes). In effect, the actual maximum width per query is the <em>minimum of two values</em>: min((number of nodes * width.max_per_node), width.max_per_query)</p>
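 
 <p>As an illustrative calculation (the numbers are hypothetical): on a 10-node cluster with <code>planner.width.max_per_node</code> = 8 and <code>planner.width.max_per_query</code> = 30, the effective maximum width is min(10 * 8, 30) = 30, so the query-level setting is the binding limit.</p>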
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-tibco-spotfire-server-with-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-tibco-spotfire-server-with-drill/index.html b/docs/configuring-tibco-spotfire-server-with-drill/index.html
index 05e06a1..c34fc68 100644
--- a/docs/configuring-tibco-spotfire-server-with-drill/index.html
+++ b/docs/configuring-tibco-spotfire-server-with-drill/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-drill-jdbc-driver">Step 1: Install and Configure the Drill JDBC Driver</h3>
 
 <p>Drill provides standard JDBC connectivity, making it easy to integrate data exploration capabilities on complex, schema-less data sets. Tibco Spotfire Server (TSS) requires Drill 1.0 or later, which includes the JDBC driver. The JDBC driver is bundled with the Drill configuration files, and it is recommended that you use the JDBC driver that is shipped with the specific Drill version.</p>
 
@@ -1074,7 +1074,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-2:-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
+<h3 id="step-2-configure-the-drill-data-source-template-in-tss">Step 2: Configure the Drill Data Source Template in TSS</h3>
 
 <p>The Drill Data Source template can now be configured with the TSS Configuration Tool. The Windows-based TSS Configuration Tool is recommended. If TSS is installed on a Linux system, you also need to install TSS on a small Windows-based system so you can utilize the Configuration Tool. In this case, it is also recommended that you install the Drill JDBC driver on the TSS Windows system.</p>
 
@@ -1129,7 +1129,7 @@ For Windows systems, the hosts file is located here:
 </code></pre></div>
 <hr>
 
-<h3 id="step-3:-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
+<h3 id="step-3-configure-drill-data-sources-with-tibco-spotfire-desktop">Step 3: Configure Drill Data Sources with Tibco Spotfire Desktop</h3>
 
 <p>To configure Drill data sources in TSS, you need to use the Tibco Spotfire Desktop client.</p>
 
@@ -1146,7 +1146,7 @@ For Windows systems, the hosts file is located here:
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>After the Drill data source has been configured in the Information Designer, the information elements can be defined. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-user-impersonation-with-hive-authorization/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation-with-hive-authorization/index.html b/docs/configuring-user-impersonation-with-hive-authorization/index.html
index 0cdad6e..dfe1efa 100644
--- a/docs/configuring-user-impersonation-with-hive-authorization/index.html
+++ b/docs/configuring-user-impersonation-with-hive-authorization/index.html
@@ -1063,7 +1063,7 @@
 <li>Hive remote metastore repository configured<br></li>
 </ul>
 
-<h2 id="step-1:-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
+<h2 id="step-1-enabling-drill-impersonation">Step 1: Enabling Drill Impersonation</h2>
 
 <p>Modify <code>&lt;DRILL_HOME&gt;/conf/drill-override.conf</code> on each Drill node to include the required properties, set the <a href="/docs/configuring-user-impersonation/#chained-impersonation">maximum number of chained user hops</a>, and restart the Drillbit process.</p>
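 
 <p>A minimal <code>drill-override.conf</code> sketch (the hop count of 3 is only an example; set it to match your delegation depth):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">drill.exec.impersonation: {
   enabled: true,
   max_chained_user_hops: 3
 }
 </code></pre></div>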
 
@@ -1082,7 +1082,7 @@
 <code>&lt;DRILLINSTALL_HOME&gt;/bin/drillbit.sh restart</code>  </p></li>
 </ol>
 
-<h2 id="step-2:-updating-hive-site.xml">Step 2:  Updating hive-site.xml</h2>
+<h2 id="step-2-updating-hive-site-xml">Step 2:  Updating hive-site.xml</h2>
 
 <p>Update hive-site.xml with the parameters specific to the type of authorization that you are configuring and then restart Hive.  </p>
 
@@ -1114,7 +1114,7 @@
 <strong>Description:</strong> Tells HiveServer2 to execute Hive operations as the user submitting the query. Must be set to true for the storage based model.<br>
 <strong>Value:</strong> true</p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-storage-based-authorization">Example of hive-site.xml configuration with the required properties for storage based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1190,7 +1190,7 @@
 <strong>Description:</strong> In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client&#39;s reported user and group permissions. Note: This property must be set on both the client and server sides. This is a best effort property. If the client is set to true and the server is set to false, the client setting is ignored.<br>
 <strong>Value:</strong> false  </p>
 
-<h3 id="example-of-hive-site.xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
+<h3 id="example-of-hive-site-xml-configuration-with-the-required-properties-for-sql-standard-based-authorization">Example of hive-site.xml configuration with the required properties for SQL standard based authorization</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   &lt;configuration&gt;
      &lt;property&gt;
        &lt;name&gt;hive.metastore.uris&lt;/name&gt;
@@ -1238,7 +1238,7 @@
      &lt;/property&gt;    
     &lt;/configuration&gt;
 </code></pre></div>
-<h2 id="step-3:-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
+<h2 id="step-3-modifying-the-hive-storage-plugin">Step 3: Modifying the Hive Storage Plugin</h2>
 
 <p>Modify the Hive storage plugin configuration in the Drill Web Console to include specific authorization settings. The Drillbit that you use to access the Web Console must be running.  </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-user-impersonation/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-user-impersonation/index.html b/docs/configuring-user-impersonation/index.html
index 8adcfaa..1869b2c 100644
--- a/docs/configuring-user-impersonation/index.html
+++ b/docs/configuring-user-impersonation/index.html
@@ -1096,7 +1096,7 @@ hadoop fs -chown &lt;user&gt;:&lt;group&gt; &lt;file_name&gt;
 </code></pre></div>
 <p>Example: <code>hadoop fs -chmod 750 employees.drill.view</code></p>
 
-<h3 id="modifying-system|session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
+<h3 id="modifying-system-session-level-view-permissions">Modifying SYSTEM|SESSION Level View Permissions</h3>
 
 <p>Use the <code>ALTER SESSION|SYSTEM</code> command with the <code>new_view_default_permissions</code> parameter and the appropriate octal code to set view permissions at the system or session level prior to creating a view.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `new_view_default_permissions` = &#39;&lt;octal_code&gt;&#39;;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/configuring-web-console-and-rest-api-security/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-web-console-and-rest-api-security/index.html b/docs/configuring-web-console-and-rest-api-security/index.html
index 61b1291..da396d0 100644
--- a/docs/configuring-web-console-and-rest-api-security/index.html
+++ b/docs/configuring-web-console-and-rest-api-security/index.html
@@ -1038,7 +1038,7 @@ With Web Console security in place, users who do not have administrator privileg
 
 <h2 id="https-support">HTTPS Support</h2>
 
-<p>Drill 1.2 uses the Linux Pluggable Authentication Module (PAM) and code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API support the HTTP protocol. You set the following start-up option to TRUE to enable HTTPS support:</p>
+<p>Drill 1.2 uses code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API support the HTTP protocol. You set the following start-up option to TRUE to enable HTTPS support:</p>
 
 <p><code>drill.exec.http.ssl_enabled</code></p>
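 
 <p>For example, as a sketch in <code>drill-override.conf</code> (a start-up option, so it takes effect after a Drillbit restart):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">drill.exec.http.ssl_enabled: true
 </code></pre></div>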
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/custom-function-interfaces/index.html
----------------------------------------------------------------------
diff --git a/docs/custom-function-interfaces/index.html b/docs/custom-function-interfaces/index.html
index 57006cd..3f28e17 100644
--- a/docs/custom-function-interfaces/index.html
+++ b/docs/custom-function-interfaces/index.html
@@ -1046,13 +1046,13 @@ public static class Add1 implements DrillSimpleFunc{
 
 <p>The simple function interface includes the <code>@Param</code> and <code>@Output</code> holders where you indicate the data types that your function can process.</p>
 
-<h3 id="@param-holder">@Param Holder</h3>
+<h3 id="param-holder">@Param Holder</h3>
 
 <p>This holder indicates the data type that the function processes as input and determines the number of parameters that your function accepts within the query. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Param BigIntHolder input1;
 @Param BigIntHolder input2;
 </code></pre></div>
-<h3 id="@output-holder">@Output Holder</h3>
+<h3 id="output-holder">@Output Holder</h3>
 
 <p>This holder indicates the data type that the processing returns. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Output BigIntHolder out;
@@ -1108,7 +1108,7 @@ public static class MySecondMin implements DrillAggFunc {
 </code></pre></div>
 <p>The aggregate function interface includes holders where you indicate the data types that your function can process. This interface includes the @Param and @Output holders previously described and also includes the @Workspace holder. </p>
 
-<h3 id="@workspace-holder">@Workspace holder</h3>
+<h3 id="workspace-holder">@Workspace holder</h3>
 
 <p>This holder indicates the data type used to store intermediate data during processing. For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">@Workspace BigIntHolder min;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index a69817f..438e381 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1630,7 +1630,7 @@ use in your Drill queries as described in this section:</p>
 </tr>
 </tbody></table>
 
-<h3 id="format-specifiers-for-date/time-conversions">Format Specifiers for Date/Time Conversions</h3>
+<h3 id="format-specifiers-for-date-time-conversions">Format Specifiers for Date/Time Conversions</h3>
 
 <p>Use the following Joda format specifiers for date/time conversions:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/date-time-and-timestamp/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-and-timestamp/index.html b/docs/date-time-and-timestamp/index.html
index cf4193b..5f2606c 100644
--- a/docs/date-time-and-timestamp/index.html
+++ b/docs/date-time-and-timestamp/index.html
@@ -1140,7 +1140,7 @@ SELECT INTERVAL &#39;13&#39; month FROM (VALUES(1));
 +------------+
 1 row selected (0.076 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
+<h2 id="date-time-and-timestamp">DATE, TIME, and TIMESTAMP</h2>
 
 <p>Drill supports DATE, TIME, and TIMESTAMP literals. Drill stores values in Coordinated Universal Time (UTC) and supports time functions in the range 1971 to 2037.</p>
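 
 <p>For example (a minimal sketch; the literal values are arbitrary):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT DATE &#39;2015-12-30&#39;, TIME &#39;22:55:55.23&#39;, TIMESTAMP &#39;2015-12-30 22:55:55.23&#39; FROM (VALUES(1));
 </code></pre></div>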
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/date-time-functions-and-arithmetic/index.html
----------------------------------------------------------------------
diff --git a/docs/date-time-functions-and-arithmetic/index.html b/docs/date-time-functions-and-arithmetic/index.html
index 2f3dbf2..6e2897e 100644
--- a/docs/date-time-functions-and-arithmetic/index.html
+++ b/docs/date-time-functions-and-arithmetic/index.html
@@ -1537,7 +1537,7 @@ SELECT NOW() FROM (VALUES(1));
 +------------+
 1 row selected (0.062 seconds)
 </code></pre></div>
-<h2 id="date,-time,-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
+<h2 id="date-time-and-interval-arithmetic-functions">Date, Time, and Interval Arithmetic Functions</h2>
 
 <p>Is the day returned from the NOW function the same as the day returned from the CURRENT_DATE function?</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT EXTRACT(day FROM NOW()) = EXTRACT(day FROM CURRENT_DATE) FROM (VALUES(1));

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-introduction/index.html b/docs/drill-introduction/index.html
index 3d06113..941e8fa 100644
--- a/docs/drill-introduction/index.html
+++ b/docs/drill-introduction/index.html
@@ -1038,7 +1038,7 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments. </p>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.2">What&#39;s New in Apache Drill 1.2</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-2">What&#39;s New in Apache Drill 1.2</h2>
 
 <p>This release of Drill fixes <a href="/docs/apache-drill-1-2-0-release-notes/">many issues</a> and introduces a number of enhancements, including the following ones:</p>
 
@@ -1071,7 +1071,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Improved LIMIT processing</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.1">What&#39;s New in Apache Drill 1.1</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-1">What&#39;s New in Apache Drill 1.1</h2>
 
 <p>Many enhancements in Apache Drill 1.1 include the following key features:</p>
 
@@ -1082,7 +1082,7 @@ Javadocs and better application dependency compatibility<br></li>
 <li>Support for UNION and UNION ALL and better optimized plans that include UNION.</li>
 </ul>
 
-<h2 id="what&#39;s-new-in-apache-drill-1.0">What&#39;s New in Apache Drill 1.0</h2>
+<h2 id="what-39-s-new-in-apache-drill-1-0">What&#39;s New in Apache Drill 1.0</h2>
 
 <p>Apache Drill 1.0 offers the following new features:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-patch-review-tool/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-patch-review-tool/index.html b/docs/drill-patch-review-tool/index.html
index 710051f..3320147 100644
--- a/docs/drill-patch-review-tool/index.html
+++ b/docs/drill-patch-review-tool/index.html
@@ -1064,7 +1064,7 @@
 
 <h3 id="drill-jira-and-reviewboard-script">Drill JIRA and Reviewboard script</h3>
 
-<h4 id="1.-setup">1. Setup</h4>
+<h4 id="1-setup">1. Setup</h4>
 
 <ol>
 <li>Follow instructions <a href="/docs/drill-patch-review-tool/#jira-command-line-tool">here</a> to setup the jira-python package</li>
@@ -1075,7 +1075,7 @@ On Mac -&gt; sudo easy_install argparse
 </code></pre></div></li>
 </ol>
 
-<h4 id="2.-usage">2. Usage</h4>
+<h4 id="2-usage">2. Usage</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed-mn: nnarkhed$ python drill-patch-review.py --help
 usage: drill-patch-review.py [-h] -b BRANCH -j JIRA [-s SUMMARY]
                              [-d DESCRIPTION] [-r REVIEWBOARD] [-t TESTING]
@@ -1102,7 +1102,7 @@ optional arguments:
   -rbu, --reviewboard-user Reviewboard user name
   -rbp, --reviewboard-password Reviewboard password
 </code></pre></div>
-<h4 id="3.-upload-patch">3. Upload patch</h4>
+<h4 id="3-upload-patch">3. Upload patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1113,7 +1113,7 @@ optional arguments:
 <p>Example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">python drill-patch-review.py -b origin/master -j DRILL-241 -rbu tnachen -rbp password
 </code></pre></div>
-<h4 id="4.-update-patch">4. Update patch</h4>
+<h4 id="4-update-patch">4. Update patch</h4>
 
 <ol>
 <li>Specify the branch against which the patch should be created (-b)</li>
@@ -1128,12 +1128,12 @@ optional arguments:
 </code></pre></div>
 <h3 id="jira-command-line-tool">JIRA command line tool</h3>
 
-<h4 id="1.-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
+<h4 id="1-download-the-jira-command-line-package">1. Download the JIRA command line package</h4>
 
 <p>Install the jira-python package.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo easy_install jira-python
 </code></pre></div>
-<h4 id="2.-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
+<h4 id="2-configure-jira-username-and-password">2. Configure JIRA username and password</h4>
 
 <p>Include a jira.ini file in your $HOME directory that contains your Apache JIRA
 username and password.</p>
@@ -1146,7 +1146,7 @@ password=***********
 <p>This is a quick tutorial on using <a href="https://reviews.apache.org">Review Board</a>
 with Drill.</p>
 
-<h4 id="1.-install-the-post-review-tool">1. Install the post-review tool</h4>
+<h4 id="1-install-the-post-review-tool">1. Install the post-review tool</h4>
 
 <p>If you are on RHEL, Fedora or CentOS, follow these steps:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">sudo yum install python-setuptools
@@ -1159,7 +1159,7 @@ sudo easy_install -U RBTools
 <p>For other platforms, follow the <a href="http://www.reviewboard.org/docs/manual/dev/users/tools/post-review/">instructions</a> to
 setup the post-review tool.</p>
 
-<h4 id="2.-configure-stuff">2. Configure Stuff</h4>
+<h4 id="2-configure-stuff">2. Configure Stuff</h4>
 
 <p>Then you need to configure a few things to make it work.</p>
 
@@ -1177,7 +1177,7 @@ TARGET_GROUPS = &#39;drill-git&#39;
 
 <h3 id="faq">FAQ</h3>
 
-<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">nnarkhed$python drill-patch-review.py -b trunk -j DRILL-241
 There don&#39;t seem to be any diffs
 </code></pre></div>
@@ -1188,7 +1188,7 @@ There don&#39;t seem to be any diffs
 <li>The -b branch is not pointing to the remote branch. In the example above, &quot;trunk&quot; is specified as the branch, which is the local branch. The correct value for the -b (--branch) option is the remote branch. &quot;git branch -r&quot; gives the list of the remote branch names.</li>
 </ul>
 
-<h4 id="when-i-run-the-script,-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
+<h4 id="when-i-run-the-script-it-throws-the-following-error-and-exits">When I run the script, it throws the following error and exits</h4>
 
 <p>Error uploading diff</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drill-plan-syntax/index.html
----------------------------------------------------------------------
diff --git a/docs/drill-plan-syntax/index.html b/docs/drill-plan-syntax/index.html
index 3a4e910..9bb54c1 100644
--- a/docs/drill-plan-syntax/index.html
+++ b/docs/drill-plan-syntax/index.html
@@ -1033,7 +1033,7 @@
 
     <div class="int_text" align="left">
       
-        <h3 id="whats-the-plan?">Whats the plan?</h3>
+        <h3 id="whats-the-plan">Whats the plan?</h3>
 
 <p>This section is about the end-to-end plan flow for Drill. The incoming query
 to Drill can be a SQL 2003 query/DrQL or MongoQL. The query is converted to a

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/drop-table/index.html
----------------------------------------------------------------------
diff --git a/docs/drop-table/index.html b/docs/drop-table/index.html
index 27ede7f..370dd98 100644
--- a/docs/drop-table/index.html
+++ b/docs/drop-table/index.html
@@ -1088,7 +1088,7 @@
 
 <p>The following examples show results for several DROP TABLE scenarios.  </p>
 
-<h3 id="example-1:-identifying-a-schema">Example 1:  Identifying a schema</h3>
+<h3 id="example-1-identifying-a-schema">Example 1:  Identifying a schema</h3>
 
 <p>This example shows you how to identify a schema with the USE and DROP TABLE commands and successfully drop a table named <code>donuts_json</code> in the <code>&quot;donuts&quot;</code> workspace configured within the DFS storage plugin configuration.  </p>
 
@@ -1142,7 +1142,7 @@
    Error: PARSE ERROR: Root schema is immutable. Creating or dropping tables/views is not allowed in root schema.Select a schema using &#39;USE schema&#39; command.
    [Error Id: 8c42cb6a-27eb-48fd-b42a-671a6fb58c14 on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-2:-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
+<h3 id="example-2-dropping-a-table-created-from-a-file">Example 2: Dropping a table created from a file</h3>
 
 <p>In the following example, the <code>donuts_json</code> table is removed from the <code>/tmp</code> workspace using the DROP TABLE command. This example assumes that the steps in the <a href="/docs/create-table-as-ctas/#complete-ctas-example">Complete CTAS Example</a> were already completed. </p>
 
@@ -1178,7 +1178,7 @@
    +-------+------------------------------+
    1 row selected (0.107 seconds)  
 </code></pre></div>
-<h3 id="example-3:-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
+<h3 id="example-3-dropping-a-table-created-as-a-directory">Example 3: Dropping a table created as a directory</h3>
 
 <p>When you create a table that writes files to a directory, you can issue the <code>DROP TABLE</code> command against the table to remove the directory. All files and subdirectories are deleted. For example, the following CTAS command writes Parquet data from the <code>nation.parquet</code> file, installed with Drill, to the <code>/tmp/name_key</code> directory.  </p>
 
@@ -1235,7 +1235,7 @@
    +-------+---------------------------+
    1 row selected (0.086 seconds)
 </code></pre></div>
-<h3 id="example-4:-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
+<h3 id="example-4-dropping-a-table-that-does-not-exist">Example 4: Dropping a table that does not exist</h3>
 
 <p>The following example shows the result of dropping a table that does not exist because it was either already dropped or never existed. </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; use use dfs.tmp;
@@ -1251,7 +1251,7 @@
    Error: VALIDATION ERROR: Table [name_key] not found
    [Error Id: fc6bfe17-d009-421c-8063-d759d7ea2f4e on 10.250.56.218:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-5:-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
+<h3 id="example-5-dropping-a-table-without-permissions">Example 5: Dropping a table without permissions</h3>
 
 <p>The following example shows the result of dropping a table without the appropriate permissions in the file system.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   0: jdbc:drill:zk=local&gt; drop table name_key;
@@ -1259,7 +1259,7 @@
    Error: PERMISSION ERROR: Unauthorized to drop table
    [Error Id: 36f6b51a-786d-4950-a4a7-44250f153c55 on 10.10.30.167:31010] (state=,code=0)  
 </code></pre></div>
-<h3 id="example-6:-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
+<h3 id="example-6-dropping-and-querying-a-table-concurrently">Example 6: Dropping and querying a table concurrently</h3>
 
 <p>The result of this scenario depends on the delta in time between one user dropping a table and another user issuing a query against the table. Results can vary: in some instances the drop succeeds and the query fails completely, while in others the query completes partially before the table is dropped, returning an exception in the middle of the query results.</p>
 
@@ -1281,7 +1281,7 @@
    Fragment 1:0
    [Error Id: 6e3c6a8d-8cfd-4033-90c4-61230af80573 on 10.10.30.167:31010] (state=,code=0)
 </code></pre></div>
-<h3 id="example-7:-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
+<h3 id="example-7-dropping-a-table-with-different-file-formats">Example 7: Dropping a table with different file formats</h3>
 
 <p>The following example shows the result of dropping a table when multiple file formats exists in the directory. In this scenario, the <code>sales_dir</code> table resides in the <code>dfs.sales</code> workspace and contains Parquet, CSV, and JSON files.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/explain/index.html
----------------------------------------------------------------------
diff --git a/docs/explain/index.html b/docs/explain/index.html
index b0ad10e..cb63fee 100644
--- a/docs/explain/index.html
+++ b/docs/explain/index.html
@@ -1069,7 +1069,7 @@ you are selecting from, you are likely to see plan changes.</p>
 <p>This option returns costing information. You can use this option for both
 physical and logical plans.</p>
 
-<h4 id="with-implementation-|-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
+<h4 id="with-implementation-without-implementation">WITH IMPLEMENTATION | WITHOUT IMPLEMENTATION</h4>
 
 <p>These options return the physical and logical plan information, respectively.
 The default is physical (WITH IMPLEMENTATION).</p>
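 
 <p>A brief sketch of both forms (the SELECT is a placeholder; substitute your own query):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">EXPLAIN PLAN WITH IMPLEMENTATION FOR SELECT * FROM dfs.`/tmp/donuts.json`;
 EXPLAIN PLAN WITHOUT IMPLEMENTATION FOR SELECT * FROM dfs.`/tmp/donuts.json`;
 </code></pre></div>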

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/getting-to-know-the-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/getting-to-know-the-drill-sandbox/index.html b/docs/getting-to-know-the-drill-sandbox/index.html
index 540a17f..3737c55 100644
--- a/docs/getting-to-know-the-drill-sandbox/index.html
+++ b/docs/getting-to-know-the-drill-sandbox/index.html
@@ -1142,7 +1142,7 @@ URI. Metadata for Hive tables is automatically available for users to query.</p>
 </code></pre></div>
 <p>Do not use this storage plugin configuration outside the sandbox. Use the configuration for either the <a href="/docs/hive-storage-plugin/">remote or embedded metastore configuration</a>.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Start running queries by going to <a href="/docs/lesson-1-learn-about-the-data-set">Lesson 1: Learn About the Data
 Set</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-apache-drill-sandbox/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-apache-drill-sandbox/index.html b/docs/installing-the-apache-drill-sandbox/index.html
index e2b39ae..d360cc2 100644
--- a/docs/installing-the-apache-drill-sandbox/index.html
+++ b/docs/installing-the-apache-drill-sandbox/index.html
@@ -1068,7 +1068,7 @@ instructions:</p>
 <li>To install VirtualBox, see the <a href="http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/UserManual.pdf">Oracle VM VirtualBox User Manual</a>. By downloading VirtualBox, you agree to the terms and conditions of the respective license.</li>
 </ul>
 
-<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player/vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
+<h2 id="installing-the-mapr-sandbox-with-apache-drill-on-vmware-player-vmware-fusion">Installing the MapR Sandbox with Apache Drill on VMware Player/VMware Fusion</h2>
 
 <p>Complete the following steps to install the MapR Sandbox with Apache Drill on
 VMware Player or VMware Fusion:</p>
@@ -1110,7 +1110,7 @@ The Import Virtual Machine dialog appears.</p></li>
 <li>Alternatively, access the command line on the VM: Press Alt+F2 on Windows or Option+F5 on Mac.<br></li>
 </ul>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="what-39-s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill
@@ -1160,7 +1160,7 @@ VirtualBox:</p>
 </ul></li>
 </ol>
 
-<h3 id="what&#39;s-next">What&#39;s Next</h3>
+<h3 id="what-39-s-next">What&#39;s Next</h3>
 
 <p>After downloading and installing the sandbox, continue with the tutorial by
 <a href="/docs/getting-to-know-the-drill-sandbox/">Getting to Know the Drill Sandbox</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-linux/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-linux/index.html b/docs/installing-the-driver-on-linux/index.html
index 1729f66..c322f86 100644
--- a/docs/installing-the-driver-on-linux/index.html
+++ b/docs/installing-the-driver-on-linux/index.html
@@ -1077,7 +1077,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <p>To install the driver, you need Administrator privileges on the computer.</p>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download either the 32- or 64-bit driver:</p>
 
@@ -1086,7 +1086,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><a href="http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/MapRDrill_odbc_v1.2.0.1000/MapRDrillODBC-1.2.0.x86_64.rpm">MapR Drill ODBC Driver (64-bit)</a></li>
 </ul>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1141,7 +1141,7 @@ locations and descriptions:</p>
 </tr>
 </tbody></table>
 
-<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following case-sensitive command on the terminal command line:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-mac-os-x/index.html b/docs/installing-the-driver-on-mac-os-x/index.html
index 592533b..9179d3e 100644
--- a/docs/installing-the-driver-on-mac-os-x/index.html
+++ b/docs/installing-the-driver-on-mac-os-x/index.html
@@ -1062,7 +1062,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Click the following link to download the driver:  </p>
 
@@ -1070,7 +1070,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <p>To install the driver, complete the following steps:</p>
 
@@ -1092,7 +1092,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 <li><code>/opt/mapr/drillodbc/lib/universal</code> – Binaries directory</li>
 </ul>
 
-<h2 id="step-3:-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
+<h2 id="step-3-check-the-mapr-drill-odbc-driver-version">Step 3: Check the MapR Drill ODBC Driver version</h2>
 
 <p>To check the version of the driver you installed, use the following command on the terminal command line:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">$ pkgutil --info mapr.drillodbc

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/installing-the-driver-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/installing-the-driver-on-windows/index.html b/docs/installing-the-driver-on-windows/index.html
index 0db91dd..67484b2 100644
--- a/docs/installing-the-driver-on-windows/index.html
+++ b/docs/installing-the-driver-on-windows/index.html
@@ -1071,7 +1071,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-1:-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
+<h2 id="step-1-download-the-mapr-drill-odbc-driver">Step 1: Download the MapR Drill ODBC Driver</h2>
 
 <p>Download the installer that corresponds to the bitness of the client application from which you want to create an ODBC connection:</p>
 
@@ -1082,7 +1082,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-2:-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
+<h2 id="step-2-install-the-mapr-drill-odbc-driver">Step 2: Install the MapR Drill ODBC Driver</h2>
 
 <ol>
 <li>Double-click the installer from the location where you downloaded it.</li>
@@ -1095,7 +1095,7 @@ Example: <code>127.0.0.1 localhost</code></p></li>
 
 <hr>
 
-<h2 id="step-3:-verify-the-installation">Step 3: Verify the installation</h2>
+<h2 id="step-3-verify-the-installation">Step 3: Verify the installation</h2>
 
 <p>To verify the installation, perform the following steps:</p>
 
@@ -1112,7 +1112,7 @@ The ODBC Data Source Administrator dialog appears.
 
 <p>You need to configure and start Drill before <a href="/docs/testing-the-odbc-connection/">testing</a> the ODBC connection in the ODBC Data Source Administrator.</p>
 
-<h2 id="the-tableau-data-connection-customization-(tdc)-file">The Tableau Data-connection Customization (TDC) File</h2>
+<h2 id="the-tableau-data-connection-customization-tdc-file">The Tableau Data-connection Customization (TDC) File</h2>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance
 when using Tableau.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/json-data-model/index.html
----------------------------------------------------------------------
diff --git a/docs/json-data-model/index.html b/docs/json-data-model/index.html
index 10d9faf..c79a7e5 100644
--- a/docs/json-data-model/index.html
+++ b/docs/json-data-model/index.html
@@ -1120,7 +1120,7 @@ Reads all data from JSON files as VARCHAR. You need to cast numbers from VARCHAR
 
 <p>Drill uses these types internally for reading complex and nested data structures from data sources such as JSON.</p>
 
-<h3 id="experimental-feature:-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
+<h3 id="experimental-feature-heterogeneous-types">Experimental Feature: Heterogeneous types</h3>
 
 <p>The Union type allows storing different types in the same field. This new feature is still considered experimental, and must be explicitly enabled by setting the <code>exec.enable_union_type</code> option to true.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `exec.enable_union_type` = true;
@@ -1216,11 +1216,11 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 <li>Generate key/value pairs for loosely structured data</li>
 </ul>
 
-<h2 id="example:-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
+<h2 id="example-flatten-and-generate-key-values-for-complex-json">Example: Flatten and Generate Key Values for Complex JSON</h2>
 
 <p>This example uses the following data that represents unit sales of tickets to events that were sold over a period of several days in December:</p>
 
-<h3 id="ticket_sales.json-contents">ticket_sales.json Contents</h3>
+<h3 id="ticket_sales-json-contents">ticket_sales.json Contents</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;ticket&quot;,
   &quot;venue&quot;: 123455,
@@ -1251,7 +1251,7 @@ y[z].x because these references are not ambiguous. Observe the following guideli
 +---------+---------+---------------------------------------------------------------+
 2 rows selected (1.343 seconds)
 </code></pre></div>
-<h3 id="generate-key/value-pairs">Generate Key/Value Pairs</h3>
+<h3 id="generate-key-value-pairs">Generate Key/Value Pairs</h3>
 
 <p>Continuing with the data from the <a href="/docs/json-data-model/#example-flatten-and-generate-key-values-for-complex-json">previous example</a>, use the KVGEN (Key Value Generator) function to generate key/value pairs from complex data. Generating key/value pairs is often helpful when working with data that contains arbitrary maps consisting of dynamic and unknown element names, such as the ticket sales data in this example. For example purposes, take a look at how kvgen breaks the sales data into keys and values representing the key dates and number of tickets sold:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT KVGEN(tkt.sales) AS `key dates:tickets sold` FROM dfs.`/Users/drilluser/ticket_sales.json` tkt;
@@ -1285,7 +1285,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------------------------+
 8 rows selected (0.171 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
+<h3 id="example-aggregate-loosely-structured-data">Example: Aggregate Loosely Structured Data</h3>
 
 <p>Use flatten and kvgen together to aggregate the data from the <a href="/docs/json-data-model/#example-flatten-and-generate-key-values-for-complex-json">previous example</a>. Make sure all text mode is set to false to sum numbers. Drill returns an error if you attempt to sum data in all text mode.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SYSTEM SET `store.json.all_text_mode` = false;
@@ -1300,7 +1300,7 @@ FROM dfs.`/Users/drilluser/drill/ticket_sales.json`;
 +--------------+
 1 row selected (0.244 seconds)
 </code></pre></div>
-<h3 id="example:-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
+<h3 id="example-aggregate-and-sort-data">Example: Aggregate and Sort Data</h3>
 
 <p>Sum and group the ticket sales by date and sort in ascending order of total tickets sold.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT `right`(tkt.tot_sales.key,2) `December Date`,
@@ -1321,7 +1321,7 @@ ORDER BY TotalSales;
 +----------------+-------------+
 5 rows selected (0.252 seconds)
 </code></pre></div>
-<h3 id="example:-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
+<h3 id="example-access-a-map-field-in-an-array">Example: Access a Map Field in an Array</h3>
 
 <p>To access a map field in an array, use dot notation to drill down through the hierarchy of the JSON data to the field. Examples are based on the following <a href="https://github.com/zemirco/sf-city-lots-json">City Lots San Francisco in .json</a>.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
@@ -1385,7 +1385,7 @@ FROM dfs.`/Users/drilluser/citylots.json`;
 
 <p>More examples of drilling down into an array are shown in <a href="/docs/selecting-nested-data-for-a-column">&quot;Selecting Nested Data for a Column&quot;</a>.</p>
 
-<h3 id="example:-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
+<h3 id="example-flatten-an-array-of-maps-using-a-subquery">Example: Flatten an Array of Maps using a Subquery</h3>
 
 <p>By flattening the following JSON file, which contains an array of maps, you can evaluate the records of the flattened data.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;name&quot;:&quot;classic&quot;,&quot;fillings&quot;:[ {&quot;name&quot;:&quot;sugar&quot;,&quot;cal&quot;:500} , {&quot;name&quot;:&quot;flour&quot;,&quot;cal&quot;:300} ] }
@@ -1401,7 +1401,7 @@ SELECT flat.fill FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test
 </code></pre></div>
 <p>Use a table alias for column fields and functions when working with complex data sets. Currently, you must use a subquery when operating on a flattened column. Eliminating the subquery and table alias in the WHERE clause, for example <code>flat.fillings[0].cal &gt; 300</code>, does not evaluate all records of the flattened data against the predicate and produces the wrong results.</p>
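 
 <p>A minimal sketch of the working form, using a hypothetical <code>test.json</code> path: the predicate references the flattened alias inside the subquery, so every flattened record is evaluated.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">-- hypothetical file path and alias names
 SELECT flat.fill
 FROM (SELECT FLATTEN(t.fillings) AS fill FROM dfs.flatten.`test.json` t) flat
 WHERE flat.fill.cal &gt; 300;
 </code></pre></div>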
 
-<h3 id="example:-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
+<h3 id="example-access-map-fields-in-a-map">Example: Access Map Fields in a Map</h3>
 
 <p>This example uses a WHERE clause to drill down to a third level of the following JSON hierarchy to get the max_hdl greater than 160:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/kvgen/index.html
----------------------------------------------------------------------
diff --git a/docs/kvgen/index.html b/docs/kvgen/index.html
index 7356b7c..0bd4e4b 100644
--- a/docs/kvgen/index.html
+++ b/docs/kvgen/index.html
@@ -1122,7 +1122,7 @@ array down into multiple distinct rows and further query those rows.</p>
 {&quot;key&quot;: &quot;c&quot;, &quot;value&quot;: &quot;valC&quot;}
 {&quot;key&quot;: &quot;d&quot;, &quot;value&quot;: &quot;valD&quot;}
 </code></pre></div>
-<h2 id="example:-different-data-type-values">Example: Different Data Type Values</h2>
+<h2 id="example-different-data-type-values">Example: Different Data Type Values</h2>
 
 <p>Assume that a JSON file called <code>kvgendata.json</code> includes multiple records that
 look like this one:</p>


[2/3] drill-site git commit: squash 4 commits

Posted by kr...@apache.org.
http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-1-learn-about-the-data-set/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-1-learn-about-the-data-set/index.html b/docs/lesson-1-learn-about-the-data-set/index.html
index 2dc77f9..350b040 100644
--- a/docs/lesson-1-learn-about-the-data-set/index.html
+++ b/docs/lesson-1-learn-about-the-data-set/index.html
@@ -1079,7 +1079,7 @@ the Drill shell, type:</p>
 +-------+--------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="list-the-available-workspaces-and-databases:">List the available workspaces and databases:</h3>
+<h3 id="list-the-available-workspaces-and-databases">List the available workspaces and databases:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; show databases;
 +---------------------+
 |     SCHEMA_NAME     |
@@ -1109,7 +1109,7 @@ different database schemas (namespaces) in a relational database system.</p>
 This is a Hive external table pointing to the data stored in flat files on the
 MapR file system. The orders table contains 122,000 rows.</p>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1121,7 +1121,7 @@ MapR file system. The orders table contains 122,000 rows.</p>
 <p>You will run the USE command throughout this tutorial. The USE command sets
 the schema for the current session.</p>
 
-<h3 id="describe-the-table:">Describe the table:</h3>
+<h3 id="describe-the-table">Describe the table:</h3>
 
 <p>You can use the DESCRIBE command to show the columns and data types for a Hive
 table:</p>
@@ -1140,7 +1140,7 @@ table:</p>
 <p>The DESCRIBE command returns complete schema information for Hive tables based
 on the metadata available in the Hive metastore.</p>
 
-<h3 id="select-5-rows-from-the-orders-table:">Select 5 rows from the orders table:</h3>
+<h3 id="select-5-rows-from-the-orders-table">Select 5 rows from the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from orders limit 5;
 +------------+------------+------------+------------+------------+-------------+
 |  order_id  |   month    |  cust_id   |   state    |  prod_id   | order_total |
@@ -1198,7 +1198,7 @@ columns typical of a time-series database.</p>
 
 <p>The customers table contains 993 rows.</p>
 
-<h3 id="set-the-workspace-to-maprdb:">Set the workspace to maprdb:</h3>
+<h3 id="set-the-workspace-to-maprdb">Set the workspace to maprdb:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1207,7 +1207,7 @@ columns typical of a time-series database.</p>
 +-------+-------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="describe-the-tables:">Describe the tables:</h3>
+<h3 id="describe-the-tables">Describe the tables:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; describe customers;
 +--------------+------------------------+--------------+
 | COLUMN_NAME  |       DATA_TYPE        | IS_NULLABLE  |
@@ -1240,7 +1240,7 @@ structure, and “ANY” represents the fact that the column value can be of any
 data type. Observe the row_key, which is also simply bytes and has the type
 ANY.</p>
 
-<h3 id="select-5-rows-from-the-products-table:">Select 5 rows from the products table:</h3>
+<h3 id="select-5-rows-from-the-products-table">Select 5 rows from the products table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from products limit 5;
 +--------------+----------------------------------------------------------------------------------------------------------------+-------------------+
 |   row_key    |                                                    details                                                     |      pricing      |
@@ -1260,7 +1260,7 @@ and pricing) have the map data type and appear as JSON strings.</p>
 
 <p>In Lesson 2, you will use CAST functions to return typed data for each column.</p>
 
-<h3 id="select-5-rows-from-the-customers-table:">Select 5 rows from the customers table:</h3>
+<h3 id="select-5-rows-from-the-customers-table">Select 5 rows from the customers table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+0: jdbc:drill:&gt; select * from customers limit 5;
 +--------------+-----------------------+-------------------------------------------------+---------------------------------------------------------------------------------------+
 |   row_key    |        address        |                     loyalty                     |                                       personal                                        |
@@ -1304,7 +1304,7 @@ setup beyond the definition of a workspace.</p>
 
 <h3 id="query-nested-clickstream-data">Query nested clickstream data</h3>
 
-<h4 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h4>
+<h4 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1325,7 +1325,7 @@ location specified in the workspace. For example:</p>
 relative to this path. The clicks directory referred to in the following query
 is directly below the nested directory.</p>
 
-<h4 id="select-2-rows-from-the-clicks.json-file:">Select 2 rows from the clicks.json file:</h4>
+<h4 id="select-2-rows-from-the-clicks-json-file">Select 2 rows from the clicks.json file:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+-------------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |                trans_info                 |
@@ -1343,7 +1343,7 @@ to refer to a file in a local or distributed file system.</p>
 path. This is necessary whenever the file path contains Drill reserved words
 or characters.</p>
 
-<h4 id="select-2-rows-from-the-campaign.json-file:">Select 2 rows from the campaign.json file:</h4>
+<h4 id="select-2-rows-from-the-campaign-json-file">Select 2 rows from the campaign.json file:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from `clicks/clicks.campaign.json` limit 2;
 +-----------+-------------+-----------+---------------------------------------------------+---------------------+----------------------------------------+
 | trans_id  |    date     |   time    |                     user_info                     |       ad_info       |               trans_info               |
@@ -1377,7 +1377,7 @@ for that month. The total number of records in all log files is 48000.</p>
 are many of these files, but you can use Drill to query them all as a single
 data source, or to query a subset of the files.</p>
 
-<h4 id="set-the-workspace-to-dfs.logs:">Set the workspace to dfs.logs:</h4>
+<h4 id="set-the-workspace-to-dfs-logs">Set the workspace to dfs.logs:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1386,7 +1386,7 @@ data source, or to query a subset of the files.</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h4 id="select-2-rows-from-the-logs-directory:">Select 2 rows from the logs directory:</h4>
+<h4 id="select-2-rows-from-the-logs-directory">Select 2 rows from the logs directory:</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs limit 2;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1405,7 +1405,7 @@ directory path on the file system.</p>
 subdirectories below the logs directory. In Lesson 3, you will do more complex
 queries that leverage these dynamic variables.</p>
 
-<h4 id="find-the-total-number-of-rows-in-the-logs-directory-(all-files):">Find the total number of rows in the logs directory (all files):</h4>
+<h4 id="find-the-total-number-of-rows-in-the-logs-directory-all-files">Find the total number of rows in the logs directory (all files):</h4>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select count(*) from logs;
 +---------+
 | EXPR$0  |
@@ -1417,7 +1417,7 @@ queries that leverage these dynamic variables.</p>
 <p>This query traverses all of the files in the logs directory and its
 subdirectories to return the total number of rows in those files.</p>
 
-<h1 id="what&#39;s-next">What&#39;s Next</h1>
+<h1 id="what-39-s-next">What&#39;s Next</h1>
 
 <p>Go to <a href="/docs/lesson-2-run-queries-with-ansi-sql">Lesson 2: Run Queries with ANSI
 SQL</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-2-run-queries-with-ansi-sql/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-2-run-queries-with-ansi-sql/index.html b/docs/lesson-2-run-queries-with-ansi-sql/index.html
index d2acab0..506cb29 100644
--- a/docs/lesson-2-run-queries-with-ansi-sql/index.html
+++ b/docs/lesson-2-run-queries-with-ansi-sql/index.html
@@ -1057,7 +1057,7 @@ statement.</p>
 
 <h2 id="aggregation">Aggregation</h2>
 
-<h3 id="set-the-schema-to-hive:">Set the schema to hive:</h3>
+<h3 id="set-the-schema-to-hive">Set the schema to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1066,7 +1066,7 @@ statement.</p>
 +-------+-------------------------------------------+
 1 row selected 
 </code></pre></div>
-<h3 id="return-sales-totals-by-month:">Return sales totals by month:</h3>
+<h3 id="return-sales-totals-by-month">Return sales totals by month:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, sum(order_total)
 from orders group by `month` order by 2 desc;
 +------------+---------+
@@ -1092,7 +1092,7 @@ database queries.</p>
 <p>Note that back ticks are required for the “month” column only because “month”
 is a reserved word in SQL.</p>
 
-<h3 id="return-the-top-20-sales-totals-by-month-and-state:">Return the top 20 sales totals by month and state:</h3>
+<h3 id="return-the-top-20-sales-totals-by-month-and-state">Return the top 20 sales totals by month and state:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select `month`, state, sum(order_total) as sales from orders group by `month`, state
 order by 3 desc limit 20;
 +-----------+--------+---------+
@@ -1128,7 +1128,7 @@ aliases and table aliases.</p>
 
 <p>This query uses the HAVING clause to constrain an aggregate result.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks">Set the workspace to dfs.clicks</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1137,7 +1137,7 @@ aliases and table aliases.</p>
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs:">Return total number of clicks for devices that indicate high click-throughs:</h3>
+<h3 id="return-total-number-of-clicks-for-devices-that-indicate-high-click-throughs">Return total number of clicks for devices that indicate high click-throughs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.device, count(*) from `clicks/clicks.json` t 
 group by t.user_info.device
 having count(*) &gt; 1000;
@@ -1180,7 +1180,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 
 <h2 id="subqueries">Subqueries</h2>
 
-<h3 id="set-the-workspace-to-hive:">Set the workspace to hive:</h3>
+<h3 id="set-the-workspace-to-hive">Set the workspace to hive:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use hive.`default`;
 +-------+-------------------------------------------+
 |  ok   |                  summary                  |
@@ -1189,7 +1189,7 @@ duplicate rows from those files): <code>clicks.campaign.json</code> and <code>cl
 +-------+-------------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="compare-order-totals-across-states:">Compare order totals across states:</h3>
+<h3 id="compare-order-totals-across-states">Compare order totals across states:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select ny_sales.cust_id, ny_sales.total_orders, ca_sales.total_orders
 from
 (select o.cust_id, sum(o.order_total) as total_orders from hive.orders o where state = &#39;ny&#39; group by o.cust_id) ny_sales
@@ -1227,7 +1227,7 @@ limit 20;
 
 <h2 id="cast-function">CAST Function</h2>
 
-<h3 id="use-the-maprdb-workspace:">Use the maprdb workspace:</h3>
+<h3 id="use-the-maprdb-workspace">Use the maprdb workspace:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use maprdb;
 +-------+-------------------------------------+
 |  ok   |               summary               |
@@ -1260,7 +1260,7 @@ from customers t limit 5;
 <li>The table alias t is required; otherwise the column family names would be parsed as table names and the query would return an error.</li>
 </ul>
 
-<h3 id="remove-the-quotes-from-the-strings:">Remove the quotes from the strings:</h3>
+<h3 id="remove-the-quotes-from-the-strings">Remove the quotes from the strings:</h3>
 
 <p>You can use the regexp_replace function to remove the quotes around the
 strings in the query results. For example, to return a state name va instead
@@ -1283,7 +1283,7 @@ from customers t limit 1;
 +-------+----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="use-a-mutable-workspace:">Use a mutable workspace:</h3>
+<h3 id="use-a-mutable-workspace">Use a mutable workspace:</h3>
 
 <p>A mutable (or writable) workspace is a workspace that is enabled for “write”
 operations. This attribute is part of the storage plugin configuration. You
@@ -1322,7 +1322,7 @@ statement.</p>
 defined in data sources such as Hive, HBase, and the file system. Drill also
 supports the creation of metadata in the file system.</p>
 
-<h3 id="query-data-from-the-view:">Query data from the view:</h3>
+<h3 id="query-data-from-the-view">Query data from the view:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from custview limit 1;
 +----------+-------------------+-----------+----------+--------+----------+-------------+
 | cust_id  |       name        |  gender   |   age    | state  | agg_rev  | membership  |
@@ -1337,7 +1337,7 @@ supports the creation of metadata in the file system.</p>
 
 <p>Continue using <code>dfs.views</code> for this query.</p>
 
-<h3 id="join-the-customers-view-and-the-orders-table:">Join the customers view and the orders table:</h3>
+<h3 id="join-the-customers-view-and-the-orders-table">Join the customers view and the orders table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select membership, sum(order_total) as sales from hive.orders, custview
 where orders.cust_id=custview.cust_id
 group by membership order by 2;
@@ -1363,7 +1363,7 @@ rows are wide, set the maximum width of the display to 10000:</p>
 
 <p>Do not use a semicolon for this SET command.</p>
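 
 <p>As a sketch, the command looks like this in the SQLLine shell that Drill uses:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; !set maxwidth 10000
 </code></pre></div>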
 
-<h3 id="join-the-customers,-orders,-and-clickstream-data:">Join the customers, orders, and clickstream data:</h3>
+<h3 id="join-the-customers-orders-and-clickstream-data">Join the customers, orders, and clickstream data:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select custview.membership, sum(orders.order_total) as sales from hive.orders, custview,
 dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json` c 
 where orders.cust_id=custview.cust_id and orders.cust_id=c.user_info.cust_id 
@@ -1393,7 +1393,7 @@ hive.orders table is also visible to the query.</p>
 workspace, so the query specifies the full path to the file:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">dfs.`/mapr/demo.mapr.com/data/nested/clicks/clicks.json`
 </code></pre></div>
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Go to <a href="/docs/lesson-3-run-queries-on-complex-data-types">Lesson 3: Run Queries on Complex Data Types</a>. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/lesson-3-run-queries-on-complex-data-types/index.html
----------------------------------------------------------------------
diff --git a/docs/lesson-3-run-queries-on-complex-data-types/index.html b/docs/lesson-3-run-queries-on-complex-data-types/index.html
index 8704785..e3e3fdd 100644
--- a/docs/lesson-3-run-queries-on-complex-data-types/index.html
+++ b/docs/lesson-3-run-queries-on-complex-data-types/index.html
@@ -1068,7 +1068,7 @@ exist. Here is a visual example of how this works:</p>
 
 <p><img src="/docs/img/example_query.png" alt="drill query flow"></p>
 
-<h3 id="set-workspace-to-dfs.logs:">Set workspace to dfs.logs:</h3>
+<h3 id="set-workspace-to-dfs-logs">Set workspace to dfs.logs:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.logs;
 +-------+---------------------------------------+
 |  ok   |                summary                |
@@ -1077,7 +1077,7 @@ exist. Here is a visual example of how this works:</p>
 +-------+---------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="query-logs-data-for-a-specific-year:">Query logs data for a specific year:</h3>
+<h3 id="query-logs-data-for-a-specific-year">Query logs data for a specific year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from logs where dir0=&#39;2013&#39; limit 10;
 +-------+-------+-----------+-------------+-----------+----------+---------+--------+----------+-----------+----------+-------------+
 | dir0  | dir1  | trans_id  |    date     |   time    | cust_id  | device  | state  | camp_id  | keywords  | prod_id  | purch_flag  |
@@ -1099,7 +1099,7 @@ exist. Here is a visual example of how this works:</p>
 dir0 refers to the first level down from logs, dir1 to the next level, and so
 on. So this query returned 10 of the rows for February 2013.</p>
 
-<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query:">Further constrain the results using multiple predicates in the query:</h3>
+<h3 id="further-constrain-the-results-using-multiple-predicates-in-the-query">Further constrain the results using multiple predicates in the query:</h3>
 
 <p>This query returns a list of customer IDs for people who made a purchase via
 an IOS5 device in August 2013.</p>
@@ -1116,7 +1116,7 @@ order by `date`;
 
 ...
 </code></pre></div>
-<h3 id="return-monthly-counts-per-customer-for-a-given-year:">Return monthly counts per customer for a given year:</h3>
+<h3 id="return-monthly-counts-per-customer-for-a-given-year">Return monthly counts per customer for a given year:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select cust_id, dir1 month_no, count(*) month_count from logs
 where dir0=2014 group by cust_id, dir1 order by cust_id, month_no limit 10;
 +----------+-----------+--------------+
@@ -1144,7 +1144,7 @@ year: 2014.</p>
 analyze nested data natively without transformation. If you are familiar with
 JavaScript notation, you will already know how some of these extensions work.</p>
 
-<h3 id="set-the-workspace-to-dfs.clicks:">Set the workspace to dfs.clicks:</h3>
+<h3 id="set-the-workspace-to-dfs-clicks">Set the workspace to dfs.clicks:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1153,7 +1153,7 @@ JavaScript notation, you will already know how some of these extensions work.</p
 +-------+-----------------------------------------+
 1 row selected
 </code></pre></div>
-<h3 id="explore-clickstream-data:">Explore clickstream data:</h3>
+<h3 id="explore-clickstream-data">Explore clickstream data:</h3>
 
 <p>Note that the user_info and trans_info columns contain nested data: arrays and
 arrays within arrays. The following queries show how to access this complex
@@ -1170,7 +1170,7 @@ data.</p>
 +-----------+-------------+-----------+---------------------------------------------------+---------------------------------------------------------------------------+
 5 rows selected
 </code></pre></div>
-<h3 id="unpack-the-user_info-column:">Unpack the user_info column:</h3>
+<h3 id="unpack-the-user_info-column">Unpack the user_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.user_info.cust_id as custid, t.user_info.device as device,
 t.user_info.state as state
 from `clicks/clicks.json` t limit 5;
@@ -1195,7 +1195,7 @@ column name, and <code>cust_id</code> is a nested column name.</p>
 <p>The table alias is required; otherwise column names such as <code>user_info</code> are
 parsed as table names by the SQL parser.</p>
 
-<h3 id="unpack-the-trans_info-column:">Unpack the trans_info column:</h3>
+<h3 id="unpack-the-trans_info-column">Unpack the trans_info column:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_info.prod_id as prodid, t.trans_info.purch_flag as
 purchased
 from `clicks/clicks.json` t limit 5;
@@ -1228,7 +1228,7 @@ notation to write interesting queries against nested array data.</p>
 </code></pre></div>
 <p>refers to the 21st value, assuming one exists.</p>
 
-<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction:">Find the first product that is searched for in each transaction:</h3>
+<h3 id="find-the-first-product-that-is-searched-for-in-each-transaction">Find the first product that is searched for in each transaction:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[0] from `clicks/clicks.json` t limit 5;
 +------------+------------+
 |  trans_id  |   EXPR$1   |
@@ -1241,7 +1241,7 @@ notation to write interesting queries against nested array data.</p>
 +------------+------------+
 5 rows selected
 </code></pre></div>
-<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products?">For which transactions did customers search on at least 21 products?</h3>
+<h3 id="for-which-transactions-did-customers-search-on-at-least-21-products">For which transactions did customers search on at least 21 products?</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.trans_info.prod_id[20]
 from `clicks/clicks.json` t
 where t.trans_info.prod_id[20] is not null
@@ -1260,7 +1260,7 @@ order by trans_id limit 5;
 <p>This query returns transaction IDs and product IDs for records that contain a
 non-null product ID at the 21st position in the array.</p>
 
-<h3 id="return-clicks-for-a-specific-product-range:">Return clicks for a specific product range:</h3>
+<h3 id="return-clicks-for-a-specific-product-range">Return clicks for a specific product range:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select * from (select t.trans_id, t.trans_info.prod_id[0] as prodid,
 t.trans_info.purch_flag as purchased
 from `clicks/clicks.json` t) sq
@@ -1283,7 +1283,7 @@ ordered list of products purchased rather than a random list).</p>
 
 <h2 id="perform-operations-on-arrays">Perform Operations on Arrays</h2>
 
-<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session:">Rank successful click conversions and count product searches for each session:</h3>
+<h3 id="rank-successful-click-conversions-and-count-product-searches-for-each-session">Rank successful click conversions and count product searches for each session:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select t.trans_id, t.`date` as session_date, t.user_info.cust_id as
 cust_id, t.user_info.device as device, repeated_count(t.trans_info.prod_id) as
 prod_count, t.trans_info.purch_flag as purch_flag
@@ -1309,7 +1309,7 @@ in descending order. Only clicks that have resulted in a purchase are counted.</
 <p>To facilitate additional analysis on this result set, you can easily and
 quickly create a Drill table from the results of the query.</p>
 
-<h3 id="continue-to-use-the-dfs.clicks-workspace">Continue to use the dfs.clicks workspace</h3>
+<h3 id="continue-to-use-the-dfs-clicks-workspace">Continue to use the dfs.clicks workspace</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; use dfs.clicks;
 +-------+-----------------------------------------+
 |  ok   |                 summary                 |
@@ -1318,7 +1318,7 @@ quickly create a Drill table from the results of the query.</p>
 +-------+-----------------------------------------+
 1 row selected (1.61 seconds)
 </code></pre></div>
-<h3 id="return-product-searches-for-high-value-customers:">Return product searches for high-value customers:</h3>
+<h3 id="return-product-searches-for-high-value-customers">Return product searches for high-value customers:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from 
 hive.orders as o
@@ -1342,7 +1342,7 @@ where o.order_total &gt; (select avg(inord.order_total)
 <p>This query returns a list of products that are being searched for by customers
 who have made transactions that are above the average in their states.</p>
 
-<h3 id="materialize-the-result-of-the-previous-query:">Materialize the result of the previous query:</h3>
+<h3 id="materialize-the-result-of-the-previous-query">Materialize the result of the previous query:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; create table product_search as select o.cust_id, o.order_total, t.trans_info.prod_id[0] as prod_id
 from
 hive.orders as o
@@ -1364,7 +1364,7 @@ query returns (107,482) and stores them in the format specified by the storage
 plugin (Parquet format in this example). You can create tables that store data
 in csv, parquet, and json formats.</p>
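 
 <p>For example, a sketch of switching the output format with the <code>store.format</code> session option before running a CTAS statement:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:&gt; alter session set `store.format` = &#39;json&#39;;
 </code></pre></div>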
 
-<h3 id="query-the-new-table-to-verify-the-row-count:">Query the new table to verify the row count:</h3>
+<h3 id="query-the-new-table-to-verify-the-row-count">Query the new table to verify the row count:</h3>
 
 <p>This example simply checks that the CTAS statement worked by verifying the
 number of rows in the table.</p>
@@ -1376,7 +1376,7 @@ number of rows in the table.</p>
 +---------+
 1 row selected (0.155 seconds)
 </code></pre></div>
-<h3 id="find-the-storage-file-for-the-table:">Find the storage file for the table:</h3>
+<h3 id="find-the-storage-file-for-the-table">Find the storage file for the table:</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">[root@maprdemo product_search]# cd /mapr/demo.mapr.com/data/nested/product_search
 [root@maprdemo product_search]# ls -la
 total 451
@@ -1390,7 +1390,7 @@ stored in the location defined by the dfs.clicks workspace:</p>
 </code></pre></div>
 <p>There is a subdirectory that has the same name as the table you created.</p>
 
-<h2 id="what&#39;s-next">What&#39;s Next</h2>
+<h2 id="what-39-s-next">What&#39;s Next</h2>
 
 <p>Complete the tutorial with the <a href="/docs/summary">Summary</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/mongodb-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/mongodb-storage-plugin/index.html b/docs/mongodb-storage-plugin/index.html
index 4f67e6c..25d9e59 100644
--- a/docs/mongodb-storage-plugin/index.html
+++ b/docs/mongodb-storage-plugin/index.html
@@ -1210,7 +1210,7 @@ Drill data sources, including MongoDB. </p>
 | -72.576142 |
 +------------+
 </code></pre></div>
-<h2 id="using-odbc/jdbc-drivers">Using ODBC/JDBC Drivers</h2>
+<h2 id="using-odbc-jdbc-drivers">Using ODBC/JDBC Drivers</h2>
 
 <p>You can query MongoDB through standard
 BI tools, such as Tableau and SQuirreL. For information about Drill ODBC and JDBC drivers, refer to <a href="/docs/odbc-jdbc-interfaces">Drill Interfaces</a>.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/odbc-configuration-reference/index.html
----------------------------------------------------------------------
diff --git a/docs/odbc-configuration-reference/index.html b/docs/odbc-configuration-reference/index.html
index 89d0458..4c36445 100644
--- a/docs/odbc-configuration-reference/index.html
+++ b/docs/odbc-configuration-reference/index.html
@@ -1328,7 +1328,7 @@ The Simba ODBC Driver for Apache Drill produces two log files at the location yo
 <li>Save the mapr.drillodbc.ini configuration file.</li>
 </ol>
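 
 <p>For reference, a sketch of the logging keys in the [Driver] section of mapr.drillodbc.ini (the level and path values shown here are illustrative):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">[Driver]
 LogLevel=6
 LogPath=/var/log/mapr/drillodbc
 </code></pre></div>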
 
-<h4 id="what&#39;s-next?-go-to-connecting-to-odbc-data-sources.">What&#39;s Next? Go to <a href="/docs/connecting-to-odbc-data-sources">Connecting to ODBC Data Sources</a>.</h4>
+<h4 id="what-39-s-next-go-to-connecting-to-odbc-data-sources">What&#39;s Next? Go to <a href="/docs/connecting-to-odbc-data-sources">Connecting to ODBC Data Sources</a>.</h4>
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/parquet-format/index.html
----------------------------------------------------------------------
diff --git a/docs/parquet-format/index.html b/docs/parquet-format/index.html
index cb9efd1..19b403c 100644
--- a/docs/parquet-format/index.html
+++ b/docs/parquet-format/index.html
@@ -1097,7 +1097,7 @@
 <li>In the CTAS command, cast JSON string data to corresponding <a href="/docs/json-data-model/#data-type-mapping">SQL types</a>.</li>
 </ul>
 
-<h3 id="example:-read-json,-write-parquet">Example: Read JSON, Write Parquet</h3>
+<h3 id="example-read-json-write-parquet">Example: Read JSON, Write Parquet</h3>
 
 <p>This example demonstrates a storage plugin definition, a sample row of data from a JSON file, and a Drill query that writes the JSON input to Parquet output. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/partition-pruning/index.html
----------------------------------------------------------------------
diff --git a/docs/partition-pruning/index.html b/docs/partition-pruning/index.html
index 98c8378..b6196b9 100644
--- a/docs/partition-pruning/index.html
+++ b/docs/partition-pruning/index.html
@@ -1035,7 +1035,7 @@
 
 <p>The query planner in Drill performs partition pruning by evaluating the filters. If no partition filters are present, the underlying Scan operator reads all files in all directories and then sends the data to operators, such as Filter, downstream. When partition filters are present, the query planner pushes the filters down to the Scan if possible. The Scan reads only the directories that match the partition filters, thus reducing disk I/O.</p>
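 
 <p>For example (a sketch over a hypothetical logs directory), a filter on the <a href="/docs/querying-directories">dir* variables</a> lets the Scan read only the matching subdirectories:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">-- hypothetical workspace path
 SELECT * FROM dfs.`/logs` WHERE dir0 = &#39;2015&#39; AND dir1 = &#39;Q1&#39;;
 </code></pre></div>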
 
-<h2 id="migrating-partitioned-data-from-drill-1.1-1.2-to-drill-1.3">Migrating Partitioned Data from Drill 1.1-1.2 to Drill 1.3</h2>
+<h2 id="migrating-partitioned-data-from-drill-1-1-1-2-to-drill-1-3">Migrating Partitioned Data from Drill 1.1-1.2 to Drill 1.3</h2>
 
 <p>Use the <a href="https://github.com/parthchandra/drill-upgrade">drill-upgrade tool</a> to migrate Parquet data that you generated in Drill 1.1 or 1.2 before attempting to use the data with Drill 1.3 partition pruning.  This migration is mandatory because Parquet data generated by Drill 1.1 and 1.2 must be marked as Drill-generated, as described in <a href="https://issues.apache.org/jira/browse/DRILL-4070">DRILL-4070</a>. </p>
 
@@ -1055,7 +1055,7 @@
 
 <p>Unlike Drill 1.0 partitioning, no view query is subsequently required, nor is it necessary to use the <a href="/docs/querying-directories">dir* variables</a> after you use the Drill 1.1 PARTITION BY clause in a CTAS statement. </p>
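 
 <p>A minimal sketch of the Drill 1.1 syntax, with hypothetical table and column names:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">-- hypothetical table and column names
 CREATE TABLE dfs.tmp.sales_by_yr PARTITION BY (yr) AS
 SELECT yr, amount FROM dfs.`/data/sales`;
 </code></pre></div>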
 
-<h2 id="drill-1.0-partitioning">Drill 1.0 Partitioning</h2>
+<h2 id="drill-1-0-partitioning">Drill 1.0 Partitioning</h2>
 
 <p>You perform the following steps to partition data in Drill 1.0.   </p>
 
@@ -1067,7 +1067,7 @@
 
 <p>After partitioning the data, you need to create a view of the partitioned data to query the data. You can use the <a href="/docs/querying-directories">dir* variables</a> in queries to refer to subdirectories in your workspace path.</p>
 
-<h3 id="drill-1.0-partitioning-example">Drill 1.0 Partitioning Example</h3>
+<h3 id="drill-1-0-partitioning-example">Drill 1.0 Partitioning Example</h3>
 
 <p>Suppose you have text files containing several years of log data. To partition the data by year and quarter, create the following hierarchy of directories:  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   …/logs/1994/Q1  

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/plugin-configuration-basics/index.html
----------------------------------------------------------------------
diff --git a/docs/plugin-configuration-basics/index.html b/docs/plugin-configuration-basics/index.html
index 81d2df3..0cd5375 100644
--- a/docs/plugin-configuration-basics/index.html
+++ b/docs/plugin-configuration-basics/index.html
@@ -1123,13 +1123,13 @@ Using a copy of an existing configuration reduces the risk of JSON coding errors
   </tr>
   <tr>
     <td>&quot;formats&quot;</td>
-    <td>&quot;psv&quot;<br>&quot;csv&quot;<br>&quot;tsv&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;avro&quot;<br>&quot;maprdb&quot;<em><br>&quot;sequencefile&quot;</td>
+    <td>&quot;psv&quot;<br>&quot;csv&quot;<br>&quot;tsv&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;avro&quot;<br>&quot;maprdb&quot;<br>&quot;sequencefile&quot;</td>
     <td>yes</td>
-    <td>One or more valid file formats for reading. Drill implicitly detects formats of some files based on extension or bits of data in the file; others require configuration.</td>
+    <td>One or more valid file formats for reading. Drill detects the formats of some files; others require configuration. The maprdb format is included in installations of the mapr-drill package.</td>
   </tr>
   <tr>
     <td>&quot;formats&quot; . . . &quot;type&quot;</td>
-    <td>&quot;text&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;maprdb&quot;</em><br>&quot;avro&quot;<br>&quot;sequencefile&quot;</td>
+    <td>&quot;text&quot;<br>&quot;parquet&quot;<br>&quot;json&quot;<br>&quot;maprdb&quot;<br>&quot;avro&quot;<br>&quot;sequencefile&quot;</td>
     <td>yes</td>
     <td>Format type. You can define two formats, csv and psv, as type &quot;Text&quot;, but with different delimiters. </td>
   </tr>
@@ -1174,13 +1174,11 @@ Using a copy of an existing configuration reduces the risk of JSON coding errors
     <td>&quot;formats&quot; . . . &quot;extractHeader&quot;</td>
     <td>true</td>
     <td>no</td>
-    <td>Set to true to extract and use headers as column names when reading a delimited text file, false otherwise. Ensure skipFirstLine=false when extractHeader=true.
+    <td>Set to true to extract and use headers as column names when reading a delimited text file, false otherwise. Ensure skipFirstLine is not true when extractHeader=true.
     </td>
   </tr>
 </table></p>
 
-<p>* Pertains only to distributed Drill installations using the mapr-drill package.  </p>
-
 <h2 id="using-the-formats-attributes">Using the Formats Attributes</h2>
 
 <p>You set the formats attributes, such as skipFirstLine, in the <code>formats</code> area of the storage plugin configuration. When setting attributes for text files, such as CSV, you also need to set the <code>sys.options</code> property <code>exec.storage.enable_new_text_reader</code> to true (the default). For more information and examples of using formats for text files, see <a href="/docs/text-files-csv-tsv-psv/">&quot;Text Files: CSV, TSV, PSV&quot;</a>.  </p>
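 
 <p>For example, a sketch of a <code>formats</code> entry for CSV files that skips a header line (the values are illustrative):</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&quot;formats&quot;: {
   &quot;csv&quot;: {
     &quot;type&quot;: &quot;text&quot;,
     &quot;extensions&quot;: [&quot;csv&quot;],
     &quot;delimiter&quot;: &quot;,&quot;,
     &quot;skipFirstLine&quot;: true
   }
 }
 </code></pre></div>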

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-hbase/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-hbase/index.html b/docs/querying-hbase/index.html
index 4b752d7..a50f791 100644
--- a/docs/querying-hbase/index.html
+++ b/docs/querying-hbase/index.html
@@ -1044,7 +1044,7 @@ How to use optimization features in Drill 1.2 and later<br></li>
 How to use Drill 1.2 to leverage new features introduced by <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a></li>
 </ul>
 
-<h2 id="tutorial--querying-hbase-data">Tutorial--Querying HBase Data</h2>
+<h2 id="tutorial-querying-hbase-data">Tutorial--Querying HBase Data</h2>
 
 <p>This tutorial shows how to connect Drill to an HBase data source, create simple HBase tables, and query the data using Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-json-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-json-files/index.html b/docs/querying-json-files/index.html
index 86b4d0a..8e557a0 100644
--- a/docs/querying-json-files/index.html
+++ b/docs/querying-json-files/index.html
@@ -1035,7 +1035,7 @@
       
         <p>To query complex JSON files, you need to understand the <a href="/docs/json-data-model/">&quot;JSON Data Model&quot;</a>. This section provides a trivial example of querying a sample file that Drill installs. </p>
 
-<h2 id="about-the-employee.json-file">About the employee.json File</h2>
+<h2 id="about-the-employee-json-file">About the employee.json File</h2>
 
 <p>The sample file, <code>employee.json</code>, is packaged in the Foodmart data JAR in Drill&#39;s
 classpath:  </p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-plain-text-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-plain-text-files/index.html b/docs/querying-plain-text-files/index.html
index 72200d5..3184f28 100644
--- a/docs/querying-plain-text-files/index.html
+++ b/docs/querying-plain-text-files/index.html
@@ -1064,7 +1064,7 @@ found&quot; error if references to files in queries do not match these condition
       &quot;delimiter&quot;: &quot;|&quot;
     }
 </code></pre></div>
-<h2 id="select-*-from-a-csv-file">SELECT * FROM a CSV File</h2>
+<h2 id="select-from-a-csv-file">SELECT * FROM a CSV File</h2>
 
 <p>The first query selects rows from a <code>.csv</code> text file. The file contains seven
 records:</p>
@@ -1095,7 +1095,7 @@ each row.</p>
 +-----------------------------------+
 7 rows selected (0.089 seconds)
 </code></pre></div>
-<h2 id="columns[n]-syntax">Columns[n] Syntax</h2>
+<h2 id="columns-n-syntax">Columns[n] Syntax</h2>
 
 <p>You can use the <code>COLUMNS[n]</code> syntax in the SELECT list to return these CSV
 rows in a more readable, column by column, format. (This syntax uses a zero-

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-sequence-files/index.html b/docs/querying-sequence-files/index.html
index f1330a5..d4edaba 100644
--- a/docs/querying-sequence-files/index.html
+++ b/docs/querying-sequence-files/index.html
@@ -1036,7 +1036,7 @@
         <p>Sequence files are flat files that store binary key/value pairs.
 Drill projects a sequence file as a table with two columns, &#39;binary_key&#39; and &#39;binary_value&#39;.</p>
 
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h3 id="querying-sequence-file">Querying sequence file.</h3>
 
 <p>Start the Drill shell:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    SELECT *

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/querying-system-tables/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-system-tables/index.html b/docs/querying-system-tables/index.html
index 7e733a2..0ebb252 100644
--- a/docs/querying-system-tables/index.html
+++ b/docs/querying-system-tables/index.html
@@ -1093,7 +1093,7 @@ requests.</p>
 
 <p>Query the drillbits, version, options, boot, threads, and memory tables in the sys database.</p>
 
-<h3 id="query-the-drillbits-table.">Query the drillbits table.</h3>
+<h3 id="query-the-drillbits-table">Query the drillbits table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from drillbits;
 +-------------------+------------+--------------+------------+---------+
 |   hostname        |  user_port | control_port | data_port  |  current|
@@ -1121,7 +1121,7 @@ True means the Drillbit is connected to the session or client running the
 query. This Drillbit is the Foreman for the current session.<br></li>
 </ul>
 
-<h3 id="query-the-version-table.">Query the version table.</h3>
+<h3 id="query-the-version-table">Query the version table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from version;
 +-------------------------------------------+--------------------------------------------------------------------+----------------------------+--------------+----------------------------+
 |                 commit_id                 |                           commit_message                           |        commit_time         | build_email  |         build_time         |
@@ -1145,7 +1145,7 @@ example.</li>
 The time that the release was built.</li>
 </ul>
 
-<h3 id="query-the-options-table.">Query the options table.</h3>
+<h3 id="query-the-options-table">Query the options table.</h3>
 
 <p>Drill provides system, session, and boot options that you can query.</p>
 
@@ -1187,7 +1187,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-boot-table.">Query the boot table.</h3>
+<h3 id="query-the-boot-table">Query the boot table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from boot limit 10;
 +--------------------------------------+----------+-------+---------+------------+-------------------------+-----------+------------+
 |                 name                 |   kind   | type  | status  |  num_val   |       string_val        | bool_val  | float_val  |
@@ -1225,7 +1225,7 @@ The default value, which is of the double, float, or long double data type;
 otherwise, null.</li>
 </ul>
 
-<h3 id="query-the-threads-table.">Query the threads table.</h3>
+<h3 id="query-the-threads-table">Query the threads table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from threads;
 +--------------------+------------+----------------+---------------+
 |       hostname     | user_port  | total_threads  | busy_threads  |
@@ -1248,7 +1248,7 @@ The peak thread count on the node.</li>
 The current number of live threads (daemon and non-daemon) on the node.</li>
 </ul>
 
-<h3 id="query-the-memory-table.">Query the memory table.</h3>
+<h3 id="query-the-memory-table">Query the memory table.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">0: jdbc:drill:zk=10.10.100.113:5181&gt; select * from memory;
 +--------------------+------------+---------------+-------------+-----------------+---------------------+-------------+
 |       hostname     | user_port  | heap_current  |  heap_max   | direct_current  | jvm_direct_current  | direct_max  |

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/ranking-window-functions/index.html
----------------------------------------------------------------------
diff --git a/docs/ranking-window-functions/index.html b/docs/ranking-window-functions/index.html
index 7c18e10..d9d0d83 100644
--- a/docs/ranking-window-functions/index.html
+++ b/docs/ranking-window-functions/index.html
@@ -1095,7 +1095,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
 
 <p>The following examples show queries that use each of the ranking window functions in Drill. See <a href="/docs/sql-window-functions-examples/">Window Functions Examples</a> for information about the data and setup for these examples.</p>
 
-<h3 id="cume_dist()">CUME_DIST()</h3>
+<h3 id="cume_dist">CUME_DIST()</h3>
 
 <p>The following query uses the CUME_DIST() window function to calculate the cumulative distribution of sales for each dealer in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, sales, cume_dist() over(order by sales) as cumedist from q1_sales;
@@ -1135,7 +1135,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+------------+
    10 rows selected (0.198 seconds)  
 </code></pre></div>
-<h3 id="ntile()">NTILE()</h3>
+<h3 id="ntile">NTILE()</h3>
 
 <p>The following example uses the NTILE window function to divide the Q1 sales into five groups and list the sales in ascending order.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select emp_mgr, sales, ntile(5) over(order by sales) as ntilerank from q1_sales;
@@ -1173,7 +1173,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +-----------------+------------+--------+------------+
    10 rows selected (0.312 seconds)
 </code></pre></div>
-<h3 id="percent_rank()">PERCENT_RANK()</h3>
+<h3 id="percent_rank">PERCENT_RANK()</h3>
 
 <p>The following query uses the PERCENT_RANK() window function to calculate the percent rank for employee sales in Q1.  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, percent_rank() over(order by sales) as perrank from q1_sales; 
@@ -1193,7 +1193,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+---------------------+
    10 rows selected (0.169 seconds)
 </code></pre></div>
-<h3 id="rank()">RANK()</h3>
+<h3 id="rank">RANK()</h3>
 
 <p>The following query uses the RANK() window function to rank the employee sales for Q1. The word rank in Drill is a reserved keyword and must be enclosed in back ticks (``).</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   select dealer_id, emp_name, sales, rank() over(order by sales) as `rank` from q1_sales;
@@ -1213,7 +1213,7 @@ The window clauses for the function. The OVER clause cannot contain an explicit
    +------------+-----------------+--------+-------+
    10 rows selected (0.174 seconds)
 </code></pre></div>
-<h3 id="row_number()">ROW_NUMBER()</h3>
+<h3 id="row_number">ROW_NUMBER()</h3>
 
 <p>The following query uses the ROW_NUMBER() window function to number the sales for each dealer_id. The word rownum contains the reserved keyword row and must be enclosed in back ticks (``).  </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">    select dealer_id, emp_name, sales, row_number() over(partition by dealer_id order by sales) as `rownum` from q1_sales;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/rdbms-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/rdbms-storage-plugin/index.html b/docs/rdbms-storage-plugin/index.html
index 0c09b9b..ba91e69 100644
--- a/docs/rdbms-storage-plugin/index.html
+++ b/docs/rdbms-storage-plugin/index.html
@@ -1045,7 +1045,7 @@
 <li>Add a new storage configuration to Drill through the web ui. Example configurations for <a href="#Example-Oracle-Configuration">Oracle</a>, <a href="#Example-SQL-Server-Configuration">SQL Server</a>, <a href="#Example-MySQL-Configuration">MySQL</a> and <a href="#Example-Postgres-Configuration">Postgres</a> are provided below.</li>
 </ol>
 
-<h2 id="example:-working-with-mysql">Example: Working with MySQL</h2>
+<h2 id="example-working-with-mysql">Example: Working with MySQL</h2>
 
 <p>Drill communicates with MySQL through the JDBC driver using the configuration that you specify in the Web Console or through the <a href="/docs/plugin-configuration-basics/#storage-plugin-rest-api">REST API</a>.  </p>
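 
 <p>For reference, a minimal configuration sketch follows; the host, port, and credentials are placeholders to replace with your own values:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">{
   &quot;type&quot;: &quot;jdbc&quot;,
   &quot;driver&quot;: &quot;com.mysql.jdbc.Driver&quot;,
   &quot;url&quot;: &quot;jdbc:mysql://localhost:3306&quot;,
   &quot;username&quot;: &quot;root&quot;,
   &quot;password&quot;: &quot;mypassword&quot;,
   &quot;enabled&quot;: true
 }
 </code></pre></div>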
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/rest-api/index.html
----------------------------------------------------------------------
diff --git a/docs/rest-api/index.html b/docs/rest-api/index.html
index 148aee6..8495dcc 100644
--- a/docs/rest-api/index.html
+++ b/docs/rest-api/index.html
@@ -1086,7 +1086,7 @@
 
 <hr>
 
-<h3 id="post-/query.json">POST /query.json</h3>
+<h3 id="post-query-json">POST /query.json</h3>
 
 <p>Submit a query and return results.</p>
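 
 <p>For example, a minimal sketch using curl; the host and port assume a local Drillbit listening on the default HTTP port 8047:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">curl -X POST -H &quot;Content-Type: application/json&quot; \
   -d &#39;{&quot;queryType&quot;: &quot;SQL&quot;, &quot;query&quot;: &quot;select * from sys.version&quot;}&#39; \
   http://localhost:8047/query.json
 </code></pre></div>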
 
@@ -1127,7 +1127,7 @@
 
 <hr>
 
-<h3 id="get-/profiles.json">GET /profiles.json</h3>
+<h3 id="get-profiles-json">GET /profiles.json</h3>
 
 <p>Get the profiles of running and completed queries. </p>
 
@@ -1151,7 +1151,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/{queryid}.json">GET /profiles/{queryid}.json</h3>
+<h3 id="get-profiles-queryid-json">GET /profiles/{queryid}.json</h3>
 
 <p>Get the profile of the query that has the given queryid.</p>
 
@@ -1169,7 +1169,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/profiles/cancel/{queryid}">GET /profiles/cancel/{queryid}</h3>
+<h3 id="get-profiles-cancel-queryid">GET /profiles/cancel/{queryid}</h3>
 
 <p>Cancel the query that has the given queryid.</p>
 
@@ -1192,7 +1192,7 @@
 
 <hr>
 
-<h3 id="get-/storage.json">GET /storage.json</h3>
+<h3 id="get-storage-json">GET /storage.json</h3>
 
 <p>Get the list of storage plugin names and configurations.</p>
 
@@ -1226,7 +1226,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}.json">GET /storage/{name}.json</h3>
+<h3 id="get-storage-name-json">GET /storage/{name}.json</h3>
 
 <p>Get the definition of the named storage plugin.</p>
 
@@ -1250,7 +1250,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/storage/{name}/enable/{val}">Get /storage/{name}/enable/{val}</h3>
+<h3 id="get-storage-name-enable-val">Get /storage/{name}/enable/{val}</h3>
 
 <p>Enable or disable the named storage plugin.</p>
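 
 <p>For example, to disable and then re-enable the <code>dfs</code> plugin on a local Drillbit:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">curl http://localhost:8047/storage/dfs/enable/false
 curl http://localhost:8047/storage/dfs/enable/true
 </code></pre></div>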
 
@@ -1273,7 +1273,7 @@
 
 <hr>
 
-<h3 id="post-/storage/{name}.json">POST /storage/{name}.json</h3>
+<h3 id="post-storage-name-json">POST /storage/{name}.json</h3>
 
 <p>Create or update a storage plugin configuration.</p>
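 
 <p>A minimal sketch with curl; the plugin name <code>myplugin</code> and its configuration are placeholders, and the payload mirrors the JSON that the GET endpoint returns:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">curl -X POST -H &quot;Content-Type: application/json&quot; \
   -d &#39;{&quot;name&quot;: &quot;myplugin&quot;, &quot;config&quot;: {&quot;type&quot;: &quot;file&quot;, &quot;enabled&quot;: true, &quot;connection&quot;: &quot;file:///&quot;}}&#39; \
   http://localhost:8047/storage/myplugin.json
 </code></pre></div>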
 
@@ -1306,7 +1306,7 @@
 
 <hr>
 
-<h3 id="delete-/storage/{name}.json">DELETE /storage/{name}.json</h3>
+<h3 id="delete-storage-name-json">DELETE /storage/{name}.json</h3>
 
 <p>Delete a storage plugin configuration.</p>
 
@@ -1329,7 +1329,7 @@
 
 <hr>
 
-<h3 id="get-/stats.json">GET /stats.json</h3>
+<h3 id="get-stats-json">GET /stats.json</h3>
 
 <p>Get Drillbit information, such as port numbers.</p>
 
@@ -1360,7 +1360,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status">GET /status</h3>
+<h3 id="get-status">GET /status</h3>
 
 <p>Get the status of Drill. </p>
 
@@ -1380,7 +1380,7 @@
 </code></pre></div>
 <hr>
 
-<h3 id="get-/status/metrics">GET /status/metrics</h3>
+<h3 id="get-status-metrics">GET /status/metrics</h3>
 
 <p>Get the current memory metrics.</p>
 
@@ -1399,7 +1399,7 @@
 
 <hr>
 
-<h3 id="get-/status/threads">GET /status/threads</h3>
+<h3 id="get-status-threads">GET /status/threads</h3>
 
 <p>Get the status of threads.</p>
 
@@ -1430,7 +1430,7 @@
 
 <hr>
 
-<h3 id="get-/options.json">GET /options.json</h3>
+<h3 id="get-options-json">GET /options.json</h3>
 
 <p>List the name, default, and data type of the system and session options.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/s3-storage-plugin/index.html
----------------------------------------------------------------------
diff --git a/docs/s3-storage-plugin/index.html b/docs/s3-storage-plugin/index.html
index a6a00a8..1067e73 100644
--- a/docs/s3-storage-plugin/index.html
+++ b/docs/s3-storage-plugin/index.html
@@ -1039,7 +1039,7 @@
 
 <p>There are two simple steps to follow: (1) provide your AWS credentials, and (2) configure the S3 storage plugin with your S3 bucket.</p>
 
-<h4 id="(1)-aws-credentials">(1) AWS credentials</h4>
+<h4 id="1-aws-credentials">(1) AWS credentials</h4>
 
 <p>To enable Drill&#39;s S3a support, edit the file conf/core-site.xml in your Drill install directory, replacing the text ENTER_YOUR_ACESSKEY and ENTER_YOUR_SECRETKEY with your AWS credentials.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">&lt;configuration&gt;
@@ -1056,7 +1056,7 @@
 
 &lt;/configuration&gt;
 </code></pre></div>
-<h4 id="(2)-configure-s3-storage-plugin">(2) Configure S3 Storage Plugin</h4>
+<h4 id="2-configure-s3-storage-plugin">(2) Configure S3 Storage Plugin</h4>
 
 <p>Enable the S3 storage plugin if you already have one configured, or add a new plugin by following these steps:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/sequence-files/index.html
----------------------------------------------------------------------
diff --git a/docs/sequence-files/index.html b/docs/sequence-files/index.html
index 82d3a86..953e53d 100644
--- a/docs/sequence-files/index.html
+++ b/docs/sequence-files/index.html
@@ -1034,7 +1034,7 @@
         <p>Hadoop Sequence files (<a href="https://wiki.apache.org/hadoop/SequenceFile">https://wiki.apache.org/hadoop/SequenceFile</a>) are flat files storing binary key-value pairs.
 Drill projects a sequence file as a table with two columns, &#39;binary_key&#39; and &#39;binary_value&#39;, of type VARBINARY.</p>
 
-<h3 id="storage-plugin-format-for-sequence-files.">Storage plugin format for sequence files.</h3>
+<h3 id="storage-plugin-format-for-sequence-files">Storage plugin format for sequence files.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">. . .
 &quot;sequencefile&quot;: {
   &quot;type&quot;: &quot;sequencefile&quot;,
@@ -1044,7 +1044,7 @@ Drill projects sequence files as table with two columns - &#39;binary_key&#39;,
 },
 . . .
 </code></pre></div>
-<h3 id="querying-sequence-file.">Querying sequence file.</h3>
+<h3 id="querying-sequence-file">Querying sequence file.</h3>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT *
 FROM dfs.tmp.`simple.seq`
 LIMIT 1;

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/sql-extensions/index.html
----------------------------------------------------------------------
diff --git a/docs/sql-extensions/index.html b/docs/sql-extensions/index.html
index bcbcbd3..34bf69a 100644
--- a/docs/sql-extensions/index.html
+++ b/docs/sql-extensions/index.html
@@ -1037,7 +1037,7 @@
 
 <p>Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS provides the capability to write data of complex/multi-structured data types. Drill extends the <a href="http://drill.apache.org/docs/lexical-structure">lexical rules</a> for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for query profiles and diagnostic information.</p>
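 
 <p>For example, back ticks let you query a file by its path directly; the file name here is illustrative:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT * FROM dfs.`/tmp/donuts.json`;
 </code></pre></div>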
 
-<h2 id="extensions-for-hive--and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
+<h2 id="extensions-for-hive-and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
 
 <p>Drill supports Hive and HBase as plug-and-play data sources. Drill can read tables created in Hive that use <a href="/docs/hive-to-drill-data-type-mapping">data types compatible</a> with Drill.  You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. </p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-in-distributed-mode/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-in-distributed-mode/index.html b/docs/starting-drill-in-distributed-mode/index.html
index d3e5832..ff8e8a7 100644
--- a/docs/starting-drill-in-distributed-mode/index.html
+++ b/docs/starting-drill-in-distributed-mode/index.html
@@ -1041,7 +1041,7 @@
 <li>Using an Ad-Hoc Connection to Drill</li>
 </ul>
 
-<h2 id="using-the-drillbit.sh-command">Using the drillbit.sh Command</h2>
+<h2 id="using-the-drillbit-sh-command">Using the drillbit.sh Command</h2>
 
 <p>To use Drill in distributed mode, you need to control a Drillbit. If you use Drill in embedded mode, you do not use the <strong>drillbit.sh</strong> command. </p>
 
@@ -1055,7 +1055,7 @@
 
 <p>You can use a configuration file to start Drill. Using such a file is handy for controlling Drillbits on multiple nodes.</p>
 
-<h3 id="drillbit.sh-command-syntax">drillbit.sh Command Syntax</h3>
+<h3 id="drillbit-sh-command-syntax">drillbit.sh Command Syntax</h3>
 
 <p><code>drillbit.sh [--config &lt;conf-dir&gt;] (start|stop|status|restart|autorestart)</code></p>
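 
 <p>For example, a typical control sequence on a node, using the default configuration directory:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">bin/drillbit.sh start
 bin/drillbit.sh status
 bin/drillbit.sh stop
 </code></pre></div>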
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-on-linux-and-mac-os-x/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-on-linux-and-mac-os-x/index.html b/docs/starting-drill-on-linux-and-mac-os-x/index.html
index b378624..8a40b31 100644
--- a/docs/starting-drill-on-linux-and-mac-os-x/index.html
+++ b/docs/starting-drill-on-linux-and-mac-os-x/index.html
@@ -1048,7 +1048,7 @@
 
 <p>To start Drill, you can also use the <strong>sqlline</strong> command and a custom connection string, as described in detail in <a href="/docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill">&quot;Using an Ad-Hoc Connection to Drill&quot;</a>. For example, you can specify the default storage plugin configuration when you start the shell. Doing so eliminates the need to specify the storage plugin configuration in the query. The following command specifies the <code>dfs</code> storage plugin:</p>
 
-<p><code>bin/sqlline –u jdbc:drill:schema=dfs;zk=local</code></p>
+<p><code>bin/sqlline -u jdbc:drill:zk=local;schema=dfs</code></p>
 
 <p>If you start Drill on one network, and then want to use Drill on another network, such as your home network, restart Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-drill-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-drill-on-windows/index.html b/docs/starting-drill-on-windows/index.html
index 1337a01..6cff89e 100644
--- a/docs/starting-drill-on-windows/index.html
+++ b/docs/starting-drill-on-windows/index.html
@@ -1049,7 +1049,7 @@
 
 <p>You can use the schema option in the <strong>sqlline</strong> command to specify a storage plugin. Specifying the storage plugin at startup eliminates the need to specify it in each query. For example, this command specifies the <code>dfs</code> storage plugin:</p>
 
-<p><code>C:\bin\sqlline sqlline.bat –u &quot;jdbc:drill:schema=dfs;zk=local&quot;</code></p>
+<p><code>C:\bin\sqlline sqlline.bat -u &quot;jdbc:drill:zk=local;schema=dfs&quot;</code></p>
 
 <p>If you start Drill on one network, and then want to use Drill on another network, such as your home network, restart Drill.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/starting-the-web-console/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-the-web-console/index.html b/docs/starting-the-web-console/index.html
index 78539a4..6daae2b 100644
--- a/docs/starting-the-web-console/index.html
+++ b/docs/starting-the-web-console/index.html
@@ -1033,7 +1033,7 @@
       
         <p>The Drill Web Console is one of several <a href="/docs/architecture-introduction/#drill-clients">client interfaces</a> you can use to access Drill. </p>
 
-<h2 id="drill-1.1-and-earlier">Drill 1.1 and Earlier</h2>
+<h2 id="drill-1-1-and-earlier">Drill 1.1 and Earlier</h2>
 
 <p>In Drill 1.1 and earlier, to open the Drill Web Console, launch a web browser, and go to the following URL:</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/tableau-examples/index.html
----------------------------------------------------------------------
diff --git a/docs/tableau-examples/index.html b/docs/tableau-examples/index.html
index 5c842c2..7576983 100644
--- a/docs/tableau-examples/index.html
+++ b/docs/tableau-examples/index.html
@@ -1049,7 +1049,7 @@ DSN to a Drill data source and then access the data in Tableau 8.1.</p>
 source data. You define schemas by configuring storage plugins on the Storage
 tab of the <a href="/docs/getting-to-know-the-drill-sandbox/#storage-plugin-overview">Drill Web Console</a>. Also, the examples assume you <a href="/docs/supported-data-types/#enabling-the-decimal-type">enabled the DECIMAL data type</a> in Drill.  </p>
 
-<h2 id="example:-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
+<h2 id="example-connect-to-a-hive-table-in-tableau">Example: Connect to a Hive Table in Tableau</h2>
 
 <p>To access Hive tables in Tableau 8.1, connect to the Hive schema using a DSN
 and then visualize the data in Tableau.<br>
@@ -1060,7 +1060,7 @@ and then visualize the data in Tableau.<br>
 
 <hr>
 
-<h2 id="step-1:-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
+<h2 id="step-1-create-a-dsn-to-a-hive-table">Step 1: Create a DSN to a Hive Table</h2>
 
 <p>In this step, we will create a DSN that accesses a Hive table.</p>
 
@@ -1082,7 +1082,7 @@ In this example, we are connecting to a Zookeeper Quorum. Verify that the Cluste
 
 <hr>
 
-<h2 id="step-2:-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
+<h2 id="step-2-connect-to-hive-tables-in-tableau">Step 2: Connect to Hive Tables in Tableau</h2>
 
 <p>Now, we can connect to Hive tables.</p>
 
@@ -1108,7 +1108,7 @@ configure the connection to the Hive table and click <strong>OK</strong>.</li>
 
 <hr>
 
-<h2 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
+<h2 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h2>
 
 <p>Once you connect to the data, the columns appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>
@@ -1117,7 +1117,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 
 <p><img src="/docs/img/student_hive.png" alt=""></p>
 
-<h2 id="example:-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
+<h2 id="example-connect-to-self-describing-data-in-tableau">Example: Connect to Self-Describing Data in Tableau</h2>
 
 <p>You can connect to self-describing data in Tableau in the following ways:</p>
 
@@ -1126,7 +1126,7 @@ visualize the data, drag fields from the Data window to the workspace view.</p>
 <li>Use Tableau’s Custom SQL to query the self-describing data directly. </li>
 </ol>
 
-<h3 id="option-1.-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
+<h3 id="option-1-using-a-view-to-connect-to-self-describing-data">Option 1. Using a View to Connect to Self-Describing Data</h3>
 
 <p>The following example describes how to create a view of an HBase table and
 connect to that view in Tableau 8.1. You can also use these steps to access
@@ -1137,7 +1137,7 @@ data for other sources such as Hive, Parquet, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named hbase that contains a table named s_voters and a schema named dfs.default that points to a writable location.  </p>
 </div>
 
-<h4 id="step-1.-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
+<h4 id="step-1-create-a-view-and-a-dsn">Step 1. Create a View and a DSN</h4>
 
 <p>In this step, we will use the ODBC Administrator to access the Drill Explorer
 where we can create a view of an HBase table. Then, we will use the ODBC
@@ -1191,7 +1191,7 @@ view.</p></li>
 <li><p>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</p></li>
 </ol>
 
-<h4 id="step-2.-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
+<h4 id="step-2-connect-to-the-view-from-tableau">Step 2. Connect to the View from Tableau</h4>
 
 <p>Now, we can connect to the view in Tableau.</p>
 
@@ -1214,7 +1214,7 @@ view.</p></li>
 <li>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data in Tableau, the columns appear in the Data
 window. To visualize the data, drag fields from the Data window to the
@@ -1224,7 +1224,7 @@ workspace view.</p>
 
 <p><img src="/docs/img/VoterContributions_hbaseview.png" alt=""></p>
 
-<h3 id="option-2.-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
+<h3 id="option-2-using-custom-sql-to-access-self-describing-data">Option 2. Using Custom SQL to Access Self-Describing Data</h3>
 
 <p>The following example describes how to use custom SQL to connect to a Parquet
 file and then visualize the data in Tableau 8.1. You can use the same steps to
@@ -1235,7 +1235,7 @@ access data from other sources such as Hive, HBase, JSON, TSV, and CSV.</p>
   <p class="last">This example assumes that there is a schema named dfs.default which contains a parquet file named region.parquet.  </p>
 </div>
 
-<h4 id="step-1.-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
+<h4 id="step-1-create-a-dsn-to-the-parquet-file-and-preview-the-data">Step 1. Create a DSN to the Parquet File and Preview the Data</h4>
 
 <p>In this step, we will create a DSN that accesses files on the DFS. We will
 also use Drill Explorer to preview the SQL that we want to use to connect to
@@ -1271,7 +1271,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li>Click <strong>OK</strong> to close the ODBC Data Source Administrator.</li>
 </ol>
 
-<h4 id="step-2.-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
+<h4 id="step-2-connect-to-a-parquet-file-in-tableau-using-custom-sql">Step 2. Connect to a Parquet File in Tableau using Custom SQL</h4>
 
 <p>Now, we can create a connection to the Parquet file using the custom SQL.</p>
 
@@ -1300,7 +1300,7 @@ You can copy this query to file so that you can use it in Tableau.</li>
 <li><p>In the <em>Data Connection dialog</em>, click <strong>Connect Live</strong>.</p></li>
 </ol>
 
-<h4 id="step-3.-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
+<h4 id="step-3-visualize-the-data-in-tableau">Step 3. Visualize the Data in Tableau</h4>
 
 <p>Once you connect to the data, the fields appear in the Data window. To
 visualize the data, drag fields from the Data window to the workspace view.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/troubleshooting/index.html
----------------------------------------------------------------------
diff --git a/docs/troubleshooting/index.html b/docs/troubleshooting/index.html
index a09b7a5..a08f5e7 100644
--- a/docs/troubleshooting/index.html
+++ b/docs/troubleshooting/index.html
@@ -1142,7 +1142,7 @@ Symptom:   </p>
 </ul></li>
 </ul>
 
-<h3 id="access-nested-fields-without-table-name/alias">Access Nested Fields without Table Name/Alias</h3>
+<h3 id="access-nested-fields-without-table-name-alias">Access Nested Fields without Table Name/Alias</h3>
 
 <p>Symptom: </p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">   SELECT x.y …  
@@ -1216,7 +1216,7 @@ Symptom:   </p>
 <p>Solution: Make sure that the ODBC driver version is compatible with the server version. <a href="/docs/installing-the-odbc-driver">Driver installation instructions</a> include how to check the driver version.
 Turn on ODBC driver debug logging to better understand the failure.</p>
 
-<h3 id="jdbc/odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
+<h3 id="jdbc-odbc-connection-issues-with-zookeeper">JDBC/ODBC Connection Issues with ZooKeeper</h3>
 
 <p>Symptom: Client cannot resolve ZooKeeper host names for JDBC/ODBC.</p>
 
@@ -1240,13 +1240,13 @@ Turn on ODBC driver debug logging to better understand failure.  </p>
 
 <p>Solution: Verify that the column alias does not conflict with the storage type. See <a href="/docs/lexical-structure/#case-sensitivity">Lexical Structures</a>.  </p>
 
-<h3 id="list-(array)-contains-null">List (Array) Contains Null</h3>
+<h3 id="list-array-contains-null">List (Array) Contains Null</h3>
 
 <p>Symptom: UNSUPPORTED_OPERATION ERROR: Null values are not supported in lists by default. </p>
 
 <p>Solution: Avoid selecting fields that are arrays containing nulls. Change Drill session settings to enable all_text_mode. Set store.json.all_text_mode to true, so Drill treats JSON null values as a string containing the word &#39;null&#39;.</p>
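 
 <p>For example:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">ALTER SESSION SET `store.json.all_text_mode` = true;
 </code></pre></div>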
 
-<h3 id="select-count-(*)-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
+<h3 id="select-count-takes-a-long-time-to-run">SELECT COUNT (*) Takes a Long Time to Run</h3>
 
 <p>Solution: In some cases, the underlying storage format does not have a built-in capability to return a count of records in a table.  In these cases, Drill does a full scan of the data to verify the number of records.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/tutorial-develop-a-simple-function/index.html
----------------------------------------------------------------------
diff --git a/docs/tutorial-develop-a-simple-function/index.html b/docs/tutorial-develop-a-simple-function/index.html
index bf7bf48..7c14b99 100644
--- a/docs/tutorial-develop-a-simple-function/index.html
+++ b/docs/tutorial-develop-a-simple-function/index.html
@@ -1059,7 +1059,7 @@
 
 <hr>
 
-<h2 id="step-1:-add-dependencies">Step 1: Add dependencies</h2>
+<h2 id="step-1-add-dependencies">Step 1: Add dependencies</h2>
 
 <p>First, add the following Drill dependency to your maven project:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"> <span class="nt">&lt;dependency&gt;</span>
@@ -1070,7 +1070,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-2:-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
+<h2 id="step-2-add-annotations-to-the-function-template">Step 2: Add annotations to the function template</h2>
 
 <p>To start implementing the DrillSimpleFunc interface, add the following annotations to the @FunctionTemplate declaration:</p>
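 
 <p>A sketch of the declaration, patterned on the tutorial&#39;s MASK function (abridged; the class body follows in later steps):</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">@FunctionTemplate(
     name = &quot;mask&quot;,
     scope = FunctionTemplate.FunctionScope.SIMPLE,
     nulls = FunctionTemplate.NullHandling.NULL_IF_NULL
 )
 public class SimpleMaskFunc implements DrillSimpleFunc {
 </code></pre></div>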
 
@@ -1106,7 +1106,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-3:-declare-input-parameters">Step 3: Declare input parameters</h2>
+<h2 id="step-3-declare-input-parameters">Step 3: Declare input parameters</h2>
 
 <p>The function is generated dynamically, as you can see in the <a href="https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillSimpleFuncHolder.java/#L42">DrillSimpleFuncHolder</a>, and the input parameters and the output are defined using holders that you declare with annotations. Define the parameters using the @Param annotation. </p>
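 
 <p>A sketch of the holder declarations for the MASK function&#39;s three parameters (the original string, the masking character, and the number of characters to mask):</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">@Param
 NullableVarCharHolder input;
 
 @Param(constant = true)
 VarCharHolder mask;
 
 @Param(constant = true)
 IntHolder toMask;
 </code></pre></div>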
 
@@ -1138,7 +1138,7 @@
 
 <hr>
 
-<h2 id="step-4:-declare-the-return-value-type">Step 4: Declare the return value type</h2>
+<h2 id="step-4-declare-the-return-value-type">Step 4: Declare the return value type</h2>
 
 <p>Also, using the @Output annotation, define the returned value as a VarCharHolder. Because you are manipulating a VarChar, you also have to inject a buffer that Drill uses for the output. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">SimpleMaskFunc</span> <span class="kd">implements</span> <span class="n">DrillSimpleFunc</span> <span class="o">{</span>
@@ -1153,7 +1153,7 @@
 </code></pre></div>
 <hr>
 
-<h2 id="step-5:-implement-the-eval()-method">Step 5: Implement the eval() method</h2>
+<h2 id="step-5-implement-the-eval-method">Step 5: Implement the eval() method</h2>
 
 <p>The MASK function does not require any setup, so you do not need to define the setup() method. Define only the eval() method. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kt">void</span> <span class="nf">eval</span><span class="o">()</span> <span class="o">{</span>
@@ -1187,7 +1187,7 @@
 
 <p>Even to a seasoned Java developer, the eval() method might look a bit strange because Drill generates the final code on the fly to fulfill a query request. This technique leverages Java’s just-in-time (JIT) compiler for maximum speed.</p>
 
-Basic Coding Rules</h2>
+<h2 id="basic-coding-rules">Basic Coding Rules</h2>
 
 <p>To leverage Java’s just-in-time (JIT) compiler for maximum speed, you need to adhere to some basic rules.</p>
 
@@ -1221,9 +1221,11 @@ Basic Coding Rules</h2>
     <span class="nt">&lt;/executions&gt;</span>
 <span class="nt">&lt;/plugin&gt;</span>
 </code></pre></div>
-Add a drill-module.conf File to Resources</h2>
+<h2 id="add-a-drill-module-conf-file-to-resources">Add a drill-module.conf File to Resources</h2>
 
-<p>Add a <code>drill-module.conf</code> file in the resources folder of your project. The presence of this file tells Drill that your jar contains a custom function. If you have no specific configuration to set for your function, you can keep this file empty.</p>
+<p>Add a <code>drill-module.conf</code> file in the resources folder of your project. The presence of this file tells Drill that your jar contains a custom function. Put the following line in the <code>drill-module.conf</code> file:</p>
+
+<p><code>drill.classpath.scanning.packages += &quot;org.apache.drill.contrib.function&quot;</code></p>
 
 <h2 id="build-and-deploy-the-function">Build and Deploy the Function</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/useful-research/index.html
----------------------------------------------------------------------
diff --git a/docs/useful-research/index.html b/docs/useful-research/index.html
index d619555..3b0aa35 100644
--- a/docs/useful-research/index.html
+++ b/docs/useful-research/index.html
@@ -1068,7 +1068,7 @@
 <li>Design Proposal for Drill: <a href="http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739">http://www.slideshare.net/CamuelGilyadov/apache-drill-14071739</a></li>
 </ul>
 
-<h2 id="dazo-(second-generation-opendremel)">Dazo (second generation OpenDremel)</h2>
+<h2 id="dazo-second-generation-opendremel">Dazo (second generation OpenDremel)</h2>
 
 <ul>
 <li>Dazo repos: <a href="https://github.com/Dazo-org">https://github.com/Dazo-org</a></li>
@@ -1082,7 +1082,7 @@
 <li><a href="https://github.com/rgrzywinski/field-stripe/">https://github.com/rgrzywinski/field-stripe/</a></li>
 </ul>
 
-Code generation / Physical plan generation</h2>
+<h2 id="code-generation-physical-plan-generation">Code generation / Physical plan generation</h2>
 
 <ul>
 <li><a href="http://www.vldb.org/pvldb/vol4/p539-neumann.pdf">http://www.vldb.org/pvldb/vol4/p539-neumann.pdf</a> (SLIDES: <a href="http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf">http://www.vldb.org/2011/files/slides/research9/rSession9-3.pdf</a>)</li>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-apache-drill-with-tableau-9-desktop/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-desktop/index.html b/docs/using-apache-drill-with-tableau-9-desktop/index.html
index ed83888..18ab6b9 100644
--- a/docs/using-apache-drill-with-tableau-9-desktop/index.html
+++ b/docs/using-apache-drill-with-tableau-9-desktop/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience use the latest release of Apache Drill. For Tableau 9.0 Desktop, Drill Version 0.9 or higher is recommended.</p>
 
@@ -1066,13 +1066,13 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h3 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau. The MapR Drill ODBC Driver installer automatically installs the TDC file if the installer can find the Tableau installation. If you installed the MapR Drill ODBC Driver first and then installed Tableau, the TDC file is not installed automatically, and you need to <a href="/docs/installing-the-tdc-file-on-windows/">install the TDC file manually</a>. </p>
 
 <hr>
 
-<h3 id="step-3:-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h3>
+<h3 id="step-3-connect-tableau-to-drill-via-odbc">Step 3: Connect Tableau to Drill via ODBC</h3>
 
 <p>Complete the following steps to configure an ODBC data connection: </p>
 
@@ -1097,7 +1097,7 @@ Tableau is now connected to Drill, and you can select various tables and views.
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>Tableau Desktop can now use Drill to query various data sources and visualize the information.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-apache-drill-with-tableau-9-server/index.html
----------------------------------------------------------------------
diff --git a/docs/using-apache-drill-with-tableau-9-server/index.html b/docs/using-apache-drill-with-tableau-9-server/index.html
index 3f6b1df..fc22b6b 100644
--- a/docs/using-apache-drill-with-tableau-9-server/index.html
+++ b/docs/using-apache-drill-with-tableau-9-server/index.html
@@ -1045,7 +1045,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data-exploration capabilities on complex, schema-less data sets. For the best experience, use the latest release of Apache Drill. For Tableau 9.0 Server, Drill Version 0.9 or higher is recommended.</p>
 
@@ -1065,7 +1065,7 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-tableau-data-connection-customization-(tdc)-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
+<h3 id="step-2-install-the-tableau-data-connection-customization-tdc-file">Step 2: Install the Tableau Data-connection Customization (TDC) File</h3>
 
 <p>The MapR Drill ODBC Driver includes a file named <code>MapRDrillODBC.TDC</code>. The TDC file includes customizations that improve ODBC configuration and performance when using Tableau.</p>
 
@@ -1078,7 +1078,7 @@
 
 <hr>
 
-<h3 id="step-3:-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h3>
+<h3 id="step-3-publish-tableau-visualizations-and-data-sources">Step 3: Publish Tableau Visualizations and Data Sources</h3>
 
 <p>For collaboration purposes, you can now use Tableau Desktop to publish data sources and visualizations on Tableau Server.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-jdbc-with-squirrel-on-windows/index.html
----------------------------------------------------------------------
diff --git a/docs/using-jdbc-with-squirrel-on-windows/index.html b/docs/using-jdbc-with-squirrel-on-windows/index.html
index 3616de8..13591f2 100644
--- a/docs/using-jdbc-with-squirrel-on-windows/index.html
+++ b/docs/using-jdbc-with-squirrel-on-windows/index.html
@@ -1050,7 +1050,7 @@
 
 <hr>
 
-<h2 id="step-1:-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
+<h2 id="step-1-getting-the-drill-jdbc-driver">Step 1: Getting the Drill JDBC Driver</h2>
 
 <p>The Drill JDBC Driver <code>JAR</code> file must exist in a directory on your Windows
 machine in order to configure the driver in the SQuirreL client.</p>
@@ -1069,7 +1069,7 @@ you can locate the driver in the following directory:</p>
 </code></pre></div>
 <hr>
 
-<h2 id="step-2:-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
+<h2 id="step-2-installing-and-starting-squirrel">Step 2: Installing and Starting SQuirreL</h2>
 
 <p>To install and start SQuirreL, complete the following steps:</p>
 
@@ -1082,14 +1082,14 @@ you can locate the driver in the following directory:</p>
 
 <hr>
 
-<h2 id="step-3:-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
+<h2 id="step-3-adding-the-drill-jdbc-driver-to-squirrel">Step 3: Adding the Drill JDBC Driver to SQuirreL</h2>
 
 <p>To add the Drill JDBC Driver to SQuirreL, define the driver and create a
 database alias. The alias is a specific instance of the driver configuration.
 SQuirreL uses the driver definition and alias to connect to Drill so you can
 access data sources that you have registered with Drill.</p>
 
-<h3 id="a.-define-the-driver">A. Define the Driver</h3>
+<h3 id="a-define-the-driver">A. Define the Driver</h3>
 
 <p>To define the Drill JDBC Driver, complete the following steps:</p>
 
@@ -1131,7 +1131,7 @@ access data sources that you have registered with Drill.</p>
 
 <p><img src="/docs/img/52.png" alt="drill query flow"></p>
 
-<h3 id="b.-create-an-alias">B. Create an Alias</h3>
+<h3 id="b-create-an-alias">B. Create an Alias</h3>
 
 <p>To create an alias, complete the following steps:</p>
 
@@ -1182,7 +1182,7 @@ access data sources that you have registered with Drill.</p>
 
 <hr>
 
-<h2 id="step-4:-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
+<h2 id="step-4-running-a-drill-query-from-squirrel">Step 4: Running a Drill Query from SQuirreL</h2>
 
 <p>Once you have SQuirreL successfully connected to your cluster through the
 Drill JDBC Driver, you can issue queries from the SQuirreL client. You can run

http://git-wip-us.apache.org/repos/asf/drill-site/blob/0d7ffd4b/docs/using-microstrategy-analytics-with-apache-drill/index.html
----------------------------------------------------------------------
diff --git a/docs/using-microstrategy-analytics-with-apache-drill/index.html b/docs/using-microstrategy-analytics-with-apache-drill/index.html
index 8575682..b47f6c0 100644
--- a/docs/using-microstrategy-analytics-with-apache-drill/index.html
+++ b/docs/using-microstrategy-analytics-with-apache-drill/index.html
@@ -1046,7 +1046,7 @@
 
 <hr>
 
-<h3 id="step-1:-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
+<h3 id="step-1-install-and-configure-the-mapr-drill-odbc-driver">Step 1: Install and Configure the MapR Drill ODBC Driver</h3>
 
 <p>Drill uses standard ODBC connectivity to provide easy data exploration capabilities on complex, schema-less data sets. Verify that the ODBC driver version that you download correlates with the Apache Drill version that you use. Ideally, you should upgrade to the latest version of Apache Drill and the MapR Drill ODBC Driver. </p>
 
@@ -1082,7 +1082,7 @@
 
 <hr>
 
-<h3 id="step-2:-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h3>
+<h3 id="step-2-install-the-drill-object-on-microstrategy-analytics-enterprise">Step 2: Install the Drill Object on MicroStrategy Analytics Enterprise</h3>
 
 <p>The steps listed in this section were created based on the MicroStrategy Technote for installing DBMS objects which you can reference at: </p>
 
@@ -1115,7 +1115,7 @@
 
 <hr>
 
-<h3 id="step-3:-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h3>
+<h3 id="step-3-create-the-microstrategy-database-connection-for-apache-drill">Step 3: Create the MicroStrategy database connection for Apache Drill</h3>
 
 <p>Complete the following steps to use the Database Instance Wizard to create the MicroStrategy database connection for Apache Drill:</p>
 
@@ -1134,7 +1134,7 @@
 
 <hr>
 
-<h3 id="step-4:-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
+<h3 id="step-4-query-and-analyze-the-data">Step 4: Query and Analyze the Data</h3>
 
 <p>This step includes an example scenario that shows you how to use MicroStrategy, with Drill as the database instance, to analyze Twitter data stored as complex JSON documents. </p>
 
@@ -1142,7 +1142,7 @@
 
 <p>The Drill distributed file system plugin is configured to read Twitter data in a directory structure. A view is created in Drill to capture the most relevant maps and nested maps and arrays for the Twitter JSON documents. Refer to <a href="/docs/query-data-introduction/">Query Data</a> for more information about how to configure and use Drill to work with complex data:</p>
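 
 <p>A view definition along these lines is a plausible sketch; the schema, path, and field names are illustrative, not the ones used in the original setup:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CREATE OR REPLACE VIEW dfs.tmp.tweets_view AS
 SELECT t.id, t.`user`.name AS user_name, t.text AS tweet_text
 FROM dfs.twitter.`/tweets` t;
 </code></pre></div>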
 
-<h4 id="part-1:-create-a-project">Part 1: Create a Project</h4>
+<h4 id="part-1-create-a-project">Part 1: Create a Project</h4>
 
 <p>Complete the following steps to create a project:</p>
 
@@ -1160,7 +1160,7 @@
 <li> Click <strong>OK</strong>. The new project is created in MicroStrategy Developer. </li>
 </ol>
 
-<h4 id="part-2:-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h4>
+<h4 id="part-2-create-a-freeform-report-to-analyze-data">Part 2: Create a Freeform Report to Analyze Data</h4>
 
 <p>Complete the following steps to create a Freeform Report and analyze data:</p>